Dataset schema:
- id: string (length 3 to 9)
- source: string (1 class)
- version: string (1 class)
- text: string (length 1.54k to 298k)
- added: date (1993-11-25 05:05:38 to 2024-09-20 15:30:25)
- created: date (1-01-01 00:00:00 to 2024-07-31 00:00:00)
- metadata: dict
id: 267410478
source: pes2o/s2orc
version: v3-fos-license
text:
A Blinded Evaluation of Brain Morphometry for Differential Diagnosis of Atypical Parkinsonism

Abstract

Background: Advanced imaging techniques have been studied for differential diagnosis between PD, MSA, and PSP.

Objectives: This study aims to validate the utility of individual voxel-based morphometry techniques for atypical parkinsonism in a blinded fashion.

Methods: T1-WI from forty-eight healthy controls (HC) were used to develop a referential dataset and fit a general linear model after segmentation into gray matter (GM) and white matter (WM) compartments. Segmented GM and WM images from patients with PD (n = 96), MSA (n = 18), and PSP (n = 20) were transformed into z-scores using the statistics of the referential HC, and individual voxel-based z-score maps were generated. An imaging diagnosis was assigned by two independent raters (trained and untrained) blinded to clinical information and final diagnosis. Furthermore, we developed an observer-independent index for ROI-based automated differentiation.

Results: The diagnostic performance using voxel-based z-score maps by rater 1 and rater 2 for MSA yielded sensitivities of 0.89 and 0.94 (95% CI: 0.74–1.00, 0.84–1.00) and specificities of 0.94 and 0.80 (0.90–0.98, 0.73–0.87); for PSP, sensitivities were 0.85 and 0.90 (0.69–1.00, 0.77–1.00) and specificities 0.98 and 0.94 (0.96–1.00, 0.90–0.98). Interrater agreement was good for MSA (Cohen's kappa: 0.61) and excellent for PSP (0.84). Receiver operating characteristic analysis using the ROI-based new index showed an area under the curve (AUC) of 0.89 (0.77–1.00) for MSA and 0.99 (0.98–1.00) for PSP.

Conclusions: These evaluations support the utility of this imaging technique in the differential diagnosis of atypical parkinsonism, demonstrating a remarkably high differentiation accuracy for PSP and suggesting potential use in clinical settings in the future.

The differential diagnosis of atypical parkinsonian syndromes, including multiple system atrophy (MSA) and progressive supranuclear palsy (PSP), from Parkinson's disease (PD) is challenging during the early stages of the disease, since these conditions may present with overlapping clinical features in the early phase. 1,2 Specifically, MSA may feature varying degrees of parkinsonism, cerebellar ataxia, and autonomic failure, and can be classified into two motor phenotypes: the parkinsonian variant (MSA-P), characterized by striatonigral degeneration, and the cerebellar variant (MSA-C), associated with olivopontocerebellar cell loss. 2,3 Richardson's type PSP is typically characterized by supranuclear vertical gaze palsy, postural instability with falls, bradykinesia, and axial rigidity; however, PSP may present with several other phenotypes, as has been acknowledged in the recent revision of the PSP diagnostic criteria, including pure akinesia with gait freezing, PD-like, corticobasal syndrome, and frontal-executive phenotypes. 4,5 Nevertheless, an accurate diagnosis during the early stage is crucial for appropriate patient counseling and for identifying homogeneous patient cohorts for clinical trials. 6 For differentiating parkinsonism, continuous clinical examination and monitoring, including the response to treatment, remain essential, but biomarker support using molecular or imaging markers plays an important role. Conventional MRI studies have demonstrated classical findings in MSA and PSP, such as the "hot cross bun" or the "hummingbird" signs.
7 However, these findings are not consistently present in all patients, limiting the utility of conventional MRI in the differential diagnostic process. A variety of advanced imaging methods has since been proposed for this purpose. 8-14 Despite these advances, advanced methods are not yet widely applied in clinical settings and often lack validation in independent cohorts, raising concerns about their generalizability. In this study, we applied the individual voxel-based morphometry adjusting covariates (iVAC) system to anatomical T1-WI, an easy-to-process anatomical analysis pipeline introduced in 2021. 15 This method enables visualization of individual statistical z-score brain maps relative to healthy controls using color gradation, making it easier to recognize regional atrophic changes, with potential for clinical application. The previous paper reported a high sensitivity in detecting anatomical abnormalities in the putamen, pons, and middle cerebellar peduncle in MSA patients, as well as a high accuracy in differentiating MSA from PD. In the present study, we aimed to validate this method in an independent cohort, as has been recommended in guidelines on the development of biomarkers. While the previous report only examined MSA and PD patients, in the present work we also sought to investigate the diagnostic potential of this method in patients with PSP. We designed this study with two independent approaches: (1) a voxel-based z-score visualization map diagnosis by raters blinded to the clinical information and final diagnosis; and (2) a region-of-interest (ROI)-based automated and observer-independent index encompassing all relevant brain structures to facilitate the clinical applicability of the presented method.

Participants

This study includes MRI images of 134 patients clinically diagnosed with parkinsonism, including PD (n = 96), MSA (n = 18; MSA-P: n = 12 and MSA-C: n = 6), and PSP (n = 20; Richardson's syndrome: n = 11; PSP-parkinsonism: n = 9). All patients met clinical criteria for "probable" disease as defined in the corresponding sets of diagnostic criteria, 4,16,17 where MRI findings are not required, and were recruited as part of prospective biomarker studies between December 2011 and May 2013. 18,19 For the analysis pipeline, we also used 48 images from healthy controls as a referential database (age: mean ± standard deviation (SD) = 57.9 ± 11.8; female: n = 27; see the "Processing with Individual Voxel-Based Morphometry Adjusting Covariates (iVAC)" subsection below). The study was approved by the Ethics Committee of the Medical University of Innsbruck. Participants' written informed consent was obtained prior to study inclusion.

Processing with Individual Voxel-Based Morphometry Adjusting Covariates (iVAC)

The iVAC is a toolbox for SPM12 (https://amrc.iwate-med.ac.jp/en/project-2/download/). Detailed explanations of the statistical process were described previously. 15 In summary, the process was divided into the following steps.

1. Creation of a referential database from healthy controls (HC)

A referential dataset was generated using only HC participants' images. In this analysis, a general linear model was applied, incorporating age, sex, and total intracranial volume (TIV) as explanatory variables. Brain maps, consisting of voxel-level coefficient and residual matrices, were then generated for the next step. Creating a referential database takes several to dozens of minutes to complete.
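To make the referential-database step concrete, the following sketch fits a voxelwise ordinary-least-squares model with age, sex, and TIV as covariates and stores the coefficients and residual standard deviations needed for later z-scoring. This is not the iVAC code itself; the function name, variable names, and array shapes are illustrative assumptions.

```python
import numpy as np

def fit_reference_glm(hc_images, age, sex, tiv):
    """Fit a voxelwise GLM on healthy-control (HC) images.

    hc_images: (n_subjects, n_voxels) array of segmented GM or WM values.
    age, sex, tiv: (n_subjects,) covariate vectors.
    Returns the coefficient matrix and the residual SD per voxel.
    """
    n = hc_images.shape[0]
    # Design matrix: intercept + age + sex + total intracranial volume.
    X = np.column_stack([np.ones(n), age, sex, tiv])       # shape (n, 4)
    # Least-squares fit of every voxel at once: beta has shape (4, n_voxels).
    beta, *_ = np.linalg.lstsq(X, hc_images, rcond=None)
    residuals = hc_images - X @ beta
    # Residual standard deviation per voxel (degrees of freedom: n - 4).
    resid_sd = np.sqrt((residuals ** 2).sum(axis=0) / (n - X.shape[1]))
    return beta, resid_sd
```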
2. Transformation of segmented images into z-score maps using HC statistics

Using the constructed referential database, individual voxel values of PD, MSA, and PSP patients for both the GM and WM compartments were transformed into z-scores, creating z-score brain maps for GM and WM while taking into account age, sex, and TIV. The z-scores were inverted so that a higher z-score represents the presence of atrophic changes.

3. Visualization of voxel-based z-score maps (for approach 1)

The resulting voxel-based z-score maps can be visualized as color-gradation overlays on the standard brain in report form. In this study, we set a z-score threshold of 2 and only visualized voxels above this threshold, since a z-score above 2 indicates that such atrophic changes are observed in fewer than approximately 2.5% of the HCs. This approach allows us to focus on significantly atrophic regions.

4. Extraction of z-scores within ROIs (for approach 2)

The iVAC also outputs the mean voxel z-score values within ROIs, which are described below. In the analysis, any regions that were not part of the gray or white matter were excluded by masking them out. Transforming voxels into z-scores and extracting ROI values takes a few to several minutes per step, depending on the number of images to transform, as conducted using a graphical user interface (GUI).

Approach 1: Visual Inspection and Blind Diagnostic Decision with z-Score Map Reports

The iVAC toolbox generates individual reports with color-gradation z-score maps overlaid onto template images. We thresholded GM and WM z-score maps at a z-score ≥2, indicating significant atrophic changes outside the 95% confidence intervals of healthy controls. The two raters (K.K., B.H.) each assessed iVAC z-score map reports for both GM and WM, generated from the anonymized T1-WI (n = 134) of patients clinically diagnosed with MSA, PSP, and PD and assigned a random ID. These were provided by an experienced neurologist (F.K.). The first rater (K.K.) had prior experience in using iVAC maps to differentiate MSA from PD from previously published work, 15 while the second rater (B.H.) evaluated iVAC maps without any prior training. The raters conducted a blind diagnostic assessment, without any clinical information, based on the individual reports, classifying patients as having MSA, PSP, or PD. Specifically, the raters focused on color gradations, particularly in the putamen and cerebellum for GM, and in the pons, middle cerebellar peduncle (MCP), cerebellar white matter, and midbrain for WM reports. After the evaluation, the clinical diagnoses were unblinded and verified for consistency. Cohen's kappa was then calculated for the agreement between the two raters.

Approach 2: Automated ROI-Based Classification with Observer-Independent Index

The Neuromorphometrics atlas (http://www.neuromorphometrics.com) and the Eve white matter atlas 20 (https://github.com/Jfortin1/EveTemplate) were used for ROI-level z-score analysis. 15 The Neuromorphometrics atlas is included in the CAT12 software. The Eve atlas, a WM parcellation map, was co-registered and resliced using SPM12 to fit the preprocessed T1-WI images. ROI-level z-scores were calculated as the mean of the voxel z-scores within each ROI. For the ROI analysis, we used the GM z-scores in the right and left putamen volumes from the Neuromorphometrics atlas, as well as the WM z-scores in the right and left midbrain and the right and left MCP from the Eve atlas. These ROIs were selected based on their relevance to MSA and PSP.
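A minimal sketch tying together steps 2–4 and the ROI-level extraction used in approach 2, continuing the hypothetical arrays from the previous snippet: each patient image is z-scored against the HC model prediction, the sign is flipped so that atrophy is positive, voxels are thresholded at z > 2 for display, and mean z-scores are extracted per atlas ROI. Function and variable names are illustrative assumptions, not the iVAC API.

```python
import numpy as np

def zscore_patient(img, age, sex, tiv, beta, resid_sd):
    """Transform one patient image (n_voxels,) into an inverted z-score map."""
    x = np.array([1.0, age, sex, tiv])   # patient's covariate row
    expected = x @ beta                   # HC-predicted voxel values
    z = (img - expected) / resid_sd
    return -z                             # invert: higher z = more atrophy

def threshold_map(z_map, threshold=2.0):
    """Keep only voxels above the threshold (~2.5% one-sided tail of HCs)."""
    return np.where(z_map > threshold, z_map, 0.0)

def roi_mean_z(z_map, roi_labels, roi_id):
    """Mean z-score within one atlas ROI (roi_labels: per-voxel label array)."""
    return z_map[roi_labels == roi_id].mean()
```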
We generated a new index for differentiating MSA from the others (PD and PSP) and PSP from the others (PD and MSA) using the Z_msa and Z_psp values. Z_psp represents the higher side of the midbrain z-score, the region involved in PSP, and Z_msa represents the highest z-score value of the putamen and MCP, regions that may be involved in either MSA-P or MSA-C, or in both subtypes. As described above, the z-score was inverted; therefore, higher Z_psp and Z_msa values are considered indicative of more atrophic changes. The Index was defined as follows:

Index = 0, if both Z_psp < 0 and Z_msa < 0;
Index = Z_psp, if Z_psp ≥ Z_msa;
Index = −Z_msa, if Z_psp < Z_msa.

In summary, to define the Index for differentiation, we used the following flow:
1. Z_psp extracts the higher midbrain z-score, and Z_msa extracts the highest z-score value of the putamen and MCP.
2. If both Z_psp and Z_msa are negative, the Index is set to 0.
3. If Z_psp is greater than or equal to Z_msa (Z_psp ≥ Z_msa), the Index corresponds to the midbrain z-score (Z_psp).
4. If Z_psp is smaller than Z_msa (Z_psp < Z_msa), the Index is the negative of the highest z-score of either the putamen or the MCP.

By using the Index, we can assess which region is more affected, the midbrain or the putamen/MCP, in the context of PSP and MSA.

Statistical Analysis

Differences in demographics were assessed using the Kruskal-Wallis test for age, disease duration, and Hoehn-Yahr stage, and the chi-square test for sex. Post-hoc analysis was performed using the Mann-Whitney U test, with the P values corrected for multiple comparisons using the Bonferroni correction. For individual z-scores of ROIs, we performed pairwise Mann-Whitney U tests to compare z-score differences between the two groups shown in Figure 4. The P values were corrected using the Bonferroni correction for multiple comparisons for each ROI. Receiver operating characteristic (ROC) analyses were conducted to differentiate MSA versus others, and PSP versus others, with the Index described above. To estimate the optimal cutoff point, we applied the Youden Index (Youden's J), which is defined as J = sensitivity + specificity − 1. DeLong's method was applied to calculate the 95% confidence interval (CI) for the ROC curve.

Validation of the Cutoffs from ROC Analyses

To validate the optimal cutoffs from the ROC analyses, we calculated leave-one-out cross-validation (LOOCV) accuracy and Cohen's kappa. ROC and LOOCV analyses were performed using the pROC and caret packages running on R version 4.3.1.

Demographics

The demographics of the participants are shown in Table S1. There were no significant differences in age or sex among the PD, MSA, and PSP groups. While disease duration was shorter and Hoehn & Yahr stage was higher in MSA and PSP compared with PD, no differences were observed between MSA and PSP.

Z-Score Map Appearance and Visual Decision of Diagnosis for MSA and PSP

Typical changes in MSA and PSP are illustrated in Figs. 1 and 2. In MSA, atrophy in the putamen was present especially in the MSA-P subtype, while atrophy in the pons and MCP was exhibited especially in MSA-C. Blind diagnosis results of rater 1 (trained) showed a sensitivity of 0.889 (95% CI: 0.744–1.000), a specificity of 0.940 (0.896–0.975), and an accuracy of 0.933 (0.890–0.975) for MSA, whereas ratings from rater 2 (untrained) yielded a sensitivity of 0.944, a specificity of 0.802, and an overall accuracy of 0.821 (0.756–0.886). The interrater agreement was good, with a Cohen's kappa of 0.614 (Table 1A).
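The piecewise Index defined in the Methods maps directly to code. The sketch below computes the Index from the per-side ROI z-scores and, for the cutoff estimation, implements a generic grid search for Youden's J; it is an illustrative reimplementation with assumed names, not the authors' pROC/caret pipeline.

```python
import numpy as np

def compute_index(z_mid_l, z_mid_r, z_put_l, z_put_r, z_mcp_l, z_mcp_r):
    """Observer-independent Index: positive -> midbrain-dominant (PSP-like),
    negative -> putamen/MCP-dominant (MSA-like), 0 -> neither region atrophic."""
    z_psp = max(z_mid_l, z_mid_r)                       # higher midbrain side
    z_msa = max(z_put_l, z_put_r, z_mcp_l, z_mcp_r)     # highest putamen/MCP
    if z_psp < 0 and z_msa < 0:
        return 0.0
    return z_psp if z_psp >= z_msa else -z_msa

def youden_cutoff(scores, labels):
    """Grid-search the cutoff maximizing J = sensitivity + specificity - 1.
    labels: 1 for the target diagnosis, 0 otherwise."""
    scores, labels = np.asarray(scores), np.asarray(labels)
    best_j, best_cut = -1.0, None
    for cut in np.unique(scores):
        pred = scores >= cut
        sens = (pred & (labels == 1)).sum() / max((labels == 1).sum(), 1)
        spec = (~pred & (labels == 0)).sum() / max((labels == 0).sum(), 1)
        j = sens + spec - 1
        if j > best_j:
            best_j, best_cut = j, cut
    return best_cut, best_j
```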
Automated ROI-Based Classification Using the Putamen, MCP, and Midbrain

The z-scores of the ROIs, where a higher z-score represents greater atrophy (refer to the "Processing with Individual Voxel-Based Morphometry Adjusting Covariates (iVAC)" subsection in the Methods section), on the higher z-score side of the midbrain, putamen, and MCP are shown in Figure 4A–C. The plots of the index we created are shown in Fig. 4D (see Methods for the creation of the index). The index represents the higher side of the midbrain and the highest z-scores of the putamen and MCP. Positive index values suggest greater atrophic changes in the midbrain compared with the putamen/MCP. In contrast, negative index values indicate a higher z-score in the putamen or MCP than in the midbrain, indicating more pronounced atrophic changes in the putamen/MCP than in the midbrain. From this perspective, all PSP participants displayed positive index values greater than 1. On the other hand, all but two MSA patients showed negative values. The two MSA patients with positive indices had z-scores in the putamen (higher side) of 1.18 and 0.61, in the MCP of −0.08 and −0.16, and in the midbrain of 2.14 and 1.64.

Discussion

In the present work, we were able to independently validate the diagnostic utility of individually adjusted brain morphometry for the differential diagnosis of atypical parkinsonism. One of the particular strengths of the current work is the blinded evaluation of both the observer-dependent atrophy ratings and the imaging-based classification. Notably, the differentiation of PSP from PD and MSA showed remarkably high accuracy using both a visual interpretation of automatically generated atrophy maps and an automated ROI classification algorithm.

Recent advancements in MRI techniques have introduced several measures to detect structural changes in the brain. Voxel-based morphometry (VBM) is one of the most widely used neuroimaging methods that quantitatively assess differences in anatomical brain structures. 21 It primarily focuses on group-level comparisons, but an individual-level analysis method has been reported for detecting hippocampal atrophy in Alzheimer's disease. The "Voxel-based Specific Regional Analysis System for Alzheimer's Disease" (VSRAD) has mainly been developed and used as clinical support software in Japan. 22 This software generates statistical results for regional brain volumes, such as the hippocampus, by referencing the mean and standard deviation of a control dataset. In contrast, the iVAC approach considers the effects of age, sex, intracranial volume, and other factors in the regression model for the reference control dataset. 15 The previous study applied the iVAC to MSA patients, demonstrating its increased sensitivity in detecting atrophic changes compared with conventional T2-weighted image readings by experienced neurologists. Using this approach, we examined the visual evaluation of the individual z-score maps for both gray matter and white matter to differentiate MSA, PSP, and PD. Most PSP patients consistently showed the midbrain atrophic appearance, yielding a high specificity with high agreement between the two blind raters (Cohen's kappa 0.837), even though one rater did not have prior training in iVAC analyses. Two PSP patients did not display the typical appearance because only small midbrain atrophic regions contained voxels with a z-score above 2. MSA patients exhibited a characteristic appearance of putaminal atrophy and of pons/MCP atrophy. These appearances were consistent with the findings of a previous report.
15 However, our results indicated a slightly lower accuracy for MSA compared with PSP and with previous reports. In the previous paper utilizing iVAC, most patients exhibited pons and MCP atrophic changes (96.2%), even among MSA-P patients. 15 This observation was suspected to be due to the predominance of the cerebellar type in East Asia 23 or to a possibly different susceptibility of the pons and cerebellum among distinct populations. These factors might contribute to the varying diagnostic accuracy between raters in the present study, owing to the lower specificity of putaminal atrophic changes in MSA. In addition, the number of healthy controls was larger in the previous report (n = 189) than in the current study (n = 48), which may have an impact on the accuracy.

Furthermore, we examined the diagnostic potential of ROI-based z-score assessments, which facilitate an automatic decision-making process. A number of different modalities, ROI-based methods, and machine learning methods have been introduced to enhance diagnostic accuracy. Regarding analyses using T1-WI images, a machine learning method using a 3-node C4.5 decision tree was applied and showed a high differentiation accuracy of 97.4% for PSP and MSA versus PD, 24 and of 96.8% for MSA patients versus PD. 25 The automatic analysis pipeline of the magnetic resonance parkinsonism index (MRPI) or MRPI 2.0 has been introduced for differentiating PSP. 26,27 Recent reports of MRPI 2.0 using two different cohorts showed differentiation between PSP-P and PD with an AUC of 0.93 for training data and 0.92 for test data. 28 In addition, the AUCs for discriminating between PSP and non-PSP parkinsonism at a clinically unclassifiable stage were 0.91 for both the pons-to-midbrain ratio (P/M) and the MRPI, and 0.98 for the P/M 2.0 and the MRPI 2.0. 29 Diffusion MRI has been reported to detect abnormalities of the MCP and putamen in MSA, and diffusion MRI of the MCP and putamen can effectively discriminate between MSA and PD. 30,31 In another multicenter study involving 1002 patients, a support vector algorithm with a linear kernel, applied to differentiate between PD and atypical parkinsonism using diffusion-weighted images and motor scores, reported an AUC of 0.962 for PD versus atypical parkinsonism and of 0.897 for MSA versus PSP. 32

In our study, we created a new, simple index to enhance diagnostic accuracy. This index was derived by comparing the highest z-score values, indicating the most atrophic region, from the midbrain, representing the relevant region of PSP, and from either the putamen or MCP, representing the relevant regions of both MSA-P and MSA-C. The concept behind this index is to assign the highest z-scores to either the positive or the negative side based on the most affected brain region related to PSP or MSA. If the midbrain has the highest z-score, indicating it is the most affected brain region, the index will be positive. Conversely, if the putamen or MCP has the highest z-score, the index will be negative. Notably, all indices in PSP were on the positive side, while those of all but two MSA patients were on the negative side. This dichotomy contributes to the high differential diagnostic potential of the algorithm, with an AUC of 0.991, a LOOCV accuracy of 0.940, and a kappa of 0.798 for PSP. While the accuracy for MSA still requires improvement, the use of both ROI-based automatic and visual z-score map assessments for PSP may be useful in clinical settings, along with the physical examination.
Limitations

Long-term clinical follow-up rather than pathological confirmation was considered the gold-standard diagnosis in the present paper. Consequently, the accuracy of the clinical diagnosis has an impact on the diagnostic outcome, and clinical misdiagnosis can occur even when the latest diagnostic criteria are applied. 33 This study did not differentiate between subtypes of MSA or PSP, nor did it include other variants of PSP. The potential differences between disease subtypes when using our imaging approach need to be investigated in further studies. While the system has the potential to be clinically useful, our results suggest that effective use may require training or establishing consensus on subjective decisions, and the prior probability is also crucial when using this system to increase accuracy. Further, the accuracy may also be influenced by factors such as the performance of the MRI scanner (including the manufacturer, magnetic field strength, noise, and slice thickness), anomalies or other abnormalities in the brain, and the reliability of the segmentation pipeline or software. As these techniques continue to develop, the accuracy could further improve. Finally, although we validated the accuracy of the automated ROI analysis using LOOCV, our approach requires further validation in other cohorts with larger sample sizes, and comparison with other advanced techniques, to assess its clinical utility as an easy-to-process approach.

Conclusion

The present study confirms the high diagnostic accuracy of individually supported brain morphometry for differentiating PSP, though improvements are still needed for MSA. One of the particular strengths of the present work is the blinded assignment of an imaging-based diagnosis, mitigating various forms of bias that are common in research on diagnostic tests. The MSA results consistently showed characteristic atrophic patterns that can be easily identified visually, supporting the results of a previous report from Japan. Furthermore, PSP patients consistently displayed a distinct appearance in the midbrain with high diagnostic potential. Finally, we were able to develop a simple index to differentiate parkinsonism, which resulted in notably high diagnostic accuracy for PSP patients.

Figure 1. Typical Appearance of MSA in the Putamen and Pons/MCP. This figure presents images with gray matter (A) and white matter (B) z-score maps. (a)–(h) each correspond to an individual participant with MSA. The red-yellow color gradient represents atrophic regions with a z-score >2, where atrophic changes occur in fewer than approximately 2.5% of the healthy controls. The number located at the bottom left of each normalized brain indicates the Z plane value in Montreal Neurological Institute (MNI) coordinates. Panel (A) displays the typical appearance in MSA with putaminal atrophic changes (arrows). Panel (B) shows the typical appearance in MSA characterized by atrophy in the pons and middle cerebellar peduncle (MCP) (arrows). GM represents gray matter, and WM represents white matter. R represents the right side.
Figure 2. Typical Appearance of PSP in the Midbrain. This figure shows images with white matter z-score maps with the typical appearance in PSP with midbrain atrophic changes. (A)–(I) each correspond to an individual participant with PSP. The red-yellow color gradient shows atrophic regions with a z-score >2. The number located at the bottom left of each normalized brain indicates the Z plane value in Montreal Neurological Institute (MNI) coordinates. WM represents white matter. R represents the right side.

Figure 3. Other Appearances in PSP Patients. This figure presents images featuring white matter z-score maps with other appearances in PSP. The red-yellow color gradient shows atrophic regions with a z-score >2. Panel (A) displays pons and middle cerebellar peduncle (MCP) atrophy in addition to the midbrain atrophy shown in the PSP patients. Panel (B) presents the patients with PSP misclassified into other diagnoses. Patient (n) exhibited significant atrophic changes with high z-scores in both the midbrain and the pons/MCP. Patients (o) and (p) showed smaller midbrain atrophic changes compared with other PSP patients, although atrophic changes in the midbrain were still present. R represents the right side.

Figure 4. Z-scores, Index Values, and ROC Analysis. Panels (A–C) show dotplots with boxplots of the highest z-score values per participant for each ROI in the midbrain, putamen, and middle cerebellar peduncle (MCP). Higher z-scores represent more atrophic changes. In the MSA group, bright green indicates MSA-P, and medium turquoise indicates MSA-C. **P < 0.01, ***P < 0.001, ****P < 0.0001, with P values calculated using the Mann-Whitney U test, adjusted using the Bonferroni correction for multiple comparisons. Panel (D) displays dotplots with boxplots of the index values we examined. The index represents the higher side of the midbrain and the highest z-scores of the putamen and MCP, focusing only on positive z-score values. Positive index values indicate that the z-score of the midbrain is higher than that of the putamen and MCP, representing greater atrophic changes in the midbrain. Negative index values indicate that the z-score of the putamen or MCP is higher than that of the midbrain, suggesting more atrophic changes in the putamen or MCP than in the midbrain. Panels (E) and (F) show the receiver operating characteristic (ROC) curves for differentiating MSA (Panel E) and PSP (Panel F). The cutoff indicates the optimal cutoff value identified using the Youden Index (J) method. Sp represents specificity, Se represents sensitivity, and AUC represents the area under the curve.

TABLE 1 Contingency table of iVAC-supported rater decisions and clinical diagnoses
added: 2024-02-06T06:17:22.274Z
created: 2024-02-05T00:00:00.000
metadata: { "year": 2024, "sha1": "9fe42f83d4b317f6f30df61b2d090ef2683b7bba", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/mdc3.13987", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "c2f609a0e5dcaca87883b5f09f9e36b6f74f2d4c", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
id: 218900041
source: pes2o/s2orc
version: v3-fos-license
text:
Efficacy of an indigenous strain of entomopathogenic nematode against diapausing larvae of the Codling moth, Cydia pomonella L. (Lepidoptera: Tortricidae), in apple-growing hilly areas of the Ladakh Region

The indigenous entomopathogenic nematode Heterorhabditis pakistanensis, strain NBAIR H-05, was evaluated against diapausing larvae of the Codling moth, Cydia pomonella L. (Lepidoptera: Tortricidae), at 3 different dosages, i.e., 15, 20, and 25 g/l of water, in apple orchards in the district of Kargil, Ladakh Region, India, during 2017 and 2018. The two years' pooled average density of diapausing larvae ranged from 34.6 to 56.8 larvae/trunk band before treatment, which declined by 43.85 to 86.27% across the different entomopathogenic nematode treatments at concentrations between 7.5 × 10^5 IJs and 1.25 × 10^6 IJs/tree. The percent reduction in larvae over control varied from 41.78 to 85.77% for 7.5 × 10^5 IJs and 1.25 × 10^6 IJs, respectively. Two-year pooled data indicated larval mortality between 39.85 and 73.38% and between 4.0 and 12.89% across the different treatments at 48 and 72 h, respectively, with a statistically significant difference (P < 0.001). An increase in the dosage of the nematode formulation from 15 g to 25 g resulted in increased larval mortality (r = 0.92**). Post-wetting of the trunk band after 24 h in each treatment resulted in significantly higher larval mortality than non-post-wetting. There was a non-significant difference (t = 0.83) between larval mortality across treatments during 2017 and 2018.

The period from August onward is spent in diapause, wherein mature larvae survive under the loose bark of trees, in crevices, or under rocks or tree debris in the orchard until May of the next year (Ahmad et al. 2018). The nine-month prolonged diapausing larval stage, being the most susceptible stage, offers a good opportunity for the management of this pest and a considerable reduction of the first-generation population density. Overwintering larval counts of 35.37 to 99.56 per tree trunk up to a height of 1 m from ground level (Ahmad et al. 2018) are indicative of the future population level of the pest if not targeted in time. In the past two decades, many convenient and cost-effective tactics have been developed for managing diapausing larvae of C. pomonella (Higbee et al. 2001; Cossentine et al. 2004; and Hansen et al. 2006), among them the application of entomopathogenic nematodes (EPNs) belonging to the genera Steinernema and Heterorhabditis. These biological control agents are non-hazardous, safe to humans, and easy to apply, and have proved remarkably effective in the management of C. pomonella (Lacey and Unruh, 1998; Lacey and Chauvin, 1999; Unruh and Lacey, 2001; Cossentine et al., 2002; and Lacey et al. 2005). Lacey and Chauvin (1999) reported 100% larval mortality of C. pomonella treated with different dosages of Steinernema carpocapsae and S. feltiae, whereas Cossentine et al. (2002) documented 93% larval mortality with the same nematode species. De Waal et al. (2010) reported 80% larval mortality with a local African isolate of Heterorhabditis zealandica. Odendaal et al. (2015) recorded larval mortality between 41 and 67% with 3 EPN species, H. bacteriophora, S. jeffreyense, and S. yirgalemense. The present study is the first attempt in the apple-growing hilly areas of the Ladakh Region, India, utilizing a local EPN strain, Heterorhabditis pakistanensis NBAIR H-05, against diapausing larvae of C. pomonella.
Banding of tree trunk

The experiment was conducted in the two consecutive years 2017 and 2018 in 4 different apple orchards of Kargil District, viz., Slikchey, Shanigund, Bagh-e-Khomini, and Hardas, located at 34°33′27.54′′ N and 76°07′34.39′′ E in the Ladakh Region, India. By the end of August of each year (2017 and 2018), 10 tree trunks in each orchard were banded with gunny bags up to a height of 1 m from ground level. The banding was performed in order to provide shelter to the overwintering third-generation larvae of the Codling moth.

Application of entomopathogenic nematode

The freshly prepared clay formulation of the local EPN strain, Heterorhabditis pakistanensis NBAIR H-05, used in the study was obtained from the ICAR-National Bureau of Agricultural Insect Resources (NBAIR), Bengaluru, India. One gram of the clay formulation contained approximately 50,000 live infective juveniles (IJs) of H. pakistanensis. The clay powder formulation of the EPN was evaluated at 3 different concentrations: 15 g (7.5 × 10^5 IJs), 20 g (1.0 × 10^6 IJs), and 25 g (1.25 × 10^6 IJs). The treatments were applied with and without post-wetting of the tree trunk, making up 6 treatments (T1–T6), and 1 treatment (T7) was included as an untreated control. All 10 banded tree trunks were made thoroughly wet (1 l of water/tree trunk) with their respective treatments, using a rose-can sprinkler fitted with a nozzle with small holes to break up the stream of water into small droplets. Application was performed in the evening hours, rather than during sunshine hours, to allow the bands to remain moist for a longer period and thereby favor the survival and aggressive foraging of the EPN against overwintering larvae; it took place during the last week of August, which marked the termination of larval overwintering of the Codling moth. Five trees in each of treatments T2, T4, and T6 were post-wetted with fresh water 12 h after EPN treatment and marked as "post-wet."

Collection and storage of dead larvae

Thirty-six hours after EPN treatment, the trunk bands of all 10 trees were opened for collection of the larvae present under the band and/or under the bark of each tree, for the post-treatment count. Larvae collected from each treatment (with or without post-wetting) were kept separately in plastic containers (250 ml) half filled with moist soil. The containers were brought safely to the laboratory and placed in a BOD (biological oxygen demand) incubator maintained at 27 ± 1°C. Data regarding larval density per tree trunk and larval mortality after 48 and 72 h, treatment-wise and year-wise, were duly recorded for subsequent analysis.

Confirmation of nematode-killed larvae

A change in larval color from the original light pink to brick red specifically indicated Heterorhabditis-induced mortality (Fig. 1). For further confirmation, the dead insect larvae (cadavers) were placed on a White trap (White, 1927) for the release of infective juveniles.

Statistical analysis

Minitab 11.12 (Minitab LLC) was used to analyze the data by ANOVA. Percent larval mortality was determined by dividing the number of dead larvae by the total number of larvae in a sample. Percent reduction over control was calculated using Abbott's (1925) formula: (T − C)/(100 − C) × 100, where T = mortality in the treated condition and C = mortality in the untreated control condition.
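As a worked illustration of the two formulas above (a minimal sketch; the numbers in the usage example are invented for illustration, not the study's data):

```python
def percent_mortality(dead, total):
    """Percent larval mortality in a sample."""
    return 100.0 * dead / total

def abbott_corrected(treated_mortality, control_mortality):
    """Abbott's (1925) correction: percent reduction over control.
    Inputs are percentages; control mortality must be below 100."""
    return (treated_mortality - control_mortality) / (100.0 - control_mortality) * 100.0

# Hypothetical example: 70% mortality in a treatment, 8% in the untreated control.
print(abbott_corrected(70.0, 8.0))   # ~67.4% reduction over control
```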
Results and discussion

Effect of tree trunk banding and nematode on diapausing larvae

Gunny bags wrapped around the apple tree trunks provide an ideal shelter for overwintering and also protect the diapausing larvae of the Codling moth from birds and other predators. In a previous study at Kargil, wherein apple tree trunks were banded with gunny bags, 35.37 to 99.56 overwintering larvae per tree trunk were observed (Ahmad et al. 2018). Similar trends were observed in the present experiment. The average larval density of the Codling moth per tree trunk in the 2 experimental years was between 34.6 and 56.8 larvae/trunk (Fig. 2). The larval population declined to between 28.5 and 6.7 larvae per tree trunk after nematode application (Fig. 2), exhibiting 43.85 to 86.27% mortality. When compared location-wise, the larval densities were statistically different from each other (P ≤ 0.01). Cumulative larval mortality during 2017 and 2018 varied from 46.57 to 90.12% and from 41.13 to 82.43%, respectively (Table 1). Pooled larval mortality ranged from 43.85 to 86.27% (Table 2) and differed statistically (P ≤ 0.01) among treatments at the 48 and 72 h intervals. Similar observations were made when the data were analyzed separately for each year (Table 3). The percent reduction in larvae over control ranged from 41.78 to 85.77% across the different treatments of H. pakistanensis at concentrations between 7.5 × 10^5 IJs and 1.25 × 10^6 IJs/tree (Table 2), and the values were statistically different (P ≤ 0.01) from each other.

Effect of nematode dosage on larval mortality

An increase in EPN concentration from 15 g to 25 g resulted in increasing larval mortality, indicating a strong positive correlation (r = 0.92**) between the 2 parameters (Table 2). This may be attributed to the increase in the number of IJs with increased dosage in the water suspension, which maximized the chances of IJs encountering diapausing larvae, resulting in higher larval mortality. Similar dosage-dependent results were also reported by Lacey et al. (2005), who evaluated S. feltiae against Codling moth larvae and recorded 80% larval mortality at the higher concentration of 50 IJs/ml of water, in comparison to only 50 and 70% mortality at the lower doses of 10 and 25 IJs/ml of water, respectively. The findings also confirm the report of Laznik et al. (2010), who found that a high IJ concentration of S. feltiae applied in the field against the Colorado potato beetle (Leptinotarsa decemlineata) significantly reduced the larval population of the pest as compared with the lowest IJ concentration.

Nematode efficacy against larvae on post-wet tree trunks

Increased performance of EPNs due to pre- and post-wetting has been reported by several workers (Cossentine et al. 2002 and De Waal et al. 2010). A similar trend was observed in the present study. Post-wetting, i.e., wetting of the tree trunk after 24 h, resulted in higher larval mortality than non-post-wetting (Table 2). This may be due to the retention of adequate moisture by the gunny wraps, which enhanced the mobility of the nematodes, in turn increasing their host-finding ability and ultimately the penetration of IJs into the host insect body. Comparison of the larval mortality data for the different treatments between 2017 and 2018 through Student's t test, however, indicated non-significant differences (Table 3). Apart from the dosages used, several other factors, such as nematode application with a rose-can sprinkler during evening hours, the optimum temperature of 18-23°C during the 3rd week of August in the Ladakh Region, and probably the cold-adapted nature of H. pakistanensis, might have contributed to achieving the high level of larval mortality. Reduced larval mortality with air-blast spraying of EPNs during morning hours was reported by Lacey and Unruh (1998).
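A minimal sketch of the dose-response correlation and the year-to-year comparison reported above, using SciPy; the arrays hold placeholder values for illustration, not the study's measurements.

```python
from scipy import stats

# Hypothetical dose levels (g of formulation) and observed mean mortalities (%).
dose      = [15, 15, 20, 20, 25, 25]
mortality = [44.0, 52.3, 61.5, 68.9, 79.2, 86.3]

r, p = stats.pearsonr(dose, mortality)
print(f"dose-mortality correlation: r = {r:.2f}, P = {p:.3f}")

# Year-wise comparison of treatment mortalities (placeholder values).
mortality_2017 = [46.6, 58.1, 66.0, 74.3, 83.5, 90.1]
mortality_2018 = [41.1, 53.4, 60.2, 70.8, 77.9, 82.4]
t, p = stats.ttest_ind(mortality_2017, mortality_2018)
print(f"2017 vs 2018: t = {t:.2f}, P = {p:.3f}")
```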
Temperatures between 15 and 25°C were reported to favor active host searching and penetration of EPNs into diapausing larvae of the Codling moth (Shapiro-Ilan et al. 2017), whereas at 14°C the activity of most EPNs slows down (Odendaal et al. 2015), apart from the supremacy of cold-adapted EPN species over warm-adapted ones (Lacey et al. 2006).

Conclusion

In the present study, the efficacy of H. pakistanensis NBAIR H-05 against diapausing larvae of the Codling moth provided encouraging results. Tree trunk banding for collecting diapausing larvae in one place and killing the larvae en masse by applying nematodes proved an excellent strategy for the management of the Codling moth. H. pakistanensis NBAIR H-05, as a valuable biological component, may be recommended in the integrated management program for the Codling moth in Ladakh. In the long run, the amalgamation of these cost-effective, eco-friendly tactics will certainly help to promote the apple industry of the Ladakh Region if popularized on a large scale.
added: 2020-05-27T08:41:24.172Z
created: 2020-05-26T00:00:00.000
metadata: { "year": 2020, "sha1": "df2c25c594d5851cdd36ec924cf6f01a6710a089", "oa_license": "CCBY", "oa_url": "https://ejbpc.springeropen.com/track/pdf/10.1186/s41938-020-00263-8", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "df2c25c594d5851cdd36ec924cf6f01a6710a089", "s2fieldsofstudy": [ "Biology", "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Biology" ] }
id: 249836952
source: pes2o/s2orc
version: v3-fos-license
text:
The Twelve Principles of Green Tribology: Studies, Research, and Case Studies—A Brief Anthology

Sustainability has become of paramount importance, as evidenced by the increasing number of norms and regulations concerning various sectors. Due to its intrinsically trans-sectorial nature, tribology has drawn the attention of the supporters of sustainability. This discipline allows environmental, economic, and social impacts to be decreased in a wide range of applications following the same strategies. In 2010, Nosonovsky and Bhushan drew up 12 approaches based on the 12 principles of green chemistry and the 12 principles of green engineering, defining the "12 principles of green tribology." This review exploits the 12 principles of green tribology to survey the research developed at the intersection of sustainability and tribology. Different approaches and innovative studies are proposed in this short selection as references to consider for further development, pursuing the efforts of the scientific community towards a sustainable future through the contribution of tribosystems. The manuscript aims to provide practical examples of materials, lubricants, strategies, and technologies that have contributed to the overall progress of tribology, decreasing wear and friction and increasing efficiency, while at the same time promoting sustainable development by lowering toxicity, waste production, and the loss of energy and resources.

Introduction

The environment has become a topic of critical importance in the last decade. Technology plays a significant role in the battle against climate change to save our planet. The development of new technologies may lower pollution, reduce raw material exploitation, and improve efficiency [1]. As a trans-sectorial discipline, tribology affects the efficiency of several fields, e.g., automotive, industry, biomedical, and aerospace; its contribution to sustainable development may be substantial, and different strategies may be employed. The future perspectives of research in tribology were previously investigated by Dowson and Taylor [2] in 1985 and by Jost [3] in 1990; the main problems indicated concerned technological aspects, surface treatments, and the wear of metallic materials, with a view to economic savings and the improvement of efficiency. Since then, the environmental aspect of tribology has become more and more important. Nowadays, it is of paramount significance, and its connections with economic and social impacts are well known [4-7].
The approach named Life Cycle Tribology (LCT) was presented by Kato and Ito [8] for the first time in 2005. They applied the methodology of Life Cycle Assessment (LCA) to a tribological system. The influence of tribology was evaluated considering three main areas of impact categories, namely, human health, ecosystem quality, and resources. Indeed, improving friction properties could positively affect the climate change and fossil fuel impact categories, reducing energy consumption and hence CO2 emissions. The reduction in the wear particles generated, for example, by roads, cars, and vehicles decreased the impacts within the respiratory organic and inorganic substances, ecotoxicity, and acidification/eutrophication categories, thanks to the reduction in NOx and particulate matter production. Control of wear allowed the impacts in the carcinogenesis, respiratory substances, ecotoxicity, acidification, and land use categories to be cut. The evaluation of the impacts was not limited to the environmental aspect only: Kato and Ito considered and combined the Life Cycle Cost (LCC) and wealth evaluation with the LCA. The result was the definition of a holistic methodology called Life Cycle Tribology.

The application of LCA to a tribosystem is complicated by the interaction among several elements. Wani and Anand [8] identified the attributes of a tribosystem that affect the LCA, including material conservation, lubricant lifetime, energy conservation, protection of the environment, and the disposal of triboelements and lubricants. The LCA attributes incorporate the requirements of LCA for triboelements, such as longevity of materials, high wear resistance, minimum power consumption, minimum wastage, no carcinogenesis, minimum replenishment of lubricants, minimum contamination, minimum emission of toxic gases and materials, easy and low-cost reclamation or disposal, ease of assembly and disassembly, biodegradability to renewability, and minimal environmental hazards. The authors highlighted the interrelation among the LCA attributes, underlining how the different aspects of a tribosystem affect sustainability and environmental impacts from various points of view. These relations were translated into a matrix representation that developed the LCA expression of the triboelements and allowed different tribosystems to be compared and evaluated.

The interconnection of tribological analysis methods was investigated by Kurdi et al. [9]. The authors evaluated the physical experiments, the different modeling and simulation methods, and the LCA methodology that have been employed to examine tribological systems. The presented techniques complement each other because one fills the gaps that the others, due to technical impediments or weaknesses, cannot cover. Combining the different approaches leads to a more accurate system investigation and can be successfully integrated into the LCA.
Sasaki [1], in a work published in 2010, listed some of the topics related to what he called "eco-tribology": tribo-materials to enhance recyclability, friction and wear properties, and environmental safety. Examples of eco-tribological elements are diamond-like carbon coatings to decrease friction; water-based lubricants, whose operative life should be extended to lessen their impact on the environment; and advanced machine elements, considering that they represent one of the primary sources of inefficiency in industry. The advancement of these and other tribo-elements may reduce the energy demand, for example, in vehicles and industries. Tribosystems should be individually analyzed to find particular solutions to reduce energy consumption and carbon dioxide (CO2) emissions; the maintenance of tribosystems should represent the bedrock of eco-design and be integrated with the concept of LCT to control waste production and increase efficiency [1].

In 2010, Nosonovsky and Bhushan [10] published a pioneering work on green tribology. They broadened the concept of eco-tribology introduced by Sasaki [1] in the same year, formulating 12 principles that became the basis of the sustainable aspects of this discipline. The concept of green tribology is defined as "the science and technology of the tribological aspects of ecological balance and environmental and biological impacts" [10]. The green tribology principles (GTPs) provide guidelines for developing and producing sustainable tribo-elements and tribosystems.

The idea that tribology has wide-ranging impacts is supported by Assenova et al. [11], who mainly focused on the effects on quality of life. Five different sustainability indicators that affect the quality of life were considered: social, environmental, resource, economic, and technological. Using the green tribology principles, Assenova et al. centered the study on the environmental quality of life. The development of advanced tribological systems and the conscientious use of the tribological tenets may determine savings of 1.5% to 6% of the gross national product in nations like China and the United States of America. The employment of innovative lubricants, e.g., natural or biodegradable lubricants, and biomimetic materials reduces health and pollution hazards. Tribology is one of the disciplines that may drive the transition to renewable energy, increasing efficiency through accurate system design to limit energy and heat dissipation. The minimization of friction, wear, and wear debris generation, and the employment of sustainable lubrication and suitable surface finishing, in synergy with the push towards more efficient use of renewable energy, contribute to the enhancement of the quality of life and to sustainable development.

Three different case studies were discussed by Tzanakis et al.
[4], considering the impacts of tribology on global sustainable development. The authors analyzed micro combined heat and power units from the domestic sector, slipways for lifeboats, and skateboard wheels, underlining the aspects affected by tribology. The GTPs were considered the basis on which to structure sustainable thinking, expressed through sustainable design strategies, production in service of durability, and quality of life. These strategies are reflected in the analyzed case studies in the careful selection of materials and in design alterations. The sustainability of tribological systems is strictly connected to the extension of the triboelements' lifetime, the reduction of wear, the increase in energy use efficiency, and the decrease of waste.

One of the most famous and complete studies that calculated the economic impacts of tribology on energy consumption and carbon emissions is the one by Holmberg and Erdemir [7]. By estimating the global energy consumption in the analyzed sectors and calculating friction, wear, and energy losses, estimates of their effects and potential forecasts were derived and the possible savings were assessed. In 2014, the world energy consumption was 396 × 10^12 MJ, distributed among industrial activities, transportation, domestic use, and raw materials. The implementation of technologies aimed at reducing friction and wear could cut energy consumption by 21.5 × 10^12 MJ in the short term (assumed to be eight years), corresponding to EUR 455 billion and 1460 Mt of CO2 emissions. In the long term (15 years), the savings could reach 46 × 10^12 MJ of energy consumption, EUR 973 billion, and 3140 Mt of CO2 emissions. The new technologies presented include new models of engine lubricant, additives, new materials, surface treatments, coatings, and new adaptable designs that can be easily integrated into the current market or require the replacement of used technologies and components. The mentioned solutions fall perfectly within the strategies proposed by the GTPs.

Stachowiak [12] faced the innovation areas from a different point of view, firstly considering the industrial problems connected to friction and wear. Once these issues had been solved or contained, new development areas appeared, namely, biotribology, environmental tribology, and nanotribology. Biotribology developed due to the increasing number of prostheses and implants to satisfy the human desire for eternal life. Environmental tribology is the answer to energy consumption and production problems, the degradation of the environment, and climate change. Nanotribology is an intrinsic consequence of developments in the nanotechnology field.
Zhang [5] better explained the holistic approach of green tribology and the ability of this discipline to meet the sustainability demands of today's society. Green tribology is intended as a "new mode of thinking that represents views on ecological balance and environmental protection, and so embodies the ideology of the sustainable developments of nature and society perfectly" [5]. As one of the pioneers in this field, Zhang broadened the concept of green tribology over time, grasping the many different aspects embedded in the term "green tribology." The analysis investigated the technological aspects of the sustainable development of tribology, considering technologies to save energy and materials and to extend the lifetime of triboelements, such as new lubricants and coatings or the use of nanostructured materials to reach a super-low-friction regime, and techniques to reduce human health hazards and ecological impacts, e.g., eco- and bio-lubricants and biomimetic elements. LCA, and in particular the abovementioned approach presented by Wani and Anand [8], is seen as a valuable tool to evaluate the environmental impact of tribosystems both at the development and at the operative stages. Five main development areas of green tribology have been identified:

• Implementation of the knowledge, methodologies, and technologies;

Tribology represents an excellent opportunity to lower carbon emissions and develop a sustainable economy, supporting society and the environment. Jost perfectly explained the cause of green tribology: "[...] the cause of green tribology is indeed a worthy cause for all tribologists and their organizations to pursue, as it will help tribology to play its rightful part, not only for the benefit of science and technology, but much more importantly, for the benefit of mankind [...]" [5].

The abovementioned 12 green tribology principles will be described in the following section and exploited to introduce selected research works. The aim is to collect practical examples of effective strategies to decrease the environmental impact of tribosystems, to investigate and explain the GTPs in the current scenario, and to move away from the theoretical aspect of green tribology, giving practical and compelling examples.

The 12 Principles of Green Tribology

The principles listed by Nosonovsky and Bhushan [10] are reported here and discussed. Each principle is analyzed considering the latest research works to better understand the direction that tribology and researchers have taken in the last few decades.

Minimization of Heat and Energy Dissipation

The discussion on worldwide energy consumption has become a topic of primary importance for the future of our planet. This section focuses on heat and energy dissipation by friction systems in different sectors and on possible strategies adopted to increase the energy efficiency of tribosystems.

According to the abovementioned study by Holmberg and Erdemir [7], worldwide total energy consumption in 2014 was about 396 EJ, divided into industrial activity (29%), transportation (28%), domestic use (34%), and other (the remaining 9%). In industry, the estimated energy spent to overcome friction is 20% of the sector's consumption [7]. Wear-related energy losses consider the energy needed to produce new parts for wear replacement and spare equipment for downtime. According to the study by Holmberg et al.
[13], the mining industry is estimated to consume about 40% of the total energy absorbed by the sector due to friction, producing about 970 million tons of CO2 eq annually (2.7% of total emissions). Holmberg et al. [13] also analyzed the friction and wear impacts in the paper industry, which result in a waste of energy ranging from 15% to 25% of the total energy employed in this field.

Specific data about energy consumption in the energy industry are missing in the literature. Holmberg and Erdemir [7] estimated the energy wasted due to friction at 20%, in accordance with the average energy dissipated in similar industry fields, and the energy loss due to wear at 22% of that due to friction.

Considering the transport sector, the energy lost to overcoming friction has been evaluated at about 30% of the energy consumption. In this case, the share of energy losses due to wear is smaller than in industry because of more effective lubricant technologies [7]. Passenger cars were analyzed in detail by Holmberg et al. [14]: friction phenomena are responsible for 38% of fuel energy use. This accurate analysis showed that this consumption is distributed as 35% to overcome the rolling friction in the tire-road contact, 35% for friction in the engine system, 15% in the transmission system, and the remaining 15% in brake contacts, as represented in Figure 1. As a result, owing to viscous losses and friction in gears, bearings, and seals, only 21.5% of the fuel energy is employed to move the car. A similar study by Holmberg et al. [15] on heavy vehicles, in particular trucks and buses, found that friction losses reach 33% of the fuel energy; dissipation is generally reduced compared with passenger cars, and the total energy employed to move the vehicle is equal to 34% of the fuel energy. The residential and services fields include heating, cooling, ventilation, lighting, consumer products, and business equipment. Holmberg [7] estimated that these sectors have the lowest share of energy dissipated by friction: about 10% of the energy consumed in these areas, of which 14% is spent due to wear.

Heat is one of the main forms of energy dissipation. The reduction of friction minimizes heat generation, saving energy and avoiding its dispersion. Moreover, heat can generate pollution; damage components, tools, and machines; and alter the lubricant. Stachowiak and Batchelor [16] and Abdel-Aal et al.
[17] explained how heat is dissipated between two surfaces. The asperities of the two sliding surfaces are fundamental for heat dissipation because they promote thermal flow. The accumulation produced when the generated heat exceeds the heat removed by the asperities promotes unwanted results like variations in the microstructure of the material or alteration of the surface. In particular, Stachowiak and Batchelor [16] reported on how the conjunction temperature can modify wear and dry friction by forming oxides, metallurgically transforming surfaces, altering the local geometry through thermal expansion, or even causing local surface melting. This temperature is called the "flash temperature" because of the very short time in which it arises. The authors also analyzed the alteration of elastohydrodynamic lubrication as an effect of temperature. The dependence of this phenomenon on the film thickness and its thermal conductivity was shown. The temperature profile along the film resembles a parabolic curve, with the highest temperature at the center. This can alter the viscosity and can lead to a variation in the lubricating mechanism, changing the wear rate and bringing the lubricant and the machinery to failure.

The control of temperature to avoid lubricant degradation was analyzed by Márton et al. [18]. The method to calculate the temperature of the lubricant during friction was based on the temperature of the housing, the temperature of the environment, the velocity, and the mechanical system, in order to predict slow changes in friction parameters. The reduction of heat transmittance using lubricants was investigated in the work by Zohdi [19]: the research aimed to design microscopic additives that allow for smaller heat dissipation during sliding.

To completely assess the sustainability of the presented strategies for facing energy and heat dissipation, the energy saved should be compared with the environmental impact of the additives or technology, considering their production, lifetime, and end of life. The evaluation is further complicated by the fact that the wear of the lubricated component should be included, because it determines maintenance and substitution impacts. Considering all these data on energy consumption due to friction and the relevant problem of heat dissipation, a great deal of wasted energy can be saved, and the environmental impacts can be reduced, through accurate tribological research.

Minimization of Wear

The reduction of wear is one of the main tasks of tribology, and hence lies at the base of green tribology. The wear phenomenon consists of material loss during sliding between two surfaces. Except for those applications in which it is required, e.g., welding, this damaging interaction represents something to be avoided. The role of lubricants has become increasingly important in dealing with wear, and their huge consumption places them at the center of the discussion on tribosystem lifetime preservation and environmental impact. For these reasons, this section focuses on the most widely employed techniques to avoid wear using green lubricants.

The study by Shi et al.
[20] on biopolymers evaluated hydroxypropyl methylcellulose (HPMC) as a dry green lubricant for sustainable manufacturing. Several tests analyzing the tribological features of this compound were carried out, highlighting the decrease in the coefficient of friction (COF) and the anti-wear behavior promoted by the formation of a lubricating transfer layer. Furthermore, HPMC is an environmentally friendly material with good mechanical performance. It represents an appealing research direction; nevertheless, its practical applications showed weaknesses such as a short protection life. Shi et al. [21] evaluated the addition of a solid lubricant additive, MoS2, to enhance wear resistance. This study investigated the effects of the presence of MoS2 in a biopolymer coating. The results highlighted a significant reduction of the coefficient of friction by 40% due to the presence of MoS2, improving the stability and the wear resistance of the bio-based composite. An important role was played by the crystalline morphology and by the content of the nanoparticles, with the optimum assessed in the range of 5-10 wt.%. Chen et al. [22] found an improvement in tribological performance by dispersing WS2 nanoparticles in a green base oil made up of different biodegradable lubricants. The tested steel surfaces exhibited a reduction of the COF from 0.15 down to 0.07 thanks to 1 wt.% of WS2 nanoparticles, which were additionally responsible for the auto-reconditioning effect observed on the worn surfaces.

In the last few years, scientific research has become increasingly oriented toward developing water-based green lubricants; nonetheless, oil-based lubricants still play an important role in the green lubricant market. In the study by Hernández-Sierra et al. [23], the tribological performances of several of the most common lubricants for general applications were investigated: water, seawater, graphite nanoparticles dispersed in water, synthetic oil, mineral oil, natural oil mixed with mineral oil and additives, natural oil with synthetic esters and additives, and castor oil. This detailed analysis accurately characterized the lubricants and reported a comparison between their average kinetic friction coefficients and their wear resistance. The bio-based lubricants exhibited the lowest wear rates and friction values when tested on steel samples.

As will be discussed later in Section 2.4 on natural lubricants, the main disadvantages linked to oil-based lubricants are the possibly non-eco-friendly production methods and their limited recyclability, because oil-based lubricants are typically toxic or harmful. Based on these considerations, Afifah et al. [24] proposed an oil-based lubricant synthesized from a renewable and biodegradable source, namely a mix of palm stearin methyl ester and Candida antarctica lipase B (an enzyme derived from a yeast), considered non-toxic for the environment. The analysis compared palm stearin methyl ester, epoxidized palm stearin methyl ester, and a traditional mineral-based oil. The produced epoxidized palm stearin methyl ester demonstrated the lowest friction coefficient, between 0.04 and 0.06, and a wear scar diameter that was smaller than that of the non-epoxidized lubricant but larger than that of the mineral-based oil. The modification of the chemical structure of the vegetable oil also had positive effects on its tribological characteristics at different temperatures, namely 75 °C and 95 °C. An interesting overview of the anti-corrosion and lubricating properties of fully green lubricants was given by Zheng et al.
[25]: The authors analyzed several chemically modified oil-based lubricants developed to overcome structural problems. The enhancement of the tribological performance of oil-based lubricants may lead to the substitution of petroleum-based lubricants in the market. This conversion should promote renewability and biodegradability, decreasing energy consumption and the carbon footprint.

Hu et al. [26] studied the tribological behavior of a water-based lubricant mixed with nanosized carbon dots (CDs). This technique allowed a shift from sliding to rolling friction, lowering the friction coefficient. The CDs in the water-based lubricant filled the asperities of the surface and avoided direct contact between the two surfaces; the rolling and sliding CDs provided mechanical support during the motion. A reduction in the coefficient of friction of up to almost 40% and in the wear rate of 38% was found. Moreover, the presence of the CDs enhanced the poor corrosion resistance of the water-based lubricant, inhibiting the corrosion-triggering mechanisms.

Eco-friendly production processes should support the fruitful results obtained by the use of nano-additives in bio-lubricants, and not compromise the green characteristics of the lubricant. Sarno et al. [27] proposed sustainable production methods for synthesizing carbon nanotubes and reduced graphene oxide from recycled plastic and charcoal. The tribological behavior of the thus-produced nanotubes and reduced graphene oxide in oil-based lubricants was assessed; the evaluated performances were very similar to those of traditionally produced nano-additives. Using 0.1 wt.% of reduced graphene oxide reduced the friction coefficient by 16% and 18% when dispersed in two commercial oils. The highest mean wear scar diameter reduction, 14-15%, was observed for the same reduced graphene oxide concentration.

Various strategies aim to significantly reduce the wear rate of sliding surfaces. The presented works faced the problems of energy consumption and component degradation by proposing different methodologies, e.g., nanostructured additives or bio-based materials. The use of renewable, non-hazardous, and non-toxic materials should be promoted; in any case, an efficient lubricant reduces wear and energy waste.

Reduction or Complete Elimination of Lubrication, and Self-Lubrication

The third principle aims to reduce the use of external lubricants, up to their complete elimination when possible, and to use self-lubricating materials where applicable. A huge amount of lubricant is wasted in wet machining processes: Kim et al. [28] reported that every year in Germany and the USA, more than 70,000 tons and 350 million liters of oil, respectively, are consumed for this purpose. Another typical problem that affects these cutting fluids is the impossibility of recycling them due to their workpiece chip content.

Reduction of Lubrication

Minimum quantity lubrication (MQL) was discussed in the study by Kim et al.
[28] as a way to minimize wasted products, environmental pollution due to the use of additives in these fluids, and energy consumption. The MQL method consists of injecting a small amount of cutting fluid in the form of mist, through compressed air, between the two contact surfaces. The micro dimension of the particles allows penetration into parts with complex geometry. Consequently, the chip production and the temperature at the interface are minimized. The amount of oil used with this method is less than about 1/10,000 of that in the usual machining process, the total cost is reduced by 15%, and the problems related to operator health are avoided. The complete elimination of cutting fluid can be achieved by employing self-lubricating materials.

Self-Lubricating Materials

Self-lubricating materials (SLMs) are an optimal solution to assure lubrication in different systems like vehicles, cutting tools, electronics, and home appliances. The increasing market demand in every field necessitates short manufacturing times; hence, a huge amount of lubricant is needed. Consequently, the efficiency of all the ancillary processes related to the treatment of lubricating fluids should be improved to manage the environmental impact caused by the typically non-eco-friendly nature of these substances. Self-lubricating materials are used in applications where liquid lubricants are difficult or impossible to use, for example, at cryogenic temperatures, in a vacuum, under extreme contact pressure, and where the processing waste must be minimized. SLMs can be classified according to the material acting as a lubricant. An overview of the most widespread solid lubricants employed in the market was provided by Furlan et al. [29] and is reported in Figure 2. According to Evans and Senior [30], a self-lubricating material is defined as "able to slide against a counter-body at efficient speeds and loads and, in the absence of a lubricating fluid, it does not suffer the damage that normally occurs when two metals slide under relative movement in the absence of lubrication." It is possible to find SLMs on the market as reinforcing phases in a matrix, forming a composite material. Metal matrix self-lubricating composites exhibit excellent tribological properties thanks to the gradual release of the solid lubricant from the matrix and the subsequent formation of a tribo-film, as explained by Xiao et al. [31].
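As a quantitative reference for the wear comparisons reported in this and the following paragraphs, wear volume is commonly described by Archard-type relations; the formulation below is the standard textbook one and is not taken from any of the cited studies:

$$V = K\,\frac{F_N\,s}{H}, \qquad k = \frac{V}{F_N\,s}$$

where $V$ is the worn volume, $F_N$ the normal load, $s$ the sliding distance, $H$ the hardness of the softer body, $K$ the dimensionless wear coefficient, and $k$ the specific wear rate (typically expressed in mm³ N⁻¹ m⁻¹). The "specific wear rate" and "wear coefficient" quoted for the composites discussed below are values of $k$ and $K$ in this sense.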
Molybdenum disulfide (MoS2) is a transition metal dichalcogenide (TMD) compound that is increasingly employed thanks to the important lubricating effect of its lamellar crystalline structure [32]. The lamellae are bound together by weak van der Waals interactions, allowing them to slide parallel to the direction of the shear stress [29,33,34]. It has been observed that the film coverage of MoS2 depends exponentially on its volume content; for this reason, the coefficient of friction decreases as the MoS2 volume content grows [31]. A detailed analysis of the tribological properties of MoS2 was provided by Vazirisereshk [35], reporting the correlation between temperature, friction force, and friction coefficient. It was found that MoS2 behavior changes depending on environmental operating conditions such as temperature, humidity, and oxygen, and on the microstructural properties of MoS2. In a vacuum, the friction coefficient decreases with increasing temperature; in general, the presence of water and oxygen determines a decrease in the lubricating performance due to the inhibition of the sliding motion. The best performance has been detected in a vacuum, which explains the extensive use of this solid lubricant in aerospace applications. The presence of humidity limits the sliding of the MoS2 layers, hindering the employment of molybdenum disulfide in common terrestrial applications [16,36]. MoS2 compounds are generally doped with other substances like antimony trioxide (Sb2O3) and lead (Pb) to improve their lifetime. The use of lead has been limited or banned [37], promoting the use of Sb2O3 due to its lower environmental impact. The development of more sustainable materials that reach the performance of traditional ones should be investigated. Another example of a self-lubricating compound from the TMDs is tungsten disulfide (WS2), which usually finds application in metal matrix composite materials. Freschi et al. [38] studied the enhancement of the tribological behavior of copper matrix composites through the use of micro- and nano-sized WS2 structures. It was shown how the synergetic effect of the WS2 structures can extend the lifetime of a component, contributing to the reduction of material exploitation. Considering different concentrations of WS2 in a copper matrix, the evaluation of the friction coefficient, specific wear rate, and wear coefficient indicated an optimum in the range of 10-15 wt.% of the second phase [39]. The specific wear rate was reduced by almost 30% with respect to pure copper, and the friction coefficient was lowered from 0.75 to 0.15.

Graphite is the most common solid lubricant employed, thanks to its good tribological properties and high mechanical resistance [40-42]. The development of nanotechnologies led to the introduction into tribosystems of carbon-based nanostructured materials that enhance the tribological properties exhibited by graphite, paving the way to materials like carbide-derived carbon, which is composed of heavily misaligned graphene layers [32]. The study by Rivera et al.
[43] evaluated the decrease in the coefficient of friction promoted by carbide-derived carbon. A huge improvement was highlighted by the obtained results, reaching a COF of 0.1 and a wear rate decrease of up to 70% compared to commercial crystalline graphite particles. The outstanding performance depended on the crystalline structure and on the slow release of solid lubricant from the pores during the wear test, leading to the formation of a uniform and stable tribo-film.

Reinert et al. [44] studied reinforcement particles for metal composites to improve the wear mechanism in a dry sliding regime. The analyzed second phases were made of carbon nanoparticles with different structures: multi-wall carbon nanotubes (CNTs), onion-like carbon, and nanodiamonds, characterized by different hybridization and morphology. The best properties were those of the CNTs: The lubricating effect increased with the volume content of CNTs up to 50 vol.%.

Polytetrafluoroethylene is another material employed for its lubricating performance. Lince [45] explained that its particular structure consists of arrays of long helixes with covalent bonds along the chain, held together by weak interactions. These helixes do not form chemical bonds with other molecules; for that reason, this material has low surface energy and hence good lubricating behavior. This characteristic also derives from the alignment of the chains along the motion direction and the drawing out of some chains onto the contact surface.

The minimization, accurate calibration, or complete elimination of lubricants paves the way for new research programs aimed at finding out which option is the best for the selected application and how it can be optimized in order to achieve the best performance with minimal waste.

Natural Lubrication

Lubrication is fundamental to decreasing friction, wear, and adhesion, therefore saving fuel and energy and reducing carbon dioxide emissions [45,46]. Most mineral and synthetic lubricants are non-renewable and toxic, since they impact the soil, water, and atmosphere [47-49]. There are legislative restrictions and standards to be respected to preserve the environment and limit harmful waste production.

European Legislation for Sustainable Lubricants

The management of chemicals, including lubricants, is regulated by legislation in a few countries, whereas in others, it is under development, as can be seen in Figure 3. The European Commission stipulated the "EU Ecolabel" for lubricants (2018/1702, updated to version 1.4 in 2021) [46] to reduce the hazard to the environment, human health, and any living organisms, promoting the conscious use of bio-lubricants from sustainable production and a circular economy. In particular, eco-friendly lubricants should satisfy specific requisites [46]. The standardization issued by the European Committee for Standardization, Technical Committee number 19 (CEN/TC19), covers methods of sampling and testing, terminology, and classification of petroleum, synthetic, and biological lubricants (together with fuels) [56].
Possible non-toxic and sustainable replacements, especially natural lubricants, are coming forward as long-term environmentally friendly alternatives to traditional lubrication methods. Even if natural oleochemicals are ultimately degraded to water and carbon dioxide, just as petrochemical ones are, their carbon cycle is closed, which means that the released CO2 is compensated [57]. Moreover, natural lubrication is potentially more economical than traditional methods, as less energy and workforce are required for production and maintenance [49,58]. Although they are more sustainable options with higher lubricity, their formulation currently requires a higher budget than that of traditional mineral oils (MOs) [48], even though the cost of petroleum keeps rising [47]. Research is being conducted to improve their formulation at a more competitive price and to exploit the reduction of energy use; hence, the overall economic and green advantages are still a work in progress.

In support of selecting the most suitable lubricant, Balo et al. [47] proposed the criterion weighting method. It is a multi-criteria decision method that assigns relative priorities to different physical and chemical properties, cost, efficiency, and environmental risk to evaluate the most feasible lubricant for a specific application. It can be applied to all kinds of lubricants, from MOs to natural ones.

Composition and Properties

Natural lubrication is based on oil, mainly vegetable oils and biomass-sourced oils and greases, or on water. The common and key element is the presence of fatty acids, whose type, chain length, and polarity are the main influencing factors [48,57]. Animal grease, plants, and microorganism-based oils are characterized by an amphiphilic structure, which contains long-chain fatty acids (from 4 to 36 carbon atoms with carboxylic acid head groups [48]). In particular, the polar end groups of glycols and tri-glycerol enable better lubricant characteristics with respect to MOs. They show lower volatility, higher flash and fire points, higher biodegradability, and low water and environmental toxicity [10,47,59], depending on process conditions and genetic or chemical changes during production [59]. Moreover, their viscosity index is usually higher than that of traditional lube oils, since it directly depends on the polarity of esters and glycols [45], the length of the chain (carboxylic acid or alcohol hydrocarbon chains), and the degree of saturation, but it is inversely proportional to the shear rate (non-Newtonian behavior) [60].
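For reference, the viscosity index (VI) mentioned above is standardized by ASTM D2270; for oils with VI up to 100 it is computed as

$$\mathrm{VI} = 100\,\frac{L - U}{L - H}$$

where $U$ is the kinematic viscosity at 40 °C of the oil under test, and $L$ and $H$ are the 40 °C kinematic viscosities of reference oils with VI = 0 and VI = 100, respectively, that have the same viscosity as the test oil at 100 °C. A higher VI therefore means a smaller viscosity drop with increasing temperature, which is the property that favors many natural oils over MOs.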
The most suitable fatty acids as lubricants are palmitic, linoleic, and high-oleic acids [61]. The latter is rich in monounsaturated fats, shows high viscosity at low temperatures [47], and, in high concentrations, even enhances the oxidation properties [62]. They are usually obtained from oils through transesterification with alkali catalysts, followed by hydrolysis and conversion to esters, amines, or amides, or reduction to alcohols, to obtain the final lubricant [59]. Particularly in boundary lubrication [48], and in general in thin- and thick-film regimes, the polar group of the fatty acids interacts with the metallic surfaces, forming a metallic soap layered structure (i.e., the tribo-film) [47,61]. This lubricating film reduces friction, corresponding to lower energy losses, but can increase the wear volume because of corrosion products. Indeed, the main drawback is the wear caused by abrasive particles from fatty acid degradation [48] and by the formation of peroxides at relatively high temperatures that thin the lubricant layer [47,60]. Moreover, the peroxides can react with fatty acids, resulting in oxidation, which is enhanced by bis-allylic protons and unsaturations (in particular, by the presence of glycerol) in the chain [10,48,59,60]. However, if high performance is not required or the operating temperature range is around 20-35 °C, applying VOs directly as full lubricants is acceptable [57].

Moreno et al. [66] employed different kinds of xanthophylls, substances belonging to the carotenoid family, as antioxidant additives in castor oil at a molal concentration of 0.001, obtaining a lubricant film thickness in boundary lubrication reduced by up to 30%, and thus an increase in friction of up to 25% (with the zeaxanthin additive), but a wear reduction of up to 42% (with the astaxanthin additive) with respect to pure castor oil. Reducing the concentration of the best-performing tested additive, i.e., astaxanthin, the wear was reduced to 50%. Nagendramma et al. [67] added 2 wt.% of ionic liquids as green additives to a polyol ester-based lubricant. The additives were derived from aspartic acid and glutamic acid, obtained from Acros Organics and not further purified, and showed improved friction and wear behavior: The COF decreased by 48% and the wear scar diameter by almost 31%. Room-temperature ionic liquids were studied by Reeves et al. [68]. In particular, they studied ionic liquids based on imidazolium and phosphonium as additives in avocado oil, resulting in a decrease in friction and wear with a negative correlation coefficient (R-value) between property values and the composition of the lubricant mixture (-0.982 and -0.991, respectively). Indeed, these tribological values decreased with increasing ionic liquid content, by up to 69% for the COF and 73% for the wear volume.

Nanoparticles are lubricant additives that can induce rolling, mending, and polishing effects or create a protective film [69]. Cortes et al.
[62] employed SiO2 and TiO2 in sunflower oil, forming a tribo-film and attaining a high reduction of the COF of 78% and 94% and of the volume loss of 74% and 70%, respectively; in another study [69], the same authors used SiO2 and CuO in coconut oil, obtaining a decrease in the COF of almost 93% with both oxides at the optimal concentrations. These processes generally occur at moderate temperature with high conversion, few by-products, and low greenhouse gas (GHG) emissions [24]. Therefore, they are considered green alternatives to the traditional additives, e.g., dibutyl phthalate (DBP), tricresyl phosphate, zinc dialkyl dithiophosphate, and molybdenum dialkyl dithiocarbamate (MoDTC), which are polluting and contribute to global warming [61]. Furthermore, advanced biotechnological methods enable the genetic development of oils that are already stabilized [47], avoiding the addition of chemicals. Another, less sustainable, alternative approach is to create a blend with conventional lubricants, such as MOs or synthetic oils (SOs) (<20%) [46,50].

Vegetable Oils

Vegetable oils derive from seeds, fruits of plants, and residues from agriculture [47]. They are typically biodegradable (depending on the chemical modification carried out), non-toxic, and renewable. The most used VOs are extracted from edible crops, such as castor beans, coconut, corn, moringa, olive, palm, rapeseed, rice husks or bran, sesame, soybean, and sunflower [47,57,59]. Thus, their main drawback is the land use needed to cultivate edible sources, which competes with the food chain and is linked to possible deforestation and GHG emissions due to land-use changes [49,56,57,60]. Some VOs are drawn from inedible crops: cottonseed, jatropha, jojoba, neem, and nyamplung [47,57,59]. Kazeem et al. [70] analyzed two little-known oils, watermelon and jatropha, as potential green cutting fluids through a design-of-experiments approach and variance analysis, outlining optimized process parameters, since they obtained a high degree of accuracy without linear or interaction effects. A potential food-source alternative was proposed by Liu et al. [71]: basil seed gel, based on water (98%) and mucilage, possibly with added ethanol. The fluid showed non-Newtonian characteristics and static ultra-low friction.

Whether VOs are edible or not, they are mainly composed of glycerol and fatty acids, which are esterified to obtain boundary or hydrodynamic lubricants [48,59]. They can be combined and mixed, as Aisyah et al. [72] did, using olive and sunflower oils as cutting oils in the pre-treatment of a jatropha-based lubricant, increasing the viscosity and thus optimizing wear and lubrication. In addition, vegetable residual oil can be reconverted into base oil. In this regard, Nagendramma et al. [73] formulated a performing grease using lithium soap, additives, and jatropha residual oil, which is rich in free fatty acids (50-85%), obtaining a wear scar diameter of 0.24 mm.
Biomass-Source Oils and Fats

Natural lubricants can also be formulated from biomasses: cellulose, straw, waste-cooking fats or oils, sugar [45,49,59], and oleaginous microorganisms, i.e., algae, bacteria, fungi, yeasts [57,60], and molds. Lube oils are extracted from organisms that can grow in different environments: fresh, marine, or even wastewater, or an artificial medium [57]. They can be cultivated in a controlled environment [60], opening the possibility of large-scale production. They synthesize lipids (i.e., triacylglycerols (TAGs), diacylglycerols, monoacylglycerols, and sterols) stored in liquid droplets (up to 60% of their dry mass). These organisms contain ester functional groups that provide lubricity and initially decrease the friction coefficient until a steady-state condition is reached. Patel et al. [60], in their study on single-cell (or microbial) oil, analyzed microalgae, which showed a lower COF than other microbes thanks to the lower number of unsaturations in the carbon chain.

Moreover, microbe-based lubricants could valorize biomass refining by recovering different kinds of non-edible products [57]. Paul et al. [74] characterized the physical, chemical, and rheological properties of epoxidized waste cooking oil and its methyl esters, which have an optimal viscosity index; for this reason, these oils may be adopted as green lube oils from waste products. Nevertheless, the biomass produced by most of the abovementioned methods is currently not enough to sustain commercial production [57].

Water-Based Lubricants

Water-based lubricants (WBLs) are typically obtained by mixing water with glycerol. They generally show a low COF, high thermal conductivity, fire protection, and safety for operators [75]. WBLs need to be mixed with additives, e.g., solid nanoparticles (TiO2, SiO2, graphene), ionic liquids (ILs), or bio-based oils, to reduce friction, wear, and corrosion and to improve viscosity, wettability, and fire protection [76,77]. Sagraloff et al. [77], in a preliminary study, investigated the sliding wear and scuffing of gears lubricated with WBLs and polymers extracted from plants, resulting in a higher scuffing load capacity (by 2-3 failure load stages) and a lower COF, but poor wear resistance due to the insufficient thickness of the lubricant layer between the sliding surfaces. However, this research is still challenged to find modifications that maintain the advantages of WBLs while closing these gaps.

Grace et al. [78] studied the particular case of ILs in coffee bean oil from spent coffee grounds in steel-steel contacts, resulting in optimized wettability, wear, friction, and plastic deformation, comparable to traditional commercial oils. Hasnul et al. [79] found that the combination of graphene nanoplatelets, usually unstable in suspension, and an IL additive in bio-lubricants enhances the COF reduction compared to the use of a single additive, while the other physical properties remain almost unchanged. WBLs are considered sustainable and non-toxic whenever they contain only green additives, which have to dissolve easily in water or soil without risk of contamination in order to be considered safer options. Moreover, they are potentially recyclable and reusable, extending the lifetime of lubricants.
Natural lubricants are mainly based on oil (VOs, biomass oils) or water and are supplemented with natural or synthetic additives to offset their poor thermal and oxidation stability. They are non-toxic and have a closed carbon cycle. Thus, they are considered long-term sustainable alternatives to petroleum-based oils, even if there is still no mandatory or regulatory legislation and their current cost is not competitive on the market.

Biodegradable Lubrication

Biodegradable lubricants (BLs) derive from natural plant or animal renewable raw materials or from recycled sources. They are classified as natural if they originate directly from plants or animals, and synthetic if they are chemically modified or undergo different catalytic processes [49,60]. Indeed, biodegradable lubricants are strictly connected to natural lubricants, since they have similar compositions, characteristics, and preparation methods. They are mainly composed of saturated esters (derived from fatty acids), glycols, or bio-olefins [45,49,57]; produced by cationic or free-radical condensation or ring-opening polymerization of fatty esters, possibly with nanofillers [59]; and modified through transesterification, epoxidation, or hydrogenation [49]. As with natural lubricants, and a fortiori for biodegradable ones, production has not yet outgrown that of MOs. They are needed to limit or, wherever possible, avoid environmental contamination, which is estimated to involve 55% of total lubricants every year [48]. Moreover, only 25% of petroleum-based oils degrade [49,80], and 30-50% of conventional lubricants are dispersed into the environment during their life [81].

Degradation is not the only factor to be considered: Economic performance is one of the main weaknesses of green lubricants; they degrade faster than traditional MOs, requiring frequent changes or refills. BLs also impact land use, becoming a possible trigger for deforestation. It is necessary to compare their entire life cycle, following a cradle-to-grave method (i.e., from raw material to end-of-life), in order to evaluate their environmental impact and energy use. Bart et al. [81] collected different comparative LCAs. As often occurs, it is difficult to compare LCAs with different goals and scopes, functional units, life-cycle stage tools, and databases. The presented analyses mainly concerned rapeseed oil, soybean oil, and traditional industrial mineral oils such as Variocut G500, Castrol, and trimethylolpropane trioleate. The evaluated impact categories were global warming potential (GWP), ozone depletion potential, acidification potential (AP), eutrophication potential (EP), and cumulative energy demand (CED) [81]. The study showed that VOs have a low CED and are usually non-toxic and non-carcinogenic; moreover, if they are cultivated in rotational crops, they require fewer nitrogen fertilizers, reducing GHG emissions, AP, and EP [81]. However, the specific case of rapeseed oil highlighted that it could be considered sustainable only if the acidification and ozone depletion impact categories are excluded from the LCA [81]. Athaley et al. [82] found that the production of fatty acids from furfural biomass is the main culprit of fossil depletion and soil occupation. Nevertheless, MOs' contribution to GWP and ozone depletion is generally higher than that of BLs, and their solid waste and volatile species are typically more disruptive [81].
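As a minimal illustration of how one of these LCA impact categories is computed, the sketch below aggregates an emission inventory into a GWP score. The inventory values and the lubricant they describe are invented for illustration and do not come from the cited studies; the characterization factors are indicative IPCC AR5 GWP100 values.

```python
# Minimal sketch of LCA impact characterization for one category (GWP).
# The inventory is hypothetical; the characterization factors are
# indicative IPCC AR5 100-year values in kg CO2-eq per kg of gas.
GWP100 = {"CO2": 1.0, "CH4": 28.0, "N2O": 265.0}

# Hypothetical cradle-to-grave inventory for 1 kg of a bio-based lubricant.
inventory_kg = {"CO2": 2.1, "CH4": 0.004, "N2O": 0.0006}

gwp = sum(mass * GWP100[gas] for gas, mass in inventory_kg.items())
print(f"GWP100: {gwp:.2f} kg CO2-eq per kg of lubricant")
```

The other categories (AP, EP, etc.) are computed the same way, each with its own set of characterization factors, which is why comparing LCAs built on different databases and functional units is difficult.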
Standards and Tests for Bio-Lubricants

There are many standards and testing regulations regarding the criteria that define bio-based products and, in particular, bio-lubricants [45,46,49,56], including OECD 311 (anaerobic biodegradability) [89]. According to these standards and regulations, ready biodegradability is achieved at ≥70% removal of dissolved organic carbon [51,87] or ≥60% oxygen depletion [51,87,88]. It is worth emphasizing the difference compared to environmentally acceptable lubricants (EALs), which concern human and environmental toxicology [90]. Bio-lubricants can be considered EALs if they do not exceed the ecotoxicity limit of 2 g/kg as the lethal dose for 50% of the population [49]. Acceptable degradation rates and minimal toxicity for the terrestrial and marine environments, combined with homogeneity of the product and compatibility with machines, are the challenges to making bio-lubricants an attractive alternative in industry [49,57], especially in developing countries, where agriculture is the main source of income [47]. In Europe, bio-lubricants cover only a few percent (3-3.5%) of the market, since they are recommended by the authorities and regulations but are still not mandatory or promoted [45].

Biodegradable lubricants are green alternatives to petroleum-based ones if they derive from waste biomasses or sustainable agriculture. Thus, they avoid habitat destruction and competition with the food chain and reduce GHG emissions (related to land conversion). Their LCAs should be examined both globally and locally to consider the significant effects on the environment and the possible strategies and regulations to decrease their impacts.

Sustainable Chemistry and Green Engineering Principles

The GTPs were inspired by the 12 principles of sustainable chemistry and the 12 principles of green engineering. Green chemistry was defined by Anastas and Warner [91] in 1998 as "the utilization of a set of principles that reduces or eliminates the use or generation of hazardous substances in the design, manufacture and application of chemical products" [91]. It became the pillar for further green scientific and technical disciplines; indeed, it was consulted among other documents to define the 12 principles of green engineering during the 2003 "Green Engineering: Defining the Principles" conference in Florida [92], previously drafted by Allen and Shonnard (2001) [93] and Anastas and Zimmerman (2003) [94]. The two green doctrines present some basic contact points that can be recognized in the GTPs, e.g., the prevention of waste production, the reduction of hazardous or toxic material employment, and the maximization of efficiency.

The green chemistry principles (GCPs) and green engineering principles (GEPs) are listed in Table 1, and their hypothesized influence on the GTPs is proposed in Figure 4. The green tribology principles fail to make explicit some concepts that are embedded in the GCPs and GEPs. This sixth principle allows the inclusion of concepts like material toxicity, human health, the security of production processes and of the use phase, the avoidance of unnecessary manufacturing and transformation, and efficiency in material, energy, and time consumption. Some principles are topic-specific, like the references to catalytic reagents, chemical products, and chemical processes; nonetheless, their underlying fundamental ideas are applicable in the tribology discipline.
Green tribology principles added three new aspects: a biomimetic approach (n. 7), to develop and engineer new strategies that mimic living nature; surface texturing (n. 8), which is specific to the tribology field; and sustainable energy applications (n. 12), which claims this purpose as the most important one to focus on to reach sustainable development.

Table 1. The 12 green chemistry principles (GCPs) [91] and the 12 green engineering principles (GEPs) * [94].

Green chemistry principles (GCPs):
1. It is better to prevent waste than to treat or clean up waste after it is formed.
2. Synthetic methods should be designed to maximize the incorporation of all materials used in the process into the final product.
3. Wherever practicable, synthetic methodologies should be designed to use and generate substances that possess little or no toxicity to human health and the environment.
4. Chemical products should be designed to preserve efficacy of function while reducing toxicity.
5. The use of auxiliary substances should be made unnecessary wherever possible and innocuous when used.
6. Energy requirements should be recognized for their environmental and economic impacts and should be minimized. Synthetic methods should be conducted at ambient temperature and pressure.
7. A raw material or feedstock should be renewable rather than depleting wherever technically and economically practicable.
8. Unnecessary derivatization should be avoided whenever possible.
9. Catalytic reagents (as selective as possible) are superior to stoichiometric reagents.
10. Chemical products should be designed so that at the end of their function they do not persist in the environment and break down into innocuous degradation products.
11. Analytical methodologies need to be further developed to allow for real-time, in-process monitoring and control prior to the formation of hazardous substances.
12. Substances and the form of a substance used in a chemical process should be chosen so as to minimize the potential for chemical accidents, including releases, explosions, and fires.

Green engineering principles (GEPs):
1. Designers need to strive to ensure that all material and energy inputs and outputs are as inherently nonhazardous as possible.
2. It is better to prevent waste than to treat or clean up waste after it is formed.
3. Separation and purification operations should be designed to minimize energy consumption and material use.
4. Products, processes, and systems should be designed to maximize mass, energy, space, and time efficiency.
5. Products, processes, and systems should be "output pulled" rather than "input pushed" through the use of energy and materials.
6. Embedded entropy and complexity must be viewed as an investment when making design choices on recycle, reuse, or beneficial disposition.
7. Targeted durability, not immortality, should be a design goal.
8. Design for unnecessary capacity or capability solutions should be considered a design flaw.
9. Material diversity in multicomponent products should be minimized to promote disassembly and value retention.
10. Design of products, processes, and systems must include integration and interconnectivity with available energy and material flows.
11. Products, processes, and systems should be designed for performance in a commercial "afterlife."
12. Material and energy inputs should be renewable rather than depleting.
* As proposed by Anastas and Zimmerman [94] in March 2003. In May of the same year, the participants discussed and modified the principles during the Green Engineering: Defining the Principles conference, reducing their number to nine [92].

Green tribology principles fall within the green path previously traced by the green chemistry and green engineering principles. Besides providing sustainable strategies for tribological systems, their added value is the indication of a crucial field where tribology can strongly make a difference, that is, renewable energy production.

Biomimetic Approach

The attribute "biomimetic" refers to the biological approach to engineering, distinguished from "biophysics" by Schmitt in the 1950s [95]; it denotes taking inspiration from nature to achieve engineered lubrication systems and represents a chance to improve overall efficiency. Many naturally occurring examples can be artificially replicated. The main ones concerning tribology in the human body are the skin sebum and the synovial fluid in joints, which provides low friction to cartilage [12,96]. Indeed, biomedical engineering is one of the main applications of biomimetic lubricants (for example, in arthrosis and osteoporosis treatments); the same mechanisms can be extended to other fields, e.g., cartilage-lubrication-inspired seal lips in the automotive sector [97]. In general, the aim is to recreate the roughness and surface finishing of natural elements that, through micro-textures, reduce friction, making the approach potentially applicable in every tribological system. The most used mimicking technique is laser surface texturing (LST) (through ablation, cladding, and shock processing [98]), since it is highly efficient, accurate, and economical [99-101], and it can be applied from the nano to the macro scale [1]. Wei et al. [102] employed LST on nickel-based coatings applied to plunger pumps to create specific, uniformly distributed round dimples. The friction coefficient improved from 0.3 to 0.18 with oil lubrication and from 0.5 to 0.3 with water lubrication with respect to the non-textured surface, as each dimple acts as a micro-bearing enhancing the hydrodynamic effect. Mechanical machining, micro-electro-mechanical systems, electrodeposition, laser hardening, or ablation can be used as well, and they can be combined in hybrid processes (e.g., vibration-assisted or mixed machining) [1,98,99]. Micro-texture has a synergic triple action: It acts as a reservoir for liquid or solid lubricants; as a collector of wear particles, limiting abrasion; and as a promoter of a hydrodynamic film (in case of relative motion between the contact surfaces) [98,99,103]. A more detailed analysis will be presented in Section 2.8, which investigates innovative surface-texturing strategies in depth.

Biomimetic Texturing Optimization

Bio-inspired surface micro-texturing can be optimized through mathematical, finite element, and computational fluid dynamics (CFD) methods based on the Navier-Stokes and Reynolds equations for hydrodynamic pressure [1,98,101,103,104]. Paggi et al. [105] modeled the hydrodynamic lubrication of a sliding bearing with complex 3D roughness using software combining CFD and smoothed particle hydrodynamics. This mesh-less method accurately simulated the fluid flow and predicted the load-bearing capacity, the speed, and the pressure field, also considering inertial effects. Furthermore, a genetic algorithm was applied to simulate the evolution of the tribological behavior of the system, optimizing the micro-texture.
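For reference, the equation underlying these hydrodynamic optimizations is the Reynolds equation; in its steady, isothermal, incompressible form for a film of thickness $h(x,y)$, with the lower surface sliding at speed $U$ along $x$, it reads

$$\frac{\partial}{\partial x}\!\left(h^{3}\frac{\partial p}{\partial x}\right)+\frac{\partial}{\partial y}\!\left(h^{3}\frac{\partial p}{\partial y}\right)=6\mu U\,\frac{\partial h}{\partial x}$$

where $p$ is the hydrodynamic pressure and $\mu$ the lubricant viscosity. Texture optimization then amounts to shaping $h(x,y)$ so that the resulting pressure field maximizes load support while minimizing shear losses.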
Zhang et al. [106] used CFD on bullet-, fish-, and circular-shaped textures under unidirectional sliding, resulting in friction reduction thanks to the uplifting hydrodynamic pressure around the geometries, especially the bullet-shaped one, which showed a COF below 0.14 at pressures below 0.25 MPa and sliding velocities above 125 mm/s. The optimization methods typically describe regular surfaces, which are usually not transferable to real cases [98]; research in the last decades has been devoted to filling this gap. Nevertheless, it is possible to evaluate with good approximation the shear-thinning effect (typical of the hydrodynamic regime), the film thickness, the COF, and the energy use associated with the artificial texture, both in dry conditions and in combination with a lubricant layer of natural oils or greases, ceramic composites, or solid lubricants [96,98]. In particular, Huang et al. [107] found, through a repeated two-factor analysis of variance with F-values at a 0.01 significance level, that the combination of a hexagonal surface texture inspired by tree frogs and a SnAgCu-TiC infiltrated solid lubricant mimicking their mucus best improved the tribological behavior of the system on steel (AISI 4140). The optimal combination was SnAgCu with nano-TiC at 4 wt.%, which reduced the average COF by 80.55% compared to bare steel.

Inspiration from Nature

The hydrophobic surfaces of the leaves of lotus, Salvinia, or some carnivorous plants, e.g., Nepenthes [97,108,109], are some examples from the plant world that can inspire tribology. Roughness is exploited to obtain superhydrophobic or even self-slippery liquid-infused porous surfaces (SLIPS) [108], which provide long-term usage. SLIPS can be obtained by infusing lubricant oils (such as paraffin) as droplets onto the surface, replacing the air inside the porosities [108]. Yang et al. [109] applied perfluoropolyether-oil SLIPS to anodic aluminum oxide (5086 AAO) by vacuum permeation, obtaining a non-wetting layer, due to capillary forces retaining the lubricant in the nanopores, and antifouling properties.

The biomimetic approach can also be inspired by animals: the skin of fishes, sharks, snakes, earthworms, armadillos, and frogs [99,104,107]. These are usually characterized by biomimetic units, mainly with hexagonal grooves, that can be recreated on one of the mating surfaces, choosing an adequate orientation and distribution to obtain good tribological properties [99]. It was demonstrated by Zhang et al. [99] that applying the texture to both sliding surfaces worsens the tribological contact. Lu et al. [104] showed that shark skin can be artificially replicated as elliptical dimples or rhomboid cells, since it improves water resistance and, more generally, fluid resistance, friction, and shear stress. It is possible to modify the groove height to enlarge the gap and, in turn, favor the formation of a thick hydrodynamic lubricating film [104]. Huang et al. [107] applied multi-scale micro and nano diamond-like textures to steel (AISI 4140) rotation bearings, recreating the fluid regulation of tree frog toes. Zeng et al. [103] reproduced earthworm dimples by LST on medium-carbon steel (C = 0.45 wt.%) and GCr15, obtaining a phase-transition zone between the martensitic melted zone and the pearlitic-ferritic substrate that, according also to the finite element method, improved the tribological properties.
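As a concrete illustration of the two-factor analysis of variance used in texturing studies such as that of Huang et al. [107] above, the following sketch tests whether a texture factor and a solid-lubricant content factor significantly affect the measured COF. The factor levels, cell means, and repeat counts are all synthetic and invented for illustration; they are not data from the cited work.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)

# Synthetic COF measurements for a two-factor design:
# factor A: texture (untextured vs. hexagonal), factor B: TiC content (wt.%).
rows = []
for texture in ["none", "hexagonal"]:
    for tic in [0, 2, 4]:
        # invented cell means: texture and TiC content both lower the COF
        base = 0.60 - (0.20 if texture == "hexagonal" else 0.0) - 0.03 * tic
        for _ in range(5):  # five repeats per cell
            rows.append({"texture": texture, "tic": tic,
                         "cof": base + rng.normal(0.0, 0.02)})
df = pd.DataFrame(rows)

# Two-way ANOVA with interaction: reports F- and p-values for each factor
# and for the texture x content interaction, as in the cited methodology.
model = smf.ols("cof ~ C(texture) * C(tic)", data=df).fit()
print(anova_lm(model, typ=2))
```

A significant interaction term would indicate that the benefit of the solid lubricant depends on whether the surface is textured, which is the kind of synergy these studies look for.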
Replication of surfaces is not the only source of inspiration: natural lubrication, e.g., aqueous protein solutions, is a different possible biomimetic approach. Sukumaran et al. [65] used a non-Newtonian solution based on water and bovine serum albumin as an additive to improve water lubricity in combination with a vegetable oil (rice bran oil). The COF was reduced from 0.09 (rice bran oil only) to 0.073 and the wear scar diameter from 0.565 mm to 0.472 mm by adding the optimal concentration of 0.4 mg/mL of bovine serum albumin, whose proteins contain aromatic amino acids and create a boundary lubricant layer.

The biomimetic approach for lubrication is inspired by natural elements, such as human cartilage, hydrophobic plants, and animal skins. In particular, it replicates micro-textures with different techniques (mostly LST) to recreate the desired beneficial features, i.e., oil reservoir, particle collector, and tribo-film precursor, in tribosystems. The biomimetic approach is increasingly analyzed and investigated, and it can be further optimized through the abovementioned finite element methodology, computational fluid dynamics, and genetic algorithms.

Surface Texturing

The control of surface properties such as roughness represents an additional strategy for making tribological systems more efficient and thus more environmentally friendly. Surface texturing is a relevant method for tailoring surface properties [10]. Surface texturing techniques and solid lubricants have been shown to efficiently reduce friction and wear, reducing or avoiding the use of external liquid lubricants [110]. In particular, changes in surface roughness and topography guarantee improvements in the tribological behavior of the involved components [111]. Among the most intuitive approaches to increasing the efficiency of moving mechanical parts and reducing friction and wear, external lubricants are the most widely and traditionally employed. Nonetheless, it is also possible to improve tribological performance through more innovative methodologies, such as surface texturing [112].

Surface texture treatments generally modify the contact area between the sliding parts, lowering friction and improving efficiency and service life [113]. One of the most widely used techniques for creating a surface texture is laser surface texturing. This technique creates dimples that can have different functions depending on the working conditions [112]. Indeed, if the system is subject to a local lack of lubricant, the textured material can store quantities of lubricant and then release them during the use phase [114,115]. In the dry regime, for example, the texture of a surface can trap wear debris; in this way, the efficiency can be increased, depending on the texture geometry, by decreasing friction [116]. In lubricated conditions, the surface texture can increase the hydrodynamic pressure, reducing friction and wear [115,117]. However, the effects of surface texturing are highly dependent on operating conditions, such as the sliding speed or the contact pressure [114].
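To make the hydrodynamic effect of a single dimple tangible, the sketch below numerically solves the one-dimensional form of the Reynolds equation given earlier for a film containing one Gaussian dimple. All parameter values (viscosity, speed, film thickness, dimple depth) are illustrative and not taken from any cited study, and cavitation is neglected, so sub-ambient pressures appear on the diverging side of the dimple.

```python
import numpy as np

# 1D Reynolds equation for a single dimple (illustrative parameters only):
#   d/dx( h^3 dp/dx ) = 6 * mu * U * dh/dx,   p = 0 at both ends.
mu = 0.05        # dynamic viscosity [Pa*s] (assumed)
U = 1.0          # sliding speed [m/s] (assumed)
L = 1.0e-3       # cell length [m] (assumed)
h0 = 5.0e-6      # nominal film thickness [m] (assumed)
depth = 5.0e-6   # dimple depth [m] (assumed)
n = 501          # grid points

x = np.linspace(0.0, L, n)
dx = x[1] - x[0]

# Film thickness: nominal gap plus a Gaussian dimple at the cell center.
h = h0 + depth * np.exp(-(((x - L / 2) / (L / 10)) ** 2))

# Finite differences with h^3 evaluated at cell faces -> linear system A p = b.
h3 = h ** 3
h3_face = 0.5 * (h3[:-1] + h3[1:])

A = np.zeros((n, n))
b = np.zeros(n)
for i in range(1, n - 1):
    A[i, i - 1] = h3_face[i - 1] / dx ** 2
    A[i, i + 1] = h3_face[i] / dx ** 2
    A[i, i] = -(h3_face[i - 1] + h3_face[i]) / dx ** 2
    b[i] = 6.0 * mu * U * (h[i + 1] - h[i - 1]) / (2.0 * dx)
A[0, 0] = 1.0    # p = 0 (gauge pressure) at the inlet
A[-1, -1] = 1.0  # p = 0 (gauge pressure) at the outlet

p = np.linalg.solve(A, b)
print(f"peak hydrodynamic pressure: {p.max() / 1e3:.1f} kPa")
print(f"net load support per unit width: {np.sum(p) * dx:.3e} N/m")
```

Varying the dimple depth and width in such a sketch shows why texture geometry and location matter: The pressure peak on the converging side grows with the gap gradient, which is the micro-bearing effect invoked by the studies above.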
To further enhance the tribological performance of coated tribosystems, it is possible to apply surface textures to coatings. There are two general approaches to surface-coating treatments. The first aims to reduce the shear resistance, thus facilitating sliding between the two involved surfaces, for example, by selecting a soft coating to protect a hard substrate or by using a hard coating on a soft substrate to reduce the contact area between the mechanical parts and, consequently, the friction. The second strategy is the development of solid lubricants, which are materials with specific chemical and physical properties that allow the reduction of friction and wear without using external lubricants [110,118], as previously discussed in Section 2.3.2. In particular, the interfacial sliding between the coating and the transfer film is the main mechanism behind the lubricating action of solid lubricants. The key parameters in solid lubricant design are the contact area between the moving parts, the loading, how the latter affects the microstructural changes of the surface regions in contact, and the film formation between the surfaces [110,118]. Despite the great advantages of these innovative techniques, some limitations have been observed when the system is subjected to high loads and low sliding speeds between the parts [114]. Under these operating conditions, surface treatments can degrade and lead to the formation of debris caused by wear between the components.

Consequently, the working conditions and, in general, the tribological properties worsen [113]. Multiple techniques can be combined to overcome these limitations. Blending surface texturing with surface coatings could solve problems related to harsh working conditions. For applications requiring hard and wear-resistant coatings, textured coatings act on the contact area between the parts and reduce friction, whereas for long-operating-life applications, coatings can be applied to textured surfaces [110,113]. High temperatures and high operating pressures can be challenging working conditions that strain more traditional lubricants [119]. However, the effects of surface texture can also change the heat transfer [120], since better heat removal from the contact zone improves the wear resistance of the components [121]. Depending on the application, surface texture can also change the stress distribution and the wettability of the surface [110,122,123].
There are numerous techniques for surface texturing, such as laser surface texturing, micro-ball end milling, micro-casting, and electrochemical machining. One of the most advanced techniques, which allows the creation of dimples of micrometric dimensions with a high degree of accuracy and precision, is laser surface texturing. By adjusting the process parameters, it is possible to control the shape and optimize the geometric factors of the texture itself. However, this technique is difficult to apply on a large scale due to the large amount of energy required for its operation. A technique that has recently proven capable of producing micrometric textures while avoiding such a large energy expenditure is micro-ball end milling, with which micro dimples are machined; recent studies aim to improve the algorithms required for accurate surface textures. A methodology that offers the possibility of limiting further machining and material waste is micro-casting; however, research on this technology is still limited. Finally, electrochemical machining is a technique that uses anodic dissolution during an electrochemical process to remove material. Electrochemical processes guarantee high efficiency compared to other processes, low production costs, and no heating of the surface [123]. Depending on the type of surface texture, the quality required for its implementation, and the size and scale of the work, it is possible to choose the most suitable technique.

The study by Voevodin and Zabinski [124] combined surface textures with solid lubricants. In particular, a focused ultraviolet laser beam was used to create the surface texture, producing small dimples of micrometric dimensions very accurately. This texture was machined onto TiCN surfaces, and then MoS2 and graphite-based solid lubricants were applied by burnishing and sputtering onto the treated surfaces. The properties of the untreated TiCN surface and of the laser-treated TiCN surface with solid lubricants were compared through dynamic friction and wear tests. Wet and dry tests proved the superior durability of the treated surfaces, partly thanks to the micro dimples, which acted as reservoirs and supplied the surface with lubricant.

The choice of a suitable technique for creating the surface texture is important, as are the geometric factors. The study by Matele and Pandey [125] showed how to improve the geometry of the surface texture to achieve the objectives of green tribology. The analysis of the influence of geometry on surface properties, particularly the dynamic characteristics, showed that these are affected not only by the surface texture itself but also by the location of the texture. In fact, with the correct localization and geometry of the texture, the obtained results are better than with an untreated surface. The study compared three different surface textures: square, circular, and densely distributed square. The best results were obtained with the circular texture, while the square texture gave the worst results. The data evaluation employed models and programs that provided a range of values specific to the test conditions, varying many test parameters. The study highlighted important improvements in the dynamic and hydrodynamic characteristics, which depended on the shape and location of the surface texture. Furthermore, the application of the surface texture was evaluated to maximize the contact area in the axial direction, improving the final results.
The application of textures to surfaces can be a helpful way of increasing the overall efficiency of tribological processes. Careful technique selection and the design of the texture geometry and location should be analyzed during the design of components, depending on the application, to achieve the best possible results.

Environmental Implications of Coatings

Coatings are subject to rapid deterioration, with the consequent deployment of new materials, energy consumption, and costs due to maintenance and substitution. During its lifetime, a coating may release powders and debris that are potentially toxic or hazardous to human health. The consumption and replacement of coatings generate waste that is typically discarded.

Diamond-like Carbon

More efficient coatings may extend the lifetime of components in tribological systems [124,125]. Careful use of such claddings prevents catastrophic damage to equipment and tribosystems. In the last few decades, diamond-like carbon (DLC) coatings have stirred up growing interest thanks to their attractive mechanical properties and biocompatibility [126-128], and to the excellent possibility of employing vegetable oils instead of mineral oils given their demonstrated competitive performance [129,130].

The overview presented by Love et al. [127] provided the latest updated data about the use of DLC coatings for biological purposes. DLC includes a wide range of amorphous carbon coatings that differ in the hybridization of carbon, namely sp2 and sp3, and in the hydrogen content. The difference in these values determines different properties, particularly the friction coefficient, wear rate, and debris production. The processing of DLC coatings induces high internal stresses that may cause harmful delamination and failure. Dopants like silver, nitrogen, and fluorine yielded beneficial results, reducing the internal stresses with no drawbacks and thus overcoming this problem. The antibacterial property of DLC represents an additional interesting feature of this material in biomedical applications. The processing and application methodologies should be further improved; nevertheless, DLC coatings represent an interesting and viable possibility for developing durable and non-hazardous coatings to improve the tribological performance of implants.

As previously mentioned, DLC allows the profitable use of vegetable oils, which, as discussed in Section 2.4 regarding natural lubrication, represent an environmentally friendly alternative to mineral oils. The lubricating properties of these oils are well known, but their poor performance in traditional tribosystems has limited their employment. Different DLC coatings were tested in the study by Mahmud et al. [129], highlighting a limitation in the operative temperature: above 150 °C, the wear rate drastically increased. A lower friction coefficient was recorded with increasing temperature due to the promotion of the graphitization of the coating, which produced a graphitic layer between the sliding surfaces even under lubricated conditions. The study focused on the DLC structure and morphology, and the role of the vegetable oil remained unclear. A more detailed analysis of a vegetable oil, namely palm trimethylolpropane ester, was carried out by Zahid et al.
[131]. Palm trimethylolpropane ester exhibited better performance than the polyalphaolefin reference oil: a higher viscosity index, indicating better thermal stability, a higher load-carrying capacity, and better friction performance, thanks to its unsaturated and polar structure. The graphitization of DLC was observed when additive-free lubricants were employed.

Self-Healing Coatings

A completely different strategy to enhance the durability of coatings is the development of self-healing materials. Li et al. [132] developed a multi-functional coating made of microcapsules containing tung oil incorporated into an epoxy matrix. Through the release of tung oil, the broken microcapsules decreased the wear rate and friction coefficient thanks to tribo-film formation and the self-healing property. Tezel et al. [133] adopted a similar strategy, employing capsules containing epoxy resin as healing agents and confirming the self-healing behavior through micro-cracks. Cao et al. [134] exploited the intrinsic self-healing properties of tungsten disulfide, arising from its anisotropic trigonal prismatic structure. Tungsten disulfide can be employed in liquid lubricants, coatings, and composite materials. The combination of scanning electron microscopy and a micro-tribotester highlighted the healing process over different cycles after a crack was induced. Cycle after cycle, the ductile nature of tungsten disulfide allowed the lubricant to fill the damage and reduce the friction coefficient down to a superlubricity state (i.e., lower than 0.01). Thermal activation of the self-healing behavior was observed by Zhang et al. [135] in an epoxy coating. The study found the complete recovery of a micro-scratch after 20 min at 80 °C. The analyzed mechanism relies on the capability of the 2-aminophenyl disulfide molecules in the coating to break and re-form disulfide bridges under specific conditions, in this case increasing temperature, leading to radical exchanges and the rearrangement of sulfur bonds.

The concept of the circular economy is spilling over into different fields and sectors; it contributes to waste reduction and decreases production impacts while extending the life of processed materials in a loop system. This strategy was analyzed by Bendikiene et al. [136] to produce hard facings made of chips and turnings from metal industry waste. Hardness measurements for different compositions and thermal treatments were analyzed and compared, demonstrating the feasibility of using metal scrap of steel, tungsten carbide, and iron to produce hard and wear-resistant coatings.

Coatings are an essential component in many tribosystems: Their presence contributes to reducing energy consumption, wear, and the use of lubricants, which are the first three principles of this list. Research strategies have focused on improving coating behavior, increasing hardness, reducing volatility, and promoting stability over a broader temperature range. The accurate design of coatings and the selection of methods to extend their lifetime are effective strategies to increase the sustainability of these elements. Circular economy methodologies should be further promoted and improved because they may provide unexpected, stunning results.
Design for Degradation

As mentioned before, Life Cycle Assessment (LCA) is a methodology to analyze the environmental impact of a given system from a qualitative and quantitative point of view [137]. It is fundamental to foresee an adequate end-of-life during the design phase to limit environmental impacts. When possible, promoting a circular economy should be encouraged, keeping the material value in circulation as long as possible while avoiding discarding or destruction, as in the closed-loop supply chain concept, which optimizes efficiency and sustainability [138]. The European Union's directive aims to minimize the environmental impact of products at end-of-life [139]. Regarding waste oil, the regulations govern the best techniques for waste oil management, collection, and recycling [140]. In particular, several points contribute to the protection, prevention, and improvement of environmental quality and energy conservation.

Mineral oil-based lubricants (MOBL) are often employed due to their low cost, availability, and good overall properties. MOBL can be divided according to the dominant structure in the crude oil: paraffinic, aromatic, or naphthenic [141]. Paraffin-based oils are among the most commonly used; they present good viscosity-temperature properties that make them very attractive for engine lubricants [142]. However, synthetic oils are being developed more and more to meet the market's growing demands, where strict working conditions are required for machinery. Their defined molecular structures lend chemical and mechanical properties that are generally superior to those of the more traditional mineral oils [143,144]. Given the chemical nature of lubricants, the uncontrolled disposal of waste oils into the environment represents a serious problem for ecosystems, particularly water, soil, and air.

Water

Water can be polluted due to illegal dumping of waste oil or stormwater dragging contaminants from streets into waterways. The study by Vazquez and Duhalt [145] identified vehicle oils as the principal source of hydrocarbon pollution of waterways. In particular, contaminants such as certain metals can inhibit microorganisms and act as mutagens in the aquatic environment [146,147]. The rate of oil degradation in the environment is strongly influenced by environmental conditions and by the complexity of the lubricant chains themselves. Generally, those with short chains are more easily degraded, whereas those with branching or aromatic groups are more resistant [148].

Soil

Waste oil can contaminate soil, generally as a result of engine leaks and unpermitted discharge. These contaminations can have devastating consequences, e.g., high concentrations of toxic metals can inhibit normal microbiological activities, and direct penetration into the soil may affect the entire food chain [149,150].

Air

Waste oil can be used as a fuel due to its high combustion heat, comparable to petroleum-derived fuels, and lower price [151]; examples of uses include burners, incinerators, and rotary cement kilns. However, due to their degradation during service, waste lubricants generally contain a high content of metals and other substances that can be released into the atmosphere during combustion [152]. Therefore, it is crucial to adopt filtering systems to avoid these emissions into the environment.
Considering LCA studies for waste oil treatment, several recovery options have been proposed. Through the environmental impact assessment of waste oils, the study by Boughton and Horvath [153] showed that zinc and lead emissions are the main contributors to terrestrial and human toxicity impacts. In particular, the authors demonstrated that the quantities of these heavy metals emitted into the environment are lower in the case of refining and distillation; therefore, from an environmental point of view, treating waste oils with these methods should be supported instead of using them as fuel. Another possible strategy to reduce the end-of-life impacts of spent lubricants is to recycle and refine them into new oils. Removing the contaminants and additives present in the old oil allows the production of lubricating oils with properties similar to those of the base oils [154].

The study by Kanokkantapong et al. [155] proposed a selection of technologies to handle exhausted lubricating oils, focusing on the environmental point of view. The authors analyzed the environmental impacts of the entire life cycle of waste oil, considering six different scenarios. Four scenarios aimed to generate energy using exhausted lubricants in cement kilns, small boilers, vaporizing burner boilers, and atomizing burner boilers. The other two scenarios were related to acid clay and solvent extraction. The study focused on four parameters related to environmental impact: acidification potential, global warming potential, heavy metals, and eutrophication potential. Concerning global warming potential and heavy metals, cement kilns are the technology with the best results due to the high temperatures, whereas the acid clay process offers the worst results regarding acidification potential.

Economic and social issues have to be considered along with the environmental aspects. Market uses, energy consumption, technological development, and energy production are possible objective criteria that may provide an unbiased assessment of the various recycling methods for waste oil and thus indicate the best technologies for recycling it, as sketched below.

Thanks to new technologies, it is possible to reduce oil consumption and gain better control over lubricant formulation, extending the life cycle and improving the impact on the environment. Potential drawbacks that have to be further investigated are the possibility that the advanced processes required to exploit waste oil may be more energy demanding, and the strong dependence of the regenerated oil performance on the oil employed for the formulation. Therefore, it is necessary to improve the efficiency of the processes and establish ownership requirements that provide the most detailed information about the formulation of waste oil. Such a mechanism may improve collection and segregation accuracy, and waste oil regeneration targets could be met.

In conclusion, the uncontrolled disposal of waste lubricating oils has a high environmental impact; managing waste oils from a circular economy perspective could improve energy efficiency and reduce the global impacts, maximizing recycling and energy recovery. Despite the lower cost, previous research highlighted that the use of waste oil as a fuel has a high environmental impact. Therefore, the regeneration of new oils or energy recovery should be preferred and promoted.
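To make that kind of multi-criteria comparison concrete, the following is a minimal Python sketch of a weighted scoring of waste-oil handling options. All option names, criterion weights, and impact scores below are hypothetical placeholders chosen for illustration; they are not values taken from Kanokkantapong et al. [155] or any other cited study.

```python
# Hypothetical multi-criteria scoring of waste-oil handling technologies.
# Lower aggregated impact is better; all numbers are illustrative only.
import numpy as np

options = ["cement kiln", "small boiler", "acid clay", "solvent extraction"]
criteria = ["acidification", "global warming", "heavy metals", "eutrophication"]
weights = np.array([0.25, 0.25, 0.25, 0.25])   # equal weighting (assumption)

# Impact scores per option (rows) and criterion (columns); lower is better.
impacts = np.array([
    [0.8, 0.2, 0.3, 0.5],   # cement kiln: good on GWP/metals, poor on acidification
    [0.5, 0.6, 0.6, 0.5],   # small boiler
    [0.9, 0.7, 0.8, 0.6],   # acid clay: worst on acidification (as noted above)
    [0.4, 0.5, 0.4, 0.4],   # solvent extraction
])

# Normalize each criterion to [0, 1], then aggregate with a weighted sum.
norm = (impacts - impacts.min(axis=0)) / (np.ptp(impacts, axis=0) + 1e-12)
scores = norm @ weights
for name, s in sorted(zip(options, scores), key=lambda t: t[1]):
    print(f"{name}: weighted impact {s:.2f} (lower is better)")
```

The same scaffold could take the economic and social criteria mentioned above as additional columns, with weights reflecting policy priorities.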
Real-Time Monitoring

Lubricants can significantly extend the life of machines and mechanical components, which is essential for energy saving and conservation [11]. As different mechanical interfaces are in contact with each other and in relative motion, lubricants can directly control friction and wear [156]. Therefore, it is fundamental to make a proper lubricant selection and conduct on-site monitoring. Checking the efficiency and analyzing a lubricant within a tribosystem can provide important information regarding the proper functioning of the equipment itself [157]. Thus, faults can be detected early, and machinery shutdown can be avoided through an appropriate detection strategy. In this way, the overall efficiency of the process increases, maintenance and substitution costs decrease, and possible large-scale failures that could invalidate the entire process during a malfunction are prevented [157,158]. Another crucial aspect of lubricant monitoring, as mentioned in the 12 principles of green tribology [10], is the implementation of analytical controls during machinery service to avoid the possible formation of hazardous substances. Indeed, lubricant quality is of paramount importance to reduce friction losses and increase service life, e.g., in the automotive sector [157,159,160].

The general parameters governing the quality of lubricants in use are viscosity, density, pour point, flash point, and thermal and oxidation stability [48]. Contaminants from wear are often present and are also one of the leading causes of improper lubrication. During regular use, the size of the debris is constant and small, generally between 10 and 20 µm. As wear progresses, however, debris gradually increases in size (between 50 and 100 µm) and concentration [161]. Considering the transport sector, which hosts some of the most widespread tribosystems, as presented in Section 2.1, possible lubricant contamination with water, fuels, or other substances may reduce and worsen the oil performance. Moreover, mechanical components such as the engine can reach high temperatures, and the heat can affect the overall oil performance. Poor oxidation stability may affect the lubricant quality through acidification and consequent material deposition [48,158].

In the energy efficiency panorama, it is necessary to monitor lubricant degradation. Many simultaneous reactions drive the degradation of lubricants and the consequent loss of lubricating properties, which lead to poorer performance and increased energy consumption [162,163]. The achievement of high efficiencies in new engines is now a key focus of technological research. It has to face gas contaminant recirculation and elevated working temperatures, which increase the stress on lubricants [164,165] and require the implementation of oil degradation studies [163,165] analyzing the different changes that can occur in the main oil properties [165]. Oil degradation generates chemical changes that can modify viscosity, causing losses in efficiency due to the increased energy consumption needed to overcome friction, and can invalidate the engine's overall correct functioning [166,167].
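Since this degradation is driven by temperature-activated oxidation chemistry, the basic idea can be sketched as first-order Arrhenius kinetics. This is a generic textbook-style model for illustration only, not the mechanism of any study cited here; the rate parameters and the linear viscosity-degradation relation are assumptions.

```python
# Minimal sketch: lubricant oxidation as first-order Arrhenius kinetics.
# All constants are assumed, illustrative values.
import numpy as np

A = 1.0e9        # pre-exponential factor, 1/h (assumed)
Ea = 100e3       # activation energy, J/mol (assumed)
R = 8.314        # gas constant, J/(mol K)
eta0 = 1.0       # initial relative viscosity
beta = 0.5       # assumed viscosity rise per unit oxidized fraction

def oxidized_fraction(t_hours, T_kelvin):
    """Fraction of base stock oxidized after t hours at temperature T."""
    k = A * np.exp(-Ea / (R * T_kelvin))   # Arrhenius rate constant, 1/h
    return 1.0 - np.exp(-k * t_hours)

for T in (373.0, 393.0, 413.0):            # 100, 120, 140 degC
    x = oxidized_fraction(t_hours=500.0, T_kelvin=T)
    print(f"T = {T - 273.15:.0f} degC: oxidized {x:.1%}, "
          f"relative viscosity ~ {eta0 * (1 + beta * x):.2f}")
```

Even this toy model reproduces the qualitative behavior described above: the oxidized fraction, and hence the viscosity penalty, grows sharply with operating temperature.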
As mentioned in previous sections, the friction coefficient and wear scar diameter are evaluated to assess the effectiveness of a lubricant. It is also essential to determine other lubricant characteristics, such as the pour point, which is the temperature below which the lubricant loses its flow properties and can be assessed by the standard procedure described in ASTM D97 [48,168]. Generally, the abovementioned biological lubricants have a higher pour point than mineral lubricants because they do not contain additives. Moreover, the chemical nature of biological lubricants may raise the pour point owing to the presence of unsaturated chains [48]. The flash point indicates the lowest temperature at which the lubricant produces enough vapor to ignite; the standard governing its assessment is ASTM D56-21a [169,170]. In this case, the chemical nature of the biological lubricant is dominant: Due to strong molecular interactions, high flash points can be achieved compared to conventional oils [160].

Several methods are employed to monitor the quality of lubricants, such as vibration condition monitoring techniques and visual inspection [171], acoustic emission testing [172], and magnetic non-destructive techniques [173]. Through these techniques, it is possible to detect the presence of debris and its size, which is essential for assessing the level of wear on individual parts of a tribosystem. Optical-based methods are strongly influenced by the lubricants' transparency and refractive indices, and therefore by possible contaminants in the lubricant, such as air bubbles. Methodologies based on vibration analysis require complex systems for data acquisition [174]; although the techniques themselves are relatively simple, this requirement strongly limits their use. More sophisticated methodologies, such as those based on acoustic emissions, are sensitive to interference caused by background noise and temperature gradients [172]. Moreover, these assessment methods have an intrinsic limitation: They can detect debris, but they fail to differentiate between ferrous and non-ferrous debris [174]. This missing ability to distinguish represents a substantial restriction for the identification of the specific areas that may be worn. Therefore, techniques using inductive magnetic fields are employed to detect metal particles [158,161,174]. It should not be underestimated that performing these assessments requires significant installations and powering of the machines themselves, causing possible interference and impacting the overall life cycle assessment of the system. For this reason, recent studies have aimed to find evaluation systems that do not require external energy sources to operate, are small in size, and allow real-time assessment of lubricating oil performance [158].

On-Site Monitoring

The biggest challenge is to make the monitoring systems self-powered and independent from the external power grid. The solution presented by Zhao et al. [158] represents a possible technology to be implemented in the automotive sector. The proposed triboelectric nanogenerator (TENG) can self-power and monitor the condition of the lubricant, and it is outlined in Figure 5.
The TENG was made of a non-metallic tube externally partially covered by a copper foil. The lubricant motion within the tube generated an electric signal due to triboelectrification and electrostatic induction. The interaction of the non-metallic tube with the copper foil produced a layer of negative charges at the interface with the lubricant. The amplitude of the output values and the variation of the voltage over time gave information on lubricant contamination. Contaminants modify the performance of the oil electrification process, generating different electrical output values. By comparing the initial and the in-use output values, it is possible to estimate the level of deterioration typically caused by thermal oxidation. This technology could detect metal contaminants or water because they interact with the tube surface differently than the lubricant does, and they therefore generate different electrical outputs.

The evolution of viscosity, among other properties, gives information about the progress of degradation. The study by Notay et al. [163] demonstrated the close relationship between viscosity and the chemical evolution of the lubricant itself; an increase in viscosity is generally observed as lubricant degradation increases [163,166]. Notay et al. [163] presented a monitoring system based on the observation of laser-induced fluorescence. A small amount of a fluorescent additive in the lubricant allowed the evaluation of oil degradation based on the fluorescent activity. Exhaust gas recirculation techniques are usually adopted to reduce NOx emissions, and the reintroduction of these gases can accelerate lubricant deterioration [175]. A study by Toledo et al. [176] aimed to identify possible diesel fuel contamination in the lubricant. Using resonant microstructures, the natural frequency of the system was assessed. The resonators are films made of a piezoelectric material (aluminum nitride) that act as actuators and detectors. It is possible to correlate these factors with lubricant density and viscosity through suitable oscillation frequencies, voltage gain, and appropriate models. Since implementing a liquid-immersed oscillator can be challenging, a strategy to cancel the parasitic signal was proposed using a reference device.

Rossegger et al. [177] marked the lubricant with a non-radioactive isotope of hydrogen, namely, deuterium. The main properties, such as viscosity, were not affected by trace amounts up to 10%. The quantity of the substance was monitored in the exhaust gas, and a mass balance was carried out to determine lubricant consumption.

Lubricant Performance Modeling

The study by Blaine and Savage [178] and the research by Grandgirard et al. [179] presented models of lubricant degradation, obtaining good results in agreement with the experimental data. Blaine and Savage [178] proposed a predictive model of the chemical reactions in a lubricant during use. The study focused on oxidation and the deterioration of properties as the degree of oxidation advanced, considering n-hexadecane as a reference. This substance was chosen because of the similarity of its bonds to those of petroleum-derived lubricating oils. The reactions of n-hexadecane provided information about the chemical reactions occurring within the lubricant. Grandgirard et al. [179]
proposed a kinetic chemical model to predict the properties of an automotive lubricant, in particular for a diesel engine. Experimental data were collected and then implemented in the model, obtaining results in agreement with the actual testing results. The model was based on the mechanical processes and chemical reactions that govern the quality of the lubricant within the engine. By modeling the evolution of these chemical reactions through kinetic models and predicting the mechanical processes through computer simulations, a predictive model of the lubricant within the engine can be achieved. Pfaendtner et al. [162] proposed a library of coefficients and parameters to model the thermal degradation of lubricants, broadening the knowledge of this field and allowing the implementation of different predictive models.

Marian and Tremmel [180] and Mokhtari et al. [172] proposed computer-based prediction and simulation models based on machine learning and artificial intelligence. The development of advanced data management and analysis methods allows predictive models based on rich research and high data quality to be built. Moreover, being inherently predictive, such models can also extend existing data. Many parameters in the field of tribology cannot yet be predicted by closed-form mathematical equations. However, thanks to the adaptability and efficient data handling of machine learning and artificial intelligence techniques, these models can adapt to different problems by providing analyses, predictions, and optimizations in the short term, if not in real time. A limitation of these techniques is the acquisition and comparability of data obtained from different tests: They can have different origins, and the scale from which they are extrapolated can generate heterogeneity in the results [180].

Real-time monitoring of the main parameters that characterize a lubricant is essential to understanding the correct functioning of the lubricant itself, reducing friction and wear between the moving parts and thus improving efficiency. Real-time monitoring systems are fundamental to preventing further damage and downtime. In addition, to improve the overall efficiency of tribosystems, it is advisable, where possible, to focus on the development of miniaturized and self-powered monitoring systems.

Sustainable Energy Applications

In 2020, the European Union reached 22.1% of gross final energy consumption from renewable energy, owing to the national action plans of the Union members, as reported in Figure 6. These plans laid out the roadmap for renewable energy development to meet the members' obligations concerning the total percentage of renewable energy in gross final energy consumption, particularly in the transport sector [181]. In recent years, geopolitical, social, and environmental pressures have highlighted the importance for the European Community members, and in general for every country worldwide, of becoming energetically independent to assure economic and social growth. Tribological design should consider sustainable energy applications as one of its top priorities. As underlined in Section 2.6, identifying a priority field to focus on is one of the main differences between the green tribology principles and the green chemistry and green engineering principles.

A suitable tribology study can increase the efficiency of clean energy production, decrease energy dissipation, and make these technologies economically competitive compared to traditional fossil fuel-based technologies.
Material Improvement

The first stage in enhancing the development of efficient renewable power generation systems is the improvement of the traditional materials employed in this sector. A suitable material design can improve the overall system performance and prevent damage or failure. Somberg et al. [184] proposed a polyphenylene sulfide (PPS) and short carbon fiber (SCF) composite as a performing bearing material. The study compared the PPS-SCF composite with commercial materials typically employed for bearing components. The PPS-SCF composite exhibited higher hardness than the other polymeric materials and the lowest friction coefficient and specific wear rate in water-lubricated conditions. The result was achieved thanks to the synergetic effect of the graphene oxide and SCFs within the polymeric matrix, enhancing the wear resistance. The study underlined how strongly the performance of the investigated materials depended on the considered environment. The proposed composite did not exhibit the same good friction coefficient and wear rate values in dry or oil-lubricated conditions, where other materials should be selected.

Failure

Understanding the components' inefficiency or failure is necessary to propose new materials or lubricants for a tribosystem. Dhanola and Garg [194] analyzed the principal components prone to the most common failure modes. Their review found that the electrical system had the highest failure frequency in one year, namely, 0.6, but one of the shortest downtimes per failure, namely, less than two days. Components such as the generator, gearbox, and drivetrain had the lowest annual frequency, below 0.2, but the highest downtimes per failure, more than six days (these figures are combined in the sketch below). Bearings in such components are prone to failure due to the operative environment, mechanical stress, and temperatures that may cause their premature damage and failure. The main failure modes identified by the review were scuffing due to plastic deformation, electric discharge, micro-pitting, white etching cracks due to microstructure flaking, fretting wear, and false brinelling, usually generated by low-amplitude vibrations. Gearbox bearings are primarily responsible for gearbox failures. Proper lubrication usually prevents tooth breakage or pitting wear. The primary cause of gearbox bearing damage was found to be the steel debris generated by rolling contact fatigue, white etching areas, and surface pitting. Loss of lubrication can be generated by various causes, e.g., lack of heat removal, inadequate lubricant, pump loss, filter failure, and alteration of the lubricant. This phenomenon produces undesirable and detrimental effects on the system, leading to overall failure.

In their thorough review, Liu and Zhang [193] analyzed different modes of failure and their relations with their principal causes. As reported in Figure 8, one failure mechanism can be generated by various causes, e.g., electrical pitting can be caused by electrical arc erosion, bearing overheating owing to mounting failure, or lubricant failures. It is fundamental to analyze the main mechanisms to adequately identify requirements for the materials and a suitable monitoring system.
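A back-of-the-envelope way to combine the failure statistics quoted above is to rank subsystems by expected annual downtime (failure frequency times downtime per failure). The sketch below uses the rounded bounds from the text (0.6 failures/year and up to 2 days for the electrical system; up to 0.2 failures/year and at least 6 days for the others), so its outputs are indicative only.

```python
# Rank subsystems by expected downtime days per year. Frequencies and
# durations are the rounded bounds quoted from the review above, not
# exact published values.
failures_per_year = {"electrical system": 0.6, "generator": 0.2,
                     "gearbox": 0.2, "drivetrain": 0.2}
days_per_failure = {"electrical system": 2.0, "generator": 6.0,
                    "gearbox": 6.0, "drivetrain": 6.0}

expected = {c: failures_per_year[c] * days_per_failure[c]
            for c in failures_per_year}
for comp, d in sorted(expected.items(), key=lambda t: -t[1]):
    print(f"{comp}: ~{d:.1f} expected downtime days/year")
```

With these rounded figures, the frequently failing electrical system and the rarely failing drivetrain components come out with comparable expected downtime, which is why both failure frequency and repair time matter when setting monitoring priorities.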
Monitoring

As mentioned in Section 2.11, real-time monitoring is one of the strategies to avoid failure and prevent downtimes and the related costs. The development of monitoring is fundamental to anticipating failure within the system. The prior analysis of failure data makes it possible to recognize and detect the signs that indicate possible damage to power generation plants using sensors and equipment. The most widespread indicators evaluated are [193]:
• Vibration - analyzed by accelerometers that cover several frequencies. This is the most common and widely employed technique;
• Acoustic emission - detected by transducers that collect the propagation of the elastic waves within the solid subjected to stresses;
• Lubricant and debris analysis - by the use of filters that allow debris within the lubricant to be collected and removed;
• Power quality - indicates possible component damage.

The development of efficient energy production from renewable sources is strictly connected to improving all the different aspects of tribology: material selection and design, monitoring, and continuous investigation of the causes of failure.

Conclusions

The 12 principles of green tribology were taken as a cue to explore research developments in the tribology field that may contribute to steering the discipline in the direction of more sustainable tribosystems. The cross-cutting nature of tribology may enable a relevant contribution to energy efficiency, material conservation, waste reduction, and the decrease of pollutants.

Each principle indicated a valuable strategy to support the enhancement of sustainability from different points of view, namely, environmental, economic, health, and security. Significant studies were reported to supply practical examples and move from the theoretical approach, saying what should be done, to pragmatic works, i.e., what has been done.

Figure 1. Representation of fuel energy employment in a passenger car, adapted from [14].
Figure 2. Most common solid lubricants employed in the market, adapted from [29].
Figure 3. Worldwide legislation to manage industrial and consumer chemicals, adapted from [50].
Figure 4. Sankey diagram of the proposed relations and influences of the green chemistry principles and the green engineering principles on the green tribology principles.
Figure 6. Average share of energy from renewable sources in the European Union from 2004 to 2020 [181].
Figure 8. Principal causes of failure and their relations with the failure mechanisms, adapted from [193].

• Biodegradability, tested by the Organization for Economic Co-operation and Development (OECD) 301 [51];
• Bioaccumulation, defined according to molecular weight, bioconcentration factor, and water partition coefficient (K_ow);
• Water toxicity within limits, set by OECD 201 [52], 202 [53], and 203 [54];
• Derivation from at least 25% renewable and traced sources. If vegetable oils (VOs) are used, the Forest Stewardship Council Chain of Custody certification is required, and in the particular case of palm oil, the Roundtable on Sustainable Palm Oil certification or an analogous one is required;
• Recycled content - a minimum content of 25% post-consumer plastic in packaging, which should be designed to avoid overuse and waste;
• Minimum technical performance in terms of fitness for purpose (ISO 12924 for generic lubricants) [55];
• Consumer information for use and disposal.

Table 1. Green chemistry and green engineering principles.
2022-06-19T15:13:07.647Z
2022-06-17T00:00:00.000
{ "year": 2022, "sha1": "36ae0d8a9895ffc6e7e434793ee41b56cf2e3610", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2075-4442/10/6/129/pdf?version=1655465819", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "e50a5ca515b237f2873be62768b8093afb2e75df", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [] }
3519121
pes2o/s2orc
v3-fos-license
Fully-automated left ventricular mass and volume MRI analysis in the UK Biobank population cohort: evaluation of initial results

UK Biobank, a large cohort study, plans to acquire 100,000 cardiac MRI studies by 2020. Although fully-automated left ventricular (LV) analysis was performed in the original acquisition, this was not designed for unsupervised incorporation into epidemiological studies. We sought to evaluate automated LV mass and volume (Siemens syngo InlineVF versions D13A and E11C) against manual analysis in a substantial sub-cohort of UK Biobank participants. Eight readers from two centers, trained to give consistent results, manually analyzed 4874 UK Biobank cases for LV end-diastolic volume (EDV), end-systolic volume (ESV), stroke volume (SV), ejection fraction (EF), and LV mass (LVM). Agreement between manual and InlineVF automated analyses was evaluated using Bland–Altman analysis and the intra-class correlation coefficient (ICC). Tenfold cross-validation was used to establish a linear regression calibration between manual and InlineVF results. InlineVF D13A returned results in 4423 cases, whereas InlineVF E11C returned results in 4775 cases and also reported LVM. Rapid visual assessment of the E11C results found 178 cases (3.7%) with grossly misplaced contours or landmarks. In the remaining 4597 cases, LV function showed good agreement (mean ± SD of the differences, ICC): ESV −6.4 ± 9.0 ml, 0.853; EDV −3.0 ± 11.6 ml, 0.937; SV 3.4 ± 9.8 ml, 0.855; and EF 3.5 ± 5.1%, 0.586. Although LV mass was consistently overestimated (29.9 ± 17.0 g, 0.534) due to larger epicardial contours on all slices, linear regression could be used to correct the bias and improve accuracy. Automated InlineVF results can be used for case-control studies in UK Biobank, provided visual quality control and linear bias correction are performed. Improvements between InlineVF D13A and InlineVF E11C show the field is rapidly advancing, with further improvements expected in the near future.

Introduction

UK Biobank is a large prospective cohort study designed to assess the determinants of diseases of middle and old age [1]. Initial data collection in 500,000 participants, including genetic, physical and functional measures, was completed in 2010. Participants will be followed for 20 years, enabling nested case-control studies to assess exposures and preexisting characteristics in the development of disease and the effect of treatment. In 2013, an imaging extension was initiated with the goal of imaging 100,000 UK Biobank participants by 2020 [2]. The imaging studies include a 20 min cardiovascular magnetic resonance (CMR) examination, to assess cardiac phenotypes including ventricular function [3]. However, analysis of ventricular function parameters in 100,000 cases is impractical using current manual methods, which require drawing the ventricular boundaries at end-diastole (ED) and end-systole (ES) [4]. Also, manual assessment requires substantial training and is subject to inter-observer and inter-center variation [5]. Large-scale CMR studies, such as UK Biobank, therefore present substantial challenges and opportunities for epidemiological analysis of cardiac phenotypes [6-9]. Recently, fully-automatic analyses of ventricular function are becoming available, with immediate application to large cohort studies [10-13].
In the UK Biobank CMR imaging examination, the Siemens syngo InlineVF (Siemens Healthcare, Erlangen, Germany) fully automated analysis of left ventricular (LV) volume was performed during acquisition. This software automatically identifies LV landmarks at the LV base (mitral valve) and apex in long-axis cine acquisitions, locates endocardial and epicardial contours at ED and ES in each short-axis cine slice, and performs volume calculations to determine ventricular function parameters (Fig. 1).

Fig. 1 (a) InlineVF results for a typical case with good agreement for volume (<5 ml for EDV and ESV) but overestimation of mass (61 g) compared with manual analysis. (b) InlineVF results for a case with a relatively large discrepancy between manual and InlineVF results (30 ml in EDV). Contours show errors at the base slice.

However, the software was designed for supervised analysis with visual assessment for quality control in a clinical setting. Since these results are already available to researchers as part of the initial UK Biobank CMR image dataset, their application to large cohort studies such as UK Biobank requires investigation. Although the D13A version was used in the initial automated analysis, a subsequent E11C version will also be made available. This paper compares the performance of these versions in the first 5,000 UK Biobank cases. We sought firstly to evaluate the performance of automated ventricular function analysis against a standard manual analysis, in a substantial sub-cohort of UK Biobank. The second objective was to correct for bias between the automated and manual analyses to enable automated results to be used in future UK Biobank case-control studies.

Subjects

CMR examinations from the first 5065 UK Biobank imaging extension participants were assessed. All participants gave written informed consent and the appropriate institutional review boards approved the study protocol (National Research Ethics Service North West 11/NW/0382).

Imaging protocol

The full CMR protocol and rationale have been described in detail previously [3]. Briefly, all imaging was conducted on a 1.5 T scanner (MAGNETOM Aera, syngo MR D13A, Siemens Healthcare GmbH, Erlangen, Germany) using a phased-array cardiac coil. Ventricular function scans consisted of retrospectively gated cine balanced steady-state free precession breath-hold acquisitions performed in horizontal long-axis, vertical long-axis, and left ventricular outflow tract orientations, as well as a complete short-axis stack covering the left and right ventricles. Typical parameters were: TR/TE = 2.6/1.1 ms, flip angle 80°, GRAPPA factor 2, voxel size 1.8 × 1.8 × 8 mm³ (6 mm for long axis). The actual temporal resolution of 32 ms was interpolated to 50 phases per cardiac cycle (~20 ms).

Manual analysis

Manual analysis of LV volumes and mass was performed in accordance with the Society of Cardiovascular Magnetic Resonance recommendations [4]. Eight readers in two core laboratories were trained according to standard operating procedures prior to study commencement, to ensure minimal inter-observer bias. CMR examinations were analysed using cvi42 post-processing software (Version 5.1.1, Circle Cardiovascular Imaging Inc., Calgary, Canada). The ED frame was selected as the first frame of the series and the ES frame was selected as the frame with the smallest LV blood pool area in the mid-ventricular slice. At both ED and ES, the most basal slice included had at least 50% of the LV blood pool surrounded by myocardium.
Papillary muscles were included in the blood pool. Inter-observer errors were quantified in 50 randomly selected cases. The software provided ED and ES volume (EDV and ESV, respectively), ejection fraction (EF), stroke volume (SV), and mass (LVM). LVM was calculated assuming a myocardial density of 1.05 g/ml.

InlineVF

The D13A version of the InlineVF analysis algorithm was performed as part of the image acquisition, and results were stored as a separate DICOM image series as part of the image data for each case. The InlineVF algorithm has been described previously [10,14,15]. Briefly, shortest path algorithms were used to determine epicardial and endocardial contours that were propagated to other frames and used in other slices as a geometric prior. All frames were segmented in each slice using an inverse consistent deformable registration to register all frames to the first frame. The segmentation was propagated to other frames through the forward and backward deformation fields. The long-axis slices were used to detect basal and apical landmarks using machine learning methods [15]. These landmarks were used to define a base plane approximation at the level of the mitral valve, which was used to cut contours to avoid inclusion of atrial volume in the ventricle. Papillary muscles were included in the blood pool. LV volumes and mass were calculated by slice summation, with a correction for the location of the base plane. LVM was calculated by assuming a myocardial density of 1.05 g/ml. In this paper we used the LVM calculated at ED for both InlineVF and manual estimates. A subsequent release of the software, version E11C, was applied retrospectively in a batch-processing mode prototype. The E11C version provided an estimate of LVM calculated from the epicardial contours, whereas the D13A version did not. Other changes incorporated in E11C included a refined detection of the LV blood pool: In addition to the detection of the heart based on a Fourier transform over time to detect moving objects, RV insert points were also detected to derive a blood pool feature point. Thresholded connected components were then clustered across slices to recover the blood pool, using information about the location of the connected component with respect to the blood pool point as an additional feature in the clustering algorithm. The batch-processing prototype was implemented on a Windows 7 workstation using Python and Windows Batch scripts. The input was the directory containing DICOM images for an entire study and the output was the LV mass and volume. Outliers with EDV or ESV > 500 ml in the automated results were rejected as unphysiological. For the E11C results, a rapid visual assessment of the resulting contours was also performed. Algorithm failures were identified if the InlineVF contours were grossly erroneous (e.g. contouring of organs other than the LV) or identification of the landmarks or LV base plane was grossly incorrect or absent.

Statistics

Agreement was assessed by Bland-Altman analysis of bias (mean difference) and precision (standard deviation of the differences), 95% limits of agreement, and the two-way random single-measures intra-class correlation coefficient (ICC) for agreement (i.e. including systematic differences) [16]. The Levene test was used to test for differences in precision. Significant differences were defined at p < 0.05. Linear regression was performed to determine a correction between manual and InlineVF parameters.
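As a concrete illustration of these agreement statistics, the sketch below computes Bland-Altman bias, precision, and 95% limits of agreement, together with a hand-rolled two-way random single-measures ICC for absolute agreement (ICC(2,1)) from the standard ANOVA decomposition. The paired arrays are placeholders, not study data.

```python
# Bland-Altman agreement metrics and ICC(2,1) for paired measurements,
# e.g., manual vs automated EDV in ml. Values below are placeholders.
import numpy as np

manual = np.array([150.0, 132.0, 170.0, 118.0, 160.0, 141.0])
auto = np.array([146.0, 130.0, 165.0, 120.0, 152.0, 138.0])

diff = auto - manual
bias, sd = diff.mean(), diff.std(ddof=1)
print(f"bias {bias:.1f}, precision (SD) {sd:.1f}, "
      f"95% LoA [{bias - 1.96 * sd:.1f}, {bias + 1.96 * sd:.1f}]")

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single measures."""
    n, k = ratings.shape                                   # subjects x raters
    grand = ratings.mean()
    ssr = k * ((ratings.mean(axis=1) - grand) ** 2).sum()  # between subjects
    ssc = n * ((ratings.mean(axis=0) - grand) ** 2).sum()  # between raters
    sse = ((ratings - grand) ** 2).sum() - ssr - ssc       # residual
    msr, msc = ssr / (n - 1), ssc / (k - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

print(f"ICC(2,1) = {icc_2_1(np.column_stack([manual, auto])):.3f}")
```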
The correction parameters were assessed using Monte Carlo cross-validation [17]. The dataset was randomly divided into 90% training and 10% test cases, and prediction errors were calculated in the test cases using the linear correction derived from the training cases. The resulting prediction errors were averaged over 500 trials (sketched below). Statistical analysis was performed using R (version 3.3.2) statistical software [18].

Results

A total of 5065 consecutive UK Biobank CMR examinations were evaluated. Of these, 191 cases had either CMR data of insufficient quality for manual LV analysis or a CMR identifier that could not be matched with the UK Biobank identifier. Manual LV analysis was performed in the remaining 4874 cases. Table 1 shows participant demographics. Typical inter-observer errors quantified in 50 cases were −2.2 ± 4.7 ml for EDV, −2.4 ± 4.7 ml for ESV, 0.53 ± 5.8 ml for SV, 2.7 ± 6.6% for EF, and 1.9 ± 6.5 g for LVM. InlineVF D13A results were obtained in 4423 cases (9% failure rate). However, several cases returned erroneous volumes due to gross failures of the algorithm to detect LV features. Some of these cases could be readily identified by unphysiological EDV or ESV. However, many cases could not be automatically identified as failures from the volumes alone. Excluding the 10 cases with EDV or ESV > 500 ml as implausible, and so outliers for this cohort, comparisons between manual and automated results in the remaining 4413 cases are shown in Table 2. Biases (mean differences) were small, but standard deviations of the differences were relatively large, leading to wide limits of agreement. InlineVF E11C results were obtained in 4775 cases (2% failure rate). Excluding the 101 cases with EDV or ESV > 500 ml, comparisons between manual and automated results in the remaining 4674 cases are shown in Table 3. Biases were again small, and precision in EDV and ESV was somewhat improved (p < 0.05) over the D13A version. LVM showed consistent overestimation relative to manual results. Rapid visual assessment of the 4775 InlineVF E11C contours and landmarks found 178 cases with gross errors in automated contour or landmark placement (36 in contours only, 46 in the landmarks only, and 96 in both contours and landmarks). Results for the remaining 4597 cases (6% total failure rate) are shown in Table 4. The precision of the EDV, ESV, SV and EF estimates was considerably improved (all p < 0.05), whereas LVM precision (p = 0.12) was unaffected, compared with Table 3. Figure 1a shows an example of InlineVF E11C results for a typical case with good agreement for volume (<5 ml for EDV and ESV) but overestimation of mass (61 g) compared with manual analysis. This shows some errors in contour placement for the basal slice, but the difference in LVM was mainly due to consistently larger epicardial contours for all slices. Figure 1b shows a case with a relatively large discrepancy between manual and InlineVF results (30 ml in EDV). This case illustrates good contours for most slices, except for the basal slice. Figure 2a shows a case that was classified as a failure by visual inspection. The algorithm in this case has detected both ventricles as the LV. Figure 2b shows another case classified as a failure, but with errors at the basal and apical slices only. Figure 3 shows Bland-Altman plots for ventricular function parameters and LV mass for the InlineVF E11C results with visual failures removed (n = 4597). LVM showed a consistent overestimation with the InlineVF results, increasing with increasing mass.
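The Monte Carlo cross-validation calibration described at the start of this section can be sketched as follows. The paired measurements are synthetic stand-ins for the manual and automated values, and the linear correction is fitted with ordinary least squares (numpy's polyfit).

```python
# Monte Carlo cross-validation of a linear calibration manual ~ a*auto + b:
# repeated random 90/10 splits, fit on the training split, evaluate on the
# test split, and average over 500 trials. Data here are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n = 400
manual = rng.normal(140.0, 30.0, n)                    # synthetic "manual" EDV, ml
auto = 0.95 * manual + 10.0 + rng.normal(0, 8.0, n)    # synthetic automated EDV

errors, coeffs = [], []
for _ in range(500):
    idx = rng.permutation(n)
    test, train = idx[: n // 10], idx[n // 10:]
    a, b = np.polyfit(auto[train], manual[train], 1)   # linear calibration
    pred = a * auto[test] + b
    errors.append(pred - manual[test])
    coeffs.append((a, b))

err = np.concatenate(errors)
a_m, b_m = np.mean(coeffs, axis=0)
print(f"mean slope {a_m:.3f}, intercept {b_m:.1f}")
print(f"prediction bias {err.mean():.2f} ml, precision {err.std(ddof=1):.2f} ml")
```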
The LVM overestimation is verified in Fig. 4 (linear regression plots), in which LV EDV and ESV showed regression lines near the line of identity, and LVM had the largest deviation from identity, with a consistent overestimation that was well characterized by a linear regression. A tenfold cross-validation was performed to determine the robustness of the linear regression parameters. The resulting regression parameters are shown in Table 5. Slopes were close to identity for EDV and ESV, but lower slopes and higher intercepts were found for SV, EF, and LVM. Table 5 also shows the errors of prediction if automated results were used in place of manual results (after linear correction using the mean slope and intercept found by cross-validation). Bias has been removed, as expected, but precision is also improved for the LVM estimate. The intra-class correlation coefficients between the corrected automated results and the manual results also show improvement for all parameters. In order to estimate the number of cases required for case-control studies using the InlineVF estimates in UK Biobank, a number of assumptions are required. The error of the measurement can be estimated from the precision values shown in Table 5. Table 6 shows indicative power calculations illustrating the number of subjects required to detect a difference in CMR variables, assuming a type I error rate of 5% and a standardized effect size (mean effect divided by standard deviation) of 30-100%. For example, a study designed to detect a 30% standardized effect size for LV mass, assuming a standard deviation of 13 g (Table 5), would require 234 patients in each group to detect a mean change of 4 g (30% of 13 g) with 90% power. However, additional variation is likely due to variability in the manual results and intrinsic biological variability.

Discussion

Fully automated image analysis methods are desirable for large cohort studies such as UK Biobank, due to the complex nature of image analysis and the requirement for large numbers of cases. Automated analysis tools for LV function are now becoming more widely available; for a recent review see [11]; however, most studies have reported limited numbers of cases. An open benchmark challenge comparison of fully-automated and semi-automated methods in 95 cases showed that the fully-automated Siemens InlineVF algorithm performed as well as semi-automated methods [10]. More recently, automated methods have been reported in studies with over 1000 participants [13,19]. The Siemens InlineVF analysis tool was one of the first fully automated LV analysis methods commercially available on standard scanners [20,21]; the D13 version was enabled for the initial UK Biobank imaging acquisitions, and these results are available to researchers as part of the initial image dataset. However, this tool was designed for clinical review in association with visual inspection of results, as required by regulatory and certification bodies. Its application to epidemiological research studies such as UK Biobank was therefore unclear. In this study, we report the largest evaluation of a fully-automated LV analysis algorithm performed to date, to our knowledge. Improvements with the E11C version of InlineVF as well as LVM quantification, using visual inspection and linear bias correction, are demonstrated. Although the detection failure rate was considerably improved in the E11C version, a review of remaining failures highlighted some conditions where misdetection was more likely; these are described below.
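Before turning to those misdetection conditions, the indicative power calculation behind Table 6 can be reproduced with the usual normal approximation for comparing two group means. For a standardized effect size of 0.30, a type I error rate of 5%, and 90% power, this gives the ~234 patients per group quoted above (an exact t-distribution-based value would be very slightly larger).

```python
# Two-sample power calculation via the normal approximation:
#   n per group = 2 * ((z_{1-alpha/2} + z_{1-beta}) / d)^2,
# where d is the standardized effect size and 1-beta is the power.
from math import ceil
from scipy.stats import norm

def n_per_group(d, alpha=0.05, power=0.90):
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return ceil(2 * (z / d) ** 2)

for d in (0.3, 0.5, 1.0):
    print(f"standardized effect {d:.1f}: {n_per_group(d)} per group")
# standardized effect 0.3 -> 234 per group, matching the example above
```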
Firstly, the aorta can appear very bright and pulsate strongly in some cases, leading to misdetection of the left ventricular blood pool. Secondly, the whole heart (left and right ventricles) may be detected as the left ventricle if the contrast between blood and myocardium is weak, or if there is some blurring due to irregular heart rate or breathing. Thirdly, the algorithm can fail if the gray level distributions of the different regions (blood, myocardium, lungs, partial voluming) cannot be modeled correctly due to unexpected intensities and contrast in the images. To some extent, such failures could be mitigated by re-acquisition with better breath-holds, adjusted slice positioning, or arrhythmia rejection. However, in the context of large cohort studies such as UK Biobank, it is not desirable to expend a large effort to achieve a 100% success rate, since a small number of dropouts can be accommodated. The best precision (standard deviation of the differences) obtainable was about twice that of the manual inter-observer precision for EDV and ESV, and over three times for LVM. However, the technology of automated image analysis is currently advancing at a rapid pace, with new developments in machine learning (e.g. deep convolutional neural networks) showing considerable promise [22]. Therefore, we expect that improvements in algorithms will lead to improved precision, leading to a reduction in the number of cases required for case-control studies. Limitations of the study include the visual assessment required to detect algorithm failures. Although this is fast on a case-by-case basis, review of many thousands of cases is time-consuming. In the future, it would be useful to automatically assess the quality of the analysis, for example to automatically flag failures, or to give an uncertainty in the estimate. Some failures could be detected simply by implausibly large or small LV volumes (as in Fig. 2a). This is not possible for the case in Fig. 2b, however, where a more complex method is required. It may be possible to detect such failures using machine learning methods, which would in turn lead to better performance of the original detection. This is an active area of further study. Another area of future research is the correction of breath-hold misregistration. The long-axis slices were used to determine a basal cut-off plane below which volume was included in the ventricle. Inconsistent breath-holding can influence the position of this plane. Although the base plane is an average of all the long-axis slices, and is therefore robust to moderate breath-hold misregistration, future methods will enable better registration of the short- and long-axis slices. Another potential limitation was that the manual results were treated as correct for all calculations. It is known that manual contouring can show bias between centers due to differences in training [5]. Suinesiaputra et al. [5] provided a consensus dataset of 15 cases derived from analyses from seven independent centers for benchmarking purposes. Readers from the current study also analyzed these cases, resulting in typical consensus errors of EDV = −6.72 ± 12.03 ml; ESV = −3.58 ± 12.75 ml; EF = −0.72 ± 3.51%; LVM = −1.28 ± 11.96 g. Thus, the manual results of the current study are in good agreement with previous studies and other centers. Another source of potential error is the choice of ED and ES frames, since this was assessed manually by visual inspection of the mid-ventricular slice.
The automatic algorithm, in contrast, computed volume for all frames and reported the maximum and minimum volumes. Future studies should investigate the performance of different vendors' software and quantify differences between methods.

Conclusions

Automated InlineVF results provided in UK Biobank can be used for case-control studies, provided visual assessment for quality control and linear adjustment of bias are performed. Further improvements in performance are expected in the near future with rapid advances in automated analysis technologies.
2017-08-25T05:33:47.111Z
2017-08-23T00:00:00.000
{ "year": 2017, "sha1": "d76b08f13bb1dd92c091c98afcd2dc36614b6c1a", "oa_license": "CCBY", "oa_url": "https://europepmc.org/articles/pmc5809564?pdf=render", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "74bd511b53b69b79d1fe5961fecc5f9899764aaa", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
244661068
pes2o/s2orc
v3-fos-license
Estimating the effective fields of spin configurations using a deep learning technique

The properties of complicated magnetic domain structures induced by various spin–spin interactions in magnetic systems have been extensively investigated in recent years. To understand the statistical and dynamic properties of complex magnetic structures, it is crucial to obtain information on the effective field distribution over the structure, which is not directly provided by magnetization. In this study, we use a deep learning technique to estimate the effective fields of spin configurations. We construct a deep neural network and train it with spin configuration datasets generated by Monte Carlo simulation. We show that the trained network can successfully estimate the magnetic effective field even though we do not offer explicit Hamiltonian parameter values. The estimated effective field information is highly applicable; it is utilized to reduce noise, correct defects in the magnetization data, generate spin configurations, estimate external field responses, and interpret experimental images.

Results

Preliminaries of dataset and training. We select magnetization datasets containing considerable variety in the spin configurations but with a certain rule that can be learned by our network. For this purpose, the magnetic labyrinth configurations of a two-dimensional magnetic system are used in this study. Magnetic labyrinth configurations, as shown in Fig. 1a, vary in their shape, but the structures are in a local energy minimum state, and thus, they are energetically and topologically stable. The local magnetic moment is aligned along the effective field. The strength of the magnetic moment in the structure is constant, whereas the strength of the local effective fields varies spatially. The effective field strength is dominantly determined by the exchange interaction, but small spatial variances exist due to the Dzyaloshinskii-Moriya interaction (DMI) and the detailed labyrinth structure. In Fig. 1a, we can see how the effective field strength varies spatially. Therefore, the strength of the effective fields is the hidden information that cannot be directly obtained from the spin configurations. In our study, we use a system that includes exchange interactions and DMI, and we train the network to infer the effective field from these two interactions. If our network is applied to a system that has additional energy contributions, such as Zeeman energy or weak anisotropy, their effective field contributions can be added to correct the inferred effective field. If we use a dataset containing additional energies explicitly, it is also possible to train the network to estimate the effective fields from them.

The properties of the labyrinth magnetic structure have been extensively investigated, both numerically and experimentally [22-24,29,30]. With a theoretical model, it is possible to calculate the effective field from the spin structure and to generate a magnetic structure with Monte Carlo simulation. Therefore, the magnetic labyrinth configuration provides a model system for evaluating the trained network and checking whether the network can estimate a physically plausible effective field from the structures. Details of dataset generation are explained in the "Methods" section. In this study, we use a fully convolutional network (FCN) to estimate the effective field from the spin configuration. Figure 1b shows the schematic network training workflow.
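As a minimal sketch of the kind of FCN described here (the actual architecture is given in the paper's "Methods" section and is not reproduced), the following PyTorch module maps a 3-channel spin map (Sx, Sy, Sz) to a 3-channel effective-field map using only convolutions, which is what makes it size-agnostic. The depth, channel widths, and kernel sizes are placeholder assumptions.

```python
# Illustrative fully convolutional spins -> effective-field estimator.
# No dense layers, so any input height/width is accepted.
import torch
import torch.nn as nn

class FieldFCN(nn.Module):
    def __init__(self, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, 3, 3, padding=1),      # -> (Fx, Fy, Fz)
        )

    def forward(self, spins):                       # spins: (B, 3, H, W)
        return self.net(spins)

model = FieldFCN()
for size in (128, 256):                             # size-agnostic inference
    fields = model(torch.randn(1, 3, size, size))
    print(fields.shape)                             # (1, 3, size, size)
```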
We feed the spin configurations from the simulation as the input, and the FCN is trained to estimate the effective field. An FCN can derive output from an input image of any size, even if it is trained with data of a specific size. Thus, we use the FCN as our network, and it can be applied to estimate the effective field from a spin configuration of any size. Due to these properties of the FCN, we can apply our network to magnetization images from experiments as well as to data generated by simulated annealing. Details of the network structure are discussed in the "Methods" section.

Characteristics of the trained network. During the network training process, the training loss and validation loss decreased to the order of 10⁻⁵ (Fig. 2a). We first investigate the training results of the deep learning algorithms. This is done by estimating the effective fields from the spin configurations in four randomly chosen samples from the test dataset and analyzing the ratio between the true effective fields F_{x,y, or z} and the estimated effective fields F*_{x,y, or z}, with the subscript denoting the x, y, or z component of the fields. In Fig. 2b, we see that F_{x,y, or z} and F*_{x,y, or z} have a strong linear correlation.

The effective field information obtained from the trained network provides expanded information on the spin structure, and it can be manipulated to recover or evolve the spin structure. When a new spin structure is obtained from the effective field, the trained network can be used to infer its effective field again. To apply the effective field information, we use the recursive process presented in Fig. 3a. The recursive process is composed of feeding the input spin configuration to the trained network, generating a new spin configuration from the output effective field, and refeeding the spin configuration as a new input to the network. First, the trained network provides the estimate of the effective field, and then, the effective field is modified according to the needs of the application. Additional fields such as external fields or fluctuations, which are not considered in the training dataset, can be included in the estimated effective field. With the effective field information, a new magnetization map is generated by an evolutionary method. In a statistical study, magnetic moments can be sampled from a thermal distribution. In a dynamic study, they can be evolved using equations of motion such as the Landau-Lifshitz-Gilbert (LLG) equation so that they precess around the effective field inferred by our network. In our discussion, we use a spin evolution method where the magnetic moments are immediately adjusted to be parallel with the effective field in a step (greedy method) because it is the simplest method for evaluating our network. Details of the recursive process are explained in the "Methods" section. Figure 3b shows how the magnetic energy evolves with the simple recursive process in which no field modification is used and the greedy method is applied to the magnetic moments. Energy is calculated through the dot product of the spin vector and the effective field. When the energy calculated from the true effective field is compared with the energy calculated from the estimated effective field, the accuracy is approximately 99.95%.
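The recursive process with the greedy update can be sketched as follows: estimate the effective field, align each moment with its local field by unit normalization, and refeed the result. Here `model` stands for any trained spins-to-fields estimator, such as the FCN sketched above, and the energy bookkeeping uses a negative dot-product convention, which is an assumption consistent with energy decreasing as spins align with the field.

```python
# Greedy recursive evolution: spins_{t+1} = F*(spins_t) / |F*(spins_t)|.
import torch

def greedy_recursion(model, spins, n_iter=10, eps=1e-12):
    """spins: (1, 3, H, W) unit-vector field; returns evolved spins and energies."""
    energies = []
    with torch.no_grad():
        for _ in range(n_iter):
            fields = model(spins)                          # effective field F*
            # energy ~ -sum(S . F); sign convention assumed for illustration
            energies.append(-(spins * fields).sum().item())
            norm = fields.norm(dim=1, keepdim=True).clamp_min(eps)
            spins = fields / norm                          # align S with F*
    return spins, energies
```

For the statistical or dynamic variants mentioned above, the alignment line would be replaced by thermal sampling around the field or by an LLG integration step.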
Although the network is only trained to estimate the effective field from the spin configuration, we find that the initial spin configuration can evolve to a lower energy state during the recursive process. Some truncated magnetic structures are connected, and some connected structures are separated during a recursive process, resulting in the total energy being lowered. Although there are topological energy barriers indicated by the energy peaks in Fig. 3c, transitions among metastable states appear in the recursive process.

The reorganization of the magnetic structure is a notable result. In general, changing the topological structure requires a significant amount of energy, as each metastable state is located at a local energy minimum. Thus, considering that the training is performed only with structures free of thermal fluctuations, it is interesting that escape from the local minimum state naturally occurs in the FCN's recursive process. We speculate that this occurs because the network is trained to estimate the effective field from various metastable spin configurations, and the estimated value is not completely accurate; it therefore reflects the general features of the group of metastable states. Consequently, the spin configuration can change to another plausible state during the recursive process, passing energy barriers among metastable states. If we apply our network to spin systems without globally stable states due to frustration, we expect that various metastable states can be explored during the recursive process.

The initial attempts at the recursive process show that the network suitably learns the general properties of the spin configurations during training. It tends to remove atypical features in the spin configurations and fix them to have the general features learned in the training. These characteristics enable us to apply the network to correct or modify spin configurations. In the following sections, we show several application methods, which fully exemplify the advantages of these aforementioned characteristics.

Application: noise removal and defect correction. One possible network application is denoising, a field in which artificial intelligence is efficiently utilized [31,32]. To see if our network can be effective for this purpose, we intentionally injected random noise and defects into the spin configurations in our datasets. Random noise was injected into the spin configurations using Ŝ′ = L2(Ŝ + αR), where Ŝ′ and Ŝ are the noisy and noiseless spin configurations, respectively, R is a unit vector map randomly oriented in any direction, α is the coefficient for varying the amplitude of the random map, and L2 is the L2-normalization process. The representative case of α = 2.5 is shown in the leftmost column of Fig. 3d. When we feed the noisy spin configuration into the trained network, the noise is almost instantly removed within a few iterations; the energy decrease indicates that the noise has been removed. We also intentionally place defect sites in the spin configuration dataset. The process of injecting defects involves erasing the magnetization information in a specific region of the spin configuration. We use two types of defects: Defect I is made by erasing the middle rows of the data and adding random unit vectors in the erased part (center column of Fig. 3d), whereas Defect II is made by simply erasing a square-shaped center region of the data (rightmost column of Fig. 3d).
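A sketch of the noise and defect injection just described is given below. It assumes the L2 normalization in Ŝ′ = L2(Ŝ + αR) acts on the summed map so that the result remains a unit-vector field at every site (a reading consistent with the constant moment strength noted earlier); array shapes follow a (3, H, W) convention.

```python
# Noise injection S' = L2(S + alpha * R) and square-defect erasure for a
# spin map of shape (3, H, W), where channel 0..2 are (Sx, Sy, Sz).
import numpy as np

rng = np.random.default_rng(0)

def l2_normalize(v, eps=1e-12):
    """Normalize each lattice site's 3-vector to unit length."""
    return v / np.maximum(np.linalg.norm(v, axis=0, keepdims=True), eps)

def add_noise(spins, alpha=2.5):
    r = l2_normalize(rng.normal(size=spins.shape))   # random unit-vector map R
    return l2_normalize(spins + alpha * r)

def defect_square(spins, half=16):
    """Defect II: erase a square-shaped central region (set vectors to zero)."""
    out = spins.copy()
    _, h, w = out.shape
    out[:, h//2 - half:h//2 + half, w//2 - half:w//2 + half] = 0.0
    return out
```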
When we feed the defect-containing spin configuration into the trained network, the defect regions are reconstructed such that they show plausible spin configurations. The recursive process of our trained network also lowers the spin configuration energy, as shown in Fig. 3e; the energy decrease is achieved by removing noise and reconstructing defects. From these results, we clearly observe that the trained network is capable of outputting plausible effective fields, which can be used to construct a spin configuration even when the input magnetization map does not contain complete information. The output is driven toward lower energy and hence becomes one of the most plausible states given the information in the training set. Application: extraction of hidden information from experimental data. Given that the trained network can estimate the effective field from a spin configuration without full information, we feed it simulated test data that contain only one component of the magnetization vector. In Fig. 4a, we see that the network successfully estimates all components (x, y, and z) of the effective field even when the input data contain only one (z) spin configuration component. This capability is fully exhibited when the network is applied to experimental data. Most magnetic microscopy techniques, such as STXM and MOKE microscopy, provide only one axial spin component; the other directional components must therefore be inferred from it. To demonstrate this capability, our network is applied to actual experimental data in which only one magnetization component is measured. Figure 4b and c shows the results when the network input data are experimental magnetic domain images of a [Pt(3 nm)/GdFeCo(5 nm)/MgO(1 nm)]20 multilayer system; detailed information about the experimental setup is given in a previous study 33. The magnetic domains shown in Fig. 4b and c are observed using STXM and MOKE, respectively. We note that the effective field or the in-plane magnetization inferred by our network is valid only if the Hamiltonian used for the training data is applicable to the experimental system. The method is therefore suited to cases where the underlying theoretical model is known but the experimental data do not provide complete information. In our case, all three components of the effective fields are well estimated by the network, even though the experimental data are unnormalized, the image size differs from that of the training data, and only one axial spin component is given. The Hamiltonian used in our training includes the interfacial DMI, typical of Pt/GdFeCo/MgO multilayers 33. As a result, we see in-plane components in the effective field (Fig. 4b, c), as expected in systems where the interfacial DMI induces Néel-type domain walls. Application: generative model. The trained network in this study also has the potential to generate new spin configurations as a generative deep learning model, as shown in Fig. 5a. Details of the generation recursive process are given in the "Methods" section. When we feed a random spin map to the network, the output becomes a plausible spin configuration within a few recursive iterations. Figure 5b shows that feeding a different random map to the network yields another spin configuration.
From these results, the trained network can be considered a generative model that produces a different spin configuration whenever a different random map is seeded. We compare spin configurations generated by several methods: the Monte Carlo (MC) method, the greedy method, and our trained network. In the MC method, we generate the spin configuration by lowering the temperature from above the Curie temperature to zero. In Fig. 5c, a spin configuration like the ones used to train the network is obtained only after thousands of iterations of the MC method. The greedy method corresponds to the MC method at zero temperature. With the greedy method, the result of the final iteration (Fig. 5d) shows multiple skyrmions: not only is the physical result different in this case, but the energy of this multiskyrmion state is higher than that of the spin configurations in the dataset used to train the network. To quantitatively investigate whether the spin configurations generated by the recursive process are physically plausible, we compare the energies of its resultant states with those of the greedy and MC methods (Fig. 5e). The energy values from the recursive process are minimized within a few iterations, whereas the greedy and MC methods require thousands of iterations to reach a comparably minimized energy state. This clearly shows that the generation method using the recursive process proposed in this study can produce a new metastable spin configuration at a much lower computational cost. Additionally, since we use an FCN, we can generate a new spin configuration of any size by feeding in a random map of the desired size other than 128 × 128 (the size used for training). This again exemplifies the advantage of our network as a spin configuration generator. Application: addition of external fields. Experimentally, it is well known that when an appropriate out-of-plane (z-direction) external field is applied, the labyrinth spin configuration changes to magnetic skyrmions before all magnetic moments become uniformly aligned as the out-of-plane field is increased further [34][35][36][37]. Since the trained FCN estimates the effective field without any external field, we can include additional fields in the recursive process to observe how an external field modifies the original structure. We add the external field in the field-modification step of the recursive process, so that the total effective field used to produce the magnetization map in the next iteration becomes F' = F* + H ẑ. Other types of effective field, such as an anisotropy field for weak anisotropy energy or a Langevin field for thermal fluctuations, can be added similarly when necessary. Details of the field addition in the recursive process are given in the "Methods" section. We use a labyrinth spin configuration as the initial state. As shown in Fig. 6a, the labyrinth structure starts to break into smaller domains when a field of H_z,ext = 0.03 is applied. At H_z,ext = 0.05, skyrmion spin configurations appear. When the field is increased further, the skyrmion configuration gradually disappears (H_z,ext = 0.07) and the magnetization gradually saturates out of plane (H_z,ext = 0.09). To confirm that these results are reasonable, we apply the external field in the same way with the MC method (Fig. 6b).
In the MC method, we find the spin configuration by decreasing the temperature from above the Curie temperature to zero while applying the external field. The results of applying the field to the trained FCN and to the MC method are strikingly similar. Figure 6c shows the magnetization as a function of the external field; the two curves from our FCN and the MC method show almost identical field-dependent magnetizations. Although the network is trained only to estimate the effective field without any external field, we confirm that adding external fields to our method generates physically plausible states. Conclusion We devised a novel method based on a deep learning technique to estimate the effective field information of spin configurations. An FCN was trained using various spin configurations generated by a simulated annealing process. We confirmed that the trained network can estimate the effective fields of input spin configurations even though we did not provide the explicit Hamiltonian parameters used in the data generation process. Through the recursive process introduced in this study, we found a surprising feature of the trained network: it prefers to make the output spin configurations more stable, or more plausible, than the input spin configurations. We exploited these useful features to devise several application methods for various purposes, such as noise reduction, defect correction, estimation of external-field responses, and inference of hidden information in underinformed experimental data. Generating plausible spin configurations at a lower computational cost is another possible application of the trained network, as presented in this study. We believe that the interesting properties and broad applicability of our method can serve as novel numerical tools in many other scientific research areas. Methods Dataset generation. The dataset is chosen to evaluate whether the network structure can properly estimate the effective fields from the spin configurations. The input data should be well characterized under certain conditions while retaining a variety of structures. Therefore, in this study, we generate magnetic labyrinth configurations as the dataset. Such configurations have been extensively studied in two-dimensional magnetic systems because of their potential for new spin device applications, and it is well known that a phase transition to a skyrmion structure occurs under an external field. These properties make them well suited for evaluating our network. To implement two-dimensional magnetic systems, we use the Heisenberg spin model on a square lattice of size 128 × 128. The magnetic labyrinth configurations are generated under the Hamiltonian shown in Eq. (1), where S is a normalized spin vector, J is the exchange parameter, and D_ij is the DMI vector; i and j are spin-site indices, and the summation runs over every nearest-neighbor pair. The ratio between J and |D_ij| determines the length scale of the magnetic structure, and we choose J/|D_ij| = 1/0.3 so that sufficient structure fits within the simulation size. The effective field of the spin configuration is also obtained from Eq. (1) as F = −∇_S H. A simulated annealing process is used to generate the various labyrinth spin configurations; the temperature of the system is gradually decreased from above the Curie temperature to zero.
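Since the extracted text does not reproduce Eq. (1) explicitly, the sketch below assumes the standard nearest-neighbor Heisenberg plus interfacial-DMI form H = −J Σ_<ij> S_i·S_j − Σ_<ij> D_ij·(S_i × S_j) with D_ij = D (ẑ × r̂_ij); the DMI sign conventions are therefore an assumption, not the paper's stated choice:

```python
import numpy as np

J, D = 1.0, 0.3   # J/|D_ij| = 1/0.3, as in the text

def effective_field(S):
    """F_i = -dH/dS_i on a periodic square lattice; S has shape (L, L, 3).

    Exchange part: J times the sum of the four nearest-neighbor spins.
    DMI part (assumed interfacial convention): for x-bonds D_ij || +y,
    for y-bonds D_ij || -x; bonds in the opposite direction carry -D_ij.
    """
    F = J * (np.roll(S, 1, 0) + np.roll(S, -1, 0) +
             np.roll(S, 1, 1) + np.roll(S, -1, 1))
    xhat = np.array([1.0, 0.0, 0.0])
    yhat = np.array([0.0, 1.0, 0.0])
    F += np.cross(np.roll(S, -1, 1) - np.roll(S, 1, 1), D * yhat)   # x-bonds
    F += np.cross(np.roll(S, -1, 0) - np.roll(S, 1, 0), -D * xhat)  # y-bonds
    return F

def total_energy(S):
    # pairwise Hamiltonian recovered from the field; 1/2 avoids double counting
    return -0.5 * np.sum(S * effective_field(S))
```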
The total number of generated data points is 30,100, which we divide into three subdatasets: training, validation, and test, comprising 25,000, 5000, and 100 spin configurations, respectively. Network structure and loss function. The goal of this study is to devise an algorithm for estimating the effective fields from the spin configurations using deep learning. We construct a neural network that obtains the effective field from the input spin configuration. The structure is similar to an autoencoder, with an encoder and a decoder. The encoder, composed of four FCN layers with 8, 16, 32, and 64 filters, abstracts the spin configuration; the filter sizes are 3 × 3. Since our spin configuration dataset is generated under periodic boundary conditions, we add a periodic padding process in front of every FCN layer so that training occurs under the same conditions. After every FCN layer in the encoder, we attach a batch normalization layer, a rectified linear unit (ReLU) activation, and a max-pooling layer with a 3 × 3 pooling size. The decoder decodes the abstracted information into the effective field. It is constructed from four upsampling blocks, each composed of an upsampling layer with a 2 × 2 filter and an FCN layer with a 3 × 3 filter. The numbers of filters for the FCN layers in the upsampling blocks are 32, 16, 8, and 3, respectively. After the decoder, we add one final FCN layer with three filters. The periodic padding process used in the encoder is also added in front of every FCN layer in the decoder. A batch normalization layer and ReLU activation follow every FCN layer in the decoder except the last one. The input and output data have the same dimensions, [400, 128, 128, 3]: the input data are hundreds of spin configurations generated under the Hamiltonian shown in Eq. (1), and the output data are hundreds of two-dimensional maps of three-dimensional vectors. We want to train the network so that the output vector maps become the effective fields of the input spin configurations. Therefore, the mean squared error (MSE) ||F − F*||^2 is used as the loss function, where F is the true effective field and F* is the estimated effective field. The difference between the true effective fields of the input spin configurations and the output vector maps is used as the total loss of our network, which is minimized during training. Minimizing the total loss means that the output vector maps become identical to the true effective fields of the input data; thus, after training, our network can appropriately estimate the effective fields of the input data. The Adam optimizer is adopted to minimize the total loss with a learning rate of 0.01. Recursive process. In the recursive process shown in Fig. 3a, the spin configuration is fed as input data, and the trained FCN estimates the effective field. In the field-modification step, the field can be changed as desired. In most of our discussion we do not modify the field (F' ← F*), but in the final discussion on the effect of the external field, we add a constant field to the field from the FCN (F' ← F* + H ẑ). The effective field is then used to change the spin configuration in the spin-evolution step; a suitable method can be applied depending on the purpose. In our study, we simply align the spin direction parallel to the effective field, S' ← F'/|F'|.
This process is repeated until the output condition is satisfied.
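A compact PyTorch sketch of the encoder-decoder FCN and training objective described under "Network structure and loss function" (we use `padding_mode='circular'` for the periodic padding, and assume stride-2 pooling so that the four pooling steps invert cleanly against the four 2x upsampling blocks; the text specifies only the 3 × 3 pooling size, so the stride is our assumption):

```python
import torch
import torch.nn as nn

def conv3(cin, cout):
    # 3x3 convolution with circular padding, matching the periodic
    # boundary condition of the training data
    return nn.Conv2d(cin, cout, 3, padding=1, padding_mode='circular')

class EffectiveFieldFCN(nn.Module):
    def __init__(self):
        super().__init__()
        enc, cin = [], 3
        for cout in (8, 16, 32, 64):          # encoder: FCN + BN + ReLU + pool
            enc += [conv3(cin, cout), nn.BatchNorm2d(cout), nn.ReLU(),
                    nn.MaxPool2d(3, stride=2, padding=1)]
            cin = cout
        self.encoder = nn.Sequential(*enc)
        dec = []
        for cout in (32, 16, 8, 3):           # decoder: upsample + FCN + BN + ReLU
            dec += [nn.Upsample(scale_factor=2), conv3(cin, cout),
                    nn.BatchNorm2d(cout), nn.ReLU()]
            cin = cout
        dec += [conv3(3, 3)]                  # last FCN layer: no BN / ReLU
        self.decoder = nn.Sequential(*dec)

    def forward(self, spins):                 # spins: (N, 3, H, W), any H, W
        return self.decoder(self.encoder(spins))

model = EffectiveFieldFCN()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()                        # ||F - F*||^2 between field maps
out = model(torch.randn(1, 3, 128, 128))      # fully convolutional: size-agnostic
```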
Effect of early tracheostomy in mechanically ventilated patients Objective To investigate the effect of the timing of tracheostomy in patients who required prolonged mechanical ventilation using two methods: analysis of early versus late tracheostomy and landmark analysis. Study Design Retrospective cohort study. Methods Patients who were emergently intubated and admitted into the intensive care unit or high dependency unit between January 2011 and August 2016, with or without tracheostomy, were included. In the early and late tracheostomy analysis, all patients were divided into early (≤10 days, n = 88) and late (>10 days, n = 132) groups. In the landmark analysis, 198 patients requiring ventilation for more than 10 days were divided into early tracheostomy (≤10 days, n = 57) and nonearly tracheostomy (>10 days, n = 141) groups. We compared 60‐day ventilation withdrawal rate and 60‐day mortality. Results Early tracheostomy was a significant factor for early ventilation withdrawal, as shown by log‐rank test results (early and late tracheostomy: P = .001, landmark: P = .021). Multivariable analysis showed that the early group was also associated with a higher chance of ventilation withdrawal in each analysis (early and late tracheostomy: adjusted hazard ratio [aHR] = 1.69, 95% confidence interval [CI] = 1.20–2.39, P = .003; landmark: aHR = 1.61, 95% CI = 1.06–2.38, P = .027). Early tracheostomy, however, was not associated with improved 60‐day mortality (early and late tracheostomy: aHR = 0.88, 95% CI = 0.46–1.69, P = .71; landmark: aHR = 1.46; 95% CI = 0.58–3.66; P = .42). Conclusion For patients requiring ventilation, performing tracheostomy within 10 days of admission was independently associated with shortened duration of mechanical ventilation; 60‐day mortality was not associated with the timing of tracheostomy. Level of Evidence 2b INTRODUCTION Tracheostomy is a well-established procedure for critically ill patients requiring prolonged mechanical ventilation. This procedure is invasive and carries some risk; however, it also has advantages, including decreased tube dead space and breathing effort compared with endotracheal intubation. Clinicians are required to consider the risk-benefit profile for each patient; however, the ideal timing for performing tracheostomy remains unclear. In 1989, the National Association of Medical Directors of Respiratory Care recommended, based on expert opinion alone, that translaryngeal intubation should be reserved for patients requiring <10 days of mechanical ventilation. Furthermore, they recommended that tracheostomy should be performed in patients requiring intubation beyond 21 days. 1 However, to date, there is no recommendation or guideline that has been based on objective evidence. Recently, the potential advantage of early tracheostomy has attracted considerable attention; several retrospective and prospective studies have suggested a clinical benefit of early tracheostomy on patients requiring prolonged mechanical ventilation. [2][3][4][5][6][7][8][9][10][11] However, the design of these trials is insufficient to investigate the effect of early tracheostomy. For example, in retrospective studies, it is difficult to match patients because there are no consistent indication criteria for tracheostomy. The timing of tracheostomy is affected by various factors, including illness severity, clinical physician preference, patient and family requests, and hospital resources. 
For example, in patients with a favorable prognosis, clinicians might be more likely to perform early tracheostomy, causing selection bias. Furthermore, as tracheostomy timing is variable, one must consider immortal time bias: patients undergoing tracheostomy must be alive before the surgery is performed. Therefore, the time-to-event for patients undergoing late tracheostomy is necessarily longer than that for patients undergoing early tracheostomy, even if the timing does not affect the event. In addition, patients undergoing late tracheostomy appear to have a longer duration of ventilation dependence. Randomized prospective trials could eliminate the risk of such selection and immortal time biases; however, no criteria or tools are currently available to predict accurately upon admission which patients might require prolonged ventilation support, and there is a risk of over-treatment with tracheostomy. Such criteria are crucial for safe prospective trials of early tracheostomy; therefore, they are recommended targets for future research. Evidence-based guidance for tracheostomy timing is long overdue; however, establishing this evidence remains difficult. To address this, we constructed a double-analysis approach: first, we compared patients undergoing early and late tracheostomy; and second, we performed a landmark analysis using a tertiary referral center cohort of ventilation-dependent critically ill patients. In the tracheostomy analysis, we divided the patients into early (≤10 days) and late (>10 days) tracheostomy groups, and compared the duration of ventilation dependence from time of tracheostomy to withdrawal or death. The landmark analysis included ventilated patients who had not undergone tracheostomy, and we evaluated the effect of early tracheostomy on patients with prolonged mechanical ventilation use (>10 days). Our objective was to determine the benefits of early tracheostomy using these methods. METHODS This retrospective cohort study was conducted in accordance with the Declaration of Helsinki and approved by the institutional review board of Teine-Keijinkai Hospital, Sapporo, Japan. Patient Selection Between January 2011 and August 2016, the following patients were included in the study: 1) Tracheostomy group: patients emergently intubated in the emergency room (ER) and admitted to an intensive care unit (ICU) or high dependency unit (HDU) who subsequently underwent tracheostomy; and 2) Nontracheostomy group: patients emergently intubated in the ER who were admitted to ICU/HDU without tracheostomy. TRACHEOSTOMY GROUP. All medical records of the 337 patients who were intubated in the ER because of emergent respiratory distress, admitted to ICU/HDU, and underwent tracheostomy due to prolonged mechanical ventilation were reviewed retrospectively. The exclusion criteria were as follows: age <20 years, ventilation withdrawal before undergoing tracheostomy, tracheostomy for control of suctioning, and upper airway obstruction (deep neck infection, neck trauma, or difficulty with laryngeal intubation). After application of the above exclusion criteria, the remaining 220 patients were reviewed retrospectively. NONTRACHEOSTOMY GROUP. A total of 946 patients were identified who had been intubated in the ER because of emergent respiratory distress, admitted to ICU/HDU, and were withdrawn from ventilation or died without undergoing tracheostomy.
The exclusion criteria were as follows: death within 24 hours of intubation; age <20 years; and intubation due to upper airway obstruction or for other investigations, such as bronchoscopy. After application of the exclusion criteria, 563 patients remained. Of these, 110 were chosen at random using a computer algorithm and reviewed retrospectively. A group size of 110 was chosen for the control group because it was half the size of the tracheostomy group, which is suitable for statistical analysis. Study Analysis This study consisted of two separate analyses: an early and late tracheostomy analysis and a landmark analysis. We selected patients for each analysis from the tracheostomy and nontracheostomy groups. Study enrollment is detailed in Figure 1. In the early and late tracheostomy analysis, all the patients in the tracheostomy group were assigned to the early (tracheostomy performed ≤10 days after admission) or late (>10 days after admission) groups, and we compared the outcomes between the groups. However, using this analysis, we were not able to evaluate the patients who required ventilation for a long duration, because there was a considerable difference in the median day of ventilation withdrawal or death from intubation between patients undergoing early (median = 7 days; range = 2-18 days) and late (median = 14 days; range = 5-32 days) tracheostomy. Furthermore, to evaluate the effectiveness of tracheostomy, it was important to include in our analysis patients who were withdrawn from ventilation without tracheostomy (nontracheostomy group). The nontracheostomy group also showed early ventilation withdrawal or death (median = 3 days; range = 2-7 days). To avoid any selection bias due to a higher chance of ventilation withdrawal or death among the early tracheostomy and nontracheostomy groups compared with the late tracheostomy group (which had a longer time-to-outcome, a form of immortal time bias), we performed a landmark analysis by excluding patients in the tracheostomy and nontracheostomy groups who died or were withdrawn from ventilation before the landmark, which was set at day 10 from endotracheal intubation. This type of analysis was introduced by Anderson et al. to match the conditions within each group. 12,13 In this analysis, patients who underwent tracheostomy within 10 days of admission were categorized into the early-tracheostomy (ET) group, while patients who underwent tracheostomy more than 10 days after admission (from the tracheostomy group), or were withdrawn from ventilation or died after 10 days without tracheostomy (from the nontracheostomy group), were categorized into the non-ET group, as sketched in the code below. We used day 10 as the landmark point and for defining early tracheostomy for the following reasons: previous trials 14,15 have defined early tracheostomy as that performed within 7, 10, or 14 days of admission; the median duration from intubation to tracheostomy in our study was 12 days; and we sought to investigate the effect of tracheostomy at days earlier than our current average. There are currently no set criteria for the timing of tracheostomy; however, the attending physician evaluated daily whether patients could be weaned from ventilation, and tracheostomy was considered via clinical evaluation. Written informed consent for tracheostomy was obtained from the patients or their family.
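To make the landmark grouping concrete, here is a sketch in pandas; the column names `trach_day`, `withdrawal_day`, and `death_day` (all counted from intubation, NaN if the event did not occur) are illustrative, not the study's variable names:

```python
import pandas as pd

LANDMARK = 10  # day 10 from endotracheal intubation

def landmark_cohort(df: pd.DataFrame) -> pd.DataFrame:
    """Return the landmark-analysis cohort with an ET / non-ET label."""
    # keep only patients still ventilated and alive at the landmark
    first_event = df[["withdrawal_day", "death_day"]].min(axis=1)
    cohort = df[first_event.fillna(float("inf")) > LANDMARK].copy()
    # ET: tracheostomy on or before day 10; late tracheostomy and
    # no-tracheostomy patients both fall into non-ET (NaN compares False)
    cohort["group"] = (cohort["trach_day"] <= LANDMARK).map(
        {True: "ET", False: "non-ET"})
    return cohort
```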
Open tracheostomy was the most common procedure, but percutaneous dilatational tracheostomy was also performed in selected patients. Variables We collected the following data from the patients: demographic and clinical data; number of ventilation-dependent days; time to tracheostomy (if performed); number of days with use of intravenous medication, such as opioid analgesics, sedatives, or antibiotics; total ICU/HDU and overall hospital stay; Acute Physiology and Chronic Health Evaluation 16 (APACHE II) score in the first 24 hours of ICU/HDU admission, which estimates severity of disease and risk of death; and Charlson Comorbidity Index 17 (CCI) score, which estimates the risk of death due to a selection of comorbid conditions. Outcome Measures The primary outcome was mechanical ventilation withdrawal by day 60. Ventilation withdrawal was defined as maintaining spontaneous breathing for at least 2 days. Day 60 was chosen because the withdrawal rate by 60 days from intubation was high, and the timing of tracheostomy was not thought to affect cases requiring mechanical ventilation for more than 60 days. The secondary outcome was 60-day survival. In both analyses, we evaluated the primary and secondary outcomes. Statistical Analysis Time-to-event data for the primary and secondary outcomes were estimated using the Kaplan-Meier method, and group comparisons were performed using a log-rank test. In the early and late tracheostomy analysis, the outcomes were measured not as time postintubation but as time post-tracheostomy. In the landmark analysis, the outcomes were measured as time postintubation because the timing of tracheostomy did not interfere with the duration of events. For the primary outcome (60-day ventilation withdrawal), patients who were lost to follow-up or died within 60 days were censored at the date of last follow-up. For the secondary outcome (60-day survival), only patients who were lost to follow-up were censored at the date of last follow-up. Patients who were withdrawn from mechanical ventilation or died after 60 days were also censored at day 60. For each analysis, univariate and multivariate Cox proportional hazards regression models were constructed. In the multivariate analyses, the hazard ratio (HR) was adjusted for age, sex, APACHE-II score, CCI score, admission diagnosis, hypoxic encephalopathy, head trauma and stroke, cardiovascular disease, respiratory disease (infection or chronic obstructive pulmonary disease), and nonrespiratory infectious diseases. We compared categorical parameters using Fisher's exact test, and continuous parameters using the unpaired t test or Mann-Whitney U test, where appropriate. Data analysis was performed using EZR, a graphical user interface for R (The R Foundation for Statistical Computing, Vienna, Austria); this modified version of R Commander adds statistical functions frequently used in biostatistics. 18 A significance level of P < .05 was used. RESULTS Baseline Patient Characteristics In the early and late tracheostomy analysis, a total of 220 patients in the tracheostomy group were included and divided into early (≤10 days, n = 88) and late (>10 days, n = 132) tracheostomy groups. The landmark analysis included patients requiring ventilation support for at least 10 days.
Patients who were weaned off the ventilator or died before day 10 were excluded from the tracheostomy and nontracheostomy groups. We excluded 132 patients; therefore, the landmark analysis cohort included 198 patients (Fig. 1). In the landmark group, 57 and 141 patients were categorized into the ET and non-ET groups, respectively. The baseline characteristics of both groups were similar in both analyses (Table I). 60-Day Ventilation Withdrawal In the early and late tracheostomy analysis, median ventilation-dependent time by day 60 was 9 days (95% CI = 5-13) in the early tracheostomy group, significantly shorter than that in the late tracheostomy group (median = 20 days; 95% CI = 14-40; P = .001; Fig. 2a). In the landmark analysis, median ventilation-dependent time by day 60 was 27 days (95% CI = 20-36) in the ET group, significantly shorter than that in the non-ET group (median = 37 days; 95% CI = 31-55; P = .021; Fig. 2b). Multivariate analysis showed that performing tracheostomy within 10 days of admission was associated with a significantly higher chance of mechanical ventilation withdrawal (early and late tracheostomy analysis: adjusted hazard ratio [aHR] = 1.69; 95% CI = 1.20-2.39; P = .003; landmark analysis: aHR = 1.61; 95% CI = 1.06-2.38; P = .027; Table II). 60-Day Survival The Kaplan-Meier curves of 60-day survival showed no significant benefit of early tracheostomy in either the early and late tracheostomy analysis or the landmark analysis (early and late: aHR = 0.88; 95% CI = 0.46-1.69; P = .71; landmark: aHR = 1.46; 95% CI = 0.58-3.66; P = .42; Table III). Other Clinical Outcomes In the early and late tracheostomy analysis, the early tracheostomy group had significantly shorter ICU/HDU and overall hospital admission durations, and less medication use, than the late tracheostomy group (Table IV). In the landmark analysis, the ET group tended to have better clinical outcomes, such as shorter hospital admission duration and less antibiotic use, than the non-ET group; however, these differences did not reach statistical significance. Patients in the ET group tended to have fewer complications. No deaths were directly attributed to complications arising from tracheostomy. DISCUSSION Several previous trials have evaluated the clinical effect of early tracheostomy; however, the best timing for tracheostomy in patients requiring prolonged ventilation remains to be elucidated. Our analyses revealed that early tracheostomy within 10 days significantly decreased the degree of ventilation dependence at 60 days; however, it did not improve 60-day survival. Our findings are consistent with previous reports. Furthermore, we argue that our analysis is less susceptible to bias and therefore makes a significant contribution to the literature. Prospective trials are hindered by uncertainty regarding inclusion criteria and lack of guidelines. In the TracMan prospective trial, almost half of the patients assigned to the late tracheostomy (≥10 days) group were withdrawn from ventilation without tracheostomy. 15 Based on this result, if the groups were comparable, tracheostomy could have been avoided for half of the patients undergoing early tracheostomy (≤4 days). Similarly, a meta-analysis of 12 randomized controlled trials (RCTs) showed a much higher tracheostomy rate in patients undergoing early tracheostomy (87% vs. 53%).
19 In these trials, the patients may have been excessively treated and exposed to the complications related to this surgery. Although our study reported no life-threatening complications of tracheostomy, avoiding unnecessary treatment is important in clinical practice. Our approach is unique in that it uses two complementary analyses to reduce selection and immortal-time biases. In the early and late tracheostomy analysis, we defined the time-to-event as the time from tracheostomy to the event, which reduced immortal-time bias. However, this analysis did not include patients who did not undergo tracheostomy. To assess this, we sought to compare the clinical outcomes among patients who were intubated with or without tracheostomy; therefore, we added the landmark analysis. Comparing outcomes between tracheostomy and nontracheostomy groups can introduce selection bias from factors that differ between the groups; therefore, we included only the patients who required ventilation until the landmark of day 10. Patients who were successfully extubated or died soon after early tracheostomy were excluded. This reduced any biases caused by the time-dependent variable: the timing of tracheostomy. To the best of our knowledge, this is the first report to apply landmark analysis to investigate the effect of early tracheostomy. Furthermore, the primary outcome of early tracheostomy was consistent in both analyses. In our analysis, early tracheostomy did not improve 60-day survival, which is consistent with the findings of two RCTs and a meta-analysis. 7,20,21 In contrast, another RCT and two meta-analyses have shown a mortality benefit with early tracheostomy. 6,14,19 Thus, the association between early tracheostomy and improved survival remains controversial. In the early and late tracheostomy analysis performed in the present study, early tracheostomy was significantly associated with shorter ICU/HDU and overall hospital stays, and reduced medication use. We expected that early tracheostomy would be associated with these outcomes in the landmark analysis also; however, the results were not statistically significant. We hypothesize that the non-ET patients in the landmark analysis who were ventilated for a longer period required intensive care and medication unrelated to early tracheostomy intervention, which may explain why these additional clinical benefits of early tracheostomy could not be demonstrated in the landmark analysis. Furthermore, in both analyses, the ET group tended to have fewer complications than the late tracheostomy or non-ET group; however, these differences did not reach statistical significance. These outcomes might contribute to improvements in cost-effectiveness and quality of life. Further studies with validated outcome measures for cost-effectiveness, quality of life, and tracheostomy complications would be useful to evaluate the potential benefits of early tracheostomy. Our findings demonstrated the effectiveness of early tracheostomy for critically ill patients who required prolonged ventilation. However, a tool to prospectively predict the need for prolonged ventilation is yet to be developed. In the present study, 28 of the 88 patients who underwent early tracheostomy were withdrawn from ventilation within 10 days of intubation. We hypothesize that the effectiveness of early tracheostomy in these patients may be limited. To avoid unnecessary early tracheostomy, it is crucial to analyze the indicators for predicting longer ventilation requirements at the time of endotracheal intubation.
By applying such tools prospectively, the effect of early tracheostomy might be assessed more accurately. CONCLUSION We performed an early and late tracheostomy analysis, and a landmark analysis to avoid selection and immortal-time bias, and to assess the effectiveness of tracheostomy in critically ill patients. Early tracheostomy performed within 10 days of admission was significantly associated with an earlier ventilation withdrawal of patients in both analyses. Further studies are needed to predict which patients require prolonged ventilation support, and to investigate the clinical benefits of early tracheostomy.
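For readers who want to mirror the time-to-event comparisons described under "Statistical Analysis", a minimal sketch with the lifelines package follows; the durations below are synthetic stand-ins, not the study data:

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
# synthetic days to ventilation withdrawal, censored at day 60
t_early = np.minimum(rng.exponential(12, 88), 60)
e_early = (t_early < 60).astype(int)   # 1 = withdrawn, 0 = censored at day 60
t_late = np.minimum(rng.exponential(22, 132), 60)
e_late = (t_late < 60).astype(int)

# Kaplan-Meier estimate for one group (plot with kmf.plot_survival_function())
kmf = KaplanMeierFitter()
kmf.fit(t_early, event_observed=e_early, label="early (<=10 days)")

# log-rank comparison between the two groups
result = logrank_test(t_early, t_late,
                      event_observed_A=e_early, event_observed_B=e_late)
print(result.p_value)
```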
Phase transition from a $d_{x^2-y^2}$ to $d_{x^2-y^2}+d_{xy}$ superconductor Angsula Ghosh and Sadhan K Adhikari We study the phase transition from a $d_{x^2-y^2}$ to $d_{x^2-y^2}+d_{xy}$ superconductor using the tight-binding model of two-dimensional cuprates. As the temperature is lowered past the critical temperature $T_c$, first a $d_{x^2-y^2}$ superconducting phase is created. With further reduction of temperature, the $d_{x^2-y^2}+d_{xy}$ phase is created at temperature $T=T_{c1}$. We study the temperature dependencies of the order parameter, specific heat and spin susceptibility in these mixed-angular-momentum states on a square lattice and on a lattice with orthorhombic distortion. The above-mentioned phase transitions are identified by two jumps in the specific heat, at $T_c$ and $T_{c1}$. In spite of many theoretical and experimental studies on high-$T_c$ cuprates, the exact symmetry of the order parameter is still a subject of active research [1]. However, there is evidence that the cuprates have singlet d-wave Cooper pairs and the order parameter has $d_{x^2-y^2}$ symmetry in two dimensions [1]. Recent measurements [2] of penetration depth and superconducting specific heat at different temperatures $T$, and related theoretical analyses [3,4], also support this. However, several phase-sensitive measurements of the order parameter of the cuprates indicate a significant mixing of a distinct angular momentum component with a predominant $d_{x^2-y^2}$ state at temperatures below a second critical temperature $T_{c1}$. For temperatures between $T_{c1}$ and $T_c$ only the $d_{x^2-y^2}$ state survives. Below $T_{c1}$ the order parameter can have a mixed-symmetry state of the type $d_{x^2-y^2}+\exp(i\theta)\chi$, where $\chi$ represents a state of different symmetry. The most probable possibilities for $\chi$ are the $s$ or $d_{xy}$ wave. The possibility of a mixed $(s-d)$-wave symmetry was first suggested theoretically by Ruckenstein et al. and Kotliar [5]. There is experimental evidence, based on the Josephson supercurrent for tunneling between a conventional s-wave superconductor (Pb) and twinned or untwinned single crystals of YBa$_2$Cu$_3$O$_7$ (YBCO), that YBCO has mixed $d_{x^2-y^2}\pm s$ or $d_{x^2-y^2}\pm is$ symmetry [6] at lower temperatures. Recently, the existence of these mixed-symmetry states has been explored to explain the nuclear magnetic resonance data in the superconductor YBCO and the Josephson critical current observed in YBCO-SNS and YBCO-Pb junctions [7]. Kouznetsov et al. [8] performed c-axis Josephson tunneling experiments by depositing a conventional superconductor (Pb) across a single twin boundary of a YBCO crystal.
By measuring the critical current as a function of the angle and magnitude of a magnetic field applied in the plane of the junction, they also found evidence of a mixed-symmetry order parameter in YBCO involving $d_{x^2-y^2}$ and $s$ waves. By measuring the microwave complex conductivity in the superconducting state of high-quality YBa$_2$Cu$_3$O$_{7-\delta}$ single crystals at 10 GHz using a high-Q Nb cavity, Sridhar et al. also suggested the existence of a multicomponent superconducting order parameter in YBCO [9]. A similar conclusion on the existence of mixed-symmetry states may also be drawn from the results of the angle-resolved photoemission spectroscopy experiment by Ma et al., in which a temperature-dependent gap anisotropy in oxygen-annealed Bi$_2$Sr$_2$CaCu$_2$O$_{8+x}$ was detected [10]. The measured gaps along the directions $\Gamma-M$ and $\Gamma-X$ are nonzero at low temperatures, and their ratio was strongly temperature dependent. Using Ginzburg-Landau theory, Betouras and Joynt [11] demonstrated that one way of explaining this behavior is to employ a mixed-symmetry state of the $d_{x^2-y^2}+s$-wave type. They also conclude that the actual symmetry of the order parameter should vary substantially from one compound to another and for different levels of doping. This also suggests the possible appearance of a $d_{x^2-y^2}+d_{xy}$ state under favorable conditions. More recently, Krishana et al. [12] reported a phase transition in the superconductor Bi$_2$Sr$_2$CaCu$_2$O$_8$ induced by a magnetic field, from a study of the thermal conductivity as a function of temperature and applied field. Laughlin [13] provided a theoretical explanation of the observation by Krishana et al. [12] that for a weak magnetic field a time-reversal symmetry breaking state of mixed symmetry is induced in Bi$_2$Sr$_2$CaCu$_2$O$_8$. From a study of a vortex in a d-wave superconductor using a self-consistent Bogoliubov-de Gennes formalism, Franz and Tešanović [13] also predicted the possibility of a superconducting state of mixed symmetry. This mixed-symmetry state is likely to be a minor $s$ or $d_{xy}$ component superposed on a $d_{x^2-y^2}$ state for $T<T_{c1}$. From different experimental observations it is now generally accepted that a time-reversal symmetry breaking state of the type $d_{x^2-y^2}+i\chi$ is possible in the presence of an external field or magnetic impurity. This mixed-symmetry state is observed close to such impurities, surfaces/twin boundaries in the ab-plane, or vortices. The nature of the mixed state varies from compound to compound. There are physical reasons for the appearance of these states: either spin-orbit coupling with magnetic impurities or Andreev-reflected bound states, which create internal currents at the boundaries, is responsible for them [14]. However, orthorhombicity plays a crucial role in the generation of time-reversal symmetric mixed states. For example, it is established from a Ginzburg-Landau functional analysis [15] that orthorhombicity favors the development of a $d+s$ state instead of a time-reversal symmetry broken one. Moreover, from a theoretical point of view, time-reversal symmetric states of type $d_{x^2-y^2}+\chi$ are expected to be allowed depending on the orthorhombic distortion. There have been some studies [16] on the phase transition to a $d_{x^2-y^2}+\exp(i\theta)\chi$ phase from a $d_{x^2-y^2}$ phase with $\theta=\pi/2$ and $\chi=d_{xy}$ or an $s$ state. From theoretical considerations we find that there are two possibilities for the phase $\theta$: 0 or $\pi/2$.
For $\theta=0$, we find numerically that there is no stable $d_{x^2-y^2}+s$ phase. Here we study the phase transition to a $d_{x^2-y^2}+d_{xy}$ phase from a $d_{x^2-y^2}$ phase below $T_{c1}$. In particular, we study the temperature dependencies of the order parameter, specific heat, and spin susceptibility in the mixed-symmetry state. There is no suitable microscopic theory for high-$T_c$ superconductors, and there is controversy about a proper description of the normal state and the pairing mechanism for such materials [1]. In the absence of a microscopic theory, a phenomenological tight-binding model in two dimensions with the proper lattice symmetry will be used [17]. This model has been successful in describing many properties of high-$T_c$ materials. We study the temperature dependencies of the specific heat and susceptibility of a $d_{x^2-y^2}+d_{xy}$-wave superconductor with a weaker $d_{xy}$ wave, both on a square lattice and on a lattice with orthorhombic distortion. The order parameter of a $d_{x^2-y^2}+d_{xy}$-wave superconductor has nodes on the Fermi surface and changes sign across it; consequently, its superconducting observables also exhibit power-law dependencies on temperature. On the other hand, the order parameters for the mixed $d_{x^2-y^2}+is$ and $d_{x^2-y^2}+id_{xy}$-wave states do not have a node on the Fermi surface, and the corresponding observables have exponential dependencies on temperature. In the present study on $d_{x^2-y^2}+d_{xy}$-wave states the specific heat exhibits two jumps, at $T=T_{c1}$ and $T=T_c$, which clearly exhibit the phase transition at $T_{c1}$. In the present two-dimensional tight-binding model the effective interaction $V_{\mathbf{kq}}$ for a transition from momentum $\mathbf{q}$ to $\mathbf{k}$ is taken to be separable, and is expanded in terms of some general basis functions $\eta_{i\mathbf{k}}$, labelled by the index $i$, as $V_{\mathbf{kq}}=-\sum_i V_i\,\eta_{i\mathbf{k}}\eta_{i\mathbf{q}}$ [18]. The functions $\eta_{i\mathbf{k}}$ are associated with a one-dimensional irreducible representation of the point group $C_{4v}$ of the square lattice and are appropriate generalizations of the circular harmonics incorporating the proper lattice symmetry. The effective interaction after including the two appropriate basis functions for singlet pairing is taken as $V_{\mathbf{kq}}=-V_1\,\eta_{1\mathbf{k}}\eta_{1\mathbf{q}}-V_2\,\eta_{2\mathbf{k}}\eta_{2\mathbf{q}}$ (0.1), where $\eta_{1\mathbf{q}}\equiv(\cos q_x-\beta\cos q_y)$ corresponds to $d_{x^2-y^2}$ symmetry, $\eta_{2\mathbf{q}}\equiv\sin q_x\sin q_y$ corresponds to $d_{xy}$ symmetry, $\beta=1$ corresponds to a square lattice, and $\beta\neq 1$ represents orthorhombic distortion. In this case the quasiparticle dispersion relation is given by $\epsilon_{\mathbf{k}}=-2t[\cos k_x+\beta\cos k_y-\gamma\cos k_x\cos k_y]$, where $t$ and $\beta t$ are the nearest-neighbour hopping integrals along the in-plane $a$ and $b$ axes, respectively, and $\gamma t/2$ is the second-nearest-neighbour hopping integral. The energy $\epsilon_{\mathbf{k}}$ is measured with respect to the Fermi surface. At a finite temperature $T$, one has the following BCS equation: $\Delta_{\mathbf{k}}=-\sum_{\mathbf{q}}V_{\mathbf{kq}}\,\frac{\Delta_{\mathbf{q}}}{2E_{\mathbf{q}}}\tanh\frac{E_{\mathbf{q}}}{2k_BT}$ (0.2), with $E_{\mathbf{q}}=[(\epsilon_{\mathbf{q}}-E_F)^2+|\Delta_{\mathbf{q}}|^2]^{1/2}$, where $E_F$ is the Fermi energy and $k_B$ the Boltzmann constant. The order parameter has the following anisotropic form: $\Delta_{\mathbf{q}}=\Delta_1\eta_{1\mathbf{q}}+C\,\Delta_2\eta_{2\mathbf{q}}$ (0.3), where $C$ is a complex number of unit modulus, $|C|^2=1$. If we substitute Eqs. (0.1) and (0.3) into the BCS equation (0.2), one can separate the resultant equation into its real and imaginary parts. The resultant equations only have solutions for real $\Delta_1$ and $\Delta_2$ when the complex parameter $C$ is either purely real or purely imaginary. The solution for purely imaginary $C$, e.g., $C=i$, has been extensively studied in relation to the mixed $d_{x^2-y^2}+is$ and $d_{x^2-y^2}+id_{xy}$ states [16]. Here we consider the solution for $C=1$, i.e., the $d_{x^2-y^2}+d_{xy}$ state. Using the form (0.3) of $\Delta_{\mathbf{q}}$ with $C=1$ and the potential (0.1), Eq.
(0.2) becomes the following coupled set of BCS equations: $\Delta_i=V_i\sum_{\mathbf{q}}\eta_{i\mathbf{q}}\,\frac{\Delta_{\mathbf{q}}}{2E_{\mathbf{q}}}\tanh\frac{E_{\mathbf{q}}}{2k_BT}$, $i=1,2$, where both interactions $V_1$ and $V_2$ are assumed to be energy-independent constants for $|\epsilon_{\mathbf{q}}-E_F|<k_BT_D$ and zero for $|\epsilon_{\mathbf{q}}-E_F|>k_BT_D$. In Figs. 1 and 2 we plot the temperature dependencies of the different $\Delta$'s for two parameter sets of the $d_{x^2-y^2}+d_{xy}$ wave, corresponding to models 1 and 2, respectively. In all cases, as the temperature is lowered past $T_c$, the parameter $\Delta_1$ increases until $T=T_{c1}$ is reached. As $T$ is lowered further, $\Delta_2$ becomes nonzero at $T=T_{c1}$ and begins to increase. As the temperature is lowered, both $\Delta_1$ and $\Delta_2$ first increase and then attain constant values at zero temperature. The superconducting and normal specific heats are plotted in Figs. 3 and 4 for the square lattice [models 1(a) and 1(b)] and for orthorhombic distortion [models 2(a) and 2(b)], respectively. In both cases the specific heat exhibits two jumps, one at $T_c$ and another at $T_{c1}$. The temperature derivative of $|\Delta_{\mathbf{q}}|^2$ has discontinuities at $T_c$ and $T_{c1}$, due to the vanishing of $\Delta_1$ and $\Delta_2$, respectively, which is responsible for the two jumps in the specific heat (see the definition in Ref. [4]). For a pure $d_{x^2-y^2}$ wave we find that the specific heat exhibits a power-law dependence on temperature; however, the exponent of this dependence varies with temperature. For small $T$ the exponent is approximately 2.5, and for large $T$ ($T\to T_c$) it is nearly 2. For the mixed $d_{x^2-y^2}+d_{xy}$-wave model, for $T_c>T>T_{c1}$ the specific heat exhibits d-wave power-law behavior. For d-wave models $C_s(T_c)/C_n(T_c)$ is a function of $T_c$ and $\beta$. In Figs. 3 and 4 this ratio for the $d_{x^2-y^2}$-wave case, for $T_c=70$ K, is approximately 3 (2.44) for $\beta=1$ (0.95). For the $d_{xy}$-wave case, for $T_c=70$ K, this ratio is approximately 1.81 (1.9) for $\beta=1$ (0.95). In a continuum calculation this ratio was 2 in the absence of a van Hove singularity [4]. Next we exhibit the temperature dependence of the spin susceptibility (defined in Ref. [4]) in Figs. 5 and 6, where we also plot the results for pure $d_{x^2-y^2}$ and $d_{xy}$ waves for comparison. In Figs. 5 and 6, we show the results for models 1 and 2 on the square lattice and with orthorhombic distortion, respectively. For pure $d_{x^2-y^2}$ and $d_{xy}$ waves we obtain power-law dependencies on temperature. The exponent of this power-law scaling is independent of the critical temperature $T_c$ but varies from the square lattice to the lattice with orthorhombic distortion. For the $d_{x^2-y^2}$ wave, the exponent for the square lattice (orthorhombic distortion, $\beta=0.95$) is 2.6 (2.4). For the $d_{xy}$ wave, the exponent for the square lattice (orthorhombic distortion, $\beta=0.95$) is 1.1 (1.6). For the mixed $d_{x^2-y^2}+d_{xy}$ wave these exponents are nearly identical to those of the pure $d_{x^2-y^2}$ wave. Hence, by studying the temperature dependence of the spin susceptibility, it will be impossible to detect the phase transition at $T=T_{c1}$ from a $d_{x^2-y^2}$ wave to a $d_{x^2-y^2}+d_{xy}$ wave, at least within the present tight-binding model. In conclusion, we have studied $d_{x^2-y^2}+d_{xy}$-wave superconductivity employing the two-dimensional tight-binding BCS model on a square lattice and on a lattice with orthorhombic distortion, and confirmed a second, second-order phase transition at $T=T_{c1}$ in the presence of a weaker $d_{xy}$ wave. This phase transition is marked by a jump in the specific heat at $T=T_{c1}$. We have kept the s- and d-wave couplings in such a domain that a coupled $d_{x^2-y^2}+d_{xy}$-wave solution is allowed.
The $d_{x^2-y^2}+d_{xy}$-wave state is similar to a $d_{x^2-y^2}$-wave-type state with nodes of the order parameter on the Fermi surface. Consequently, we find power-law temperature dependencies of the specific heat and spin susceptibility in the $d_{x^2-y^2}+d_{xy}$ wave. The exponents of these power laws for the mixed $d_{x^2-y^2}+d_{xy}$ wave are very close to those for the pure $d_{x^2-y^2}$ wave. The work was supported by the CNPq and FAPESP.
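As a numerical illustration of the coupled gap equations above, a fixed-point iteration on a 128 × 128 k-grid might look as follows (units t = k_B = 1; the coupling strengths V1 and V2, the cutoff TD, and the temperature are placeholder values, not the values used in the paper):

```python
import numpy as np

N, beta_o, gamma, EF = 128, 1.0, 0.0, 0.0
kx, ky = np.meshgrid(2 * np.pi * np.arange(N) / N,
                     2 * np.pi * np.arange(N) / N)
eps = -2.0 * (np.cos(kx) + beta_o * np.cos(ky)
              - gamma * np.cos(kx) * np.cos(ky))     # tight-binding dispersion
eta1 = np.cos(kx) - beta_o * np.cos(ky)              # d_{x^2-y^2} basis function
eta2 = np.sin(kx) * np.sin(ky)                       # d_{xy} basis function

V1, V2, TD, T = 0.4, 0.3, 1.0, 0.01                  # placeholder parameters
cut = np.abs(eps - EF) < TD                          # energy-independent V inside cutoff

d1, d2 = 0.1, 0.1                                    # initial guesses
for _ in range(500):                                 # fixed-point iteration
    gap = d1 * eta1 + d2 * eta2                      # C = 1 (real mixing)
    E = np.sqrt((eps - EF) ** 2 + gap ** 2)
    w = np.where(cut, np.tanh(E / (2 * T)) / (2 * np.maximum(E, 1e-12)), 0.0) / N**2
    d1, d2 = V1 * np.sum(eta1 * gap * w), V2 * np.sum(eta2 * gap * w)
print(d1, d2)
```

Scanning the temperature and locating where d2 first becomes nonzero would give an estimate of $T_{c1}$ within this sketch.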
A novel nonsense mutation in the NOG gene causes familial NOG-related symphalangism spectrum disorder The human noggin (NOG) gene is responsible for a broad spectrum of clinical manifestations of NOG-related symphalangism spectrum disorder (NOG-SSD), which include proximal symphalangism, multiple synostoses, stapes ankylosis with broad thumbs (SABTT), tarsal–carpal coalition syndrome, and brachydactyly type B2. Some of these disorders exhibit phenotypes associated with congenital stapes ankylosis. In the present study, we describe a Japanese pedigree with dactylosymphysis and conductive hearing loss due to congenital stapes ankylosis. The range of motion in the proband's elbow joint was also restricted. The family showed multiple clinical features and was diagnosed with SABTT. Sanger sequencing analysis of the NOG gene in the family members revealed a novel heterozygous nonsense mutation (c.397A>T; p.K133*). In the family, the prevalence of dactylosymphysis and hyperopia was 100% while that of stapes ankylosis was less than 100%. Stapes surgery using a CO2 laser led to a significant improvement of the conductive hearing loss. This novel mutation expands our understanding of NOG-SSD from clinical and genetic perspectives. INTRODUCTION The human noggin (NOG) gene consists of a single exon and encodes a secreted protein that is critical for normal bone and joint development. 1 Noggin binds to bone morphogenetic protein (BMP) of the transforming growth factor-β superfamily and prevents its binding to the cognate receptor. 1,2 This interaction affects a number of developmental processes such as morphogenesis and body patterning, 1,3 middle ear formation, 4,5 and apoptosis in digital and interdigital regions. 4,6,7 Mutations in the NOG gene result in aberrant functioning of the noggin protein, which is linked to various autosomal dominant syndromes characterized by proximal symphalangism (SYM1: MIM #185800), 8 multiple synostosis syndrome (SYNS1: MIM#186500), 8 tarsal-carpal coalition syndrome (TCC: MIM#186570), 9,10 brachydactyly type B2 (BDB2: MIM#611377), 11 and stapes ankylosis with broad thumb and toes (SABTT: MIM#184460) (i.e., Teunissen-Cremers syndrome). [12][13][14] Precise diagnosis is complicated by the overlapping clinical features of these syndromes. Given the variable phenotypic manifestations within and among families with the same mutations, the term NOG-related symphalangism spectrum disorder (NOG-SSD) has been proposed. 15 In the present study, we describe a novel nonsense mutation in the NOG gene causing familial NOG-SSD and report the associated clinical and molecular findings as well as the results of surgery for conductive hearing loss. MATERIALS AND METHODS Patients Medical history, including hearing loss, symphalangism, dactylosymphysis, brachydactyly, and hyperopia, as well as the results of a clinical examination, were obtained for four members of a Japanese family, three of whom were affected (proband, father, and grandmother) and one of whom was unaffected (mother). Auditory function was assessed by pure tone audiometry, tympanometry, and the stapedius reflex test. High-resolution computed tomography scans were carried out to identify any middle and inner ear abnormalities, and X-ray images of the hand were obtained to identify any fusion of the bones. Stapes surgery was performed in the proband to restore hearing. The father had undergone stapes surgery in the right ear at a different hospital in his childhood.
Study participants and the parents of the child provided written, informed consent. The research protocol was approved by the Ethical Review Committee of Sapporo Medical University, Japan. Genetic analysis Genomic DNA was extracted from blood samples using the Gentra Puregene Blood kit (Qiagen, Hamburg, Germany). PCR primers specific for the NOG exon (GenBank NG_011958.1) and the amplification program were as previously reported. 16 Sanger sequencing data were analyzed using SeqScape software v.2.6 (Applied Biosystems, Foster City, CA, USA) and DNASIS Pro (Hitachisoft, Tokyo, Japan). The variant allele frequency was evaluated using the dbSNP 146 public database (http://www.ncbi.nlm.nih.gov/). RESULTS The proband (IV: 1) was referred to our hospital at the age of 5 years because of bilateral hearing loss that had started in early childhood. Physical and X-ray examinations of the hands showed symphalangism and short intermediate phalanges (brachydactyly) in both fifth fingers (Figure 2a). The range of motion in her elbow joint was restricted, and she was unable to touch her shoulders with her hands (Figure 2f). She had undergone surgical treatment for dactylosymphysis in the second and third toes of both feet in her early childhood. An ophthalmologic examination revealed hyperopia. She had experienced bilateral progressive hearing loss from early childhood, and pure tone audiometry at age 6 showed bilateral conductive hearing loss (Figure 3a). At this time, she also underwent stapedotomy using a Teflon piston, which revealed ankylosis of the stapes footplate with hypertrophy of the anterior and posterior crus; the footplate was also distant from the facial nerve (Figure 4). The patient's postoperative hearing threshold improved to 25 dB in the operated ear, and her hearing level has remained stable for more than 3 years since the surgery (Figure 3b). The proband's father (III: 3), who underwent right stapedotomy at 13 years old at another hospital, had bilateral hearing loss since early childhood, and pure tone audiometry showed conductive hearing loss on the left side and an improvement of hearing level in the operated ear (Figure 3c). His hearing condition had not worsened for 15 years. Physical examination of his hands showed brachydactyly in both fifth fingers (Figure 2b), and he could not touch his shoulders with his hands due to a restricted range of motion in his elbow joints (Figure 2d). He had undergone surgical treatment in his early childhood for dactylosymphysis in the second and third toes of both feet. An ophthalmologic examination revealed hyperopia. The proband's grandmother (II: 2) did not show conductive hearing loss (Figure 3d); however, she had also had surgery during childhood for dactylosymphysis in the second and third toes of both feet. Physical examination of her hands did not reveal brachydactyly (Figure 2c). As in the case of the other two patients, she was unable to touch her shoulders with her hands due to restricted range of motion of the elbow joint (Figure 2e). An ophthalmologic examination showed hyperopia. Genetic analysis A genetic analysis detected a heterozygous c.397A>T (p.K133*) variant of the NOG gene in the proband (IV: 1) as well as in II: 2 and III: 3 (Figure 5), which has not been previously reported according to the HGMD and is not registered in other databases such as dbSNP, 1000 Genome Browser, HGVD, ESP6500, or ExAC.
Given that other nonsense mutations such as p.Q110* (rs104894614) 14 and p.L129* (rs104894613) 17 have been reported to be pathogenic, the p.K133* variant is presumed to produce a truncated noggin protein (132 of 232 amino acid residues) with disrupted function.

DISCUSSION

The present study identified a novel nonsense mutation in the NOG gene in a family with NOG-SSD. The clinical features included proximal symphalangism in one of the fingers, dactylosymphysis of the toes, brachydactyly, pilonidal cyst, hyperopia, and conductive hearing loss as a result of stapes ankylosis. The most common phenotypes in the family were dactylosymphysis (5/5), hyperopia (5/5), and hearing loss (4/5).

Heterozygous NOG mutations have been identified in several syndromes including SYM1, 8 SYNS1, 8 TCC, 9,10 BDB2, 11 and SABTT. [12][13][14] To date, a total of 45 human variations in NOG have been reported; the term NOG-SSD was put forth to describe these syndromes, 15,17 which exhibit shared but also some distinct clinical features. In our patients, the prevalence of dactylosymphysis and hyperopia was 100%, while that of stapes ankylosis was less than 100%. Mutations reported in the literature to date are shown in Table 1. NOG gene mutations including frameshift, missense, and nonsense mutations as well as deletions and insertions have been previously identified in patients with NOG-SSD. NOG gene mutations are mainly dominant; however, de novo mutations have also been reported in sporadic cases. 8,18 Therefore, genetic investigations are sometimes needed to clarify the pathogenesis of conductive hearing loss due to stapes ankylosis with stiffness of the proximal interphalangeal joints in patients with no familial history. NOG gene mutations are autosomal dominant and are presumed to be manifested either as haploinsufficiency, which can lead to an aberrant gradient during development, or as a dominant-negative effect of the defective protein. 19

The NOG gene has a critical role in joint formation and bone development, and mutations in noggin compromise the folding stability of the protein and cause defective binding to BMP. 6,20 Noggin-mediated inhibition of BMP signaling is regulated by a two-step process: 21 noggin binds to BMP and prevents its binding to the BMP receptor, with the complex binding instead to heparin sulfate proteoglycan, a major cell surface and extracellular matrix proteoglycan. Sulfate induces the release of the noggin-BMP complex at the cell surface, increasing the accessibility of BMP to its receptor and thereby activating BMP signaling. A docking simulation of noggin with a heparin analog, together with an estimation of the change in interaction caused by the p.R136C mutation, demonstrated that the positively charged R136 in the heparin-binding site is required for retention of the noggin-BMP complex by negatively charged heparin sulfate proteoglycan at the plasma membrane. 16 The altered binding of mutant noggin and heparin sulfate proteoglycan may lead to hyperactivation of BMP signaling, ultimately leading to ankylosis of the joints and stapes. 16

Stapes surgery for conductive hearing loss due to NOG mutations leads to an improvement in hearing for most patients, 10,19,22 as confirmed in the present study. However, it is necessary to exercise caution when performing stapes surgery for this syndrome due to the risk of bony reclosure of the oval window after surgery.
It was previously reported that the hearing level of patients who underwent stapes surgery deteriorated during the follow-up period for this reason, resulting in a dislocated piston. 18,23,24 Therefore, partial or total stapedectomy has been proposed as an alternative procedure to prevent reclosure of the oval window. 7,23 In the present case (IV: 1), we performed stapedotomy using a CO2 laser. There have been no reports to date of CO2 laser-assisted stapedotomy for the treatment of stapes ankylosis due to NOG mutations; therefore, the surgical outcome must be carefully assessed after long-term follow-up.

In conclusion, we identified a novel nonsense mutation in the NOG gene (p.K133*) in a NOG-SSD family. NOG gene mutations lead to aberrant functioning of the noggin protein, giving rise to a large spectrum of clinical features. Our patients exhibited a phenotype that included proximal symphalangism, dactylosymphysis, brachydactyly of the toes, pilonidal cyst, hyperopia, and conductive hearing loss. Stapes surgery for the conductive hearing loss in this family led to a significant improvement in hearing.

Nucleotide numbering is based on GenBank reference sequence NM_005450.4.
Experimental huts trial of the efficacy of pyrethroids/piperonyl butoxide (PBO) net treatments for controlling multi-resistant populations of Anopheles funestus s.s. in Kpomè, Southern Benin

Background: Insecticide resistance in Anopheles mosquitoes limits the long-lasting insecticidal nets (LLINs) used for malaria control in Africa, especially in Benin. This study aimed to evaluate the bio-efficacy of current LLINs in an area where An. funestus s.l. and An. gambiae have developed multi-resistance to insecticides, and to assess in experimental huts the performance of nets treated with a combination of pyrethroids and piperonyl butoxide (PBO) against these resistant mosquitoes.

Methods: The study was conducted at Kpomè, Southern Benin. The bio-efficacy of LLINs against An. funestus and An. gambiae was assessed using the World Health Organization (WHO) cone and tunnel tests. A release/recapture experiment following WHO procedures was conducted to compare the efficacy of conventional LLINs treated with pyrethroids only and LLINs with combinations of pyrethroids and PBO. Prior to the hut trials, we confirmed the levels of insecticide and PBO residues in the tested nets using high performance liquid chromatography (HPLC).

Results: Conventional LLINs (Type 2 and Type 4) had the lowest effect against the local multi-resistant An. funestus s.s. and An. coluzzii populations from Kpomè. Conversely, when LLINs containing mixtures of pyrethroids and PBO (Type 1 and Type 3) were introduced in the trial huts, we recorded a greater effect against the two mosquito populations (P < 0.0001). Tunnel tests with An. funestus s.s. revealed mortalities of over 80% with this new generation of LLINs (Type 1 and Type 3), while conventional LLINs produced mortalities of 65.53 ± 8.33% for Type 2 and 71.25 ± 7.92% for Type 4. Similarly, mortalities ranging from 77 to 87% were recorded with the local populations of An. coluzzii.

Conclusion: This study suggests the reduced efficacy of the conventional LLINs (pyrethroids alone) currently distributed in Benin communities where Anopheles populations have developed multi-insecticide resistance. The new generation nets (pyrethroids + PBO) proved to be more effective on multi-resistant populations of mosquitoes.

Introduction

Malaria is responsible for about 438,000 deaths with an estimated 214 million disease cases annually 1 . Malaria vector control tools have been encouraging worldwide, resulting in decreased morbidity and mortality as of 2016 compared to 2000 2 . Unfortunately, while the disease has declined globally, the situation is different in Africa, where malaria is still a serious challenge 2 . Long lasting insecticide-treated nets (LLINs) are major components of malaria control tools, and they have helped to combat malaria when in good condition and properly used 2 . LLINs are effective, simple to use, easy to deliver to rural communities, and cost-effective when used in highly endemic malaria areas 3 . In Benin, malaria control is hugely dependent on LLINs and indoor residual spraying (IRS) 4,5 .
In October 2014, there was a country-wide distribution campaign of mosquito nets to ensure universal coverage, with the free distribution of 6,077,272 LLINs to 2,199,522 households surveyed 6 . After this exercise, LLIN utilization by children under five rose from 70% in 2012 to 73% in 2014 6 . However, the emergence and spread of malaria vectors resistant to the insecticidal components used for treating these nets have threatened the earlier progress made with this malaria vector control tool 7-9 .

Resistance to insecticides of one of the main malaria vectors, An. funestus, has since become a serious challenge facing the quest for malaria elimination in Africa. Reported cases of resistance are available in countries such as Cameroon 10 , Uganda 11 , Mozambique 12 , Malawi 13,14 , Ghana 15 , Nigeria 16 and Benin 17,18 . There are also multiple mechanisms driving the observed resistance in this mosquito population, although over-expression of detoxification genes remains the main driving force of insecticide resistance in this Anopheles species 15,19 . Another observation is that resistance mechanisms are known to differ from one mosquito population to another, suggesting the contribution of geographical differences in resistance profiling 15,20 . This is of serious concern because it is becoming a significant threat to existing malaria control tools. Recently, a study by Agossa et al. 9 in the northern part of Benin showed that the efficacy of existing pyrethroid-treated malaria vector control tools has decreased against wild An. gambiae s.l. populations. Considering the fact that An. funestus and An. gambiae have developed resistance to almost all classes of insecticides across Benin 17,21,22 , a similar trend might be expected. Indeed, there is a serious quest for alternative insecticides since pyrethroids are becoming less effective, with recorded reports of resistance in malaria vectors 23 . Pyrethroids are very safe, acceptable and suitable for LLINs, but degrade very fast, especially when exposed to sunlight, which can be avoided if nets are well preserved 24 .

A different insecticide resistance management approach, combining a chemical synergist, piperonyl butoxide (PBO), with pyrethroids on net fibres, could be a promising way to fight insecticide resistance. PBO, a synergist capable of inhibiting the action of oxidase enzymes, has the potential to combat the growing problem of oxidase-based pyrethroid resistance in mosquito vector species. Two types of long lasting nets treated with permethrin + PBO and deltamethrin + PBO are the new generation of LLINs for improved resistance management 25,26 . These new generation nets have shown their efficacy on some resistant populations of Anopheles in experimental hut trials 25,27 . In hut trials, the new generation of LLINs increases mortality and inhibits blood feeding against pyrethroid-resistant An. gambiae in some African regions 23,25,26,28 . In Nigeria, LLINs treated with deltamethrin + PBO were highly effective on resistant An. gambiae compared with standard treated nets with no PBO 29 . Also, in Southern Africa (Mozambique), this combination proved to be more effective against resistant An. funestus and An. gambiae 12,20 . Due to the widespread insecticide resistance in most populations of An.
funestus from the South to the North of Benin 17,18,21 , it is important to assess the efficacy of the currently used (conventional) LLINs and also to conduct, in experimental huts, a comparative assessment of the performance of the new generation of treated LLINs (pyrethroids + PBO) against the conventional LLINs currently used by communities in areas of Benin where the main malaria vectors An. gambiae and An. funestus have developed multiple resistance to insecticides.

Study area

The assessment was conducted in the rural locality of Kpomè (6°23'N, 2°13'E) located in South Benin, approximately 81 km from Cotonou. The study area has a sub-tropical climate, receives 1,100 mm of mean rainfall annually and has a mean monthly temperature between 27 and 31°C. The rainfall pattern in this area is similar to other southern localities of Benin, with two rainy seasons and two dry seasons. The constant presence of water bodies in this locality favors the development of An. funestus and other mosquito species 18 . Previous studies carried out in Kpomè showed that An. funestus s.s. is mainly predominant during the dry season and transitional periods, and exhibited high resistance to permethrin and deltamethrin, with mortality rates (World Health Organization (WHO) susceptibility tests) of 13% and 46.5%, respectively 17 . P450s are the main family of detoxification enzymes involved in the observed pyrethroid resistance in the An. funestus population in this locality 17 . An. gambiae populations from this same locality have also developed multi-resistance to several insecticide families 30 . This set of available environmental and entomological data prompted the building of seven experimental huts at Kpomè for trials to identify the best LLIN types for improved control of insecticide-resistant populations of malaria vectors.

Collection of mosquitoes for planned experiments

Blood-fed, semi-gravid and fully gravid females of resistant An. funestus resting inside houses were collected in the early morning, between 06h00 and 10h00, using electric aspirators in June 2017 (consent from the head of each household was obtained prior to collection). These mosquitoes were identified morphologically as An. funestus using the keys of Gillies and De Meillon 31 and Gillies and Coetzee 32 , kept in small cups, and immediately transported to the laboratory (relative humidity of 70-80% and a temperature of 25-30 °C), where blood-fed and semi-gravid females were held until fully gravid. Eggs were obtained from the F0 generation (females collected from the field) using the forced egg-laying method 33 and were allowed to hatch to obtain the F1 generation used for the different experiments.

Piperonyl butoxide (PBO) synergist tests

Given the level of observed resistance against permethrin and deltamethrin, and because pyrethroid-resistant An. funestus populations have been shown to over-express P450 genes 19,39,40 , F1 mosquitoes aged 2-5 days were pre-exposed to 4% PBO papers for 1 h and immediately exposed to 0.75% permethrin or 0.05% deltamethrin for 1 h. Two controls were used during this experiment: the first comprised mosquitoes exposed to untreated papers without PBO, and the second comprised mosquitoes exposed to papers treated with PBO only. Mortalities were recorded 24 h post-exposure and were later compared to the unsynergized group in order to evaluate the potential role of cytochrome P450 genes in the observed resistance.
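The interpretive logic of this synergist test can be made concrete with a short sketch. The mortality values below are illustrative placeholders (the 46.5% figure echoes the deltamethrin susceptibility result quoted above), not the study's synergist-assay data.

```python
# Sketch of the synergist-assay interpretation: if 1 h of PBO pre-exposure
# substantially restores 24 h mortality, oxidase (P450)-mediated resistance
# is implicated. Values are illustrative, not the study's raw data.

def mortality_gain(with_pbo: float, without_pbo: float) -> float:
    """Percentage-point increase in 24 h mortality after PBO pre-exposure."""
    return with_pbo - without_pbo

# Hypothetical example: deltamethrin alone vs PBO + deltamethrin.
alone, synergized = 46.5, 95.0
gain = mortality_gain(synergized, alone)
print(f"mortality gain: {gain:.1f} percentage points")
if gain > 10:
    print("Restored susceptibility points to P450-mediated metabolic resistance.")
```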
Characteristics of long-lasting insecticidal nets used during the various assessments

Five types of LLINs were used for the phase I (cone and tunnel tests) and phase II (experimental hut) evaluations. The Type 1 LLIN was made of monofilament polyethylene fabric (100 mesh size) treated with deltamethrin at 4 g/kg ± 25% and piperonyl butoxide (PBO) at 25 g/kg ± 25%, with side panels made of multifilament polyester fabric with a strengthened border treated with deltamethrin at 2.1 g/kg ± 25%. The Type 2 LLIN was made of multifilament polyester fabric (100 mesh size) treated with deltamethrin only (no PBO added) at 1.4 g/kg ± 25%. The Type 3 LLIN was treated with 20 g/kg of permethrin and 10 g/kg of PBO in the whole polyethylene net fibres (150 mesh size). The Type 4 LLIN was made of polyethylene fibres treated with permethrin only (no PBO) at 20 g/kg, incorporated during fibre extrusion (150 mesh size). The Type 5 net was an untreated net, a multifilament polyester fabric (100 mesh size) with neither insecticide nor PBO treatment. All nets used were 160 cm wide, 180 cm long and 150 cm high. All treated and control nets were procured by the Liverpool School of Tropical Medicine (LSTM), UK, properly wrapped and shipped to us at IITA for the various trials.

WHO cone tests with the five types of nets under test

Four cones were fixed in contact with 25 × 25 cm pieces of nets taken from the side and top panels of the LLINs (Methods in Anopheles Research, 2010). Individuals aged 2 to 5 days from the three colonies of Anopheles mosquitoes (resistant An. funestus Kpomè, resistant An. gambiae Kpomè and the susceptible Anopheles Kisumu strain) were exposed to the nets for 3 min, after which they were transferred into recovery paper cups and provided with cotton wool soaked in a 10% honey solution. A minimum of 50 mosquitoes was tested for each net, and at least three pieces per net were used for this test. The mosquito knock-down rate (kd) was recorded at 1 h post-exposure and the mortality rate was determined 24 h post-exposure. The mortality rate was corrected using Abbott's formula if needed. These tests were conducted at a room temperature of 25-30°C and a relative humidity of 70-80%.

WHO tunnel tests with fragments of the five types of nets under test

Tunnel tests were carried out with the same samples of LLINs used for the cone tests. Adult An. funestus, An. gambiae and the Kisumu strain were also used for this test 41 . Adult mosquitoes aged between 5 and 8 days were released at 6:00 PM into the first compartment (C1) of a 60-cm-long glass tunnel divided by a transverse netting insert (25 cm × 25 cm) fitted onto a frame that slots across the tunnel. The LLIN fragments used had been pierced with 1-cm-diameter holes to allow mosquitoes to pass through into compartment C2 of the tunnel, where a guinea pig was placed for mosquito feeding. Each guinea pig was used only once for this study. Guinea pigs were sourced from local markets where they are sold for food consumption. At 8:00 AM the following morning, mosquitoes were collected from both compartments and transferred into plastic cups. The mortality and feeding status (blood-fed or unfed) of each mosquito collected from the tunnel were recorded. The blood-feeding rate and the penetration rate across the tunnel were also assessed.
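Since the cone-test protocol above invokes Abbott's formula for control-mortality correction, a minimal sketch is given below. The 5-20% thresholds follow common WHO practice, and the example numbers are hypothetical.

```python
# Abbott's correction for control mortality, as referenced in the cone-test
# protocol above. Correct when control mortality is 5-20%; discard above 20%.

def abbott_correction(treated_mortality: float, control_mortality: float) -> float:
    """Return corrected mortality (%) given treated and control mortality (%)."""
    if control_mortality > 20.0:
        raise ValueError("control mortality > 20%: test should be discarded")
    if control_mortality < 5.0:
        return treated_mortality  # correction not required
    return 100.0 * (treated_mortality - control_mortality) / (100.0 - control_mortality)

# Hypothetical example: 68% observed mortality with 8% control mortality.
print(round(abbott_correction(68.0, 8.0), 2))  # 65.22
```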
Experimental hut trials of the efficacy of the five net types on insecticide-resistant mosquitoes

The experimental huts newly built in Kpomè are specially designed to test the efficacy of different vector control products against freely entering mosquitoes under natural but controlled conditions. This facility was used for our release and recapture tests. The huts were typical of the West African model as recommended by WHO 41 . The 3.5 × 2 × 2 m huts were made from concrete bricks, with a corrugated iron roof lined with a ceiling of thick polyethylene sheeting, and each was built on a concrete base surrounded by a water-filled moat to exclude ants. Mosquito access was through four window slits, constructed from pieces of iron fixed at an angle to create a funnel with a 1-cm gap, present on three sides of the huts. Mosquitoes had to fly upward to enter through the gaps and downwards to exit; this precluded or limited exodus through the aperture and enabled us to account for most entering mosquitoes. A veranda trap made of concrete bricks and mesh screening (2 m long × 1.5 m wide × 1.5 m high) projected from the back wall of each hut. Movement of mosquitoes between a room and the veranda was unimpeded.

Study design

The five types of mosquito nets described above were assessed against pyrethroid-resistant An. funestus and An. gambiae. The control mosquito population used was only An. gambiae Kisumu, as we had neither a laboratory/field susceptible An. funestus nor a field susceptible An. gambiae. The Type 5 untreated polyester net served as the control arm.

Blank assessment of hut attractiveness

Prior to introducing nets into the huts, we conducted preliminary experiments which showed the huts to be evenly attractive to mosquitoes. Briefly, free entry of mosquitoes into the huts was assessed over 2 weeks and the attractiveness of each hut was evaluated. Adult male volunteers slept under the untreated net in the huts from 20:00 hours to 05:00 hours each night after cleaning the hut to remove any spiders and ants. To minimize biases in individual attractiveness, sleepers were rotated between huts on successive nights throughout the 2 weeks.

Blank assessment of hut lethality

Also prior to the assessments, an initial series of bioassays was conducted to determine the mortality of susceptible mosquitoes exposed to various surfaces in the huts, in order to establish the lethal effect of the huts themselves. Bioassays were performed with WHO cones attached to the surfaces with masking tape. In each hut, the surfaces tested included the doors, walls, screening mesh of the veranda, ceiling and floor. Ten females of the An. gambiae Kisumu strain, 2 to 5 days old, were put into each cone for 30 min. After this exposure time, they were removed from the cone, put into plastic cups covered with untreated mosquito net, given access to 10% honey solution, and mortalities were recorded after 24 h.

Release and recapture experiment

A release/recapture experiment was conducted in the experimental huts with resistant populations of An. funestus and An. gambiae, both from Kpomè, and the susceptible An. gambiae Kisumu. These three populations of mosquitoes were released on different days into huts where the five described types of nets were erected. The experiment was conducted as described by WHO protocols 41 . The main trial was conducted in August 2017. The treatments were allocated randomly to five experimental huts in the study site. Each net was deliberately holed with six 4 cm × 4 cm holes to simulate a worn net.
Before the experimental hut evaluation, adult volunteers were recruited among the inhabitants of the villages where the experimental huts were located; informed consent to participate in the study was obtained beforehand, and chemoprophylaxis was provided during the trial. Female mosquitoes aged 5 days were released in each hut at 20:00 h and monitored until morning. Early in the morning, the released mosquitoes were recaptured from the hut, the veranda and inside the nets, and were scored as dead or alive and as fed or unfed. Live mosquitoes were kept in small cups containing sugar solution for 24 hours to assess delayed mortality. The entomological effects of the treatments were compared between nets and with the untreated net (control Net Type 5). The target entomological parameters monitored were: (i) insecticide-induced exiting, i.e. the proportion of mosquitoes found in hut verandahs relative to control huts; (ii) blood-feeding inhibition, i.e. the proportional reduction in blood-feeding relative to untreated nets; and (iii) mortality, i.e. the proportion of mosquitoes killed (immediate plus delayed).

Chemical analysis of nets used in the experimental hut trial

Prior to the trial, chemical analyses were conducted on pieces of nets (the pieces cut from the holes made in the nets) from the five net types erected in each hut. This experiment was to confirm the presence or absence, and the concentration, of pyrethroids and PBO in each net to be used in this trial. For the Type 1 LLIN, the side panels and top panel were tested separately. Chemical analysis was conducted using a high performance liquid chromatography (HPLC) machine (Agilent Technology 1260 Infinity, Germany). Deltamethrin, permethrin and PBO were extracted using acetonitrile as solvent and the mixture was sonicated for 15 min. Afterwards, the solution without the net was transferred into a new flask and filtered through a 0.45 µm PTFE syringe filter into an HPLC vial for analysis. For the HPLC analysis, standard solutions of each compound (permethrin cat. no. 45614, deltamethrin cat. no. 45423 and PBO cat. no. 45626), purchased from Sigma Aldrich, were prepared from stock solutions in acetonitrile. Standard curves for each compound were drawn. The HPLC system conditions were as follows: mobile phase acetonitrile/H2O (90:10), C18 column, flow rate 1 ml/min, injection volume 50 µl and UV detector wavelength 226 nm. The quantities of insecticides were calculated based on the peak areas and expressed in g/kg of net.

Data analysis

Data from the bioassays were compared between nets using the MedCalc easy-to-use online statistical software, version 18.2.1 42 , while Fisher's exact test was used to test for significant differences in mortality rates. Significance between treatments was set at the 5% level. The proportion of mosquitoes that exited early (induced exophily), the proportion killed within the hut (mortality) and the proportion that successfully blood-fed (blood-feeding rate) were compared and analyzed using logistic regression with treatments as fixed effects and huts and sleepers as random effects (Stata 9 software).

Results

Susceptibility tests

Low mortality rates were recorded for the An. funestus s.s. population from Kpomè against both permethrin and deltamethrin (Figure 1). Low mortality rates were also recorded for the An. coluzzii population from Kpomè against permethrin (19.27 ± 3.52%) and deltamethrin (60.11 ± 7.19%). No mortality was recorded when subsets of these mosquito species were exposed to papers with no insecticide (control).
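The hut-trial endpoint definitions above translate directly into code. The sketch below computes the three parameters from recapture counts; the formulas mirror the definitions in the text (and the WHOPES guidance the reviewers cite later), while the counts themselves are hypothetical placeholders.

```python
# Sketch of hut-trial endpoint calculations: induced exophily, blood-feeding
# inhibition and overall mortality, per the definitions given in the methods.
# Counts below are hypothetical placeholders, not the study's data.
from dataclasses import dataclass

@dataclass
class HutArm:
    total: int        # mosquitoes recaptured
    in_veranda: int   # found in the veranda trap
    blood_fed: int    # fed
    dead: int         # immediate + delayed (24 h) mortality

def exophily_rate(arm: HutArm) -> float:
    return 100.0 * arm.in_veranda / arm.total

def blood_feeding_inhibition(treated: HutArm, control: HutArm) -> float:
    bu = control.blood_fed / control.total   # feeding in the untreated-net hut
    bt = treated.blood_fed / treated.total   # feeding in the treated-net hut
    return 100.0 * (bu - bt) / bu

def mortality_rate(arm: HutArm) -> float:
    return 100.0 * arm.dead / arm.total

control = HutArm(total=100, in_veranda=9, blood_fed=60, dead=4)
net1    = HutArm(total=100, in_veranda=35, blood_fed=5, dead=74)

print(f"exophily: {exophily_rate(net1):.1f}% vs {exophily_rate(control):.1f}%")
print(f"blood-feeding inhibition: {blood_feeding_inhibition(net1, control):.1f}%")
print(f"mortality: {mortality_rate(net1):.1f}%")
```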
Molecular speciation

(Results of the molecular species identification are presented in Table 1.)

Synergist assay with PBO

When permethrin and deltamethrin were combined with PBO, the recorded mortalities increased substantially (Figure 2).

Blank assessment of experimental hut attractiveness

A total of 603 mosquitoes were allowed to freely enter the seven experimental huts during the 12 trial nights. The mean number of mosquitoes collected was highest in hut N°7 (18.41), followed by hut N°2 (8.41). The mean number of mosquitoes per night was similar in huts N°6, 5, 4 and 3. Hut N°1 showed a relatively low attractiveness (Table 4). However, similar attractiveness in terms of Anopheles mosquitoes was observed between huts N°1, 2 and 5. The recorded mean numbers of Anopheles mosquitoes in huts N°3, 4, 6 and 7, which were similar, were higher than in the others.

Blank assessment of experimental hut lethality

The cone bioassays conducted on the various surfaces of each hut, such as the doors, walls, screening mesh of the veranda, ceiling and floor, revealed that none of the huts built in the Kpomè locality had a lethal effect on the susceptible Anopheles gambiae Kisumu strain. The mortality rate for all exposed mosquitoes was very low, as only one mosquito died out of the total of 73 exposed in huts N°4 and N°7 (Table 5).

Release and recapture experiments

Induced exophily. When the Kisumu strain was released in rooms containing treated nets, we recorded a significant movement of mosquitoes from the room to the veranda; all treated nets induced significant exophily rates ranging from 50 to 73%, compared to the untreated net, where the observed exophily was 30% (P < 0.0009). A similar trend was observed with the pyrethroid-resistant An. funestus s.s. population from Kpomè, with exophily rates ranging from 30 to 40% with treated nets compared to 9.46% with the untreated Net Type 5 (Table 6). The induced exophily rates recorded with An. funestus s.s. did not differ significantly between huts containing treated nets (induced exophily rates ranging from 23 to 34%) (Figure 4). As for the pyrethroid-resistant An. coluzzii, all treated nets induced exophily rates ranging from 8% to 37% (Figure 5).

Blood-feeding inhibition. More specifically, there was blood-feeding inhibition in the An. funestus s.s. populations in the presence of Net Type 4 (44% blood-feeding inhibition) and Net Type 2 (58% blood-feeding inhibition) compared to the untreated net. Generally, higher blood-feeding inhibition rates were provided by Net Type 1 containing deltamethrin + PBO (92% blood-feeding inhibition) than by Type 2 containing deltamethrin only (P < 0.0001). Similarly, the blood-feeding inhibition rate in huts with Net Type 3 containing permethrin + PBO (100% blood-feeding inhibition) was higher than in those with Net Type 4 containing permethrin alone (Figure 4) (Table 6). As for the pyrethroid-resistant An. coluzzii, blood feeding was inhibited more by Net Type 1, which contains deltamethrin + PBO (76% blood-feeding inhibition), than by Type 2, which contains deltamethrin only (58% blood-feeding inhibition). Blood-feeding inhibition rates of 44% and 57%, respectively, were recorded in huts with Net Type 4 and Net Type 3 (Figure 5).

Mortality. Mortality rates of 33.03% and 90.28% were recorded in the huts containing Net Type 4 and Net Type 3, respectively (Figure 6). Consequently, the overall killing effect offered by Net Type 1 was significantly higher than that of Net Type 2 against resistant An. funestus s.s. (Table 6); the same held for Net Type 3 in comparison to Net Type 4.
The same trend was observed against resistant An. coluzzii, where a very low overall killing effect was provided by Net Type 2 (28.49%) compared to Net Type 1 (73.65%). Mortalities rose from 10.7% to 71.8% when resistant An. coluzzii were released in huts containing Net Type 4 and Net Type 3, respectively. Consequently, a high overall killing effect was provided by Net Type 3 against this Anopheles species compared to Net Type 4 (Table 6). The combined pyrethroid-PBO nets, Type 1 and Type 3, were found to demonstrate a greater efficacy against these resistant mosquito populations.

Discussion

This study aimed to assess the response of resistant An. funestus s.s. from Benin to pyrethroid-treated nets (current LLINs) and to combined PBO + pyrethroid nets for improved control of resistant populations of malaria vectors.

Bio-efficacy of selected LLIN types

Results obtained from the response of susceptible mosquitoes (An. gambiae Kisumu) to treated nets showed that pyrethroid and pyrethroid + PBO treated nets remain effective for controlling susceptible Anopheles mosquitoes. It was also observed that the bio-efficacy of nets treated with deltamethrin only (Type 2) was significantly lower when the mortality rates recorded in the cone tests for the resistant populations of An. funestus s.s. and An. coluzzii were compared with those for the susceptible Kisumu strain. These observations further confirm the high pyrethroid resistance observed in both malaria vectors in Kpomè, as in other localities of Southern Benin 17,21 . A more recent study, conducted across a South-North transect of Benin, revealed that more than 50% of An. gambiae mosquitoes are unaffected by the lethal effects of the current form of Net Type 2 43 . However, in the Ivory Coast, this net was effective against An. gambiae s.s. 44 .

When resistant mosquitoes were exposed to the combined deltamethrin-PBO net (Net Type 1), the mortality rose from 56.67 to 95.77% for An. funestus s.s. and from 34.67 to 69.54% for An. coluzzii. This finding shows the important involvement of P450 genes in the pyrethroid resistance observed in this study and also confirms the results of the synergist bioassays performed with these same resistant mosquitoes, in which almost all individuals died when exposed to PBO immediately before deltamethrin. Similarly, a significantly lower mortality of An. funestus s.s. was observed in the presence of the current permethrin-treated Net Type 4 compared to the combined Net Type 3 (permethrin + PBO). The loss of bio-efficacy of the current Net Type 4 has also been demonstrated in Malawi, Mozambique and the Democratic Republic of Congo, where the recorded mortality rates of An. funestus against Net Type 4 were 3%, 20% and 34%, respectively 12,13,20 . A study conducted in Benin in 2013 demonstrated the efficacy of a combined permethrin-PBO net (Olyset Plus) against resistant An. gambiae 25 . Surprisingly, only 25.83% of An. coluzzii were affected by the lethal effect of this net in the present study. This could probably be due to the presence of other mechanisms involved in the multi-resistance of An. coluzzii from Kpomè, such as kdr mutations 30 . This result is consistent with the relatively low mortality (69.67%) obtained from the synergist test when we pre-exposed An. coluzzii to PBO before permethrin. Therefore, a combined permethrin-PBO net does not provide a solution to pyrethroid resistance in An. coluzzii from Kpomè, Southern Benin.
Tunnel tests performed on all the net types used in this study confirmed the reduced bio-efficacy of nets treated with pyrethroids only, showing a decrease in their effectiveness in areas of high resistance. This observation could be related to the resistance selection pressure generated by the use and misuse of the same classes of insecticides for malaria vector control in public health and for pest control in agriculture 21,45,46 . Indeed, the reduced repellent effect of Net Type 2 against wild resistant Anopheles mosquitoes, compared to its high repellent effect against the Kisumu strain, could be a result of their resistant nature. However, the crossing of mosquitoes through Net Type 1 was highly inhibited for each resistant population, and even for the susceptible strain few mosquitoes penetrated compartment C2 of the tunnel containing Net Type 1. Nevertheless, deltamethrin alone, used for the treated net (Type 2), continues to have a moderate performance against resistant Anopheles mosquitoes in terms of reducing human-vector contact and the blood-feeding rate. Crossing rates with Net Type 1 were thus strongly reduced; these results are in line with those of Malima et al. 52 , where the recorded mortality of An. funestus was 71.6% against nets treated with permethrin only. The survival rates of these mosquitoes in the huts suggest that the protective nature of the currently used Net Type 4 in Benin is compromised, as previously reported 53 .

These semi-field controlled experiments confirmed the results from the laboratory phase I evaluations and support confidence in combined pyrethroid-PBO nets, despite the multiple resistance mechanisms present in these mosquito species 4,19,20,21,[54][55][56] . However, it is necessary to further investigate the impact of these multiple mechanisms on the efficacy of nets treated with pyrethroids only against An. funestus s.s. The combination of the synergist PBO with pyrethroids made the treated nets more efficient, as PBO acts both as a metabolic enzyme inhibitor and as an adjuvant through its effect on enhanced cuticular penetration of deltamethrin 57 . The fact that these new generation nets (Type 1 and Type 3) were able to inhibit blood feeding more than the current nets (Type 2 and Type 4) could suggest their capability to confer high personal protection against resistant mosquito bites. Studies conducted in Benin and other African countries have shown a loss of efficacy of pyrethroid-treated nets against An. gambiae 9,23,50,58 . This research has demonstrated that combined pyrethroid-PBO nets could be a promising strategy against pyrethroid-resistant populations of Anopheles, as previously highlighted 20,25,27,28,43,59,60 . This study further confirms a role of oxidases in the pyrethroid resistance of An. funestus and the need to develop nets combining a pyrethroid and a synergist against pyrethroid-resistant malaria vectors 25,33,40,61 . Nevertheless, several other studies have been conducted on the insecticide resistance management of Anopheles, especially An. gambiae, using non-pyrethroid insecticides alone or in mixtures with pyrethroids [62][63][64][65][66][67] . These studies also provided relatively good pointers for the management of resistant mosquitoes, but the problems with non-pyrethroid ingredients are human toxicity and their irritant effect 62,68 . A more recent study conducted by Malima et al. 69 in an area where An.
funestus is resistant to pyrethroids showed that even when non-pyrethroid insecticide-treated durable wall lining (ITWL) is used, protection of up to 50% against resistant An. funestus cannot be guaranteed.

Conclusion

Pyrethroid resistance in the major malaria vectors An. funestus s.s. and An. coluzzii in Kpomè is high, and is likely to limit the impact of currently used LLINs. This study showed that the use of new generation bed nets could provide additional protection and reduce the malaria burden in endemic environments. This study is of importance to malaria control programs for improved control of pyrethroid-resistant malaria vectors in Benin.

Ethical considerations

Approval was obtained from the ethics review board of the International Institute of Tropical Agriculture (IITA), ref. PJ/CC5339. All volunteers recruited to sleep in the experimental huts gave written and verbal consent. Chemoprophylaxis was provided to volunteers prior to the hut studies.

Data availability

All data generated and analyzed during this study will be included in the published article. Raw data are available from the Open Science Framework.

Reviewer report 1

General remarks: The paper presents key results of an operational research topic with high significance for preventing malaria transmission in the context of multiple insecticide resistance among vector populations. The study is well designed and the methods used are appropriate and linked with the expected outcomes of the study. However, I noticed some limitations that were not clear to me (or confusing) while reading the whole manuscript. For example:

In Abstract:

Comment 1: Key results presented do not match the objectives stated in the subsection "Background": i) "bio-efficacy of current LLINs" and ii) "performance of PBO LLINs in experimental huts". Key results on the second objective are missing.

Suggestion 1: Provide key outcomes from the experimental huts (at least mortalities by type of LLIN).

In Method section:

Comment 2: The sequence of subsections as presented is confusing. The subsection "Study design" on page 5 appears as a heading for experiments in huts, with a description of the five types of LLINs to evaluate. The description of these types of LLINs was made in relation to the experimental huts and seems redundant with the subsection "Characteristics of LLINs …" on page 4, but there is no confirmation of whether these five types of LLINs were different or not from those tested with cones or tunnels. The way some key variables were calculated in tunnels (blood-feeding inhibition, overall mortality) and in huts (personal protection, overall killing effect, ...) is not clear.

Suggestion 2: Confirm whether the types of LLINs (1 to 5) described in the section "Chemical analysis of …" are also those used for all the bioassays, including phase I (cone and tunnel tests) and phase II (experimental huts).

- If so (the five types of LLINs serving as arms for all bioassay experiments), the study design may be moved after the "Study area" and adjusted accordingly, prior to the description of "mosquito collection", "species identification", "susceptibility tests with insecticide papers and with PBO synergist", "cone tests", "tunnel tests", "experimental huts" and "data analysis". Under the "experimental hut" subsection, consider subparts such as the blank assessment of hut effects and the release/recapture experiments.
- If not (the five types of LLINs differ between tests), clearly indicate this as "experimental hut design" and keep it at this position.
- Provide the formulas (available in the WHOPES guidelines, 2013) for the estimation of the key parameters of LLIN bio-efficacy in tunnels (blood-feeding inhibition, overall mortality) and experimental huts (personal protection, overall killing effect, ...).

In Result section:

Comment 3: All key results related to the methods used have been presented. However, redundant data and information should be limited or avoided to shorten the paper, for example:

Suggestion 3:
- Table 1 (distribution of members of ….) on page 6 is not necessary (optional), since the data have been considered in the text in the subsection "Molecular speciation …".
- Figure 1 on page 6 does not add value, since the same data are considered in Figure 2 on page 7. Consider only Figure 2 and insert the specific data on the Kisumu strain from Figure 1 into the main text.
- Tables 4 and 5 can be combined into one table of the blank assessment of attractiveness and lethal effect of the experimental huts (4 columns of variables per experimental hut: two for attractiveness, "overall mosquitoes" and "Anopheles", and two for lethal effect, "tested mosquitoes" and "mortality rate").
- Check the last sentence of the third paragraph on page 8 (right-hand column): "Against resistant An. funestus s.s., blood feeding inhibition rates (remove the "s") with net type 4 (41.75%) was significantly lower than net Type 4 (against? Should be Type 3) (100%). Respectively..."
- The titles of Figures 2 (page 7) and 3 (page 8) could be more comprehensive: Figure 2, "Mortality rates of Anopheles to pyrethroids and when combined …"; Figure 3, "Mortality rates reported with cone tests on different types of nets using pyrethroid-resistant …".
- Provide statistical evidence for the conclusion at the end of the subsection "Blank assessment of hut attractiveness" (page 9, 2nd paragraph in the left-hand column).

In Conclusion section:

Comment 4: The conclusions of this paper are adequately supported by the results. Consider adding elsewhere the justification and pertinence of undergoing phase II in experimental huts when LLIN efficacy does not meet the WHO criteria using standard methods, i.e. cone tests (≥ 80% mortality or ≥ 95% knock-down) or tunnel tests (≥ 80% mortality or ≥ 90% blood-feeding inhibition).

Is the work clearly and accurately presented and does it cite the current literature?

Is the study design appropriate and is the work technically sound?
Yes

Are sufficient details of methods and analysis provided to allow replication by others?
Partly

If applicable, is the statistical analysis and its interpretation appropriate?
Yes

Are all the source data underlying the results available to ensure full reproducibility?
Partly

Are the conclusions drawn adequately supported by the results?
Yes

Competing Interests: No competing interests were disclosed.

I have read this submission. I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.

Reviewer report 2

This manuscript describes the results of experiments performed to assess the efficacy of pyrethroid/piperonyl butoxide (PBO) net treatments for controlling multi-resistant populations of Anopheles funestus s.s. The study is highly interesting, as it deals with the development of malaria control strategies in the context of the high resistance to pyrethroids observed in the main vectors.
The manuscript is well written and presented; however, there are a few minor concerns I would like the authors to take into consideration so that their work will be more effective.

Comments:

The title could be modified to also take An. coluzzii into account, as the results of the manuscript also present data from this species.

In the 2nd paragraph of the introduction, in the 5th sentence ("resistance to insecticide ofone of ………"), replace "ofone" by "of one".

In the section "Methods", in the part "WHO cone tests with the nets", the authors must keep the same
Suppressing charge-noise sensitivity in high-speed Ge hole spin-orbit qubits

Strong spin-orbit interactions make hole quantum dots central to the quest for electrical spin qubit manipulation enabling fast, low-power, scalable quantum computation. Yet it is important to establish to what extent spin-orbit coupling may expose the qubit to electrical noise, facilitating decoherence. Here we show that, unlike electron spin qubits, the hole spin-3/2 leads generically to sweet spots in the dephasing rate of gate-defined hole qubits as a function of the gate electric field. At these sweet spots, the dephasing rate vanishes to first order in the perpendicular electric field, the EDSR dipole moment is maximized, and the relaxation rate can be drastically reduced by working at small magnetic fields. The existence of the sweet spots is traced to properties of the Rashba spin-orbit interaction unique to spin-3/2 systems. Our results suggest that the coherence of Ge hole spin qubits in quantum dots can be optimized at sweet spots where rapid electric control is also possible, characteristics that make hole spin qubits very favourable for scalable quantum computing.

Introduction

Quantum computing architectures require reliable initialization, robust single-qubit operations, long coherence times, and a clear pathway towards scaling up. Solid-state platforms are supported by the well developed solid-state device industry, with mature microfabrication and miniaturization technologies. Among solid-state platforms, semiconductor quantum dot (QD) spin qubits have been actively pursued, 1 with an energetic recent focus on hole spins in diamond and zincblende nano-structures. The primary motivation for this focus is the strong spin-orbit interaction of hole systems, which enables one to control qubits via electric dipole spin resonance (EDSR), making quantum computing faster, more power-efficient and easier to operate. [26][27][28][29][30][31][32][33] This is because electric fields are much easier to apply and localize than the magnetic fields used in electron spin resonance. Only a global static magnetic field is required to split the qubit levels. In addition, the p-symmetry of the hole wave function causes the contact hyperfine interaction to vanish, and no complications involving valley degrees of freedom are present. [34][35][36][37] Initial studies indicate that hole spins may possess sufficiently long coherence times for quantum computing. [38][39][40][41][42] Meanwhile, much progress has been made in the initialization and readout of hole spin qubits. 8,11,14,17,43,44

The existential question that will determine the future of hole QD spin qubits is: does the strong spin-orbit interaction that allows fast qubit operation also enhance undesired couplings to stray fields such as phonons and charge noise, leading to intractable relaxation and dephasing? In this paper, we demonstrate theoretically that this is emphatically not the case. Due to the spin-3/2 nature of holes, which sets them entirely apart from electrons, dephasing can be essentially eliminated to at least first order in the gate electric field at specific sweet spots in parameter space. 15,16,37,[45][46][47][48][49][50] At these sweet spots, electrical qubit rotations are at their most efficient, with the EDSR gate time reaching a minimum.
At the same time, the relaxation rate due to phonons can be made as small as is desired by working at small magnetic fields, of the order of 0.1 T, which can allow 10^6-10^7 operations in one relaxation time for an in-plane alternating field E_AC ~ 10^5 V/m. We argue that every gate-defined hole qubit has a sweet spot at a certain value of the gate electric field. Whereas our analysis is generically applicable to all gate-defined hole quantum dots, our focus in this paper is on Ge, which has witnessed enormous progress in the last few years. As a group-IV element, it has isotopes with zero nuclear spin and no piezoelectric phonons, while the bulk Dresselhaus spin-orbit interaction is absent. 6,7,9,10,51 Holes in planar Ge quantum wells have a very large out-of-plane Landé g-factor, g ≈ 20, enabling operation at very small magnetic fields, which would not impede coupling to a superconducting resonator. The low resistivity of Ge when contacted with metals makes coupling to other devices such as superconductors easier. 9,52,53 In the past decade, spectacular results have been reported, for example, EDSR detection techniques, 18,51 structures of quantum confinement systems, 7,54-56 the anisotropy of g-tensors, 10,54 and spin-orbit couplings and transport phenomena in two-dimensional hole systems. 4,6,10,57,58 As we shall show below, strong cubic-symmetry terms enable EDSR with ultra-short gate times by inducing a special kind of Rashba interaction, yet can still be understood within a perturbative scheme.

Hole Quantum Dot

Our focus in this work is on single dots. A prototype device, including a neighboring dot, is shown in Figure 1.

[Figure 1: A prototype double quantum dot in a two-dimensional hole gas, defined by gates B1-B3, P1, P2 and T1. The red shaded circles represent two quantum dots (QD1, QD2) confined by a set of gates. Our work is concerned with a single dot; two dots are shown to illustrate scaling-up strategies, e.g. gates B2 and T1 control inter-dot tunnelling.]

The Hamiltonian describing a single hole quantum dot has the general form H = H_LK + H_BP + H_Z + H_ph + H_conf, where H_LK represents the Luttinger-Kohn Hamiltonian, H_Z is the Zeeman interaction between the hole and an external magnetic field, and H_ph the hole-phonon interaction. H_conf is the confinement potential, including the vertical and lateral confinement. The vertical confinement is achieved by applying a gate electric field F_z in the growth direction, leading to a term eF_z z in the Hamiltonian; the lateral confinement is modelled as an in-plane parabolic potential well. The strain term H_BP is represented by the Bir-Pikus Hamiltonian, 18 reflecting the strain introduced during the fabrication of the two-dimensional hole system. A typical configuration of holes in Ge is achieved by growing a thin strained Ge layer (usually about 10 nm to 20 nm) between SiGe layers such that, if the barrier between the two layers is high enough, a quantum well can be formed. In this paper, we consider Si_x Ge_{1-x}, where x = 0.15. 4,6,7,54

To determine the effective 2 × 2 Hamiltonian for a qubit composed of HH states, we start from the bulk band structure of holes as derived by Luttinger and Kohn. 59 The spinor basis is formed by the eigenstates of J_z: {|+3/2⟩, |+1/2⟩, |−1/2⟩, |−3/2⟩}. For a 2D hole gas grown along ẑ ∥ (001), we write the Luttinger-Kohn Hamiltonian as

$$H_{\mathrm{LK}}=\begin{pmatrix} P+Q & L & M & 0\\ L^{\dagger} & P-Q & 0 & M\\ M^{\dagger} & 0 & P-Q & -L\\ 0 & M^{\dagger} & -L^{\dagger} & P+Q \end{pmatrix},$$

with

$$P=\frac{\hbar^{2}\gamma_{1}}{2m_{0}}\,(k^{2}+\hat{k}_{z}^{2}),\quad Q=\frac{\hbar^{2}\gamma_{2}}{2m_{0}}\,(k^{2}-2\hat{k}_{z}^{2}),\quad L=\frac{\sqrt{3}\,\hbar^{2}\gamma_{3}}{m_{0}}\,\hat{k}_{z}k_{+},\quad M=\frac{\sqrt{3}\,\hbar^{2}}{2m_{0}}\,(\bar{\gamma}\,k_{-}^{2}-\delta\,k_{+}^{2}),$$

where m_0 is the free electron mass and γ_1, γ_2, γ_3 are Luttinger parameters determined by the band structure.
The in-plane wave vector satisfies $k^{2}=k_{x}^{2}+k_{y}^{2}$, with $k_{\pm}=k_{x}\pm ik_{y}$, and the wave vector in the growth direction is $\hat{k}_{z}=-i\partial/\partial z$. We have also used $\bar{\gamma}=(\gamma_{2}+\gamma_{3})/2$ and $\delta=(\gamma_{3}-\gamma_{2})/2$ to simplify the algebra. In Ge $\delta/\bar{\gamma}<0.15$, hence δ can be treated perturbatively, while bulk Dresselhaus terms are absent. Although interface inversion asymmetry terms with the same functional form may exist, 60 at the strong gate fields considered here they will be overwhelmed by the Rashba interaction and are not discussed in detail.

The diagonal terms of H_BP in the HH manifold are $P_{\varepsilon}+Q_{\varepsilon}=-a_{v}(\varepsilon_{xx}+\varepsilon_{yy}+\varepsilon_{zz})$, while in the LH manifold they are $P_{\varepsilon}-Q_{\varepsilon}=-(b_{v}/2)(\varepsilon_{xx}+\varepsilon_{yy}-2\varepsilon_{zz})$, where a_v = −12 eV and b_v = −2.3 eV are deformation potential constants. 18 In our chosen configuration ε_xx = ε_yy = −0.0063; the minus sign indicates that the germanium is compressed in the xy-plane. In the ẑ-direction the Ge layer is stretched, and ε_zz = (−2C_12/C_11)ε_xx = 0.0044, with C_12 = 44 GPa and C_11 = 126 GPa for Ge. The diagonal terms of the strain-relaxed barrier configuration change the HH-LH energy splitting by a constant, which is approximately 50 meV. The growth direction provides the spin quantization axis, with the heavy-hole states lying lowest in energy.

To define a quantum dot, a series of gates is added on top of the 2D hole gas confinement, as in Figure 1, and we ultimately seek an effective Hamiltonian describing the two lowest-lying HH states in a quantum dot. Since we expect the HH-LH splitting to be much larger than the quantum dot confinement energy, we proceed with the standard assumptions of k·p theory, retaining at first only terms containing k_z, with k_x and k_y initially set to zero. This determines the approximate eigenstates ψ_{H,L}(z) corresponding to the growth direction. These are described by two variational Bastard wave functions ψ_H and ψ_L, 63,64 where the dimensionless variational parameters β_{H,L} are sensitive to the gate electric field due to the term eF_z z, and d is the width of the quantum well in the growth direction, which is an input parameter. The orthogonality of the HH and LH states is ensured by the spinors. This wave function is suitable for inversion layers as well as accumulation layers, although our focus will be primarily on the latter. We employ analogous wave functions for the first excited states, omitting the details. 65

We first project the Luttinger Hamiltonian onto the wave functions for the growth direction, which in our model comprise eight sub-bands: HH1, LH1, HH2, LH2, each with two spin projections. Carrying out a Schrieffer-Wolff transformation, we obtain the effective 2 × 2 spin-orbit coupling for a 2D hole gas, 66 which, for a system with cubic symmetry, contains two terms with different rotational properties:

$$H_{\mathrm{SO}}=i\alpha_{2}\,k_{-}^{3}\,\hat{\sigma}_{+}-i\alpha_{2}\,k_{+}^{3}\,\hat{\sigma}_{-}+i\alpha_{3}\,k_{+}k_{-}k_{+}\,\hat{\sigma}_{-}-i\alpha_{3}\,k_{-}k_{+}k_{-}\,\hat{\sigma}_{+},$$

where $\hat{\sigma}_{+}=(\sigma_{x}+i\sigma_{y})/2$ and $\hat{\sigma}_{-}=(\sigma_{x}-i\sigma_{y})/2$. In the absence of a magnetic field, the α_2-Rashba term winds around the Fermi surface three times, whereas the α_3-Rashba term winds only once. The latter term enables EDSR, as we show below. The two coefficients are functions of E_H and E_L, the energies of the lowest-lying heavy-hole and light-hole states obtained by the variational method; both are strong functions of the gate electric field.

Next, in a perpendicular magnetic field, the in-plane wave functions are found from

$$H_{\parallel}=\frac{\hbar^{2}}{2m_{p}}\left(\mathbf{k}_{\parallel}+\frac{e}{\hbar}\mathbf{A}\right)^{2}+\frac{1}{2}m_{p}\,\omega_{0}^{2}\,(x^{2}+y^{2}),$$

where m_p = m_0/(γ_1 + γ_2) is the in-plane effective mass of the heavy holes and the subscript ∥ refers to the xy-plane.
The vector potential is $\mathbf{A}=(B/2)(-y,x,0)$, ω_0 is the oscillator frequency, and a_0 is the QD radius, which satisfies $a_{0}^{2}=\hbar/(m_{p}\omega_{l})$; a magnetic field narrows the QD radius. The ground and first excited in-plane states are the corresponding Fock-Darwin states, with eigenenergies

$$\varepsilon_{n_{1},n_{2}}=\hbar\,(n_{1}+n_{2}+1)\,\omega_{l}+\tfrac{1}{2}\hbar\,(n_{2}-n_{1})\,\omega_{c},$$

where $\omega_{l}=\sqrt{\omega_{0}^{2}+\omega_{c}^{2}/4}$ and ω_c = eB/m_p is the cyclotron frequency. Finally, the hole-phonon interaction is 67-69

$$H_{\mathrm{ph}}=\sum_{\mathbf{q},s}\sqrt{\frac{\hbar}{2\rho NV_{c}\,\omega_{\mathbf{q},s}}}\;\sum_{\alpha,\beta=x,y,z}D_{\alpha\beta}\,q_{\alpha}\,e^{s}_{\beta}\left(\hat{a}_{\mathbf{q},s}\,e^{i\mathbf{q}\cdot\mathbf{r}}+\hat{a}^{\dagger}_{\mathbf{q},s}\,e^{-i\mathbf{q}\cdot\mathbf{r}}\right),$$

where q is the phonon wave vector, V_c is the unit cell volume, NV_c is the crystal volume, and e^s is the polarization direction vector. The density of the material is denoted by ρ, D_{αβ} represents the deformation potential matrix, and â† and â are the phonon creation and annihilation operators. The details of the process of reducing the above to an effective QD spin qubit Hamiltonian are presented in the Supporting Information. 35,[70][71][72][73][74]

Results and Discussion

[Figure 2: (a) Qubit Zeeman splitting. When the gate electric field is turned off, the qubit Zeeman splitting is g_0 μ_B B = 120 µeV. As the gate electric field increases, the Rashba spin-orbit coupling changes the quantum dot energy levels, leading to a sweet spot. (b) A comparison of the magnitudes of the α_2- and α_3-Rashba terms that lead to the change in the qubit Zeeman splitting. In all of these figures, we used the quantum well width d = 15 nm and the dot radius a_0 = 8 nm. The out-of-plane magnetic field is B = 0.1 T, and the confinement energy ħω_0 = 20 meV.]

We begin with the qubit Larmor frequency, which is plotted in Figures 2a and 2b as a function of the gate electric field. The qubit Zeeman splitting is determined by the bare term g_0 μ_B B together with corrections from the spin-orbit coupling coefficients, where μ_B is the Bohr magneton and g_0 is the same as the bulk g-factor in Ge, 6κ = 20.36. We note the non-monotonic behaviour as a function of gate field, which is directly related to the behaviour of the two Rashba spin-orbit coupling terms α_2 and α_3, as shown in Figure 2b, both of which contribute to the Zeeman energy. As is seen from the figures, the magnitude of the energy splitting is dominated by the α_2-Rashba term, and both the α_2- and α_3-terms have maxima at the same value of the gate field. Figure 3 shows the magnitude of the spin-orbit coupling coefficients. The magnitude of the α_2-Rashba term for a specific quantum well width is larger than that of the α_3-term, which explains the relative magnitudes of the qubit Zeeman splitting in Figure 2b. The sweet spot in the qubit Zeeman splitting (Figure 2) always coincides with the maximum in the spin-orbit coupling constants (Figure 3). However, importantly, the location of the sweet spot is different for each qubit and can vary considerably depending on the width of the quantum well; it can be calculated or determined experimentally for each qubit.

Physically, the behaviour of the qubit Zeeman splitting and Rashba coefficients is understood by recalling that the Rashba effect for the HH sub-bands is primarily driven by the off-diagonal matrix element L in Eq. 1 connecting the HH and LH sub-bands. This term, which is ∝ k_z k_+, increases with the top gate field. At small gate fields, therefore, the Rashba spin-orbit constants increase monotonically due to the increase in the k_z overlap integral. This continues until a critical top gate field is reached at which the HH-LH splittings, determined by the matrix element Q, begin to increase faster than the off-diagonal matrix element L.
This physics has been shown previously by Winkler and collaborators.75-77 Beyond this critical field the Rashba terms decrease, resulting in a relatively broad sweet spot, at which the qubit is insensitive to background electric field fluctuations in the $\hat{z}$-direction and hence the dephasing rate vanishes to first order in the $\hat{z}$-electric field. As we shall show below, electric field fluctuations in the $\hat{z}$-direction are by far the most damaging to the qubit, and are the primary source of decoherence to be avoided. The breadth and smoothness of the extremum make the tuning of the electric field to reach the sweet spot easier, as will be quantified below. The sweet spot reflects the interplay of the quadrupole degree of freedom with the gate electric field unique to spin-3/2 systems.

In all these plots, the quantum well width is $d = 15$ nm, $a_0 = 8$ nm, and the external magnetic field is $B = 0.1$ T. The cyclotron frequency is $\omega_c = 3 \times 10^{11}$ Hz, the confinement frequency is $\omega_l = 3.2 \times 10^{13}$ Hz, and the density of germanium is $\rho = 5.33 \times 10^3$ kg/m³. The phonon propagation speed along the transverse direction is $v_t = 3.57 \times 10^3$ m/s; along the longitudinal direction it is $v_l = 4.85 \times 10^3$ m/s.

For EDSR, an in-plane oscillating electric field, represented in the Hamiltonian by $eE_{AC}(t)\,x$, drives spin-conserving transitions between the QD ground state $\phi_{0\uparrow,\downarrow}$ and the first excited state $\phi_{\pm 1\uparrow,\downarrow}$. Spin flips come from the spin-orbit interaction. In a single-hole dot the Rashba term $\propto \alpha_2$, which has a winding number of three, only couples the QD ground state to the third excited state and does not give rise to EDSR. On the other hand, the Rashba term $\propto \alpha_3$ gives rise to spin-flip transitions between the ground state $\phi_{0\uparrow,\downarrow}$ and the first excited state $\phi_{\pm 1\downarrow,\uparrow}$. The combined action of the electric field term and the $\alpha_3$-Rashba term is a second-order process resulting in a spin flip in the ground state, namely EDSR. For a multiply occupied hole dot the excited-state structure may be more complex, but the argument above remains valid because the $\alpha_2$ and $\alpha_3$ Rashba terms couple the ground state to different excited states.

The EDSR Rabi time describes the time taken to accomplish an operation. The EDSR Rabi frequency, expanded to first order in the magnetic field $B$, is given in the Supporting Information; the in-plane electric field $E_{AC}$ is set to $10^5$ V/m. The EDSR Rabi frequency can be tuned by changing the gate electric field and with it the Rashba spin-orbit coupling constant. Note, however, that, because the two Rashba terms directly determine the correction to the g-factor, the Rashba interaction and the g-factor cannot be tuned independently at present.

Next, we discuss qubit relaxation. Hyperfine interactions and phonon-hole interactions are the two major factors affecting the relaxation time, hence the quality of the qubit. However, the p-type symmetry of the valence band excludes the contact hyperfine interaction. There is no bulk inversion asymmetry in group IV elements, and therefore no Dresselhaus spin-orbit coupling. However, there is still the Rashba spin-orbit coupling due to the structure inversion asymmetry, which couples the heavy-hole states to the light-hole states. Neither the spin nor the orbital angular momentum is a good quantum number, as the admixture of the spin-down and spin-up states modifies the wave functions of Eqs. 7 and 8. We emphasize that, whereas EDSR comes only from the $\alpha_3$-Rashba term, the qubit relaxation is caused by both the $\alpha_2$- and the $\alpha_3$-Rashba terms.
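Before quoting the computed relaxation times, it is worth sketching how the golden-rule machinery assembles. The snippet below is schematic scaffolding, not the paper's calculation: the spin-phonon matrix element is a hypothetical placeholder power law $M_0 (q a_0)^p$, introduced only to expose how the phonon density of states at the Zeeman energy drives the strong $B$-dependence.

```python
import numpy as np

# Schematic assembly of a one-phonon golden-rule rate,
#   Gamma = (2*pi/hbar) * |M(q)|^2 * rho(E),
# evaluated at the qubit Zeeman energy E = g*mu_B*B (so omega = E/hbar and
# q = omega/v for an acoustic branch).  The spin-phonon matrix element is a
# hypothetical placeholder power law M(q) = M0*(q*a0)**p, used only to show
# how the B-field scaling assembles; this is scaffolding, not the paper's
# calculation (which yields 1/T1 ~ B^7 and B^9 for alpha_3 and alpha_2).

hbar = 1.054571817e-34
mu_B = 9.2740100783e-24

g = 20.36        # bulk g-factor quoted in the text
v = 3.57e3       # transverse sound speed (m/s), from the text
a0 = 8e-9        # dot radius (m)

def rate(B, p, M0=1.0):
    """Golden-rule rate up to an overall constant; only the B-scaling matters."""
    omega = g * mu_B * B / hbar          # phonon frequency matching the splitting
    q = omega / v
    dos = omega**2 / (2 * np.pi**2 * v**3 * hbar)   # 3D acoustic DOS per energy
    return 2 * np.pi / hbar * (M0 * (q * a0)**p)**2 * dos

for p in (0, 1, 2):
    exponent = np.log2(rate(0.2, p) / rate(0.1, p))  # slope of log(rate) vs log(B)
    print(f"M ~ q^{p}: rate scales as B^{exponent:.1f}")
# Each extra power of q in M adds B^2 to the rate (output: B^2, B^4, B^6);
# the paper's B^7 and B^9 laws arise once the spin-orbit admixture and dipole
# matrix elements supply the remaining powers of q and B.
```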
Using Fermi's golden rule, we can evaluate the relaxation time of the QD, shown in Figure 4. For completeness, we also consider two-phonon relaxation processes, which involve virtual emission and absorption of a phonon between two heavy-hole states, since in the first-order relaxation calculation there is no direct matrix element between the two heavy-hole states (see Supporting Information for a detailed explanation). However, the two-phonon calculation returns a negligible relaxation rate, which does not contribute significantly to the relaxation time. The relaxation rate depends on the external magnetic field as $1/T_1 \propto B^7$ for the $\alpha_3$-Rashba term and $1/T_1 \propto B^9$ for the $\alpha_2$-Rashba term. This is shown in Figure 4a. We also plot the ratio between the relaxation time and the EDSR time: both depend on the spin-orbit coupling coefficients, and therefore their extrema coincide. From Figure 4a we can see that the Ge hole quantum dot has a long relaxation time at dilution refrigerator temperatures.

It is also useful to study the relaxation time at slightly higher temperatures, e.g. 4 K, at which both phonon absorption and emission must be taken into account. The phonon occupation number is given by the Bose-Einstein distribution $N = 1/(e^{\hbar\omega/k_B T} - 1)$, where $N$ is the occupation number, $\omega = qv$ with $q$ the phonon wave vector and $v$ the phonon propagation velocity, $T$ is the temperature, and $k_B$ is the Boltzmann constant. More details can be found in the Supporting Information, where a plot of the temperature dependence of the relaxation rate is presented in Figure 8. For $T = 4$ K, the relaxation time is 56 ms, suggesting that the qubit can easily be operated at this temperature.

Finally, we focus on dephasing, for which the main mechanism is provided by fluctuating electrical fields such as charge noise. We focus on random telegraph noise (RTN) due to charge defects, noting that a similar discussion can be presented for 1/f noise, which is typically caused by an incoherent superposition of RTN sources. For this reason, we expect the trends for the two types of noise to be similar, while reliable numbers for 1/f noise must await experimental determination of the noise spectral density $S(\omega)$ for hole qubits. To begin with, we estimate the dephasing time $T_2^*$, which is expected to be primarily determined by fluctuations in the Larmor frequency of the qubit stemming from fluctuations in the spin-orbit coupling constants $\alpha_2$ and $\alpha_3$ induced by charge noise. The electric potential induced at the qubit by a defect located at $r_D$, which may give rise to RTN, can be modelled as a quasi-2D screened Coulomb potential, where $\epsilon_0$ is the vacuum permittivity, $\epsilon_r$ is the relative permittivity of Ge, $q_{TF}$ is the Thomas-Fermi wave vector, and $k_F$ is the Fermi wave vector.45 In a dilution refrigerator the high-energy modes of the Coulomb potential are negligible, therefore the $q > 2k_F$ part is ignored. Another source of dephasing is dipole defects due to the asymmetry in bond polarities, modelled as an unscreened dipole potential, where $R_D$ is the distance between the dot and the unscreened charge dipole and $p = el$ is the dipole moment, with a dipole size $l$ of about 1 Å.
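For orientation, the scale of the dipole-defect perturbation quoted above can be checked with the textbook unscreened dipole potential; the relative permittivity $\epsilon_r \approx 16$ for Ge is our assumption here, and the angular factor is taken maximal.

```python
import numpy as np

# Order-of-magnitude check of the dipole-defect perturbation: the potential
# energy of a charge e at distance R from an unscreened dipole is
#   U = e*p / (4*pi*eps0*eps_r*R^2)
# at the maximal angular factor, with p = e*l and l ~ 1 Angstrom as in the text.
# The relative permittivity eps_r ~ 16 for Ge is our assumption.

e = 1.602176634e-19
eps0 = 8.8541878128e-12
eps_r = 16.0          # assumed Ge relative permittivity
l = 1e-10             # dipole size ~ 1 Angstrom (from the text)
R_D = 20e-9           # dot-defect distance (from the text)

p = e * l
U = e * p / (4 * np.pi * eps0 * eps_r * R_D**2)
print(f"|U| ~ {U / e * 1e6:.0f} micro-eV")   # ~ 20 micro-eV
# Switching of such a defect shifts the dot levels by tens of micro-eV at most;
# dephasing is governed by the far smaller differential shift of the two qubit
# levels through the field dependence of alpha_2 and alpha_3.
```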
As a worst-case estimate of the dephasing time we use the motional-narrowing result,45,46 $1/T_2^* = (\delta\omega)^2\,\tau/2$, where $\delta\omega$ is the change in qubit Larmor frequency due to the fluctuator, and we consider $\tau = 10^3\, t_{Rabi}$, where $t_{Rabi}$ is the single-qubit operation time (the inverse of the EDSR frequency), which can be read off Figure 4b. Because of the weak coupling between the spin degree of freedom and external reservoirs, slower fluctuators can be eliminated via pulse sequences and spin echo techniques.78 We consider a defect located 20 nm away from the quantum dot in the plane of the dot as a worst-case scenario. Here we used $r_D = 20$ nm, since regions inside this range will be depleted by the top gate and defects there will not be active; the dipole defect is right under the gate, and we assume $R_D = 20$ nm in the $\hat{z}$-direction. The sweet spot is at $F = 1.3 \times 10^7$ V/m.

To estimate the pure dephasing time at the sweet spot due to such a defect, we first note that the in-plane electric field will not contribute to dephasing. An in-plane electric field enters the QD Hamiltonian as $e\mathbf{E}_\parallel \cdot \mathbf{r}_\parallel$. This term does not couple states with different spin orientations. When we consider the qubit Zeeman splitting, the corrections to the effective quantum dot levels due to the in-plane electric field read the same for $H_{1,1}$ and $H_{2,2}$ up to second order; therefore fluctuations in the qubit Zeeman splitting $H_{1,1} - H_{2,2}$ do not depend on the in-plane electric field. A detailed calculation can be found in the Supporting Information. However, higher-order terms in the expansion of the electrostatic potential of the defect will lead to dephasing, and these are responsible for dephasing at the sweet spot itself. To determine their effect, we write the ground-state energy as $E = E_0 + E_z + v_0$, where $E_0$ is the lateral confinement energy, $E_z$ is the Zeeman energy, and $v_0$ is the energy correction due to the defect.

We would like to estimate the approximate qubit window of operation around the sweet spot. Away from the sweet spot, due to the fluctuating electric potential of the defect, the energy levels of the quantum dot gain a correction, and there are fluctuations in $\alpha_2$ and $\alpha_3$ due to the $\hat{z}$-electric field of the defect, given by $F_z = dU/dz$. This fluctuation $F_z$ affects the Bastard wave functions, changing the HH-LH energy splitting and the spin-orbit coupling constants according to Eqs. 4 and 5. A plot of the variational parameters as a function of the gate electric field is given in the Supporting Information. Diagrammatically, we can read off the gradient of the variational parameters $\beta_H$ or $\beta_L$ as a function of the gate electric field, and estimate the fluctuations in $\beta$ as $\delta\beta_H = (\partial\beta_H/\partial F)\,\delta F_z$ and $\delta\beta_L = (\partial\beta_L/\partial F)\,\delta F_z$. Consequently, the spin-orbit coupling constants also gain a correction via the fluctuations in the variational parameters: $\delta\alpha = (\partial\alpha/\partial\beta_H)\,\delta\beta_H + (\partial\alpha/\partial\beta_L)\,\delta\beta_L$, where $\alpha \in \{\alpha_2, \alpha_3\}$. With these assumptions, the dephasing time is plotted as a function of the gate electric field in Figure 5a. At the sweet spot the dephasing time due to out-of-plane fluctuations is calculated to second order, since the first-order fluctuation vanishes, and the in-plane fluctuations dominate the dephasing. Away from the sweet spot, the motional-narrowing result is much smaller than the quasi-static limit result; a compact numerical sketch of the two limits follows.
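The sketch below evaluates the two dephasing estimates defined above, $1/T_2^* = (\delta\omega)^2\,\tau/2$ with $\tau = 10^3 t_{Rabi}$, and $T_2 = 2\pi/\delta\omega$. The frequency excursions and Rabi time used are illustrative placeholders, not values from the paper.

```python
import numpy as np

# The two dephasing estimates used above, evaluated side by side:
#   motional narrowing: 1/T2* = (delta_omega)^2 * tau / 2, tau = 1e3 * t_Rabi
#   quasi-static:       T2 = 2*pi / delta_omega
# delta_f (Larmor-frequency excursion) and t_Rabi are illustrative placeholders,
# not values from the paper.

def t2_motional(delta_f, t_rabi, n_rabi=1e3):
    delta_omega = 2 * np.pi * delta_f
    tau = n_rabi * t_rabi                # fluctuator correlation time
    return 2.0 / (delta_omega**2 * tau)

def t2_quasistatic(delta_f):
    return 1.0 / delta_f                 # equals 2*pi / (2*pi*delta_f)

for df in (1e4, 1e5, 1e6):               # Hz; shrinks toward the sweet spot
    print(f"delta_f = {df:.0e} Hz: T2*(MN) = {t2_motional(df, 1e-7):.2e} s, "
          f"T2(QS) = {t2_quasistatic(df):.2e} s")
# Away from the sweet spot (large delta_f) the motional-narrowing T2* is the
# shorter of the two, while as delta_f -> 0 it grows like 1/delta_f^2, faster
# than the quasi-static 1/delta_f, reproducing the crossover described above.
```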
The reason is that the first-order variation of the qubit Zeeman splitting shortens the correlation time, while the quasi-static limit does not consider any correlations. However, as the gate electric field approaches the sweet spot, the variation of the qubit Zeeman splitting becomes smaller; at the sweet spot, compared with the quasi-static limit, the longer correlation time leads to a larger dephasing time. We also determine the pure dephasing time in the quasi-static limit, where the switching time is the longest time scale in the system. This is essentially given by $T_2 = 2\pi/\delta\omega$, and is plotted in Figure 5b.

Figure 5: (a) Dephasing time from the motional-narrowing estimate. (b) Dephasing time in the quasi-static limit. In both plots, the quantum well width is $d = 15$ nm and the size of the quantum dot is $a_0 = 8$ nm. We considered both the in-plane fluctuations due to the screened potential and the out-of-plane fluctuations due to the screened potential and dipole defects.

Although we have used a simple parabolic model for the in-plane QD confinement, our conclusions are very general. Firstly, the dephasing sweet spot will be present for potentials of arbitrary complexity (for example hut wire geometries),7,51,52 since it is due to the fundamental HH-LH interplay that gives rise to the Rashba spin-orbit coupling in the HH manifold. Secondly, we have examined the possibility that the insensitivity of the g-factor to in-plane electric fields is an artefact of the model. We have tested three deviations from parabolicity and found that none of them exposes the qubit to dephasing by fluctuating in-plane electric fields. This implies (i) that the dot does not have to be perfectly parabolic, allowing for some flexibility in the gate structure; (ii) that in-plane electric field fluctuations generally have a negligible effect on the g-factor, while out-of-plane electric field fluctuations cause fluctuations in the Rashba spin-orbit coupling and affect the g-factor, so that it is most important to avoid the effect of the out-of-plane field; and (iii) that dephasing at the sweet spot itself comes about primarily from higher-order terms in the electrical potential, i.e. electrical quadrupole and higher.

We expect our results to hold qualitatively in Si as well, where the spin-orbit interaction is weaker than in Ge, while $\delta$ is larger. However, the large $\delta$ and frequent failure of the Schrieffer-Wolff approximation in Si call for a fully numerical treatment.64 In GaAs, the hyperfine interaction is a source of dephasing that cannot be tuned away by the gate, albeit much weaker for holes than for electrons. Experimentally, the configuration we describe requires a double-gated device with separate plunger gates and barrier gates, allowing the number density and the gate electric field (and spin-orbit coupling) to be controlled independently.54 The numerical estimates above suggest that, in general, a smooth and broad sweet spot will enable the Ge hole qubit to operate insensitively to charge noise within a large range of gate electric fields accessible to experiment. Exchange-based two-qubit gates are expected to be possible for hole QDs, and their speed depends on the values of exchange obtained, which is expected to be tunable by gates. Moreover, the absence of the valley degree of freedom in hole systems is likely to simplify the coupling between two qubits.
However, the setup discussed here is not well optimized for long-distance two-qubit coupling: the two-qubit gate time is of the order of microseconds for dipole-dipole interactions and hundreds of microseconds for circuit QED, limited by the Ge Luttinger parameters. These gates can be sped up by using strain to enhance the spin-orbit interaction, but we defer that discussion to a future publication.

Summary

We have demonstrated that electrostatically defined hole quantum dot spin qubits naturally exhibit a sweet spot at which sensitivity to charge noise is minimized while the speed of electrical operation is maximized. The location of the sweet spot can be determined from the width of the quantum well and the applied strain tensor. Relaxation times are long even at 4 K, while dephasing is determined by higher-order terms in the expansion of the electrostatic potential due to charge defects, but is expected to allow for a large window of operation around the sweet spot. Our results provide a theoretical guideline for achieving fast, highly coherent, low-power electrically operated spin qubits experimentally. Future studies must consider in-plane magnetic fields, which interact much more weakly with HH spins and are considerably more complicated to treat theoretically.

Supporting Information Available

The following files are available free of charge.

Strain terms, Bir-Pikus Hamiltonian

The Bir-Pikus Hamiltonian contains the terms $P_\varepsilon$ and $Q_\varepsilon$ given in the main text, where $a_v$, $b_v$, $d$ are deformation potential constants and $\varepsilon_{ij}$ are components of the strain tensor. In our case $\varepsilon_{xy} = \varepsilon_{yz} = \varepsilon_{zx} = 0$, therefore only diagonal matrix elements are present. To calculate the magnitude of the strain we use Vegard's law, $a_{\mathrm{Si}_{1-x}\mathrm{Ge}_x} = (1-x)\,a_{\mathrm{Si}} + x\,a_{\mathrm{Ge}}$, where $a$ is the lattice constant; for Si, $a_{\mathrm{Si}} = 0.543$ nm, and for Ge, $a_{\mathrm{Ge}} = 0.566$ nm. To match the Ge layer on top of the Si$_{0.15}$Ge$_{0.85}$ buffer, the in-plane lattice constant of the Ge layer must equal $a_{\mathrm{Si}_{0.15}\mathrm{Ge}_{0.85}}$; therefore the compressive strain of the Ge in the $xy$-plane is $\varepsilon_{xx} = \varepsilon_{yy} = -0.0063$. Due to the compression in the $xy$-plane, the Ge layer expands in the $z$-direction; the tensile strain due to this expansion follows from Poisson's ratio, $\varepsilon_{zz} = (-2C_{12}/C_{11})\,\varepsilon_{xx} = 0.0044$.

Energy splitting between heavy holes and light holes

In this section we first plot the variational parameters $\beta_H$ and $\beta_L$ used in Eq. 2, then the energy splitting between the heavy-hole state (HH) and the light-hole state (LH). The variational parameters are evaluated by minimizing the expectation value of the Hamiltonian in the states $\psi_H$ and $\psi_L$, where the subscript can be either $h$ for the heavy-hole state or $l$ for the light-hole state. The resulting energy is a function of the applied gate electric field $F$, the quantum well width $d$, and the effective masses $m_h$ and $m_l$, obtained from fitting the band diagram, which can be described by the Luttinger parameters. For germanium we have $\gamma_1 = 13.18$, $\gamma_2 = 4.24$, $\gamma_3 = 5.69$, which give the growth-direction effective masses $m_h = m_0/(\gamma_1 - 2\gamma_2)$ and $m_l = m_0/(\gamma_1 + 2\gamma_2)$, where $m_0$ is the bare electron mass. By minimising Eqs. 15 and 16, we obtain the variational parameters $\beta_h$ and $\beta_l$ (Figure 6a), as well as the HH-LH energy splitting (Figure 6b), as a function of the gate electric field.

Spin-orbit coupling coefficients

In this section we derive the spin-orbit coupling coefficients. To this end we first project the Luttinger Hamiltonian onto the zero-node HH and LH states $\psi^0$, where the superscript denotes zero-node wave functions.
Here, as in the main text, the in-plane wave vector obeys $k^2 = k_x^2 + k_y^2$ with $k_\pm = k_x \pm i k_y$, and the wave vector in the growth direction is $\hat{k}_z = -i\,\partial/\partial z$. Now we include the contribution from the one-node HH and LH states $\psi^1_H$ and $\psi^1_L$ (each with two spin projections), where the superscript 1 denotes one-node wave functions. To find the one-node HH and LH wave functions we use the Gram-Schmidt process: first we define a set of orthogonal basis functions to span the excited states, where $\beta_n$ denotes the variational parameter for the corresponding energy level; in our calculations we only consider $\beta^1_h$ and $\beta^1_l$, the variational parameters of the one-node HH and LH states, which are solved for by minimizing the expectation value of the Hamiltonian in the one-node states. The one-node states can then be constructed as $\psi^1 = u_2 - \langle \phi^1 | u_2 \rangle\, \phi^1$, up to normalization.

We project the Luttinger Hamiltonian onto the basis $\psi^0_{HH1\uparrow}, \psi^0_{HH1\downarrow}, \psi^0_{LH1\uparrow}, \psi^0_{LH1\downarrow}$. To obtain an effective Hamiltonian between the two zero-node HH states, we apply the Schrieffer-Wolff transformation to the 8-band Luttinger Hamiltonian; the off-diagonal element $H_{1,2}$ then yields the expressions for the $\alpha_3$-Rashba and $\alpha_2$-Rashba terms. We plot the spin-orbit coupling coefficients with contributions from the zero-node states and the one-node states in Figure 7. We note that including the one-node wave functions does not change the sweet spot for a given quantum well width, because the energy splitting between the one-node and zero-node states is very large.

EDSR Rabi time

In this section we derive the EDSR Rabi time. The in-plane wave functions are the Fock-Darwin wave functions. The EDSR Hamiltonian includes two terms, the spin-orbit coupling and the in-plane driving field, where $a_0$ is the size of the quantum dot and $\alpha_3$ is the spin-orbit coupling constant defined in Section 2. Considering all contributions and performing the Schrieffer-Wolff transformation, we can write down the effective quantum dot matrix element $H_{1,2}$; its complex conjugate gives $H_{2,1}$, from which the effective $2 \times 2$ EDSR Hamiltonian follows. Expanding $\Omega_{EDSR}$ to first order and converting it to a frequency, we obtain the Rabi frequency, where $E_{AC} = 10^5$ V/m is the in-plane electric field, $g_0$ is the bulk g-factor of germanium, and $m_p = m_0/(\gamma_1 + \gamma_2) = 0.057\, m_0$ is the in-plane effective mass.

Relaxation time

In this section we evaluate the relaxation time. We first calculate the first-order contribution at a dilution refrigerator temperature of $T = 100$ mK. Then we discuss relaxation due to second-order processes. Finally, we calculate the relaxation time at $T = 4$ K, and a plot is given to illustrate the relation between the relaxation rate and the temperature. To calculate the first-order contributions we need the transition matrix elements describing the emission of a phonon, and we use Fermi's golden rule to find the transition rate. The hole-phonon interaction Hamiltonian is written as a sum over the three polarization directions, and for each polarization direction the phonon matrix elements read differently; for example, in the longitudinal direction, $a$, $b$, $d$ are the deformation constants, $N$ is the number of unit cells, $V_c$ is the unit cell volume, and $\rho$ is the density of germanium. We can then project the phonon Hamiltonian onto our in-plane quantum dot Hamiltonian.
To calculate the relaxation rate we use Fermi's golden rule; the delta function indicates that the relaxation is completed by emitting a phonon. To find the effective transition matrix elements we use the Schrieffer-Wolff transformation and substitute the effective matrix element into the transition rate, where $E_1$, $E_2$, $E_3$, $E_6$ are the eigenenergies of the unperturbed in-plane QD Hamiltonian, i.e., the Fock-Darwin Hamiltonian. We then repeat this process for the other two polarization directions and for the $\alpha_2$-Rashba terms.

Now we consider the second-order contribution to the relaxation rate, which includes virtual emission and absorption between the heavy-hole state and the light-hole state. We start from time-dependent perturbation theory, where $V_{n,m}$ are the phonon matrix elements in Eq. 30, now including their time dependence, and $C(q)$ are the time-independent parts. We then substitute the occupation numbers and phonon-hole interactions into the transition rate and integrate over momentum space. For example, consider the longitudinal polarization and denote the two heavy-hole energy levels by $E_1$ and $E_2$, and the two virtual energy levels by $E_3$ and $E_4$. One possible second-order process consists of a heavy-hole state at $E_1$ undergoing virtual emission and absorption before returning to its original energy level $E_1$. Writing down the transition probability for this process in the longitudinal direction, and repeating the calculation for the other polarizations, we evaluate the second-order contribution to the relaxation time. As expected, the second-order contribution to the relaxation time is small.

To test the quality of the qubit, we also calculate the contribution of the first-order process to the relaxation rate at $T = 4$ K. At higher temperatures both the absorption and emission processes become important. The logic of the calculation at higher temperature is similar to that at $T = 100$ mK: we evaluate the occupation numbers for both absorption and emission. As an example, for the longitudinal polarization we project the phonon-hole Hamiltonian onto the ground and first excited QD states; the matrix elements for the other directions are obtained similarly. We then substitute the new matrix elements into Fermi's golden rule and evaluate the new relaxation time. A plot of the temperature dependence of the relaxation rate is presented in Figure 8. The relaxation rate increases with temperature, because a higher temperature increases the probability of absorbing a phonon, which weakens the coherence of the system.

In-plane electric field

In this section we demonstrate why the in-plane electric field cannot contribute to the qubit Zeeman splitting. First, note that the qubit Zeeman splitting is the difference between two effective QD energy levels; therefore, if the in-plane electric field makes the same contribution to both effective QD energy levels, the qubit Zeeman splitting is unchanged. Now consider an in-plane electric field $\mathbf{E}_\parallel$ (for example, the in-plane field $E_{AC}$ in the EDSR Hamiltonian), which leads to a term $e\mathbf{E}_\parallel \cdot \mathbf{r}_\parallel$, where the subscript $\parallel$ denotes the in-plane components. The projections of this term onto the quantum dot levels (considering both the $\alpha_2$- and $\alpha_3$-Rashba terms, there are 20 quantum dot levels) are evaluated with the $\phi$ taken to be Fock-Darwin states.
Applying the Schrieffer-Wolff transformation to the in-plane electric field term, the corrections it produces to the two effective quantum dot levels are identical; therefore, when we evaluate the qubit Zeeman splitting, the corrections due to the in-plane electric field cancel out. Thus the in-plane electric field does not contribute to fluctuations in the qubit Zeeman splitting and leads to no dephasing at this order; dephasing can, however, arise from higher-order terms in the in-plane electric field.

Distortion of the parabolic confinement

In experiments it is hard to establish the perfect parabolic confinement described in Eq. 6. To describe distortions of the parabolic confinement, we study three perturbation models: a cubic, a quadratic, and a linear distortion term. For each distortion model we set the perturbation parameters $\lambda$, $\delta$, $\xi$ to satisfy $2V/(m_p \omega_0^2 a_0^2) = 0.1$. If we consider regions close to the quantum dot, the energy correction due to the distortion, $\langle\phi|V|\phi\rangle$, is small; we can treat the distortion as off-diagonal terms and use the Schrieffer-Wolff transformation to evaluate the correction to the quantum dot energy levels. In this regime the corrections to the first two effective quantum dot levels are the same, so there is no change in the qubit Zeeman splitting. However, if we consider a larger region, the energy correction due to the distortion becomes comparable to the confinement energy $\hbar\omega_0$, i.e., the new quantum dot energy levels read $E = E_0 + E_z + \langle\phi_i|V|\phi_i\rangle$. For example, when the quantum well width is $d = 15$ nm and $F = 1.3 \times 10^7$ V/m, the first model (cubic term) changes the qubit Zeeman splitting by 1.5%, the second model (quadratic term) by 0.46%, and the third model (linear term) by 0.24%.

A different quantum well width

In this section we discuss the effect of the quantum well width on the sweet spot. In our earlier calculations we used the quantum well width $d = 15$ nm. Here we produce the corresponding plots for $d = 9$ nm, with the strain terms the same as before. In Figure 9a, due to the smaller quantum well width, the variational parameters are smaller. As we decrease the quantum well width, the energy splitting (Figure 9b) becomes larger.

Figure 9: (a) Variational parameters as a function of the gate electric field. (b) Energy splitting as a function of the gate electric field. In both plots we use a quantum well width of $d = 9$ nm and a quantum dot size of $a_0 = 7$ nm.

As shown in Figure 10, decreasing the quantum well width shifts the sweet spot to a higher gate electric field, which implies that for each quantum well width the location of the sweet spot, and with it the qubit Larmor frequency, changes significantly.

Figure 10: (a) Qubit Zeeman splitting. When the gate electric field is turned off, the qubit Zeeman splitting is $g_0 \mu_B B = 120$ µeV. As the gate electric field increases, the Rashba spin-orbit coupling changes the quantum dot energy levels, leading to a sweet spot. (b) A comparison of the magnitudes of the $\alpha_2$- and $\alpha_3$-Rashba terms that lead to the change in the qubit Zeeman splitting. In both plots we used a quantum well width of $d = 9$ nm and a quantum dot size of $a_0 = 7$ nm. Note that the sweet spot moves to a higher gate field compared with the $d = 15$ nm configuration: it now lies at about 45 MV/m, and the confinement energy is $\hbar\omega_0 = 27$ meV.
The change in the qubit Zeeman splitting as a function of the gate electric field follows the change in the Rashba spin-orbit coupling coefficients, as shown in Figure 11.

Figure 11: Spin-orbit coupling coefficients for a quantum well width of $d = 9$ nm and a quantum dot size of $a_0 = 7$ nm.

We also report the change in the relaxation time, the EDSR Rabi time, and the allowable number of single-qubit operations in Figure 12. As can be seen, the relaxation time becomes larger as we decrease the size of the quantum dot; however, the EDSR Rabi time also becomes larger, due to the smaller spin-orbit couplings and smaller dot size. For the dephasing time of the system, we expect a larger dephasing time for a smaller confinement width (provided that the dot size does not vary too much), because the relative fluctuations in the $\alpha_2$- and $\alpha_3$-Rashba coefficients become smaller. In all these plots we use a quantum well width of $d = 9$ nm and a quantum dot size of $a_0 = 7$ nm.
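As a compact closing check of the strain values used throughout, the Vegard's-law recipe of the strain subsection above can be reproduced in a few lines; the inputs are the three-digit lattice constants and the elastic constants quoted in the text.

```python
# Compact check of the strain values used throughout, following the
# Vegard's-law recipe of the strain subsection: the Ge layer is matched
# in-plane to a relaxed Si0.15Ge0.85 buffer, and the out-of-plane strain
# follows from the elastic constants.

a_Si, a_Ge = 0.543, 0.566        # lattice constants (nm), from the text
x = 0.85                         # Ge fraction of the buffer
C11, C12 = 126.0, 44.0           # Ge elastic constants (GPa), from the text

a_buffer = (1 - x) * a_Si + x * a_Ge      # Vegard's law
eps_xx = (a_buffer - a_Ge) / a_Ge         # in-plane (biaxial) strain
eps_zz = (-2 * C12 / C11) * eps_xx        # tetragonal (Poisson-like) response

print(f"eps_xx = {eps_xx:.4f}, eps_zz = {eps_zz:.4f}")
# Three-digit lattice constants give eps_xx ~ -0.0061 and eps_zz ~ +0.0043;
# the text quotes -0.0063 and +0.0044, from more precise input values.
```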
Study on the Interference of Multi-point Shaped Charge Structure on Jet

In order to improve the defense ability of reactive armor against shaped charge jets, a multi-point shaped charge structure is designed into the first layer of traditional explosive reactive armor, and an active-passive combination method is used to interfere with the jet. The action process is as follows: after the signal of the tandem warhead's charge impacting the explosive reactive armor is obtained, the shaped charges are actively detonated, forming multiple damage elements that actively interfere with the shaped charge jet. Based on LS-DYNA, a numerical simulation of the multi-point shaped charge structure is carried out, and the charge structure and angle are optimized. The results show that, by adjusting the charge structure, the damage elements travelling in different directions can be focused along the axis of the incoming jet, and changing the focal length of the converging jet can cause interference with, or fracture of, the incoming jet at different positions. The results of this paper can provide a reference for the design of active-passive reactive armor.

Introduction

Before 1960, tanks relied on protective armor realized through the use of different materials and structural designs. With the progress of technology, traditional protective armor became insufficient, prompting extensive research and the concept of new protective armor. In 1974, M. Held [1] put forward the concept of explosive reactive armor: two steel plates are added at both ends of an explosive layer, and when a jet penetrates the reactive armor, the armor detonates first; the generated detonation wave drives the steel plates, affecting the jet and making it slow down, bend, or fracture. Tong Zongbao et al. [2] studied the problem of the small opening produced by the front-stage shaped charge of a tandem warhead and proposed an M-shaped liner. Li Yongsheng et al. [3] optimized the design of an annular cutter liner; through numerical simulation, the annular cutting effects of copper, tungsten, and iron liners on a 100 mm thick homogeneous target were compared. Cao Tao et al. [4] studied the influence of liner shape on the performance of a lateral annular jet by numerical simulation, setting up four groups of liners with different parameters. The results were: 1. the jet shape formed by a conical liner is better than that of an elliptical liner; 2. the jet formed by a variable-wall-thickness liner with a 60° cone angle has the maximum head velocity. Wu Haijun et al. [5] simulated the formation and penetration process of an annular jet under multi-point synchronous initiation and found that the penetration ability of the annular jet increases with the number of initiation points. Xue Zhen et al. [6] studied the jet convergence performance of an annular shaped charge with a deflection angle and found that, using a liner with a deflection angle, the jet can converge in the axial direction and its head velocity can be improved.

Building on traditional reactive armor, a surface-jet converging structure is proposed here for armor protection. In this design, the charge in the box detonates at the moment the front charge impacts the outer shielding plate. Under the action of the detonation wave, a deflected liner is used to form a surface jet that converges on the central axis, so as to cut the main jet of the penetrator.
The numerical calculation model is established in LS-DYNA to analyze the influence of the multi-point shaped charge structure on the jet-interference effect under different structures and angles.

Basic assumptions

In the calculation model of the multi-point shaped charge structure studied in this paper, the initiation system is not modelled; an idealized initiation surface is imposed at the front end of the shaped charge in the form of coordinate points. The influence of temperature on the formation, convergence, and penetration of the jet is likewise neglected.

Calculation model

The multi-point shaped charge problem involves large material deformations and fluid-structure coupling, so the multi-material ALE (arbitrary Lagrangian-Eulerian) method and a moving mesh are used in this study. The structure of the multi-point shaped charge used in the calculation is shown in Figure 1. A conical liner with a top cone angle of 40° was used. The deflection angle of the liner was 15° and its wall thickness was 1.6 mm. The outer diameter of the large end of the liner was 48 mm and the inner diameter of the small end was 20 mm. The wall thicknesses of the inner and outer shells were 1.2 mm and 1 mm respectively, and the charge height was 20 mm. The material parameters are listed in Table 1, where ρ is the density of the liner material, G the shear modulus, A, B, n, c, m material constants, and Tm the melting temperature of the material.

Formation process of converging jet

After initiation, the multi-point shaped charge structure first forms a cone-shaped jet, which moves and stretches along the direction of the cone generatrix while converging toward the axis, forming the damage element that interferes with the incoming shaped charge. Figure 2 shows the action process of the multi-point shaped charge structure, from the collapse of the liner to the formation of the jet, as it moves along the axis. As can be seen from Figure 2, the charge detonates during 0-12 μs, the liner is squeezed inward during 13-40 μs, and the jet forms during 41-80 μs while converging toward the axis. By about 80 μs the jet is essentially formed and stable; thereafter it is gradually elongated by the velocity gradient.

Influence of liner cone angle on convergent jet shape

Building on the simulation of the jet formation process, and changing only the cone angle of the liner, charge structures with cone angles α of 40° and 50° were simulated. Figure 3 shows cross sections of the jet convergence process at the two cone angles.

Figure 3: Influence of the liner cone angle on the jet shape.

It can be seen from Figure 3 that the diameter of the jet increases with the cone angle of the liner, because a larger cone angle produces a shorter, thicker metal jet. The shape and velocity of the jet are also affected by the cone angle: the jet formed by the 50° liner loses its shape due to expansion, resulting in jet instability, whereas the 40° jet has a better shape and higher armor-breaking power.

Test plan

In order to further verify the influence of the multi-point shaped charge structure on jet interference, a static armor-breaking test was carried out. Liners with cone angles of 40° and 50° were fabricated, and annular initiation was carried out with electric detonators and Composition B explosive. Figure 4 shows the layout of the test site.
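An aside on the liner material model referenced with Table 1 above: the constants ρ, G, A, B, n, c, m and Tm are characteristic of a Johnson-Cook-type strength model, commonly used for shaped-charge liner materials in LS-DYNA. The sketch below shows the usual form of the flow stress; the parameter values are illustrative ones for OFHC copper, not values taken from this paper.

```python
import math

# Johnson-Cook-type flow stress, the model whose constants A, B, n, c, m and
# Tm are listed with Table 1:
#   sigma = (A + B*eps_p^n) * (1 + c*ln(eps_dot/eps_dot0)) * (1 - T*^m)
# Parameter values below are illustrative for OFHC copper, not from the paper.

def johnson_cook_stress(eps_p, eps_dot, T,
                        A=90e6, B=292e6, n=0.31, c=0.025, m=1.09,
                        eps_dot0=1.0, T_room=293.0, T_melt=1356.0):
    """Flow stress in Pa for plastic strain eps_p, strain rate eps_dot (1/s), T (K)."""
    strain_term = A + B * eps_p**n
    rate_term = 1.0 + c * math.log(max(eps_dot / eps_dot0, 1.0))
    T_star = (T - T_room) / (T_melt - T_room)        # homologous temperature
    thermal_term = 1.0 - min(max(T_star, 0.0), 1.0)**m
    return strain_term * rate_term * thermal_term

# Example: large strain at jet-formation strain rates, moderately heated copper
print(f"{johnson_cook_stress(1.0, 1e5, 600.0) / 1e6:.0f} MPa")
```

The thermal-softening term is why the jet remains coherent yet ductile: near the melting temperature the flow stress collapses toward zero.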
The target plate is a 45# steel plate with a diameter of 200 mm and a thickness of 150 mm. Fig. 5 shows the effect of the jet generated by the multi-point shaped charge structure penetrating the target plate for the two liner cone angles, and Table 3 compares the test results with the simulation results. From the target plates after penetration it can be seen that, at a standoff of five charge diameters, the jet formed by the small cone angle (α = 40°) converges twice and produces a jet with a smaller head diameter and a smaller penetration-hole entrance, while the jet produced by the large cone angle (α = 50°) has a larger head diameter and a larger penetration-hole entrance, in good agreement with the numerical simulation. In terms of penetration depth, the jet produced by the smaller cone angle (α = 40°) penetrates deeper into the target plate, and the test results for both cone angles are lower than the simulated ones. The main reason is that it is difficult to guarantee sealing at the junction between the inner shell and the liner, so the detonation products leak prematurely from the gap between them; this can be seen from the copper coating on the surface of the target.

Conclusion

A multi-point shaped charge structure has been designed into the first panel of traditional explosive reactive armor, using an active-passive mode to interfere with the jet. The action process of the multi-point shaped charge structure was simulated numerically with LS-DYNA, and the charge structure and angle were optimized. 1. With this innovative reactive armor structure, the multi-point shaped charge mechanism can form a stable and efficient jet that converges along the axial direction.
Multiplex real-time PCR assay combined with rolling circle amplification (MPRP) using universal primers for non-invasive detection of tumor-related mutations

With the continuous development and application of targeted drugs, it is particularly desirable to find a non-invasive diagnostic approach to screen patients for precision treatment. Specifically, detection of multiple cancer-related mutations is very important for targeted therapy and for predicting drug resistance. Although numerous advanced PCR methods have been developed to discriminate single nucleotide polymorphisms, drawbacks such as low sensitivity and throughput, complicated operations, and high cost significantly limit their application. To overcome these challenges, in this study we developed a method combining a multiplex, sensitive real-time PCR assay with rolling circle amplification. This allows specific and sensitive discrimination of single nucleotide mutations and provides convenient multiplex detection by real-time PCR. The clinical potential of the MPRP assay was further demonstrated by comparing samples from 8 patients with a digital PCR assay. The concordant results between these two methods indicate that the MPRP assay can provide a specific, sensitive, and convenient method for multiplex detection of cancer-related mutations.

Introduction

Tumorigenesis has been reported to result from the accumulation of multiple genetic mutations.1 Understanding the genetic context of cancer is of utmost importance for the precision treatment of patients. Non-small cell lung cancer (NSCLC), for example, has an EGFR mutation rate as high as 30-60% in East Asian populations. The phase III IPASS study showed a significant clinical benefit of an EGFR tyrosine kinase inhibitor (TKI) compared to traditional chemotherapy in patients with positive EGFR mutations, but not in patients with negative EGFR mutations.2 Therefore, detection of cancer-related mutations, especially driver mutations, is significantly beneficial for screening patients for targeted therapy and for the early detection of drug resistance; this is termed companion diagnostics.3 Since 2011, authorities such as ASCO and NCCN have successively established norms and guidelines for the detection of driver mutations such as EGFR and BRAF, and have reached agreement that all patients receiving targeted drug treatment, such as with an EGFR TKI or a BRAF inhibitor, need to undergo molecular detection of EGFR, BRAF, etc.4-6 Therefore, developing a multiplex, sensitive and convenient method to detect tumor-related mutations is an urgent need for clinical precision treatment.

Liquid biopsies of blood or urine have recently been established for the diagnosis of cancer and other diseases, and the clinical application of cell-free DNA, which contains the genetic information of tumors, has taken center stage.7 However, the low amount and short fragment length of cell-free DNA limit the effective detection of tumor-related mutations.8 Moreover, sensitive detection of a mutation against a highly abundant background of wild-type DNA remains difficult. At present, several real-time PCR methods such as ARMS9,10 and COBAS11,12 have been developed to detect such mutations; unfortunately, low sensitivity and false-positive results have limited their clinical application. Due to the increased number of approved targeted medications and the consequent increase in the number of DNA targets that need to be detected, more input sample needs to be prepared.
Excitingly, the padlock RCA method has been established to specifically identify single nucleotide polymorphisms (SNPs).13 After complete annealing to the target DNA strand, the two ends of the padlock probe can be ligated to circularize it. A single mismatched base at the 3′ terminus of the probe abolishes the ligation, thus ensuring high specificity in DNA and RNA detection. Then, rolling circle amplification (RCA), which is based on the rolling circle replication of circular DNA pathogens in nature, is carried out to achieve exponential amplification of the circularized DNA.14,15 Combined with RCA technology, padlock RCA can detect SNPs from 1 ng (300 copies) of genomic DNA, and the accuracy of SNP genotyping can reach 100%.16 However, the effectiveness of ligation and the stability of the reaction conditions can still be problematic. In addition, some methods require the removal of residual probe that cannot hybridize to the target template, thereby increasing the complexity of detection.17 Hence, the padlock RCA-based assay needs further improvement.

In this study, we present the MPRP method, which combines the padlock RCA assay with real-time PCR technology to detect multiple tumor-related mutations. We found that this combined method can specifically and sensitively distinguish genetic mutations and allows convenient multiplex detection of tumor-related mutations by real-time PCR. Furthermore, we demonstrated the clinical applicability of this combined method by multiplex detection of the EGFR mutations L858R and T790M and the BRAF mutation V600E in patients' plasma.

Results and discussion

Detection of cancer-related mutations helps identify patients who can greatly benefit from targeted therapy, improving treatment outcomes and reducing healthcare expenditures. The traditional way to detect cancer-related mutations involves a tissue biopsy, for which a sufficient sample is necessary.18 However, some patients cannot undergo surgery or puncture to obtain the quantity necessary for genetic testing. Recently, liquid biopsies, especially those based on cell-free DNA samples, have provided a convenient way to detect genetic mutations; their advantage lies in reducing the risk of biopsy through non-invasive sampling and effectively prolonging the survival of patients. At present, an increasing number of drug targets have been developed, which makes the ability to detect multiple genetic mutations in patients highly attractive to clinicians seeking the best clinical outcome. Hence, this non-invasive, multiplex approach to detecting tumor-related mutations has great promise in clinical cancer therapy. Although numerous efforts have been made to develop new technologies for detecting genetic mutations, including ARMS,9,10 COBAS,11,12 BEAMing technology, and digital PCR, their applications are significantly limited by low sensitivity, false-positive results, complicated operations, low throughput, expensive chips, and closed reagents.19-21 In the current study, we combined padlock probe technology with multiplex real-time PCR to detect genetic mutations, and demonstrated the feasibility, high specificity, and high sensitivity of the assay.

Principle of the MPRP assay

The principle of the MPRP assay for the detection of gene mutations is illustrated in Fig. 1. In summary, the padlock probe is composed of three parts: the H5 region, the link region, and the H3 region.
The mismatched base was designed at the 3′ end of the H3 region (Table 1). The sequences of the link region within the different padlock probes are identical, so the primers uniF and uniR targeting this sequence are universal for the RCA and real-time PCR reactions. The length of the H3 region should be shorter than that of the H5 region to improve the specificity of the subsequent ligation. The 5′-PO4 and 3′-OH groups at the ends of the padlock probe can be specifically joined to form a DNA circle in the presence of HiFi Taq DNA ligase when the padlock probe is completely complementary to the target DNA at the H5 and H3 regions; otherwise, ligation cannot occur if mismatched base pairs exist at the 3′ end of the H3 region. After circularization of the padlock probe, exonuclease I and exonuclease III were added to digest ssDNA and dsDNA, respectively. The RCA reaction then amplifies the circularized DNA ring, and finally the product is analyzed by multiplex real-time PCR for mutant discrimination. The abundance of each target DNA is negatively correlated with its Ct value obtained from quantitative real-time PCR.

Optimization, specificity and sensitivity of MPRP assay for mutation detection

In this study, padlock probe technology was used to discriminate the specific mutation; this approach has been demonstrated in microbiological and microRNA detection because of its high sensitivity and single-nucleotide discrimination resolution.22,23 Only the perfectly matched padlock probe can be ligated and circularized. Compared with oligonucleotide gap-fill ligation, the single ligation step in our study not only achieves high specificity but also avoids ligation events at the 5′ end of the wrong padlock gap probe, which would occupy the position and decrease the efficiency of ligation.24 The successful circularization and specific discrimination of padlock probes depend on the activity of the DNA ligase and the optimization of the reaction. T4 DNA ligase, one of the most popular DNA ligases for joining the 5′ phosphate and 3′ hydroxyl termini of duplex DNA or RNA,25 is capable of SNP discrimination in RNA but not in DNA.24,26 HiFi Taq DNA ligase, which shows a strong capability for SNP discrimination in DNA, is well suited to the padlock probe.27 Moreover, a number of factors affecting the padlock RCA reaction for SNP discrimination have been reported.28-30 The ligation temperature is one of the most important factors for ligation efficiency, and it can be predicted using the Thermostable Ligase Reaction Temperature Calculator v0.8.4 tool from NEB (http://ligasecalc.neb.com/#!/ligation). In our experiment, 43 °C was selected as the ligation temperature. Moreover, the bases at the 3′ end of the padlock probe were LNA-modified to enhance the base stacking of perfectly matched base pairs and decrease the stacking stability of the mismatched pairs.31 The optimal concentration of the padlock probe for padlock RCA was determined to be 0.1 pM according to the Ct value of the template DNA in real-time PCR, and this concentration was used in the following experiments (Fig. S1†). In addition, the stability of the padlock RCA reaction was further improved by endonuclease digestion and betaine, as previously reported.17,32
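Since target abundance is negatively correlated with Ct, relative abundances can be compared across samples with a simple exponential model. The sketch below assumes near-100% amplification efficiency (an assumption; real efficiency should be calibrated from a standard curve, such as the dilution series used here) and uses illustrative Ct values, not values from our tables.

```python
# Relative abundance from Ct values.  As noted above, target abundance is
# negatively correlated with Ct.  Assuming near-100% amplification efficiency
# (an assumption; real efficiency should be calibrated from a standard curve),
# a delta-Ct of one cycle corresponds to a two-fold abundance difference.

def fold_difference(ct_a, ct_b, efficiency=1.0):
    """Fold abundance of sample A relative to sample B (lower Ct = more target)."""
    base = 1.0 + efficiency              # per-cycle amplification factor
    return base ** (ct_b - ct_a)

# Illustrative Ct values only (not taken from our tables):
print(fold_difference(25.0, 28.0))                  # ~8-fold more in sample A
print(fold_difference(25.0, 28.0, efficiency=0.9))  # ~6.9-fold at 90% efficiency
```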
To investigate the feasibility and specificity of the MPRP system, three padlock probes were used to detect three different target DNAs with point mutations in a padlock RCA reaction. For example, probes pEG858A and pEG858C were perfectly matched with targets tEG858T and tEG858G, respectively. Similarly, probes pEG790G and pEG790A were perfectly matched with targets tEG790C and tEG790T, while probes pBR600A and pBR600T were perfectly matched with targets tBR600T and tBR600A, respectively (Table 1). Finally, one pair of universal primers and three probes labeled with different fluorescent moieties were used to perform multiplex real-time PCR. As shown in Table 2, in the presence of mutant DNA and the corresponding padlock probes, the Ct values of the mutant DNA obtained from the MPRP assay reached 9.74 ± 0.565, 7.99 ± 0.143 and 15.27 ± 0.243, respectively (Fig. S2†). No valid Ct value was observed with the padlock probes complementary to the wild-type DNA. These results demonstrate the feasibility and specificity of the MPRP assay for detecting multiple DNA point mutations.

Multi-gene detection and dual mutation discrimination mode

In our MPRP method, RCA is used to pre-amplify the target DNA before the real-time PCR reaction. This step not only amplifies and enriches the circularized padlock probe but also increases the volume of input material for the subsequent multi-gene detection, which opens the possibility of targeted mutation screening using 96-well PCR panels. After enrichment of the single-base mutation, multiplex real-time PCR is used for the quantification of gene mutations, which further increases the specificity of SNP detection. The dual mutation discrimination mode prevents the false-positive results that can be caused by high sensitivity. Furthermore, instead of the traditional fluorescence detection methods using a fluorescence reader or microscope, which are difficult to quantify and have low sensitivity,17,24,26 the real-time PCR method provides automatic signal acquisition that can be quantified and presented visually using dedicated analysis software. In our study, we detected mutations at a mutant ratio as low as 0.1%, which is much better than the sensitivity of traditional real-time PCR methods such as ARMS-PCR.33,34

Clinical evaluation of MPRP assay

Developing a non-invasive diagnostic approach to detect tumor-related mutations is highly desirable for precision cancer treatment. To demonstrate its potential in clinical application, the MPRP assay was performed using cell-free DNA extracted from the plasma of 8 lung cancer patients to determine the mutations EGFR T790M, EGFR L858R and BRAF V600E. For comparison, the mutation ratios of these samples were also determined by digital PCR, which provides highly sensitive and absolute quantification of target DNA.33,35,36 Both the MPRP assay and digital PCR consistently detected one or two EGFR mutations in 4 of the 8 patients, and the mutation level reflected by the Ct value of the MPRP assay was highly correlated with the mutation ratio determined by digital PCR (Table S1†). Digital PCR exhibited higher sensitivity than the MPRP assay only for the EGFR L858R mutation in patient P6 (Table 4, Fig. S4† and 2). No BRAF V600E mutation was detected by the MPRP assay in any of the patients.
Most of the results obtained by the MPRP assay were consistent with those from digital PCR, strongly supporting its reliability. Considering the drawbacks of digital PCR, including its complicated operation, low throughput and high cost, the MPRP method is a more convenient, efficient and sensitive way to detect cancer-related mutations and can satisfy clinical needs. To our knowledge, combining padlock RCA with multiplex real-time PCR for the non-invasive detection of lung cancer-related mutations is an innovative strategy, which appears very promising for clinical application.

Materials

The human non-small cell lung cancer cell line A549 was cultured in DMEM medium (ThermoFisher, CA, US) with 10% fetal bovine serum (FBS) at 37 °C in a 5% CO2 atmosphere. 5 mL blood samples from patients with non-small cell lung cancer were collected in Cell-Free DNA Blood Collection Tubes (Streck, La Vista, USA) in the department of thoracic surgery at Hebei Chest Hospital before surgery, according to a protocol approved by the Ethics Committee of the institution. All patients provided written informed consent. This study was approved by the ethics board of Hebei Chest Hospital and complied with the Declaration of Helsinki. Within 1 hour, all whole-blood samples were centrifuged at 820 g for 10 min. Plasma was collected and subjected to a second centrifugation at 16 000 g for 10 min. The supernatant was then transferred to fresh tubes and stored at −80 °C. Genomic DNA was extracted from the cells using a DNA Extraction Kit (Apexbio, Beijing, China) according to the user manual. Circulating DNA was extracted from plasma with the QIAamp Circulating Nucleic Acid Kit (Qiagen, Valencia, CA) according to the manufacturer's protocol. DNA quantification was performed on a Colibri microvolume spectrophotometer (Titertek-Berthold, Pforzheim, Germany).

MPRP assay

All oligonucleotides in this study were synthesized by Sangon Biotech Co., Ltd (Shanghai, China). The padlock probes and template oligos for EGFR L858R, T790M and BRAF V600E, both wild-type and mutant, were designed using Primer3 software (version 4.1.0). The link part of the padlock probe was verified by BLASTN to be non-complementary to human genomic DNA. For the specificity test of the padlock RCA reaction, the following were added into three PCR tubes for EGFR L858R, EGFR T790M and BRAF V600E, respectively: 1 μL of each 0.1 pM corresponding padlock probe (both mutant type and wild type), 1 μL of the 0.1 pM corresponding synthesized mutant template, and 22 μL of ligation solution containing 0.5 μL HiFi Taq DNA ligase (NEB, Ipswich, MA) and 1× HiFi Taq DNA ligase reaction buffer. For the sensitivity test, the following were added into PCR tubes for each dilution of the standard DNA series: 1 μL of each 0.1 pM mutant-type padlock probe (pEG858C, pEG790A and pBR600A), 1 μL of 0.1 pM standard DNA, and 21 μL of ligation solution containing 0.5 μL HiFi Taq DNA ligase (NEB, Ipswich, MA) and 1× HiFi Taq DNA ligase reaction buffer. For the clinical test, the following were added into PCR tubes for each sample: 1 μL of each 0.1 pM mutant-type padlock probe (pEG858C, pEG790A and pBR600A), 5 μL of plasma DNA, and 17 μL of ligation solution containing 0.5 μL HiFi Taq DNA ligase (NEB, Ipswich, MA) and 1× HiFi Taq DNA ligase reaction buffer. The mixture was incubated at 55 °C for 1 hour after being heated at 95 °C for 3 min.
Aer the circularization of the padlock probe, 10 U exonuclease I (NEB, Ipswich, MA) and 40 U exonuclease III (NEB, Ipswich, MA) were added to digest the ssDNA and dsDNA, respectively. Then, the RCA reaction was immediately performed to elongate the padlock probe circulated for subsequent real time PCR assay in 25 mL of 1Â Thermopol buffer (50 mM Tris-HCL, 10 mM MgCL 2 , 10 mM (NH 4 ) 2 SO 4 , 4 mM DTT, PH 7.5@RT), 4 mL 5 M betaine (ThermoFisher, USA), 14 mM dNTPs mix, 1 mL 100 mM uniF, 1 mL 100 mM uniR and 5 mL ligation products. Aer heating at 95 C for 3 min and cooling down in ice, 1Â BSA, 0.5 mL betaine, 6 mM MgSO 4 and 10 U mL À1 of Bst DNA polymerase (NEB, Ipswich, US) were added. The mixture was incubated at 65 C for 2 h before heating at 85 C for 10 min to deactivate the polymerase. Then, quantitative real-time PCR testing was performed in a 20 mL reaction containing 1Â PerFecTa Multiplex qPCR ToughMix (Quanta Biosciences, Gaithersburg, USA), 400 nM of each uniF and uniR, 200 nM of each multiplex probes, and 1 mL RCA product. For specicity test, the PCR probes included both wild type and mutant type for EGFR L858R, EGFR T790M and BRAF V600E, respectively. For sensitivity test and clinical test, the PCR probes included three probes of mutant type (P858M, P790M and PBM) simultaneously. This was programmed as: 95 C for 3 min, followed by 40 cycles of 94 C for 10 s, 60 C for 20 s on a CFX96 real-time PCR instrument (Bio-Rad, Hercules, CA). All experiments were replicated to ensure reproducibility. Digital PCR assay Detection of the EGFR mutation L858R and T790M in cell free DNA was carried out on the Naica digital PCR system (Stilla Technologies, Villejuif, France) with Sapphire chips (Stilla Technologies, Villejuif, France) in a 25 mL reaction mix containing the following components: 1Â PerFecTa Multiplex qPCR ToughMix, 40 nM FITC (Saint Louis, MO, USA), 1 mL of primer and probes multiplex mix and 3 mL of DNA template. The chip was loaded into the Naica Geode thermocycler to compartmentalize the droplets and perform the PCR reaction. PCR conditions were 95 C for 10 min, followed by 45 cycles of 95 C for 20 s and 60 C for 30 s. Aer amplication, the Sapphire chips were imaged using the Naica Prism3 reader and the uorescent data were analyzed using Crystal Miner soware (Stilla Technologies, Villejuif, France). Each patient sample was tested in duplicate. NTC and EGFR Gene-Specic Multiplex Reference Standard gDNA HD802 (Horizon Discovery, Cambridge, UK) were used as negative and positive controls, respectively. Negative and positive droplets were also used to check the uorescence spill-over compensation. Conclusions In this study, we developed a MPRP system based on rolling circle amplication and multiplex real time PCR using universal primers for SNP discrimination, which showed high specicity and sensitivity for multiplex detection of tumor-related mutations including EGFR mutations L858R and T790M and BRAF mutation V600E. In order to validate its potential application in clinical diagnosis, we tested the samples from the patients' plasma using both MPRP assay and digital PCR assay. The coincident results between the two methods indicated that the MPRP assay provided a more sensitive, specic and convenient method for the detection of cancer-related mutations. Conflicts of interest There are no conicts to declare.
Effect of Inorganic and Organic Carbon Enrichments (DIC and DOC) on the Photosynthesis and Calcification Rates of Two Calcifying Green Algae from a Caribbean Reef Lagoon

Coral reefs worldwide are affected by increasing dissolved inorganic carbon (DIC) and organic carbon (DOC) concentrations due to ocean acidification (OA) and coastal eutrophication. These two stressors can occur simultaneously, particularly in near-shore reef environments with increasing anthropogenic pressure. However, experimental studies on how elevated DIC and DOC interact are scarce and fundamental to understanding potential synergistic effects and foreseeing future changes in coral reef function. Using an open mesocosm experiment, the present study investigated the impact of elevated DIC (pH(NBS): 8.2 and 7.8; pCO2: 377 and 1076 μatm) and DOC (added as 833 μmol L−1 of glucose) on the calcification and photosynthesis rates of two common calcifying green algae, Halimeda incrassata and Udotea flabellum, in a shallow reef environment. Our results revealed that under elevated DIC, algal photosynthesis decreased similarly for both species, but calcification was more affected in H. incrassata, which also showed carbonate dissolution. Elevated DOC reduced photosynthesis and calcification rates in H. incrassata, while in U. flabellum photosynthesis was unaffected and thallus calcification was severely impaired. The combined treatment showed an antagonistic effect of elevated DIC and DOC on the photosynthesis and calcification rates of H. incrassata, and an additive effect in U. flabellum. We conclude that the dominant sand dweller H. incrassata is more negatively affected by both DIC and DOC enrichments, but that their impact could be mitigated when they occur simultaneously. In contrast, U. flabellum may be less affected by elevated DIC in coastal eutrophic waters, but its contribution to reef carbonate sediment production could be further reduced. Accordingly, while the capacity of environmental eutrophication to exacerbate the impact of OA on algal-derived carbonate sand production seems to be species-specific, significant reductions can be expected under future OA scenarios, with important consequences for beach erosion and coastal sediment dynamics.

Introduction

The rise of oceanic pCO2 caused by increasing CO2 concentrations in the atmosphere is leading to significant changes in the ocean carbonate system, which are primarily reflected in an increase in bicarbonate concentration and a decrease in seawater pH (ocean acidification, OA) [1,2]. These changes also induce a significant decline in the saturation state of the different crystallization forms of calcium carbonate in the marine environment, which will facilitate the dissolution of existing calcium carbonate deposits and cause severe impacts on marine calcifiers. Many coral reef habitats and their lagoons are particularly threatened by ocean acidification. Studies conducted at natural low-pH sites have shown that under OA the reef framework is less stable [3] and reef accretion is compromised [4], as are the ecosystem services provided by the reef [5]. Local impacts associated with nutrient enrichment, pollution and overfishing have also increased in the last decades, leading to so-called "phase shifts" in many parts of the Caribbean and coral reefs worldwide [6,7]. One of the main drivers of "phase shifts" is related to inorganic and organic nutrient inputs derived from untreated or poorly treated sewage.
The impact of elevated DOC concentrations on coral reef health is currently of major concern in coral reef research [8-10], as elevated DOC has been associated with enhanced bacterial growth and other processes that lead to oxygen depletion and the accumulation of toxic substances, and ultimately to an increase in coral mortality [9,11,12]. High concentrations of DOC, predominantly in the form of dissolved carbohydrates, can also enter the coral reef system in the form of exudates released by the benthic community [13,14]. Previous results have shown minimal or no significant differences in the DOC concentrations released by benthic calcifying algae (Halimeda opuntia) compared to coral exudates (Porites lobata) [10,15], although it has been postulated that bacterial growth is primarily triggered by algal-derived DOC rather than DOC released by corals [10,15].

The sandy bottoms of Caribbean reef lagoons are commonly colonized by rhizophytic calcareous green algae (Siphonales) of the genera Halimeda, Udotea, Penicillus and Rhipocephalus, which are associated with the seagrass habitat builder Thalassia testudinum [16-18]. Calcareous green algae produce an important fraction of coral reef carbonate production in the form of calcareous sand, essential to support reef accretion [19-22]. Most of the studies that have investigated the responses of marine macrophytes to OA and other local threats have focused on species of the genus Halimeda, because this genus is considered one of the most productive. Limited attention has been given to other important reef calcifiers, such as species from the genera Udotea, Penicillus and Rhipocephalus. Experimental studies focused on Halimeda spp. have concluded that this genus displays large species-specific variation in its response to increasing levels of dissolved inorganic carbon (DIC). Some species reduce their photosynthetic rates [23,24], while others have shown positive [25] or no effects on algal photosynthesis [26-28]. Similarly, large inter-specific variation has been documented in the calcification response of Halimeda spp. to DIC increases [24,26-33], indicating that some species may be more tolerant to OA than others. Altered skeletal structure of different Halimeda spp. in response to OA conditions has also been reported [34,35], being indicative of potential needle dissolution [29] and/or the formation of more slender crystals during exposure to reduced pH [25]. Yet, alteration of skeletal structure may also affect the contribution of species from the genus Halimeda to sediment carbonate production under different OA scenarios, irrespective of the severity of the impact detected on algal physiology. In contrast to OA, nutrient enrichment enhances Halimeda spp. production and growth [23,36,37], with the exception of phosphate enrichment, for which large species-specific variation has also been reported [36]. An analysis of the combined effect of inorganic nutrient enrichment and reduced pH on Halimeda opuntia has shown decreased enhancement of algal production under nutrient enrichment and reduced pH, relative to the estimated values for ambient pH [38]. Meyer et al. [28] have recently shown negative effects of increased DOC concentration on the photosynthesis of two Halimeda species from the Great Barrier Reef, H. opuntia and H. macroloba, but no effect was found on algal calcification rates under illumination.
These authors further investigated the combined effect of elevated DOC and DIC concentrations, and documented an adverse impact on thallus photosynthesis for both species, while only H. opuntia showed a negative effect on dark calcification rates. These findings support the large species-specific component of the response of marine algae to the combined effects of OA and increased DOC, and the importance of extending this type of experimental study to other sites and species.

Halimeda incrassata (J. Ellis) J.V. Lamouroux and Udotea flabellum (J. Ellis & Solander) M. Howe are two abundant rhizophytic species of the macrophyte community of shallow seagrass habitats dominated by the species Thalassia testudinum. In the Puerto Morelos reef lagoon, Mexican Caribbean, both species are among the most abundant calcareous algae. Estimates of H. incrassata primary production for this lagoon (0.2-0.5 g dwt m−2 day−1) [22] are lower than the reported values for seagrass leaf production (0.9-1.2 g dwt m−2 day−1) [39]. Interestingly, annual carbonate production for H. incrassata from this area is in the same range or even lower (between 0.5 and 1.0 kg CaCO3 m−2 y−1) [22] than estimates of annual carbonate production recently documented for the dominant seagrass T. testudinum (between 0.5 and 5.63 kg CaCO3 m−2 y−1) by Enríquez and Schubert [40]. To our knowledge, no estimates of annual carbonate production for Udotea spp. are yet available for this area or other locations. The only study on the daily carbonate production of U. flabellum [41] documented that this species produces about 45% less carbonate per day than H. incrassata. The Mexican Caribbean (Cancún and the Riviera Maya) experienced a 4.3-fold population increase during 2000-2009 [18], associated with the rapid coastal development of large tourist complexes. These changes have severely affected the reef habitat, particularly the benthic macrophyte community associated with seagrass beds, which is shifting towards an increased presence of fleshy macroalgae [18] and increased biomass of green calcifiers [42]. To understand the combined effect of these local impacts with the predicted negative effect of OA on marine calcifiers, this study investigated the direct and combined effects of experimental increases in DIC and DOC on the physiological performance of Halimeda incrassata and Udotea flabellum. This multi-factorial study aims to analyze a more realistic scenario that may also be very useful for other areas affected by similar coastal eutrophication derived from different anthropogenic impacts. As calcareous macroalgae are considered important contributors to reef carbonate budgets, this study can also contribute to improving our understanding of future impacts caused by the combined effects of global and local threats on reef accretion and the stability of the reef system.

Algal Collection and Maintenance

Several individuals of H. incrassata and U. flabellum were collected by SCUBA diving from the Puerto Morelos reef lagoon, Mexican Caribbean (20°52' N, 86°52' W), in March 2012 at 3-3.5 m depth, and transported in mesh-covered ziplock bags to the mesocosm facilities of the Universidad Nacional Autónoma de México (UNAM). To minimize physiological variability among replicates associated with age and the photoacclimatory condition of the thallus, thalli of similar size and position were selected for the experimental analysis: 5-7 apical segments for H. incrassata, and
U. flabellum individuals of 3-5 cm height. The selected individuals were acclimated for 5 days to the experimental conditions by placing them in 50 L experimental tanks receiving filtered (~50 μm) ambient seawater (~28°C, pH 8.2) from the lagoon, with a continuous flow of 1 L min−1. Irradiance levels in the mesocosms were adjusted using neutral-density shade mesh to simulate light conditions at collection depth (51% of surface irradiance, Es). Es was calculated using surface irradiance data and the down-welling light attenuation coefficient of the reef lagoon estimated for the sampling period (February-March 2012), Kd = 0.2 m−1, which was similar to previously reported values [43]. Variation in diurnal irradiance was continuously recorded throughout the experiment using a cosine-corrected light sensor (LI-190SA; LI-COR, Lincoln, NE, USA) connected to a data logger (LI-1400; LI-COR, Lincoln, NE, USA) located at the mesocosm system. After the pre-acclimation period (5 days), initial measurements of photosynthesis, respiration and calcification rates were performed (n = 6) as described below, and 12 individuals of each species (n = 12) were randomly positioned in each tank.

Experimental Treatments

The experiment was conducted over 10 days in an open flow-through system, which consisted of 12 tanks of 50 L each. To enhance water movement in the tanks and prevent carbon limitation of algal photosynthesis, aquarium pumps (500 L h−1) were placed in each tank and connected to a rectangular PVC frame surrounding the tank, with holes facing inward to create homogeneous flow conditions according to Cayabyab and Enríquez [44]. We used three replicate tanks per treatment and placed tanks with the different treatment levels (ambient and increased, see below) in alternating order. The treatments comprised ambient and elevated pCO2 concentrations, simulating OA changes in CO2 availability from 380 to 1000 μatm, respectively, as well as ambient and increased DOC conditions (see Table 1). The increase in pCO2 was achieved by pH manipulation via CO2 gas injection using a pH-stat system controlled by a potentiometric pH sensor (IKS Aquaristic Products, Karlsbad, Germany). pH reading by the pH-stat system was continuous (every other second) to adjust pH levels in the system, and pH sensors were calibrated every other day against values measured by a WTW Multi 3430 probe (WTW, Weilheim, Germany). pH in the tanks was maintained at 8.2 for the control treatment and reduced to 7.8 for the high-DIC treatments (see Table 1). The elevated pCO2 levels used here were selected considering Representative Concentration Pathway 8.5 (RCP8.5), which predicts a decrease of seawater pH to between 7.7 and 7.8 [45].

Table 1. Variation in the experimental conditions. Values for the carbonate system parameters were calculated using CO2SYS with temperature, salinity, total alkalinity (TA) and pH(NBS) as input parameters (n = 3). Additionally, the biological oxygen demand (BOD) of the seawater in each treatment is given, determined from oxygen consumption in dark incubations over 24 h, referring the estimated changes to the water volume of the incubation (n = 3). Data represent mean ± SD (n = 6); the results of a one-way ANOVA (p<0.05) performed to determine significant differences in BOD between treatments are indicated by different letters.
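For readers who wish to reproduce the carbonate-system calculation, the same CO2SYS computation can be sketched with the PyCO2SYS package (the study used the CO2SYS Excel spreadsheet; all numeric inputs and the constants option code below are illustrative assumptions, not the study's values):

```python
# Sketch of a CO2SYS-style carbonate-system calculation from measured TA and
# pH, as described for Table 1.
import PyCO2SYS as pyco2

out = pyco2.sys(
    par1=2350.0, par1_type=1,      # total alkalinity [umol/kg], measured
    par2=7.8,    par2_type=3,      # pH, measured; the high-DIC treatment level
    temperature=28.0, salinity=36.0,
    opt_pH_scale=4,                # NBS scale, as in Table 1
    opt_k_carbonic=4,              # Mehrbach constants refit (assumed code)
)
print(out["pCO2"], out["dic"], out["saturation_aragonite"])
```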
DOC treatment levels were adjusted to concentrations described in previous studies on coral communities, which used glucose and lactose in high concentrations as DOC [8,9,46]. In this study, the DOC treatment, in the form of highly bioavailable DOC, was achieved by additions of 833 μmol L−1 DOC (D-glucose, Sigma-Aldrich) twice daily, at 08:00 and 20:00, to each of the six high-DOC treatment tanks, simulating sudden DOC enrichment events that are common in nature in association with strong rain. To quantify the resulting DOC treatment conditions, DOC concentrations were measured over a 12-hour cycle, showing an average DOC concentration of 550 μmol L−1 (Fig 1). Samples for TOC were filtered through 0.45 μm GFF filters (Whatman), acidified with 150 μL fuming HCl and frozen at -20°C until analysis using a Shimadzu TOC-5000A (Shimadzu, USA). Salinity and temperature were also monitored in each tank twice daily throughout the experiment (WTW Multiprobe 3430), and total alkalinity (TA) every second day. Water samples from the tanks were filtered through 0.45 μm GFF filters and stored with a drop of chloroform at 4°C until TA analysis. These parameters, together with the pH values on the NBS scale taken every two days with a multiprobe (WTW 3430, Weilheim, Germany), were used to calculate the carbonate system using the CO2SYS Excel spreadsheet, with the constants from Mehrbach et al. [47] (Table 1).

Biological Oxygen Demand (BOD)

To evaluate potential enhancements in microbial respiration rates in seawater, BOD was measured at the end of the experiment. We incubated 150 mL of unfiltered seawater in Winkler bottles for 24 h in the dark (n = 3), under constant temperature conditions (28°C). Oxygen concentration (mg L−1 and % saturation), as well as salinity and temperature, were measured before and after the incubations. O2 consumption rates in mg O2 L−1 h−1 were calculated and corrected for water volume and length of incubation.

Assessment of Maximum Photosynthetic Quantum Efficiency

The maximum photochemical efficiency of photosystem II (PSII), Fv/Fm, of the experimental organisms was measured every evening at 20:00 on the apical segments of the organisms, using a pulse amplitude modulated fluorometer (Diving-PAM, Walz, Germany). At this time, one hour after sunset, algal thalli had already achieved the maximum Fv/Fm of the day, as all the non-photochemical quenching processes had relaxed and the maximum PSII recovery of the day had already been reached (see [44,48]).

Quantification of Photosynthesis, Respiration and Light Calcification Rates

For physiological measurements, the young 4-5 apical segments of H. incrassata thalli and the uppermost 2 cm of U. flabellum thalli were selected (two organisms per tank and species) to reduce the variation among replicates in the thallus physiological condition due to age, photoacclimation, abundance of epiphytes and/or accumulation of damage. The segments were separated from the parent plant at least 2 h before physiological determinations were started, in order to allow complete wound healing [49]. Before and at the end of the experiment, photosynthesis and calcification rates were simultaneously determined by incubating algal thalli for 30 min under a saturating light intensity of 500 μmol quanta m−2 s−1 (three times the Ek of the species, data not shown), in freshly filtered seawater obtained from the respective treatment tanks.
The incubation water (17 mL) was collected at the beginning and at the end of the light incubation to determine the alkalinity changes induced by algal activity (see below). The samples were then incubated in darkness for another 10-15 min to determine the post-illuminatory respiration rate (RL). Oxygen evolution rates were measured polarographically in water-jacketed chambers (DW3, Hansatech Instruments Ltd., Norfolk, UK) using Clark-type O2 electrodes (Hansatech). A circulating bath with a controlled temperature system (RTE-100/RTE 101LP; Neslab Instruments Inc., Portsmouth, NH, USA) allowed a constant temperature of 28°C (treatment temperature) to be maintained during the incubation. The electrodes were calibrated with air- and N2-saturated filtered seawater. Freshly filtered seawater (0.45 μm) from the respective treatment tank was used for the incubations, with DIC and DOC concentrations corresponding to the treatment conditions (see Table 1). Data were captured with a computer equipped with an analog/digital converter using DATACAN V software (Sable Systems, Inc., Las Vegas, NV, USA). Gross photosynthesis was calculated by adding the oxygen consumption through post-illuminatory respiration to the net photosynthesis determined in the incubations. Calcification rates were determined using the alkalinity anomaly principle, based on the ratio of two equivalents of total alkalinity for each mole of precipitated CaCO3 [50]. For alkalinity measurements, a modified spectrophotometric procedure described in [51] and [40] was used. For quality control, a certified reference material of known total alkalinity (CRM, Scripps Institution of Oceanography, USA) was used to calibrate the method.

Quantification of Algal Surface Area

For normalization of the measured metabolic rates, the surface area of each algal segment was determined by scanning the thalli and analyzing the digital images using ImageJ software.

Statistical Analyses

Data were tested for normality using the Shapiro-Wilk test, and for equal variance using the Levene median test. Analyses of variance (ANOVA) allowed the determination of significant differences (p<0.05) between the different descriptors used to characterize the physiological response of the species. A one-way ANOVA was used to compare initial photosynthetic, respiratory and calcification rates and the calcification:photosynthesis ratio, and for the comparison of BOD values between each treatment and the control. A Student's t-test was used to evaluate significant differences between initial and final Fv/Fm values with respect to the control organisms. To analyze whether Fv/Fm, photosynthesis, respiration and calcification rates differed significantly between treatments, two-way ANOVA tests were used, considering the DIC and DOC treatments as fixed factors to test for direct effects, as well as the interaction (DIC x DOC). For the comparison of differences between individuals and treatment combinations, a Newman-Keuls post-hoc test was used. The statistical analyses were conducted using Statistica 12.0.

Results

In addition, the two species showed contrasting responses to the experimental DIC and DOC treatments. The response of the maximum photosynthetic rates was similar in both species, but more pronounced than indicated by the Fv/Fm response. The variation in Fv/Fm was closely related to the diurnal variation in solar radiation.
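Returning to the alkalinity-anomaly technique described in the Methods, the conversion from a measured TA change to a calcification rate can be sketched as follows (the volumes, areas and TA values are placeholders, not measurements from this study):

```python
# Alkalinity-anomaly conversion: each mole of CaCO3 precipitated removes two
# equivalents of total alkalinity from the incubation water.

def calcification_rate(ta_initial, ta_final, volume_l, area_cm2, hours,
                       density_kg_per_l=1.023):
    """Return G in umol CaCO3 cm^-2 h^-1, with TA given in umol kg^-1."""
    delta_ta = ta_final - ta_initial        # negative when CaCO3 precipitates
    seawater_kg = volume_l * density_kg_per_l
    umol_caco3 = -delta_ta * seawater_kg / 2.0
    return umol_caco3 / (area_cm2 * hours)

print(calcification_rate(2350.0, 2342.0, 0.017, 4.0, 0.5))  # ~0.035 umol cm^-2 h^-1
```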
Control organisms showed a similar pattern of variation in both species, with a slight but non-significant decline over time (t-test, p = 0.528 for H. incrassata, p = 0.560 for U. flabellum) compared to initial values (Fig 2A and 2B). When comparing final Fv/Fm values, H. incrassata showed a significant decline in the DOC treatments, under both ambient and elevated DIC concentrations (Fig 2C, Table 3), while U. flabellum only showed a negative response of Fv/Fm under elevated DIC (Fig 2D, Table 2).

Significant reductions in Pmax were estimated for H. incrassata in all treatments when compared to control organisms. Pmax reductions ranged from -30% (elevated DIC) to -43% (elevated DOC; Fig 2E), and showed a significant effect in the combined treatment (Table 3). In contrast, U. flabellum experienced a significant reduction in Pmax under elevated DIC compared to the control (high DIC: -33%; high DIC + high DOC: -21%), while elevated DOC did not cause any effect on thallus photosynthesis (Fig 2F, Table 3). Thallus respiratory rates were not affected by any experimental treatment in either species (Fig 2E and 2F). The response of thallus calcification to the experimental treatments also showed large differences between species. While H. incrassata showed full suppression of thallus calcification and even dissolution of CaCO3 after exposure to elevated DIC, thallus calcification was still positive, albeit significantly reduced, in U. flabellum after exposure to the same treatment (-36% compared to control; Fig 3). The opposite response was observed for elevated DOC, as we found a significant decline in the calcification rates of H. incrassata (-68%) with respect to control organisms (yet positive values), while no calcification but dissolution of CaCO3 (negative values) was measured for U. flabellum (Fig 3). The inhibition of thallus calcification by elevated DOC concentration was further exacerbated in U. flabellum in the combined treatment, due to the addition of the negative effect of DIC (Fig 3B), as no significant interactive effect was found for the response of thallus calcification in this species (Table 3). In contrast, the combined treatment did not show any significant impact on H. incrassata calcification, notwithstanding the significant negative direct effects of elevated DOC and DIC (Fig 3A). These findings support the antagonistic effect between elevated DIC and DOC and their combined effect on the calcification process of H. incrassata (Table 3).

Table 2. Comparison of the initial values (day 0) of gross maximum photosynthetic rates (Pmax), post-illumination respiration (RL), maximum calcification rates (Gmax), and the calcification:photosynthesis ratio (Gmax:Pmax) of Halimeda incrassata and Udotea flabellum. Data represent mean ± SE (n = 6); significant differences between species (one-way ANOVA, p<0.05) are indicated by different letters.

Fig 2 (e, f). Gross photosynthesis, Pmax (light grey bars), and respiration, RL (black bars), rates at the end of the experiment. Data represent mean ± SE (n = 3); significant differences between treatments (ANOVA, Newman-Keuls, p<0.05) are indicated by different superscript letters. doi:10.1371/journal.pone.0160268.g002

Table 3. Two-way ANOVA analyses performed to determine significant differences in the physiological responses of the apical segments of Halimeda incrassata and Udotea flabellum exposed to four experimental treatments: control, high DIC concentration, high DOC concentration, and the combined treatment (n = 3 for each treatment).

In addition to the physiological responses of the organisms, measurements of BOD in the different treatment waters were performed to determine potential changes in bacterial respiration. Although increased BOD was detected in the treatments with elevated DOC, these changes were not significant (Table 1).

Discussion

Large differences between the two species investigated were found in photosynthesis and calcification rates, in agreement with previous findings [41]. Our study further revealed significant differences between Halimeda incrassata and Udotea flabellum in the calcification:photosynthesis ratio (Gmax/Pmax), as H. incrassata was able to precipitate twice as much CaCO3 per mol of O2 evolved in photosynthesis as U. flabellum. Elevated DIC and DOC treatments caused adverse impacts on the physiology of both species, but significant differences were observed in the severity of this impact. For example, elevated DIC resulted in a decline in Fv/Fm and photosynthesis rates in both species. However, elevated DOC only caused a similar response in H. incrassata, as non-significant changes were observed for the organisms of U. flabellum exposed to the same treatment. Control organisms did not show the progressive reduction in Fv/Fm observed for organisms exposed to DIC and DOC enrichments throughout the experiment. This lack of change in Fv/Fm after an initial reduction during the first four days, in spite of the maintenance of high light conditions for the last five experimental days, indicates that experimental conditions were optimal for both species and did not induce significant accumulation of photodamage (i.e., Fv/Fm decline) or positive Fv/Fm recovery due to light limitation. Thus, the observed reductions in Fv/Fm and thallus photosynthesis of the organisms exposed to elevated DOC and/or DIC can be attributed to a direct negative impact of these treatments on the photosynthetic process. Thallus photosynthesis in U. flabellum showed a more robust response to elevated DOC, while H. incrassata was equally sensitive to both organic and inorganic carbon enrichments. A similar negative impact of elevated DIC on algal photosynthesis has been previously reported for other species from the genus Halimeda [26,29,30], but the causes of this decline have not yet been elucidated. Price et al. [26] suggested that the increase in dissolved CO2 under reduced seawater pH may affect the expression of different carbon-concentrating mechanisms (CCMs), causing algal photosynthesis to rely on passive CO2 diffusion and thus to become more susceptible to carbon limitation. The maintenance of high proton (H+) permeability of the plasma membrane, for example, which is key for photosynthetic bicarbonate assimilation [52,53], declines at reduced external pH [54]. In addition to the impact of DIC on algal photosynthesis, we also found a negative effect of elevated DIC on the calcification rates of both species. Udotea flabellum showed a similar -30% reduction in photosynthesis and calcification (-36%; Figs 2F and 3B), but
H. incrassata experienced larger declines in calcification (-155%) compared to a -30% reduction in photosynthesis (Figs 3A and 2E). With respect to the response to elevated DOC concentration, a greater impact was observed on H. incrassata photosynthesis and calcification rates when this factor acted in isolation. Negative effects of elevated DOC concentrations have recently been documented for the photosynthesis rates of two Halimeda species from the Great Barrier Reef [28]. In contrast to our findings, calcification under illumination was not significantly affected by elevated DOC in those species. Large differences in the response of thallus calcification to elevated DIC have already been documented among Halimeda spp. [26,29,30], and this is the first time that similar inter-specific differences have also been observed for the response to elevated DOC. Some authors have suggested that the large inter-specific component shown by the calcification process in the genus Halimeda may rely on thallus morphology [26]. This genus displays large variation in the internal anatomy of the algal thallus, and these anatomical characteristics are good proxies for species membership when compared to molecular data [55], which may support our interpretation. However, more work is still needed to elucidate the potential implications of the variation in thallus anatomy within the Halimeda genus for the species-specific sensitivity of thallus calcification to environmental changes.

Photosynthesis and calcification rates are tightly coupled in calcareous siphonal algae. Photosynthesis promotes algal calcification by removing CO2 or bicarbonate from the calcification site, which increases the local pH and thus facilitates CaCO3 precipitation [56]. Photosynthesis can also support a high fraction of the energetic costs of the biomineralization process. Therefore, any negative effect on the photosynthetic process would be reflected in a decline in algal calcification, as recently shown for coralline algae [48]. Inter-specific differences in the calcification process may explain the diversity of responses observed. For example, while H. incrassata only calcifies in the intercellular spaces, calcification in U. flabellum represents a transition between intercellular and sheath mineralization (e.g., Penicillus, Rhipocephalus) [57]. CaCO3 precipitation in H. incrassata occurs in a semi-isolated space, where CO2 diffusion from the external environment can cause a decrease in local pH and thus a reduction in calcification rates. Therefore, the more efficient isolation of the biomineralization site of U. flabellum from the surrounding seawater allows carbonate precipitation to be less dependent on the external variation of DIC and thus better suited to control by the physiology of the organism. The occurrence of stronger control over the CaCO3 precipitation process by U. flabellum is supported by the findings of Ries [41]. Calcareous green algae are able to release DOC, but cannot incorporate organic carbon [58,59]. Thus, although the DOC decline in the enriched experimental treatments was primarily due to water turnover rates in the tanks, part of this DOC enrichment was likely assimilated by bacteria (Fig 1). Furthermore, the experimental addition of DOC in the form of glucose stimulates microbial respiration and growth [60]. Such enhancement of bacterial activity explains the lower O2 concentrations observed in the DOC treatments compared to the control and DIC treatments, as reported previously [46,61].
Little information is available about the interaction between these epibacterial communities and algal physiology, and about the potential effects of environmental changes on these communities and their interactions (i.e., [62-64]). It has been documented for Halimeda copiosa that the abundance of thallus surface-associated bacteria increases under organic nutrient enrichment [65]. In addition to increases in bacterial abundance, shifts in the bacterial community towards non-beneficial or even harmful bacteria have been suggested to occur in corals under increasing DOC concentrations [8]. As no increases in algal respiratory rates were observed in the DOC treatments (Fig 2E and 2F), the negative responses of photosynthetic and calcification rates were more likely related to alterations in the composition of the bacterial community than to changes in its abundance. Benthic reef algae have been shown to differ in the microbial communities associated with their tissue [66]; therefore, part of the observed differences in the DOC response of H. incrassata and U. flabellum might be related to differences in the response of their respective epibacterial communities to the experimental DOC enrichment, as well as to species-specific effects on bacterial-algal interactions. The antagonistic effect found for the combined elevated DIC and DOC treatment in H. incrassata could also be due to a differential effect of each factor on the bacterial-algal interactions. More studies focusing on the seaweed holobiont are necessary to fully understand the relevance of these indirect effects on algal performance.

Ecological Perspective

According to our results, the DIC concentrations expected by the year 2100 [45] may significantly reduce photosynthesis and carbonate production in H. incrassata, while U. flabellum production may experience relatively lower declines. However, when accompanied by increased concentrations of highly labile DOC, the impact of elevated DIC will be alleviated for H. incrassata but exacerbated for U. flabellum. Furthermore, the effect of DIC and DOC could be even more severe when considering their impact on algal calcification at night, as dark calcification rates are also negatively affected in Halimeda spp. [28,67]. Thus, considering the impacts on thallus calcification rates during both day and night, net algal carbonate production may be reduced even further. Potential sources of highly labile DOC for this particular reef lagoon are the seagrass and macroalgal beds themselves [59], human wastewater discharge via groundwater [68,69], and storm events [70], all of which are predicted to increase in the future, providing more labile DOC to this coastal ecosystem. For the carbon budget of the Puerto Morelos reef lagoon [71], this DOC enrichment may lead to a significant reduction in the contribution of calcifying green algae to overall primary production and/or carbonate reef accretion. This impact will have severe consequences for the macrophyte community, habitat structure and, ultimately, the organic carbon fluxes of the ecosystem, due to altered contributions to the labile DOC and POC pools [59]. Significant reductions in carbonate sand production from algal-derived sediments can alter the volume of sand deposits in coastal tropical areas, with important consequences for beach erosion and the coastal sediment dynamics of reef environments.
On the other hand, considering that seagrasses and fleshy algae may prosper under higher DIC conditions [72-74], and that seagrasses can modulate the OA response of calcareous algae [75-77], a deeper understanding of the changes in the macrophyte community and of species interactions will be fundamental to enhancing our capacity to foresee the severity of the impact of predicted environmental changes on carbonate sand production by calcareous green algae.
Effect of the surface polarization in polar perovskites studied from first principles

The (001) surfaces of polar perovskites BaTiO$_3$ and PbTiO$_3$ have been studied from first principles at T = 0 K. For both cases of polarization, the most stable TiO-terminated interfaces show intrinsic ferroelectricity. In the topmost layer, where the O atoms are $>$0.1 \AA{} above Ti, this leads to metallic instead of insulating behavior of the electronic states, which may have important implications for multiferroic tunneling junctions.

Since direct measurement of the atomic displacements occurring in FE near the interface is extremely challenging, their structures can be understood and numerically characterized from first principles. Recently, much work has been conducted to study bare FE surfaces using ab initio density functional theory (DFT) calculations. 4,5,6,7,8,9,10,11,12 It has been found that a critical thickness down to 3 unit cells (1.2 nanometers) is enough to enable the existence of ferroelectricity at room temperature. 13,14,15 However, there is only limited convincing evidence in the literature, e.g., the work by Cohen, 5 that the direction of P may affect the surface relaxation. The functionality of multiferroics assumes that P must be reversible and parallel to the surface normal. Hence, it is worthwhile to carry out ab initio calculations which model the reaction of the (001) surface of polar ferroelectrics upon the reversal of P. In this report, we study the (001) surface of ABO3 perovskites (A = Sr, Ba, Pb and B = Ti), which represent a wide class of ferroelectrics ranging from paraelectric SrTiO3 (STO) to highly polar PbTiO3 (PTO), while BaTiO3 (BTO), with its moderate spontaneous polarization Ps, is an example of a typical FE. The study is based on extensive calculations, using the Vienna Ab initio Simulation Package (VASP), 16 in which the effects of relaxation of atomic positions are included. Nowadays, many FE properties can be successfully calculated from first principles. 17

Table I collects the experimental data for the lattice parameters and atomic positions obtained for the room-temperature tetragonal phase of PTO and BTO (with space group symmetry P4mm) and for cubic STO (Pm-3m), in comparison with our calculations. Overall, there is good agreement between the measured and theoretical structure parameters for the three systems. For polar BTO and PTO, the values of P calculated by the Berry phase approach 22,23 are in reasonably good agreement with experiment. The minor differences seen in Table I between our results and some other recent DFT results 6,12,24 can be attributed to the choice of pseudopotentials and/or to the approximation used for the exchange and correlation potential. We used the local density approximation (LDA), while the electron-ion interactions were described by PAW pseudopotentials. After relaxation, the calculated forces are always less than 0.5 × 10−2 eV/Å. The electron pseudo-wavefunctions were represented using plane waves with a cutoff energy of 650 eV. For the Brillouin-zone integration, a dense Monkhorst-Pack 25 mesh was used. We calculated a 5-unit-cell (∼2-nm) thick ABO3 film. The atoms of the two upper or, alternatively, two lower unit cells were allowed to relax, while all other atoms in the supercell were fixed at their bulk-like, previously optimized positions, which are shown in Table I.
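A minimal sketch of input settings consistent with the computational details reported above may help orient readers who wish to reproduce this type of slab relaxation. The tag values mirror the text (650 eV cutoff, forces below 0.5 × 10−2 eV/Å), but the authors' actual input files are not given, so treat every value as an assumption:

```python
# Write an illustrative VASP INCAR for a TiO2-terminated ABO3 (001) slab
# relaxation with LDA + PAW (LDA enters through the POTCAR choice).
incar = """SYSTEM = TiO2-terminated ABO3 (001) slab
ENCUT  = 650      ! plane-wave cutoff (eV)
IBRION = 2        ! conjugate-gradient ionic relaxation
ISIF   = 2        ! relax ions only, fixed cell
EDIFFG = -0.005   ! stop when all forces < 0.5e-2 eV/A
ISMEAR = 0
SIGMA  = 0.05
"""
with open("INCAR", "w") as f:
    f.write(incar)
# Selective dynamics in POSCAR would free only the two surface unit cells
# (flags T T T) and fix the bulk-like substrate atoms (F F F); the dense
# Monkhorst-Pack grid goes in the KPOINTS file.
```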
A vacuum spacer of 2 nm was used to separate the copies of the periodic structures in the direction perpendicular to the surface. The ABO3 perovskite structures possess a strong anisotropy, resulting in AO (A = Pb, Ba) and TiO2 layers alternating in the [001] direction. The (001) surface of an ABO3 perovskite can be terminated by an AO or a TiO2 layer. Recently, Eglitis and Vanderbilt 12 have reported for the cubic structure of BTO and PTO that the TiO2-terminated surface is more stable. Using the same approach, 12 we calculated the surface energy for both terminations of ABO3. The results are shown in Table II. For each perovskite, its TiO2-terminated (001) surface is indeed energetically favourable. Thus, we consider the (001) surface of ABO3 to be TiO2 terminated in the following.

In polar PTO and BTO, the displacements of the Ti and O atoms occur along the z-axis, so that P is considered to be directed along the [001] direction as well. First, we must formally set the direction of the electric dipole in the unrelaxed supercell, assuming, for instance, that O is always above the corresponding cations in each layer along [001], as given in Table I. Then we can model the two distinct situations by alternatively placing the bulk-like 3-unit-cell-thick substrate and the relaxed layers against each other along the z axis. In the first case, which we denote as P↓, the direction of P is antiparallel to the surface normal. The second model, labelled P↑, corresponds to the case where all cations are above O before relaxation and, therefore, P is parallel to [001]. In the tetragonal FE structure, both configurations may coexist in the random state as P↓ and P↑ domains separated by a domain wall of <2 nm. To quantify the process of relaxation we use the cation-anion displacements δ = z_O − z_cation, calculated for each AO and TiO layer near the interface. In the bulk-like substrate of polar PTO and BTO, the model P↑ means that δ < 0 and, vice versa, the case P↓ models the situation where δ > 0.

Fig. 1 shows several top monolayers (ML) of PTO(001) after relaxation. The case P↑ (P↓) is shown on the left (right) side of Fig. 1. The arrows indicate the direction of the dipoles in each ML, while the numbers at the arrows give the intralayer displacements δ in Å, calculated between the O and metal atoms along [001]. In the state P↓, all dipoles possess the same orientation, meaning that O is always above the cation within each layer. In bulk PTO, the intralayer displacement δ is 0.333 Å for the TiO layer and 0.476 Å for the PbO layer. For the three top ML near the interface, δ is reduced by 30-45% with respect to the corresponding bulk values. In the topmost TiO layer, the reduction of δ is ∼30%. For the second (PbO) and third (TiO) layers from the interface, we find that δ is reduced by 45% and 40%, respectively. In the case of P↑, shown in the left panel of Fig. 1, δ of the third (TiO) layer is 0.153 Å, which is reduced by 54% against the corresponding bulk value. For the second ML, we obtain a reduction of 33%. However, the most significant changes occur in the topmost TiO layer, whose δ is largely reduced, by 68%, whereas the dipole is reversed compared to all others. Thus, using the P↑ model and placing all O below the cations, we obtained in the topmost layer a relaxed configuration where O is above Ti. This is similar to the case of P↓. To investigate the effect of the surface rumpling in perovskites we repeat the calculations for STO(001) and BTO(001).
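The layer-resolved rumpling δ = z_O − z_cation used throughout this discussion is a simple post-processing quantity; a toy calculation from relaxed coordinates might look as follows (the positions are made-up placeholders, not the relaxed coordinates of this study):

```python
# Toy calculation of the layer rumpling delta = z_O - z_cation from a list
# of (element, layer, z) entries.
positions = [
    ("Ti", 0, 10.02), ("O", 0, 10.13),   # topmost TiO layer: delta > 0
    ("Pb", 1, 8.21),  ("O", 1, 8.47),    # second (PbO) layer
]

def rumpling(entries, layer):
    z_o = [z for el, l, z in entries if l == layer and el == "O"]
    z_cat = [z for el, l, z in entries if l == layer and el != "O"]
    return sum(z_o) / len(z_o) - sum(z_cat) / len(z_cat)

for layer in (0, 1):
    print(layer, round(rumpling(positions, layer), 3))  # 0.11 and 0.26
```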
In Table II the corresponding results of our zero-temperature calculations are listed. For paraelectric STO, we find that its TiO-terminated (001) surface becomes marginally polar after relaxation, with a positive rumpling normal to the surface in the top three ML, where the O atoms are above the cations by <0.12 Å. This is in good agreement with the most recent experimental studies. 26 The positive rumpling predicted for bare surfaces of perovskites leads to relatively low catalytic activity. With increasing temperature, the rumpling is distorted, which may further stimulate potential catalysis. For the TiO-terminated BTO surface, we have found details of relaxation similar to those of PTO. In fact, all our results are in good agreement with those reported by Eglitis and Vanderbilt. 12 In the case of P↑, the topmost BTO rumpling of ∼0.1 Å, being larger than the corresponding bulk value, is similar to that of highly polar PTO. The sign of δ in the topmost ML of BTO is reversed with respect to all others calculated for the layers situated far below the interface. In the third ML, δ is 0. The P↓ model yields for BTO(001) a reversed dipole in the second ML, with marginal δ. Thus, we find for the three systems and different arrangements of P that the TiO-terminated (001) surfaces prefer the configuration where O is above Ti.

In the cubic ABO3 perovskite structure, each Ti4+ ion sits in a regular sixfold-coordinated site with all of the Ti-O bonds of equal length, as shown in Fig. 2 for bulk STO. In a tetragonal perovskite structure, such as t-PTO, the relaxed cluster of O atoms about the sixfold-coordinated Ti forms a distorted octahedron, where one of the two bond lengths along [001] is rather short while the other Ti-O bond in the vertical direction is significantly longer than the four other bonds stretching in the equatorial plane. If we exclude the longest Ti-O bond from consideration using electrostatic arguments, then the environment of each Ti becomes a fivefold-coordinated polyhedron, which is similar to that of Ti at the interface. The left three panels of Fig. 2 compare the Ti-O bond lengths in bulk PTO, normalized to the value of the ideal octahedron in the cubic structure, to those in the polyhedron around the fivefold-coordinated Ti in the topmost layer. Using the P↓ model for the PTO(001) surface, we obtain Ti-O bonds whose lengths are similar to those of t-PTO and, hence, nothing dramatic happens in the environment of the topmost Ti. In the case of P↑, the bond length distribution around the topmost Ti is restricted so that the equatorial and vertical bond lengths tend to be equal to each other. Moreover, the closest O atom to the surface Ti is attached along [001] from the opposite side compared to all Ti placed below the surface in the regular crystal structure within P↑. It is clear that the O-Ti-O bond angles for the equatorial Ti-O bonds of the topmost ML must change dramatically to compensate the charge distribution around Ti. It appears that these bond angles become >90°, as shown in Fig. 1. Therefore, whatever state of P is modeled in t-PTO, the O atoms must relax above Ti on the TiO-terminated (001) surface. Regarding BTO, the same conclusions may be drawn.

Fig. 3 shows the view of the charge density along [010] calculated for the top six ML of PTO and projected on the x-z plane of the supercell. The isocharge lines plotted in
Fig. 3 for both cases of P illustrate the charge transfer across the cell, while the arrows indicate the dipole directions within each ML. In the case of P↓, which is shown in the right panel of Fig. 3, three bridges are seen between Ti and the nearest O. The shortest bond with O, which is always below Ti along [001], has a large population value. In the P↑ state, the charge transfer picture is similar to that of P↓ for the topmost Ti only. Far below the interface (starting from the 5th ML), all Ti have their most populated bond with the O that is above Ti. In the third ML, however, the Ti ion is strongly bonded to the equatorial oxygens, showing some sort of blockade for the charge transfer along the [001] direction. This may reveal the key electronic-structure factors behind the surface relaxation of polar FE.

Recently, Urakami et al. 27 have observed the surface conductance of BaTiO3 single crystals in ultra-high vacuum below T_C. It has been shown that the in-plane conductance is the result of an intrinsic surface electron/hole layer, that is, due to the surface polarity and not due to O vacancies or some other defects. The I-V characteristics show a pronounced difference in conduction between the poled states of BTO. We can explain this difference in a simple way using our ab initio results. To reveal the differences between P↓ and P↑, we plot in Fig. 4 the Mulliken site-projected density of states (DOS) of the BTO(001) surface for both cases of polarization. The Ti and O DOS for the topmost ML are shown in comparison with the corresponding DOS of t-BTO. For bulk BTO, Fig. 4 shows a pronounced insulating band gap of 2 eV. This value is typically underestimated by the LDA approximation of DFT. Comparing the Ti and O DOS of t-BTO and the topmost ML of BTO(001), we see a spectacular change of the electronic states occurring due to the surface relaxation and the variation of P. The major DOS features can be summarized as follows. In the case of P↑, a few O states appear in the band gap while the Ti DOS is not affected. For the P↓ poled state, the Ti lower conduction band, shifted downwards in energy by ∼2 eV, contributes significantly to the DOS in the band gap region. This causes metallic behavior of the topmost ML in the case of P↓, yielding a rather large in-plane conductance. In the case of P↑, the Ti states have a gap at the Fermi energy, which corresponds to a tiny in-plane conductance. Thus, depending on the polarization direction, the topmost ML shows either metallic or insulating oxide behavior. As a consequence, the in-plane conductance changes drastically, which is a reasonable explanation of the experimental results by Urakami et al. 27

In summary, on the ab initio basis of our work we have shown that the intrinsic ferroelectricity in polar perovskites is suppressed by ∼30% in the surface region. For both polarization directions, the TiO-terminated surface of BTO and PTO forms an electric dipole in which the O atoms are shifted >0.1 Å above Ti. Nevertheless, the electronic structure of the surface layer changes between metallic and insulating oxide behavior under reversal of the polarization, which changes the surface conductance drastically. 27 This may have important implications for the design of multiferroic nano-devices.
Dyson Brownian Motion and motion by mean curvature

We construct Dyson Brownian motion for $\beta \in (0,\infty]$ by adapting the extrinsic construction of Brownian motion on Riemannian manifolds to the geometry of group orbits within the space of Hermitian matrices. When $\beta$ is infinite, the eigenvalues evolve by Coulombic repulsion and the group orbits evolve by motion by (minus one half times) mean curvature.

Introduction

Fix $\beta \in (0, \infty]$ and $n$ standard independent Wiener processes $\{B_j(t)\}_{j=1}^n$, $t \in [0, \infty)$. Dyson Brownian motion refers to the unique weak solution to the Itô equation
$$d\lambda_j(t) = \sqrt{\tfrac{2}{\beta}}\, dB_j(t) + \sum_{k \neq j} \frac{dt}{\lambda_j(t) - \lambda_k(t)}, \quad 1 \le j \le n, \qquad (1.1)$$
within the Weyl chamber
$$W_n = \{(\lambda_1, \ldots, \lambda_n) \in \mathbb{R}^n \,|\, \lambda_1 < \lambda_2 < \ldots < \lambda_n\}. \qquad (1.2)$$
In this paper, we introduce a new stochastic model for equation (1.1). To this end, we first review the most relevant past constructions to provide some context for our work.

Let $\mu_{n,\beta}$ denote the probability measure on $W_n$ with density proportional to the weight $\prod_{1 \le j < k \le n} (\lambda_k - \lambda_j)^{\beta}$. There are two fundamentally different classes of random matrices whose eigenvalues have law $\mu_{n,\beta}$. For $\beta = 1$, 2 and 4 these are the self-dual Gaussian ensembles (GOE, GUE and GSE) of real symmetric, complex Hermitian and quaternionic matrices, introduced in the 1960s by Dyson, Gaudin and Mehta [15]. For $\beta \in (0, \infty]$, these are the Gaussian $\beta$ ensembles (G$\beta$E) of real, symmetric tridiagonal matrices, introduced by Dumitriu and Edelman [5]. Orthogonal polynomials played an important role in the first studies of these ensembles (see [4,5,15]). However, dynamic models, especially equation (1.1), play an important role in understanding universality [7].

Once a random matrix ensemble has been chosen, natural time dynamics can be obtained by replacing a single matrix drawn from the ensemble with a matrix-valued process whose equilibrium measure is the given ensemble. Dyson obtained equation (1.1) in this way for $\beta = 1$, 2 and 4, replacing each Gaussian self-dual ensemble with its associated Ornstein-Uhlenbeck process [6]. For $\beta \in (0, \infty]$ the extension of Dyson's approach to the G$\beta$E ensemble is a subtle problem. Holcomb and Paquette have constructed diffusions of tridiagonal matrices whose eigenvalues satisfy equation (1.1), using orthogonal polynomials and the Lanczos algorithm [9]. On the other hand, Yabuoku has studied the eigenvalue process for the G$\beta$E diffusion obtained by choosing independent Ornstein-Uhlenbeck processes on the diagonal and independent Bessel processes on the off-diagonal (see [21, (2.1)]). He shows that the eigenvalue process of a G$\beta$E diffusion depends on additional minors and does not satisfy Dyson Brownian motion. The gap between these results arises because the diffusions of tridiagonal matrices introduced by Holcomb and Paquette are described somewhat implicitly, in terms of a separation between eigenvalue and eigenvector dynamics ([9, Section 3]); these conditions do not hold for the G$\beta$E diffusion. In the range $\beta \in [0, 2]$, a model for equation (1.1) inspired by free probability has been constructed by Allez, Bouchaud and Guionnet [1,2]. They construct a stochastic process $S_t$ of real, symmetric matrices whose eigenvalues satisfy (1.1). Roughly, the process $S_t$ is a scaling limit that interpolates between free convolution and standard convolution steps. Despite the narrower range of $\beta$, and a different matrix model, this work contains certain observations that reappear in [9]. Thus, the existence of natural time-dependent matrix models whose eigenvalues satisfy (1.1) for arbitrary $\beta \in (0, \infty]$ is not fully settled.
This is the question we address. The main contribution of this work is a geometric interpretation of equation (1.1). For each $\beta \in (0, \infty]$, we construct a stochastic process $M_t$ in the space of Hermitian matrices (equation (2.1) below) whose eigenvalue process has the same law as the solutions to (1.1). The main new tool in our approach is Riemannian geometry. Specifically, we use Riemannian submersions of group orbits and a probabilistic interpretation of mean curvature to obtain equation (1.1). The use of Riemannian submersion allows us to view $\beta$ as a parameter that describes an anisotropic splitting between noise in the tangent and normal directions (not an inverse temperature, as in Dyson's work). A similar role for $\beta$ has been observed by Holcomb and Paquette [9, Thm. 7]; our approach provides a systematic geometric explanation for its importance. Second, we show that the Coulombic repulsion in equation (1.1) corresponds to the mean curvature of group orbits. This is not a lucky accident: it is a general principle corresponding to the gradient descent of Boltzmann entropy for group orbits.

In order to explain the main new ideas in the simplest terms, we focus on the explanation of the model, relying on previous work on well-posedness for Dyson Brownian motion and standard calculations in random matrix theory to minimize technicalities. The result in this paper is part of an effort by the authors to develop previously unnoticed connections between three well-studied problems: the construction of Brownian motion on Riemannian manifolds, Dyson Brownian motion, and the isometric embedding problem for Riemannian manifolds. At present, this interplay provides a new formulation of the embedding problem for Riemannian manifolds [13], new interacting particle systems akin to Dyson Brownian motion [16], and a systematic derivation of SDEs for eigenvalue processes of other classes of random matrices using Riemannian submersion [11].

We construct a process $\{M_t\}_{t \ge 0}$ by a suitable projection of standard Brownian motion on $H(n)$ onto the tangent and normal spaces to isospectral orbits. More precisely, assume given $M_0 \in V$, let $\{X_t\}_{t \ge 0}$ be a standard Brownian motion on $H(n)$ starting at $X_0 \in V$, and consider the Itô SDE
$$dM_t = P_{M_t}\, dX_t + \sqrt{\tfrac{2}{\beta}}\, P^{\perp}_{M_t}\, dX_t. \qquad (2.1)$$
Using an explicit description of the projection operators and standard SDE theory, one can show (Lemma 3) that for every $\beta > 0$ there exists a stopping time $\tau_\beta$ and a solution $M_t$ of (2.1) on $[0, \tau_\beta)$. We then have

Theorem 1. The eigenvalue process of $M_t$ has the same law as the solution to equation (1.1) on $[0, \tau_\beta)$.

The projection operators are smooth when the spectrum is simple. Since the eigenvalues do not collide when $\beta \ge 1$, a simple bootstrap argument shows that when $\beta \ge 1$, the stopping time $\tau_\beta = \infty$ (see Lemma 3). Theorem 1 may be established in a direct way once one has identified the Itô equation (2.1). The main insight underlying equation (2.1), and thus Theorem 1, is a probabilistic interpretation of mean curvature. Let us now explain this idea.

Brownian motion on Riemannian manifolds and mean curvature.

There are two standard constructions of Brownian motion on Riemannian manifolds using SDEs, referred to as the intrinsic and extrinsic constructions respectively [10,12]. The extrinsic construction goes as follows. Assume $M$ is a smooth $d$-dimensional manifold and assume given a smooth embedding $u : M \to \mathbb{R}^q$. Then Brownian motion on the embedded submanifold $\Sigma = u(M)$ may be constructed as the solution to the Stratonovich equation
$$dZ_t = P_{Z_t} \circ dW_t, \qquad (2.2)$$
where $P_Z$ is the orthogonal projection onto $T_Z \Sigma$ in $\mathbb{R}^q$ and $W_t$ is a standard Wiener process in $\mathbb{R}^q$ [10].
The use of the Stratonovich formulation is crucial when one studies stochastic processes on manifolds, since it accounts naturally for invariance under coordinate transformations. But equation (2.2) also admits the equivalent Itô formulation where H(Z) is the mean curvature vector of the embedding u at the point Z. This identity is due to Stroock [20,Thm 4.4.2]; it was rediscoved by two of the authors in their work on the isometric embedding problem [13,Thm.2]. The mean curvature vector of an embedding is defined 1 as the trace of the second fundamental form, but equations (2.2) and (2.3) show that it may be approached directly from SDE theory. We obtain equation (2.3) by beginning with (2.2), using the conversion rule between the Itô and Stratonovich formulations to compute the Itô correction, recognizing finally that the Itô correction has a fundamental geometric meaning. A related identity involving mean curvature in the case of a Riemannian submersion was obtained by Pauwels [18]. The intuitive content of equation (2.3) is that stochastic fluctuations in the tangent space give rise to a 'centrifugal force' given by the mean curvature. Let us illustrate this idea with an example. Let Σ be a sphere of radius r in R q . We compute the projections explicitly, to see that equations (2.2) and (2.3) take the form The Stratonovich form ensures that the constraint |Z t | = r holds for all t. The 'centrifugal force' is (q − 1)/2r and it arises as follows. If we had naively attempted to construct Brownian motion on the sphere with the Itô SDE Thus, the radial process |Z t | solves a deterministic equation, even though the evolution of Z t is purely stochastic. Further, while each point Z t moves tangentially to the sphere of radius |Z t |, this evolution has the effect of pushing spheres outwards normally by minus a half times the mean curvature. (Observe that the mean curvature vector for the sphere points inward). is H(u(x, t)) for t ∈ [0, T ]. Motion by mean curvature has been extensively studied in geometric analysis [3,8]. It is related to Dyson Brownian motion as follows. Theorem 2. Assume β = ∞ in equation (2.1). The eigenvalues of the process solving dM t = P Mt dX t evolve deterministically by Coulombic repulsion for t ∈ [0, ∞). Moreover, the corresponding isospectral orbits Σ Mt move by minus a half times the mean curvature. A minor difference with immersions in R q is that the mean curvature vector is the trace of the second fundamental form with respect to the Frobenius metric on H(n). The flow is described precisely in equation (3.6) below. Theorem 2 corresponds to a gradient descent of Boltzmann entropy in the following sense. As in the example above, we see that each matrix M t on the group orbit moves tangentially (and stochastically), whereas the group orbit as a whole evolves normally (and deterministically) by minus a half times the mean curvature. The group orbits foliate the space H(n) and the group action is an isometry. In this setting, it is known that the mean curvature at each point on the group orbit Σ Mt is the gradient (with respect to the Frobenius norm) of − log vol(Σ Mt )) [17, p.3350]. The volume of the group orbit depends only on the eigenvalues Λ t of M t . By interpreting Λ t as a macrostate, and each point M t as a microstate, we see that log vol(Σ Λt )) may be interpreted as a Boltzmann entropy obtained using the theory of Brownian motion on Riemannian manifolds. Further analysis from this viewpoint may be found in [11,13,16]. 
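The sphere computation above is easy to check numerically. The following sketch (a minimal Euler scheme, assuming the naive tangential Itô dynamics $dZ = P_Z\,dW$ with $P_Z = I - ZZ^{\top}/|Z|^2$ and no mean-curvature correction) verifies that the squared radius grows deterministically, $|Z_t|^2 = r^2 + (q-1)t$, consistent with the radial 'force' $(q-1)/2|Z_t|$ described in the text.

```python
import numpy as np

def naive_tangential_bm(q=8, r=1.0, dt=1e-4, steps=5000, seed=1):
    """Euler scheme for dZ = P_Z dW with P_Z = I - Z Z^T / |Z|^2 and no Ito
    correction; |Z_t|^2 should then grow deterministically as r^2 + (q-1) t,
    i.e. spheres are pushed outward by minus one half times mean curvature."""
    rng = np.random.default_rng(seed)
    Z = np.zeros(q)
    Z[0] = r
    for _ in range(steps):
        dW = np.sqrt(dt) * rng.standard_normal(q)
        Z = Z + dW - Z * (Z @ dW) / (Z @ Z)  # keep only the tangential noise
    return Z @ Z, r**2 + (q - 1) * steps * dt

print(naive_tangential_bm())  # the two values should nearly agree
```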
Proofs We provide self-contained proofs of Theorem 1 and Theorem 2. An alternative approach, which uses Pauwel's theorem on the relationship between Riemannian submersion and Brownian motion on Riemannian manifolds, and applies to other eigenvalue processes, has been pursued by the first author [11]. Proof. Fix a diagonal matrix Λ ∈ V with the same spectrum, so that Σ M = Σ Λ . As can be checked, the isotropy group G Λ = {Q ∈ U (n) : QΛQ * = Λ} is given by diagonal matrices with entries µ i ∈ S 1 and is hence isomorphic to T n . Since G Λ is a closed subgroup of U (n), the left coset space U (n)/G Λ is a smooth manifold of dimension n 2 − n. The orbit map θ Λ then descends to the quotient and gives a smooth embedding θ Λ : In the proof of Theorems 1 and 2 it is convenient to have an explicit description of the tangent and normal spaces to an orbit. Proof. Fix any skew-hermitian A ∈ A(n) and consider the curve Since γ(t) ∈ Σ Λ for any t ∈ R and γ(0) = Λ, we find that Since [iD, Λ] = 0 for any real diagonal matrix D it follows by counting dimensions that Observing that [A, Λ] has empty diagonal for any A ∈ A(n) we find that . . . , µ n ) : µ j ∈ R for all j = 1, . . . , n} . Since the action θ restricts to a transitive action on the orbits and θ Q : Σ Λ → Σ Λ is a local diffeomorphism for any Q ∈ U (n) the claim follows. Proof. We claim that the maps M → P M and M → P ⊥ M are smooth on the open, dense subset V ⊂ H(n) of hermitian matrices with simple spectrum. The local existence then follows from standard SDE theory with τ β being (bounded from below by) the first collision time of the eigenvalues. The assertion that τ β = +∞ almost surely for β ≥ 1 then follows from Theorem 1 and known non-collision results of Dyson Brownian Motion for β ≥ 1 (which are consequences of "McKean's argument", see e.g. [14,Proposition 4.3]). To show that the projections are smooth we fix M = QΛQ * ∈ V and N ∈ H(n). From the description in Lemma 2 we find that where e j is the j-th standard basis vector of R n . Therefore we have where P j = P j (M ) := Qe j (Qe j ) * is the (spectral) projection onto the eigenspace associated to eigenvalue λ j (M ). Observe that the ordered spectrum λ 1 , . . . , λ n : V → R is a family of smooth functions on V thanks to the implicit function theorem. Consequently, also the projection operators P j are smooth on V, which can be seen, for example, from the formula where γ ⊂ C is a Jordan curve such that λ j (M ) is the only eigenvalue of M contained in its interior (see e.g. [19]). Proof (of Theorem 1): We get the equation for the eigenvalues by Itô's formula. Consider the ordered spectrum λ 1 , . . . , λ n : V → R. These are smooth functions by the implicit function theorem and the Hadamard variation formulae show that for A, B ∈ H(n) 3) and Let then {E α } n 2 α=1 be the standard basis (orthonormal with respect to the Frobenius metric) of H(n), i.e., where e j is the j-th standard basis vector of R n and j = 1, . . . , n, 1 ≤ k < l ≤ n. Then (2.1) reads where now X α t are jointly independent standard Wiener processes on R starting at X α 0 . By Itô's formula it follows Since P Mt (E α ) is tangent to the isospectral manifold we have Dλ j | Mt (P Mt (E α )) = 0 for any α. Moreover, writing M t = Q t Λ t Q * t for a fixed t > 0 we infer from the description (3.2) that Q * t P ⊥ Mt (E α )Q t is diagonal and hence for any α thanks to (3.4). Consequently, Observe now that is a real-valued martingale with quadratic variation since {E α } is an orthonormal basis. 
By Lévy's characterisation it therefore follows that $dZ = dB_j$ for a real-valued Brownian motion $B_j$; that is, the eigenvalues satisfy equation (1.1), which shows the claim.

Proof (of Theorem 2): Suppose that $dM_t = P_{M_t} dX_t$. As in the proof of Theorem 1, it follows that the eigenvalues evolve deterministically by Coulombic repulsion. We moreover claim that the orbits $\Sigma_{M_t}$ evolve by time-reversed, scaled mean curvature flow. More precisely, let $\Lambda_t$ be a diagonal matrix with the same spectrum as $M_t$, so that $\Sigma_{M_t} = \Sigma_{\Lambda_t}$. As in Lemma 2 we consider the closed subgroup $T \subset U(n)$ of diagonal matrices with entries in $S^1$. We then define a family of embeddings $F$ so that $\Sigma_{M_t} = F(U(n)/T, t)$, and we claim that equation (3.6) holds. First, observe that $H_{Q\Lambda Q^*} = Q H_{\Lambda} Q^*$, so that it suffices to show (3.6) for $Q = \mathrm{Id}$.
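To make the construction concrete, here is a minimal simulation sketch. It splits an ambient Hermitian Brownian increment into components tangent and normal to the isospectral orbit (using Lemma 2: in an eigenbasis of $M$ the normal space consists of real diagonal matrices) and damps the normal part by $\beta^{-1/2}$. The precise scaling of the two components in equation (2.1) did not survive extraction, so this anisotropic splitting, and the Brownian normalization, are our assumptions.

```python
import numpy as np

def orbit_step(M, dX, beta):
    """One Euler step in the spirit of (2.1): split the Hermitian increment
    dX into tangent/normal parts relative to the orbit through M and scale
    the normal part by beta**-0.5 (the exact scaling is an assumption)."""
    lam, Q = np.linalg.eigh(M)
    dXe = Q.conj().T @ dX @ Q                       # increment in eigenbasis
    normal = Q @ np.diag(np.real(np.diag(dXe))) @ Q.conj().T
    tangent = dX - normal
    return M + tangent + normal / np.sqrt(beta)

def simulate(n=5, beta=2.0, dt=1e-4, steps=2000, seed=0):
    rng = np.random.default_rng(seed)
    M = np.diag(np.linspace(-1.0, 1.0, n)).astype(complex)
    for _ in range(steps):
        G = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
        dX = np.sqrt(dt) * (G + G.conj().T) / 2     # Hermitian BM increment
        M = orbit_step(M, dX, beta)
    return np.linalg.eigvalsh(M)

print(simulate())             # eigenvalues repel and stay distinct
print(simulate(beta=np.inf))  # purely tangential noise, cf. Theorem 2
```

For $\beta = \infty$ the normal contribution vanishes and the eigenvalues drift apart by the deterministic Coulombic repulsion of Theorem 2.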
A Sub-Density Theorem of Sturm-Liouville Eigenvalue Problem with Finitely Many Singularities We study the distribution of the Sturm-Liouville eigenvalues of a potential with finitely many singularities. There is an asymptotically periodical structure on this class of eigenvalues as described by the entire function theory. We describe the singularities of its potential function explicitly in its eigenvalue asymptotics. Introduction and Main Result In this short note, we study the eigenvalue distribution for the following differential equation. where {x m,k } m,k ∈ (0, π 2 ) and {c m,k } m,k ∈ R. We are dealing with a piecewise C M [0, π] potential function p(x). For each m, x m,k are distinct and p(x) has a jump at m-th derivative at x m,k . We assume nontrivially the {x m,k } m,k ∈ (0, π 2 ) has J elements and are all distinct. If there are two singular points symmetrically to the middle point located in (0, π), then our method doesn't apply in this case. It is asked by Carlson, Threadgill and Shubin [1]: How are the singularities of p manifested in the distribution of eigenvalues? Being considered as a function of ω, y(π; ω) is an entire function of ω. Moreover, the zeros of y(π; ω) are the Dirichlet eigenvalues of the system (1.1). To study the asymptotics of Dirichlet eigenvalues, we examine the zeros of entire function y(π; ω). We try to answer the question from the point of view of complex analysis in this particular setting. In [1], a distribution of the eigenvalues with coefficients in terms of spectral invariants is described in [1,Theorem 4.4] applying the Newton's method. In this paper, we try to characterize the distribution of the eigenvalues explicitly in terms of the singularities themselves and find the composites of the Dirichlet eigenvalues. Can one really hear the singularities of the potential p? We state the main result of this paper: be the rearrangement of the singular points {x m,k } m,k such that 0 =: ω 0 < ω 1 < ω 2 < . . . < ω J < ω J+1 := π 2 . There exist exactly 2J + 2 subsequences of the zeros of y(π; ω), denoted as {z n l }, where l = 1, 2, . . . , 2J + 2, such that in which {z n } are the zeros of y(π; ω). In particular, we recover the point set {ω j } J j=1 from the subsequences of Dirichlet eigenvalues corresponding to each of these points. We may refine the asymptotics (1.1) to next order by the method in [9, p. 37]: This is the only eigenvalue asymptotics containing the information on the position of the singularities of a given potential function known to the author. We may compare the result in [6,7,9]. However, in [6], they considered a much general class of potential functions. One may sum up all of the subsequences to obtain the classic eigenvalue density as in [7,9]. We start with the asymptotic expansion of the solution of (1.1) which we refer to [1,2]. The following asymptotics holds: if ω ∈ C and This is essentially the (3.e) in [1, p. 84]. However, we deal with ω ∈ C in this paper. The only difference is in the big O-term in the end of (1.8). We refer the proof to [1, p. 84], and also [9], which comes from the repeated integration by parts. We will apply the Wilder's theorem to (1.8) which is a sum of asymptotically hyperbolic series to obtain the asymptoics of the Dirichlet eigenvalues. The Wilder's theorem There is an asymptotic periodic structure [4,5,8] within the zero set of the asymptotically hyperbolic sum, say, the asymptotic expansion (1.8). We refer to [5,8] for a comprehensive study on the zero distribution theory of this kind. 
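Before turning to the entire-function machinery, the object $y(\pi;\omega)$ can be made concrete numerically. The sketch below assumes the standard Dirichlet problem $y'' + (\omega^2 - p(x))y = 0$, $y(0) = 0$, $y'(0) = 1$ (the displayed equation did not survive extraction, so this form is an assumption) with a hypothetical potential having a single derivative jump, and locates the zeros of $\omega \mapsto y(\pi;\omega)$ by bracketing.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def p(x):
    # hypothetical piecewise-smooth potential with a kink at x = 1.0
    return np.where(x < 1.0, 0.0, 2.0 * (x - 1.0))

def y_pi(omega):
    """y(pi; omega) for y'' + (omega^2 - p(x)) y = 0, y(0)=0, y'(0)=1."""
    sol = solve_ivp(lambda x, Y: [Y[1], (p(x) - omega**2) * Y[0]],
                    (0.0, np.pi), [0.0, 1.0], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]

grid = np.linspace(0.1, 10.0, 400)
vals = [y_pi(w) for w in grid]
roots = [brentq(y_pi, a, b) for a, b, fa, fb in
         zip(grid[:-1], grid[1:], vals[:-1], vals[1:]) if fa * fb < 0]
print(np.round(roots, 4))  # frequencies shifted up from 1, 2, 3, ... by p
```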
To be more convincing, we start with the following theorem. One can bypass this part if familiar with the entire function theory. The indexing in this section is independent of the others. where z = x + iy, A j = 0, ω 1 < ω 2 < · · · < ω n . Then, there exists K > 0 such that 1. each zero of g is in |x| < K; 2. for each pair of reals (α, s) with s > 0, Let us acquire a more sophisticated theorem of this type. Let where n > 1 and A j and ω j are complex numbers such that A j = 0 and the ω j are distinct; the m j are non-negative integers; the functions ǫ are analytic for |z| ≥ r 0 ≥ 0 with lim z→∞ ǫ(z) = 0. When we are talking about the zeros of f (z), we are referring to its zeros outside certain open ball around the origin. We set up the following quantities to the f (z) in (2.4): Let Q be the broken line given by the ω j given in (2.4) with ω 1 , · · · , ω σ as its vertices. The indices are labeled counterclockwise. Let L k be the line segment [ω k , ω k+1 ] and Certain ω p on L k are assigned doubly indexed subscripts as follows: Let the convex hull of ω k , ω k+1 and τ p = ω p +im p e k in which ω p on L k ; assign subscripts j = 1, · · · , σ k to ω kj so that ω k1 = ω k , ω kσ k = ω k+1 and τ kj are vertices of this convex hull and preceding in a counterclockwise direction from ω k + im k e k to ω k+1 + im k+1 e k . For j = 1, · · · , σ k − 1, which is real; n kj is the number of τ p on L kj . In particular, if L kj in an interval with exactly two end points, then we have n kj = 2. Moreover, for j = 1, · · · , σ k − 1 and h > 0, we define V kj (h) := {z| ℑ(z/e k ) ≥ 0, |ℜ(z/e k ) + µ kj log |z|| ≤ h}. (2.7) T k (θ) is defined to be a closed sector with vertex at zero of opening 2θ about the outward normal to L k through the origin. For the same k and j and each triple of reals (α, s, h), s > 0 and h > 0, the set R kj (α, s, h) := {z| ℑ(z/e k ) + µ kj arg z ∈ [α, α + s], |ℜ(z/e k ) + µ kj log |z|| ≤ h}, (2.8) where arg z ∈ (φ k , φ k + π) and R kj (α, s, h) is in V kj (h) ∩ T k (θ). They are asymptotically logarithmic tubular neighborhoods. We refer to [4] for a comprehensive study. Now we state the following theorem. Theorem 2.2 (Dickson [4]). Let f (z) be given as in (2.4). Then, there exists h > 0 such that 1. all but a finite number of zeros of f of modulus greater than r 0 are in k,j V kj ; 2. for each pair of positive reals ǫ and s 0 , there exists an α 0 = α 0 (ǫ, s 0 ) such that whenever α ≥ α 0 and s ≥ s 0 , This is exactly stated as in [4]. The proof is in [5,Theorem 2,p.21]. We refer to [3] for another application of this theorem. Proof of Theorem 1.1 Proof. To apply Theorem 2.2 to the hyperbolic sum (1.8), we rewrite (1.8): It is well-known [9] that there is a C δ depending on the distance to the zeros of sin ωπ such that exp |ℑωπ| < C δ sin{ωπ}. (3.1) Hence, (1.8) becomes Now we rearrange according to their exponential powers by the theorem assumption to the following form: in which the C j (m, ω) and D j (m, ω) can be obtained by comparing (3.2) with (1.8). Besides (3.2), the entire function y(π; ω) is bounded near Z. Without loss of generality, we consider the zeros of Y (ω) := ωy(π; ω/i) by applying Theorem 2.2 in a suitable strip containing the real axis. We observe the zeros of Y (ω) spread themselves vertically along the imaginary axis, that is , the zeros of y(π; ω) spread themselves along the real axis. 
In particular, given the singularity sequence {x m,k } m,k which are all distinct by assumption with J elements, we let {ω j } J j=1 be the rearrangement of {x m,k } m,k such that We construct following 2J + 2 successive intervals in [−π, π]: These intervals are applied as the polygons described previously. However, we note that L J+1 ∪ L J+2 combines to generate a sequence of zeros as described by (2.8) and then (2.9) after observing the exponential exponents in (3.2). There are actually 2J + 1 asymptotically rectangular area on duty. Without loss of generality, we take each {L l } 2J+2 l=1 to generate an asymptotically rectangular area {R l } 2J+2 l=1 as described by (2.7) and (2.8). Finally, we note that one can not identify the quantities {µ k,j } in (2.6), because not being able to locate the coefficients {m j } in (2.4) again in (3.2). For our case, the quantities {e k } in (2.5) are equal to 1. Because the zeros of y(π; ω) are symmetric to the imaginary axis, we can rewrite the equation above to be z n l ∼ n l π ω l − ω l−1 + O(1), l = 1, . . . , 2J + 2; n l ∈ Z. Once again, we note that {z nJ+1 }∪{z nJ+2 } is the sequence of zeros generated by the interval L J+1 ∪L J+2 . This proves the Theorem 1.1.
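In this sense one can 'hear' the singularities: each of the $2J+2$ zero subsequences has asymptotic spacing $\pi/(\omega_l - \omega_{l-1})$, so cumulative sums of the recovered gaps return the points $\omega_j$. A toy sketch with hypothetical slope values:

```python
import numpy as np

# hypothetical subsequence slopes s_l for singularities at omega = 0.5, 1.1
# (with omega_0 = 0 and omega_{J+1} = pi/2, as in the theorem)
slopes = np.array([np.pi / 0.5, np.pi / 0.6, np.pi / (np.pi / 2 - 1.1)])
gaps = np.pi / slopes                 # omega_l - omega_{l-1} = pi / s_l
print(np.round(np.cumsum(gaps), 4))  # recovers ~ [0.5, 1.1, pi/2]
```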
Constraints on Cosmological Parameters from Strong Gravitational Lensing by Galaxy Clusters We investigate how observations of strong lensing can be used to infer cosmological parameters, in particular the equation of state of dark energy. We focus on the growth of the critical lines of lensing clusters with the source redshift as this behaviour depends on the distance-redshift relation and is therefore cosmologically sensitive. Purely analytical approaches are generally insufficient because they rely on axisymmetric mass distributions and thus cannot take irregular critical curves into account. We devise a numerical method based on the Metropolis-Hastings algorithm: an elliptical generalization of the NFW density profile is used to fit a lens model to an observed configuration of giant luminous arcs while simultaneously optimizing the geometry. A semi-analytic method, which derives geometric parameters from critical points, is discussed as a faster alternative. We test the approaches on mock observations of gravitational lensing by a numerically simulated cluster. We find that no constraints can be derived from observations of individual clusters if no knowledge of the underlying mass distribution is assumed. Uncertainties are improved if a fixed lens model is used for a purely geometrical optimization, but the choice of a parametric model may produce strong biases. Introduction In the widely accepted 'concordance model' of cosmology a cosmological constant accounts for more than 70 per cent of the overall energy density in the universe. It is considered an important feature mainly because it can account for both the spatial flatness and the accelerated expansion of the universe. Still there is no physical explanation that is generally considered satisfying, and the observational evidence can be reproduced in models that instead introduce dark energy, characterized by negative pressure. Observations of gravitational lensing may help determine its equation of state. Lensing phenomena are sensitive to the geometry of the cosmological background since the appearance of an image depends on the distances between source, lens and observer. If we can obtain information about these distances from observations, we can relate them to redshifts to constrain spacetime curvature, which is governed by cosmological parameters. In this paper we study strong gravitational lensing. Since a mass distribution needs to feature very high densities to act as a strong lens, galaxy clusters are suitable subjects of investigation. The critical lines of such lenses grow with the redshift of the source. The methods discussed here aim to infer geometrical information from observations of giant arcs, which trace the critical lines, and consequently constrain the equation of state of dark energy. An exploratory study of the concept was presented by Meneghetti et al. (2005). Other authors followed similar approaches to analyse individual clusters, e. g. Sereno (2002), Soucail et al. (2004), Gilmore & Natarajan (2009). Moreover weak lensing can be studied with the same goal, as presented by Medezinski et al. (2011), for instance. We give a short summary of the underlying theory in Sect. 2. In Sect. 3 we present analytic studies of cluster lensing. While they are restricted to axisymmetric lenses, they can provide us with estimates of the influence of the dark energy equation of state on strong lensing features. In Sect. 
4 we turn to numerical approaches and develop Markov chain Monte Carlo methods that aim to fit a set of parameters, characterizing the lens model and the geometry, to an observed image configuration. In Sect. 5 an alternative approach is presented, which infers the geometry from the observed scaling of the critical lines at different redshifts. Finally in Sect. 6 we discuss the performance of the various approaches and point out possible sources of errors.

Cosmological model

We assume that the universe is spatially flat and characterized by the Friedmann-Lemaître-Robertson-Walker metric
$$ds^2 = -c^2 dt^2 + a^2(t)\left[dw^2 + w^2\left(d\theta^2 + \sin^2\theta\, d\phi^2\right)\right], \quad (1)$$
where t denotes the coordinate time, w is the comoving radial coordinate and θ and φ are the azimuthal and polar angles. The evolution of the scale factor a(t) is governed by the Friedmann equations,
$$\left(\frac{\dot a}{a}\right)^2 = \frac{8\pi G}{3}\,\rho, \qquad \frac{\ddot a}{a} = -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^2}\right).$$
We have omitted the curvature terms as well as those involving a cosmological constant. Instead we consider a dark energy component with the equation of state $p_x = w\rho_x c^2$, such that its density is given by
$$\rho_x = \rho_{x,0}\,(1+z)^{3(1+w)}.$$

The singular isothermal sphere

We first examine the lensing behaviour of the singular isothermal sphere profile. In this model the density of a galaxy cluster is described by the function
$$\rho(r) = \frac{\sigma^2}{2\pi G r^2},$$
where σ is the velocity dispersion of the cluster members. Such a lens is characterised by the Einstein radius
$$\theta_E = 4\pi\,\frac{\sigma^2}{c^2}\,\frac{D_{ds}}{D_s}.$$
Figure 1 shows the dependence of the Einstein radius on the source redshift in different dark energy cosmologies for a cluster with velocity dispersion σ = 1000 km s⁻¹ at redshift z_d = 0.3. In addition to the ΛCDM scenario w = −1, we choose to examine the behaviour for a constant equation of state parameter w = −0.7 and an extreme case of dark energy, a phantom model with w = −1.3. While the Einstein radius itself takes values of up to 25″ for the redshift range considered, differences between the cosmologies are less than 1″ and largest if the source is located close to the cluster. To obtain the angular diameter distance ratio D_ds/D_s from a single measured Einstein radius θ_E the velocity dispersion σ of the lens must be known. We can eliminate it from our analysis if we study the growth of the critical curve with the source redshift, comparing two Einstein radii θ_E1, θ_E2 at different redshifts z_s1, z_s2. The resulting ratio,
$$f(z_{s1}, z_{s2}) = \frac{D_{ds2}/D_{s2}}{D_{ds1}/D_{s1}},$$
appears frequently in our analyses, and we label it the geometry factor. Via this factor, cosmological parameters determine lensing properties of an object.

Fig. 2. Growth of the geometry factor f with the source redshift z_s2 for two different dark energy cosmologies. The fixed reference redshift is z_s1 = 1.0 or z_s1 = 0.7 respectively.

It should be noted that the proportionality between the geometry factor and the Einstein radius is peculiar to the SIS model. For a fixed redshift z_s1, the geometry factor increases with the redshift z_s2 of the second source, rising steeply behind the lens and flattening out for high redshifts. In Fig. 2 this behaviour is displayed for a lens at redshift z_d = 0.3 and fixed sources at z_s1 = 1.0 or z_s1 = 0.7 respectively. Differences between the two cosmologies shown are most prominent if the two sources are at a high distance from each other, since this configuration corresponds to a large 'lever arm'. For a cosmological analysis it is therefore the most convenient to study pairs of arcs in which one source lies closely behind the lens and the other at a much higher redshift.
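The distance ratios entering the geometry factor are straightforward to evaluate. The sketch below computes flat-wCDM angular diameter distances by quadrature (H0 = 70 km/s/Mpc and Ωm = 0.3 are illustrative assumptions) and tabulates f for a lens at z_d = 0.3 with sources at z_s1 = 0.7 and z_s2 = 4.0, illustrating how weakly f depends on w.

```python
import numpy as np
from scipy.integrate import quad

C_KMS, H0 = 299792.458, 70.0  # km/s and km/s/Mpc (assumed values)

def D_A(z1, z2, Om=0.3, w=-1.0):
    """Angular diameter distance between z1 < z2 in flat wCDM, with the
    dark energy density scaling as (1+z)^(3(1+w))."""
    E = lambda z: np.sqrt(Om * (1 + z)**3 + (1 - Om) * (1 + z)**(3 * (1 + w)))
    chi, _ = quad(lambda z: 1.0 / E(z), z1, z2)
    return (C_KMS / H0) * chi / (1 + z2)  # Mpc

def geometry_factor(zd, zs1, zs2, **kw):
    """f = (D_ds2 / D_s2) / (D_ds1 / D_s1)."""
    return (D_A(zd, zs2, **kw) / D_A(0, zs2, **kw)) / \
           (D_A(zd, zs1, **kw) / D_A(0, zs1, **kw))

for w in (-1.3, -1.0, -0.7):
    print(w, round(geometry_factor(0.3, 0.7, 4.0, w=w), 4))
```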
The NFW profile The NFW profile is arguably the most commonly used parametric model for the matter distribution of dark matter haloes. For its Einstein radius no analytic expression exists, but Bartelmann (1996) provided the convergence: x = r/r s is the dimensionless radial coordinate in the lens plane and κ s = ρ s r s Σ −1 cr ; note that Eq. (16) is valid only for x < 1, i. e. inside the scale radius, which encompasses the strong lensing region. We start our analytic approach from a property of axisymmetric lenses, namely the condition that the mean convergence inside the tangential critical line of radius θ E has to equal unity: If we set up this equation for two different source redshifts z s1 and z s2 , rearrange and divide the two equations, we arrive at the form The function g(x) represents the integral solved by Bartelmann (1996). Again we have used dimensionless coordinates in the lens plane, i. e. x E1,2 = D d θ E1,2 /r s . The left-hand side of Eq. (18) is a function of the source redshifts, while the right-hand side depends on the Einstein radii for those redshifts; both sides also depend on cosmological parameters since these determine the distance-redshift relation. In this form the equation admits a simple graphic solution: plotting each side against the equation-of-state parameter w, the intersection of both curves marks the true value. To test whether the relation can be exploited that way, we consider as an example a halo of mass M = 1.0 × 10 15 h −1 M and scale radius r s = 310 h −1 kpc at a redshift of z d = 0.3 and sources at redshifts z s1 = 0.7 and z s2 = 3.0. The Einstein radii for those redshifts are θ E1 = 7. 9 and θ E2 = 21. 9 in ΛCDM cosmology. In reality, we have to resort to estimates of the scale radius, such as best-fitting values, which introduce errors. Einstein radii have to be determined from the positions of observed arcs. While the latter can be expected to trace the critical lines of the cluster, uncertainties in the deduced Einstein radii may be at least as large as their widths and thus of the order of 1 . Figure 3 shows the influence of such errors on Eq. (18). In the first plot, the correct values for the Einstein radii are used, but the estimate for the scale radius is too high or too low respectively, while in the second plot one of the Einstein radii is underestimated. In each calculation, the correct values are used for all but the specified quantity. The left-hand side of the equation is independent of such errors since, as stressed before, it depends only on the source redshifts, of which exact knowledge can safely be assumed here. Inaccuracies in the scale radius estimate do not have a very large effect on the result for the equation of state parameter, shifting the value of w by ∆w ∼ 0.1 in this example. The determination of the critical line poses a larger problem, with an error of ∆θ E ∼ 1 in the Einstein radius translating into a deviation of ∆w ∼ 0.6. Another caveat is given by the fact that galaxy clusters often possess significant ellipticity. Based on our analysis so far it seems unlikely that these simple, spherically symmetric profiles provide a sufficient approximation, if only for the fact that it is unclear how a robust measure of the Einstein radius can be obtained from arc positions in the case of non-circular critical lines. The lens model To avoid the assumption of axisymmetry, we focus on an elliptical generalization of the NFW profile used by Comerford et al. (2006). 
This model is based on six parameters that characterize the cluster constituting the gravitational lens: the coordinates (x c , y c ) of its centre, the scale convergence κ s , the scale radius r s , the ellipticity and the position angle φ. To calculate the deflection angle field the coordinate frame is shifted and rotated in such a way that the cluster centre determines the origin and the coordinate axes coincide with the axes of the ellipse. Then an elliptical radius is introduced, To obtain the deflection field, the NFW lensing potential ψ is evaluated at this radius and α is calculated by differentiation with respect to the coordinates in the original frame. The convergence κ and shear γ can be computed by further differentiation. Deflection angles obtained in this way are valid only for a reference source redshift z r . For each additional source i at a different redshift z si , values have to be multiplied by the geometry factor Given an image point θ originating from source i, the corresponding source point β is located using the lens equation with the appropriately rescaled deflection angle f i α entering. By scanning a grid on the lens plane for points that fulfil the lens equation for this source position, all image points are located. The chi-square To quantify how well a set of parameters describes the observed lensing effects, we follow Comerford et al. (2006) in introducing a χ 2 -function that has three contributions, χ 2 = χ 2 1 + χ 2 2 + χ 2 3 : -We demand that the first should measure the extent to which the observed images can be reproduced. Let N be the number of data points with coordinates (x i , y i ) and (u j , v j ) the coordinates of the predicted image points. For each data point, the closest image point (u cl,i , v cl,i ) is identified, leading to This form assumes that data points are distributed around the predicted image points in a Gaussian fashion, with a standard deviation of σ i . -The second contribution should test whether the predicted images match the data. For each predicted image point (out of M overall), the closest data point is identified. The chisquare contribution is then given by If the model gives rise to any additional image points far from the data, it affects this term. Obviously this contribution has the same form as the first, but the roles of data and image points are reversed. -Finally, the size of the sources is taken into account. For each of the N s sources, the centre (p i ,q i ) is determined by computing the mean of either coordinate. Then the mean squared distance from the centre is calculated, averaging over the P i points assigned to the i-th source. For a tolerated source size σ s we take as the final chi-square contribution. Again it is assumed that the distribution of points belonging to the same source around its centre is Gaussian. This last contribution is in fact crucial because neglecting it would mean that source shapes and sizes are arbitrary -in that case, any image configuration could easily be reproduced with a lens of zero mass and the points placed in the source plane perfectly matching the distribution in the image plane. Requiring instead that sources are small and compact is therefore a strong constraint. We find that a reasonable choice is σ i = 1. 0 and σ s = 0. 5, emphasizing the source size. Note that the behaviour of the algorithm is determined by the ratio between the two parameters, not their absolute sizes. 
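A compact implementation of the three contributions might look as follows (a sketch: a scalar σi is used instead of per-point uncertainties, and the bookkeeping that maps image points back to their sources is assumed given):

```python
import numpy as np

def chi_square(data, images, src_pts, src_labels, sigma_i=1.0, sigma_s=0.5):
    """Three-term chi-square of the text.

    data    : (N,2) observed arc points in the image plane
    images  : (M,2) image points predicted by the lens model
    src_pts : source-plane positions of the mapped points, grouped by
              src_labels (one label per point)
    """
    d2 = np.sum((data[:, None, :] - images[None, :, :])**2, axis=-1)
    chi1 = np.sum(d2.min(axis=1)) / sigma_i**2  # each datum near an image
    chi2 = np.sum(d2.min(axis=0)) / sigma_i**2  # each image near a datum
    chi3 = 0.0                                  # sources must be compact
    for s in np.unique(src_labels):
        pts = src_pts[src_labels == s]
        chi3 += np.mean(np.sum((pts - pts.mean(0))**2, axis=1)) / sigma_s**2
    return chi1 + chi2 + chi3
```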
The Metropolis-Hastings algorithm We aim to fit a set of lens parameters and geometry factors to observed arcs. Because of the large number of free parameters (six for the lens model and one for each pair of sources) we avoid using a simplex algorithm to minimize the chi-square. Instead we generate Markov chains according to the Metropolis-Hastings algorithm. Given a set of parameter values x (n) , a random point x (n+1) is picked from the parameter space and accepted with the transition probability p(x) is the target probability distribution. We choose the likelihood L = Nexp(−χ 2 /2) (where N accounts for proper normalization), so that the sample is concentrated on regions of high likelihood. By means of the proposal density q (x → y) the step sizes can be limited or parameter ranges set. The Skylens simulator To test the algorithm on mock observations, we use SkyLens, a ray-tracing code presented by Meneghetti et al. (2008). It was used for instance by Merten et al. (2009) and Meneghetti et al. (2010). In short, the program generates a distribution of background galaxies, based on a set of real galaxies decomposed into shapelets. Positions in a specified field of view and orientations are randomly selected. If desired, all sources can be placed at a fixed redshift: this feature is very useful for studies like ours that are based on observing the change of lensing properties with the source redshift. Observational effects are added to the lensed image, including the sky background, photon noise and seeing as well as instrument noise. The lensed images can be convolved with point spread functions, which are available for several telescopes. We choose to study a numerically simulated cluster labelled g1, taken from a sample of hydrodynamical simulations by Saro et al. (2006). It was obtained from a dark matter simulation by Yoshida et al. (2001) and re-simulated with added baryonic effects at a higher mass and spatial resolution using Gadget-2 (Springel 2005). The cluster has a mass of M 200 = 1.14 × 10 15 h −1 M and a best-fitting scale radius of r s = 0.310 h −1 Mpc. Principal axis ratios are b/a = 0.64 and c/a = 0.57 and the orientation of the main axis relative to the coordinate axes of the simulation box is given by the angles θ x = 33.3 • , θ y = 57.4 • and θ z = 96.1 • . The cosmological parameters used in the simulation are Ω Λ,0 = 1 − Ω m,0 = 0.7 (with Ω b,0 = 0.04) and h = 0.7. Detailed explanations can be found in papers about other studies using these simulations (e. g. Dolag et al. (2005); Puchwein et al. (2005)). Deflection angle maps were computed for different projections by Meneghetti et al. (2008), placing the cluster at redshift z d = 0.2975. We created mock observations of the mass distribution projected along the z-axis for the Advanced Camera for Surveys (ACS) on HST and obtained 9 giant luminous arcs at redshifts z = 0.7, 1.0, 2.0, 4.0. Results We choose flat prior distributions for the lens parameters, confining parameter values to fixed intervals. If no assumptions at all are made about the lens, it is difficult to obtain information about the geometry due to the degeneracies involved; e. g. trends in the scale convergence and geometry factor can compensate each other to some extent. Moreover, confining the parameters makes the parameter space smaller and accelerates its exploration. 
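The sampler just described reduces to a random-walk Metropolis update with likelihood L ∝ exp(−χ²/2). A minimal sketch, working with log-likelihood differences so that tiny likelihoods never underflow:

```python
import numpy as np

def metropolis(chi2_fn, x0, step, n_samples=20000, seed=0):
    """Random-walk Metropolis chain targeting L ~ exp(-chi^2 / 2); x0 packs
    the lens parameters and geometry factors, step the proposal widths."""
    rng = np.random.default_rng(seed)
    x, c = np.asarray(x0, float), chi2_fn(x0)
    chain = np.empty((n_samples, x.size))
    for i in range(n_samples):
        prop = x + step * rng.standard_normal(x.size)
        c_prop = chi2_fn(prop)
        if np.log(rng.uniform()) < 0.5 * (c - c_prop):  # accept w.p. min(1, L'/L)
            x, c = prop, c_prop
        chain[i] = x
    return chain
```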
For the geometry factors we do not explicitly exclude any region from the beginning and limit only the step size as the algorithm is designed such that the chain should generally move 'in the right direction' regardless of the starting point. In practice, starting values for the geometry factors must still not be set too high unless care is taken to ensure the correct calculation of the tiny likelihoods and their ratios in particular. Generally starting values between 0 and 5 and search radii between 0.2 and 0.5 lead to reasonable burn-in phases and acceptance rates. We consider only one pair of sources at a time: one located at the reference redshift for the lens parameters, the geometry factor of which consequently has the value 1.0 and is not varied; and a second source at a different redshift. Including the reference redshift helps break the degeneracy between the geometry factor f and the scale convergence κ s . Otherwise changes in either quantity can be absorbed in the other, since deflection angles are proportional to the product f · κ s . We take the lowest source redshift of z s = 0.7 as the reference redshift and follow the procedure described above to produce Markov chains for each geometry factor. From these samples we compute likelihood distributions, marginalizing over the lens parameters. Histograms for the distributions are presented in Fig. 4. In each plot, the true value of the geometry factor, i. e. the value in the ΛCDM cosmology assumed in the simulation of the images, is marked. The locations of the likelihood peaks generally agree quite well with the true values of the geometry factors. Yet as pointed out in Section 3, geometry factors vary very little between different dark energy cosmologies (cf. Fig. 2). As w ranges from the rather extreme scenario w = −2 to w = −0.3 for instance, geometry factors vary by less than 0.1 or roughly 6 per cent (depending on the redshift). On the other hand, the widths of the distributions, which for simplicity we quantify using the standard deviation of the best-fitting Gaussian (despite the skewness), are roughly 8 per cent of their respective mean. The problem lies in the fact that the lens model can easily be adjusted to react to any small change in the geometry. To infer cosmological information, however, a much better 'resolution' is needed. To address this problem we explore how the method behaves if the lens parameters are kept constant. Ideally, no geometrical assumptions should be made in choosing the lens parameters. In the absence of independent information from effects other than strong lensing, this means that only images at one redshift may be used to fit a model. We attempt this using the code presented by Comerford et al. (2006). However, we find that it is not possible to obtain a reliable parameter set in this way as fit results vary very strongly with different choices of starting values. As we are nonetheless interested in the performance of the algorithm for a fixed lens model, we perform a fit including arcs at three different redshifts. Again we study the likelihood distributions for the geometry factors, shown in Fig. 5. Compared to the full variation of both lens parameters and geometry factors, the distributions do appear considerably more narrow, with widths of 2-3 per cent of the mean. However, deviations from the ΛCDM values are still not satisfactory. In most cases, the likelihood peaks are located at geometry factor values that are higher than in ΛCDM. 
Comparing the results for several arcs at the same redshift, we note that the likelihood distributions do not seem consistent in that they do not appear to favour the same geometry factors. In the following we investigate whether our choice of the lens model can account for such deviations. Influence of the lens model To check whether the strong biases observed in the likelihood distributions presented above are indeed caused by insufficient knowledge of the lens model, we repeat the procedure using arcs produced by an analytic deflection angle map rather than the simulated cluster. In order to be able to carry out calculations analytically, we define the deflection angle field by a simple power law: x = r/r 0 is the dimensionless radial coordinate in the lens plane; α is also dimensionless and denotes the deflection angle relative to the angle set by the reference scale r 0 . The latter is arbitrary but has to comply with α 0 = α(x = 1). The map is therefore characterized by two parameters. We choose a scale of r 0 = 0.3 h −1 Mpc and set α 0 = 0.26 and p = 0.36, such that the deflection angle field approximately follows that of an NFW halo of the same scale radius r 0 and the scale convergence κ s = 0.2; the behaviour for both the power law and the specified NFW profile is shown in Fig. 6. We demand that the field should describe the deflection angles for a reference source redshift of z s1 = 0.7 with the lens located at z d = 0.3. In SkyLens simulations, this configuration produces two giant luminous arcs originating from the same source at redshift z s2 = 4.0. The MCMC method again provides likelihood distributions for the geometry factor. Since we want to study the influence of the assumed lens model, we run it several times, varying the parameters α 0 and p. r 0 is kept fixed; changing its value has the same effect as changing α 0 . As seen in Fig. 7, the modifications clearly move the likelihood peaks, but they hardly change the shape of the distributions. Based on our exact knowledge of the underlying mass profile in this case we can attempt theoretical predictions: The deflection field described by Eq. (27) leads to a convergence of for the reference redshift; for other redshifts this has to be rescaled by the geometry factor f . Now we resort to Eq. (17) again to compute the critical line. The mean convergence inside the radius x c of the critical line is which leads to an Einstein radius of For our choice of parameters, this gives a radius of x c1 = 0.122, corresponding to 11. 8, for the reference redshift z s1 = 0.7. The geometry factor for redshift z s2 = 4 is f = 1.5925, consequently the Einstein radius should be x c2 = 0.252 (24. 4), in good agreement with the observed arcs. Conversely, given a profile (α 0 , p) and an Einstein radius x c we can calculate a geometry factor f , solving (29) accordingly: This enables us to test how a wrong choice of parameters affects the deduced geometry factor and compare the results to the likelihood peaks. In the histograms in Fig. 7, the peak locations expected from the analytical estimation are marked along with the correct values. They agree remarkably well with the actual likelihood maxima. It seems that the shift in the distributions away from the true geometry factor can indeed be accounted for by an erroneous choice of mass profile parameters. Since such a choice does not change the features of the distributions, it should generally be difficult to distinguish it from other cases with better choices and remove the effect. 
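The power-law example admits a closed form that reproduces the quoted numbers: requiring unit mean convergence inside the critical radius gives f·α₀·x_c^p = x_c, hence x_c = (fα₀)^{1/(1−p)} and conversely f = x_c^{1−p}/α₀ (our reading of Eqs. (29)-(30), checked against the values in the text):

```python
alpha0, p = 0.26, 0.36  # parameters of the power-law deflection field

def x_crit(f):
    """Critical (Einstein) radius from f * alpha0 * x**p = x."""
    return (f * alpha0) ** (1.0 / (1.0 - p))

def geom_factor(x_c):
    """Inverse relation: geometry factor from a measured critical radius."""
    return x_c ** (1.0 - p) / alpha0

print(round(x_crit(1.0), 3), round(x_crit(1.5925), 3))  # ~0.122, ~0.252
print(round(geom_factor(0.252), 3))                     # ~1.59
```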
Semi-analytic calculation of geometry factors We test a final approach that is based on the same parametric profile but aims to reproduce critical points rather than images. While it seems that arc configurations can often be predicted by a variety of lens models in different geometries, to some extent owing to the freedom in the source shapes and the position of images relative to the critical line, the shape of the critical curves itself may be more demanding to reproduce. Table 1. Geometry factors f computed for three source redshifts z s from the chi-square minimisation, with a reference source redshift of either z r = 0.7 or z r = 1.0, and the theoretical ΛCDM values. For a reference source redshift of z r = 1.0, the geometry factor for source redshift z s = 1.0 is f = 1 by definition. We consider a lens model with convergence κ 0 and shear γ 0 = γ 2 0,1 + γ 2 0,2 for a source redshift z r . For a second source redshift z s characterized by the geometry factor f the convergence at any point (x, y) in the lens plane has the value κ(x, y) = f κ 0 (x, y); the shear also scales with the geometry factor, γ(x, y) = f γ 0 (x, y). On the critical lines has to hold. Given a set of N points (x i , y i ) known to lie on the critical line at redshift z s , we therefore take the function to measure the agreement between the model and the data for the geometry factor f and demand that its minimum should determine the best-fitting geometry factor. To apply this concept we first have to obtain sets of critical points. Ideally an estimator point is obtained by marking the brightness saddle point in an arc that appears to be formed by two merging images. In arcs exhibiting no such structure the brightest point can be chosen instead. It is useful to add more points nearby to mark the presumed direction of the critical line in this point since this makes for more stringent constraints. Multiple image systems can also be taken into account. If several arcs are observed at the same redshift at different positions in the lens plane, this translates into more information on the critical line and can help to constrain ellipticity in particular. We test the method using our mock observations of the cluster g1. Since very few and only faint arcs are found at redshift z = 0.7, we include only redshifts z = 1, z = 2 and z = 4. Note that in principle we could fix the lens model for one of these redshifts by setting f = 1 in Eq. (33) and minimising with respect to the lens parameters. However it is all but impossible to determine a unique best-fitting model since a large number of combinations of lens parameters produces critical curves passing through our estimators, some with scale radii several times as large as the value suggested by other mass profile fits for this cluster. To avoid this obstacle, we rely on the same set of fixed lens parameters as for the MCMC method. We then use estimator sets for all three redshifts to compute their geometry factors. Table 1 shows the values obtained in this way. Two different reference source redshifts are listed for the following reason: The value for the scale convergence κ s is originally defined here for a source redshift of z r = 0.7. If we are confident that the model is valid for this redshift, we can consider the geometry factors with respect to the same reference redshift. If we do not trust the model however, we can restrict our analysis Table 2. Influence of the scale radius r s and ellipticity estimates on the geometry factor f in the semi-analytical method. 
In the third column, the previous parameter choice and results are given. Results in the remaining columns were computed with a changed value of either r s or as indicated in the header. to the redshifts that actually appear in observations. Since for arbitrary redshifts z 0,1,2 according to the definition of the geometry factor (cf. Eq. (14)), we consider ratios between the computed geometry factors. While for instance f (z s = 2.0, z r = 0.7) is the factor by which the input scale convergence has to be multiplied for the critical line to match our estimator set for redshift z s = 2, f (z s = 2.0, z r = 1.0) instead can be taken to quantify the scaling between our two estimator sets. In other words, the convergence is rescaled to reproduce the critical line at redshift z s = 1.0 and geometry factors for higher redshifts are considered with respect to that redshift. This has the advantage of eliminating the influence of the scale convergence from our calculations. Naturally, the remaining lens parameters can still act as sources of errors. Table 1 confirms these considerations to some degree: for the reference source redshift of the lens parameters, z r = 0.7, deviations of the geometry factors are roughly 4 per cent for a source redshift of z s = 2 and 1 per cent for z s = 4, whereas considering the ratios they reduce to 3 per cent and less than 0.1 per cent respectively. The excellent agreement in the latter case is certainly to some extent coincidental. Taking our lack of knowledge of the precise mass model into account, uncertainties should be considerably larger than the deviation itself. To find out how our parameter choice affects the results, we simply rerun the program for different lens models. Having just described how to eliminate the scale convergence, we investigate the role of the scale radius r s and the ellipticity . Table 2 lists the geometry factors computed for various cases in which changed values for either the scale radius or the ellipticity were used. For a reference source redshift of z r = 0.7, the geometry factor results vary over a range corresponding to about 8 per cent across our examples of lens models. Changing the reference redshift -that is, again considering only ratios -the scatter reduces to 1 per cent. In that case, however, all results calculated for z s = 2 overestimate the geometry factor without exception, due to errors in either the assumed mass model or our choice of critical line estimators. It is also worth noting that in this example we used multiple arcs at the same redshift to derive estimators. In practice, fewer and fainter arcs, albeit at more redshifts, might be available, complicating the choice of reliable estimators. Conclusions We have investigated several approaches to cluster strong lensing to test its use as a probe of spacetime geometry, in particular the dark energy equation of state. Generally this has proven challenging as changes in the observables induced by variations in the equation of state are very small. The methods that we have explored make use of the growth of the critical lines with the source redshift. We have tried to constrain a cosmologically sensitive ratio of angular diameter distances, using first a Metropolis-Hastings algorithm constructed to fit an observed image configuration and secondly a semi-analytic approach that optimizes the distance ratio such that a fixed lens model reproduces a given set of critical points. 
To test our methods we have studied strong lensing by a numerically simulated cluster, for which we have created mock observations. Modelling uncertainties have turned out to be the most important source of errors in our study. The simplest models for the lensing mass distributions, such as the SIS and the NFW profile, assume spherical symmetry. They offer the advantage of admitting a largely analytic analysis and thus help us estimate geometric effects, but they are generally not well suited to a study of lensing by a real galaxy cluster as in most cases deviations from the assumed symmetry are too large. Models that include ellipticity provide a more accurate description of the mass distributions. The larger number of parameters, however, produces degeneracies. If all model parameters are treated as free, any cosmological sensitivity is masked since the background geometry and profile properties can influence the lensing behaviour in a similar way. An attempt to optimise the lens along with the geometry for an individual cluster does not result in meaningful constraints on the distance ratio, but instead admits a wide range of realisations of dark energy. Constraints can be narrowed down somewhat if the lens model is fixed and only the geometry is optimised, but care must be taken in the choice of the model. While the influence of the overall mass assumed is weak if we choose to consider only the scaling of the critical curve between two redshifts, the remaining parameters can still create biases. Ideally, no geometric assumptions should be made to fix the lens model, yet information from arcs at a single redshift alone is not sufficient to constrain it. Independent information from other observations could be used to address the problem, but it is generally difficult to obtain for the cluster core. Meneghetti et al. (2005) suggested that the position of the brightest cluster galaxy (when present) or the centre of Xray emission could be referred to to determine the cluster centre, but they pointed out that errors of several arcseconds can occur. Additional constraints could be derived from the galaxy velocity dispersion, weak gravitational lensing or X-ray temperature profiles. It is also worth investigating whether the restriction to analytical profiles can contribute significant errors. Galaxy clusters are generally 'lumpy', which gives rise to the question if an ellipsoidal NFW profile nonetheless describes the lensing properties well enough to permit cosmological conclusions. If even the best fitting NFW profile is insufficient, biases are to be expected. An interesting approach to cluster strong lensing that forgoes the use of the NFW profile was presented by Zitrin et al. (2009). Assuming that mass follows light in a cluster, they assigned a power law profile to each visible cluster galaxy, scaled by the ob-served brightness, and smoothed the resulting distribution. Note that they based their method on multiple images. While we have not considered this effect in our study, it provides additional constraints and should therefore be a useful inclusion.
High efficiency multi power source control constant current/constant voltage charger lithium-ion battery based on the buck converter

This paper proposes the design and simulation of a constant current/constant voltage (CC/CV) multi-power-source lithium-ion (Li-ion) battery charging system based on the buck topology. The design combines a buck converter with multiple input sources so as to provide sufficient energy for battery charging; an analog switch selects the active source by priority, guaranteeing uninterrupted charging, while a multiplexed switcher makes the transition between the charging modes smooth. At the same time, the efficiency of the system is increased by using fewer power-dissipating components and achieving low output ripple. The obtained results show that the Li-ion battery can be charged successfully without reducing its life cycle. Overall, these techniques reduce cost, which positions such a solution well in the industrial market (electric vehicles (EV) and medical equipment).

INTRODUCTION

So-called lithium-ion batteries have become ubiquitous in our daily lives. They may be found singly in a mobile phone or assembled by the dozens in an electric car or in medical equipment, and they are the subject of intense industrial research, given the challenge that electricity storage represents. The proposed solution addresses the problem of switching between sources and of the flexibility of that transition, avoiding any undesired interruption during charging or equipment operation, while also increasing the efficiency and overall performance of the system with fewer components. A multi-power-source design is therefore recommended to guarantee continuity of operation of the equipment. The challenge of our design lies in switching between sources of different types, with priority conditions according to the available sources, while preserving charging efficiency and avoiding a noisy response at the transitions of the constant current/constant voltage (CC/CV) charging method. Figure 1 shows the charging voltage and current of a Li-ion battery under this method [1], [2]: the battery is first charged with a constant current until its voltage rises to the constant-voltage limit; after the CC regime, the charging process enters the CV regime to prevent overcharging [3]-[5]. Our method builds on: i) checking the existence of the sources using hysteresis comparators that compare each input voltage with a reference voltage Vref; ii) switching between sources using a logic circuit that routes the source to the power stage by priority; and iii) a modified DC-DC buck converter that provides the CC/CV charging modes through a proposed mode-selector system, while avoiding voltage drops due to switching between sources and ensuring high efficiency [6], [7]. This paper is organized as follows: the second part covers the description and simulation of the proposed battery charger, including the switching solution, and presents in detail the architecture and functionality of each block of the proposed buck-type converter for charging the Li-ion battery. The simulation results are illustrated in the fourth section, and a conclusion is presented afterwards.
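As a concrete illustration of the CC/CV scheme just described, a minimal mode-selection rule could look as follows (the 4.2 V limit and 0.1 A termination current are illustrative values, not taken from this paper):

```python
def charge_mode(v_batt, i_batt, v_cv=4.2, i_term=0.1):
    """CC/CV mode selection: constant current until the cell reaches the
    voltage limit, then constant voltage while the current tapers off."""
    if v_batt < v_cv:
        return "CC"     # regulate the bulk charging current
    if i_batt > i_term:
        return "CV"     # hold v_cv and let the current decay
    return "DONE"       # taper current fell below the cutoff
```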
THE PROPOSED BATTERY CHARGER

The industrial market continually looks for solutions that keep the Li-ion batteries used in electric vehicles and medical equipment supplied without interruption. Our solution answers this need with a circuit combining two architectures, each with its own functionality, aimed at providing an uninterrupted power supply even when changing between power sources based on renewable energies, with the added challenge of preserving the battery charging efficiency.

Multi power source control

Our proposed multi-source solution offers an uninterrupted power supply to the battery by switching between three sources by priority, according to their availability, while avoiding voltage drops and keeping the efficiency high and the system losses low by reducing the number of metal-oxide-semiconductor field-effect transistors (MOSFETs). Figure 2 illustrates the blocks of the method: a) Sources: three sources classified according to priority; in an electric vehicle (EV) application, for instance, these could be source 1: the mains, source 2: a photovoltaic (PV) solar panel on the top side, source 3: a wireless charger on the bottom side. b) Comparators: Schmitt-trigger comparators that compare each input voltage with a reference voltage Vref in order to verify the existence of the input source; at the output of each comparator we have logic signals that indicate the active sources (C1, C2, C3) [8]. c) Source selector logic circuit: responsible for enforcing the priority conditions of the sources using AND/NOT logic gates that implement the enable equations En1 = C1, En2 = ¬C1·C2, En3 = ¬C1·¬C2·C3 (see the short sketch at the end of this section). When source 1 is present (C1 = High), En1 is active and the load is powered by Vsource1. For the load to be taken over by the second source Vsource2, the first source must be absent (¬C1) and the second source present (C2 = High). For the system to be powered by the third source, source 3 must be present (C3 = High) and the first and second sources absent (¬C1, ¬C2). d) Gate driver: this block takes as inputs the logic signals En, which designate the active source, and a pulse-width-modulation (PWM) input, and drives the gates of the P-channel metal-oxide-semiconductor (PMOS) transistors related to the active source. e) Multiple HS half-bridge: the switching part of the power stage, consisting of three back-to-back PMOS pairs, each related to a specific input voltage representing a power source, and a low-side N-type metal-oxide-semiconductor (NMOS) for synchronous operation. The back-to-back PMOS structure protects the circuit against reverse currents and keeps the efficiency good while using fewer transistors [9]-[11].

Schematic

The design of the proposed multi-power-source control (MPSC) solution uses a combination of digital and analog components chosen for their performance. As already mentioned, the MPSC checks the existence of the sources by comparison with a reference voltage and selects the right source by priority. The challenge is always to cancel the load current and voltage errors when changing the voltage sources.

Enables signals vs output voltage waveform

In the simulation shown in Figure 3, we used three sources of the same type with an input voltage Vin = 12 V, delayed in time relative to one another in order to observe the output voltage Vout being switched between the sources.
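The priority logic of item c) above is a small combinational function; the sketch below implements the enable equations En1 = C1, En2 = ¬C1·C2, En3 = ¬C1·¬C2·C3 and prints a few rows of their truth table:

```python
def enables(c1: bool, c2: bool, c3: bool):
    """Priority encoder for the three sources: source 1 wins, then 2, then 3."""
    return c1, (not c1) and c2, (not c1) and (not c2) and c3

for bits in [(1, 1, 1), (0, 1, 1), (0, 0, 1), (0, 0, 0)]:
    print(bits, enables(*map(bool, bits)))
```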
For instance, at t = 2 ms, Vin1 is absent, Vin2 is deactivated, and Vin3 is activated, so Vout switches from the second to the third source. As illustrated in Figure 4, the system switched between the sources without any undershoot or disturbance on Vout. After examining the Vout signal under all possible switching cases, we can conclude that the system is efficient and meets all the specification requirements. The simulation conditions are described by (4) and (5): open-loop control by a pulse-width modulation (PWM) signal with the following specification (Fsw = 200 kHz, D = 50%).

Proposed CC/CV buck converter topology battery charging with modes
The technique of using a pulsed source and a non-dissipative output filter allows energy to be transferred with high efficiency; that is, less power is lost in the converter itself. To regulate the output voltage, a voltage reference and an error amplifier are added to the circuit. The four basic blocks (the pulse generator, the modulator, the output filter, and the compensator) are combined to form a complete feedback-controlled DC-DC converter, as shown in Figure 5. The design of the circuit begins by selecting the various operating parameters [14]. The proposed CC/CV charging scheme using the buck converter, with its blocks (power stage, compensator, PWM comparator, CC/CV mode switcher, and finally the voltage and current Schmitt-trigger comparators), is represented in Figure 6. The system was designed block by block to ensure that each block worked properly; the blocks were then assembled so that the impedances between outputs and inputs were matched.

Figure 5. Buck converter topology

Schematic
The design of this circuit was made according to the desired load specifications, targeting a load in CC/CV mode with high efficiency using components readily available on the market. The difficult constraint that makes this circuit complex is the matching of output and input impedances between linked blocks. Figure 6 illustrates the composition of the proposed Li-ion battery CC/CV charger based on a buck converter, block by block, each with its own functionality: a. Power stage: consisting of a low-pass LC filter that converts the square signal (at the common switching point) to a continuous signal. The values of the inductor L and its series DC resistance (DCR) are chosen so that the loop response is fast and so that fewer capacitors can be used on account of the lower root mean square (RMS) current; they are also chosen to reduce energy losses in the core. Equation (6) represents the transfer function of the lossy LC filter with an output capacitor, and the Bode diagram is shown in Figure 7 [15], [16]. b. Compensator: a Type III compensator, whose phase plot can rise above zero degrees at some frequencies and which can therefore provide the phase boost required to maintain a reasonable phase margin. The Type III compensator has three poles (one at the origin) and two zeros; the poles and zeros are arranged so that the loop crossover frequency is placed between the zeros and the poles.
For this kind of design, the transfer function can be rewritten as in [17]-[19]. Using frequency analysis in MATLAB (matrix laboratory) software, the values of the capacitive and resistive components of the compensator are found by choosing a crossover frequency fxover based on the switching frequency and by placing the poles and zeros so as to boost the phase margin and obtain a critically damped transient response. c. PWM comparator: compares the output of the compensator (CC_error/CV_error) with a sawtooth signal to generate the control signal (CC_PWM/CV_PWM). The latter is responsible for producing the desired current or voltage according to the active CC/CV charging mode, using a fast (high-bandwidth) comparator able to keep up with the switching frequency [20]. In this circuit the LTC6752, a member of a family of high-speed comparators, was chosen. d. CC/CV mode switcher: represented by a multiplexer (MUX) with inputs CC_PWM and CV_PWM operating under the conditions given in Table 1; the MUX is controlled by the selector described in (8). e. Voltage and current comparators: noisy environments, especially around switching power supplies, generate electromagnetic interference that disturbs the acquisition and the measurements taken by the circuit. Low-pass filters and hysteresis-type comparators are used to eliminate this noise. For optimization purposes, this technique of voltage comparison and filtering, shown in Figure 8, is used to avoid disturbances caused by changes in the CC/CV charging mode and to ensure proper functionality while charging the Li-ion battery [21], [22].

RESULTS OF COMBINATION OF THE TWO SOLUTIONS
The block diagram of the proposed Li-ion battery CC/CV charger with the proposed multi power source control is shown in Figure 9. Combining the two solutions was somewhat complex, as the high charging efficiency in CC/CV modes had to be preserved while avoiding errors when switching between the voltage sources. The simulation of the circuit clearly shows the stability and efficiency of the design and the good choice of components. The simulation of the proposed charger is shown in Figure 10: the brown curve is the charging voltage of the battery; the green curve is the charging current of the battery; the blue curve is the charging-mode selector signal, where 5 V represents CC mode and 0 V represents CV mode. Figure 10 shows the CC/CV charging modes controlled by the selection voltage, which changes the charging mode from constant current to constant voltage through a state change from '1' to '0', with a small spike in current and voltage caused by the compensation delay. The power conversion efficiency of the proposed battery charger is presented in Figure 11. The maximum efficiency of the charger is higher than that of the cited architectures, reaching up to 97% between 3 A and 7 A [23]-[26]. As expected, the obtained results compare favorably with other work (Table 2).

CONCLUSION
To conclude, the proposed architecture is a good choice for efficiently charging Li-ion batteries in different domains from limited energy sources such as renewable energies. The solution minimizes power dissipation during charging in order to maximize the charged capacity of the battery under different conditions. Using CC and CV modes can also improve the battery's state of health and thus its sustainability.
The simulation results show a power efficiency of up to 95% and better performance in terms of charging current, which can reach 4,000 mA in a short time with low output ripple. Based on this performance, the proposed solution is well suited to medical applications and electric vehicles.
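As a companion to the output-filter analysis in the design section above, the sketch below evaluates the frequency response of a lossy buck LC filter with SciPy. The component values are illustrative assumptions, not the paper's design values, and the ESR/DCR terms follow a standard second-order lossy-filter model rather than the paper's equation (6).

```python
# Sketch: magnitude/phase of a lossy second-order buck output filter.
# Component values are illustrative assumptions only.
import numpy as np
from scipy import signal

L = 10e-6      # inductor (H), assumed
DCR = 20e-3    # inductor series resistance (ohm), assumed
C = 100e-6     # output capacitor (F), assumed
ESR = 5e-3     # capacitor series resistance (ohm), assumed

# H(s) = (1 + s*ESR*C) / (L*C*s^2 + (DCR + ESR)*C*s + 1)
num = [ESR * C, 1.0]
den = [L * C, (DCR + ESR) * C, 1.0]
lc_filter = signal.TransferFunction(num, den)

w = np.logspace(2, 7, 500)                  # rad/s frequency sweep
w, mag_db, phase_deg = signal.bode(lc_filter, w)

f0 = 1.0 / (2 * np.pi * np.sqrt(L * C))     # undamped corner frequency
print(f"corner frequency ~ {f0 / 1e3:.1f} kHz")
print(f"resonant peaking ~ {mag_db.max():.1f} dB")  # what the compensator must tame
```

The resonant peak and the -180 degree phase drop of this double pole are precisely what motivate the Type III compensator described in the design section.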
Furin Overexpression Suppresses Tumor Growth and Predicts a Better Postoperative Disease-Free Survival in Hepatocellular Carcinoma Furin is a member of the pro-protein convertase family. It processes several growth regulatory proteins into their active forms, which are critical to tumor progression, metastasis, and angiogenesis. Furin over-expression could occur in liver cancer and a previous study showed that over-expression of furin promoted HepG2 cell invasion in tail vein xenograft models. However, the clinical relevance of furin expression in hepatocellular carcinoma (HCC) remained unknown. Surprisingly, in a postoperative survival analysis for HCC patients, it was found that the tumor/non-tumor (T/N) ratio of furin expression ≥ 3.5 in HCC tissues predicted a better postoperative disease-free survival (DFS) (P = 0.010; log-rank test). Furthermore, subcutaneous xenograft experiments demonstrated a significant suppression effect of tumor growth in the furin-overexpressed xenografts (Huh7-Furin) compared to the mock control. Administration of a synthetic furin inhibitor for inhibition of the pro-protein convertase activity, decanoyl-Arg-Val-Lys-Arg-chloromethylketone (decRVKR-CMK), to the Huh7-Furin xenograft bearing mice restored the repression effect of tumor growth. In contrast, administration of decRVKR-CMK to the mock Huh7 xenograft bearing mice showed no change in growth rate. In conclusion, furin overexpression inhibited HCC tumor growth in a subcutaneous xenograft model and predicted a better postoperative DFS in clinical analysis. Introduction Furin, a member of the pro-protein convertase (PC) family, activates precursor proteins by cleavage of the specific recognition sequence RXK/PR during the transport through the Golgi/trans-Golgi secretory pathway [1][2][3]. Furin and other PC family members process several latent precursor proteins into mature active products, including growth factors, hormones, receptors, plasma proteins, and matrix metalloproteases (MMPs). These proteins are critical to the proper physiological function in cells [1,2,4]. Most of the functions of furin substrates are related to cancer cell growth, such as tumor progression, metastasis, and angiogenesis [5][6][7]. Previously, it was reported that overexpression of furin occurred in hepatocellular carcinoma (HCC) and furin-overexpressed HepG2 cells promoted their invasion ability in an animal model [18]. In view of the similar effect of furin in hepatoma cells compared to other types of cancers, furin might serve as a candidate target for anti-cancer therapy in HCC. However, before further exploration of this therapeutic strategy, it is critical to understand the clinical relevance of furin over-expression in HCC patients. In this study, we conducted a clinical analysis for the prognosis predictive value of furin expression in HCC patients receiving surgical resection of the tumors. Surprisingly, it was found that over-expression of furin in HCCs predicted a better postoperative disease-free survival (DFS). The growth regulatory effect of furin was further investigated in a xenograft model by use of the furin inhibitor, decRVKR-CMK. Patients Liver tissues from 105 HCC patients, 72 males and 33 females, receiving total removal of liver tumors from 1998 to 2001 in Chang Gung Memorial Hospital were retrieved from Institutional Tissue Bank, Chang Gung Medical Center. 
Preoperative diagnosis of HCC was made by one of the following methods: echo-guided liver biopsy, fine needle aspiration cytology, a high alpha-fetoprotein (AFP) level (>200 ng/mL) plus at least one dynamic imaging study (dynamic computed tomography or magnetic resonance imaging), or one dynamic imaging study plus angiography (if AFP <200 ng/mL). Tumors were totally removed with a safety margin of >1 cm. Postoperative follow-up was performed by ultrasonography, chest X-ray, AFP, and blood biochemistry every 1 to 3 months in the first year and every 3 to 6 months thereafter. Abnormal findings were verified by computed tomography or magnetic resonance imaging. Intrahepatic recurrence was verified using the aforementioned criteria. Extrahepatic recurrence was verified by biopsy, aspiration cytology, computed tomography, or magnetic resonance imaging, depending on the location of the lesions as well as the condition of the patients. The basic clinicopathological data were retrospectively reviewed: age, cirrhosis, hepatitis B surface antigen (HBsAg) positivity, antibody against hepatitis C (anti-HCV) positivity, tumor number, tumor size, ascites, AFP, albumin, bilirubin, prothrombin time, creatinine, aspartate aminotransferase (AST), alanine aminotransferase (ALT), and alcohol usage.

Statistical Analysis
Linear regression analysis was performed to estimate the relationship between furin expression (T/N ratio) and clinicopathological variables. The furin expression level was quantified using Image Gauge software (Fuji Film, Tokyo, Japan) to determine the intensities of the immunoreactive bands, as described previously [19]. Disease-free survival was measured from the date of diagnosis to the date of recurrence, metastasis, death, or last follow-up. The Kaplan-Meier method was used to compare the survival curves between groups, and the log-rank test was used to estimate the survival probability. To determine the cutoff of furin T/N ratios for survival analysis, exploratory Kaplan-Meier analyses were performed using a series of increasing cutoff values: the smallest T/N ratio + (n/5) × (the largest T/N ratio − the smallest T/N ratio), for n = 1 to 4. Multivariate regression was performed to evaluate the joint effect of other factors. Statistical analysis was conducted using SPSS version 15.0 (Chicago, IL).

Reagents and Antibodies
DecRVKR-CMK, a synthetic lipophilic furin inhibitor that can penetrate the plasma membrane to reach the interior of the cell, where it binds furin irreversibly and blocks its catalytic site [20,21], was purchased from Calbiochem (Darmstadt, Germany). Anti-furin antibody from Affinity BioReagents (Golden, CO) was used for immunoblot and immunostaining assays. Anti-Ki-67 antibody from Millipore Corp. (Bedford, MA) was used for immunostaining. Rabbit polyclonal antibodies to NF-κB/p65, CDK2, CDK4, and cyclin D1 from Millipore Corp., and to TGFβ1 and Bcl-xL from Cell Signaling Technology, Inc. (Beverly, MA), were used for immunoblot analysis. Mouse monoclonal antibodies to the insulin receptor (IR) and IKKα from Millipore Corp., and to GAPDH from Chemicon (Bedford, MA), were used for immunoblotting.

Cell Culture and Transfection
The human hepatoma cell line Huh7 was obtained from the American Type Culture Collection (Manassas, VA). A cDNA fragment encoding furin was isolated and inserted into the pcDNA3 vector (Invitrogen, Carlsbad, CA) to generate pcDNA3-Furin [18].
Huh7 cells were stably transfected with pcDNA3-Furin using TurboFect (Fermentas, Life Technologies, Karlsruhe, Germany) to obtain Huh7-Furin cells. Stable cell colonies were selected with G418 (600 µg/mL) (Amresco, USA). Huh7 cells were also stably transfected with the empty vector (pcDNA3) to generate Huh7-Neo cells as a mock control. Stable cell lines were maintained in Dulbecco's modified Eagle's medium (DMEM) containing 10% fetal calf serum and 600 µg/mL G418.

Immunoblot Analysis
Cells and tissues were lysed in lysis buffer (125 mM Tris-phosphate pH 7.8, 10 mM DTT, 10 mM CDTA pH 7.8, 50% glycerol, and 5% Triton X-100). Protein concentration was measured using a Bradford assay kit (Pierce Biotechnology, Rockford, IL). Equal amounts of protein were loaded on a 10% SDS-polyacrylamide gel for electrophoresis before being transferred to a PVDF membrane (PerkinElmer, Boston, MA). The membrane was blotted with rabbit polyclonal antibodies to furin, TGFβ1, NF-κB/p65, cyclin D1, Bcl-xL, CDK2, and CDK4, or with mouse monoclonal antibodies to IR, IKKα, and GAPDH. The blots were incubated with horseradish peroxidase-conjugated secondary antibody and developed using an ECL detection kit (Millipore).

Animal Model
Five-week-old male BALB/cAnN.Cg-Foxn1nu/CrlNarl nude mice were used for subcutaneous injection of Huh7-Neo or Huh7-Furin cells (1 × 10⁶). The xenografts were allowed to grow until they reached a size of 40 mm³. Intraperitoneal (IP) injection of 300 µL of 2.5% DMSO, or of decRVKR-CMK dissolved in 2.5% DMSO (630 µM), was subsequently performed twice a week, four times in total. The tumor volume (mm³) was measured and calculated using the formula V = (W² × L)/2 (W, the smallest diameter; L, the longest diameter). All procedures were performed under sterile conditions in a laminar flow hood. The mouse experiments were performed in accordance with U.S. National Institutes of Health guidelines and the Chang Gung Institutional Animal Care and Use Committee Guide for the Care and Use of Laboratory Animals. This study was conducted under the approval of the Chang Gung Institutional Animal Care and Use Committee (IACUC Approval No. CGU10-089).

Immunohistochemistry (IHC)
Paraffin-embedded tumor sections (5 µm thick) were deparaffinized and stained with hematoxylin and eosin. To detect Ki-67 or furin, serial tumor sections were made and IHC was performed as previously described [22]. To verify the specificity of the furin signal in IHC staining, recombinant furin protein synthesized using the TNT® Quick Coupled Transcription/Translation System (Promega) was utilized as a blocking protein in a parallel IHC staining control.

Terminal Uridine Nick-end Labeling (TUNEL) Assay
DNA fragmentation was detected using the DeadEnd Fluorometric TUNEL System (Promega, Madison, WI) according to the manufacturer's instructions. The TdT-mediated dUTP-biotin nick end labeling reaction was performed using fluorescein isothiocyanate-dUTP at 37°C for 60 min. Fluorescence of apoptotic cells (green) and of nuclei (blue; stained with DAPI) was detected by fluorescence microscopy (Olympus IX71).

Clinicopathological Analysis of Furin Expression in HCC Patients
To understand the clinical relevance of furin expression in HCC, paired cancerous and adjacent noncancerous HCC tissues were obtained from 72 male and 33 female HCC patients. Furin protein expression was higher in the cancerous part than in the non-cancerous part in 81/105 of the tumor tissues (Fig. 1A), whereas it was lower in the cancerous part in the remaining patients (Fig. 1B).
Immunohistochemistry showed expression of furin mainly in the hepatocytes of the HCC tissue (Fig. 1C), and rarely in other cellular components. By testing a series of cutoff values (see Materials and Methods), Kaplan-Meier survival curves and the log-rank test indicated that patients with T/N ratios of furin expression ≥ 3.5 (n = 13) in hepatoma tissues had significantly longer DFS than those with T/N ratios < 3.5 (n = 92) (P = 0.010) (Fig. 1D). On the other hand, no significant association between furin expression and overall survival was found (data not shown). The Cox proportional hazards model was used to further verify the associations between furin expression and clinicopathological factors for DFS in patients with HCC. Univariate analysis revealed that microvascular invasion, tumor number > 1, macrovascular invasion, ascites, AFP > 25 ng/mL, albumin ≤ 4.0 g/dL, and a furin expression T/N ratio < 3.5 were significantly associated with a shorter DFS. However, when these confounding factors were included, multivariate analysis showed that tumor number > 1, macrovascular invasion, albumin ≤ 4.0 g/dL, and a furin expression T/N ratio < 3.5 were independent factors correlated with a shorter DFS (Table S1).

Generation of Huh7 Cells Stably Expressing Furin
A previous study indicated that furin was overexpressed in liver cancer and that overexpression of furin enhanced the invasiveness of HepG2 cells in tail vein xenograft models [18]. Presumably, furin overexpression should therefore be associated with shorter survival for HCC patients. However, the Kaplan-Meier analysis and proportional hazards model demonstrated that a higher expression level of furin (T/N ratio ≥ 3.5) was associated with longer DFS in HCC patients (Fig. 1D and Table S1). To understand why furin over-expression was associated with longer DFS in patients with HCC, the growth regulatory effects of over-expressed furin in hepatoma cells were examined in a xenograft model. Because subcutaneous tumor formation occurred in only <50% of mice injected with HepG2 cells [23], Huh7 cells were used to generate xenografts stably over-expressing furin in this study; this cell line has very low endogenous furin expression [18] and forms tumors in mice much more readily than HepG2 cells. The expression level of furin was significantly increased in Huh7-Furin cells compared with Huh7-Neo control cells (Fig. 2A). Two furin substrates, TGFβ1 [2,24] and MMP2 [25], were used for functional analysis. The expression level of pro-TGFβ1 decreased, while that of TGFβ1 increased, in Huh7-Furin cells compared with Huh7-Neo cells (Fig. 2A). A zymography assay indicated that active MMP2 was observed only in Huh7-Furin cells and not in Huh7-Neo cells. A synthetic furin inhibitor of PC activity, decRVKR-CMK, was used in this study. Confirming the inhibitory ability of decRVKR-CMK in Huh7-Furin cells, the amount of pro-TGFβ1 increased, while that of TGFβ1 was reduced, after Huh7-Furin cells were treated with 50 µM decRVKR-CMK. Conversion of pro-MMP2 to MMP2 in Huh7-Furin cells was also inhibited in the presence of decRVKR-CMK (Fig. 2B).

Inhibition of Furin Activity by decRVKR-CMK Promoted Tumor Growth in Huh7-Furin Xenografts
To investigate the growth regulatory effect of over-expressed furin in hepatoma cells, mice carrying Huh7-Furin xenografts were injected intraperitoneally (IP) with or without decRVKR-CMK.
Prior to the injection of the inhibitor, subcutaneous Huh7-Furin xenograft tumors were generated in nude mice. When the tumors reached a volume of approximately 40 mm³, DMSO (as a mock control) or decRVKR-CMK (dissolved in DMSO) was injected by the IP route twice a week, four times in total (Fig. 3A). Four Huh7-Furin xenograft tumors were generated and divided into DMSO (n = 2) and decRVKR-CMK (n = 2) groups. As shown in Fig. 3B and 3C, the final tumor volume and weight of the Huh7-Furin xenografts were significantly increased in the decRVKR-CMK group (475.0 ± 49.50 mg) compared with the DMSO group (165.0 ± 7.07 mg) (P = 0.013). In addition, the growth rate of the Huh7-Furin xenografts was significantly increased in the decRVKR-CMK treated group (Fig. 3D). The level of mature MMP2 was significantly reduced in decRVKR-CMK treated xenograft tumors compared with the DMSO treated group (Fig. 3E), confirming the inhibitory effect of decRVKR-CMK in the mouse model. Similarly, pro-TGFβ1 was elevated in the decRVKR-CMK treated group (Fig. 4). Additionally, immunohistochemistry and immunoblot analysis of Huh7-Furin xenograft tumors detected equally strong expression of furin in both the DMSO and decRVKR-CMK treated groups (Figs. 3F, i and ii; Fig. 4), suggesting that the difference in growth rate was caused by functional inhibition rather than a difference in expression level.

Absence of Growth Regulatory Effect on Huh7-Neo Xenografts with decRVKR-CMK
The procedure for generating subcutaneous Huh7-Neo xenografts and the IP injection of decRVKR-CMK were the same as described in the previous section. Six xenograft tumors were generated and divided into DMSO (n = 3) and decRVKR-CMK (n = 3) groups. No significant differences were found in the final tumor volume, tumor weight, or tumor growth rate of Huh7-Neo xenografts between the decRVKR-CMK and DMSO groups (Figs. 5A to 5C). Immunohistochemistry and immunoblot analysis detected equally low expression of furin in Huh7-Neo xenografts in both the DMSO and decRVKR-CMK treated groups (Figs. 5D, i and ii; Fig. 4), compared with Huh7-Furin xenografts (Figs. 3F, i and ii; Fig. 4). An increase of pro-TGFβ1 expression was observed in decRVKR-CMK treated Huh7-Neo xenografts compared with the DMSO treated group, despite Huh7 cells expressing very low endogenous furin. However, the overall expression level of pro-TGFβ1 was higher in Huh7-Neo than in Huh7-Furin xenografts (Fig. 4). Moreover, in the DMSO treated groups (Figs. 3 and 5), it was found that once a Huh7-Neo xenograft tumor was formed, its weight and growth rate were higher when compared with the DMSO treated Huh7-Furin xenografts.

Reduced Expression of Growth and Anti-apoptotic Regulatory Factors in Huh7-Furin Xenografts
The xenograft experiments revealed poor cell viability in Huh7-Furin xenograft tumors, whereas decRVKR-CMK treatment resulted in improved cell viability and enhanced cell proliferation. Furthermore, better cell viability was observed in Huh7-Neo xenograft tumors (both DMSO and decRVKR-CMK groups) than in the DMSO treated Huh7-Furin group. To investigate the possible molecular mechanism, several factors involved in IGF, PI3K/AKT, EGFR/RAS, and Wnt mediated signaling, which are presumably up-regulated during hepatocarcinogenesis [26][27][28][29], were examined. As shown in Fig. 4, immunoblot analysis revealed lower expression of the insulin receptor, IKKα, and NF-κB/p65 in DMSO treated Huh7-Furin xenografts compared with Huh7-Neo xenografts.
Upon decRVKR-CMK administration, the expression levels of these factors were significantly restored in Huh7-Furin xenografts. Previous reports indicated increased expression of CDK2, CDK4, and cyclin D1 in HCC [28,30], in addition to their associations with cell cycle progression [31]. The expression levels of these factors were also found to be reduced in DMSO treated Huh7-Furin xenografts compared with the Huh7-Neo groups, and restoration of these factors was observed in the decRVKR-CMK treated Huh7-Furin group. Similar results were found for Bcl-xL, an anti-apoptotic protein that is over-expressed in HCC [32]. Furthermore, TGFβ1 can down-regulate the expression of CDK4 and Bcl-xL [33,34], even when the cell cycle is arrested at G1 phase [35]. As TGFβ1 was hard to detect in xenograft tumors, the expression level of pro-TGFβ1 is presented; the increased levels of pro-TGFβ1 imply decreased levels of TGFβ1 in decRVKR-CMK treated Huh7-Furin xenografts.

Discussion
Many studies have indicated that furin, a pro-protein convertase, is over-expressed in human cancer cell lines and primary malignancies [8][9][10][11][12][13]. Some studies further suggested that furin could be a candidate molecular target for anti-cancer therapeutics [14,17]. Recently, it was reported that furin overexpression also occurs in human HCCs [18]. In addition, overexpression of furin in hepatoma cells resulted in increased invasiveness in a tail vein xenograft model [18]. To investigate whether furin could be a candidate target for anti-liver cancer therapy, the association between furin expression and clinicopathologic parameters in HCC patients was analyzed. The Kaplan-Meier survival analysis and Cox regression analysis indicated that a higher expression level of furin (T/N ratio ≥ 3.5) in hepatoma tissues was associated with longer DFS. The clinicopathological analysis implied that furin inhibition in hepatoma tissues in which furin is overexpressed might result in a worse prognosis in HCC patients, and that furin might not be a proper target for anti-liver cancer therapy. Despite the unfavorable clinical data, increased tumorigenicity of furin has been suggested in an in vivo study: in PLAG1-overexpressing mice, which develop adenomas in the salivary glands, simultaneous furin deficiency resulted in delayed tumorigenesis [36]. To clarify these puzzles, subcutaneous Huh7-Neo and Huh7-Furin xenograft tumors were generated, and the furin inhibitor (decRVKR-CMK) was administered after the tumors grew to a comparable size. In this assay, no significant difference in tumor growth was found between the DMSO and decRVKR-CMK treated groups in Huh7-Neo xenografts. However, the tumor growth rate was slower in DMSO treated than in decRVKR-CMK treated Huh7-Furin xenografts. Interestingly, once the Huh7-Neo xenograft tumors (DMSO and decRVKR-CMK groups) were formed, their growth rate was faster than that of the DMSO treated Huh7-Furin xenografts. Pro-TGFβ1 is a substrate of furin, and its active form (TGFβ1) suppresses the growth of Hep3B and Huh7 hepatoma cells [37]. The decrease of pro-TGFβ1 expression in Huh7-Furin xenografts, implying an increase of TGFβ1, might explain the growth inhibition effects of overexpressing furin. In addition, involvement of furin in the repression of tumor growth was also supported by decreased expression of cell proliferation related molecules (IR, cyclin D1, CDK2, CDK4, etc.). Down-regulation of CDK4 by TGFβ1 has also been reported [34].
Thus, inhibition of CDK4 expression in Huh7-Furin xenografts might be mediated through TGFβ1. Furthermore, the repression of tumor growth was relieved when the furin inhibitor was applied to Huh7-Furin xenografts, whereas no growth regulatory effect was observed when the furin inhibitor was administered to Huh7-Neo xenografts. The protein expression levels of growth related molecules were increased, and stronger Ki-67 expression was detected, in decRVKR-CMK treated Huh7-Furin xenografts. Furthermore, the increased levels of these molecules were similar to those in Huh7-Neo xenografts, indicating a reversal of the growth inhibition effect of furin. These data were consistent with the clinical observation that furin over-expression with a T/N ratio ≥ 3.5 is associated with a longer DFS in HCC patients. In addition to the growth effect of furin, alterations in cell apoptosis were also examined. H&E staining revealed a larger necrotic area, and the TUNEL assay detected more apoptotic cells, in the decRVKR-CMK untreated Huh7-Furin tumors; both findings were reversed upon decRVKR-CMK treatment. The expression levels of cell death related factors such as IKKα, NF-κB/p65, and Bcl-xL were reduced in Huh7-Furin tumors and were restored after decRVKR-CMK administration. Besides growth inhibition, TGFβ1 also induces apoptosis in hepatoma cells [37]. Furthermore, down-regulation of Bcl-xL expression by TGFβ1 has been reported [33]. The relationship between furin and patient survival in this study was based on comparison of the T/N ratios of furin expression levels. One might argue that in patients with high T/N ratios, a relatively higher level of furin in the non-cancerous (N) parts was more important than that in the cancerous (T) parts. However, all our previous and present results regarding the growth regulatory roles of furin in HCC were derived from experiments performed in hepatoma (transformed) cells, not in immortalized, non-cancerous cells. Therefore, it is unclear whether a high furin expression level in the non-cancerous cells plays a role in inhibiting de novo cancer growth. In this study, we assumed that most postoperative HCC recurrences originated either from micro-spread of the original cancer or from de novo carcinogenesis utilizing a similar oncogenic mechanism. As such, the furin T/N ratios derived from the surgically removed HCCs were used to correlate with recurrence. Furthermore, when we examined some patients with high furin T/N ratios but a generally low expression level of furin in the non-cancerous part, a long disease-free survival remained. In our previous study, furin was found to promote HepG2 cell invasion in a tail vein xenograft model [18]. Those results motivated us to investigate whether inhibition of furin activity could be a novel therapeutic approach for liver cancer. However, when analyzing the relationship between furin expression and DFS in HCC patients, we discovered that over-expression of furin was associated with longer DFS. Thus, this study focused on the growth regulatory effects of furin in hepatoma xenografts. Using the subcutaneous xenograft model, a growth inhibitory function of furin was discovered. A decreased level of pro-TGFβ1 was observed in furin over-expressing xenografts. TGFβ1 has been reported to suppress proliferation and promote invasion of cancer cells, indicating that it can serve as both a tumor suppressor and a pro-metastatic factor [35].
The dual role of TGFβ1 may explain our observations that furin enhances hepatoma cell invasiveness in the tail vein xenograft model, while suppressing tumor growth in the subcutaneous xenograft model. Indeed, another PC family member, PC5/6, has been reported to possess a similar function that is not conducive to tumor growth, based on PC5/6 intestine-specific knockout mice (PC5/6 iKO) [38]: ApcMin/+ mice, which spontaneously develop polyps in the small intestine [39], tended to form greater tumor numbers when lacking PC5/6 than non-PC5/6-deficient ApcMin/+ mice [38]. In conclusion, over-expression of furin in xenograft tumors in fact exerted a growth inhibitory effect, and this effect could be reversed by decRVKR-CMK treatment.
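For readers wishing to reproduce the cutoff search described in the Methods (candidate T/N cutoffs at min + n/5 × (max − min) for n = 1 to 4, each tested with a log-rank comparison), a minimal sketch using the lifelines package is given below. The DataFrame column names and the toy cohort are purely hypothetical illustrations, not study data.

```python
# Sketch of the exploratory T/N-ratio cutoff search with log-rank testing.
# Column names ('tn_ratio', 'dfs_months', 'recurrence') are hypothetical.
import pandas as pd
from lifelines.statistics import logrank_test

def cutoff_scan(df: pd.DataFrame):
    lo, hi = df["tn_ratio"].min(), df["tn_ratio"].max()
    results = []
    for n in range(1, 5):
        cut = lo + (n / 5.0) * (hi - lo)    # the paper's candidate cutoffs
        high = df["tn_ratio"] >= cut
        if high.sum() == 0 or (~high).sum() == 0:
            continue                        # degenerate split, skip
        test = logrank_test(
            df.loc[high, "dfs_months"], df.loc[~high, "dfs_months"],
            event_observed_A=df.loc[high, "recurrence"],
            event_observed_B=df.loc[~high, "recurrence"],
        )
        results.append((cut, int(high.sum()), test.p_value))
    return results

# Toy cohort, fabricated for illustration only (not the study's patients):
toy = pd.DataFrame({
    "tn_ratio":   [0.5, 1.2, 2.0, 3.6, 4.1, 5.0, 0.8, 3.9],
    "dfs_months": [10, 14, 8, 40, 55, 60, 12, 48],
    "recurrence": [1, 1, 1, 0, 0, 0, 1, 0],   # 1 = recurrence observed
})
for cut, n_high, p in cutoff_scan(toy):
    print(f"cutoff {cut:.2f}: {n_high} high-expression patients, log-rank p={p:.3f}")
```

In practice the cutoff giving the most significant separation would then be carried into the Kaplan-Meier plot and the Cox model, as the study did with the 3.5 threshold.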
Running Performance of Male Versus Female Players in Australian Football Matches: A Systematic Review

Background Australian Football is a fast-paced, intermittent sport, played by both male and female populations. The aim of this systematic review was to compare male and female Australian Football players, competing at elite and sub-elite levels, for running performance during Australian Football matches, based on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). Methods Medline, SPORTDiscus, and Web of Science searches, using search terms inclusive of Australian Football, movement demands, and microsensor technology, returned 2535 potential manuscripts, of which 33 were included in the final analyses. Results Results indicated that male athletes performed approximately twice the total running distances of their female counterparts, which was likely due to the differences in quarter length (male elite = 20 min plus time on; female elite = 15 min). When expressed relative to playing time, the differences between males and females somewhat diminished. However, high-speed running distances covered at velocities > 14.4 km·h−1 (> 4 m·s−1) were substantially greater (≥ 50%) for male than female players. Male and female players recorded similar running intensities during peak periods of play of shorter duration (e.g., around 1 min), but when the analysis window was lengthened, females showed a greater decrement in running performance. Conclusion These results suggest that male players should be exposed to greater training volumes, whereas training intensities should be reasonably comparable across male and female athletes.

Introduction
Australian Football (AF) is a fast-paced, intermittent-type sport played on an oval field, between two teams of 18 players plus 4 on the interchange bench at the elite male level, and between two teams of 16 players plus 5 on the interchange bench at the elite female level [1,2]. The aim of the game is to successfully transfer the ball through kicks and handballs to create a scoring opportunity, where 6 points are awarded for a goal and 1 point is awarded for a behind (where the ball passes between the inside and outside posts, or hits the inside posts, or passes between the inside posts having been touched, or having been carried over by a player other than the one who had the initial shot). At the male elite level, the game is played across 4 quarters of 20 min duration plus time on (a period of play added to compensate for all stoppages in play). This time frame differs from the elite female level, where quarters are contested across 15 min, with time on for stoppages included within the final two minutes of each quarter [2]. These playing conditions may differ between elite and sub-elite athletes [3], and may lead to differences in running performance between male and female players. However, no systematic comparison between male and female players has been made. Players are required to organise into three positional groups at the start of play (i.e., the bouncedown) [1].
These are made up of three primary positions, including forwards and backs (half and full positions), as well as a midfield group comprised of inside midfielders, wings (or outside midfielders) and the ruckman (ruck). It is common within research literature to delineate these playing positions into smaller groups [4], or to group them together (e.g., key position players and nomadic players) [5]. This makes cross-study comparisons somewhat challenging [1]. In order for practitioners to develop appropriate training program design and load monitoring protocols, a thorough assessment of player motion during match-play must be undertaken. Wearable microsensor technology is now commonly employed to facilitate this assessment [1]. A microsensor technology device typically consists of a global navigation satellite system (GNSS) as well as a micro-electrical mechanical system (MEMs), which include tri-axial accelerometers, magnetometers and gyroscopes [6]. The GPS component is able to receive signals from orbiting satellites and can provide information upon athlete locomotion and velocity (e.g., total distance travelled) [7][8][9][10]. The MEMs component is often utilised to detect match events such as collisions, as well as other measures of motion including accelerations and decelerations [11,12]. The reliability and validity of these devices have been widely reported within the literature [7][8][9][10] and are well summarised in the review by Scott et al. [13]. Specifically, previous research has confirmed both the validity and reliability of GPS technology when using a sampling frequency of 10 Hz, which has been shown to be superior to both 5 Hz [9] and 15 Hz [8] sampling frequencies. However, Johnston et al. [8] raise caution when measuring high velocity movements, as they report that as running speed increases, so does the level of error. Despite this, it should be noted that wearable microsensor technology has enhanced practitioners' ability to measure athlete motion in team sports, such as AF, and future advances in technology, including local positioning systems (LPS), have the potential to further improve the accuracy, speed and utility of these data collected [14]. There is currently a large body of research concerning the measurement of AF running performance using wearable microsensor technology, reported across a range of metrics (e.g., total distance, high-speed distances), time frames (e.g., full game, quarters), and across various playing levels (e.g., elite, sub-elite), and competitions (male, female). Although it should be noted that systematic reviews focusing on comparisons between male competitors across various playing levels have been published in AF, initially by Gray and Jenkins [15] and more recently by Johnston et al. [1], to the best knowledge of the authors no formal comparisons have been made between male and female AF players, in either reviews or through original research manuscripts, concerning running performance during AF matches. Comparisons of this nature are of increasing importance following the inception of the premier women's competition (AFLW) in 2017. This has seen an increased emphasis placed upon developing the sport amongst female players, particularly at the elite level. As the differences in physical and physiological characteristics between male and female athletes are well documented [16,17], it may be interesting to understand if these are reflected within running performance during AF matches. 
Additionally, understanding the differences that may exist between male and female players can influence physical training design (e.g., running volumes and intensities), and highlight if there are different requirements between the sexes to transition between the sub-elite and elite levels within their respective developmental pathways, particularly for sport science or strength and conditioning practitioners working across both sexes. Furthermore, if there is a desire to develop the female game into a more high-speed, open game (which is likely considering the recent rule changes (i.e., stand on the mark) to the male game aimed at increasing the "speed of the game"), then comparisons of this type may go some way to highlighting the physical requirements necessary to achieve this. Together, these factors can go some way to influencing the future development of the female game, and in particular, physical performance pathways. In order to provide a thorough and balanced comparison across the breadth of literature, a systematic review has been conducted with the aim to evaluate the differences in running performance between male and female Australian football players. Search Strategy A systematic search of Medline, SPORTDiscus and Web of Science databases, using key terms inclusive of Australian Football, movement demands and microsensor technology, was performed by the lead author (CW) to identify potential peer-reviewed journal articles published in English from inception (Medline and SPORT-Discus, 1988; Web of Science, 1980) until December 2020. Additional publications were also identified through the screening of relevant reference lists. The search strategy was devised through a combination of key words, synonyms and subject headings, as well as through pilot searching of known publications to identify additional relevant terms. The Boolean operators 'OR' and ' AND' were utilised to construct the final search terms (Table 1). Screening and Study Selection Search results were exported to EndNote (X9, Thomson Reuters, Philadelphia, PA, USA), where all duplicates were removed by the lead author (CW). Abstracts and titles were screened by two reviewers (CW, CM), where those that were identified as 'out of scope' (including those clearly identified as reviews and commentaries) were removed. Remaining articles were imported into Rayyan [18], an electronic systematic review management tool, where the full texts were independently screened by two reviewers (CW, CM) against the inclusion and exclusion criteria (Table 2). Where disagreement was present, a third reviewer (PD) acted as arbiter. Search findings and study selection are reported in accordance with PRISMA (Preferred Reporting Items for Systematic Review and Meta-analysis) [19]. Data Extraction Data from articles included within the final review were extracted into a customised Microsoft Excel spreadsheet (Microsoft, Redmond, WA, USA) by the lead author (CW). Data pertaining to sample size (number of matches, subjects, and data files), competition details (type, age-group and playing level), subject demographics (age, height, weight, and sex), measurement duration (e.g., full game, halves, quarters) and measurement approach (e.g., total distance, high-speed running distances, PlayerLoad ™ , relevant or absolute measures) were recorded. 
Information regarding the microsensor device (manufacturer, model, software, and sampling frequency (Hz)) and recording accuracy (number of satellites, horizontal dilution of precision (HDOP)) was also recorded in line with recent recommendations [6]. The number of satellite connections is an indicator of GPS signal strength, while the HDOP provides information regarding the accuracy of the horizontal GPS position, with both measures combining to give an indicator of data collection accuracy [6]. Previous research has reported that ≥ 6 satellite connections and a HDOP < 1 are required for optimal data collection accuracy [6].

Data Analysis
Means for each measure of physical output were recorded and presented within the results section to provide a range. Where comparisons could be made across playing levels, figures were constructed in R software (v4.0.3, The R Foundation for Statistical Computing, Vienna, Austria), in which the reported means were plotted. High-speed running was also presented as a percentage of total running distance, calculated by dividing the mean high-speed distance by the mean total distance.

Search Results
The initial search yielded 2529 articles (Medline = 801, SPORTDiscus = 781, Web of Science = 947), with an additional six identified through the screening of reference lists. Following the removal of duplicates and the screening of the titles and abstracts, a further 1,388 articles were removed as out of scope (e.g., the wrong sport), which also included any articles that were author commentaries or reviews. The full texts of the remaining 106 articles were independently screened, with 73 removed according to the exclusion criteria (see Fig. 1). The remaining 33 were included in the final review and analysis.

Study Characteristics
Characteristics of the 33 included studies are outlined in Table 3. Of the included studies, 26 described outcomes at the male elite level, six at the male sub-elite level, and one at the male amateur or recreational level. Additionally, three studies included female elite level athletes, with a further five reporting on female sub-elite level athletes. Although several different microsensor technology metrics were identified in the literature, only those that could be compared between male and female athletes are discussed within this review. Therefore, this review includes absolute and relative measures of total running distance, high-speed running distances, and PlayerLoad™, expressed across the whole game, individual quarters, and peak periods of play. Methodological information for the included studies is highlighted in Table 4: 26 studies reported a sampling rate ≥ 10 Hz (with one reporting 5 Hz interpolated to 15 Hz), 9 reported the number of satellite connections, and 10 reported the horizontal dilution of precision (HDOP). A wide range of playing position definitions was reported amongst the 33 included articles. These include specific groups (e.g., small backs) and broader playing groups, including: half line or small position players; tall, deep, fixed or key position players; and nomadic or rotating positions (midfielders, small forwards, and small defenders) [1]. Oftentimes, these broader classifications are utilised within research papers to overcome issues of small sample sizes [1]. For the purposes of this review, where no specific positions were reported it was assumed that data were pooled from all playing positions.
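Before turning to the positional results, the two derived measures used throughout this review, relative distance (m per minute of field time) and high-speed running as a percentage of total distance, can be sketched as follows. The column names are hypothetical, the field times are assumed, and the distances merely echo means reported later in the review for illustration.

```python
# Sketch of the derived measures used in the results below:
# relative distance (m/min of field time) and %HSR of total distance.
# Column names and values are illustrative only.
import pandas as pd

matches = pd.DataFrame({
    "player":         ["elite male midfielder", "elite female midfielder"],
    "total_dist_m":   [12819.0, 5813.0],   # illustrative mean totals
    "hsr_dist_m":     [4314.0, 1252.0],    # distance above 14.4 km/h
    "field_time_min": [100.0, 45.0],       # assumed on-field time
})

matches["rel_dist_m_min"] = matches["total_dist_m"] / matches["field_time_min"]
matches["hsr_pct"] = 100.0 * matches["hsr_dist_m"] / matches["total_dist_m"]

print(matches[["player", "rel_dist_m_min", "hsr_pct"]].round(1))
```

Normalising by field time is what allows the male and female figures discussed next to be compared despite very different quarter lengths and rotation policies.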
Coutts et al. [4] further divided these playing positions at the male elite level into midfielders (12,819 m, 128 m·min−1), mobile backs (12,621 m, 120 m·min−1), mobile forwards (11,986 m, 115 m·min−1), tall backs (11,878 m, 108 m·min−1), ruckmen (11,701 m, 115 m·min−1), and tall forwards (11,158 m, 108 m·min−1). Additionally, Stares et al. [30] reported relative distances for male non-nomadic players (122.2 m·min−1), while Hiscock et al. [5] reported male key position players to reach 119 m·min−1. Within female populations, data were presented for elite midfielders (range 5813-6825 m).

Running Distances Performed in Discrete Velocity Bands
Oftentimes, match running data are presented within discrete velocity bands (e.g., high-speed running), which can enable practitioners to compare the proportion of an athlete's total distance spent running at faster and slower speeds. However, the lack of a universally applied speed at which to categorise these velocity bands makes cross-study comparisons particularly challenging. Even so, a number of studies utilised 14.4 km·h−1 (4 m·s−1) to define high-speed (or similar) zones for both male and female players, with males covering greater distances than females (see Fig. 4) [4,21,22,26,28,38].

Match Periods
Several studies examined specific periods of a match. These included distances compared across playing quarters [3,5,45], with an assessment of winning versus losing quarters [5,46]. Within male populations, the main decrement in running performance was seen between quarters 1 and 4 [5,45], whilst running demands were also greater in quarters lost [5,46]. Elite female players also showed the greatest reductions in running performance during quarter 4; however, running performance amongst sub-elite players tended to remain reasonably stable across the quarters [3]. Furthermore, peak periods of play (i.e., the time periods that identify the most intense running demands of the game) were also established within five of the included manuscripts [28,31,39,47,48]. Research within male AF demonstrated that peak periods were significantly greater than those reported using whole game data, and that the duration of the peak period had a significant impact upon running intensity, indicating that male AF players are exposed to short periods of high intensity running exercise [31,48]. Similar findings have been demonstrated amongst female players, where peak period playing intensities were greatest over shorter analysis windows (e.g., 1 min), compared with those recorded when using whole game averaged data [28,39].

Total Running Distances
Data presented in this review highlight that, when playing positions are pooled, elite level male players cover approximately two times greater total running distance than their female counterparts [20][21][22][23][24][25][26][27][28]. This may, for the most part, be attributed to the differences in on-field playing time experienced by these athletes, with some female players competing for around 54 ± 10 min, whereas male athletes spend around 101 ± 12 min on ground [1,28]. A similar trend was observed when assessing running distances with players delineated into the various playing positions, where male players covered greater distances than female players. Interestingly, when distances are reported relative to playing time, differences are somewhat diminished. For example, Coutts et al.
[4] reported male midfielders to cover more than double absolute running distances (12819 m, 95% CI 12,603-13034 m) than those highlighted within female midfielders (5813 m, 90% CI 5120-6505 m) in the report by Clarke et al. [38]. However, when expressed relative to playing time, there were no differences between the results (males; 128 m·min −1 , 95% CI 126-130 m·min −1 , females; 128.4 m·min −1 , 90% CI 121.5-135.3 m·min −1 ) [4,38]. The same results were also evident when making comparisons across the other playing positions highlighted within these two manuscripts [4,38]. This finding not only demonstrates the potential comparative nature of male and female competitions, but also highlights the use of relative distances as a potentially more viable method when making comparisons across the two playing levels. Additionally, it is valuable to compare those competing at different playing levels (e.g., elite vs sub-elite) as often those at the sub-elite level are drafted to the elite level competition, particularly within female AF. These comparisons can also inform physical performance pathways so that development players can be adequately prepared for elite level competition. Data presented within this review highlights that absolute total running distances performed within male AF matches is reflective of playing standard when playing positions are pooled together, with elite level players recording greater distances than sub-elite athletes [20][21][22][23][24][25][26][27]. However, when data for male elite and sub-elite athletes are delineated into playing positions, the differences between playing levels are not so clear. For example, Kelly et al. [35] found no significant differences between male elite and sub-elite nomadic and rotating position players (13,193.14 vs 13,189.34 m respectively). This was also evident when running distances were expressed relative to playing time where, in some cases, sub-elite level male athletes recorded higher meterage per minute than elite level athletes [23,33,34]. Amongst female players, there were contrasting results when comparing between playing levels [3,38,40,41]. For example, of the six playing positions explored within the study by Clarke et al. [3], only female elite level midfielders and small forwards out-performed their subelite counterparts, potentially owing to the differences in playing time (elite 49 min, sub-elite 60 min). However, when these data were presented relative to playing time, there was a trend for an increase in running performance amongst the female elite level playing groups [3]. With these results in mind, it is possible that males performing at the sub-elite level are better prepared to perform at the intensity levels required at the elite level than females. Additionally, previous research has highlighted that the duration of sub-elite male AF matches is approximately 7 min longer than elite matches, potentially aiding development of match related running performance in sub-elite players [1]. However, it should be noted that Johnston et al. [1] reported elite level male players demonstrate superior performance in several measures of physical capacity to their sub-elite counterparts, inclusive of 3 km time trial, yo-yo intermittent recovery test, 20 m sprint and vertical jump, which should be considered when assessing the preparedness of sub-elite players to perform at the elite level. 
Additionally, it should also be noted that very few data exist at the male sub-elite level where players are delineated into discrete playing positions, which weakens our ability to make judgements of this nature. Finally, it is common amongst male competitors for midfielders, nomadics, and small position players to cover greater distances (both relative and absolute) than tall and key position athletes [4,5,30,[35][36][37]. Johnston et al. [1] note that this is likely due to the requirement of midfielders and small position players to somewhat follow the ball, thereby utilising more of the playing oval, as opposed to tall and key position players whose role confines them to smaller sections of the ground. However, this trend was not always replicated within female populations, where there were some examples of tall and key position players outperforming the midfield and small position players [3,28,38]. This finding may be attributed to sample size and player on-field time, which vary between the positions reported in the aforementioned studies [3,28,38]. These findings can enable practitioners to plan appropriate training volumes and intensities. Oftentimes, training load and intensity are prescribed based upon the physical requirements of the game and the position the player occupies. In this instance, the findings of this review suggest male players require higher running loads in order to adequately prepare for competition [20][21][22][23][24][25][26][27][28]. However, although female players seemingly require a smaller overall volume of running-based training (due to the reduced distances travelled in matches), exposure to similar running intensities (i.e., relative distances) as their male counterparts appears desirable [4,38]. This may be particularly relevant amongst sub-elite female players, where practitioners may wish to improve relative running performance/running intensity in order to prepare female players for potential draft to the elite competition [3].

Running Distances Performed in Discrete Velocity Bands
Due to the vast array of speeds used to define different velocity bands in the literature, cross-study comparisons were particularly challenging. However, what remains consistent across this body of research is that, as velocity increases above high-speed or high-intensity running, the distance travelled decreases across all playing levels and for both sexes, demonstrating the challenge AF athletes face in maintaining high-speed running outputs. When studying high-intensity or high-speed running, distances covered at > 14.4 km·h−1 (> 4 m·s−1) were reported for elite male and female athletes [4,21,22,28,38], indicating that male athletes record greater distances above 14.4 km·h−1 (> 4 m·s−1) than female athletes across all positional groups, with elite male midfielders covering markedly greater distances (4314 m, 95% CI 4166-4462 m) [4] than elite female midfielders (1252 m, 90% CI 995-1508 m) [38]. These differences may be attributed to the increased ability of males to attain higher running velocities [28,30], the differences in style of play between the male and female game [49], and the shorter game time in the female competition. However, when playing time is taken into consideration, Weston et al. [22] reported relative high-speed running distances of 36 m·min−1 amongst elite males, with the highest recorded for elite females seen amongst the midfield group at 28 m·min−1 [38].
Additionally, an approximate 5-10% increase amongst male players was noted when calculating high-speed running as a percentage of total running volume. When all positions are pooled, male athletes perform 26-33% of total running distance at a velocity > 14.4 km·h−1 (> 4 m·s−1), with females completing 22% at high speed [21,22,26,28]. When athletes were delineated into their various playing positions, male midfielders and small or mobile position players performed around 8% more high-speed running relative to total distance than female midfielders and small/mobile position players [4,28,38]. However, male and female tall and ruck position players performed broadly similar percentages at high speed [4,28,38], further supporting the notion that positional role may strongly influence the opportunity for these positional groups to perform high-speed running [1]. As previously mentioned, the differences in the completion of high-speed running during AF matches may be explained by several factors. These include both the increased playing time experienced by male players and the more "open" style of play evident in the AFL, which lends itself to high-speed running, as opposed to the contested/congested play evident within the AFLW [49]. Despite these limiting factors within the female game, the ability of male athletes to complete more high-speed running, given the same velocity threshold, is likely attributable to their ability to attain greater maximal running velocities during match play [28,30]. Previous research in similar sports has demonstrated that male athletes display superior physical qualities, inclusive of countermovement jump height, sprint speed, and performance on the yo-yo intermittent recovery test, potentially aiding their ability to repeatedly produce greater maximal velocity efforts [16]. Therefore, when the same speed is utilised to define high-speed running zones, it is likely that females will experience a higher physiological cost than their male counterparts [17]. As it has also been established that sprint performance is strongly associated with strength qualities, and therefore training status, the ability of female AF players to attain greater maximal velocities, and potentially increase their capacity to both complete and tolerate high-speed running distances, may be improved with greater exposure to training of this nature [50,51]. This is particularly pertinent for elite female players, who are reported to have a younger training age relative to their male counterparts, whilst also having reduced opportunity for training due to the part-time nature of the female game [28]. This is an important consideration, as greater preseason training load (e.g., total and high-speed distances) has been associated with an increase in running performance during AF matches amongst male populations [52]. Furthermore, maximal aerobic running speed [53], 2-km time trial and yo-yo test performance [34], as well as measures of lower body power [30], have all been associated with the running performance of male players. Therefore, in order to further enhance the female game, and to develop appropriate physical development pathways, it is a necessity that female athletes are afforded a greater opportunity to train. Due to the reduced ability of female players to reach similar maximal velocities, a more accurate comparison may be made if high-speed running is defined utilising a percentage of maximal speed or a similar physiological measure.
This method has been employed in female rugby sevens, where it was shown that a globally applied zone can underestimate high-speed running compared to one applied through the use of a physiological measure [54]. However, it should be recognised that applying a physiologically based threshold is not without its own complications, and requires further consideration [17]. It should also be noted that 14.4 km·h−1 (4 m·s−1) does appear to be reasonably slow to utilise as a measure of high-speed running, especially when it can be considered to be less than 50% of a male athlete's maximal velocity [30].
PlayerLoad™
PlayerLoad™ was reported for male and female athletes across varying playing levels. Amongst male athletes, those at the elite level recorded higher values than their sub-elite counterparts [21][22][23]. The research by Clarke et al. [38] highlighted that female athletes recorded lower PlayerLoad™ volumes than male athletes, likely owing to the reduced playing time experienced by female players, and, additionally, that midfielders and small position players perform a greater volume than tall position players. This was also noted within male populations, where Boyd et al. [44] reported midfielders and nomadics to record higher PlayerLoad™·min−1 than both ruckmen and deep position players. PlayerLoad™ has been positively related to running distances, in part due to foot strike impacts contributing to the total load [25,43]. Therefore, these findings are perhaps unsurprising, with male athletes and small position players having previously been shown within this review to cover greater running distances than female athletes and tall position players, respectively. However, it is important to note that recent research has demonstrated that PlayerLoad™ may underestimate actual player load by ~15%, highlighting the need for caution when utilising this metric in both research and practical settings [42].
Match Periods
Previous research has demonstrated that using averaged data (e.g., total distance divided by total game time) can underestimate the demands of intermittent-type team sports [31,39,48,[55][56][57]. There has been a growing trend within recent research to identify the peak, or most intense, periods of play [28,31,39,47,48,[55][56][57]. These periods have been established within AF, typically using a rolling time-frame approach [31,39,47,48]. Peak periods of play could be as high as 1.8 times greater for meters per minute, and over 4 times greater for high-speed running per minute, than values recorded using whole-game averaged data amongst female AF athletes [39]. Similarly, Johnston et al. [31] demonstrated within male populations that both meters and PlayerLoad™ per minute could rise to almost twice those seen using whole-game averaged data during peak periods of play. In comparison, Thornton et al. [28] found that the peak 1 min period recorded amongst elite female athletes was reasonably similar to that recorded within male populations [31,48]. However, the decline in physical output during 10 min periods was seen to be greater within female players, indicating that female athletes are less able to maintain high intensity outputs over longer time periods [28]. Additionally, the peak period intensities highlighted by Thornton et al. [28] appear to be substantially higher than those found amongst sub-elite female athletes [39], highlighting a potential area for development amongst this population.
Delaney et al. [48] reported that, amongst male players, the highest demands during peak periods of play were seen amongst the mobile forwards playing group. The review by Johnston et al. [1] speculated that, due to the playing position, these highly intense periods of play may occur during critical game moments (e.g., creating goal-scoring opportunities). Although the contribution of high intensity actions to successful play has been somewhat established within soccer [58] and rugby union [59], to the knowledge of the authors this is yet to be established within AF populations, and therefore warrants further research. Furthermore, it was generally established within the included literature that the shorter the time frame analysed, the greater the recorded demands, suggesting that stint duration has an effect upon the values recorded during peak periods for both sexes [31,39,47,48]. It is important for both sports scientists and coaches to have an understanding of the demands of these shorter epochs and how best to prepare their athletes for these events [48,60]. Match quarters [3,5,45,46,61] have also been investigated within AF populations. Decrements in running performance, for both males and females, were noted across quarters, with the greatest differences noted between quarter 1 and quarter 4, presumably indicating the increasing impact of accumulated fatigue [3,5,45,61]. Interestingly, Mooney et al. [45] demonstrated a very small, non-significant increase in distance and high-speed running distance in quarter 3 in comparison to quarter 2 within a population of male players, possibly highlighting an effect of the half-time break. It appears that, within female AF populations, this decrement in running performance is accentuated at the higher velocity bands (e.g., sprint speed running), again highlighting the challenge facing AF athletes when attempting to maintain high-velocity outputs [3]. Finally, coaches can expect running outputs to be higher during quarters lost than quarters won [5,46].
Limitations
There are several limitations of this review that we acknowledge. Most pertinent is the difficulty in making cross-study comparisons due to the heterogeneity of metrics, such as differing velocity bands and the diversity of playing-position definitions used. Despite a large body of data for male players, there is comparatively little concerning female players. Similarly, there are also limited data with players separated into specific playing positions, with none reported for sub-elite male players. In some cases, only the results of one manuscript were reported for some sub-groups, which limits the strength of any comparisons made. Additionally, comparisons of accelerations and decelerations across male and female players were not possible due to differences in methodologies across studies [28,62]. Information of this nature would have been useful to further our understanding of differences in running performance. Finally, there is an innate limitation when comparing male and female AF players due to the contrasting match rules. This exists not only between male and female athletes but also between the elite and sub-elite levels of the female game. Nonetheless, comparisons of this nature are useful to practitioners in the field when devising training and load monitoring protocols across different playing groups. With these limitations in mind, future research should seek to develop a greater understanding of both female AF players and sub-elite male players.
Particular emphasis should be placed upon both accelerations and decelerations, as well as enhancing the depth of knowledge available when sub-elite male athletes are delineated into the various playing positions.
Conclusion
This systematic review is the first to compare running performance between male and female AF players. The findings highlight that male athletes record substantially greater running distances, distances covered at high speed, and PlayerLoad™ than female athletes during AF matches. This can be attributed to several factors, including match duration, playing rules, and physical capacity. However, it is also likely affected by the greater opportunity afforded to male athletes to train. Despite male and female athletes being defined as "elite", the female game is relatively young in nature whilst not yet being a full-time occupation, as opposed to the elite level of the male game. This leads to greater training and performance opportunities for male athletes (e.g., the AFL season is typically 23 matches plus a finals series, whilst the AFLW season is typically 7-9 matches plus a finals series), which should be taken into consideration when making comparisons between these two groups of athletes [63]. When total running distances were expressed relative to playing time, the differences between male and female athletes were significantly reduced, indicating that female AF players can reach similar levels of running intensity. However, when peak periods of play were analysed, these could not be maintained to the same levels by female athletes once the analysis window was lengthened. Additionally, relative high-speed running, and high-speed running expressed as a percentage of total distance, remained comparatively reduced amongst female players. Practitioners in the field should be aware of these differences and similarities when planning both training volumes and intensities. In this respect, male players should be exposed to higher training volumes, whereas training intensities should be reasonably similar between male and female players.
Practical Applications
1. To prepare for the current external loads of AF matches, female players may require lower training volumes, but similar relative intensities to male players.
2. Due to their enhanced ability to attain maximal running velocities, male athletes should have greater exposure to high-speed running (> 14.4 km·h−1 or > 4 m·s−1) during physical preparation periods. Additionally, there appears to be scope for improvement of high-speed running amongst female players, should increased opportunity for relevant training be afforded within AF programs and athletic development pathways.
3. Peak periods of play are similar between elite male and female AF players over shorter (e.g., 1 min) time periods, which may be reflected when prescribing drills aimed at replicating these phases of play, where similar running intensities appear to be appropriate.
Performance Analysis of a Head and Eye Motion-Based Control Interface for Assistive Robots
Assistive robots support people with limited mobility in their everyday life activities and work. However, most of the assistive systems and technologies for supporting eating and drinking require residual mobility in the arms or hands. For people without residual mobility, different hands-free controls have been developed. For hands-free control, the combination of different modalities can lead to great advantages and improved control. The novelty of this work is a new concept to control a robot using a combination of head and eye motions. The control unit is a mobile, compact, and low-cost multimodal sensor system. A Magnetic Angular Rate Gravity (MARG)-sensor is used to detect head motion, and an eye tracker enables the system to capture the user's gaze. To analyze the performance of the two modalities, an experimental evaluation with ten able-bodied subjects and one subject with tetraplegia was performed. To assess discrete control (event-based control), a button activation task was performed. To assess two-dimensional continuous cursor control, a Fitts's Law task was performed. The usability study was related to a use-case scenario with a collaborative robot assisting a drinking action. The results of the able-bodied subjects show no significant difference between eye motions and head motions for the activation time of the buttons and the throughput, while, using the eye tracker in the Fitts's Law task, the error rate was significantly higher. The subject with tetraplegia showed slightly better performance for button activation when using the eye tracker. In the use-case, all subjects were able to use the control unit successfully to support the drinking action. Due to the limited head motion of the subject with tetraplegia, button activation with the eye tracker was slightly faster than with the MARG-sensor. A further study with more subjects with tetraplegia is planned, in order to verify these results.
Introduction
According to international incidence data, 250,000-500,000 people per year suffer a spinal cord injury (SCI) worldwide. SCIs can be caused by accidents or by diseases such as multiple sclerosis, muscular dystrophy, or tumors. The loss of motor function of the arms and legs due to SCI is described as tetraplegia [1]. People with tetraplegia are dependent on support in their everyday life and work and require comprehensive home care. This support is provided by assistants. With appropriate assistance robots, people with tetraplegia would be able to return to work, as has been shown with the FRIEND system at Bremen University Library. This system supports a subject with tetraplegia in the task of retrospectively cataloging books. Gräser et al. showed that interaction and collaboration between the user and system result in a higher success rate than a completely autonomous system [2]. The inclusion of the superior visual senses of humans might be an advantage when controlling a robot or technology. Therefore, it is useful to combine multiple sensor modalities in order to control an assistive technology in the best possible way. The research and development of assistive technologies and robots in the field of activities of daily living (ADL) are also important for people with tetraplegia, in order to improve their quality of life, as assistance robots with interactive basic skills provide more independence for people with tetraplegia and relief for carers.
These ADLs include self-determined eating and drinking. Different assistive technologies are available to support people with limited mobility in eating and drinking. However, residual mobility in the arms, hands, and upper body is necessary to control these products, and thus people without such residual mobility are not able to use them [3]. Hands-free control is a requirement for assistive systems for these people. Several different hands-free Human-Machine Interface (HMI) concepts and assistive devices for people with tetraplegia are reviewed in [4]. In the following section, examples of controlling assistive technologies and robotics with different sensor modalities are presented.
State-of-the-Art on Sensor Modalities for Control Interfaces
Assistive technologies are controlled through different input modalities, e.g., speech, EEG (Electroencephalography), EMG (Electromyography), or movements of the head, eyes, or tongue. Hochberg et al. presented an example of controlling a robot using neural activity [5]. In their work, the signals of motor cortex neurons of subjects with tetraplegia were used to control a robot arm and perform grasp tasks. The neural signals were recorded using a microelectrode array implanted in an area of the motor cortex of the subject. This is therefore an invasive method of controlling robots. The implanted electrode array is a limitation, and thus the method is only usable for a small group of people. Another sensor modality is EMG, which involves the measurement of muscle activity and can be used to control an assistive technology. In [6], subjects with tetraplegia controlled a wheelchair by activating only their posterior auricular muscles and were able to complete an obstacle course. This provides subjects with tetraplegia with more independence in the field of mobility. To activate the posterior auricular muscles, the subjects had to complete preliminary software-based training with visual feedback. This method is therefore only usable after several days of prior training. Furthermore, the system was tested on a wheelchair in two-dimensional space and not on a robotic arm in three-dimensional space. To control a robot in three-dimensional space, Alsharif [7] used the gaze direction, gaze gestures, and states of the eyes. For measuring the gaze direction and states of the eyes, the SensoMotoric Instruments (SMI) Eye Tracking Glasses (ETG) were used. Since the SMI ETG is worn like normal glasses, it cannot be combined with prescription glasses, so spectacle wearers were unable to use it, which reduced user acceptance. Eye tracking is one method for gaze control, particularly for people with limited mobility, e.g., to control a cursor on a computer screen/touchpad and trigger actions. In this case, the eye offers an independent input channel, and the user can produce many eye movements without obvious symptoms of fatigue [8]. A fundamental obstacle when using eye tracking is the so-called "Midas Touch" problem, named after the legend of King Midas. Initially, it appears to be an advantage that the user controls things and interacts with objects simply by looking at them (i.e., without the help of hands). However, the eyes can then no longer be used for "normal vision", as everything the user looks at is clicked on or interacted with. It is not possible to distinguish whether the user's eyes are being used for sensory perception of the environment or for object interaction [9].
An option to minimize the "Midas Touch" problem is the so-called "Dwell Time". In this case, the object/button has to be fixated for a certain time in order to trigger an action. For this method, it is important to select an appropriate Dwell Time: if the Dwell Time is too short, unintended actions are triggered quickly, while, if the Dwell Time is too long, the user becomes impatient and fixation on the button might be lost, so triggering actions may take too long [8]. Besides eye movements, head movements can also be used as a control modality. An example of an assistive robot system that uses head motion control is the Adaptive Head Motion Control for User-friendly Support (AMiCUS) system [10]. Using this system, subjects with tetraplegia could successfully perform different pick and place tasks [11]. The head motions are recorded using Micro-Electro-Mechanical System (MEMS) Magnetic Angular Rate and Gravity (MARG)-sensors. MARG-sensors consist of a three-axis accelerometer, a three-axis gyroscope, and a three-axis magnetometer. Through the combination of these sensors, the absolute orientation can be calculated. MEMS sensors are very small micromechanical structures and associated control elements on one chip (i.e., a System on a Chip, SOC) [12]. The advantage of these sensors is their very compact design and size. They may be placed almost anywhere and may also be attached, for example, to a headset without disturbing the user.
State-of-the-Art on Evaluation of Sensor Modalities
For the evaluation of sensor modalities, analyzing the time required for given tasks and the error rate is a proven method. To investigate the performance and usability of sensor modalities, so-called "Fitts's tapping tasks" are typically performed. The Fitts's Law paradigm is a standard test to evaluate the effectiveness and efficiency of input devices [13]. Zhang and MacKenzie used the Fitts's Law paradigm to examine different eye tracking techniques [14]. Motion sensors have also been evaluated using the Fitts's Law test [15]. To evaluate the subjective workload when using control interfaces, the NASA Task Load Index (NASA-TLX) questionnaire is used worldwide [16]. This questionnaire measures the subjective workload on six different subscales. For the assessment of usability, individually created questionnaires are used. The Likert scale is often used in these questionnaires, as it is one of the most reliable instruments for measuring opinions and behavior [17]. Participants rate different statements, indicating whether they "fully agree" through to "fully disagree" with each.
Previous Work
Some elements of the robot control concept related to our previous work on the AMiCUS system [10] have been adapted and used in this work. Therefore, the AMiCUS system is presented in more detail in the following. The AMiCUS system uses head motion and head gestures to continuously control a robot arm with a gripper. The three Degrees of Freedom (DOFs) of the head are mapped to the seven DOFs of the robot arm, and the robot movements are divided into different robot groups. The head movements are used to continuously control the robot motion. Switching between robot control groups is enabled by head gestures. While using the AMiCUS system, the user chooses between two control modes, the so-called robot mode and the cursor control mode. During the robot mode, direct three-dimensional robot motions are possible.
During the cursor control mode, the user moves a cursor on a screen and performs actions such as starting a calibration routine or selecting a robot movement group. However, the evaluation of head gestures with able-bodied subjects and subjects with tetraplegia indicated problems in terms of fatigue [11]. To overcome these problems, a multimodal unit which combines head and eye motions for controlling a robot and a cursor is presented here. The monomodal control concept is thus extended, by means of the human visual system, into a multimodal system. It was decided to extend the head motion interface with eye motions because this sensor modality is generally suitable for controlling a robot, as described in Section 1.1. However, an eye tracker should be used that can also be worn by people who wear glasses. Furthermore, eye tracking is a non-invasive method and can be used without prior sensor implantation. Both MARG-sensors and some eye trackers are compact in design and size. This allows the development of a mobile, compact, and low-cost multimodal sensor control unit. To minimize the Midas Touch problem, which occurs during eye tracking, the Dwell Time option was implemented (see Section 2.3). The purpose of this work is a performance analysis of the control unit. The two modalities are examined in the context of discrete and continuous control. Therefore, two different tests were performed using the modalities alternately. The multimodal control unit was tested in a scenario relevant to supporting subjects with tetraplegia in their everyday lives. In the use-case scenario, both modalities were used simultaneously, and we examined how comfortably and intuitively a subject was able to grasp a cup with the control unit and bring the cup to their mouth. The following section describes the multimodal control unit. The experimental design, setup, and procedures of different tests to evaluate the control unit are presented. Then, the results of the different tests are shown and discussed. In the final section, our conclusions are given and future work is explained.
Materials and Methods
In this section, the multimodal control unit is introduced, and the performed tests and recorded parameters for analyzing the performance of the control unit are described. In this work, different control modes were used: discrete control, continuous cursor control, and continuous robot control. The discrete control mode describes event-based control. In this mode, interaction elements were activated in the form of buttons on a Graphical User Interface (GUI). In the continuous cursor control mode, a cursor was moved on a screen (i.e., in two dimensions). In discrete control mode, as well as in continuous cursor control mode, the control was performed either with the MARG-sensor or with the eye tracker. In continuous robot control mode, the robot arm was continuously moved in three-dimensional space. For this control mode, only head motions were used. Control with head motions proved to be robust in previous work [10]. In the Activation of Buttons Test (see Section 2.3.1), we examined which modality was better suited to activating different buttons. In the Continuous Cursor Control Test (see Section 2.3.2), we tested which modality was better suited to moving the cursor on the screen. The use-case scenario (see Section 2.3.3) assessed how comfortably and intuitively a subject was able to grasp a cup and bring it to their mouth when using the proposed control unit to control the robot arm. In the use-case, all three control modes were used to perform the task.
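To illustrate the continuous robot control mode, the sketch below shows one plausible way of mapping head orientation to a Cartesian velocity command for a single movement group. The orientation angles are assumed to come from the MARG-sensor fusion; the gains, dead zone, and axis assignment are illustrative assumptions rather than the mapping actually used in this work or in [23].

```python
import numpy as np

# Hedged sketch: map head pitch/roll (degrees, from MARG-sensor fusion) to a
# planar velocity command for the 'hplane' movement group. Gains, dead zone,
# and axis assignment are illustrative assumptions.
DEAD_ZONE_DEG = 5.0           # ignore small head motions around neutral pose
GAIN_M_PER_S_PER_DEG = 0.004

def head_to_hplane_velocity(pitch_deg, roll_deg, neutral=(0.0, 0.0)):
    """Return a [vx, vy] command in m/s from head pitch and roll."""
    command = []
    for angle, reference in zip((pitch_deg, roll_deg), neutral):
        delta = angle - reference
        if abs(delta) < DEAD_ZONE_DEG:
            delta = 0.0               # inside the dead zone: no robot motion
        command.append(GAIN_M_PER_S_PER_DEG * delta)
    return np.array(command)

# Example: head pitched 12 degrees forward, 2 degrees of roll ->
# forward motion only, since the roll stays inside the dead zone.
print(head_to_hplane_velocity(12.0, 2.0))   # [0.048 0.   ]
```

A dead zone around the neutral head pose is a common design choice here, as it allows the user to look around slightly without commanding robot motion.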
Subjects
In the present study, three tests were implemented to analyze the performance of the control unit. Twelve able-bodied persons and one subject with tetraplegia performed the three tests as a proof of concept for the proposed control interface. The data of two able-bodied persons were not included in the analysis, as they were not able to finish all three tests due to technical problems. The able-bodied subjects had no known limitations in terms of head movement. Five of them were male and five female. They were recruited through flyers and announcements listed on the university website. Their age ranged from 20 to 56 years (mean ± standard deviation (SD): 28.20 ± 10.70 years). Three of the subjects wore glasses and one subject wore contact lenses. All subjects gave their written consent to the experiment and were instructed in detail about the procedure. This study was approved by the ethics committee of the German Association of Social Work (DGSA). Figure 1a shows the experimental setup. The robot arm used was a UR5 robot by Universal Robots [18] with an adaptive 2-finger gripper, the Robotiq 85 [19]. A 23-inch screen was used to display the GUIs for the different tests. The subjects sat in front of the metal platform, and the distance between the subjects and the screen was 90 cm. The working area of the robot was set between its home position and the table area. During the tests, the subjects wore the multimodal control unit presented in Figure 1b, in which a MARG-sensor [20,21] was mounted in a printed case on the frame of an eye tracker. The sample rate of the MARG-sensor was 100 Hz. A monocular eye tracker headset (Pupil Core from Pupil Labs) was used [22]. The eye tracker (ET) tracked the pupil of the eye to display the person's gaze point on a screen, in order to follow eye movements. The frame rate of the world camera was 30 Hz at a resolution of 1920 × 1080 pixels, and that of the eye camera was 90 Hz at a resolution of 400 × 400 pixels.
Procedure
The experiment consisted of a trial phase and three tests. The trial phase was the first part of the experiment and gave the subjects the possibility to try out the system. The next part was a test examining discrete control using the multimodal control unit. During this test, the subjects had to activate different buttons. To evaluate the continuous cursor control, the subjects performed a two-dimensional Fitts's Law paradigm in the third part of the experiment. In the last part, the multimodal control unit was used to perform a drinking task. In this use-case scenario, all three control modes were used: discrete control (activating buttons), continuous cursor control (continuous two-dimensional movement of a cursor on the screen), and continuous robot control (continuous three-dimensional movement of the robot arm). Before starting the trial phase, the subjects read the participant information and signed the declaration of consent. Then, the multimodal control unit was placed on the user's head and fixed with an eyewear strap in order to prevent movements of the eye tracker. The eye camera was adjusted to track the pupil of the subject's eye, and the world camera captured the monitor screen on which the GUIs for the tests were displayed. The next step was the calibration of the eye tracker, as described in [22], to establish a mapping from pupil to gaze coordinates.
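In the 2D case, such a pupil-to-gaze mapping is commonly a polynomial regression from pupil coordinates in the eye-camera image to target coordinates on the screen, fitted from the calibration points. The sketch below illustrates this principle with a least-squares fit; it is a simplified stand-in and not the exact routine used by the Pupil software [22].

```python
import numpy as np

# Simplified sketch of a 2D gaze calibration: fit a quadratic polynomial
# that maps pupil positions (eye-camera pixels) to fixated screen targets.
def features(points):
    x, y = points[:, 0], points[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

def fit_gaze_map(pupil_px, target_px):
    """Least-squares fit; returns one coefficient column per screen axis."""
    coeffs, *_ = np.linalg.lstsq(features(pupil_px), target_px, rcond=None)
    return coeffs                                   # shape (6, 2)

def map_gaze(coeffs, pupil_px):
    return features(pupil_px) @ coeffs

# Toy calibration data (made up): pupil positions and the screen targets
# the subject fixated during calibration.
pupil = np.array([[180, 210], [220, 205], [200, 240], [160, 250],
                  [240, 230], [200, 220]], dtype=float)
targets = np.array([[200, 150], [960, 140], [560, 800], [100, 900],
                    [1800, 600], [960, 540]], dtype=float)
C = fit_gaze_map(pupil, targets)
print(map_gaze(C, pupil[:1]))   # approximately reproduces the first target
```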
In the trial phase, the subjects tested the system with the multimodal MARG-sensor and eye tracker control unit. In the continuous robot control mode, head motions were used. The three DOFs of the head were mapped to the seven DOFs of the robot arm with gripper, as described in [23]. Calibration of the MARG-sensor for the robot control and cursor control was performed using the calibration routines described in [10]. Before starting each test, the subjects read a detailed set of written instructions about the task. Between the different tasks, the subjects had the possibility to take a break. The eye tracker calibration was repeated for every test. For the Button Activation Test (Section 2.3.1) and in the use-case scenario (Section 2.3.3), a GUI with Dwell buttons was used. Figure 2 shows the activation mechanism of these buttons. The Dwell button type was chosen to reduce the Midas Touch problem. A certain Dwell Time was used, specifying that the subject had to fixate on an object or button for a certain duration before an action was triggered [8]. For this study, a Dwell button with a corresponding confirmation button was used, which was also implemented as a Dwell button. This allowed for double checking before a button was activated to trigger an action. While the user moved the cursor on the screen, it was checked whether the cursor was on a button area. If not, the status of the button was set to neutral. If yes, it was checked whether the status was neutral or dwelling. If the status was neutral, it was then set to dwelling. If the status was dwelling, the counter was increased. If a threshold was exceeded, the counter was reset to zero, the button was activated, and the button status was set to finished. Then, it was checked which button was activated (i.e., the instructed button or the corresponding confirmation button). If the instructed button was clicked, the corresponding confirmation button was displayed on the GUI. The user then had to activate the confirmation button. If the confirmation button was clicked, the selected action was performed and the confirmation button disappeared again. The threshold for both modalities was 50 samples.
Activation of Buttons Test
This test examined discrete control through the activation of buttons on a GUI. Eleven different buttons were available for activation: six buttons in the skill library group (see Figure 3a) and five buttons in the head control group (see Figure 3b). A skill library was created that allowed the user to move the robot to fixed positions without having to switch to the head control. The buttons in the skill library were based on the use-case scenario. The drinking process was divided into different sections: picking up the cup, moving the cup towards the user, drinking, putting the cup down, and moving the robot to its home position. Therefore, four buttons for fixed robot positions (cup, user, back to the table, and starting position) and two buttons to open and close the gripper (open and close) were implemented. By activating the buttons in the head control group, the user continuously controlled the robot arm. For controlling the robot, the three DOFs of the head were mapped to the seven DOFs of the robot arm. Due to the different number of DOFs, the movements of the robot were divided into different groups. These different groups were implemented as the five buttons in the head control group (hplane, vplane, gripper, orient1, and orient2). This implementation was proven to be most useful in the work of Rudigkeit et al. [10], and it was thus adopted for this work.
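The Dwell-button mechanism of Figure 2 amounts to a small per-button state machine with a sample counter. The sketch below is a reconstruction from the description above; the class and variable names are our own, not those of the original implementation.

```python
from enum import Enum, auto

class State(Enum):
    NEUTRAL = auto()
    DWELLING = auto()
    FINISHED = auto()

THRESHOLD = 50  # samples, as used in the study for both modalities

class DwellButton:
    def __init__(self, name, rect):
        self.name, self.rect = name, rect     # rect = (x, y, width, height)
        self.state, self.counter = State.NEUTRAL, 0

    def contains(self, cursor):
        x, y, w, h = self.rect
        return x <= cursor[0] <= x + w and y <= cursor[1] <= y + h

    def update(self, cursor):
        """Call once per cursor sample; returns True on activation."""
        if not self.contains(cursor):
            self.state, self.counter = State.NEUTRAL, 0   # cursor left button
            return False
        if self.state is State.NEUTRAL:
            self.state = State.DWELLING                   # dwelling starts
        elif self.state is State.DWELLING:
            self.counter += 1
            if self.counter >= THRESHOLD:                 # threshold exceeded
                self.state, self.counter = State.FINISHED, 0
                return True                               # button activated
        return False

# Usage: feed cursor samples; after activation, the corresponding
# confirmation button (itself a DwellButton) would be shown.
button = DwellButton("close", rect=(100, 100, 120, 60))
for _ in range(60):
    if button.update((150, 130)):
        print("button activated -> show confirmation button")
```

Note that, because the threshold is defined in samples rather than seconds, modalities with different update rates reach it after different wall-clock times; this is why the recorded activation times later had to be corrected, as described further below.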
The task of the subjects was to activate different buttons according to the instructions of the experimenter. The instruction sequence was structured in such a way that each of the eleven buttons had to be activated three times and no button was activated twice directly in a row. A button was activated by moving the cursor onto the button area and holding it there for some time (see Figure 2). After the button click, a blue confirmation button appeared (see Figure 3a). The cursor also had to be moved to this button and held there for some time (see Figure 2). Only after the confirmation button had been clicked was the activation of the button successful and the selected action executed. The corresponding times of the instructed button click and the confirmation button click were recorded as t_activation and t_confirmation, respectively. The test was performed twice, once with the eye tracker and once with the MARG-sensor. When using the eye tracker, the cursor was controlled with eye motions, while, when using the MARG-sensor, the cursor was controlled with head motions. To prevent learning effects, the starting order of the modalities was randomized.
Continuous Cursor Control Test
During the continuous cursor control test, the subjects performed the two-dimensional Fitts's Law paradigm according to ISO 9241-9, as described in [13,14]. The Fitts's Law test is a standard test to evaluate the performance of input devices (except keyboard devices). The software used in this experiment was based on the software developed by MacKenzie [24]. An example of a screen output is shown in Figure 4. On the screen, 25 circles (targets) are displayed, arranged in a circle. One circle at a time is marked in red. The starting position is the red marked circle in Figure 4; to perform the task with the eye tracker, four ArUco markers were added. The task was to move the mouse cursor to the red marked circle and hold it there until the next red marked circle appeared on the opposite side. A Dwell Time of 200 ms was set for target activation. The entire test consisted of several sequences, in which both the diameter of the targets (width W) and the distance between the centers of two opposite targets (amplitude A) were varied. A sequence contained 25 trials (25 targets/circles to which the mouse cursor had to be moved). For the distance, 600 and 800 pixels were chosen. The targets had diameters of 60, 80, and 100 pixels. In total, there were six different conditions. Between the procedures, the subjects were offered the possibility to take a short break. Each condition was performed with the eye tracker (mouse cursor moved by eye motions) and the MARG-sensor (mouse cursor moved by head motions). To prevent learning effects, the starting order of the modalities was randomized. Furthermore, the option "Randomize Target Conditions" was selected in the software, such that the sequence of the six conditions was randomized for each subject. Table 1 shows the chosen distances and diameters of the targets for the six conditions.
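The throughput reported by such software follows the ISO 9241-9 convention of an (effective) index of difficulty divided by movement time. The sketch below illustrates the computation in simplified form; the endpoint and timing data are invented for the example, and the software in [24] additionally applies an effective amplitude, which is omitted here.

```python
import math
import statistics

# Simplified ISO 9241-9 style throughput: effective width from the scatter
# of selection endpoints, index of difficulty in bits, divided by the mean
# movement time. Example data below are invented, not study results.
def throughput_bps(amplitude_px, endpoint_offsets_px, movement_times_s):
    sd = statistics.stdev(endpoint_offsets_px)
    effective_width = 4.133 * sd                    # W_e
    index_of_difficulty = math.log2(amplitude_px / effective_width + 1)
    return index_of_difficulty / statistics.mean(movement_times_s)

# Condition 1 of the study: A = 600 px, W = 60 px.
offsets = [3.0, -8.0, 5.0, -2.0, 10.0, -6.0]  # px from target centre per trial
times = [0.95, 1.10, 1.02, 0.98, 1.20, 1.05]  # movement time per trial (s)
print(f"TP = {throughput_bps(600, offsets, times):.2f} bps")
```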
Use-Case Test
The experimental setup of the use-case is presented in Figure 5. In this test, the eye tracker was used to continuously control the cursor on the screen and for discrete control, that is, for button activation. The robot arm was moved to fixed positions by activating the buttons in the skill library (see Figure 3a). The robot arm was able to be moved to the cup, to the user, back to the table, or to the home position. It was also possible to open and close the gripper. Head motions were used to continuously control the robot arm. Using the buttons of the head robot control (see Figure 3b), the robot could be moved in the horizontal plane (back and forth, left and right) and in the vertical plane (up and down, left and right). Furthermore, it was possible to change the orientation of the gripper and to open and close it. The task in the use-case was to grasp the cup using the robot arm's gripper and to move it close to the user, such that drinking through a straw was possible. Then, the cup had to be placed back on the table and the robot arm moved to its home position. The experimenter told the subject when a button of the skill library should be activated to move the robot arm to a fixed position, or when the buttons in the head control system should be used to move the robot independently. The following sequence was used:
1. Gripper is moved close to the cup position using the button Tasse ("cup"; skill library)
2. Gripper is moved to the cup using the buttons of the Kopfsteuerung ("head control"; robot control)
3. Gripper is closed using the button Greifer schließen ("close gripper"; skill library)
4. Gripper is moved close to the user using the button Nutzer ("user"; skill library)
5. Gripper is moved to the final drinking position using the buttons of the Kopfsteuerung (robot control)
6. User drinks
7. Gripper is moved back to the table using the button zurück zum Tisch ("back to the table"; skill library)
8. Gripper is moved using the buttons of the Kopfsteuerung (robot control), such that the cup is placed on the table
9. Gripper is opened using the button Greifer öffnen ("open gripper"; skill library)
10. Gripper is moved back to the home position using the button Anfangsposition ("starting position"; skill library)
11. Program is closed using the shut-down button
Activation of Buttons Test
Both objective and subjective parameters were measured to evaluate the multimodal control unit in the context of discrete control. Objective parameters were the time needed to activate a button and the error rate. The activation time was defined as the time from the click of the given button to the click of the confirmation button (t [s] = t_confirmation − t_activation). An activation was rated as false if a wrong button was activated (type 1 error) or if the subject was too slow in activating the button (type 2 error; the blue confirmation button disappears automatically after 5 s). If an activation was rated as a type 2 error, the button was not activated at all and no action was triggered. The rating was done by the experimenter while the subjects performed the task. Incorrect activations were repeated, such that 33 correct activations were recorded per subject (three per button). The mean value per button was calculated from the three activation times for each button. In the group of the able-bodied subjects, 355 trials were performed (330 correct trials and 25 incorrect trials). The subject with tetraplegia performed 38 trials (33 correct trials and 5 incorrect trials). Due to the different sample rates of the MARG-sensor and eye tracker, it took different amounts of time until the threshold (which was defined in terms of samples) was exceeded. Due to COVID-19, it was not possible for the subjects to repeat the button activation test with an activation mechanism in which the Dwell threshold was defined in seconds.
To compare the two modalities, the acquired time values had to be corrected. For this purpose, the times needed for the MARG-sensor and the eye tracker to exceed the threshold were determined. As explained in Figure 2, a counter was increased if the status of a button was set to dwelling. This time point, t_dwelling, was recorded. If the threshold was exceeded, the button status was set to finished. This time point, t_finished, was also recorded. The difference between t_finished and t_dwelling indicated how many seconds it took until the threshold value was exceeded (t_threshold_exceeded). Members of our research group activated the confirmation button one hundred times with both modalities, and the average value of t_threshold_exceeded was calculated per modality. The difference between the average t_threshold_exceeded of the MARG-sensor and that of the eye tracker was 1.02 s. This value was used to correct the activation times. Immediately after performing the task, the subjects had to fill out the NASA-TLX questionnaire, which measures the subjectively perceived workload. This questionnaire was used as a subjective parameter. Mental, physical, and temporal demands, as well as performance, effort, and frustration, were recorded using six sub-scales. The original version of the questionnaire [16] consists of two parts: in the first part, the six sub-scales are rated independently, while, in the second part, the scales are compared and weighted in pairs based on their contribution to the perceived workload. Then, the total workload is calculated using the weighting factors. In this study, the most commonly used modified version of the NASA-TLX, the Raw Task Load Index (RTLX), was used [25]. In the case of the RTLX, the weighting is eliminated, resulting in one rating score per scale.
Continuous Cursor Control Test
To evaluate the continuous control, two different objective parameters were calculated using the Fitts's Law software [24], according to the standard procedures of ISO 9241-9. The parameters were the throughput and error rate, which were calculated for each sequence of the six conditions. The error rate was calculated using the focus (centroid) of the cursor points detected while a target was approached, up until the target was actually activated. Figure 6 shows an example of the path of the mouse cursor taken by a subject during the task. One target (green rectangle) was rated as an error because the focus of the detected points was not within the target (small red dot). In this example, the error rate was 4.00% (1 of 25 targets).
Use-Case Test
In the use-case scenario, the subjects had to perform a sequence of eleven steps to solve the task (see Section 2.3.3). During the instructions of the experimenter, it was checked whether the correct buttons were activated and thus the correct steps and actions were performed. These observations were used to determine the completion rate of the task as an objective parameter to evaluate the control unit in the use-case. As a subjective parameter for evaluating the multimodal control unit in the use-case, the subjectively perceived workload was measured using the NASA-TLX [16] after completing the task. Mental, physical, and temporal demands, as well as performance, effort, and frustration, were recorded using six sub-scales. In this study, the most commonly used modified version of the NASA-TLX, the Raw Task Load Index (RTLX), was used [25]. In addition, the subjects filled out a questionnaire in which the control unit was evaluated.
The statements of this questionnaire, which were divided into three parts relating to cursor control, robot control, and general control (see Section 3.3), are presented in Table 2. For the rating, a Likert scale [17] with values ranging from 1 ("I do not agree at all") to 5 ("I totally agree") was used.
Activation of Buttons Test
In the following section, the results of the button activation test are presented. For the able-bodied subjects, the activation time averaged over all eleven buttons was 1.67 ± 0.06 s (SD) for the eye tracker and 1.53 ± 0.11 s (SD) for the MARG-sensor. The difference in activation time between the modalities was not significant (p > 0.05). The able-bodied subjects produced significantly more (p = 0.002) type 2 errors (too slow in activating the button) with the eye tracker (error rate: 5.76 ± 3.90% SD) than with the MARG-sensor (error rate: 0.30 ± 0.30% SD). For the type 1 errors (wrong button clicked), there was no significant difference (p > 0.05) between the two modalities (ET error rate: 0.61 ± 0.40% SD; MARG-sensor error rate: 0.91 ± 0.46% SD). For the subject with tetraplegia, the activation time averaged over all eleven buttons was 1.90 s for the eye tracker and 2.58 s for the MARG-sensor. Figure 7 shows an overview of the results of the button activations. To compare both modalities (eye tracker and MARG-sensor) in the able-bodied subjects, paired sample t-tests were used for each button. The mean activation times for the able-bodied subjects are presented in Figure 7a. A significant difference between the activation times of the two modalities was found for the button close (p = 0.022). The subjects activated this button significantly faster with the MARG-sensor (1.33 ± 0.26 s SD) than with the eye tracker (1.63 ± 0.26 s SD). For the other ten buttons, there were no significant differences between the activation times of the eye tracker and MARG-sensor (p > 0.05). For the number of errors, the difference between the modalities was significant for type 2 errors on the button open (see Figure 7b). For this button, the subjects made significantly more type 2 errors with the eye tracker than with the MARG-sensor (p = 0.024). However, no significant difference was found between the number of errors (of type 1 and type 2) for the two modalities for the other ten buttons (p > 0.05). In Figure 7c, the activation times of the subject with tetraplegia for the eleven different buttons are presented. For nine of the eleven buttons, the subject with tetraplegia needed more time to activate the buttons with the MARG-sensor than with the eye tracker. In general, the activation times of the subject with tetraplegia with the MARG-sensor were longer than those of the able-bodied subjects with the MARG-sensor. The subject with tetraplegia produced a similar number of errors with both modalities: three errors with the eye tracker and two errors with the MARG-sensor (see Figure 7d). However, the results show that the subject with tetraplegia made only type 2 errors with the eye tracker; in these cases, the subject was too slow in activating the button. The subject produced only type 1 errors when using the MARG-sensor; in these cases, the subject activated a wrong button. Figure 8 presents the results of the subjectively perceived workload measured with the NASA-TLX questionnaire. For both modalities, the able-bodied subjects and the subject with tetraplegia rated the workload as low on all six sub-scales (all rating scores < 50; see Figure 8a,b).
However, for the able-bodied subjects, significant differences between the modalities were found in the sub-scales effort, frustration, and mental demand while activating the buttons (p < 0.05). They found the procedure with the eye tracker (effort: 42.50 ± 26.38 SD; frustration: 32.50 ± 23.24 SD) significantly more strenuous and more frustrating than with the MARG-sensor (effort: 19.00 ± 9.07 SD; frustration: 14.00 ± 11.98 SD). Activation of the buttons was significantly more mentally demanding when using the eye tracker (44.50 ± 28.33 SD) than when using the MARG-sensor (23.50 ± 19.44 SD). For the subject with tetraplegia, performing with the eye tracker (35.00) was more frustrating than with the MARG-sensor (15.00). Furthermore, the temporal demand was higher when using the eye tracker (35.00) than when using the MARG-sensor (15.00).
Continuous Cursor Control Test
In the following section, the results of the Fitts's Law task are presented. For the able-bodied subjects, the throughput averaged over all six conditions was 2.01 ± 0.78 bps (SD) for the eye tracker and 2.24 ± 0.21 bps (SD) for the MARG-sensor. A paired sample t-test showed that the difference between the modalities was not significant (p > 0.05); the able-bodied subjects achieved the same performance with both modalities. In the complete Fitts's Law task, the able-bodied subjects produced significantly more (p = 0.001) errors with the eye tracker (average error rate: 4.47 ± 2.29% SD) than with the MARG-sensor (average error rate: 0.73 ± 0.49% SD). For the subject with tetraplegia, the throughput averaged over all six conditions was 0.85 ± 0.29 bps (SD) for the eye tracker and 1.19 ± 0.13 bps (SD) for the MARG-sensor. In the complete Fitts's Law task, the error rate for the subject with tetraplegia was 8.00 ± 5.66% (SD) for the eye tracker and 1.33 ± 2.07% (SD) for the MARG-sensor.
Figure 8. Results of the NASA-TLX questionnaire to measure the subjectively perceived workload for Test 1, scale: 0 = low/good to 100 = high/poor: (a) able-bodied subjects (n = 10), mean values ± SE of the rating scores for the six sub-scales of the NASA-TLX; and (b) subject with tetraplegia (n = 1), values of the rating scores for the six sub-scales of the NASA-TLX. * Marks significant differences (p < 0.05).
Figure 9 depicts an overview of the results of the Fitts's Law task for the six different conditions. To compare both modalities for the group of able-bodied subjects, we used paired sample t-tests for each condition. The determined mean throughputs for the able-bodied subjects are presented in Figure 9a. With both the eye tracker and the MARG-sensor, the subjects performed best in Conditions 3 and 6 (eye tracker: TP_con3 = 2.39 ± 1.09 bps SD and TP_con6 = 2.40 ± 1.34 bps SD; MARG-sensor: TP_con3 = 2.33 ± 0.39 bps SD and TP_con6 = 2.34 ± 0.30 bps SD). For Condition 1 (A = 600 pixels and W = 60 pixels; see Table 1), a significant difference was found between the throughputs of the modalities (p = 0.033). In this condition, the subjects showed a significantly higher throughput with the MARG-sensor (2.18 ± 0.31 bps SD) than with the eye tracker (1.51 ± 0.71 bps SD). However, there were no significant differences between the throughputs of the eye tracker and MARG-sensor for the other five conditions (p > 0.05). For the error rate, we found significant differences between the modalities in four of the six conditions (see Figure 9b). For Conditions 1, 3, 5, and 6, the subjects made significantly fewer errors with the MARG-sensor than with the eye tracker (p < 0.05).
Figure 9c shows the throughputs of the subject with tetraplegia for the six different conditions. The subject with tetraplegia demonstrated the highest performance with the MARG-sensor in Condition 6 (TP = 1.39 bps) and with the eye tracker in Condition 3 (TP = 1.25 bps). The subject with tetraplegia made more errors in each condition with the eye tracker than with the MARG-sensor (see Figure 9d). With the eye tracker, the subject with tetraplegia made the most errors in Condition 1 (error rate 16.00%) and performed best in Conditions 3 and 6 (error rates 0.00% and 4.00%, respectively).
Use-Case Test
In the use-case scenario, the subjects had to perform a sequence of eleven steps to solve the task. Nine of the ten able-bodied subjects and the subject with tetraplegia performed all eleven steps correctly. One subject performed ten of the eleven steps correctly and had to repeat one incorrect step. The completion rate of the use-case task was 90.00% for the able-bodied subjects and 100.00% for the subject with tetraplegia. For the evaluation of the multimodal control unit in the use-case scenario, the NASA-TLX and a questionnaire regarding the cursor control and robot control were used. In Figure 10, the results of the NASA-TLX are presented. The able-bodied subjects rated their subjectively perceived workload during the use-case procedure as low on all six sub-scales (all rating scores < 50; see Figure 10a). The able-bodied subjects gave the highest score for mental demand (rating score: 39.00 ± 21.71 SD) and the lowest score for temporal demand (15.00 ± 8.17 SD). The subject with tetraplegia showed a similar subjectively perceived workload to the able-bodied subjects (see Figure 10b). The rating scores for the mental, physical, and temporal demands, as well as effort and frustration, were less than 50. Only their own performance was rated poorer by the subject with tetraplegia (rating score: 80.00).
Figure 10. Results of the NASA-TLX questionnaire to measure the subjectively perceived workload for the use-case, scale: 0 = low/good to 100 = high/poor: (a) able-bodied subjects (n = 10), mean values ± SE of the rating scores for the six sub-scales of the NASA-TLX; and (b) subject with tetraplegia (n = 1), values of the rating scores for the six sub-scales of the NASA-TLX.
Figure 11 depicts the results of the questionnaire in which the control unit was evaluated. The statements of this questionnaire are presented in Table 2. In general, the able-bodied subjects were satisfied with the multimodal control unit (see Figure 11a; all rating scores ≥ 3.5), and they performed the use-case task successfully. The able-bodied subjects considered the cursor GUI and the robot GUI to be clearly arranged and visually appealing (rating scores for Statement 1: 4.70 ± 0.48 SD; and Statement 5: 4.60 ± 0.70 SD). The feedback on the current head position was useful (rating score for Statement 6: 4.40 ± 0.70 SD), and the able-bodied subjects could determine the position of the gripper (rating score for Statement 7: 3.90 ± 0.57 SD). The second lowest value of agreement was given by the able-bodied subjects for the ease of switching between head control and eye control (rating score for Statement 8: 3.60 ± 1.18 SD). The cursor control (button activation) section comprised Statements 2-4.
The able-bodied subjects gave these statements the lowest agreement compared to the other statements (rating scores: 3.50 ± 0.86 SD, 3.50 ± 0.85 SD, and 3.50 ± 0.71 SD, respectively); button activation was not so easy for the able-bodied subjects. The subject with tetraplegia gave ratings similar to those of the able-bodied subjects (see Figure 11b). The cursor GUI and robot GUI were considered visually appealing (rating scores for Statement 1: 5.00 and Statement 5: 4.00), and the subject with tetraplegia could easily determine the gripper position and movement direction (rating score for Statement 7: 5.00). Button activation was also not so easy for the subject with tetraplegia (rating scores: 3.00, 3.00, and 2.00, respectively). The lowest agreement was given for Statements 6 and 8 (both rating scores: 1.00). For the subject with tetraplegia, the feedback on the head position was not useful, and switching between head control and eye control was very difficult.
Table 2. Statements of the subjective questionnaire about the control unit.
Cursor control:
1. The graphical user interface is visually appealing and clearly arranged.
2. It is easy to move the cursor in a controlled way.
3. It is easy to activate the Dwell buttons.
4. It is easy for me to activate a specific button.
Robot control:
5. The graphical user interface is visually appealing and clearly arranged.
6. Feedback on the current head position is easy to understand and useful.
7. It is easy for me to put myself in the gripper's place and determine the position and movement direction of the gripper.
General control:
8. It is easy for me to switch between head control and eye control.
Figure 11. Results of the subjective questionnaire for the use-case, scale from 1 ("I do not agree at all") to 5 ("I totally agree"): (a) able-bodied subjects, mean values ± SE of the rating scores of the eight statements; and (b) subject with tetraplegia, values of the rating scores of the eight statements.
Discussion
In this work, the developed multimodal control unit was evaluated for discrete control (activating buttons) and for continuous cursor control, in order to discern which modality was best suited to which control mode. The hypotheses that the eye tracker would achieve better results for discrete control as well as for continuous cursor control were not confirmed, considering the results of the able-bodied subjects. With both modalities, the subjects showed similar performance; however, frustration, effort, and error rate were higher while using the eye tracker. The following points may be the reasons why activating the buttons was more frustrating and strenuous while using the eye tracker. During fixation, the eyes continuously perform small movements, even though the person has the feeling that their gaze is resting completely calmly on something. These micro-movements of the eyes are summarized as microsaccadic jitter and can be divided into three groups [26,27]. Slow micromovements, also called drift, occur during inter-saccadic intervals. Microsaccades are small and fast changes of eye position (distinguishable from drift movements due to their high speed). Microtremors are irregular and wavelike movements (high frequency and low amplitude). Micromovements may have caused the cursor to move out of the button area during fixation on the button when using the eye tracker. Due to this, the counter in the Dwell Time mechanism was restarted and the activation of a button took longer. Improving fixation detection is one possibility to reduce the influence of microsaccadic jitter. For this purpose, the online visual fixation detection algorithm of Salvucci and Goldberg [28] could be used. The algorithm uses spatial motion and duration thresholds to define a set of allowed pupil position differences between two sequential eye camera images. The spatial motion is the sum of the differences between successive pupil positions, using the pixel positions of the eye camera. The spatial motion is compared with the threshold value of the maximum spatial motion; a fixation is detected if the calculated spatial motion stays below the threshold value. The online visual fixation detection algorithm was successfully tested with our kind of eye tracker by Wöhle and Gebhard [21].
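A minimal sketch of such an online, motion-based fixation detector is given below. The window length and spatial threshold are illustrative values, not those used by Salvucci and Goldberg [28] or by Wöhle and Gebhard [21].

```python
from collections import deque

# Sketch of an online fixation detector: within a sliding duration window,
# sum the frame-to-frame pupil motion (eye-camera pixels) and report a
# fixation while that summed motion stays below a spatial threshold.
MAX_SPATIAL_MOTION_PX = 6.0   # allowed summed motion within the window
WINDOW_SAMPLES = 18           # about 200 ms at the 90 Hz eye camera

window = deque(maxlen=WINDOW_SAMPLES)

def update(pupil_px):
    """Feed one (x, y) pupil position per frame; True while fixating."""
    window.append(pupil_px)
    if len(window) < WINDOW_SAMPLES:
        return False                     # duration threshold not yet met
    samples = list(window)
    motion = sum(
        abs(b[0] - a[0]) + abs(b[1] - a[1])
        for a, b in zip(samples, samples[1:])
    )
    return motion < MAX_SPATIAL_MOTION_PX

# A dwell cursor driven only by detected fixations would tolerate
# microsaccadic jitter instead of restarting the dwell counter.
```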
Improving fixation detection is one way to reduce the influence of microsaccadic jitter. For this purpose, the online visual fixation detection algorithm of Salvucci and Goldberg [28] could be used. The algorithm uses spatial motion and duration thresholds to define a set of allowed pupil position differences between two sequential eye camera images. The spatial motion is the sum of the differences between successive pupil positions, using the pixel positions of the eye tracker's eye camera. The spatial motion is compared with the threshold value of the maximum spatial motion, and a fixation is detected if the calculated spatial motion stays below the threshold value (a short code sketch of this check follows at the end of this discussion). The online visual fixation detection algorithm was successfully tested with our kind of eye tracker by Wöhle and Gebhard [21]. The subject's gaze point was displayed on a monitor located at a distance of 90 cm from the subject. Due to this distance, an inaccuracy in pupil detection could have led to a greater inaccuracy in the displayed gaze point on the screen. Fixing the eye tracker with an eyewear strap should prevent the eye tracker from moving and, thus, also prevent inaccurate pupil detection; however, it does not completely eliminate this effect. The subject with tetraplegia, due to the disease pattern, was more limited in head movements than the able-bodied subjects. This might be a reason for the longer activation times of the subject with tetraplegia when using head motions. To verify this hypothesis, the button activation test will be repeated with more subjects with tetraplegia in a further study. One reason for the higher temporal demand of the subject with tetraplegia while using the eye tracker for activating the buttons might be the "Midas Touch" problem. To reduce the "Midas Touch" problem, the Dwell Time solution was used in this work; however, the chosen Dwell Time was evidently not well suited to the subject with tetraplegia and needs to be tuned individually. The activation test will be repeated with different Dwell Times in order to examine this aspect. Another possibility to reduce the "Midas Touch" problem is to use a further modality to trigger an action (button activation). In a further study, an EMG signal of the lateral eye muscles could be recorded; with a specific eye movement, e.g., blinking, an action could then be triggered. For the Fitts's Law test, the option Randomize Target Conditions was chosen. This option was intended to ensure that the order of the six conditions was randomized per subject, so that the six conditions were not performed in the same order for each subject and a possible learning effect should have been prevented. An analysis of the frequencies of the six conditions showed, however, that, when performing with the eye tracker, Condition 1 was performed by eight subjects at the beginning (i.e., at the first, second, or third position). In contrast, when performing with the MARG-sensor, only four subjects performed Condition 1 at the beginning. This may explain why the subjects achieved a significantly lower throughput with the eye tracker than with the MARG-sensor only for this condition: the lower throughput would then be due to a learning effect, and not to the modality. The randomization process for the Fitts's Law test thus only partially worked. In the Use-Case Test, our multimodal control system, consisting of the MARG-sensor and eye tracker, was tested for its usability.
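Before turning to those use-case results, here is the dispersion-based fixation check referred to above: a minimal sketch in the spirit of Salvucci and Goldberg's algorithm, summing successive pupil-position differences over a sliding window and declaring a fixation while the spatial motion stays below a threshold. The window length and motion threshold are illustrative assumptions, not values from the cited implementation.

    def spatial_motion(points):
        # Sum of distances between successive pupil positions (pixels)
        return sum(((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
                   for (x1, y1), (x2, y2) in zip(points, points[1:]))

    def detect_fixations(pupil_positions, window=12, max_motion=5.0):
        """Yield (start, end) sample indices of detected fixations."""
        i = 0
        while i + window <= len(pupil_positions):
            if spatial_motion(pupil_positions[i:i + window]) < max_motion:
                j = i + window
                # Grow the fixation while the motion criterion still holds
                while (j < len(pupil_positions) and
                       spatial_motion(pupil_positions[i:j + 1]) < max_motion):
                    j += 1
                yield (i, j)
                i = j
            else:
                i +=  1

Feeding the dwell counter only with samples belonging to detected fixations would suppress the jitter-induced resets discussed above.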
Both the robot control with the MARG-sensor and the cursor control with the eye tracker, as well as the resulting pre-programmed robot movements (skills), were successfully performed by all subjects. The task of the use-case scenario was solved, and the new control unit provided an aid to drinking. The low subjectively perceived workload indicates that the combination of the modalities (i.e., the MARG-sensor and eye tracker) can be used to control a robot to assist in a drinking task without leading to increased effort and strain for the subjects. In the evaluation of the control unit, the subjects' agreement ratings indicated that there is potential for improvement in the cursor control, especially regarding the activation of specific buttons. In the use-case scenario, the eye tracker was used for button activation. Due to the micro-movements of the eyes which occur during fixation [26,27], the control of the cursor by the eye tracker led to small, unwanted cursor movements. This made it difficult for the subjects to move the cursor in a controlled manner. The high proportion of glasses wearers among the able-bodied subjects may be another reason for the lower agreement of the subjects with the statements about cursor control, compared to the statements about robot control. Wearing glasses influences the accuracy of pupil detection, affecting the accuracy of the cursor movement and, thus, the activation of buttons. To improve continuous cursor control with the eye tracker, increasing the effective target size once the target is acquired, the so-called spatial hysteresis, is a common principle. This would reduce the influence of micro eye movements when activating buttons with the eye tracker. Conclusions and Future Work For all subjects, head motions were found to be a robust modality for controlling the robot in three-dimensional space. For the ten able-bodied subjects, no significant difference between the modalities was found for the activation time and error rate in the button activation test, or for the throughput (Fitts's Law task). However, button activation with the eye tracker was mentally more demanding, strenuous, and frustrating. In addition, the error rate in the Fitts's Law task was significantly higher with the eye tracker. The subject with tetraplegia activated the buttons more quickly through eye motions than through head motions. For the subject with tetraplegia, both modalities showed similar results for throughput; however, the error rate was significantly higher when using the eye tracker. Regarding overall effectiveness, and especially robustness as represented by the error rate, the MARG-sensor provided better results than the eye tracker. Nine of the ten able-bodied subjects and the subject with tetraplegia performed the task in the drinking use-case scenario without errors. These subjects were able to control the robot arm with a gripper successfully and intuitively with the combination of an eye tracker and a MARG-sensor. When performing the task, all subjects showed a low subjectively perceived workload. The developed multimodal control unit can thus provide support while drinking. However, switching between head control and eye control was not easy for the subject with tetraplegia. The observed results of the subject with tetraplegia will be verified in a further study. For our future work, an evaluation of the proposed system with a larger number of subjects with tetraplegia is planned.
In addition, it is planned to compare the proposed control system with conventional methods. The button activation with the eye tracker has potential for improvement in terms of increasing the effective target size once the target is acquired; in future work, this so-called spatial hysteresis will be implemented. Improving the switching between head and eye control, as well as the button activation, is an important aspect for future research. In particular, the user feedback indicating whether head control or eye control is currently active should be improved. An extension by a further modality is planned, in order to reduce the Midas Touch problem occurring while using the eye tracker; a specific EMG signal of the lateral eye muscles will be used to trigger a button click. Conflicts of Interest: The authors declare no conflict of interest.
Studying the metallicity gradient in Virgo Ellipticals with E-ELT photometry of resolved stars The next generation of large aperture ground based telescopes will offer the opportunity to perform accurate stellar photometry in very crowded fields. This future capability will allow one to study in detail the stellar population in distant galaxies. In this paper we explore the effect of photometric errors on the stellar metallicity distribution derived from the color distribution of the Red Giant Branch stars in the central regions of galaxies at the distance of the Virgo cluster. We focus on the analysis of the Color-Magnitude Diagrams at different radii in a typical giant Elliptical galaxy obtained from synthetic data constructed to exemplify observations of the European Extremely Large Telescope. The simulations adopt the specifications of the first light high resolution imager MICADO and the expected performance of the Multi-Conjugate Adaptive Optics Module MAORY. We find that the foreseen photometric accuracy allows us to recover the shape of the metallicity distribution with a resolution $\lesssim 0.4$ dex in the inner regions ($\mu_{\rm B}$ = 20.5 mag arcsec$^{-2}$) and $\simeq 0.2$ dex in regions with $\mu_{\rm B}$ = 21.6 mag arcsec$^{-2}$, which corresponds to approximately half of the effective radius for a typical giant elliptical in Virgo. At the effective radius ($\mu_{\rm B} \simeq 23$ mag arcsec$^{-2}$), the metallicity distribution is recovered with a resolution of $\simeq 0.1$ dex. It will thus be possible to study in detail the metallicity gradient of the stellar population over (almost) the whole extension of galaxies in Virgo. We also evaluate the impact of moderate degradations of the Point Spread Function from the assumed optimal conditions and find similar results, showing that this science case is robust. INTRODUCTION In spite of the general consensus on the hierarchical model for the formation of galaxies, the details of their growth and assembly are still unclear. For example, in the case of Elliptical galaxies, we do not know whether their assembly occurs preferentially via dry merging, involving mostly stars, or wet merging, involving stars and gas, accompanied by star formation (e.g. Ciotti et al. 2007). According to Kormendy et al. (2009), these two modalities could lead to a dichotomy in several properties of this kind of galaxies, including, e.g., the formation of disky ellipticals when wet merging is dominant, or boxy ellipticals in the opposite case, as well as other observational evidence (Ciotti 2009). Other scenarios consider the occurrence of wet merging preferentially at high redshift, followed by prevailing dry merging at lower redshift (Oser et al. 2010). Alternatively, the formation process could be characterized by two main phases, with in-situ star formation producing the inner regions at early epochs, followed by the growth of the external parts of the galaxies via dry merging. In this model the importance of the latter mechanism increases with galaxy size. Different formation paths imprint different metallicity distributions and gradients over the galactic radii. For example, in the case of wet merging, the gas should deposit in the central regions of the accreting galaxy, where the last star formation episode would occur. This process leads to the construction of sizable metallicity gradients, with metal rich stars dominating the central parts of the galaxy.
Conversely, a prevalence of dry merging would result in a quite flat metallicity gradient, the accreted galaxies being disrupted and their members mixed with the accretor's stars. Therefore, the observational determination of metallicity distributions and metallicity gradients in Ellipticals provides strong constraints on their formation models. This problem has been investigated through the analysis of integrated colors and line-index gradients in galaxies (e.g. Weijmans et al. 2009; Coccato et al. 2010; Rawle et al. 2010; Kim & Im 2013). These studies support the notion that dry merging is indeed important in the formation of ellipticals, but the actual size of the metallicity gradients and its trend with galaxy size are a matter of debate. In addition, the integrated light can only yield global information on the metallicity, and this information is necessarily weighted by luminosity, which favors the younger stellar generations. Conversely, tight constraints on the formation models could be obtained from the detailed study of the metallicity distribution, its peak and extension, and its trend with radius. An efficient way to measure the metallicity distribution function (MDF) involves the analysis of the color distribution of stars on the Red Giant Branch (RGB) (e.g. Harris et al. 1999). RGB stars are intrinsically very bright, and are produced by stellar populations with an extremely large range of ages, older than ∼ 2 Gyr up to the Hubble time. Therefore, this component of the stellar population samples almost the whole star formation history of the galaxy. Although the color of the RGB stars also depends on their age, the sensitivity to metallicity is much more pronounced, so that the width of the RGB is often used to derive the width of the metallicity distribution of a stellar population. Individual spectroscopy for bright RGB stars (M_I ≃ −3.5) cannot be performed even for the Ellipticals nearest to us, leaving the photometric method as the only means to access the metallicity distribution. A thorough study of this kind has been performed for the nearby elliptical galaxy Centaurus A (Harris & Harris 2000; Rejkuba et al. 2005), through the analysis of CMDs obtained from HST data in different regions of the galaxy, sampling the stellar populations from ≃ 8 to ≃ 38 Kpc from the center. The results show that the metallicity distribution is very wide in all the examined fields, with very little variation of the peak and width. However, single star photometry in the inner regions is hampered by crowding; the innermost field studied in Centaurus A is located at 8 Kpc from the center, corresponding to 1.5 times the effective radius (R_eff), leaving unexplored most of the stellar mass of the galaxy. With the exquisite resolving power of the future 40 m class European Extremely Large Telescope (E-ELT) (Gilmozzi & Spyromilio 2007) working close to the diffraction limit, thanks to Laser Guide Star (LGS) assisted Multi-Conjugate Adaptive Optics (MCAO), it will be possible to perform accurate photometry of bright RGB stars in extremely crowded fields, down to the inner regions of galaxies, and to map the metallicity distribution over whole Ellipticals. With the E-ELT we will be able to study galaxies beyond the Centaurus group, and in particular access members of the Virgo Cluster, enabling the comparison of the metallicity distribution in a suitable sample of galaxies located in different environments. This paper builds on Greggio et al.
(2012), in which this problem was explored, leading to the encouraging result that the uncertainty on the metallicity of a bright RGB star amounts to ∼ 0.1 dex at approximately 0.5 effective radii in a typical elliptical in Virgo. Here we expand the investigation to quantify how the accuracy varies with stellar crowding, from the central parts up to ∼ 2 R_eff. Moreover, we add a discussion of the effect on the results of the small variations of the Point Spread Function (PSF) across the whole MICADO (Multi-AO Imaging Camera for Deep Observations) Field of View (FoV), due to the non-uniformity of the MCAO correction and to variations of the seeing conditions. The expected performance of an ELT for photometry in crowded fields has been investigated by Olsen, Blum & Rigaut (2003) and Deep et al. (2011). These papers aim at quantifying in general the photometric accuracy and its variation with crowding. Here we address a specific scientific issue, assessing the accuracy with which the metallicity distribution can be derived, which degrades with crowding. The paper is organized as follows. In Sect. 2 we describe how the simulated frames were produced, detailing in particular the adopted PSF (Sect. 2.3), while in Sect. 3 we exemplify how the synthetic images were reduced. In Sect. 4 we report our results: the quality of the photometric measurements (Sect. 4.1), the derived CMDs (Sect. 4.2), and the MDF resulting from the analysis of the RGB stars (Sect. 4.3). The comparison of this MDF to the input one clearly illustrates the feasibility of the considered science case. In Sect. 5 we discuss how the results change when considering a non-optimal PSF. A summary of our results is presented in Sect. 6. The science case In this paper we focus on the problem of deriving the stellar metallicity distribution in different parts of a giant elliptical galaxy member of the Virgo Cluster by means of simulated star fields. At a distance of 18 Mpc, the brightest stars of an old stellar population, which are at the Tip of the RGB, have J ≃ 26 mag. For our simulations we adopt the same model stellar population as in Greggio et al. (2012), namely a flat age distribution between 10 and 12 Gyr, and a metallicity distribution determined for a halo field in the elliptical galaxy Centaurus A (Rejkuba et al. 2005) (see Fig. 1). We refer the reader to Greggio et al. (2012) for details on the model computations; we recall here only some properties of the stellar population: the ratio between the mass of formed stars and the current B band luminosity is 7.05 M⊙/L_B,⊙; the integrated colors are B−V = 0.88, B−I = 1.97, and B−K = 3.66. On the CMDs of Fig. 1, the different metallicity bins appear well separated, thereby illustrating the diagnostic power of the color of RGB stars. Comparing the left to the central panel, one can appreciate the superior sensitivity to metallicity of the I−J color with respect to the J−K color, because the wider wavelength baseline better traces the stellar effective temperature. The quality of the metallicity distribution derived from the CMD will depend on the photometric errors affecting the color distribution of the stars, which in turn depend on the crowding conditions. We quantitatively map this effect by placing the stellar population at different locations within the galaxy, i.e. at different surface brightness levels.
Real galaxies could be characterized by a systematic trend of the metallicity distribution with galactic radius; however, we do not attempt to incorporate this variation in our modeling, since we aim at evaluating how crowding affects our capability of recovering a given metallicity distribution from photometry of the resolved stars in the galaxy. Figure 1. The optical Near-IR CMDs (left and central panels) and the metallicity distribution (right panel) of the model stellar population considered for our science case. On the CMDs, dots are colored according to their metallicity with the same encoding as in the right panel. The simulation is the same as in Greggio et al. (2012), and corresponds to a total mass of 6.8 × 10^7 M⊙ of stars formed between 10 and 12 Gyr ago. The synthetic CMD has been computed with the YZVAR code by G.P. Bertelli using the Girardi et al. (2002) stellar tracks. Please note that the sharp boundaries between the populations are due to the solid colors used for the dots in the plots; there is actually a partial overlap of the RGB loci occupied by stars belonging to adjacent metallicity bins, which is not visible in the figure. The instrument The first light imaging system currently foreseen for the future 40 m class E-ELT will consist of a high resolution camera (MICADO, Davies et al. 2010) coupled with MAORY (Multi-conjugate Adaptive Optics RelaY, Diolaiti et al. 2010), an LGS-assisted MCAO module. The MICADO camera, optimized for imaging at the diffraction limit, will fully sample the 6 (11) mas PSF core FWHM in the J (K) band; it requires an image correction of high quality and uniformity across a FoV of 53 × 53 arcsec over the wavelength range 0.8-2.4 µm. A good uniformity of the high resolution PSF across the FoV is ensured by the MAORY MCAO module by means of several deformable mirrors, optically conjugated to different turbulent layers, and several guide stars, to obtain a kind of 3-dimensional mapping of the turbulence. The MAORY phase A baseline takes advantage of a constellation of 6 LGS and 3 Natural Guide Stars (NGS) for the turbulence sensing, following the choice adopted in other MCAO systems for present and future telescopes, like GeMS on Gemini (Neichel et al. 2010) and NFIRAOS (Herriot et al. 2010) on the future Thirty Meter Telescope (TMT, Szeto et al. 2008). MCAO has already been successfully demonstrated on sky. Input Point Spread Functions The PSFs for the synthetic images have been downloaded from the MAORY official website. The MAORY PSF (Fig. 2 shows an example in the J band) presents a very complex shape, where the main central component has the typical shape of the Airy disk, and the secondary structures can be attributed to the characteristics of the MCAO system. The secondary peaks arranged in a quasi-hexagonal configuration mirror the constellation of the 6 LGSs; the big external square reflects the density and displacement of the deformable mirror actuators; and the external halo is produced by residual light due to the non-perfect correction of the turbulence. In order to map the PSF variations across the FoV, model PSFs are computed on a polar grid of directions (shown in Fig. 3). For each photometric band and point of the grid, they are available for two atmospheric seeing conditions: "median seeing" (FWHM = 0.8 arcsec at 0.5 µm) and "good seeing" (FWHM = 0.6 arcsec). Fig. 4 shows the trend of the predicted MAORY PSF Strehl Ratio (SR) with distance from the center of the FoV.
The SR is the ratio of the maximum intensity in the PSF to that in the theoretically perfect point source image (Airy disk), and it is a tracer of the image quality after AO correction. The performance across the MICADO FoV is remarkably uniform, due to the multi-conjugation of the AO correction. Fig. 2.3 shows two examples of the model MAORY PSF in the J band under "good seeing" conditions, at the two locations on the FoV marked in Fig. 3 by the two colored filled circles. Fig. 2.3 exemplifies the maximum expected variation of the PSF over the MICADO FoV. We explore the sensitivity of our results to PSF variations by computing two sets of simulated images, adopting the model "good seeing" PSFs at the two locations shown in Fig. 3. In addition, we compute a third set of frames using the "median seeing" PSF in the center of the FoV. Since the PSF variation across the whole MICADO FoV is very low, we assumed a fixed PSF for each simulated frame. Frames generation The images have been simulated using the AETC tool with the E-ELT MICADO configuration. We recall in Table 1 all the relevant parameters of the telescope and of the instrument we have considered, as well as the observation conditions. We simulated images in the I, J, H and Ks bands for the science case described in Sect. 2.1 at different surface brightness levels (i.e. crowding conditions), scaling the image FoV accordingly. The characteristics of the various cases considered are summarized in Table 2. The total B band luminosity of the stellar population sampled by the synthetic frame is given by L_B = FoV^2 × 10^(0.4 (DM + M_B,⊙ − µ_B)) L_B,⊙, where FoV^2 is the area of the simulated frame in arcsec^2, µ_B is the (unreddened) surface brightness of the stellar population, and DM is the distance modulus. We adopt DM = 31.3 mag and a solar absolute magnitude of M_B,⊙ = 5.48. The number of stars populating a given section of the CMD is proportional to L_B. In order to maintain the same statistical sampling of the CMD in all the explored cases, we chose the FoV and µ_B to produce fields with constant L_B ≃ 10^7 L_B,⊙. The explored cases encompass a wide range of surface brightness (SB), extending almost down to the center of a typical giant elliptical galaxy at the distance of the Virgo cluster (see Table 2). Stars brighter than a threshold magnitude (∼ 1.5 mag fainter than the limiting magnitude, at S/N = 5, of the simulation) are used for the simulated frames, while the remaining light of the stellar population is distributed over the frame as a pedestal, with its associated Poisson noise. This ensures that the effect of blending of stellar images is well characterized also at the faint end of the luminosity function. For the considered science case, the input stellar lists used to generate the frames contain ≈ 126000 stars brighter than K = 31 mag (Table 2). This number of synthetic stars proved to be large enough to ensure a good statistical sampling of the photometric error. The lists include information on the mass, age, metallicity, magnitudes in the I, J, H and Ks bands, and the coordinates on the frame of each star.
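As a sanity check of this scaling, the relation between surface brightness, sampled luminosity, and field of view can be evaluated directly. The sketch below assumes the luminosity formula in the form reconstructed above; the surface brightness values are those of the explored cases.

    DM = 31.3        # adopted Virgo distance modulus (mag)
    M_B_SUN = 5.48   # adopted solar absolute B magnitude

    def sampled_luminosity(fov_arcsec, mu_B):
        """B-band luminosity (in L_B,sun) sampled by a square frame."""
        return fov_arcsec ** 2 * 10.0 ** (0.4 * (DM + M_B_SUN - mu_B))

    def fov_for_luminosity(L_B, mu_B):
        """Field of view (arcsec) sampling a fixed luminosity at a given mu_B."""
        return (L_B * 10.0 ** (-0.4 * (DM + M_B_SUN - mu_B))) ** 0.5

    for mu in (19.3, 20.5, 21.64, 23.85):
        print(mu, round(fov_for_luminosity(1e7, mu), 2), "arcsec")

Keeping L_B ≃ 10^7 L_B,⊙ thus requires progressively larger frames as the surface brightness becomes fainter.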
Synthetic frames are generated from these lists in the corresponding bands with the following steps: for each star, the properly re-sampled MAORY PSF is positioned at the star's coordinates; the source flux is computed by the AETC with the MICADO configuration assuming an exposure time of 2 hours (adding 100 individual exposures); the photon noise statistics are added, taking into account the subtraction of the background due to the telescope and to the sky plus the non-resolved stars component; finally, the total Read Out Noise is added. Fig. 5 shows three simulated images obtained assuming different surface brightness values and illustrates the explored range of crowding conditions. PHOTOMETRIC ANALYSIS The synthetic frames are characterized by highly structured PSFs, with a sharp diffraction-limited core and an extended halo (see Figure 2.3). The complex shape of the PSF, typically obtained when AO is involved, cannot be easily represented by a simple combination of a few analytical components (Schreiber et al. 2012). We therefore decided to perform the PSF photometry using the StarFinder code (Diolaiti et al. 2000), a program specifically designed for high resolution AO images that takes into account the problem of reliable star recognition in crowded fields with a highly structured PSF. The analysis is accomplished by PSF fitting, using a numerical PSF template extracted from the frame, in order to account for all the bumps and fine scale structures. StarFinder first estimates the background, which can be variable across the frame, and the noise standard deviation. The candidate stars are then chosen by selecting the sources with a peak value statistically significant above the background; they are listed by decreasing intensity and compared to the PSF through cross-correlation, yielding an objective measure of similarity. If the correlation coefficient is higher than a pre-fixed threshold, the object is rated similar to the PSF and accepted. The accurate determination of its position and flux is obtained by means of a local fit. We set a detection threshold of 3σ, where σ stands for the background noise standard deviation, and a correlation coefficient of 0.5. The contribution of the detected stars is recorded into an image model which is continuously updated and used as a reference to account for the contamination of the already detected sources. After a first iteration of the star detection loop with a preliminary rough photometric analysis, the contaminating sources around the stars selected for the PSF estimation are identified and the initial background estimation is refined, resulting in a more accurate PSF estimation. Figure 6 shows the central part of the MAORY PSF in the J band and the PSFs extracted by StarFinder from three simulated frames with different crowding conditions. It is noteworthy how the halo details are better recovered as the surface brightness, and hence the crowding, decreases. The StarFinder code is written in the IDL language. Photometric accuracy and completeness The photometric accuracy is evaluated for each band and for each crowding condition by matching the input and output catalogues. The chosen matching algorithm searches for candidate counterparts within a 1 pixel distance of each detected source and, in case of ambiguity due to multiple candidates, the brightest star is chosen. No limit on the difference between the input and output magnitudes has been set.
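A minimal sketch of this matching rule, with positions in pixels; a real pipeline would use a spatial index rather than the plain double loop used here for clarity.

    def match_catalogues(detected, inputs, radius=1.0):
        """detected: list of (x, y); inputs: list of (x, y, mag).
        Returns one matched input star (or None) per detected source."""
        matches = []
        for dx, dy in detected:
            candidates = [(x, y, m) for (x, y, m) in inputs
                          if (x - dx) ** 2 + (y - dy) ** 2 <= radius ** 2]
            if not candidates:
                matches.append(None)   # no counterpart: spurious detection
            else:
                # Ambiguity between multiple candidates: keep the brightest
                matches.append(min(candidates, key=lambda star: star[2]))
        return matches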
This criterion has the merit of being independent of the magnitude difference between the input and output source, which is precisely what we want to measure. However, it has the drawback of inducing false associations of very faint stars with relatively bright detected objects. This spurious association occurs in particular at high surface brightness levels, when the number of faint stars in the error box around the detected source is large. At the other extreme, when no input source is found within a 1 pixel distance, the detected source is considered a spurious detection, due to noise spikes or to inaccurate PSF modeling. Notice that the secondary peaks of the structured MAORY PSF (outlined in Fig. 2 and Fig. 2.3) may induce false detections of faint stars if the extracted PSF lacks details in the halo substructures. In this respect, the numerical PSF extracted by StarFinder provides a high quality modeling of the halo sub-structures that limits this problem. Table 3 shows that most of the spurious detections are due to noise spikes, especially in high crowding conditions, and that the percentage of spurious objects is larger at redder wavelengths because the higher background level generates more spikes. The largest fractions of spurious detections occur for µ_B = 21.64 mag arcsec^-2, which corresponds to an intermediate level of crowding among the analyzed cases. For very high crowding, the adopted matching criterion, which relies only on the position of the sources, maximizes the association between an output and an input star, thereby minimizing the cases in which an output star has no counterpart in the input list. All the simulated images included ≈ 126000 stars with Ks ≤ 31 mag (see Table 2). The number of detected sources for each case and in each band is reported in Table 4. The detection of sources is limited by two main background components: the sky + telescope background, and the light provided by the non-resolved stars plus the contribution due to the superposition of the very extended halos of the individual sources. While the former gets brighter at longer wavelengths, the latter is more important when the SR is lower (i.e. at shorter wavelengths) and when the crowding is higher. Matching the input and output catalogues allows us to determine the calibration constant for the output magnitudes as the average magnitude difference of the brightest stars (∼ 300 objects). In Fig. 7 we compare the input and the output luminosity functions for five cases in the I and in the J bands. The luminosity functions are well recovered down to a magnitude which becomes progressively fainter as the crowding decreases (i.e. as µ_B increases). The magnitude levels of 50 per cent completeness (expressed as the ratio between the output and the input luminosity functions) are shown in Fig. 8 as functions of the surface brightness. At µ_B ≳ 21.5 mag arcsec^-2, the 50 per cent completeness magnitude becomes independent of the surface brightness in all bands, indicating that below this level the source detection is no longer limited by crowding. Another important aspect highlighted by Fig. 7 is the difference between the output luminosity function and the luminosity function of the input stars matched to the stars recovered by the data reduction. The two distributions coincide in the brightest bins, where the photometry is very accurate, but below a certain magnitude, which depends on the crowding conditions, the output luminosity function typically exceeds the input one.
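The 50 per cent completeness level discussed above can be computed directly from the input and output magnitude lists; a minimal sketch follows (the 0.25 mag bin width is an illustrative choice).

    import numpy as np

    def completeness_limit(input_mags, output_mags, bin_width=0.25):
        """Faintest magnitude at which the recovered fraction is >= 50%."""
        edges = np.arange(min(input_mags), max(input_mags) + bin_width, bin_width)
        n_in, _ = np.histogram(input_mags, bins=edges)
        n_out, _ = np.histogram(output_mags, bins=edges)
        frac = np.where(n_in > 0, n_out / np.maximum(n_in, 1), 0.0)
        complete = np.nonzero(frac >= 0.5)[0]
        if complete.size == 0:
            return None
        return edges[complete[-1] + 1]   # faint edge of the last >= 50% bin

Note that blending can push the per-bin ratio above unity at intermediate magnitudes, which is precisely the excess of the output luminosity function noted above.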
This excess is due to blending, which causes the migration of stars towards brighter bins along the luminosity function (see, e.g., Greggio & Renzini 2011). The photometric error is illustrated in Fig. 9, where we compare the input and measured magnitudes of the matched stars for the case with µ_B = 21.64 mag arcsec^-2 in the J band. The median value of the magnitude difference is zero for the brightest stars (by construction), but becomes progressively more negative towards fainter magnitudes. For each star, the photometric error is due to both crowding and noise. While the noise causes a randomly distributed error in the direction of brighter or fainter magnitudes, the crowding leads to a negatively biased error, pushing the stars more frequently into brighter bins. This is the reason for the asymmetrical distribution of the magnitude differences in Fig. 9. The 1σ width loci are compared in Fig. 10 for various values of the surface brightness (i.e. crowding conditions) in different bands. It is apparent that in the case of µ_B = 19.3 mag arcsec^-2 the photometric accuracy is rather poor at all wavelengths. The error distribution appears very asymmetrical due to blending, especially in the more crowded regions. Notice that the photometric accuracy in the I band is similar to that in the Near-IR bands, in spite of the worse AO correction quality (i.e. lower SR) at shorter wavelengths. This is due to the lower sky background in the optical. Similar levels of completeness and similar values of the photometric errors have been reported by Deep et al. (2011). Color Magnitude Diagrams CMDs have been obtained by combining the catalogues in the four broad-band filters (I, J, H and Ks) for each of the 8 cases in Table 2. Fig. 11 shows the output (J, I−J), (H, I−H) and (J, J−K) CMDs at four surface brightness levels, covering the range 19.3 ≤ µ_B ≤ 23.85 mag arcsec^-2, which corresponds to a radial range of 0.1 ≲ R/R_eff ≲ 1.5 for a typical bright elliptical galaxy. In each panel of Fig. 11, dashed lines show the 50 per cent completeness limits in the two bands used to construct the CMD. It appears that the adopted exposure times allow us to derive complete CMDs in the external regions at µ_B > 21.6 mag arcsec^-2, and that the limits in the I, J and H bands are equivalent for sampling the RGB stellar population. The limiting magnitude in the Ks band appears instead too bright compared to that in the J band, as shown by the lack of stellar detections at Ks ≳ 27.5 mag in the bottom right panel of Fig. 11 for stars which are detected in the J band. However, the (J, J−K) diagrams still sample the upper 2 magnitudes of the RGB with high completeness factors, which should suffice for the determination of the MDF. The effect of crowding on photometric accuracy can be appreciated in Fig. 11 as an increasing depth and a better color separation as the surface brightness becomes fainter. Notice that the separation of the different colors on the CMDs reflects the separation of stars in different metallicity bins, thus tracing our ability to derive the metallicity distribution from the color distribution of the stars. In the most crowded regions the background is amplified by the large amount of unresolved stars with their very extended PSF halos. This effect is more pronounced in the bands where the sky and instrument background is not dominant. In addition, the accuracy of the PSF extraction strongly worsens with increasing crowding.
These factors produce the high incompleteness and large scatter on the observed CMD in the high surface brightness cases. At surface brightness levels fainter than µ_B ≃ 21.6 mag arcsec^-2 the completeness levels remain the same, while the color separation continues to improve towards fainter surface brightness levels. We also notice that the Tip of the RGB is badly defined in the most crowded cases, due to the effect of blending, which causes a spurious brightening of the stars just below the RGB Tip. All combinations of the photometric bands sufficiently sample the bright RGB (red) stars. However, the color separation of stars with different metallicities is much better achieved in CMDs that include the I band. Indeed, this allows a wider wavelength baseline, which more effectively traces the effective temperature of the RGB stars. In spite of having the best AO correction, the Ks band is less efficient than the other infrared bands because of the high background, which limits the depth and the accuracy of the Ks band detections. Therefore the photometric metallicity is better determined using the I−J or the I−H colors. Both options (obtained with the same exposure times) yield CMDs of very similar quality in Fig. 11. For this reason, in the following we consider only the J vs I−J CMD to evaluate the impact of crowding on the derivation of the metallicity distribution from the photometry of the RGB stars. Metallicity Distribution Function Since the determination of the metallicity relies on the color of the stars, it is important to analyze the error on the colors of the detected sources. Fig. 12 shows the r.m.s. error on the color of the detected (and matched) stars for two different cases of crowding. As already mentioned, the crowding leads to a negatively biased error at all wavelengths. This asymmetrical distribution of the photometric error is well described by the trend of the median photometric error depicted in Fig. 12 and by the asymmetrical distribution of the photometric error curves depicted in Fig. 10. This error source is statistically correlated across wavelengths. For this reason, when combining photometric measurements in different bands to recover the color of the stars, this statistically correlated component of the photometric errors does not add in quadrature, but partially compensates (Olsen, Blum & Rigaut 2003). As a result, for highly crowded fields, the color error is smaller than that of the individual photometric measurements. This interesting effect is well depicted in Fig. 12. The situation is different in low crowding conditions, where the photometric error is dominated by the photon noise: in this case the errors in the two bands are uncorrelated and, therefore, the error on the color is similar to (or larger than) that of the photometry in a single band. Similar considerations hold when comparing the error on the color to that of the individual magnitude in the J band. We now turn to examine the metallicity distribution derived from the single star photometry as a function of crowding conditions. The photometric metallicity is determined by comparing the position of the measured stars to model loci characterized by different values of the metallicity. As underlined in the introduction, this method is subject to uncertainties related to the age-metallicity degeneracy and to the AGB contribution to the counts in this part of the CMD (see, e.g., Gallart et al. 2005). For example, Rejkuba et al.
(2011) show that neglecting the AGB contribution when deriving the MDF from this portion of the CMD leads to underestimating the average metallicity of the population, since AGB stars are bluer than their RGB progenitors. This effect, however, is small and can be easily accounted for with simulations based on evolutionary tracks. More insidious is the age-metallicity degeneracy, whereby RGB stars have the same color for age-metallicity combinations with higher metallicity at younger ages. For old stellar populations (age ≳ 8 Gyr), simulations based on stellar tracks indicate that the MDF derived with this method shifts to metallicities higher by ∼ 0.1 dex when the age is assumed younger by ∼ 3 Gyr. Besides these systematic effects, the photometric errors introduce an additional uncertainty that is investigated with our simulated frames. The main aim of this paper is to quantify this additional uncertainty and its systematics with crowding; establishing the reliability of the metallicity derived with the photometric method is beyond our scope. Fig. 13 shows the theoretical loci adopted for our exercise, derived from the simulated CMD shown in Fig. 1 by dividing the list of input stars into metallicity bins and computing the average I−J color as a function of the J magnitude for the various bins. The theoretical lines include the effect of the AGB component by construction. The metallicity of each detected star in our observed CMD is then derived by interpolation on this grid. This is equivalent to the method applied by Rejkuba et al. (2011) to HST data to derive the metallicity distribution in a halo field of the elliptical galaxy Cen A. For the stars which have been positionally matched to an input object, two values of the metallicity are determined: one is the true metallicity of the input star ([Fe/H]_i), and the other is the observed metallicity derived from the interpolation described above ([Fe/H]_o). The difference between these two values represents the error on the metallicity induced by the photometric scatter on the CMD. Fig. 14 plots this error as a function of magnitude for two of our considered cases. This error is clearly higher for the more crowded field, and it increases as stars become fainter (see also Fig. 15). To determine the photometric metallicity distribution, we restrict the analysis to a sub-sample of the measured stars, selecting only the portion shown in boldface in Fig. 13. This is limited by the Tip of the RGB on the bright side, since we prefer to avoid the region populated only by AGB stars. The lower limit to the luminosity is instead used to delimit a portion of the CMD where the sensitivity of the color to the metallicity is relatively high. Fig. 16 compares the input metallicity distribution to the one derived with this method for the four levels of surface brightness. We notice that the photometrically derived distribution is slightly overpopulated on the low metallicity side of the peak. The overall shape of the two distributions, however, is quite similar in all the examined cases, provided that the binning is wider in the most crowded fields. The accuracy with which the MDF is recovered, as mapped by the width of the bins in Fig. 16, worsens towards the inner regions, as does the photometric quality. We actually find an almost linear relation between the error at relatively faint magnitudes and the width of the optimal metallicity bin.
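A minimal sketch of the interpolation on the model loci described above, assuming the loci are tabulated as mean I−J color versus J magnitude for a grid of [Fe/H] values; the grid numbers below are illustrative and are not those of Fig. 13.

    import numpy as np

    def photometric_feh(J_star, IJ_star, J_grid, color_tracks, feh_values):
        """J_grid: (nJ,) magnitudes; color_tracks: (nFeH, nJ) mean I-J colors,
        one row per metallicity in feh_values (color increasing with [Fe/H])."""
        # Mean locus color of every metallicity bin at the star's J magnitude
        colors_at_J = np.array([np.interp(J_star, J_grid, track)
                                for track in color_tracks])
        # Interpolate [Fe/H] as a function of color at fixed J
        return float(np.interp(IJ_star, colors_at_J, feh_values))

    # Illustrative loci: redder RGB color at higher metallicity
    J_grid = np.linspace(26.0, 29.0, 30)
    feh = np.array([-2.0, -1.5, -1.0, -0.5, 0.0])
    tracks = np.array([0.6 + 0.35 * (f + 2.0) + 0.02 * (29.0 - J_grid)
                       for f in feh])
    print(photometric_feh(27.5, 1.05, J_grid, tracks, feh))   # about -0.8

Applying this to the observed position of each matched star yields [Fe/H]_o, whose difference from the input [Fe/H]_i gives the error plotted in Fig. 14.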
We conclude that the metallicity distribution can be recovered with a resolution better than 0.2 dex in a Virgo Elliptical in regions with a surface brightness fainter than µ_B ≃ 21.6 mag arcsec^-2. The magnitude of this uncertainty is of the same order as that related to the age-metallicity degeneracy. EFFECT OF PSF VARIATIONS The cases illustrated so far (hereafter referred to as case A) adopt the PSF computed at the center of the MICADO FoV, where the MCAO correction is most efficient, under "good seeing" conditions. In order to characterize the impact of the PSF variation on our science case, we computed two more sets of simulations adopting different PSFs. To test the effect of the spatial variation of the PSF over the MICADO FoV, we generated frames adopting the PSF at the edge of the FoV under "good seeing" (blue point in Fig. 3; hereafter case B). We test the effect of seeing by adopting the central PSF under "median seeing" conditions (hereafter case C). The main characteristics of the three considered PSFs are listed in Table 5. These cases correspond to considering a degradation of the PSF SR (see Fig. 4), but also variations of the PSF shape, especially in the halo substructure (see Fig. 2.3). For this experiment we selected one of the cases in Table 2, i.e. the case at 0.5 R_eff (with µ_B = 21.64 mag arcsec^-2), which appears to represent the limit beyond which the photometric accuracy becomes almost insensitive to crowding. The analysis has been carried out for the I and J bands, following the same steps illustrated in the previous sections. The SR degradation influences the number of detected sources, especially in the I band where the AO correction is poorer, leading to a brighter detection limit for cases B and C compared to case A. The correlation between the detection limit and the SR (which impacts the signal to noise ratio) is apparent when comparing the 50 per cent completeness magnitudes reported in Table 6 with the SR values reported in Table 5. Fig. 17 illustrates the photometric quality as a function of the input magnitudes in the I and J bands. We recall that the surface brightness (i.e. crowding) is fixed. Note that the error curves for the three cases almost overlap along the whole magnitude range in both bands. Indeed, the accuracy of the PSF extraction is more affected by the crowding than by the SR. A marginal increase of the negative error in the cases with lower SR (B and C) can be noticed at faint I magnitudes (I ≳ 29 mag). This is probably due to the enhancement of the blending effect, whereby a larger fraction of the flux is contained within the PSF halo. Table 6. I and J magnitudes where the output luminosity functions, computed for µ_B = 21.64 mag arcsec^-2 and assuming different PSFs, become 50% incomplete; the PSF characteristics of the three cases are listed in Table 5. The above considerations are reflected in the appearance of the (J, I−J) CMDs of the three cases shown in Fig. 18. The effect of the SR degradation on the color separation of the different metallicity bins seems to be negligible, while it significantly affects the depth of the CMDs. Fig. 20 compares the input metallicity distribution to the one derived following the procedure described in Sect. 4.3 for cases B and C. We find that the overall shape of the two distributions is again quite similar. The r.m.s. of the errors on the metallicity estimation as a function of magnitude (colored lines in Fig. 19) is slightly larger in cases B and C than in case A, but comparable to within ∼ 0.05 dex.
We may conclude that the considered cases of PSF SR degradation do not influence the accuracy with which we can recover the MDF in a Virgo Elliptical in regions with a surface brightness fainter than µ_B ≃ 21.6 mag arcsec^-2. This confirms the feasibility of our science case. SUMMARY AND CONCLUSIONS In this paper we investigated the expected performance of next generation large aperture telescopes for the photometric study of resolved stellar populations in distant galaxies. This work focuses on the capability of deriving the metallicity distribution of stellar populations in distant galaxies using the future E-ELT high resolution imager MICADO. In particular, we quantified the impact of the photometric errors on the metallicity distribution derived from the color distribution of RGB stars, and its systematics under different crowding conditions. This uncertainty is in addition to that related to the age-metallicity degeneracy and to inadequacies of the stellar evolutionary tracks. We have shown that the exquisite spatial resolution offered by the E-ELT working close to the diffraction limit will allow us to perform accurate photometry of bright RGB stars in extremely crowded fields, down to the inner regions of galaxies. It will therefore be possible to map the metallicity distribution across practically an entire elliptical galaxy, with a modest resolution (∼ 0.5 dex) in the central regions. At larger radii the resolution improves, becoming ∼ 0.1 dex at the effective radius and even better in the external regions. We produced synthetic frames in the I, J, H and Ks bands at different surface brightness levels (19.3 ≤ µ_B ≤ 24.54 mag arcsec^-2), assuming the expected PSF of the MICADO camera assisted by the MAORY MCAO module. We used different PSFs computed under different assumptions on the seeing conditions and on the AO performance across the MICADO FoV. The generated frames have been analyzed using StarFinder, a program specifically designed for high resolution AO images. The photometric accuracy has been evaluated for each band and for each crowding condition by matching the input and output stellar lists. As in Greggio et al. (2012), we found that the blending of stellar sources in the most crowded fields leads to an asymmetrical error distribution, and to a general migration of star counts along the luminosity function towards the brighter bins. Our analysis has shown that: • stellar photometry in crowded fields of distant galaxies is feasible with an accuracy of σ ≃ 0.1 mag at ≃ 0.5 R_eff (µ_B = 21.6 mag arcsec^-2) and with an accuracy of σ ≃ 0.2 mag at ≃ 0.25 R_eff (µ_B = 20.5 mag arcsec^-2) down to J ≃ 27.7 mag.
This allows studies of resolved stellar populations in the inner regions of elliptical galaxies up to the distance of the Virgo cluster; • the luminosity function of the upper two magnitudes of the RGB is well determined for surface brightness levels fainter than µ_B ≃ 20.5 mag arcsec^-2 (corresponding to ≃ 0.25 R_eff); • at µ_B ∼ 21.6 mag arcsec^-2 the completeness becomes independent of surface brightness in all the bands, indicating that below this level the source detection is no longer limited by crowding; • the photometric errors introduce an uncertainty ≲ 0.2 dex in the determination of the peak of the metallicity distribution in regions with a surface brightness fainter than µ_B ≃ 21.6 mag arcsec^-2; at this surface brightness level, the photometric errors for stars brighter than J ≃ 27 mag induce a typical accuracy of ∼ 0.1 dex on the photometric metallicity; • when considering a non-optimal PSF, such as the one obtained in worse seeing conditions or at the edge of the imaging camera FoV, it is still possible to retrieve the metallicity distribution with an accuracy similar to the one recovered assuming the best PSF, while the CMDs become shallower. Figure 20. Input (red filled histogram) and recovered (black thick histogram) metallicity distributions for the non-optimal PSF cases considered and µ_B = 21.64 mag arcsec^-2. The [Fe/H] histogram bin widths for which the input and output metallicity distributions are in best agreement are 0.18 and 0.19 dex for cases B and C respectively, c.f. the 0.18 dex bin width of case A (Fig. 16).
Self-averaging of kinetic models for waves in random media Kinetic equations are often appropriate to model the energy density of high frequency waves propagating in highly heterogeneous media. The limitations of the kinetic model are quantified by the statistical instability of the wave energy density, i.e., by its sensitivity to changes in the realization of the underlying heterogeneous medium modeled as a random medium. In the simplified Itô-Schrödinger regime of wave propagation, we obtain optimal estimates for the statistical instability of the wave energy density for different configurations of the source terms and the domains over which the energy density is measured. We show that the energy density is asymptotically statistically stable (self-averaging) in many configurations. In the case of highly localized source terms, we obtain an explicit asymptotic expression for the scintillation function in the high frequency limit. Introduction Let us consider the following scalar wave equation for the pressure potential p(τ, x, t): (1/c²(x, t)) ∂²p/∂τ² − Δ_x p − ∂²p/∂t² = 0, (1) where τ is time, (x, t) ∈ R^d × R denote the spatial variables, Δ_x is the Laplace operator in the transverse variables x, and c(x, t) is the local sound speed. Our objective is to understand the properties of p(τ, x, t) when c(x, t) is a highly oscillatory random field and the initial conditions for p(τ, x, t) oscillate at the same frequency. The analysis of high frequency waves in random media based on (1) is extremely complicated and still not totally established mathematically. Since the wave field is oscillatory, its (weak) limit typically misses most of the energy of the wave field p. Kinetic models are then used to capture the energy density of the wave fields; see e.g. [6,7,12,17] for rigorous results, [3,19] for more formal derivations, and [11,14,16,20] for references in the physical literature. The validity of the kinetic model is limited by its statistical instability, namely by its variability when the realization of the underlying random medium is changed. In many situations, the energy density is self-averaging [2,6,7], which means that the energy density measured (averaged) on a (sufficiently large) domain is asymptotically, as the frequency goes to infinity, independent of the realization of the random medium. The above results often require that the domain of measurements be of size independent of the wavelength and that the source term for the kinetic model be sufficiently smooth. In this paper, we are interested in the statistical stability of such kinetic models in a very simplified regime of wave propagation, namely the Itô-Schrödinger regime. The latter regime arises when the wave field is a very narrow beam propagating in the direction t and the sound speed c(x, t) oscillates more rapidly in the direction t than it does in the other directions. Such assumptions are valid in somewhat restrictive practical settings. However, this regime of wave propagation is relatively simple to analyze mathematically and provides interesting qualitative answers regarding the statistical stability of more general kinetic models. The validity of kinetic models has been analyzed numerically in several settings [8,9,10], with quite good agreement with the energy density given by wave equations of the form (1). Such kinetic models may then be used to solve inverse problems, where constitutive parameters in the transport equation modeling e.g.
buried inclusions or statistics of the random medium are reconstructed from available boundary measurements. We refer the reader to [9,10] for reconstructions based on synthetic (numerical) data and to [5] for kinetic reconstructions from experimental data in the micro-wave regime; see also [4] for a review on the use of kinetic models in the imaging of buried inclusions. These studies show that the kinetic models perform relatively well. Their limitations are almost entirely caused by our lack of knowledge of the random medium, which generates some statistical instabilities in the measurements. Understanding these instabilities will allow us to improve on the reconstructions and to have a better understanding of the maximal resolution that can be achieved. Itô-Schrödinger regime. In the Itô-Schrödinger regime, we introduce the wave field ψ(x, t; κ) in (2), where c_0 is the background sound speed, assumed to be constant. Thus ψ represents waves at position (x, t) propagating with frequency ω = c_0|κ|. After appropriate scalings and simplifications, the wave field ψ satisfies the Itô-Schrödinger stochastic partial differential equation (3). Since κ plays no significant role in the sequel, we set it to κ = 1. Here, B(x, dt) is the standard Wiener measure, whose statistics are described by E{B(x, t) B(y, t′)} = R(x − y) t ∧ t′, (4) where E is mathematical expectation with respect to the measure of an abstract probability space on which B(x, dt) is defined, and t ∧ t′ = min(t, t′). We shall not justify (3) from (1). See [1] for a justification in one dimension of space and [2] for the scaling arguments leading to (3). For our purposes, ψ_η(x, t) satisfies a wave equation with highly oscillatory coefficients oscillating at a frequency inversely proportional to the small parameter η ≪ 1. We assume that ψ_η(x, 0) also oscillates at a frequency comparable to η^{-1}, and we are interested in the properties of the wave field as η → 0. Because the field oscillates rapidly, its weak limit is of little interest. A more interesting quantity is the energy density of the waves |ψ_η|²(x, t), or the probability density in the context of quantum waves. Because the energy density does not satisfy a closed-form equation, it is more convenient to analyze energy densities by introducing the following Wigner transform of the wave field: W_η(t, x, k) = (2π)^{-d} ∫_{R^d} e^{ik·y} ψ_η(x − ηy/2, t) ψ̄_η(x + ηy/2, t) dy, (5) where ψ̄_η denotes complex conjugation of ψ_η. Note that ∫_{R^d} W_η(t, x, k) dk = |ψ_η(x, t)|² by inverse Fourier transform, so that W_η may be seen as a phase space (microlocal) decomposition of the energy density. Let ψ_η(x, 0) be a sequence of functions uniformly bounded in L²(R^d), η-oscillatory, and compact at infinity in the sense of [13], i.e., such that for every continuous compactly supported function ϕ on R^d, we have limsup_{η→0} ∫_{|ξ|≥R/η} |F[ϕψ_η(·, 0)](ξ)|² dξ → 0 as R → ∞ (η-oscillatory, with F the Fourier transform), and limsup_{η→0} ∫_{|x|≥R} |ψ_η(x, 0)|² dx → 0 as R → ∞ (compact at infinity). A practical sufficient condition is that ψ_η(x, 0) is compactly supported and η∇ψ_η(x, 0) is square integrable with L²(R^d)-norm bounded independently of η. Then, we have the following convergence result [13,15]: the Wigner transform W_η(0, x, k) converges, after possible extraction of subsequences, in the space of distributions D′(R^{2d}) to a Radon measure W_0(0, x, k), and moreover we have ∫_{R^{2d}} W_0(0, x, k) dx dk = lim_{η→0} ∫_{R^d} |ψ_η(x, 0)|² dx. (6) In other words, the limiting Wigner transform captures all the energy of the incident wave field ψ_η in the limit η → 0. Kinetic Model. Upon using the Itô formula, we obtain that the average Wigner transform a_η(t, x, k) = E{W_η(t, x, k)} solves the kinetic transport equation (8), where we assume that ψ_η(x, 0), whence a_η(0, x, k), is deterministic; see e.g. [2] for the details of the derivation. We have defined R_0 = R(0) and R̂(k) as the Fourier transform of R(x), with the convention that R̂(k) = ∫_{R^d} e^{-ik·x} R(x) dx. (9) Since R(x) is a correlation function, R̂(k) is non-negative by Bochner's theorem. For the rest of the paper, we assume that R̂(k) is integrable. Note that ∫_{R^{2d}} a_η(t, x, k) dx dk is independent of time, so that the total energy of the initial condition is preserved by the transport evolution. Scintillation. The validity of the kinetic model (8) to describe the ensemble averaging of the phase space energy density of the wave field is trivial in the Itô-Schrödinger regime: the kinetic model (8) is here exact for all η ≥ 0, unlike what happens in other regimes of wave propagation [6,7,19]. It remains however to understand how stable it is; in other words, how good an approximation a_η(t, x, k) is of the random field W_η(t, x, k). A natural object in the study of the statistical stability of W_η is the following covariance function: J_η(t, x, k, y, p) = E{W_η(t, x, k) W_η(t, y, p)} − a_η(t, x, k) a_η(t, y, p). (10) We refer to this function as the scintillation function, in analogy to how stars are perceived to twinkle because the realization of the atmosphere changes in time. We shall see that the size of the scintillation function crucially depends on the smoothness of the initial conditions ψ_η(x, 0) and a_η(0, x, k) and on the support of the domain over which the energy density is averaged. The effect of the averaging will be quantified by measuring J_η in appropriate (weak) norms. One of the main advantages of the Itô-Schrödinger regime of wave propagation is that J_η(t, x, k, y, p) satisfies a closed form equation. Another application of the Itô formula [2] shows that J_η is the solution of the kinetic equation (11), with vanishing initial conditions J_η(0, x, k, y, p) = 0, in which the coupling between the variables is carried by an operator K_η defined in (12). In the absence of the operator K_η, the variables (x, k) and (y, p) remain uncoupled in (11) and the scintillation vanishes. Scintillation is created as the waves propagate through the random medium, with a rate of creation proportional to K_η a_η ⊗ a_η. Notice that K_η involves a highly oscillatory integral. Outside of the diagonal x = y, this oscillatory integral is small, whereas in the vicinity of the diagonal x = y it is not. We thus observe that K_η h is small when h is smooth and large when part of h is concentrated near x = y. Outline. The rest of the paper is structured as follows. The main results of the paper are summarized in section 2. We obtain estimates for J_η in various norms, and in the specific case of initial conditions for a_η of the form a_η(0, x, k) = δ(x)f(k), show that η^{-1}J_η converges to a measure J solving an explicit kinetic equation. Section 3 presents stability estimates for the scintillation operator K_η defined in (12) and for the kinetic equations (8) and (11). The proof of the stability estimates for J_η is given in section 4, whereas the proof of convergence of η^{-1}J_η when a_η(0, x, k) = δ(x)f(k) is given in section 5. Main results Let ψ_η(x, 0) be a sequence of η-oscillatory, compact-at-infinity functions uniformly bounded in L²(R^d). This is the case of interest for us here, where we can define the Wigner transform (5) and pass to the high frequency limit η → 0 while still ensuring that energy is conserved as in (6). We are interested in quantifying the statistical stability of the Wigner transform W_η(t, x, k) and do so by analyzing the scintillation function J_η defined in (10). We present two results. The first result proposes an upper bound for J_η in different norms and for different initial conditions ψ_η(x, 0).
We have defined R_0 = R(0) and R̂(k) as the Fourier transform of R(x), with the convention that

$$R(x) = \frac{1}{(2\pi)^d}\int_{\mathbb{R}^d} e^{i k \cdot x}\, \hat R(k)\, dk.$$

Since R(x) is a correlation function, R̂(k) is non-negative by Bochner's theorem. For the rest of the paper, we assume that R̂(k) ∈ L¹(R^d) ∩ L^∞(R^d). Note that ∫_{R^{2d}} a_η(t, x, k) dx dk is independent of time so that the total energy of the initial condition is preserved by the transport evolution.

Scintillation. The validity of the kinetic model (8) to describe the ensemble averaging of the phase space energy density of the wave field is trivial in the Itô-Schrödinger regime: the kinetic model (8) is here exact for all η ≥ 0, unlike what happens in other regimes of wave propagation [6,7,19]. It remains however to understand how stable it is; in other words, how good an approximation a_η(t, x, k) is of the random field W_η(t, x, k). A natural object in the study of the statistical stability of W_η is the following covariance function:

$$J_\eta(t, x, k, y, p) = E\{W_\eta(t, x, k)\, W_\eta(t, y, p)\} - a_\eta(t, x, k)\, a_\eta(t, y, p). \qquad (10)$$

We refer to this function as the scintillation function, in analogy to how stars are perceived to twinkle because the realization of the atmosphere changes in time. We shall see that the size of the scintillation function crucially depends on the smoothness of the initial conditions ψ_η(x, 0) and a_η(0, x, k) and on the support of the domain over which the energy density is averaged. The effect of the averaging will be quantified by measuring J_η in appropriate (weak) norms. One of the main advantages of the Itô-Schrödinger regime of wave propagation is that J_η(t, x, k, y, p) satisfies a closed-form equation. Another application of the Itô formula [2] shows that J_η is the solution of the following kinetic equation:

$$\Big(\frac{\partial}{\partial t} + k \cdot \nabla_x + p \cdot \nabla_y + 2 R_0\Big) J_\eta = (Q_2 + K_\eta) J_\eta + K_\eta\, a_\eta \otimes a_\eta, \qquad (11)$$

with vanishing initial conditions J_η(0, x, k, y, p) = 0, where Q_2 denotes the scattering operator of (8) acting in the variables (x, k) and (y, p), and where the operator K_η is defined in (12). In the absence of the operator K_η, the variables (x, k) and (y, p) remain uncoupled in (11) and the scintillation vanishes. Scintillation is created as the waves propagate through the random medium with a rate of creation proportional to K_η a_η ⊗ a_η. Notice that K_η involves a highly oscillatory integral. Outside of the diagonal x = y, this oscillatory integral is small, whereas in the vicinity of the diagonal x = y, it is not. We thus observe that K_η h is small when h is smooth and large when part of h is concentrated near x = y.

Outline. The rest of the paper is structured as follows. The main results of the paper are summarized in Section 2. We obtain estimates for J_η in various norms, and in the specific case of initial conditions for a_η of the form a_η(0, x, k) = δ(x)f(k), show that η^{-1}J_η converges to a measure J solving an explicit kinetic equation. Section 3 presents stability estimates for the scintillation operator K_η defined in (12) and for the kinetic equations (8) and (11). The proof of the stability estimates for J_η is given in Section 4, whereas the proof of convergence of η^{-1}J_η when a_η(0, x, k) = δ(x)f(k) is given in Section 5.

Main results

Let ψ_η(x, 0) be a sequence of η-oscillatory, compact at infinity functions uniformly bounded in L²(R^d). This is the case of interest for us here, where we can define the Wigner transform (5) and pass to the high frequency limit η → 0 while still ensuring that energy is conserved as in (6). We are interested in quantifying the statistical stability of the Wigner transform W_η(t, x, k) and do so by analyzing the scintillation function J_η defined in (10). We present two results. The first result proposes an upper bound for J_η in different norms and for different initial conditions ψ_η(x, 0).
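The following toy Monte Carlo experiment (ours, not the paper's) illustrates the statistical stability question in d = 1: a split-step scheme alternates free Schrödinger flight with multiplicative random phase kicks, a crude discrete stand-in for the white-noise potential B(x, dt) in which the η-scalings of (3) are deliberately suppressed, and then compares the fluctuations, across realizations, of the energy density integrated against a wide window and against a near-pointwise window. All parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
N, L = 512, 2 * np.pi
dx = L / N
x = np.arange(N) * dx
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
dt, nsteps, nreal = 2e-3, 300, 100
sigma, ell = 1.0, 0.1                      # noise strength and correlation length
smooth = np.exp(-(k * ell) ** 2 / 4)       # spectral filter giving a correlation R(x)

def energy_density():
    psi = np.exp(-((x - L / 2) / 0.4) ** 2) * np.exp(1j * 40 * x)
    for _ in range(nsteps):
        psi = np.fft.ifft(np.exp(-0.5j * k ** 2 * dt) * np.fft.fft(psi))  # free flight
        noise = np.real(np.fft.ifft(np.fft.fft(rng.standard_normal(N)) * smooth))
        psi = psi * np.exp(1j * sigma * np.sqrt(dt) * noise)              # random phase kick
    return np.abs(psi) ** 2

wide = np.exp(-((x - L / 2) / 1.5) ** 2)          # measurement window of size O(1)
narrow = np.exp(-((x - L / 2) / (4 * dx)) ** 2)   # window of a few grid cells

obs = np.empty((nreal, 2))
for r in range(nreal):
    e = energy_density()
    obs[r] = [(e * wide).sum() * dx, (e * narrow).sum() * dx]

rel_sd = obs.std(axis=0) / obs.mean(axis=0)
print("relative std, wide window:  ", rel_sd[0])
print("relative std, narrow window:", rel_sd[1])   # expected to be markedly larger
```

The qualitative outcome, small relative fluctuations for the wide window and large ones for the narrow window, is exactly the dichotomy that the scintillation estimates below quantify.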
The second result analyzes the convergence properties of J η as η → 0 for initial conditions of the form a η (0, x, k) = δ(x)f (k), which correspond to localized sources at position x = 0 radiating energy smoothly in wavenumber k. In this context, we will show that J η is of order O(η) and will obtain the limit of η −1 J η as η → 0. Some typical initial conditions. Let us consider initial conditions ψ η (x, 0) oscillating at frequencies of order η −1 and with a spatial support of size η α for 0 ≤ α ≤ 1. The parameter α quantifies the macroscopic concentration of the initial condition. The simplest example is a modulated plane wave of the form: where χ(x) is a smooth compactly supported function on R d . The direction of propagation is given by k 0 . Note that the above sequence of initial conditions is indeed uniformly bounded in L 2 (R d ), compact at infinity, and η-oscillatory. As another example of initial conditions, we consider where J 0 is the zero-th order Bessel function of the first kind. Such an initial condition is supported in the Fourier domain in the vicinity of wavenumbers k such that |k| = |k 0 | so that ψ (2) η emits radiation isotropically at wavenumber |k 0 |; see [8,9] for more details. We again verify that the above sequence of initial conditions is indeed uniformly bounded in L 2 (R d ), compact at infinity, and η-oscillatory. For this, we use that J 0 (z) = 2 πz cos(z − π 4 ) + O(z −3/2 ). Domain of measurements. For the above initial conditions for ψ η , we are interested in the corresponding Wigner transform W η (t, x, k) and scintillation function J η . It turns out that J η is itself oscillatory so that its size depends on the scale at which it is measured. In order to capture this scale, we introduce a test function ϕ ∈ S(R 2d ), a fixed wavenumber k 1 ∈ R d , and define We then denote by ·, · the duality product S ′ (R n )-S(R n ) for n = 2d or n = 4d and want to quantify W η , ϕ η,s 1 ,s 2 , the energy density averaged over a domain (in the phase space) of width η s 1 in space and η s 2 in wavenumbers. By using the Chebyshev inequality, we obtain the following estimate on the probability that W η deviate from its ensemble average a η : Here, a ⊗ a(x, k, y, p) = a(x, k)a(y, p). In other words, when the above right-hand side converges to 0, then we find that W η (t), ϕ η,s 1 ,s 2 converges in probability to 0, which implies that W η (t) converges weakly and in probability to 0. The measured energy density is thus asymptotically statistically stable. A very relevant practical question pertains to the largest values of s 1 and s 2 that can be chosen so that the Wigner transform is still statistically stable in the limit η → 0. We are now ready to state our main theorem on this issue. Bounds for the scintillation function. For any ϕ(x, k) ∈ L 2 (R 2d ), let F x ϕ(u, k) and F k ϕ(x, ξ) be the Fourier transforms of ϕ in the first variable only and in the second variable only, respectively. We also denote by a b the inequality a ≤ Cb, where C > 0 is some universal constant. Then we have the following result: Theorem 2.1 Let ψ η (x, 0) be a sequence of functions uniformly bounded in L 2 (R d ), compact at infinity, and η-oscillatory. Let a η (0, x, k) be the corresponding sequence of Wigner transforms given by (5). We assume that F x a η (0) and F k a η (0) are integrable functions and that for some α ∈ R and β ∈ R. Then we find that Of interest here is the following corollary: Corollary 2.2 Let ψ η (0) be given by one of the expressions in (13) or (14). 
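To make the two families concrete, here is a small numerical sketch (ours, in d = 1) of data with the stated properties: a modulated plane wave concentrated on the scale η^α, with a prefactor η^{−αd/2} chosen, as an assumption consistent with the uniform L² bound, so that the L² norm is independent of η and α, and a Bessel-type profile in the spirit of (14), whose spectrum concentrates near |k| = |k₀|/η. The bump function and all parameter values are arbitrary, and the normalization of the Bessel family is not tracked.

```python
import numpy as np
from scipy.special import j0

def chi(u):
    """Smooth bump supported in [-1, 1]."""
    inner = np.clip(1.0 - u ** 2, 1e-12, None)
    return np.where(np.abs(u) < 1.0, np.exp(-1.0 / inner), 0.0)

N, L = 2 ** 14, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
k0 = 1.0

# modulated plane wave of type (13): roughly constant L2 norm for every eta, alpha
for eta in (0.1, 0.05, 0.025):
    for alpha in (0.0, 0.5, 1.0):
        psi1 = eta ** (-alpha / 2) * chi(x / eta ** alpha) * np.exp(1j * k0 * x / eta)
        norm = np.sqrt(np.sum(np.abs(psi1) ** 2) * dx)
        print(f"eta={eta:<6} alpha={alpha:<4} ||psi1||_L2 = {norm:.6f}")

# Bessel-type data of type (14): spectrum concentrates near |k| = k0 / eta
eta = 0.05
psi2 = chi(x / 10.0) * j0(k0 * np.abs(x) / eta)
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
spec = np.abs(np.fft.fft(psi2)) ** 2
print("spectral peak near |k| =", abs(k[np.argmax(spec)]), "; k0/eta =", k0 / eta)
```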
We can deduce the following results from the above corollary. In what follows, we consider that averaging takes place over a large domain of wavenumbers so that s 2 = 0, as e.g., in spatial measurements of the physical energy density. Support of the sources. Let us assume that the spatial support of the domain of measurements is large so that s 1 = 0 as well. Then we find that In other words, the scintillation is of order O(η d ) when α = 0, which corresponds to a large support of the initial source term. This corresponds to the ideal case where the scintillation is smallest. In such a setting, we obtain that W η − a η , ϕ is of order η d 2 . This is the most stable situation. For a very narrow support of the initial source term comparable to the correlation length of the medium, namely when α = 1, we obtain that the scintillation is of order O(η) so that W η − a η , ϕ is now of order η 1 2 . We thus obtain statistical stability of the energy density generated by a very localized source term whose radiation pattern in k is smooth, although the statistical instability is much larger than in the case α = 0. We know that for sources that are highly localized both in space and in wavenumbers, the scintillation does not converge to 0 and the energy density is not asymptotically statistically stable; see [2]. Such highly localized initial conditions would correspond to a choice α = β = 1 in Theorem 2.1. We will confirm in the next theorem that the order O(η) above is optimal. Small domain of measurements. Conversely, we can consider the case of a source term with a large support, which corresponds to α = 0, and a very small measurement domain. In this setting, we find that This means that the energy density becomes asymptotically statistically stable as soon as it is measured over an area that is large compared to the correlation length of the medium. This is an optimal result of self-averaging as we cannot expect the energy density to be statistically stable point-wise, or when averaged over sub-wavelength domains. The above result, which is based on estimating K η in (12) in appropriate norms, improves on estimates obtained in [2,18]. We can also consider intermediate situations where both the source and the measurement domain have small support. In that case, the optimal estimate for the scintillation depends on whether α < s 1 or s 1 < α. These results are in fact optimal when the source term and the domain of measurements are located at the same place. Such a geometry explains why we do not obtain scintillation proportional to η d(1−α−s 1 ) . We should obtain better estimates when the domain of measurements and the source term are not centered around the same point, though this cannot be inferred from our current results. Convergence of scintillation. Let us consider the case of initial conditions of the form (13) or (14) with α = 1, i.e., for tightly localized source terms, in (transverse) dimension d ≥ 2. The Wigner transform of such source terms converges in the limit η → 0 to a distribution of the form δ(x)f (k), where f (k) is a smooth function when χ(x) is smooth [15]. We consider the kinetic equations with such initial conditions and obtain the following result. Theorem 2.3 Let J η be the solution of (11) with the initial condition in (8) given by a η (0, x, k) = δ(x)f (k) for some smooth function f (k) in dimension d ≥ 2. 
Then η −1 J η (t) converges in the space of distributions uniformly in time to the limit J(t), which solves the following kinetic equation where The above theorem should be interpreted as follows. As time propagates, the transport ballistic part a 0 (t, x, k) = e −R 0 t δ(x − tk)f (k) creates some instabilities, which converge after appropriate scaling to J 0 (t). The scintillation thus generated is then transported by the transport equation (21). We also observe that the error estimate of order O(η) in (19) with α = 1 is optimal. Functional setting and stability estimates In preparation for the proof of the theorems and the corollary presented in the preceding section, we prove here some stability results for the transport equations (8) and (11) and for the scintillation operator K η . We denote by F the operator of Fourier transform with respect to all variables of the function on which it applies. For 1 ≤ p ≤ ∞, we introduce X p as the subspace of tempered distributions in S ′ (R 4d ) such that for 1 ≤ p < ∞ and We also define Y p as the subspace of tempered distributions in S ′ (R 2d ) such that for 1 ≤ p < ∞ and Finally, we define Y as the subspace of tempered distributions in S ′ (R 2d ) such that g Y = sup Morally (though this is inexact), the space X 1 corresponds to scintillation functions that are integrable in one spatial variable (bounded in the corresponding dual variable v) and bounded in another spatial variable (integrable in the corresponding dual variable u). It is this boundedness that allows us to obtain the result (20) in the presence of small domains of measurements. In contrast, X ∞ corresponds to scintillation functions that are integrable in both spatial variables (bounded in u and v), which allows us to get the result (19). The above spaces are well-adapted to the estimation of the scintillation operator K η . More precisely, we have the following result: (ii) Let µ ∈ Y p and ν ∈ Y . Then Proof. With obvious notation, we recast K η = ǫ i ,ǫ j ǫ i ǫ j K ij η . Let h ∈ X p . Then we have so that using the Hölder inequality with 1 = 1 This proves (i). Let now h := µ⊗ν. Upon performing the change of variables w → ηw, we have which concludes our proof. We need stability estimates for the kinetic equations. We start with the first kinetic equation: andR non-negative. Then we have: Let S = 0 and let a 0 (t, x, p) := a 0 (x − tp, p)e −R 0 t be the ballistic part of a. Then, assuming that F k a 0 ∈ L 1 (R 2d ), we have the following estimate for all t > 0: Proof. The proof is a direct application of the integral formulation of (30), where G t is the free transport semigroup given by G t a(x, p) := a(x − tp, p). The operators Q and G t are both continuous in Y p . Indeed, for ϕ ∈ Y p , we have: Standard fixed point techniques then provide existence and uniqueness results for (30). When S = 0, estimate (31) follows from the maximum principle and the observation that a 0 Yp is a majorizing solution to (30). When a 0 = 0, (31) is an application of the Gronwall lemma. For S = 0, we have the following Neumann series expansion in terms of multiple scattering: a n (t) = t 0 e −R 0 (t−s) G t−s Qa n−1 (s)ds, with the ballistic part a 0 (t, x, p) := e −R 0 t a 0 (x − tp, p). By induction, we find the following expression for the Fourier transform of a n : The change of variable k + tu → ξ yields Summing over n ≥ 1 gives the result. The last lemma deals with the fourth-order transport equation (11): 3 Assume a 0 ∈ X p and S ∈ L 1 ((0, T ), X p ), for T > 0 and 1 ≤ p ≤ ∞. 
Then, the above system admits a unique solution in C 0 ([0, T ], X p ) such that: Proof. The result stems from the integral formulation of (11) given by where G 2 t is the semigroup defined as G 2 t a(x, p, y, q) := a(x − tp, p, y − tq, q). From Lemma 3.1, we know K η is continuous in X p , and so are G 2 t and Q 2 since for ϕ ∈ X p . Existence and uniqueness follow as before from standard fixed point theorems while estimate (34) stems from separate applications of the maximum principle and the Gronwall lemma. Estimates for the scintillation We are now ready to prove Theorem 2.1 and Corollary 2.2. Proof [Theorem 2.1]. According to Lemma 3.3, the fourth-order transport equation (21) is stable in X p , so that we have the following estimate, uniformly on [0, T ], Provided that a η belongs to Y ∩ Y p , then K η a η ⊗ a η is small in X p . Indeed, item (ii) of Lemma 3.1 yields for s ∈ [0, T ] and 1 ≤ p ≤ ∞ that: First, we control the Y norm by the Y 1 norm since Y 1 ⊂ Y . Lemma 3.2 shows that the radiative transfer equation (8) is stable in Y r , for 1 ≤ r ≤ ∞, so that we just need to estimate the initial condition a η0 (x, p) := a η (0, x, p) in these Y r norms. Denoting by F x a η0 (u, p) the Fourier transform of a η0 with respect to the spatial variable x only, we obtain, so that the assumption of the theorem gives Moreover, defining ψ η0 (·) := ψ η (·, 0), we have the relation from which it follows, using the Cauchy-Schwarz inequality, that We have thus obtained that for all s ∈ [0, T ], which yields by interpolation, for 1 ≤ p ≤ ∞, This induces a first estimate for J η , which is not optimal for initial conditions with small support when α − β > 0. The stability of the transport equation (8) in Y p is not sufficient to deal with such irregular initial conditions. Rather, we need to separate the ballistic part from the scattering part in the kinetic equation to obtain sharper estimates and thus introduce: where a 0 η (t, x, p) = e −R 0 t a η0 (x − tp, p) is the ballistic part and a s η satisfies ∂a s Since the Fourier transform of a 0 η is given by e −R 0 t F a η0 (u, k+tu), its Y norm can be estimated for t ∈ (0, T ] as: Now, Lemma 3.2 and estimate (32) imply that: so that the time singularity of a s η is weaker than that of a 0 η . Thus, for 1 ≤ p ≤ ∞, For short times, we then use estimate (36) since it is independent of s and for longer times, we use the above estimate. We thus write: Setting t 0 (η) = η α−β when α > β above and using (36) and (35), we find, for We conclude by using the Parseval-Plancherel equality which yields, for t ∈ [0, T ], , It remains to verify the scaling properties: . We conclude the proof of the theorem by choosing p = ∞ or p = 1 in the above estimates. Proof [Corollary 2.2]. We simply need to estimate F x a 0η and F k a 0η in it follows that: It suffices to set β = 1 − α in Theorem 2.1 to conclude the proof of the corollary. Convergence of the scintillation We now prove the announced convergence result. We first observe that the existence results obtained in Lemmas 3.2 and 3.3 hold when the spaces Y p and X p are replaced by the spaces of bounded measures M(R 2d ) and M(R 4d ), respectively or by the spaces of continuous functions C 0 (R 2d ) and C 0 (R 4d ), respectively. We recall that d ≥ 2 here. Proof [Theorem 2.3]. The scintillation function satisfies the following transport equation in integral form (37) We recast this, with obvious notation, as We denote by T 2 the formal limit operator of T 2η defined as The source contribution. 
We verify that where the ballistic part is given by Indeed, we know from Lemma 3.1 that and from (32) in Lemma 3.2 that That a Y∞ is bounded comes from the stability of the transport equation in Y ∞ established in Lemma 3.2. The term K η a 0 ⊗ (a − a 0 ) is treated similarly. Let us define We find that J 0 Indeed, we deduce from (39) and the stability of Up to a smaller-order error term in the space of distributions, we may thus replace J 0 η by J 00 η in the sequel since the transport equation (37) is stable in X ∞ . Now, calculations with K η replaced by duds. Upon sending η → 0, we find in the limit that in the space of bounded measures M(R 4d ). After accounting for all four terms in the definition of K η and using the fact thatR(u) =R(−u), we find that the limit of η −1 J 0 η (t) is given by: This gives us the source term (22) in the transport equation (21). Kinetic equation for the scintillation. We have shown that η −1 J 0 η converged to J 0 . It remains to obtain convergence of the whole sequence η −1 J η . Let φ(t, x, p, y, q) be a a smooth function on [0, T ] × R 4d . Then we have by integration on the latter space that and equivalently that with We have shown that the difference between the source terms η −1 J 0 η and J 0 converges to 0 as a distribution and has a negligible effect on η −1 J η . So we can replace the initial condition for the error term by J 0 and look at the problemJ η = T 2ηJη + J 0 , whereJ η is now of order O(1). We observe that J 0 (t) = e −2R 0 t δ(x − tp)δ(y − tq)H(p, q), where H(p, q) is a smooth function. Let now J 1 η =J η − J 0 be the solution of We recall that T 2η J(t) = t 0 e −2R 0 (t−s) G 2 t−s (Q 2 + K η )J(s)ds, so that T 2η J 0 = T 2 J 0 + J 2 η , where J 2 η is given by a bounded operator in M(R 4d ) applied to K η J 0 . The latter is given by plus similar contributions. Because H is a smooth function, this term converges to 0 in M(R 4d ) as η → 0. This shows that J 2 η converges to 0 as η → 0. The other contribution, T 2 J 0 , involves a bounded operator applied to Q 2 J 0 , which is equal to For f , whence H, and R sufficiently smooth, the above function is bounded in C 0 (R 4d ). The function is not bounded uniformly in time, however, and we split the contribution J 0 (t) into J 0 δ (t) = J 0 χ (0,δ) (t) and J 0 χ (δ,T ) (t), which we still denote by J 0 (t). The source term T 2 J 0 δ generates a small contribution, which goes to 0 as δ goes to 0 in the sense of distributions since the term in (45) is bounded in e.g. L 1 (R 4d ) uniformly in time so that after time integration in (38), the contribution is bounded by O(δ) → 0. The remaining contribution is bounded in the uniform norm uniformly in time with bound inversely proportional to δ 2d . We now have a problem of the form J 1 η = T 2η J 1 η + T 2 J 0 , where T 2 J 0 is uniformly bounded in the uniform norm by O(δ −2d ). Weakly, this means that (J 1 η , φ) = (J 1 η , T * 2η φ) + (T 2 J 0 , φ), where φ(t, x, p, y, q) is a smooth function. The solution J 1 η is bounded in C 0 (R 4d ) uniformly in η by stability of the fourth-order transport equation in the uniform norm. There is therefore a subsequence that converges weak * in L ∞ (R 4d ) to a limit J 1 ∈ L ∞ (R 4d ). The above convergence to the limiting transport equation holds for every cutoff δ. Thus, by stability of the limiting transport equation, we can remove the cut-off in δ and obtain that weakly in the space of distributions. 
The above integral equation admits a unique solution, which shows that the whole sequence η^{-1}J_η converges to J, the solution of the kinetic equation (21). This completes the proof of Theorem 2.3.
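As a closing illustration (ours, not the paper's), the kinetic model (8) can be sampled by a standard velocity-jump Monte Carlo process: particles move freely at velocity k, scatter at the exponential rate R₀, and draw post-scattering wavenumbers from the normalized kernel, here taken Gaussian purely for convenience. The fraction of particles that never scatter reproduces the e^{−R₀t} weight of the ballistic part a₀ that drives the limit theorem above.

```python
import numpy as np

rng = np.random.default_rng(1)
R0, sig, T, npart = 1.0, 0.3, 2.0, 20000   # total cross-section, kernel width, final time
never_scattered = 0

for _ in range(npart):
    t, xpos, k = 0.0, np.zeros(2), np.array([1.0, 0.0])   # delta(x) f(k)-type start
    scattered = False
    while True:
        tau = rng.exponential(1.0 / R0)        # waiting time of the jump clock
        if t + tau >= T:
            xpos = xpos + (T - t) * k          # final free flight
            break
        xpos = xpos + tau * k
        t += tau
        k = k + sig * rng.standard_normal(2)   # new wavenumber from a Gaussian kernel
        scattered = True
    never_scattered += not scattered

print("ballistic fraction:", never_scattered / npart, "  exp(-R0 T):", np.exp(-R0 * T))
```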
Reduced Mimicry to Virtual Reality Avatars in Autism Spectrum Disorder

Mimicry involves unconsciously copying the actions of others. Increasing evidence suggests that autistic people can copy the goal of an observed action but show differences in their mimicry. We investigated mimicry in autism spectrum disorder (ASD) within a two-dimensional virtual reality environment. Participants played an imitation game with a socially engaged avatar and a socially disengaged avatar. Despite being told only to copy the goal of the observed action, autistic participants and matched neurotypical participants mimicked the kinematics of the avatars' movements. However, autistic participants mimicked less. Social engagement did not modulate mimicry in either group. The results demonstrate the feasibility of using virtual reality to induce mimicry and suggest mimicry differences in ASD may also occur when interacting with avatars.

Introduction

Mimicry involves the unconscious imitation of other people's behaviour. Research in both social psychology and cognitive neuroscience has demonstrated that mimicry is not only ubiquitous but is also a powerful and versatile tool in everyday social interactions (Chartrand & van Baaren 2009). Individuals who receive a diagnosis of autism spectrum disorder (ASD) have significant impairments in social communication and interaction (American Psychiatric Association 2013) which may include differences in mimicry behaviour (Edwards 2014). The current study's primary aim was to establish whether any potential differences in mimicry behaviour in ASD could be investigated within a rich and ecologically valid, interactive virtual reality (VR) environment.

Hamilton (2008) made an important distinction between mimicry and emulation. Mimicry involves implicitly and automatically copying the detailed kinematic features of an observed action, rather than just the action goal. Conversely, emulation involves copying the explicit goal of an observed action. Whilst emulation is useful in practical situations (e.g. when learning how to use a tool), Wang and Hamilton (2012) have argued that mimicry is fundamentally a social behaviour and so is modulated by social cues in a subtle and sophisticated manner. This has been captured in their social top-down response modulation (STORM) model (Wang and Hamilton 2012).

Mimicry has been measured using a range of approaches including naturalistic studies involving live confederates (e.g. Chartrand and Bargh 1999), reaction time tasks using stimulus-response compatibility paradigms (e.g. Brass et al. 2000), and kinematic studies using motion tracking (e.g. Castiello et al. 2002). All these approaches have converged on the finding that a range of social cues, such as attractiveness of the interaction partner (van Leeuwen et al. 2009), eye-contact, pro-social priming (Leighton et al. 2010; Wang and Hamilton 2013), and beliefs about the animacy of the interaction partner (Bird et al. 2007; Castiello et al. 2002), modulate mimicry behaviours in neurotypical participants.

STORMy Interactions: Mimicry in ASD

Autistic people have significant difficulties in everyday social interactions (American Psychiatric Association 2013). Hamilton (2008) suggested that autistic participants perform well on emulation tasks, but tend to perform differently on mimicry tasks compared to neurotypical participants. For example, Hobson and Lee (1999) found that autistic participants were proficient in copying goal-directed actions, but tended not to copy the style with which the experimenter executed those actions. Similarly, McIntosh, Reichmann-Decker, Winkielman and Wilbarger (2006) found that, unlike neurotypical participants, autistic adolescents and adults did not spontaneously mimic happy and angry facial expressions. Yet, when explicitly instructed to copy an observed facial expression, autistic participants performed as neurotypical participants did. Moreover, Wild, Poliakoff, Jerrison and Gowen (2012) found autistic participants were less sensitive to the duration, velocity and vertical amplitude of observed actions during an imitation task. Eye-tracking also revealed more goal-directed eye-movements in ASD, suggesting an over-reliance on goal-directed imitation strategies in ASD and a reduced propensity to mimic. A recent meta-analysis of 53 studies investigating imitation abilities in ASD supported Hamilton's proposal: it showed spared performance when copying only the goal of an action (i.e. emulation) but impairments when copying both the form (i.e. style) and the goal of an action (Edwards 2014). The finding that mimicry is different in ASD, a condition characterised by difficulties in social interaction, is in line with Wang and Hamilton's (2012) proposal that mimicry is fundamentally a social behaviour.

It is important to note, however, that Hamilton (2008) has stressed that it is not mimicry per se which is impaired in ASD, as autistic children and adults can and do spontaneously copy the actions of others. For example, some autistic individuals display echopraxia, characterised by an increased tendency to involuntarily copy the actions of others (Spengler et al. 2010). Rather, it is the top-down social modulation of mimicry that is aberrant in ASD. Hamilton's hypothesis has been supported by several recent studies using a stimulus-response compatibility paradigm. These show that automatic imitation is intact in ASD, as shown by faster responses to congruent rather than incongruent actions, but modulation of this congruency effect by social cues may be atypical. For example, Cook and Bird (2012) showed pro-social priming relative to non-social priming led to an enhancement of automatic imitation in neurotypical participants but not in autistic participants. Similarly, Grecucci et al. (2013) found automatic imitation is enhanced in neurotypical participants, but not in ASD, when preceded by emotional facial expressions. Finally, Forbes, Wang and Hamilton (2016) showed that direct gaze socially modulates mimicry in neurotypical participants but not in ASD.

Using Virtual Reality to Induce and Modulate Mimicry

A significant limitation of previous studies investigating the social modulation of mimicry in ASD is that they typically displayed isolated hand stimuli within a limited social context and measured participants' reaction times to make simple finger movements (e.g. Cook and Bird 2012; Grecucci et al. 2013; Forbes et al. 2016). The current study aimed to create a more ecologically valid mimicry paradigm by creating an interactive two-dimensional (2D) VR environment. Pan and Hamilton (2015) previously found that during a drum tapping game participants displayed a greater tendency to mimic when interacting with a VR avatar compared to a bouncing ball. In their paradigm, a sense of interactivity was achieved by programming the avatar to orient her head to the participant's head position when it was the participant's turn to respond. The avatar was also responsive to the participant's movements as she would wait for the participant to finish their turn before starting her own. We aimed to combine the VR approach used by Pan and Hamilton (2015) with the kinematic approach used by Wild et al. (2012) to try and induce and socially modulate mimicry in adults with and without a diagnosis of ASD.

As VR technologies become more accessible they are increasingly being used to teach and train social skills, such as job interview training, in ASD (e.g. Smith et al. 2014; see Wang and Reid 2010, for a review). It is therefore important to establish whether the behaviours autistic individuals display in everyday life, such as differences in eye-contact, gesture and joint attention, also occur when interacting with and responding to VR avatars. To investigate this with regards to mimicry differences, participants played a game with several avatars during which they observed an avatar point to a series of three targets out of a possible four targets on the virtual table in front of them. Participants were given goal-orientated instructions as they were told to point to the same targets the avatar pointed to on the table in front of them. However, the height of the avatar's movements was manipulated to see whether participants' own movements were sensitive to the kinematics of the avatars' movements. Each participant played the game with a socially engaged and a socially disengaged avatar. The study aimed to explore three questions: 1. Would neurotypical participants mimic the avatar despite being told only to copy the goal of the observed action? 2. If so, would this mimicry be modulated by the social engagement of the avatar? 3. Would there be any differences in mimicry behaviour in ASD?

Method

Participants

Twenty-five neurotypical participants and twenty-six autistic participants were recruited from an autism database at the authors' institution. Groups were matched on age, gender, handedness, and verbal and performance IQ using either the Wechsler Adult Intelligence Scale (WAIS-III UK; Wechsler 1999a) or Wechsler Abbreviated Scale of Intelligence (WASI-II; Wechsler 1999b; Table 1). Autistic participants had a diagnosis of Asperger's Syndrome (20), autism (4), or autism spectrum disorder (2) from an independent clinician. Autistic participants were also tested on module 4 of the Autism Diagnostic Observation Schedule (ADOS-G; Lord et al. 2000) or ADOS-2 (Lord et al. 2012) by a trained researcher with research-reliability status. Seven participants met the ADOS classification for autism, twelve for autism spectrum, and seven did not meet the classification of autism or autism spectrum. However, all seven who did not meet the cut-off for an overall classification of autism or autism spectrum reached the cut-off for autism spectrum on either the communication or reciprocal social interaction subscale. All participants were financially reimbursed for their time and gave written informed consent to participate. All procedures were approved by the local Research Ethics Committee.

Materials

The avatars' pointing movements were animated with prerecorded motion captured data. These data were recorded using an electromagnetic marker (Polhemus LIBERTY system, Colchester, USA) and mapped onto the avatar using the software packages MotionBuilder (http://www.autodesk.com/motionbuilder) and Vizard (WorldViz Inc, Santa Barbara, USA). During motion capture, a piece of card with markings on it assisted the creation of the high (approximately 11 cm peak height above the table) and low (3 cm) conditions. The speech for the engaged and socially disengaged avatars was recorded from two different female actors. Participants sat approximately 70 cm from a 160 × 90 cm projector screen on which the VR graphics were displayed in 2D. An electromagnetic marker (Polhemus LIBERTY system, Colchester, USA) was attached to the top of participants' right index finger and forehead. The marker on their index finger allowed their finger movements to be recorded, whilst the marker on their forehead allowed the socially engaged avatar to give participants eye-contact when smiling at them at the end of each trial. On the table in front of the participants, there was a piece of 81 × 66 cm blue card with four 6 cm diameter red circles stuck in the middle of it. The centres of the circles were 15 cm apart from each other and were 30 cm in front of the participants. These red circles acted as the targets. There was also a 6 × 4 cm piece of blue card stuck 10 cm in front of the participant which acted as the 'resting pad' where participants were required to place their right index finger when not moving. The physical world extended into the VR world on the projector screen. Thus, the avatar was also sat at a table with a piece of blue card with four red targets on it (Fig. 1).
Experimental Design

A 2 × 2 design was used with height (high/low) and engagement condition (engaged/disengaged) as within-subject factors and group (neurotypical/ASD) as a between-subject factor. In each block there were 64 trials (32 high and 32 low) with 16 different movement combinations repeated four times.

Procedure

Participants came into the lab as part of a research day. Participants were told that they would be playing a game with two avatars, Jessie and Kate, but would first practice the game with another avatar, Mike. Participants were told that the avatars' movements were based on the movements of people that had previously been in the lab. Before playing the game with Mike, Jessie or Kate, participants completed calibration during which they were required to place their right index finger into the middle of each of the four targets and the resting pad so that their locations could be recorded. In the practice session with Mike, participants were told that they would hear a 'dong' sound which was the avatar's cue to move. This 'dong' sound occurred at the beginning of each trial after a variable delay (1200-1800 ms). The avatar would then point to three of the targets in front of them before returning to their resting position. A 'ding' sound then occurred after a variable delay (1200-1800 ms). This sound acted as the participants' cue to move and they were instructed to point to the same targets that the avatar moved to. Once the participants completed their movements they were instructed to return to their resting pad and this triggered the next trial. The spatial correspondence between the avatars' and participants' targets was explained to the participants. For example, if the avatar pointed to the target on her far left, participants should point to the target on their far right. Participants were given approximately 10 practice trials with Mike before the start of the experiment to ensure they understood the task instructions.

Participants then played the game with Kate and Jessie. For each participant, one avatar was socially engaged and the other was socially disengaged (Fig. 2). The order and engagement of the avatars were counter-balanced across participants. Before the game started, the socially engaged avatar said, "Hi, my name's Kate/Jessie and I'm going to be playing this game with you. I'm really looking forward to it" and then smiled at the participant, whereas the socially disengaged avatar said, "Hi, my name's Kate/Jessie and I'm going to be playing this game with you. But I have to watch this as well" and then looked away at a virtual monitor on her right hand side. The trial structure was then the same as for the practice session except that the socially engaged avatar looked up and smiled at the participant and continued to look at them during their response. Conversely, the socially disengaged avatar looked away at the monitor to her right after having completed her movements, so she was not looking at the participant when they made their movements. Finally, in order to measure co-presence, after each game participants were asked to rate on a Likert scale from 1 (not at all) to 7 (very much so): "How much did you behave as if Jessie/Kate were real?"

Excluded Data

The movement data were analysed using Matlab R2013b (MathWorks, Natick, USA). Movement data were filtered with a Butterworth filter to remove high frequencies. Each participant's calibration data were used to chunk each trial into four movements: (1) the movement to the first target from the resting pad, (2) the movement to the second target, (3) the movement to the third target, and (4) the movement back to the resting position (Fig. 3). On 4.27 % of trials, the data could not be chunked into four movements and these were excluded from the analysis. There were no significant differences between neurotypical and ASD in the number of trials that could not be chunked into four movements (Mean (SD): neurotypical 3.66 % (5.35 %); ASD 4.87 % (6.26 %); t49 = -0.741, p = 0.462). On 3.13 % of the trials participants failed to move to the correct targets. There were no significant differences between neurotypical and ASD in the number of incorrect trials per block (Mean (SD): neurotypical 2.31 % (1.93 %); ASD 3.90 % (4.99 %); t32.61 = -1.516, p = 0.139). Combining these two exclusion criteria, the total proportion of trials excluded was 6.62 %. There were no significant differences in the proportion of trials excluded between the two groups (Mean (SD): neurotypical 5.47 % (5.39 %), ASD 7.72 % (7.99 %); t49 = -1.176, p = 0.245).

[Fig. 3: An example of a high and low trial (above) and a typical participant movement profile to these observed actions chunked into four movements (below). Only movements 2 and 3 were analysed.]

Peak Height Analysis

The mean peak height of the movements between the targets (the mean of movements 2 and 3) for each trial was subject to an ANOVA with engagement condition (engaged/disengaged) and height (high/low) as within-subject factors and group (neurotypical/ASD) as a between-subject factor. This revealed a main effect of height (F1,49 = 16.28, p < 0.001, ηp² = 0.249). A post-hoc t-test revealed the peak height of participants' movements was significantly higher having observed the avatar move with a high, compared to low, trajectory between the targets (t50 = 3.89, p < 0.001; Fig. 4 Top panel). This difference between the high and low observed actions was significant for both neurotypical (t25 = 3.16, p = 0.004, d = 0.631) and autistic (t25 = 3.02, p = 0.006, d = 0.592) participants. There was a marginally significant interaction between height and group (F1,49 = 3.99, p = 0.051; ηp² = 0.075; Fig. 4 Top panel). Neither the interaction between height and condition, nor that between height, condition and group, was significant (F < 0.8; Fig. 4 Bottom panel).

Co-Presence

Overall, participants' co-presence ratings were low (Fig. 5). These scores were subject to a 2 × 2 ANOVA with engagement (engaged/disengaged) as a within-subject factor and group as a between-subject factor. This revealed a marginal effect of engagement (F1,49 = 3.54, p = 0.066) and group (F1,49 = 3.21, p = 0.079), but no interaction between engagement and group (F1,49 = 0.004, p = 0.951).
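A minimal Python sketch (not the study's code, which was written in Matlab) of the preprocessing and dependent measure described above: zero-phase Butterworth low-pass filtering of the finger-height trace, chunking a trial into the four movements using target-arrival samples (in the study these were derived from each participant's calibration data), and extracting the mean peak height of movements 2 and 3. The sampling rate, cutoff frequency and data layout are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 240.0      # tracker sampling rate in Hz (assumed)
CUTOFF = 10.0   # low-pass cutoff in Hz (assumed)

def lowpass(signal, fs=FS, cutoff=CUTOFF, order=4):
    """Zero-phase Butterworth low-pass filter of the height trace."""
    b, a = butter(order, cutoff / (fs / 2), btype="low")
    return filtfilt(b, a, signal)

def chunk_trial(z, arrivals):
    """Split the height trace z into four movements delimited by the three
    target arrivals; trials that cannot be chunked this way are excluded."""
    bounds = [0, *arrivals, len(z)]
    if len(bounds) != 5:
        raise ValueError("trial could not be chunked into four movements")
    return [z[bounds[m]:bounds[m + 1]] for m in range(4)]

def mean_peak_height(z, arrivals):
    """Mean peak height of movements 2 and 3 (the between-target movements)."""
    movements = chunk_trial(lowpass(z), arrivals)
    return float(np.mean([m.max() for m in movements[1:3]]))

# toy usage with a synthetic trial of four arcs
t = np.linspace(0, 3, int(3 * FS))
z = np.abs(np.sin(2 * np.pi * 4 * t / 3)) * 0.08 + 0.002 * np.random.randn(t.size)
print(mean_peak_height(z, arrivals=[180, 360, 540]))
```

Per-trial values computed this way would then be averaged within each cell of the 2 (height) × 2 (engagement) × 2 (group) design before the mixed ANOVA reported above.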
Discussion

The study's primary aims were to use VR to induce and socially modulate mimicry in neurotypical participants and to explore any differences in ASD. Participants mimicked the kinematics of the avatars' movements despite being told only to copy the goal of the observed action. Autistic participants tended to mimic but did so to a lesser extent. In neither group, however, was mimicry modulated by the social engagement of the avatar. Possible reasons for this are discussed in further detail below.

A Novel Paradigm for Inducing Mimicry in VR

The results demonstrate that VR avatars can be used to induce mimicry in both neurotypical and autistic participants. Despite participants being told to point to the same targets the avatar pointed to, they were also sensitive to the kinematics of the observed action, rather than just the action goal. For example, on trials where the avatar moved with a high trajectory between the targets, participants also tended to move with a higher trajectory compared to trials where the avatar moved with a low trajectory. This supports previous kinematic studies, such as that by Wild et al. (2012), in which participants copied the vertical and horizontal amplitude of observed actions despite being given goal-orientated instructions. Previous studies investigating mimicry within a VR setting had only explored reaction time measures of mimicry, such as a stimulus-response compatibility paradigm (Pan and Hamilton 2015). The present study extends this work by demonstrating that participants mimic the kinematics of avatars' movements. More generally, the present study adds to the growing number of studies which highlight the feasibility of VR in the ecologically valid study of human social interaction (Bohil et al. 2011; Georgescu et al. 2014). Our VR paradigm also has the potential to be used in combination with neuroimaging methods, such as functional near infrared spectroscopy, to elucidate the neural underpinnings of mimicry and how these might be different in ASD.

Reduced Mimicry in ASD

Both the neurotypical and ASD groups mimicked the avatars' movements, yet autistic participants did so to a lesser extent. This supports previous work demonstrating that autistic individuals can and do, under certain conditions, spontaneously mimic (Cook and Bird 2012; Grecucci et al. 2013) but have a reduced propensity to do so (Edwards 2014). Most studies demonstrating a reduced propensity to mimic in ASD investigated children (e.g. Jiménez et al. 2014) and those conducted with adolescents or adults have focused on facial mimicry (Hertzig et al. 1989; McIntosh et al. 2006). Thus, the current study extends this work by showing that this reduced propensity to mimic in ASD continues into adulthood, is not restricted to spontaneous facial mimicry, and, most interestingly, occurs in a VR environment. Importantly, the groups did not differ in terms of their ability to copy the goal of the action (i.e. emulation), as there were no significant differences between the groups in the proportion of trials in which participants pointed to the incorrect targets. Again, this finding is supported by previous work showing intact emulation in ASD (Edwards 2014). Together, these findings support Hamilton's (2008) proposal of intact emulation yet differences in mimicry in ASD. Finally, the finding that mimicry differences in ASD occur when interacting with VR avatars has important practical and clinical implications for VR training programmes and, potentially, VR diagnostic tools (Scassellati 2007). It suggests that the behaviours autistic individuals display in everyday life also occur when interacting with and responding to VR avatars, although limitations of our current VR approach are discussed below.

Unmodulated Mimicry: Co-Presence and Social Cues

Mimicry was not modulated by how socially engaged the avatar was in either neurotypical or autistic participants. This is at odds with STORM and a series of previous studies which demonstrated that social cues, such as eye-contact (Forbes et al. 2016), pro-social priming (Cook and Bird 2012) and emotional facial expressions (Grecucci et al. 2013), modulate mimicry in neurotypical participants; yet, this modulation is reduced in ASD. There are several possible reasons as to why the social manipulation did not modulate mimicry in the current study. Wang and Hamilton (2012) proposed that the effect of eye-contact on mimicry is mediated by an audience effect, whereby the enhancement occurs when participants feel the observer is maintaining social engagement with them throughout the response period. In the current study, the socially engaged avatar gave participants eye-contact throughout their response period, so it is unclear why mimicry was not enhanced. One possible reason could be the lack of co-presence with the VR avatars; mean co-presence scores were low. Thus, if participants felt the avatars were unrealistic, this may have nullified the impact of any social manipulation and caused low co-presence scores. The avatars' hand movements were motion captured and so based on those of a human. This may account for the reliable mimicry effect, as participants are likely to have regarded these movements as realistic. However, the avatars' head movements and facial expressions, such as the socially engaged avatar's smile, were key frame animated. Although participants' qualitative experiences towards the avatars were not collected in the current study, in previous VR studies participants have reported that the avatars "were slightly robotic without facial expression which lessened impact" (Pan et al. 2016, p. 11). Moreover, Moser et al. (2007) have highlighted differences in neural activation, such as reduced activation of the fusiform gyrus, when viewing an avatar with emotional facial expressions compared to a human face displaying the same expressions. Thus, the present limitations of the VR, especially with regard to realistic facial expression, may have accounted for the lack of co-presence and the lack of social modulation in the present study.

The 2D nature of our VR environment may also have contributed to the low co-presence scores. Although the physical world of the participant continued into the virtual world on the screen in front of them, there was a tangible divide between the physical world of the participant and the virtual world of the avatar. Schultze (2010, p. 439) has highlighted how "one key-determinant of co-presence is … to jointly manipulate shared space and shared objects." Therefore, the current paradigm may benefit from being implemented in a fully immersive VR setting, for example using a head-mounted display (HMD) such as the Oculus Rift or HTC Vive. This would allow the participants to be embodied (i.e. have their own avatar) and share the virtual space with the avatar; for example, both avatar and participant could point to the same virtual targets. However, studies using such an approach typically have the virtual targets positioned in mid-air without a table, and the kinematics of movements to such targets might differ. Implementing our paradigm safely and effectively using an HMD with a physical table is technically challenging: a failure to embody participants accurately within a fully immersive HMD runs the risk of participants injuring their fingers on the table in front of them when pointing to the targets. There was some level of interaction between the avatar and participant in the current study. For example, the avatar did not start her turn until the participant had returned to the resting pad, and, after the engaged avatar had finished her turn, she oriented to a motion tracker attached to each participant's forehead, thereby giving a sense of eye contact. Despite these advantages over simple video stimuli, participants were still watching animations on a screen in front of them. Reader and Holmes (2015) directly compared real life and video stimuli during an imitation task and found reduced object-directed imitation accuracy with the use of video stimuli. Furthermore, reduced activation of human motor cortex has been found when observing motor acts in videos compared to live movements (Järveläinen et al. 2001). Again, the use of a fully immersive, 3D environment, or the use of real-life interaction partners, may result in the social modulation of mimicry within the current paradigm.

Unmodulated Mimicry: Timing and Task Demands

In studies investigating social modulators of mimicry within a stimulus-response compatibility paradigm, there is usually a small time window between the social manipulation, the observed action and the subsequent response. For example, in Forbes et al. (2016) the delay between the social manipulation and observed action was either 200 or 800 ms. Participants were then required to respond as soon as they saw the actor's hand move in the video. Similarly, in Grecucci et al. (2013) the facial expression was presented for 500 ms; participants then observed the moving hand for 1105 ms before being required to respond. Finally, in Pan and Hamilton (2015; Experiment 2) the interaction between form (avatar vs. ball) and congruency (i.e. mimicry) was only found on reaction times to tap the first, but not the last, drum in the sequence. Together these studies support the view that for certain social manipulations the delay between action observation and performance needs to be minimised in order for the social manipulation to modulate mimicry. Future studies investigating social modulators of mimicry within the present paradigm may benefit from comparing the kinematics of movements to the first target.

The relatively high task demands in the current study may have contributed to a lack of social modulation. Error rates in stimulus-response compatibility paradigms are typically less than 0.1 % (e.g. Bird et al. 2007). In Pan and Hamilton's (2015) task, mean error rates were between 1.2 and 1.5 %. In the present study the error rate was approximately double this for the neurotypical participants (2.6 %). The lower error rate in Pan and Hamilton (2015) is likely due to the lower memory demands of their task: the required drum sequence was displayed on a virtual tablet in front of the avatar, whereas in the current study participants had to memorise the correct three-target sequence. Thus, the higher task demands in the present study may have nullified any potential social modulation of mimicry. Finally, it is also possible that lower task demands would enhance mimicry, as this could increase participants' ability to process the motion of the avatar's movements (Rees et al. 1997). Future studies could reduce the task demands by having participants point to fewer targets.

Conclusions

To conclude, we provide a novel paradigm which enables mimicry to be induced in a rich and ecologically valid, interactive VR environment. Participants copied the kinematics of the avatars' movements, despite being instructed only to copy the goal of the observed action. The study reinforces Hamilton's (2008) proposal of intact emulation but differences in mimicry in ASD, as autistic participants showed reduced mimicry compared to neurotypical participants. The findings have implications for VR training programmes and also potential VR diagnostic tools, as they suggest that behaviours autistic people display in everyday life also occur when interacting with avatars. Unlike previous studies investigating the modulation of mimicry, the social manipulation in the present study failed to modulate mimicry. There are several possible reasons as to why the social manipulation did not modulate mimicry, including the timing of the manipulation and the present limitations of facial expressions in VR. Future studies should explore these possibilities.

Compliance with Ethical Standards

Conflict of Interest: All authors declare no conflicts of interest.

Ethical Approval: All procedures were approved by the local Research Ethics Committee and were in accordance with the Declaration of Helsinki.

Informed Consent: Informed consent was obtained from all individual participants included in the study.

Open Access: This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
New infinite families of pseudo-Anosov maps with vanishing Sah-Arnoux-Fathi invariant We show that an orientable pseudo-Anosov homeomorphism has vanishing Sah-Arnoux-Fathi invariant if and only if the minimal polynomial of its dilatation is not reciprocal. We relate this to works of Margalit-Spallone and Birman, Brinkmann and Kawamuro. Mainly, we use Veech's construction of pseudo-Anosov maps to give explicit pseudo-Anosov maps of vanishing Sah-Arnoux-Fathi invariant. In particular, we give new infinite families of such maps in genus 3. INTRODUCTION In 1981, Arnoux-Yoccoz [6] gave the first example of a pseudo-Anosov homeomorphism not arising as the lift of a toral automorphism whose dilatation was of degree less than twice the genus of the surface on which it is defined. In fact, they gave an infinite family of these, one in each genus g ≥ 3. In his Ph.D. dissertation of the same year, Arnoux [2], see also [1], showed that each of these maps has vanishing Sah-Arnoux-Fathi (SAF) invariant. Pseudo-Anosov maps with vanishing SAF-invariant are especially interesting for their dynamical properties, see [3,21,22,27]. However, there are few examples known; see below for a list of these. We find a new infinite family, and, to aid in the search for these interesting maps, we also clarify criteria in the literature derived from work of Calta-Smillie [11]. We characterize pseudo-Anosov maps with vanishing SAF-invariant. THEOREM 1. Suppose that φ is an orientable pseudo-Anosov map of a closed compact surface, with dilatation λ. Then φ has vanishing Sah-Arnoux-Fathi invariant if and only if the minimal polynomial of λ is not reciprocal. We give explicit constructions of new infinite families of pseudo-Anosov maps with vanishing SAF-invariant. In Subsection 3.4 we apply a construction of pseudo-Anosov homeomorphisms given by Margalit-Spallone [24] to lend support to a conjecture about the set of all dilatations of pseudo-Anosov homeomorphisms. Recall that a real algebraic number α greater than one is called bi-Perron if all of its conjugates (other than itself) lie in the annulus {||α|| −1 ≤ ||z|| < ||α||}, where ||z|| denotes the norm of a complex number. An algebraic integer, thus having minimal polynomial with integer coefficients, is a unit if its inverse is also an algebraic integer. Fried [14] showed that the dilatation of any pseudo-Anosov map is a bi-Perron unit. A conjecture that Farb-Margalit [13] attribute to C. McMullen (and is a question in [14]) states that every bi-Perron unit is the dilatation of some pseudo-Anosov homeomorphism. Recall that exactly when a pseudo-Anosov homeomorphism is orientable, its dilatation is an eigenvalue of the homeomorphism's induced action on first integral homology. The construction of Margalit-Spallone [24] shows that any polynomial that passes a certain homological criterion, see below, is the characteristic polynomial of the homology action induced by some pseudo-Anosov map. Using this, we find a partial confirmation of the conjecture. In Subsection 3.5, we answer an implicit question of Birman, Brinkmann, and Kawamuro [7]. Namely, if φ is an orientable pseudo-Anosov map on a genus g compact surface without punctures, then their symplectic polynomial s (x) associated to φ is reducible if and only if either φ has vanishing SAF-invariant or φ has trace field of degree less than g . 2. BACKGROUND 2.1. Pseudo-Anosov map, translation surface. Suppose that X is an orientable closed real surface of genus g ≥ 2. 
The Teichmüller modular group Mod(X) is the quotient of the group of orientation-preserving homeomorphisms by the subgroup of those homeomorphisms isotopic to the identity. A mapping class [φ] ∈ Mod(X) is called pseudo-Anosov if there exists a representative φ : X → X, a pair of invariant transverse measured (singular) foliations (F^u, µ^u), (F^s, µ^s), and a real number λ, the dilatation of [φ], such that φ multiplies the transverse measure µ^u (resp. µ^s) by λ (resp. λ^{-1}). The real number λ = λ(φ) is called the dilatation of the pseudo-Anosov homeomorphism φ. Some prefer to call λ the stretch factor of φ. A pseudo-Anosov homeomorphism φ is called orientable if either of (and hence both) F^u or F^s is orientable (that is, leaves can be consistently oriented). As recalled in [20] (see their Theorem 2.4), a pseudo-Anosov homeomorphism φ is orientable if and only if its dilatation is an eigenvalue of the standard induced action on first homology φ_* : H_1(X, Z) → H_1(X, Z). By Hubbard-Masur [15] the pair of measured foliations defines a quadratic differential and a complex structure on X so that this quadratic differential is holomorphic. Orientability of the foliations corresponds to the quadratic differential being the square of a holomorphic 1-form (thus, an abelian differential), say ω. Fixing base points and integrating ω along paths defines local coordinates on X (in C or R², depending on our need), transition functions are by translations, and the result is a translation surface, (X, ω). (Any singularities of the foliations occur at the zeros of ω.) The pseudo-Anosov φ acts affinely with respect to the local Euclidean structure of (X, ω). Furthermore, taking the view of real local coordinates, SL₂(R) acts on the collection of all translation surfaces by post-composition with the local coordinate maps. We often use the words pseudo-Anosov map to mean an orientable pseudo-Anosov homeomorphism (usually with an emphasis on its translation surface). 2.2. SAF-zero defined. The Sah-Arnoux-Fathi (SAF) invariant was first defined for any interval exchange transformation (for more on these interval maps, see Subsection 2.6). Given f defined on an interval I = ⊔_{j=1}^{d} I_j and given piecewise by f(x) = x + t_j for x ∈ I_j, its SAF-invariant is

$$\mathrm{SAF}(f) = \sum_{j=1}^{d} \lambda_j \wedge t_j \in \mathbb{R} \wedge_{\mathbb{Q}} \mathbb{R},$$

where λ_j is the length of I_j. This invariant was studied by Sah in unpublished work; Arnoux studied it in his thesis [2], see also [1]. The invariant defines a homomorphism, and hence every IET that is periodic (under composition) certainly has vanishing SAF-invariant. See [9] for very recent work on IETs and the SAF-invariant. In [2], Arnoux showed that any linear flow on a translation surface defines a family of interval exchange maps, by taking any appropriately chosen full transversal of the flow, all having the same SAF-invariant. When the flow is periodic, the resulting SAF-invariant vanishes. However, there are other cases where vanishing occurs, and in particular one says that a pseudo-Anosov map has vanishing SAF-invariant if the flow in its stable direction has its first return interval exchange transformations with this property. (Below we will show that this is then also true of the flow in the unstable direction.) 2.3. Examples in the literature. Besides the Arnoux-Yoccoz family of SAF-zero pseudo-Anosov maps (one per genus at least three), the other known infinite families are the Arnoux-Rauzy family in genus 3 discussed in [22] and the examples of Calta and Schmidt [11] found by Fuchsian group techniques.
Sporadic examples were given by Arnoux-Schmidt [5] and in [11]; McMullen [27] presents an example in genus 3 found by Lanneau. After this work was completed, Strenner [30] gave a construction that begins with pseudo-Anosov maps on non-orientable surfaces. One way he shows that the resulting affine pseudo-Anosov map has vanishing SAF-invariant is to apply the precursor of Theorem 1 appearing in [11].

2.4. Trace field, periodic direction field, Veech group. The trace field of the translation surface (X, ω) of a pseudo-Anosov map of dilatation λ coincides with k = Q(λ + λ^{-1}), see the appendix of [23]. If a translation surface has at least three directions of vanishing SAF-invariant, then Calta and Smillie [12] show that the surface can be normalized by way of the SL_2(R)-action so that the directions with slope 0, 1 and ∞ have vanishing SAF-invariant. They further prove that on the normalized surface the set of slopes of directions with vanishing SAF-invariant forms a field (union with infinity, thus more precisely the projective line over the field). A translation surface so normalized is said to be in standard form, and the field so described is called the periodic direction field. Calta-Smillie also show that when (X, ω) arises from a pseudo-Anosov map, then it can be placed in standard form, and more importantly its trace field and periodic direction field coincide.

The Veech group SL(X, ω) ⊂ SL_2(R) is the group of matrix parts of (orientation-preserving) affine diffeomorphisms of (X, ω). An affine diffeomorphism of (X, ω) is pseudo-Anosov if and only if its matrix part is a hyperbolic element of SL_2(R), see [31, 32]. Furthermore, if there is any such pseudo-Anosov map, then SL(X, ω) ⊂ SL_2(k), where k is the trace field (this follows from the appendix of [23]: the trace field is also the holonomy field and elements of the Veech group preserve the two-dimensional k-vector space spanned by the holonomy vectors; the statement also follows from Theorem 1.5 of [12]).

2.5. Homological criterion, Margalit-Spallone construction. Margalit and Spallone [24] give a construction of pseudo-Anosov classes in the Teichmüller modular group. Recall that a polynomial p(x) = Σ_{i=0}^{n} c_i x^i is called reciprocal when c_i = c_{n−i} for all i = 0, . . . , n. (The characteristic polynomial of any symplectic matrix is monic reciprocal.) A monic reciprocal polynomial with integral coefficients is called symplectically irreducible if it is not the product of reciprocal polynomials of strictly lesser degree. The homological criterion (as modified by Margalit-Spallone) for a monic reciprocal polynomial q(x) of even degree is that all of the following hold:
• q(x) is symplectically irreducible,
• q(x) is not cyclotomic, and
• q(x) is not a polynomial in x^k for any integral k > 1.
For any f representing a class of the modular group of a closed surface X of genus at least two, let q_f(x) be the characteristic polynomial for the action on first integral homology induced by f. Margalit-Spallone verify that the following result of Casson-Bleiler holds: if q_f(x) passes the homological criterion, then the class of f is pseudo-Anosov. Furthermore, by considering words in explicit elements of the modular group, for any q(x) passing the homological criterion Margalit-Spallone build a homeomorphism f whose homological action has characteristic polynomial q(x). Hence the class of f (and indeed all of its Torelli group coset) is pseudo-Anosov.

2.6. Veech construction.
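These conditions can be tested mechanically. Below is a hedged Python sketch encoding one reading of the criterion (an illustration with sympy, not a verified transcription of [24]): symplectic irreducibility is tested by searching for a reciprocal product over proper sub-multisets of the irreducible factors, is_cyclotomic is sympy's built-in test, and the x^k condition reduces to the gcd of the exponents carrying nonzero coefficients.

from functools import reduce
from itertools import combinations
from math import gcd
from sympy import Poly, symbols, factor_list

x = symbols('x')

def reciprocal(coeffs):
    return coeffs == coeffs[::-1]

def passes_homological_criterion(q):
    P = Poly(q, x)
    assert reciprocal(P.all_coeffs()) and P.degree() % 2 == 0, "q must be monic reciprocal of even degree"
    # (1) symplectically irreducible: no proper sub-product of the irreducible
    #     factors (taken with multiplicity) is itself reciprocal.
    _, factors = factor_list(q)
    flat = [f for f, m in factors for _ in range(m)]
    for r in range(1, len(flat)):
        for sub in combinations(range(len(flat)), r):
            prod = Poly(1, x)
            for i in sub:
                prod = prod * Poly(flat[i], x)
            if reciprocal(prod.all_coeffs()):
                return False
    # (2) q is not cyclotomic.
    if P.is_cyclotomic:
        return False
    # (3) q is not a polynomial in x^k for any k > 1: the exponents with nonzero
    #     coefficient must have gcd 1.
    exps = [P.degree() - i for i, c in enumerate(P.all_coeffs()) if c != 0]
    return reduce(gcd, exps) == 1

print(passes_homological_criterion(
    x**10 + x**9 - x**7 - x**6 - x**5 - x**4 - x**3 + x + 1))  # True (Lehmer's polynomial)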
In this subsection we mainly reproduce Lanneau's [19] overview (following [25]) of Veech's construction of pseudo-Anosov homeomorphisms using the Rauzy-Veech induction, [32]. In this subsection we follow standard convention and let λ denote the length vector for an interval exchange transformation.

2.6.1. Interval Exchange Transformation. An interval exchange transformation (IET) is a one-to-one map T from a left-closed, right-open interval I to itself that permutes, by translation, a finite partition I_j, j = 1, . . . , d, of I into d ≥ 2 similarly half-open subintervals. It is easy to see that T is precisely determined by the following data: a permutation π that encodes how the intervals are exchanged, and a vector λ with positive entries that encodes the lengths of the intervals.

It is useful to employ a redundant notation for IETs. A permutation is a pair of one-to-one maps (π_0, π_1) from a finite alphabet A to {1, . . . , d} in the following way. In the partition of I into intervals, we denote the interval labeled k, when counted from left to right, by I_{π_0^{-1}(k)}. Once the intervals are exchanged, the interval labeled k is I_{π_1^{-1}(k)}. The permutation π corresponds to the map π = π_1 ∘ π_0^{-1}. The lengths of the intervals form a vector λ = (λ_α), α ∈ A. We will usually represent the combinatorial datum π = (π_0, π_1) by a table of two rows, the top row listing π_0^{-1}(1), . . . , π_0^{-1}(d) and the bottom row listing π_1^{-1}(1), . . . , π_1^{-1}(d).

It is reasonable to focus on those IET that cannot trivially be decomposed into two distinct IETs. For this, a permutation π is called reducible if π_1 ∘ π_0^{-1}({1, . . . , k}) = {1, . . . , k} for some k < d; otherwise π is called irreducible.

2.6.2. Suspension data. A suspension datum for (π, λ) is a vector ζ = (ζ_α) ∈ C^A with Re(ζ_α) = λ_α for each α ∈ A, such that Im(Σ_{π_0(α) ≤ k} ζ_α) > 0 and Im(Σ_{π_1(α) ≤ k} ζ_α) < 0 for all 1 ≤ k < d. To each suspension datum ζ, we can associate a translation surface (X, ω) = X(π, ζ) in the following way. Consider the broken line L_0 on C = R^2 defined by concatenation of the vectors ζ_{π_0^{-1}(j)} (in this order) for j = 1, . . . , d with starting point at the origin. Similarly, we consider the broken line L_1 defined by concatenation of the vectors ζ_{π_1^{-1}(j)} (in this order) for j = 1, . . . , d with starting point at the origin. If the lines L_0 and L_1 have no intersections other than the endpoints (for an illustration of the case of other intersections, see Figure 2.6 of [34]), we can construct a translation surface X by identifying each side ζ_j on L_0 with the side ζ_j on L_1 by a translation. The resulting surface is a translation surface endowed with the form ω = dz. Let I ⊂ X be the horizontal interval defined by I = [0, Σ_α λ_α). Then the interval exchange map T is precisely the one defined by the first return map to I of the vertical flow on X.

2.6.3. Rauzy-Veech induction. The Rauzy-Veech induction R(T) of T is defined as the first return map of T to a certain subinterval J of I. We recall very briefly the construction. The type of T is 0 if λ_{π_0^{-1}(d)} > λ_{π_1^{-1}(d)}, and 1 if λ_{π_0^{-1}(d)} < λ_{π_1^{-1}(d)}. We say that the letter π_0^{-1}(d), respectively π_1^{-1}(d), is the winner of this induction step. We define a subinterval J of I by J = [0, |I| − λ_ℓ), where ℓ denotes the letter among π_0^{-1}(d), π_1^{-1}(d) that is not the winner (the loser). The image of T by the Rauzy-Veech induction R is defined as the first return map of T to the subinterval J. This is again an interval exchange transformation, defined on d letters. Thus one has two maps R_0 and R_1, given by R(T) = (R_ε(π), λ'), where ε ∈ {0, 1} is the type of T. The new data and transition matrix are found as follows: the new length vector λ' satisfies λ = V λ', where the transition matrix V is the identity matrix plus the elementary matrix whose single nonzero entry is a 1 in the (winner, loser) position. Iterating the Rauzy-Veech induction n times, we obtain a sequence of transition matrices {V_k}. We can write R^(n)(π, λ) = (π^(n), λ^(n)) with (∏_{k=1}^{n} V_k) λ^(n) = λ.
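The induction step is easy to implement. The sketch below (Python; an illustration following the common convention that the loser is reinserted immediately after the winner in the row it came from, a convention assumed here rather than quoted from [19] or [32]) returns the new combinatorial datum, the new lengths, and the transition matrix V = Id + E_{winner, loser}.

def rauzy_step(top, bot, lam):
    # One step of Rauzy-Veech induction on a labeled IET.
    # top, bot: tuples of letters (the two rows of the combinatorial datum);
    # lam: dict letter -> length.  Returns (top', bot', lam', type, V), where
    # V satisfies lam_old = V @ lam_new (rows/columns indexed by sorted letters).
    letters = sorted(lam)
    a, b = top[-1], bot[-1]
    assert lam[a] != lam[b], "Keane-degenerate data: the two last lengths agree"
    typ = 0 if lam[a] > lam[b] else 1
    winner, loser = (a, b) if typ == 0 else (b, a)
    new_lam = dict(lam)
    new_lam[winner] -= lam[loser]
    # reinsert the loser immediately after the winner, in the row it was removed from
    if typ == 0:
        row = [c for c in bot if c != loser]
        row.insert(row.index(winner) + 1, loser)
        new_top, new_bot = tuple(top), tuple(row)
    else:
        row = [c for c in top if c != loser]
        row.insert(row.index(winner) + 1, loser)
        new_top, new_bot = tuple(row), tuple(bot)
    V = [[1 if i == j else 0 for j in letters] for i in letters]
    V[letters.index(winner)][letters.index(loser)] = 1
    return new_top, new_bot, new_lam, typ, V

# Golden rotation as a 2-IET: the combinatorial datum never changes, and the types alternate.
top, bot, lam = ('A', 'B'), ('B', 'A'), {'A': (1 + 5 ** 0.5) / 2, 'B': 1.0}
print(rauzy_step(top, bot, lam))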
We can also define the Rauzy-Veech induction on the space of suspensions by R(π, ζ) = (R_ε(π), ζ'), where ζ = V ζ' for the transition matrix V of the step (so that the real part of ζ transforms exactly as the length data). If (π', ζ') = R(π, ζ) then the two translation surfaces X(π, ζ) and X(π', ζ') are isometric, i.e., they define the same surface in the moduli space.

For a combinatorial datum π, we call the Rauzy class of π the set of all combinatorial data that can be obtained from π by the combinatorial Rauzy moves. The labeled Rauzy diagram of π is the directed graph whose vertices are all combinatorial data that can be obtained from π by the combinatorial Rauzy moves. From each vertex, there are two directed outgoing edges labeled 0 and 1 (the type) corresponding to the two combinatorial Rauzy moves.

2.6.4. Closed loops and pseudo-Anosov homeomorphisms. We are now ready to describe Veech's construction of pseudo-Anosov homeomorphisms. Let π be an irreducible permutation and let γ be a closed loop in the Rauzy diagram associated to π. We obtain the matrix V as above; let us assume that V is primitive (i.e., there exists k such that for all i, j, the (i, j) entry of V^k is positive) and let θ > 1 be its Perron-Frobenius eigenvalue. We choose a positive eigenvector λ for θ. Now, V is appropriately symplectic (see [34] for an explanation of this result of [32]), allowing one to choose τ an eigenvector for the eigenvalue θ^{-1} with τ_{π_0^{-1}(1)} > 0. We form the vector ζ = λ + iτ. We can show that ζ is a suspension datum for π. Thus, with a minor abuse of notation, we may speak of the translation surface X(π, ζ). Let g_t denote the diagonal matrix diag(e^t, e^{-t}) and set t = log θ. The two surfaces X(π, ζ) and g_t X(π, ζ) differ by some element of the mapping class group. In other words there exists a pseudo-Anosov homeomorphism φ, with respect to the translation surface X(π, ζ), such that Dφ = g_t. In particular the dilatation of φ is θ. Note that by construction φ fixes the zero on the left of the interval I and also the separatrix adjacent to this zero (namely the interval I). Veech [32] proved the following.

THEOREM 3 (Veech). Let γ be a closed loop, beginning at the vertex corresponding to π, in an unlabeled Rauzy diagram and V be the associated transition matrix. If V is primitive, then let λ be a positive eigenvector for the Perron eigenvalue θ of V and τ be an eigenvector (with τ_{π_0^{-1}(1)} > 0) for the eigenvalue θ^{-1}. Then ζ = λ + iτ is a suspension datum for π, and there is a pseudo-Anosov homeomorphism φ on X(π, ζ) with Dφ = g_{log θ}; in particular the dilatation of φ is θ. Up to conjugation, all orientable pseudo-Anosov homeomorphisms fixing a separatrix are obtained by this construction.

Note that reflecting the path γ, in the sense of exchanging the roles of 0 and 1, results in the pseudo-Anosov homeomorphism whose stable foliation is the unstable foliation of the pseudo-Anosov determined by γ.

Up to now, we have discussed labeled IETs. An unlabeled IET is one for which we retain only combinatorial data in the form of a permutation of {1, . . . , d}. Equivalence classes of unlabeled IETs are obtained after identifying (π_0, π_1) with (π_0 ∘ σ, π_1 ∘ σ) for any relabeling σ of the alphabet. (See [34] for a discussion of this, where the key term is "monodromy".) From this, the labeled Rauzy diagram is a covering of the so-called unlabeled Rauzy diagram. An interval exchange transformation T is called hyperelliptic if the corresponding permutation is such that π_1 ∘ π_0^{-1}(j) = d − j + 1 for all j, with corresponding monodromy permutation (d, d − 1, . . . , 1). A hyperelliptic Rauzy diagram is one that contains a combinatorial datum π of a hyperelliptic IET. Exactly when a Rauzy diagram is hyperelliptic, the labeled and unlabeled diagrams are isomorphic directed graphs. See Figure 1 for the unlabeled Rauzy diagram with four subintervals; its central vertex is (4, 3, 2, 1).

In our examples, we always choose the "central" vertex of the hyperelliptic diagram at hand to be the initial vertex of our path.
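Given the transition matrix V of a candidate loop, the numerical ingredients of the construction are a primitivity test and the two eigenvectors. A rough numpy sketch (an illustration only; a genuine application would feed in the transition matrix of an actual closed loop):

import numpy as np

def veech_data(V):
    # V: nonnegative integer transition matrix of a closed loop.
    n = V.shape[0]
    # crude primitivity test: V is primitive iff some power is strictly positive;
    # Wielandt's bound n^2 - 2n + 2 suffices, and n^2 is a lazy over-estimate.
    assert (np.linalg.matrix_power(V, n * n) > 0).all(), "V is not primitive"
    w, vecs = np.linalg.eig(V.astype(float))
    i = int(np.argmax(w.real))                 # Perron-Frobenius eigenvalue theta
    theta = w[i].real
    lam = vecs[:, i].real
    lam *= np.sign(lam[0])                     # the Perron eigenvector can be chosen positive
    j = int(np.argmin(np.abs(w - 1 / theta)))  # the eigenvalue theta^{-1}
    tau = vecs[:, j].real
    tau *= np.sign(tau[0])                     # normalize so the first coordinate is positive
    return theta, lam, tau

# toy illustration with a 2x2 primitive symplectic matrix; zeta = lam + i*tau
V = np.array([[2, 1], [1, 1]])
theta, lam, tau = veech_data(V)
print(theta, lam, tau)                         # theta = (3 + sqrt(5))/2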
(The resulting pseudo-Anosov map is a conjugate of that given by taking any other initial vertex along the path.) The following justifies that normalization, confer Figures 1, 2, and 3. We thank the referee for pointing out that the following is a result of Rauzy [28].

PROPOSITION 4 (Rauzy). Let γ be a closed path in a hyperelliptic Rauzy diagram whose transition matrix V is primitive. Then γ passes through the central vertex.

Proof. First, we show that when V is primitive, every letter must win at least once. By contradiction, suppose that letter a is never a winner on the path γ. Therefore, in each of the transition matrices V_k, the a-th row has exactly one non-zero entry, the value 1 at the (a, a)-entry. This is then also true of the a-th row of the matrix V, and hence even of the a-th row of V^k for any positive k. This last statement contradicts the primitivity of V. Therefore, each letter of A must be a winner at least once. In the hyperelliptic diagram on d letters, there is exactly one cycle along which d is a winner, and exactly one cycle along which 1 is a winner. These cycles are of type 1 and type 0, respectively; they share the central vertex as their sole common vertex. Since excising the central vertex disconnects the Rauzy diagram, we conclude that any closed path having both d and 1 as winners passes through the central vertex.

2.7. Components of strata and Rauzy classes. In particular to allow experts to immediately understand the setting of our examples, we entitle certain subsections below with reference to particular components of strata of abelian differentials. Here we briefly summarize the notation and related notions. Let g ≥ 2 be the genus of the Riemann surface X; the non-zero abelian differentials on X have zeros whose multiplicities sum to 2g − 2. Let κ be a partition of 2g − 2. The stratum H(κ) is the set (modulo the action of the mapping class group) of abelian differentials whose zeros have the multiplicities of κ. Computations by Veech and then Arnoux using Rauzy classes showed that in general strata have more than one connected component. Kontsevich and Zorich [18] determined all possible components. They showed that any stratum has at most three components: there may be a hyperelliptic component, where both X is hyperelliptic and the hyperelliptic involution preserves ω; and possibly two more components, differentiated by the parity of an appropriate notion of spin; these components are thus called "even" and "odd", correspondingly. Our examples are in low genus, thus we recall only (part of) the second theorem of [18]: Each of H(2) and H(1, 1) is connected (and coincides with its hyperelliptic component), while each of H(4) and H(2, 2) has two connected components, a hyperelliptic component and an odd spin component.

Each Rauzy class corresponds to a single component (see [8] for details on this correspondence), and indeed one finds that the number of intervals d is equal to 2g + σ − 1, where σ equals the total number of zeros of the corresponding abelian differentials. This accords with the fact that local coordinates on H(κ) are given by period coordinates, which one can view as the integration of ω over a basis of the relative homology H_1(X, Σ; Z), where Σ is the set of zeros of ω. One can take the basis to be the union of an integral symplectic basis of absolute homology with a set of paths from a chosen zero to each of the other zeros. The transition matrix V = V(γ) for a closed path gives the action of the element of the mapping class group on relative homology. In the pseudo-Anosov case, there is some power of the map that fixes all of Σ and hence this power changes any path connecting zeros by an element of absolute homology.
On absolute homology, the pseudo-Anosov (and perforce any of its powers) acts integrally symplectically, thus the action on relative homology of the power decomposes naturally into a block form with the block corresponding to pure relative homology being an identity. Thus, the characteristic polynomial of this action is the product of a reciprocal degree 2g polynomial times a power of (x − 1). Therefore, the action of the original pseudo-Anosov has a similar decomposition, as seen in our examples below.

3. CHARACTERIZATION OF VANISHING SAF INVARIANT, IMPLICATIONS

We aim to prove that a pseudo-Anosov map has vanishing SAF-invariant exactly when an algebraic condition holds; we thus naturally first gather some algebraic results.

3.1. Galois theory. We begin with a result using elementary Galois theory.

PROPOSITION 5. Let α be an algebraic number, α ≠ ±1, with minimal polynomial p(x). Then Q(α) = Q(α + α^{-1}) if and only if p(x) is not reciprocal.

Proof. (⇐) Let q(x) be the minimal polynomial of α + α^{-1}, say of degree n, and set q̃(x) = x^n q(x + x^{-1}), which is monic of degree 2n. Of course, q̃(x) has α as a root. Therefore, p(x) divides q̃(x), and by the restrictions on the degree of p(x), either p(x) = q̃(x) or else p(x) has degree n. If p(x) is not reciprocal, then it cannot equal q̃(x), as this latter is clearly reciprocal; it then follows that n = [Q(α) : Q] = [Q(α + α^{-1}) : Q], and thus Q(α) = Q(α + α^{-1}).

(⇒) Suppose now that Q(α) = Q(α + α^{-1}). Recall that any root of q(x) is the image of α + α^{-1} under some field embedding (fixing Q), Q(α + α^{-1}) → C. Since Q(α) = Q(α + α^{-1}), each such field embedding sends α to some root of p(x). This field equality also implies that deg p(x) = n, and thus we conclude that the roots of q(x) are all contained in the set of values of the form β + β^{-1} with β a root of p(x). However, under the further supposition that p(x) is reciprocal (which implies that n is even, see Lemma 6), there are only n/2 distinct values in the set of the β + β^{-1}. Hence, the degree of q(x) must in fact be at most n/2, and we have reached a contradiction.

For the sake of completeness, we include the following well-known result.

LEMMA 6. Suppose that p(x) ∈ Z[x] is reciprocal and of odd degree (greater than one). Then p(x) is reducible.

Proof. Given that p(x) is reciprocal, whenever α is a root of p(x), so is α^{-1}. Thus, the roots of p(x) are paired together by x → 1/x. This accounts for an even number of roots, except for fixed points of this map. Since p(x) has an odd number of roots, we conclude that at least one of the fixed points, x = ±1, is a root of p(x). It follows that p(x) is reducible.

We draw some immediate conclusions from Proposition 5. Let us introduce a non-standard definition: call Q(α + α^{-1}) the trace field of the algebraic number α. Recall that the (algebraic) norm of an algebraic number is the product of all of its conjugates over Q.

COROLLARY 7. Let α be a unit of norm 1 whose trace field is quadratic. Then Q(α) ≠ Q(α + α^{-1}).

Proof. Field equality would imply that α is quadratic and hence with minimal polynomial of the form p(x) = x^2 + nx + 1 for some n ∈ Z. But p(x) is reciprocal of even degree, and hence field equality cannot hold.

COROLLARY 8. Let α be a Pisot unit of degree greater than two. Then the minimal polynomial of α is not reciprocal, and Q(α) = Q(α + α^{-1}).

Proof. Since the minimal polynomial p(x) of α has degree greater than two, it has a root β ≠ α^{-1} with ||β|| < 1, and therefore ||β^{-1}|| > 1. But since p(x) has only α as a root that has norm greater than one, we conclude that p(x) is not a reciprocal polynomial. Thus, we can invoke Proposition 5 to find that the second statement holds also.

Motivated by this last result, we now show that every Pisot unit is bi-Perron (which presumably is well-known).

PROPOSITION 9. Every Pisot unit is bi-Perron.

Proof. Let α be a Pisot unit and β ≠ α a conjugate of α, so that ||β|| < 1 < ||α||. Since α is a unit, the product of the complex norms of all of its conjugates equals 1. As every conjugate other than α has norm less than one, ||β|| = (||α|| · ∏_{γ ≠ α, β} ||γ||)^{-1} > ||α||^{-1}. Hence every conjugate of α other than α itself lies in the annulus {||α||^{-1} ≤ ||z|| < ||α||}, and α is bi-Perron.

REMARK 11. On the other hand, not every non-reciprocal Perron unit is a Pisot number, as already f(x) = x^4 − 4x^3 + 3x + 1 shows.
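Remark 11's example can be verified numerically; the sketch below (numpy/sympy, an illustration not from the text) sorts the complex norms of the roots: the second-largest norm exceeds 1 (so f is not Pisot), while all norms other than the largest lie in the bi-Perron annulus.

import numpy as np
from sympy import Poly, symbols

x = symbols('x')

def sorted_root_norms(p):
    coeffs = [float(c) for c in Poly(p, x).all_coeffs()]
    return sorted(np.abs(np.roots(coeffs)), reverse=True)

f = x**4 - 4*x**3 + 3*x + 1
norms = sorted_root_norms(f)
a = norms[0]
print(norms)                                   # approx. [3.77, 1.17, 0.48, 0.48]
print(norms[1] > 1)                            # True: f is not Pisot
print(all(1/a <= r < a for r in norms[1:]))    # True: the largest root is bi-Perron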
We verify a property required for a certain construction of pseudo-Anosov elements.

LEMMA 12. If α is a non-reciprocal bi-Perron number, let q̃(x) be the product of the minimal polynomial of α with the minimal polynomial of α^{-1}. Then q̃(x) = f(x^k) with f(x) ∈ Z[x] and k ∈ N implies k = 1.

Proof. Suppose q̃(x) = f(x^k); then certainly for any zero β of f(x) and every k-th root γ of β, thus satisfying γ^k = β, we have q̃(γ) = 0. Hence the zeros of q̃(x) form the full set of the k-th roots of the various zeros of f(x). But, for β fixed, all of its k-th roots share the same complex norm. Therefore, we can partition the set of roots of q̃(x) into subsets of cardinality k with all elements of the subset sharing the same complex norm. However, since α is bi-Perron, there is no other root of q̃(x) that has the same complex norm as does α. We conclude that k = 1 and of course f(x) = q̃(x).

Similarly, we have the following.

LEMMA 13. If α is a reciprocal bi-Perron number and p(x) is its minimal polynomial, then p(x) = f(x^k) with f(x) ∈ Z[x] and k ∈ N implies k = 1.

Proof. Here also, the polynomial in question has α as its only root that is of complex norm ||α||. Thus, the argument used to prove the previous lemma applies.

3.2. Proof of Theorem 1: Characterizing SAF-zero pseudo-Anosov maps.

Proof. Suppose that φ is a pseudo-Anosov map with dilatation λ. By the results of Calta-Smillie reviewed in Subsection 2.4, we can assume that φ is an affine diffeomorphism on (X, ω) with matrix part a hyperbolic element of SL_2(k), k = Q(λ + λ^{-1}), and that φ has vanishing SAF-invariant if and only if its stable direction has slope in k. On the other hand, it is obvious that Q(λ) ⊃ k and that λ is a zero of x^2 − (λ + λ^{-1})x + 1 ∈ k[x]. Hence Q(λ) = k if and only if the discriminant of x^2 − (λ + λ^{-1})x + 1 is a square in k, and otherwise there is a proper containment with field extension degree [Q(λ) : k] = 2. However, the discriminant of this quadratic equals (λ + λ^{-1})^2 − 4 = (λ − λ^{-1})^2, and the slopes of the eigendirections of the hyperbolic matrix part lie in k exactly when this discriminant is a square in k. Thus, we find that φ has vanishing SAF-invariant if and only if Q(λ) = Q(λ + λ^{-1}). Our result now follows from Proposition 5.

REMARK 14. Since the stable and unstable foliations of the pseudo-Anosov map correspond to the fixed points of the linear part, it follows from the above proof that either both are of vanishing SAF-invariant, or else neither is.

3.3. Some implications. Recall that if λ is the dilatation of an (orientable) pseudo-Anosov map φ, then we call Q(λ + λ^{-1}) the trace field of φ.

COROLLARY 15. Suppose that φ is an orientable pseudo-Anosov map whose trace field is quadratic. Then φ has vanishing SAF-invariant if and only if the norm of its dilatation is −1.

Proof. The dilatation of φ is a unit, and hence has norm 1 or −1. In the first case, Corollary 7 applies. In the second, the minimal polynomial (being monic) cannot be reciprocal.

REMARK 16. Recall that Kenyon-Smillie [23] showed that if (X, ω) supports an affine pseudo-Anosov map, then the trace field of the map is the trace field of (X, ω). We can thus compare Corollary 15 with McMullen's Theorem A.1 of the appendix in [26]. In our language, McMullen shows that under the hypothesis that the Veech group of (X, ω) is a lattice (which certainly implies the existence of affine pseudo-Anosov maps), the trace field of (X, ω) being quadratic implies that the only directions of flow with vanishing SAF-invariant are those for which the flow is periodic. (By Veech's dichotomy [33], these are the directions in which (X, ω) decomposes into cylinders.)

REMARK 17. We point out that if a pseudo-Anosov map φ is of vanishing SAF-invariant and its dilatation is not totally real, then its trace field is also not totally real. This holds, as vanishing SAF-invariant implies equality of the trace field with the field generated over Q by the dilatation. This can be applied to allow a minor simplification in the existence arguments of [16].
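The next subsection combines Proposition 5 with Lemma 12; both ingredients can be spot-checked symbolically. The sketch below (Python with sympy; an illustration, not from the text) verifies the field-degree criterion of Proposition 5 on the Arnoux-Yoccoz cubic and constructs q̃(x) = x^g q(x + x^{-1}), confirming the factorization q̃ = p · p̃ that appears in the proof of Theorem 18.

from sympy import symbols, Poly, minimal_polynomial, CRootOf, expand, factor

x = symbols('x')

p = x**3 - x**2 - x - 1                       # minimal polynomial of a bi-Perron unit alpha
alpha = CRootOf(Poly(p, x), 0)                # the real root, approx. 1.8393
q = minimal_polynomial(alpha + 1 / alpha, x)
# p is not reciprocal, so Proposition 5 predicts deg q = deg p:
print(Poly(q, x).degree() == 3)               # True: Q(alpha) = Q(alpha + 1/alpha)
qt = expand(x**3 * q.subs(x, x + 1 / x))      # qt(x) = x^g q(x + 1/x)
print(factor(qt))                             # (x**3 - x**2 - x - 1)*(x**3 + x**2 + x - 1) = p * ptilde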
3.4. Every bi-Perron unit has its minimal polynomial dividing the characteristic polynomial of some pseudo-Anosov's homological action. We thank the referee for suggesting the formulation of the following result.

THEOREM 18. Let α be a bi-Perron unit and p(x) its minimal polynomial. Let g be the degree of α. If p(x) is reciprocal, then it is realized as the characteristic polynomial of the action on first integral homology of a pseudo-Anosov map. Otherwise, x^g p(x)p(x^{-1}) is so realized.

Proof. If α is a bi-Perron unit whose minimal polynomial p(x) is reciprocal (and hence of even degree, say 2g), then p(x) is obviously (symplectically) irreducible; it is not cyclotomic, since it has the root α of absolute value greater than one; and that p(x) = f(x^k) is only trivially possible is shown in Lemma 13. Thus, the hypotheses are all satisfied for the Margalit-Spallone construction of [24] to give an explicit pseudo-Anosov element (indeed a full coset of the Torelli group), in the mapping class group of the genus g surface, whose induced action on homology has characteristic polynomial p(x).

If α is a bi-Perron unit of degree g whose minimal polynomial p(x) is not reciprocal, let p̃(x) be the minimal polynomial of α^{-1}. And once again let q(x) be the minimal polynomial of α + α^{-1}, which by Proposition 5 is also of degree g. Let q̃(x) = x^g q(x + x^{-1}). Since both α, α^{-1} are roots of q̃(x), degree considerations give that q̃(x) = p(x)p̃(x). In particular, q̃(x) is symplectically irreducible, as neither p(x) nor p̃(x) is reciprocal. Lemma 12 shows that q̃(x) is not equal to any non-trivial f(x^k). That q̃(x) has no cyclotomic roots is clear, as its only roots are those of p(x), p̃(x), and each of these is an irreducible polynomial with a root that is of absolute value greater than one. Again, the hypotheses are all satisfied for the Margalit-Spallone construction, so that there exist pseudo-Anosov homeomorphisms whose induced action on homology is of characteristic polynomial q̃(x).

REMARK 19. If any of the pseudo-Anosov maps arising in the proof above is orientable, then its dilatation is an eigenvalue of the action on homology. This dominant eigenvalue must then equal α, and we have realized α as a dilatation. However, it is logically possible that all of the pseudo-Anosov homeomorphisms arising in the proof are non-orientable. As recalled in [20], the dilatation of a non-orientable pseudo-Anosov homeomorphism cannot be an eigenvalue for the induced action on homology. Thus, in this case, the pseudo-Anosov homeomorphisms must all have dilatations unequal to α.

3.5. The symplectic polynomial of Birman, Brinkmann, and Kawamuro. The authors of [7] associate to a pseudo-Anosov map φ of dilatation λ a symplectic polynomial s(x) that has λ as its largest real root. They write, "Its relationship to the minimum polynomial of λ is not completely clear at this writing." We give an explanation in the setting that φ is orientable (and defined on a surface without punctures).

THEOREM 20. Suppose that φ is an orientable pseudo-Anosov map on a surface of genus g. Let s(x) be the polynomial associated to φ in [7]. Then s(x) is reducible if and only if either φ has vanishing SAF-invariant or has trace field of degree strictly less than g.

Proof. Let λ be the dilatation of φ and p(x) be the minimal polynomial of λ. Since s(x) ∈ Z[x] is monic and has λ as a root, of course p(x) divides s(x). As well, since s(x) is a reciprocal polynomial, whenever some α is a root of s(x) so also is α^{-1} a root. If s(x) is irreducible then it equals p(x). Thus, p(x) is in particular reciprocal, of degree 2g, so that the trace field has degree g. Therefore, by Theorem 1 the SAF-invariant of φ does not vanish.

Suppose now that s(x) is reducible but symplectically irreducible.
Were p(x) reciprocal, then there would exist some other factor of s(x), but this factor would perforce be reciprocal, contradicting the symplectic irreducibility of s(x). This contradiction shows that in this case p(x) is not a reciprocal polynomial. In particular, the minimal polynomial p̃(x) of λ^{-1} is distinct from p(x). But since λ is a root of s(x), so is λ^{-1}, and hence p̃(x) also divides s(x). Thus q̃(x) = p(x)p̃(x) divides s(x). The existence of any further factor of s(x) would lead to a contradiction of the symplectic irreducibility of s(x). That is, whenever s(x) is reducible but symplectically irreducible it is exactly the product q̃(x) = p(x)p̃(x) and p(x) is not reciprocal. By Theorem 1 the SAF-invariant of φ vanishes.

Finally, suppose that s(x) is symplectically reducible. We have that either p(x) is reciprocal or that q̃(x) = p(x)p̃(x) divides s(x). In either case, there is some other reciprocal factor of s(x). Thus the degree of p(x) or q̃(x) is correspondingly less than 2g, and as the trace field Q(λ + λ^{-1}) has dimension over Q equal to one-half of the degree of p(x) or q̃(x) in these respective cases, we indeed find that the trace field of φ has degree strictly less than g.

REMARK 21. In particular, Example 5.2 of [7] shows that the monodromy of the hyperbolic knot 8_9 leads to an orientable pseudo-Anosov map with s(x) = (x^3 − x^2 + 2x − 1)(x^3 − 2x^2 + x − 1). Here the dilatation λ is the real root of x^3 − x^2 + 2x − 1; the second factor is the minimal polynomial of 1/λ. Using its minimal polynomial, one easily shows that λ equals −(λ + λ^{-1})^2 + 3(λ + λ^{-1}) − 1, implying that indeed Q(λ) = Q(λ + λ^{-1}).

Rediscovering the Arnoux-Rauzy family of H^odd(2, 2). Mimicking the construction of [6], Arnoux and Rauzy [4] constructed an infinite family of IETs, the first two of which Lowenstein, Poggiaspalla, and Vivaldi [21, 22] studied in detail, as these lead to SAF-zero pseudo-Anosov maps. Indeed, by making an appropriate adjustment, Lowenstein et al. renormalized these first two IETs in such a way that each was periodic under Rauzy induction. Each corresponds to a cycle passing through the same 29 vertices in the 294-vertex Rauzy class of 7-interval IETs, and under the Veech construction leads to a pseudo-Anosov homeomorphism. The dilatations of these are the largest roots of x^3 − 7x^2 + 5x − 1 = 0 and x^3 − 10x^2 + 6x − 1 = 0, respectively.

Presumably, Lowenstein et al. intend that one follow their recipe for constructing pseudo-Anosov homeomorphisms for the remainder of the Arnoux-Rauzy family. This seemed somewhat daunting to us. However, we found that one can succeed by adjusting the cycle given by the first Arnoux-Rauzy IET by spinning about certain small cycles. Since the Arnoux-Yoccoz pseudo-Anosov homeomorphism in genus 3 corresponds to an abelian differential in H^odd(2, 2) (for this and much more see [17]), all of these examples (since they arise from the same Rauzy class) are in this same connected component. More precisely, for each k ≥ 1, the path ρ_k = 00001010(111111)^{k−1}1101(00)^{k−1}010100111, starting from the permutation (7354621), gives these maps. (Here and throughout, exponents as in the expression for ρ_k indicate repeated concatenation of the correspondingly grouped symbols.) One then finds the characteristic polynomial p_k(x) of the induced transition matrix for ρ_k; it factors as the product of (x − 1) with a reciprocal sextic. To verify this, break up ρ_{k+1} into five paths corresponding to 00001010, (111111)^k, 1101, (00)^k, and 010100111, and compute their transition matrices. From this, one easily shows that the characteristic polynomial of the associated matrix for ρ_{k+1} is p_{k+1}(x).
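For the two cases spelled out above, the SAF-zero property follows directly from Theorem 1. The sketch below also tests the pattern x^3 − (3k + 4)x^2 + (k + 4)x − 1, which reproduces the two published cubics at k = 1, 2 (this pattern is an extrapolation offered for illustration, not a formula asserted in the text):

import numpy as np
from sympy import symbols, Poly

x = symbols('x')

def is_reciprocal(p):
    c = Poly(p, x).all_coeffs()
    return c == c[::-1]

def largest_root(p):
    coeffs = [float(c) for c in Poly(p, x).all_coeffs()]
    return max(np.roots(coeffs), key=abs).real

for k in (1, 2, 3, 4):
    pk = x**3 - (3*k + 4)*x**2 + (k + 4)*x - 1    # k = 1, 2 give the two cubics above
    # not reciprocal, so a pseudo-Anosov map with this dilatation has vanishing SAF-invariant
    print(k, is_reciprocal(pk), round(largest_root(pk), 4))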
REMARK 22. Erwan Lanneau has informed us that (in unpublished work) he also found this family in a similar manner.

Two known examples in H^hyp(4). Veech [33] constructed an infinite family of translation surfaces with Veech groups that are lattices in SL_2(R). For each n ≥ 5, his construction is to identify, by translation, parallel sides of a regular n-gon and its mirror image. In the case of n = 7, one finds a genus 3 surface with exactly one singularity, of cone angle 10π. Veech shows that the Veech group here is generated by

S = [cos(π/7), −sin(π/7); sin(π/7), cos(π/7)] and T = [1, 2cot(π/7); 0, 1].

In [5], it is pointed out that results on (Rosen) continued fractions of Rosen and Towse [29] imply that on this surface there is a SAF-zero pseudo-Anosov; indeed this is the map, say ψ, of linear part Dψ = TST^{-1}S^{-1}. Explicitly taking a transversal to the flow in the expanding direction for ψ, and following Rauzy induction on the corresponding IET, we found that ψ results from the loop displayed in Figure 2. The primitive matrix associated with this loop has characteristic polynomial (x^3 − 6x^2 + 5x − 1)(x^3 − 5x^2 + 6x − 1), verifying that the SAF-invariant vanishes.

Lanneau's example, given in [27], has as its dilatation the largest root of x^3 − 8x^2 + 6x − 1. We noticed that both ψ and this example correspond to paths passing through the same 15 vertices of the hyperelliptic Rauzy graph of 6-interval IETs. These paths only differ in that Lanneau's has added spins (indeed, the "top right" 1-loop is repeated four times).

H^hyp(2, 2). Motivated by the previous examples, we sought an infinite family of pseudo-Anosov maps with dilatations the largest roots of P_k(x) := x^3 − (2k + 4)x^2 + (k + 4)x − 1. We found such a family, but rather by taking certain paths in the hyperelliptic Rauzy graph of 7-interval IETs. This graph is shown in Figure 3. We find in fact four distinct families and thus new examples of pseudo-Anosov maps with vanishing SAF-invariant. However, we do not prove that they are all distinct and thus content ourselves with the existence statement of Theorem 2. Naturally enough, we describe the paths as starting at the vertex of π = (7, 6, 5, 4, 3, 2, 1).
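Both claims about ψ are easy to sanity-check numerically: the commutator Dψ = TST^{-1}S^{-1} is hyperbolic with trace 2 + 4cos²(π/7), and its dilatation matches the largest root of the first cubic factor, which is the reciprocal mate of the second. A numpy/sympy sketch (an illustration; the trace formula is computed from the matrices above, not a quoted result):

import numpy as np
from sympy import symbols, expand

x = symbols('x')
c, s = np.cos(np.pi / 7), np.sin(np.pi / 7)
S = np.array([[c, -s], [s, c]])
T = np.array([[1.0, 2 * c / s], [0.0, 1.0]])       # 2*cot(pi/7) = 2c/s
D = T @ S @ np.linalg.inv(T) @ np.linalg.inv(S)    # the linear part of psi
print(np.trace(D), 2 + 4 * c**2)                   # both approx. 5.2470: hyperbolic, trace > 2

p1 = x**3 - 6*x**2 + 5*x - 1
p2 = x**3 - 5*x**2 + 6*x - 1
print(expand(-x**3 * p1.subs(x, 1 / x)) == p2)     # True: p2 is the reciprocal mate of p1
theta = max(np.roots([1, -6, 5, -1]), key=abs).real
print(theta, max(abs(np.linalg.eigvals(D))))       # both approx. 5.0489: the dilatation of psi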
Global learning: Educational research in an emerging field

Global learning may be understood as an educational response to the development towards a world society. The development of world society is accompanied by a wide range of adaptation challenges, such as the development of global social justice, the overcoming of paternalism, the facilitation of social solidarity, and dealing with migration in an era of climate change. This paper reflects on how an understanding of world society is learned, drawing on empirical studies. The paper shows some challenges for the research agenda, especially concerning the Organisation for Economic Co-operation and Development's framework of global competences, and suggests a framework for further research.

Global developments and the concepts of world society

Key topics such as climate change, population growth and global migration, consumption of natural resources, plastic waste in the oceans and the diminishing ice of the Arctic describe some of the more dramatic dimensions of global developments. An important feature of this development is the close interconnectedness of geographical spaces: developments in one part of the world have consequences for others. The CO2 emissions from industrial countries have consequences for tropical forests (Fearnside, 2013; Grainger, 2017). The plastic fibres from the washing of fleece pullovers in Europe are found in fish caught in the Mediterranean (Nadal et al., 2016). According to a study by the Ifo Institute for Economic Research in Munich (Felbermayr et al., 2019), the economic consequences of an unregulated Brexit will be felt most strongly in the Republic of Ireland and Malta, countries that were not involved in this decision. These are just a few examples of spatial interdependencies in an increasingly interconnected world society. To understand learning about the world society, it is particularly important to understand the social processes related to these developments.

The reflection of these challenges has a long tradition in philosophy, and not only in educational philosophy; one of the oldest examples may be the pansophy of Comenius. I mention some of those thinkers with resonance in education and educational research who dealt with the new quality of interconnected spaces and its consequences for understanding and reflection. In 1784 the philosopher Immanuel Kant (1784) pointed out this dimension of sociality in his 'Idea for a general history with a cosmopolitan intention'. He reflected on how people could deal with the challenges associated with cosmopolitanism. He considered this a learning challenge because, in his opinion, people tend to be egoistic and socially cold towards those with whom they are not personally acquainted. At the same time, it is inevitable that people deal and work together with one another. He therefore speaks of an 'unsociable sociability' of man. In modern society, coexistence can no longer be organized based on the idea of a small group with emotional closeness, or on the model of belonging together as family, but requires an abstract social order for coexistence and the development of freedom. In his work, he points out the necessity of reflecting on structures of living together in order to guarantee freedom. The social philosopher Niklas Luhmann described in the 1970s how today's society is to be understood, in terms of its character, as a world society, since social communication today is no longer possible independently of world-societal contexts.
His central thesis suggests that every society today exists as world society because it is always part of a global context (see Luhmann, 1975; 1997: 806-812). World society does not have the form of a state or a world organization, but consists of the sum of social, political and cultural diversity and its interdependence. In this way, worldwide coexistence and the need for a global understanding of the world become inevitable. Every human being is a part of it and included in very specific dimensions and segments. However, the awareness of being globally interwoven is not easy to reach. Moreover, participation and inclusion in it are very unequal. Since global social interdependencies have very noticeable effects almost everywhere, this new social quality must be dealt with. Luhmann's important message is to understand the mechanism of communication and to understand world society as a specific form of communication. Jacques Derrida (1994) calls this kind of order 'beyond the principle of brotherhood' (10). His thinking reminds us of the challenges of emotions and the danger of othering and of setting hierarchical differences, and he addresses the challenges of belonging in a global world. The British sociologist Roland Robertson (1998) saw in this a new form of spatial experience, which he denoted by the term 'glocality'. He worked out the challenges of working for the 'global common good' in this setting and related his thinking to an understanding of the complexity of risk management in a glocal society, as pointed out by Ulrich Beck (1992).

In the tradition of these thinkers, the understanding of a world society becomes clearer. It is not a bigger family; it is not a real state; it is not a situation of direct elections and of problem-solving related to a single actor. The world society has very weak decision-making structures, high complexity and multilateral actors in problem-solving. The world society has only very few guiding principles, such as the Universal Declaration of Human Rights. Against this background, the learning challenge is to learn about abstract social relations in an abstract social space (for details, see Asbrand and Scheunpflug, 2014). This means learning to act in a social context marked by unknown complexity and structural uncertainty. Things are losing their anchor in space, and belonging is losing its function of relating people to distinct entities. This results in learning tasks with regard to identity, the overcoming of paternalism, the perception of challenges of global social justice, and the ability to reflect on how social solidarity can be shaped in a global spatial context (Scheunpflug, 2004).

All these developments are a great challenge for human beings, who are local beings due to their Stone Age past and imprints (Scheunpflug, 2001, 2007). Human beings evolved for life in families and communities of sensory closeness. That is why people learn much more easily everything that can be experienced by the senses, that is, what takes place within close range. At the same time, however, people are gifted (and this distinguishes them from all other living beings) with the ability to use their reason, to learn and to solve problems. People have a high capacity for abstract reflection, which also enables them to learn abstract contexts. One such example is the innate ability to count and to think mathematically, which in human history has been cultivated, taught and learned, with considerable effort, to a high level.
If people can learn to think in mathematical categories, they can also learn to cultivate their coexistence in a global context. However, this also requires a clear intellectual effort (Scheunpflug, 2007; Schmidt, 2009). I call such learning 'global learning', using the definition of the Maastricht Declaration of the GENE/European Council from 2002: 'Education, that opens people's eyes and minds to the realities of the world, and awakens them to bring about a world of greater justice, equity and human rights for all' (Maastricht Global Education Declaration, 2002; Nygard and Wegimont, 2018; see also other scholars such as Bourn, 2014, 2018; Lehtomäki, 2019; Lehtomäki et al., 2016, 2017; Räsänen, 2009; Tarozzi and Mallon, 2019). This concept is seen as an umbrella concept of 'global citizenship education' (cf. Grobbauer, 2014; Shulz, 2010; UNESCO, 2015) and related concepts, pointing out the necessity to reflect on human relations in a globalized world as the foundation of solving problems. My hypothesis is that the understanding of the character of this world society as an abstract social space on a real planet is of high importance for the way people act.

Global learning in the framework of the United Nations Educational, Scientific and Cultural Organisation (UNESCO) and the Organisation for Economic Co-operation and Development (OECD)

Even if global learning has not, until now, been a huge field of research, a first meta-study was carried out by Wiek et al. (2011), reflecting 43 concepts of competencies in education for sustainability and global learning. With their study, the authors showed that systems-thinking competence, anticipatory competence, normative competence, strategic competence and interpersonal and intercultural competence are seen as the core competencies reflecting global interconnectedness. The international Delphi study by Marco Rieckmann (2010, 2012) supported these findings by integrating scholars from the global south. This research had a tremendous impact, as it was the foundation for UNESCO (2015) to develop a 'Framework on global citizenship education', which is now used in curricula around the world.

The mentioned meta-study (Wiek et al., 2011) also had an impact on the OECD in the ongoing research on 'Global competencies'. The framework of the OECD combines the meta-study and the findings of intercultural learning, especially in the United States. Under the title 'Preparing our youth for an inclusive and sustainable world', the OECD (2018) describes four dimensions of global competencies:
• the capacity 'to examine issues and situations of local, global and cultural significance' (OECD, 2018: 9) ('e.g. poverty, economic interdependence, migration, inequality, environmental risks, conflicts, cultural differences and stereotypes' (OECD, 2018: 8));
• the capacity to understand and appreciate the perspectives and world views of others;
• the capacity to engage in open, appropriate and effective interactions across cultures; and
• the capacity to take action for collective well-being and sustainable development.

This framework, which guides the assessment of the OECD, focuses on knowledge about the problems of a global world, but does not emphasize the special character of the world society as 'unsociable sociability' and its consequences for learning and teaching. However, I suggest that the underlying understanding of the character of world society drives actions in this field. Therefore, research needs to take the understanding of world society into account.

An empirical approach to global competences

To reflect on these challenges, I want to present the results of three empirical studies from my research group. All three deal with the challenges of global learning.
In order to allow the reflection of the learning process, the research is situated in different global contexts. The studies focus on the central orientations which guide action in the field, as these action-guiding orientations are at the heart of the learning process (for the method, see Scheunpflug et al., 2016):
• Study 1 (Wagener, 2018a, 2018b) deals with the learning processes of German students who are committed in their school to sponsoring children in the global south by raising money monthly for one of these children, either using their own pocket money, or through fundraising campaigns or by selling cakes and other items.
• Study 2 (Krogull, 2018; Krogull and Scheunpflug, 2013) surveys the orientations of young people who have taken part in encounter trips to a country of the global north or the global south. The trips took place half a year to two years before the data collection. The study includes students from three countries, Rwanda, Bolivia and Germany.
• Study 3 (Richter, 2018) focuses on the quality of learning processes for German young people taking part in voluntary work in the global south. The experiences of the returned volunteers were collected by interviews.

For this paper, the findings of the single studies were brought together once again in a process of abduction (see, for a first approach, Wagener and Krogull, 2018). We found that even with a high level of cognitive global knowledge and many global experiences, there were very different forms of understanding 'glocal' connectedness and the specifics of 'unsociable sociability'. In the following, the findings are presented in brief and the complexity of the results is considerably reduced (in comparison to the original studies); in addition, all transcripts quoted are edited for reasons of comprehensibility. Through the abduction process, the perception of the other, the localization of differences, the handling of knowledge and the motivation for action emerged as important aspects structuring the field. This structure yielded three types, which I will describe in the following.

Understanding the world society as vicinity by adding neighbourhoods

In child sponsorship, student exchange and voluntary service, orientations emerge which transform new experiences of strangeness into a new local area. Students integrate the new experiences into a new vicinity and, in this way, the world is constituted by added neighbourhoods. In all three studies, groups and individuals whose understanding of world society is shaped by this type show an orientation towards asymmetries in the perception of others. Thus, young people from Germany perceive themselves in a giving role, whereas the partners in the global south are in a receiving role. The situation of economic inequality superimposes itself on all other experiences. For example, German pupils in upper secondary schools clearly feel superior to experienced teachers in the southern countries: 'They are not only dependent on the money, but also on the help, on the knowledge that comes from Europe . . . they have teachers, but they are in no way as far as our secondary students are now' (Krogull, 2018: 141). Conversely, the young people we examined from the participating countries of the south felt that they were inferior. The learning arrangement thus led, in all three studies, to an increase in the self-esteem of the young people from the global north.
In the following interview, a participant shows his pride at being asked to contribute to the teaching in a school (being himself still a student): 'in school Heinrich-Böll-Foundation and so on, they all said whether we did not want to do something there' (Krogull, 2018: 142). The increase in self-esteem is also expressed explicitly in part: 'so for example . . . as a white man you are simply THE person and are adored like such a little god, and I mean in the beginning it's all so nice and good and your ego also rises (laughs)' (Richter, 2018: 20).

In this type, the localization of differences is seen in one's own everyday culture: eating here and there, living as a student here and there, the cityscape here and there, the climate here and there, and so on: 'And another thing . . . the markets of the Germans, that is different from here in Rwanda; when you go to the market here, you ask for the price of items and you answer the money you have; but in Germany, for example, the clothes; there were the prices and you pay directly; but here it is not like this; you demand discussing' (Krogull, 2018: 95).

The new world becomes the new neighbourhood and the sponsored child becomes like one's own child: 'So this is like our little child somehow, our own baby' (Wagener, 2018: 6). The newly experienced everyday culture is identified and related to one's own experiences. In this way, one makes the living environment of other people partly one's own. In this way, globality develops as the addition of vicinities. In all three samples, the understanding of world society is oriented towards the expectation of being able to encounter the world in all dimensions authentically. Encounter trips and voluntary services are of course situations of direct contact; the situation of supporting a child through regular donations reflects the desire to help another person directly. Authenticity becomes very important. In the following example, the group discusses whether the child they sponsor has written a card by herself:

A: Somebody helped her.
B: Yes, as she cannot yet write very well!
A: Yes.
C: She does not know it very well, but she knows it.
D: I think the letter was from her but not the writing as this was not her scripture, she did not write this was somebody else.
B: Yes, but she made the card by herself. (Wagener, 2018: 103)

In this exchange, we observe a struggle for authenticity. The students orient their solidarity towards the fact that there is an authentic counterpart and a really existing relation. For the students it is central that they experience authenticity. The related action is motivated by charity.

Understanding the world society as community

We found groups which did not focus on this kind of vicinity, but identified communities of global belonging. Here is the example of one group from Rwanda: 'Before leaving, I asked myself how would life look like, how would people approach us? We do not speak the same language; we come from very different cultures, so I asked myself many questions. But I met the contrary: these families where we lived had really been Christian - Christian families' (Krogull, 2018: 112).

Others built on the communities of football clubs, music or youth organizations. Common to all is a kind of global belonging related to an understanding of community. Those groups reflected on the new experiences and the new knowledge gained through identification with the community. Action in the global world was motivated by solidarity.
Understanding the world society as abstract social space

We also found young people who showed that, beyond their immediate experiences, they are interested in the structures of the world society, showing orientations that do not devalue the identity of others and discussing aspects of global justice. These young people can describe their experiences and reflect on them by talking about forms of organization of societies, tax systems and legal regulations; they reflect on what the effects of these look like, and they think about what constitutes a society: 'I really believe that I have had a very important experience in what a new concept of order is (.) An order in society, order in behaviour, (.) order of (.) a whole society' (Krogull, 2018: 129). Young people of this type show an orientation towards perceptions of others which do not depreciate other people. They are able to classify individual situations and translate them into reflexive actions with regard to underlying principles. Their forms of action are motivated by global participation.

Table 2 summarizes the findings. People of the type 'addition of vicinities' understand the world society as an addition of things they know from their own experience. They create asymmetries with people they meet, either in paternalism or in subordination. The complexity of the world society is reduced by an authenticity-oriented approach, and the resultant action is charity. People of the type 'community' understand the world society as a community of people with the same background. They learn in organizations of a shared character, like churches, music bands or football clubs. The complexity of the world society and of the related knowledge is reduced by the identification with others (football players like me, workers like me, etc.), and the resulting action is solidarity. People who understand the world society as an abstract social space understand the underlying structures and principles. They deal with the complexity by using self-reflection and act through global participation.

Only those who understand the world society as an abstract social space would correspond to what I described at the beginning, against the background of the theoretical considerations indicated, as global competence: being able to deal with the challenges of the planet on an equal footing with people from other parts of the world. These findings show that a world-societal learning setting does not naturally lead to a perspective of joint work on the outstanding problems of this world, but that there is a danger that structures or ascriptions of supposed superiority or inferiority are established, and possibly cultivated rather than worked on.

To summarize the observations from the three studies, one can state that knowledge about globality and personal experience of globalization do not automatically lead to global competence in the sense of an understanding of symmetrical 'unsociable sociability', or an understanding of 'abstract sociality'. The learning which evolved in this context was independent of the age of the participants, the north-south context and their educational background. In educational terms, this raises the question of what a concretion of abstraction can look like; that is, how such complex contexts can be conveyed and experienced in simpler learning constellations. The question arises of how the desire for personal encounter and authenticity can be brought together with 'abstract sociality' so that the social background of emergencies as well as human rights aspects can be taken into account.
People with a world-societal orientation towards 'unsociable sociability' all had the experience of active social participation with an international component. The experiences of these young people were connected with guided self-reflection. These young people did not learn by experience alone, and they did not learn by knowledge alone; rather, in all the reports of the young people surveyed, the connection of knowledge and experience with their own biography and their own self plays a central role. The combination of these three aspects (experience, knowledge, biography) resulted in learning constellations in the direction of world-societal understanding. However, the format of donations for children in the global south was linked to orientations that lead to a considerable distance from the topic. From the three studies, we do not know whether the three types build on each other hierarchically and in sequence, in the sense of a progressive level of competence, or whether they develop in a different form. The findings of my study confirm other studies in this field that highlight the risks of consolidating paternalism and emphasize opportunities for participation (Asbrand, 2009), even in teaching situations (Kater-Wettstädt, 2015).

World society and 'abstract sociality': summary and outlook

My central thesis, which I outline in this contribution, is that world society has a structural logic of its own and can only be grasped inadequately with the categories of social groups from the local area or national societies available to us so far. World-societal structures in their complexity and interdependence demand an even higher abstraction in order to understand and influence them. This has led to my position that global competencies must be founded on an understanding of 'abstract sociality' if they are to help master today's challenges facing the world society.

The OECD will present the results of the survey on 'Global competences' at the end of the year. The research concept has already been criticized, among other things for its narrowly economic objective, the lack of clear operationalization of the associated competencies and the lack of a social framework for data collection (Auld and Morris, 2019; Conolly et al., 2019; Grotlüschen, 2018; Sälzer and Roczen, 2018; see also, in general regarding the challenges of measuring global learning, Scheunpflug, 2020). The operationalization of 'global competences' by the OECD is likely to miss 'abstract sociality' as I have described it, because it draws the focus into the realms of proximity. The study uses as indicators personal contacts with people from other countries and an interest in getting to know people from other countries. Items such as 'in our school we celebrate festivities from other cultures' (OECD, 2018) I would not see as an indicator of global understanding, but as a form of othering or a low level of global consciousness. The results of our research provide an impetus to explore whether the OECD survey questionnaire is sufficiently focussed on the interplay of informal learning opportunities, self-reflection opportunities and participation opportunities. Understanding the acquisition of global competences, recording it empirically and then looking at pedagogical practice on an evidence-based basis against this background is likely to prove more complex than expected.
Drawing on these results, I see the following aspects for further studies and surveys:
• Research on how these global competencies develop is of importance. What steps are necessary to build a competent global understanding? In what way should local experiences and an abstract understanding of global social structures intertwine? How exactly do experiences and the dimension of experience relate to knowledge, understanding and reflection?
• The informal framing of the learning of global competencies should be taken into account accordingly. The interplay between formal learning and informal learning seems to be very important.
• It would indeed make sense to substantiate the findings described previously in quantitative studies. It is important to take the collection of value attitudes into account, in order to enhance the understanding of promoting symmetrical understanding. However, value attitudes such as paternalism, for example, are difficult to examine in quantitative studies, as respondents may easily recognize which answers are socially desirable.
• Research on 'abstract sociality' must focus more than before on the forms of new social spaces and networks created by the Internet and social media. Social media offer completely new forms of spatial experience and proximity, but also pose the danger of unreal and virtual experiences that are detached from real living conditions.
Study of weak solutions for parabolic variational inequalities with nonstandard growth conditions

In this paper, we study the degenerate parabolic variational inequality problem in a bounded domain. First, the weak solutions of the variational inequality are defined. Second, the existence and uniqueness of the solutions in the weak sense are proved by using the penalty method and the reduction method.

Introduction

This article is concerned with the initial-boundary value problem (1), with Lu = u_t − div(a(u)|∇u|^{p(x,t)−2}∇u) − f(x, t) and a(u) = u^σ + d_0, where Ω is a bounded simply connected domain, Q_T = Ω × (0, T], and Γ_T denotes the lateral boundary of the cylinder Q_T. This type of variational inequality was studied initially by Chen and Yi [1], who proposed the model (2), with initial condition V(x, 0) = g(x), for modeling the American option. When r and σ are positive constants, the existence and uniqueness of solutions to problem (4) were also studied in [2, 3, 4]. In 2014, the authors in [5] discussed the problem with a second-order elliptic operator. They proved the existence and uniqueness of a solution to this problem under some conditions on u_0, F, and L. Later, the authors in [6, 7] extended the related conclusions under the assumption that a(u) and p(x) are positive constants, and discussed the existence and uniqueness of a solution by the penalty method. The existence and uniqueness for such problems when p(x) and a(u) are variable have been less studied. The aim of this paper is to study the existence and uniqueness of solutions for a degenerate parabolic variational inequality problem.

Throughout the paper, we assume that the exponent p(x, t) is continuous in Q = Q̄_T with logarithmic modulus of continuity, |p(z_1) − p(z_2)| ≤ ω(|z_1 − z_2|), where lim sup_{τ→0^+} ω(τ) ln(1/τ) = C < +∞.

The outline of this paper is as follows. In Section 2, we introduce the function spaces of Orlicz-Sobolev type, give the definition of a weak solution to the problem, and prove the existence and uniqueness. Section 3 is devoted to the proof of the existence and uniqueness of the solution obtained in Section 2.

Basic spaces and the main results

To study our problems, let us introduce the Banach spaces V_t(Ω) and W(Q_T) of Orlicz-Sobolev type, consisting of functions u ∈ L^2 with |∇u|^{p(x,t)} ∈ L^1 that vanish on the lateral boundary, and denote by W'(Q_T) the dual of W(Q_T) with respect to the inner product in L^2(Q_T). In the spirit of [3] and [4], we introduce a maximal monotone graph G and define the corresponding function class for the solution; a weak solution is a pair in this class for which the defining integral identity holds (Definition 2.1). The main theorem in this section is the following: suppose that p(x, t) satisfies the conditions above, including (4), and that the structural conditions on the data hold; then problem (1) has at least one weak solution in the sense of Definition 2.1.

Proof of the main results

In this section, we consider the family of auxiliary parabolic problems (5). Here, M is a positive parameter to be chosen later, and β_ε(·) is the penalty function. Following a similar method as in [6], we can prove that the regularized problem has a unique weak solution satisfying the corresponding integral identities. We start with two preliminary results that will be used several times.

Lemma 3.3. Let u_ε be weak solutions of (5). Then u_{0ε} ≤ u_ε ≤ |u_0|_∞ + ε in Q_T, and the comparison (19) between solutions for different values of ε holds.

Proof. First, we prove u_ε ≥ u_{0ε} by contradiction. Assume that u_ε ≤ u_{0ε} in Q^0_T, Q^0_T ⊂ Q_T. Noting that u_ε ≥ u_{0ε} on ∂Q_T, we may assume that u_ε = u_{0ε} on ∂Q^0_T. With (5) and letting t = 0, we deduce from Lemma 3.2 a contradiction. Second, we turn to the bound u_ε(x, t) ≤ |u_0|_∞ + ε.
Applying the definition of $\beta_\varepsilon(\cdot)$ and using (5), it is easy to prove that $u_\varepsilon(x,t) \ge \varepsilon$ on $\partial\Omega \times (0,T)$ and $u_{0\varepsilon}(x) \ge \varepsilon$ in $\Omega$. Thus, combining (21) and (23) and repeating the argument of the first step, we obtain the bound. Third, we aim to prove (19). It follows from $\varepsilon_1 \le \varepsilon_2$ and the definition of $\beta_\varepsilon(\cdot)$ that the required inequality holds. Thus, Lemma 3.3 is proved by combining the initial and boundary conditions in (5). From [6] we can obtain the following inclusions. These conclusions, together with the uniform estimates in $\varepsilon$, allow us to extract from the sequence $\{u_\varepsilon\}$ a subsequence (for simplicity, we assume that it merely coincides with the whole sequence) converging to some functions $u \in W(Q_T)$, $A_i(x,t) \in L^{p'(x,t)}(Q_T)$, and $W_i(x,t) \in L^{p'(x,t)}(Q_T)$. Hence, (48) holds, and the proof of Lemma 3.10 is completed. Applying (28), (29), and Lemma 3.10, it is clear that $u(x,t) \le u_0(x)$ in $Q_T$, $u(x,0) = u_0(x)$ in $\Omega$, and $\xi \in G(u - u_0)$, and thus (a), (b), and (c) hold. The remaining arguments of the existence part are the same as those of Theorem 2.1 in [8], and we omit the details. Moreover, the uniqueness of solutions can be proved by repeating Lemma 3.1.
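For concreteness, a standard choice of penalty function used in such penalty-method arguments is sketched below; this is an illustrative assumption, not the paper's exact definition, which is not reproduced in the text.

```latex
% A standard penalty function \beta_\varepsilon for the obstacle constraint
% u \ge u_0 (illustrative assumption, not the paper's exact definition):
\beta_\varepsilon \in C^2(\mathbb{R}), \qquad
\beta_\varepsilon(s) = 0 \quad \text{for } s \ge \varepsilon, \qquad
\beta_\varepsilon(s) \le 0, \qquad
\beta_\varepsilon'(s) \ge 0, \qquad
\beta_\varepsilon''(s) \le 0, \qquad
\beta_\varepsilon(0) = -1.
% As \varepsilon \to 0^+, the term \beta_\varepsilon(u_\varepsilon - u_{0\varepsilon})
% vanishes where the constraint holds and enforces u \ge u_0 in the limit.
```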
Pemetrexed Plus Platinum for Patients With Advanced Non-small Cell Lung Cancer and Interstitial Lung Disease Background/Aim: Pemetrexed plus platinum followed by pemetrexed maintenance has been one of the standard first-line treatments in advanced nonsquamous non-small cell lung cancer (NSCLC), but little is known regarding its safety and efficacy for patients with interstitial lung disease (ILD). Patients and Methods: The medical records of 24 patients with advanced nonsquamous NSCLC and preexisting ILD who received pemetrexed and platinum doublet therapy with and without pemetrexed maintenance in the first-line setting between December 2009 and June 2016 were retrospectively reviewed. Results: The median progression-free survival time was 4.7 months, and the median overall survival time was 9.5 months. Of the 24 patients analyzed, six received pemetrexed maintenance. Acute exacerbation of ILD (AE-ILD) occurred in five (20.8%) of 24 patients, including two fatal cases. Conclusion: Treatment with pemetrexed plus platinum carries a high risk of AE-ILD in patients with advanced nonsquamous NSCLC and preexisting ILD. Recently, treatment of patients with advanced lung cancer, especially nonsquamous non-small cell lung cancer (NSCLC), has been improving and diversifying. Molecular target-based therapies such as epidermal growth factor receptor tyrosine kinase inhibitors (EGFR-TKIs) and anaplastic lymphoma kinase tyrosine kinase inhibitors (ALK-TKIs) have significant efficacy in patients with nonsquamous NSCLC having the corresponding driver mutations (1,2). Immune checkpoint inhibitors (ICIs) have also benefited patients with advanced NSCLC, especially those with high PD-L1 expression on tumor cells (3). However, a higher incidence of drug-related ILD was reported in patients treated with TKIs and ICIs compared to those treated with cytotoxic agents (3)(4)(5). Preexisting ILD is a risk factor for drug-related ILD, and this risk is considered to be higher when using EGFR-TKIs or ICIs compared with cytotoxic chemotherapy. Therefore, in clinical practice, most physicians avoid these targeted therapies and immune checkpoint therapies for NSCLC patients with preexisting ILD and instead select cytotoxic platinum-based chemotherapy. However, optimal chemotherapy using other agents has not yet been established. Pemetrexed has been a key drug in the treatment of nonsquamous NSCLC. Pemetrexed and cisplatin combination chemotherapy has been one of the most evaluated regimens in the first-line setting for patients with advanced nonsquamous NSCLC (6). Pemetrexed is even used in the maintenance setting because of its efficacy and low cumulative toxicity. The PARAMOUNT study revealed that in patients with advanced nonsquamous NSCLC, continuation of maintenance therapy with pemetrexed following induction therapy with pemetrexed and cisplatin significantly improved progression-free survival (PFS) and overall survival (OS) (7). Similarly, pemetrexed plus carboplatin followed by pemetrexed maintenance has been recognized as a standard regimen for nonsquamous NSCLC and is widely used in clinical practice (8). However, little is known about the safety and efficacy of combination chemotherapy of pemetrexed plus platinum with and without pemetrexed maintenance for patients with nonsquamous NSCLC and preexisting ILD.
Thus, we conducted a retrospective study to determine whether combination chemotherapy of pemetrexed plus platinum with or without pemetrexed maintenance is a feasible treatment option in nonsquamous NSCLC patients with ILD. Patients and Methods. The medical records of patients with advanced nonsquamous NSCLC and preexisting ILD who received pemetrexed plus platinum doublet chemotherapy as the first-line treatment at Funabashi Municipal Medical Center between December 2009 and June 2016 were retrospectively reviewed. ILD was identified by clinical features and computed tomography (CT) images obtained prior to chemotherapy. The presence of ILD was evaluated by at least two pulmonologists, and the preexisting ILD patterns were divided into usual interstitial pneumonia (UIP) pattern and non-UIP pattern in accordance with the International Consensus Statement (9). Pemetrexed and platinum (cisplatin or carboplatin) were administered on day 1 every three weeks. After four or six cycles, some of the patients without disease progression underwent maintenance therapy with pemetrexed every three weeks. Pemetrexed and cisplatin were administered at doses of 500 mg/m2 and 75 mg/m2, respectively. Carboplatin was administered at a dose corresponding to an area under the curve (AUC) of 5. Clinical evaluation and adverse events. The overall response rate (ORR) and the disease control rate (DCR) were assessed based on the Response Evaluation Criteria in Solid Tumors (RECIST) guidelines (10). Adverse events were graded according to the Common Terminology Criteria for Adverse Events, version 4.0. AE-ILD was defined on the basis of CT showing new bilateral ground-glass opacities and/or consolidation superimposed on a background reticular pattern or honeycombing. Cases with other diseases such as congestive heart failure, apparent pulmonary infection, and pulmonary embolism were excluded (11). In addition, the imaging findings of AE-ILD were classified into diffuse alveolar damage (DAD) type or non-DAD type based on the consensus statement of the Japanese Respiratory Society (12). Statistical methods. PFS was defined as the duration between the start of treatment and the date of disease progression or death from any cause. OS was measured from the start of treatment until death or the last follow-up examination. Event times were estimated using the Kaplan-Meier method. Results. Patient characteristics. The baseline characteristics of the 24 patients are summarized in Table I. Two patients were women, and the median age was 70 years (range=56-80 years). One patient, who had rheumatoid arthritis, was a never-smoker and was diagnosed with interstitial pneumonitis associated with rheumatoid arthritis. All patients except this one were current or past smokers, and their ILD was diagnosed as idiopathic interstitial pneumonitis. Two patients had an Eastern Cooperative Oncology Group performance status (PS) of 2, and the others had a PS of 0 or 1. Eight patients had stage III disease and the others had stage IV or recurrent disease. The median % vital capacity (%VC: VC/predicted VC) was 91.2% (range=61.0-122.2%), and the median % forced expiratory volume in one second (%FEV1: FEV1/predicted FEV1) was 93.5% (range=60.8-105.6%). A total of 16 patients also had pulmonary emphysema, and four were diagnosed as having COPD (FEV1/forced VC ratio <0.7). Ten patients received cisplatin and the remaining 14 received carboplatin. Based on the pretreatment CT scan of the chest, UIP and non-UIP patterns were observed in two (8.3%) and 22 (91.7%) patients, respectively.
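As a concrete illustration of the statistical methods above, the sketch below shows how median PFS could be estimated with the Kaplan-Meier method using the open-source lifelines library; the durations and event flags are made-up example values, not the study data.

```python
# Kaplan-Meier estimation of median PFS (illustrative values, not study data).
from lifelines import KaplanMeierFitter

# Months from treatment start to progression/death; event flag is 1 if the
# event was observed and 0 if the patient was censored at last follow-up.
pfs_months = [2.1, 3.0, 4.4, 4.7, 6.5, 8.2]
pfs_events = [1,   1,   1,   1,   1,   0]

kmf = KaplanMeierFitter()
kmf.fit(pfs_months, event_observed=pfs_events, label="PFS")

print(kmf.median_survival_time_)  # median PFS in months
kmf.plot_survival_function()      # survival curve (cf. Figure 1a)
```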
Among patients who had not progressed after four cycles of pemetrexed and platinum doublet therapy, there were no significant differences between the maintenance group (n=6) and the nonmaintenance group (n=12). Treatment exposure and efficacy. The median number of treatment cycles of pemetrexed and platinum doublet therapy was 4 (range=2-6 cycles). In the 24 patients analyzed, the ORR was 33.3% and the DCR was 80% for pemetrexed and platinum doublet therapy (Table II). The median PFS and the median OS of all patients were 4.7 months and 9.5 months, respectively (Figure 1a, b). The one-year survival rate was 24.2%. Of the 18 patients who did not progress after four cycles of pemetrexed and platinum doublet therapy, six received pemetrexed maintenance therapy. In the 18 patients receiving only the doublet therapy, the median PFS and the median OS were 4.4 months and 9.5 months, respectively. The median number of pemetrexed maintenance cycles was 2 (range=2-4 cycles). The median OS of the maintenance group was 9.2 months (95%CI=0-26.5 months) compared with 10.0 months (95%CI=8.8-11.4 months) for the nonmaintenance group (p=0.197). Toxicity. The adverse events of pemetrexed and platinum doublet therapy are summarized in Table III. The most frequently reported hematological adverse events of grade ≥3 were anemia (6/24, 25%) and thrombocytopenia (25%). Two patients developed grade 4 neutropenia, but no patients experienced febrile neutropenia. Regarding nonhematological adverse events, the most frequent was anorexia, in seven patients (29%). There were no adverse events of grade ≥3 except AE-ILD and respiratory failure during pemetrexed maintenance therapy. AE-ILD and treatment-related death. Chemotherapy-related AE-ILD was observed in three patients during pemetrexed and platinum doublet therapy and in two patients during pemetrexed maintenance therapy. Treatment-related deaths occurred in one patient (AE-ILD) during the doublet therapy and in two patients (AE-ILD: 1, respiratory failure: 1) during the maintenance therapy. The CT findings showed a DAD pattern in only one fatal AE-ILD case during the doublet therapy and a non-DAD pattern in the other AE-ILD cases. During subsequent therapies, AE-ILD developed in five patients who received docetaxel or S-1, including two fatal cases. Discussion. Pemetrexed, an anti-folate that is structurally similar to methotrexate, is known to cause ILD in patients with rheumatoid arthritis (13). Recently, Tomii et al. reported that the incidence of pemetrexed-related ILD was 1.8% (12/683 patients) in Japanese NSCLC patients (14). They argued that the risk of pemetrexed-related ILD may be at a level similar to that of other cytotoxic agents and that pemetrexed-induced ILD responded well to steroid treatment. On the other hand, Kato et al. reported that pemetrexed-induced AE-ILD occurred in three (12%) of 25 patients with NSCLC and preexisting ILD who were treated with pemetrexed monotherapy, and all three patients died from the drug-related ILD (15). Therefore, there is a need to carefully assess whether pemetrexed-containing regimens can be feasible treatment options as first-line chemotherapy for nonsquamous NSCLC patients with preexisting ILD. Paclitaxel-containing regimens are the most common and best-evaluated first-line regimens for advanced NSCLC with preexisting ILD (16)(17)(18)(19). Minegishi et al.
have conducted a prospective study of paclitaxel plus carboplatin doublet chemotherapy for advanced NSCLC with preexisting ILD, and showed that the median PFS and the median OS were 5.3 months and 10.6 months, respectively (16). The present study showed that the median PFS and the median OS were 4.7 months and 9.5 months, respectively, suggesting that the efficacy of pemetrexed plus platinum doublet chemotherapy followed by pemetrexed maintenance may not be superior to that of paclitaxel plus carboplatin doublet chemotherapy. The incidence of AE-ILD with paclitaxel plus carboplatin doublet chemotherapy has been reported to be 5.6% (1 in 18 patients) in a prospective study (16) and 7.9% (5 in 63 patients) in the largest retrospective study, by Kenmotsu et al. (17). On the other hand, limited information is available regarding the role of first-line regimens containing pemetrexed for lung cancer patients with preexisting ILD. Choi et al. retrospectively analyzed 52 patients with advanced NSCLC and preexisting ILD treated with gemcitabine or pemetrexed plus platinum (20). They reported that the incidence of AE-ILD was 5.8% (3 in 52 patients) and concluded that pemetrexed plus platinum could be a feasible regimen for NSCLC with preexisting ILD. However, two of the three AE-ILD cases in their study received pemetrexed plus cisplatin doublet chemotherapy and both died from AE-ILD, meaning that the incidence of AE-ILD in patients treated with pemetrexed plus platinum was 15.4% (2 in 13 patients). In the present study, AE-ILD occurred in five (20.8%) of 24 patients with ILD who received platinum plus pemetrexed with or without pemetrexed maintenance, with three cases and two cases of AE-ILD occurring during the doublet phase and the maintenance phase, respectively. Among the five patients, two died due to AE-ILD. These findings suggest that the incidence of drug-related AE-ILD with chemotherapy containing pemetrexed may be higher than that with paclitaxel plus carboplatin doublet chemotherapy, and that pemetrexed-containing regimens can cause fatal ILD with high frequency in patients with preexisting ILD. This study has several limitations. First, the number of patients was small. Second, it was a retrospective analysis performed at a single institution, and larger-scale studies are needed. Third, the diagnosis of ILD and AE-ILD was based on CT findings and not on histological examination. Considering the high frequency of AE-ILD with chemotherapy containing pemetrexed in patients with preexisting ILD, pemetrexed-containing regimens should not be administered to such patients. Conflicts of Interest. The Authors declare that they have no conflicts of interest regarding this study.
Optical laser trapping for studying the deformability of sickle red blood cells in response to hydroxyurea Background: Sickle cell disease (SCD) is prevalent in Basrah city and affects red blood cell (RBC) deformability, thereby causing disease symptoms. Hydroxyurea (HU) is effective in reducing morbidity and mortality in SCD patients by different mechanisms. Objectives: The aim of the study was to investigate the effect of HU on RBC deformability among SCD patients by the direct laser optical trapping (OT) technique. Materials and Methods: Blood samples from SCD patients and control groups were prepared in the medical laboratory of the Basrah Center for Hereditary Blood Diseases and transferred to the physics laboratory, where the laser system was presented and built. RBCs from each sample were exposed to three different laser powers (5, 15, and 20 mW) for 15 s and were then released and followed for 2 min. Images of each trapped RBC were obtained during trapping and at sequential relaxation times. The percentage changes in the diameters of trapped RBCs were measured for the control and patient groups. Results: SCD patients were divided into two groups depending on whether they were receiving HU (39 patients) or not (43 patients). They were matched with 50 healthy individuals (control) regarding age and gender. We found that all the trapped RBCs were affected during the trapping time and then returned toward near normal, with some differences between the groups and according to the power used. The deformability of the HU group was better and closer to that of the control. Conclusion: The presented laser system and OT technique with optimal power are effective for studying RBC characteristics and deformability. HU is effective in improving RBC deformability among SCD patients. Introduction The rheologic and hemodynamic properties of blood are largely determined by red blood cells (RBCs), the main cellular component of the blood. Normal healthy RBCs have a biconcave disc shape, approximately 7.5-8.7 µm in diameter and 1.7-2.2 µm in thickness, when not subjected to external stress. Such healthy RBCs have a flexible membrane that facilitates reversible elastic deformability during RBC passage through the microcirculation. Some pathologic conditions affect RBC deformability and thereby alter the circulation, causing disease manifestations ranging from benign symptoms to lethal complications, as happens in sickle cell disease (SCD). [1] Hence, the mechanically fragile, less deformable RBCs in SCD cause hemolysis and impaired blood flow velocity, contributing to other pathophysiologic aspects of the disease. [2][3][4] Impaired blood flow plays a key role in the acute and chronic complications of SCD. [5] SCD is an inherited blood disorder with the highest prevalence throughout areas of Africa, India, the Mediterranean countries, and the Middle East. [6] Basrah city, southern Iraq, is among the regions of the world affected by SCD. [7] Hemoglobin S (HbS) is more frequent in Abu-Khasib district and less frequent in Al-Shatt Al-Arab district than in the other districts of Basrah. [8] In Basrah, some of our patients still experience poor quality of life despite the achievements in management of the disease and the use of hydroxyurea (HU), which is the only approved and best currently available drug used in SCD.
[9] HU is effective in reducing morbidity and mortality [10,11] owing to its well-known therapeutic benefits related to increased production of fetal hemoglobin (HbF), improved hydration and increased mean corpuscular volume of sickle RBCs, as well as decreased adhesiveness of sickle RBCs to the internal wall of the vessels. [12,13] On the other hand, measurement of the optical characteristics and deformability properties of RBCs has also been studied to understand RBC-related diseases. The technique is based on the optical forces of a laser beam, used for optical trapping (OT), which was introduced in the early 1970s. Since then, advances in laser technology and progress in OT have made it possible to investigate the elasticity and viscoelasticity of single cells or microorganelles. [14,15] In Basrah, we have studied the clinical and hematologic effects of HU, [16] but as yet there has been no research on RBC deformability. We hypothesized that studying RBC deformability may help to understand more aspects of the effect of HU on RBCs. Therefore, we presented and built a single-laser OT technique to study the therapeutic effect of HU on the deformability of single living RBCs obtained from blood samples of SCD patients and healthy control participants in a comparative study. Subjects and sample preparation. The study was conducted between November 2016 and August 2017 in two institutes: the Basrah Center for Hereditary Blood Diseases and the Department of Physics, College of Science. The study was approved by the ethical committee of the College of Science, University of Basrah. Written informed consent was obtained from each patient prior to the study. Adult patients with SCD who had been taking HU regularly for 3 months were included in this study for comparison with SCD patients without HU. We excluded patients with interrupted use of HU and those with severe painful crises. Blood samples from the patients and healthy control individuals were collected and prepared in the medical laboratory and then transferred within 3 h to the Department of Physics' laboratory. For sample preparation, blood was taken from patients and healthy controls and collected in an ethylenediaminetetraacetic acid tube (4.5 mg/3 ml). Then, 30 µl of blood was washed three times with normal saline and centrifuged at 4000 rpm for a minute. The washed RBCs were then resuspended in 1 ml of normal saline. After that, two drops of bovine serum albumin were added to prevent RBC adhesion to the glass plates. For measurement, the blood samples diluted in 1 ml of normal saline were sandwiched between a microscope slide and a coverslip. Finally, the coverslip was sealed at three edges with colorless nail polish or dibutylphthalate polystyrene xylene (DPX). Methods and experimental technique. Principle of optical trapping operation. The basic theory for trapping a particle requires the size or diameter of the particle (on the order of microns) to be much larger than the laser wavelength (λ). [17,18] In principle, when a laser beam interacts with a particle, the particle becomes trapped owing to absorption, reflection, and refraction or scattering of the light; the light thereby undergoes a change of linear momentum that generates an optical force on the interacting particle. This force attracts the particle to the region of highest intensity of the laser beam. The trapped particle then experiences two types of forces, as illustrated in Figure 1a.
These two forces are called the scattering force (F_scat) and the gradient force (F_grad), directed along the propagation of the laser beam and along the spatial gradient of the beam intensity, respectively. The scattering force is caused by reflection of the light away from the laser beam waist. The other force is caused by the light intensity gradient, attracting or trapping the particle at the narrowest region of the laser beam waist. [18,19] Thus, when a Gaussian laser beam in the TEM00 mode is highly focused by an objective lens (100XOL) to a single spot in a near-particle (in our case RBC) plane, the spot creates an "optical trap": a particle or RBC with a refractive index higher than that of the ambient medium is pushed by the gradient force toward the focus of the laser beam, where the intensity is highest, as shown in Figure 1b.

Experimental setup. The technique is illustrated in Figure 2. A diode-pumped solid-state laser with a maximum power of 300 mW, operating in a Gaussian TEM00 mode at a wavelength of 532 nm, is used for trapping. The output power of the laser is attenuated by a neutral density filter and the beam is passed through a dichroic mirror (DM). Then, the beam is passed through an objective lens (100XOL) and illuminates the prepared RBC sample. The DM was fixed at an angle of 45° and used to guide the laser beam through the objective and to pass the light reflected from the sample. The reflected laser beam was aligned and passed into the back of the objective lens. The sample was placed on an adjustable three-dimensional translation stage to accommodate the microscope slide. A halogen lamp was used to illuminate the prepared RBC sample through an objective lens (10XOL).

All the samples from the controls and patients were then studied using increasing powers of the laser beam during trapping, starting from 5, through 15, to 20 mW. The duration of trapping was 15 s, followed by relaxation periods of up to 5 min. The image of the trapped cell was recorded by a charge-coupled device camera and monitored by a personal computer. Images of each trapped RBC were taken during trapping and during the relaxation periods, to compare with the images at free time.

Data analysis. After laser trapping of RBCs, a procedure of image analysis was applied to the images of the optically stretched RBC and of the free cell. The RBC optical deformation can be described in terms of elongation along the optical axis of the laser beam (the maximum diameter) and contraction normal to the direction of the beam (the minimum diameter). ImageJ software (Wooster, Ohio, USA) was used to analyze and measure the sizes and the deformability of each cell type. Using these diameters, the elastic deformability of RBCs can be analyzed through the relative changes in the maximum and minimum diameters at a given incident power of the laser beam as [20] % Change in Diameter = (D_trapped − D_free) / D_free × 100.

Results. General characteristics of the healthy controls and patients are shown in Table 1. The enrolled 82 patients were divided into two groups: the first group (G1) included 43 SCD patients without HU and the second group (G2) included 39 SCD patients treated with HU. Most of the patients (62) had sickle thalassemia. The age range was 14-61 years, with a mean age of 24.8 years for G1 and 28.4 years for G2. They were matched regarding age and gender to a control group of 50 healthy individuals (P = 0.212).
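As a worked example of the deformability index just defined, the percentage change can be computed directly from the measured diameters; the helper function and the numeric values below are illustrative assumptions, not measurements from this study.

```python
# Percentage change in RBC diameter relative to the free (untrapped) cell.
# Diameters would normally come from ImageJ measurements; the values here
# are made-up examples.
def percent_change(d_trapped_um: float, d_free_um: float) -> float:
    return (d_trapped_um - d_free_um) / d_free_um * 100.0

d_free, d_trapped_max, d_trapped_min = 8.0, 9.2, 7.1  # diameters in um

print(percent_change(d_trapped_max, d_free))  # +15.0%  -> elongation
print(percent_change(d_trapped_min, d_free))  # -11.25% -> contraction
```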
The RBCs from the control and both patient groups showed changes in size and shape in response to direct laser trapping, elongating (increased diameter) during trapping and then, after release, decreasing their diameter back toward the near-original size and shape, but to a different extent according to the power of the laser beam and the type of sample, whether from the control or from the patient groups. As an example of these changes, Figure 3 shows the differences in size and shape of RBCs from the control, G1, and G2 at free time and after trapping at a power of 15 mW. The deformability of RBCs was analyzed by calculating their maximum and minimum diameters, and the relative changes in these diameters as a function of trapping power were studied using the above-mentioned equation. We calculated the average values for all trapped RBCs from the studied samples at each specific power setting. The results of the changes in diameters are illustrated in Figure 4 (a and b for maximum and minimum diameters, respectively) for the RBCs of the control, G1, and G2. It is clear that the percentage changes in the diameters were dependent on the trapping power of the laser beam, with a reduction in maximum and an increase in minimum diameters at a power of 15 mW as compared to a power of 5 mW. At a power of 20 mW, the results were disproportionate for both maximum and minimum diameter measurements. For maximum diameters [Figure 4a], a higher value of the relative change indicates more deformability; this was highest for the control group, followed by G2, with the lowest value for G1. The opposite applied to the relative changes of minimum diameters. These results indicate better deformability of RBCs from G2 than from G1, but less than that of the control group. After release, we followed the RBCs to study their relaxation rates by measuring the percentage changes in maximum diameters at 15-s intervals for a period of 120 s. The results are given in Figure 5, where the relative average changes in the maximum diameter are plotted as a function of time for the three input powers (a, b, and c for 5, 15, and 20 mW, respectively). From this figure, it is clear that the sizes and shapes of the trapped RBCs returned toward (but not exactly to) those of normal free cells. The pattern of return was almost the same for the control and patient groups at powers of 5 and 15 mW, with G2 closer than G1 to the control. However, the rate of relaxation of G2 was not the same as that of the control group, and the difference increased as the laser power increased. At a power of 20 mW, the curves of changes for G1 and G2 were almost the same and obviously differed from that of the control group. We noticed that, at the end of 120 seconds, the trapped cells from G1, G2, and the control were not fully relaxed, i.e., the relative changes did not approach a zero value. Discussion SCD is highly prevalent in Basrah as compared to other parts of Iraq. [8] HU is largely used among our patients, but some of them still experience painful attacks despite the improvement in HbF and other RBC indices. Therefore, the current work was conducted to evaluate the effect of HU on the deformability of sickle cells using an OT technique that was presented and built locally. The enrolled adult patients had ages ranging between 15 and 61 years, most of them having sickle thalassemia syndrome, and they were age and gender matched with healthy control individuals.
The cells from all the samples were measured at free time, and we found that the average diameter was, interestingly, higher for the patients than for the controls. All the trapped cells were elongated, but the relative changes in the maximum diameters differed according to the power setting of the laser and the type of sample, whether from the control or the patient groups. Higher values of the relative change of the maximum diameter indicate greater deformability of the RBCs. The obtained results indicated that RBCs of the control group were the most deformable, followed by the HU group, with the lowest value for sickle patients without HU. However, the changes were disproportionate at high power (20 mW), probably due to a photodamage effect that disturbed the cell membrane properties, leading to protein denaturation. [21] The photodamage effect was different for the control and the patient groups. Hence, in the current work, we could identify an optimum power of 15 mW for the wavelength used, based on the relative changes in the diameters of the RBCs. When we followed RBCs after release from the optical trap, photos were captured for up to 5 min during the relaxation time. However, the percentage changes in maximum diameters were measured at 15-s intervals for a period of only 120 s, because of frequent artifacts that occurred in the samples after 2 min. We found that the trapped RBCs tended to return to near their original size and shape during the relaxation time, and the pattern of return was almost the same for the control and patient groups at powers of 5 and 15 mW. The better response of the HU group could be attributed to the effect of HU in promoting production of HbF, which actively inhibits polymerization of HbS, improving the morphologic and physiologic properties of RBCs. [22] This is in accord with other studies, [23][24][25][26] although they used different experimental setups for measuring RBC deformability. On the other hand, the response of the HU group was not the same as that of healthy control RBCs, probably because some sickle RBCs may have irreversible changes altering membrane viscosity and deformability, [3,27,28] despite the laboratory response to HU in those patients; this raises the necessity of investigating mechanisms other than polymerization of HbS that cause membrane stiffening and reduced RBC deformability. [29] At a power of 20 mW, the optical deformability for the healthy control increased compared to both patient groups; this might be due to cell photodamage affecting sickle RBCs more, regardless of treatment with HU. We also noticed that at the end of 120 seconds the trapped cells were not fully relaxed and had not returned to their exact size and shape, i.e., the relative changes did not approach a zero value even for the control sample, as expected and found in other studies, [20] because we used a different sample preparation, a different system setup, and different power levels in our experiment. Conclusion and Recommendations The results confirmed that the presented and built single-laser OT technique was effective and able to study the therapeutic effect of HU on the deformability of a single living RBC among patients with SCD. Although still worse than the healthy control, it was clear that the patients treated with HU exhibited better RBC deformability than those not receiving HU.
Robot Cognitive Control with a Neurophysiologically Inspired Reinforcement Learning Model A major challenge in modern robotics is to liberate robots from controlled industrial settings, and allow them to interact with humans and changing environments in the real-world. The current research attempts to determine if a neurophysiologically motivated model of cortical function in the primate can help to address this challenge. Primates are endowed with cognitive systems that allow them to maximize the feedback from their environment by learning the values of actions in diverse situations and by adjusting their behavioral parameters (i.e., cognitive control) to accommodate unexpected events. In such contexts uncertainty can arise from at least two distinct sources – expected uncertainty resulting from noise during sensory-motor interaction in a known context, and unexpected uncertainty resulting from the changing probabilistic structure of the environment. However, it is not clear how neurophysiological mechanisms of reinforcement learning and cognitive control integrate in the brain to produce efficient behavior. Based on primate neuroanatomy and neurophysiology, we propose a novel computational model for the interaction between lateral prefrontal and anterior cingulate cortex reconciling previous models dedicated to these two functions. We deployed the model in two robots and demonstrate that, based on adaptive regulation of a meta-parameter β that controls the exploration rate, the model can robustly deal with the two kinds of uncertainties in the real-world. In addition the model could reproduce monkey behavioral performance and neurophysiological data in two problem-solving tasks. A last experiment extends this to human–robot interaction with the iCub humanoid, and novel sources of uncertainty corresponding to “cheating” by the human. The combined results provide concrete evidence for the ability of neurophysiologically inspired cognitive systems to control advanced robots in the real-world. INTRODUCTION In controlled environments (e.g., industrial applications), robots can achieve performance superior in speed and precision to humans. When faced with limited uncertainty that can be characterized a priori, we can provide robots with computational techniques such as finite state machines that can address such expected uncertainty. But in the real-world, robots face unexpected uncertainty -such as new constraints or new objects in a taskand need to be robust to variability in the world. Exploiting knowledge of primate neuroscience can help in the design of cognitive systems enabling robots to adapt to varying task conditions and to have satisfying, if not optimal, performance, in a variety of different situations (Pfeifer et al., 2007;Arbib et al., 2008;Meyer and Guillot, 2008). We have previously characterized the functional neurophysiology of the prefrontal cortex as playing a central role in the organization of complex cognitive behavior (Amiez et al., 2006;Procyk and Goldman-Rakic, 2006;Quilodran et al., 2008). The goal of the current research is to test the hypothesis that indeed, a model based on this architecture can be used to control complex robots that rely on potentially noisy perceptual-motor systems. 
Recent advances in the neurophysiological mechanisms of decision-making have highlighted the role of the prefrontal cortex, particularly the anterior cingulate cortex (ACC) and dorsolateral prefrontal cortex (LPFC), in flexible behavioral adaptation by learning action values based on rewards obtained from the environment, and adjusting behavioral parameters to varying uncertainties in the current task or context (Miller and Cohen, 2001;Koechlin and Summerfield, 2007;Rushworth and Behrens, 2008; see Khamassi et al., in press for a review). Both the ACC and LPFC appear to play crucial roles in these processes. They both receive inputs from dopamine neurons which are known to encode a reward prediction error coherent with reinforcement learning (RL) principles (Schultz et al., 1997). The LPFC is involved in action selection and planning. The ACC is known to monitor feedback as well as the task and is considered to modulate or "energize" the LPFC based on the motivational state (Kouneiher et al., 2009). However, there is a contradiction between current models of the ACC-LPFC system, which are either dedicated to reward-based RL functions (Holroyd and Coles, 2002;Matsumoto et al., 2007) or are focused on the regulation of behavioral parameters by means of conflict monitoring and cognitive control (Botvinick et al., 2001;Cohen et al., 2004). Here we propose a novel computational model reconciling these two types of processes, and show that it can reproduce monkey behavior in dealing with uncertainty in a variety of behavioral tasks. The system relies on RL principles allowing an agent to adapt its behavioral policy by trial-and-error so as to maximize reward (Sutton and Barto, 1998). Based on previous neurophysiological data, we make the assumption that action values are learned and stored in the ACC through dopaminergic input (Holroyd and Coles, 2002;Amiez et al., 2005;Matsumoto et al., 2007;Rushworth et al., 2007). These values are transmitted to the LPFC which selects the action to perform. In addition, the model keeps track of the agent's performance and the variability of the environment to adjust behavioral parameters. Thus the ACC component monitors feedback (Holroyd and Coles, 2002;Brown and Braver, 2005;Sallet et al., 2007;Quilodran et al., 2008) and encodes the outcome history (Seo and Lee, 2007). The adjustment of behavioral parameters based on such outcome history follows meta-learning principles (Doya, 2002) and is here restricted to the tuning of the β meta-parameter which regulates the exploration rate of the agent. Following previous machine learning models, the exploration rate β is adjusted based on variations of the average reward (Auer et al., 2002;Schweighofer and Doya, 2003) and on the occurrence of uncertain events (Yu and Dayan, 2005;Daw et al., 2006). The resulting meta-parameter modulates action selection within the LPFC, consistent with its involvement in the exploration-exploitation trade-off (Daw et al., 2006;McClure et al., 2006;Cohen et al., 2007;Frank et al., 2009).
The model was tested on two robot platforms to: (1) show its ability to robustly perform and adapt under different conditions of uncertainty in the real-world during various neurophysiologically tested problem-solving (PS) tasks combining reward-based learning and alternation between exploration and exploitation periods (Amiez et al., 2006;Quilodran et al., 2008); (2) reproduce monkey behavioral performance by comparing the robot's behavior with previously published and new monkey behavioral data; (3) reproduce global properties of previously shown neurophysiological activities during these tasks. The PS tasks used here involve a set of problems where the robot should select one of a set of targets on a touch screen. Each problem is decomposed into search (exploration) trials where the robot identifies the rewarded target, and exploitation trials where the robot then repeats its choice of the "best" target. We will see that the robot solved the task with performance similar to that of monkeys. It properly adapted to perceptual uncertainties and alternated between exploration and exploitation. We then generalized the model to a human-robot interaction scenario where unexpected uncertainties are introduced by the human introducing cued task changes or by cheating. By correctly performing and autonomously learning to reset exploration in response to such uncertain cues and events, we demonstrate that neurophysiologically inspired cognitive systems can control advanced robotic systems in the real-world. In addition, the model's learning mechanisms that were challenged in the last scenario provide testable predictions on the way monkeys may learn the structure of the task during the pre-training phase of Experiments 1 and 2. GLOBAL ROBOTICS SETUP In each experiment presented in this paper, we consider a humanoid agent -a physical robot or a simulation -which interacts with the environment through visual perception and motor commands. The agent perceives objects or geometrical features (i.e., cubes on a table or targets on a screen) via a camera-based vision system described below. The agent is required to choose one of the objects with the objective of obtaining a reward. The reward is a specific visual signal (i.e., a triangle presented on a screen) supposed to represent the juice reward obtained by monkeys during these experiments. For simplicity, perception of the reward signal is hardcoded to trigger an internal scalar reward signal in the computational model controlling the robot. Thus all external inputs are provided to the robot through vision. Experiments 1 and 2 are inspired by our previous monkey neurophysiology experiments (Amiez et al., 2006;Quilodran et al., 2008). They involve interaction with a touch-sensitive screen (IIyama Vision Master Pro 500) where different square targets appear. The agent should search for and find the target with the highest reward value by touching it on the screen (Figure 1). Experiment 3 extends monkey experiments to a simple scenario of human-robot interaction that involves a set of cubes on a table. A human is sitting near the table, in front of the robot, and shuffles the cubes. The robot has to find the cube with a circle on its hidden face, corresponding to the reward. GLOBAL STRUCTURE OF THE EXPERIMENTS The three experiments have the same temporal structure. Here we describe the details of this structure, and then provide the specifics for each experiment. 
All experiments are composed of a set of problems where the agent should search by trial-and-error in order to find the most rewarding object among a proposed ensemble. Each problem is decomposed into search (exploration) trials where the agent explores different alternatives until finding the best object, and repetition (exploitation) trials where the agent is required to repeat the choice of the best object several times (Figure 2). After the repetition, a problem-changing cue (PCC) signal is shown to the agent to indicate that a new problem will start. In 90% of the new problems the identity of the best object is changed. In Experiments 1 and 2, the PCC signal is known a priori. Experiment 3 tests the flexibility of the system, as the PCC is learned by the agent. Experiment 1 is deterministic (only one object is rewarded while the others are not). Experiment 2 is probabilistic (each object has a certain probability of association with reward) and thus tests the ability of the system to accommodate such probabilistic conditions.

EXPERIMENT 1 The first experiment is inspired by our previous neurophysiological research described in (Quilodran et al., 2008). Four square targets are presented on the touch screen (see Figure 2).

FIGURE 1 | Lynxmotion SES robotic arm in front of a touch screen used for Experiment 1. The screen is perceived by a webcam. The arm has a gripper with a sponge surrounded by aluminum connected to the ground. This produces a static current when contacting the screen and enables the screen to detect when and where the robot touches it. This setup allows us to test the robot in the same experimental conditions as the non-human primate subjects in our previous studies (Amiez et al., 2006;Quilodran et al., 2008).

In each problem, a single target is associated with reward with a probability of one (deterministic). At each trial, the four targets appear on the screen and remain visible during a 5-s delay. The robotic arm should touch one of the targets before the end of the delay. Once a touch is detected on the screen, the targets disappear and the choice is evaluated. If the correct target is chosen, a triangle appears on the screen, symbolizing the juice reward monkeys obtain. For incorrect choices, the screen remains black for another 5-s delay and then a new search trial starts. Once the correct target is chosen through a process of trial-and-error search, a repetition phase follows, lasting until the robot performs three correct responses, no matter how many errors it made. At the end of the repetition phase, a circle appears on the screen, indicating the end of the current problem and the start of a new one. Similarly to the monkey experiments, in about 90% of cases the correct target is different between two consecutive problems, requiring a behavioral shift and a new exploration phase. EXPERIMENT 2 Experiment 1 tests whether the model can be used under deterministic conditions, but leaves open the question as to whether it can successfully perform under a probabilistic reward distribution. Experiment 2 allows us to test the functioning of the model in such probabilistic conditions, directly inspired by our neurophysiological research described in (Amiez et al., 2006). In contrast with Experiment 1, the agent can choose only between two targets. In each problem, one target has a high probability (0.7) of producing a large reward and a low probability (0.3) of producing a small one.
The other target has the opposite distribution (Table 1). Problems in this task are also decomposed into search and repetition trials. However, in contrast to Experiment 1, there is no sharp change between the search and repetition phases. Instead, trials are a posteriori categorized as repetition trials, as follows. Each problem continues until the agent makes five consecutive choices of the best target, followed by selection of the same target for the next five trials or five of the next six trials. However, if after 50 trials the agent has not entered the repetition phase, the current problem is aborted and considered unsuccessful. Similarly to Experiment 1, the end of each problem is cued by a PCC indicating a 90% probability of change in the reward distribution among targets. EXPERIMENT 3 The third experiment constitutes an extension of Experiment 1 to a simple human-robot interaction scenario. The experiment is performed with the iCub, a humanoid robot developed as part of the RobotCub project (Tsagarakis et al., 2007). The task performed by the iCub robot is illustrated in Figure 3 and its temporal structure is described in Figure 4. In this task, four cubes are lying on a table. One of the cubes has a circle on its hidden face, indicating a reward. The human can periodically hide the cubes with a wooden board (Figure 4D) and change the position of the rewarding cube. This mimics the PCC used in the previous experiments. The difference here is that the model has to autonomously learn that presentation of the wooden board is always followed by a change in condition, and should thus be associated with a shift in target choice and a new exploration phase. MONKEY BEHAVIORAL VALIDATION To validate the ability of the neurocomputational model to control the robot, we compared the robot's behavioral performance with monkey data previously published as well as original monkey behavioral data. Average behavioral performances of Monkeys 1 and 2 performing Experiment 2 were taken from (Amiez et al., 2006). Trial-by-trial data of monkey M performing Experiment 1 were taken from (Quilodran et al., 2008). In addition, we analyzed unpublished data from three other monkeys (G, R, S) performing Experiment 1 in our laboratory. NEURAL-NETWORK MODEL DESCRIPTION Action selection is performed with a neural-network model whose architecture is inspired by anatomical connections in the prefrontal cortex and basal ganglia in monkeys (Figure 5).

FIGURE 5 | Neural-network model. Visual input (targets seen on the screen or cubes on the table) is sent to the posterior parietal cortex (PPC). The anterior cingulate cortex (ACC) stores and updates the action value associated with choosing each possible object. When a reward is received, a reinforcement learning signal is computed in the ventral tegmental area (VTA) and is used both to update action values and to compute an outcome history (COR, correct neuron; ERR, error neuron) used to modulate the exploration level β* in ACC. Action values are sent to the lateral prefrontal cortex (LPFC), which performs action selection. A winner-take-all ensures a single action to be executed at each moment. This is performed in the cortico-basal ganglia loop consisting of the striatum, substantia nigra reticulata (SNr), and thalamus (Thal), up to the premotor cortex (PMC). Finally, the output of the PMC is used to command the robot and as an efferent copy of the chosen action sent to ACC.

The model was programmed using the Neural Simulation Language (NSL) software (Weitzenfeld et al., 2002). Each module in our model contains a 3 × 3 array of leaky integrator neurons whose activity topographically encodes different locations in the visual space (i.e., nine different locations on the touch screen for Experiments 1 and 2, or on the table for Experiment 3). At each time step, a neuron's membrane potential mp depended on its previous history and input s: τ d(mp)/dt = −mp + s, where τ is a time constant. The average firing rate output of the neuron is then generated based on a non-linear (sigmoid) function of the membrane potential. We used Δt = 100 ms, which means that we simulated 10 iterations of the model per second of real time.
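A minimal numerical sketch of the leaky integrator dynamics above, Euler-integrated with the model's 100-ms step, is given below; the time constant and input values are illustrative assumptions.

```python
import math

# Leaky integrator neuron: tau * d(mp)/dt = -mp + s, integrated with the
# model's step of 100 ms (tau and the input value are illustrative).
def step_mp(mp: float, s: float, tau: float = 0.5, dt: float = 0.1) -> float:
    return mp + (dt / tau) * (-mp + s)

def firing_rate(mp: float) -> float:
    """Non-linear (sigmoid) output function of the membrane potential."""
    return 1.0 / (1.0 + math.exp(-mp))

mp = 0.0
for _ in range(10):          # 10 iterations = 1 s of simulated real time
    mp = step_mp(mp, s=1.0)  # constant input while a target is visible
print(mp, firing_rate(mp))
```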
A parameter table is provided in the appendix, summarizing the number of neurons and parameters in each module of the model. Here we describe the role of each of these modules. VISUAL PROCESSING Visual information perceived by the camera is processed by commercial object recognition software (SpikeNet; Delorme et al., 1999). Prior to each experiment, SpikeNet was trained to recognize a maximum of four different geometrical shapes (square, triangle, circle in Experiments 1 and 2; cube, wooden board, hands, circle in Experiment 3). During the task, perception of a particular shape at a particular location activates the corresponding neuron in the 4 × 3 × 3 input matrix in the visual system of the model. A time persistence in the visual system enables the perception of an object to vanish progressively rather than disappear instantaneously. This is necessary for robotic tests of the model, during which spurious discontinuities in the perception of an object should not influence the model's behavior. CORTICAL MODULES In order to decide which target to touch or cube to choose, the model relies on the estimation of action values based on a Temporal-Difference learning algorithm (Sutton and Barto, 1998). In our model, this takes place in ACC, based on three principal neurophysiological findings. First, anatomical projections of the dopaminergic system have been demonstrated to be stronger to ACC than to LPFC (Fluxe et al., 1974). Second, ACC responses to reward prediction errors have been observed (Holroyd and Coles, 2002;Amiez et al., 2005;Matsumoto et al., 2007). Third, a role of ACC in action value encoding has been observed (Kennerley et al., 2006;Lee et al., 2007;Rushworth et al., 2007). For Experiments 1 and 2, these action values are initialized at the beginning of each new problem, after presentation of the PCC signal. This is based on the observation that, after extensive pretraining, monkeys show a choice shift after more than 80% of PCC presentations (mean for Monkey G: 95%; M: 97%; R: 61%; S: 77%). In Experiment 3, the model autonomously learns to reinitialize action values (Experiment 3 Results, below). Anterior cingulate cortex action value neurons project to LPFC, and to dopamine neurons in the ventral tegmental area (VTA) module, which compute an action-dependent reward prediction error: δ = r − Q(a_i), where a_i, i ∈ {1..4}, is the performed action, Q(a_i) is its current action value, and r is the reward, set to 1 when the corresponding cue is perceived. In the neuroscience literature of decision-making, subjects' behavior can be well captured by RL models computing a reward prediction error once every trial, at the feedback time, even in the case where no reward is obtained (Daw et al., 2006;Behrens et al., 2007;Seo and Lee, 2007).
Here, we wanted to avoid such ad hoc informing of the model about when the absence of reward should be considered as feedback. Thus, dopamine neurons of the model produce a reward prediction error signal in response to any salient event (appearance or disappearance of a visual cue). In addition to being more parsimonious with respect to the robotic implementation of the model, this is consistent with more general theories of dopamine neurons arguing that they respond to any task-relevant stimulus to prevent sensory habituation (Horvitz, 2000;Redgrave and Gurney, 2006). This reinforcement signal is sent to ACC and affects synaptic plasticity of an action value neuron only when it co-occurs with a motor efference copy sent by the premotor cortex (PMC). The reinforcement signal δ thus updates the synaptic weights associated with the corresponding action value neuron: Q(a_i) ← Q(a_i) + α·δ·trace(a_i), where trace is the efferent copy sent by the PMC to reinforce only the performed action, and α is a learning rate. While ACC is considered important for learning action values, the decision on which action to make based on these values is known to involve the LPFC. Thus, in the model, action values are sent to LPFC, which makes a decision on the action to trigger (Figure 5). This decision relies on a Boltzmann softmax function, which controls the greediness versus the degree of exploration of the system: P(a_i) = exp(β·Q(a_i)) / Σ_j exp(β·Q(a_j)), where β regulates the exploration rate (0 < β). A small β leads to almost equal probabilities for each action and thus to exploratory behavior. A high β increases the difference between the highest action probability and the others, and thus produces exploitative behavior. As shown in Figure 5, such action selection results in more contrast between action neurons' activities in LPFC than in ACC during repetition phases, where β is high, thus promoting exploitation. As we wanted to adhere to the mathematical formulation employed for model-based analysis of prefrontal cortical data recorded during decision-making (Daw et al., 2006;Behrens et al., 2007;Seo and Lee, 2007), the activity of leaky integrator neurons in our LPFC modules is algorithmically filtered at each time step by Eq. 4. We invite the reader to refer to (McClure et al., 2006;Krichmar, 2008) for a neural implementation of this precise mechanism of decision-making under the exploration-exploitation trade-off. BASAL GANGLIA LOOP In order to prevent the robot from executing two actions at the same time when LPFC activity related to non-selected actions remains non-null, we finally implemented a winner-take-all mechanism in the basal ganglia. It has been proposed that the basal ganglia are involved in clean action selection so as to permit a winner-take-all mechanism (Humphries et al., 2006;Girard et al., 2008). Here we simplified our previous basal ganglia loop models (Dominey et al., 1995;Khamassi et al., 2006) to a simple relay of inhibition which permits the neurophysiologically grounded disinhibition of a single selected action in the thalamus at a given moment (Figure 5). COGNITIVE CONTROL MECHANISMS In addition to RL mechanisms, we provide the system with cognitive control mechanisms which enable it to flexibly adjust behavioral parameters during learning. Here this is restricted to the dynamical regulation of the exploration rate β used in Eq. 4, based on the outcome history, following meta-learning principles (Schweighofer and Doya, 2003).
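Before detailing that regulation, the value-update and softmax-selection steps described above can be summarized in a short sketch; the δ, Q-update, and softmax expressions follow the text as reconstructed here, while the learning-rate value and the rewarded target index are illustrative assumptions.

```python
import math
import random

ALPHA = 0.9                     # learning rate (illustrative value)
Q = [0.25, 0.25, 0.25, 0.25]    # ACC action values for the four targets

def softmax_choice(values, beta):
    """Boltzmann softmax over action values; beta sets the exploration rate."""
    weights = [math.exp(beta * q) for q in values]
    total = sum(weights)
    probs = [w / total for w in weights]
    return random.choices(range(len(values)), weights=probs)[0]

def update(values, action, reward):
    """VTA-like prediction error, gated by the PMC efference copy (trace)."""
    delta = reward - values[action]   # delta = r - Q(a_i)
    values[action] += ALPHA * delta   # Q(a_i) <- Q(a_i) + alpha * delta
    return delta

a = softmax_choice(Q, beta=1.0)
delta = update(Q, a, reward=1.0 if a == 2 else 0.0)  # assume target 2 rewarded
```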
A substantial number of studies have shown ACC neural responses to errors (Holroyd and Coles, 2002) as well as to positive feedback, a process interpreted as feedback categorization (Quilodran et al., 2008). In addition, neurons have been found in the ACC with activity reflecting the outcome history (Seo and Lee, 2007). Thus, in our model, in addition to the projection of dopaminergic neurons to ACC action values, dopamine signals also influence a set of ACC feedback categorization neurons (Figure 5): error (ERR) neurons respond only when there is a negative δ signal; correct (COR) neurons respond only when there is a positive δ signal. COR and ERR signals are then used to update a variable encoding the outcome history (β*): β*(t+1) = β*(t) + α⁺·COR(t) + α⁻·ERR(t), where α⁺ = −2.5 and α⁻ = 0.25 are updating rates, with β* bounded (0 < β* < 1). Such a mechanism was inspired by the concept of vigilance employed by Dehaene and Changeux (1998) to modulate the activity of workspace neurons, whose role is to determine the degree of effort in decision-making. As for the vigilance, which is increased after errors and decreased after correct trials, the asymmetrical learning rates (α⁺ and α⁻) enable sharper changes in response to either positive or negative feedback depending on the task. β* is then transferred to LPFC, where it regulates the exploration rate β. In short, β* is algorithmically filtered by a sigmoid function which reverses its sign and constrains it to a range between 0 and 10: β = ω1 / (1 + exp(−(ω2·β* + ω3))), where ω1 = 10, ω2 = −6 and ω3 = 1. This equation represents a sigmoid function that produces a low β when β* is high (exploration) and a high β when β* is low (exploitation). Finally, the ACC module also learns meta-values associated with different perceived objects, which represent how each of these objects is associated with variations of the average reward. This enables the robot to learn that, during Experiment 3, presentation of the wooden board is always followed by a drop in the average reward, and thus should be associated with a negative meta-value. This part of the model represents the learning process that takes place in monkeys during the pre-training phases preceding Experiments 1 and 2. During such pre-training, monkeys progressively learn that different problems are separated by a PCC signal. In the model, a reward average is computed, and the meta-values of objects that have been seen during the trial are updated based on variations in the reward average as computed at the end of the current trial: MV(obj) ← MV(obj) + η·(θ(t) − θ(t−1)), where η is a learning rate and θ(t) is the estimated reward average. When the meta-value associated with any object is below a certain threshold (empirically fixed to require approximately 10 presentations before learning; see the parameter table in the Appendix), presentation of this object to the robot automatically triggers a reset of the action values and of the β* variable: action values are reset to random values while β* is increased so that it produces a low β (corresponding to exploration). As a consequence, the robot will display exploratory behavior after such a reset. MOTOR COMMANDS Motor output from the model's PMC module is sent to the robotic devices via port communication with YARP (Metta et al., 2006). EXPERIMENT 1 We first performed a series of 11 sessions with the Lynxmotion SES 5DOF robotic arm (http://www.lynxmotion.com) on the problem-solving task described above. This corresponded to a total of 112 problems and 717 trials.
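The cognitive-control loop just described (β* updated by COR/ERR feedback, then mapped through a sigmoid to β) can be summarized in a short sketch; the parameter values are those quoted in the text, while the exact update and bounding details follow the reconstruction given above and are thus an assumption.

```python
import math

ALPHA_COR, ALPHA_ERR = -2.5, 0.25   # updating rates quoted in the text
W1, W2, W3 = 10.0, -6.0, 1.0        # sigmoid parameters mapping beta* to beta

def update_beta_star(beta_star: float, correct: bool) -> float:
    """Outcome history: errors raise beta*, correct feedback lowers it."""
    beta_star += ALPHA_COR if correct else ALPHA_ERR
    return min(max(beta_star, 0.0), 1.0)   # keep beta* within [0, 1]

def beta_from_beta_star(beta_star: float) -> float:
    """Sigmoid 'sign reversal': high beta* -> low beta (exploration)."""
    return W1 / (1.0 + math.exp(-(W2 * beta_star + W3)))

beta_star = 0.5
for correct in [False, False, True]:       # two errors, then a reward
    beta_star = update_beta_star(beta_star, correct)
    print(round(beta_from_beta_star(beta_star), 3))  # 0.293, 0.067, 7.311
```

Run this way, two errors drive β toward 0 (exploration) and a single reward drives it back toward 10 (exploitation), reproducing the qualitative behavior the text describes.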
Figure 6 shows a sample performance of the model on two consecutive problems - corresponding to 14 trials. Each trial lasted a few seconds and resulted in the selection of one of the four targets - corresponding to different colors on the third chart of Figure 6. At the beginning of a trial, the perception of the onset of the four targets on the screen produced an increase in the activity of ACC and LPFC neurons (first two charts of Figure 6). The neuron with the highest activity triggered the selection of the corresponding target by the robot. At the end of the trial, the offset of the targets with or without reward (depending on the correctness of the robot's choice) resulted in a drop of ACC and LPFC activity and the return of the robot's arm to its initial position (end of target choice on the third chart of Figure 6).

FIGURE 6 | Simulation of the model on two consecutive problems during Experiment 1. Each color represents a different target chosen by the robot. The black triangle above the "chosen target" chart indicates the presentation of the Problem-Changing Cue before the start of a new problem. The x-axis represents time. The first chart shows the activity of ACC action value neurons. The second chart shows LPFC action neurons. The fourth chart shows ACC feedback categorization neurons, indicating error (ERR) and correct (COR) trials, induced by dopaminergic reward prediction error signals. The last chart shows the evolution of the exploration rate β in the model. This simulation illustrates the correct execution of the task by the robot and shows the incremental variation of the exploration rate in response to positive and negative feedback.

During the first problem, the robot selected three successive targets (indicated by the green, blue, and brown blocks in Figure 6) corresponding to error trials until the correct target was chosen (the target illustrated as orange in Figure 6) and a reward was obtained (ACC COR neuron, Figure 6). The errors led to a progressive increase of β* along the search phase - producing more exploratory behavior - and a drop of β* after the first reward - promoting exploitation during repetition (fifth chart of Figure 6). Such activity may explain our finding that many ACC neurons respond more during the search phase than during the repetition phase (Procyk et al., 2000; Quilodran et al., 2008). In the model, we made the hypothesis that feedback categorization responses in the ACC would emerge from reward prediction error signals (Eq. 5; Holroyd and Coles, 2002). Interestingly, the high learning rate α suitable for the task produced a positive reward prediction error (and thus a COR response of ACC feedback categorization neurons) only at the first correct trial, and not at subsequent correct trials during repetition, where the reward prediction error in the model was null (Figure 6). This may explain why, in monkeys, ACC neurons responding to positive feedback in the same task mainly responded during the first correct trial and less to subsequent correct trials (Quilodran et al., 2008). Indeed, these neurons have been interpreted as responding to dopamine reward prediction error signals. Validating this interpretation, the explanation emerging from the model for the precise pattern of response of these neurons is that subsequent correct trials during repetition were correctly expected and thus did not produce a reward prediction error.
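The following toy computation, using the value update of Eq. 3 with an assumed reward of 1, an initial value of 0, and trace = 1 for the chosen action, shows why a high learning rate confines the positive prediction error to the first correct trial.

```python
alpha, q, r = 0.9, 0.0, 1.0        # high learning rate, as fitted for Experiment 1
for trial in range(4):
    delta = r - q                  # reward prediction error
    q += alpha * delta             # Eq. 3 with trace = 1 for the chosen action
    print(trial, round(delta, 3), round(q, 3))
# delta: 1.0, 0.1, 0.01, 0.001 -> a COR response essentially only on trial 0
```

After a single rewarded trial the value has already jumped to 0.9, so the prediction error on subsequent correct trials is near zero, matching the COR responses restricted to the first correct trial.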
In terms of behavior, the robot quickly adapted to the feedback obtained at each trial and rarely repeated choice errors. The second half of the session shown in Figure 6 illustrates a case where the robot adapted to uncertainty emerging from perceptual ambiguities. Around time step 3900, a new problem started, cued by the PCC, and the model thus reset its exploration rate and action values. The robot searched for the new correct target (the target illustrated as blue in Figure 6) and, once it was found, repeated the correct choice. However, due to visual ambiguity that could occasionally take place during such physical interaction with the environment, the robot interpreted the trial as incorrect. Specifically, in this case, while touching the correct target, the robot's arm hid the targets on the screen, and the system thus perceived the targets as vanishing long before reward occurrence. As a consequence, the model generated a negative reinforcement signal which reduced the action value associated with the correct target (time step 4300 in Figure 6). This led to the choice of a different target on the next trial, and finally a return to the correct choice, to properly finish the repetition phase. This demonstrates that perceptual noise inherent in robotic systems can be accommodated by this type of neurophysiologically inspired model. We next compared the robotic results with real monkey data collected in the same task and with tests of the same model in simulation, to assess robustness in real-world conditions and variations in performance due to embodiment. Monkey behavioral data were collected in four monkeys for a total of 7397 problems and 46188 trials. Figure 7A shows the average errors during search versus repetition phases. Similar to monkeys, the robot produced approximately 60% errors during the search phase, which is close to optimality (considering that in 90% of new problems the correct target was different from the previous problem, there was a 2/3 = 66.67% chance of choosing a wrong target). During the repetition phase, the robot made approximately 85% correct responses, which was similar to monkeys. In contrast, simulation of the same model made no errors during repetition, as task-related perception in the simulation was always perfect. Performance of the robot was also similar to that of monkeys when considering the average duration of search and repetition trials (Figure 7B). The search phase for the robot lasted 2.5 trials on average, which was not different from that of monkeys (Kruskal-Wallis test, p > 0.31). The repetition phase lasted less than four trials, again not different from monkeys (Kruskal-Wallis test, p > 0.78). The robot's behavior thus did not differ from that of the monkeys. In contrast, the simulation always took exactly three trials during repetition, which was the smallest possible duration and was statistically different from monkey performance during repetition (Kruskal-Wallis test, p = 1.6e-12). Thus, in addition to respecting known anatomy and reproducing neurophysiological properties observed in the monkey prefrontal cortex during the same task, the model could reproduce the global behavioral properties of monkeys when driving a robot.
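For reference, the robot-versus-monkey comparisons above can be reproduced with a standard Kruskal-Wallis test; the arrays below are placeholders with the reported sample sizes and mean search duration, not the recorded data.

```python
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(0)
robot_search  = rng.poisson(2.5, size=112)    # placeholder per-problem search durations
monkey_search = rng.poisson(2.5, size=7397)
stat, p = kruskal(robot_search, monkey_search)
print(round(stat, 2), p)                      # p > 0.05 -> no detectable difference
```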
EXPERIMENT 2
In order to test the ability of our neuro-inspired model to generalize over variations in task conditions, we next tested it in simulation on a stochastic version of the problem-solving task used in monkeys (Amiez et al., 2006). Reward was stochastically distributed over two possible targets, so obtaining the largest reward value was possible even when choosing the wrong target (see Table 1). Thus, a single correct trial was not sufficient to know which target had the highest value. As a consequence, we predicted that the same model with a smaller learning rate α (used in Eq. 3) would better explain the monkeys' behavior, as a reduced learning rate requires several successful trials before convergence. Consistent with our prediction, a naive test on the stochastic task with the parameters used in Experiment 1 and a fixed exploration rate β - that is, without the β* mechanism for exploration regulation (α = 0.9, β = 5.2) - elicited a mean number of search trials of 13.3 ± 12.3 with only 87% successful problems - problems during which the most rewarded target was found and correctly repeated ("Model no-β*" in Figures 8A,B). This represented poor performance compared to monkeys. In the original experiment, the two monkeys found the best target in 98% and 94.5% of the problems, and the search phase lasted on average 6.4 ± 5.6 and 5.6 ± 6.9 trials, respectively (Amiez et al., 2006). We then explored different values of the learning rate combined with a flexible adaptation of the exploration rate β regulated by the modulatory variable β*. This provided results closer to monkey performance. Roughly, the monkeys' performance could be best approximated with α between 0.3 and 0.6 (Figures 8C,D). This produced a mean number of search trials of 5.5 and 99% successful problems ("Model β*" in Figures 8A,B). Interestingly, monkey performance could be best approximated with a mean α around 0.5 during Experiment 2, while a higher mean α (0.9 on average) better explained monkey behavior during Experiment 1. This is consistent with theoretical propositions for efficiently regulating the learning rate α based on the volatility of the task (Rushworth and Behrens, 2008). Indeed, in Experiment 1 the correct target changed every seven trials on average (as illustrated in Figure 7), which was more volatile than Experiment 2, where changes of reward distribution occurred less frequently: every 16 trials (approximately six search trials, as illustrated in Figure 8, plus 10 repetition trials imposed by the task structure). Concerning the optimization of β, it is remarkable that the more exploitative the model, the better its performance (a low β induced an overly lengthy search phase because the model was too exploratory). Contrary to our initial hypothesis, this was in part due to the nature of Experiment 2, in which only two targets were available, decreasing the search space, so the best strategy was clearly exploitative. In accordance with this finding, β was systematically adjusted by β* to its highest possible value allowed here (around 10). The optimized model with a fixed exploration rate β reached nearly optimal behavior - in the sense of reward maximization. In contrast, the model with a dynamic exploration rate achieved good performance (although not as good) that was nevertheless closer to the monkeys' performance in this task. This suggests that such brain-inspired adaptive mechanisms are not optimal but might have been selected through evolution because they produce satisfactory performance in a variety of different conditions.
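The effect of the learning rate under stochastic feedback can be illustrated with the delta rule alone; the 70% reward probability below is a stand-in for the schedules in Table 1, not the actual values.

```python
import numpy as np

rng = np.random.default_rng(1)
rewards = (rng.random(30) < 0.7).astype(float)   # target rewarded on 70% of trials

for alpha in (0.9, 0.5):
    q = 0.0
    for r in rewards:
        q += alpha * (r - q)                     # delta-rule value update
    print(alpha, round(q, 2))
# a high alpha makes q chase the latest outcome; a lower alpha averages over
# several trials, which is what the stochastic task of Experiment 2 requires
```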
EXPERIMENT 3
The last experiment was implemented for two purposes:

• In the previous experiments, the model knew a priori that a particular signal called the PCC was associated with a change in the task condition, and thus with a shift of the rewarded target. Here we wanted the model to autonomously learn that some cues are always followed by errors and thus should be associated with an environmental change that requires a new exploration.
• We also wanted to test our neuro-inspired model on a humanoid robot performing a simple human-robot interaction scenario where the human can introduce unexpected uncertainty or cheat, showing the potential applications of the model to more complex situations.

During the course of eight experimental sessions, the robot performed a total of 151 problems and 901 trials. Figure 9 shows a sequence of 14 problems performed by the model on the iCub robot during Experiment 3. Similar to Experiment 1, the robot searched for the correct cube and repeated its choice once that cube had been determined. Also similarly to Experiment 1, we used a "PCC", which was here a wooden board used to hide the cubes while the human changed the position of the rewarded one (Figure 4D). An important difference from Experiment 1 was that the model did not know a priori what this signal meant and made errors following its presentation during the first part of a session. Since the wooden board was always associated with an error, the robot learned by itself to shift its behavior and to restart exploring when the board was later presented. This was achieved by learning meta-values associated with the different perceived objects: each time the perception of a given object was followed by a variation (positive or negative) of the average reward obtained by the robot, the meta-value of this object was slightly modified (Eq. 7).

FIGURE 9 | Example session comprising 14 consecutive problems performed by the iCub robot during Experiment 3. The different parts of the figure follow the same legend as Figure 6. This session illustrates that, after many presentations of the wooden board (triangles) followed by errors, the robot learned to associate it with a condition change. This happened around time step 12000 (labeled "SHIFT"), where the appearance of the wooden board successfully triggered a reset of action values and of the exploration rate.

With this principle, the robot learned that presentation of the board was always followed by a drop in the average reward. Thus, the board acquired a negative meta-value. When the meta-value of a given object became significantly low, the robot systematically shifted its behavior and restarted exploring each time the object appeared again. Figure 10A shows the evolution of the meta-values associated with the board, the cubes, and the perception of the experimenter's hands grasping the cubes. We can see that the board's meta-value incrementally decreased each time it was presented and followed by an error. In the example session shown in Figure 9, the meta-value of the board became sufficiently low to enable a behavioral shift at the beginning of the 11th problem, after about 12000 time steps. At that moment, the human hid the cubes with the board and changed the position of the rewarding cube, and the robot directly chose a new cube (exploration).
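A minimal sketch of this meta-value bookkeeping, assuming the update form of Eq. 7 as reconstructed above; the learning rate and reset threshold below are illustrative, not the values from the parameter table in the Appendix.

```python
def update_meta_values(meta, seen_objects, avg_reward, prev_avg_reward, eta=0.05):
    """Eq. 7 sketch: shift each seen object's meta-value by the change in
    the estimated reward average at the end of the trial."""
    delta = avg_reward - prev_avg_reward
    for obj in seen_objects:
        meta[obj] = meta.get(obj, 0.0) + eta * delta
    return meta

RESET_THRESHOLD = -0.3          # illustrative; tuned empirically in the paper

def should_reset(meta, obj):
    """True -> reset action values and raise beta* (new exploration phase)."""
    return meta.get(obj, 0.0) < RESET_THRESHOLD
```

Because the board is always followed by a reward drop, its meta-value decreases monotonically until `should_reset` fires on each presentation, whereas the cubes' meta-value fluctuates around zero and never crosses the threshold.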
When looking at all eight experimental sessions performed by the robot, among the 55 presentations of the board that occurred in the first 10000 iterations of a session, the robot shifted only five times (9.1% of the time). Among the 37 presentations of the board that occurred after the first 10000 iterations, the robot shifted 29 times (78.4%). Thus, the iCub robot learned to shift in response to the board. Such a learned behavioral shift produced an improvement in the robot's performance on the task. During the second part of each session, the robot made fewer errors on average during search phases and required fewer trials to find the correct cube. Before this shifting was learned, in 65 problems initiated by a board presentation, the robot took on average 3.5 trials to find the correct cube. After shifting was learned, in 36 problems initiated by a board presentation, the robot took on average 2.2 trials to find the correct cube. The difference is statistically significant (Kruskal-Wallis test, p < 0.001). Figure 10 also shows that the meta-value associated with the cubes themselves fluctuated - because perception of the cubes was sometimes followed by correct choices and sometimes by errors - but remained within a certain boundary. As a consequence, the robot did not unlearn the task. If the cubes' meta-value had also significantly declined, the robot would have reset its action values at each presentation of the cubes (i.e., at each trial) and would not have been able to find the correct target. Thus, such a meta-learning mechanism may be a good model of how animals learn the structure of the task during the pre-training phase of Experiments 1 and 2: (A) learning that some cues are sometimes followed by rewards and sometimes by errors, and are thus subject to RL; (B) learning that some other cues, such as the PCC, are always followed by errors and should be associated with a task change that requires a reset of action values and exploration each time they are presented. We finally addressed an additional degree of complexity. During the second half of each experiment, once the robot had learned to shift its choice in response to the wooden board, the human introduced new unexpected uncertainty by occasionally "cheating" in the middle of a problem. The human put his hands on the cubes, grasped them, and changed their position without hiding the cubes with the board (as illustrated in Figures 4F-H). The robot detected such an event by recognizing the hands on the cubes. This was provided a priori to the robot as a possible visual feature, but was not a priori associated with any meaning. In a first stage, this event was systematically followed by an error from the robot, which selected the cube location associated with the highest value (exploitation), even though the human had "cheated" by moving the rewarded cube to a different location. A first degree of flexibility was enabled by the model's RL mechanisms. This permitted the robot to decrease the value of the cube location following this error, and thus to avoid persistence in failure: of the 37 instances in which the human cheated and the robot made an error, the robot shifted at the next trial in 34 cases (91.9%). In addition, and similarly to the board, the meta-value of the perceived hands incrementally decreased, finally producing a high probability of triggering a new exploration phase each time this event occurred (Figure 10B).
Thus, the robot progressively learned to shift its behavior in response to the configuration of the human's hands during cheating: among 16 such events occurring after the first 20000 iterations of a session, the robot shifted 10 times (62.5%), while it shifted in only 3.0% (1/33) of the cases during the first 20000 iterations of each session.

DISCUSSION
This work showed the application of a neuro-inspired computational model to a series of robotic experiments inspired by monkey neurophysiological tasks. The last experiment extended such tasks to a simple human-robot interaction scenario. This demonstrates that a neuro-inspired model can adapt to diverse conditions in a real-world environment by virtue of:

• Reinforcement learning (RL) principles, enabling the capability to learn by trial-and-error and to dynamically adjust the values associated with behavioral options;
• Meta-learning mechanisms, here enabling the dynamic and autonomous regulation of one of the RL meta-parameters, the exploration rate β.

The model synthesizes a wide range of anatomical and physiological data concerning the anterior cingulate-prefrontal cortical system. In addition, certain aspects of the neural activity produced by the model during performance of the tasks resemble previously reported ACC neural patterns that were not a priori built into the model (Procyk et al., 2000; Quilodran et al., 2008). Specifically, like neurons in the ACC, the model's ACC feedback categorization neurons responded more to the first correct trial and not to subsequent correct trials, a consequence of the high learning rate suitable for the task. This provides a functional explanation for these observations. Detailed analysis of the model's activity properties during simulations without robotic implementation provided testable predictions on the proportion of neurons in ACC and LPFC that should carry information related to different variables in the model, or that should vary their spatial selectivity between search and repetition phases (Khamassi et al., 2010). In the future, we will test hypotheses emerging from this model on simultaneously recorded ACC and LPFC activities during PS tasks. The work presented here also illustrated the robustness of the biological hypotheses implemented in this model by demonstrating that it could allow a robot to solve similar tasks in the real world. Comparison of simulated versus physical interaction of the robot with the environment in Experiment 1 showed that real-world performance produced unexpected uncertainties that the robot had to accommodate (e.g., obstructing vision of an object with its arm and thus failing to perceive it, or perceiving a feature in the scene which looked like a known object but was not). The neuro-inspired model provided learning abilities that could be suboptimal in a given task but that enabled the robot to adapt to these kinds of uncertainties in each of the experiments. By incorporating a model based on neuroscience hypotheses in a robot, we had to make concrete hypotheses about the interaction between brain structures dedicated to different cognitive processes. Robotic constraints prevented us from providing the ad hoc information often used in perfectly controlled simulations, such as the information that the absence of reward at the end of a trial should be considered as a feedback signal for the RL model (Daw et al., 2006; Behrens et al., 2007; Seo and Lee, 2007).
Instead, dopamine neurons of our model produced a reward prediction error signal in response to any salient event (appearance or disappearance of a visual cue) and could affect the synaptic plasticity of an action value neuron within ACC only when it co-occurred with an efference copy sent by the PMC. Interestingly, dopamine neurons were previously reported to respond also to salient neutral stimuli (Horvitz, 2000), which was interpreted as a role of dopamine neurons in blocking sensory habituation and sustaining appetitive behavior to learn task-relevant action-outcome contingencies (Redgrave et al., 2008). Moreover, in the case of dopaminergic signaling to the striatum, it has been reported that a motor efference copy is sent to the striatum in conjunction with the phasic response of dopaminergic neurons, which was interpreted as enabling a specific reinforcement of relevant action-outcome contingencies (reviewed in Redgrave et al., 2008). Thus, an interesting neurophysiological experiment that could validate or refute the choices implemented in our model would consist of recording dopaminergic neurons during our PS task to see whether: (1) they respond to neutral salient events; (2) their response to trial outcomes is contingent on traced inputs from PMC to ACC. Importantly, our work demonstrated that the model could also be applied to human-robot interaction. The model enabled the robot to solve the task imposed by the human and to successfully adapt to unexpected uncertainty introduced by the human (e.g., cheating). The robot could also learn that new objects introduced by the human could be associated with changes in the task condition. This was achieved by learning meta-values associated with different objects. These meta-values could either be reinforced or depreciated depending on variations in the average reward that followed presentation of these objects. The object used to hide the cubes on the table while the human changed the position of the reward was learned to have a negative meta-value, and after learning it triggered a new behavioral exploration by the robot. Such meta-learning processes may explain the way monkeys learn the significance of the PCC during the pre-training phase of Experiments 1 and 2. In future work, we will analyze such pre-training behavioral data and test whether the model can explain the evolution of monkey behavioral performance along this process. Future work can also include a refinement of the β*-based regulation of exploration within the LPFC so as to take into account noradrenergic neuromodulation within a network of interconnected cortical neurons. Indeed, here we wanted to evaluate mathematical principles of meta-learning for the regulation of exploratory decisions. As a consequence, we simply algorithmically transferred the outcome history computed in ACC into the β variable used in the softmax equation for action selection in LPFC (Eq. 4). This does not preclude a neural implementation of such an interaction. It has previously been shown that noradrenergic neurons in the locus coeruleus (LC) shift between two modes of response across exploration and exploitation phases, and that noradrenaline changes the signal-to-noise ratio within the prefrontal cortex (Aston-Jones and Cohen, 2005). Given that ACC projects to LC and drives phasic responses of LC noradrenergic neurons (Berridge and Waterhouse, 2003; Aston-Jones and Cohen, 2005), our model is consistent with such a configuration.
A possible improvement of our model would be to replace the algorithmic implementation of the softmax function in our LPFC module by a modulation of extrinsic and inhibitory synaptic weights between competing neurons based on the level of noradrenergic innervation, as proposed by Krichmar (2008). On the robotic side, future work could involve autonomous learning of the relevant objects of each experiment (i.e., those that are regularly presented) and adaptive regulation of the learning rate α when shifting between deterministic and stochastic reward conditions (Experiments 1 and 2, respectively). The latter could be achieved by extracting measures of the dynamics of the different task conditions, such as the reward volatility, which is expected to vary between deterministic and stochastic conditions (Rushworth and Behrens, 2008; see Khamassi et al., in press, for a review of this issue in PS tasks). We also plan to extend the model to social rewards provided by the human to the robots by means of language (Dominey et al., 2009; Lallée et al., 2010). Such pluridisciplinary approaches provide tools both for a better understanding of the neural mechanisms of decision-making and for the design of artificial systems that can autonomously extract regularities from the environment and interpret various types of feedback (rewards, feedback from humans, etc.) based on these regularities to appropriately adapt their own behavior.
2015-03-20T15:25:33.000Z
2011-07-12T00:00:00.000
{ "year": 2011, "sha1": "f2177d99a0ca42396cffe05ad054cb8427db5186", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fnbot.2011.00001/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "603c2e7538759297df26129935a3f9c21f01c403", "s2fieldsofstudy": [ "Psychology", "Biology", "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Medicine" ] }
254199563
pes2o/s2orc
v3-fos-license
The Open Biomedical Engineering Journal: Background: Breast cancer is one of the most significant health problems in the world. Early diagnosis of breast cancer is very important for treatment. Image enhancement techniques have been used to improve the captured images for quick and accurate diagnosis. These techniques include median filtering, edge enhancement, dilation, erosion, and contrast-limited adaptive histogram equalization. Although these techniques have been used in many studies, their results have not reached optimum values, depending on the image properties and the methods used for feature extraction and classification.

INTRODUCTION
Breast cancer is an insidious disease that leads to a large number of deaths in women [1]. There are many techniques used for detecting breast cancer, such as mammography, ultrasound, magnetic resonance imaging (MRI), thermography, and electrical impedance tomography. Mammography has high specificity and sensitivity for detecting cancer, together with better resolution and more accuracy in detecting abnormalities deeper in breast tissue, although it uses ionizing radiation and is less sensitive in radiographically dense breasts [2]. On the other hand, ultrasound has high diagnostic utility in women with dense breasts [3]; it uses non-ionizing radiation and is a safe technique. However, it cannot capture an image of the entire breast. MRI is a very accurate test with approximately 100% efficiency and can detect the intraductal spread of cancer, but it has poor specificity and is very expensive compared to the others [4,5]. By contrast, thermography is non-invasive, non-radioactive, and promising for dense breasts [6]. However, it is easily affected by temperature and poorly extracts images from large breasts. Electrical impedance tomography is non-invasive, non-radiative, and risk-free, works well with dense breasts, and is reasonably priced [7-9], but it has poor resolution [10]. Sahiner et al. used mammography and a convolutional neural network (CNN) to classify mammograms as normal/abnormal [11]. Nega et al. used linear discriminant analysis (LDA) as a classifier [12]. However, 92% accuracy for normal/abnormal and 80% accuracy for benign/malignant were achieved by using the support vector machine (SVM) classifier, the discrete wavelet transform (DWT), and the discrete shearlet transform (DST) [13]. Using the SVM classifier and wavelet decomposition, 80% accuracy was achieved at 1.1 fps/I by Campanini et al. [14], and using the SVM classifier, 85.11% accuracy was achieved at 1.44 fps/I by Ke et al. [15]. A fully automatic CAD system using subtracted mean intensity projection images from magnetic resonance imaging (MRI) has also been evaluated [16-19]. A semiautomatic segmentation algorithm achieved accurate and consistent breast lesion segmentation in the study by Ritter et al. [20]. This differs from the ultrasound-based studies by Eltoukhy et al. [21,22], which used curvelet and wavelet transforms with a nearest neighbor classifier, achieving 94.07% accuracy; the wavelet transform achieved 90.07% and the curvelet transform 94.28% accuracy for abnormal cases. Using Euclidean distance for classification and the curvelet transform for feature extraction, 98.59% accuracy was achieved in the study by Eltoukhy et al. [22]. Using the local discrete cosine transform (LDCT) and the curvelet transform with the wrapping technique, 77.3% accuracy was achieved by Gardezi et al. [23].
The use of a support vector machine with 1238 coefficients and 150 features achieved an accuracy of 95.84% for normal/abnormal and 96.56% for benign/malignant lesions in the study conducted by Eltoukhy et al. [24,25]. In addition, a marker-controlled watershed transformation algorithm achieved 84.848% accuracy in the study by Shareef [26]. Image preprocessing techniques are used to improve the image features and prepare the image for further processing by eliminating unrelated and spare parts from the background of the mammogram images [27]. Preprocessing involves several steps to make the image ready to use, such as median filtering, edge enhancement, dilation, erosion, and contrast-limited adaptive histogram equalization. The median filter is a nonlinear filter that efficiently eliminates salt-and-pepper noise; the median tends to maintain the sharpness of image edges while removing the noise. Edge enhancement is the simplest linear filter, assigning equal weights (Wk) to all neighborhood pixels; a weight of Wk = 1/(NM) was used for the N × M neighborhood. It is used to suppress noise in an image and removes Gaussian noise with reasonable effect. The mean filter smooths and blurs the images [28]. Dilation and erosion affect the shape, structure, and form of objects. Dilation is used to add pixels at region boundaries or to fill in holes in the image [29]. Dilation can also be used to connect disjoint pixels and add pixels at edges. Erosion performs the opposite operation: it reduces boundaries and increases the size of holes. Contrast-limited adaptive histogram equalization (CLAHE) was originally applied to the enhancement of low-contrast medical images [30-32]. CLAHE differs from ordinary AHE in terms of contrast limitation: it introduces a clipping limit to overcome the noise amplification problem, limiting the amplification by clipping the histogram to a predefined value before computing the cumulative distribution function (CDF). Regarding feature extraction techniques, many studies have used the Gabor filter, the wavelet transform, and the local binary pattern (LBP). The Gabor filter provides the highest response at points and edges where the texture changes. Owing to these characteristics, algorithms based on Gabor filters have been successfully applied in computer vision applications [33], such as texture extraction [34,35]. The general form of the 2D (for mammographic images) Gabor filter family is characterised by a Gaussian kernel modulated by an oriented complex sinusoidal wave [36]. LBP is an effective method for extracting textural features. The LBP operator converts the image into an array or an image with integer labels, describing the small-scale appearance of the image [37]. Support vector machine (SVM), linear discriminant analysis (LDA), and nearest neighbor (KNN) classifiers were used in our research. SVM is a machine learning technique that separates binary classes by obtaining and using a class boundary hyperplane, thereby maximizing the margin of the given training data. The training data samples along the hyperplanes close to the class boundary are known as support vectors, and the margin is the space between the support vectors and the class boundary hyperplanes. SVM is based on the idea of decision planes that define decision boundaries: the decision plane separates sets of items with different class memberships. SVM is a valuable procedure for data classification.
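Returning to the preprocessing filters described above, the following OpenCV sketch shows how each can be applied to a grayscale mammogram; the kernel sizes, structuring element, CLAHE clip limit, and the MIAS-style file name are illustrative choices, not parameters reported in this study.

```python
import cv2
import numpy as np

def apply_filters(img):
    """Illustrative implementations of the preprocessing filters discussed above."""
    med = cv2.medianBlur(img, 3)                      # remove salt-and-pepper noise
    avg = cv2.blur(med, (3, 3))                       # equal-weight (mean) smoothing
    se = np.ones((3, 3), np.uint8)                    # structuring element
    dil = cv2.dilate(avg, se)                         # fill holes / grow bright regions
    ero = cv2.erode(dil, se)                          # shrink boundaries back
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(ero)                           # contrast-limited AHE

img = cv2.imread("mdb001.pgm", cv2.IMREAD_GRAYSCALE)  # hypothetical MIAS file name
out = apply_filters(img)
```

In practice, the order in which these calls are chained defines a filtration sequence; the study compares several such orderings, as described next.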
A classification task typically involves training and testing data comprising data instances [38]. Linear discriminant analysis (LDA) is a frequently used procedure for data classification and dimensionality reduction. LDA handles situations in which the within-class frequencies are unequal, and its performance has been examined on randomly generated data. This approach maximizes the ratio of between-class variance to within-class variance in any specific dataset, thereby ensuring maximal separability [39]. LDA often delivers robust, reliable, and interpretable results in a simple manner. When faced with real-world classification difficulties, LDA is frequently the first benchmarking technique tried before other more complicated and adaptable techniques are utilized [40]. The nearest neighbor classifier (KNN) is a commonly used pattern classification procedure owing to its simplicity and efficiency [41-43]. Furthermore, KNN, a flexible multivariate statistical technique, uses the standard Euclidean distance to evaluate the data [44,45]. KNN assigns the class label based on the k nearest training samples in the feature space. When a dataset is offered, it selects the k nearest samples from the labeled training data and determines the class from the most representative samples. The Euclidean distance similarity metric was applied to select neighborhoods. Our study aimed to differentiate between normal and abnormal mammographic breast images and to accurately diagnose these images.

Data
The data were gathered from the Mammographic Image Analysis Society (MIAS) database, which categorizes breast tissues as normal, benign, or malignant. Although breast tissues may also be classified as fatty, fatty glandular, or dense glandular, the collected images were diagnosed using image processing algorithms. The collected images were analyzed at a resolution of 1024 × 1024 pixels. Their distribution is shown in Table 1, considering the radius of the abnormality as 197 pixels.

Image Preprocessing
Image preprocessing techniques are regarded as one of the most significant steps for improving image quality by reducing noise and other undesired regions. Image segmentation is used to crop the images to the abnormal regions for easy detection and diagnosis of the ROIs (regions of interest). The ROI was extracted manually as a circle of radius 197 pixels. Four different preprocessing procedures were used, and each procedure had its own filtration sequence. As indicated in Table 2, the median filter, average filter, dilation, erosion, and adaptive histogram equalization are the filters that can be used in specific sequences. The key variation between these sequences is the order in which the filters are applied. For example, in sequence four, the adaptive histogram is applied first, followed by dilation, erosion, the median filter, and the average filter. The extracted and filtered images were examined and compared using the mean square error (MSE) and the structural similarity index (SSIM) to determine the best applied scenario for making the image clearer and noise-free. The mean square error is the most common measure of image quality; a higher MSE value indicates lower image quality. MSE is defined as follows:

MSE = (1 / (M · N)) Σi Σj [X(i, j) − Y(i, j)]², (1)

where X and Y are the compared images of size M × N. SSIM is also used to measure the similarity between two images in order to assess the difference in quality between the generated image and the original image. With a moving window, SSIM considers the arrangement of image values by quantifying pixel intensities, using three components: brightness, contrast, and structure.
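A compact way to score each filtration sequence against the original image, using scikit-image for SSIM; the function and variable names are ours, not from the paper.

```python
import numpy as np
from skimage.metrics import structural_similarity

def mse(x, y):
    """Eq. 1: mean squared pixel difference; higher means lower quality."""
    x, y = x.astype(float), y.astype(float)
    return np.mean((x - y) ** 2)

def score_sequence(original, processed):
    """Return (MSE, SSIM) for one preprocessing scenario."""
    ssim = structural_similarity(
        original, processed,
        data_range=float(original.max() - original.min()))
    return mse(original, processed), ssim
```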
SSIM calculates the similarity between two images, X and Y, as expressed by the following equation:

SSIM(x, y) = ((2 μx μy + c1)(2 σxy + c2)) / ((μx² + μy² + c1)(σx² + σy² + c2)), (2)

where μx and μy are the mean intensities, σx² and σy² the variances, σxy the covariance of the two images, and c1 and c2 stabilizing constants. According to the retrieved findings of the two tested metrics, scenario-2 of the sequence (adaptive histogram, dilation, median, average, and erosion) obtained the highest score, as shown in Table 3. The original image and the preprocessed image are illustrated in Fig. (1), whereas Fig. (2) shows the image after it has been processed as well as the ROI extraction.

Fig. (1). Applying sequence-2 of preprocessing techniques.

Feature Extraction
To select the most effective features in the gathered images, the Gabor filter and the local binary pattern were employed as feature extraction techniques, with features combined between them. Fig. (3) shows the flowchart of the completed work.

Gabor Filter
The Gabor filter is a linear filter used to extract information from images, such as texture (mean, standard deviation, skewness, variance, mean absolute value, and maximum energy). Fig. (4) shows the ROI of the mammography image before and after using the Gabor filter (Eq. 4), which responds most strongly at points and edges where the texture changes. Algorithms based on Gabor filters have been effectively employed on breast cancer images to extract significant features and data to aid the classification process using these characteristics (Fig. 4). A Gaussian kernel modulated by an oriented complex sinusoidal wave represents the generic form g(x, y) of a 2D Gabor filter family, as shown in equations 3-6:

g(x, y) = (1 / (2π δx δy)) exp[−(1/2)(x²/δx² + y²/δy²)] exp(j 2π W x), (3)

where δx and δy are the scaling parameters, W is the central frequency of the complex sinusoid, and θ ∈ [0, π] is the orientation of the normal to the parallel stripes of the Gabor function. The remaining members of the family (Eqs. 4-6) are obtained by rotating and scaling g(x, y) over the set of orientations and frequencies, where m is the total number of orientations and n is the total number of frequencies.

Fig. (4). Applying the Gabor filter.

Local Binary Pattern
LBP is a robust texture descriptor. The features were extracted based on a threshold. This method has proven to be a powerful tool for extracting texture features from images, such as the mean intensity value, contrast, correlation, and entropy. The mammogram image after preprocessing and after the local binary pattern is shown in Fig. (5). The local binary pattern (LBP) was used to calculate the mean intensity value, contrast, correlation, and entropy of the studied image, and was considered a texture descriptor. Texture was defined for each pixel using the local structure. The binary code is extracted based on the intensity-level differences between neighboring pixels; the intensity level of the central pixel is used as the threshold value for the surrounding pixels. The general form of the local binary pattern (LBP) is represented by equation 7, as follows:

LBP(P, R) = Σ(p = 0 to P − 1) s(gp − gc) · 2^p, with s(x) = 1 if x ≥ 0 and 0 otherwise, (7)

where gp is the value of a neighboring pixel, gc is the gray value of the central pixel, P is the total number of neighbors involved, and R is the radius of the neighborhood. To calculate the accuracy of the classifiers, the merged Gabor filter and local binary pattern features were employed as a new group of features. Ten features were created by combining these values: mean, standard deviation, skewness, variance, mean absolute value, maximum energy, mean intensity, contrast, correlation, and entropy. To determine the best features, three groups of features were introduced to the three classifiers.
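A minimal feature-extraction sketch for both descriptors using scikit-image; the filter-bank size (two frequencies, four orientations) and the summary statistics per response are illustrative choices, not the exact ten-feature set of the paper.

```python
import numpy as np
from scipy import ndimage
from skimage.filters import gabor_kernel
from skimage.feature import local_binary_pattern

def gabor_stats(roi, frequencies=(0.1, 0.2), n_orient=4):
    """Texture statistics from a small Gabor filter bank (illustrative bank size)."""
    feats = []
    for f in frequencies:
        for theta in np.arange(n_orient) * np.pi / n_orient:
            k = np.real(gabor_kernel(f, theta=theta))
            resp = ndimage.convolve(roi.astype(float), k, mode="wrap")
            feats += [resp.mean(), resp.std()]
    return np.array(feats)

def lbp_hist(roi, P=8, R=1):
    """Eq. 7 in practice: uniform LBP codes summarized as a histogram."""
    codes = local_binary_pattern(roi, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist
```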
Classification
After collecting the three groups of features - 1) Gabor filter features, 2) local binary pattern features, and 3) merged features - the features were classified using three techniques: support vector machine (SVM), linear discriminant analysis (LDA), and nearest neighbor (KNN) classifiers. For KNN, the cosine distance metric and equal distance weights, together with 10 neighbors, were the adjusted parameters. The linear kernel function together with the multiclass method were the SVM settings, whereas LDA was assigned a full covariance structure. These parameters were used to implement the classification process.

RESULTS
In this paper, 319 images were obtained from the Mini-MIAS database (Mammographic Image Analysis Society). The images were divided into 209 normal and 110 abnormal for the mass/non-mass classification. For benign/malignant lesions, the 110 abnormal images were divided into 60 benign and 50 malignant lesions. The images were grayscale, with a size of 1024 × 1024 pixels. The ROI was manually extracted with a radius of 197 pixels. The images were processed with a group of filters - adaptive histogram equalization, dilation, median, average, and erosion - in the selected sequence/scenario that satisfied the best MSE and SSIM scores. Three groups of features were used: 1) features from the Gabor filter, 2) features from LBP, and 3) features from merging (GF+LBP). Three classifiers, SVM, LDA, and KNN, were used to classify the images as either normal/abnormal or benign/malignant. The combination of LDA as a classifier and GF+LBP as the group of features yielded the best results, with 100% differentiation between normal and abnormal images, as illustrated in Fig. (6). The experimental results indicated that, when using the Gabor filter, the results were 95.7%, 98.9%, and 95.7% for normal/abnormal, and 85.1%, 85.1%, and 82.9% for benign/malignant, using SVM, LDA, and KNN as classifiers, respectively. Using the local binary pattern for feature extraction, the results were 96.8%, 98.9%, and 96.8% for normal/abnormal and 85.1%, 85.1%, and 82.9% for benign/malignant, using SVM, LDA, and KNN as classifiers, respectively. By merging the Gabor filter and local binary pattern features, the results were 97.8%, 100%, and 94.6% for normal/abnormal and 85.1%, 88.7%, and 81.9% for benign/malignant, using SVM, LDA, and KNN classifiers, respectively. As shown in Fig. (7), the accuracy of applying the classifiers to abnormal cases (benign and malignant) was 88.7% when applying LDA together with the combined features. Furthermore, as shown in Table 4, a comparison of the proposed technique with previous work confirms that the calculated results achieve higher accuracy, particularly in distinguishing between normal and abnormal cases.

DISCUSSION
Based on the experimental results, KNN was observed to be the weakest classifier, both for differentiating between normal and abnormal images and between benign and malignant images. The SVM techniques provided equal results for abnormal images for all three feature groups. The texture descriptor extracted from the LBP and the maximum response at points and edges extracted from the Gabor filter account for the observed differences in the results between normal and abnormal images. The same accuracy as that of the LDA classifier was obtained using features extracted from either GF or LBP.
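For reproducibility, the classifier configuration stated in the Classification subsection maps onto scikit-learn as follows; the feature matrix and labels are random placeholders standing in for the 319-image, ten-feature dataset.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(319, 10))       # placeholder for the 10 merged GF+LBP features
y = rng.integers(0, 2, size=319)     # placeholder normal/abnormal labels

classifiers = {
    "SVM": SVC(kernel="linear"),                                   # linear kernel
    "LDA": LinearDiscriminantAnalysis(),                           # full covariance
    "KNN": KNeighborsClassifier(n_neighbors=10, metric="cosine"),  # 10 neighbors
}
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5)
    print(name, round(scores.mean(), 3))
```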
CONCLUSION
This paper has introduced a combined system that uses the best sequence of preprocessing enhancement techniques after manually segmenting ROIs extracted from the MIAS database. Three distinct classifiers were used to classify the features obtained from the Gabor filter (GF) and the local binary pattern (LBP). The LDA classifier achieved a substantial improvement by integrating the features, reaching 100% accuracy for normal/abnormal images and 88.7% accuracy for benign/malignant images. The proposed technique combines these methods and determines the appropriate order of image enhancement techniques based on the image database used.

ETHICAL STATEMENT
The database that supports the results of this research is available online and is cited appropriately. We were concerned only with data analysis and methodology and not with any clinical testing. The Mammographic Image Analysis Society (MIAS) database was used for this study. This study has been approved by the medical ethics committee.

CONSENT FOR PUBLICATION
Not applicable.

FUNDING
None.
2022-12-03T16:13:44.760Z
2022-12-01T00:00:00.000
{ "year": 2022, "sha1": "d5eea53cf491ec54e8d96bdb90819a05b5abc855", "oa_license": "CCBY", "oa_url": "https://openbiomedicalengineeringjournal.com/VOLUME/16/ELOCATOR/e187412072209200/PDF/", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "393ccc9b05f0bf9b982649eb3cb2faa0448cfe8b", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
255775050
pes2o/s2orc
v3-fos-license
Converging orifice used to control the discharge rate of spherical particles from a flat floor silo

The effect of the converging orifice geometry in a model silo on the discharge rate of monosized spherical particles was studied experimentally and numerically. The cylindrical container was equipped with interchangeable inserts with converging discharge orifices of various upper diameters in the upper base and a constant lower diameter in the lower base. Plastic PLA beads and agricultural granular materials - wheat, rapeseeds, and linseeds - were tested. A series of discrete element method simulations corresponding to the performed experiments was conducted with a largely extended set of experimental discharge conditions. In the case of a constant insert thickness, the discharge rate initially increased with an increase in the half cone angle of the converging orifice, and then the tendency reversed. In the majority of cases, the discharge rate through the converging orifice was higher than through a hopper with the same orifice diameter.

Recently, reports have been published presenting numerical methods for designing hoppers with a varying contraction rate to maximize the mass discharge rate of granular material. The finite element method 13,14 or the discrete element method 15, with efficiency corroborated by experimental verification 16, are used most often. Some results have shown that the MDR can be increased by nearly 140% in a curved hopper compared to a conical hopper with the same orifice size, hopper height, and silo diameter. Proper silo geometry may allow precise control of the flow rate of granular material discharging from the silo; however, understanding how to manipulate the mass discharge rate requires further research. This may have practical applications in metering, dosing, or mixing. Considering the results of the above-mentioned studies, the objective of the reported project was to carry out a systematic study of the flow through a conical converging orifice with various values of thickness and half cone angle. The possibility of replacing the hopper bottom with a flat bottom equipped with converging discharge orifices in a silo has been investigated. Motivation for the present study comes from the industrial flow of powders and grains in various devices. Converging parts, e.g. welding neck flanges, are common and important components of many practical apparatus used in the transport and processing of liquids and granular solids 17,18. So far, no attempts have been made to use a numerical method for analyzing the flow rate of granular materials through a conical converging orifice with various geometries. Therefore, a series of discrete element method simulations, supplemented by laboratory experiments, has been performed. A dedicated appliance was designed for the purpose of this project.

Methods and materials
Laboratory testing. The experimental silo was used to measure the mass discharge rate MDR. The cylindrical flat-bottomed container (Fig. 1a) was 150 mm in diameter and 450 mm high. The container wall was made of galvanized steel, while its flat floor was made of plywood. Plastic PLA beads with a diameter d p of 5.95 mm and a mass of 0.25 g were used as reference particles. The number of PLA particles in the sample was 14,000. Wheat, rapeseeds, and linseeds were tested as agricultural granular particles (Fig. 1b, Table 1). The frictional parameters of the particles were determined using the tilting table method (Table 2). A repeatable filling procedure was adopted to maintain a similar geometrical bedding structure in subsequent tests. A sieve was placed axially on the top surface of the silo. The measured amount of particles was poured through the sieve. After completion of filling, the top free surface was leveled. The discharge gate was opened, and the mass of particles leaving the container was measured until the discharge was completed. Indications of the three load cells supporting the silo were used to determine the change in the mass of the silo and particles during discharge. The change in the mass of discharged particles was used to calculate the mass discharge rate MDR.
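A minimal sketch of how the MDR can be estimated from such a load-cell signal, assuming it is computed as the slope of the mass-versus-time record; the steady-flow thresholds below are illustrative, not values from the paper.

```python
import numpy as np

def mass_discharge_rate(t, m):
    """Estimate MDR (kg/s) as the slope of the mass-vs-time signal from the
    load cells; t in s, m in kg, m[0] = initial fill mass."""
    t, m = np.asarray(t, float), np.asarray(m, float)
    steady = (m > 0.1 * m[0]) & (m < 0.9 * m[0])   # skip start-up and emptying
    slope, _ = np.polyfit(t[steady], m[steady], 1)
    return -slope                                   # positive during outflow
```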
The silo diameter was 25 times bigger than the largest particle diameter, which, according to the findings reported in the literature, allowed neglecting the influence of the bin wall [19][20][21]. The thicknesses h of the inserts were tested in a range from 0 to 100 mm; the majority of them were whole-number multiples of the mean particle diameter. The lower diameter d 0 ranged from 19 to 55 mm and the upper diameter d 1 ranged from 32.5 to 72 mm, providing a half cone angle ranging between 4 and 90°. The reference lower diameter d 0 of the orifice was 32.5 mm. The flat orifice with d 1 = 32.5 mm (d 0 > d 1) served as a reference orifice providing a non-disturbed discharge. Discharge through conical hoppers with the same half cone angle as that of the converging orifice provided additional reference data for the mass discharge rate. The orifice diameter of the hopper was 32.5 mm and its upper diameter was 150 mm. The Hertz-Mindlin no-slip contact model, following the Hertz theory 23, was applied for the simulations as the default model of the EDEM software package 24. The material parameters of the particles were taken to reproduce the properties of the PLA particles: solid density ρ = 2212 kg/m 3, Young's modulus E = 8.8 GPa, and Poisson's ratio ν = 0.25 25. The frictional parameters between particles μ p-p = 0.47, between particle and wall μ p-w = 0.49, and between particle and bottom (plastic insert) μ p-b = 0.21, as well as the coefficient of restitution e = 0.3, were determined experimentally. The default rolling friction value of 0.01 of the EDEM software was applied for the simulations. The walls of the silo were modelled with density ρ = 7800 kg/m 3, Young's modulus E = 200 GPa, and Poisson's ratio ν = 0.25, which are the material parameters of steel. Particles were generated inside the model silo. The particles were then discharged through a centrally located flat orifice, a converging orifice, or a conical hopper (Fig. 2). The simulations were performed with a time step of 1.6·10 -6 s using the EDEM software package 24. The simulations were performed according to the following schemes of setting the converging orifice parameters.

Results
Discharge scheme No. 1 (d 1 = var., α = var., d 0 = const., h = const.). The preliminary DEM simulations performed for the flat orifice (d 0 > d 1) with the diameter d 1 in the range from 19 to 35 mm indicated that the threshold orifice size providing an undisturbed flow of material from the silo was 32.5 mm. Therefore, in the further study, the lower diameter d 0 = 32.5 mm was applied for the simulations. The DEM-simulated relationship between the mass discharge rate MDR and the upper diameter of the converging orifice d 1 for d 0 = 32.5 mm is presented in Figure 3a. Figure 3b shows the change in the normalized mass discharge rate (MDR norm.) with increasing half cone angle α of the converging orifice.
Mass discharge rates were normalized by the mass discharge rate determined for the flat orifice of d 1 = 32.5 mm. For all tested thicknesses, the MDR norm. initially increased with increasing α. After the maximum was reached at α crit., the mass flow rate monotonically declined towards the MDR obtained for the flat reference orifice (i.e. MDR norm. → 1). The highest maximum of the MDR norm. (> 3) was obtained for α crit. = 4° and h = 100 mm. The maximal values of MDR norm. decreased with decreasing insert thickness and were noted at higher half cone angles α crit. For small values of α crit., the maxima of MDR norm. obtained for the converging orifice were 5% lower than those obtained for the hopper with the same half cone angle α and the same orifice diameter of 32.5 mm, while the maxima for α > 20° were approximately 10% higher than those for the hopper. The course of the relationships MDR norm. (α) may be interpreted in the light of the Jenike criterion for the flow pattern in a conical hopper as dependent on the angle of internal friction and on the α value 14,24. In the case of a steep hopper (low α), mass flow takes place. After an increase in α to a limiting value, the flow pattern changes into a funnel flow. A further increase in α leads to the formation of a stable dead zone with a converging flow identical to that present in a flat floor silo. The results of the laboratory tests performed for four granular materials discharged through the converging orifice with the geometry providing the maximum MDR in the DEM simulations were compared with the numerical results obtained for the same geometry of the converging orifice and for the hopper (Fig. 4). The experimental and numerical results were in reasonable agreement. Both of them showed the same tendency of a decrease in MDR norm. with the α crit. increase. Most of the experimental results were located very close to the results of the simulations performed for the converging orifice. The values of MDR norm. for rapeseeds were lower than those for the other materials, which should be attributed to the over twice larger difference in the size of the seeds. This is consistent with the findings reported by Gella, Maza, & Zuriguel 8. The dependence of the MDR on α for d 0 = const. thus fell into two regions: (1) the MDR increasing with α increase for α ≤ α crit. and (2) the MDR decreasing with α increase for α > α crit. (Fig. 5). Similar to the experimental data (Fig. 4), the relationships obtained using scheme No. 2 (d 1 = var., d 1 = d 0 + const., h = const., α = const.) followed Beverloo's relationship very well. This means that the relationship MDR(d 1) obtained for the converging orifice with α = const. ≤ α crit. followed Beverloo's relationship obtained for the flat orifice. Using scheme No. 3 (d 0 = var., d 1 = const.), the MDR increased with d 0 until it reached its maximum/plateau and remained almost constant with the further increase in d 0 (Fig. 7a). Substituting the d 0 variable with the corresponding half cone angle α under the condition d 1 = const., it can be observed that the MDR remained almost constant for α ≤ α crit. and decreased with the α increase for α > α crit. (Fig. 7b). The scatter of the MDR, illustrated in Fig. 7 as the standard deviation bars, prevented a precise determination of the α initiating the plateau. The difference in the course of the dependencies presented in Figs. 3 and 7 results from applying a different independent x variable: d 1 in Fig. 3a and d 0 in Fig. 7a. Additionally, the half cone angle α applied in Fig. 3b and Fig. 7b depends in a different way on the variables d 0 and d 1.
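For reference, Beverloo's relationship for a flat circular orifice can be written as W = C ρ_b √g (D − k d)^2.5; a minimal sketch follows, where the coefficients C and k are typical literature values and the bulk density is an assumed figure, not parameters fitted in this study.

```python
import numpy as np

def beverloo_mdr(rho_b, d_orifice, d_particle, C=0.58, k=1.5, g=9.81):
    """Beverloo correlation for the mass discharge rate (kg/s) through a flat
    circular orifice: W = C * rho_b * sqrt(g) * (D - k*d)**2.5."""
    return C * rho_b * np.sqrt(g) * (d_orifice - k * d_particle) ** 2.5

# e.g. assumed bulk density 700 kg/m^3, D = 32.5 mm, d = 5.95 mm (PLA beads)
print(round(beverloo_mdr(700.0, 0.0325, 0.00595), 3))   # ~0.1 kg/s
```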
Dense and loose flow through the orifice. Figure 8 shows changes in the mean porosity of the assembly of spherical particles determined within the volume of the orifice of d 0 = 32.5 mm, for insert thicknesses of 100 mm (Fig. 8a) and 12 mm (Fig. 8b), at rest after filling and during commencement of the discharge. Porosity is defined as the ratio of the volume of pores to the volume of the assembly. The time variation of porosity in the volume of the orifice is shown for several values of α. After filling, the porosity was approximately 48% in static conditions. For the insert with h = 100 mm, the commencement of discharge produced a sharp increase in the porosity to a value dependent on α (Fig. 8a). For α values below 4°, the increase was nearly immediate. A further increase in α to 4° produced a substantial change in the p(t) relationship, with a switch in porosity lasting approximately 1.4 s. The porosity of the material flowing through the volume of the converging orifice was approximately 83% for α ≤ 4° and 53% for α ≥ 5°. The seemingly slight increase in α from 3° to 4° and subsequently to 5° produced substantial changes in the behavior of the material. The limiting value of the half cone angle was α = α crit. = 4°. The porosity inside the corresponding volume of the hopper with the half cone angle α = 4° during the discharge was 53%, i.e. the same as the dense-flow values obtained for the converging orifice with α > α crit. The same tendency of changes in the porosity was observed for the insert with h = 12 mm and α crit. = 19.7° (Fig. 8b). In this case, the relationships were not as clear as for h = 100 mm due to the relatively large scatter of the data, resulting from the discrete nature of the process averaged over an eight times smaller volume. The comparison of the profiles of the vertical velocity V z of particles during discharge through the flat orifice, the converging orifice, and the hopper with the same α and d 0 (Fig. 9) explains why the mass discharge rate through the converging orifice increases to the values obtained for the hopper. For the converging orifice, at the level of the bottom edge of the orifice, the particle velocity was approximately twice as high as the velocity of the particles leaving the orifice (Fig. 9a). Figure 8a shows that the porosity in the converging orifice was also approximately twice as high as in the hopper. Therefore, the mass discharge rate, being the product of the particle velocity and the bulk density, was similar for the converging orifice and the hopper with the same half cone angle α. Profiles of the particle velocity V z in the vertical direction showed that the highest particle acceleration occurred in the converging orifice (Fig. 9b). The increase in the porosity during the commencement of discharge through the converging orifice softened the structure of the bulk of particles and, consequently, made it easier for the particles to accelerate under gravity. Finally, this resulted in a higher velocity at the level of the bottom edge of the orifice. Softening of the structure of the bulk of particles in the volume of the converging orifice with a thickness of a few particle diameters ensures the same mass discharge rate as the discharge of the dense structure of the bulk of particles through the hopper. This means that, by applying different orifice geometries, a similar mass discharge rate can be achieved by means of a stream of more densely packed particles with a lower particle velocity or a stream of more loosely packed particles with a higher particle velocity.
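The porosity measure used above can be computed directly from DEM particle counts in the conical control volume of the insert; the particle count in the usage line is a made-up example, while the geometry matches the dimensions reported in this study.

```python
import math

def orifice_porosity(n_particles, d_particle, d0, d1, h):
    """Porosity of the conical (frustum) control volume of the converging
    orifice: 1 - (solids volume) / (frustum volume)."""
    v_solid = n_particles * math.pi * d_particle ** 3 / 6.0
    r0, r1 = d0 / 2.0, d1 / 2.0
    v_frustum = math.pi * h * (r0 ** 2 + r0 * r1 + r1 ** 2) / 3.0
    return 1.0 - v_solid / v_frustum

# e.g. 40 PLA beads inside a d0 = 32.5 mm, d1 = 40 mm, h = 12 mm insert
print(round(orifice_porosity(40, 5.95e-3, 0.0325, 0.040, 0.012), 2))
```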
Discussion

The need for a deeper understanding of the kinematic transition region near the outlet of the silo is important for a precisely controlled discharge rate 1,5. Therefore, the orifice dimensions were selected as the variables used to study discharge through the converging orifice. The converging orifice can be considered an extremely simplified curved hopper reduced to two segments: a flat floor and a short part of a hopper. Studies on the effect of the geometry of a conical converging orifice on the mass flow rate of granular material are scarce. Therefore, in this project, the results obtained for silos with conical hoppers were considered as a reference point. In the majority of cases, the flow rate through the converging orifice is higher than through the hopper with the same orifice diameter. Hence, the conical hopper may be replaced by a flat bottom equipped with a converging orifice with a smaller diameter to obtain the same discharge rate. The values of the MDR obtained for the converging orifice were located close to those provided by the hopper and considerably lower than the values provided by the curved hoppers presented by Huang et al. 16,26 and Guo et al. 14.

The main novelty of the study is the indication of a hyperbolic-type relationship between the half cone angle α crit. and the thickness h of the insert with the converging orifice, separating the geometry of the converging orifice into two regions with respect to the dependence of the MDR on α for d 0 = const.: (1) the MDR increasing with the α increase for α ≤ α crit. and (2) the MDR decreasing with the α increase for α > α crit.

The results of this study corroborated the observation that the flow mode (bulk density of the stream and particle velocity) of granular material through a conical converging orifice depends on the half cone angle of the orifice. For α < α crit., the commencement of discharge produces a rapid increase in the porosity of the material in the volume of the orifice, associated with a higher particle velocity. Attaining α = α crit. produced a substantial change: the increase in porosity with the discharge time was much slower and nearly linear. Slightly surpassing α crit. (by one or two degrees) allowed a denser flow with a lower particle velocity. At the flat floor of the bin, a dead zone is formed, generating a natural hopper. In this area, the flow direction changes from vertical to converging, which is associated with softening of the structure of the material. In a hopper, the change in the direction of particle movement is much smoother, which results in much lower dilation and acceleration along the straight line of particle movement. Despite such a large difference in the characteristics of particle movement between the converging orifice and the hopper, the mass discharge rate may be similar for the same half cone angle and an appropriately adjusted height of the converging orifice. As concluded by Gella, Maza, & Zuriguel 8, it is difficult to definitively state which specific property of the particles is responsible for the macroscopic changes observed in the system. The relationship among all these magnitudes is not trivial, and further research is necessary to clarify these questions. Understanding how to manipulate and control the mass discharge rate may have a positive impact on the productivity and quality of industrial unit operations.

Conclusions

The following detailed conclusions were drawn:
1. Material discharges in the dense (α > α crit., porosity ≈ 60%) or loose (α ≤ α crit., porosity ≈ 80%) flow mode depending on the insert thickness h and the angle of inclination of the generatrix of the converging orifice α. The maximal normalized mass discharge rate MDR norm. decreased from 3.2 for h = 100 mm and α = 4° to 1.2 for h = 1.5 mm and α = 55°. In the majority of cases, the flow rate through the converging orifice is higher than through the hopper with the same orifice diameter.

2. For d 0 = const., the critical value of the half cone angle α crit. depended only on the insert thickness h. For α ≤ α crit., the mass discharge rate followed Beverloo's relationship obtained for the flat orifice. The hyperbolic-type dependence of the critical value of the half cone angle α crit. on the insert thickness separated the geometry of the converging orifice (h, α) into two regions with an opposite reaction of the mass discharge rate MDR to the α increase: (1) an increase of the MDR with the α increase for α < α crit. and (2) a decrease of the MDR with the α increase for α > α crit.

3. The tendencies observed for the monodisperse assembly of spherical particles were preserved when beddings of wheat, linseeds, and rapeseeds were tested. However, a closer convergence of the results of the experiments and simulations would require fine tuning of the simulation parameters; the geometrical and mechanical parameters of real particles are far from those of a perfect sphere, which explains this discrepancy.

4. The results of the reported study show that the application of a proper orifice geometry may allow precise control of the flow rate of granular material discharged from the silo. The fairly close agreement between the results of the experimental measurements and the simulations shows that DEM can be used to design equipment in systems involving granular flow.

Data availability

All data generated or analyzed during this study are included in this published article. Further detailed information on the datasets elaborated during the current study is available from the corresponding author and can be provided on reasonable request.
Degradation of lignocellulosic substrates by Pleurotus ostreatus and Lentinus squarrosulus

ABSTRACT

Lignocellulosic substrates are wastes in the environment whose reducing sugars are not readily available for use. Biological pretreatment is the use of microorganisms and/or their metabolites to break down substrates to obtain simple sugars, and it is also cheap compared with other pretreatment techniques. This work aimed to degrade lignocellulosic substrates with higher mushrooms to obtain simple sugars that could be used as raw materials for other industrial processes. Two mushrooms [Pleurotus ostreatus (PO) and Lentinus squarrosulus (LS)] with the ability to produce cellulase, xylanase, and lignase were used for the degradation of lignocellulosic substrates [groundnut shell (GS), maize cob (MC), maize straw (MS), rice straw (RS), and sugarcane bagasse (SB)]. The residual extractives, cellulose, hemicellulose, lignin, and reducing sugar contents were determined every 7 days. The lowest extractives (1.12%), hemicellulose (15.09%), lignin (17.60%), and cellulose (5.60%) contents were recorded in PO-degraded MS, POLS-degraded GS, LS-degraded GS, and PO-degraded MS at 28, 35, 49, and 42 days of degradation, respectively. The highest reducing sugar contents (mg/g) obtained in GS (11.83), MS (27.03), SB (28.70), and RS (37.96) were recorded when degraded by PO for 49, 14, 7, and 49 days, respectively, while that of MC (13.32) was recorded when degraded by LS for 42 days. The reducing sugar obtained was higher from sole degradation with PO than with LS or POLS. Degraded MS, RS, and SB had better yields of reducing sugar than GS and MC. The amount of reducing sugar released varied with substrates, organisms, and degradation time.

The extractives, hemicellulose, lignin, and cellulose contents of degraded maize straw are shown in Table 3. The highest reducing sugar contents of rice straw degraded by PO (37.96 mg/g), LS (17.01 mg/g), and POLS (28.74 mg/g) were recorded at 49, 42, and 63 days, respectively. Statistical analysis revealed that there was no significant difference (P > 0.05) in the reducing sugar content of rice straw degraded by Lentinus squarrosulus across days of degradation.

DISCUSSION

Changes in chemical composition observed when different lignocellulosic substrates were degraded with Pleurotus ostreatus and Lentinus squarrosulus through solid-state fermentation could be due to the metabolites (cellulase, xylanase, lignase/laccase, etc.) produced by these organisms, which can degrade different parts of lignocellulose. This observation corroborates the work of Issaka et al. [17] and Wuanor and Ayoade [18], who degraded groundnut shell with Pleurotus species and reported changes in the chemical composition of groundnut shell. Costa-Silva et al. [19] also observed changes in the composition of grape stalks degraded by some white rot fungi. The lower extractives recorded in most of the degraded substrates than in non-degraded ones are probably due to the utilization of the extractives as nutrients during degradation by these mushrooms [20]. The values of extractives vary from biomass to biomass and between different parts of the same plant [20,21]. The higher hemicellulose content observed in degraded substrates compared with non-degraded ones may result from a shortage of the nutrients required for the production of hemicellulases (xylanase and others) on the substrates, which would otherwise have converted hemicellulose to glucose and xylose. This is contrary to the findings of Issaka et al.
[17] and Wuanor and Ayoade [18], who recorded a decrease in hemicellulose content after degrading groundnut shell with Pleurotus species. The percentage composition of lignocellulosic substrates differs from one substrate to another based on the class of the substrate, i.e., softwood or hardwood [20]. Generally, lower hemicellulose content was observed when substrates were degraded by the co-culture of Pleurotus ostreatus and Lentinus squarrosulus than when degraded singly. This might be due to a synergistic relationship between Pleurotus ostreatus and Lentinus squarrosulus in the utilization of hemicellulose. There have been reports that organisms perform differently in consortium than when used singly [22]. The decrease in the lignin content of groundnut shell observed after 49 days of degradation by Pleurotus ostreatus, Lentinus squarrosulus, and the consortium of the two showed that these mushrooms can remove the lignin bonds that prevent holocellulose from being broken down to simple, fermentable sugars. This observation has been reported to be due to the production of lignin-degrading enzymes by these organisms [23-26]. A similar decrease in the lignin content of groundnut shell degraded by Pleurotus ostreatus for 5 weeks [17] and 30 days [18] has also been reported. The conversion of cellulose to simple sugars by the cellulase-producing mushrooms selected for degradation in this work could be responsible for the decrease in cellulose content observed at most sampling times in all selected substrates. The cellulose part of the lignocellulosic substrates would have been extensively utilized and converted to hexoses by the selected mushrooms, leading to a decrease in cellulose after degradation, as reported by some researchers [27]. A similar decrease in cellulose content after degradation with Pleurotus species was reported by Akinfemi [28] and Huang et al. [29] for maize cob and crop straw, respectively.

The higher reducing sugar released from groundnut shell degraded by monocultures of Pleurotus ostreatus and Lentinus squarrosulus than by their co-culture might be due to greater utilization of the released reducing sugar as carbon and energy sources by the co-culture than by the monocultures [30,31], or the organisms may have an antagonistic effect on each other, leading to a decrease in released reducing sugar when grown together. The higher reducing sugar released in Pleurotus ostreatus- and Lentinus squarrosulus-degraded maize cob than in the non-degraded one was probably due to the interaction between hydrolytic and oxidative enzymes released by these organisms when degrading maize cob, breaking down cellulose and hemicellulose to simple sugars [32,33]. A similar increase in the reducing sugar content of maize cob degraded by Pleurotus ostreatus was reported by Adamafio et al. [32]. The increase in reducing sugar content observed in degraded maize straw could be due to the breaking down of different components of maize straw to reducing sugars by the enzymes produced by the organisms, which could be influenced by both genetic makeup and environmental conditions [29,34,35]. The higher reducing sugar content observed in degraded sugarcane bagasse could be due to the ability of Pleurotus ostreatus and Lentinus squarrosulus to produce cellulase and xylanase, which could have broken down the holocellulose content of sugarcane bagasse to reducing sugars (Ravichandran et al. [26]; Jonathan and Akinfemi [36]; Dong et al.
[37]; Shankarappa et al. [38]; Gani et al. [39]). Gani et al. [39] reported high reducing sugar when sugarcane bagasse was pretreated with alkali and acid. The ability of Pleurotus ostreatus and Lentinus squarrosulus to produce lignocellulolytic enzymes that can break down cellulose, hemicellulose, and lignin to simple sugars could be responsible for the high amount of reducing sugar recorded in degraded rice straw (Jonathan and Akinfemi [36]; Belal [40]; Nurika et al. [41]). Belal [40] reported high reducing sugar in rice straw degraded with Trichoderma reesei for 14 days, while Nurika et al. [41] observed a higher amount of reducing sugar after 21 days of degradation of rice straw with Serpula lacrymans.

CONCLUSION

The ability of Pleurotus ostreatus and Lentinus squarrosulus to degrade lignocellulosic substrates to simple sugars shows that these organisms could be employed in second-generation biofuel production, where simple sugars released from lignocellulose would be used for ethanol production. The highest reducing sugar content (37.96 mg/g) was obtained by degrading rice straw with Pleurotus ostreatus for 49 days. Sole degradation with Pleurotus ostreatus had a better yield of reducing sugar than Lentinus squarrosulus or the co-culture. The amount of reducing sugar released varied with substrates, organisms, and degradation time.

Funding: This study received no specific financial support.
Chlamydia suis undergoes interclade recombination promoting Tet-island exchange

Background: The obligate intracellular bacterial family Chlamydiaceae comprises a number of different species that cause disease in various vertebrate hosts including humans. Chlamydia suis, primarily found in the gastrointestinal tract of pigs, is the only species of the Chlamydiaceae family to have naturally gained tetracycline resistance (TetR), through a genomic island (Tet-island) integrated into the middle of the chromosomal invasin-like gene inv. Previous studies have hypothesised that the uptake of the Tet-island from a host outside the Chlamydiaceae family was a unique event, followed by spread among C. suis through homologous recombination. In vitro recombination studies have shown that Tet-island exchange between C. suis strains is possible. Our aim in this study was to gain a deeper understanding of the interclade recombination of the Tet-island among currently circulating C. suis field strains compared to in vitro-generated recombinants, using published whole genome sequences of C. suis field strains (n = 35) and in vitro-generated recombinants (n = 63).

Results: We found that the phylogeny of inv better reflected the phylogeny of the Tet-island than that of the whole genome, supporting recombination rather than site-specific insertion as the means of transfer. There were considerable differences between the distribution of recombinations within in vitro-generated strains compared to that within the field strains. These differences are likely because in vitro-generated recombinants were selected for a tetracycline and rifamycin/rifampicin resistant background, leading to the largest peak of recombination across the Tet-island. Finally, we found that interclade recombinations across the Tet-island were more variable in length downstream of the Tet-island than upstream.

Conclusions: Our study supports the hypothesis that the occurrence of TetR strains in both clades of C. suis came about through interclade recombination after a single ancestral horizontal gene transfer event.

Supplementary Information: The online version contains supplementary material available at 10.1186/s12864-024-10606-6.

Background

Chlamydia (C.) suis belongs to the Chlamydiaceae family, Gram-negative, obligate intracellular bacteria that are primarily responsible for respiratory, ocular and urogenital disease in humans and animals [1]. C. suis is a pig pathogen and has mostly been associated with mild clinical signs such as conjunctivitis and diarrhoea, but also, to a limited degree, reproductive disorders [2]. Like many of the Chlamydiaceae, C. suis is a zoonotic pathogen and has been detected in the eyes, pharynges and rectum of people that come into close contact with pigs, namely farmers and slaughterhouse workers [3-5], as well as in the eyes of trachoma-endemic populations that domesticate pigs [6].

While the clinical impact of C. suis on porcine health is mild, C. suis has received attention in recent years as the only Chlamydiaceae species to have naturally obtained an antibiotic resistance gene: the tetracycline resistance-conferring tetA(C) gene [7,8]. This gene has been detected in many different C.
suis strains from Europe, Israel and the USA [1]. Where present, it is consistently identified as part of an over 12 kilobase pair (kb) genomic island, termed the Tet-island, likely derived from a plasmid originating from Proteobacteria and located within the chromosomal invasin-like gene inv [9,10]. The nearly 4 kb inv gene has an unknown function and is disrupted by the Tet-island insertion [8,11]. The complete inv gene is only found in two chlamydial species, C. suis and C. caviae, a pathogen found in guinea pigs. In C. muridarum and human C. trachomatis, the two closest phylogenetic relatives of C. suis, the inv gene is truncated or entirely absent, respectively [11,12].

Discrepancies between the mutation rates of the Tet-islands and the chromosomes led to the hypothesis that tetA(C) acquisition in C. suis happened relatively recently, and certainly after the separation of the species into two major ancestral clades [9,10]. Given current data on sequenced strains, the Tet-island possibly originated in the USA before being later transferred to European C. suis strains [9,10]. Moreover, based on the unique structure of the Tet-island and its invariable position within the C. suis chromosome, it has been hypothesised that the original acquisition was a rare and possibly singular horizontal gene transfer (HGT) event, followed by spread of the Tet-island among C. suis strains through homologous recombination [9,10]. This hypothesis presupposes the occurrence of interclade recombination events after the Tet-island was integrated into C. suis. Intra- and interspecies recombination is well-recorded for many Chlamydiaceae species [13]; however, the extent of interclade recombination has so far not been analysed in detail for C. suis.

In this study, we investigated the extent of interclade recombination in both currently available sequences from field isolates and in vitro-generated recombinants. First, we performed phylogenetic comparisons of the inv gene to that of the Tet-island and of the whole genome using all available C. suis genomes, paying special attention to interclade recombination events. Second, we used an established co-culture model [14] with tetracycline-sensitive (TetS) and tetracycline-resistant (TetR) C. suis strains from the two major clades to investigate interclade recombination and Tet-island transfer dynamics in vitro.

Generation of in vitro-generated recombinants

The detailed protocol has already been published [14]. Briefly, tetracycline-sensitive, rifamycin-resistant strains S45 RIF, 94 Ry and 111 Ry were individually co-cultured with SWA-141 (4-29b), SWA-107 (5-27b) or SWA-110 (1-28b) in LLC-MK2 cells (continuous Rhesus monkey kidney cell line, kindly provided by IZSLER Brescia, Italy) at 37 °C and 5% CO2. After the first passage, selection was applied using inhibitory concentrations of rifampicin (Merck, Darmstadt, Germany) or rifamycin (Merck), and tetracycline (Merck). After the second passage, plaque assays were performed as previously described to obtain individual inclusions [15]. Putative recombinants were initially identified by strain-specific PCR and confirmed by stability assays for five to ten passages in the presence and absence of selective antibiotics, followed by PCR reanalysis of passaged cultures. Whole-genome sequencing was performed on a subset of these confirmed recombinants, and on the recipient strains, using the Illumina MiSeq platform [14].
Sequence analysis

All read data for field strains [10] and recombinant strains [14] were obtained from the SRA or were deposited under PRJNA668469 (Table S1). The complete assembled genome of strain 8-29b (NZ_FTQU01000001) was used as the reference against which to map all reads within CLC Genomics Workbench v20.0.2 and also to generate a single nucleotide polymorphism (SNP) phylogeny, with parameters that differed from the defaults as follows: variant calling with 10x minimum coverage, 10 minimum count and 70% minimum frequency (after which mean coverage was assessed), and SNP tree creation with 10x minimum coverage, 10% minimum coverage, 0 prune distance and including multi-nucleotide variants (MNVs). Genome consensi were extracted from mapped reads using a minimum coverage of 5; a multiple sequence alignment (MSA) was created from the whole genome alignments, exported and run through Gubbins v3.0.0 with 5 iterations. Results were viewed in Artemis [16] and Phandango [17]. FastBaps v1.0.8 [18] was used to calculate the population structure. BactDating v1.1.1 was run on the Gubbins output with up to 5 million iterations and the arc model (Table S6). Traces did not converge, and between three runs, the most likely root dates varied from −7433.75 [−20318.78; −109.86] to −5626.65 [−13128.00; 24.31], with mu ranging from 2.57e+00 [7.44e−01; 7.72e+00] to 2.93e+00 [1.09e+00; 8.30e+00].

The inv gene was reconstructed from previous assemblies [10], and the Tet-island, where present, was excluded from the resulting MSA, which was generated using MUSCLE [19] in AliView v1.27 [20]. A phylogeny was created using RAxML (https://raxml-ng.vital-it.ch/) using defaults and 100 bootstraps. The Tet-island phylogeny was generated from the previous alignment, corrected for the strains used in the whole genome phylogeny (n = 22), using RAxML as above.

Analysis of in vitro-generated recombinants

A Gubbins alignment of in vitro-generated recombinants and all recipient/donor strains was performed. The alignment was then used for subsequent analysis with the Geneious Prime software (v.2023.1.1; Biomatters, Auckland, New Zealand). Specifically, we extracted individual recombinants and their associated recipient strains and identified recombinations using the "Find Variation/SNPs" function. All areas with three or more SNPs were identified and marked as motifs. Recombinant regions were then curated following comparison of recipient, donor and recombinant using MAFFT alignment, followed by the "Find Variation/SNPs" function and curation of identified motifs. Areas with high variation between all three strains were excluded from downstream recombinant analysis. Genes at the predicted ends of the recombinations were identified following alignment against recombinant strain 217.1, which was annotated using Prokka [22].

Results

All genomes with under 15x mean read coverage were excluded (n = 6), providing n = 35 genomes to analyse. The phylogeny (Fig. 1) closely agrees with that already published [10], despite the use of alternative analysis tools. Bayesian clustering was used to confirm the separation into two clades. Recombination analysis of the collection of field strains also identified highly impacted loci (Fig. 1), including the Tet-island, as previously described. Bayesian dating of the ancestor of the species was attempted, giving results unfortunately with high uncertainty (see Methods). The most recent common ancestor appears to have emerged several thousand years before the common era; further data are required to improve upon this analysis.
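For orientation, the open-source tail of this pipeline can be scripted as below. The CLC Genomics Workbench steps are GUI-based and are not reproduced; the command-line flags follow the documented interfaces of Gubbins and raxml-ng, but versions differ, so the exact invocations should be treated as assumptions rather than the authors' verbatim commands, and the file names are hypothetical.

import subprocess

# Recombination detection on the whole-genome MSA with Gubbins,
# using the 5 iterations stated above.
subprocess.run(
    ["run_gubbins.py", "--iterations", "5", "--prefix", "csuis",
     "csuis_whole_genome_alignment.fasta"],
    check=True,
)

# Maximum-likelihood phylogeny of the reconstructed inv alignment with
# 100 bootstrap replicates (the study used the raxml-ng web service with
# default settings; GTR+G is assumed here as a generic model).
subprocess.run(
    ["raxml-ng", "--all", "--msa", "inv_alignment.fasta",
     "--model", "GTR+G", "--bs-trees", "100"],
    check=True,
)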
Comparisons of phylogenies of whole genomes, insertion site inv and Tet-island of field strains

The complete inv gene was reconstructed from genomes containing disrupted versions of the gene, and a phylogeny of inv was created to investigate the degree of agreement with a whole genome phylogeny with recombination events removed (Fig. 2A). A high degree of clade mixing is clear in the inv phylogeny, which does not reflect the whole genome phylogeny and suggests recombination of inv. A comparison of the phylogeny of inv with that of the Tet-islands from Seth-Smith et al. [10] shows a much higher congruence (Fig. 2B), strongly suggesting that the inv gene is linked to the Tet-island during Tet-island movement, and not that it serves purely as an integration site. This is particularly clear between clades and speaks for recombination as a means of Tet-island transfer.

Analysis of recombination sites in field and in vitro recombinant isolates

A phylogeny of in vitro recombinant genomes from previously plaque-purified strains generated in Tet-island transfer experiments [14] (n = 63, all over 25x mean read coverage), including the six parental strains, was generated to compare against that of the field strains (Fig. 3). The recombinant strains cluster phylogenetically around the recipient parental strains, all of which are from Clade 2 (SWA-94 is 10-26b; SWA-111 is 1-28a; and S45), compared to the Clade 1 Tet-donor strains (SWA-141 is 4-29b; SWA-107 is 5-27b; and SWA-110 is 1-28b) (Fig. 1). The distribution of recombinations within the in vitro-derived recombinants is clearly different from that within the field strains. The in vitro-derived recombinants were selected on the basis of having acquired the Tet-island in a rifamycin/rifampicin resistant background. Hence, the largest peak of recombination is across the Tet-island, with the downstream recombination site extending further relative to the Tet-island compared to the upstream recombination site. The predicted mean recombination size across all recombinations is 10.6 kb, ranging between 4 bp and 135 kb (Table S2), which stands in contrast to the comparison of the field strains, where the average recombination size was predicted to be much shorter, at 1.1 kb (2 bp–21.2 kb, Table S3). We could not identify consistent upstream or downstream recombination junctions involving the Tet-island of the in vitro-derived recombinants, suggesting that a general homologous, rather than site-specific, recombination mechanism is responsible. Further recombinations across the genome are also apparent, despite not having been selected for, showing the promiscuity of recombination under permissive conditions.
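The congruence statements above are qualitative; one way to quantify how well two phylogenies agree is a Robinson-Foulds (symmetric difference) distance, sketched below with dendropy. This is an illustration rather than the authors' method, and the Newick file names are hypothetical exports of the RAxML trees.

import dendropy
from dendropy.calculate import treecompare

# Both trees must share a taxon namespace for bipartition comparison.
tns = dendropy.TaxonNamespace()
t_genome = dendropy.Tree.get(path="whole_genome.nwk", schema="newick",
                             taxon_namespace=tns)
t_inv = dendropy.Tree.get(path="inv.nwk", schema="newick",
                          taxon_namespace=tns)

# Lower values mean more shared bipartitions, i.e. higher congruence;
# comparing inv vs. whole genome and inv vs. Tet-island trees would
# quantify the pattern shown in Fig. 2.
rf = treecompare.symmetric_difference(t_genome, t_inv)
print("RF distance (whole genome vs. inv):", rf)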
Recombinations in in vitro-generated recombinants reveal high variability in selected and non-selected areas

Predicted recombinations within in vitro-generated strains were analysed in detail by finding the variations between individual recombinants and their respective parental strains using Geneious Prime (Table S4). Overall, we identified a total of 237 recombinations with a mean size of 26.6 kb (range: 20 bp to 492 kb). We detected an average of 3.8 recombinations per in vitro-generated strain (n = 63), all of which included one recombination involving the Tet-island; recombinations across this region were notably longer than regions that were not under specific selective pressure, with mean sizes of 9.5 kb (20 bp to 193 kb) and 73.8 kb (15.2–492 kb) for recombinations without and with the Tet-island, respectively. This confirms the results of our previous study, which contained a smaller sample size [14]. These findings are also in line with a co-culture study co-infecting C. muridarum and a tetracycline-resistant C. suis, which identified a 98 kb recombinant region involving the Tet-island [24]. In co-infection experiments that did not include the Tet-island and the corresponding tetracycline selection, average recombinant regions were variable. One study concerning C. muridarum / C. trachomatis produced recombinant region sizes similar to those in our study, ranging between 558 bp and 124 kb [25], while other studies investigating C. trachomatis yielded longer regions ranging between 200 and 400 kb [26,27].

We then investigated recombinations involving the Tet-island relative to their distance to inv (Table S5). There was no significant difference in the recombination extension length upstream and downstream (p = 0.7). However, while the upstream recombination site is an average of 17.1 kb upstream of the proximal inv fragment (56/63) and never further upstream than 53 kb, the downstream recombination site ranges between 0 kb and 440 kb from the distal fragment of inv. The reason for this finding is unclear. In Chlamydia, both homologous recombination pathways, double-strand break-dependent RecBCD and single-strand break-dependent RecFOR, have been identified [13]. One unknown component is the identity of Chi sites, the sites where the double-stranded degradation by RecBCD ends and single-strand degradation begins for the binding of RecA [13]. It is possible that there are one or more Chi sites close to the upstream rrn operon, resulting in more constrained recombination sites compared to recombination downstream of the Tet-island. However, we do not have sufficient data to support this hypothesis.

Limitations

Sampling and whole genome sequencing of further Tet-island-carrying and tetracycline-sensitive isolates of C. suis would better inform all aspects of this study. In the current study, all in vitro recombinations were performed with a Tet donor in Clade 1 and a recipient in Clade 2. To further investigate interclade recombination, reciprocal experiments with donor and recipient strains drawn from both clades would be needed.
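The length summaries and the upstream/downstream comparison above can be reproduced schematically as follows. The numbers below are hypothetical placeholders, and Mann-Whitney U is an assumption: the paper does not name the test behind p = 0.7.

import numpy as np
from scipy import stats

def length_summary(starts, ends):
    # Recombination block lengths from start/end coordinates (bp), e.g.
    # parsed from the Gubbins GFF output.
    lengths = np.asarray(ends) - np.asarray(starts)
    return lengths.mean(), lengths.min(), lengths.max()

# Hypothetical per-recombinant extension lengths (bp) beyond the proximal
# (upstream) and distal (downstream) inv fragments.
upstream = np.array([17100, 9500, 26000, 12000])
downstream = np.array([500, 44000, 150000, 2000])

u, p = stats.mannwhitneyu(upstream, downstream, alternative="two-sided")
print(f"upstream vs. downstream extension: U = {u:.0f}, p = {p:.2f}")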
Conclusion

In this study, by comparing phylogenies of genomes and genomic elements, we show that the inv gene, as the location of the important Tet-island in C. suis, appears to be transported with the Tet-island during recombination events, rather than acting as an integration site. Additionally, the distribution of the Tet-island across the phylogeny does not lend itself to the inference of simple vertical inheritance, and recombination must be invoked to explain the pattern most parsimoniously. Some recombination events appear to have occurred between Clade 1 and Clade 2 (interclade). A high proportion of sequenced C. suis field strains carry the Tet-island (Fig. 1), which may represent bias in the strains selected for sequencing. Sadly, historical isolates were not collected, and the history of the insertion of the Tet-island remains unknown. We add further evidence of the high rates of recombination in C. suis, occurring with high frequency between the two main clades and readily transferring the Tet-island.

Fig. 1 Phylogeny of C. suis field strains (n = 35) and identified recombinations. The phylogenies with recombinations removed (left) are coloured with Clade 1 in blue and Clade 2 in orange (FastBAPS). Names in bold define strains that carry the Tet-island. The sample names are matched to the genome length tracks (right), where red bars indicate recombinations identified in more than one genome and blue bars show those identified only in single genomes. The genome with annotated coding sequences (CDSs in blue, both forward and reverse reading frames) of strain 8-29b is shown above these tracks. Below the tracks is a plot of recombination density within the phylogenies, with peaks annotated according to the relevant genome regions, CDSs and predicted encoded proteins. Data around the Tet-island illustrate that not all strains carry the island; inv fragments at both sides are commonly predicted to be involved in recombinations. The data were generated using Gubbins [23] and visualised using Phandango [17].

Fig. 2 Comparison of phylogenies. The colour scale of the branches shows the weighting of the similarities. Names in bold define strains that carry the Tet-island. A: Whole genome (left) vs. inv (right) phylogenies. B: inv phylogeny (left) vs. Tet-island phylogeny (right; [10]). Leaves are coloured with Clade 1 in blue and Clade 2 in orange.
Fig. 3 Phylogeny of C. suis in vitro-generated recombinant strains (n = 63), including Tet-carrying (n = 3) and derived resistant parental (n = 3) strains (bold), and identified recombinations. The phylogenies with recombinations removed (left) have bold names representing parental strains. The sample names are matched to the genome length tracks (right), where red bars indicate recombinations identified in more than one genome and blue bars show those identified only in single genomes. Where recombinations are identified in more than one strain (red recombination blocks), this is either due to recombinations identified between the parent strains (Figure S1) or to duplicate picking of plaques from the same parent combination. The genome with annotated coding sequences (CDSs in blue, both forward and reverse reading frames) of strain 8-29b is shown above these tracks. Below the tracks is a plot of recombination density within the phylogenies, with peaks annotated according to the relevant genome regions, CDSs and predicted encoded proteins. The data were generated using Gubbins [23] and visualised using Phandango [17].
A new role for human dyskerin in vesicular trafficking

Dyskerin is an essential, conserved, multifunctional protein found in the nucleolus, whose loss of function causes the rare genetic diseases X-linked dyskeratosis congenita and Hoyeraal-Hreidarsson syndrome. To further investigate the wide range of dyskerin's biological roles, we set up stable cell lines able to trigger inducible protein knockdown and allow a detailed analysis of the cascade of events occurring within a short time frame. We report that dyskerin depletion quickly induces cytoskeleton remodeling and significant alterations in endocytic Ras-related protein Rab-5A/Rab11 trafficking. These effects arise in different cell lines well before the onset of telomere shortening, which is widely considered the main cause of dyskerin-related diseases. Given that vesicular trafficking affects many homeostatic and differentiative processes, these findings add novel insights into the molecular mechanisms underlying the pleiotropic manifestation of the dyskerin loss-of-function phenotype.

Vesicular transport is a fundamental way of communication between the cell and its microenvironment and regulates many vital cellular processes, including the internalization of several types of molecules, nutrient uptake, membrane protein turnover, cell adhesion and migration properties, and receptor signaling [1]. A variety of molecules shuttle in and out of cells via the endocytic/exocytic pathways. Along the endocytic pathway, the cargo is internalized by either clathrin-dependent or clathrin-independent pathways and then routed to sorting endosomes, to be subsequently addressed to lysosomes for degradation or returned to the membrane via either the fast or the late-recycling routes. The Ras-related in brain (Rab) small G proteins are master regulators of vesicular trafficking [2,3]. In particular, Ras-related protein Rab-5A (Rab5)-Rab4 sorting endosomes have a key role in mediating the fast recycling to the membrane, while Rab11 endosomes mediate the slow recycling, moving from the endocytic recycling compartment (ERC), localized near the centrosomal microtubule-organizing center (MTOC), to the cell surface [4,5]. Dysregulation of vesicle trafficking can underlie diverse aspects of cancer cell biology, including loss of cell polarity, transformation, invasion, and metastasis, so that aberrant expression/regulation of Rab guanosine triphosphatases (GTPases), including Rab5 [6] and Rab11 [7], has been associated with tumorigenesis [8].
Here, we report that the ubiquitous nucleolar protein dyskerin, a component of the small nucleolar ribonucleoprotein complexes (snoRNPs), plays an unexpected role in the regulation of vesicle trafficking. Dyskerin, encoded by the human DKC1 gene, is a multifunctional, highly conserved protein that participates in diverse nuclear ribonucleoprotein complexes, such as those of active telomerase, type H/ACA snoRNPs, and small Cajal ribonucleoproteins [9]. All these complexes are involved in a variety of crucial biological functions that include safeguarding telomere integrity, ribosome biogenesis, and pseudouridylation of cellular RNAs [10-12]. In addition, dyskerin has been shown to act as a cotranscriptional factor of key pluripotency-related genes in mammalian embryonic stem cells [13]. Considering the variety of these biological functions, it is not surprising that DKC1 hypomorphic mutations cause the hereditary disorders known, respectively, as X-linked dyskeratosis congenita (X-DC) and Hoyeraal-Hreidarsson (HH) syndrome [14]. The main manifestation of these diseases is a triad of mucocutaneous features accompanied by chronic bone marrow failure, telomere instability, premature aging, and increased susceptibility to various types of cancers [15,16].

Although many authors consider X-DC and HH mainly as telomeropathies, a large body of data supports the alternative view that the primary cause of these diseases can be associated with telomerase-independent roles of dyskerin. For example, Dkc1m hypomorphic mice show symptoms of the disease before telomere shortening is detectable [15]. Similarly, in an X-DC zebrafish model, changes in telomerase activity were undetectable at early stages, supporting the view that telomerase deficiency is not responsible for the onset of X-DC pathogenesis [17]. In addition, although Drosophila lacks a canonical telomerase, Drosophila dyskerin is essential for fly viability and its depletion causes a large variety of developmental defects [18-21]. Finally, snoRNPs have recently gained an important role in several pathologies, including cancer [22,23].

To investigate in more detail the primary effects triggered by the dyskerin loss-of-function phenotype, we generated colon carcinoma (RKO) and osteosarcoma (U2OS) stable cell lines expressing a short hairpin RNA (shRNA) able to trigger inducible silencing of the DKC1 gene. These cellular systems enabled us to analyze in detail the cascade of events occurring within a short time frame (1-3 cell doublings) immediately following dyskerin knockdown, thus largely preceding the time needed for telomere shortening. With this approach, we found that dyskerin downregulation quickly causes cytoskeletal remodeling and dysregulation of Rab5-Rab11 vesicular trafficking, revealing additional and unexpected mechanisms by which this protein can affect cell homeostasis.

Cell culture and generation of shRNA-expressing stable cell lines

RKO and U2OS human cell lines were obtained from ATCC (Manassas, VA, USA) and cultured as previously described [24]. To generate stable cell lines, cells were transfected with 3 µg pLKO-Tet-On-shDKC1 and 12 µL of Metafectene Pro (Biontex Laboratories GmbH, München, Germany), following the manufacturer's instructions. After 20 days of puromycin selection (750 ng·mL⁻¹; Sigma-Aldrich), independent clones were collected and maintained in media supplemented with Tet-free FBS (Clontech Laboratories Inc, Mountain View, CA, USA) and puromycin.
In the absence of tetracycline (Tet) or its synthetic derivative doxycycline (Dox), shRNA-DKC1 expression is repressed by the binding of the constitutively expressed TetR protein to the Tet-responsive element; Tet/Dox addition (400 ng·mL⁻¹) to the medium triggers shRNA expression, resulting in targeted DKC1 silencing [25].

Cell proliferation assays and cytoskeletal analyses

To measure cell vitality and proliferation, an equal number of RKO Dox-treated and untreated cells (3 × 10⁵ per dish) were seeded in triplicate in 100-mm plates, harvested every 24 h up to 4 days, stained with 0.5% trypan blue (Euroclone spa, Milan, Italy), and counted in a Bürker chamber. For the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay, Dox-treated and untreated cells were seeded in triplicate at a density of 5 × 10³ cells per well in 96-well plates. The day after, the culture medium was aspirated and, after a wash in PBS, replaced with 100 µL of 0.5 mg·mL⁻¹ MTT solution per well. After a 4-h incubation at 37 °C in a 5% CO2 incubator, the medium was removed and the precipitated formazan was dissolved in 100 µL of acidic isopropanol. The absorbance was quantified by spectrophotometry at 570 nm using the microplate reader Victor3 Multilabel Counter (Perkin Elmer, Waltham, MA, USA).

FACS analysis

Control and Dox-treated cells were trypsinized at the indicated times, counted, washed three times in PBS, and fixed in ice-cold methanol at −20 °C overnight. Cells were then washed twice with cold PBS, counted, and rehydrated at a density of 10⁶ cells·mL⁻¹ by incubation in PBS for 30 min on ice. Subsequently, cells were suspended in a hypotonic solution (0.1% Na-citrate, 50 µg·mL⁻¹ RNase, 50 µg·mL⁻¹ propidium iodide) and incubated for 30 min in the dark at room temperature. The DNA content was measured using a fluorescence-activated cell sorting (FACS) Calibur flow cytometer (Becton Dickinson, Franklin Lakes, NJ, USA), and data were analyzed using the CELL QUEST PRO and MODFIT3.0 software packages (Becton Dickinson).

RNA and protein analysis

Total RNA extraction, preparation of first-strand cDNA, and quantitative real-time reverse transcription (qRT)-PCR were carried out as previously described [26]. Oligonucleotide sequences were as follows: Western blot analysis was conducted according to [27]. The antibodies used are reported in Table S1.

WGA and transferrin uptake assays, immunofluorescence analysis

For wheat germ agglutinin (WGA) live staining, cells were incubated at room temperature for 10 min with WGA/Texas red conjugate (W21405; Thermo Fisher Scientific, Waltham, MA, USA) and fixed with 3.7% paraformaldehyde for 10 min. For confocal immunofluorescence analysis, transfected RKO and U2OS cells were preliminarily seeded on glass coverslips in six-well plates and treated for 72 h with Dox. Dox-treated or untreated cells were then fixed with 3.7% paraformaldehyde for 10 min, permeabilized in 0.3% Triton X-100 for 15 min, and blocked in PBS supplemented with 3% BSA for 30 min. After each step, the cells were rinsed in PBS, then incubated for 1 h at room temperature with primary antibodies and for 30 min at room temperature with secondary antibodies (listed in Table S1). The uptake assay of Alexa Fluor 488-conjugated transferrin (T13342; Thermo Fisher Scientific) was conducted according to [28]. Upon the indicated treatments, coverslips were mounted on glass slides with Hoechst solution and examined under the fluorescence confocal microscope Zeiss LSM 700 (Zeiss, Oberkochen, Germany).
Generation and validation of stable cellular systems for DKC1-inducible knockdown

To perform a careful analysis of the early events triggered by dyskerin depletion, we generated an inducible DKC1 silencing lentiviral vector containing all the components necessary to express DKC1 shRNA upon Tet/Dox addition (see Materials and methods). The silencing efficiency of the vector, named pLKO-Tet-On-shDKC1, was first tested in RKO human colon cancer cells, which are diploid, poorly differentiated, and express wild-type p53, APC, and β-catenin [29]. Independent stable clones were isolated and DKC1 knockdown was evaluated at both the mRNA and protein levels. As shown in Fig. 1A, the system was quickly responsive, as the dyskerin amount was significantly reduced after only 24 h from Dox induction and further decreased subsequently. In strict agreement, qRT-PCR experiments showed a parallel downregulation of dyskerin mRNA levels. Confocal microscopic analysis confirmed a strong reduction in dyskerin accumulation in the nucleoli of the silenced cells (Fig. 1B), further validating the silencing efficiency. Moreover, the silencing conditions did not perturb the nucleolar accumulation of fibrillarin, a highly conserved nucleolar protein associated with box C/D small nucleolar RNAs (snoRNAs) [30], ruling out the occurrence of obvious nucleolar alterations (Fig. 1B). Altogether, these observations showed that this inducible system was able to generate stable cellular clones useful to define the early consequences of dyskerin depletion. A set of different approaches was then applied to independent clones.

First, we checked the effects of DKC1 gene silencing on cell proliferation. Dyskerin depletion has in fact generally been reported to perturb this parameter [31,32], although the specific phase of the cell cycle at which cells accumulate diverged among different cell types. Consistent with previous findings, both a direct count of viable cells and MTT measurements indicated that cell proliferation progressively decreased upon Dox-induced DKC1 silencing (Fig. 1C). We confirmed that the decrease in cell number was not due to apoptosis, as poly(ADP-ribose) polymerase-1 (PARP-1) and caspase 3, two known apoptotic markers, were not activated upon gene silencing. Both markers were instead efficiently cleaved upon treatment with the proapoptotic topoisomerase II inhibitor doxorubicin (Dxr), thereby providing a positive control (Fig. 1D). FACS analysis revealed an increase in the G1 percentage of the silenced cells (42.4% compared to 33.5% of control cells; Fig. 1E), accompanied by a slight reduction in the S-phase (42.8% compared to 46.9% of the control cells) and G2-phase percentages (14.4% compared to 17.4% of control cells). Dyskerin depletion was reported to induce a block at G1 also in yeast [33] and, more recently, in neuroblastoma cells [34]; in this latter case, the proliferative arrest was observed to be unrelated to human telomerase RNA levels or to telomerase activity [34,35]. To further investigate the effect of dyskerin depletion on the cell cycle, we checked the expression of p21, a protein that plays a key role in G1/S cell cycle arrest [36]. In keeping with the FACS results, p21 accumulation rose significantly upon dyskerin depletion (Fig. 1F), confirming that G1/S progression was halted. Noticeably, this effect occurred very early upon dyskerin knockdown and thus had no correlation with telomere instability.
Dyskerin depletion affects Rab5- and Rab11-mediated recycling

Morphological analysis of RKO-silenced cells showed that, within only 24 h of silencing induction, they lost their typical epithelial-like structure, tended to assume a round-shaped morphology, and appeared highly refractile when observed by phase-contrast microscopy (Fig. 2A). In addition, about 10% of these cells detached from the plate. However, when recovered from the medium, they remained unstained by the trypan blue dye and, after transfer to a Dox-free medium, regained their substratum-adhesive property (data not shown). These observations suggested that this feature was reversible and dependent on DKC1 gene activity, and highlighted that dyskerin depletion can promote a cytoskeletal remodeling resulting in a quick transition from an adherent state to a suspended one. Thus, we first analyzed the microtubule and actin networks by immunofluorescence staining of β-tubulin and F-actin. Indeed, analysis of confocal images indicated that, upon dyskerin depletion, RKO cells are characterized by reduced anti-β-tubulin immunoreactivity and the presence of less stretched and oriented microtubule filaments. In addition, phalloidin staining revealed that the actin meshwork appeared thinner and displayed a reduced number of filopodia in the silenced cells (Fig. 2B). These observations fully support the view that the cytoskeletal scaffolding underwent a prompt rearrangement.

As microtubules and actin filaments, together with motor proteins, support vesicle movements [1], we wondered whether the cytoskeletal remodeling had any impact on vesicular trafficking. To gain information on this aspect, we first followed the in vivo internalization of Texas red-conjugated WGA, a carbohydrate-binding lectin that recognizes sialic acid and N-acetylglucosaminyl sugar residues on the plasma membrane. WGA is widely used to follow receptor-mediated membrane transport and vesicular trafficking; its uptake is an active, energy-dependent process mediated by both actin filaments and microtubules [37]. As shown in Fig. 3A, WGA staining marked more heavily both the peripheral membrane and the internal vesicles of dyskerin-depleted cells, suggesting an increase in the receptor-mediated endocytic process. To further explore this feature, we analyzed by confocal microscopy the distribution of vesicles positive for Rab5, a small GTPase that marks the early Rab4-Rab5 endosomes and mediates the fast recycling to the membrane [2]. Indeed, Rab5 staining increased significantly in the silenced cells (Fig. 3A), supporting the conclusion that dyskerin depletion can enhance endocytosis and the fast recycling route.

To check the slow recycling of endocytosed proteins, by which the cargo traverses the endosomal recycling compartment (ERC) and is addressed back to the cell periphery, we looked at the Rab11 late-recycling endosomes [5]. Confocal microscopic analysis of the distribution of Rab11 vesicles showed that in the control cells they mainly concentrated at the ERC and at the cortical periphery, where they are particularly enriched at cell-cell contacts (Fig. 3A). In contrast, in dyskerin-depleted cells, the density of Rab11 endosomes appeared drastically reduced, with these vesicles essentially clustered at the pericentrosomal ERC, poorly diffused throughout the cytoplasm, and nearly absent at the cell periphery (Fig. 3A).
The loss of Rab11 cortical localization indicated that the slow recycling to the membrane was drastically hampered in the silenced cells, implying a possible dysregulation of this important process [38]. Of note, western blot analyses confirmed that Rab5 and Ras-related protein Rab-11A (Rab11A) expression levels are differentially influenced by dyskerin depletion (Fig. 3B,C). As Rab11 vesicles play a regulatory role also in the exocytic process, modulating the transport of secretory vesicles to the plasma membrane [39], we labeled cells with CD63, a marker of both late endosomes/lysosomes and exocytic multivesicular bodies (MVBs) [40]. However, no significant alteration in the density/distribution of CD63 vesicles, or in their mobilization toward the cell periphery, was observed (Fig. 3A), suggesting that dyskerin depletion specifically affected the dynamics of Rab11-mediated transport from the ERC to the membrane. To ensure that Dox treatment could not elicit, by itself, any of the above-described effects, we treated untransfected RKO cells with 400 ng·mL⁻¹ Dox for 72 h. As shown in Fig. S1, Dox treatment per se did not induce changes in cell shape, cytoskeletal remodeling, or alterations in the density/localization of Rab5/Rab11 endosomes, confirming the specificity of the observed phenotypes.

Rab11 mislocalization in dyskerin-depleted cells is not due to the disruption of cytoskeletal scaffolding

To rule out the possibility that Rab11 mislocalization could be due to cytoskeletal damage caused by dyskerin depletion, we pretreated cells with nocodazole or latrunculin A, two cytoskeleton-disrupting drugs, and then looked at the intracellular distribution of Rab11 endosomes. Treatment either with nocodazole, which rapidly depolymerizes microtubules and the MTOC [41], or with latrunculin A, which depolymerizes F-actin [42,43], completely dispersed Rab11 vesicles from the pericentrosomal region, promoting their cytosolic diffusion in both control and silenced cells (Fig. 4). Cytosolic dispersion appeared more pronounced upon latrunculin treatment, suggesting that in RKO cells the Rab11 vesicles might travel preferentially along the actin network. However, neither of these treatments mimicked the Rab11 mislocalization observed upon dyskerin depletion. On the contrary, both drugs counteracted the accumulation of Rab11 vesicles at the ERC, ruling out the possibility that this specific phenotype could be attributed to disruption of the cytoskeletal scaffolding. Collectively, the above results lead to the conclusion that dyskerin depletion can induce alterations in cell shape, cytoskeletal scaffolding, and vesicular trafficking within only 72 h from silencing induction. As none of these effects occurred upon Dox exposure of untransfected RKO control cells (Fig. S1), it is reasonable to conclude that they are specifically triggered by dyskerin depletion.

Cytoskeletal remodeling and alteration of vesicular trafficking are telomerase-independent effects of dyskerin depletion

To exclude cell line-specific effects and to firmly establish that the above-described effects were independent of telomere instability, we extended our analyses to the human osteosarcoma epithelial (U2OS) cell line. U2OS cells were selected because, although telomerase-negative [32], they elongate telomeres efficiently by the alternative lengthening of telomeres (ALT) pathway [44]. As shown in Fig. 5, the pLKO-Tet-On-shDKC1 vector caused very efficient dyskerin downregulation also in U2OS cells, confirming its general applicability.
Remarkably, dyskerin depletion was again accompanied by an abrupt change in cell morphology, with silenced U2OS cells quickly assuming a more stretched shape (Fig. 5). Next, we checked the distribution of Rab5 and Rab11 endosomes. As shown in Fig. 6, Rab5 trafficking significantly increased upon dyskerin depletion, confirming the tendency toward more active endocytosis and fast recycling. The intracellular distribution of Rab11 slow-recycling endosomes was also significantly altered, which was even more evident in these cells because of their large size. In fact, in the control cells, the Rab11 vesicles heavily marked both the ERC and the cell periphery and were amply dispersed throughout the cytosol, where they closely matched the microtubule network (Fig. 6). In contrast, upon dyskerin depletion, the density of these late endosomes was strongly reduced and they appeared predominantly clustered at the ERC, poorly dispersed in the cytosol, and dramatically reduced at the cortical region (Fig. 6).

To functionally analyze the endocytic and recycling processes, we next followed the internalization of transferrin. This protein is internalized by a receptor-mediated process [45] and is recycled through the ERC by Rab11 vesicles [46]. In these experiments, we incubated control and silenced cells with a 15-min pulse of Alexa Fluor 488-conjugated transferrin at 4 °C, followed by a chase of 15 and 60 min at 37 °C. As shown in Fig. 7, transferrin was very efficiently internalized also in dyskerin-depleted cells, where, immediately following its uptake (15-min chase), it assumed an intracellular distribution very similar to that observed in the control cells. In fact, in both silenced and control cells, transferrin did not colocalize with Rab11 at these early times. However, after 60 min of chase, in the control cells a large amount of transferrin colocalized with Rab11 vesicles (Pearson's coefficient 0.8), which appeared amply dispersed throughout the cytoplasm. Conversely, in the silenced cells, transferrin and Rab11 signals colocalized essentially only at the ERC (Fig. 7), confirming the occurrence of defective late recycling. Hence, the results of this functional assay further supported the conclusion that alteration of Rab11 receptor-mediated recycling occurs in diverse cell lines as an early effect of dyskerin depletion.

Discussion

To investigate more deeply the range of dyskerin's biological roles, we set up cellular systems able to trigger inducible DKC1 gene silencing and allow a detailed, short-term analysis of the events immediately following protein knockdown. Here, we report that dyskerin suppression rapidly induces a cytoskeletal remodeling that triggers changes in cell shape well before the eventual occurrence of telomere erosion. Specifically, the cytoskeletal rearrangements induced by dyskerin depletion weakened cell adhesion to the substrate in both RKO and U2OS cells, although this effect was less intense in the U2OS line, whose cells have thicker stress fibers and more robust focal adhesion complexes [47]. Noticeably, dyskerin downregulation was found to induce loss of cell-substratum adhesion also in prostate carcinoma [31] and in neuroblastoma cells [32], although this aspect has so far remained poorly investigated. Indeed, it is reasonable to suppose that anchorage weakening is a general feature that contributes to the growth and proliferative impairment that characterizes various cell types upon dyskerin depletion.
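For the colocalization quantification used in the transferrin experiments above, a Pearson coefficient between two registered fluorescence channels can be computed as in the sketch below. The masking strategy and the analysis tool actually used in the study are not specified, so this is a generic illustration of the reported metric (~0.8 in control cells at 60 min), not the authors' exact procedure.

import numpy as np

def pearson_coloc(ch1, ch2, background=0):
    # Pearson correlation over pixels above background in either channel
    # (e.g. transferrin vs. Rab11 images supplied as 2D arrays).
    mask = (ch1 > background) | (ch2 > background)
    a = ch1[mask].astype(float) - ch1[mask].mean()
    b = ch2[mask].astype(float) - ch2[mask].mean()
    return np.sum(a * b) / np.sqrt(np.sum(a ** 2) * np.sum(b ** 2))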
The reshuffling of the cytoskeleton is linked to many dynamic cellular processes, such as cell division, motility, endocytosis, and vesicular trafficking. Here, we focused our analysis on intracellular transport, mainly with respect to endocytic and recycling processes. We found that fast endocytic recycling was increased in the silenced cells, possibly accelerating the return of specific cargos to the membrane. At the same time, a dramatic reduction in Rab11 late-recycling endosomes was observed, with these vesicles essentially clustered at the pericentrosomal ERC and nearly absent at the cell cortex. Traveling of Rab11 vesicles from the ERC to the cortex requires interaction with a number of effectors, including the Rab11 family-interacting proteins (FIPs), which mediate association with microtubule- or actin-based molecular motors and enable both movement and correct intracellular positioning of these vesicles [4]. The most obvious explanation for Rab11 accumulation at the ERC may thus rely on a transport defect. However, this phenotype has similarly been observed upon depletion of the Rab11 GAP Evi5 [48], suggesting that deregulated Rab11 activation can also favor trapping of these vesicles at the ERC, strongly reducing Rab11-dependent slow recycling. This finding further highlights the complexity of dyskerin's cellular functions, although the specific mechanisms by which this protein can influence cytoskeletal and vesicular dynamics remain elusive. Indeed, several telomerase-independent roles of dyskerin may be involved. Dyskerin, by associating with other core proteins and H/ACA snoRNAs, can in fact participate in diverse ribonucleoprotein complexes involved in rRNA processing and site-specific pseudouridylation of rRNA and snRNAs, as well as of mRNAs [49]. In turn, reduction in rRNA pseudouridylation affects ribosome translation fidelity [50] and internal ribosome entry site-dependent translation efficiency [51]. Thus, it is conceivable that these functions could contribute to cytoskeletal rearrangement and alteration of vesicular trafficking. For example, dyskerin depletion may deregulate the expression of mRNAs involved in these processes, or affect these cellular dynamics in association with specific H/ACA snoRNAs. Intriguingly, this hypothesis is favored by recent studies that established an emerging role of snoRNAs in specific metabolic functions. A striking example is that of U17, a dyskerin-binding snoRNA involved in intracellular cholesterol trafficking [52]. Of note, proper sorting and trafficking within endosomal vesicles is necessary to maintain cellular homeostasis and to perform both ubiquitous and cell type-specific functions. In fact, perturbation of this traffic widely affects cargo destination and membrane properties, thereby potentially altering cell-cell and cell-extracellular matrix interactive communication and related differentiation events. Consistent with these premises, dyskerin depletion may cause both cell-autonomous and non-autonomous effects, as observed for the developmental defects and alterations of long-range signaling occurring in Drosophila upon in vivo silencing of the DKC1 orthologue [19][20][21]. Finally, the endocytic pathway intersects other intracellular transport routes, such as the secretory pathway and the retrograde transport of selected cargo from the ERC to the trans-Golgi network (TGN).
Fig. 4. Cells were costained with Rab11A (green) and β-tubulin (red) antibodies, or with Rab11A antibody (green) and phalloidin (red). Nuclei were counterstained with 4′,6-diamidino-2-phenylindole (DAPI; blue). All signals are shown in gray in the left insets. Images of control and silenced cells were acquired under the same conditions; the sum of five central z-stack focal planes is shown. Note that silenced cells are characterized by a dramatic reduction in Rab11A labeling at the cell periphery, accompanied by a pericentrosomal accumulation at the ERC; this phenotype is not mimicked by the cytoskeletal disruption induced by either nocodazole or latrunculin A treatment, as both promote diffusion of Rab11 vesicles.

While the relationship with the TGN pathway remains to be investigated, the unaltered intracellular distribution of CD63 exocytic vesicles suggests that their specific trafficking is not affected. Collectively, our results indicate that DKC1 silencing can perturb several aspects of cell homeostasis independently of telomere instability. The observation that dyskerin can orchestrate vesicular traffic not only adds further information on the multiple biological roles played by this protein, but also gives novel insights into the comprehension of the molecular mechanisms underlying the congenital diseases triggered by dyskerin depletion.

Supporting information

Additional Supporting Information may be found online in the supporting information tab for this article:

Fig. S1. Dox treatment per se did not elicit morphological changes or alterations in Rab5/Rab11 trafficking in RKO and U2OS wt cells.

Table S1. List of antibodies used in immunofluorescence (IF) and/or western blotting (WB) analyses.
2018-04-03T04:09:58.220Z
2017-09-01T00:00:00.000
{ "year": 2017, "sha1": "b041645a88d7a8d87131da6a7ab26a399bde4062", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1002/2211-5463.12307", "oa_status": "GOLD", "pdf_src": "Wiley", "pdf_hash": "b041645a88d7a8d87131da6a7ab26a399bde4062", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
29103539
pes2o/s2orc
v3-fos-license
Gluon Saturation and Proton-Antiproton Cross Sections

We study proton-antiproton cross sections in the framework of an updated minijet eikonal model. We propose a different scheme for fixing the parameters, in which we make use of the measured minijet cross section. We compare the results obtained with the GRV98, MRST98, CTEQ6-L and KLN gluon distributions. The latter includes gluon saturation effects. We conclude that in the very high energy regime the use of the KLN distribution significantly improves the behavior of the cross sections. However, this improvement is due to the shape of the KLN gluon density and has little to do with saturation effects.

I. INTRODUCTION

The growth of hadronic total cross sections was theoretically predicted many years ago [1] and observed in many experiments at CERN, Fermilab [2,3] and, more indirectly, in cosmic rays. Computing total cross sections as a function of the collision energy is one of the great unsolved problems of QCD. Unlike processes which are computed in perturbation theory, calculating total hadronic cross sections appears to be an intrinsically non-perturbative procedure. In the absence of a pure QCD description, phenomenological models have been used to compare experimental data with theoretical schemes. For a long time the main theoretical approaches to total cross sections were the Regge-type models and the QCD-inspired models. In the latter, the (too fast!) energy rise of the total cross sections is driven by the increasing number of low-x gluon-gluon collisions. These models [4][5][6] need to be embedded in an eikonal formalism to soften the violent energy rise of the mini-jet cross sections. Of course, this is only one among several unitarization methods (for a discussion see, for example, [7]). Some other QCD-inspired models based on non-perturbative physics, such as the stochastic vacuum model, may lead to energy-independent cross sections [8]. Even after eikonalisation the predicted energy rise is stronger than the gentle one observed experimentally. Attempts to further tame this rise were advanced by the mini-jet supporters in [9], invoking the increasing soft gluon emission by valence quarks in hadron collisions at increasing energies. Here we discuss another possible mechanism to be included in the eikonal mini-jet model (EMM): gluon saturation due to recombination.

The same gluon recombination which was found to be responsible for reducing the growth of the gluon distribution might prevent the total cross section from growing too fast with energy, violating the Froissart bound. Of course, from the pure idea to its implementation there is a long way. One important result was presented in [10], where it was shown that the cross section of a very small and neutral color dipole colliding against a hadronic target at fixed impact parameter follows the Froissart behavior at asymptotic energies. In this particular case, gluon saturation in the target was enough to unitarize the scattering amplitude. In spite of this remarkable result, this is a very special case, and the extension to larger, colored dipoles summed over all impact parameters still needs some modelling. For real projectiles and targets the unitarization of the total cross section still must be done in an ad-hoc way, as done, for example, in the eikonal formalism.
II. THE EIKONAL MINI-JET MODEL

At high energies there is a significant increase in the number of gluons inside the hadron, and hadronic cross sections are dominated by the mini-jets coming from gluon-gluon interactions. The perturbative expression of the jet cross section is given by:

\sigma_{jet}(s) = \int_{P_0} dp_T \int dx_1 \int dx_2 \; G(x_1, Q^2)\, G(x_2, Q^2)\, \frac{d\hat\sigma_{gg}}{dp_T}(\hat s = x_1 x_2 s,\, p_T), \qquad (1)

where G(x, Q^2) is the gluon distribution in the proton extracted from deep inelastic scattering (DIS), x is the proton momentum fraction, and \hat\sigma_{gg} is the elementary gluon-gluon cross section. There are different parametrizations for these distribution functions given, for example, by the collaborations GRV [11], MRST [12] and CTEQ [13]. P_0 = p_Tmin is a parameter which defines the energy scale at which semi-hard interactions start and perturbative QCD is applicable. The increase in the number of gluons in the high energy region (x << 1) makes these functions grow very rapidly at small x, and hence the cross section (1) violates the Froissart bound:

\sigma_{tot}(s) \leq C \ln^2(s/s_0). \qquad (2)

In Fig. 1 we compare the distributions GRV98, MRST98 and CTEQ6-L. We can see that in the small-x region they start to be very different from each other. Using these parametrizations in (1) we have computed the corresponding mini-jet cross sections and compared them with the experimental data [14], as shown in Fig. 2. In this figure we have adjusted P_0 in order to describe the data. We can see that, although all of them are able to reproduce the data, there is a clear trend of growing too fast. The early mini-jet models tried to apply (1) directly to the total hadronic cross sections. This exercise is updated in Fig. 3, where a compilation of data [3] is shown together with the Froissart limit (2). As anticipated in the introduction, this simple use of pQCD does not work.

In order to ensure unitarity, the most common procedure is to utilize an eikonal formalism, evaluating the eikonal in the two-dimensional transverse impact parameter space b. The total cross section is given by [4]:

\sigma_{tot}(s) = 2 \int d^2b \left[ 1 - e^{-\chi_I(b,s)} \cos\chi_R(b,s) \right], \qquad (3)

where \chi_I(b, s) and \chi_R(b, s) are the imaginary and real parts of the eikonal function, respectively. The function \chi(b, s) is usually split into a soft and a hard piece:

\chi(b,s) = \chi_{soft}(b,s) + \chi_{hard}(b,s), \qquad (4)

and each of these pieces has a real and an imaginary part. The soft part of the eikonal comes from a parametrization valid at lower energies, in the range 20-60 GeV [4], and the hard one contains the mini-jet cross section and the impact parameter dependence:

\chi^{I}_{hard}(b,s) = \tfrac{1}{2}\, A(b)\, \sigma_{jet}(s), \qquad (5)

where A(b) is given by the Fourier transform of the electromagnetic form factor and is the same for the soft and hard parts of \chi.
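As a rough numerical illustration of Eq. (3), the sketch below computes the eikonalised total cross section from a given mini-jet cross section, neglecting the real part of the eikonal. This is not the paper's actual fit: the overlap function A(b) is replaced here by a simple normalized Gaussian stand-in (instead of the form-factor convolution), and all parameter values are purely illustrative.

```python
import numpy as np

def A(b, width=1.0):
    # Stand-in overlap function, normalized so that \int d^2b A(b) = 1.
    # (The paper uses the Fourier transform of the e.m. form factor.)
    return np.exp(-b**2 / (2 * width**2)) / (2 * np.pi * width**2)

def sigma_tot(sigma_soft, sigma_jet, width=1.0, b_max=20.0, nb=4000):
    """Eikonalised total cross section, Eq. (3) with chi_R ~ 0:
    sigma_tot = 2 * int d^2b [1 - exp(-chi_I)], chi_I = A(b)*(sigma_soft+sigma_jet)/2."""
    b = np.linspace(1e-6, b_max, nb)
    chi_I = 0.5 * A(b, width) * (sigma_soft + sigma_jet)
    integrand = 2 * np.pi * b * 2.0 * (1.0 - np.exp(-chi_I))
    db = b[1] - b[0]
    return float(np.sum(integrand) * db)

# Even if sigma_jet grows like a power of s, eikonalisation tames the
# growth of sigma_tot (toy values; units are arbitrary here).
for s_jet in [10.0, 100.0, 1000.0]:
    print(s_jet, sigma_tot(sigma_soft=80.0, sigma_jet=s_jet))
```

The saturation of the exponential in b-space is exactly the unitarization mechanism discussed in the text: large mini-jet cross sections blacken the center of the interaction region, so the total cross section grows only through the slow expansion of the black disk.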
III. GLUON SATURATION

In high energy experiments we expect to observe the nonlinear behavior of QCD. In this regime, the growth of parton distributions should saturate and we should observe the state called "Color Glass Condensate" (CGC) [15]. In fact, signals of parton saturation have already been observed both in ep deep inelastic scattering at HERA and in deuteron-gold collisions at RHIC [16]. However, the observation of this new regime still needs confirmation. In the saturation regime the gluon distributions are no longer given by the parametrizations used above, which only contain the (linear) DGLAP evolution. Instead, they are the solutions of non-linear evolution equations. However, these solutions are not yet known and one has to use parametrizations. Kharzeev, Levin and Nardi (KLN) [17] developed a model for G(x) that simulates the saturation effects and generates a distribution function that has been used to describe the new data from RHIC, in studies of multiplicity and rapidity distributions of charged particles. A common feature of all saturation models is the existence of a scale that separates the dense and dilute regions of the hadrons. It is known as the saturation scale and has been parametrized as [18]:

Q_s^2(x) = Q_0^2 \left( \frac{x_0}{x} \right)^{\lambda}. \qquad (6)

The KLN distribution function is of the piecewise form:

G(x, Q^2) \propto \frac{S}{\alpha_s(Q_s^2)} \times \begin{cases} Q^2, & Q^2 \leq Q_s^2(x) \\ Q_s^2(x), & Q^2 > Q_s^2(x) \end{cases} \qquad (7)

where S is the proton area and x_0, Q_0 and \lambda are the parameters of the model fitted from RHIC data: x_0 = 0.3 x 10^-4, Q_0^2 = 0.3 GeV^2, \lambda = 0.288. The distribution above contains \alpha_s (which is small) in the denominator, which can cancel the \alpha_s factor in \sigma. This is typical of the non-linear regime, where we have to deal with weak couplings but strong fields. The aforementioned cancellation casts some doubts on the use of the collinear factorization formula. In fact, collinear factorization is violated in many cases in the context of saturation physics. Fortunately, for many cases of interest, an expression analogous to (1) is valid, in which we have to replace G(x) by the unintegrated (in the gluon transverse momentum) gluon distribution \phi(x, k_T^2). This is called k_T factorization and was proven to hold in many cases. As shown in [19], k_T factorization is valid for gluon production and is violated in quark production, especially in p-A and A-A collisions. Since we are addressing mostly gluon-gluon interactions (with subsequent gluon production) and only p-p collisions, we shall assume that k_T factorization holds. Moreover, as shown in [20], k_T and collinear factorization are equivalent at the leading twist level. Given the exploratory nature of this study, we shall assume that (1) holds also for distributions like (7). The KLN distribution is designed to be valid at very low x, and its most interesting feature is the comparatively mild behavior in the low-x region. It has been used to study gluon production through the fusion g + g -> g at lower scales, Q^2 of order 10 GeV^2.

FIG. 2: The mini-jet cross section calculated by Eq. (1) compared to UA1 experimental data [14].

FIG. 3: The cross section calculated by Eq. (1) compared to pp total cross section data [14].

We shall use the KLN distribution at much higher scales, Q^2 = x_1 x_2 s (in GeV^2), and therefore we should perform the DGLAP evolution of (7). However, we are going to postpone it to a future work. The results without evolution are nevertheless meaningful, because the "effective dominant scale", i.e., the one which contributes the most to the integrals in (1) and (3), is not so large, since the gluon distributions are peaked at small values of x, which lead to relatively small values of Q^2 in most of the cases.
In Fig. 1 we compare Eq. (7) with the other gluon densities. The difference seems to be very large, of one order of magnitude already at x = 10^-4. Of course, part of it is due to the lack of DGLAP evolution, which is known to enhance the small-x region of G(x). However, part of this low-x behavior is really due to the physical input of (7).

Using the KLN distribution in (1) we obtain a good description of the mini-jet cross section, as shown in Fig. 2, and we can considerably improve the description of total cross sections, as can be seen in Fig. 3. However, a really good fit of the data is found only using KLN in (3), as shown in Fig. 4. Notice that here we first fit the minijet cross section (in Fig. 2), fixing P_0, and then fit the minimum bias cross section data (in Fig. 4). This is different from what is done in Refs. [4,5], where the information contained in the mini-jet cross sections [14] is not used and all parameters are fixed together by fitting the total cross sections. Of course, this conclusion should be better grounded after a least-chi^2 fit, which we leave for the future.

FIG. 4: The total cross section calculated with the EMM (3) compared to the experimental data [3].

FIG. 5: The mini-jet cross section calculated with the KLN distribution function, with (dashed lines) and without (solid lines) saturation effects.

Looking at Figs. 3 and 4 we would be tempted to conclude that we are already observing saturation effects in the total p-p cross section, since the KLN distribution gives the best fits of the available data. In order to check this conjecture, we have repeated the calculations using only the second line of (7). When Q^2 > Q_s^2 we are in the linear regime and no saturation effects are present. The result obtained is shown in Fig. 5, where we compare the cross sections obtained with only the linear part (solid lines) and with the two parts (dashed lines). The difference between them tells us how important the saturation effects are in this observable. The answer depends on the cut-off P_0. As can be seen, a significant difference appears only at very high energies and only for a small cutoff. As expected, non-linear effects tend to deplete the cross section, but their magnitude is small. The good agreement between the KLN results and the data, shown in the figures, can be attributed to its initial shape rather than to saturation.

To summarize, we have used an eikonal mini-jet model to study the behavior of the total hadronic cross section with energy. We have updated previous versions of the EMM in some aspects: we have used more recent versions of the standard gluon parametrizations; we have used the measured mini-jet cross sections [14] to improve the fit; and we have tested (for the first time in this kind of model) the KLN distribution, which turns out to give the best description of the data. This first success suggests that, after the proper incorporation of the DGLAP evolution, which was not included here, the KLN distribution may become competitive for the study of total hadronic cross sections in the very high energy limit. Finally, we have observed that, in spite of the phenomenological success of the KLN distribution, saturation effects are very small at the energies considered.
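A small sketch of the saturation-scale parametrization (6), with the RHIC-fit parameters quoted above, is given below. The piecewise density is only a schematic of the KLN shape as rendered in Eq. (7) above (capped below Q_s^2, power-like growth via Q_s^2(x) above); dropping the saturated branch and using the linear branch everywhere is the "second line only" calculation described in the text. Normalizations are deliberately omitted.

```python
import numpy as np

# Saturation scale, Eq. (6): Q_s^2(x) = Q_0^2 * (x_0 / x)^lambda
X0, Q02, LAM = 0.3e-4, 0.3, 0.288   # RHIC-fit values quoted in the text

def Qs2(x):
    return Q02 * (X0 / x) ** LAM

def kln_shape(x, Q2, saturation=True):
    """Schematic KLN-like gluon density (shape only, overall constant and
    alpha_s(Q_s^2) prefactor dropped). With saturation=False only the
    linear (Q^2 > Q_s^2) branch is used, switching saturation off."""
    qs2 = Qs2(x)
    if saturation and Q2 <= qs2:
        return Q2          # saturated region: capped at the hard scale
    return qs2             # linear region: grows like x^(-lambda)

# The saturation scale grows as x decreases, so at a fixed hard scale
# the saturated branch matters only at very small x:
for x in [1e-2, 1e-4, 1e-6]:
    print(f"x={x:.0e}  Q_s^2={Qs2(x):.3f} GeV^2  "
          f"with/without saturation at Q^2=0.5: "
          f"{kln_shape(x, 0.5):.3f} / {kln_shape(x, 0.5, saturation=False):.3f}")
```

This makes the paper's conclusion easy to visualize: at the x and Q^2 values that dominate the integrals in (1) and (3), the two branches coincide, so the improved fits come from the overall mild small-x shape rather than from the saturated branch itself.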
2017-09-13T12:46:06.318Z
2007-03-01T00:00:00.000
{ "year": 2007, "sha1": "4afb13be20f1a77fe252b3557c203589b270a122", "oa_license": "CCBYNC", "oa_url": "https://www.scielo.br/j/bjp/a/D6L4sPZFhNrGkdMxpwWNNMF/?format=pdf&lang=en", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "4afb13be20f1a77fe252b3557c203589b270a122", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
247025500
pes2o/s2orc
v3-fos-license
Learning Competitive Equilibria in Exchange Economies with Bandit Feedback The sharing of scarce resources among multiple rational agents is one of the classical problems in economics. In exchange economies, which are used to model such situations, agents begin with an initial endowment of resources and exchange them in a way that is mutually beneficial until they reach a competitive equilibrium (CE). The allocations at a CE are Pareto efficient and fair. Consequently, they are used widely in designing mechanisms for fair division. However, computing CEs requires the knowledge of agent preferences which are unknown in several applications of interest. In this work, we explore a new online learning mechanism, which, on each round, allocates resources to the agents and collects stochastic feedback on their experience in using that allocation. Its goal is to learn the agent utilities via this feedback and imitate the allocations at a CE in the long run. We quantify CE behavior via two losses and propose a randomized algorithm which achieves sublinear loss under a parametric class of utilities. Empirically, we demonstrate the effectiveness of this mechanism through numerical simulations. Introduction An exchange economy (EE) is a classical micro-economic construct used to model situations where multiple rational agents share a finite set of scarce resources. Such scenarios arise frequently for applications in operations management, urban planning, crowd sourcing, wireless networks, and sharing resources in data centers [14,19,23,27,29,45]. In an EE, agents share a set of resources consisting of multiple resource types. They begin with an initial endowment and then exchange these resources among themselves based on a price system. This exchange process allows two agents to trade different resource types if they find it mutually beneficial to do so. Under certain conditions, continually trading in this manner results in a competitive equilibrium (CE), where the allocations have desirable Pareto-efficiency and fairness properties. EEs have attracted much research attention, historically since they are tractable models to study human behavior and price determination in real-world markets, and more recently for designing multi-resource fair division mechanisms [7,8,10,15,17,47]. One of the most common use cases for fair division, which will be especially pertinent in this work, occurs in the context of shared computational resources. For instance, in a data center shared by an organization, we wish to allocate resources such as CPUs, memory, and GPUs to different users who wish to share this cluster in a way that is Pareto-efficient (so that the resources are put into good use) and fair (for long-term user satisfaction). Here, unlike in real world economies where agents might trade with each other until they reach an equilibrium, the equilibrium is computed using a central mechanism (e.g. a cluster manager) based on the preferences submitted by the agents to obtain an allocation with the above properties. Indeed, fair division mechanisms are a staple in many popular multi-tenant cluster management frameworks used in practice, such as Mesos [28], Quincy [30], Kubernetes [11], and Yarn [50]. Due to this strong practical motivation, a recent line of work has studied such fair division mechanisms for resource sharing in a compute cluster [13,24,25,42], with some of them based on exchange economies and their variants [26,35,48,54]. 
However, prior work on EEs and fair division typically assumes knowledge of the agent preferences, in the form of a utility function which maps an allocation of the m resource types to the value the agent derives from the allocation. For instance, in the above example, an application developer needs to quantify how well her application performs for each allocation of CPU/memory/GPU she receives. At best, doing so requires the laborious and often erroneous task of profiling their application [18,40], and at worst, it can be infeasible due to practical constraints [44,51]. However, having received an allocation, application developers find it easier to report feedback about the utilities based on the performance they achieved. Moreover, in many real-world systems, this feedback scheme can often be automated [28].

Contributions & summary of results

We study a multi-round mechanism for computing CE in an exchange economy so as to generate fair and efficient allocations when the exact utilities are unknown a priori. A central mechanism is used to learn the user utilities over time via feedback from the agents. At the beginning of each round, the mechanism generates allocations; at the end of the round, agents report feedback on the allocation they received. The mechanism then uses this information to better learn the preferences. In particular, we focus on applications for fair division where a centralized mechanism can compute an allocation of these resources on each round, say, by estimating the utilities and finding their equilibria. In this pursuit, we first formalize this online learning task and construct two loss functions: the first, L_CE, directly builds on the definition of a CE, while the second, L_FD, is motivated by the fairness and Pareto-efficiency considerations that arise in fair division. To make the learning problem tractable, we focus on a parametric class of utilities which includes the constant elasticity of substitution (CES) utilities, which feature prominently in the econometric literature, and other application-specific utilities used in the systems literature. We develop a randomized online mechanism which efficiently learns utilities over rounds of allocations while simultaneously striving to achieve Pareto-efficient and fair allocations. We show that this mechanism achieves O(sqrt(T)) loss for the two loss functions, with both in-expectation and high-probability upper bounds (Theorems 4.1 and 4.2), under a general family of utility functions. To the best of our knowledge, this is the first work that studies CE without knowledge of user utilities; as such, different analysis techniques are necessary. For instance, finding a CE is distinctly different from a vanilla optimization task, and common strategies in bandit optimization, such as upper-confidence-bound (UCB) based algorithms, do not apply (details in Section 4). Instead, our algorithm uses a sampling procedure to balance the exploration-exploitation trade-off. We develop new techniques both to bound the losses and to analyse the algorithm. Finally, we corroborate these theoretical insights with empirical simulations.

Related work

Our work builds on a rich line of literature at the intersection of microeconomics and machine learning.
This richness is not surprising: many real-world systems are economic and multi-agent in nature, where decisions taken by or for one agent are weighed against the considerations of others, especially when these agents have competing goals, such as in resource allocation, matching markets, and auction-like settings. As in this work, several works have studied online learning formulations to handle situations where the agents' preferences are not known a priori but can be learned from repeated interactions [4,6,9,21,31,32]. Our setting departs from these as we wish to learn agent preferences in an exchange economy, with a focus on designing fair division mechanisms. Since the seminal work of Varian [48], fair division of multiple resource types has received significant attention in the game theory, economics, and computer systems literature. One of the most common perspectives on this problem is as an exchange economy (or as a Fisher market, which is a special case of an EE). Moreover, fair allocation mechanisms have been deployed in many practical resource allocation tasks where compute resources are shared by multiple users. Due to space constraints, we defer a more detailed overview of this line of work to Appendix E.1. Notably, in all of the above cases, an important requirement for the mechanism is that agent utilities be known ahead of time. Some work has attempted to lift this limitation by making explicit assumptions on the utility, but it is not clear whether these assumptions hold in practice [36,54]. Recently, Kandasamy et al. [33] provide a general method for learning agent utilities for fair division using feedback. However, they only study a single-resource setting and do not explore multiple resource types. Crucially, in the multi-resource setting, one agent can exchange a resource of one type for a different type of resource from another user, so that both are better off after the exchange. Thus, learning in the multi-resource setting is significantly more challenging than the single-resource case, where there is no notion of exchange, and it requires new analysis techniques.

Background

We first present some necessary background material on exchange economies, their competitive equilibria, and fair division mechanisms.

Exchange economies

In an exchange economy, we have n agents and m divisible resource types. Each agent i in [n] has an endowment, e_i = (e_i1, ..., e_im), where e_ij can be viewed as the amount of resource j agent i brings to the economy for trade. In the shared compute cluster example, e_i may represent agent i's contribution to this cluster. Without loss of generality we assume sum_{i in [n]} e_ij = 1 for each resource j, so that the space of resources is denoted by [0,1]^m. We denote an allocation of these resources to the n agents by x = (x_1, x_2, ..., x_n), where x_i in [0,1]^m and x_ij denotes the amount of resource j that is allocated to agent i. The set of all feasible allocations is therefore

X = { (x_1, ..., x_n) : x_i in [0,1]^m, sum_{i in [n]} x_ij <= 1 for all j in [m] }.

An agent's utility function is simply u_i : [0,1]^m -> [0,1], where u_i(x_i) represents her valuation for an allocation x_i she receives. Here u_i is non-decreasing, i.e., u_i(x_i) <= u_i(x_i') for all x_i <= x_i' element-wise (more allocations will not hurt). In an exchange economy, agents exchange resources based on a price system. We denote a price vector by p, where p in R^m_+ and 1^T p = 1 (the normalization accounts for the fact that only relative prices matter). Here p_j denotes the price for resource j.
Given a price vector p, an agent i has a budget p^T e_i, which is the monetary value of her endowment according to the prices in p. As this is an economy, a rational agent will then seek to maximize her utility under her budget:

d_i(p) in argmax { u_i(x_i) : x_i in [0,1]^m, p^T x_i <= p^T e_i }.

While, generally, the preferred allocations d_i(p) form a set, for simplicity we will assume it is a singleton and treat d_i as a function which outputs an allocation for agent i. This is justified under very general conditions [39,49]. We refer to d_i(p) chosen in the above manner as agent i's demand for prices p.

Competitive equilibria (definition, existence, and uniqueness): A natural way to allocate resources to agents is to set prices p for the resources and have the agents maximize their utility under this price system; that is, we allocate x(p) = (x_1, ..., x_n) with x_i = d_i(p). Unfortunately, such an allocation may be infeasible, and even if it were feasible, it may not result in an efficient allocation. However, under certain conditions, we can compute a competitive equilibrium (CE), where the prices have both of the desired properties:

Definition 2.1 (Competitive equilibrium). An allocation-price pair (x, p) is a competitive equilibrium if (i) the market clears, i.e., sum_{i in [n]} x_ij <= sum_{i in [n]} e_ij for all j in [m], and (ii) each agent receives her demand, i.e., x_i = d_i(p) for all i in [n].

Some definitions of a CE require that the first condition above hold with exact equality (e.g., [39]). However, when the utilities are strictly increasing (which will be the case in the sequel), both definitions coincide [49].

Utilities. In general, CEs do not always exist, and even when they do, they may not be unique. However, one important class of utilities that guarantees existence and uniqueness, and which has received much attention in the fair division literature, is the constant elasticity of substitution (CES) class. Due to their favorable properties, CES utilities are widely studied in many fair division works, and most of the existing algorithms that generate fair and efficient allocations assume CES utilities or their sub-classes [39,49]. CES utilities are also ubiquitous in the microeconomics literature; due to their flexibility in interpolating between perfect substitutability and complementarity, they are also able to approximate several real-world utility functions. Moreover, computationally, there are efficient methods for computing a CE in the CES and related classes [54,55]. In contrast, even when CEs exist, they may be hard to find under more general classes of utilities [49].

Example 2.2 (CES utilities). u_i(x_i) = ( sum_{j in [m]} theta_ij x_ij^{1/rho} )^rho, where rho >= 1 determines the elasticity of substitution, and theta_i = (theta_i1, ..., theta_im) is an agent-specific parameter. When rho = 1, this corresponds to linear utilities where goods are perfect substitutes. As rho -> infinity, the utilities approach perfect complements.
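A minimal sketch of computing an agent's demand d_i(p) numerically is given below. It uses the CES form from Example 2.2 and a generic constrained optimizer; all parameter values are illustrative, and this is not the solver used in the paper (closed-form demands exist for CES utilities).

```python
import numpy as np
from scipy.optimize import minimize

def ces_utility(x, theta, rho):
    # CES utility from Example 2.2: u(x) = (sum_j theta_j * x_j^(1/rho))^rho
    return float(np.sum(theta * np.maximum(x, 1e-12) ** (1.0 / rho)) ** rho)

def demand(p, e, theta, rho):
    """Agent's demand d(p): maximize u(x) subject to p.x <= p.e, 0 <= x <= 1."""
    budget = float(p @ e)
    cons = [{"type": "ineq", "fun": lambda x: budget - p @ x}]  # p.x <= budget
    res = minimize(lambda x: -ces_utility(x, theta, rho),
                   np.full(len(e), 0.1),
                   bounds=[(0.0, 1.0)] * len(e), constraints=cons,
                   method="SLSQP")
    return res.x

p = np.array([0.5, 0.5])                  # normalized prices
e = np.array([0.45, 0.05])                # endowment
theta, rho = np.array([0.2, 0.8]), 1.0    # rho = 1 recovers linear utilities
x_star = demand(p, e, theta, rho)
print(x_star, p @ x_star, p @ e)          # demand exhausts the budget
```

For rho = 1 the utility is linear, so the demand concentrates spending on the goods with the highest utility-per-price ratio, which is a convenient sanity check for the optimizer's output.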
Fair division

We now describe how exchange economies are used in fair-division mechanisms, beginning with a formal definition of the fair division problem. In a standard mechanism for fair division, when the utilities are inputs, each agent truthfully submits her utility u_i to the mechanism. The mechanism then returns an allocation x in X that is not only efficient but also fair, satisfying the following two requirements: sharing incentive (SI) and Pareto-efficiency (PE). An allocation x = (x_1, ..., x_n) satisfies SI if the utility an agent receives is at least as much as her utility when using her endowment, i.e., u_i(x_i) >= u_i(e_i) for all i in [n]. This simply states that she is never worse off than if she had kept her endowment to herself, so she has the incentive to participate in the fair division mechanism. A feasible allocation x is said to be PE if the utility of one agent can be increased only by decreasing the utility of another. Rigorously, an allocation x dominates another x' if u_i(x_i) >= u_i(x_i') for all i in [n] and there exists some i in [n] such that u_i(x_i) > u_i(x_i'). An allocation is Pareto-efficient if it is not dominated by any other feasible allocation. We denote the set of Pareto-efficient allocations by PE. One advantage of the PE requirement, when compared to other formalisms which maximize social or egalitarian welfare, is that it does not compare the utility of one agent against that of another. The utilities are useful solely to specify an agent's preferences over different allocations.

EEs in fair division: The above problem description for fair division naturally lends itself to a solution based on EEs. By treating the resource allocation environment as an exchange economy, we may compute its equilibrium to determine the allocations for each agent. Then, the SI property follows from the fact that each agent is maximizing her utility under her budget, and an agent's endowment (trivially) falls under her budget. The PE property follows from the first theorem of welfare economics [39,49]. Several prior works have used this connection to design fair-division mechanisms for many practical applications [15,48,54].

Computing a CE: In order to realize a CE allocation in a fair division mechanism, the mechanism needs to compute a CE given a set of utilities. One way to do this is via tatonnement [49]; however, such general procedures are not guaranteed to converge to an equilibrium even when it exists, and even when they do, the rate of convergence can be slow. This has led to the development of efficient procedures for special classes of functions. One such method is proportional response dynamics (PRD) [54,55], which converges faster under CES utilities [55] and other classes of utilities [54] when e_i = alpha_i 1_m for all i in [n] (with sum_i alpha_i = 1). In fact, in our evaluations, we adopt PRD for computing a CE as a subroutine of the learning algorithm. We note that in the context of fair division, the CE allocations are more pertinent than the CE prices. While the prices are used to compute fair allocations, they are not used directly in their own right.

Online Learning Formulation

We formalize the task of online learning of an equilibrium in an exchange economy under bandit feedback, when the exact agent utilities are unknown a priori. We consider a multi-round setting where, in each round t, the mechanism selects (x_t, p_t), where x_t = (x_{t,1}, ..., x_{t,n}) in X are the allocations for each agent for the current round, and p_t are the prices for units of each resource. The agents, having experienced their allocation, report stochastic feedback {y_{t,i}}_{i in [n]}, where y_{t,i} is sigma-sub-Gaussian and E[y_{t,i} | x_{t,i}] = u_i(x_{t,i}). The mechanism then uses this information to compute allocations for the next round. As described in Section 1, this setup is motivated by use cases in data center resource allocation, where jobs (agents) cannot state their utility upfront, but can report feedback on their performance in an automated way. Going forward, we slightly abuse notation when referring to the allocations. When i in [n] indexes an agent, x_i = (x_i1, ..., x_im) in [0,1]^m denotes the allocation to agent i. When t indexes a round, x_t = (x_{t,1}, ..., x_{t,n}) in X will refer to an allocation to all agents, where x_{t,i} = (x_{t,i,1}, ..., x_{t,i,m}) in [0,1]^m denotes i's allocation in that round. The intended meaning should be clear from context.

Losses

We study two losses for this setting. The first loss is based directly on the definition of an equilibrium (Def. 2.1). For a in R, denote a_+ = max(0, a).
We define the CE loss l_CE of an allocation-price pair (x, p) as the sum, over all agents, of the difference between the maximum attainable utility under price p and the utility achieved by allocation x. The T-round loss L^CE_T is the sum of the l_CE(x_t, p_t) losses over T rounds. We have:

l_CE(x, p) = sum_{i in [n]} ( max{ u_i(y) : y in [0,1]^m, p^T y <= p^T e_i } - u_i(x_i) )_+,    L^CE_T = sum_{t=1}^T l_CE(x_t, p_t).

It is straightforward to see that for a CE pair (x*, p*), we have l_CE(x*, p*) = 0. As this loss is based directly on the definition of a CE, it captures many of the properties of a CE. Our second loss is motivated by the fair division use case. Recall from Sec. 2.2 that in fair division, while prices are useful in computing CE allocations, they have no value in their own right. Therefore, we will motivate our loss function based on the sharing incentive (SI) and Pareto-efficiency (PE) desiderata for fair division. It is composed of two parts. We define the SI loss l_SI for an allocation x as the sum, over all agents, of how much they are worse off than their endowment utilities. We define the PE loss l_PE for an allocation x as the minimum sum, over all agents, of how much they are worse off than some Pareto-efficient utilities. Next, we define the fair division loss l_FD as the maximum of l_SI and l_PE. Finally, we define the T-round loss L^FD_T for the online mechanism as the sum of the l_FD(x_t) losses over T rounds. We have:

l_SI(x) = sum_{i in [n]} ( u_i(e_i) - u_i(x_i) )_+,    l_PE(x) = inf_{z in PE} sum_{i in [n]} ( u_i(z_i) - u_i(x_i) )_+,
l_FD(x) = max( l_SI(x), l_PE(x) ),    L^FD_T = sum_{t=1}^T l_FD(x_t).

Note that individually achieving either small l_SI or small l_PE is trivial: if an agent's utility is strictly increasing, then by allocating all the resources to this agent we have zero l_PE, as such an allocation is Pareto-efficient; moreover, by simply allocating each agent their endowment we have zero l_SI. In l_FD, we require both to be simultaneously small, which necessitates a clever allocation that accounts for agents' endowments and utilities. One intuitive interpretation of the PE loss is that it can be bounded above by the L1 distance to the Pareto-front in utility space; i.e., denoting the set of Pareto-efficient utility vectors by U* = { (u_1(z_1), ..., u_n(z_n)) : z in PE }, we have l_PE(x) <= inf_{u* in U*} sum_{i in [n]} |u*_i - u_i(x_i)|. The FD loss is more interpretable, as it is stated in terms of the SI and PE requirements for fair division. On the other hand, the CE loss is less intuitive. Moreover, in EEs, while prices help us determine the allocations, they do not have value on their own. Given this, the CE loss has the somewhat undesirable property that it depends on the prices p_t. That said, since the CE loss is based directly on the definition of a CE, it captures other properties of a CE that are not considered in l_FD (see an example in Appendix E.3). It is also worth mentioning that neither loss can be straightforwardly bounded in terms of the other. Note that we have presented a basic version of the online learning framework, as it provides the simplest platform to study the problem of learning efficient and fair allocations. For instance, one could consider richer settings where the utilities might change over time with certain contextual information. While these settings are beyond the scope of this work, we believe the analysis techniques and intuitions developed here are also insightful for analysing such variants.
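As a sketch of the losses just defined, the snippet below evaluates l_CE, l_SI, and the L1-style upper bound on l_PE for known utilities. Here demand_fn stands for any budget-constrained utility maximizer (e.g., the demand sketch earlier), and pareto_utils is assumed to be an array of sampled Pareto-efficient utility vectors; both of these inputs, and the sampling-based approximation of the Pareto front, are our illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def l_ce(x, p, e, utils, demand_fn):
    """CE loss: sum_i [ max_{p.y <= p.e_i} u_i(y) - u_i(x_i) ]_+ ."""
    loss = 0.0
    for i, u in enumerate(utils):
        best = u(demand_fn(p, e[i], i))   # max affordable utility under p
        loss += max(0.0, best - u(x[i]))
    return loss

def l_si(x, e, utils):
    """Sharing-incentive loss: sum_i [ u_i(e_i) - u_i(x_i) ]_+ ."""
    return sum(max(0.0, u(e[i]) - u(x[i])) for i, u in enumerate(utils))

def l_pe_upper(x, utils, pareto_utils):
    """Upper bound on the PE loss: min over (sampled) Pareto-efficient
    utility vectors u* of sum_i [ u*_i - u_i(x_i) ]_+ ."""
    ux = np.array([u(x[i]) for i, u in enumerate(utils)])
    gaps = np.maximum(pareto_utils - ux, 0.0).sum(axis=1)
    return float(gaps.min())

def l_fd(x, e, utils, pareto_utils):
    # Fair-division loss: both SI and PE must be small simultaneously.
    return max(l_si(x, e, utils), l_pe_upper(x, utils, pareto_utils))
```

The need to enumerate or sample the Pareto front in l_pe_upper is exactly why, as noted in the experiments, the FD loss is expensive to evaluate while the CE loss is not.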
Model and assumptions

To make the learning problem tractable, we make some additional assumptions on the problem. We consider the following parametric class of utility functions P. Let phi_j : [0,1] -> [0,1] be an increasing function which maps the allocation x_ij of resource j to agent i to some feature value. For brevity, we will write phi(x_i) = (phi_1(x_i1), ..., phi_m(x_im)). Next, let mu : R_+ -> [0,1] be an increasing function. Finally, let Theta, a subset of R^m_+, be a set of positive parameters. Then, we consider the following class of utilities P:

P = { u : [0,1]^m -> [0,1] such that u(x) = mu(theta^T phi(x)) for some theta in Theta }.    (4)

An agent's utility then takes the form u_i(x_i) = mu(theta*_i^T phi(x_i)), where the featurization phi and the function mu are known, but the true parameters theta*_i in Theta are unknown and need to be learned by the mechanism. We consider the above class of functions for the following reasons. First, observe that it represents a valid class of utilities, in that for all positive theta the utilities are increasing in the allocations. Second, a CE is guaranteed to exist uniquely in this class. Third, from a practical point of view, it subsumes a majority of the utilities studied in the fair division literature, such as linear utilities, the CES utilities from Example 2.2 [7,8,10,15,47], and other application-specific utilities [51,54]. Fourth, also from a practical point of view, the CE can be efficiently computed on this class [55]. Finally, it also allows us to leverage techniques for estimating generalized linear models in our online learning mechanism [12,22]. We will also assume the following regularity conditions on P to avoid some degenerate cases in our analysis. First, mu is continuously differentiable and Lipschitz-continuous with constant L_mu, and its derivative is bounded away from zero, i.e., C_mu := inf_{theta in Theta, x} mu'(theta^T phi(x)) > 0. These assumptions can be relaxed (albeit with a more involved analysis) or replaced by other equivalent regularity conditions [12,22], without affecting the main analysis ideas or take-aways in this paper. Our results also apply when mu, phi, and Theta are defined separately for each agent, but we assume they are the same to simplify the exposition.

Algorithm and Theoretical Results

We present a randomized online learning algorithm for learning the agents' utilities and generating fair and efficient allocations. Note that this algorithm not only needs to learn the unknown utilities quickly, but should also simultaneously find the CE allocation. This latter aspect introduces new challenges in our setting. For instance, the most popular approaches for stochastic optimization under bandit feedback are based on upper confidence bounds (UCB). However, finding a CE cannot be straightforwardly framed as a vanilla optimization procedure, and hence UCB procedures do not apply. Instead, our proposed algorithm uses a key randomized sampling step, which trades off exploration and exploitation while maintaining the utilities' shape constraints in every round for computing the CE (details in the proof sketch).

The algorithm, outlined in Algorithm 1, takes input parameters M and {delta_t}_{t >= 1}, whose values we will specify shortly. It begins with an initialization phase of M sub-phases (line 3), each of length min(n, m). During each sub-phase, we allocate each resource entirely to each user for at least one round (lines 4-8 of Algorithm 1: for k = 1, ..., max(m, n), setting, e.g., x_{t,h+k-1,j} <- 1 for all j in [m] when m < n). This initialization phase ensures that some matrices we define subsequently are well conditioned. After the initialization phase, the algorithm operates on each of the remaining rounds as follows. For each user, it first computes the quantities Q_{t,i} in R^{m x m} and the estimate hat-theta_{t,i} in R^m, as defined in lines 15 and 16. As we explain shortly, hat-theta_{t,i} can be viewed as an estimate of theta*_i based on the data from the first t - 1 rounds. The algorithm then samples theta_{t,i} in R^m from a normal distribution with mean hat-theta_{t,i} and covariance alpha_t^2 Q_{t,i}^{-1}, where alpha_t is a confidence-width sequence defined in Algorithm 1; in line 19, the algorithm chooses the allocations and prices. The sampling distribution, which is centered at our estimate hat-theta_{t,i}, is designed to balance the exploration-exploitation trade-off in this problem.
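Below is a standalone sketch of the sampling step just described (line 17), together with the projection onto Theta described next (line 18). The box-shaped Theta, the Q-weighted projection objective, and the concrete values of Q, hat-theta, and alpha_t are all our illustrative assumptions; the algorithm's actual alpha_t sequence and feasible set come from its analysis.

```python
import numpy as np
from scipy.optimize import minimize

def sample_theta(theta_hat, Q, alpha_t, rng):
    """Line 17: draw theta ~ N(theta_hat, alpha_t^2 * Q^{-1})."""
    cov = alpha_t**2 * np.linalg.inv(Q)
    return rng.multivariate_normal(theta_hat, cov)

def project_to_box(theta, Q, lo, hi):
    """Line 18 (assumed box Theta = [lo, hi]^m, Q-weighted projection):
    argmin_{z in Theta} (z - theta)^T Q (z - theta)."""
    res = minimize(lambda z: float((z - theta) @ Q @ (z - theta)),
                   np.clip(theta, lo, hi),
                   bounds=[(lo, hi)] * len(theta), method="L-BFGS-B")
    return res.x

rng = np.random.default_rng(0)
Q = 5.0 * np.eye(3)                     # design matrix after a few rounds
theta_hat = np.array([0.4, 0.3, 0.3])   # current estimate
theta_sample = sample_theta(theta_hat, Q, alpha_t=0.5, rng=rng)
theta_tilde = project_to_box(theta_sample, Q, lo=0.05, hi=1.0)
print(theta_sample, theta_tilde)
```

Note how the covariance shrinks as Q grows: directions of feature space that have been explored often get sampled tightly around the estimate, while poorly explored directions retain wide sampling noise, which is the exploration mechanism the proof sketch later relies on.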
Next, it projects the sampled parameter onto Theta to obtain tilde-theta_{t,i}. In line 19, the algorithm obtains an allocation-price pair (x_t, p_t) by computing the CE on the tilde-theta_{t,i} values obtained above, i.e., by pretending that u_{t,i}(.) = mu(tilde-theta_{t,i}^T phi(.)) is the utility for user i. It is important to note that the computation of the CE happens as a subroutine of the mechanism, and users simply receive the allocations x_t. The mechanism collects the rewards {y_{t,i}}_{i in [n]} from each user and then repeats the same for the remaining rounds. As we discussed in Sec. 2.2, there are different ways to compute a CE efficiently in our setting, including tatonnement or the proportional response dynamics (PRD) algorithm [55], which we implemented. Given that our algorithm focuses on learning efficient and fair allocations, we do not study the computational complexity of CE computation in this work. Empirically, we find that PRD converges quickly in the simulations.

Computation of hat-theta_{t,i}: It is worth explaining steps 15-16, used to obtain the estimate hat-theta_{t,i} of user i's parameter theta*_i. Recall that, for each agent i, the mechanism receives a stochastic reward y_{t,i} in round t, where y_{t,i} is a sigma-sub-Gaussian random variable with E[y_{t,i}] = u_i(x_{t,i}). Given the allocation-reward pairs {(x_{s,i}, y_{s,i})}_{s=1}^{t-1}, the maximum quasi-likelihood estimator theta^MLE_{t,i} for theta*_i is defined as the maximizer of the quasi-likelihood L(theta) = sum_{s=1}^{t-1} log p_theta(y_{s,i} | x_{s,i}), where p_theta is the exponential-family likelihood

p_theta(y | x) = exp( y theta^T phi(x) - b(theta^T phi(x)) + c(y) ).    (6)

Here, mu(nu) = db(nu)/dnu and c(.) is a normalising term. Upon differentiating, we have that theta^MLE_{t,i} is the unique solution of the estimating equation:

sum_{s=1}^{t-1} ( y_{s,i} - mu(theta^T phi(x_{s,i})) ) phi(x_{s,i}) = 0.

In other words, theta^MLE_{t,i} would be the maximum likelihood estimate for theta*_i if the rewards y_{t,i} followed an exponential family likelihood as shown in (6). Our assumptions are more general; we only assume the rewards are sub-Gaussian, centred at mu(theta*_i^T phi(x_{t,i})). However, this estimate is known to be consistent under very general conditions, including when the rewards are sub-Gaussian [12,22]. Since theta^MLE_{t,i} might lie outside the set of feasible parameters Theta, we perform a projection in the Q_{t,i}^{-1} norm to obtain hat-theta_{t,i}, as defined in line 16. Here Q_{t,i}, defined in line 15, is the design matrix obtained from the data in the first t - 1 steps.
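A minimal sketch of solving the estimating equation above is given below, using a generic root finder and mu(z) = z^rho (the CES link) on synthetic data. The data-generating choices here are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import root

def mle_glm(Phi, y, mu):
    """Solve sum_s (y_s - mu(theta . phi_s)) * phi_s = 0 for theta,
    the maximum quasi-likelihood estimating equation."""
    def score(theta):
        resid = y - mu(Phi @ theta)     # shape (t-1,)
        return Phi.T @ resid            # shape (m,): one equation per feature
    theta0 = np.full(Phi.shape[1], 0.5)
    return root(score, theta0).x

# Illustration: mu(z) = z^rho on a tiny synthetic data set.
rng = np.random.default_rng(1)
rho, theta_true = 2.0, np.array([0.3, 0.7])
Phi = rng.uniform(0.1, 1.0, size=(200, 2))           # features phi(x_s)
y = (Phi @ theta_true) ** rho + 0.01 * rng.standard_normal(200)
theta_hat = mle_glm(Phi, y, mu=lambda z: z ** rho)
print(theta_hat)   # close to theta_true
```

Because mu is increasing, the score has a unique zero, which is why the text can speak of "the" unique solution; in the algorithm this estimate would then be projected onto Theta before the sampling step.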
On the algorithm design: It is worth comparing the design of our algorithm against prior work in the bandit literature under similar parametric assumptions [16,22,38,43]. In a CE, each agent is maximizing her utility under a budget constraint; therefore, a seemingly natural idea is to adopt a UCB-based procedure, which is the most common approach for stochastic optimization under bandit feedback [5]. However, adopting a UCB-style method for our problem proved to be unfruitful. Consider using a UCB of the form mu(hat-theta_{t,i}^T phi(.)) + U_{t,i}(.), where U_{t,i} quantifies the uncertainty in the current estimate. Unfortunately, a CE is not guaranteed to exist for utilities of the above form, which means that finding a suitable allocation can be difficult. An alternative idea is to consider UCBs of the form mu(bar-theta_{t,i}^T phi(.)), where bar-theta_{t,i} is an upper confidence bound on theta*_i (recall that both theta*_i and phi are non-negative). While CEs are guaranteed to exist for such UCBs, bar-theta_{t,i} is not guaranteed to uniformly converge to theta*_i, resulting in linear loss. Instead, our algorithm takes inspiration from the classical Thompson sampling (TS) procedure for multi-armed bandits in the Bayesian paradigm [46]. The sampling step in line 17 is akin to sampling from the posterior beliefs in TS. It should be emphasized, though, that the sampling distributions on each round cannot be interpreted as the posterior of some prior belief on theta*_i. In fact, they were designed so as to put most of their mass inside a frequentist confidence set for theta*_i.

Upper bounds on the loss

The following two theorems (Theorems 4.1 and 4.2) are the main results bounding the loss terms L^FD_T and L^CE_T for Algorithm 1. In the first theorem, we are given a target failure probability of at most delta; by choosing delta_t appropriately, we obtain an infinite-horizon algorithm for which both loss terms are O(sqrt(T)) with probability at least 1 - delta. In the second theorem, with a given time horizon T, we obtain an algorithm whose expected losses are O(sqrt(T)). In both theorems, probabilities and expectations are with respect to both the randomness in the observations and the sampling procedure. Both theorems show that we can learn with respect to both losses at a sqrt(T) rate. Note that the rates depend on the number of initialization sub-phases M. By choosing M = m^2, we get an O(nm sqrt(T)) bound. However, this also requires a large initialization phase, which may not be feasible in practice. We can instead choose M to be small, but this leads to correspondingly worse asymptotic bounds.

Proof sketch. Our proof uses some prior martingale concentration results from the bandit literature [22,43]; additionally, we use some high-level intuitions from prior frequentist analyses of Thompson sampling [2,34,41]. At the same time, we also require novel techniques, both to bound the loss terms and to analyse the algorithm. Our proof for bounding L^CE_T first defines high-probability events A_{t,i}, B_{t,i} for each agent i and round t. A_{t,i} captures the event that the estimate hat-theta_{t,i} is close to theta*_i in the Q_{t,i} norm. We upper bound P(A^c_{t,i}) using the properties of the maximum quasi-likelihood estimator on GLMs [12,22] and a martingale argument. B_{t,i} captures the event that the sampled theta_{t,i} is close to hat-theta_{t,i} in the Q_{t,i} norm. Given these events, we then bound the instantaneous losses l_CE(x_t, p_t) by a super-martingale with bounded differences. The final bound is obtained by an application of the Azuma inequality. Another key ingredient in this proof is to show that the sampling step also explores sufficiently: the B_{t,i} event only captures exploitation. Since the sampling distribution is a multivariate Gaussian, this can be conveniently argued using an upper bound on the standard normal tail probability. While bounding L^FD_T uses several of the same results and techniques, it cannot be directly related to L^CE_T and requires a separate analysis.

Experiments

We evaluated Algorithm 1 with simulations. To the best of our knowledge, this is the first online algorithm for learning fair and efficient allocations with unknown utilities over multiple heterogeneous resource types, and there are no existing natural baselines. There is also no straightforward adaptation of the single-resource method described in Kandasamy et al. [33], since it does not consider the exchange of resources. We evaluated based on two types of utilities.

1. CES utilities: the constant elasticity of substitution utilities from Example 2.2.

2. Amdahl's utilities: The Amdahl utility function, described in Zahedi et al. [54], is used to model the performance of jobs distributed across heterogeneous machines in a data center. This utility is motivated by Amdahl's Law [3], which models a job's speedup in terms of the fraction of work that can be parallelized.
Let 0 < f_ij < 1 denote the parallel fraction of user i's job on machine type j. Then, an agent's Amdahl utility is a weighted combination of per-machine-type speedups, u_i(x_i) = sum_{j in [m]} theta_ij s_ij(x_ij), where s_ij(x_ij) = (1 - f_ij + f_ij / x_ij)^{-1} is the relative speedup produced by allocation x_ij. Both CES and Amdahl utilities belong to our class P given in (4). We focus our evaluation on the CE loss; computing the FD loss is computationally expensive, as it requires taking an infimum over the Pareto-front (more details in Appendix E). We report the cumulative CE loss L^CE_T as a function of T. The results are given in Figure 1. They show that the CE loss grows sublinearly with T, which indicates that the algorithm is able to learn utilities and compute a CE. To compute the CE at line 19 of Algorithm 1, we use the proportional response dynamics procedure from [55] with 20 iterations. To compute L^CE_T, we need to maximize each agent's utility subject to a budget. Full experimental details and additional results are included in Appendix D.

Conclusion

We introduced and studied the problem of online learning of a competitive equilibrium in an exchange economy, without a priori knowledge of agents' utilities. We quantify the learning performance via two losses, the first motivated by the definition of an equilibrium, and the second by fairness and Pareto-efficiency considerations in fair division. We develop a randomized algorithm which achieves O(nm sqrt(T)) loss after T rounds under both losses, and corroborate these theoretical results with simulations. While our work takes the first step towards sequentially learning a market equilibrium in exchange economies, an interesting avenue for future work would be to study learning approaches under broader classes of agent utilities and market dynamics.

A. Technical Lemmas

We first provide some useful technical lemmas.

Lemma A.1. (Tail bound for chi-squared variables) Let Z be the sum of squares of m i.i.d. N(0,1) variables, i.e., Z ~ chi^2_m. Then, P(Z - m >= t) <= exp(-min(t^2/(8m), t/8)).

Proof. Suppose that X is a sub-exponential random variable with parameters (nu, b) and expectation mu. Applying well-known tail bounds for sub-exponential random variables (e.g., [52]) yields the claim. The lemma follows from the fact that a chi^2_m random variable is sub-exponential with parameters (nu, b) = (2, 4).

Lemma A.2. (Lower bound for normal distributions) Let Z be a random variable with Z ~ N(0, 1).

Proof. First, from Abramowitz et al. [1], Eq. (7.1.13), we have a lower bound on the Gaussian tail. Setting t = sqrt(2x), the bound yields the claim, which completes the proof.

Lemma A.3. (Azuma-Hoeffding inequality [52]) Let (Z_s)_{s >= 0} be a super-martingale w.r.t. a filtration, with differences bounded by |Z_s - Z_{s-1}| <= c_s almost surely. Then for any delta > 0, with probability at least 1 - delta, Z_T - Z_0 <= sqrt( 2 sum_{s=1}^T c_s^2 log(1/delta) ).

Proof. The result follows immediately from the fact that the function f(x) := x / log(1 + x) is nondecreasing on (0, infinity).

Lemma A.5. (Lemma 1, Filippi et al. [22]) Let (F_k)_{k >= 0} be a filtration and (m_k)_{k >= 0} be an R^d-valued stochastic process adapted to (F_k). Assume that eta_k is conditionally sub-Gaussian, in the sense that there exists some R > 0 such that for any gamma >= 0 and k >= 1, E[exp(gamma eta_k) | F_{k-1}] <= exp(gamma^2 R^2 / 2) almost surely. Then, consider the martingale xi_t = sum_{k=1}^t m_{k-1} eta_k and the process M_t = sum_{k=1}^t m_{k-1} m_{k-1}^T. Assume that, with probability one, the smallest eigenvalue of M_d is lower bounded by some positive constant lambda_0, and that ||m_k||_2 <= c_m almost surely for any k >= 0. Then the following holds true: for any 0 < delta < min(1, d/e) and t > max(d, 2), with probability at least 1 - delta, the stated concentration bound holds.

B. Proof of Theorem 4.1 for L_CE

Let F_t denote the sigma-algebra generated by the observations in the first t - 1 rounds. Clearly, {F_t}_{t >= 0} is a filtration. We will denote E_t[.] = E[. | F_t], the expectation conditioned on the past observations up to round t - 1. Recall that {delta_t}_{t >= 0} are inputs to the algorithm. Similarly, let {delta_{2t}}_{t >= 0} be a sequence.
We will specify values for both sequences later in this proof. Given these, further define the following quantities on round t. Here, recall that L_mu is the Lipschitz constant of mu(.), C_mu is such that C_mu := inf_{theta in Theta, x in X} mu'(theta^T phi(x)), and alpha_t is the sequence that is defined and used in Algorithm 1. Next, we consider the following two events, where Q_it is the design matrix corresponding to the first t - 1 steps. Lastly, define x*_it = argmax_{y in X : p_t^T y <= p_t^T e_i} u_i(y). Here, we use X to denote the set of feasible allocations for one agent: {x in R^m : 0 <= x <= 1}. Intuitively, x*_it is the best truly optimal affordable allocation for agent i in round t under the price vector p_t. Since the set {y in X : p_t^T y <= p_t^T e_i} is compact, the maximum is well defined. Now we begin our analysis with the following lemmas.

Then, by the fundamental theorem of calculus, we have, by the definition of C_mu and Q_it, that G_it >= C_mu Q_it >= M . I, where the last inequality follows from the initialisation scheme. Therefore, G_it is invertible and, moreover, we can write, Therefore, we have, where the first equality follows from Eq. (7) and the inequality follows from Eq. (8). Therefore, where the second inequality is from the triangle inequality and the last equality is from the definitions of theta^MLE_it and g_it. Let A_it denote the event defined above; then A_it holds with probability at least 1 - delta_t by Lemma A.5. We can now write, The first step simply uses the fact that, since hat-theta_it is already inside Theta (see line 17 in Algorithm 1), projecting theta_it to be inside Theta after sampling only brings it even closer to hat-theta_it. Note that Z is a chi^2_m random variable. This follows from the following fact: denote y = alpha_t^{-1} Q_it^{1/2} (theta_it - hat-theta_it); then Z = y^T y is a chi^2_m random variable. Therefore, by Lemma A.1 and the definition gamma_2t = max( 8 log(1/delta_2t), log(1/delta_2t) ), we have the claim, which completes the proof.

Lemma B.3. Let x be arbitrary such that x in X. Then, Proof. First, notice that, From the above we have that, where Z ~ N(0, 1) is sampled independently of the observations, since the randomness in Algorithm 1 can be assumed to be independent of the randomness in the observations. Therefore, under the event A_it, Here, the first inequality follows from the definition of the matrix norm and the definition of A_it, and the second inequality follows from the definition of beta_1t. Therefore, by Lemma A.2, we have, Setting alpha_0^2 = 4, we have the claim, which completes the proof.

Lemma B.4. Let theta_1, theta_2 in Theta, a subset of R^m. Let Q >= 0, Q in R^{m x m}, be a positive semi-definite matrix, and rho_Q(x) = phi(x)^T Q^{-1} phi(x). Then, Proof. This follows from the Lipschitz properties of mu and the following simple calculations:

Proof. First, when event B_it holds, by Lemma B.4 we have that for all x, Note that by definition, On the other hand, under event A_it, by Lemma B.4 we have that for all x, Moreover, recall that by definition, for any x in S_it, Therefore, consider any x in S_it, and under the condition that, where the last inequality follows from combining Eqs. (9), (10), (11) and the definition of beta_3t. Hence, Eq. (12) implies that, under the same condition, x_it is not in S_it, since, by construction, x_it maximizes u_it under the budget; thus, This further implies that, Here, the second and third inequalities both follow from the law of total probability and rearranging terms, and the last inequality follows from Lemma B.1, Lemma B.2 and Lemma B.3, which completes the proof. Lemma B.6.
For t max(t 0 , t 0 ), This implies that, Therefore, by Lemma B.5, we have Select t 0 such that, ∀t t 0 , δ t = 1 4 , and δ 2t q 0 4 , then we have: where the first inequality follows from triangle inequality, and the second one follows from the definitions of A it and B it . Hence, Therefore, we have which further yields which completes the proof. Lemma B.7. Let δ > 0. Define L iT = T t=1 it . Then, with probability at least 1 − δ , and v it = t s=1 u is , with v i0 = 0 and u i0 = 0. We show that {v it }, t 0 is a super-martingale with respect to the filtration (F t ) t 0 . First, Moreover, Therefore, by Lemma A.3, with probability at least 1 − δ , we have that Therefore, we have Now it remains to bound T t=1 ρ it (x t ). Since Q it M I, by the definition of ρ it (x it ), we have Having lemma C.1 at hand, the key remaining task is to bound PE . We will show that this can be achieved by an analogous analysis as in Section B.1, but with some key differences. First, we defineS it (in comparison to S it used in Section B.1): , where x * ∈ R n×m is the unique equilibrium allocation. Note thatS it shares a similar spirit as S it , which is used in Section B.1, but with a different referencing point x * . We show a key lemma which provides a lower bound on P (x ∈S it ). Lemma C.2. For any round t > t 0 , Proof. First, when event B it holds, by lemma B.4, we have that for all x, Note that by definition, On the other side, under event A it , by lemma B.4, we have that for all x, Moreover, recall that by definition for any x ∈S it , Therefore, consider any x ∈S it , and under the condition that where the last inequality follows from combining equations Eq (17), Eq (18), Eq (19) and the definition of β 3t . Moreover, recall that x it maximizes u it under the budget, thus , Therefore, Eq (20) implies that, x it / ∈S it . This further implies, Here, the second and third inequality both from the law of total probability and rearranging terms, and the last inequality follows from Lemma B.1, Lemma B.2 and Lemma B.3, which completes the proof. Lemma C.3. At any round t > t 0 , define x it def = arg min x ∈S it ,p t y p t e i ρ it (x it ), then we have where the last inequality follows from this definition. Moreover, we have under the event A it ∩ B it , by Eq (17) and Eq (18). Putting these together yields which completes the proof. Now we show that the above result leads to the lemma below, which shows a analogous guarantee as we obtained in lemma B.6. Here, t 0 is chosen such that, ∀t > t 0 , δ t < 1 4 , δ 2t < q 0 4 . Proof. First, note that, by the definition of x it , Moreover, combining the above with Lemma C.2, we have Select t 0 such that, ∀t t 0 , δ t = 1 4 , and δ 2t q 0 4 , then we have: Also, under A it ∩ B it , where the first inequality follows from triangle inequality, and the second one follows from the definitions of A it and B it . Hence, Therefore, we have ρ it (x it ). With the above lemmas at hand, we are now ready to provide a proof of Theorem 4.1 for L FD T . C.1 Proof of Theorem 4.1 for L FD Proof. Lemma C.4 shows a analog guarantee as we obtained in lemma B.6 for the L FD loss function. Therefore, following the same steps in lemma B.7, we have that with probability at least 1 − δ where δ will be specified momentarily, The utilities of the three users if they were to simply use their endowment is, u 1 (e 1 ) = 0.1 × 0.45 + 0.05 = 0.095, u 2 (e 2 ) = 0.14, and u 3 (e 3 ) = 0.19. 
We find that while agents 1 and 2 benefit more from the second resource, they have more of the first resource in their endowments, and vice versa for agent 3. By exchanging resources, we can obtain a more efficient allocation. The unique equilibrium prices for the two goods are p* = (1/2, 1/2), and the allocations are x*_1 = (0, 0.5) for agent 1, x*_2 = (0, 0.5) for agent 2, and x*_3 = (1.0, 0.0) for agent 3. The utilities of the agents under the equilibrium allocations are u_1(x*_1) = 0.5, u_2(x*_2) = 0.5, and u_3(x*_3) = 1.0. Here, by the definition of CE, PE(x*, p*) = 0. It can also be verified that FD(x*, p*) = 0. In contrast, consider the following allocation for the 3 users: x_1 = (0.35, 0.49) for agent 1, x_2 = (0.35, 0.49) for agent 2, and x_3 = (0.3, 0.02) for agent 3. Here, the utilities are u_1(x_1) = 0.1 x 0.35 + 0.49 = 0.525, u_2(x_2) = 0.56, and u_3(x_3) = 0.3002. This allocation is both PE (as the utility of one user can only be increased by taking resources from someone else) and SI (as all three users are better off than with their endowments). Therefore, FD((x_1, x_2, x_3)) = 0. However, user 3 might complain that their contribution of resource 2 (which was useful for users 1 and 2) has not been properly accounted for in the allocation. Specifically, there does not exist a set of prices p for which PE(x, p) = 0. This example illustrates the role of prices in this economy: they allow us to value the resources relative to each other based on demand.
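The arithmetic in this example can be checked mechanically. The sketch below assumes linear utilities u_i(x) = v_i . x and fills in a complete valuation matrix and endowment matrix; only agent 1's row, v_1 = (0.1, 1) with e_1 = (0.45, 0.05), is pinned down directly by the text, so the remaining rows are hypothetical values chosen to reproduce the quoted endowment and equilibrium utilities.

```python
import numpy as np

# Hypothetical parameters consistent with the example: only v_1 and e_1 are
# implied directly by the text; v_2, v_3, e_2, e_3 are illustrative choices.
V = np.array([[0.1, 1.0],     # agent 1 values good 2 ten times more than good 1
              [0.2, 1.0],
              [1.0, 0.1]])
E = np.array([[0.45, 0.05],
              [0.45, 0.05],
              [0.10, 0.90]])
p = np.array([0.5, 0.5])                              # candidate equilibrium prices
X = np.array([[0.0, 0.5], [0.0, 0.5], [1.0, 0.0]])    # candidate CE allocation

print((V * E).sum(axis=1))   # endowment utilities: [0.095 0.14  0.19 ]
print((V * X).sum(axis=1))   # equilibrium utilities: [0.5 0.5 1. ]
assert np.allclose(X.sum(axis=0), E.sum(axis=0))      # the market clears
assert np.allclose(X @ p, E @ p)                      # each agent spends her budget
print(V / p)                 # bang-per-buck ratios per agent and good
```

With prices (1/2, 1/2), the bang-per-buck rows confirm that agents 1 and 2 demand only good 2 and agent 3 only good 1, which is exactly the clearing allocation above.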
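For the CE computation itself, the experiments above invoke proportional response dynamics from [55] with 20 iterations. The following is a minimal sketch of the classic bid-update rule for linear utilities, with budgets re-derived from endowment value each round as one simple way to adapt the Fisher-market dynamic to an exchange economy; the exact procedure in [55] may differ in these details.

```python
import numpy as np

def proportional_response(V, E, n_iters=20, seed=0):
    """Proportional response for linear utilities (assumes all v_ij > 0):
    each agent re-splits her budget across goods in proportion to the
    utility contribution v_ij * x_ij earned in the previous round."""
    n, m = V.shape
    bids = np.random.default_rng(seed).random((n, m))  # arbitrary positive start
    for _ in range(n_iters):
        prices = bids.sum(axis=0)                  # p_j = sum_i b_ij
        alloc = bids / prices                      # x_ij = b_ij / p_j
        budgets = E @ prices                       # exchange economy: B_i = p . e_i
        gains = V * alloc                          # per-good utility contributions
        bids = budgets[:, None] * gains / gains.sum(axis=1, keepdims=True)
    return alloc, prices / prices.sum()            # normalize prices to the simplex

# On the three-agent example above this recovers prices close to (1/2, 1/2).
V = np.array([[0.1, 1.0], [0.2, 1.0], [1.0, 0.1]])
E = np.array([[0.45, 0.05], [0.45, 0.05], [0.10, 0.90]])
alloc, prices = proportional_response(V, E)
print(np.round(prices, 3))
```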
2021-06-15T01:16:15.165Z
2021-06-11T00:00:00.000
{ "year": 2021, "sha1": "7e3d08a6a1e94fa74dc314c12d878bbe545331dc", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "7e3d08a6a1e94fa74dc314c12d878bbe545331dc", "s2fieldsofstudy": [ "Economics", "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
125113698
pes2o/s2orc
v3-fos-license
Direct Entropy Measurement in a Mesoscopic Quantum System The entropy of an electronic system offers important insights into the nature of its quantum mechanical ground state. This is particularly valuable in cases where the state is difficult to identify by conventional experimental probes, such as conductance. Traditionally, entropy measurements are based on bulk properties, such as heat capacity, that are easily observed in macroscopic samples but are unmeasurably small in systems that consist of only a few particles [1,2]. In this work, we develop a mesoscopic circuit to directly measure the entropy of just a few electrons, and demonstrate its efficacy using the well-understood spin statistics of the first, second, and third electron ground states in a GaAs quantum dot [3][4][5][6][7][8]. The precision of this technique, quantifying the entropy of a single spin-1/2 to within 5% of the expected value of k_B ln 2, shows its potential for probing more exotic systems. For example, entangled states or those with non-Abelian statistics could be clearly distinguished by their low-temperature entropy [9][10][11][12][13].

Our approach is analogous to the milestone of spin-to-charge conversion achieved over a decade ago, in which the infinitesimal magnetic moments of a single spin were detected by transforming them into the presence or absence of an electron charge [14,15]. Following this example, we perform an entropy-to-charge conversion, making use of the Maxwell relation that connects changes in entropy, particle number, and temperature (S, N, and T, respectively) to changes in the chemical potential, mu, a quantity that is simple to measure and control: (dmu/dT)_{N,P} = -(dS/dN)_{T,P} (Eq. 1). The Maxwell relation in Eq. 1 forms the basis of two theoretical proposals to measure non-Abelian exchange of Moore-Read quasiparticles in the nu = 5/2 state via their entropy [9,10]. Reference 10 proposes a strategy by which quasiparticle entropy could be deduced from the temperature-dependent shift of charging events on a local disorder potential, a thermodynamic equivalent of the measurements that established the e/4 quasiparticle charge [16]. As a demonstration of the viability and the high accuracy achievable by this technique, we investigate a well-understood system with localized fermions in place of more exotic quasiparticles: a few-electron GaAs quantum dot. The entropies of the first three electron states in the dot are measured by the temperature-dependent charging scheme laid out in Ref. 10. Applying the language of quantum dots to Eq. 1, the entropy difference between the N - 1 and N electron ground states (Delta S_{N-1 -> N} for Delta N = 1) is measured via the shift with temperature in the electrochemical potential, mu_N, needed to add the Nth electron to the dot.

The measurement relies on the mesoscopic circuit shown in Fig. 1a, using electrostatic gates to realize an electron reservoir in thermal and diffusive equilibrium with a few-electron quantum dot coupled to its right side.

[Figure 1. Measurement protocol. (a) Scanning electron micrograph of a device similar to the one measured. Electrostatic gates (gold) define the circuit in a 2D electron gas (2DEG), with grey gates grounded. Squares indicate ohmic contacts to the 2DEG. The temperature of the electron reservoir in the middle (red) is oscillated using AC current, I_heat, at frequency f_heat through the quantum point contact (QPC) on the left. A portion of the 5 um-wide reservoir has been removed here for clarity. The occupation of the quantum dot, tunnel coupled to the right side of the reservoir, is tuned by V_p and monitored by I_sens through the charge sensor QPC. I_sens is split into DC and AC components, the latter being measured by a lock-in amplifier at 2f_heat. (b, c) Simulated DC charge sensor signal, G_sens, for a transition from N - 1 -> N electrons at two temperatures (T_Red > T_Blue), showing two possible cases for dS/dN. Insets show the corresponding difference, dG_sens, between hot and cold curves.]

The occupation of the dot is tuned with the plunger gate voltage, V_p, and measured using an adjacent quantum point contact as a charge sensor [17][18][19]. Applying more positive V_p lowers mu_N, bringing the Nth electron into the dot when mu_N drops below the Fermi level of the reservoir, E_F. The reservoir temperature, T, can be increased above the GaAs substrate temperature by Joule heating from current, I_heat, driven through a quantum point contact on the left side. Charge transitions on the dot appear as steps in the charge sensor conductance, G_sens(V_p), thermally broadened by the reservoir temperature (Figs. 1b and c). The gate voltage corresponding to the midpoint of the transition, V_mid, marks the electrochemical potential at which the probabilities of finding N - 1 and N electrons on the dot are equal. When mu_N shifts with temperature, V_mid also shifts; it is the shift in V_mid with temperature that forms the basis of our experiment (Fig. 1c). In practice, charge noise limits the accuracy to which V_mid can be measured. To overcome this, the measurement is done with a lock-in amplifier, oscillating the temperature using an AC I_heat and measuring the resultant oscillations in G_sens, which we label dG_sens. As seen in the insets of Figs. 1b and c, the lineshape of dG_sens is perfectly antisymmetric when dS/dN = 0, but asymmetric when dS/dN != 0. The temperature-induced shift in the dot chemical potential with respect to the reservoir E_F can also be understood in terms of detailed balance. At V_mid, where the probabilities for N and N - 1 electrons on the dot are equal, the tunnel rates Gamma_in = Gamma_{N-1 -> N} and Gamma_out = Gamma_{N -> N-1} must also be equal. These rates depend on the number of available states in the tunneling process, and therefore on the degeneracies, d_{N-1} and d_N, of the N - 1 and N ground states [20,21].
The condition Gamma_in = Gamma_out leads to a simple relationship between degeneracy and the thermally broadened Fermi function, clearly demonstrating the connection between entropy, temperature, and the shift in mu_N at V_mid. Previous experiments have explored the relationship between tunnel rates and degeneracy using time-resolved transport spectroscopy and by coupling quantum dots to atomic force cantilever oscillations [8, 22-24]. The approach presented here is a thermodynamic analogue, and extends entropy measurements to a wider set of applications where tunneling processes may not be observable in real time.

The dot was tuned such that the source was weakly tunnel-coupled to the reservoir with the drain closed. The conductance of the charge sensor was tuned to G_sens ~ e^2/h, where it was most sensitive to charge on the dot. The addition of the first electron to the dot was marked by a decrease in G_sens that is consistent with a thermally broadened two-level transition (Fig. 2a, Eq. 2), where G_0 quantifies the sensor sensitivity, Theta = k_B T/(alpha e) is the thermal broadening expressed in units of gate voltage, alpha := (1/e) dmu_N/dV_p is the lever arm, gamma_1 reflects the cross capacitance between the charge sensor and plunger gate, and G_2 is an offset. Figure 2a shows two such transition curves with thermal broadening set by I_heat. For I_heat = 0, Theta followed T_MC down to approximately 100 mK (Fig. 2b), validating the approximation of thermal broadening used throughout this experiment.

[Figure 2, panels (d) and (e). (d) Theta grows with DC current through the QPC heater. A fit to T^2 = a T_MC^2 + b I_heat^2 R_QPC is used to convert between I_heat and dT, where T_MC is the mixing chamber temperature [25]. (e) Entropy measurements were independent of the magnitude of I_heat oscillations over a large range. The top axis indicates the corresponding magnitude of dT, while the right axis shows the entropy signal converted to a gate voltage shift per unit temperature. Error bars show 95% confidence intervals calculated with the bootstrap method.]

The data in Fig. 2c, and the corresponding fits, illustrate a measurement of Delta S_{0->1} across the 0 -> 1 electron transition. The lock-in measurement of dG_sens, due to temperature oscillations dT, yields the characteristic peak-dip structure seen in Fig. 2c. The expected lineshape of such a curve is dG_sens = (dG_sens/dT) dT, with G_sens defined by Eq. 2. This lineshape depends explicitly on Delta S, recognizing (via Eq. 1) that V_mid shifts with temperature at a rate set by Delta S. As expected from Figs. 1b and c, dG_sens(V_p) is antisymmetric around V_mid for Delta S = 0, and asymmetric for Delta S != 0. A fit of the data in Fig. 2c to Eq. 3 yields Delta S_{0->1} = (1.02 +/- 0.03) k_B ln 2, closely matching the expected Delta S_{0->1} = S_1 - S_0 = k_B ln 2 for transitions between an empty dot with zero entropy (S_0 = 0) and the two-fold degenerate one-electron state (d_1 = 2) with entropy S_1 = k_B ln 2. It is important to note that Delta S is extracted from fits to Eq. 3 based solely on the asymmetry of the lineshape, with no calibration of measurement parameters (such as dT or the lever arm alpha) required. We can, however, estimate alpha and dT by determining Theta from fits to Eq. 2 for varying substrate temperature (Fig. 2b) and I_heat (Fig. 2d). Measurements of Delta S remained constant over a broad range of dT (Fig. 2e), as expected for temperatures low enough not to excite orbital degrees of freedom on the dot. Confirmation that the measured Delta S derives from spin degeneracy is seen through its evolution with in-plane magnetic field, B_par.
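The asymmetry argument can be made concrete with a toy calculation. The sketch below assumes a pure Fermi-function step for Eq. 2 (G_0 = 1, gamma_1 = G_2 = 0) and works in units where Theta = T, so that Eq. 1 makes V_mid drift linearly in T with slope Delta S/k_B; all parameter values are illustrative and not the experiment's calibration.

```python
import numpy as np

def g_sens(vp, t, dS_over_kB):
    # Thermally broadened charge step: G = 1 / (1 + exp((Vp - Vmid)/Theta)),
    # with Theta = T in these units and Vmid = (dS/kB) * T via Eq. 1.
    vmid = dS_over_kB * t
    return 1.0 / (1.0 + np.exp((vp - vmid) / t))

vp = np.linspace(-6.0, 6.0, 7)          # gate voltage axis, units of Theta
t, dt = 1.0, 0.05                       # mean temperature and oscillation amplitude
for dS in (0.0, np.log(2.0)):           # Delta S = 0 versus kB ln 2
    dG = g_sens(vp, t + dt / 2, dS) - g_sens(vp, t - dt / 2, dS)
    print(np.round(dG, 5))
# dS = 0 gives a perfectly antisymmetric trace, dG(-Vp) = -dG(Vp); the kB ln 2
# case is skewed by the extra Vmid shift, mimicking the peak-dip data of Fig. 2c.
```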
Figure 3a compares Delta S(B_par) for the 0 -> 1 and 2 -> 3 transitions, both of which correspond to transitions from total spin zero to total spin one-half. The entropies of the one- and three-electron states go to zero as Zeeman splitting lifts the spin degeneracy, following the Gibbs entropy for a two-level system, S = -k_B sum_{+/-} p_{+/-} ln p_{+/-} (Eq. 4), where p_{+/-}(B_par, T) = (1 + e^{-/+ g mu_B B_par / k_B T})^{-1} are the probabilities for the unpaired electron to be in the spin-up or spin-down state at a given field and temperature. Fits to Eq. 4, with the ratio g/T and an added scaling Delta S(B_par = 0) as free parameters, give Delta S_{0->1}(B_par = 0) = (0.94 +/- 0.03) k_B ln 2 and Delta S_{2->3}(B_par = 0) = (0.98 +/- 0.02) k_B ln 2 (Fig. 3), and reflect the collapse to zero at high field, where spin degeneracy is broken. This collapse can also be seen qualitatively, in the crossover from asymmetric to antisymmetric lineshapes of dG_sens(V_p) (Figs. 3b and c). Estimating an average T for each data set using the calibration in Fig. 2d yields |g| = 0.48 +/- 0.02 and |g| = 0.44 +/- 0.01 for the 0 -> 1 and 2 -> 3 transitions, respectively. Errors in the g-factor measurement are likely due to the difficulty of estimating temperature oscillations. Still, the g-factors are consistent with reported values [26][27][28] and with the value measured separately in Fig. 3e using bias spectroscopy.

The 1 -> 2 transition can be understood as the inverse of the 0 -> 1 transition for B_par < 5 T, comparing Figs. 3a and 4a. For relatively low fields, the two-electron ground state remains a spin singlet with zero entropy, while the one-electron entropy goes from k_B ln 2 to zero due to Zeeman splitting. At higher fields, the one-electron ground state remains non-degenerate while the two-electron ground state gains a two-fold degeneracy when the singlet |S> and triplet |T+> states cross. This singlet-triplet crossing is seen in bias spectroscopy data (Fig. 4f) at 8.4 T, and in the appearance of a peak in Delta S_{1->2} at 9 T (Fig. 4a). The discrepancy in the field required to drive the singlet-triplet degeneracy in Figs. 4a and f is attributed to a change in shape of the dot potential, caused by altering the confinement gate voltages, when transitioning from one to two open tunnel barriers. The field-dependent entropy measurement for the 1 -> 2 transition can again be fit using Eq. 4, with probabilities as before for the one-electron states and analogous thermal occupation probabilities for the two-electron singlet and triplet states, where Delta_ST is the singlet-triplet splitting at zero field. From the fit, we find that Delta S_{1->2} at the two-fold degenerate points, B_par = 0 and 9 T, is -(1.01 +/- 0.03) k_B ln 2 and (1.04 +/- 0.04) k_B ln 2, respectively. The extracted g-factor, |g| = 0.47 +/- 0.02, from the peak at B_par = 0 is consistent with the 0 -> 1 transition. At the high-field singlet-triplet degeneracy we find |g| = 0.69 +/- 0.04, an unexpectedly high g-factor that is explained by a shift of the |T0> state with magnetic field, as seen in Fig. 4f and previous work [29].

We conclude with a few notes to encourage the application of this entropy measurement protocol to other mesoscopic systems. The crucial ingredients in achieving the high accuracy reported here were i) the ability to oscillate temperature rapidly enough to avoid 1/f noise, ii) the ability to measure charging transitions without perturbing the localized states, and iii) the fact that the charging transitions were thermally broadened. Criterion iii) enabled the entropy determination purely by asymmetry, without the need to know dT or other measurement parameters accurately, yielding an uncertainty of less than 5%.
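Eq. 4 is easy to evaluate directly. The sketch below computes Delta S(B_par)/(k_B ln 2) for a spin-1/2 with an assumed |g| = 0.44 and an assumed electron temperature of 100 mK; both numbers are placeholders rather than the paper's fitted values.

```python
import numpy as np

def delta_s(b_par, g=0.44, t=0.100):
    """Gibbs entropy of a Zeeman-split spin-1/2 (Eq. 4), in units of kB*ln2.
    b_par in tesla, t in kelvin; muB/kB = 0.6717 K/T."""
    x = g * 0.6717 * b_par / t                  # Zeeman energy over kB*T
    p_up = 1.0 / (1.0 + np.exp(-x))             # p_+ of Eq. 4
    p_dn = 1.0 - p_up
    s = -(p_up * np.log(p_up) + p_dn * np.log(p_dn))
    return s / np.log(2.0)

for b in (0.0, 0.5, 1.0, 2.0):
    print(f"B = {b:.1f} T  ->  dS = {delta_s(b):.3f} kB ln2")
# Starts at 1.000 at zero field and collapses toward 0 once g*muB*B >> kB*T,
# reproducing the shape fitted in Fig. 3a.
```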
With this level of precision, it should be possible, for example, to distinguish the (1/2) k_B ln 2 entropy of a non-Abelian Majorana bound state from the k_B ln 2 entropy of an Andreev bound state at an accidental degeneracy [11,12]. Similarly, the S = (1/2) k_B ln 2 two-channel Kondo state could be clearly distinguished from fully screened (S = 0) or unscreened (S = k_B ln 2) spin states [13].

Methods. The device was built on an AlGaAs/GaAs heterostructure, hosting a 2D electron gas with density and mobility at 300 mK of 2.42 x 10^11 cm^-2 and 2.56 x 10^6 cm^2/(V s), respectively, determined in a separate measurement. Mesas and NiAuGe ohmic contacts to the 2DEG were defined by standard photolithography techniques, followed by atomic layer deposition of 10 nm HfO2 to improve the gating stability in the device. Fine gate structures, shown in Fig. 1a, were defined by electron beam lithography and deposition of 3/18 nm Ti/Au. The measurement was carried out in a dilution refrigerator with a two-axis magnet. The 2DEG was aligned parallel to the main axis, with the second axis used to compensate for sample misalignment. In practice, out-of-plane fields up to 100 mT showed no effect on our data. A retuning of the quantum dot gates was necessary to capture the bias spectroscopy data in Figs. 3d,e and 4e,f. The rightmost gate (Fig. 1a) on the quantum dot was used to tune between the one- and two-lead configurations, for the entropy and bias spectroscopy measurements respectively. This tuning had a significant effect on the shape of the potential well, accounting for variations in parameters such as g and Delta_ST between the two measurement configurations. Charge sensor conductance was measured using a DC voltage bias of 200-350 uV; we find that Joule heating through the sensor does not affect our reservoir temperatures up to V_sens ~ 500 uV. The DC current (I_sens) was measured using an analog-to-digital converter, while the AC current (dI_sens) was measured using a lock-in amplifier. The DC conductance reported here is G_sens = I_sens/V_sens, while the oscillations are defined as dG_sens = dI_sens/V_sens. The temperature of the reservoir was raised above the substrate temperature using I_heat at AC or DC, with the QPC heater set by gate voltages to 20 kOhm. Applying AC current at f_heat = 48.7 Hz yields an oscillating Joule power, P_heat = I_heat^2 R_QPC. To leading order this gives oscillations in temperature, and therefore in dG_sens, at 2f_heat. These are captured by the lock-in amplifier at the second harmonic of I_heat. Except where noted, measurements of Delta S were made at dT ~ 50 mK, although the error bars in Fig. 2 demonstrate that the measurements would have been just as accurate with dT set to 30 mK. The fixed-pressure condition of Eq. 1 is met by working well below the Fermi temperature of the 2DEG, T_F ~ 100 K, where degeneracy pressure dominates [30].

Data Availability. Data generated for, and analyzed in, this study are available at https://github.com/nikhartman/spin_entropy. The repository also contains all code necessary to complete the analysis and create each of the figures in this manuscript.
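The appearance of the entropy signal at the second harmonic follows from P_heat = I_heat^2 R_QPC alone; a quick numerical check (toy amplitude, with f_heat = 48.7 Hz and R_QPC = 20 kOhm as quoted in the Methods) confirms that a sinusoidal current at f_heat produces Joule power, and hence dT, at 2f_heat.

```python
import numpy as np

f_heat, r_qpc = 48.7, 20e3                  # Hz and ohms, as quoted in Methods
t = np.linspace(0.0, 10.0, 100_000, endpoint=False)   # 10 s trace -> 0.1 Hz bins
i_heat = 1e-9 * np.sin(2 * np.pi * f_heat * t)        # heater current, toy amplitude
p_heat = i_heat**2 * r_qpc                            # Joule power, P = I^2 * R

spectrum = np.abs(np.fft.rfft(p_heat - p_heat.mean()))
freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])
print(freqs[spectrum.argmax()])             # 97.4 Hz, i.e. exactly 2 * f_heat
```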
2019-04-22T06:22:28.845Z
2018-03-06T00:00:00.000
{ "year": 2019, "sha1": "68b64322cb1afd09b22bdf18b412c14cc8ef72f2", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1905.12388", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "68b64322cb1afd09b22bdf18b412c14cc8ef72f2", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
220793222
pes2o/s2orc
v3-fos-license
Our House is Our Glassy Castle: Challenges of Pervasive Computing in Private Spaces Modern society is being transformed under the influence of information technologies. The Internet of Things, one of the latest facets of this transformation, is becoming more visible and widespread. We wish to reflect on and discuss the current concerns regarding its expansion. Our particular interests lie in increasing usability and comfort through the unification of IoT protocols and security measures, as well as in addressing privacy concerns and discussing possible changes in the perception of privacy and of the concept of personal space.

The Internet of Things represents a major departure in the history of the Internet. IoT moves it beyond the rectangular screens of desktop computers, tablets and smartphones and helps to power millions of everyday devices, from kettles to home thermostats and light bulbs. According to Business Insider, the number of installed IoT devices is predicted to exceed 20 billion by 2020 [1], based on the current global World Wide Web penetration rate [2] and changes in hardware prices [3]. The Internet of Things is expected to have a significant impact on many, if not all, aspects of our daily living, including public, work and private spaces. IoT aims to change how individuals exist in environments that are "seamlessly equipped" with technologies enabling them to be sensed and controlled remotely through digital networks using computer-based systems, improving their efficiency and providing a variety of economic and social benefits. The future advertised to us by IoT device vendors appears to be unshadowed and clear [4][5]. Is it?

Unification and Security concerns As with any rapidly changing and popular technology or field of human activity, IoT is currently going through a stage of formation and of acceptance by the general public and business. These processes attract the attention of a variety of researchers and organizations, each with their own methods and views on how things should be delivered and deployed [8][9][10][11]. The overall situation is similar to the beginning of the Internet era, when each of the major players tried to introduce and enforce their own vision before they managed to agree on unified standards and approaches. In addition, because of the increased interest in the field, companies strive to introduce their solutions, products and specific innovations within a minimal time frame, while there are still unexplored domains and aspects of human life where smart devices can be used. And again, similarly to the early days of the World Wide Web, modern IoT products mainly concentrate on delivering the intended service, giving a low priority to security, reliability and resilience [6][12]. As a result, this creates a heterogeneous and chaotic situation in the IoT field. What does this mean for us, general users?
• Uncertainty regarding the presence or absence of vulnerabilities in the specific system or device we want to use;
• Inability to receive a seamless and "natural" experience when using similar products from different vendors, due to the absence of a unified protocol;
• Vendor lock-in: a comprehensive experience is achievable only if a single vendor offers a complete product line.
It is worth mentioning that there are open standards that aim to unite all the devices or provide a unified API for third-party systems and devices [10].
However, they are not widely accepted by manufacturers. We are keen to examine and discuss aspects of the aforementioned challenges, their impact on user perception of smart homes, as well as the usability and safety/security issues that can affect trust in IoT, and potential ways to overcome those challenges.

Privacy concerns Another concern, directly linked with the issue of the vanishing agency of the individual, is privacy in the modern IoT landscape. Interconnected everyday objects (IoT devices/things) and environments that are "seamlessly equipped" with technologies are connected to a global (or semi-global, tied to one vendor) network that can, one way or another, receive data from the IoT devices. This is partially because of the impossibility of delivering the same quality of service without a powerful remote backend (server infrastructure), and partially because of the demand from big companies for more personal and specific information about users. Thus, by introducing smart devices and systems into our everyday activity, we willingly provide third parties with a large amount of personal information [13][14]. It goes even beyond the famous postulate: "If you're not paying for it, you become the product" [17]. Those of us who consider our home or car to be primarily personal spaces where we can escape from the outside world (even for a short amount of time) would be reluctant to accept these rules. Additionally, as IoT developers try to attract and engage third-party developers into using their platforms and services, they tend to open access to their services or provide access to "anonymised" data for research purposes [15]. Previous research has shown that, in some cases, aggregation of even anonymised information can still lead to the identification of a person or the disclosure of their information [16]. We wish to query the ways in which we can control information disclosure and manage the exchange process in a more privacy-friendly way.

Perception changing Our perception of privacy and personal space was coined over the course of human history. During the evolution of our society and its historical changes, this perception shifted from Roman times to the 20th century [6], in which the modern concept of these terms crystallized. What if it is time for the perception to change again, as it did before, affected by historical events and movements? What if society, with the overall progress of emerging technologies (such as IoT, social networks and the Internet itself), stands at the origin of a new evolutionary step? What if privacy and agency as we knew them before are outdated today? What if it is time to think about and explore a new notion? Maybe the concept of private space is not relevant anymore. The fact that IoT devices somehow collect and process data, and possibly make it accessible to others, may be just a step towards rethinking these norms and concepts. A general user is hypothetically law-abiding and does not have anything to hide. Moreover, in case of an emergency (accident, health problem), a monitoring IoT system could request emergency services or other types of help. Maybe we need to reconsider our beliefs? Maybe it is time for the brave new world? This particular point is an interesting one and, in our opinion, worth discussing in more depth during the workshop. All the smart devices and sensors around us collect a lot of information and send it to different companies and organisations.
It could be portrayed as a more efficient way of preserving safety and security. You, as a user, always know that there are big companies or organisations watching and analysing your habits and actions, with the potential to make your life easier and more efficient, as well as safer, if only you can overcome this ancient need to preserve privacy and personal space.

Biographies Dinislam Abdulgalimov is a Doctoral Trainee in Digital Civics at Open Lab, Newcastle University, UK. His academic background is in the field of information security. He is interested in community collaboration and the possibility of using crowd-sourcing techniques for gathering operative information, exploring privacy versus trust aspects in such systems, and examining the possibility of using peer-to-peer and decentralised models without exposing identity. Timur Osadchiy is a Doctoral Trainee at Open Lab, Newcastle University, UK. His academic background is in the fields of computer science and applied cyber security; he works on developing user-friendly communication protocols within IoT, aiming to make it seamless for people with no technical background.
2020-03-13T20:27:46.537Z
2020-07-27T00:00:00.000
{ "year": 2020, "sha1": "8dc5a2cbfa290d502f43c7f2f6d2e21a3b199234", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "8dc5a2cbfa290d502f43c7f2f6d2e21a3b199234", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
30521594
pes2o/s2orc
v3-fos-license
Unique Organization of the Nuclear Envelope in the Post-natal Quiescent Neural Stem Cells Summary Neural stem cells (B1 astrocytes; NSCs) in the adult ventricular-subventricular zone (V-SVZ) originate in the embryo. Surprisingly, recent work has shown that B1 cells remain largely quiescent. They are reactivated postnatally to function as primary progenitors for neurons destined for the olfactory bulb and some corpus callosum oligodendrocytes. The cellular and molecular properties of quiescent B1 cells remain unknown. Here we found that a subpopulation of B1 cells has a unique nuclear envelope invagination specialization similar to envelope-limited chromatin sheets (ELCS), reported in certain lymphocytes and some cancer cells. Using molecular markers, [3H]thymidine birth-dating, and Ara-C, we found that B1 cells with ELCS correspond to quiescent NSCs. ELCS begin forming in embryonic radial glia cells and represent a specific nuclear compartment containing particular epigenetic modifications and telomeres. These results reveal a unique nuclear compartment in quiescent NSCs, which is useful for identifying these primary progenitors and studying their gene regulation.

INTRODUCTION Neural stem cells (NSCs) persist in the ventricular-subventricular zone (V-SVZ) in the walls of the lateral ventricles of many adult mammals. This neurogenic niche is composed of NSCs (B1 astrocytes) that divide slowly to give rise to transit-amplifying cells (C cells), which in turn generate neuroblasts (A cells) that migrate tangentially to the olfactory bulb (Alvarez-Buylla et al., 2001; Lois and Alvarez-Buylla, 1994). B1 cells are characterized by their highly polarized morphology, which presents a thin apical process that contacts the lateral ventricle (LV) and cerebrospinal fluid (CSF). Moreover, they also exhibit a basal process ending on blood vessels (Doetsch et al., 2002; Mirzadeh et al., 2008; Tavazoie et al., 2008). The apical surface of B1 cells is surrounded by the large apical surfaces of ependymal cells in a pinwheel configuration (Mirzadeh et al., 2008). NSCs can exist as quiescent/slowly dividing (qNSCs) or activated/dividing (aNSCs) primary progenitors. It has been suggested that these two populations represent two functionally distinct types of NSCs, which differ in their cell-cycle status and molecular properties (Codega et al., 2014; Llorens-Bobadilla et al., 2015; Mich et al., 2014; Morshead et al., 1994). aNSCs maintain the expression of glial fibrillary acidic protein (GFAP), CD133, epidermal growth factor receptor (EGFR), and Nestin, while qNSCs preserve the expression of GFAP and CD133, but not of EGFR and Nestin. Furthermore, qNSCs do not express proliferation markers and survive infusion of cytosine-beta-D-arabinofuranoside (Ara-C), which eliminates the aNSC population (Codega et al., 2014; Doetsch et al., 1999; Morshead et al., 1994; Pastrana et al., 2009). Recently, it has been suggested that qNSCs have an embryonic origin; pre-B1 cells are produced during mid-fetal development (embryonic day 13.5 [E13.5] to E15.5), remaining relatively quiescent until reactivated postnatally (Fuentealba et al., 2015; Furutachi et al., 2015). The maintenance of quiescence is thought to be directly correlated with the regulation of gene expression, which can be observed as large heterochromatic regions likely corresponding to silenced genes (Capelson and Corces, 2012).
Previously, it has been suggested that a distinctive nuclear morphology is linked to the maintenance of pluripotency (Gorkin et al., 2014; Ito et al., 2014; Sexton and Cavalli, 2013), and possibly associated with quiescence. However, despite NSC chromatin presenting peculiar topographical configurations (Krijger et al., 2016; Peric-Hupkes et al., 2010; Phillips-Cremins et al., 2013), the relationship between chromatin organization and nuclear morphology remains poorly understood. Previous studies have shown that murine and human fetal V-SVZ B cells have irregular nuclei that exhibit unusual nuclear envelope (NE) invaginations (Capilla-Gonzalez et al., 2014; Doetsch et al., 1997; Guerrero-Cazares et al., 2011). Here we have studied the fine ultrastructure and three-dimensional (3D) organization of these invaginations and show that they correspond to envelope-limited chromatin sheets (ELCS). These structures were originally described by Davies and Small (1968) in neutrophils, and named envelope-limited sheets (ELS). ELS have an unusual type of nuclear morphology characterized by the presence of a sheet of chromatin (~30 nm thick) bound on two sides by the inner nuclear membrane (INM), creating a highly reproducible and regular "sandwich" of 40 nm thickness (Davies and Small, 1968). These structures, later called ELCS, are associated with the NE proteins Lamin B, Lamin B receptor (LBR), and Lap2 (Ghadially, 1997; Olins et al., 1998; Olins and Olins, 2009). Interestingly, ELCS have only been reported in certain lymphocytes and some cancer cells, including the CNS neuroectodermal tumor medulloblastoma (Tani et al., 1971). Furthermore, we show here that V-SVZ B1 cells with ELCS correspond to qNSCs in mice.

A Subset of B Cells Has Nuclear Envelope-Limited Chromatin Sheets Unlike other V-SVZ cell types, B cells in the V-SVZ present an irregular nucleus and, occasionally, NE invaginations (Capilla-Gonzalez et al., 2014; Doetsch et al., 1997; Guerrero-Cazares et al., 2011). These nuclear structures show a single sheet of chromatin bound on two sides by the INM and outer nuclear membrane (ONM), resembling the nuclear ELCS previously described in neutrophils (Olins and Olins, 2009). However, whether these nuclear ELCS are present in all B cells or in a distinct subpopulation has not been studied. To improve the characterization of B cells containing ELCS, we examined the V-SVZ of P60 mice by transmission electron microscopy (TEM). We found that in single TEM ultrathin sections, ELCS were frequently present in B cells (11.9% ± 0.4%; 1,052 B cells including B1 and B2 cells, defined as cells with or without an apical ending, respectively; n = 4) (Figures 1A and 1B). To better estimate the proportion of B cells with ELCS, we used serial section 3D reconstruction of entire nuclei. Almost half of the analyzed B cells (46% ± 3%; 160 B cells, n = 4) presented ELCS. These astrocytes were found forming groups of two to four cells along the whole length of the lateral walls of the LVs, but not in the adjacent striatum (0/102; n = 3). We did not find differences in the percentage of B cells with ELCS among different rostrocaudal levels (anterior 12.67%, medial 11.00%, posterior 10.34%; p > 0.05, not significant; bregma 1.18-0.02). In serial reconstructions, no ependymal cells, C cells, or A cells exhibited this type of NE (83, 20, and 140 studied cells at post-natal day 60 [P60]; n = 4) (Figures S1A-S1D).
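The gap between the single-section detection rate (11.9%) and the serial-reconstruction rate (46%) is roughly what random-plane sampling predicts: a thin random section through a nucleus simply misses most ELCS. The one-dimensional toy model below assumes a 5 µm nuclear extent and a 1.2 µm ELCS span (mid-range of the 0.4-2 µm lengths reported below); these modeling choices are illustrative and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
D_NUC, L_ELCS, SECTION = 5.0, 1.2, 0.07   # nucleus, ELCS span, section thickness (um)
N = 100_000

sheet_lo = rng.uniform(0.0, D_NUC - L_ELCS, N)   # ELCS position along the cut axis
cut = rng.uniform(0.0, D_NUC, N)                 # random ultrathin section position
hit = (cut + SECTION > sheet_lo) & (cut < sheet_lo + L_ELCS)
print(hit.mean())   # ~ (L_ELCS + SECTION) / D_NUC ~ 0.25, vs 11.9/46 ~ 0.26
```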
In addition, B cells with ELCS displayed one or two nuclear ELCS with a length of 0.4-2 µm (studied in 50 cells; n = 3), encompassing ~0.20% of the total nuclear volume (for image sequence reconstruction, see Movie S1). The ELCS ONM showed ribosomes attached to it and did not contain nuclear pores. Heterochromatin clumps were usually associated with nuclear ELCS endpoints (Figures 1C and 1D). Interestingly, 90.1% ± 0.1% of ELCS (80 B cells, n = 4) were internalized within the irregular nucleus (Figures 1E and 1F). To better characterize ELCS in B cells, we performed pre-embedding immunogold stainings and independently examined the expression of brain lipid-binding protein (BLBP), GFAP, Nestin, EGFR, and Glast, which differentially mark B cells. We found that ELCS were present in GFAP+, BLBP+, and Glast+ B cells (Figures 1G-1M). The majority of cells with ELCS showed no expression of EGFR (1/20), and none were positive for Nestin. This expression profile overlaps with that recently described for qNSCs (Codega et al., 2014). B1 cells (contacting the CSF in the pinwheel center) are relatively quiescent or slow-dividing NSCs derived from radial glia (Fuentealba et al., 2015; Kriegstein and Alvarez-Buylla, 2009). To determine whether B1 cells with an identified apical ending present nuclear ELCS, we performed serial section reconstructions using confocal microscopy of the ventricular wall whole-mount preparation (Mirzadeh et al., 2010). In serial confocal microscopy sections of V-SVZ whole mounts, the apical terminations of GFAP+ B1 cells were localized in the pinwheel center and tracked up to cell nuclei delimited by Lamin B and DAPI. The majority (83% ± 1%; 104 pinwheels; n = 13) of B1 cells with apical endings on pinwheels had an irregular NE and bright Lamin B labeling (Figures 2A-2H). Analyzing serial whole-mount ultrathin sections by TEM (20 cells; n = 2), we confirmed that the deep nuclear invaginations of B1 cells corresponded to the ELCS zone (Figures 2J-2L). Hence, we conclude that a subpopulation of B1 cells contains ELCS. Interestingly, single TEM sections revealed that ELCS were also present in a small subset of subgranular zone astrocytes (1.5% ± 0.6%; P60; 268 cells; n = 3), identified as the hippocampal dentate gyrus NSCs (Klempin and Kempermann, 2007; Seri et al., 2004). To confirm that this small fraction of astrocytes corresponded to radial astrocytes, we performed post-embedding GFAP and BLBP immunostainings in semithin sections (Figure S2). We found that Glast- or BLBP-expressing cells with radial morphology exhibited ELCS in their nuclei. These results suggest that a subpopulation of NSCs presents a unique NE.

ELCS in a Subset of Dormant B Cells EGFR expression has been associated with the activation of B1 cells. We found that 13.3% ± 0.6% of B1 cells were EGFR+ (30 pinwheels; n = 4) and did not display ELCS, while Lamin B expression was weak (9/30) or absent (21/30) (Figures 3A and 3B). Therefore, we hypothesized that ELCS could be associated with quiescent B1 cells. To test this hypothesis, we injected a group of P60 mice (n = 3) with [3H]thymidine (3H-Thy) (four injections, every 2 hr) and euthanized them 2 hr after the last injection, to label the proliferating cells (Figure 3C). 3H-Thy-labeled cells were mapped in serial 1.5-µm semithin sections sampled at different rostrocaudal levels of the V-SVZ. One hundred and eighty 3H-Thy-labeled cells were then resectioned for serial TEM analysis.
Thirty corresponded to B cells, displaying irregular contours, light cytoplasm, and intermediate filaments. None of these 30 B cells had a nuclear ELCS (Figures 3D-3F), suggesting that actively dividing B cells do not contain ELCS. We next examined whether B cells that have undergone cell division can develop nuclear ELCS 2 months later. Mice (n = 3) were injected with 3H-Thy (as mentioned above), and label-retaining cells (LRCs) were studied 2 months later under TEM (Figure 3C). LRCs were rare; of >8,000 cells studied, only 20 labeled cells were observed. All LRCs had characteristics of B cells. Three-dimensional nuclear reconstruction of these LRCs showed no evidence of nuclear ELCS (Figures 3G-3I). This suggests that ELCS do not form in B cells after having divided 2 months earlier. Antimitotic treatment with Ara-C induces the death of actively dividing V-SVZ progenitors, but qNSCs are spared (Doetsch et al., 1999; Morshead et al., 1994; Pastrana et al., 2009). We therefore tested whether cells retained 12 hr after a 6-day Ara-C treatment had ELCS in their nuclei. After Ara-C, fewer B cells were observed in the V-SVZ (from 80 ± 5 to 32 ± 10; t test, p < 0.05; n = 3), but the percentage of B cells with ELCS increased (from 11.9% ± 0.4% to 18% ± 1%; t test, p < 0.05; 703 cells, n = 3). This indicates that B cells with ELCS are spared by Ara-C treatment. Fourteen days after Ara-C treatment, the percentage of B cells with ELCS was reduced (15% ± 2%; 933 cells, n = 3). These results indicate that B cells with ELCS survive Ara-C treatment, consistent with the interpretation that they correspond to qNSCs.

Nuclear ELCS Are Not Observed In Vitro Previous studies have shown that V-SVZ progenitors can be grown in vitro as neurospheres or in monolayer cultures (Reynolds and Weiss, 1992; Scheffler et al., 2005). Some of these in vitro NSCs maintain pluripotency and self-renewal potential. We investigated whether cells grown as neurospheres or in monolayer cultures, supplemented with EGF and fibroblast growth factor, presented ELCS in their nuclei. We studied neurosphere cells (1,550 cells, ten neurospheres; n = 5) and monolayer cells (1,000 cells, three wells; n = 5), but did not find any ELCS under either culture condition. As retinoic acid stimulation has been shown to promote ELCS formation in neutrophils (Olins et al., 1998), we treated neurosphere cultures with 2, 5, and 10 µM retinoic acid but could not find any evidence of ELCS formation in this in vitro NSC condition (500 cells; n = 6).
We conclude that NSCs grown in vitro as neurospheres or in monolayer cultures do not exhibit ELCS.

Nuclear ELCS Begin Forming in the Embryo Subsequently, we investigated at which developmental stage nuclear ELCS first appear. During embryonic development, the walls of the LVs are lined by radial glial cells (RGCs), some of which later give rise to adult V-SVZ B cells (Kriegstein and Alvarez-Buylla, 2009; Merkle et al., 2004). Using TEM, we serially sectioned and scanned RGC nuclei to look for the presence of ELCS at four embryonic stages (E10.5, E14.5, E16.5, and E18.5) and at four post-natal ages (P0, P15, P30, and P60). Although ELCS were rarely observed in RGC nuclei at E14.5 (0.1 ± 0.1 cells with ELCS/mm; n = 3), their number increased progressively through the embryonic stages to an average of 31 ± 9 B cells with ELCS/mm at P15 (n = 3), but then decreased at P60 (Figure 4A). These observations indicate that nuclear ELCS appear as early as E14.5, and are preserved in a subpopulation of adult V-SVZ B cells. To further investigate whether cells with nuclear ELCS at post-natal stages were directly derived from pre-B1 cells (Fuentealba et al., 2015; Furutachi et al., 2015), we injected timed pregnant mice at the E14.5 stage with 3H-Thy (two injections 12 hr apart) and euthanized the offspring at post-natal stages P0 and P21 (Figure 4B). Serial 1.5-µm sections were studied at different rostrocaudal levels of the V-SVZ, and 3H-Thy-labeled cells close to the LV were serially reconstructed. Interestingly, we found that more than half of the radial glial (RG) LRCs at P0 (10 out of 15 studied cells; n = 3) and all labeled B cells at P21 (13 out of 13 studied cells; n = 3) presented ELCS in their nuclei (Figures 4C-4J, respectively). These results suggest that ELCS are assembled in pre-B1 cells in the embryo and remain in the quiescent progenitors during the transition from RG to B cells. Next, we studied the expression of p57, a recently described marker present in dividing embryonic NPCs and adult V-SVZ qNSCs (Furutachi et al., 2015). This protein is a component of the CIP/KIP family of cyclin-dependent kinase (CDK) inhibitory proteins, related to the blockade of cell-cycle progression by binding and inhibiting cyclin/CDK complexes of the G1 phase (Tury et al., 2012). We analyzed the expression of p57 in the V-SVZ of embryonic and P60 mice using confocal microscopy. B cells showing NE folding, consistent with the presence of ELCS, were positive for p57 (25/30 at P60 and 18/24 at E18) (Figures 4K-4N and S3). Taken together, these results indicate that nuclear ELCS are a hallmark of V-SVZ qNSCs derived from pre-B1 cells.

[Figure 4 legend, in part: (A) Cells with ELCS first appear in RGCs at E14.5 with 0.14 ± 0.14 cells/mm (n = 3 mice), but their number increases progressively to an average of 30.63 ± 9.33 B cells with ELCS/mm at P15 (n = 3 mice) and then decreases at P60. Over the total population of post-natal B cells, the percentage of B cells with ELCS dramatically decreases from 11.98% ± 0.43% at P60 (1,052 cells; n = 3 mice) to 4.24% ± 0.45% (466 cells; n = 3 mice) (test for trend, p < 0.001). Error bars represent the mean ± SEM. (B) Diagram describing 3H-Thy injection protocols. E14.5 timed pregnant mice received two 3H-Thy injections, and the V-SVZ of the offspring was analyzed at P0 and P21. (C-F) Autoradiography of the V-SVZ of a P0 mouse. (C) Labeled cells were identified on toluidine blue-stained semithin sections (arrow). (D-F) TEM micrographs of the 3H-Thy-labeled cell in (C). Note that this cell shows RG cell characteristics, contacts the LV, and shows nuclear ELCS (arrows). (G-J) Autoradiography of the V-SVZ of a P21 mouse. (G) Labeled cells were identified on toluidine blue-stained semithin sections (arrow). (H-J) Electron micrographs of the 3H-Thy-labeled cell shown in (G). Note that this cell is characterized as a B cell and shows nuclear ELCS (arrows). (K-N) Confocal images of a V-SVZ whole mount immunostained for GFAP (white), Lamin B (green), and p57 (red) to visualize nuclear expression of p57 in B1 cells with ELCS. Note that the insets (N) show the characteristic ELCS zone (white arrows) and co-localization with p57. Scale bars, 10 µm (C, D), 5 µm (K, M), 2 µm (E, H, I, N), and 500 nm (F, J).]

Formation of ELCS during Development Since nuclear ELCS start forming in some RGCs at E14.5, we performed a serial ultrastructural analysis of RGC nuclear morphology by TEM at different stages of embryonic development.
Punctual detachments of the INM and ONM were observed in some RGCs (from E14.5 to P0). This resulted in an increased perinuclear space in the detachment zone (Figures 5A-5E). On tangential sections, this NE detachment appeared as a perinuclear space sphere. Hence, we refer to this structure as the "nuclear envelope ring" (NE ring). The ONM was clearly distinguished by its attached ribosomes and the INM by the presence of nuclear lamina. To characterize the 3D organization of NE rings and to investigate their possible relation with nuclear ELCS, we performed P0 V-SVZ 3D reconstructions by TEM. As NE ring sections were stacked, we observed that the ONM aligned to the INM, reducing the perinuclear space and likely generating ELCS (six cells) (Figures 5B-5D and S4). Importantly, as in ELCS, the nuclear rings were partially lined by ~30-nm chromatin fibers. These results were supported by the fact that the NE ring perinuclear space was delimited by the INM marker Lap2 and by heterochromatin fibers labeled by 5-methylcytosine (5mC) (Figures 5F and 5G). We also confirmed the expression of Lamin B, a main component of the B cell NE, in the periphery of the NE rings (Figure 5H). In addition, to support the idea that ELCS formation is a dynamic process that requires components of the nuclear cytoskeleton, we studied actin expression in NE rings (Figure 5I). Remarkably, we observed that actin was highly expressed in the perinuclear space of NE rings compared with the neighboring NE and nucleoplasm. These observations suggest that NE rings are a transitory NE structure related to the formation of nuclear ELCS during the stages E14.5 to P15.

[Figure 5 legend, in part: ... (see Figure S1 for the complete series of sections). (E) TEM quantification of the number of cells with ELCS (blue) or NE rings (green) per millimeter (n = 3 mice). Note that the number of cells with NE rings is 0 at P15, which coincides with the maximum peak of cells containing ELCS at this age (30.21 cells with ELCS/mm; n = 3 mice). Error bars represent the mean ± SEM. (F-I) Pre- and post-embedding immunogold staining for Lap2 (F), 5mC (G), Lamin B (H), and actin (I) in E18 VZ RG cells. Arrowheads indicate gold particles associated with each staining. Scale bars, 500 nm (A, B; applies also to C, D) and 200 nm (F-I). C, cytoplasm; N, nucleus; PNS, perinuclear space.]

ELCS as a Nuclear Compartment The NE and the nuclear lamina play important roles in cell-cycle regulation as well as in genome and cytoskeletal organization (Malhas et al., 2011). They harbor tissue-specific resident proteins, extensively interact with chromatin, and contribute to spatial genome organization and regulation of gene expression (Brachner and Foisner, 2011; de las Heras and Schirmer, 2014). Since nuclear ELCS are mainly formed by the NE, we decided to study whether components of the NE and nuclear lamina are expressed in ELCS. We performed immunogold and immunofluorescence detection of the INM proteins Lap2 and LBR, and of the nuclear lamina intermediate filament Lamin B. Congruently, we found that all of these proteins were widely distributed along the NE and were also found in ELCS (Figures 6A-6C, S5A, S5B, S5D, and S5E). However, Lamin B and Lap2 were significantly more highly expressed within ELCS (t test, p < 0.01 for Lamin B; p < 0.05 for Lap2). We next determined whether the ultrastructural 30-nm condensed chromatin fibers within nuclear ELCS expressed epigenetic markers related to heterochromatin.
We performed pre- and post-embedding immunogold staining for 5mC, trimethyl-histone H3 (Lys27) (H3K27me3), trimethyl-histone H3 (Lys9) (H3K9me3), and heterochromatin protein 1 (HP1), and found that while 5mC and H3K9me3 were expressed in ELCS chromatin fibers (Figures 6D, 6E, S5C, S5F, and S5G), H3K27me3 and HP1 were present in the nuclear lobes but not within ELCS (Figures 6F-6I). Given that ELCS of B cells in the V-SVZ contained heterochromatin with specific epigenetic modifications, we examined whether ELCS heterochromatin could correspond to telomeric heterochromatic domains. It has indeed been proposed that nuclear telomere positioning may influence cell longevity in quiescence (Guidi et al., 2015). TRF2, a telomere-sheltering component thought to mediate telomere binding to lamins (Gonzalo and Eissenberg, 2016), was preferentially located at the NE (Figure 6J). Intriguingly, TRF2 expression was significantly higher within ELCS, or in the ELCS-endpoint heterochromatin, than in the rest of the nucleus (t test, p < 0.0002; Figures 6K-6N, S6A, and S6B). Moreover, we investigated the nuclear telomere distribution in B cells using fluorescence in situ hybridization with peptide nucleic acid probes (PNA-FISH). We combined Telomere-C probe detection with Lamin B and GFAP expression, and found that telomeres were enriched in the ELCS zone (t test, p < 0.05; Figures 6P-6R, S6C, and S6D). Altogether, these results suggest that ELCS represent a specific nuclear compartment housing particular heterochromatin domains.

DISCUSSION In this study, we show that a subset of adult V-SVZ B cells has NE-limited chromatin sheets, or ELCS. Using molecular markers, 3H-Thy, and the antimitotic drug Ara-C, we found that B1 cells with ELCS correspond to qNSCs. TEM analysis revealed that nuclear ELCS start to appear in RGCs in the embryo around E14.5 and are present throughout adult life in a subpopulation of V-SVZ B cells. 3H-Thy birth dating suggests that quiescent B cells with ELCS have an embryonic origin. We also detected the expression of epigenetic markers associated with repression, and of telomeres, within ELCS. This structure may represent a specific nuclear compartment associated with quiescent pre-B and B cells. Our work suggests that this unique compartment of the NE is associated with quiescence and, in particular, with the subpopulation of progenitors that are set apart during embryonic development to function in the juvenile and adult brain as NSCs. We found that a subset of V-SVZ B cells displays a nuclear structure characterized by a thin chromatin layer bound to the inner and outer membranes. These distinct structures highly resemble the previously described ELCS in other cell types (Davies and Small, 1968; Olins et al., 2008). The ELCS we describe in post-natal qNSCs are similar to subtype 1-1 ELCS (a single chromatin sheet bound on two sides by cytoplasm) (Olins and Olins, 2009). In the CNS, ELCS have been observed in the developing human retina (Popoff and Ellsworth, 1969) and in subcallosal zone cells during post-natal development (Wittmann et al., 2009). Outside the CNS, the presence of ELCS has mainly been reported in myeloid, lymphoid (Davies and Small, 1968; Olins et al., 1998), and cancer cells (Mollo et al., 1969; Tani et al., 1971). The functional significance of ELCS remains elusive.
It has been proposed that ELCS could facilitate neutrophil functions (Rowat et al., 2013) and may also be part of a developmental program to shut off gene activity during terminal differentiation (Sanchez and Wangh, 1999). Remarkably, tumor cells with ELCS survive radiation and antimitotic treatments (Ahearn et al., 1967; Erenpreisa et al., 2002; Stalzer et al., 1965), suggesting that ELCS may be present in quiescent cancer cells. Further research is needed to determine the function of ELCS. Our observations are in line with the idea that ELCS are linked to genomic rearrangements associated with quiescence in NSCs. Adult V-SVZ NSCs are a heterogeneous population of primary progenitors that can exist in either a quiescent or an activated state (Codega et al., 2014; Llorens-Bobadilla et al., 2015; Morshead et al., 1994). We found that B cells with ELCS are GFAP+, BLBP+, Glast+, Nestin-negative, and EGFR-negative, do not incorporate 3H-Thy, and survive Ara-C treatment. These data suggest that ELCS are present in qNSCs. In addition, 83% of B1 cells have nuclear ELCS, whereas the rest present a more spherical nucleus devoid of ELCS. This is consistent with previous observations reporting that 11.4% ± 1.3% of B1 cells are EGFR+ and correspond to aNSCs (Codega et al., 2014), while about 8.6% are actively dividing (Ponti et al., 2013). Transcriptomic analyses have suggested that qNSCs or dormant cells enter a primed-quiescent state before activation (Llorens-Bobadilla et al., 2015; Shin et al., 2015). However, due to the dynamic nature of this process, this intermediate state has not yet been well characterized in molecular or morphological terms. At present, we cannot conclude whether B cells with ELCS include dormant qNSCs or only primed-quiescent cells, or a subpopulation of these. Future studies might help to unravel this question. We also observed that, 2 months after 3H-Thy incorporation, none of the V-SVZ LRCs exhibited ELCS in their nuclei. The number of cells exhibiting ELCS greatly decreased with age. Consistently, other reports have shown a decrease in neurogenic potential with age (Bouab et al., 2011; Capilla-Gonzalez et al., 2014; Encinas et al., 2011). Furthermore, a recent study using barcode lineage tracing indicates that new neurons arise from distinct B1 cohorts formed in the embryo (Fuentealba et al., 2015), suggesting that B1 cells become depleted with age. The observed decrease in the number of V-SVZ cells with ELCS could be associated with a depletion of NSCs in this neurogenic niche. We could not find any cells with ELCS under TEM when V-SVZ progenitors were expanded in vitro as neurospheres or in monolayer cultures. Previous studies have shown that cultured NSCs are mainly derived from actively dividing cells in vivo (Codega et al., 2014; Doetsch et al., 2002; Mich et al., 2014), which, as we showed, lack ELCS. Nevertheless, it is also possible that the induction of qNSC proliferation in culture could result in the disassembly of ELCS. Recently, it has been suggested that yeast quiescent cells that sustain long-term viability form a discrete subcompartment of telomeric silent chromatin (Guidi et al., 2015). This encouraged us to investigate whether ELCS are enriched in telomeric components. Using stainings for TRF2 and telomere FISH, we found that some telomeres are preferentially located within ELCS and in their immediate proximity.
Further evidence supports the idea that telomere-associated proteins contribute to the regulation of cellular proliferative capacity (Blasco, 2002; Grammatikakis et al., 2016). Previous studies have also shown that telomeres are associated with lamins and lamin-associated proteins such as Lap2, and are rich in epigenetic modifications, as we confirm here for H3K9me3 (Gonzalo and Eissenberg, 2016). Based on these data and the presence of telomeres in ELCS and their proximity, we propose ELCS as a nuclear compartment for specific heterochromatic domains related to the quiescent state of NSCs in the V-SVZ. As the molecular components of ELCS are identified, new studies on the stability of these nuclear structures will become possible. Furthermore, components of the ELCS may be employed for lineage tracing of cells within the NSC lineage.

Animal Samples

Mouse maintenance and experimental procedures were approved by the Committee for Animal Welfare of the University of Valencia (2015/VSC/PEA/00068), following the guidelines of EC Directive 2010/63/UE. Wild-type CD1 mice were obtained from Charles River Laboratories.

[³H]Thymidine Administration

To identify proliferating cells by TEM, we administered four intraperitoneal injections of ³H-Thy at 2-hr intervals to adult mice (1.67 µL/g body weight, specific activity 5 Ci/mmol; PerkinElmer), with perfusion 2 hr after the last injection (n = 3). To detect LRCs, we injected ³H-Thy as above, but perfused the animals after a 2-month survival period (n = 3). For qNSC labeling during the embryonic stages, E14.5 timed pregnant mice received two intraperitoneal injections of ³H-Thy (3.34 µL/g body weight, specific activity 5 Ci/mmol; PerkinElmer), and the offspring were perfused at P0 (n = 4) and P21 (n = 4).

Ara-C Infusion

Mouse brains were infused for 6 days with Ara-C (Sigma) using osmotic pumps (Alzet). The V-SVZ was dissected 0 hr after treatment for whole-mount analysis, and 12 hr and 14 days after treatment for TEM analysis and quantifications (n = 3).

Transmission Electron Microscopy

For TEM, mice were fixed as described in Supplemental Experimental Procedures. Brains were rinsed in 0.1 M phosphate buffer (PB) and cut into 200-µm sections. Sections were post-fixed in 2% osmium tetroxide, dehydrated, and embedded in Durcupan resin (Fluka; Sigma-Aldrich). Semithin sections (1.5 µm) were cut with a diamond knife and stained with 1% toluidine blue for light microscopy. Ultrathin sections (70-80 nm) were cut, stained with lead citrate, and examined under an FEI Tecnai G2 Spirit transmission electron microscope (FEI Europe) using a digital camera (Morada Soft Imaging System; Olympus). For pre- and post-embedding immunogold staining, mice were perfused with 4% paraformaldehyde (PFA)/0.5% glutaraldehyde. Pre-embedding immunogold staining was carried out as previously described (Sirerol-Piquer et al., 2012). Post-embedding immunogold staining is described in Supplemental Experimental Procedures.

Autoradiography

Brains injected with ³H-Thy were processed for TEM as described above. Subsequently, V-SVZ semithin sections were dipped in autoradiography emulsion (Carestream Autoradiography Emulsion, Type NTB), dried in the dark, and stored at 4 °C for 4 weeks (Doetsch et al., 1997). Autoradiography was developed using standard methods and counterstained with 1% toluidine blue. ³H-Thy-labeled nuclei close to the LV were identified in semithin sections.
For a cell to be considered labeled, six or more silver grains had to be present over the nucleus, and the nucleus had to be labeled in at least three consecutive serial sections. All consecutive sections showing labeled cells were selected under a light microscope (Eclipse; Nikon), re-embedded, and ultrathin-sectioned for TEM serial reconstruction. The number of ³H-Thy-labeled cells studied is detailed in Supplemental Experimental Procedures.

Immunohistochemistry

Primary and secondary antibodies were incubated in PB with 0.2% Triton X-100, 5% normal goat serum, and 10% casein. Confocal images were taken on an FV1000 microscope, analyzed with the Olympus FV1000 software, and processed with Adobe Photoshop. Fluorescence quantifications were carried out as mean gray values of the studied areas using FIJI software (Schindelin et al., 2012). An antibody list is provided in Supplemental Experimental Procedures.

In Vitro Assays

Mouse V-SVZ neurosphere cultures and monolayer cultures were carried out as described in Supplemental Experimental Procedures. Cells obtained from these cultures were fixed with 3% glutaraldehyde and processed for TEM.

Quantification and Nuclear Three-Dimensionalization

Identification and quantification of V-SVZ cells under TEM was performed according to the ultrastructural characterization described in Doetsch et al. (1997). For quantifications of RGCs with ELCS and/or with NE rings at embryonic stages (Figures 4A and 5E), cells within the VZ (the first 40 µm adjacent to the ventricle lumen) were studied. For 3D nuclear reconstructions of V-SVZ cells, we photographed every ultrathin section (Figure 1E; see also Movie S1). Digital electron micrographs from each level were aligned with the FIJI TrakEM2 software and rendered with the Reconstruct software (Fiala, 2005).

Statistics

All results shown in the graphs are expressed as mean ± SEM. The means of experimental groups and fluorescence intensities were compared by unpaired two-tailed Student's t test. The decrease in B cells with ELCS with aging was evaluated by a trend test. All tests were performed using Prism 7 software (GraphPad). Differences were considered significant at p < 0.05. Supplemental Information includes Supplemental Experimental Procedures, six figures, and one movie, and can be found with this article online at http://dx.doi.org/10.1016/j.stemcr.2017.05.024.
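As a minimal illustration of the comparisons described in the Statistics section above, the following Python sketch reproduces an unpaired two-tailed Student's t test and one possible form of trend test. Note that Prism 7 was the software actually used; the Python translation, the linear-regression implementation of the trend test, and all numbers shown are our own illustrative assumptions, not the study's data.

import numpy as np
from scipy import stats

# Hypothetical fluorescence intensities (mean gray values) for two regions
# of the same nuclei; group means were compared by an unpaired two-tailed
# Student's t test in the study.
within_elcs = np.array([112.4, 98.7, 105.1])    # placeholder values
rest_of_nucleus = np.array([61.2, 58.9, 70.3])  # placeholder values
t_stat, p_value = stats.ttest_ind(within_elcs, rest_of_nucleus)  # two-sided by default
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# The age-related decrease in B cells with ELCS was evaluated by a trend
# test; a simple linear regression of percentage on age is one way to test
# for a monotonic trend.
ages_months = np.array([2, 6, 12, 18])             # hypothetical ages
pct_with_elcs = np.array([30.0, 22.5, 14.0, 8.5])  # hypothetical percentages
trend = stats.linregress(ages_months, pct_with_elcs)
print(f"slope = {trend.slope:.2f} %/month, p = {trend.pvalue:.4f}")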
MUSLIM FRIENDLY TOURISM AND WESTERN CREATIVE TOURISM: THE CONCEPTUAL INTERSECTION ANALYSIS

With the present boom in and spread of Muslim friendly destinations, extensive research has been carried out on this concept. However, these works have not adequately addressed the issue of creativity, which is widely discussed in the current Western tourism literature under the label of creative tourism. This research argues that the current components of the Muslim friendly destination are inadequate, particularly in terms of creativity. The aim of this paper is to highlight the concept of creativity embodied in the theoretical and empirical research on Muslim friendly tourism. To achieve this, a systematic literature review and content or document analysis were used for data collection and analysis. The findings indicate that this study represents the initial stage of understanding the concept of creativity from the perspective of Muslim friendly tourism. In conclusion, by closely examining the conceptual intersection between Muslim friendly tourism and Western creative tourism, this paper sheds new light on the rarely acknowledged idea of creativity, which is useful for business practitioners in the Muslim friendly tourism industry seeking to attract more of the Muslim market.

INTRODUCTION

The Muslim population around the world is growing rapidly. In reality, the trend of Muslim friendly destinations is spreading quickly in most Muslim countries. As a result, more Muslims are eager to travel to other destinations, whether in Muslim-majority or non-Muslim countries. In the tourism industry itself, the potential of the Muslim market is driven by an increasing interest in the Muslim friendly tourism industry (Mohsin, Ramli, & Alkhulayfi, 2016). At the same time, the discussion on Muslim friendly tourism is gaining popularity in the academic world, for example through journals, conferences, and symposia (Faiza Khan, 2017). Interestingly, the Pew Research Center (2015) has projected that by 2030, Muslims will make up 26.4% of the world's total population. This significant number is a good indicator for industry practitioners to invest in this potential market. For many years, tourism has been an important topic of study in the literature. In the new global economy, tourism has become a main catalyst for economic development. One of the recent new areas for investigation has been the field of creativity. To satisfy tourists with an authentic and memorable experience, the tourism industry should inculcate creative and innovative ideas through its activities. In addition, the past decade has seen increasing interest in, and the blossoming of, creative tourism in countries such as Thailand, the United Kingdom, and New Zealand. Indeed, on the ground, today's tourism trend has shifted from merely enjoying sightseeing to attaining an authentic experience. In the tourism field, this shift is known as creative tourism. For this reason, players in the Muslim tourism industry should consider adopting the creative tourism concept. This paper argues that the creativity concept of tourism still lacks discussion in the Muslim tourism context. It examines creativity from the perspective of the Muslim tourism literature by highlighting the creative tourism components embedded in the Muslim friendly concept. To achieve this objective, the paper applies a conceptual approach.
It has been argued that the current components of Muslim friendly destinations are inadequate, notably with respect to the creativity element (Darasha, 2016). On the other hand, the most prolific scholar on creative tourism, Prof. Greg Richards, has recently pointed out that to date there is still no literature discussing creative tourism from an Islamic perspective. By focusing on an intersectional concept analysis, the research discussed in this paper has the potential to contribute to the body of knowledge on creative tourism and Muslim friendly tourism, as well as to the tourism industry itself. Besides filling the existing gap, this study presents a crucial preliminary investigation aimed at identifying the inadequate components of the Muslim friendly tourism concept, namely creativity, which is a powerful force in the Western tourism ecosystem.

Tourism in Islam

Islam encourages Muslims to travel to see the beauty of Allah's creation in nature and to reflect on the greatness of Allah's bountiful blessings upon humankind. Every place has its own uniqueness and attractions, being endowed not only with beautiful ranges of mountains, rivers, and lakes, but also with distinct races, cultures, and customs. Indeed, contemplation through the lens of the traveller should bring travellers closer to their Creator, as mentioned in the Quran, surah Al-Hajj, verse 46: "Do they not travel through the land, so that their hearts (and minds) may thus learn wisdom and their ears may thus learn to hear? Truly it is not their eyes that are blind, but their hearts which are in their breasts" (Ali, 2006). In Islam, the purpose of travelling is to bring more meaning to travellers' lives. The idea of tourism or travelling has essentially existed in Islam for a long time; it is an old concept. If traced back, it falls under Hajj (Khan & Khan, 2016; Mujtaba, 2016; Oktadiana, Pearce, Pusiran, & Agarwal, 2017). As every Muslim understands, Islam has five pillars (arkanul Islam), one of which is Hajj (pilgrimage). Hajj fulfils God's invitation to travel to Mecca, and during its performance there are many rules that one must follow and obey. An individual can visit the historical heritage of Islam, as well as that of the earlier prophets, while performing hajj. Umrah is similar to hajj. It is a requirement for every Muslim to observe behavior based on Islamic principles while traveling (Mujtaba, 2016).

Western Creative Tourism

Current knowledge of tourism is generally based on Western scholarship, and creative tourism likewise originates from Western knowledge. Creative tourism is an extension of cultural tourism, which had come to be practiced as mass tourism with no emotional bond or authentic experience with the destination. As a result, cultural tourism was extended to a higher level, leading to the development of creative tourism (Wattanacharoensil and Sakdiyakorn, 2016). Creative tourism emerged from the demand of tourists who rejected mass and traditional cultural tourism and looked for personal development through courses and meaningful experiences gained by learning and participating (Richards & Raymond, 2000; S. Tan, Luh, & Kung, 2014). In addition, creative tourism can also be defined as an enhanced form of cultural tourism that focuses more on the invisible than on the visible heritage (Hassani & Bastenegar, 2016).
Creative tourism is a prevalent area of importance and research interest because the creative tourism trend contributes significantly to countries' GDP. Moreover, it can attract more visitors, create more jobs, deliver full market value, empower local communities, and serve as a new model for industrial development (OECD, 2014). It comes as no surprise, then, that creative tourism differs from other types of tourism and has become trendy, mostly in developed countries (Richards & Wilson, 2007). For this reason, creative tourism has significant potential for many future applications in the tourism industry.

How Muslim Tourism Can Adapt to Western Creative Tourism

It would be myopic and premature to assume that creative tourism has potential in non-Muslim countries only. Related to this, a question has been raised as to whether creative tourism is also promising in the Muslim market landscape. With a creativity wave in Western tourism as well as a Muslim friendly wave in the Muslim market, both concepts are clearly popular in their respective domains. Given these points, this study addresses the creative tourism components embedded in the Muslim friendly tourism realm and, subsequently, how industry players can adapt the creative tourism concept to Muslim friendly tourism. The service providers in the tourism industry should therefore be more active and creative in providing memorable and authentic tourism products and services, in order to differentiate themselves from other service providers and attract the potential market of Muslim tourists. In practicing the Western-centric idea of creative tourism, the major question is whether or not it is compatible with Islamic principles and values, which is a crucial matter. Islamic principles are an important aspect that needs to be considered in developing creative tourism activities for Muslim travellers. Apart from that, the study of creative tourism in a Muslim context is rarely reported.

METHOD

The objective of this study is to review Muslim friendly tourism papers, either empirical or theoretical, that highlight the embedded creative tourism concept. This study applied the systematic literature review (SLR) method to filter and find relevant articles on Muslim friendly tourism. The SLR method (Figure 1) commenced with the search for relevant articles, conducted through online database searches. Specifically, the database search covered seven reputable publishers: Emerald Insight, Oxford Academic Journals, Sage Journals, Science Direct, Springer, Taylor & Francis Online, and Wiley Online Library. The SLR process of filtering the relevant articles comprised three stages. The first stage is known as the identification phase. To narrow the search to the specific topic, the Boolean operator AND was used extensively in advanced searches. The relevant articles were found using three search strings:
1. "tourism"
2. "tourism" AND "Islamic tourism"
3. "tourism" AND "Islamic tourism" AND "Muslim tourists"
The next phase is known as screening. At this stage, all books/e-books, conference proceedings, book reviews, references, trade publication articles, magazine articles, and newspapers were excluded. Besides that, articles published before 2007 were also excluded, as this research focused only on publications between 2007 and 2018. The third phase is called eligibility: the titles and the abstracts of the articles were analysed, and whichever were irrelevant were removed.
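To make the three filtering stages concrete, the sketch below shows one way the screening and eligibility rules could be encoded in Python. This is purely illustrative: the record fields, helper names, and example entries are our own assumptions and were not part of the study's actual workflow.

# Hypothetical article records, as might be exported from a database search.
articles = [
    {"title": "Halal tourism and creativity", "type": "journal-article",
     "year": 2016, "abstract": "Creative activities for Muslim tourists..."},
    {"title": "Conference notes on tourism", "type": "conference-proceeding",
     "year": 2015, "abstract": "..."},
    {"title": "Early tourism study", "type": "journal-article",
     "year": 2005, "abstract": "Islamic tourism before the review window..."},
]

EXCLUDED_TYPES = {"book", "conference-proceeding", "book-review",
                  "trade-publication", "magazine-article", "newspaper"}

def passes_screening(record):
    """Screening stage: exclude non-journal items and pre-2007 publications."""
    return (record["type"] not in EXCLUDED_TYPES
            and 2007 <= record["year"] <= 2018)

def passes_eligibility(record):
    """Eligibility stage: crude relevance check on title and abstract."""
    text = (record["title"] + " " + record["abstract"]).lower()
    return "tourism" in text and any(k in text for k in ("islamic", "muslim", "halal"))

relevant = [a for a in articles if passes_screening(a) and passes_eligibility(a)]
print(len(relevant), "relevant article(s) retained")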
RESULT

Finally, as the study is at a preliminary stage, it focuses on a small number of research articles, specifically papers from highly indexed journals. In the final screening of the systematic literature review, 13 articles from highly indexed journals listed under Web of Science were found to be relevant to this study.

DISCUSSION

In this section, the discussion first revolves around the definitions of Muslim friendly tourism offered by several researchers. It then turns to the components of the Muslim friendly destination as presented by various researchers, and finally to creativity from the tourism perspective, as embedded in the relevant Muslim friendly tourism articles found through the systematic literature review. As this is the preliminary information collection stage, the 13 articles from highly indexed journals listed under Web of Science found in this study were the focus of this paper. The preliminary investigation stage is an initial exploration of the issue addressed in this study. The following table presents various definitions of Muslim friendly (Islamic) tourism by several researchers.

1. Halal tourism: A journey in which travellers fulfil their role as 'ibadullah (servants of Allah), worshipping Allah during their travels. When embarking on a journey, they fulfil their obligations and cling to Islamic teaching. The journey is not only meaningful for the traveller; the traveller will also be granted reward by Allah.
2. (Battour & Ismail, 2016) Halal tourism: Tourism that caters to Muslims, with products or services in line with Islamic principles, regardless of the location of the destination, whether in Muslim or non-Muslim countries, and whether the motivation for the journey is religious or general.
3. (Mohsin et al., 2016) Halal tourism: A form of tourism that adheres to Islamic principles, enabling Muslims to perform their duty to worship Allah and to eat lawful food.
4. (Carboni, Perelli, & Sistu, 2014) Islamic tourism: A type of tourism in compliance with Islamic teachings, not restricted to travelling to Muslim countries or to religious motives only. To meet the needs of Muslims, the products and services offered are in line with Islamic principles.

Table 2 above depicts the definitions of Muslim friendly tourism by several authors found in highly indexed papers listed under Web of Science. It can be clearly seen that there are multiple definitions of the Muslim friendly destination and a variety of terminologies. It is important to note that the term "Muslim friendly tourism" refers to Islamic tourism, halal tourism, Muslim tourism, shariah tourism, and Islamic hospitality. While a variety of terminologies have been used interchangeably, this paper follows the term "Muslim friendly tourism" as it is the most common term used by the Standing Committee for Economic and Commercial Cooperation (COMCEC) under the Organisation of Islamic Cooperation (OIC). Moving on, the following table displays the components of Muslim friendly destinations and the Western creative tourism intersections found throughout this literature study. It presents the attributes of Muslim friendly tourism, either tangible (physical) or intangible (non-physical), followed by the creative tourism qualities embedded in the Muslim friendly tourism attributes.
Table 3 above lists the 13 highly indexed papers, of which only seven articles explicitly highlight the attributes/elements/components of Muslim friendly tourism. It is helpful to define the creative tourism concept first and then properly derive its key factors or properties. Prof. Greg Richards, the most prolific researcher and expert in creative tourism, who re-established the concept, together with his colleague Raymond, defined creative tourism in 2000 as tourism in which tourists can engage with the community through potential learning experiences that improve their skills (Termsak, 2014). This means that creative tourism is all about learning, experience, skills, and engagement. In the same vein, other researchers share a similar perspective on the common properties of the creative tourism concept, such as 'active participation', 'authentic experiences', 'creative potential development', and 'skills development' (Richards, 2011; S.-K. Tan, Kung, & Luh, 2013). Equally important, Pine and Gilmore (1999) conclude that creative tourism consists of five key factors: co-creation, contemplating, sightseeing, learning, and participating (Limsopitpun, Siriwoharn, & Laohanan, 2016). Interestingly, the latter properties of creative tourism share certain ideas in common with the Muslim friendly tourism concept, notably the contemplating (Samori et al., 2016; Stephenson, 2014; Mohsin et al., 2016), sightseeing (Stephenson, 2014; Mohsin et al., 2016), and participating elements (Stephenson, 2014; Mohsin et al., 2016). As shown by the content analysis in Table 3, the fragmented constructs are contemplating, sightseeing, and participating. It is apparent from this table that very few components of Muslim friendly tourism intersect with creative tourism elements. In this study, contemplating, participating, and sightseeing are the qualities of creative tourism embedded in the Muslim friendly tourism literature.

Figure 2. Conceptual Intersections between Muslim Friendly Tourism and the Creative Tourism Notion Based on the Findings

According to the findings, there are three conceptual intersections in this study: contemplating, participating, and sightseeing. However, these findings still do not reflect the full interpretation of creative tourism, and they neglect its genuine implementation. In fact, skill and experience, generated through learning exchange, have been identified as the gems of creative tourism (Tiyapiphat, 2017). The findings in this study show that the component of learning through activities is still lacking. Based on the aforementioned intersectional components, the researchers derived the correlations between Muslim friendly tourism and Western creative tourism presented in Table 4 below.

- Contemplating: internal reflections and interactions with people, activities, and surroundings resulting from creative experiences (Limsopitpun et al., 2016); matched by Tawhid-compliance (Samori et al., 2016; Mohsin et al., 2016).
- Sightseeing: beauty and gorgeous scenery, clean tourist areas, unique and particular local cultures (Limsopitpun et al., 2016); matched by Islamic village tourism and Islamic cruises (Stephenson, 2014).
- Participating: culinary tourism events (Limsopitpun et al., 2016); matched by the Islamic festivals and events sector (Stephenson, 2014; Mohsin et al., 2016).

In the final analysis, these discussions provide the following insights for future research.
Islam encourages tourism. One of creative tourism's constructs, known as a creative tourism gem, is experiential learning, and Islam supports learning too. This is reflected in the first revelation sent down to Prophet Muhammad (may peace and blessings be upon him), i.e., Iqra, which means "read". Insightfully, the interpretation of this Quranic verse is to learn in order to acquire wisdom and understanding, which elevates our eeman (faith). Following this understanding, how can this creative tourism gem be integrated into the lacking attributes of Muslim friendly tourism? Further research is needed to address this question comprehensively.

CONCLUSION

This paper is limited by its specific reliance on online database searches of Emerald Insight, Oxford Academic Journals, Sage Journals, Science Direct, Springer, Taylor & Francis Online, and Wiley Online Library. It uses a three-stage systematic literature review (SLR) process to study journal papers related to Muslim friendly tourism published between 2007 and 2018. As a preliminary study, this paper therefore focuses on Muslim friendly tourism articles published in highly indexed journals and simultaneously analyses the creativity concept embedded in that literature. Furthermore, this study has highlighted the issue of the creativity concept in the Muslim friendly tourism context. The article has thereby contributed to the emerging discussion on Muslim friendly tourism and drawn together the intersection between the fragmented constructs of Muslim friendly tourism and creative tourism. Additionally, it should be noted that this paper uses only secondary data to generate the definitions and components of Muslim friendly tourism. This study has led the initial stage of understanding the concept of creativity from the perspective of Muslim friendly tourism. By connecting the idea of creative tourism with Muslim friendly tourism, it opens a potentially fruitful area for further work. Future studies may be necessary to understand this work in greater detail using an empirical approach. The results have significant implications for the Muslim friendly tourism industry in considering the adoption of creative tourism activities to attract more Muslim travellers.
Characterization of Free, Conjugated, and Bound Phenolic Acids in Seven Commonly Consumed Vegetables

Phenolic acids are thought to be beneficial for human health and responsible for vegetables' health-promoting properties. Free, conjugated, and bound phenolic acids of seven commonly consumed vegetables, including kidney bean, cow pea, snow pea, hyacinth bean, green soy bean, soybean sprouts, and daylily, from the regions of Beijing, Hangzhou, and Guangzhou, were identified and quantified by ultra-performance liquid chromatography-tandem mass spectrometry (UPLC-MS/MS). Three vegetables, namely green soy bean, soybean sprouts, and daylily (Hemerocallis fulva L.), from the Beijing region contained higher concentrations of total phenolic acids than those from the Hangzhou and Guangzhou regions. The results indicated that the phenolic acid content of the seven vegetables appeared to be species-dependent. The highest content of phenolic acids was found in daylily, followed by green soy bean, while the smallest amounts were identified in kidney bean and hyacinth bean. Typically, phenolic acids are predominantly found in conjugated forms. Principal component analysis (PCA) revealed some key compounds that differentiated the seven vegetables. Green soy bean, compared to the other six vegetables, was characterized by higher levels of syringic acid, ferulic acid, vanillic acid, and sinapic acid. Other compounds, particularly p-coumaric acid, neochlorogenic acid, and caffeic acid, exhibited significantly higher concentrations in daylily. In addition, p-coumaric acid was the characteristic substance in cow pea. Results from this study can contribute to the development of vegetables with specific phytochemicals and health benefits.

Introduction

As basic consumable food products in people's daily lives, vegetables constitute an increasing proportion of the diet, especially in China, reaching up to 33.7% of total dietary consumption [1]. Epidemiological studies have provided evidence that the frequent consumption of fruits and vegetables is associated with a reduced risk of some chronic and cardiovascular diseases [2]. In particular, phenolic compounds are among the most important natural antioxidants responsible for these health-promoting properties due to their bioactivities [3,4]. Phenolics, as products of secondary metabolism, are considered to deliver health benefits through free radical scavenging, antioxidant, anticarcinogenic, anti-inflammatory, or antimicrobial effects [5][6][7]. Among the variety of phenolic compounds, phenolic acids have attracted considerable interest. Many studies have shown that phenolic acids contribute significantly to plant development, induced resistance, sensory qualities, and flavors [8,9]. The main dietary sources of phenolic acids are fruits, vegetables, cereals, and beans [3,10]. However, the levels of phenolic acids found in these sources may be affected by geographical region, variety, cultivation, climate, and extraction process [11,12]. Some studies in the literature have focused on the extraction and analysis of phenolic acids from vegetables, including potatoes [11], tomatoes [13], eggplants [14], carrots [15], Chinese cabbage [16,17], bitter melons [18], and broccoli [19]. Chlorogenic acid is the most studied phenolic and shows relatively high levels in eggplants and potatoes. Ferulic acid, caffeic acid, sinapic acid, p-hydroxybenzoic acid, and p-coumaric acid have also been studied. Some literature on phenolic acids in dry beans also exists.
Darmadi-Blackberry and colleagues reported that consumption of legumes, including dry beans, was the most critical food factor in determining longevity [20]. According to epidemiological evidence, the protective effect of a higher intake of vegetables in reducing chronic diseases is attributed, in part, to the occurrence of different antioxidant components, mainly phenolic compounds [4,21]. Bean extracts from white kidney beans and round purple beans have been proven to be rich in phenolics and to exhibit both antioxidant and anti-inflammatory activities [21]. However, published data on the contents of phenolic acids are limited for fresh beans, such as kidney bean (Phaseolus vulgaris L.), cow pea (Vigna unguiculata), and hyacinth bean (Lablab purpureus). Most of the literature has focused on the determination of total polyphenols in the above vegetables by Folin-Ciocalteu colorimetry [22]. Additionally, daylily (Hemerocallis fulva L.) and soybean sprouts, as traditional Chinese foods, are consumed widely and deeply loved for their high nutritional value. Studies have also shown that extracts from daylily have strong antioxidant activity and scavenging effects on free radicals and nitric oxide [23]. Nonetheless, comprehensive food composition data for the specific phenolic acid content of these vegetables are still lacking. Phenolic acids in plants may exist in free, soluble conjugated (esterified), and insoluble-bound forms [24][25][26]. However, soluble conjugated phenolic acids have not received as much attention as the free forms. Studies have demonstrated that phenolic acid conjugates are recognized antioxidants with anti-inflammatory properties both in vitro and in vivo [27][28][29][30]. Conjugated and bound phenolics may also play an essential role in delivering antioxidants to the colon upon their release by the bacterial microbiota [31]. These two forms of phenolic acids, considered to be bound to oligosaccharides, peptides, or polysaccharides, can be released by acidic or alkaline hydrolysis. Therefore, there is demand for a comprehensive analysis of all the free, conjugated, and bound phenolic forms. Most studies so far have been concerned only with total phenols or have considered only a few extractable free phenolics present in fruits and vegetables. The evolution and distribution of the three types of phenolic acids in these commonly consumed vegetables have not been well investigated. In this study, we aimed to evaluate the phenolic acids, in free, conjugated, and bound forms, of seven vegetables, namely kidney bean, yardlong cow pea, snow pea, hyacinth bean, green soy bean, soybean sprouts, and daylily, from three regions in China (Beijing, Hangzhou, and Guangzhou), using ultra-performance liquid chromatography-tandem mass spectrometry (UPLC-MS/MS). The distribution, existing forms, and characteristic substances of these vegetables were investigated. The results of this study provide further insights into the health-promoting compounds of vegetables and offer a more in-depth bioactivity index for further research.

Results and Discussion

Phenolic acids in plants may exist in three forms: free, soluble conjugated, and insoluble bound. Free-form phenolics are directly released and easily detected, while the conjugated and bound forms need to be released by alkaline hydrolysis, or during processing or storage, before they can be detected.
Generally, phenolic acids are divided into hydroxybenzoic and hydroxycinnamic acids according to their structural features.

Free Phenolic Acid

The free, conjugated, and bound forms of phenolic acids in the seven vegetables are shown in Tables 1-3. Our results indicate that, among the three groups, the free-form phenolic acids accounted for the lowest proportion in these vegetables. With respect to individual phenolic acids, six free phenolics were identified and quantified: three hydroxybenzoic acids (vanillic acid, p-hydroxybenzoic acid, and protocatechuic acid) and three hydroxycinnamic acids (neochlorogenic acid, chlorogenic acid, and p-coumaric acid) (Table 1). Both region and species influenced the free phenolic acid content. Generally, the subtotal free phenolic acids of green soy bean, soybean sprouts, and daylily showed markedly higher concentrations in the Beijing region than in the Hangzhou and Guangzhou regions. Apart from chlorogenic acid and vanillic acid, all the other phenolics had higher concentrations in daylily from the Beijing region than from the other two regions. So far, no studies exist on the regional variation of phenolic compounds in the above vegetables. Regarding the seven different vegetables, our results show that the types and contents of free phenolic acids vary considerably. Except for daylily, only one or two phenolic acids were detected in each of the other six vegetables. For instance, two free-form phenolic acids, chlorogenic acid and p-coumaric acid, were detected in kidney bean and cow pea, and the levels were lower than the values in green beans obtained by Mazzeo et al. [32]. This could be attributed to various factors, such as cultivar genotype, geographical region, climate, and assay procedure [12,33,34]. Meanwhile, p-coumaric acid was detected in minute amounts in snow pea (0.09-0.22 mg/kg fresh weight (FW)). According to the published data, only determinations of total polyphenols by Folin-Ciocalteu colorimetry exist for cow pea and snow pea [22]. Among the free-form phenolics, p-hydroxybenzoic acid was the only phenolic acid detected in green soy bean and soybean sprouts. Free phenolic acids were undetectable in hyacinth bean from any of the three regions. To our knowledge, no information was previously available on the phenolics in hyacinth bean and green soy bean. The highest concentration of free phenolic acids was observed in daylily, in which a total of seven free-form phenolics were detected. Among these compounds, neochlorogenic acid was the most abundant acid in daylily, ranging from 84.04 mg/kg FW in the Hangzhou region to 103.10 mg/kg FW in the Beijing region, whereas p-coumaric acid showed the lowest amount (4.03-5.11 mg/kg FW). A study showed that the water extract from daylily expressed relatively high antioxidant activity and an inhibitory effect on nitric oxide production [23]. The large amount of phenolic acids detected in daylily in our results also suggests strong antioxidant capacity. So far, only a few reports have studied the concentrations of certain phenolic acids in these vegetables. The study of Silva et al. found four phenolic acids in soybean sprouts, namely caffeic acid, neochlorogenic acid, p-coumaric acid, and ferulic acid, which does not align with our results. The reason for the difference may be that, in addition to cultivar genotype, the extraction process also played an important role in the levels of phenolic acids found.
In their study, freeze-dried sprouts were boiled for 15 min to mimic how the sprouts are usually prepared for human consumption [35]. Hence, in the process, conjugated or bound phenolic acids may have been released by thermal hydrolysis. In our results, several conjugated and bound phenolic acids were also detected in soybean sprouts.

Conjugated Phenolic Acid

The conjugated phenolic acids in the seven vegetables were assessed (Table 2). Six hydroxybenzoic acids (gallic acid, 2,3,4-trihydroxybenzoic acid, vanillic acid, p-hydroxybenzoic acid, syringic acid, and protocatechuic acid) and four hydroxycinnamic acids (ferulic acid, caffeic acid, p-coumaric acid, and sinapic acid) were detected. The seven vegetables from the three regions presented somewhat different phenolic amounts. Three vegetables, namely green soy bean, soybean sprouts, and daylily, from the Beijing region all showed relatively higher levels of total conjugated phenolic acids than those from the Hangzhou and Guangzhou regions. The difference in the amounts of total phenolic acids may be due to regional climate and to growing and storage conditions [36]. The region factor showed less of an effect on phenolic acid metabolism than species difference. Large differences were found in the levels of conjugated phenolic acids among the seven vegetables. Our results indicate that the conjugated phenolic acids constituted the majority of the phenolic acids in all seven vegetables and accounted for the largest proportion of the three forms, which aligns with the results obtained for cranberry beans (Phaseolus vulgaris L.) [10]. In the conjugated fraction, the three phenolic acids present in all the vegetables were gallic acid, ferulic acid, and p-coumaric acid. Additionally, p-coumaric acid was the predominant conjugated phenolic acid in most of the samples in the current study, except in hyacinth bean and green soy bean. Hyacinth bean contained more protocatechuic acid than any other phenolic acid, and syringic acid showed the highest level in green soy bean. However, apart from green soy bean, syringic acid was detected only in daylily, at extremely low levels. Ferulic acid is considered to play a structural role in cross-linking wall polymers, and some research suggests that it is the most predominant phenolic acid in dry edible beans [33]. Our results show that ferulic acid was the second most abundant compound detected in most of the vegetables; this component was particularly high in green soy bean (32.67-46.61 mg/kg FW) and daylily (37.86-40.10 mg/kg FW) but showed relatively low levels in kidney bean (1.17-3.0 mg/kg FW). Additionally, vanillic acid and p-hydroxybenzoic acid were detected only in green soy bean, soybean sprouts, and daylily. 2,3,4-Trihydroxybenzoic acid was identified in quantifiable amounts only in kidney bean, soybean sprouts, and daylily. Daylily and green soy bean, as typical Chinese foods, contain high levels of phenolics. According to our results, the highest amount of total conjugated phenolic acids was found in daylily (300.58-313.63 mg/kg FW), followed by green soy bean (192.58-238.79 mg/kg FW). p-Coumaric acid accounted for about 45% of the total conjugated forms in daylily, with intermediate levels of caffeic acid (25%) and ferulic acid (15%). Moreover, conjugated p-coumaric acid showed its highest level in daylily (134.66-149.61 mg/kg FW) compared to the other six vegetables.
Notably, for snow pea and cow pea, p-coumaric acid accounted for the largest proportion of the conjugated fraction, at nearly 70% and 90% of the total phenolics, respectively. In addition, insignificant amounts of conjugated caffeic acid were extracted from the other vegetables, except for daylily (69.54-73.42 mg/kg FW). Quantitatively, syringic acid, ranging from 87.45 mg/kg FW in the Hangzhou region to 102.46 mg/kg FW in the Beijing region, accounted for more than 40% of the total conjugated fraction in green soy bean. Apart from syringic acid, ferulic acid (32.67-46.61 mg/kg FW), vanillic acid (26.88-41.59 mg/kg FW), and sinapic acid (32.93-42.01 mg/kg FW) were the other three conjugated fractions detected at higher concentrations in green soy bean, together accounting for about 20% of the total. Conjugated phenolic acids are soluble components extractable by aqueous methanol solution, but are considered to be bound to soluble oligosaccharides and peptides through ester bonds or ether linkages, from which they can be released after hydrolysis [31,37]. Phenolic acids have been reported to be predominantly found in conjugated forms [10], and the same result was obtained in our study. One study stated that the majority of phenolic acids were extracted from the alkaline-hydrolyzed fraction, and that further sequential acid hydrolysis of the same extract did not yield any additional amounts of phenolic acid [33], which means that most conjugated phenolic acids are more likely to be released by alkaline hydrolysis than by acid hydrolysis [10,30]. Only three phenolic acids, namely the conjugated forms of p-coumaric acid, ferulic acid, and sinapic acid, were previously reported in dry beans [10], and no information was available for the vegetables used in our study. This is a significant finding, since the analytical methods commonly used in previous studies to determine the amount of phenolic acids in the above vegetables were direct extraction and injection without hydrolysis, which only capture the free forms. Hence, this could lead to significant underestimation of the total extractable phenolic content of a particular food [38,39]. Conjugated and bound phenolics may play an essential role in delivering antioxidants to the colon upon their release by the bacterial microbiota [31,40]. Our research on conjugated phenolic acids can help to better understand the antioxidant effects and nutritional value of these vegetables.

Bound Phenolic Acid

Bound phenolic compounds are esterified to cell wall polysaccharides, but can also be covalently linked to lignin monomers through an ether linkage [10]. These phenolic acids are the insoluble phenolics remaining in the residue following the initial extraction with 80% MeOH. Table 3 shows the bound phenolic acids in the seven vegetables. Bound phenolics, released upon alkaline hydrolysis, comprised a total of eight substances: four hydroxybenzoic acids (vanillic acid, p-hydroxybenzoic acid, syringic acid, and protocatechuic acid) and four hydroxycinnamic acids (ferulic acid, caffeic acid, p-coumaric acid, and sinapic acid). The accumulation of bound phenolics varied between regions and species. Similarly, the levels of total bound phenolics detected in green soy bean, soybean sprouts, and daylily were significantly higher in the Beijing region than in the other two regions. p-Coumaric acid was the only bound phenolic found in all seven vegetables, but large differences in its levels were found.
The bound forms of vanillic acid, p-hydroxybenzoic acid, and syringic acid were detected in green soy bean, soybean sprouts, and daylily. Sinapic acid was identified in measurable amounts only in snow pea and green soy bean. Caffeic acid was also released as a bound phenolic compound, but was detected only in hyacinth bean and daylily, in relatively small amounts. Quantitatively, bound-form phenolic acids were present in only small amounts in kidney bean, hyacinth bean, snow pea, and cow pea. In fact, p-coumaric acid was the only bound-form phenolic acid detected in kidney bean and cow pea. The highest subtotal levels of bound phenolics were found in green soy bean, followed by daylily. The subtotal of bound phenolics in green soy bean represented a significant portion (about 30%) of the total phenolic index, which includes the subtotals of the free and conjugated fractions. Syringic acid was the predominant bound fraction in green soy bean (33.40-45.10 mg/kg FW), as was the case for the conjugated forms. Ferulic acid, vanillic acid, and sinapic acid were also found in significant amounts in green soy bean. In addition, syringic acid was found to be the most abundant acid in soybean sprouts. To date, no literature has reported on bound-form phenolic acids in these vegetables; previous studies have only provided a partial characterization of their phenolics. To the best of our knowledge, this is the first time all possible phenolic acids in these vegetables have been characterized. The significant amounts of conjugated and bound phenolic acids detected and quantified in this study allow a more systematic estimation of biological activities, including beneficial health effects.

Principal Component Analysis

To provide an overview of the effects of region and variety on the phenolic acid composition, and to further identify the discriminant components, principal component analysis (PCA) was applied using all the detected phenolics as variables. All seven vegetables, taking into account the three regions, were used for the PCA. The PCA score scatter plot of all samples is shown in Figure 1a. The series of numbers (1-7) following the letters representing region (BJ, HZ, and GZ) represent kidney bean, cow pea, snow pea, hyacinth bean, green soy bean, soybean sprouts, and daylily, respectively. The corresponding loading plot, establishing the relative importance of the variables, is shown in Figure 1b. Based on all the detected phenolic acids, using the total concentrations of their respective free, conjugated, and bound types, the first two principal components (PCs) accounted for about 91.4% of the total variance. As shown in the score plot (Figure 1a), the first component (PC1) explained approximately 62.2% of the variance, and the samples were almost all distributed on the negative axis of PC1, except for cow pea and daylily from all three regions. PC2 accounted for 29.2% of the total variance. Except for two vegetables (green soy bean and daylily, from all three regions), all the other five vegetables were distributed on the negative axis of PC2. Our results show that the seven vegetables were separated from each other; the semi-transparent fields represent 95% confidence intervals. In combination with the corresponding loading plot, green soy bean, compared with the other six vegetables, was characterized by high levels of syringic acid. Simultaneously, ferulic acid, vanillic acid, and sinapic acid were also biased toward green soy bean, reflecting the high levels detected in green soy bean.
As shown in Figure 1, p-coumaric acid, neochlorogenic acid, and caffeic acid were the characteristic substances in daylily; that is, these compounds generally had higher levels in daylily in comparison with the other vegetables. Other compounds, including p-hydroxybenzoic acid, protocatechuic acid, chlorogenic acid, gallic acid, and 2,3,4-trihydroxybenzoic acid, were also found to be closer to the points representing daylily from all three regions (Figure S1). Judging from the distances between the phenolic acid attributes and the points representing soybean sprouts, hyacinth bean, kidney bean, and snow pea (Figure S1), these four vegetables contain relatively lower levels of phenolics. In addition, p-coumaric acid was the characteristic compound in cow pea, which is a critical distinguishing factor between cow pea and the other legume vegetables.
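The PCA itself can be sketched in a few lines. The code below is a minimal illustration assuming a samples-by-compounds matrix of total concentrations; scikit-learn stands in for whatever statistical software produced the published plots, and the random data are placeholders rather than measured values.

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Rows: 21 samples (7 vegetables x 3 regions); columns: total concentration
# (free + conjugated + bound, mg/kg FW) of each detected phenolic acid.
X = rng.random((21, 14))
X_centered = X - X.mean(axis=0)  # mean-center the variables before PCA

pca = PCA(n_components=2)
scores = pca.fit_transform(X_centered)   # sample coordinates (score plot)
loadings = pca.components_.T             # variable weights (loading plot)
print("variance explained by PC1, PC2:", pca.explained_variance_ratio_)

# A compound whose loading vector points toward a vegetable's cluster in
# the score plot (e.g., syringic acid toward green soy bean) is relatively
# enriched in that vegetable.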
Sampling and Processing

Vegetables were selected taking into account the same seasonality and volume of consumption in China. Seven vegetables, namely kidney bean (Phaseolus vulgaris L.), yardlong cow pea (Vigna unguiculata (Linn.) Walp), snow pea (Pisum sativum var. macrocarpon L.), hyacinth bean (Lablab purpureus (Linn.)), green soy bean (Glycine max (L.) Merr), soybean sprouts, and daylily flower (Hemerocallis fulva L.), were collected from three regions: Beijing, Hangzhou, and Guangzhou. For representative purposes, each kind of vegetable from each region was sampled randomly from five stalls, including three wholesale vegetable markets and two hypermarkets, for a total of 2 kg each, and then mixed together as one sampling point. All the sampling points were divided into three sampling units to create three biological replicates, 2 kg per replicate. Vegetables were placed in a bubble chamber and transported on ice to the laboratory within 24 h. The inedible parts of the raw samples were removed manually with a sharp knife on an ice plate and cut into slices of 8-10 mm in length before being frozen in liquid nitrogen and then stored at −80 °C until analysis. All the samples were analyzed within one month.

Free Phenolic Acid Extraction

Free phenolic acid extraction followed the method in our previous study with minor modifications [30]. Before analysis, vegetables were ground to a fine powder under liquid nitrogen, and the powder was divided into three subsamples. A subsample (2 g) was mixed with 80% methanol (20 mL) containing 1% ascorbic acid. The resulting mixture was ultrasonicated for 30 min at room temperature and then centrifuged at 10,000 rpm for 10 min. The supernatant was collected, and the above extraction was repeated twice more. The combined supernatants were transferred to a 50-mL volumetric flask, diluted to volume with extracting solution, mixed, and then filtered through 0.22 µm PTFE membranes (Pall, Ann Arbor, MI, USA) prior to UPLC-MS/MS analysis.

Conjugated Phenolic Acid Extraction

Conjugated phenolic acid extraction was performed according to Li et al. [41]. The powder (2 g) was mixed with 20 mL of 80% methanol containing 1% ascorbic acid, followed by ultrasonication for 30 min at room temperature. The mixture was centrifuged at 10,000 rpm for 10 min, and the extraction was repeated twice. The solid residues were kept for the subsequent extraction of bound phenolic acids. The supernatants were combined and evaporated to a volume of less than 10 mL (aqueous phase) at 35 °C using a rotary evaporator. After evaporation, 20 mL of 4 M NaOH was added to the remaining aqueous layer, and the medium was alkaline hydrolysed under nitrogen blanketing by shaking at 40 °C for 2 h. Afterward, the hydrolysate was acidified to pH 2 with 12 M HCl, and 20 mL of hexane was added and shaken for 20 min at ambient temperature. After removing the hexane, the resultant hydrolysate was extracted three times with 20 mL ethyl acetate. In this step, free and conjugated phenolic acids were both extracted with ethyl acetate; therefore, the levels of conjugated phenolic acids were calculated as the difference between the values obtained from this step and those from the free phenolic acid extraction. All the organic phases were pooled and evaporated to dryness at 35 °C. The resultant dry residue was re-dissolved in 10 mL of 50% methanol/ultrapure water (v/v) and then filtered through a 0.22 µm PTFE membrane filter before further analysis.
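Because the ethyl acetate fraction obtained after alkaline hydrolysis contains both the free and the conjugated phenolic acids, the conjugated level is obtained by subtraction. A small illustration follows (the function name and numbers are hypothetical, chosen only to show the arithmetic):

def conjugated_level(hydrolysed_total, free_level, floor=0.0):
    # Conjugated = (free + conjugated measured after alkaline hydrolysis)
    # minus the free level from the direct extraction; small negative
    # differences from measurement noise are clipped to the floor.
    return max(hydrolysed_total - free_level, floor)

# e.g., a hypothetical compound at 153.7 mg/kg FW after hydrolysis and
# 4.1 mg/kg FW in the free extract:
print(round(conjugated_level(153.7, 4.1), 2))  # -> 149.6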
Bound Phenolic Acid Extraction

The solid residues remaining after the last step were treated with 20 mL of 4 M NaOH and alkaline hydrolysed by shaking for 2 h at 40 °C under N2 as described above. The resultant hydrolysate was acidified to pH 2 with 12 M HCl. Twenty milliliters of hexane were added and shaken for 20 min to remove the oil and other esters. The liberated phenolics were then extracted three times with 20 mL ethyl acetate after the hexane was removed. Subsequently, the combined supernatants were evaporated to dryness under vacuum at 35 °C and re-dissolved in 10 mL of 50% methanol. Eventually, the released bound phenolic acids were analyzed by UPLC-MS/MS after filtration through 0.22 µm PTFE membrane filters.

UPLC-MS/MS Analysis

An ACQUITY HSS C18 column (1.8 µm particle size; 2.1 × 150 mm, Waters, Milford, MA, USA) was used for the separation of phenolic acids on a Waters ACQUITY UPLC system interfaced to a triple quadrupole MS (TQ-S, Waters Micromass, Manchester, UK) with an orthogonal Z-spray electrospray ionization (ESI) source, controlled by MassLynx 4.1 software (Waters, Milford, MA, USA). A gradient consisting of (A) 0.1% formic acid in water (v/v) and (B) 0.1% formic acid in acetonitrile (v/v) was applied at a flow rate of 0.3 mL/min. The gradient program was as follows: initial conditions of 5% B held for 30 s; 5-30% B over 4.5 min; 30-90% B over 4.5 min; 90% B held for 30 s; 90-5% B over 30 s; and this composition held for 2.5 min for re-equilibration. The injection volume was 5 µL. The column was maintained at 45 °C and the autosampler at 10 °C. Both positive and negative electrospray ionization (ESI) modes were applied based on the structural properties of the phenolic acids. The ESI parameters were as follows: +2.5 kV/−1.0 kV capillary voltage; 150 °C source temperature; 500 °C desolvation temperature; 150 L/h cone gas flow; and 1000 L/h desolvation gas flow. Detection was performed in multiple reaction monitoring (MRM) mode. Quantification was completed according to standard curves generated from individual compounds in serial dilutions (1-500 ng/mL).

Statistical Analyses

One-way analysis of variance (ANOVA) was performed using the SPSS 20.0 Statistical Package for Windows (SPSS Inc., Chicago, IL, USA) at a significance level of p < 0.05. Based on the type and content of the phenolic acids, principal component analysis (PCA) of these vegetables from the different regions was performed, and a scatter plot was created to visualize the differences in phenolics among the various vegetables. Each data point was the average of three replications.
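As an illustration of the one-way ANOVA described above (SPSS was the software actually used; the Python translation and the placeholder values below are our own), a region comparison for one vegetable could be run as follows:

from scipy import stats

# Hypothetical total phenolic acid levels (mg/kg FW) for one vegetable,
# three biological replicates per region.
beijing = [318.2, 325.9, 309.4]
hangzhou = [301.5, 298.0, 306.7]
guangzhou = [295.1, 303.8, 299.6]

f_stat, p_value = stats.f_oneway(beijing, hangzhou, guangzhou)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Regional means differ significantly at the p < 0.05 level")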
Interestingly, p-coumaric acid was observed at a high level in cow pea, a critical factor distinguishing cow pea from the other legume vegetables. The present study provides comprehensive information on the phenolic acid composition of these vegetables, particularly for conjugated phenolic acids.
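As a concrete companion to the extraction and quantification steps above, the following is a minimal Python sketch, not part of the original study, of the two calculations the Methods describe: back-calculating a concentration from a linear external standard curve, and deriving the conjugated fraction as the difference between the alkaline-hydrolysate and free-fraction values. The peak areas, calibration points, and extract volumes are invented for illustration.

```python
# Sketch: quantify a phenolic acid from an external standard curve, then derive
# the conjugated fraction as (alkaline-hydrolysate value) - (free value).
# Calibration points, peak areas, and volumes are hypothetical illustrations.

import numpy as np

# Calibration: serial dilutions (ng/mL) vs. MRM peak area (arbitrary units).
conc = np.array([1, 5, 25, 100, 250, 500], dtype=float)
area = np.array([210, 1020, 5100, 20300, 50900, 101500], dtype=float)
slope, intercept = np.polyfit(conc, area, 1)  # linear fit: area = slope*conc + intercept

def quantify(peak_area):
    """Back-calculate the concentration (ng/mL) in the injected extract."""
    return (peak_area - intercept) / slope

# Hypothetical peak areas for p-coumaric acid in two extracts of one sample.
free_ng_ml = quantify(8400.0)          # free-fraction extract
hydrolysate_ng_ml = quantify(21700.0)  # free + conjugated after alkaline hydrolysis

def to_mg_per_100g(ng_ml, extract_ml=10.0, sample_g=2.0):
    """Convert extract concentration to mg per 100 g sample (illustrative volumes)."""
    return ng_ml * extract_ml / 1e6 / sample_g * 100

free = to_mg_per_100g(free_ng_ml)
# Clamp small negatives that can arise from measurement noise.
conjugated = max(to_mg_per_100g(hydrolysate_ng_ml) - free, 0.0)
print(f"free: {free:.4f} mg/100 g, conjugated: {conjugated:.4f} mg/100 g")
```

In practice the same subtraction would be repeated per compound and per replicate, which is exactly the bookkeeping where sign and dilution errors tend to creep in.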
A Process Evaluation of a Learning Community Program: Implemented as Designed?

Learning communities can be useful to counter some of the challenges encountered by first-semester students as they transition to college. This 2-year process evaluation examines the launch of a campus-wide learning community initiative for developmental reading students at a community college in the USA. Students, instructors, and administrators were interviewed about the implementation of the program, and program-related materials were reviewed. Findings suggested ways to enhance the effectiveness of learning communities of the linked-course variety through program implementation that is more faithful to key design aspects. Suggestions include (1) implement team-teaching across linked courses; (2) carry out an integrated curriculum across courses; (3) provide in-depth and continued instructor training as well as specialized resources; (4) expand support services available to students and require them to use at least some; and (5) create tools/methods for instructors and administrators to regularly assess processual aspects rather than just program outcomes.

Introduction
Learning communities can be useful to counter some of the challenges encountered by first-semester students as they transition to college life [1,2]. For the purposes of this study, we consider a learning community to be "A curricular model that links two or more classes together for a cohort of students" [3]. Often labeled a "high-impact" practice [4,5], learning communities can impact how students experience college and forge meaningful experiences, via people, places, and/or programs, and are thus important levers for student success in college [6-8]. Specifically, learning communities may help students transition more effectively from high school by increasing recognition and access to important on-campus resources and study skills, as well as enhancing social integration of students on campus [9]. Social integration leads to greater student persistence [7,8,10,11], and Nancy Shapiro and Jodi Levine [12] also report that learning community students enjoy " . . . higher levels of involvement with peers and the campus, and express greater overall satisfaction with the college experience". A higher level of collegiate satisfaction operates as another pathway to student persistence [13]. Further, effective learning communities can both demonstrate to students that expectations are quite different in college than in high school and help students to cultivate adaptive habits and new goals in a supportive social environment [14]. While the benefits of learning communities have been routinely touted over the past two decades, research shows that they are not always associated with beneficial effects, or uniform effect sizes, at different colleges [4,15] or for different groups of students [16]. Some argue that the sometimes disparate outcomes reported for learning communities may be due, in considerable part, to their varying degrees of successful implementation [4,17]. Further, surprisingly few studies provide practical suggestions, gleaned from data as well as theory, on how to implement learning communities for student success; even carefully planned learning communities will experience difficulties if implemented in clumsy, inappropriate, or less than thorough ways.
To better understand how implementation of a learning community program can shape its success, our study provides evidence from a 2-year process evaluation that may help would-be program designers, administrators, and instructors to implement more effective learning communities during students' first semester of college, a critical juncture in both their transition to, and ultimate trajectory within, college.

Current Study
This study examines a learning community program newly implemented in 2015 for developmental reading students at a two-year public college in the USA. The college chose to implement learning communities because, in prior years, less than 20 percent of students in developmental reading earned six or more college-level credits, and this percentage was lower than those for students in other developmental courses such as math and English [18]. Accordingly, short-term goals of the program included increasing retention in the developmental reading course (RDG), improving reading skills, developing study strategies, boosting academic self-confidence, enhancing attitudes about reading, and heightening a sense of community. An intermediate goal was to increase student persistence in college-level courses beyond students' first semester (including introductory English, which followed in the second semester). The learning communities were designed to be of the linked-course type; students were concurrently enrolled in RDG and a first-semester experiences course (FSE) that introduced students to general study skills and strategies; goal setting; how to cope with competing demands of school, work, and/or family; as well as support offices and resources available on campus and beyond. Learning communities were implemented for first-semester, developmental reading students at the college during spring and fall 2015, thus there were two cohorts of learning community students under study. Each learning community lasted a single semester, which is typical for most learning communities [3], and class sizes were capped at 24. Each learning community was taught by two instructors (one for RDG and another for FSE), and these same instructors volunteered to teach both cohorts studied. Instructors had previously taught their respective courses in semesters prior to the introduction of the program. Students could enroll in the program if they scored between 38 and 42 on the Computer-adapted Placement Assessment and Support Services (COMPASS) placement exam for reading level; they could only enroll in six credit hours for that semester (RDG and FSE). If there was an open seat in one of the two concurrent learning communities and the student met the COMPASS exam criterion, advisors encouraged the student to register during their first-semester orientation, although students were not required to join a learning community. This study does not aim to evaluate the effectiveness of the learning communities on student outcomes per se (however, see [19] for a qualitative analysis of students' perceived outcomes from the program). Rather, this study centers on a process evaluation that gauges the degree to which learning communities were implemented as designed, and if not, what implications might have arisen for program stakeholders, including students.
Process evaluations can distinguish between interventions that were fundamentally faulty and interventions that were merely poorly implemented [20] and thus may shed light on how to ultimately improve both the operational and effectiveness aspects of learning communities. For this study, interviews were conducted with students, instructors, and administrators (both program and institutional) during spring and fall 2016, and a content review was conducted of program-related documents, both publicly available and internal to the college. Based on empirical data collected from these various sources, we share various lessons learned by stakeholders and also analyze qualitative data for key themes regarding both positive and negative aspects of the implementation processes. Finally, we offer suggestions toward more effective strategies for the design, implementation, and assessment of learning communities; policymakers, stakeholders, and researchers alike may find use in our conclusions.

Materials and Methods
In order to collect rich processual information about the learning community program's design and implementation, in-depth and semi-structured interviews were relied upon for data collection. In total, 13 one-on-one interviews were conducted in person and on campus near the end of both the spring and fall 2016 semesters; seven students, two instructors, and four learning community program or institutional administrators were interviewed. By interviewing various types of stakeholders, we sought to capture their viewpoints on various aspects of implementation, and to triangulate information when appropriate. In general, there was a high degree of concordance across the student, instructor, and administrator interviews on the vast majority of issues discussed. The mean interview length was 43 min with a standard deviation of 16 min, and interviews were audio-recorded and transcribed for analysis. Interview invitations were sent to college email addresses of 20 randomly selected students from those who took part in a learning community during 2015 and were still enrolled at the college in 2016. Students were thus two or three semesters removed from learning community involvement when they were interviewed for the study. As an incentive to be interviewed, students were offered a $20 Visa card. While both women and men students were invited for interviews, only women responded to the request. Students were asked about (1) their reading and study habits, (2) the degree to which they had achieved their goals at the college, (3) their learning and grades, and (4) whether/how the learning community contributed to their collegiate experiences. Students were also asked about enrollment in the program, their likes and/or dislikes, their experiences, and if their ideas or person was changed because of the learning community. All program administrators and instructors associated with the program participated in this study. Administrators were asked about why learning communities were initiated; perceptions of the learning communities on campus; design and operation of the program; strengths, weaknesses, and surprises regarding the program; and modifications to the program throughout the two years it had been implemented. Administrators, particularly those in broader institutional roles, were also asked about the extent to which they were connected to the program.
Instructors were asked similar questions but were also queried about their teaching practices and interactions with students and administrators. For the content review, internal and publicly available materials were collected during spring and summer 2016. Materials consisted of three types: documents relating to an action plan (16 total) from 2015, training documents for learning community instructors (20 total), and course syllabi (three). Internal materials were provided by program administrators and instructors, and all internal materials were requested in an attempt to reduce selection bias. Publicly available materials included training documents from other colleges or organizations that were used by the instructors and/or administrators in this program. Based on a pilot study from fall 2014 [21], these sensitizing concepts guided the interviews as well as their analysis: program goals for developmental reading, program design, "high impact practices", perceptions of the program by stakeholders, administrator involvement, program training and resources, team teaching, integration of linked course content, student support, and assessment of implementation processes. A sensitizing concept " . . . gives the user a general sense of reference and guidance in approaching empirical instances . . . directions along which to look" [22]. Line-by-line coding was used, with responses categorized by question. Memoing was carried out throughout the analysis to make sense of the nascent codes and potential linkages between them. Thematic analysis was applied to both the interview data and the content review, both to describe and to interpret the data. Themes were identified using the constant comparison method [23]. This method "involves searching for similarities and differences by making systematic comparisons across units of data" [24].

Program Design and Implementation
Based on instructor and administrator interviews, as well as the content review of materials, there were numerous identifiable components within the program's design. During the process evaluation, however, four design components took on particular importance with respect to how they were actually carried out within the learning communities: (1) team teaching across linked RDG and FSE courses; (2) integration of learning themes, other content, and assignments between linked courses; (3) comprehensive and continued training for instructors, as well as access to specialized resources; and (4) introduction of students to support offices, services, and related resources available on campus that have been consistently linked to student success. Instructors reported a fair amount of collaborative communication regarding their linked courses. However, team teaching wherein instructors were concurrently present in the classroom was not commonly practiced. Team teaching reportedly occurred only a few times throughout the first semester the learning communities were introduced. As one teacher put it, "We would do some team teaching within the classroom. We would both be in the class together at certain periods throughout the semester, and we called these workshops. And we would do three workshops a semester". The fact that instructors received credit for teaching only one of the courses, not both, appeared to serve as a disincentive toward team teaching since "true" team teaching was seen as requiring considerably more time than for a single course.
Thus, the actual implementation of team teaching appeared to fall short of what the program designers intended. Although the learning community instructors did not consistently team teach, they wanted to do so more frequently and expressed positive views about its effectiveness, both for students' learning and their own professional development. As a teacher opined: "From a personal growth standpoint, that is another really big strength of the program is that . . . I learned so much about teaching and different strategies and how to relate to students. And just having that connection with [my team teacher] and that resource to be able to go to [my team teacher] all the time, it was just invaluable to me". Students also enjoyed instances of team teaching when it occurred. One student stated, "They [instructors] worked together so . . . we were working on kind of the same thing at like the same time . . . it made it easier". Another expounded on this notion, "Every so often we would have a day where both teachers were in there at the same time, and we would like bring in together what we were learning in both classes . . . Those days were actually my favorite days". Second, the program's design specified strategic integration of content and assignments across linked courses; this goal of course integration was to be supported with comprehensive training for instructors who volunteered for the program. Instructors reported little formal training on learning communities prior to and during their first semester in the program, i.e., training primarily consisted of attending a professional conference. Yet their level of training appeared to have increased during later semesters. Due to their increased amount of training and expanded experiences with learning communities over the semesters, both instructors and administrators grew to feel confident about instructors' knowledge of, and ability to carry out, learning communities. For instance, as an instructor stated: They [administrators] were great about sending us to conferences to learn a whole lot more because locally we didn't really know. We knew the research and we knew what we were trying to do with the learning communities, but as far as implementing everything that we needed to do-administration knew that we needed to go somewhere else to kind of learn a little bit more about best practices and what other colleges were doing. So [my fellow linked course instructor] and I have attended a . . . learning communities conference for the last couple of years, and that has been very, very helpful. Learning about different course pairings and different things to do in the classroom. Due to the cumulative effects of training and day-to-day experiences with the program, learning community instructors increased the level of integration between RDG and FSE over subsequent semesters. As one administrator recounted: "The first semester . . . they had a few overlapping assignments. But as each semester progressed, they have kind of been folding in more and more commonalities". Students also liked when they detected topical linkages across the linked courses. For instance, one student noted: "It was just kind of cool how they brought everything together and . . . made it one whole class of two subjects". Students in this study were able to identify several other content areas that were covered in both courses.
RDG was a course that could feature virtually any topic, and instructors appeared to take advantage of this to build topical linkages between RDG and FSE. For instance, another student recalled learning about music during RDG: "[RDG instructor would] . . . get on YouTube and we'd listen to music because we were in a music, the chapter in our book, for like 3 weeks of music. Jazz and hip hop, how hip hop got here". Other topics that appeared in both courses were personal finance, community service, diversity, culture, politics, history, math, English, and health. As the student put it: "We did everything . . . in [FSE]. There was one girl that always had trouble with her history and she would ask [FSE instructor]. So we would have like a 30 min history lesson". Students also mentioned that studying strategies and goal setting, important components of FSE, were treated in assignments required for RDG. A common feature of learning communities is for instructors to incorporate prominent and recurring learning themes to foster integration and deeper learning within paired courses. In the present study, learning themes were present across both linked courses for the duration of the semester. Instructors used two primary themes in their courses: (1) healthful living inspired via a book common to RDG and FSE, and (2) motivational materials/lessons that highlighted how visualization of a goal and dedicated work toward it can result in its attainment. Some learning themes were pre-planned during the design stage of the program, especially those that originated from the book-in-common. As an instructor recalled: We really decided that [book] was going to be one of the main themes in our learning communities. That our students were, both classes, our students were going to read those books and we were going to use the themes within those books to kind of merge the content as far as reading strategies and then also the goal setting, and the themes, whatever it is, in that book for that semester. Rather than being designed a priori, the theme of motivation emerged more spontaneously during the learning communities. The importance of learning themes surrounding the book-in-common and motivation were echoed in interviews with students. For instance, students in one learning community initiated a campus health fair because the book-in-common focused on healthful living. One instructor described how the learning community students organized and held the health fair for the college after reading the book. Another student mentioned career-themed writing assignments as an example of an important theme; students' career goals were a major thematic focus of the learning communities: We had to do career-themed papers, which helped me a whole lot discover if I was truly interested in the career I was going for. Which I'm still kind of iffy on it but . . . I like being able to write the paper about it that helped me get in touch more with the career I was wanting to do. One student also suggested that successful attainment of career goals was likely a theme in her classes: "The teachers that we had . . . they just were all about being successful . . . So maybe success was it [a theme]". One student seemed to share a similar perspective about motivation: "A theme . . . study. Do your work on time. Be punctual. It's just like having a job". Although only a few students explicitly used the word "success" when asked about a theme across their courses, students often mentioned their instructors being "all about success".
In addition, the book chosen each semester centered on an inspiring story of a person overcoming difficulties to attain their goals. Thus, the overarching goals of the learning community were designed around the notion of helping academically underprepared students to persist and succeed, and it appeared that students sensed this purpose and its related learning theme.

Administrator Involvement and Support
Successful implementation of a learning community often hinges on the degree of involvement and/or support provided by administrators, student advisors, and other key staff on campus. Accordingly, we sought to understand the amount and nature of administrator involvement and support with the program. Interviews revealed that all administrators felt connected to the program, but the degree to which the administrators were involved varied by their roles. One administrator offered: Very much [connected to the program] . . . I may not be in the classroom day in and day out with the students and faculty. But from the very beginning of this thing, from the research standpoint to really the decision-making standpoint to making sure that people across the college, faculty and staff, knew what this thing was. Administrators expressed that they valued and supported the learning community program. "We will support our faculty. They are not just out there on an island by themselves trying to enforce something that is a good practice," voiced an administrator. In turn, learning community instructors felt that the program was supported by the administration. Both instructors mentioned that administrators were eager to fund their professional development for teaching in the learning communities. They described moral support as well as material support: They [administrators] were super, super excited and supportive of us doing the health fair . . . They actually came in to visit the booths and ask our students questions . . . We have a learning community conference that we go to [regularly], so they're always eager to sign for us to go . . . other changes we can make to boost the communities . . . See what else other schools are doing. They are very supportive as far as professional development. If we do any kind of activity, they always make sure that they are there to support the students . . . they really jump in 100 percent. Learning community instructors reported satisfaction with the level of involvement from administrators. They described administrators as being primarily facilitative and supportive rather than directly managing or assessing them. Indeed, one instructor appreciated the freedom to teach their course within the learning communities in the manner that instructors deemed appropriate: They are involved just the perfect amount. [Laughter.] They have given us direction, and they have given us resources . . . asking us what we need for those learning communities. I know they are looking at the data and want to see the effectiveness of it. And they're looking at, of course, our evaluations. But they really give us freedom within those classes, and I think that is really important. Administrative support and enthusiasm for learning communities were also evident in the fact that all administrators were in favor of expanding learning community opportunities for students at the college, e.g., for other paired developmental courses, paired courses for college-ready students, and to train other instructors in methods that were seen to be widely beneficial for student engagement.
One administrator suggested "For me, learning communities being improved would mean, not necessarily improve in the way our current faculty teach in learning communities, but exposing more faculty to it". Another noted the many possible subjects amenable to learning communities: "The learning community concept could be expanded to other areas . . . the faculty have discussed, are there other opportunities for learning communities? . . . I mean there could be all kinds of different pairings out there".

Training Materials
Training materials primarily consisted of exemplars or "good practices" for learning communities. These materials were juxtaposed with how learning communities were actually implemented at the college. Training documents consisted of materials that instructors received from two conferences on learning communities during fall 2014 and fall 2015. Many presentations at a conference on learning communities focused on best practices for learning communities, conveying what had worked at the presenters' institutions. Most of the learning communities discussed were of the linked-course variety and a substantial number focused on academically underprepared students. Linking two or three courses was most common, although there were a few cases where four classes were linked; in the current study, instructors believed that adding a third linked course would enhance the program. Finally, a majority of these presentations aimed to inform instructors on how to integrate content areas between linked courses. While learning community instructors in the present study enhanced their level of integration between RDG and FSE as the program evolved, the courses were never commingled under a single unifying curriculum. As an example of a specific conference paper, Huot and Palm [25] reported that Georgia State University implemented a learning community program that began in the summer and ended the following spring. This learning community consisted of three courses: New Student Orientation, English 101, and a Social Science. The program, called "Success Academy", had four major components: Summer Bridge Program, Mentorship, Academic Support, and Personal & Professional Development. Students were required to engage in student services (a feature that learning community instructors in the present study wished was a component of their program). Further, students had to attend meetings with a peer mentor and met three times per semester with their academic coaches and academic advisers. If students did not meet GPA requirements during the summer, they were involved in an academy recovery plan. This plan required meetings with instructors, attending workshops, identifying barriers (academic and personal) to success in college, planning how to remove these barriers, and reflecting on goals for college. Some key features of this program, such as mandating student services and meetings with peer mentors, instructors, and advisers, may have dramatic effects on student performance and persistence. Note that all these elements served to increase student involvement, which is consistent with Astin's [26] student involvement theory. Additionally, through involvement, students experience academic and social integration according to Tinto's [27] student departure theory. Involvement is important for students in gaining Bourdieu's [28] forms of cultural and social capital that are prevalent in the institution.
Instructors in the present study believed that students should be required to go to tutoring, counseling, etc. In some cases, services were available (such as tutoring), but many students did not attend because attendance was not required under the program. Additionally, a counselor was unavailable for learning community students at the time of this study. Another study, presented by Baham and Finley [29], highlighted what they believed to be "best practices" of learning communities: " . . . fostering partnerships with student services, including advising, media, marketing, institutional research, and administration". In addition, Gebauer [30] identified student engagement, academic affairs, and enrollment management as pivotal to the success of learning communities. As such, buy-in for learning communities is important and the effectiveness of learning communities is contingent on multiple services provided by the college. In the current study, advising and administrative support proved to be program strengths. However, other student services and marketing for the program were limited and thus constituted areas of weakness for the program. In-depth interviews with instructors revealed that students were largely unaware of the program if they did not directly participate.

Discussion
This process evaluation identified several important design components that were not fully implemented in practice, or not implemented as the designers intended. At most postsecondary institutions, more attention is likely devoted to the design and assessment of outcomes for learning communities than to the specifics of their implementation. Yet how the design is enacted should be of great concern since implementation mediates learning and other program outcomes. To increase the probability that the design is readily implementable, we concur with Fosnacht and Graham [4] that instructors and those from teaching and learning centers should be consulted or actually brought onto the design team. Those most deeply involved with learning communities on a regular basis also may offer insights as to the specificity of design goals, and how these may be operationalized to ultimately measure success, whether it be with respect to implementation or student outcomes. Bringing in learning community instructors early in the program would also permit them a "big picture" vantage point, as well as a better understanding of exactly how their teaching efforts may contribute to program effects. In this study, there was mention of outcomes assessment by instructors and administrators, but little monitoring of processual aspects was discussed, e.g., team teaching and integration of content across linked courses. While instructors and administrators appeared knowledgeable about the intended design, it would seem prudent for the parties to revisit the original design aspects on a regular basis as implementation questions and nuances arise, i.e., to assess consistency between implementation and design. Alternatively, when decisions are necessitated concerning processes not expressly laid out in the design, at least these extemporaneous "mini-design" decisions could be documented as discretionary, and rationales noted. To our knowledge, this sort of process assessment was not conducted in a formal and routinized fashion for this program. A regular assessment schedule with respect to processes, as well as specialized tools and/or procedures, would likely prove useful for guiding implementation.
Instructors might chafe at this form of compliance given that they reportedly enjoyed considerable freedom from oversight of administrators, yet they might ultimately appreciate the structure and feedback inherent in the process, especially if instructors took the lead in carrying out assessment themselves. Through more intensive program monitoring, program strengths and weaknesses could be noted, and adaptations could be made in response to changes in demand from students as well as the college. It should be noted that the learning community coordinator's reported duties included professional development; updating progress to the college; marketing the program; budgeting; forming and facilitating the related college committee; as well as design, implementation, and assessment, in addition to responsibilities for other college reform initiatives. As evident from this list of responsibilities, this position may approach a full-time workload, and thus having a full-time coordinator or two co-coordinators may be warranted, especially if additional assessment of implementation processes were to be included. Initial instructor training with respect to the learning communities in this study appeared to stem mainly from attending conference sessions on learning communities and studying associated conference papers from a single conference. Ongoing mentorship or additional resources to assist with unfolding questions or problems that arose during the first semester were not reported by instructors or administrators. Yet instructors felt more confident and equipped as they gained more training, primarily through attending a yearly conference, and experience over subsequent semesters. More thorough and varied training, especially for instructors new to the program, would likely enhance the effectiveness of the learning communities at their inception. Further, it became apparent from the interviews that there was interest at the college with respect to implementing additional learning communities, such as in history and English. Given the high interest level, the formation of an informal professional learning community might result in more pooled training resources for the instructors. Such a professional learning community might result in increased "word of mouth" for the student learning communities on campus and could diminish pressure on instructors to quickly become de facto local experts on learning communities. Team teaching, where both instructors teach together for the duration of both classes, occurred only a few times throughout each semester. Ideally, learning communities should be team taught during a majority of class sessions, or more, across the entire semester: "The daily practice of team teaching creates an environment of continuous learning for everyone and for acculturating new members of the community" [31]. However, team teaching in this form may be impractical due to both instructors' time and college budgetary constraints. When instructors did team teach, students described these days as their "favorite days". On these days, the methodologies of learning communities were in full effect with a variety of active learning opportunities that allowed for frequent instructor-instructor interaction, student-student interaction, and student-faculty interaction. As such, a full implementation of team teaching into the learning communities would likely enhance the effectiveness of the program, including potentially facilitating more integration of course content, primarily via common themes.
Even though there was increased integration of content as the program evolved, there was nothing approaching a comprehensive curriculum across the two courses, which represented a missed opportunity in these learning communities. Last, as evident from the literature, many learning communities require students to partake in a variety of student services [9,32]. For the program at this college, student services were voluntary, but requiring students to attend weekly or bi-weekly meetings with their instructors, tutors, and advisers could be particularly beneficial for academically underprepared students. In fact, findings from this study revealed the importance of student services. In-depth interviews with learning community students revealed that they primarily learned about student services from inside the program. These services included a learning center, where tutors were available in various subjects, and a writing center. Increasing students' confidence to seek help, and the subsequent involvement with student services, instructors, and peers, are key steps in helping academically underprepared students to succeed in college [9,19]. The present study suggests that more intensive models of learning communities, equipped with tutors, counselors, and other student support services, may be needed for optimal learning communities, especially with respect to developmental education. As an implication for the future, our study points to the need for more widespread use of process evaluations when others are considering future learning community programs. Locally conducted process evaluations would permit stakeholders to detect potential problems or areas for improvement "on the fly" before program outcomes have crystallized. Further, process evaluations could pinpoint variables or processes idiosyncratic to each institution that may serve as powerful mediators of program success, e.g., student or instructor characteristics, institutional culture(s) and resources, relations with the external community, and so forth. Like all studies, this one has its limitations. First, the process evaluation examined a program implemented for academically underprepared students in reading at a two-year, public college. While our findings may or may not be generalizable to other groups of students, other types of institutions, or even other two-year colleges, there is little reason to believe a priori that they would not apply to many other learning communities elsewhere. Second, while all instructors and administrators in the program participated in the study, student attrition occurred over the two-year study. For instance, students who participated in a learning community but dropped out of the college were not available to be interviewed, and it is possible that those who left might have provided different perspectives than those who remained. Further, there was the potential for nonrespondent bias in terms of those who declined to be interviewed. Third, data for this study were collected over 2015 and 2016. Since then, the COVID-19 pandemic has ushered in more widespread use of, and innovation in, remote and hybrid learning opportunities. Concomitantly, there has been increased interest in online-based learning communities, e.g., in terms of how they can foster a sense of community when conditions occur that constrain physical proximity, or can reduce digital inequality for those students living in rural areas [16].
That said, our primary findings and conclusions appear quite relevant in the contemporary educational landscape. In fact, the challenges to learning communities that we identified on the campus, such as providing more comprehensive instructor training, fully realizing team teaching, and delivering student support services, may prove even more challenging remotely than in person, particularly given the technological and coordination demands that must be surmounted.

Conclusions
More careful implementation of learning communities may result in greater program success for students. Based on this process evaluation of a linked-course learning community for developmental reading students, we offer the following suggestions for implementation: (1) define specific goals that are, in fact, easily implementable; (2) fully implement team-teaching across linked courses; (3) implement an integrative curriculum; (4) provide in-depth and ongoing instructor training, along with specialized resources; (5) expand support services available to students and require them to use at least some as part of the learning community experience; and (6) create tools/methods for instructors and administrators to assess processual aspects rather than just program outcomes.

Data Availability Statement: Data are covered by a confidentiality agreement and thus are not available.
Conflicts of Interest: The authors declare no conflict of interest.
Maintaining long-term physical activity after cancer: A conceptual framework to inform intervention development

Purpose: This paper describes a conceptual framework of maintenance of physical activity and its application to future intervention design.

Methods: Evidence from systematic literature reviews and in-depth (N=27) qualitative interviews with individuals with cancer was used to develop a conceptual framework of long-term physical activity behaviour. Determinants of long-term PA were listed and linked with domains of the Theoretical Domains Framework, which in turn were linked to associated behaviour change techniques (BCTs) and finally to proposed mechanisms of action (MoA).

Results: The conceptual framework is presented within the context of non-modifiable contextual factors (such as demographic and material resources) and in the presence of learnt and adapted behavioural determinants of skills, competence and autonomous motivation that must be established as part of the initiation of physical activity behaviour. An inventory of 8 determinants of engagement in long-term PA after cancer was developed. Clusters of BCTs are presented along with proposed MoA which can be tested using mediation analysis in future trials.

Conclusion: Understanding the processes of PA maintenance after cancer, together with the presentation of implementable and testable intervention components and mechanisms of action to promote continued PA, can inform future intervention development.

Implications for cancer survivors: This resource can act as a starting point for selection of intervention components for those developing future interventions. This will facilitate effective support of individuals affected by cancer to maintain PA for the long term.

Introduction
Each year more than 17 million people are diagnosed with cancer worldwide (1). Cancer and its treatment can result in numerous adverse physical and psychological consequences, some of which can persist for years after treatment completion. Individuals with a history of cancer are also at risk of cancer recurrence and of developing other chronic conditions such as heart disease. Engaging in regular physical activity (PA) can mitigate many of these adverse effects, including fatigue, anxiety and depression, improve physical functioning and health-related quality of life (2), reduce cancer recurrence and improve survival (3). Despite this, around 70% of people with cancer are not meeting physical activity guidelines (4-7). Systematic reviews and meta-analyses have found clinically significant effects of interventions to support initiation of physical activity behaviour change, with substantial increases in activity levels from baseline to end of intervention (8-10). However, to sustain the health benefits of physical activity, individuals must be habitually physically active for the long term. There is some evidence of modest maintenance effects (11) in people with cancer, but activity levels typically regress towards baseline as the time from end of intervention increases (12). There are scant examples of interventions developed with an explicit aim to support sustained increases in physical activity in people affected by cancer, and further research is required. Key to developing effective interventions is to identify determinants of the target behaviour in the population in question as well as potential processes of change. Intervention components that influence those determinants can then be identified.
This can be achieved by reviewing existing literature and consultation with the intervention development team. It is also increasingly recognised that empirical research with the specific population is imperative to fully understand the 'problem' and identify potential solutions. These are core principles of intervention development frameworks such as Intervention Mapping and the MRC Guidance for development of complex interventions (13). Such development frameworks also recommend the use of theory of behaviour/behaviour change to guide the identification of pathways of change and appropriate intervention components. The Theory of Planned Behaviour, Social Cognitive Theory and the Transtheoretical Model have received much attention within the PA and cancer literature (14-16). However, empirical evidence suggests that no one theory is superior and that theory-based interventions are as effective as those without explicit theoretical underpinning (17,18). This may be because both types of intervention include similar behaviour change techniques (the main catalysts of intervention effects) (13,17) and existing theories have multiple overlapping constructs. To address the latter issue, the Theoretical Domains Framework (TDF) was developed. It brings together 33 models of behaviour/behaviour change, including 128 separate constructs, and is increasingly used by researchers developing complex interventions that target health behaviour change (19). The TDF has 14 theoretical domains on which researchers can draw to support identification of pathways of behaviour change. Furthermore, for optimal transparency of the intervention development process, and to advance our understanding of not just what works, but how it works, researchers are encouraged to identify the proposed mechanisms of action (MoA) through which interventions are hypothesised to exert their effects. MoA are 'the processes through which behaviour change techniques affect behaviour' (20). For example, a barrier to engagement in physical activity might be a lack of belief in one's capability to perform the behaviour. Intervention developers would select BCTs believed to impact the MoA 'belief in capability', for example, verbal persuasion and focus on past success. Subsequent evaluation of the intervention would include measuring change in this MoA and conducting mediation analysis to determine its impact on behaviour (21). Central to such endeavours is an agreed matrix of BCTs and hypothesised MoA which can act as a standardised resource, enabling synthesis of data. This has been achieved through a series of studies of literature synthesis (22) and expert consensus (23), triangulated to develop a Theory and Technique tool linking BCTs and their MoA (24). To date there is a lack of empirical data and of conceptual or theoretical understanding of the process of engaging in sustained physical activity that can inform the identification of behavioural determinants. Indeed, in concluding remarks following their recent meta-analysis of sustaining physical activity behaviour after intervention completion, McEwan et al. (2021) recommend 'future efforts to develop and test theoretical frameworks that specially focus on maintenance of health behaviour could help optimise interventions that are concerned with supporting long-term physical activity adherence' (p8). This paper describes the development of a conceptual framework of sustained physical activity engagement in people with cancer through meta-analysis and primary qualitative research.
This led to the generation of an inventory of determinants of this behaviour and a corresponding matrix of BCTs and associated MoA that can be used as a basis for developing future interventions.

Methods
A modified version of French et al.'s (25) approach to developing theory-informed behaviour change interventions was used and involved four phases.

1: Who needs to do what, differently? A systematic review and meta-analysis of long-term physical activity behaviour change in cancer survivors was conducted by our group (11). We explored factors associated with success (or lack thereof) including context, population characteristics and behaviour change techniques.

2: What barriers, enablers and processes need to be addressed? To afford an in-depth understanding of the barriers, enablers and processes involved in long-term PA behaviour change, a qualitative study of 27 cancer patients who had taken part in a previous PA programme in the UK was conducted (26). Inductive thematic analysis was conducted, and findings were combined with those from the aforementioned systematic review and meta-analysis (15) and evidence from relevant qualitative meta-syntheses and reviews (12,27-29), selecting and structuring factors to create constructs that informed the development of an inventory of key determinants of sustained PA behaviour. These data also informed the development of a conceptual framework describing the processes of sustained PA engagement.

3: Using a theoretical framework, what intervention components could address these barriers and enablers? In consultation with co-authors, barriers and enablers were mapped to the associated domains of the Theoretical Domains Framework. Using published expert consensus linking BCTs to the TDF domains (30), BCTs were identified that might address these barriers and enablers. In addition, we reviewed the recently published compendium of 'self-enacted techniques' (31) and selected additional intervention components hypothesised to impact on the target determinants.

4: How can the behaviour change be understood? Using the Theory and Techniques Tool, we identified key MoA associated with each behaviour change technique which could be used to assess intervention causal pathways (mediating behaviour change). This was an iterative process involving regular meetings and revisions at all stages with co-authors.

Who needs to do what differently? The systematic review and meta-analysis, including 19 studies, concluded that existing interventions with a long-term follow-up were successful in achieving moderate improvements in sustained behaviour change (11). Older adults, those with existing functional limitations, and those who had fewer contacts with those delivering the PA programme were less likely to sustain PA increases. Furthermore, PA programmes included in the review with poorer long-term behaviour outcomes were less likely to include the BCTs of action planning, graded tasks, and social support (unspecified) (see (11) for full details). These findings were triangulated with the qualitative data (26) and relevant reviews (12,27-29), generating hypotheses of the processes at play and informing step 2: generation of an inventory of barriers and enablers and a conceptual framework.

What barriers and enablers need to be addressed?
Findings from the qualitative study and systematic reviews were selected and structured to identify an inventory of 8 determinants of engagement in long-term PA behaviour after cancer, and a conceptual framework illustrating the processes at play was developed (see Fig. 1). Central to the conceptual framework are the founding factors related to initiation of behaviour required before long-term maintenance can be achieved (depicted in the outer segment of Fig. 1). Such factors are vital for consideration during the initiation of PA behaviour before maintenance can be achieved. Contextual factors are key and include socioeconomic status, demographics (including age, sex, education level), material resources and environment (i.e. access to facilities and/or appealing outdoor space). Appropriate attention must be paid to these factors when identifying when, where, and how an individual will engage in a physically active behaviour. This will ensure the chosen activity is appropriate to the individual's personal context and resources. Furthermore, the initial intervention must result in motivation to initiate change. The participant must develop the necessary skills to engage with their new activity, and that activity must be appropriate to their pathophysiological status.

A case study: a participant in the qualitative study described how the practice nurse at her GP surgery repeatedly told her she needed to do more exercise and suggested walking. No support or consultation was provided. The participant reported walking on a handful of occasions but then stopped. Reasons included pain on walking, concerns about breathlessness and difficulties accessing suitable walking routes. On taking part in the Move More intervention she engaged in a conversation with the practitioner regarding her history of PA, likes and dislikes, her priorities and commitments, as well as existing health conditions, which included obesity, mobility issues/joint pain and COPD. A local weekly yoga class was identified which, at the date of interview, the participant had been attending for more than 2 years.

Once PA behaviour is initiated, the individual experiences the consequences of that behaviour, including an impact on their affective state and physiological outcomes. When engaging in PA with others, this will also include social interaction. If the behaviour results in desired/positive outcomes, that behaviour is reinforced and is more likely to be sustained. Inevitably, at some point the behaviour will be disrupted. This may be due to affective factors, such as low mood or boredom, or practical ones, e.g. discontinuation of an exercise class. Alternatively, a life stressor event such as ill health or caring responsibilities can disrupt engagement, as can changes to the environment/resources. Adaptation to this disruption then ensues. This will include a prioritisation process of physical and psychological resources. It may also involve problem solving, assessment and/or change regarding accessibility to local amenities and/or personal resources. The pros and cons of re-engaging with the discontinued activity, including reflection on the consequences of behaviour (experienced outcomes), will be influenced by the degree of intrinsic motivation and self-efficacy (confidence) the individual has for that activity. For some, social support will play a role here, with some individuals requiring practical or emotional support to re-engage. Consequently, the behaviour remains ceased/reduced or resumption occurs. This is a cyclical process.
The inventory of barriers and enablers is set out in Table 1. Finding pleasure/enjoyment in the chosen activities was a key enabler of maintenance identified in the qualitative data and wider literature. Those individuals who were habitually active talked about the fun they had and the enjoyment they felt from being physically active. Individuals engaging regularly in physical activity describe feelings of empowerment as a result, seeing it as evidence that they have overcome the physical challenges of cancer and its treatment and regained a sense of ownership and control of their bodies. This is linked to a feeling of confidence in their ability to engage in these activities. An individual's sense of perceived value/experienced outcomes is also integral to motivation to continue to engage in PA participation. Such outcomes included extrinsic factors such as the importance of PA in maintaining health, function, and independence, as well as intrinsic factors such as a sense of wellbeing. For those who participate in group activities, social interaction is an important facilitator of engagement. Disruption to PA engagement can typically be divided into affective factors, such as low mood, anxiety and boredom, and external factors, including terminated exercise classes, ill health or life stressors. The perceived appropriateness of PA to an individual's age is also important, with individuals less inclined to engage if social comparison or personal identity are not aligned with the activities. Finally, monitoring of engagement by self or others is an important element of participation. This process enables individuals to track progress and identify reductions in PA that can then be addressed. The compendium of self-enacted techniques has several techniques overlapping with Michie's taxonomy but also includes some important additions which we hypothesise would impact some of the barriers and enablers identified here. For example, boredom was a barrier to continued PA engagement. This might be overcome with the use of the self-enactment technique "add challenge (stops behaviour becoming boring)". Enjoyment was an important enabler of continued PA, and the techniques of "task crafting (enjoyment)" and "focus on enjoyment (pleasant aspects)" are likely to facilitate this.

How can the behaviour change be understood? Mechanisms of Action associated with the BCTs were identified using the Theory and Techniques tool (33). See Table 1. Measuring them in future interventions would enable calculation of mediating mechanisms of behaviour change.

Discussion
This paper presents a novel conceptual model of long-term engagement in physical activity after a cancer diagnosis, with an accompanying inventory of determinants and suggested behaviour change techniques. This resource can act as a starting point for selection of intervention components for those developing future interventions. It is essential that it is used alongside a robust development process, using qualitative and co-design approaches to understand the local context and specific participant group, working with multidisciplinary teams to choose methods and modes of delivery that are locally feasible. This conceptual framework is a holistic appreciation of the problem of sustained PA participation after a cancer diagnosis. The outer elements of the framework, including contextual factors such as environment, socioeconomic status and material resources, are key, shaping the possibility of the initial PA engagement necessary before sustained action can be achieved.
Historically, consideration of these contextual factors has often been absent from intervention development. A personalised approach, assessing pathophysiological state and evaluating these contextual factors at an individual level, is key to establishing initial PA engagement that has the potential to be sustained. This perspective is in keeping with the recently updated MRC Framework for the Development and Evaluation of Complex Interventions (13), a key addition to which was the inclusion of context in the definition of complex interventions, as well as consideration of the systems within which the intervention sits. Comparisons can be drawn between the conceptual model, inventory of behavioural determinants and associated intervention components presented here and Kwasnicka et al's review of the theoretical explanations for maintenance of physical activity behaviour (34). Kwasnicka and colleagues coded constructs of 100 theories of behaviour and set out five theoretical themes they deem relevant to PA maintenance, four of which are also captured in our work. These include 'maintenance motives' (regular gratification is more likely to lead to sustained engagement), 'self-regulation' (including the need for high coping self-efficacy), 'resources' (psychological and physical assets), and environmental and social influences. In recent years the concept of behavioural maintenance in physical activity has received considerable attention, with thought paid to the way maintenance is conceptualised and operationalised. The conceptual model presented here aligns with the evolution in thinking from maintenance of a behaviour as a specific state to consideration of the underlying mechanisms of action that determine behaviour. This includes shifts back and forth between reflective and reactive processes (35). Rhodes & Sui (35) present a new, working definition of physical activity maintenance as 'a dynamic development of mechanisms of action that engender greater perceived behavioural enactment efficiency that partially supplant prior mechanisms of action that required greater perceived cognitive resources to enact physical activity'. They argue that some constructs that were critical to the initiation of a behaviour will still be important for engagement in the longer term. This is reflected in the conceptual model and associated BCTs presented here, where constructs and techniques are essential to embed in the process of initial behaviour enactment to support longer-term maintenance. Whilst it has been argued that maintained behaviours are characterised by intrinsic motivations and self-determined actions, self-regulatory skills are still required over time, as inevitable disruptions occur and more effortful regulation is required. Hence the suggested inclusion of BCTs that consolidate self-regulatory processes and skills in the initiation of behaviour change, as well as in long-term behaviour change, where these skills may need to be revisited to manage a period of disruption. Furthermore, some individuals may spend longer in a phase of effortful self-regulation than others and need to revisit these skills more frequently. This is also supported by evidence from a review of determinants of physical activity behaviour in older adults, which found that self-regulatory strategies such as action planning and coping planning were positively associated with both activity initiation and maintenance (36).
Development of a typology of sustained physical activity that preceded and informed this model argues that individuals fall into three 'types' (26): those who, after initial support to engage, successfully maintain increased levels of PA through planning and prioritisation; those who are 'intermittently active', with cycles of action and inaction and frequent periods of effortful self-regulation; and those with consistently low levels of PA, with minimal engagement in PA during or after intervention participation. Reviewing individuals after 12 weeks of intervention participation to ascertain which of these 'types' they most align with could help personalise the intervention components/BCTs that need to be emphasised to support maintenance of behaviour. An important consideration when developing interventions including evidence-derived BCTs is to have confidence that these strategies are being delivered and utilised by recipients as intended. Knittle et al (31) recently published the compendium of self-enactable techniques v1.0 to change and self-manage motivation and behaviour. The focus of this taxonomy is on techniques which individuals can enact themselves, rather than those which are delivered or enacted by the intervention providers. They point to evidence that suggests maintenance of behaviour change following interventions is dependent on the extent to which individuals can enact the BCTs involved (37)(38)(39). They also argue that existing taxonomies have insufficient focus on the way techniques are delivered, received (comprehended and understood) and enacted in everyday lives. This new classification also includes techniques from additional behavioural domains that may have utility in affecting behaviour and/or its determinants, drawn from sport and occupational psychology. These considerations are echoed in a qualitative study (40) that explored participants' understanding of the specific BCTs included in the Diabetes Prevention Program. The focus was on self-regulatory strategies such as problem solving, action planning, self-monitoring of behaviour and goal setting. Some techniques, including self-monitoring, were well understood, and participants accurately described their use. Others, including action planning and problem solving, were harder to understand, and additional support was needed to enable participants to operationalise these intervention components. It is therefore imperative that those designing and delivering interventions to support long-term PA behaviour change provide appropriate and accessible explanations of the fundamental BCTs to ensure they can be enacted. Assessing this during and after intervention delivery is also important. In addition to the need for evidence-informed intervention components, it is important to understand how these components exert their effects on behaviour. This paper identifies potentially effective intervention components/BCTs linked to the constructs of the conceptual model and goes on to state the hypothesised MoA through which these BCTs/groups of BCTs exert their effect on behaviour. This was achieved by using a publicly available database where links between BCTs and proposed MoA have been collated based on expert-verified consensus (ref). It has been widely acknowledged that research exploring potential MoA as mediators of behaviour change in specific contexts with specific populations is necessary to advance the field of behavioural science.
Haggar et al (41) describe a process model, including the type of data and analysis needed to contribute to the evidence base for testing proposed MoA. Such contributions will provide data that can be synthesised and increase our collective knowledge of the MoA of interventions. This is important work given the current lack of such evidence, as demonstrated in a recent series of meta-reviews (42).

Conclusion

This conceptual model and accompanying inventory of potential intervention components is intended to inform those developing novel interventions to promote long-term engagement in physical activity after a cancer diagnosis. The model and its proposed MoA could be tested in future, using methods outlined by Haggar and colleagues. Use of this model alongside participatory approaches, consulting with end users and key stakeholders to ensure relevance and appropriateness, is encouraged.

Competing interests: The authors declare that they have no competing interests.

Abbreviations

Authors' contributions: Chloe Grimmett was responsible for the conception, design, data interpretation and drafting of the manuscript. Claire Foster and Bernardine Pinto made substantial contributions to the conception of the manuscript. Carl May, Teresa Corbet, Kate Morton and Katherine Bradbury made substantial contributions to the conception and data interpretation of the manuscript. All authors read, revised, and approved the final manuscript.

Data availability: The data generated during the current study are available from the corresponding author on reasonable request.

Ethics approval and consent to participate: Not applicable to this paper. Ethics approvals were given for the qualitative work preceding this publication.

Consent: Not applicable

Availability of data and materials: Not applicable

Acknowledgements: Not applicable

Figure 1. Conceptual framework of long-term engagement in PA after cancer
2023-08-15T06:17:32.166Z
2023-08-14T00:00:00.000
{ "year": 2023, "sha1": "8d37e2880b55523d757c3730e7fbee6a3e2a1a5f", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s11764-023-01434-w.pdf", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "839b08a9bd83febf9805157208ae237564637e37", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
55989382
pes2o/s2orc
v3-fos-license
An overview of the limnetic Cyclopidae (Crustacea, Copepoda) of the Philippines, with emphasis on Mesocyclops

Approximately 120 (sub)species of Cyclopidae have been reported from South and Southeast (SE) Asia, where the Philippine archipelago – with 16 (including two endemic) taxa – is one of the least explored parts of the region. Our study, part of current efforts to assess freshwater biodiversity, was undertaken to update the diversity and geographic distribution of the cyclopid copepods living in the limnetic zone of the freshwater lakes in the Philippines. Examination of the samples from 22 lakes on five islands (Luzon, Mindoro, Cebu, Leyte and Mindanao) revealed a novel species from lake Siloton (Mindanao), Mesocyclops augusti n. sp. The new species can be distinguished from its congeners by the surface ornamentation of the hindgut, among other characters. The same character state was found in a Mesocyclops from North Vietnam, which is provisionally identified as M. augusti n. sp., though the Vietnam and Mindanao specimens differ in a few characters (which are, however, polymorphic in the close relatives). Mesocyclops microlasius Kiefer, 1981, endemic to the Philippines, is redescribed, based on females and males from lake Paoay (North Luzon). Sister relationships of M. augusti n. sp. and M. microlasius were tested in a phylogenetic analysis that included the closely related Old World representatives of the genus. The maximum parsimony trees show M. dissimilis Defaye et Kawabata, 1993 (East Asia) as the closest relative of M. augusti n. sp. (Mindanao, Vietnam), and support a sister relationship between M. geminus Hołyńska, 2000 (East Borneo) and M. microlasius (Luzon, Mindanao). A mainland clade (M. francisci, M. parentium, M. woutersi, M. dissimilis, M. augusti) appears in most reconstructions; all members of the clade occur in continental Asia, though some species also live on islands that have never been connected to the SE Asian shelf. In most trees with the mainland clade, the insular taxa (M. microlasius, M. geminus, M. friendorum) form either a paraphyletic (basal to mainland) or a monophyletic sister group of the mainland clade. We also established the first records of Thermocyclops taihokuensis (Harada, 1931) in the Philippines (Luzon), a species so far known from East and Central Asia. In all, 11 taxa [Mesocyclops (4), Thermocyclops (4), Microcyclops (1), Tropocyclops (1) and Paracyclops (1)], including only one endemic species (M. microlasius), have so far been found in the limnetic waters. We expect significantly higher diversity and a higher rate of endemism of the freshwater cyclopids in the littoral (paludal) and subterranean habitats of the Philippines.
INTRODUCTION

Cyclopidae [~1010 (sub)species], as one of the largest crustacean families, is represented by ~120 (sub)species in South and Southeast (SE) Asia (Boxshall and Defaye, 2008; MH's personal database), and only 39 species have so far been reported from the insular Indo-West Pacific region (Hołyńska and Stoch, 2011). As a dominant component of the lake zooplankton community, cyclopid copepods are considered important prey of larval and adult zooplanktivorous fish (Papa et al., 2008). Their abundances (together with those of cladocerans), when compared to calanoid copepods, are often related to lake nutrient levels and water temperatures, as revealed by long-term investigations of zooplankton community responses to eutrophication and climate change (Anneville et al., 2007; Chih-hao et al., 2011). Furthermore, Mesocyclops (and at least one Thermocyclops) species have potential use as biological control agents of Dengue-carrying mosquitoes (Aedes spp.). Experiments on the use of copepods in mosquito control have been conducted in several countries including the Philippines, Vietnam, and Australia (Nam et al., 1999; Dussart and Defaye, 2001; Ueda and Reid, 2003; Panogadia-Reyes et al., 2004).

The Philippine archipelago (comprising 7107 islands) is located on the fringe of SE Asia, yet remained separated (though by a narrow strait between the present Borneo and Palawan islands) from the Asian mainland even during the largest sea-level drop of the last glaciation (Voris, 2000; Sathiamurthy and Voris, 2006). The diverse connections of the Philippines have caused them to be allocated to various biogeographic regions. The Philippines are part of the Oriental region in the Sclater-Wallace system, while the archipelago (except for Palawan) is taken out of the Oriental region in the Huxley scheme. Botanists consider the Philippines part of Malesia (also including the Malay peninsula, the Indonesian archipelago and New Guinea), while in the latest monograph on the zoogeography of continental waters, the southern and southwestern Philippines (Palawan, Calamianes, Mindoro, Sulu and Mindanao islands) belong to the South Asian subregion (which largely fits the Oriental region) of the Sino-Indian region, and the northern Philippines are part of the Indo-West Pacific peripheral areas, also including Wallacea, the Pacific islands and a few islands in the cold zone of the southern hemisphere (Bănărescu, 1992, 1995). The insular character has favoured high species richness and endemism in flora and fauna, making the Philippines one of the most interesting regions worldwide (Jones and Kennedy, 2008). It is, unfortunately, also recognised as a biodiversity hotspot, due to the high rate of habitat destruction and loss (Ong et al., 2002; Sinha and Heaney, 2006). Inland waters are among the least studied and most threatened habitats for Philippine biodiversity. In other parts of the world they have been found to host a variety of interesting plant and animal taxa (Balian et al., 2008), but they have so far escaped the attention of Philippine biologists. In the Philippines, studies on limnology and freshwater biodiversity have been fragmentary and inadequate, which has often led to the formulation of poorly prepared conservation and management plans, including those for approximately 70 lakes throughout the archipelago (Papa and Mamaril Sr., 2011).
The current body of knowledge on freshwater zooplankton diversity in the Philippines has mostly been made up of contributions by western scientists. In 1872, Carl Semper described a spherical rotifer, Trochosphaera aequatorialis, from rice fields in Mindanao island (southern Philippines) (de Elera, 1895; Mamaril Sr. and Fernando, 1978). This survey was followed by descriptions of new species and listings of new records in many lakes and other freshwater ecosystems during the Wallacea expedition, including the cyclopid copepods Mesocyclops microlasius Kiefer, 1981 and Thermocyclops wolterecki Kiefer, 1938 (Brehm, 1938, 1942; Kiefer, 1938a, 1938b, 1981; Hauer, 1941; Woltereck et al., 1941). Two of the most comprehensive and important papers on Philippine freshwater zooplankton were published by a Filipino, Mamaril Sr., who listed a total of nine copepod species (Mamaril Sr. and Fernando, 1978; Mamaril Sr., 1986). Only a few researchers have added new species or locality records since then (Petersen and Carlos, 1984; Tuyor and Baay, 2001; Aquino et al., 2008; Papa and Zafaralla, 2011). At present, 16 freshwater cyclopid species have been recorded in the Philippines (Tab. 1), yet some of those species certainly do not occur in the archipelago, or their taxonomic identity needs further verification. On the other hand, hardly anything is known about the copepods of the small water bodies, swamps and subterranean waters of the Philippines, which may be home to a much richer fauna including several endemic and/or relict taxa (Brancelj et al., 2013; Van Damme et al., 2013). A focus on the lake plankton forms can only be justified as the first step of a long-term research project on the freshwater copepod fauna of the Philippines.

Here we present an overview of the species diversity and geographic distribution of the cyclopid copepods collected from the limnetic zone of 22 lakes distributed over five major islands (Luzon, Mindoro, Cebu, Leyte and Mindanao) of the Philippines. Among these are lakes where efforts to conserve biodiversity have to be balanced with the need to provide sustainable fisheries-based livelihoods, and where the recent increase in under-regulated aquaculture practices has led to a decline in water quality and increased eutrophication (Tamayo-Zafaralla et al., 2002; Palerud et al., 2008).

A large part of the paper deals with the systematics of a single group, the genus Mesocyclops, and the reasons for this are twofold: i) Mesocyclops is one of the most common and most species-rich components of the tropical lake plankton; ii) the taxonomy of the genus is sufficiently well understood to pose questions about the phylogenetic relationships/origin of the Philippine species. We describe a new Mesocyclops species from Mindanao, provide an amended diagnosis of the endemic M. microlasius, and discuss the phylogenetic relationships of these Mesocyclops taxa.

METHODS

In total, 43 samples collected from 22 lakes were analysed: 15 lakes in Luzon, 4 in Mindanao and 1 each in Cebu, Leyte, and Mindoro islands (Fig. 1; Supplementary Material A).
Samples were collected between 2006 and 2011. Plankton sampling was done by towing 50, 80 and 100 µm mesh-size plankton nets along several transects perpendicular to the lake shore. Littoral and limnetic samples were stored separately unless there was no clear demarcation between the littoral and limnetic areas of the lake. The samples were fixed in 10% formalin and later transferred to 5% formalin with Rose Bengal dye. Selected specimens from the limnetic samples were dissected and mounted in a glycerin medium. Slides were sealed with nail polish. Dissections were made under an Olympus SZ11 stereomicroscope (Olympus, Tokyo, Japan). Specimens were examined in bright-field and differential interference contrast optics and drawn using a camera lucida attached to an Olympus BX50 compound microscope (Olympus). Telescoping body segments were measured separately. Setae of the caudal rami are denoted by Roman numerals, following the scheme applied by Huys and Boxshall (1991).

With few exceptions, only the females could be identified, as available identification keys rarely include characters of the male. Identifications were based on Karaytug (1999), Ueda and Reid (2003) and Hołyńska (1997, 2000, 2006a).

[Tab. 1 legend: Lu, Luzon; Mi, Mindoro; Ne, Negros; Le, Leyte; Ce, Cebu; Ca, Camiguin; Mn, Mindanao; Jo, Jolo. Sources: 1, Marsh (1932); 2, Woltereck et al. (1941); 3, Mamaril Sr. and Fernando (1978); 4, Kiefer (1938b); 5, Kiefer (1981); 6, Mamaril Sr. (1986); 7, Mamaril Sr. (2001); 8, Tuyor and Baay (2001).]

In search of the sister species of M. augusti n. sp. and M. microlasius, we applied the global parsimony criterion using Hennig86 version 1.5. The analyses used 18 morphological characters (46 character states) and included only the morphologically close Old World representatives of the genus (Supplementary Materials B and C). In the choice of the ingroup and outgroup taxa we relied upon a former phylogenetic analysis (Hołyńska, 2006b) that included all species of the genus. Phylogenetically informative characters that are fixed in some ingroup species can be intraspecifically variable in other members of the ingroup. The proportion of polymorphic traits can be significant in young groups, where most speciation events happened relatively recently and there was too little time for a novel character state to become fixed in each character. Exclusion of the polymorphic characters from the phylogenetic analysis (which means loss of information) would result not only in worse resolution but also in false tree topology (Wiens, 2000). Eight characters showed intraspecific variation in some species. Data on character polymorphism were taken from earlier publications (Hołyńska et al., 2003; Hołyńska, 2006b) and from observations of the specimens listed herein.
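For readers unfamiliar with parsimony scoring, the sketch below counts the minimum number of state changes required by one unordered character on a fixed tree (the Fitch algorithm); implicit enumeration, as performed by Hennig86, effectively applies such scoring across all possible topologies to find the shortest trees. The tree shape and character states in the example are invented for illustration and do not come from the data matrix used here.

```python
# Minimal Fitch (small) parsimony: minimum number of state changes needed
# for one unordered character on a fixed rooted binary tree.
# Taxa and states below are invented purely for illustration.

def fitch(tree, states):
    """tree: nested 2-tuples of taxon names; states: taxon -> character state."""
    changes = 0
    def score(node):
        nonlocal changes
        if isinstance(node, str):               # leaf: its observed state set
            return {states[node]}
        left, right = (score(child) for child in node)
        if left & right:                        # non-empty intersection: no extra step
            return left & right
        changes += 1                            # empty intersection: one extra step
        return left | right
    score(tree)
    return changes

tree = ((("taxonA", "taxonB"), "taxonC"), ("taxonD", "taxonE"))
states = {"taxonA": 0, "taxonB": 1, "taxonC": 1, "taxonD": 0, "taxonE": 0}
print(fitch(tree, states))   # prints 2: the minimum steps on this tree
```

Summing such per-character step counts over all characters gives the tree length; the shortest trees are those retained by the ie command mentioned below.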
We treated the polymorphic characters in three different ways (unordered, unscaled and scaled coding). All three coding methods recognise a polymorphic condition as a separate state, the differences between the coding methods lying in the presumed numbers of transformations between the states fixed absent, polymorphic and fixed present [for more details see Wiens (2000) and Hołyńska and Stoch (2011)]. In the unordered run, all characters (both those which are intraspecifically variable and those which are not) were coded as unordered and given a weight of one. In the unscaled run, all characters with intraspecific variation (chars 1, 4, 8-10, 12, 16, and 18), as well as a fixed character with serially homologous encaptive states (char 2), were coded as ordered, and all characters were given a weight of one. In the scaled analysis, characters 1, 2, 4, 8-10, 12, 16, and 18 were coded as ordered; the fixed characters 2, 3, 5-7, 11, 13-15 and 17 were given a weight of 2, and polymorphic characters were given a weight of one. In tree building, the ie command was applied to produce trees by implicit enumeration, so the results are certain to be trees of the shortest length. In the analysis of character transformations and in editing the trees we used WinClada (Nixon, 1999-2002).
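The three coding schemes therefore differ only in the assumed transformation costs between the states fixed absent (0), polymorphic (1) and fixed present (2), and in the character weights. The following sketch makes those differences explicit; it is a schematic reading of the description above (after Wiens, 2000), not the actual Hennig86 input.

```python
# Step costs between character states 0 (fixed absent), 1 (polymorphic),
# 2 (fixed present) under the three coding schemes described above.
# A schematic reading of the text, not the Hennig86 matrix itself.

def step_cost(a, b, scheme, polymorphic_char=True):
    if scheme == "unordered":
        # any change between distinct states costs one step
        return int(a != b)
    if scheme == "unscaled":
        # polymorphic characters are ordered: 0 <-> 2 must pass through 1
        return abs(a - b) if polymorphic_char else int(a != b)
    if scheme == "scaled":
        # ordered as above, but fixed characters carry weight 2, so a full
        # absent -> present transformation costs the same in every character
        weight = 1 if polymorphic_char else 2
        return weight * (abs(a - b) if polymorphic_char else int(a != b))
    raise ValueError(scheme)

for scheme in ("unordered", "unscaled", "scaled"):
    print(scheme, "0->2 in a polymorphic character costs",
          step_cost(0, 2, scheme), "step(s)")
# unordered: 1 step; unscaled: 2 steps; scaled: 2 steps
# (a fixed character's 0->1 change under scaled coding also costs 2)
```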
Specimens were deposited in the Museum and Institute of Zoology (MIZ), Warsaw, Poland, and the University of Santo Tomas Zooplankton Reference Collection (USTZRC), Manila, Philippines.

Etymology: the species is named in honor of Associate Professor Augustus C. Mamaril Sr. of the University of the Philippines-Diliman, who is considered to be the foremost Filipino zooplanktologist.

Integumental perforation pattern of rostrum as in Fig. 3D. Labrum with 13 teeth on distal edge, lateral lobes smooth (Fig. 3E, arrowed), distal fringe hair arranged in arc. Shorter spinules (Fig. 3E and 3F, arrowed) present on both right and left sides of labrum, anteriorly to fringe hair. Epistoma (longitudinal medial hump between labrum and rostrum) pilose (Fig. 3E), hair also present laterally to epistoma. Vertical cleft between epistoma and rostrum pilose (verified in paratype from Mindanao and females from Vietnam), resembling that in M. microlasius. Mandible (Fig. 4A and 4B) with palp bearing two long and one short setae. Three groups of spinules present on anterior surface of gnathobase near mandibulary palp. Maxillule armature as common in genus, palp naked (Fig. 4C). Proximalmost seta of maxillulary palp and three setae of lateral lobe without long setules. Maxilla (Fig. 4D and 4E) segmentation and setation as usual in genus. Row(s) of spinules (Fig. 4E, arrowed) present on frontal surface of coxopodite. Maxilliped (Fig. 5A) four-segmented, with three, two, one, and three setae, respectively. Lateral seta on terminal segment shorter than 1/4 length of median seta. No surface ornamentation on frontal surface of syncoxopodite. Caudal surface of basipodite (segment 2) with spinules arranged in two groups (not shown in Fig. 5A), frontal side bearing long and thin spinules near medial margin. First endopodal segment (segment 3) with few short spinules next to distal margin.

Armature formula of swimming legs as in Tab. 2. [Tab. 2 note: Roman numerals denote spines, Arabic numerals setae. The armature on the lateral margin of any segment is given first, followed by the elements on the apical and medial margins.] P1 basipodite (Fig. 5B) lacking medial spine/seta. Medial expansion of basipodite apically pilose in P1-P4. In P4 additional row of long hairs present on caudal surface near medial margin of basipodite (Fig. 5C, arrowed). Long spinules appear on frontal surface of basipodite near lateral margin in P1-P4. Couplers of P1-P4 bare on frontal and caudal surfaces. Outgrowths (Fig. 5C, arrowed) small (ca. as long as wide) and acute on distal margin of P4 coupler. Caudal surface of P4 coxopodite bearing intermittent row of spinules (11) along distal margin, group of elongate spinules at laterodistal angle, robust spinules of unequal size in middle near proximal margin of segment, and long hair in lateral part. Lateral margin of P4 first exopodal segment (exp1) with hair, exp2 and exp3 with spinules (Fig. 5C). P4 third endopodal segment (enp3) (Fig. 5C) 2.8 times as long as wide, terminal spines subequal, outer margin of medial spine with many (9) spinules. P5 (Fig. 5D) as typical of genus.

Spinules present on first antennular segment only. Antenna enp2 with six setae. Caudal surface ornamentation of antennal coxobasis (Fig. 6C) similar to that in female from Vietnam, but differing from female from Mindanao in absence of spinules next to distal margin. Labrum bearing distal fringe hair only, epistoma and vertical cleft bare. Longitudinal rows of spinules absent on frontal surface of maxillary coxopodite. Mandible, maxillule and maxilliped as in female. Armature formula of P1-P4 as in female. Long spinules absent on laterofrontal surface of basipodite in P1-P4. Outgrowths small and acute on distal margin of P4 coupler. Caudal surface ornamentation of P4 coxopodite (Fig. 5E) similar to that in female, but lateral hair restricted to one row in middle. Medial expansion of P4 basipodite apically pilose, oblique row of hairs also present on caudal surface of segment. Lateral margin of P4 exopodal segments with small spinules. P4 enp3 2.7-3.0 times as long as wide. Medial terminal spine 0.9 times as long as lateral spine; longer (lateral) terminal spine 0.9 times as long as segment. Lateral margin of medial spine bearing many spinules. P6 flap (Fig. 6A) with several rows of small spinules, mediodistal angle with small teeth. P6 (Fig. 6D) median seta subequal to or slightly shorter than medial spine, lateral seta 2.1 times as long as medial spine.

Labrum (Fig. 7D) bearing distal fringe of hairs arranged in arc. Spinules/hair absent anteriorly to distal fringe hair (cf. Fig. 3F, where spinules are arrowed). Epistoma (longitudinal medial hump between labrum and rostrum) pilose (Fig. 7D, lower arrow), hair also present laterally to epistoma. Vertical cleft between rostrum and epistoma pilose (Fig. 7D, upper arrow). Segmentation and setation of mandible, maxillule, maxilla and maxilliped as usual in genus. Caudal surface ornamentation of P4 coxopodite (Fig. 8A) comprising: robust spinules (7-11) in continuous or intermittent row next to distal margin, group of long spinules at laterodistal angle, row of spinules of unequal size in middle near proximal margin, and hair in lateral part. Lateral margin of P4 exopodal segments with long hair (Fig. 8A). P4 enp3 2.6-3.0 times as long as wide; of terminal spines, medial one 1.0-1.2 times as long as lateral spine, and 0.76-0.88 times as long as segment. Lateral margin of medial terminal spine of P4 enp3 with 8-15 small spinules. P5 (Fig. 7F) typical of genus, segment 2 with long apical seta almost reaching posterior margin of genital double-somite and 1.4-1.8 times as long as medial spine. Lateral seta on first segment 0.83-1.35 times as long as medial spine.
Antenna enp2 with six setae. Caudal surface ornamentation of antennal coxobasis (Fig. 9E) similar to that in female, but number of spinules smaller in particular groups. On frontal surface 18-20 spinules present in longitudinal row next to lateral margin. Labrum with 10 distal teeth. Except for fringe hair, no ornamentation on labrum; epistoma and vertical cleft naked. Segmentation and setation of mandible, maxillule, maxilla and maxilliped as in female. Spinules tiny or absent on frontal surface of maxillary coxopodite. Armature formula of P1-P4 as in female. Couplers of P1-P4 naked on frontal and caudal surfaces. Medial pilosity of basipodite of P1-P4 as in female. Outgrowths small and acute on distal margin of P4 coupler. Caudal surface ornamentation of P4 coxopodite (Fig. 8C) similar to that in female but lateral pilosity scarce. Spinules/hair present on lateral margin of second and third exopodal segments of P4, but much shorter than those in female (Fig. 8A and 8C). P4 enp3 2.7-2.8 times as long as wide. Medial terminal spine of P4 enp3 ca. 1.1 times as long as lateral spine, and 0.81-0.95 times as long as segment. Lateral margin of medial terminal spine with many fine spinules. P6 flap (Fig. 8D) with few transverse rows of spinules, mediodistal angle with two small teeth. P6 median and lateral setae 1.0-1.2 and 2.5-2.7 times as long as medial spine, respectively.

Comments

This species was originally described from specimens collected in Manila (type locality; cement ponds) and Laguna de Bay during the Wallacea expedition (Kiefer, 1981). It was later identified from samples collected in lake Sebu (Mindanao), which is the southernmost locality where it has been recorded (Tuyor and Baay, 2001). This is the first time that M. microlasius has been reported in northern Luzon, ca. 430 km from the type locality. Aquino et al. (2008) listed a Mesocyclops sp. from lake Paoay but did not identify it to species level. The specimen examined here is part of the samples analysed in the aforementioned study. For an analysis of the phylogenetic relationships of M. microlasius see the Discussion.

Geographic distribution of limnetic Cyclopidae in the Philippines

A total of 4 species of Mesocyclops, 3 of Thermocyclops, and 1 species each of Microcyclops and Paracyclops were identified (Tab. 4, Fig. 10). Two species (T. taihokuensis and M. augusti n. sp.) are new records for the Philippines. There was only one male specimen each of Microcyclops and Paracyclops. Identification of Microcyclops sp. to species level was not possible due to the absence of a key for male Microcyclops. Thermocyclops crassus is the most widely distributed species, occurring in 16 lakes across all five islands, followed by Mesocyclops thermocyclopoides and T. taihokuensis, each occurring in six lakes. Thermocyclops taihokuensis, however, seems to be restricted to Luzon island, while M. thermocyclopoides, which occurs mostly in Luzon, was also found in Mindoro island. The endemic M. microlasius, first collected in the Philippines 80 years ago during the Wallacea expedition, was found in lake Paoay (northern Luzon). Thermocyclops decipiens was only found in three lakes within one small locality in Southwest Mindanao. Paracyclops fimbriatus is hereby reported for the first time in lake Mainit, together with Microcyclops sp. and Mesocyclops sp. (Mindanao island).

Notes on morphology and phylogenetic relationships of Mesocyclops augusti n. sp.
The unique surface ornamentation of the hindgut (oblique field of short spinules, and row of long spinules more posteriorly) distinguishes M. augusti n. sp. from all the other Mesocyclops species. In many cyclopine and eucyclopine genera (e.g. Acanthocyclops, Megacyclops, Diacyclops, Eucyclops, Ectocyclops, Macrocyclops and others) […].

Tab. 4. Cyclopid species identified from the limnetic zone of different freshwater lakes in the Philippines (cf. Fig. 1 and Supplementary Material A).
Mesocyclops microlasius Kiefer, 1981*: lake 1
M. thermocyclopoides Harada, 1931: lakes 2, 4, 5, 11, 13, 18
Mesocyclops sp.: lake 20
Mesocyclops augusti n. sp.°: lake 22
Microcyclops sp. (male only): lake 20
Paracyclops fimbriatus (Fischer, 1853): lake 20
Thermocyclops crassus (Fischer, 1853): lakes 1-3, 7-14, 16-20
T. decipiens (Kiefer, 1929): lakes 19, 21, 22
T. taihokuensis (Harada, 1931)°: lakes 6, 7, 9, 11, 14, 15
*Endemic species; °new record.

The Vietnamese and Mindanao specimens share several other characters as well; therefore we consider them to be conspecific. There are a few features in which the North Vietnam and Mindanao females seem to differ (cf. intraspecific variation). Most of the qualitative characters differing between these distant populations, however, are intraspecifically variable in the close relatives [surface ornamentation of the antennule and antennal coxobasis (group g) varies within both M. dissimilis and M. woutersi; in the female genital structure, the transverse canal-like structures form an acute or obtuse angle in M. woutersi; laterofrontal spinules of the P4 basipodite are present or absent in M. ogunnus]. Therefore those differences were given less importance. Nonetheless, with the exception of M. augusti n. sp., where the Vietnam and Mindanao females have eight and six pores, respectively, no polymorphism was found in the number of pores posterior to P6 (cf. Supplementary Material C) in any taxon included in the present phylogenetic analysis. It is especially difficult to interpret the morphometric differences (Mindanao 2♀♀, and North Vietnam 2♀♀). Comparisons of the pelagic and littoral populations of M. dissimilis in lake Biwa (Japan) revealed significant shifts in body size and proportions, such as the relative lengths of the leg segments (P4 enp3), caudal rami, and dorsal caudal seta (Hołyńska, 1997), which warns against uncritical use of morphometric traits in species identification. We lack strong evidence of a (sub)species-level separation, and the apparent morphological separation of the Mindanao and North Vietnam forms can only be confirmed by examination of additional material. Also, the Mindanao and North Vietnam findings do not necessarily indicate a disjunct distribution, as faunistic information on the Philippines (e.g. the Sulu archipelago as a possible dispersal route), Borneo, and the Indochinese peninsula is still fragmentary.

Mesocyclops augusti n. sp. is closely related to the woutersi-complex coined and defined by Hołyńska (1997, 2000). The group was diagnosed by characters of the antenna (enp2 with seven setae; complex spinule ornamentation on the caudal surface of the coxobasis), the female genital system (transverse ducts meeting at an acute angle), and the surface ornamentation of the P4 coxopodite (spinules of conspicuously different size near the proximal margin), and it comprised M. woutersi, M. dissimilis, M. parentium and M. friendorum.
Mesocyclops francisci was hypothesised to be the sister of the woutersi-complex. A phylogenetic analysis (Hołyńska, 2006b) that included all Mesocyclops species (71) showed the woutersi-complex sensu Hołyńska (2000) to be para- or polyphyletic rather than monophyletic, while M. microlasius, M. geminus and M. ogunnus were united with some members of the woutersi-complex in one clade. Our present reconstructions include all the taxa mentioned above, plus some representatives of three other, relatively closely related groups as outgroups [Afro-Asian dussarti-clade: M. thermocyclopoides (SE/E Asia); Afro-Asian aequatorialis-affinis clade: M. affinis (New Guinea-SE Asia); Australian-South Pacific clade: M. roberti (Fiji/Wallis)].

The unordered and scaled reconstructions, and six of the seven shortest trees of the unscaled reconstruction, show the woutersi-complex to be paraphyletic, yet support the monophyly of a predominantly Oriental group (Fig. 11, clade E) comprising: M. friendorum (Sulawesi), M. geminus (Borneo), M. microlasius (Luzon, Mindanao), M. francisci (Sumatra, Malaysia, Cambodia), M. parentium (India, Sri Lanka, Cambodia), M. woutersi (Australia, New Guinea, Vanuatu, Laos, Vietnam, Taiwan, South China, Japan, South Korea), M. dissimilis [China, Japan, South Korea, Russian Far East (Primorskiy)], and M. augusti n. sp. (Mindanao, Vietnam) [occurrence data for Cambodia from Chaicharoen (2011)]. Clade E is supported by at least one apomorphy (Tab. 5) (char 10:0, joint canal present). Another derived feature, spinules of unequal size near the proximal margin of the P4 coxopodite (char 8:2), also defines this clade in some trees of the unscaled run and in most of the unordered and scaled trees. The character of small spinules in the anterior part of the proctodeum (char 13:1), as a third apomorphy of clade E, only shows up in the unordered reconstruction.

Tab. 5. Apomorphies of the clades revealed in phylogenetic reconstructions obtained by applying different codings of the polymorphic characters.

In these reconstructions the unique apomorphy of clade E is the joint canal, which is formed by the fusion of the transverse duct-like structures, anteriorly to the copulatory pore. This character state, however, appears in most New World species and is also present in several Old World taxa (including the more basal species, e.g. M. cuttacuttae Dumont et Maas, 1985, M. rarus Kiefer, 1981, M. annae Kiefer, 1930, M. splendidus Lindberg, 1943, M. salinus Onabaniro, 1957, and M. brevisetosus Dussart et Fernando, 1985), which implies that the joint canal is either a plesiomorphy or a character reversal (apomorphy) in group E. As the outgroups determine the character polarity and also influence the tree topology, we may need an analysis with larger taxon sampling to verify the diagnostic value of the joint canal and test the monophyly of group E (this was beyond the goals of this work). None of the reconstructions supports a sister relationship of the two Philippine taxa. Instead, in all analyses the closest relative of M. augusti (Mindanao, North Vietnam) is M. dissimilis (East Asia), while in all shortest trees of the unordered and scaled reconstructions and some trees (3 of 7) of the unscaled run the sister of M. microlasius (Luzon, Mindanao) is M. geminus (East Borneo). Clade B (Fig. 11), comprising M. augusti, M. dissimilis and M. woutersi, also appears in all the reconstructions.
Two apomorphies define this clade in all shortest trees of the scaled and unscaled runs, and in the majority (9 of 15) of the trees of the unordered run (Tab. 5): spinules can be present on the 14th antennulary segment (char 1:1), and the pilosity of pediger 5 is reduced to the lateral surface (char 11:0). The presence of spinules on the 14th antennulary segment is a unique feature of clade B, and a rare character state in the genus as a whole. Spinules are sometimes present on the 14th antennulary segment in M. aequatorialis, and always present in the species of the Afro-Asian dussarti-clade (but not in M. thermocyclopoides included here).

The microlasius-geminus group (clade F in Fig. 11 and Tab. 5) is defined by at least one apomorphy (char 15:1, hair or long spinules present on anterior half of caudal rami). A second (but not unique) apomorphy (char 11:0, pilosity of pediger 5 restricted to the lateral surface) appears in all trees of the scaled run, in the majority (9 of 15) of the unordered trees, and in the three trees of the unscaled run where this node shows up. Hair/spinules restricted to the anterior half of the caudal rami (the unique apomorphy here) only appear in a few Old World taxa: M. pilosus Kiefer, 1930, M. insulensis Dussart, 1982, M. mariae Guo, 2000, M. spinosus Van de Velde, 1984, M. shenzhenensis Guo, 2000 and M. pseudospinosus Dussart et Fernando, 1988. Mesocyclops microlasius differs from M. geminus in: caudal surface ornamentation of the antennal coxobasis (spinules in group f small in M. microlasius, yet large in M. geminus); caudal surface ornamentation of the P4 coxopodite (spinules of unequal size near the proximal margin in M. microlasius, yet of equal size in M. geminus); and laterofrontal ornamentation of the P4 basipodite (long spinules absent in M. microlasius, yet present in M. geminus).

The unscaled and scaled trees and the majority (8 of 15) of the trees of the unordered run support a mainland clade (Fig. 11, clade D), members of which occur in continental Asia, although some also show up in islands that have never been connected to the SE Asian shelf. In most trees with the mainland clade (except for one tree in the unscaled reconstruction, where M. geminus groups with M. ogunnus and M. thermocyclopoides), the insular taxa (M. microlasius, M. geminus, M. friendorum) either form a paraphyletic group (basal to the mainland clade) or constitute a monophyletic group, the sister of the mainland clade. Two apomorphies [char 6:1, row of spinules between the distal hair of the labrum and the epistoma; and char 16:2, spinules absent at the implantation of the lateral (II) caudal seta] diagnose the mainland clade in all trees of the unscaled and scaled runs. Of the eight trees of the unordered run where the mainland clade appears, both apomorphies (char 6:1; char 16:2) support the monophyly of clade D in five trees, and only one apomorphy (char 6:1) does so in three trees. Nonetheless, to propose a robust hypothesis of the relationship of the mainland clade to the insular species, or to answer the question whether the insular group is basal to the mainland group, we need a wider context, i.e. an analysis with larger taxon sampling.
Geographic distribution patterns of the limnetic Cyclopidae in the Philippines

Our sampling emphasised the limnetic fauna, and our results suggest that interesting taxa may still be found even in this usually less diverse habitat. This is shown by the discovery of two new records, including one new species. It also shows how scarce information on copepods is in most Philippine lakes, in spite of their utilisation for local fisheries and aquaculture. Many other lakes and islands have yet to be sampled, which, in time, may yield equally interesting species and give a more complete picture of the zooplankton fauna of the Philippines.

The endemic Mesocyclops microlasius seems to have a wider distribution than previously thought. Our collections reveal that beyond the Manila region (type locality) the species is also present in northern Luzon. The farthest record comes from lake Sebu in Mindanao island (Tuyor and Baay, 2001), yet we could not find the species in our sample (Tab. 4, lake 21). We think that the Mindanao occurrence might need further confirmation. The sister relationship of Mesocyclops microlasius (Luzon, Mindanao) and M. geminus (East Borneo) is in line with observations made on several primary freshwater fishes, molluscs and crabs, in which the Philippine forms (mainly those from the southwestern and southern islands) have their closest relatives in Borneo (Bănărescu, 1992). It is premature to make inferences about the geographic distribution of M. augusti n. sp. in the Philippines, yet the absence of the species in all the 15 lakes that we sampled in Luzon might indicate that the species does not occur in the northern Philippines. On the other hand, M. augusti n. sp. has been identified from North Vietnam, and its closest relative, M. dissimilis, is distributed in East Asia (but not found in Taiwan). Our guess is that M. augusti n. sp. might have used a southern colonisation route via Borneo between Vietnam and Mindanao.

In contrast, two other cyclopids, M. thermocyclopoides (Luzon and Mindoro) and T. taihokuensis (Luzon), have been encountered in the northern islands only, yet the scarce sampling in the southern part leaves some doubts about the restricted northern distributions of these taxa. At least the study by Tuyor and Baay (2001) reported M. thermocyclopoides in Calamba river (Luzon) and lake Mainit (Mindanao). It is possible that some of the previous records of Mesocyclops leuckarti (a Palearctic species) in the Philippines (Woltereck et al., 1941; Mamaril Sr. and Fernando, 1978; Mamaril Sr., 1986, 2001) (Tab. 1) may actually refer to M. thermocyclopoides, given the wide distribution of the latter species in the northern Philippines. Also, most localities and lakes where we found M. thermocyclopoides were sites where M. leuckarti had previously been recorded, such as Laguna de Bay and lake Naujan. Based on the widespread occurrence of M. thermocyclopoides in the neighbouring countries, such as Indonesia (Flores, Java and Sumatra), Malaysia, Thailand, Cambodia, Vietnam, China and Taiwan (Hołyńska et al., 2003; Alekseev and Sanoamuang, 2006; Hołyńska and Stoch, 2011; Chaicharoen, 2011), and the widespread distribution of the species in Luzon, we think that M. thermocyclopoides is a native species rather than one introduced by humans in the Philippines.
The distribution of the newly recorded T. taihokuensis in the Philippines (Luzon), meanwhile, seems to be restricted to clusters of lakes near one another, as lakes Pandin, Bunot, Mohicap and Palakpakin are all located in one town (San Pablo, total area: 214 km²), while lakes […]. The species is known from East and Central Asia (Mirabdullayev et al., 2003), and we speculate that T. taihokuensis reached Luzon from Taiwan. Colonisation of the northern Philippine Batanes islands from Taiwan has been evidenced in small mammals (shrews) (Esselstyn and Oliveros, 2010). The ubiquitous nature of T. crassus (Old World) in the Philippines is consistent with the results of other studies on its distribution (Hołyńska, 2006a; Chaicharoen et al., 2011). Furthermore, studies in lake Taal have shown that T. crassus is the most abundant copepod in the lake compared to calanoids, and that this is related to the increased trophic status of the lake resulting from excessive nutrient inputs in aquaculture areas (Papa and Zafaralla, 2011; Papa et al., 2011). This is also the second record of T. decipiens in Mindanao island, where it has now been observed in three lakes within the same locality (South Cotabato). Tuyor and Baay (2001) found T. decipiens in lake Mainit, Mindanao, while Hołyńska (2006a) reported it from ponds in the town of Dasmariñas (Cavite province, Luzon island). It is not surprising to find more localities with T. decipiens in the Philippines (Tab. 1), as it is a widely distributed pantropical species (Chaicharoen et al., 2011).

We did not find two species that were previously known from limnetic waters in the Philippines. Thermocyclops wolterecki Kiefer, 1938 was originally described from the plankton of lake Lanao (Mindanao), and later reported from lake Pogera in Papua New Guinea (Defaye et al., 1987) and lake La Han in Northeast Thailand (Alekseev and Sanoamuang, 2006). Interestingly, Chaicharoen et al. (2011) found the species in small water bodies (a canal, a stream and a temporary pond) in Cambodia. In comparison with the Lanao specimens, the Cambodian females had a larger body, shorter caudal setae (V and VII) and slightly different surface ornamentation on the P4 coupler, but the morphology otherwise fitted that of the type locality. We suppose that T. wolterecki occurs in small water bodies in the Philippines too, and that the small and slender form in lake Lanao is a pelagic ecotype of the species. Tropocyclops prasinus (Fischer, 1860) has been reported from Luzon, Cebu and Mindanao islands (Tab. 1). The species is recorded from almost every continent, yet recent morphological studies (Lee and Chang, 2007) on the East Asian T. prasinus-like forms showed that a few good species with more restricted distributions could be hidden under this name. Tropocyclops of the Philippines also needs revision.

In the Philippines, the threat of non-indigenous species taking over the native zooplankton fauna has already been observed among calanoid copepods. A Neotropical species, Arctodiaptomus dorsalis (Marsh, 1932), was found in most of the lakes sampled in this study. Previously recorded native calanoid copepods have already been displaced by A. dorsalis (Papa et al., 2012). Importantly, no similar occurrence has been observed in the cyclopid fauna; however, given the presence of aquaculture and its role in the dispersal of non-indigenous zooplankton species (Reid, 2007), the threat remains in most Philippine lakes.
CONCLUSIONS

Two genera, Mesocyclops (4 species) and Thermocyclops (4 species), dominate the relatively poor open-water fauna of the lakes in the Philippines (11 species), among which only one species (M. microlasius) seems to be endemic to the archipelago. Our paper brings the total number of cyclopid copepods (including both limnetic and littoral/benthic taxa) known from the Philippines to 18, but, more importantly, it highlights the need for more intensive investigations of the small water bodies, paludal and subterranean habitats, which may be home to a significantly richer fauna with a higher rate of endemism.

Former phylogenetic analyses that included all species of the genus Mesocyclops, and the present reconstruction restricted to the Oriental representatives of the genus, revealed that: i) all Mesocyclops species so far recorded from the Philippines are nested in clades occurring predominantly in the tropical Asian mainland and/or the Greater Sunda islands; ii) the closest relative of M. augusti n. sp. (Mindanao, North Vietnam) is M. dissimilis (East Asia); and iii) the closest relative of M. microlasius (Luzon, Mindanao) is M. geminus (East Borneo).

Exploring the species diversity and geographic distribution of Cyclopidae in the Philippines may have implications for human epidemiology as well. The potential use of cyclopid copepods (e.g. Mesocyclops) as biological control agents of Dengue-carrying Aedes mosquitoes should be considered, as Dengue continues to be one of the leading causes of mortality among Filipinos.

List of cyclopid species so far reported from the Philippine islands.

Fig. 1. Map of the Philippine archipelago showing the location of the 22 lakes sampled for this study.

Fig. 10. Distribution map of the nine cyclopid taxa encountered in the 22 lakes sampled throughout the Philippine archipelago.

Variation of the morphometric characters in females of Mesocyclops augusti n. sp.
2018-12-15T08:18:53.554Z
2013-08-26T00:00:00.000
{ "year": 2013, "sha1": "25d3086a5bf7ac77ee09b13a21f96aebae6923b3", "oa_license": "CCBYNC", "oa_url": "https://www.jlimnol.it/index.php/jlimnol/article/download/jlimnol.2013.s2.e14/583", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "25d3086a5bf7ac77ee09b13a21f96aebae6923b3", "s2fieldsofstudy": [ "Biology", "Environmental Science" ], "extfieldsofstudy": [ "Geography" ] }
6327395
pes2o/s2orc
v3-fos-license
Liver Manipulation Causes Hepatocyte Injury and Precedes Systemic Inflammation in Patients Undergoing Liver Resection

Abstract

Background Liver failure following liver surgery is caused by an insufficient functioning remnant cell mass. This can be due to insufficient liver volume and can be aggravated by additional cell death during or after surgery. The aim of this study was to elucidate the causes of hepatocellular injury in patients undergoing liver resection. Methods Markers of hepatocyte injury (AST, GSTα, and L-FABP) and inflammation (IL-6) were measured in plasma of patients undergoing liver resection with and without intermittent inflow occlusion. To study the separate involvement of the intestines and the liver in systemic L-FABP release, arteriovenous concentration differences for L-FABP were measured. Results During liver manipulation, liver injury markers increased significantly. Arterial plasma levels and transhepatic and transintestinal concentration gradients of L-FABP indicated that this increase was exclusively due to hepatic and not due to intestinal release. Intermittent hepatic inflow occlusion, anesthesia, and liver transection did not further enhance arterial L-FABP and GSTα levels. Hepatocyte injury was followed by an inflammatory response. Conclusions This study shows that liver manipulation is a leading cause of hepatocyte injury during liver surgery. A potential causal relation between liver manipulation and systemic inflammation remains to be established; but since the inflammatory response is apparently initiated early during major abdominal surgery, interventions aimed at reducing postoperative inflammation and related complications should be started early during surgery or beforehand.

Liver failure is a severe complication of liver surgery, occurring when there is insufficient remnant cell mass postoperatively. This can be due to excessive resection, leaving a too small remnant liver volume, but in some cases liver failure occurs in patients with a seemingly sufficient remnant liver volume [1]. In these cases the functional capacity of the remaining cell mass is probably impaired by secondary processes. Recognition and modulation of factors impairing cell survival are crucial to enhance liver function following liver surgery. In this context much attention has been paid to the effects of ischemia and reperfusion. Ischemia-reperfusion is a frequently encountered phenomenon in liver resection, caused by temporary occlusion of blood flow to the liver (Pringle maneuver), which is a popular way to reduce blood loss during parenchymal liver transection. Ischemia-reperfusion induces energy depletion and generation of reactive oxygen species.
Although several endogenous protection mechanisms aimed at supporting cell survival are activated during ischemia-reperfusion [2], excessive energy depletion or oxidative stress ultimately results in apoptotic or necrotic cell death. Interestingly, it has been shown that plasma levels of markers of hepatocellular injury increase before liver transection and before application of the Pringle maneuver, suggesting that factors other than ischemia-reperfusion may cause hepatocyte demise during liver surgery [3]. In this context a role has been proposed for the hepatotoxic effects of anesthesia [4], the effects of systemic inflammation, probably secondary to intestinal manipulation [5], and the effects of manipulation of the liver itself during perihepatic dissection and mobilization [6]. The aim of this study was to elucidate the causes of early hepatocellular injury in patients undergoing liver resection.

Patients

Patients undergoing liver resection for secondary tumors in an otherwise healthy liver were studied (Table 1). All patients were anesthetized according to institutional routines using isoflurane and propofol. Surgery was commenced using a bilateral subcostal incision. Olivier retractors (Copharm, Abcoude, Holland) or Omni-Tracts were used to improve exposure. Before parenchymal division the liver was mobilized as described elsewhere, followed by intraoperative ultrasound [7]. Cholecystectomy was performed routinely before liver transection. All patients had indwelling radial artery catheters placed for continuous monitoring of arterial blood pressure and blood sampling. Patients were nonrandomly assigned to one of two protocols (described below). Assignment was based upon the surgeon's preference for using the Pringle maneuver or not. If applied, an intermittent Pringle maneuver (15 min of ischemia, 5 min of reperfusion) was used by rubber-band ligation of the hepatoduodenal ligament.

Effects of intermittent Pringle maneuver on hepatocellular injury

From nine patients undergoing liver resection with intermittent Pringle maneuver, arterial blood was sampled preoperatively, before and after each period of 15 min of ischemia and 5 min of reperfusion, and 90 minutes postoperatively. Systemic concentrations of two markers of hepatocellular injury, aspartate aminotransferase (AST) and glutathione-S-transferase-α (GSTα), were measured, as well as plasma levels of liver-type fatty acid binding protein (L-FABP). Methods of blood processing and laboratory analyses are described below. In addition, plasma levels of the inflammatory cytokine interleukin-6 (IL-6) were measured as described below.

Source and fate of L-FABP

From ten patients undergoing liver resection without the Pringle maneuver, arterial blood was sampled preoperatively and before liver transection. Simultaneously with the second arterial blood sample, blood was drawn from the portal vein, a hepatic vein, and the right renal vein by direct puncture to assess concentration gradients across organs. It should be noted that at this time point surgical procedures were identical in all patients, irrespective of the eventual application of a Pringle maneuver. L-FABP plasma concentrations were measured as described below, and arteriovenous concentration differences were calculated to study the contribution of the intestines and the liver to systemic L-FABP release and to study renal clearance of L-FABP. Renal clearance was calculated by dividing the arteriovenous concentration gradient by the arterial concentration (uptake/influx).
This quotient was multiplied by the percentage of blood flowing through the kidney to calculate fractional plasma clearance. Effects of anesthesia and intestinal manipulation To differentiate the effects of anesthesia induction, laparotomy, and intestinal manipulation from the effects of liver manipulation, a control group consisting of four patients undergoing lower abdominal surgery (1 rectopexy, 2 proctectomies, 1 sigmoid resection) was studied. Plasma was sampled before surgery and at 40-min intervals during surgery to determine L-FABP concentrations. Blood processing and analysis Blood samples were collected in prechilled EDTA vacuum tubes (BD Vacutainer, Becton Dickinson Diagnostics, Aalst, Belgium) and kept on ice. Blood was centrifuged at 4000 g for 10 min. Plasma was immediately stored at -80°C until analysis. L-FABP and IL-6 were determined using commercially available enzyme-linked immunosorbent assays (ELISA) (kindly provided by Hycult Biotechnology, Uden, The Netherlands); GSTα was measured by ELISA as described earlier [8,9]. AST was determined by the clinical chemistry laboratory of the University Hospital Maastricht. Ethics The studies were approved by the Medical Ethics Committee of the University Hospital Maastricht and all subjects gave written informed consent. Statistics Normality of all data obtained was verified by the Lilliefors test (all p > 0.10). Data are presented as mean with standard error of the mean (SEM). A paired t test was used to test the significance of changes in the various plasma concentrations. Arteriovenous concentration gradients were tested against a theoretical mean of zero using a one-sample t test [10]. Pearson's test for correlations was used to test the significance of correlations. Statistical calculations were made using Prism 4.0 for Windows (GraphPad Software Inc., San Diego, CA). A p value less than 0.05 was considered to indicate statistical significance.
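As an illustration of the gradient analysis just described, the following is a minimal Python sketch (not the authors' code) of testing arteriovenous concentration differences against a theoretical mean of zero with a one-sample t test; the example values are hypothetical.

```python
import numpy as np
from scipy import stats

# Hypothetical transhepatic L-FABP gradients (ng/ml) for ten patients;
# positive values indicate net release of L-FABP by the organ.
gradients = np.array([12.4, 8.1, 15.3, 6.7, 10.9, 9.8, 14.2, 7.5, 11.6, 13.0])

# One-sample t test against a theoretical mean of zero (no net release).
t_stat, p_value = stats.ttest_1samp(gradients, popmean=0.0)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 suggests net hepatic release
```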
Patient outcome No signs of organ failure or other major complications were observed postoperatively during the hospital stay of all patients. Systemic levels of damage markers During assessment of resectability, prior to organ transection, the mean arterial L-FABP plasma concentration increased 55-fold (p = 0.002) (Fig. 1a). Surprisingly, arterial L-FABP levels did not change significantly during subsequent intermittent Pringle maneuver, indicating that neither intermittent ischemia-reperfusion nor liver transection significantly aggravated the hepatocellular injury caused by early perioperative processes. Immediately postoperatively, systemic L-FABP levels were found to be decreasing. Systemic plasma levels of GSTα (Fig. 1b) and AST (Fig. 1c) also increased significantly before liver transection and before ischemia-reperfusion. Effects of lower abdominal surgery on liver injury In patients undergoing lower abdominal surgery, the mean L-FABP concentration remained below 15 ng/ml, underlining the effect of direct manipulation of the liver on the elevation of L-FABP plasma levels during liver surgery (Fig. 2). Organ-specific FABP release Since L-FABP is expressed in both the liver and the intestines, we performed an organ balance analysis to reveal the origin of circulating L-FABP. The data show that L-FABP was specifically released from the liver and not from the intestines. In addition, organ balance analysis showed an active renal uptake of L-FABP from the circulation (Fig. 3). Renal clearance of FABPs The kidneys efficiently removed L-FABP from the circulation. Renal clearance of L-FABP correlated directly with the respective arterial concentrations (Fig. 4), resulting in a fractional extraction rate (arteriovenous gradient/arterial concentration × 100%) [11] of approximately 30%. Assuming that renal blood flow per minute approximates 22% of the total blood volume [12], renal plasma L-FABP clearance can be calculated to equal 6.6%/min (30% × 22%/min), resulting in a plasma half-life of about 11 min.
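The half-life figure follows from standard first-order elimination kinetics; as a check on the arithmetic (a sketch assuming a single-compartment, first-order model):

$$k = 0.30 \times 0.22\ \mathrm{min^{-1}} = 0.066\ \mathrm{min^{-1}}, \qquad t_{1/2} = \frac{\ln 2}{k} = \frac{0.693}{0.066\ \mathrm{min^{-1}}} \approx 10.5\ \mathrm{min},$$

which rounds to the reported value of about 11 min.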
Interleukin-6 levels during liver surgery with intermittent Pringle maneuver The inflammatory response to liver surgery was investigated using IL-6 as a parameter. Arterial IL-6 levels remained unaltered during manipulation of the liver, when L-FABP, AST, and GSTα levels were increasing. In contrast, a significant increase of systemic IL-6 plasma levels was found after 15 min of hepatic inflow occlusion (Fig. 5). This increase was progressive throughout the remainder of the study period. Discussion The present study was designed to determine the cause of early hepatocellular injury during liver resection. Previous publications already showed elevated plasma levels of markers of liver damage prior to liver transection and hepatic inflow occlusion [3,6]. These observations suggested that mechanisms other than ischemia-reperfusion injury could contribute to peri- and postoperative cell death and organ dysfunction. We showed that direct manipulation of the liver during surgery is a leading cause of hepatocellular injury. Arterial L-FABP and GSTα plasma levels increased following the start of the operation and reached a plateau before liver transection and ischemia-reperfusion. Surprisingly, we found no additional effect of organ transection or intermittent Pringle maneuver on these increased L-FABP and GSTα plasma levels. The apparent resistance of the liver to an ischemic insult, however, is in line with previous data by Figueras et al. [13], who showed no effect of the extent of inflow occlusion in patients with a normal liver, although in their study livers of patients with cirrhosis appeared to be more vulnerable to hepatic inflow occlusion. In addition, other authors who showed a progressive release of GSTα following a Pringle maneuver applied prolonged and continuous inflow occlusion [6], which is known to aggravate hepatocellular injury compared to the intermittent Pringle maneuver [14]. Previous authors who observed an early increase in plasma concentrations of transaminases and GSTα have ascribed this phenomenon to the hepatotoxic effects of anesthesia [4], systemic inflammation after intestinal manipulation [5], and the effects of manipulation of the liver itself during perihepatic dissection and mobilization [6]. We were able to rule out the first two factors as potential causes of early hepatocyte injury since a similar increase of L-FABP plasma concentrations did not occur in patients undergoing lower intestinal surgery. These patients were anesthetized in a similar manner as the patients undergoing liver resection and underwent extensive intestinal manipulation. By elimination of other potential causes, it thus must be concluded that the increase in L-FABP levels occurs during and due to liver manipulation in patients undergoing liver resection. By measuring concentration gradients across the intestines and the liver, we were able to show that the increased L-FABP levels during liver manipulation are exclusively due to L-FABP release from the liver. Some intestinal L-FABP release could have been expected if L-FABP, which is also expressed by the intestines, had been released from injured enterocytes, but the absence of intestinal L-FABP release clearly indicates that organ manipulation during liver resection results in hepatocellular injury without causing intestinal injury. Finally, we were able to rule out systemic inflammation as a leading cause of hepatocellular injury since L-FABP peak levels were reached already before the onset of the inflammatory response. Arterial IL-6 plasma levels increased between 90 and 125 min after laparotomy, which approximates the established time lag between an inflammatory stimulus and IL-6 release [15]. This suggests that the inflammatory response is triggered by an early event during the operation. According to the "danger model" hypothesis of Matzinger [16], cell injury leads to the release of immunostimulatory proteins or nucleotides, so-called danger signals, that activate the immune system and induce systemic inflammation [17]. In line with this theory, we consider that early hepatocyte damage due to liver manipulation could give rise to the release of such "danger signals" and contribute to systemic inflammation. Consequently, liver manipulation-induced hepatocyte injury may be a trigger for the inflammatory response to surgery. Mechanistic proof for this theory, however, is difficult to obtain in vivo. Both L-FABP and GSTα decreased immediately following surgery, in contrast to the more classic marker of hepatocyte injury, AST. The ongoing increase of plasma levels of such markers following liver surgery has been regarded as a sign of ongoing hepatocellular injury and impending liver failure [18]. Our study shows that the late postoperative peak of AST is more likely to be a reflection of slow leakage than of ongoing injury, since the leakage of the low-molecular-weight proteins L-FABP and GSTα decreased within 90 min. As a consequence, L-FABP and GSTα are probably more sensitive for detecting ongoing hepatocyte injury and impending liver failure than AST. To prove this assumption, however, a large prospective study is needed. The rapid decline of L-FABP and GSTα is a result of their rapid renal clearance. Arterial-renal venous concentration gradients showed that the kidneys remove approximately 30% of L-FABP in a single pass, leading to a calculated L-FABP half-life of 11 min. We did not explore potential mechanisms of liver manipulation-induced cell injury. Earlier studies showed that liver mobilization and assessment of resectability significantly reduced hepatic venous oxygen saturation [19, 20]. This could also be a cause of hepatocyte damage in our case, although we did not measure hepatic oxygen saturation. It is not clear whether hepatic oxygen saturation decreased due to physical obstruction of the blood stream or whether it decreased secondary to other processes [21,22]. Alternatively, cell damage may occur as a direct consequence of mechanical impact [23]. In summary, it was previously believed that vascular occlusion and parenchymal transection were the major reasons for hepatocyte injury during liver surgery. This study demonstrates that liver manipulation is a leading cause of hepatocyte injury during liver surgery. A potential causal relation between liver manipulation and systemic inflammation remains to be established.
However, since the inflammatory cascade is apparently initiated early during major abdominal surgery, interventions aimed at reducing postoperative inflammation and related complications should be started early during surgery or beforehand.
The Impacts of Spatiotemporal Landscape Changes on Water Quality in Shenzhen, China The urban landscape in China has changed rapidly over the past four decades, which has led to various environmental consequences, such as water quality degradation at the regional scale. To improve water restoration strategies and policies, this study assessed the relationship between water quality and landscape change in Shenzhen, China, using panel regression analysis. The results show that decreases in natural and semi-natural landscape compositions have had significant negative effects on water quality. Landscape composition and configuration changes accounted for 39–58% of the variation in regional water quality degradation. Additionally, landscape fragmentation indices, such as patch density (PD) and the number of patches (NP), are important indicators of the drivers of water quality degradation. PD accounted for 2.03–5.44% of the variability in water quality, while NP accounted for −1.63% to −4.98% of the variability. These results indicate that reducing landscape fragmentation and enhancing natural landscape composition at the watershed scale are vital to improving regional water quality. The study findings suggest that urban landscape optimization is a promising strategy for mitigating urban water quality degradation, and the results can be used in policy making for the sustainable development of the hydrological environment in rapidly urbanizing areas. Introduction Water quality degradation is a key issue for global environmental change in urban areas and a serious problem in many rapidly urbanizing catchments in developing countries [1,2] where wastewater is continuously discharged to stream systems due to highly centralized socioeconomic development and anthropogenic processes [3]. Urbanization alters landscape patterns that, in turn, control the various biogeochemical and physical processes of watersheds [4]. Thus, it is important for researchers of urban sustainability and river ecosystem restoration to understand the relationship between landscape change and water quality at the watershed scale. Landscape governance in urban areas may provide a scientific foundation for addressing the effects of urbanization on water pollution [5]. Several recent studies have addressed the interactions between water quality degradation and landscape changes [6], with a focus on the selection of suitable landscape metrics, water quality parameters, scales, and statistical methods as they relate to water quality and the landscape [7]. Simple statistical analysis at the watershed scale considers the effect of the spatial configuration of the landscape as an important factor in understanding the hydrological processes related to land use and water quality in adjacent aquatic systems [8,9]. However, such analyses are limited by the need to quantify the changes in the landscape and water quality. As simple statistical metrics, landscape indicators are commonly used to quantify the spatial relationships between water quality variables [10,11]. Although these studies have become more common in recent years, many associated questions remain unanswered. For example, the quantitative contributions of landscape changes to variations in urban stream water remain unknown. Therefore, it is necessary to combine spatial changes and temporal changes through statistical analyses to determine the relationship between water quality and the landscape on a regional scale. 
Most previous studies have emphasized the relationship between landscape patterns and water quality while neglecting the synergistic effects of landscape changes on water quality. When researchers conduct related analyses, they often neglect temporal landscape change information and apply statistical spatial information using various techniques, such as Pearson's correlation analysis [6], spatial autocorrelation [12], geographically weighted regression [13], redundancy analysis [14] or de-trended correspondence analysis [15]. Other studies have focused on the effects of landscape patterns at different scales on water quality using linear regression [10,11], generalized linear mixed regression [16], logistic regression [17], nonlinear regression [18] or stepwise regression analysis [19]; however, these studies did not consider changes in landscape patterns. The relationship between landscape change and water quality degradation is not clear, especially during rapid urbanization [20], and most previous studies are complicated by static landscape patterns [21]. It is difficult to fully explain the contributions of landscape changes to water quality using a correlation relationship. Landscape changes include information on landscape composition and configuration and the interactions that affect water quality. Panel regression analysis can be used to effectively assess the synergetic relationship between spatiotemporal landscape change and water quality. In addition, this approach can yield the correlation between landscape change and water quality, as well as the contributions of landscape change to water quality [22]. This approach has considerable potential in spatiotemporal data analysis and is a very effective method of quantifying coupled influences. Seasonal and annual variability in watersheds are two important timescales of water quality changes [23]. Many studies of the relationship between landscape patterns and water quality have focused only on seasonal variations or short-term impacts. Hydrological and biogeochemical processes, such as surface runoff and the nitrogen and phosphorus cycles, are important mechanisms that can explain these seasonal or short-term changes [12]. However, landscape changes mainly influence the annual variations in the water quality of urban stream systems. If watershed management in a region is designed to balance development with water quality, it is necessary to quantify the extent to which water quality degradation is caused by the spatial and temporal variability of landscape changes, as these changes have different rates, and the environmental consequences differ in different watersheds. The objectives of this study were as follows: (1) to quantify the spatial and temporal variations in water quality in Shenzhen based on multi-statistical analysis and identify the spatial and temporal degradation processes of different watersheds in Shenzhen; (2) to assess the relationships between landscape change and water quality degradation from 1990 to 2010 by considering water quality degradation stages and spatial characteristics using panel regression analysis; and (3) to address the contributions of landscape changes to water quality changes during rapid urbanization. Site Description Shenzhen, the first Special Economic Zone, is a new city that formed after the Chinese reform and opening policies were issued. More than 45% of the natural landscape in Shenzhen has been converted to an urban landscape over the past 40 years due to rapid urbanization [24].
The consequences of rapid urbanization in Shenzhen have been extensively researched in environmental and ecological resource studies, which have focused on landscape urbanization [25], impervious surface area expansion [20], habitat fragmentation [26], urban heat island effects, vegetation degradation and ecosystem health deterioration [27]. One of the largest environmental pollution problems is the deterioration of water quality in small catchments [18]. Shenzhen includes approximately 310 streams. Sixty-nine of these streams cover a watershed area of over 10 km², including the Shenzhen River, Maozhou Stream, Longgang Stream, Guanlan Stream, and Pingshan Stream. In this study, 27 sub-watersheds were selected in accordance with the sample locations. The government constructed many reservoirs and floodgates during the urbanization process to conserve water resources, and most of the selected sample sites are independent because they are not hydrologically connected (Figure 1). All of the sub-watersheds were selected within the Shenzhen area, and they did not overlap with the boundary of the overall watershed to minimize the sub-watershed differences in geological, climatic, geographic, and hydrological conditions [18]. Data Sources Water quality data were collected from over 51 sampling sites by the Environmental Protection Bureau of Shenzhen Municipality. However, only 27 sampling sites and corresponding sub-watersheds were used in this study because sub-watersheds were separated as independent hydrological systems. Overall, six water quality indicators were selected to assess the different chemical characteristics of streams: the chemical oxygen demand (CODMn), 5-day biochemical oxygen demand (BOD5), ammonia nitrogen (NH3-N), total phosphorus (TP), volatile phenol (VP), and oils (Oils). All the above indicators are reported in units of mg/L. Sampling was conducted under low-flow, normal-flow, and high-flow conditions. Samples were collected twice during each flow period and six times per year (12 times per year in the reservoir watersheds). All sample data sets from 1990 to 2010 were obtained from the 1991-2012 Environmental Quality Bulletins by the Environmental Protection Bureau of Shenzhen Municipality. The catchment landscape data were derived using Landsat TM/ETM+ images from 1990 to 2010 (from October to February) using maximum likelihood supervised classification algorithms, and the accuracy of the landscape data classification has been published in other studies [28].
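Gaussian maximum likelihood classification of multispectral pixels is equivalent (up to class priors) to quadratic discriminant analysis, so a minimal sketch of this step might look like the following; this is not the authors' pipeline, and the array shapes, band count, and labels are purely illustrative.

```python
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

# Hypothetical training data: rows are labeled pixels, columns are Landsat TM bands.
rng = np.random.default_rng(0)
X_train = rng.random((500, 6))           # 500 labeled pixels, 6 spectral bands
y_train = rng.integers(0, 9, 500)        # 9 landscape classes (FL, OL, ..., UNDL)

# Maximum likelihood classification: fit one multivariate normal per class
# and assign each pixel to the class with the highest likelihood.
clf = QuadraticDiscriminantAnalysis(store_covariance=True)
clf.fit(X_train, y_train)

scene = rng.random((100_000, 6))         # flattened image pixels to classify
labels = clf.predict(scene)              # per-pixel landscape type
```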
The data were classified into nine landscape types: farmland (FL), orchard landscape (OL), forest landscape (FL), built-up landscape (BL), water landscape (WL), grass landscape (GL), wetland landscape (WetL), construction landscape (CL), and undeveloped landscape (UNDL). CL is a temporary landscape in urban areas that transitions to BL. Additionally, rapid urbanization often creates large areas of temporary construction land (Figure 2). Multivariate Statistical Analysis Cluster analysis (CA) using Ward's method and the squared Euclidean distance was employed to describe the spatial and temporal patterns of water quality [29]. Based on water quality data and monitoring site codes, water quality can be grouped based on the spatial scale or temporal stage. Using a spatial scale, water quality sites can be clustered into similar urban development stages. Temporally, water quality time series can be divided into different development stages. In this study, boxplots were used to explain the variations in the water quality distribution.
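As a sketch of this clustering step (not the authors' code; the data layout is an assumption), Ward's hierarchical clustering of the monitoring sites could be written as follows in Python:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical matrix: 27 monitoring sites x 6 water quality indicators
# (CODMn, BOD5, NH3-N, TP, VP, Oils), e.g., multi-year site means.
rng = np.random.default_rng(0)
sites = rng.random((27, 6))

# Ward's method minimizes within-cluster variance; scipy's 'ward' linkage
# operates on the (squared) Euclidean distances between observations.
Z = linkage(sites, method="ward")

# Cut the dendrogram into five groups, matching Clusters A-E in the text.
labels = fcluster(Z, t=5, criterion="maxclust")
print(labels)  # cluster assignment for each site
```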
Landscape Pattern Analysis Ten metrics were chosen to quantify changes in the catchment landscape pattern at the landscape level based on the patch size, shape, and structure and the landscape diversity. These metrics included the number of patches (NP), cohesion (Cohesion), Shannon's diversity index (SHDI), patch density (PD), landscape shape index (LSI), edge density (ED), interspersion and juxtaposition index (IJI), the percentage of the cultivated landscape combining the agricultural and orchard landscapes (CultiP), the percentage of FL (ForestP), and the percentage of UL combined with built-up and developing landscapes (UrbanP). These metrics reflect not only land use and land cover but also the spatial configuration and composition of the landscape at the catchment scale. These metrics have also been considered relative to water quality in previous studies [4,8,10,11]. All of the metrics were calculated using the FRAGSTATS 4.2 software (University of Massachusetts Amherst, Amherst, MA, USA) [30]. Panel Regression Analysis Panel regression analysis was employed to examine the relationships between landscape patterns and water quality in urban catchments. Panel data analysis can be used to assess a variety of regression models and observation times, and it reduces the risk of collinearity between variables. Panel data analysis can incorporate observational or cross-sectional data, and the data can be temporally investigated. In addition, variable intercept panel data models can be advantageous for interpreting the specific impacts of spatial heterogeneity. In this study, a fixed-effect panel data model was used for data assessment based on the following equation:

$$y_{i,t} = \mu_i + \sum_{j=1}^{m} \alpha_j \, LCP_{i,j,t} + \sum_{j} \beta_j \, LCF_{i,j,t} + \varepsilon_{i,t}$$

where $y_{i,t}$ is the dependent variable with measurement unit i (i = 1, 2, ..., N) at time t (t = 1, 2, ..., T); $LCP_{i,j,t}$ are vectors of observations for m independent variables of landscape composition ([N × T] × m); $LCF_{i,j,t}$ are vectors of observations for independent variables of landscape configuration; $\alpha_j$ and $\beta_j$ are the matching vectors of unknown model parameters; $\varepsilon_{i,t}$ is an independently and identically distributed error term with a zero mean and variance of $\sigma^2$; and $\mu_i$ denotes a specific effect. In the case of water quality modeling, some space-specific variables that affect 5-year water quality degradation are omitted, such as socioeconomic factors. The random effects can be best interpreted based on the overall characteristics of the model results; therefore, a random effects model was also considered in this study. All the variables were transformed into logarithmic form to reduce multicollinearity, eliminate the influence of the dimension, and investigate the rate of change and flexibility of each variable. The estimation results of a Hausman test indicated that the fixed-effects model is appropriate for assessing the interactions between variables (Prob > χ² = 0.000).
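A minimal Python sketch of this fixed-effects specification follows (not the authors' code; the synthetic data, column names, and clustered standard errors are assumptions for illustration):

```python
import numpy as np
import pandas as pd
from linearmodels.panel import PanelOLS

# Synthetic panel: 27 catchments observed in 5 survey years.
rng = np.random.default_rng(0)
idx = pd.MultiIndex.from_product(
    [range(27), [1990, 1995, 2000, 2005, 2010]], names=["catchment", "year"]
)
df = pd.DataFrame(
    rng.uniform(0.1, 10.0, size=(len(idx), 4)),
    index=idx, columns=["BOD5", "ForestP", "CultiP", "PD"],
)

# Log-transform all variables so the coefficients read as elasticities.
logs = np.log(df)

# Fixed-effects (within) estimator: entity_effects=True absorbs the
# catchment-specific intercepts mu_i of the model in the text.
res = PanelOLS(
    logs["BOD5"], logs[["ForestP", "CultiP", "PD"]], entity_effects=True
).fit(cov_type="clustered", cluster_entity=True)
print(res.params)
```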
Spatial and Temporal Variations in Water Quality The water quality statistics indicate that the mean values of the variables notably exceed the standard water quality level III in all instances and level V in some instances (Table 1). Furthermore, BOD5, NH3-N, and TP are the three main pollution indicators. The spatial variations in surface water quality can be clustered into five groups based on sample site during 1990-2010. Cluster A consists of four sites of the Shenzhen stream and Buji stream, which can be described as urbanization-polluted stream catchments; water pollution in Cluster A is greater than that in the other clusters. Changes in Watershed Landscape Patterns Based on the clustering of water quality, the catchments were grouped into five clusters to describe the spatial patterns of the landscape from 1990 to 2010. The variations in the landscape fragmentation, landscape composition, spatial configuration, and patch diversity of Clusters A to E are shown in Figure 4. With accelerated urbanization in Shenzhen, the landscapes in catchments with different water qualities exhibited obvious spatial variations. Overall, the majority of landscape patterns in the catchments of Clusters A to E exhibited inverted U-shaped curves, with exceptions being ForestP, which exhibited an increasing trend, and UrbanP, which exhibited a decreasing slope. The landscapes in Cluster B catchments varied more drastically than those of other catchments, indicating that it underwent rapid urbanization. The boxplots of the landscape pattern indices (Figure 4) include water quality plots (Figure 3) in different clusters, which may suggest that landscape changes have profound effects on water quality degradation in different sub-catchments. Changes in landscape patterns occurred in different clustered catchments. The variations in the landscape indices of NP, PD, ED, IJI, LSI, SHDI, CultiP and ForestP in Cluster A, where the urban landscape is a basic matrix, were smaller than those in Cluster B, suggesting that a stable stage occurred after the rapid urbanization stage. In Cluster B, the mean values of NP, PD, ED, and LSI were the largest of all five clusters of catchments, suggesting that these watersheds experienced landscape fragmentation, which led to structural complexity as urbanization progressed. However, the variations in IJI, Cohesion, SHDI, ForestP, CultiP, and UrbanP were also much larger than those in the other clusters, indicating that the catchments in Cluster B were severely influenced by human activities. In Cluster C, which included suburban areas, the indices of UrbanP and ForestP were higher than those of Cluster B but lower than those of Cluster A, while CultiP was significantly higher than that of Clusters A and B. In Cluster C, the watershed landscape displayed high fragmentation and diversity, low aggregation, and a highly complex spatial structure, as reflected by NP, PD, ED, and LSI. The landscape pattern changes in Cluster D indicated that it experienced a high degree of landscape urbanization through variations in Cohesion and IJI. Cluster D catchments include water resource protection zones in urban areas with high proportions of agricultural land use and FL and low proportions of urban land. In these areas, water resource protection policies have significantly influenced landscape restoration, and the landscape indices of NP, PD, ED, LSI, and SHDI exhibited lower values than those in Clusters B and C.
Cluster E is characterized by a natural landscape in an outer suburban area and is a water protection area with the highest aggregation and ForestP values. In this cluster, urban and agricultural landscapes account for low proportions of the total landscape, and the values of NP, PD, ED, LSI, and SHDI were the lowest among all clusters. These findings suggest that the processes of urbanization have had impacts on the sub-catchment landscape, which reflects the gradient pattern of landscape changes from an urban core area to suburban, outer suburban, and, eventually, natural landscape areas. Temporal Changes in Landscape Patterns in Five Typical Catchments To understand the differences in landscape changes according to temporal characteristics, five typical catchments (BJH, GLHZC, DSH, SYSK, and MLSK) were chosen based on water quality indicators at different watershed urbanization levels. All the catchments were located in west-central Shenzhen and exhibited a spatial gradient from urban to suburban areas. Ten landscape configuration and component metrics were used to analyze the characteristics of landscape change in the five typical watersheds (Figure 5). The NP, PD, ED, LSI, SHDI, ForestP and CultiP indices of the five typical watersheds have drastically decreased over the past 20 years, while only Cohesion, IJI and UrbanP increased during the rapid urbanization process. These results suggest that a reduction in landscape fragmentation has occurred in most watersheds and that the transformation of the landscape matrix from a natural landscape to an urban landscape was most significant in the BJH catchment, as reflected by decreasing ED, LSI, and PD values. The proportion of the urban landscape increased quickly in watersheds that underwent rapid urbanization, such as BJH, followed by increases in industrial and peri-urban catchments such as GLHZC and urban areas with water protection policies, such as SYSK. However, areas of natural water protection exhibited the opposite trend. Landscape pattern changes were significantly different among the typical catchments. The values of NP, ED, LSI, SHDI, and UrbanP in highly urbanized catchments were several times higher than those in natural catchments, such as MLSK, while those of Cohesion and ForestP were several times lower. All these patterns suggest that water quality degradation varies simultaneously at the sub-catchment scale.
Table 2 shows the coefficients estimated from the panel data analysis, which indicate a clear relationship between water quality and landscape changes. The R² of the fixed-effects panel model shows that water quality degradation in Shenzhen can largely be determined by the landscape composition and configuration from 1990 to 2010. Specifically, 39-58% of the degradation is reflected by landscape changes during the rapid urbanization process. BOD5 was significantly and positively correlated with PD and negatively correlated with ForestP and NP; CODMn was positively correlated with PD and negatively correlated with NP, ForestP and Cohesion; NH3-N was negatively correlated with ForestP and CultiP; and TP was positively correlated with Cohesion and negatively correlated with UrbanP. Moreover, VP exhibited a negative correlation with IJI and CultiP. These five water quality parameters exhibited relationships with changes in the landscape composition, while Oils were more affected by the landscape configuration. These relationships indicate that due to the rapid changes in natural landscapes and BLs, water has become seriously polluted in high-level urbanization catchments and contaminated in water protection areas. Effects of Landscape Changes on Water Quality Degradation The compositions of various landscape types exhibited different effects on water quality. Notably, forest and cultivated landscapes showed negative coefficients for the pollutant indicators, while built-up landscapes showed positive coefficients for the various indicators, except for TP. A significant negative correlation was observed for the influences of ForestP and CultiP on CODMn, BOD5, NH3-N, TP, and VP. The changes in cultivated landscapes contributed to changes of −3.44%, −0.43% and −0.37% in BOD5, NH3-N and VP, respectively, and the changes in FL accounted for −0.80%, −0.34%, and −0.75% of changes to BOD5, CODMn, and NH3-N, respectively. With decreases in the areas of natural and semi-natural landscapes, the concentrations of these water quality parameters increase, leading to water quality degradation. Notably, BL exhibited a positive correlation with five water quality parameters but a significant negative correlation with TP. The change in the proportion of the urban landscape accounted for a −0.48% change in TP, indicating that increases in BL corresponded to decreases in TP. This finding indicates that the influence of decreasing the natural landscape composition is more important than the influence of increasing the urban landscape composition when urban development occurs in an urban watershed.
Changes in the landscape configuration, such as the significant positive correlations observed between PD and three water quality parameters (BOD5, CODMn and Oils), accounted for 4.37%, 2.03% and 5.44% of changes in BOD5, CODMn, and Oils, respectively. NP exhibited a significant negative correlation with BOD5, CODMn, and Oils and accounted for changes of −3.44%, −1.63%, and −4.98%, respectively. The impacts of these two landscape fragmentation indices suggest that landscape fragmentation is an important driver of water quality degradation in urban catchments. IJI had a negative impact on VP (contribution of −1.47%), while LSI only had an impact on Oils (contribution of 6.13%). Cohesion had a small negative influence on CODMn (−1.87%) and a positive influence on TP (42.97%); SHDI had a negative impact on CODMn (−1.06%); and ED had no significant impact on any of the six water quality parameters. These results suggest that ED is not an ideal indicator of landscape configuration change as it relates to water quality.
However, when landscape fragmentation occurs in urban watersheds, it significantly contributes to changes in water quality. Relationships between Urban Landscape Changes and Water Quality Understanding the relationship between landscape patterns and water quality degradation typically requires more than 15 years of continuous water quality data, and it is necessary to distinguish between the impacts of human activities on landscape changes and normal fluctuations in water quality. Our results indicate that the water quality of urban streams is considerably affected by landscape changes. Previous studies focused on the effects of static spatial landscape patterns on water quality in the short term [4,11], ignoring the impacts of landscape changes on water quality [31]. In such studies, the effect of the landscape configuration is often overestimated, and many landscape metrics have been misused by quantifying such relationships using a correlation analysis [32]. However, this research indicates that the effects of landscape composition changes on water quality are generally more important than those of landscape configuration changes [33][34][35]. It is difficult to select landscape metrics as variables to quantify the relationships between landscape characteristics and water quality [36]. Previous studies generally used area/density/edge, shape, isolation, interspersion, connectivity, or diversity metrics to characterize the structure of a watershed or riparian buffer zone [37]. However, the selection of landscape indices could be improved so that changes in watershed landscape patterns and the drivers of urban stream water quality can be reasonably quantified [38]. Our results indicate that landscape composition and configuration metrics and panel regression analysis can improve assessments of water quality degradation. CODMn and BOD5 were reported as the major pollutants associated with the rapid urbanization process in Shenzhen [39]. Previous studies in Shenzhen have demonstrated that the urban landscape composition is nonlinearly related to water quality parameters [18]. Therefore, linear regression methods, such as stepwise regression or correlation analysis, are unsuitable for quantifying these relationships, although they are currently the most popular methods used to analyze the relationships between water quality and landscape patterns [40]. To quantify these nonlinear relationships and avoid collinearity, panel regression analysis based on the log transformation of variables was employed in this study, indicating that this method can effectively quantify these relationships. Other studies have simulated the pollution load based on the landscape or land use, such as studies based on SWAT, HSPF, or L-THIA, with the purpose of quantifying and reducing water quality pollution [41]. However, the pollutant load does not need to be directly discharged into streams to create water pollution issues attributable to landscape changes. Environmental engineering measures can only slow water quality deterioration and cannot completely restore river water quality to its natural state [42]. Thus, engineering measures can quickly reduce pollutant levels but cannot eliminate the cumulative effects of landscape changes. If stream water quality is to be restored, it is necessary to provide a scientific basis for establishing environmental policy and to mitigate the negative effects of landscape changes on water quality.
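The percentage contributions quoted above follow from the log transformation of the variables; under the standard econometric reading of a log-log model (an interpretation note, not stated in the paper), each coefficient is an elasticity:

$$\ln y_{i,t} = \beta \ln x_{i,t} + \cdots \quad\Longrightarrow\quad \beta = \frac{\partial \ln y}{\partial \ln x} \approx \frac{\%\,\Delta y}{\%\,\Delta x},$$

so a coefficient β on a logged landscape metric is read as the approximate percentage change in the pollutant concentration associated with a 1% change in that metric, holding the other regressors fixed.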
Policy Implications for Water Quality Management Understanding the relationship between regional water quality and landscape change is important for providing evidence-based policy recommendations for the sustainable governance of water quality resources in both developing and developed urban areas [43]. Heterogeneous drivers of water quality reflect the necessity for diverse resource policies that account for differences in water quality in urban stream systems and promote the recovery or restoration of water quality. Model evaluations demonstrate that many parameters, such as CODMn and BOD5, are highly correlated with PD and the natural landscape composition. Therefore, local management plans should focus on increasing the extent and density of natural landscape changes and on controlling the expansion of built-up landscapes to potentially improve water quality in urban stream systems. Landscape restoration should be planned to appropriately increase the proportion of the natural landscape, which can efficiently reduce the risk of non-point-source pollution to water bodies. Therefore, at the sub-watershed scale, landscape pattern optimization is one management method that should be considered. Conclusions In this study, the relationships between water quality and landscape changes were investigated during the rapid urbanization of Shenzhen using panel regression analysis. The results show that: (1) A clear spatial and temporal distribution of water quality along with urban landscape change was identified in Shenzhen, which considerably affected stream water quality degradation during urbanization. (2) Water quality is sensitive to different landscape changes, particularly changes in urban and forest areas. Landscape composition change is an important process that affects water quality degradation; specifically, decreases in the proportions of cultivated and forest landscapes influence water quality changes more than increases in the proportions of urban built-up landscapes. (3) The landscape changes explained 39-58% of the variations in water quality based on different water quality parameters. The landscape configuration also influences water quality degradation, but landscape metrics are not highly correlated with water quality parameters, except for landscape fragmentation indices, such as NP and PD, which play significant roles. (4) Water quality degradation and landscape changes have kept pace with each other in sub-catchments, so reducing landscape fragmentation and enhancing the natural landscape composition at the watershed scale are of vital importance for improving water quality. Therefore, optimizing the landscape pattern is an alternative strategy for alleviating urban water quality degradation. The results of this study reflect the relationships between urban landscape development and stream water quality and can be used to improve water quality in urban areas. Author Contributions: Z.L. and H.Y. conceived and designed the experiments; Z.L. performed the experiments; Z.L. analyzed the data; Z.L. contributed reagents/materials/analysis tools; Z.L. and H.Y. wrote the paper.
Study on the determinants of health professionals' performance on diabetes management care in China Background As the direct providers of diabetes management care in primary health care facilities (PHFs) in China, health professionals' performance on management care of diabetes determines the quality of services and patients' outcomes. This study aims to analyze the key determinants of health professionals' performance on diabetes management care in PHFs in China. Methods We conducted a cross-sectional study in 72 PHFs in 6 cities that piloted the contracted family doctor service (CFDS). A self-developed questionnaire was used to measure three kinds of factors (capacity, motivation and opportunity) potentially influencing the performance of health professionals. The performance of diabetes management care in the study was measured as whether health professionals delivered 7 service items required by the National Basic Public Health Service Guideline, with a total of 7 points, and was divided into three grades of good, medium and bad. The questionnaire was self-administered by all 434 health professionals involved in the study. Chi-square tests were used to compare differences in performance on diabetes management care among health professionals with different characteristics. Ordinal logistic regression was used to analyze the determinants of the performance of diabetes management care. Results Health professionals who got a higher score on the diabetes knowledge test had odds of better performance on diabetes management care (OR = 1.529, P < 0.001). Health professionals with a higher degree of self-reported satisfaction with training (OR = 1.224, P < 0.05) and a perception of decreasing workload (OR = 3.336, P < 0.01) had odds of better performance on diabetes management care, while health professionals with negative feelings about information system support had odds of worse performance on diabetes management care (OR = 0.664, P < 0.01). Conclusions Attention should be paid to the training of health professionals' knowledge on diabetes management capacity. Furthermore, measures to improve training for health professionals could satisfy their needs for self-growth and improve the motivation of health professionals. The information system supporting management care should be improved continuously to improve health professionals' working opportunities and decrease the workload.
Background Type 2 diabetes mellitus (T2DM) has become a global public health issue. In China, the prevalence of T2DM is now 11.2%, higher than the global average level [1]. The cost of T2DM treatment and management care in China is predicted to exceed RMB 360 billion (almost USD 51 billion) annually by 2030 [2]. It is imposing a huge economic burden on both patients and the whole health system in China. However, the public health services in China, especially for chronic diseases such as diabetes, still have many problems: the standardization and quality of diabetes management services provided by PHFs is not high, and the rate of diabetes patients with blood glucose under control is low [3]. There are some reasons for these problems. At the level of system or organizational arrangements, most PHFs in China set up separate departments for the two kinds of services, public health services and medical services, which are reimbursed by different financing systems [4]. At the organization level, PHFs held a strong motivation to provide more medical services rather than public health services, which was attributed to the government subsidies on public health services being relatively low; meanwhile, revenues could be obtained from delivering medical services, which were paid from out-of-pocket payments of patients or social insurance reimbursement through the fee-for-service method. At the individual health worker level, the awareness and recognition of the importance of preventing diseases in the population is not sufficient among doctors and nurses of PHFs. Doctors had a stronger willingness to deliver more medical services, although they had to work on public health services under supervision pressure from the health administration department [5]. The primary healthcare system was seen as a means of addressing the burden of chronic non-communicable diseases in the Chinese government's Healthy China 2030 plan [6]. In response to such challenges, the Chinese government has committed to a dramatic increase in the capacity building of the primary health care system [7]. The central government introduced a comprehensive healthcare reform plan in 2009 to strengthen the primary healthcare system in both basic medical services and public health service provision. One important measure was the program entitled "Basic Public Health Services" (BPHS), in which government subsidies support PHFs to deliver a defined package of basic health services throughout the country [8]. In urban areas, PHFs are called community health centers and stations; in rural areas they are township health centers and village clinics. This essential health care package focuses on maternal and child health and health management for the elderly and chronic disease patients. The health management care for chronic disease in this program covers health education, improving medication compliance, and controlling risk factors, such as smoking, alcohol intake and obesity [9], which is in line with the recommendations of the World Health Organization for essential packages of interventions for non-communicable diseases by primary care facilities [10].
In a series of measures strengthening the primary health system, one trend is to develop the "contracted family doctor service" (CFDS) and to gradually promote the gatekeeping role of primary health care providers. The CFDS package is suggested by the national policy guideline and covers the public services package defined by BPHS as well as basic medical services. Specifically for T2DM, the CFDS covers health education, control of risk factors, screening, regular physical checks, health document updating and management, prescription of medicines for controlling blood glucose, direction on medication use and compliance, and referral to hospitals for uncontrolled blood glucose or complications. All the services are provided by family doctor teams led by health professionals who have been registered as General Practitioners (GPs) or have obtained a physician or assistant physician license. As the main and direct providers of diabetes management in PHFs, health professionals' performance on management care of diabetes directly affects the quality of services and patient outcomes. Based on the behavior change wheel (BCW) framework [11], the impacts of individual health professionals on the performance of health services delivery operate through three major channels: capacity (ability to perform well), motivation (willingness to exert efforts for performance targets), and opportunity (organizational supports for achieving performance targets). Published studies have typically analyzed only one group of determinants of performance. For example, diabetes management can be improved through reform of medical education or training of health professionals [12][13][14]. Performance-related economic incentives for health professionals lead to better diabetes management care delivery process and outcome performance [15]. Use of an electronic diabetes form was associated with improved screening, and GPs with a high workload recorded fewer microvascular screening procedures [16,17]. Some other factors, such as female gender, younger age and a high attitude score of GPs, were also associated with better diabetes management [18,19]. Few studies have explored the correlations between these determinants and performance on chronic disease management in a comprehensive perspective at present. The main purpose of this study is to analyze, in the setting of PHFs in China, the determinants of health professionals' performance on T2DM management care in a comprehensive way guided by the BCW framework.
Sampling
A stratified sampling method was applied to select PHFs and health professionals. First, 6 prefectures from 6 provinces across the eastern, middle, and western regions, representing high-, middle-, and low-level economic status and different development stages of the primary health system, were selected. Second, we randomly selected 2 districts or counties in each sampled prefecture. Third, we randomly selected 6 community health care centers in each sampled district and 6 township health care centers in each sampled county, based on the institution list provided by the local agency and using a computerized sampling method. If a prefecture had no counties, 12 community health care centers were randomly selected instead. In total, 72 PHFs were selected, comprising 47 community health care centers in urban areas and 25 township health care centers in rural areas. All the health professionals in the selected PHFs were recruited into the survey. The inclusion criteria were: (1) being a practicing physician; (2) participation in teams delivering contracted services in the past year; (3) voluntary participation with informed consent; (4) being on duty on the day of the investigation. Each participant completed a self-administered questionnaire independently, with the research team on site to address questions. The response rate was 100%. Because chronic disease management care is mainly provided by CFDS teams, the correlation analyses in this study were conducted only on the 434 health professionals who reported having participated in CFDS in the past year.

The dependent variable
Health professionals' performance in diabetes management care was set as the dependent variable. Performance was defined based on the National Basic Public Health Service Guideline (third edition), which aims to establish a comprehensive service mode of continuous measures for T2DM, comprising: (1) screening for T2DM; (2) T2DM diagnosis; (3) regular treatment; (4) diet or exercise guidance; (5) follow-up visits; (6) regular examinations for T2DM and its complications; (7) referral services. The health professionals were asked whether they had provided each of these services (yes/no) in the past year, with 1 point assigned for each "yes," for a total of 7 points. Based on the extent to which health professionals followed the guideline, we divided performance in diabetes management into three grades: good if 5 or more service items were provided, medium if 3 or 4 items were provided, and bad if fewer than 3 items were provided (a minimal code sketch of this scoring is given at the end of the Methods).

The independent variables
(1) Demographic and job characteristics. The health professionals' gender, degree, training background, qualification, and professional title were set as the demographic and job characteristics; 5 questions in the questionnaire covered these items.
(2) Capacity factors. In the BCW framework [11], capacity means the individual's psychological and physical capacity to engage in the activity concerned, including having the necessary knowledge and skills. The capacity of professionals for diabetes management is usually measured by a knowledge test with questions selected from the examination for licensed practitioners [14,20]. In our study, we designed 7 questions about diabetes management knowledge to measure the diabetes management capacity of the health professionals, covering the diagnostic criteria of T2DM, the complications of T2DM, the first-choice treatment of T2DM, the understanding of the glycemic index, drug use, and the measurement of glycosylated hemoglobin. Health professionals scored 1 point for each correct answer. A score greater than 5 was rated as high, a score between 3 and 5 as medium, and a score below 3 as low; a higher score indicates better capacity in diabetes management.

(3) Motivation factors. According to the BCW framework [11], motivation means the individual's degree of willingness to exert and maintain effort toward organizational goals. A large body of evidence has identified financial and non-financial incentives as factors that maximize health workers' motivation [21][22][23][24][25]. In China, a systematic review has verified that income, career development through promotion and training, and workload are key factors influencing health workers' motivation [26]. Therefore, in our study, the factors related to the motivation level of health professionals were measured by the linkage between income and performance of health services, health professionals' experience of promotion, their perception of workload, and their satisfaction with the training received. Four questions were designed, such as "How has your performance on CFDS affected your personal income?" with the options increase, no impact, and decrease. The underlying assumption was that linking better performance to an increase in income can satisfy health workers' needs for financial rewards (income) or non-financial rewards (promotion and self-growth) and thereby motivate them.

(4) Opportunity factors. According to the BCW framework, both physical and social opportunities to perform well depend on support from the environment; for health professionals, the opportunity to perform well depends on organizational support [27,28]. We therefore set the health professionals' perception of organizational support, in terms of information sharing and device and drug configuration for T2DM, as the opportunity factors. Three questions were designed, such as "How good is the information sharing about diabetes management and care in your organization?" with the options very good, relatively good, relatively bad, very bad, and no idea. The underlying assumption was that a higher perception of these supports means the organization provides more opportunities to perform well.

Statistical analysis
This study analyzes how different factors influence the performance of diabetes management care among health professionals. Descriptive statistics in the form of frequencies and percentages were used to describe the characteristics of the health professionals. Chi-square tests were used to compare differences in performance in diabetes management care among health professionals with different characteristics. Ordinal logistic regression was used to analyze the determinants of performance in diabetes management care. All analyses were performed using the statistical package Stata version 14.0, and a difference of P < 0.05 was considered statistically significant. A minimal sketch of the scoring and of the ordinal model follows.
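The following minimal Python sketch illustrates how the 7-item performance score could be graded and regressed on capacity, motivation, and opportunity factors. It is our own illustration: the variable names and simulated data are hypothetical, and the statsmodels OrderedModel is used as a stand-in for Stata's ologit.

    import numpy as np
    import pandas as pd
    from statsmodels.miscmodels.ordinal_model import OrderedModel

    def grade(score):
        # >= 5 service items -> "good", 3-4 -> "medium", < 3 -> "bad"
        return "good" if score >= 5 else ("medium" if score >= 3 else "bad")

    rng = np.random.default_rng(0)
    n = 400

    # Hypothetical covariates for the three BCW channels.
    knowledge = rng.integers(0, 8, n)   # capacity: 7-item knowledge test score
    workload = rng.integers(0, 2, n)    # motivation: perceived workload increase
    info = rng.integers(0, 2, n)        # opportunity: good information sharing

    # Simulate provision of the 7 guideline service items, with provision
    # probability rising with knowledge and info sharing, falling with workload.
    p = np.clip(0.25 + 0.07 * knowledge + 0.10 * info - 0.10 * workload, 0.05, 0.95)
    items = rng.binomial(1, p[:, None], size=(n, 7))
    score = items.sum(axis=1)

    df = pd.DataFrame({
        "knowledge": knowledge,
        "workload": workload,
        "info": info,
        "performance": pd.Categorical(
            [grade(s) for s in score],
            categories=["bad", "medium", "good"], ordered=True,
        ),
    })

    # Proportional-odds ordinal logistic regression on the graded outcome.
    model = OrderedModel(df["performance"],
                         df[["knowledge", "workload", "info"]],
                         distr="logit")
    print(model.fit(method="bfgs", disp=False).summary())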
Basic characteristics of the investigated health professionals
This study included 576 health professionals from 72 PHFs in 12 administrative districts in China. As Table 1 shows, 56.42% (n = 325) of the investigated health professionals were female; more than half had been trained in Western medicine (n = 320), while 15.45% (n = 89) and 12.5% (n = 72) had been trained in Traditional Chinese Medicine and preventive medicine, respectively. 67.53% (n = 389) of the health professionals held a bachelor's degree, and 75.35% (n = 434) reported having participated in teams delivering CFDS in the past year.

The correlation between performance in diabetes management care and health professionals' demographic and job characteristics
Table 2 shows the correlation between individual health professionals' characteristics and whether they had carried out each item of the services defined by the BPHS management care guideline. The results showed that female health professionals performed better than males in providing diagnosis (p = 0.004) and referral services (p = 0.029) for diabetes patients. Health professionals with a specialty in preventive medicine had lower percentages of undertaking all the service items (p < 0.005) except follow-up visits. Health professionals with other specialties, such as medical technology, performed worse in providing diagnosis (p < 0.005) and regular treatment (p < 0.005) services. Those trained in Traditional Chinese Medicine performed better in providing diagnosis (p = 0.004), regular treatment (p = 0.01), and referral services (p = 0.002) for diabetes patients compared with other practicing physicians.

The correlation between performance in diabetes management care and health professionals' work capacity
Table 3 shows the correlation between health professionals' capacity and each item of the services defined by the BPHS management care guideline. There were statistically significant differences in all service items for T2DM among health professionals with different levels of capacity (p < 0.05). Further pairwise comparisons between capacity levels confirmed that health professionals with high and medium T2DM knowledge test scores undertook more service items (p < 0.017) than those with low scores.
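As an illustration of the chi-square comparisons behind Tables 2 to 5, the following minimal Python sketch tests whether provision of a single service item differs across capacity levels; the counts are hypothetical, not the study's data.

    from scipy.stats import chi2_contingency

    # Hypothetical 3x2 contingency table: rows are capacity levels
    # (high, medium, low); columns are whether the follow-up visit
    # service was provided (yes, no).
    table = [
        [120, 30],   # high capacity
        [150, 80],   # medium capacity
        [40, 60],    # low capacity
    ]

    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
    # p < 0.05 would indicate that provision of this service differs
    # across capacity levels, as reported for Table 3.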
The correlation between performance in diabetes management care and health professionals' work motivation
As Table 4 shows, health professionals who had been promoted in the past year were more likely to provide regular treatment (p = 0.008) to T2DM patients. The majority of health professionals perceived that CFDS had increased their workload, and this perception was negatively correlated with the delivery of diagnosis (p = 0.048), regular treatment (p = 0.010), follow-up visits (p = 0.049), complication examinations (p = 0.018), and referral services (p = 0.005) for T2DM patients. At the same time, the majority perceived that CFDS had increased their income or left it unchanged, and those who perceived an income increase were more likely to provide the follow-up visit service (p = 0.004). We also investigated health professionals' satisfaction with training; the results showed a positive relationship between satisfaction with training and better performance in regular treatment (p = 0.015), lifestyle guidance (p = 0.045), complication examinations (p = 0.000), and referral services (p = 0.013).

The correlation between performance in diabetes management care and health professionals' perception of opportunity factors
The results in Table 5 show statistically significant differences in all service items for T2DM among health professionals with different perceptions of information sharing (p = 0.000) and of device configuration (p = 0.007): those who perceived better organizational support delivered management care services in higher proportions.

Ordinal logistic regression on the determinants of health professionals' performance in diabetes management care
The multivariate results in Table 6 show that training in a combination of Traditional Chinese Medicine and Western medicine or in preventive medicine, the T2DM knowledge score, satisfaction with training, perceived workload, and perceived information system support were correlated with health professionals' performance in diabetes management care (p < 0.05).

Discussion
Using survey data from six prefectures in China, this study found that all three kinds of determinants of health worker performance were associated with primary health professionals' performance in diabetes management care: health professionals with higher satisfaction with training provided more of the service items required by the national guideline; higher diabetes knowledge scores were associated with better performance in diabetes management care; and health professionals who perceived better support from the information sharing system and better availability of adequate equipment performed better in diabetes management services. This study was designed based on the behavior change wheel (BCW) framework, which has implications similar to several other frameworks for the determinants of health worker behavior [29,30]. According to these frameworks, health professional performance is the consequence of three factors: the ability to get the job done (the knowledge, skills, and experience to perform the job); the motivation to work hard (the extent of effort put into performing better); and organizational support, or the opportunity to do a good job (availability of resources, existence of performance-friendly policies and practices, and the physical and social environment).
This study found that higher diabetes knowledge scores among health professionals were correlated with providing more of the service items required by the national guideline on diabetes management care. A knowledge examination score is a common method of measuring the capacity of health workers and has been used in dental care, internal medicine care, and primary health care [31,32]. The positive relationship between capacity and work performance has been verified across countries and across kinds of services. For example, a study of mental health capacity-building training for rural general practitioners in Mali indicated that a short training intervention improved general practitioners' knowledge and skills and resulted in a significant number of new patients being diagnosed and managed [33]; another study, of a capacity-building training program for the early recognition and referral of childhood cancer in North-West Cameroon [34], indicated a significant correlation between the participants' form of training and their mean knowledge score regarding childhood cancer types, signs of childhood cancer, and the availability of treatment.

Regarding the influence of work motivation on the performance of health professionals, the univariate analysis showed that those who had been promoted in the past year, those who perceived CFDS as increasing their income, and those with higher satisfaction with training undertook more items of diabetes management services, while those who considered CFDS to increase their workload undertook fewer. In the multivariate analysis, satisfaction with training remained correlated with better performance in diabetes management care, while perception of a higher workload remained correlated with lower delivery of diabetes management services. A large number of studies [35][36][37][38][39][40][41] have explored the factors that can motivate primary health workers in China and abroad. One systematic review synthesized the major motivation factors for primary health workers in China and confirmed the influence of career development and financial income on motivation. The experience of being promoted and the linkage between income and CFDS directly satisfy health workers' needs for career development and increased income, and these factors can therefore motivate better performance in diabetes care. The finding of a negative impact of higher workload on the delivery of diabetes management services is consistent with other findings identifying high workload as a demotivating factor [16,42]. Satisfaction with training was found to be one important contributor to better performance in diabetes management services, consistent with previous studies [16,18,19,43,44]. The mechanisms by which satisfaction with training contributes to better performance include satisfying health workers' need for self-growth and, at the same time, improving their work capacity [45].
Organizational support, such as the availability of infrastructure and supplies, provides one of the most important opportunities for health workers to perform better. In our study, we found that health professionals who perceived high levels of diabetes management information sharing performed better in diabetes management services. Studies in other regions of China have reported similar findings: the effect of management care on patient outcomes was nearly 30% stronger in districts/counties with fully established management information systems compared with districts without such systems [27], and PHFs with information system support, including shared health records and medical records systems, achieved better control of blood glucose in diabetes patients [28]. Studies in other countries have also found that information system support is connected with health workers' performance: Green [46] indicated that a web-based chronic disease management (CDM) system was a direct critical success factor that allowed a group of physicians to improve their practice by tracking patient care processes using evidence-based, clinical-practice-guideline-based flow sheets; and Griffin and Kinmonth [47] concluded in their Cochrane review that responsibility for diabetes care by family physicians will only succeed with adequate support in the office practice, such as computerized, prompted recall and review of patients with diabetes.

This study has several limitations. First, the observational nature of our study limits our ability to draw causal inferences from the findings. Health professionals' performance in diabetes management care may be influenced by other uncontrolled factors, including health system characteristics and other environmental factors. Future studies should apply more rigorous designs, including randomized controlled trials and observational studies with concurrent control groups, to assess the effectiveness of policies targeting the behavior of health professionals. Second, the provision of diabetes management services was self-reported, which may lead to overestimation of actual performance. Nevertheless, the determinants of health professionals' performance in diabetes management care, and their explanation based on the BCW framework, can still provide in-depth understanding and reliable evidence to support policies aimed at improving diabetes management care in China.

Conclusions
Chronic disease management has gradually become a major task of primary health facilities in China. Whether health professionals can provide qualified management services for diabetes patients directly contributes to the health status of diabetes patients and to the performance of the whole primary health system. Based on the findings of this study, attention should be paid to training health professionals' knowledge of and capacity for diabetes management. Furthermore, measures should be taken to provide satisfactory training for health professionals in order to improve their motivation for diabetes management. It is also concluded that the information systems supporting management care should be improved continuously; this would not only expand health professionals' opportunities for diabetes management but also reduce the damage that high workload does to their enthusiasm for diabetes management services, since a poor information system implies a great deal of manual work in diabetes management care.
Table 1. General status of the health professionals.
Table 2. The management services for T2DM delivered by health professionals with different personal characteristics. (*Other specialties include nursing, medical examination, medical technology, stomatology, and pharmacy. **"Others" under professional title means the respondent had not applied for a professional title.)
Table 3. The management services for T2DM delivered by health professionals with different work capacity.
Table 4. The management services for T2DM delivered by health professionals with different work motivation.
Table 5. The management services for T2DM delivered by health professionals with different perceptions of opportunity factors.
Table 6. Ordered logistic regression of determinants of health professionals' performance on diabetes management care.
Shared topics on the experience of people with haemophilia living in the UK and the USA and the influence of individual and contextual variables: Results from the HERO qualitative study

The study illuminates the subjective experience of haemophilia in people who took part in the Haemophilia Experience, Results and Opportunities (HERO) initiative, a quali-quantitative research program aimed at exploring psychosocial issues concerning this illness around the world. Applying a bottom-up analytic process with the help of software for textual data, we investigated 19 interviews in order to describe the core themes and the latent factors of speech, and to explore the role of different variables in shaping the participants' illness experiences. The five themes detected are feeling different from others, body pain, acquisition of knowledge and resources, family history, and integration of care practices in everyday life. We illustrate how nationality, age, family situation, the use of prophylaxis or on-demand treatment, and the presence of human immunodeficiency virus or hepatitis C virus affect the experience of our participants in different ways. The findings are used to provide insights for research, clinical practice, and psychosocial support.

In prophylactic treatment, clotting factor is administered regularly, two or three times per week, even in the absence of bleeding episodes. This second method reduces the anxiety and alarm surrounding physical trauma, but can be quite invasive, thus entailing a significant everyday psychological and organizational burden for affected individuals and their caregivers (Brigati & Emiliani, 2013). Moreover, therapies are expensive and, given that haemophilia is a chronic condition, they represent a significant economic cost for the community (in countries where health care is guaranteed by the state) or for individuals (in countries where treatment is only possible through private insurance). Nowadays, at least in developed countries, haemophilia is a very different disease than it was a few generations ago (Gringeri et al., 2004). Medical advances, like prophylaxis and the availability of safe clotting factor, have reduced life-threatening events and have prevented chronic complications. We must mention that in the early 1980s many people with haemophilia (PWH) were infected with the human immunodeficiency virus (HIV) or hepatitis C virus (HCV) because of transfusions with infected blood. Acquired immunodeficiency syndrome (AIDS) caused high mortality rates as well as social stigmatization and marginalization of haemophiliacs, and older generations of PWH may still carry the HIV and/or HCV virus. Even if at present the risk of contamination with infected blood has technically been eliminated, researchers have observed that the fear of contamination and the stigma persist in social relationships (Barlow, Stapley, & Ellard, 2007), also because this illness is inevitably linked to symbolic and taboo elements, such as blood and needles (Markova, Linell, Grossen, & Salazar Orvig, 2007; Potì, 2013). Therefore, PWH and their families often have to deal with an array of psychosocial challenges and emotions (e.g., shame, guilt, fear, and anxiety), and the illness can have a significant impact on their quality of life, regarding relationships and the management of school, work, or leisure activities (Beaton, Neal, & Lee, 2005; Cassis, Emiliani, Pasi, Palareti, & Iorio, 2012; Cassis et al., 2014; Tejero Pérez, 2005).
Some authors have investigated the impact of haemophilia from a biomedical and individual psychological perspective, measuring dimensions such as self-esteem, stress, anxiety levels, and depression among PWH or caregivers (Basu, Chowdhury, & Mitra, 2010; Ghanizadeh & Baligh-Jahromi, 2009; Kyngas & Rissanen, 2001; Plug et al., 2008). In recent years, increasing attention to the perceived impact of illness has led to several studies on the quality of life of PWH, with the creation of disease- and age-specific instruments (Berg et al., 2015; Bradley et al., 2006; Szende et al., 2003). Nevertheless, a systematic review of methodologies and findings on the psychosocial aspects of haemophilia has shown that research in this area is still limited, based on questionnaire techniques with little or no qualitative information, and that there is a lack of data on the haemophilia life cycle, along with data from developing countries (Cassis, Querol, Forsyth, & Iorio, 2012). Although useful for providing quantitative information, this literature can be limited in grasping the mechanisms and processes that lead people toward different kinds of outcomes and to variations in levels of well-being or social integration (Emiliani, Palareti, & Melotti, 2010). In an era when the psychosocial care of chronic conditions is increasingly recommended (Holland, Watson, & Dunn, 2011), some authors have claimed the need for further international studies that take into account the subjective perspectives of people in the variety of contexts and circumstances they have to face (Breakey, Blanchette, & Bolton-Maggs, 2010; Glozah, 2015). In the present paper, we took advantage of data previously gathered by the multinational Haemophilia Experience, Results and Opportunities (HERO) program to run a qualitative study that, overcoming some of the gaps evidenced in the literature, explores the psychosocial construction of meanings of living with haemophilia.

Method
The HERO program (www.herostudy.org) was established in 2009 as a multimethod initiative aimed at broadening the understanding of the psychosocial issues associated with living with haemophilia. After a literature review, in the first phase, 150 face-to-face interviews with PWH, parents, and health care professionals (HCPs) were collected in seven countries (Algeria, Brazil, France, Germany, Italy, United Kingdom, and United States), and their explicit content was used to elucidate the key psychosocial issues and to prioritize areas for the subsequent quantitative assessment. Then two online surveys (one for adult PWH and one for parents of children with haemophilia) were developed and conducted in 10 countries. The 1236 questionnaires represent the largest multi-country data set gathered to date, including demographic, treatment, and psychosocial information at the same time. In this paper, a different approach to the analysis of the interviews of PWH is applied in order to deepen the subjective meanings of the illness experience in particular cultural and social contexts (Markova et al., 2007). Our specific goals were to identify the core themes shared by our participants and to explore the way in which the national context, the life-cycle stage (age and family situation), and the clinical condition (type of treatment, and presence of HIV or HCV infection) affect their illness experience. The face-to-face interviews were performed in 2010 by a specialist health care research agency.
They generally took place in the homes of PWH and lasted at least 60 minutes each. The interview grid (see Supplementary file) was prepared by two of the authors, in collaboration with the research agency, to explore the following issues: the first awareness of haemophilia; the meaning of growing up with haemophilia; current issues related to living with haemophilia; haemophilia treatment; support received; and hopes for the future. The interviewer encouraged participants to talk about their own personal experiences through non-directive prompts and a list of open-ended questions. At the end of the interview, participants were also asked to complete a questionnaire on demographic and social data. All of the interviews were audio recorded with the patients' informed consent and transcribed verbatim. The relevant ethical boards of the countries involved approved the study.

Participants
We purposely selected only the interviews of PWH living in the United States (USA) and the United Kingdom (UK), because the two countries share the same linguistic matrix (English-speaking first-world countries) but have opposite health care systems: until recently, the health care system in the USA was mainly based on private insurance, whereas the UK has a national health system (Lelli, 2013). Participants were contacted through patients' associations and received a letter from the HERO board and the research agency presenting the research aims and methods. Because the general aim of the qualitative phase of the HERO project was to explore different PWH's points of view, all adult men with haemophilia A or B who were willing to participate in an in-depth face-to-face interview were enrolled in the study. Nine PWH were interviewed in the UK and 10 in the USA, divided according to the variables illustrated in Table I. Although it was a convenience sample, Fisher's exact test confirmed the independence between nationality and all the other variables considered, indicating that the interviewees were equally heterogeneous in the two countries.

Data analysis
A bottom-up thematic analysis was performed on the 19 interviews. The inductive approach was chosen as particularly well suited to bringing out subjective experiences that are socially constructed and context-bounded (Guba & Lincoln, 1994; Holloway & Wheeler, 2013; Vaismoradi, Turunen, & Bondas, 2013). In particular, we analysed the transcripts using the "Thematic analysis of elementary contexts" from the T-Lab software (Lancia, 2012). The software studies the vocabulary and co-occurrence matrices in order to identify shared themes or issues associated with the topics being researched, which allows for the exploration of the latent content (Braun & Clarke, 2006) of the entire data set. In the thematic analysis, the software examines the text of all the interviews, considered as a single set of data, identifying the interviewees' lexical choices and performing cluster and correspondence analysis (Caputo, 2014; Emiliani, Bertocchi, Potì, & Palareti, 2011; Montali, Monica, Riva, & Cipriani, 2011). Beyond the statistical analysis, the researcher is highly involved while using the software, from the preparation of the data set to the interpretation of the statistical outputs; he or she must therefore be very familiar with the interview texts and the research topic. In the present study, two of the authors prepared the data set (e.g., lemmatization and disambiguation of terms, choice of the keywords) and conducted the T-Lab analysis together.
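T-Lab is proprietary, but the core of its thematic analysis of elementary contexts, clustering short text segments by word co-occurrence and characterizing each cluster by its most strongly associated keywords, can be illustrated with a minimal Python sketch. The corpus, parameters, and library choices here (scikit-learn, SciPy) are our own hypothetical stand-ins, not the procedure actually run in T-Lab.

    import numpy as np
    from scipy.stats import chi2_contingency
    from sklearn.cluster import KMeans
    from sklearn.feature_extraction.text import CountVectorizer

    # Hypothetical elementary contexts (short segments from transcripts).
    contexts = [
        "growing up feeling different at school",
        "pain in joints and legs every week",
        "learning to self infuse factor at home",
        "family history of haemophilia and carriers",
        "managing treatment routine in everyday life",
    ] * 20

    # Document-term matrix of keywords (lemmatization is omitted here).
    vectorizer = CountVectorizer(min_df=2)
    X = vectorizer.fit_transform(contexts)

    # Partition the elementary contexts into thematic clusters.
    km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X)

    # Characterize each cluster by the chi-square association of each word
    # with membership in that cluster (analogous to T-Lab's keyword chains
    # ordered by decreasing chi-square value).
    vocab = np.array(vectorizer.get_feature_names_out())
    for c in range(5):
        in_cluster = km.labels_ == c
        scores = []
        for j, word in enumerate(vocab):
            has_word = X[:, j].toarray().ravel() > 0
            table = [
                [np.sum(in_cluster & has_word), np.sum(in_cluster & ~has_word)],
                [np.sum(~in_cluster & has_word), np.sum(~in_cluster & ~has_word)],
            ]
            scores.append((chi2_contingency(table)[0], word))
        top_words = [w for _, w in sorted(scores, reverse=True)[:5]]
        print(f"cluster {c}: {top_words}")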
Subsequently, a third author analysed the results of the statistical outputs, and together the authors refined the interpretations and labelled the data. Of the transcribed interviews, we chose to import only the participants' answers into T-Lab, obtaining a data corpus of 101 pages (font 12, single-spaced) for a total of 76,326 words (before the lemmatization and creation of the dictionary). The Thematic Analysis of Elementary Contexts produced the following outputs:

a. the clusters, each described through a set of keywords ordered by decreasing chi-square association value. The rationale is that a set of co-occurring words marks a specific thematic content; therefore, sentences having a certain set of co-occurring words in common share the thematic content marked by that set. The association of words in each cluster allows the researcher to reconstruct the links that arise on a psychological level in the respondents, recognizing how the group of participants cognitively and emotionally conceptualizes that particular theme. Each cluster has a different weight according to the percentage of text (sentences) that it includes with respect to the entire textual corpus;

b. the main factors (the axes of the factorial space, generally 2 or 3) that describe the joint behaviour of groups of lemmas and that can be interpreted as significant semantic dimensions expressed by the conversation. Each polarity is formed by different keywords that most often co-occur in the same parts of the text;

c. a graphic representation of the position of the variables and clusters in the factorial space. The closeness between a cluster and a variable indicates that the theme is particularly relevant for participants who share that variable.

Results
The analysis identified five clusters, that is, the themes, located in the factorial space illustrated in Figure 1. We labelled the polarities of each factor by interpreting the words that characterize them. In this way, we can see that the first factor represents in its left polarity the familial and social experience of illness, while its right side evokes the personal experience of haemophilia in everyday life. The negative pole of the second factor is centred on pain, distress, and limits, whereas the positive one is focused on resources for coping with illness and, in particular, on the support system.

The themes
Below we describe each theme, together with a table that lists the chain of words characterizing it (in order of decreasing associative value) and some illustrative sentences. Each table also indicates the weight of the cluster according to the percentage of text included.

1. Growing up and feeling different from others, even within one's own family (Table II). This is the biggest cluster and is placed in the third quadrant of the factorial plane. It focuses on the meaning that being a haemophiliac has over time, as the condition plays a role in an individual's growth within the family. Respondents have the impression of being on the wrong side; they also feel as if they are in a glass bubble, wrapped in cotton wool, untouchable, different from the others. The impact of haemophilia at school, in relationships with others (especially with healthy brothers and sisters), and in sport, with the risk of becoming socially isolated, is evident in this theme. This issue is associated with boredom, expectations, failure, and anxiety.
It is very interesting that, in the development of this theme, HIV is one of the first words, suggesting that the historical contagion that affected the haemophilia community has been a relevant worry in our participants' lives.

2. Cognitive strategies to manage the illness and find resources (Table III). This big cluster is located in the first quadrant of the plane and highlights the effort to learn how to self-manage the disease and its treatment, generally seen as a break in life, especially in leisure time or on holidays. PWH look for scientific explanations and new information on how to deal pragmatically with their condition, expressing a responsible use of resources and know-how. Different HCPs are evoked and represent a crucial point of reference whose support nourishes hope. The evaluation of what it means to be a person with haemophilia in different countries, in terms of opportunities and cost of treatment, also surfaced.

3. Illness as disability and stressor event (Table IV). This cluster is located in the fourth quadrant and covers 20% of the text. The core is the body, described through a detailed review of its injured and vulnerable parts. Feelings of depression and frustration emerged, related to the limitations and pain caused by the illness in everyday life. In this theme, we find explicit references to medical aids, especially to drugs used to provide pain relief. Haemophilia appears as a stressful and painful event that marks time (not just the everyday, but also weeks and months), creating new routines experienced with difficulty.

4. Learning to manage emotions and action toward normality (Table V). This smaller cluster is positioned in the first quadrant, together with Cluster 2, and refers to the processing of meanings and practices designed to integrate the illness and its treatment into the everyday, with the aim of creating a sustainable routine in the search for normality. The cluster gives a picture of people who are involved in many social contexts and have to cope with emotions and feelings without trying to ignore them. Both clusters in this quadrant place emphasis on learning to manage haemophilia. Yet, whereas Cluster 2 is full of references to medical aspects, Cluster 4 describes the existential dimension of the illness within everyday life contexts.

5. A tremendous mortal game: the discovery of genetic and family history (Table VI). This small, but significant, theme in the second quadrant focuses on the search for origins, even back to previous generations and countries of origin, in order to probe the genealogical tree. In this theme, illness is experienced as a form of death. Together with science and genetics, randomness and misfortune are considered, in a game described as horrible, like Russian roulette. The emotions that characterize this theme are fear, anger, and blame. Loss is evoked in different aspects of life, such as work or romantic relationships. Two illustrative extracts: «As far as I know, my great grandfather on my mother's side had haemophilia. That's pretty much as far as the history of haemophilia goes in my family, at least as far as we know. I guess there have been daughters, daughter carriers and then a boy. I was the lucky one» (US, Age group 1, On-demand, Dating, with HIV/HCV); «Then when they came back later and gave me the carrier testing, then they found out that my mom was a carrier. So then they come back and tried to check with the uncles in the family and because at that time the older generations, the African American, they wouldn't talk about certain things. There were a couple of uncles who passed away, but none knew of what or why» (US, Age group 2, Prophylaxis, Single, with HIV/HCV).

Exploring differences related to individual and contextual variables
If we analyse the position of structural variables on the factorial plane, we can see that the four modalities describing the family situation of our participants are distributed over the four different quadrants.
In particular, more than others, married people represent haemophilia as a possible limit to relationships, whereas those married with children are more concerned with the issue of genetic and family history. Single people emphasize the importance of coping strategies in everyday life and seek social support and resources through HCPs, whereas dating individuals stress the physical problems and the limits of their body, expressing concern and suffering. (The full keyword chains, with their decreasing chi-square association values, and further illustrative extracts are reported in Tables II to VI.) Referring to age, we can observe that as people grow older, they move from a primary interest in personal day-to-day experience to engagement with the issues of genealogy, family history, and death. Older interviewees are also near the variable "with HIV/HCV," because of the years in which they contracted the viruses, whereas the young are near the variable "prophylaxis," reflecting the changes in health care. As expected, patients affected by HIV/HCV are rather close to the image of haemophilia as a mortal game, but they mainly underline the role played by the health care system. Persons with haemophilia without these viruses are more interested in the limits in their everyday lives, being mostly worried about the other complications of the disease. Note that the issue of HIV, elicited in the first thematic group, is transversal to people with and without infection. Confirming the results of other studies (Pasqual Marsettin et al., 1995, 1998), this finding indicates that fear of contagion and the derived social stigma still have a strong emotional impact also on seronegative PWH. On-demand therapy is more closely associated with an image of haemophilia that creates worries and feelings of being different from others starting from infancy, whereas prophylaxis is related to a more normalized lifestyle, where emotions are elaborated and the illness is better managed.
Finally, the US respondents stressed more the aspects linked to the existing support systems, such as the relationship with HCPs or the cost of health insurance, whereas people in the UK considered more the issues of pain, diversity, and disability in everyday life. Although the specific characteristics of the participants may have produced this outcome by chance, it seems to us consistent with the differences between the two health care systems. Probably, in fact, where people pay directly for care (USA), PWH are more demanding toward professionals, paying attention to their relationships with them and taking notice of the quality of care received. Instead, in the UK, where the health care system is public, the relationship with professionals is not in the forefront, and the interviewees remained focused on their personal experience of sickness and its impact on other relational contexts, such as family, school, or work.

Strengths and limitations of the study
Haemophilia is a rare disease, and this study is based on a small, easily accessible sample of people who were interviewed in 2010. Therefore, any attempt to extend or generalize its results must be made with caution. However, because the criterion for judging the soundness of a qualitative study is not its generalizability but rather the transferability of the results (Guba & Lincoln, 1994), we believe our 19 respondents represent a valuable source of information with which to explore the psychosocial construction of meaning in living with this chronic condition. Moreover, the length and quality of the interviews were ideal for elaboration with the T-Lab analytic software. We consider the use of T-Lab a further strength, because it allows one to preserve the richness and idiographic elements of the subjective experience and, at the same time, to be accurate and rigorous (Lancia, 2007, 2012) in analysing the data. In order to achieve our goal, we chose a method focused on how concepts (words) are associated in the whole data set, not on the interpretation of the explicit content of each interview. The associations, detected through statistical analyses, have allowed the latent aspects of the material to emerge in terms of shared content (number and characteristics of the themes), of the semantic and symbolic dimensions expressed in the conversation (the factors), and of the contribution given by each of the investigated variables to the final result (the position of the variables in the map). The use of quantitative statistics does not reduce the qualitative nature of the study; the co-researchers were in constant dialogue to expand, check, or correct their viewpoints throughout the whole process. Lincoln and Guba state that the aim of trustworthiness in qualitative research is to support the argument that the inquiry's findings are "worth paying attention to" (1985, p. 290). We believe that this study offers insights that can help professionals to develop more effective practices that are culturally relevant and responsive to patients' needs. In fact, simply providing patients with information or behavioural prescriptions in order to elicit attitudes that promote treatment compliance has not proved sufficient. This approach implies that people are blank pages on which one can write the best conduct, and that people will always make the best choices.
Conversely, it is important to understand within what knowledge system, and with which symbolic and emotional meanings, new information will be integrated and implemented. The indication of the details of the statistical outputs in the tables (the exact sequence of words characteristic of each theme) was chosen not only for transparency but also as an invitation to the reader to consider the affective and cognitive meanings expressed by the interviewees without the mediation of, and inevitable reduction by, the researchers.

Research and clinical implications
Regarding additional research connected with the HERO initiative, it would be interesting to explore the interviews given by parents and HCPs in the UK and the USA in order to assess similarities and differences in the relevance given to certain topics, like pain or the relationships between patients and HCPs. Moreover, the same thematic analysis could be performed on the interviews of PWH living in developing nations (Algeria and Brazil, where access to haemophilia care and treatment is more limited) and in European non-English-speaking countries (France, Germany, and Italy), in order to further explore the role that sociocultural variables have in modifying the experience of this illness. Finally, it will be important to use the quantitative data of the last HERO phase to check whether some issues evidenced in this study (e.g., concern over the cost of the disease in the USA, or the experience of physical suffering and marginalization in the UK) are confirmed in a larger sample. We believe that these results can also be useful in clinical practice, because a deeper understanding of patients' experience of haemophilia can help HCPs to consider the subjective, developmental, and cultural aspects that often remain unstated within interactions, thus fostering a therapeutic alliance with them (Khair, 2013; Sorrentino, Guglielmetti, Gilardi, & Marsilio, 2015). In particular, this explorative study suggests the following considerations:

a. counselling for PWH should address social and family relations, including the fear of stigma, the fear of rejection, lack of confidence, and communication with carers who tend to overprotect PWH;
b. HCPs should take into account that the fear of HIV/HCV infection through blood can still be present, even in rich developed countries;
c. the interpersonal relationships with HCPs, economic concerns, and the perceived costs of the illness should also be addressed, particularly in the context of private health care systems, as in the USA;
d. psychosocial interventions and HCPs should take into account the issue of pain management, to help PWH manage frustration and avoid becoming psychologically dependent on painkillers (Elander & Barry, 2003; Montali et al., 2011); and
e. finally, it is important to analyse the explanations that people give themselves about being ill, identifying models that refer to faith, fate, and/or genetics. As our data show, this issue is even more relevant for men with children, because these models affect not only their reproductive choices but also their relationship with the new generations. The psychosocial support provided to this target group could improve the process of making sense of their illness as well as communication with affected or carrier children.

Conclusions
The main purpose of the present study was to explore and highlight shared aspects of the haemophilia experience for certain PWH in the USA and the UK.
We replaced the objective view of the phenomenon with the subjective perspectives of people who engage daily in the process of constructing symbolic meanings of the illness and utilize them in their own social, personal, and cultural contexts. If we consider haemophilia to be "an illness to care for" and not a "disease to cure" (Brigati & Emiliani, 2013), a number of psychosocial issues emerge that we should be aware of, particularly regarding the emotional and cognitive aspects of that particular experience. If we consider the factorial plane as a whole, focusing on the positions of the different clusters and the meanings that they express, we find a coherent representation of the feelings, worries, and resources these patients display in facing their psychosocial challenges. The picture reminds us how suffering from a chronic haemorrhagic illness is related to a life history steeped in pain. This study not only reveals the shared themes but also illustrates some differences linked to personal and contextual variables such as country, family status, age, and HIV or HCV status. We trust that this composite representation can offer diverse suggestions for planning and developing clinical and psychosocial interventions, and thereby improve the comprehensive care of haemophilia.

Declaration of conflicting interests
Alfonso Iorio and Frederica Cassis declared the following potential conflicts of interest with respect to the research, authorship, and/or publication of this article: A. Iorio received honoraria from Novo Nordisk for speaking in educational symposia, sitting on advisory boards, and acting as a consultant. F. Cassis received honoraria from Novo Nordisk for oral presentations in meetings and symposia. All the other authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
D-brane Analyses for BPS Mass Spectra and U-duality

We give a confirmation of the U-duality of the type II superstring by discussing the mass spectrum of the BPS states. We first evaluate the mass spectrum of BPS solitons with one kind of R-R charge. Our analysis is based on the 1-loop effective action of the D-brane, known as the "Dirac-Born-Infeld (DBI) action", and on the fact that BPS states correspond to SUSY cycles with minimal volumes. We show that the mass formula derived in this manner fits completely with that given by U-duality. We further discuss the cases of BPS solitons possessing several kinds of R-R charges. These are the cases of "intersecting D-branes", which cannot be described by simple DBI actions. We claim that, in these cases, higher loop corrections should be incorporated as binding energies between the branes. It is remarkable that the summation of the contributions from all loops reproduces the correct mass formula predicted by U-duality.

Introduction
D-brane analyses have been found to be very successful methods for describing the solitonic states of string theory [1]-[13]. They give essential insights into the non-perturbative aspects of string theory, such as the recent success of the microscopic description of black hole properties [14]. Within string theory, they give hope that those non-perturbative excitations might be quantized. In this paper, we focus on the type II superstrings compactified on tori. This theory is conjectured to have the U-duality symmetry [11,15,16], which originates from the symmetry of the supergravity theory [17]. Our aim is to present a non-trivial check of U-duality. One of the remarkable successes toward this aim is the calculation of the degeneracy of BPS states given in the excellent works [12,13]. However, these calculations are essentially independent of the background moduli. Hence it is still meaningful, for the confirmation of U-duality, to study quantities which depend strongly on the moduli. One of the typical objects with this property is the BPS mass spectrum. Motivated by this fact, we intend to examine the mass formula for BPS solitons with Ramond-Ramond (R-R) charges by means of D-brane techniques, and to compare it with that predicted by U-duality.

For the simple cases in which only one kind of R-R charge is excited, we shall analyse the mass spectra by using the Dirac-Born-Infeld (DBI) action for the D-brane [2,7,8,10]. This is an extension of the study in our previous publication [18]. It is further stimulating to analyse the more complicated situations with several kinds of R-R charges excited, that is, the R-R solitons described by "intersecting D-branes" [9,11,13,19,20]. We will stress that an analysis based on the DBI action, which includes only the one-loop contribution, is not sufficient to discuss D-brane bound states. Under general backgrounds, the configurations of intersecting D-branes break SUSY completely. In those cases, the (stringy) higher loop corrections do not vanish. We will show that the higher loop corrections can be regarded as the binding energies among the intersecting branes, and that an analysis based on these loop corrections gives results consistent with U-duality.

Let us give the plan of this paper. In section 2, we give the BPS mass formula which is invariant under U-duality, by extending the well-known BPS spectrum of the fundamental string sector [21]. In section 3 we consider the simple BPS solitons possessing only one kind of R-R charge.
The BPS states of this type are realized as supersymmetric cycles [22,23], equivalently, as geometrical configurations with minimal volumes. We evaluate the D-brane mass directly from the DBI action by constructing such a configuration. It is an important check of U-duality to study objects depending strongly on the background moduli, because the duality transformations map the moduli of one string theory onto those of a dual string theory non-trivially. We will show that the DBI action produces the mass spectra with the correct moduli dependences expected from U-duality. In the type IIA case, we also explain the BPS mass formulae from the M-theory viewpoint; the argument is based on the SUSY algebra [24], and the results confirm the validity of our analysis of the DBI action. We will also discuss the situations in which the gauge fields on the world volumes of branes have topological charges. It is believed that these charges are induced by sub-branes within other D-branes ("branes within branes" [9]). We confirm that our evaluation of BPS masses based on the DBI action produces results consistent with this interpretation; that is, we show that the calculation under non-trivial background gauge fields yields the correct masses of the bound states expected from U-duality. Although this result is very satisfactory, we should keep in mind that this story is not sufficient for understanding the physics of D-brane bound states: there still exist many bound states which cannot be reduced to the "branes within branes" cases. A single example suffices to illustrate such generic situations: 1-branes wrapping the 9th axis together with 3-branes wrapping the 678th axes (we assume that the 6789th directions correspond to the compactified 4-torus). This situation and its mass formula cannot be studied by the trick of gauge fields. To overcome this difficulty, we investigate, in section 4, the problem of bound states not only in the "branes within branes" cases but also in the situations where some branes literally intersect with others. We emphasize the necessity of taking higher loop corrections into account in the latter cases. We discuss the relations between SUSY breaking under generic moduli and the non-vanishing higher (string) loop amplitudes associated with annulus diagrams. We will evaluate the binding energies of intersecting branes as contributions from higher loops, and show a remarkable fact: the summation of all loop corrections reproduces the correct mass formula of bound states predicted by U-duality! Finally, we will try to explain the possibility of describing some (not all) bound states by DBI actions alone, from the point of view of the "geometrization of quantum corrections". The last section is devoted to the conclusion and a few comments on open problems.

BPS Mass Formulae with U-duality Invariance
In this section, we shall analyse the BPS mass formulae in the IIB superstring compactified on $T^4$. The NS-NS sector mass formula can be written in a manifestly T-duality invariant form. We can then derive the BPS mass formula in the R-R sector by using a (special) U-duality transformation.

Let us consider the type IIB superstring compactified on $T^4$. The massless states in the type IIB string are the metric $G_{MN}$, the second-rank antisymmetric tensor $B_{MN}$, and the dilaton $\phi$ from the NS-NS sector. A scalar field (axion) $C^{(0)}$, an antisymmetric tensor $C^{(2)}_{MN}$, and a self-dual fourth-rank antisymmetric tensor $C^{(4)}_{MNPQ}$ appear in the R-R sector.
After the compactification, the scalar fields which describe the moduli of the theory are given by G ij , B ij and dilaton φ from NS-NS sector. We use indices i, j to run 6,7,8,9 to describe the internal space coordinates and use µ, ν for uncompactified dimensions. The e is the vierbein of the string (sigma model) metric G, namely, G = e · e t . Ω * belongs to O(4, 4) and satisfies a relation Here I 4 is a 4 × 4 unit matrix. The left action of ̟ ∈ O(4, 4; Z) (Ω ′ * = ̟Ω * ) represents the T-duality of the system. On the other hand, the Ramond moduli, C (0) , C (2) ij and C (4) 6789 are combined to give an 8 component field ψ (α ′ ) (α ′ = 1, 2, · · · , 8) in the cospinor representation of O(4, 4), which we write as ijk are combined into a spinor multiplet of O(4, 4) ). The vector fields in 6 dimensions are essentially composed of G iµ and B iµ . The former is associated with the Kaluza-Klein momentum and the latter is coupled with the winding number around i-th direction. The NS vector fields are combined into a single 8 component gauge field A (a) µ (a = 1, 2, · · · , 8) in a vector representation under O(4, 4), µ . Vector fields from the Ramond sector C (2) iµ , C (4) ijkµ count the 1,3-brane charges. They are combined into an 8 component vector field K (α) µ which transforms as a spinor under O(4, 4), K (α) µ → (R s (̟)) αβ K (β) µ . These spinor representation matrices, R s (̟) , satisfy R s (̟)JR t s (̟) = J. (More precisely, there appear some mixing of G iµ and B iµ in the definition of A (a) µ and that of C (i) 's in the definition of K (α) µ under general backgrounds.) We write the integral charges n (a) , m (α) associated with A (a) µ and K (α) µ . These charges transform as vector (n (a) ) and spinor (m (α) ) of O(4, 4; Z). For each set of integers, we can define a stable state called the BPS state. In the fundamental string spectrum, we have vanishing R-R charges, m (α) = 0. The famous mass formula of the (anti)BPS state in this case is given as [21], 3) Here we write the O(4, 4; Z) vector n = (n (a) ) =   w i n i   , (n i ; KK momentum, w i ; winding number). The n (a) couples with the gauge field A (a) µ as n t A µ . The Π ± are appropriate operators that project to (anti)BPS states. Needless to say, this mass formula is invariant under the right action of σ ∈ O(4) × O(4) since Ω B Ω e is transformed into (Ω B Ω e )σ and σΠ ± σ t = Π ± . The invariance under T-transformation (O(4, 4; Z) left-action) is ensured by the transformation law; We shall make here one important remark: The mass formula (2. 3) is written with respect to the sigma model metric G µν . Later we will observe that G µν is not invariant under general U-duality transformations (even though it is invariant under all the Ttransformations O(4, 4; Z)). Therefore we should write down the mass formula associated not to the sigma model metric G µν but to the 6-dimensional Einstein frame metric g (6) µν ≡ e −Φ G µν which is U-invariant. Here Φ denotes the 6-dimensional dilaton that is invariant under T-transformations; It is easy to observe that the mass formula defined with respect to g (6) µν should be related with that for G µν (2. 6) Namely, we should rewrite the mass formula for the NS-NS charges (2. 3) as Beside O(4, 4; Z), the type IIB superstring theory is conjectured to be invariant under the strong-weak SL(2; Z) duality (S-duality). This symmetry is combined with T-duality symmetry to give the U-duality symmetry group O(5, 5; Z) [11,15,16,25]. 
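For later convenience, we record a minimal LaTeX sketch of the schematic structure of the NS-NS mass formula (2. 3) and its Einstein-frame version (2. 7); the overall normalization and the precise index placement are left here as assumptions, fixed only by the invariance properties quoted above.

```latex
% Schematic NS-NS BPS mass formula (2.3) and its Einstein-frame form (2.7).
% Normalization and index placement are assumptions; the structure follows
% the stated invariances: the right action of sigma in O(4)xO(4) on
% Omega_B Omega_e, together with sigma Pi_pm sigma^t = Pi_pm, leaves the
% formula invariant, and T-transformations act contragradiently on n.
m^{2}_{B,\pm} \;=\; n^{t}\,\big(\Omega_{B}\Omega_{e}\big)\,\Pi_{\pm}\,
                    \big(\Omega_{B}\Omega_{e}\big)^{t}\, n ,
\qquad
n \;=\; \begin{pmatrix} w^{i} \\ n_{i} \end{pmatrix} ,
\qquad
m^{2}_{g,\pm} \;=\; e^{\Phi}\, m^{2}_{B,\pm} ,
\qquad
e^{\Phi} \;=\; e^{\phi}\,\big(\det G_{ij}\big)^{-1/4} .
```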
In order to clarify the structure of U-duality O(5, 5; Z), we must suitably arrange the moduli fields from both the NS-NS sectors G ij , B ij , Φ and the R-R sectors ψ α . The Ω e , Ω B should be embedded in O(5, 5)-matrix as follows (this is one of the conventions) ; Here the R s (Ω) and R c (Ω) are spinor and cospinor representations of Ω ∈ O(4, 4). The dilaton and RR moduli are also incorporated as where E(i, j) is a 10 × 10 matrix with only one non-zero entry at (i, j). ObviouslyM is a symmetric matrix in O(5, 5), and hence parametrizes the "extended Teichmüller space" Now, let us study R-R sector mass formula. In the following discussion we turn off all the R-R moduli ψ α = 0. We first pick up a special U-transformation S (6) := S (10) T 6789 S (10) ("S-transformation in the sense of 6-dimension") such that (S (6) ) 2 = 1. Here S (10) stands for the 10-dimensional S-duality transformation (which corresponds to the transformation τ → − 1 τ of SL(2; Z)). Also the T 6789 is a T-duality transformation along the 6789th directions. Then the S (6) acts on the G, B and Φ S (6) : 11) and completely exchanges the NS-NS charges n (a) and the R-R charges m (α) . The 6 dimensional space-time Einstein metric g (6) µν is invariant under the S (6) , S (10) but the G µν is not. It is remarkable that where R s ( * ) denotes the spinor representation matrix as is already introduced. Hence we can rewrite the transformation law (2. 11) as In this way, we can easily find out that the mass formula of the R-R sector must take the following form if we claim the invariance of total mass spectrum under the S (6) -transformation; (2. 14) We will compare this formula with the results of D-brane analyses in the later sections. BPS mass formula from Dirac-Born-Infeld action In the previous section, we conjectured the BPS mass formula by using U-duality. The outcome becomes algebraic and uses the representation theory of O(5, 5; Z). In the following, on the other hand, we evaluate the BPS mass geometrically by minimizing the D-brane worldvolume, or more precisely by minimizing the Dirac-Born-Infeld type integral. This condition should be equivalent to the requirement to keep half of the supersymmetries, namely the BPS condition. Such a supersymmetric configuration is called a "supersymmetric p-cycle" [22], and at least in the case that Kalb-Ramond moduli B ij is equal to zero, this statement is proved in [22]. 1 In the general case of B ij = 0, the equivalence of the conditions of minimal volume and supersymmetry is not so clear. But, from the physical point of view this assumption is very plausible, since the BPS states must respectively have the minimal energies in the charged sectors. It is interesting to observe that the mass formula obtained geometrically coincides with algebraic one. In the subsection 3.3, we also explain the full mass formula in the IIA theory including both NS-NS and R-R sectors from the M-theory viewpoint. The analysis is based on the SUSY algebra in 6-dimension [24]. We will observe that our analysis based on the DBI action is consistent with the M-theory approach. Type IIB on T 4 First of all, we consider the type IIB case compactified on T 4 . For this case the situations are somewhat simpler than those for type IIA. Our starting point is the (one loop) effective action of Dirichlet p-brane [2,9,10] (p is an odd number for type IIB string); Here the G αβ , B αβ are the induced metric and anti-symmetric field on the world volume. 
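Since the subsequent manipulations repeatedly use the explicit form of (3. 1) and (3. 2), we record a minimal sketch of the standard Dirichlet p-brane action below; the tension T_p and the α′ normalization are conventions we assume here.

```latex
% Standard form of the DBI and Wess-Zumino terms (3.1)-(3.2);
% the tension T_p and alpha' normalizations are assumptions.
S_{\rm DBI} \;=\; -\,T_{p}\!\int\! d^{\,p+1}\sigma\;
  e^{-\phi}\sqrt{-\det\!\big(G_{\alpha\beta}+B_{\alpha\beta}
                             +2\pi\alpha'\,F_{\alpha\beta}\big)} ,
\qquad
S_{\rm WZ} \;=\; T_{p}\!\int_{R\times\Sigma}
  \Big(\sum_{l}\hat{C}^{(l)}\Big)\wedge e^{\,B+2\pi\alpha'\,F} .
```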
The C (l) 's (l = 0, 2, 4) are the Ramond-Ramond fields for the type IIB string and F αβ is the field strength of U(1) gauge field A. S DBI is usually called the "Dirac-Born-Infeld action" and S W Z is often called "Wess-Zumino term". (Several authors also call it "Chern-Simons term".) We evaluate the BPS mass for the 6-dimensional space-time theory along the same line of argument developed in the [18]. Consider the p-brane with the specific configuration R × Σ (R is the time axis and Σ is a p-dimensional subspace of the internal T 4 ). In this setup, the effective mass of particle associated with this p-brane is directly evaluated by DBI action; 00 dτ 3) The 6-dimensional space-time dilaton Φ is related with the 10-dimensional dilaton φ and the torus metric G ij in (2. 5), in other words where |G| := det G ij . As we already stressed in the previous section, the mass should be evaluated with respect to the 6-dimensional Einstein frame g (6) µν = e −Φ G µν . We used the relation; g The topological nature of Σ (more precisely speaking, the homology class of Σ, the homotopy class of X and the topological charge of the bundle where A is defined) determines the Ramond-Ramond charge of this particle. We can find this fact by observing the Wess-Zumino term S W Z (3. 2) and will see it explicitly in the following examples. As is already commented, the BPS state should correspond to a pair of configurations of the map X : Σ → T 4 and the U(1)-gauge field A that minimizes the integral (3. 3). Hence our problem is reduced to search the minimal configuration of (X, A) under a fixed topological nature of world volume. If we turn off the R-R fields, the compactified type IIB moduli space is described by G ij and B ij of T 4 . In the following argument we assume these G ij , B ij are all some constants on T 4 , which is of course possible since T 4 is a flat space. The induced metric G αβ and anti-symmetric tensor B αβ on the p-brane are defined as with the coordinates {σ α } (α = 1, 2, · · · , p) on the world volume. 1-brane (D-string) Let us start by taking the 1-brane (R × Σ (1) ) case. We easily obtain where q i := dσ∂ σ X i denotes the winding number of D-string. Obviously, the above inequality (3. 5) is saturated when Σ (1) is a straight line in the torus. (We have no constraints for the configuration of A.) Hence the desired BPS mass is obtained as In order to clarify the meaning of the winding q i in (3. 5), we take a look at the Wess-Zumino term. The local coordinate of 1-brane R × Σ (1) is expressed by (t, σ) and the Wess-Zumino term turns into a form, i0 dt , The q i turns out to be a "charge" associated with the gauge fieldĈ (2) i0 . 3-brane Next we consider the 3-brane case. Let (t, σ 1 , σ 2 , σ 3 ) be a local coordinate of 3-brane R × Σ (3) . Then one can read the "charges" coupled to gauge fieldsĈ from the Wess-Zumino term (3. 2), The W l is the 3-brane winding number. We remark that the non-vanishing expectation value for F makes the situation a little complicated. The Poincaré dual of [ represented by some 1-branes within our 3-brane Σ (3) (the case "branes within branes" [9]). The w i is regarded as the "effective winding number" of induced 1-branes. In this way we can describe by DBI action some bound states among 3-branes and 1-branes effectively. Even after this change of situation we can use the similar argument for the 1-brane case to derive the 3-brane mass bound, where we set It is easy to prove that 8 × 8 matrix M IIB,T 4 belongs to O(4, 4) and is also symmetric. 
The inequality (3. 9) is saturated when Σ (3) is a 3-dimensional torus "linearly embedded" in T 4 , that is, the map X : Σ (3) → T 4 is a homomorphism of abelian groups (not necessarily injective.) and the gauge field A has a constant curvature (whose value is uniquely determined by w i ). Hence the desired mass formula can be written as 2 This formula (3. 12) coincides with our algebraic formula (2. 14)! Actually the "moduli matrix" M IIB,T 4 induced from DBI action is equal to the moduli matrix R s (M) derived from U-duality. To close this subsection we make a remark: The above description of the systems such that the 3-branes and 1-branes coexist, i.e. the bound states of 3-and 1-branes, is not complete. For example, consider the system of the 3-brane wrapping around the 3-torus along the 678'th axes and 1-brane wrapping around the circle along the 9'th axis when the compactified directions are the 6789'th axes. In this situation one cannot reinterpret the 1-brane charge as the field strength F as above, and cannot derive the correct BPS mass from only the DBI action. The most naive approach for this problem is to start with the simple ansatz for the effective action Of course this is not correct, since one must also evaluate the effect of interaction between these branes by taking the sectors of DN (or ND) open string into account. In other words, one must calculate higher loop corrections to the effective action. It may be natural to expect that these loop corrections explain the binding energy fitted to the conjecture of U-duality. We will argue on this problem in section 4. Type IIA on T 4 Next we analyze the mass formulae for the type IIA case. In the same way as in the type IIB case, we start from the D-brane action (3. 1), (3. 2). The only difference from the type IIB case is that we have the R-R fields with odd degrees; C (p) , (p = 1, 3,5,7,9), which leads to the even D-branes. In our analysis we need to treat the three kinds of D-branes (0-,2-,4-branes). We first consider the cases that only the same kind of branes exist, and later discuss the problem of the bound states of branes having different dimensions. 0-brane The 0-brane case is trivial. The moduli dependence of BPS mass only originates from the volume of the internal torus (3. 4); The n is the number of 0-branes and is identified with the RR charge for C (1) µ . 2-brane For the 2-brane case, we take the configuration R×Σ (2) . As in the analysis of the type IIB case, R is the time axis and Σ (2) is wrapped around some 2-cycle of T 4 . Simple evaluation gives the following inequality (which is essentially the Minkowski inequality); does not depend on the choice of this basis. Here we assume that When is this inequality (3. 15) saturated? It is satisfied if the collective coordinate X is a holomorphic mapping from Σ (2) onto some 2-cycle S determined by a givenĈ (3) µij charge. Strictly speaking, the minimality of world-volume does not necessarily mean a holomorphic mapping, rather means a more general harmonic mapping. But it is known [22][13] that the BPS condition (the condition of SUSY 2-cycle) leads to a holomorphic mapping at least in the case of B ij = 0. We further comment on the following fact: Fix an arbitrary 2-cycle S ∈ H 2 (T 4 ; Z). The condition that S is represented by a holomorphic curve in T 4 is that the Poincaré dual α S of S belongs to H 1,1 (T 4 ; R). This can be always satisfied if we properly choose the holomorphic structure of T 4 . 
(The choice of holomorphic structure compatible with the given metric G ij is parametrized over O(4)/U(2) ∼ = S 2 , which is identified with S 2 spanned by J 1 , J 2 , J 3 . These degrees of freedom are just equal to those needed to make α S a (1,1)-form for an arbitrary S.) 3 So, it is sufficient to take X to be a holomorphic mapping from Σ (2) onto some holomorphic curve S in T 4 with respect to a properly chosen complex structure. If we write the corresponding Kähler form on T 4 as J S normalized by For the U(1)-gauge field A, the condition for the saturation is somewhat non-trivial. This is because the pull-back X * B is not a harmonic 2-form even if all of the components B ij are constants on T 4 . Nevertheless we can choose A so that F (≡ X * B + dA) = aX * J S , where a is not a function but merely a complex number. (It is most easily proved by making use of the Hodge decomposition.) The assumption Hence, under this configuration of X and A, we obtain This means the saturation of the inequality (3. 15) and gives the desired BPS mass formula. In the similar manner to the case of type IIB 3-brane, one may consider some extra monopole charge n for A by taking the the background analyzing the Wess-Zumino term, we can find that this charge n can be identified with the extra contribution from the n 0-branes within our 2-brane Σ (2) . The BPS mass formula can be easily generalized to this case; This is indeed the correct mass formula of the bound states of a 2-brane and 0-branes predicted by U-duality. 4-brane In the 4-brane case, the space-part of world-brane Σ (4) must occupy the full volume of T 4 . Consider the smooth map X : Σ (4) → T 4 covering T 4 m-times. We again assume in H 2 (Σ (4) ; R) for the time being. The inequality of mass integral is represented; (3. 20) What is the condition for saturation? Clearly the value of this integral does not depend on the choice of smooth map X (under fixing deg X = m, of course). However, the condition for the gauge field A is similar to the 2-brane case, but rather complicated, because the field X * B necessarily has constant components. It reads; Now let us consider the case that the gauge field A has a non-trivial topological charge. Alternatively one may regard it as an "integer theta-parameter shift" [21], which is a part of T-duality transformations. By analyzing the Wess-Zumino term again, we can find that [ is identified with the extra 0-brane charge. The corresponding mass formula is immediately calculated. This result is rather complicated, but we can show that it is fitted to the similar formula to (3. 12); But the check of this claim is not so self-evident, because the moduli matrices become rather complicated under general background. In this sense, we may also say our results support the consistency of DBI action under T-duality. Lastly, we comment on the same difficulty as that in the analysis of type IIB case. One must understand that generic bound states of 0-, 2-, and 4-branes cannot be described by the recipe of the branes within branes [9]. We can at most analyze, by using only the DBI action, the cases which can be connected by the T-duality transformations 4 with the cases that 4-branes alone exist. Of course, one cannot describe all the bound states by making use of these T-duality transformations. In order to complete our analyses we need to argue seriously on the binding energies of branes. In section 4 we will return to this problem. 
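Before turning to the M-theory viewpoint, we record as a minimal sketch the frame bookkeeping used repeatedly in this section, with tension factors suppressed.

```latex
% Frame conversion used throughout section 3 (sketch; tension factors
% suppressed, overall normalization assumed).
g^{(6)}_{\mu\nu} = e^{-\Phi}\,G_{\mu\nu} , \qquad
e^{\Phi} = e^{\phi}\,|G|^{-1/4} , \qquad |G| := \det G_{ij} ,
% so a Dp-brane wrapping a cycle Sigma of string-frame volume Vol(Sigma)
% carries the 6-dimensional Einstein-frame mass
m_{(6)} \;\propto\; e^{\Phi/2}\,e^{-\phi}\,{\rm Vol}(\Sigma)
        \;=\; e^{-\phi/2}\,|G|^{-1/8}\,{\rm Vol}(\Sigma) .
```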
Considerations from M-theory In this subsection we reconsider the case of IIA on T 4 by the M-theory approach. The discussion here is based on that of [24]. The M-theory is believed to exist in 11 dimensional space-time, whose massless multiplet is a 11 dim. supergravity one,ĝ M N , c KLM , and ψ α M . Here theĝ M N is the 11 dimensional graviton and c KLM is the rank 3 anti-symmetric field. The spinor index "α" of the gravitino ψ α M labels a 32 component Majorana fermion. The indices "K, L, M, · · ·" represent spacetime coordinates and run from 0 to 10. Let us consider a compactification of this theory on the 5-dimensional torus T 5 . The 6-dimensional space-time is extended in the (X 0 , X 1 , · · · , X 5 )-directions and the remaining coordinates (X m ) (m = 6, 7, 8, 9, 10) represent the torus T 5 . Especially the 5th coordinate "X 10 " of the T 5 is the "longitudinal" one in this 11 dimensional theory. The with rm, sn 10 respectively. Also we can identify D0-, D4-brane charges q 4 , q 0 with q 4 = q, q 0 = −r 10 and six D2-brane charges smn are combined into a rank 2 anti-symmetric form It is useful to note that q 2 is the Poincaré dual of the homology cycle around which the D2-brane is wrapping; Σ := sklΣkl, where Σkl (k <l) denote the bases of H 2 (T 4 ) defined by the relation We summarize these scalars, vectors and charges of the M/T 5 -theory (equivalently IIA/T 4theory) in the tables 1,2. The M/T 5 theory has scalarsĝ mn , c lmn derived from the graviton, rank 3 anti-symmetric field in 11 dimensional SUGRA. The Φ is a six dimensional dilaton combined with the volume of T 5 and 10 dimensional dilaton φ. In NS-NS sector of the IIA/T 4 side,ĝmn, Bmn are scalar fields and g 10,10 is essentially ten dimensional dilaton. There are RR 1-form C (1) and RR 3-form C (3) in the RR-sector. Also numbers "♯" in parentheses (♯) are degrees of freedom of associated scalars. There are three kinds of vector fieldsĝ µm , c µmn , and its dual c * µklmnr in the M/T 5 theory. They can couple to charges r m , s mn and q respectively. In the context of the IIA string, eight vectorŝ g µm , c µm10 are combined into a NS sector multiplet. Their associated charges rm, sm 10 are called momenta pm, winding numbers wm respectively. In the R-sector, the vector multiplet consists of the remaining eight vectorsĝ µ10 , C (3) and C (6) . Numbers of parentheses represent the degrees of freedom of the corresponding charges. where γ µ αβ 's are 6 dim. space-time γ-matrices and p µ is the space-time momentum. The central charge Z a b can be decomposed by internal SO(5) γ-matrices Γ m ab (m = 6,7,8,9,10), When we turn off the Kalb-Ramond field Bmn and RR fields C (1) , C (3) for simplicity, the coefficients q, r m , s mn of the decomposition can be identified with the previous charges of the M/T 5 -theory. In the following, we abbreviate space-time spinor indices α,β. The 1/4-susy condition of BPS states can be written This condition is transformed into eigenvalue problems of operators ZZ † , Z † Z with eigenvectors ǫ,ǭ Here the m 2 BP S := −p 2 is the square of BPS mass and is given explicitly However, we want to compare this formula with the results in the previous subsection, and it was calculated in the 6 dimensional Einstein frame. Hence we will write down the relations between the 11D metricĝ mn , 10D string metric G mn , and 6D Einstein metric g (6) mn , g 10,10 := e 4 3 φ , ĝmn = e − 2 3 φ Gmn (m,n = 0, 1, · · · , 9) , G µν = e Φ g (6) µν (µ, ν = 0, 1, · · · 5) . 
The Φ is the 6 dimensional dilaton and is associated with the 10 dim. dilaton φ and the volume of the T 4 as Note a relation; 00 . Thus we have to multiply a factor e − 1 3 φ e As a second case, the mass formula with only D-brane charges can be calculated as (m where * is a Hodge dual in the T 4 and q + 2 corresponds to the self-dual part of q 2 . For arbitrary 2-forms A 1 , A 2 on T 4 , the inner product (A 1 · A 2 ) is defined as Here the symbol " * 5 " means the Hodge dual in the T 5 . Then the BPS mass formula is expressed in the same form as that of Eq.(3. 26) with these replacements. The NS-NS sector BPS mass can be written as (m . (3. 30) As special cases when there are only one kind of RR charges, we write down results; the Poincaré dual of q 2 .) Also the mass formula of D0-D4 system is written down These results Eqs.(3. 32), (3. 34) will be compared to the binding energy calculations in the next section. Type IIA on K3 The type IIA string compactified on K3 is very similar to the case of type IIA over T 4 . This is because K3 has orbifold limits described by theory [27] and to the twisted sectors in the language of orbifold CFT). Our above analysis of D-brane masses is also applicable to this case. The calculation is almost parallel to the case of type IIA over T 4 . We can summarize this result as follows; However, there is a crucial difference from the case of T 4 , which is due to the fact that K3 is a curved manifold. It is known [13,29] that the 4-brane in the K3 case has the extra 0-brane charge −1 due to the 1st Pontrjagin number of K3. This leads to unexpected assignments of R-R charges to the configurations of D-branes, and gives a serious contradiction to our analysis of the BPS mass spectrum of the R-R solitons. In order to get over this difficulty we will have to take account of the extra degrees of freedom that are absent in the T 4 -case, the twisted sectors in T 4 /Z 2 ∼ = K3. If the contributions from the twisted sectors to the effective action (or the equations of motion) can be interpreted as bound states of 4-branes and effective 0-branes, which perhaps reside at the fixed points of Z 2 -action, the similar analyses to those in the next section might yield the correct results. However, our study on this problem is still far from the complete solution. We would like to present a further discussion elsewhere. for the 1st problem. The 2nd, 3rd, 4th problems are more challenging, but we believe that the higher loop analyses will also give the correct answers. We would like to discuss these problems elsewhere. To solve the 1st problem, we will discuss an important relationship between the binding energies of the bound states and some SUSY breaking. This statement may sound strange, since we should now consider the mass spectra of BPS solitons which should preserve a part of SUSY! But this is not a contradiction. One must carefully understand the term "BPS". This should be used in the framework of the 6-dimensional supergravity theory which is the low energy effective theory compactified over the 4-torus. On the other hand, if we interpret the BPS solitons with RR charges as D-branes, we must treat the full 10-dimensional superstring theory. Of course, the states with some unbroken SUSY in the sense of 10-dimensional theory are also supersymmetric in the sense of 6-dimensional effective theory. But the inverse is not correct. 
Actually, we will later focus on some brane configurations which break SUSY in the sense of 10-dimensional string theory but should correspond to the BPS states in the sense of 6-dimensional SUGRA. We can expect that even if these states have no higher loop corrections in the framework of 6-dimensional SUGRA, they can have stringy loop corrections in the framework of 10-dimensional superstring. In this sense we may say our calculation of binding energies will give a non-trivial check of U-duality in the level of quantum string theory. Higher Loop Corrections to β-Functions and SUSY Breaking In the sequel we consider the type II (A or B) string over T 4 . Let us start with the loop corrected equations of motion of string 6 . Throughout this section we take a convention µ, ν = 0, . . . , 5 (6-dimensional space-time), i, j = 6, . . . , 9 (the internal torus). We also use the notation |G| = det G ij (the square of volume of internal torus). Recall that G µν = e Φ g (6) µν ≡ e φ |G| −1/4 g (6) µν , where g (6) µν denotes the 6-dimensional Einstein frame metric. Set g (6) µν = η µν + h µν ). The (linearized) equation of motion for h 00 can be written as β h 00 ≡ −λ −2 ✷h 00 (x) + c (1) (x) + c (2) (x) + · · · + c (n) (x) + · · · = 0. Consider a Dirichlet p-brane D wrapping around an internal p-cycle so as to be observed as a rest particle in our 6-dimensional space-time. We can rewrite the equation of motion (4. 1) as The L.H.S. is a (linearised) Ricci tensor and the R.H.S. can be interpreted as the "matter" terms. Then it is easy to see that, in general, , where x i = X i (i = 1, . . . , 5) express the position of the rest particle. We thus find that M (n) can read as the n-loop correction to the rest mass of our particle. In this way we can directly evaluate the mass of D-branes from the β-functions. Especially, it is easy to calculate the 1-loop contribution c (1) : Consider a 1-loop (disk) where |D denotes the suitable boundary states corresponding to the D-brane D. The divergence of moduli integral for A D has its origin in the massless components of D. Under the natural assumption for the backgrounds h 0µ = B 0µ = 0, the divergent part of δ δh 00 (x) A D is easily calculated as δ δh 00 (x) where V hµν (−1,−1) (x) is the graviton emission vertex in the (−1, −1)-picture, and the superscript "(0)" indicates the massless sector of the boundary state |D . We should notice that the position integrals along the Neumann directions are left (on the other hand, the momentum integrals do not exist for these directions). Clearly the boundary state of R-R sector does not contribute to the above calculation. Recall the relation e Φ = e φ |G| −1/4 , G µν = e Φ g (6) µν . We can also approximate g (6) µν by the Minkowski metric η µν in its R.H.S., because the deviation of metric h µν should be a quantity of the same order as the string coupling. We can easily obtain the 1-loop equation of motion The subscripts s, s ′ express the picture (the ghost charge of bosonic ghosts) and the terms with s ∈ Z, s ∈ 1 2 + Z belong to the NS-NS sector, the R-R sector respectively. Let us take a cylinder amplitude with one graviton emission vertex operator V h 00 (0,0) in the (0, 0)-picture, D|V h 00 (0,0) |D ′ . It is trivial to extend to the cases with other pictures. Recall a relation of the graviton vertex operator V hµν (0,0) and a photon vertex operator V µ Here the Q A 1/2 ,Q A 1/2 are supercharges in the 1/2-picture and we used fermion vertex operators −1/2 and so on. 
So the above cylinder amplitude can be re-expressed as D| u · Q , ũ ·Q, (v · V ) (ṽ ·Ṽ ) |D ′ and we can rewrite it [30,6]; For a p-brane with Neumann coordinates {X µ }, (µ = 0, 1, 2, · · · , p) with background fields G and F , the matrix M is written as [30,6] Here the gamma matrices γ α , γ α are normalized as and we introduced the notations for the anti-symmetrized gamma matrices with each other, or at least, the branes placed very closely. This is because we are interested in only bound states of branes and otherwise, the description by the coordinates of the center of mass would lose its meaning. In this "intersecting D-brane" case [9,11,13,19,20], we will find out that the cancellation between NS-NS and R-R sectors is not complete for general background. For example, consider a 3-brane wrapping around the 678-th axes of T 4 and a 1-brane wrapping around the 9-th axis (Fig.1), which is the case we will later analyze in detail. It is well-known that this configuration is supersymmetric (so-called "short multiplet", in which 1/4 of space-time SUSY are unbroken) in the special background; (4. 14) (The 1-brane intersects the 3-brane perpendicularly, and the background Kalb-Ramond field is set zero.) However, if we put general background moduli, we have no SUSY any longer. In fact, consider the unbroken SUSY charges Q A + (M) and Q A + (M ′ ) respectively associated with boundary states |D and |D ′ . In the special background (4. 14), M ′−1 M indeed has 1 as an eigenvalue. But it is not the case for generic moduli. It follows that the higher loop corrections to β h 00 no longer vanish. This aspect sharply contrasts with the case of one kind of branes, in which we always have some unbroken SUSY independent of the VEV of moduli fields. We have observed that the higher loop corrections to D-brane mass is inevitable in the broken SUSY cases. We will evaluate these amounts in detail in the next subsection. Evaluation of Higher-Loop Corrections as the Binding Energies We explain the method to calculate the contributions from higher loops to the β-functions concretely. The contribution for a fixed diagram comes from the divergent part of the amplitude when the Teichmüller parameters of the world sheet simultaneously go to large values. Then the size of all the boundary loops shrink to zero simultaneously. In this limit only the massless modes are relevant for our computations. First of all, we stress the following fact: We must assume that D-brane mass of 1-loop level, which we evaluated in the previous section based on the DBI action, is sufficiently heavy for the validity to treat the D-branes as static backgrounds. In other words, we must consider the cases with large amounts of R-R charges. Otherwise, we would have to take account of fluctuations of the D-brane configurations [31]. This assumption is also necessary so that the perturbative expansion is applicable. It is not difficult to show that, under this assumption, the diagrams with no closed string loops are dominant for a fixed number It is convenient to sum up all the "tadpole diagrams" (Fig.2) first. We evaluate the tadpole corrections to the pants-type diagram. Each wavy-line connecting a pants and one In this way the summation of all the tadpole diagrams leads to the following factor; 1 + (−λM (1) ) + (−λM (1) ) 2 + (−λM (1) ) 3 + · · · = 1 1 + λM (1) ∼ 1 λM (1) . In the last line, we used our assumption that the one-loop D-brane mass is very large. 
We may also reinterpret this correction as follows: The string coupling λ should be replaced with an "effective string coupling" λ eff ≡ λ λM (1) ≡ 1 M we have already included all the contributions from tadpole type diagrams (Fig.3) into the factor (4. 17) and it will be an over counting if we take these diagrams into account. Collecting all the above observations, we can conclude that only the diagrams with the following two properties can contribute to the calculations: 1. A diagram with no closed string loops. As a result, there are contributions to the β-function from diagrams composed of only cylinder type diagrams whose two boundaries are put on different D-branes (Fig.4). Now, we arrive at the stage to evaluate concretely the higher loop corrections to the D-brane masses. As a simple example, let us consider the type IIB case compactified on T 4 with coordinates (x 6 , x 7 , x 8 , x 9 ) and take an "Intersecting D-brane" configuration of Dirichlet 1-branes and D3-branes. Consider n D1-branes (D) wrapping around the 9th axis and n ′ D3branes (D ′ ) wrapping around the 678th axes. The center of mass of this system is specified by (X 1 , X 2 , · · · , X 5 ). The one-loop mass M First we take a 2-loop cylinder diagram whose boundaries are put on the D1-brane D, the D3-brane D ′ , respectively. As we already pointed out, for the computation of β-functions we only have to evaluate the value of amplitude in the limit of large Teichmüller parameters, which is the IR limit in the closed string channel (or equivalently, UV limit in the open string channel). In this limit only the massless components of boundary states can contribute It is convenient to make use of the same technique as (4. 10); where M is chosen so that Q + (M)|D (0) = 0. Since and taking the spinor u with a property; 1 we obtain In the first line of (4. 26), the factor 2 before nn ′ △ corresponds to the existence of contact terms between the graviton vertex and 2 boundaries, one of which resides on the 1-branes D and the other of which does on the 3-branes D ′ . The factor 1 2 is nothing but a symmetry factor due to the exchange of the boundaries of cylinder. Thus the 2-loop correction to the mass can be read as follows; . (4. 27) It implies that we may naturally regard △ as the binding energy between two types of D-branes D, D ′ . As is already observed, △ vanishes under the supersymmetric brane configuration (4. 14), but does not for the general non-supersymmetric backgrounds. We can conclude that the binding energy among the branes has its origin in the SUSY violation. Next we consider the contributions from 2k (k ≥ 2) loop c (2k) . As is already commented, in figure 4 survive, and they are factorized into the k products of cylinder amplitudes of the forms (0) D|V hµν |D ′ (0) . Hence we obtain (Note that adding one cylinder ∼ (0) D|V hµν |D ′ (0) needs another pants diagram connecting to the original diagram.) But one more symmetry factor 1 2 k k! for these diagrams appears. These lead to a correct (moduli-independent) numerical coefficient (2k − 3)!! 2 k k! . So, the 2kloop correction to the mass M (2k) 1−3 is given by Collecting the results (4. 18)(4. 27)(4. 29), we can finally get the mass formula for this bound state of the intersecting n D1-branes and n ′ D3-branes including all the corrections of higher loops; This result exactly reproduces the BPS mass formula predicted by U-duality in section 2! 
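To see explicitly how the coefficients (2k − 3)!!/(2^k k!) resum into the U-duality result, one can use the Taylor series of the square root; the identification of x below with the binding-energy combination is a sketch consistent with the computation above, with normalizations assumed.

```latex
% Resummation of the loop expansion (sketch; normalizations assumed).
\sqrt{1+x} \;=\; 1+\frac{x}{2}
  +\sum_{k\geq 2}(-1)^{k-1}\,\frac{(2k-3)!!}{2^{k}\,k!}\;x^{k} ,
\qquad
x \;=\; -\,\frac{2\,n n'\,M_{1}M_{3}}{\big(n M_{1}+n' M_{3}\big)^{2}} ,
% so that the all-loop mass resums into the closed form
M \;=\; \big(n M_{1}+n' M_{3}\big)\sqrt{1+x}
  \;=\; \sqrt{(n M_{1})^{2}+(n' M_{3})^{2}} ,
% whose first correction reproduces a 2-loop binding energy of the form
% M^{(2)} = - n n' M_{1} M_{3} / (n M_{1}+n' M_{3}) .
```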
Until now we only consider a bound state of the intersecting D1-and D3-branes in the type IIB string compactified over T 4 . The applications to other bound states are straightforward. For a bound state of n Dirichlet 0-branes and n ′ D4-branes in the T 4 -compactified type IIA theory, we only have to replace the value of △ with This result is also consistent with the prediction of M-theory in Eq.(3. 34). Next let D, D ′ be respectively, n Dirichlet 2-brane wrapping around 67th directions and n ′ D2-brane wrapping around 89th axes in the type IIA theory compactified on the T 4 . For a bound state of D and D ′ , we obtain its binding energy where the Σ 1 and Σ 2 are respectively the world volumes of the D2-branes D and D ′ . We make one remark for this intersecting D2-brane case. Let us assume n = n ′ . As we In summary, in order to treat the bound states of the two kinds of D2 branes, one half of which is wrapping around T 67 and the other half of which is wrapping around T 89 , we D2-brane D2-brane can incorporate all the quantum corrections (perturbative loop corrections) into a single Dirac-Born-Infeld action with a genus "two" world volume. That is a homological sum of n holomorphic curves (supersymmetric cycles) with genus two (Fig.6). It may be plausible to interpret this aspect as a "geometrization of quantum corrections". On the other hand, for a bound state of the D1-branes and D3-branes, the dimensions of these two kinds of branes are different and we cannot describe the state by a single DBI action (Fig.7). Another interesting example of the "geometrization" is the situation of the branes within branes [9]. Let us again consider the bound states of D1-, D3-branes in IIB string. But, this time we make n D1-branes wrap around the 6th-axis and n D3-branes wrap around the 678th-axes. We further assume that each 1-brane is contained in each 3-brane. Clearly this configuration of branes breaks SUSY completely (in any background!), and we have a non-zero binding energy (4. 32) However, we already knew this case can be also described by a single DBI action with a suitable choice of a background gauge field: This is a manifestly supersymmetric treatment. It is straightforward to check that these two treatments yield the same mass formula. Therefore, we can find out a remarkable fact: In the case of branes within branes, the non-supersymmetric calculation with non-zero higher loop corrections is equivalent to the supersymmetric treatment based on the single DBI action with suitable background gauge fields. That is, all loop corrections can be transmuted into a charge of the gauge field! One may say this is another example of the geometrizations of quantum corrections. In the relation to this subject, it may be also meaningful to discuss the non-abelian extension when the gauge symmetry enhancement occurs on D-branes. In section 3, we only considered the charges of background gauge fields along the "U(1)-sectors". This approach was limited in the sense that we can only realize as the "U(1)-charges" the brane configurations that can be reduced to one kind of branes by T-duality. However, at least in the level of naive observation, if the gauge theory on branes becomes non-abelian, we can interpret more general configurations, which is not necessarily reduced to one kind of branes by T-dualiy, as the characteristic classes composed of the field strength. 
A non-abelian extension of the DBI action was proposed by Tseytlin [26], in which symmetrized traces of products of the field strength appear. It may be interesting to check the consistency between the two general descriptions of bound states, one of which is based on the non-abelian Born-Infeld action and the other of which is based on the string loop analysis given in the present section. To close this section we again emphasize that the DBI action (even the non-abelian DBI) is not sufficient to describe all the bound states. We must inevitably perform the higher loop analysis to complete our studies. Conclusions and Discussions In this paper, we investigated the mass spectra of R-R solitons by making use of D-brane techniques in order to confirm U-duality. We would like to emphasize that our results are obtained under completely general backgrounds. It is especially remarkable that the form of the DBI action fits perfectly the moduli dependence of the masses of BPS solitons with (one kind of) R-R charge. Moreover, the masses of some bound states (branes within branes) can also be evaluated by the DBI action by incorporating suitable charges of gauge fields. In other words, we have shown that the DBI action is consistent with one of the T-duality transformations, the "integer theta parameter shift" B ij → B ij + Θ ij . It is a challenging task to analyse more general bound states. We discussed these states, emphasizing the relation with SUSY violation, and observed that the quantity △, which characterizes the SUSY violation, can be interpreted as a binding energy of D-branes. The characteristic quantity △ is essentially a sum of contributions from annulus amplitudes in both the NS-NS and R-R sectors. When the cancellations between the two sectors in these amplitudes are not complete, the space-time SUSY is broken and there exist binding energies among the branes. The problem of the extra R-R charge [29] originating from the 1st Pontrjagin number of K3 is the third one. We believe that these problems can also be solved by higher loop analyses of the β-functions. In section 2 we discussed the BPS mass formulae based on a part of the U-duality invariance. However, according to the analysis in M-theory, we can write down the more complete mass formula Eq. (3. 26) (with suitable replacements of the charges q, r l , s kl by their appropriately dressed counterparts). This has the full U-invariance O(5, 5; Z) and, remarkably, it includes some interaction terms between the fundamental excitations and the R-R solitons. Therefore, the first problem is especially significant in order to confirm the full U-duality from the viewpoint of the D-brane analysis and also to check the consistency between the D-brane calculations and the M-theory approach. We wish to present a more detailed study of this subject in the future. For the last problem, we will have to treat carefully the open string loops in the twisted sectors of the K3 orbifold. It is also worthwhile remarking on the closed string loops, which we neglected in the discussion of section 4. Our analysis with only open string loops is valid in the cases when the D-branes are very heavy and there are no recoils between them. In other words, these are the cases in which the R-R charges N assigned to the D-branes are very large and we treat the branes as static backgrounds. However, there is a naive question: what about the quantum corrections in SUGRA?
If we assume that the description by M(atrix) theory is completely valid, the consistency of the computations in [35,36] demands that, in the large N limit, the tree level of the bulk SUGRA should be exact. As a result, this classical SUGRA becomes equivalent to the quantum SYM in this limit. On the other hand, in this paper we evaluated the BPS mass formulae from the open string loop corrections under D-brane backgrounds and compared the results with the mass formulae obtained from classical SUGRA (U-duality). We have actually observed that the closed string loop corrections can be neglected in the limit of large R-R charges. Recalling that the loop corrections in open string theory correspond to those in SYM, and that the closed string loops correspond to those in SUGRA in the low energy limit, our results seem to support the validity of M(atrix) theory! Our analysis is still limited, but we hope it will give some insights into the studies of M(atrix) theory in the future. Although the above consideration is satisfactory, it may still be meaningful to ask whether the closed string loop corrections vanish exactly, because U-duality should be valid even if the amount of R-R charge N is small. One possibility by which the contributions from closed string loops would not spoil our analysis, even in the cases with small R-R charges, is a Fischler-Susskind type mechanism [37]. Namely, all the closed string loop corrections might contribute only to the renormalization of the dilaton (that is, the string coupling constant), and hence the mass formulae might be kept essentially unchanged. However, it has long remained an open problem whether this mechanism can be applied to supersymmetric theories at higher loop order when supersymmetry is broken by boundary conditions. In any case, we will have to treat carefully the quantum fluctuations of D-branes, as in the discussions in [31], in order to work properly in the region where the R-R charges are not large.
2014-10-01T00:00:00.000Z
1997-07-24T00:00:00.000
{ "year": 1997, "sha1": "8af2e0bad47782e44d73987b9cc3b1b9fe8277e8", "oa_license": null, "oa_url": "http://arxiv.org/pdf/hep-th/9707205", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "8af2e0bad47782e44d73987b9cc3b1b9fe8277e8", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
57759099
pes2o/s2orc
v3-fos-license
PET Imaging of Crossed Cerebellar Diaschisis after Long-Term Cerebral Ischemia in Rats Crossed cerebellar diaschisis (CCD) is a decrease of regional blood flow and metabolism in the cerebellar hemisphere contralateral to the injured brain hemisphere and a common consequence of stroke. Although CCD has been detected in patients with stroke using neuroimaging modalities, this phenomenon has scarcely been evaluated in rodent models of cerebral ischemia so far. Here, we report the in vivo evaluation of CCD after long-term cerebral ischemia in rats using positron emission tomography (PET) imaging with 2-deoxy-2-[18F]fluoro-D-glucose ([18F]FDG). Imaging studies were combined with neurological evaluation to assess functional recovery. In the ischemic territory, imaging studies showed a significant decrease in glucose metabolism followed by a progressive recovery later on. Conversely, the cerebellum showed a contralateral hypometabolism from days 7 to 14 after reperfusion. Neurological evaluation showed the most impaired outcome at day 1 after ischemia, followed by a significant recovery of sensorimotor function from days 7 to 28 after experimental stroke. Taken together, these results suggest that the degree of CCD after cerebral ischemia might be predictive of neurological recovery. Introduction Crossed cerebellar diaschisis (CCD) is a decrease of regional blood flow and glucose metabolism in the cerebellar hemisphere contralateral to the affected brain hemisphere and a common consequence of a supratentorial cerebral malfunction [1][2][3]. Previous studies have suggested that the remote deactivation of cerebellar neurons can be promoted as a result of the interruption of excitatory impulses along the corticopontocerebellar tract [4]. CCD has been reported in several diseases affecting the brain, such as cerebral gliomas [5], epilepsy [6], intracerebral hemorrhage [7,8], and stroke [3,[9][10][11], among others. In the latter, CCD has been detected at the acute, subacute, and chronic phases. As a result, CCD has been postulated as a prognostic indicator of neurological outcome after cerebrovascular diseases [12,13]. Nevertheless, other studies have raised controversies regarding the correlation of CCD with the location and size of the brain injury and with clinical severity [14,15]. In view of these controversial results, the use of in vivo imaging techniques plays a relevant role in the investigation of the precise role of CCD in stroke pathophysiology. CCD in human supratentorial brain infarction has been detected with positron emission tomography (PET) and single photon emission computed tomography (SPECT) [13,14,[16][17][18][19]. As an alternative to nuclear imaging techniques, arterial spin labeling (ASL) has also proven efficient for the evaluation of CCD after hyperacute and acute ischemic stroke [9,20]. Surprisingly, although CCD has been detected in patients with stroke, the in vivo imaging evaluation of this phenomenon in rodent models of cerebral ischemia has never been performed so far. Here, we report the unprecedented investigation of CCD after long-term cerebral ischemia in a rat model of experimental stroke using PET with 2-deoxy-2-[18F]fluoro-D-glucose ([18F]FDG). Glucose metabolic changes in the ischemic brain (cerebrum, cortex, and striatum) were also investigated, and the results were correlated with the evolution of the neurological outcome over time.
Our studies show contralateral cerebellar hypometabolism at the acute and subacute phases of ischemia and a correlation between the presence of CCD and the improvement of neurological outcome. Hence, the results reported here provide novel information on the detection of CCD in experimental stroke that might ultimately contribute to a better comprehension of the role of this phenomenon during stroke evolution. Materials and Methods 2.1. Cerebral Ischemia. 9-week-old male Sprague-Dawley rats (n = 6; 295 ± 6.2 g body weight; Janvier, France) were used for imaging studies. Rats were anaesthetized with 2.5% isoflurane in 100% O 2 , and transient focal ischemia was produced by a 2-hour intraluminal occlusion of the middle cerebral artery (MCAO) followed by reperfusion, as described previously [21]. Six rats were repeatedly scanned before reperfusion (day 0) and at 1, 3, 7, 14, 21, and 28 days after ischemia onset to evaluate glucose metabolism by PET. Magnetic Resonance Imaging. T2-weighted (T 2 W) MRI scans were performed at day 1 (to measure the size of the infarction; n = 6) and at days 3, 7, 14, 21, and 28 after MCAO (to coregister PET signal data; n = 1 per time point). Before the scans, anesthesia was induced with 4% isoflurane and maintained with 2-2.5% isoflurane in 30% O 2 /70% N 2 during the scan. Animals were placed in a rat holder compatible with the MRI acquisition systems, and normothermia was maintained using a water-based heating blanket at 37°C. MRI experiments were performed on a 7 Tesla Bruker Biospec 70/30 MRI system (Bruker Biospin GmbH, Ettlingen, Germany), interfaced to an AVANCE III console. The BGA12-S imaging gradient (maximum gradient strength 400 mT/m, switchable within 80 µs), an 82 mm inner diameter quadrature volume resonator for transmission, and a surface rat brain coil for reception were used. T 2 W images were acquired with a RARE sequence with the following parameters: RARE factor 2, TR/TE = 4400/40 ms, FOV = 25 mm × 25 mm, ACQ Matrix = 256 × 256, Slice Thickness = 1 mm, 2 averages, and 24 contiguous slices. [18F]FDG was synthesized as described previously [22] and provided by IBA Molecular Spain (San Sebastian, Spain). PET scans were performed using a General Electric eXplore Vista CT camera (GE Healthcare). Scans were performed in rats anaesthetized with 4% isoflurane and maintained with 2-2.5% isoflurane in 100% O 2 . The tail vein was catheterized with a 24-gauge catheter for intravenous administration of the radiotracer. For the longitudinal assessment of glucose metabolism with [ 18 F]FDG, animals were scanned before and during the following month after ischemia. The radioactivity (∼10 MBq) was injected and, after an uptake period of 30 minutes, the animals were reanaesthetized and placed in the PET scanner for a static brain acquisition in the 400-700 keV energy window, with a total acquisition time of 30 minutes, as described elsewhere [23]. After each PET scan, CT acquisitions were also performed (140 µA intensity, 40 kV voltage) to provide anatomical information for each animal as well as the attenuation map for the later PET image reconstruction. Static acquisitions were reconstructed (decay and CT-based attenuation corrected) with filtered back projection (FBP) using a ramp filter with a cut-off frequency of 0.5 mm −1 . Positron Emission Tomography Image Analysis. PET imaging analysis was performed according to our previously published procedure [24]. PET images were analyzed using PMOD image analysis software (PMOD Technologies Ltd, Zürich, Switzerland).
To verify the anatomical location of the signal, PET images were coregistered to the anatomical data of an MRI rat brain template. Two types of volumes of interest (VOIs) were established as follows: (i) A first set of VOIs was defined to study the whole brain and cerebellum [ 18 F]FDG PET signal. Whole brain and cerebellum VOIs were manually drawn in both the entire ipsilateral and contralateral hemispheres on slices of an MRI (T 2 W) rat brain template from the PMOD software. (ii) A second set of VOIs was automatically generated in the cortex and the striatum by using the regions proposed by the PMOD rat brain template, to study the evolution of the [ 18 F]FDG PET signal in these specific regions in both the ipsilateral and contralateral cerebral hemispheres. PET signal uptake was averaged in each VOI and expressed as the percentage of injected dose per cubic centimetre (%ID/cc), and the ratios between hemispheres were considered. 2.6. Neurological Assessment. The assessment of the neurological outcome induced by cerebral ischemia was based on a previously reported 9-point neuroscore test [25]. Before the imaging evaluations, four consecutive tests were performed at days 1 and 7 after ischemia in treated and control rats as follows: (a) spontaneous activity (moving and exploring = 0, moving without exploring = 1, not moving = 2); (b) left drifting during displacement (none = 0, drifting only when elevated by the tail and pushed or pulled = 1, spontaneous drifting = 2, circling without displacement or spinning = 3); (c) parachute reflex (symmetrical = 0, asymmetrical = 1, contralateral forelimb retracted = 2); and (d) resistance to left forepaw stretching (stretching not allowed = 0, stretching allowed after some attempts = 1, no resistance = 2). Total scores could range from 0 (normal) to 9 (highest handicap). Statistical Analyses. PET imaging comparisons within the ischemic group were made with one-way ANOVA followed by Tukey's multiple-comparison tests for post hoc analysis. Behavioral data were compared with Mann-Whitney U tests. The level of significance was set at P < 0.05. Statistical analyses were performed with GraphPad Prism version 6 software. Results The cerebral glucose metabolism was explored by PET imaging during the first month after transient focal ischemia in rats. All the images were quantified in standard units, i.e., %ID/cc of [ 18 F]FDG. The images with a normalized color scale illustrate the evolution of the PET signals at control (day 0) and at 1, 3, 7, 14, 21, and 28 days after ischemia onset (Figure 1). The extent of brain damage after cerebral ischemia was assessed using T 2 W MRI at 1 day after reperfusion. Hyperintensities in the T 2 W images showed similar infarct extents and affected locations. All ischemic rats subjected to nuclear studies showed cortical and striatal MRI alterations (mean ± s.d.: 296.24 ± 48.3 mm 3 , n = 6). [ 18 F]FDG after Cerebral Ischemia. The time course of the cerebral glucose metabolism was evaluated with [ 18 F]FDG in both the ipsilateral and contralateral cortices, striatum, and whole brain at control and at 1, 3, 7, 14, 21, and 28 days after MCAO (Figure 1, n = 6). The different regions evaluated showed a similar metabolic evolution after long-term focal cerebral ischemia. In the whole brain (cerebrum), the ratio of the [ 18 F]FDG signal uptake in the ipsilateral to the contralateral hemisphere showed a general metabolic decrease in the ipsilateral hemisphere after cerebral ischemia in relation to control (day 0) values (p < 0.001, Figure 1(b)).
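Before continuing with the regional time courses, the quantification and statistics just described (hemispheric %ID/cc ratios, one-way ANOVA with Tukey post hoc comparisons, and Mann-Whitney U tests for behavior) can be illustrated by a minimal sketch. All arrays, group sizes, and values below are hypothetical, and the scipy/statsmodels calls stand in for the GraphPad Prism analyses actually used.

```python
import numpy as np
from scipy.stats import f_oneway, mannwhitneyu
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
days = [0, 1, 3, 7, 14, 21, 28]

# Hypothetical %ID/cc VOI means (one value per rat, n = 6 per time point).
ipsi = {d: rng.normal(1.0, 0.1, 6) for d in days}    # ipsilateral VOI
contra = {d: rng.normal(1.0, 0.1, 6) for d in days}  # contralateral VOI

# Hemispheric uptake ratios as in the paper: ipsi/contra for the cerebrum,
# contra/ipsi for the cerebellum (CCD index).
ratios = {d: ipsi[d] / contra[d] for d in days}

# One-way ANOVA across time points, followed by Tukey's post hoc tests.
f_stat, p_anova = f_oneway(*[ratios[d] for d in days])
values = np.concatenate([ratios[d] for d in days])
groups = np.concatenate([[f"day{d}"] * 6 for d in days])
tukey = pairwise_tukeyhsd(values, groups, alpha=0.05)

# Hypothetical 9-point neuroscores compared with a Mann-Whitney U test.
scores_day1 = [7, 6, 8, 7, 6, 7]
scores_day7 = [4, 3, 5, 4, 3, 4]
u_stat, p_mw = mannwhitneyu(scores_day1, scores_day7,
                            alternative="two-sided")

print(f"ANOVA p = {p_anova:.3g}; Mann-Whitney p = {p_mw:.3g}")
print(tukey.summary())
```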
At day 1, the [ 18 F]FDG signal ratio showed the lowest values in comparison to day 0 (control), followed by a progressive increase from day 3 to days 7-14 after ischemia in comparison to day 1 (p < 0.01; p < 0.001, Figure 1(b)). Subsequently, the PET signal ratio displayed a slight decrease from days 21 to 28 after ischemia onset (p < 0.01, Figure 1(b)). The cerebral cortex showed a significant reduction of the [ 18 F]FDG-PET ratios from 1 at day 0 to 0.5 at day 1 after reperfusion, followed by a recovery to circa 0.7 at days 7 and 14 and a subsequent decrease later on (days 21-28) (p < 0.05; p < 0.01; p < 0.001, with respect to control and day 1, Figure 1(c)). In addition, the striatum showed [ 18 F]FDG-PET signal values over the first month after cerebral ischemia similar to those observed in the whole brain and cerebral cortex (p < 0.05; p < 0.01; p < 0.001, with respect to control and day 1, Figure 1(d)). Crossed Cerebellar Diaschisis after Ischemia. The time course of the glucose metabolism in the cerebellum was evaluated with [ 18 F]FDG at the acute, subacute, and chronic stages after MCAO (Figure 2, n = 6). In the cerebellum, the ratio of the contralateral to the ipsilateral hemisphere showed a progressive decrease in glucose metabolism from control (day 0) to day 3, followed by a significant decrease at days 7 and 14 in comparison to day 0 and day 1 after cerebral ischemia. Therefore, these results evidence the existence of CCD after cerebral ischemia in rats (p < 0.05; p < 0.01, Figure 2(b)). Subsequently, the [ 18 F]FDG signal ratios displayed a recovery to control values at days 21 and 28 after MCAO. The time course of the neurological score after cerebral ischemia showed the greatest neurological impairment at day 1 after MCAO in relation to control rats, followed by a progressive and significant improvement from days 7 to 28 after ischemia (p < 0.05; p < 0.01; p < 0.001, with respect to control and day 1, Figure 2(c)). Discussion Since Baron and collaborators coined the term CCD in the 1980s to describe the disturbance of the cerebellum distant from, but linked to, an ischemic region of the brain, the pathophysiology underlying this phenomenon has remained controversial [3,10]. Some clinical studies in stroke have defined diaschisis as a transient and reversible event attributed to a functional neuronal deafferentation. However, others have described CCD as a persistent process that leads to neurodegeneration [18,[26][27][28]. Likewise, CCD has been extensively diagnosed clinically with neuroimaging modalities in patients at different stages after stroke [9,11,13,[29][30][31]. Despite this, CCD has remained poorly characterized in animal models of stroke. Because of this, we have assessed for the first time the in vivo imaging of CCD using PET with [ 18 F]FDG in combination with neurofunctional evaluation after cerebral ischemia in rats. [ 18 F]FDG-PET Imaging after Cerebral Ischemia. Contralateral cerebellar hypometabolism (CCH) is a well-established remote functional effect related to CCD that might be promoted by the uncoupling of oxygen consumption and glucose utilization caused by neuronal deafferentation [32]. Moreover, it is known that astrocytes respond to neuronal deafferentation, and a recent report indicates that the [ 18 F]FDG signal is sensitive to astrocyte metabolism, suggesting a potential role of astrocytes in CCH [33,34].
Contralateral cerebellar hypometabolism has been observed clinically with [18F]FDG-PET imaging after stroke; however, its correlation with clinical significance is still under debate [12]. Therefore, the preclinical characterization of CCH in animal models of cerebral ischemia with [18F]FDG might provide novel perspectives on the understanding of CCD after stroke. In the present study, we have characterized CCH during the first month following cerebral ischemia in rats by using [18F]FDG. After experimental stroke, rats showed a decreased [18F]FDG uptake in the region of the infarction followed by a progressive recovery during the first week after ischemia onset (Figure 1). In fact, the increase of glucose metabolism at day 7 stands in agreement with the activation of microglial cells and infiltrated leukocytes after cerebral ischemia [24,35]. Subsequently, [18F]FDG PET showed a progressive decrease of the glucose metabolism from day 14 to 28 due to (i) a reduction of the inflammatory response and (ii) the reabsorption of the necrotic cerebral tissue [36,37]. In addition, as a result of the supratentorial ischemic lesion, PET imaging with [18F]FDG displayed a contralateral cerebellar hypometabolism from days 7 to 14 after MCAO that was followed by a recovery to control values at days 21 and 28 (Figure 2). In addition, we have previously demonstrated that the [18F]FDG signal did not show any change over one month in both control and sham 9-week-old male Sprague-Dawley rats [23]. Therefore, these results evidence the existence of CCD at the acute and subacute stages of cerebral ischemia in rats, which is reverted at the chronic phase, in agreement with the reversibility of CCH in some patients with stroke [12]. Nevertheless, although only one-third of patients in the acute phase showed CCD after large anterior circulation vessel occlusion [31,38], all rats evaluated in this study presented cerebellar diaschisis. Likewise, the ischemic rats included in this study showed a very similar location and size of lesion (296.24 ± 48.3 mm³). However, the extension of the lesion did not show a significant correlation with the CCH observed at days 7 and 14 after MCAO (data not shown). Therefore, these results are in disagreement with those described by Infield and colleagues, who defined CCD as a functional phenomenon that correlates with the severity of the ischemic lesion [15]. In the present study, the neurological score showed that the animals presented the worst outcome at day 1 after ischemia, followed by a progressive, significant functional improvement from day 7 onwards. Therefore, the neurological recovery experienced by the ischemic rats ran in parallel with the presence of CCD at the second and third week after cerebral ischemia. Likewise, these findings stand in contrast with the description of cerebellar diaschisis as a phenomenon that can persist despite the recovery of neuronal functionality [15]. In summary, PET imaging with [18F]FDG was carried out to evaluate CCD after cerebral ischemia in rats. Our studies showed for the first time contralateral cerebellar hypometabolism at the acute and subacute phases of cerebral ischemia in rats. Moreover, the presence of CCD stands in agreement with the improvement of neurological outcome.
Therefore, these results provide valuable knowledge regarding the role of CCD after experimental stroke and suggest that the degree of cerebellar diaschisis might be predictive of neurological recovery in rats.

[Figure 2(c) caption: Neurologic outcomes before (day 0) and at 1, 3, 7, 14, 21, and 28 days after cerebral ischemia. ***p < 0.001 compared to control; #p < 0.05, ##p < 0.01, and ###p < 0.001 compared to day 1.]

Data Availability. All data supporting the results can be found at CIC biomaGUNE, San Sebastian, Spain, or obtained from the corresponding author upon request.

Ethical Approval. Animal experimental protocols were approved by the animal ethics committee of CIC biomaGUNE and the local authorities and were conducted in accordance with the ARRIVE guidelines and Directives of the European Union on animal ethics and welfare.

Conflicts of Interest. The authors declare no conflicts of financial interest.
The Role of Antennas on GNSS Pseudorange and Multipath Errors and Their Impact on DFMC Multipath Models for Avionics

Current satellite navigation systems are providing more and more dual-frequency capabilities, enabling improved navigation accuracy and a reduction of residual errors. In this context, antennas need to be properly analyzed for two reasons: On the one hand, multiband antennas will be progressively installed on aircraft and, therefore, the achievable performance at the L5/E5a band will be evaluated and properly described in the Minimum Operational Performance Specifications (MOPS). On the other hand, the impact that the antenna has on the final navigation solution has been more thoroughly investigated over the last few years. Contributions have been published, for instance, by Amielh et al. (2018), Caizzone et al. (2019a, 2019b), Harris et al. (2017), as well as Raghuvanshi and Van Graas (2015). Moreover, activities in this respect are being performed at the standardization level (RTCA/EUROCAE/ICAO), with new antenna MOPS for DFMC antennas specified in DO-373 (RTCA, 2018). This renewed interest is bringing to light a better understanding of the role of group delay, as well as of the dominant role of the antenna working as a spatial filter with respect to multipath.

In the framework of the Dual-Frequency Multipath Model for Aviation (DUFMAN) project financed by the European Commission, which aims to develop multipath models for dual-frequency navigation for avionics, the previously described methodology has been used and further developed. The resulting processing chain led to the capability of predicting pseudorange errors caused by the user antenna and airframe-induced multipath by means of a tight integration of electromagnetic measurements and simulations.

In the present work, the activity done in order to gain more insight into the physical rationale of the impact of the antenna on multipath results will be described. First, a method to characterize antennas and their performance with respect to multipath rejection will be discussed. Multipath suppression capability indicators will be defined, which properly quantify the capability of an antenna to attenuate (or not) incoming multipath. Moreover, the process developed in-house to account for the airframe will be described and validated by means of comparison with data from flight measurements.

Furthermore, the simulation capabilities described before will be used to consistently analyze major multipath sources and investigate different scenarios, taking into consideration different antennas, different positions on the aircraft, and different airframes. In a later section, the same approach will be used to investigate multipath levels for platforms with different multipath environments (such as business aviation aircraft and large commercial aircraft having antennas installed further back on the fuselage) and will highlight differences with the results obtained for commercial aircraft.

Finally, the results will be bounded toward the creation of a multipath model fitting the requirements of the DFMC MOPS. More work in this area is surely needed to address the new research questions arising, as stated in the conclusion of the paper.

Antenna Group Delay Characterization

The first step to characterize antenna-induced errors is to characterize the antenna itself (e.g., as in Van Dierendonck and Erlanson [2007]). As demonstrated by Appleget and Bartone (2019), Caizzone et al.
(2019a, 2019b), and Murphy et al. (2007), antennas' intrinsic contributions to GNSS pseudorange errors are caused by group delay variations. A limit on the allowable variations is also included in DO-373 (RTCA, 2018), the aviation dual-frequency MOPS document on antenna performance. In order to characterize group delay variations and the related pseudorange error, we followed the procedure shown in Figure 1. Two commercial antennas, hereafter referred to as Antenna A and Antenna B, are analyzed in depth in the rest of the paper. For the electromagnetic characterization, said antennas were installed on a rolled-edge ground plane and then measured in Microwave Vision Group's Starlab, a semi-anechoic chamber (Figure 1 and Figure 2) available on DLR premises.

The anechoic chamber measurement provides a transfer function of the antenna dependent on frequency, elevation, and azimuth angles. For the present work, frequency was sampled every 1 MHz (in a bandwidth of 24 MHz) and elevation/azimuth were sampled every 2 degrees. Group delay skyplots (proportional to the derivative of the phase over frequency) only consider one frequency: Exemplary results for the two antennas at one frequency (1176 MHz) are shown in Figure 3. In order to assess the antenna-induced errors on each GNSS frequency band, it is important to properly weight the antenna transfer function with the GNSS signal spectrum. This was obtained by passing the antenna transfer function through an ideal receiver using the methodology shown in Vergara et al. (2016): Pseudorange errors (only due to the antenna and relative to the whole GNSS band) were obtained as outputs (as shown in Figure 4 and Figure 5). For the present analysis, the configuration of the receiver was chosen according to the current draft material for standardization (i.e., 0.1 correlator chip spacing for L1/E1 and 1 chip for L5/E5a, with a bandwidth of 24 MHz). The following signals were used depending on the frequency: BPSK(1) for L1 C/A and CBOC(6,1,1/11,-) for Galileo E1.

Differences in the pseudorange error patterns shown in Figure 4 and Figure 5 are due to the intrinsic architecture of the antennas (because Antennas A and B are commercial products, their design is proprietary and not publicly available). An analysis of the behavior of group delay variations with respect to antenna architectures can, however, be found in Caizzone et al. (2019a, 2019b).

Antenna Multipath Susceptibility Characterization

The former analysis showed a characterization of two commercial avionic antennas in terms of group delay variation and pseudorange error predictions. Such errors will add to the overall pseudorange error at the receiver if not properly considered (i.e., calibrated). However, apart from these error sources, due to their intrinsic properties, the antennas also affect the amount of multipath that the receiver is exposed to. The antenna MOPS, however, only specify the antenna system itself, and there is no direct connection to performance levels obtained on the receiver side yet. In particular, such a connection appears very tricky for what concerns multipath performance.
In the currently valid single-frequency MOPS for satellite-based augmentation systems (RTCA DO-229), for instance, multipath was accounted for by a σ_multipath term dependent on the elevation angle θ_i of the satellite, modeled as:

σ_multipath(θ_i) = 0.13 + 0.53 · e^(−θ_i / 10°)  (in meters)

An improvement has been recently pursued in DFMC documents under development, with the antenna term being explicitly stated and combined with the multipath term in ED-259A, as also approved at the International Civil Aviation Organization (ICAO) level (Circiu et al., 2020b). More work in the area is surely needed to clearly associate antenna-related effects with the corresponding pseudorange error and its impact on, for example, protection levels.

The only requirement in the antenna MOPS that is currently linked with multipath errors (through a specification of the cross-polarization level of the antenna) is the axial ratio, which is defined as the ratio of the major axis to the minor axis of the polarization ellipse and has to be less than or equal to 3 dB in a region extending from boresight down to 40 degrees of elevation off boresight, across all azimuth angles (RTCA DO-373, 2018). Such a parameter, though useful to ensure good polarization purity of the antenna and, therefore, good reception of the desired signal, is, however, not fully representative of the multipath characteristics of antennas for two reasons.

First, the axial ratio only considers the goodness of the antenna in suppressing cross-polar radiation (i.e., multipath) coming from the same angle as the co-polar radiation (i.e., the satellite signal). In general, however, and even more so in aeronautic scenarios, such an assumption is not valid. In this case, multipath is most likely coming from directions other than that of the satellite signals (e.g., from low elevations after reflections on the tail or winglets). Moreover, the axial ratio is defined in the MOPS as only applicable for a portion of the hemisphere, namely for the most favorable one, around boresight. The behavior of the antenna at low elevations (where most multipath phenomena are expected) is left unspecified.

In previous work from the authors (Caizzone et al., 2018), more suitable parameters to describe the multipath-related characteristics of GNSS antennas have been identified. They have been called multipath suppression capability indicators (MPSIs) and have different formulations for different types of multipath (i.e., from the lower hemisphere, from the upper hemisphere, specular, diffuse).

In this work, a related parameter called the multipath susceptibility ratio (MPSR) is used as suitable for the aviation scenario (and more intuitive than the MPSI). The multipath susceptibility ratio is basically a worst-case undesired-to-desired ratio, i.e., a ratio between the maximum value of the Left-Hand Circularly Polarized (LHCP) gain in the upper hemisphere and the Right-Hand Circularly Polarized (RHCP) gain in the direction of the signal:

MPSR(θ_s, φ_s) = max over (θ, φ) of Gain_LHCP(θ, φ) / Gain_RHCP(θ_s, φ_s),

where θ ∈ (0°, 90°) and φ ∈ (0°, 360°) denote elevation and azimuth, respectively. In particular, the angles θ_s and φ_s denote the incident angles of the line-of-sight signal for which the metric is evaluated. For the multipath, on the other hand, no a priori knowledge about the angle of arrival is assumed, so that the worst case (i.e., the biggest value of the cross-polar gain Gain_LHCP in the upper hemisphere) is taken into account.
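As a minimal illustration of how the MPSR defined above could be evaluated from measured gain maps, consider the Python sketch below; the grids and gain values are hypothetical placeholders, not measurements of Antennas A or B.

```python
import numpy as np

def mpsr_db(gain_rhcp_db, gain_lhcp_db, el_idx_s, az_idx_s):
    """Worst-case multipath susceptibility ratio (MPSR) in dB.

    gain_*_db : 2-D arrays of co-/cross-polar gain over the upper
    hemisphere, indexed as [elevation, azimuth] (hypothetical grids).
    (el_idx_s, az_idx_s) : grid indices of the line-of-sight signal.
    """
    worst_lhcp = gain_lhcp_db.max()                 # worst-case LHCP multipath, any angle
    signal_rhcp = gain_rhcp_db[el_idx_s, az_idx_s]  # desired RHCP signal gain
    return worst_lhcp - signal_rhcp                 # dB ratio (undesired / desired)

# Hypothetical gain maps on a 2-degree grid (elevation 0-90, azimuth 0-360)
el = np.arange(0, 91, 2)
az = np.arange(0, 360, 2)
rng = np.random.default_rng(0)
g_rhcp = 3 - 6 * np.cos(np.radians(el))[:, None] * np.ones((1, az.size))  # peaks at zenith
g_lhcp = g_rhcp - 15 + rng.normal(0, 1, (el.size, az.size))  # ~15 dB polarization purity

# Evaluate MPSR for a satellite at 45 deg elevation, 90 deg azimuth
i_el, i_az = np.argmin(np.abs(el - 45)), np.argmin(np.abs(az - 90))
print(f"MPSR = {mpsr_db(g_rhcp, g_lhcp, i_el, i_az):.1f} dB")
```

With these toy numbers, an output near -20 dB would indicate that the worst-case upper-hemisphere multipath is attenuated by about 20 dB relative to the direct signal, in line with the interpretation given in the text.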
It is worth noticing that the antenna itself cannot suppress any RHCP multipath, due to the fact that the distinction between multipath and signal can only happen on the receiver side and not on the antenna side. Therefore, the only possible multipath suppression on the antenna side for reflections in the upper hemisphere is related to the suppression of the LHCP gain. An MPSR close to 0 dB means that the multipath is barely suppressed by the antenna (i.e., the multipath is as strong as the signal). On the other hand, an MPSR of about -20 dB means that the multipath will be attenuated by 20 dB with respect to the direct signal. When evaluating the MPSR for all possible angles, a 3D map is obtained, giving an indication of the capability of the antenna to suppress multipath amplitude, which varies with the angle. An example is given in Figure 6 and Figure 7 for the two commercially available antennas under consideration, for the L1/E1 and L5/E5a bands.

Antenna Installed Performance Characterization

From the previous discussion, it is clear that antennas act as spatial-polarimetric filters and that such antenna characteristics strongly influence the amount of multipath passed to the receiver. In order to investigate and demonstrate its use in the aviation context, the process described in Figure 1 was extended to be capable of integrating the electromagnetic measurement of real commercial-off-the-shelf (COTS) avionic antennas with a simulation of said antenna on a given platform (in our case, on a CAD model of an airplane; here, an Airbus A320). Such a process enables us to obtain the installed antenna response (dependent on frequency, elevation, and azimuth) containing the contributions of both the actual antenna per se and its interaction with the airplane (i.e., due to the multipath of the airplane). By passing this response through an ideal software receiver, it was possible to estimate the pseudorange error produced by the specific installation of that antenna in that position on that airplane (Figure 8).

It is worth highlighting here that, in order to analyze the impact of the antenna on the multipath contribution, there is a need to de-embed the antenna-only errors (as calculated in the former section) from the overall results by a calibration step, as reported in Circiu et al. (2020c). For example, the multipath (and noise) contribution is shown in Figure 9 over elevation before and after calibration. The calibration step basically removes the antenna group delay variation error from each raw pseudorange measurement according to the respective angle of arrival of the satellite (corresponding to a specific antenna group delay variation error) before statistical processing is started. In this way, the antenna impact due to multipath suppression capability can still be considered while its intrinsic error due to group delay variations is not. This separation allows us to perform an analysis of the multipath contribution for different antennas. The result of the formerly described process is a 3D map of the predicted multipath (at each GNSS band) for that specific installation (expressed as a skyplot or as a 2D map, see Figure 10). It was then possible to compare it with the data obtained from flight test measurements, processed to isolate the multipath-only component (i.e., by using the dual-frequency dual-constellation code-minus-carrier [CMC] method [Circiu et al., 2020c]).
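For readers unfamiliar with the CMC technique, the sketch below shows a standard dual-frequency code-minus-carrier combination that removes geometry and the first-order ionospheric delay, leaving multipath plus code noise; it is a generic textbook formulation applied to synthetic data, not the exact DUFMAN processing chain.

```python
import numpy as np

# GPS L1/L5 carrier frequencies (Hz)
F1, F5 = 1575.42e6, 1176.45e6

def cmc_multipath(code_l1, phase_l1, phase_l5):
    """Ionosphere-corrected code-minus-carrier multipath estimate (meters).

    The carrier phases (expressed in meters) cancel both the geometric
    range and the first-order ionosphere from the L1 code; the remaining
    constant (carrier ambiguities/biases) is removed by de-meaning over a
    continuous arc, leaving multipath plus code noise.
    """
    a = (F1**2 + F5**2) / (F1**2 - F5**2)
    b = 2 * F5**2 / (F1**2 - F5**2)
    mp = code_l1 - a * phase_l1 + b * phase_l5
    return mp - np.mean(mp)

# Hypothetical one-satellite arc (all quantities in meters)
t = np.arange(0, 600.0)                        # 10-minute arc at 1 Hz
rho = 2.0e7 + 100.0 * t                        # geometric range + clock trend
iono = 3.0 + 0.001 * t                         # slant ionospheric delay at L1
mp_true = 0.3 * np.sin(2 * np.pi * t / 120.0)  # synthetic multipath
code_l1 = rho + iono + mp_true + np.random.normal(0, 0.2, t.size)
phase_l1 = rho - iono + 7.5                    # ambiguity absorbed in constant
phase_l5 = rho - iono * (F1 / F5) ** 2 + 4.2

mp_est = cmc_multipath(code_l1, phase_l1, phase_l5)
print(f"RMS of estimated multipath + code noise: {np.std(mp_est):.2f} m")
```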
An exemplary comparison for one satellite (PRN 9) between the flight data ("measurement") and the data from the installed performance simulation ("simulation") is shown in Figure 11. The simulations very closely follow the shape of the measured multipath. The absolute values differ slightly, as the measurements were affected by further errors (such as receiver imperfections, atmospheric effects, etc.) that were not considered in the simulation. Moreover, the simulated data were obtained by strongly simplifying the scenario, considering a simplified full-metal aircraft and not changing its setup during flight (i.e., not considering wing flex and further effects). An extensive use of electromagnetic simulation will be presented in the next section to gain further insight into the physics of multipath on aircraft structures in avionic applications.

EXPLOITING SIMULATION CAPABILITIES TO OBTAIN INSTALLED PERFORMANCE

The simulation capabilities established and described in the former section can be exploited to gain insight into the phenomena underlying the multipath results on aircraft, as well as to provide the means to analyze the expected multipath on different platforms. This has the strong advantage, on the one hand, of making a step-by-step analysis possible (for instance, by analyzing the effects of different parts of the airplane) and, on the other hand, of allowing the analysis of multipath in scenarios for which no flight data is yet available (e.g., for new aircraft or for antennas that are not certified for flight). It is worth mentioning that the simulations consider a realistic but simplified model of the aircraft and that the airframe is, for simplicity, considered here to be fully metallic.

Impact of Different Parts of the Airframe

An Airbus A321NEO was taken as a reference, with an antenna placed at the standard GPS Position 2 (i.e., roughly above the entrance door toward the front of the plane and on the top centerline). The airplane was simulated considering progressively more and more components of the structure, so that a better understanding of the objects causing reflections becomes possible (see Figure 12). The airframe multipath error prediction results are shown both as skyplots (Figure 13) and versus elevation angle (with respect to the aircraft body frame), with a plot of the average value and error bars showing the maximum and minimum values found over azimuth in the given elevation bin (Figure 14). The analysis was performed with Antenna A; the difference obtained when using Antenna B will be shown later. The orientation of the antenna with respect to the aircraft is shown in Figure 13a and is valid for all plots. Please note that, in this calculation, no smoothing is applied (differently from the processing performed during real flights, where 100-s smoothing is applied). Figure 13 and Figure 14 clearly show the impact of the tail on the aircraft multipath error at high elevations, as well as the even more relevant effect of wings/winglets at low elevations. For the A321NEO example, the winglets appear to have the most impact, as they are more clearly visible from the antenna standpoint. Other airplanes having strongly slanted wing configurations will also experience a similar impact due to the wing tips. Such structural parts appear, therefore, to be the dominant sources of multipath.
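The elevation plots just described can be reproduced from a simulated skyplot with a few lines of code; the sketch below bins multipath error samples by elevation and extracts the mean together with the minimum and maximum over azimuth. The grids and error values are hypothetical placeholders, not the simulation outputs of the paper.

```python
import numpy as np

def bin_by_elevation(el_deg, mp_err, bin_width=5.0):
    """Mean and min/max multipath error per elevation bin (over all azimuths).

    el_deg, mp_err : flattened skyplot samples (hypothetical).
    Returns bin centers, mean, min, and max per bin, mirroring the
    error-bar plots over elevation.
    """
    edges = np.arange(0.0, 90.0 + bin_width, bin_width)
    centers = 0.5 * (edges[:-1] + edges[1:])
    mean, lo, hi = [], [], []
    for a, b in zip(edges[:-1], edges[1:]):
        vals = mp_err[(el_deg >= a) & (el_deg < b)]
        mean.append(vals.mean() if vals.size else np.nan)
        lo.append(vals.min() if vals.size else np.nan)
        hi.append(vals.max() if vals.size else np.nan)
    return centers, np.array(mean), np.array(lo), np.array(hi)

# Hypothetical simulated skyplot: multipath error larger at low elevation
el_g, az_g = np.meshgrid(np.arange(0, 90, 2.0), np.arange(0, 360, 2.0))
err = 0.5 * np.exp(-el_g / 20.0) * (1 + 0.3 * np.sin(np.radians(3 * az_g)))
c, m, lo, hi = bin_by_elevation(el_g.ravel(), err.ravel())
print(np.round(m[:4], 3))  # mean error (m) in the lowest elevation bins
```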
Impact of Different Antennas

In order to investigate the impact of different antennas, Antenna B was simulated on the same aircraft in the same position as Antenna A. Results are shown in Figure 15 and Figure 16 for the A321NEO at both the L1 and L5 bands. Bars in the plots express the minimum and maximum values obtained in the corresponding elevation bin. Moreover, simulations were also performed with an Airbus A330, placing the antenna again in the primary position GPS Position 2. Results are shown in Figure 17 and Figure 18. Though the shape of the curve is roughly similar between the two antennas for the same aircraft (due to the similar multipath environment determined by the installation geometry), the amplitudes differ. In particular, Antenna A appears to produce slightly larger errors on the L1/E1 frequency band than Antenna B. An explanation for this phenomenon can be found in the multipath suppression capability of the two antennas introduced in the previous section. Antenna A's MPSR values were worse than those of Antenna B, and this difference is even stronger at the L5 band for low elevations (Figure 6). Even if the signal on the L5/E5a band has better multipath rejection capabilities for long-delay multipath, for short-range multipath below 30 m (dominant in this scenario) the expected multipath code error on the L1/E1 and L5/E5a bands is similar. However, it can also be noticed that, though using two totally different antennas, the amount of multipath error did not differ substantially for either band.

Figure 19 and Figure 20 show the standard deviation of the predicted multipath, to be comparable with metrics used in the MOPS. It can be observed that the difference between the results obtained for the two antennas is not really substantial (on the order of a few centimeters). This can be explained by the fact that the considered installation point of the antenna (primary position GPS Position 2) is, indeed, a point where impinging multipath levels are low. Therefore, differences in the filtering capability of the antenna do not have much of an impact. In order to analyze whether further installation points appear to be more challenging from a multipath point of view, further simulations are performed in the next section.

Impact of Different Antenna Locations

In order to investigate the impact of different installation points, we considered the A350 aircraft, identifying two different locations: the primary position (GPS Position 2) as well as the Automatic Direction Finder (ADF) position (Figure 21). The ADF position resembles the installation point (in terms of the ratio of the antenna position to the fuselage) on aircraft from other manufacturers and will, therefore, provide valuable information on the multipath to be expected in such cases.
In this case, two effects can be observed: On the one hand, the ADF position (due to its closer vicinity to the wings and tail and the even better visibility of the tail from the antenna position) appears to be much more affected by multipath than Position 2 (i.e., it cannot be considered a low-multipath position; see Figures 22 through 25). On the other hand, it is also evident how Antenna B, in the case where the environment is rich in multipath, manages to suppress the amount of multipath much better than Antenna A, thanks to its better MPSR (see Figure 6 and Figure 7). The effect is stronger on the L1/E1 band than on that of the L5/E5a, due to the worse MPSR of Antenna A around the zenith at L1. Such an example is relevant for installations in business aviation and/or installations that do not or cannot benefit from the low-multipath environment found on commercial aircraft. When calculating the standard deviation values for the Falcon and A350 installations, the results in Figure 30 and Figure 31 were found. Indeed, the values obtained with Antenna B were significantly lower than those obtained with Antenna A for all elevations, with differences reaching about 20 cm.

TOWARD MULTIPATH MODELS FOR COMMERCIAL AVIATION

The results shown in the previous sections have been compared to those obtained from flight data. Overall, a very good agreement was found, validating the simulation approach and also strengthening, on the other side, the validity and correctness of the flight data for the creation of the multipath models. Figure 32 shows an exemplary comparison of the 100-s smoothed multipath root-mean-square (RMS) as obtained by simulation and by measurements of data from an A350 aircraft at the L1/E1 band. The measurement-based results were obtained from GNSS observables recorded on the A350 aircraft and then processed to obtain the 100-s smoothed multipath and noise errors with code-minus-carrier techniques (Circiu et al., 2020c). After collecting data from all flights, the estimated errors were sorted into satellite elevation bins and the RMS for each bin was computed. The satellite elevation refers to the elevation in the level frame (with respect to the horizon). The simulation-based results were obtained by mapping the simulated multipath results for the specific aircraft type (available for all azimuths and elevations, but with no time sequence) to the actual angles of arrival of the satellites during the flight campaign, to get a sequence of multipath values that could be processed like data from the flight measurement. All the flight trajectories flown on the A350 were considered. Based on the satellite elevation and azimuth values at each epoch, the predicted multipath was calculated (i.e., its value for the specific elevation and azimuth was considered) for each epoch, resulting in a time sequence of predicted multipath errors for each satellite. After applying the 100-s smoothing filter, the data were sorted by elevation bin and the RMS of the smoothed predicted multipath was computed for each elevation bin, similarly to the measurement-based approach. For the simulations, the satellite angles were first considered in the body frame and translated into the level frame to be comparable with the measurement-based model and in line with the currently defined model. The receiver parameters used were the same as those used in the flight receiver and as specified in Section 2. Antenna A was considered in the simulation, as it was the antenna that was actually flown.
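To make the smoothing-and-binning pipeline described above concrete, the following sketch applies a simple recursive (Hatch-like) filter with a 100-s time constant to a per-epoch multipath sequence and then computes the RMS per elevation bin; the satellite arc and error values are hypothetical, and the filter is a generic formulation rather than the exact flight-processing implementation.

```python
import numpy as np

def smooth_100s(raw, tau=100, dt=1.0):
    """First-order (Hatch-like) smoothing with a 100-s time constant,
    applied to a 1-Hz multipath error sequence."""
    n = max(int(tau / dt), 1)
    out = np.empty_like(raw, dtype=float)
    out[0] = raw[0]
    for k in range(1, raw.size):
        w = 1.0 / min(k + 1, n)            # window grows up to 100 samples
        out[k] = (1 - w) * out[k - 1] + w * raw[k]
    return out

def rms_per_elevation_bin(el_deg, err, bin_width=10.0):
    """RMS of the (smoothed) multipath error sorted into elevation bins."""
    edges = np.arange(0.0, 90.0 + bin_width, bin_width)
    rms = []
    for a, b in zip(edges[:-1], edges[1:]):
        sel = (el_deg >= a) & (el_deg < b)
        rms.append(np.sqrt(np.mean(err[sel] ** 2)) if sel.any() else np.nan)
    return 0.5 * (edges[:-1] + edges[1:]), np.array(rms)

# Hypothetical per-epoch sequence for one satellite during a flight
t = np.arange(3600)                        # 1 h at 1 Hz
el = np.clip(10 + 0.02 * t, 0, 85)         # slowly rising satellite
raw_mp = (0.4 * np.exp(-el / 25) * np.sin(2 * np.pi * t / 30)
          + np.random.normal(0, 0.05, t.size))
centers, rms = rms_per_elevation_bin(el, smooth_100s(raw_mp))
print(np.round(rms, 3))                    # RMS (m) per 10-deg elevation bin
```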
The existing residual differences between the two curves (i.e., measured flight data and simulations) could have been caused by a number of effects, including deformation of the aircraft in flight (especially wing flex), receiver noise (an ideal receiver was assumed for the simulations), and residual errors from the ionosphere and troposphere, as well as the simplification of the airplane structure in the simulation. Figure 33 moreover shows the raw RMS of the multipath obtained using simulation data (with Antenna A) and then evaluated for the satellite geometries obtained during real flights. The big difference between the ADF position and Position 2 on the A350 (as shown already in Figure 30 and Figure 31) is still clearly visible. However, when performing 100-s smoothing of the results (Figure 34), the difference shrank considerably, due to the capability of the smoothing process to reduce the high-frequency multipath effects. This result is important for the practical usability of the multipath models, since it shows that, even at installation points that are moderately unfavorable from a multipath point of view, the models obtained after 100 seconds of smoothing can be safely used. The validity of this consideration for installations that are much more challenging in terms of multipath (business aviation, helicopters, UAVs, etc.) will still need to be investigated and is a topic for future research.

Dual-frequency multi-constellation (DFMC) multipath models have recently been approved by the community based on flight data from the DUFMAN project (Circiu et al., 2020b). Even if data from different aircraft types was analyzed to develop the new models, other installations and antenna positions might need to be taken into account to validate the applicability of the models. The simulation capability shown here, besides shedding light on the physical phenomena of aircraft multipath, provides the possibility of extending the applicability of the models to further installations/aircraft types. Moreover, it provides a basis for the analysis of more complicated installations, as is typical of various avionics platforms such as UAVs, helicopters, and business aviation jets. Further research is, indeed, needed in this respect to evaluate the validity of the current and future multipath models for these avionics platforms.

CONCLUSION

The present paper has brought insight into the role of antennas on pseudorange and multipath errors as experienced in aeronautics. A characterization of different commercial off-the-shelf avionics antennas was performed, and indicators for multipath susceptibility have been presented accordingly. Thanks to the integration of electromagnetic measurements and simulations, it was possible to perform a hybrid analysis leading to an estimation of the expected multipath of specific antennas on specific aircraft.

Figure captions:
FIGURE 1: Functional flow diagram of the procedure to characterize the antenna-induced error.
FIGURE 3: Group delay variations (normalized to zenith) in ns for (a) Antenna A and (b) Antenna B at 1176 MHz, as measured on the 0.4-m rolled-edge ground plane in the Starlab chamber.
FIGURE 5: Pseudorange error (in m and normalized to zenith) obtained from the antenna measurement in Starlab and then processed through the ideal receiver for Antenna B at the L1 band (left) and at the L5 band (right).
FIGURE 6: MPSR (in dB) for Antennas A and B (left and right, respectively) at the L1 band.
FIGURE 8: Functional flow diagram of the process to obtain pseudorange errors starting from the electromagnetic measurement of the antenna.
FIGURE 10: 3D map of the predicted multipath error (normalized to zenith and with antenna-intrinsic error calibrated out) of Antenna A at the L1/E1 band.
FIGURE 12: Different parts of the A321NEO aircraft considered in the installed performance analysis to identify the sources of multipath: a) fuselage only; b) fuselage and tail; c) fuselage, tail, and wings (without winglets); and d) full aircraft.
FIGURE 13: Skyplot of multipath error, calibrated and normalized to zenith, for Antenna A on the A321NEO, Position 2, at the L1 band considering: a) fuselage only; b) fuselage and tail; c) fuselage, tail, and wings (no winglets); and d) full aircraft.
FIGURE 14: Multipath on the L1 band vs. elevation for Antenna A on the A321NEO, Position 2, considering: a) fuselage only; b) fuselage and tail; c) fuselage, tail, and wings (no winglets); and d) full aircraft.
FIGURE 15: Predicted multipath error (in m) as a skyplot (top) or versus elevation (bottom) at the L1 band for Antenna A (left) and Antenna B (right), installed in the primary position GPS Position 2 on the A321NEO aircraft.
FIGURE 19: Standard deviation of the predicted multipath error (in m, calibrated for antenna-intrinsic pseudorange error and normalized to zenith) at the L1 band for Antennas A and B from the installed performance simulations on the A321NEO and A330 aircraft.
FIGURE 20: Standard deviation of the predicted multipath error (in m, calibrated for antenna-intrinsic pseudorange error and normalized to zenith) at the L5 band for Antennas A and B from the installed performance simulations on the A321NEO and A330 aircraft.
FIGURE 32: Comparison of the 100-s smoothed RMS of the multipath between measurement and simulations for A350 data on the L1 band using Antenna A.
FIGURE 33: Raw RMS of multipath with simulated data from the A350 aircraft, processed using the trajectories and satellite constellations seen in real-life flights during the DUFMAN project.
FIGURE 34: 100-s smoothed RMS of multipath with simulated data from the A350 aircraft, processed using the trajectories and satellite constellations seen in real-life flights during the DUFMAN project.
Situation awareness framework for industrial control system based on cyber kill chain

Information and cyber security of Industrial Control Systems (ICS) has gained considerable importance. Situation Awareness (SA) is a promising mechanism to achieve the perception, comprehension, and projection of the ICS information security status. Based on the Purdue Enterprise Reference Architecture (PERA), a situation awareness framework for ICS is presented, built around the ICS cyber kill chain. The proposed framework consists of an IT SA Centre, an OT SA Centre, and a Comprehensive SA Centre. The Comprehensive SA Centre is responsible for creating and maintaining integrated, high-level security visibility into the whole environment. The introduced framework can be used to guide the development of situation awareness infrastructure in organizations with industrial control systems.

Introduction

Industrial Control Systems (ICSs), which usually include Supervisory Control and Data Acquisition (SCADA) systems, Distributed Control Systems (DCS), Programmable Logic Controllers (PLC), Process Control Systems (PCS), Remote Terminal Units (RTU), and Intelligent Electronic Devices (IED), are widely used in the critical infrastructure of industries to control processes in industrial sectors such as electric power, water and wastewater, oil and natural gas, transportation, chemicals, pharmaceuticals, pulp and paper, food and beverage, and discrete manufacturing. With the application of information and communication technologies, the frequency and seriousness of cyber-attacks targeting ICSs are increasing quickly [1]. Cyber security of ICSs is increasingly important and may have direct impacts on safety. The perception of cyber security status is an effective way to detect various attacks and anomalies, and cyberspace situation awareness can help an organization improve its ability to detect and investigate cyber-related attacks and anomalous behaviours.

Situation Awareness (SA) is "the perception of elements in the environment within a volume of time and space, the comprehension of their meaning, and the projection of their status in the near future" [2]. Perception, comprehension, and projection are the three main factors of situation awareness. Network isolation between IT and OT systems is one of the mainstream technologies to ensure the security and safety of ICSs, but cyber-attacks such as "Stuxnet" [3] can break through the network boundary and then disable or disrupt the target ICSs. The convergence of perception data across the IT and OT systems and related equipment will better protect the organization from cyber-attacks, especially against the ICSs. Reference [4] develops a formal model and risk assessment method for security-critical real-time embedded systems called OMR (Object-Message-Role) using Z notation. Reference [5] introduces multimodal-based incident prediction and risk assessment for industrial control systems. Security assessment and vulnerability assessment for critical infrastructure control systems are also discussed in [6]. The information security assessment methods referred to in [7] can be adopted to analyse the security risks in industrial control systems. A well-designed SA infrastructure must deal with both safety (control zone) and security risks. Based on the industrial control system cyber kill chain [8], a situation awareness framework for industrial control systems is developed.
The framework can guide the organization to design the situation awareness infrastructure across the IT and OT networks.

Typical ICS model

We use the Purdue Enterprise Reference Architecture (PERA), developed in the 1990s by Theodore J. Williams and members of the Industry-Purdue University Consortium for Computer Integrated Manufacturing, to illustrate the model of an ICS. PERA has been adopted by ISA/IEC 62443. As shown in Figure 1, the ICS network of an organization is divided into three main zones: the enterprise zone, the DMZ, and the control zone [9].

Enterprise Zone. In this zone, Office Automation (OA) and Enterprise Resource Planning (ERP) systems are usually used to manage the supply chain of the enterprise. ICSs are rarely connected directly to the enterprise zone.

DMZ. The ICS-Demilitarized Zone is set between IT (enterprise zone) and OT (control zone). Replication servers, patch management servers, engineering workstations, and configuration management systems are commonly deployed in this area.

Control Zone. The control zone is subdivided into four levels: operations control, supervisory control, basic control, and process. The operations control level typically contains the SCADA master station and other ICS master stations with supervisory functions, and HMIs are commonly present in the supervisory control level. The basic control level is the main location for equipment like PLCs, and its functions include batch control, discrete control, continuous control, and hybrid control. The process level, usually named Equipment Under Control (EUC), is where the physical equipment being controlled by the basic control level is located.

The situation awareness solution for ICSs should fully cover the three zones described above. That is to say, the SA infrastructure should have the ability of comprehensive situation awareness across the enterprise zone, DMZ, and control zone environments.

ICS cyber kill chain

The Cyber Kill Chain, adapted from the concept of military kill chains, was created for better detecting and responding to cyber-attacks [10]. Considering the nature of ICS-custom cyber-attacks, a two-stage ICS Cyber Kill Chain has been introduced [8]. A brief block diagram of ICS Cyber Kill Chain Stage 1 and Stage 2 is shown in the right part of Figure 1.

Stage 1: Cyber Intrusion Preparation and Execution. This stage includes five phases: planning; preparation; cyber intrusion; management and enablement; and sustainment, entrenchment, development and execution. In this stage, the purpose of attackers may be to collect information about the ICSs, defeat internal perimeter protections, or gain access to OT environments.

Stage 2: ICS Attack Development and Execution. This stage consists of three phases: attack development and tuning, validation, and the ICS attack. Attackers in Stage 2 must use the knowledge learned in Stage 1. The attack behaviour in this stage may be to trigger, deliver, modify, inject, hide, or amplify.

The situation awareness solution for ICSs should have the ability to perceive the clues of the ICS cyber kill chain within a volume of time and space. The ICS cyber kill chain can be used for the comprehension of the data gathered from the IT and OT environments, and then future states and events can be projected based on the situation awareness infrastructure.

Situation awareness framework

Based on the ICS cyber kill chain, a situation awareness solution framework for industrial control systems is shown in Figure 1.
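As a toy illustration of how the zones, probes, and kill-chain stages introduced above might be wired together in such a framework (the individual components are described next), consider the following sketch; the zone-to-probe mapping and all names are hypothetical, not a prescription from the paper.

```python
# Hypothetical mapping of PERA zones to example perception probes and the
# ICS Cyber Kill Chain stage whose clues they are best placed to observe.
SA_FRAMEWORK = {
    "enterprise_zone": {
        "probes": ["firewall logs", "IDS/IPS alerts", "log audit system"],
        "kill_chain_stage": 1,        # intrusion preparation and execution
        "sa_centre": "IT SA Centre",
    },
    "dmz": {
        "probes": ["replication server logs", "patch server audit"],
        "kill_chain_stage": 1,
        "sa_centre": "IT SA Centre",
    },
    "control_zone": {
        "probes": ["flow analysis", "PLC event logs", "HMI audit trail"],
        "kill_chain_stage": 2,        # ICS attack development and execution
        "sa_centre": "OT SA Centre",
    },
}

def route_alert(zone: str) -> str:
    """Route a probe alert to its zone's SA centre; correlation across
    kill-chain stages is left to the Comprehensive SA Centre."""
    return SA_FRAMEWORK[zone]["sa_centre"]

print(route_alert("control_zone"))  # -> OT SA Centre
```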
The situation awareness solution has four basic components: perception probes, the IT SA Centre, the OT SA Centre, and the Comprehensive SA Centre. Perception probes are located in the enterprise zone, the DMZ, and the control zone. They can be professional equipment with flow analysis functions, firewalls, intrusion detection systems, intrusion protection systems, log audit systems, network switches, etc. Probes are responsible for sensing and capturing important cues and elements in the IT and OT environments, especially warning messages. The data collected by probes are then sent to the IT SA Centre or the OT SA Centre.

The IT SA Centre is in charge of storing, integrating, and processing the perception data from the various probes deployed in the enterprise zone and the DMZ; correlations among the information are then analysed. Based on the perception and understanding of the IT environment, and considering ICS Cyber Kill Chain Stage 1, the IT SA Centre can provide a real-time, converged SA capability together with a variety of cyber-attack prevention, detection, response, reporting, and mitigation capabilities within the range of the IT networks of the organization.

The OT SA Centre provides a real-time, converged SA capability that includes probe data from OT networks and equipment. Based on the perception and understanding of the OT environment, and considering ICS Cyber Kill Chain Stage 2, the OT SA Centre can provide a variety of ICS-cyber-attack prevention, detection, response, reporting, and mitigation capabilities within the range of the control zone of the organization.

The Comprehensive SA Centre correlates meaningful sensor data between the IT SA Centre and the OT SA Centre, which produces actionable alerts. Some organizations monitor IT and OT separately; according to the whole ICS Cyber Kill Chain, a more comprehensive SA is necessary to enhance the ability to detect advanced persistent threats targeting ICSs.

Logical architecture of SA centre

The logical architecture of an SA Centre is shown in Figure 2. In every SA Centre, the capabilities of attack prevention, detection, response, reporting, and mitigation are set up based on the analysis of the sensor data collected by the perception probes. The correlation analysis models are usually related to the ICS Cyber Kill Chain, considering threat intelligence, user behaviour, vulnerabilities, ports, security events, or statistics. Machine learning models, backtracking models, and attack scenario analysis models are also used in the comprehension of the situation and the projection of the security status in the near future. Visualization technology is adopted for intelligent analysis presentation: the situation of threats, risks, external attacks, insider attacks, and data security can be presented to the manager in a user-friendly way. The view of data monitoring, collection, aggregation, and analysis is helpful in making decisions with the goal of enabling state prediction. The functions of an SA solution may include advanced security dashboard views, intrusion detection, IT and ICS device management, security event management, and so on. A well-designed and well-developed ICS SA Centre is expected to provide an alerting mechanism.

Conclusions

In this paper, we propose a reference framework for ICS situation awareness based on the Purdue ICS model, and a block diagram for the logical architecture of an SA Centre is presented. The proposed framework can be used to design ICS SA to monitor the whole kill chain of cyber-attacks targeting ICSs. The development of ICS SA solutions for different scenarios will be the focus of future work.
Acknowledgments. This research was sponsored by the 333 Project in Jiangsu Province of China under Grant No. BRA2018317.
Outcomes of gastrointestinal bleeding in patients with left ventricular assist devices: a tertiary care experience

Background and study aims: Left ventricular assist device (LVAD) placement is a therapeutic modality for patients with end-stage heart failure. Gastrointestinal bleeding is a common complication following LVAD implantation. The aim of this study was to report our experience in the management and outcomes of gastrointestinal bleeding in a large cohort of patients with LVADs.

Patients and methods: We performed a retrospective review of all patients who underwent LVAD implantation at the University of Rochester Medical Center from January 2008 to June 2017. Data were collected on patient characteristics, clinical aspects of gastrointestinal bleeding events, and procedural interventions. A Cox proportional hazards model was utilized to identify potential risk factors for a gastrointestinal bleeding event.

Results: During the study period, 345 patients underwent LVAD implantation. Of these, 125 patients (36.2 %) experienced 297 gastrointestinal bleeding events resulting in 533 endoscopic procedures. The diagnostic yield of endoscopy in determining a bleeding source was 49.5 %. When required, therapeutic interventions were successful in achieving hemostasis in 96.2 % of procedures. Our 30-day overall post-procedure adverse event (AE) rate was 6.6 %. Procedure-related (bleeding, infection, and perforation) AEs were very minimal (2.8 %). A Cox proportional hazards model indicated that older age at implant, female sex, African-American race, diabetes mellitus, and pulmonary hypertension were statistically significant predictors of a gastrointestinal bleeding event following LVAD implantation.

Conclusions: LVAD patients have a high risk of gastrointestinal bleeding. Endoscopy was able to safely locate a bleeding lesion in approximately half of our patients and was successful in treating bleeding lesions in a majority of the cases.

In patients with continuous-flow LVADs, pulse pressure is diminished and is comparable to severe aortic stenosis. A loss of pulse pressure is believed to create alterations in hemodynamics that trigger the development of angiodysplasias [12,13]. In addition, the reduction in pulse pressure creates shear forces that can contribute to the occurrence of an acquired von Willebrand disease (vWD) [14]. In vivo reductions in von Willebrand factor (vWF) have been confirmed in LVAD patients, with normalization of levels post-heart transplant [15]. An acquired vWD produces a coagulopathy, which increases overall bleeding risk and is compounded by the requirement for prophylactic anticoagulation for all LVAD patients.

Cohort studies examining preimplantation risk factors for the development of gastrointestinal bleeding events in patients with LVADs have produced mixed results. Several studies have reported that increased age at the time of LVAD placement may increase the risk of subsequent gastrointestinal bleeding [6,16-19]. Other studies have indicated that a history of gastrointestinal bleeding, use of the LVAD as destination therapy, or right heart dysfunction are independent risk factors for gastrointestinal bleeding in LVAD patients [20-23]. Most suggested management approaches advocate for endoscopic evaluation and intervention, with a reported diagnostic yield ranging from 30 % to 71 % for gastrointestinal bleeding in the LVAD population [6,18,24-27].
There is a school of thought that endoscopic management may have limited utility in the LVAD population, as a majority present with occult gastrointestinal bleeding, making diagnosis more difficult with conventional endoscopic techniques [28]. This limitation may be overcome with early use of device-assisted enteroscopy (DAE) to directly visualize more of the small bowel. A recent systematic review reported that performing DAE early in the course of assessment for suspected gastrointestinal bleeding in this population is associated with decreased transfusion requirements, decreased time to endoscopic intervention, and a high diagnostic yield [29]. In addition, use of video capsule endoscopy (VCE) may further increase diagnostic yield, as VCE allows the endoscopist to narrow down the location of a gastrointestinal bleeding source prior to performing DAE. Although the literature describing the utility of endoscopy in the assessment of gastrointestinal bleeding in LVAD patients is increasing, there is still considerable disagreement on its utility versus conventional management. The aim of this study is to report our experience in the endoscopic management and outcomes of gastrointestinal bleeding in the LVAD population in an academic tertiary care setting. To date, our cohort represents the largest single-center gastrointestinal bleeding dataset in the LVAD population reported in the literature. Specific areas of interest were to identify potential risk factors for gastrointestinal bleeding and to describe the outcomes and safety of endoscopic management of gastrointestinal bleeding in LVAD patients.

Study population and data collection

We retrospectively reviewed the electronic medical records of all patients who underwent implantation of an LVAD at the University of Rochester Medical Center (Rochester, NY) between January 2008 and June 2017 to identify those who were admitted with gastrointestinal bleeding. Each gastrointestinal bleeding event was considered independent and thus analyzed separately. Data were obtained on the clinical presentation at the time of gastrointestinal bleeding, the length of each hospitalization, relevant laboratory studies, and transfusion requirements per gastrointestinal bleeding event. In addition, data were collected on all endoscopic and radiologic procedures performed during each hospitalization, the location of the gastrointestinal bleeding (if a source was identified), endoscopic interventions (if applicable), and post-procedural (30-day) AE rates. An identifiable source of bleeding was defined as a lesion seen during an endoscopic exam that had evidence of active or recent bleeding and was documented by the endoscopist to be significant enough to cause the patient's clinical presentation. This study protocol was reviewed and approved by the University of Rochester Medical Center institutional review board. All patients who underwent an endoscopic procedure gave proper informed consent in accordance with institutional policies.

Statistical methods

Baseline clinical characteristics were compared between patients with or without gastrointestinal bleeding events during follow-up. Continuous measures were expressed as mean ± SD and range, while categorical data were summarized as frequencies and percentages. Statistical comparisons were performed using the Wilcoxon rank-sum test for continuous variables and the chi-square test for dichotomous variables, as appropriate.
Diagnostic yield was calculated as the number of endoscopic procedures that revealed a bleeding source divided by the total number of endoscopic procedures (note: procedures performed after a source of bleeding was located were not included in this calculation). The success rate of endoscopic intervention(s) was defined as the number of procedures in which hemostasis was achieved divided by the total number of procedures with endoscopic intervention (note: several procedures included more than one endoscopic intervention; for the purpose of this calculation, each procedure with an intervention was only counted once toward the denominator). The 30-day AE rate was defined as any AE occurring within 30 days of the completion of the procedure, with each procedure considered independently.

For the vast majority of subjects with complete follow-up data (n = 330), survival analysis techniques were utilized. The cumulative probability of gastrointestinal bleeding was displayed using the Kaplan-Meier method, and statistical significance between groups was determined with the log-rank test. Multivariate Cox proportional hazards regression models were used to model the time-to-event endpoints of index gastrointestinal bleeding and mortality. Covariates associated with risk of these endpoints were determined employing the "best subsets" regression methodology. Specifically, the best subsets method of variable reduction examines the best models containing one, two, or three variables, and so on, and makes comparisons based on the global score chi-square statistic. In addition, variables needed to be significant at P < 0.10 for inclusion in the model. For the mortality endpoint, gastrointestinal bleeding was modeled as a time-dependent covariate in the proportional hazards regression model. Analyses were performed using Microsoft Excel, SPSS Version 24 (IBM), and SAS 9.4.

Results

A total of 345 patients underwent LVAD implantation during the study period, with 125 (36.2 %) having at least one gastrointestinal bleeding event. Each gastrointestinal bleeding event was recorded independently (n = 297). Patients experienced a median of two bleeds, with a median time to index gastrointestinal bleeding of 0.54 years (range 0-6.24 years). Patient characteristics are described in Table 1. Statistical analyses revealed that those with gastrointestinal bleeding were more likely to be older. Results of the multivariate Cox proportional hazards models for index gastrointestinal bleeding and mortality are shown in Table 2 and Table 3. Predictors associated with risk of gastrointestinal bleeding were age at implant, sex, African-American race, diabetes, pulmonary hypertension, and history of acute MI. The risk of a gastrointestinal bleeding event increased by 6 % for each year older at the time of implant. Male patients exhibited a 37 % lower risk than females. Patients of African-American race had more than double (2.75 times) the risk of developing gastrointestinal bleeding. The comorbidities of DM-II and pulmonary hypertension each independently elevated the risk of gastrointestinal bleeding by approximately two-fold (HRs 1.8 and 2.2, respectively), while a history of acute MI halved the risk. For the mortality endpoint, even after adjustment for age at implant, diabetes, and NYHA class, time-dependent gastrointestinal bleeding significantly increased the risk of mortality (HR 2.36, 1.58-3.53).
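To make the survival analyses described above concrete, the sketch below fits a Kaplan-Meier curve and a multivariate Cox proportional hazards model with the Python lifelines library; the dataframe, its column names, and all values are hypothetical toy data, not the study cohort, and the time-dependent covariate analysis reported here would additionally require lifelines' CoxTimeVaryingFitter on a long-format dataset.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

# Hypothetical patient-level dataframe; column names are illustrative only.
df = pd.DataFrame({
    "years_to_gib":   [0.5, 1.2, 3.0, 0.8, 2.5, 4.0, 1.1, 0.3, 2.0, 3.5],
    "gib_event":      [1, 1, 0, 1, 0, 0, 1, 1, 0, 0],   # 1 = index bleed observed
    "age_at_implant": [68, 55, 49, 72, 60, 45, 70, 66, 52, 47],
    "female":         [1, 0, 0, 1, 0, 1, 1, 0, 0, 1],
    "diabetes":       [1, 0, 1, 1, 0, 0, 1, 1, 0, 0],
})

# Kaplan-Meier estimate of time to index gastrointestinal bleed
km = KaplanMeierFitter().fit(df["years_to_gib"], df["gib_event"])
print(km.survival_function_.head())

# Multivariate Cox proportional hazards model for the index bleed
cph = CoxPHFitter().fit(df, duration_col="years_to_gib", event_col="gib_event")
cph.print_summary()  # hazard ratios (exp(coef)) with 95% CIs
```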
When adjusting for baseline hemoglobin, these results were similar (over 20 % of patients were missing this biomarker, so it was not included in the primary analysis). Characteristics of each independent gastrointestinal bleeding event are portrayed in Table 4. In patients hospitalized for a gastrointestinal bleeding event, the median length of stay (LOS) was 8 days (range 0-173 days). The majority of patients (59.2 %) were readmitted for a subsequent gastrointestinal bleeding event. The median time to readmission (following prior hemostasis, if the patient had multiple readmissions for gastrointestinal bleeding events) was 118 days. Upon presentation for gastrointestinal bleeding evaluation, the median hemoglobin concentration was 6.9 g/dL (range 4.0-15.0) and the median INR was 2.0 (range 1.0-8.6). Patients were transfused with a median of four units of packed red blood cells per admission for gastrointestinal bleeding, using a standard transfusion threshold of hemoglobin less than 7.0 g/dL.

Procedural characteristics are described in Table 5. To evaluate LVAD patients with suspected gastrointestinal bleeding, 533 endoscopic procedures were performed. At the time of their gastrointestinal bleeding event, the majority of patients were on an antithrombotic regimen (97 %), most commonly a combination of warfarin plus aspirin (73 %). The most common presentations of gastrointestinal bleeding were melena (n = 142; 47.8 %) and symptomatic anemia without overt signs of gastrointestinal bleeding (n = 77; 25.9 %). When a source of bleeding was determined, the location was most often the stomach (n = 113; 39.4 %) or small bowel (n = 83; 28.9 %). Gastrointestinal angiodysplasias (GIADs) were the most frequent endoscopic finding (n = 121; 42.4 %). Our overall diagnostic yield for endoscopic evaluation of gastrointestinal bleeding was 49.5 %. A total of 226 interventions were performed, most commonly argon plasma coagulation (n = 77; 34.1 %) or endoclip placement (n = 67; 29.6 %). The success rate in achieving hemostasis with endoscopic interventions was 96.2 %. Procedure-related AEs were very minimal (2.8 %) in our cohort. Thirteen post-procedure bleeds were noted; however, it was difficult to delineate whether these were a continuation of the index bleeding event. Perforation (n = 1) occurred in the rectosigmoid during colonoscopy and was managed successfully with endoscopic Ovesco clip closure. One case of acute phlebitis was seen following peripheral intravenous placement and was managed conservatively with a 10-day course of antibiotics. Thirty-day post-procedure AEs included LVAD pump thrombosis (0.38 %), cerebrovascular accident (CVA; 0.75 %), and death (2.6 %). No reported deaths were associated with endoscopic procedures or interventions.

Discussion

Gastrointestinal bleeding is the most common long-term AE post-LVAD placement and can lead to significant morbidity and the need for repeated endoscopic procedures. To our knowledge, this cohort represents the largest single-center report on the endoscopic management of gastrointestinal bleeding events in the LVAD population. Our study confirms that gastrointestinal bleeding is a very common AE following LVAD implantation (36.2 %; n = 125/345). Interestingly, our cohort had a higher rate of bleeding in LVAD patients as compared to national averages (36.2 % vs.
Discussion

Gastrointestinal bleeding is the most common long-term AE after LVAD placement and can lead to significant morbidity and the need for repeated endoscopic procedures. To our knowledge, this cohort represents the largest single-center report on endoscopic management of gastrointestinal bleeding events in the LVAD population. Our study confirms that gastrointestinal bleeding is a very common AE following LVAD implantation (36.2 %; n = 125/345). Interestingly, our cohort had a higher rate of bleeding in LVAD patients compared with national averages (36.2 % vs. 14.8-23.0 % in prior literature), potentially related to the significantly longer time patients spend in our region waiting for heart transplantation, coupled with the increasing number of LVAD devices being implanted as destination therapy [29]. Although moderate in severity, these gastrointestinal bleeding events may increase the risk of morbidity owing to lengthy hospitalizations and the high propensity to develop recurrent gastrointestinal bleeding, often resulting in the need for further endoscopic interventions. Endoscopy was demonstrated in our study to be a safe and effective diagnostic and therapeutic modality for managing bleeding lesions in the LVAD population. Nearly half of the endoscopic procedures (49.5 %) identified a source of bleeding, with an interventional success rate (in achieving hemostasis) of 96.2 %. No reported deaths were associated with endoscopic procedures or interventions in our study. A recent retrospective study reviewed a cohort of 87 patients with LVADs implanted at a tertiary care center, with a total of 164 gastrointestinal bleeding events [28]. The reported diagnostic yield of endoscopy was significantly lower (30 %) than in the current study. Given these findings, the authors recommended against routine endoscopic evaluation for occult gastrointestinal bleeding events unless hemodynamic instability is present or significant transfusions are required; however, with a higher diagnostic yield during the episode of

The pathophysiology of bleeding in LVAD patients is multifactorial, involving acquired von Willebrand factor deficiency, hemodynamic flow alterations, and coagulopathy related to the ongoing anticoagulation needed to prevent pump dysfunction/thrombosis. The majority of bleeding in these patients is due to GIADs; however, several studies have also reported peptic ulcer disease as a very common source of bleeding in LVAD patients, owing to nonsteroidal anti-inflammatory drug (NSAID)-induced damage to the gastrointestinal mucosa coupled with platelet-inhibitor use and anticoagulation [30]. Endoscopy remains the mainstay in the evaluation of gastrointestinal bleeding events in these patients, with our study indicating that upper endoscopy has the highest diagnostic yield. Push enteroscopy can be considered as an early intervention in recurrent gastrointestinal bleeding events, specifically in patients who present with melena, iron-deficiency anemia, or occult gastrointestinal bleeding (as the location of the bleeding lesion is most often in the small bowel). This approach is supported in the literature, as it has been reported that performing an enteroscopy early in the course of a gastrointestinal bleeding event may reduce transfusion requirements and increase endoscopic diagnostic yield [31]. For hemodynamically stable patients and those with persistent gastrointestinal bleeding despite a negative upper and lower endoscopic evaluation, video capsule endoscopy (VCE), with or without computed tomography enterography or a tagged red blood cell scan, should be performed.
A device-assisted enteroscopy (DAE) would follow, as the information gained from VCE and/or radiologic exams would provide the endoscopist with an appropriate target for the procedure. Multiple algorithms have been proposed in the literature [28,32,33] for the evaluation of gastrointestinal bleeding in LVAD patients; however, there is no standardized guideline for the evaluation of these patients, and data regarding screening high-risk patients for bleeding prior to LVAD placement are scarce. A recent study reviewed 64 gastrointestinal bleeding events in LVAD patients to evaluate the risk of mortality after the index gastrointestinal bleeding event [34]. Their findings suggest increased mortality after the index event, along with factors that could potentially be used in the preimplantation period for risk stratification [35]. Our data suggest that there may be a subset of LVAD patients with a predisposition to develop gastrointestinal bleeding. In this cohort, patients who experienced a bleeding event were more inclined to have recurrences; however, there was also a significant proportion (63.8 %) in whom gastrointestinal bleeding never developed during the post-implantation course. The independent risk factors of female sex, African-American race, DM-II, and pulmonary hypertension were each predictive of a gastrointestinal bleeding event. In the existing literature, the only consistently reported risk factor for gastrointestinal bleeding in LVAD patients has been older age at the time of implantation [6, 16-19]. If the risk factors demonstrated in our cohort can be validated, appropriate preimplantation counseling and/or endoscopic screening for patients at high risk for post-LVAD gastrointestinal bleeding may be considered; however, careful attention should be paid to the risk/benefit profile of such an approach, as pre-LVAD implant patients are at high risk for cardiac events.

There are several limitations of this study, inherent to its retrospective design. Our observations and clinical decision-making in the management of gastrointestinal bleeding events reflect our single practice; thus, the results may be difficult to generalize to other institutions. Data were only collected on the frequency of endoscopic procedures, not necessarily the order in which they were performed.

Conclusion

As the prevalence of heart failure in the general population continues to rise, the number of patients requiring LVAD implantation is also expected to increase. Evaluation and management of gastrointestinal bleeding in the LVAD population should be well understood by physicians practicing in high-volume LVAD institutions. Our study, with one of the largest cohorts (to our knowledge) of gastrointestinal bleeding in LVAD patients, demonstrated that endoscopy is a useful, effective, and, most importantly, safe modality, despite this being a high-risk population that requires long-term anticoagulant use. A systematic approach is necessary to manage the LVAD population, as the risk of developing gastrointestinal bleeding (most often recurrent GIAD) is inherent to the requirement for long-term anticoagulation. Thus, a multidisciplinary approach is key in the management of these patients, with close collaboration between the gastroenterology and cardiology teams regarding the timing of endoscopy and anticoagulation management. Further studies are needed regarding patient-specific factors that may predict a gastrointestinal bleeding event, the role of screening endoscopy, and the optimal standardized management approach for this population.
European Systemic Risk Board in Legal and Economical View: Empirical Analysis in Front of Theory

This study examines the legal framework for municipal governance and analyses whether, in practice, the relationships follow the pattern intended in the law. Today there is also growing recognition of the important role that community-based organizations (CBOs) can play in supporting young people's postsecondary aspirations and success. Business takes place worldwide, in a huge diversity of societies and between widely varying organizations. The business environment has become more complex, with expanding and deepening ties between societies and between the many organizations within those societies. Moreover, many large organizations now see themselves as truly global in scope, not rooted in any one society. This study empirically examines the impact of debt management policies on the borrowing costs incurred by state governments when issuing debt in the municipal bond market, a topic at the focus of public administration reforms.

Introduction

The aim of this article is to present an overview of the international environment, highlighting its differing levels, from local and national to regional and international.

Assessment of the Municipal Governance Situation

Municipalities want to know which activities they can do on their own without restriction, which activities require approval from above, and which are the responsibility of the provincial government but require that municipalities be consulted. They want to be able to address public needs as expressed by their residents, either by having the full responsibility to carry out projects or by having the right to participate and be consulted on all work in their jurisdictions. And, of course, they want to know how services and projects are to be funded. The European Union (EU) is an economic and political union of 28 member states that are located primarily in Europe. The EU operates through a system of supranational independent institutions and intergovernmentally negotiated decisions by the member states. Institutions of the EU include the European Commission, the Council of the European Union, the European Council, the Court of Justice of the European Union, the European Central Bank, the Court of Auditors, and the European Parliament. The European Parliament is elected every five years by EU citizens.

The EU traces its origins to the European Coal and Steel Community (ECSC) and the European Economic Community (EEC), formed by the Inner Six countries in 1951 and 1958, respectively. In the intervening years the community and its successors have grown in size through the accession of new member states and in power through the addition of policy areas to its remit. The Maastricht Treaty established the European Union under its current name in 1993. The latest major amendment to the constitutional basis of the EU, the Treaty of Lisbon, came into force in 2009.

The EU has developed a single market through a standardized system of laws that apply in all member states. Within the Schengen Area (which includes 22 EU and 4 non-EU states), passport controls have been abolished. EU policies aim to ensure the free movement of people, goods, services, and capital; enact legislation in justice and home affairs; and maintain common policies on trade, agriculture, fisheries, and regional development. Unfortunately, the current legal framework lacks clarity on both the functions and the revenue sources permitted to municipalities under law.
The Law of Municipalities 1379 (2000), while seemingly permitting a wide array of functions, qualifies almost every one of them: municipalities can "take measures towards participation in construction," or perform a given function only "according to the law," "through relevant offices," or "based on relevant regulation." While the current Law of Municipalities does not mention municipal revenue sources, the City Charges Act lists 83 fees, user charges, and taxes that municipalities may collect, thus addressing the issue indirectly. However, we have heard from municipalities that have tried to collect revenues based on this list and have had their efforts rejected out of hand by higher levels of government. The level of confusion about the roles, responsibilities, and revenues of municipalities is considerable.

The discussion focuses on the main identifying features of the business organization, including ownership and decision-making structures, as they adapt to differing geographical contexts. It is emphasized that the multinational enterprise (MNE), central to international business activities, covers a variety of organizations, large and small, and that the growing interactions between organizations, governmental, and societal players are resulting in a broader view of the business organization in society. Clearly there is a disconnect between the law as written and the reality on the ground. More importantly, the criteria in the current law may not reflect the desire of small communities for the representation and empowerment to solve their own problems that comes with their designation as a municipality. This argument looks at varying perspectives on globalization, often argued to be the defining characteristic of our times.

What are community-based organizations and why are they important?

CBOs are public or private, nonprofit organizations engaged in addressing the social and economic needs of individuals and groups in a defined geographic area, usually no larger than a county. The college access and success efforts of CBOs vary, depending on their mission and vision. For example, direct service organizations provide college information, advice, and application assistance to individual students and families; organize college awareness workshops, financial aid nights, and college fairs; and support students in high school through their college years. Youth development organizations often offer extended learning opportunities such as traditional after-school activities with an academic focus, apprenticeships and internships, summer enrichment and travel, and activities on college campuses.

The analysis of the existing organizational structure of the municipality identified a number of deficiencies that could be addressed in a capacity building approach focused on:
• department restructuring;
• reorganization of personnel assignments based on functional needs and to-be-developed job descriptions;
• capacity building, including training to enhance the skill sets of department managers and staff, focused initially on techniques to improve revenue collection and solid waste management services.

The intent of this initial limited capacity building approach is to gain an early buy-in of municipal government officials through efforts that could achieve early success. There is, however, a need and an expressed interest for a more comprehensive capacity building program that could provide technical assistance, training, mentoring, and systems development to supplement the initiative.
Municipalities have few support systems and facilities for the work they are responsible for. With several exceptions, the cities we have visited have no computers or management information systems, no service delivery equipment, few vehicles, and often no electricity in the dilapidated offices in which they work. Support programs are vitally needed. Integrated student services organizations work with schools to identify and assist individual students needing support with academic issues and with non-academic problems that interfere with their school achievement, leveraging resources from appropriate agencies, including health care, social services, and counseling. Finally, community mobilization coalitions consist of public and private entities focused on systemic change to achieve an overarching community-wide goal, such as doubling the number of high school graduates or improving college completion within a specified time period. (Source: Root Cause, Colliner A, 2011)

EU and Maastricht Treaty legal organization

The creation of a European single currency became an official objective of the European Economic Community in 1969. However, it was only with the advent of the Maastricht Treaty in 1993 that member states were legally bound to start the monetary union no later than 1 January 1999. On this date the euro was duly launched by eleven of the then 15 member states of the EU. It remained an accounting currency until 1 January 2002, when euro notes and coins were issued and national currencies began to phase out in the eurozone, which by then consisted of 12 member states. The eurozone (constituted by the EU member states that have adopted the euro) has since grown to 18 countries, the most recent being Latvia, which joined on 1 January 2014. All other EU member states, except Denmark and the United Kingdom, are legally bound to join the euro when the convergence criteria are met; however, only a few countries have set target dates for accession. Sweden has circumvented the requirement to join the euro by not meeting the membership criteria.
Organizing the Effort by Developing a Management Team and Partnership Network

The mayor and the body of councilmen must take pride in knowing their community and its residents. Community goals and image must be constructed and made visible to members of the community.
• Level of organization and full democratic representation: Representative organizations are necessary to serve as a forum for persons to give input. It is also important to determine the interests of each segment of the population.
• Political will: Local authorities must demonstrate the political will to carry out the plan, i.e., a readiness to set objectives and to accomplish them.
• Availability of basic resources: Resources and funds are necessary.
• Objectives achieved and assumed by the population: The organization needs to have ownership of achievable objectives that combine individual interests with those of the community.
• Social sector defined in and for each objective: Each development objective has to have its social subject defined.
• Schedule of activities: The plan must be concrete, with dates and deadlines set in advance and publicly announced.
• Permanent evaluation: Performance, accomplishments, progress, and failures must also be public, to lay the foundation for new objectives and participation practices.
• Information and transparency: Information is the most secure basis for transparency in public life.

The euro is designed to help build a single market by, for example: easing travel of citizens and goods, eliminating exchange rate problems, providing price transparency, creating a single financial market, ensuring price stability and low interest rates, and providing a currency used internationally and protected against shocks by the large amount of internal trade within the eurozone. It is also intended as a political symbol of integration and a stimulus for more. Since its launch the euro has become the second reserve currency in the world, with a quarter of foreign exchange reserves being held in euros. The euro, and the monetary policies of those who have adopted it in agreement with the EU, are under the control of the European Central Bank (ECB).

The Importance of this Study

This study empirically examines the impact of debt management policies on the borrowing costs incurred by state governments when issuing debt in the municipal bond market. Based on positive political theory and the benefit principle of taxation, it is proposed that states that adhere to best-practice debt management policies transmit signals to the credit ratings, the investment community, and taxpayers that the government should meet its obligations in a timely manner, resulting in lower debt costs. The ECB is the central bank for the eurozone and thus controls monetary policy in that area, with an agenda to maintain price stability. The 45%-55% ordinary-to-development budget ratio prescribed in the Law on Municipalities is not applied consistently, leading to situations where some municipalities spend more in the development budget at the expense of meeting their recurrent requirements.

Many municipalities are unable to adjust their expenditures during the budget year because of the belief that they may not make any changes to the approved budget, thus limiting their budget execution flexibility. Another key area of concern at the municipal level of government in particular is the lack of a transfer of funds from the central to the municipal level to equalize financial resources and service delivery capabilities across municipalities.
It is at the centre of the European System of Central Banks, which comprises all EU national central banks and is controlled by its General Council, consisting of the President of the ECB, who is appointed by the European Council, the Vice-President of the ECB, and the governors of the national central banks of all 28 EU member states. Foreign companies continue to face significant challenges in entering the market, particularly in areas that touch on property rights. Despite advancements, government bureaucracy and inefficiency greatly hamper the ability to hold successful, open, and transparent government tenders. Local governments are under-equipped to handle essential tasks. These are primarily to manage transition, provide the regulatory and administrative framework for the market, establish relations with the international community, and negotiate and manage aid flows. But these tasks must be carried out while re-establishing order and maintaining social safety nets, under conditions of budget stringency. (Dumi A. AJIS 2012) The problem lies in policies that respond to the bond market but virtually exclude any other community interest in policy making.

Six key areas of consideration, taken together, provide a comprehensive approach to the design and implementation of a municipal governance improvement agenda. To develop a coherent program of building the capacity of municipalities to better deliver needed public services, it is important to frame, debate, and decide critical policy options. A well-defined policy framework can serve as a process to engage stakeholders and as an expression of the general purpose and more specific objectives of a municipal governance program.

To be effective, a policy framework for municipal governance must be both comprehensive and strategic. The set of components below represents a fairly complete universe of the critical issues that need to be addressed in the process of developing a policy framework:
1. Building capacity to develop policies to guide a program of governance improvement
2. Clarifying/reorganizing the roles, structures, and functions of municipalities
3. Providing a comprehensive program of capacity building activities
4. Providing the resources required to do the work of municipalities
5. Setting performance incentives/sanctions
6. Partnering to gain the inter-governmental commitments needed to effect a change agenda
7. Fostering citizen participation in local governance
8. Achieving sustainability of governance improvement initiatives

It is recommended that openness in government and allowing taxpayers to understand government services are essential goals in ensuring responsible citizen oversight and in making taxpayers less likely to propose restrictive initiatives or force dramatic political or management changes through the electoral process or bond referenda.
Albanian Local Government Programs and Projects

With reference to the European Community, the differences among individual states' policies applied to grow their own international trade are small. What, then, is the real situation of the international business environment in states that are not members of the European Community, such as Albania? The external environment includes an array of dimensions, including economic, political, legal, and technological factors. The article analyses their impacts on societies and the environment, and considers the roles of governments and firms in the wider stakeholder context. The Access to Information for Albanian Community Involvement program focuses on: (1) training of public officials, local government representatives, and civic groups on freedom of information; (2) media outreach to inform and update the public and the government on freedom of information; (3) production of a freedom-of-information (FOI) website; (4) improving mechanisms for the proactive publication of government-held information; and (5) free legal counseling to citizens and community organizations on FOI. The project will reach this objective through three programmatic components: (1) local government and civil society collaboration, (2) fostering civic participation, advocacy, and activism, and (3) facilitating decentralization and local fiscal autonomy. The project aims to support Albania's rapprochement through developing new business partnerships and regional professional networks; engaging civil society in alliance-building to further contribute to Albania's normalization; and supporting government and non-government efforts toward rapprochement with research. (SJAS 2010)

Objectives

These results suggest the product of a push-pull process between the economic forces of the bond market on the one hand and politics on the other, pulling the administrative function toward efficiency in the former and toward the democratic values of responsiveness and transparency in the latter. Administrative changes in Albanian public policies act as an obstacle to operating foreign investment; the factors for comparison with the EU include:
• a dynamic local government leadership;
• a healthy climate of cooperation with business;
• improving the quality of legal rules.

Institutional Mission
• Promote the economic development of the city.
• Efficiently provide public services.
• Administer funds to raise the population's standard of living.
• Regulate activities such as the healthfulness of the district and its ecological conservation.
Legal Framework

The patchwork of formal but out-of-date legal authorities and regulations, and of informal and traditional customs, arrangements, and understandings about the roles and structure of local government, results in a fragmented approach both to the delivery of public services and to attempts to improve local governance. A new legal framework is needed to set the parameters for municipal governance.

Strategic Framework

The strategic framework concerns the delegation and regulation of the responsibilities of municipalities based on the deliberations and recommendations of the policy framework. Suggestions are made as to the changes that would be required to the Law of Municipalities to clarify, strengthen, and make explicit the delegation of responsibilities, authorities, and sources of revenue for municipalities.

The principal arguments are treated below. Through its programs in the areas of anti-corruption, local governance, rule of law, alternative media, and parliamentary assistance, USAID is working with civil society and reformers within the Government of Albania to help create opportunities that help Albania advance the country's democratic reform at both the local and national levels. Based on an "active citizen" approach to democratic development, USAID is broadening efforts to foster greater citizen participation at the grassroots level and strengthening advocacy NGOs by providing core funding, advocacy grants, and tailored technical assistance.

Graph 2: The indicators of the GDP; selected indicators for the responsiveness of public administration. Source: B

The business environment may be visualized in terms of layers, beginning with the immediate internal environment within the organization and moving outwards to the external environment surrounding the business and influencing its organization and operations. While only a few decades ago these external aspects were seen as centering on the home country of the business, the environmental horizon of business has now widened to take in a host of international forces, which interact with national and local factors.

For the larger municipalities with more substantial capacities, there is an opportunity to build in-house capacity for an economic development office. Effective local economic development requires the active participation of three key constituencies: the private sector, local government, and citizens. Afghanistan's economy has failed to grow and diversify in part because its local governments do not understand the role of municipalities in economic development and do not have the capacity to play that role effectively.
Service functions require better management, which in turn requires better training, structures, systems, and standards. A central point of direction is needed to: provide leadership on policies governing service delivery; better manage the service operations of the municipality; and better communicate the purposes and needs of the service departments, under the Municipal Governance Strategic Framework, to the mayor, the council (when they are elected), and the public. The introduction of a Managing Director for Service Delivery, on a performance contract and professionally trained and certified, could ensure a higher level of service.

Tensions exist between an organization and the external forces that impact on it, from local through to international, and these tensions are reflected in its internal environment. When we think of international business, we tend to think of large multinationals, but most of the world's businesses are very much smaller, and, increasingly, these smaller firms are becoming international in their outlook. Nowadays, thanks to advances in communication technology and transport, it is easier for companies to expand a variety of business activities across national borders. A large American corporation such as IBM may seem to have very little in common with a family-run firm in Tirana that sells its products or purchases raw materials abroad and goes on to produce its products abroad. Even if their answers on how to achieve a smooth-running and efficient organization and how to satisfy the needs of customers may differ, both companies in their own way confront universal issues.

As we know, in the past the most important factors influencing firms were cultural and social, legal and technological. Now the factors that compose economic policy and influence the international environment are not so unpredictable. The problem is that they are complicated for the different decision-making stakeholders at a time of a stagnant economy.

Methodology

From the interviews, it is clear that role confusion exists between the various organs within municipalities. This has led to uncertainty and turf battles, shifting the energy of the council to technical issues and impeding efficient service delivery. Municipalities have not been sufficiently able to design and implement role divisions and agree on workable protocols. Local government legislation establishes various organs within the municipality and broadly defines the functions of these organs.

Capacity Building for Managers

Management training is needed in a variety of capacities, including but not limited to:
1. Planning: Deciding what must be done, when, and by whom;
2. Organizing: Scheduling the effective use of resources (people, materials, and equipment) to implement plans;
3. Leading: Influencing the actions of individuals and groups in order to obtain the desired results;
4. Team-building: Developing your workers into a cohesive team; fostering a sense of independence and partnership among your work group;
5. Decision-making: Providing workers with the opportunity to make contributions and feel part of the decision-making process; exhibiting both the firmness of decisive action and decision-making flexibility depending on the situation;
6. Problem-solving: Identifying problems, gathering and analyzing relevant facts, and selecting the best alternatives; the ability to cope with complexity;
7.
Coordinating: Building and maintaining good relations with the public; working cooperatively with other agencies and departments.

The real situation of the financial sector in Albania

Local government legislation also creates various instruments for accountability and oversight. Importantly, municipalities themselves must define the precise roles of their organs in delegations and terms of reference. These role definitions, terms of reference, and instruments of accountability are intended to produce clear and sound internal municipal governance arrangements. This, in turn, is meant to define and shape the relationships within the municipal council and between the council and the administration.

The international economic policy in Albania

The research methodology used to complete this article is to compare the latest international economic policies with respect to the different features present, or absent, in them. To pursue this purpose we use reliable research sources such as the European Community, the Ministry of Economy of Albania, etc. Albanian companies act in an environment that is more or less favorable to them. The environment is significantly limited by the institutional framework that sets the rules of the game, is controlled by the public administration, and is responsive to the needs of foreign companies. In the empirical part of the paper, we analyze selected indicators for the responsiveness of the public administration in selected Albanian programs, compared with the European Union (EU).

Public Debt and the Albanian Situation

Analyzing the relationship between economic shocks and public debt that the European Community has experienced lately (we refer in particular to the GDP crises of two member states, Greece and Italy), budgeting decisions in the context of local economic shocks reveal local fiscal policy priorities. The analysis of Albanian incomes finds that current expenditure paths are more influential when making cuts than when expanding budgets. The government supports programs that help those in need who strive to provide for their families, and that provide the youth of the community with the tools necessary to become leaders. The focus is also on two primary areas, to ensure that the company provides meaningful contributions to the community. Our corporate contributions of time and money go to promoting philanthropy and to youth leadership development through initiatives that produce measurable outcomes and sustainable results in these two areas.

Conclusions and Recommendation

Administrative changes in Albanian public policies, as an obstacle to operating foreign investments in comparison with the EU, have progressed in Albania in areas such as:
• development;
• international investment funds;
• entrepreneurship ambition;
• marshalling resources to exploit business opportunities;
• state regulatory, statistical, and tax reporting.
Local communities are also seen to make some short-term use of reserve funds when facing negative expenditure pressures, but these funds are not used to completely prevent expenditure cuts. Furthermore, communities do not use debt as a mitigating response to external tax base pressures, but instead alter expenditure patterns. Using EU measurements and assessments of different areas of business, namely the production of goods and services, activity can take place smoothly in Albania. PSI worked closely with high-level public officials and community leaders and provided critical guidance and strategic planning. On an ongoing basis, PSI continues to adapt to the changing needs of the communities it serves, to legislative and administrative directives, and to research findings and promising practices from the field. These efforts include:
• providing customized training and learning opportunities for partners;
• managing the daily activities of project staff;
• monitoring large-scale project implementation benchmarks;
• monitoring new opportunities for developing and LC of new business;
• managing human resources, youth, and women.

Additionally, PSI conducts field research that may lead to adjustments to the program in order to meet larger project goals. This has resulted in a high level of trust with state officials and more effective service delivery, because program adjustments are made in a timely manner. Employees strive to support and improve the communities in which they live and work.

Results and Profits from this Research

Entrepreneurship ambition has worked to support changes in Albanian law that would offer anti-discrimination protections in keeping with international standards. Another point of administrative change in Albanian public policies, as an obstacle to operating foreign investments in comparison with the EU, is that international investments contributed to the improvement of Albania's financial regulatory environment, which has strengthened public confidence in the banking system and has provided a more secure, efficient, and transparent financial system to meet the credit, savings, and insurance needs of businesses and individuals.

Results of Paper Research

Whatever cannot be solved in strictly institutional or legal terms needs to be solved through agreed protocols, gentlemen's agreements, and working arrangements. The result is a carefully crafted system of governance and oversight whose success is dependent on all constituent parts working in sync. Practically, if one component of the system is deficient, it has a detrimental knock-on effect (European Court of Human Rights, ECHR, 2014).
Rule of Law

The Albanian Development Program's core programmatic objectives include: (1) increasing the judiciary's knowledge of the European Court of Human Rights (ECHR); (2) enhancing the institutional capacity of the Independent Bar Association; (3) improving the quality of legal education; and (4) fostering an effective environment for human rights protection.

References
Colliner, A. 2011. Access and Success: Education and Youth Development. Cambridge, MA: Root Cause.
Hooker, Sara and Betsy Brand. 2009. Programs Support Youth on the Path to College and Beyond. Washington, DC: American Youth Policy Forum.
Increased risk of secondary bladder cancer after radiation therapy for endometrial cancer

To investigate the effect of radiation therapy (RT) after endometrial cancer (EC) diagnosis on the risk of developing secondary bladder cancer (SBC), as well as on the survival outcomes of patients who developed SBC, data were extracted from the Surveillance, Epidemiology, and End Results database between 1973 and 2015. The chi-squared test was utilized to compare clinicopathological characteristics among different groups. The Fine-Gray competing risk model was utilized to assess the cumulative incidence and risk of developing SBC in EC survivors. The Kaplan-Meier method and the Cox regression model were used for survival analysis. As a result, a total of 108,060 EC patients were included, of whom 37,118 (34.3%) received RT while the others did not. The incidence of SBC was 1.31%, 1.76%, and 0.96% among patients who received prior brachytherapy, external-beam radiotherapy (EBRT), and neither, respectively. Both the EBRT group (standardized incidence ratio (SIR) = 2.24, 95% CI [1.94-2.58]) and the brachytherapy group (SIR = 1.76, 95% CI [1.44-2.13]) had a higher incidence of SBC than the general population of the USA. The competing risk analysis demonstrated that receiving EBRT (HR = 1.97, 95% CI [1.64-2.36]) or brachytherapy (HR = 1.46, 95% CI [1.14-1.87]) were independent risk factors for developing SBC. A survival detriment was observed only in SBC patients who had received prior EBRT after EC diagnosis, but not prior brachytherapy, when compared with those who did not undergo RT. Additionally, there were no significant survival differences between primary bladder cancer and SBC with or without a prior RT history. Patients who underwent RT after EC had an increased risk of developing bladder cancer as a second primary cancer. The prognosis of these SBC patients varied depending on the type of RT received after EC diagnosis.

Similar findings were also observed in a previously published study 14. Hence, the increased risk of developing a second primary malignancy after RT remains controversial. Specifically, the bladder lies within the irradiation field when RT is conducted for endometrial cancer, which is located in the uterus. Considering the early and late toxicity associated with RT, the current study aimed to assess the impact of RT on the risk of developing secondary bladder cancer (SBC) in EC survivors, as well as on the prognosis of patients who developed SBC, by utilizing the Surveillance, Epidemiology, and End Results (SEER) database. Our investigation might provide an important clue for future RT selection, patient counseling, and prevention strategies among EC survivors at increased risk of developing SBC.

Materials and methods

Database and case selection. We performed a retrospective study by utilizing the custom SEER database [Incidence-SEER 9 Regs Custom Data (with additional treatment fields), Nov 2017 Sub]. The SEER program, a database established by the National Cancer Institute of the U.S., collects data on cancer patients accounting for approximately 28% of the U.S. population 15. The SEER*Stat software (version 8.3.8, National Cancer Institute, Washington, USA) was used to access the data from the SEER database. Patients who were diagnosed with EC (site codes C54.0-54.3, C54.8, C54.9, and C55.9) between 1973 and 2015 were extracted from the database.
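As a minimal sketch of this extraction step, the snippet below filters a flat export by the stated site codes and diagnosis years; the authors performed the selection within SEER*Stat itself, and the file and column names here (seer_export.csv, site_code, year_dx) are hypothetical.

import pandas as pd

# ICD-O-3 topography codes for endometrial cancer given in the Methods
EC_SITES = {"C54.0", "C54.1", "C54.2", "C54.3", "C54.8", "C54.9", "C55.9"}

seer = pd.read_csv("seer_export.csv")
ec = seer[seer["site_code"].isin(EC_SITES) & seer["year_dx"].between(1973, 2015)]
print(f"EC records extracted: {len(ec)}")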
Only patients who had undergone endometrial cancer-specific surgery and had endometrial cancer as their first malignancy were eligible. The exclusion criteria were as follows: (1) patients younger than 18 years old at diagnosis; (2) patients with unknown survival time; (3) patients with unknown race; (4) patients whose diagnosis was made at autopsy or based on a death certificate. Eligible endometrial cancer patients were grouped into two subgroups based on whether they received RT or not. A subsequent SBC was eligible when it occurred more than 12 months after EC diagnosis. Patients in the no-RT cohort who developed SBC were classified into group A, while those in the RT cohort who developed SBC were classified into group B. Based on the RT modality, group B was further divided into patients who received brachytherapy (group C) and those who received external-beam radiation therapy (EBRT) (group D). Moreover, patients were also extracted from the SEER database if diagnosed with a first primary bladder cancer (PBC) from 1973 to 2015. In order to reduce possible selection bias in the survival comparison, three cohorts of female PBC patients were matched respectively to groups A, C, and D by using the propensity score matching (PSM) method with a ratio of 5:1. The detailed flowchart for patient selection is shown in Fig. 1. All methods were performed in accordance with the relevant guidelines and regulations. The SEER database is an open database. Data released from the SEER database do not require informed patient consent, because cancer is a reportable disease in every state of the United States. The present study complied with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards.

Covariates and outcomes. Multiple variables were included in this study, including demographic characteristics (age, race, and year of diagnosis), disease characteristics (stage and histologic grade), and treatment modalities (radiotherapy and chemotherapy). Specifically, races including American Indians, Asians, Alaska Natives, and Pacific Islanders were classified as other races. Continuous variables, such as age and year at diagnosis, were transformed into categorical variables. According to the "radiation recode" in the SEER database, radiotherapy for endometrial cancer was classified into EBRT and brachytherapy (radioactive implants). Patients who received a combination of two or more RT modalities were excluded from this study. The primary outcome was to evaluate the risk of developing SBC among patients who had not received EC-specific RT as well as among those who had received different RT modalities. The secondary outcome was to assess the impact of EC-specific RT on the overall survival (OS) and bladder cancer-specific survival (BCSS) of those SBC patients, compared with matched PBC patients.

Statistical analysis. Demographic and clinical characteristics of the different cohorts were summarized by descriptive statistics and compared by using Pearson's chi-square test. The standardized incidence ratios (SIRs) for SBC after EC diagnosis were defined by calculating the ratio of observed-to-expected (O/E) incidence, which represents the change in the risk of developing SBC after EC diagnosis in comparison with the general US population. The SIR analysis was performed by using the SIR tools in the SEER program software (SEER*Stat 8.3.6).
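A minimal sketch of the O/E computation follows, with an exact Poisson 95 % confidence interval on the observed count (a standard construction via chi-square quantiles); it is offered only to illustrate what an SIR tool computes, and the counts fed to it are invented, not values from this study.

from scipy.stats import chi2

def sir_with_ci(observed: int, expected: float, alpha: float = 0.05):
    # SIR = O/E; exact Poisson CI: lower = chi2.ppf(a/2, 2O) / 2E,
    # upper = chi2.ppf(1 - a/2, 2(O + 1)) / 2E
    sir = observed / expected
    lower = chi2.ppf(alpha / 2, 2 * observed) / (2 * expected) if observed else 0.0
    upper = chi2.ppf(1 - alpha / 2, 2 * (observed + 1)) / (2 * expected)
    return sir, lower, upper

sir, lo, hi = sir_with_ci(observed=48, expected=27.3)  # invented counts
print(f"SIR = {sir:.2f} (95% CI {lo:.2f}-{hi:.2f})")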
In order to assess the risk of SBC dynamically, the SIRs were stratified by latency time after EC diagnosis and by age and year at EC diagnosis. Fine-Gray competing risk analysis was utilized to evaluate the risk of developing SBC after EC diagnosis. Specifically, SBC occurrence was considered an event, and all non-SBC deaths were defined as competing events. The cumulative incidence curve for SBC occurrence was plotted and compared by Gray's test 16. In addition, univariate and multivariable Fine-Gray competing risk regression models and Cox proportional hazards regression models were built to analyze BCSS and OS, respectively. By means of a backward selection method, variables with P < 0.05 in univariable analysis were included in the multivariable analyses. Kaplan-Meier curves were plotted for OS and BCSS in the different cohorts, and the log-rank test was utilized for comparisons between curves. Descriptive statistics and the Cox analyses were conducted by utilizing SPSS 24.0 (IBM Corp). The Fine-Gray competing risk analysis, cumulative incidence curves, and Kaplan-Meier curves were performed and plotted by utilizing the R software (version 4.0.0). We defined a two-sided P value of < 0.05 as statistically significant.

Results

Patient characteristics. A total of 108,060 EC patients were finally extracted from the SEER database, among whom 37,118 (34.3%) had received RT and 70,942 (65.7%) had not. Compared with patients who did not undergo RT, patients who received RT presented with older age, earlier diagnosis, poorer histologic differentiation, and a higher rate of white race and regional stage. More patients in the RT group received chemotherapy than in the no-RT group. Additionally, within the RT group, patients who underwent EBRT had a higher rate of grade 3-4 disease and of regional and distant stage (P < 0.001). The detailed clinicopathological features of these different groups are listed in Table 1. After a one-year latency since EC diagnosis, a total of 451 survivors in the RT cohort and 365 in the no-RT cohort had been diagnosed with an SBC by the end of follow-up. Moreover, among those SBC patients in the RT cohort, 211 had received EBRT and 107 had received brachytherapy.

Cumulative incidence of SBC in EC survivors. The cumulative incidence of SBC in EC patients who underwent RT or not was compared in this study. As shown in Fig. 2A, EC patients who received any RT were more likely to develop SBC than patients who did not receive RT, with cumulative incidences of 1.69% and 0.96% (P < 0.001), respectively. Moreover, when the RT group was subdivided into brachytherapy and EBRT categories, the difference in cumulative incidence between the RT and no-RT groups remained significant (no RT vs. brachytherapy: P = 0.017; no RT vs. EBRT: P < 0.001) (Fig. 2B). The SIRs of SBC were also calculated in EC survivors for the different radiotherapy modalities. In comparison with the US general population, the incidence of SBC was dramatically increased in both the brachytherapy (SIR = 1.76, 95% CI [1.44-2.13]) and EBRT (SIR = 2.24, 95% CI [1.94-2.58]) groups (Table 2). Nevertheless, a similar incidence of SBC was found among EC survivors who did not undergo RT (SIR = 0.99, 95% CI [0.89-1.10]). In sub-analyses, the SIR for SBC was stratified by latency time after EC diagnosis and by year and age at EC diagnosis. As shown in Table 2, no significant change in incidence was observed for patients who did not undergo RT in any subgroup in comparison with the US general population.
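The cumulative incidence estimates above treat non-SBC death as a competing risk; the from-scratch sketch below shows the estimator such curves rest on (an Aalen-Johansen-type calculation), as a plain-Python stand-in for the R routines the authors used. The event coding (0 = censored, 1 = SBC, 2 = competing death) is assumed for illustration.

import numpy as np

def cumulative_incidence(time, event):
    # CIF_1(t) = sum over event times t_i <= t of S(t_i-) * d1_i / n_i,
    # where S is the all-cause Kaplan-Meier survival and d1 counts SBC events.
    order = np.argsort(time)
    time, event = np.asarray(time)[order], np.asarray(event)[order]
    at_risk, surv, cif = len(time), 1.0, 0.0
    times, cifs = [], []
    for t in np.unique(time):
        mask = time == t
        d1 = int(np.sum(event[mask] == 1))      # SBC events at t
        d_all = int(np.sum(event[mask] != 0))   # SBC + competing deaths at t
        if d1:
            cif += surv * d1 / at_risk          # weight by survival just before t
            times.append(t); cifs.append(cif)
        if d_all:
            surv *= 1 - d_all / at_risk         # update all-cause survival
        at_risk -= int(mask.sum())              # drop events and censorings at t
    return np.array(times), np.array(cifs)

Unlike one minus a Kaplan-Meier curve that censors competing deaths, this estimator does not overstate the SBC probability, which is the point made in the Discussion about choosing the competing-risks framework.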
In the latency-SIR sub-analyses, EC patients who underwent brachytherapy had a significantly increased incidence of SBC only after more than 5 years of follow-up. In contrast, after EBRT the incidence of SBC was significantly increased from 1 year of follow-up onward in comparison with the US general population. In the sub-analyses by age or year of EC diagnosis, a dramatic increase in the incidence of SBC was observed in almost all subgroups of EC survivors, regardless of whether they received brachytherapy or EBRT (Table 2).

Effect of RT on the risk of developing SBC in EC survivors. To further investigate the effects of RT on SBC risk, Fine-Gray competing risk regression analyses were conducted.

Effect of RT on survival of SBC in EC survivors. Both OS and BCSS were compared between SBC patients treated with RT or not. As shown in Fig. 3, the Kaplan-Meier curves indicated that patients who received prior EBRT had significantly inferior OS (P = 0.002) and BCSS (P = 0.003) compared with patients who did not receive RT. However, our results also showed that brachytherapy had no significant effect on the OS and BCSS of SBC patients compared with no-RT patients. Additionally, no significant survival difference was observed between the brachytherapy and EBRT groups. In order to further understand the effect of RT on the prognosis of EC survivors who developed SBC, we then performed univariate and multivariate Cox and competing risk analyses for OS and BCSS, respectively. The multivariate analysis, as shown in Table 4, further demonstrated that prior EC-specific EBRT was an independent prognostic factor both for OS and BCSS. In order to understand whether bladder cancer differs after RT, Pearson's chi-square test was used to compare SBC patients with and without prior RT. However, no significant difference was observed (Supplementary Table 1). Subsequently, by using the PSM method, three cohorts of primary bladder cancer patients were matched separately to SBC patients previously treated with brachytherapy, EBRT, or no RT after EC diagnosis. After adjusting for propensity scores, all features were well balanced between the matched PBC patients and the SBC patients (Supplementary Tables 2-4). As shown in Fig. 4, the three cohorts of matched PBC patients all had OS and BCSS similar to those of SBC patients previously treated with brachytherapy, EBRT, or no RT after EC diagnosis.

Discussion

The present study concentrated on evaluating the effect of prior RT on the risk of developing SBC in EC survivors, as well as on the prognosis of subsequent SBC. Our data showed that the cumulative incidence of SBC in EC survivors who underwent brachytherapy or EBRT was dramatically higher than in patients who did not undergo RT. Both brachytherapy and EBRT were demonstrated to be independent risk factors for developing SBC in EC survivors. A survival detriment was observed only in SBC patients who had undergone prior EBRT, but not brachytherapy, after EC diagnosis, as compared with patients who did not undergo prior RT. Several previous studies have evaluated the risk of developing SBC in patients who received pelvic RT for pelvic cancer, with varying results. A publication by Wiltink et al., pooling data from three randomized trials, included a total of 2500 EC or rectal cancer (RC) patients and reported that patients who underwent brachytherapy or EBRT had no increased risk of SBC occurrence compared with patients who received surgery alone 13. A randomized trial reported by Onsrud et al.
randomly assigned 568 patients with stage I endometrial cancer to either vaginal radium brachytherapy (VBT) followed by EBRT or VBT alone. An increased risk of secondary cancer (HR = 1.42, 95% CI [1.01-2.00]) was observed in the EBRT group as compared with the control group. Importantly, the proportion of SBC was higher in the EBRT group (3.7%) than in the control group (2.6%) 17. However, the small sample size, with only 13 SBCs observed, limited the statistical power of this conclusion. Wang et al. assessed the risk of developing secondary cancer in rectal cancer survivors who received pre- or postoperative RT by using Taiwan's National Health Insurance Research Database 18. Their results showed an increased risk of SBC occurrence among patients who underwent postoperative RT, but not among those who underwent preoperative RT. When death is not considered as a competing event, the probability of developing SBC may be overestimated because of the number of patients who die before experiencing SBC. Hence, the Fine-Gray competing risk model was utilized in our study to analyze the risk of SBC occurrence. Our results demonstrated that EC survivors who underwent postoperative brachytherapy or EBRT all had an increased risk of developing SBC in comparison with patients who received no RT. This could be attributable to the fact that the typical radiation fields of both EBRT and vaginal cuff brachytherapy for EC include a portion of the bladder. Our results also indicated that EBRT results in a higher risk of developing SBC than brachytherapy, which could be explained by a dose effect of RT. A similar dose-dependent association was reported for SBC after pelvic RT for cervical cancer 19,20. Additionally, the SIR analysis in our study showed a significantly high probability of developing SBC among EC survivors who received prior brachytherapy or EBRT, as compared with the US general population. This result echoes previous studies concentrating on the risk of second primary malignancy in EC survivors 21,22. However, our data also confirmed that EC survivors who did not undergo RT had an incidence of SBC similar to that of the US general population, which further implies that SBC may be induced by the RT treatment. In the SIR sub-analyses stratified by latency time after EC diagnosis, an obvious increase in SBC incidence was observed during early follow-up after EBRT but not after brachytherapy. We also found that SBC incidence increased with longer follow-up after EC diagnosis, especially after a latency of over 10 years. Currently, the primary objective of surveillance in EC survivors is to detect recurrence or metastasis within 3-5 years of follow-up 23. However, our data suggest that EC survivors who received prior RT would benefit from long-term surveillance for SBC. Regarding the effect of age on the risk of SBC, our data showed that the youngest EC survivors who underwent RT had the highest risk of developing SBC; we suspect that this effect was also due to the RT. We suspect that an increasing number of EC survivors will result in increased SBC potential, because more cancer patients have been cured with the advancement of RT technology. In order to further study the impact of EC-specific RT on the prognosis of subsequent SBC, survival analyses were conducted to compare the OS and BCSS of SBC after RT with those of patients who did not undergo prior RT.
Our results demonstrated that patients who received prior EBRT had significantly inferior survival compared with patients who did not undergo prior RT. We suspect that an SBC arising after EBRT may have different biological behavior owing to the induction of distinct tumorigenic signaling pathways after radiation exposure. Moreover, no survival difference was observed between the brachytherapy and no-RT groups, implying that brachytherapy inflicts less radiation damage on the adjacent bladder than EBRT. Using the PSM method, we also demonstrated no significant survival differences between PBC and SBC patients, with or without a prior RT history. This result is supported by several previous studies demonstrating that a history of prior cancer has no impact on the survival of various cancers 24-26 . Several limitations exist in this study. Firstly, it is not entirely clear whether the radiotherapy was given only in the adjuvant setting or also as treatment in a recurrence setting. Secondly, the risk factors for endometrial and bladder cancer differ, and some predisposing factors, such as smoking history, lifestyle, and genetic susceptibility, are unavailable in the SEER database. Thirdly, it is impossible to determine from the SEER database whether the SBC were in fact EC recurrences. Fourthly, selection bias is inherent in our study owing to the intrinsic weaknesses of retrospective databases. In conclusion, the current study confirmed that patients who underwent RT for a primary endometrial cancer had an increased risk of developing bladder cancer as a second primary cancer. Prior EC-specific EBRT, but not brachytherapy, had an adverse impact on the survival of SBC patients. There was no significant survival difference between PBC and SBC patients, with or without a prior RT history. Data availability The data analyzed in this study were extracted from publicly available datasets, which can be found here: https://seer.cancer.gov/.
Simultaneous Heterotrophic Nitrification and Aerobic Denitrification of Water after Sludge Dewatering in Two Sequential Moving Bed Biofilm Reactors (MBBR) Water after sludge dewatering, also known as reject water from anaerobic digestion, is recycled back to the main wastewater treatment inlet at the Porsgrunn wastewater treatment plant, Norway, causing periodic process disturbance due to a high ammonium concentration of 568 (±76.7) mg/L and a total chemical oxygen demand (tCOD) of 2825 (±526) mg/L. The main aim of this study was the simultaneous treatment of reject water ammonium and COD using two pilot-scale sequential moving bed biofilm reactors (MBBR) implemented in the main wastewater treatment stream. The two pilot MBBRs each had a working volume of 67.4 L. The biofilm carriers used had a protected surface area of 650 m²/m³ with a 60% filling ratio. The results indicate that the combined ammonia removal efficiency (ARE) of the two reactors was 65.9%, while the nitrite accumulation rate (NAR) and nitrate production rate (NPR) were 80.2 and 19.8%, respectively. Over 28% of the reject water's tCOD was removed across the two reactors. Heterotrophic nitrification and oxygen-tolerant aerobic denitrification were the key biological mechanisms found for the ammonium removal in both reactors. The dominant bacterial family in both reactors was Alcaligenaceae, which is capable of simultaneous heterotrophic nitrification and denitrification. Moreover, other microbial families with comparable potential for the application of simultaneous heterotrophic nitrification and aerobic denitrification were found, including Cloacamonaceae, Alcaligenaceae, Comamonadaceae, Microbacteriaceae, and Anaerolinaceae. Introduction In conventional wastewater treatment processes, the reject water, which is the water after sludge dewatering of the anaerobic digestion effluent, is directly recycled into the main inlet without any pre-treatment. Normally, the reject water recycled to the inlet is about 1-2% of the main flow. However, reject water is a highly concentrated wastewater that can contain up to 25% of the total nitrogen load of the mainstream [1]. The main reject water constituents are ammonium (ca. 600 mg/L) and slowly degradable chemical oxygen demand (COD) (ca. 2 to 3 g/L). Mostly, the COD in reject water contains only a low fraction of biodegradable substances. Occasionally, the high ammonium and COD concentrations in reject water may cause process disturbance when recycled into the main treatment system. Hence, to avoid overload and process disturbance in the main treatment process, biological treatment of the reject water is vital [2]. The conventional biological treatment of nitrogen in reject water involves mainly nitrification by autotrophs under aerobic conditions and denitrification by heterotrophs under anaerobic conditions [3][4][5]. Nitrification is a biological process carried out by two groups of autotrophic bacteria, the ammonia-oxidizing bacteria (AOB) and the nitrite-oxidizing bacteria (NOB) [4,6]. However, the typical AOB and NOB species, known as Nitrosomonas and Nitrobacter, respectively, may not grow well or survive in wastewater containing high free ammonia or other toxic compounds [6][7][8]. For instance, inhibitory effects of reject water on NOB have been attributed to increased concentrations of free ammonia [9]. Furthermore, industrial effluents contain high concentrations of toxic compounds, such as phenols, cyanides, and thiocyanate, that inhibit AOB and NOB activity [8].
It has been found that, under such adverse environmental conditions for autotrophic bacteria, nitrification and denitrification can also occur with the help of heterotrophic nitrifying bacteria and oxygen-tolerant denitrifying bacteria [4,6,10,11]. Several bacterial species capable of combined heterotrophic nitrification and aerobic denitrification in biological nitrogen removal systems have been reported [3,[12][13][14]. Regarding the biochemical mechanisms of heterotrophic nitrification, the bacteria possess ammonia- and hydroxylamine-oxidizing enzymes to oxidize NH4+ to NO2−, and a large number of heterotrophic microorganisms also have the ability to convert NO2− to NO3− [11,15,16]. Heterotrophic nitrification and aerobic denitrification, which occur simultaneously, follow the overall pathway [6,17,18]:

NH4+ → NH2OH → NO2− → NO3− → NO2− → NO → N2O → N2 (1)

Additionally, it has been reported that some specific bacterial species can convert ammonium to nitrogen gas under aerobic conditions via a hydroxylamine intermediate, without nitrate and nitrite reductase activity [6,19]. Heterotrophic nitrification and aerobic denitrification have many potential advantages in wastewater treatment. The major advantages are (i) simultaneous nitrification and denitrification, (ii) fewer acclimation problems, and (iii) compensation of alkalinity (i.e., the alkalinity consumed in nitrification is compensated by the alkalinity generated during denitrification) [3,20]. Moving bed biofilm reactors (MBBR) have shown promising results in treating reject water through nitrification and denitrification processes. The MBBR is suitable for simultaneous nitrification-denitrification because oxygen diffusion through the biofilm creates an oxygen gradient, with aerobic conditions at the biofilm surface and oxygen-limited conditions deeper inside, alongside the growth of suspended biomass. However, the efficiency of an MBBR depends on the type of carrier material and the percentage of carrier filling. In particular, to avoid detachment of biofilms, some novel carriers are being developed through physical and chemical surface modifications that enhance biofilm adhesion to the carriers [21]. Moreover, flow and mixing conditions are crucial parameters for maintaining appropriate turbulence, which keeps the biofilm at a thickness suitable for full substrate penetration [18]. High turbulence causes more detachment of the biofilm from the carrier, while low turbulence results in slower movement of the carrier and a thicker layer of microorganisms in the biofilm. Few studies have implemented a pilot case of sequential MBBRs for reject water treatment using simultaneous heterotrophic nitrification and aerobic denitrification in the main wastewater treatment stream (Figure 1). Hence, this study proposed implementing two sequential MBBRs to treat reject water before it is recycled to the main inlet, to improve the overall treatment efficiency of the wastewater plant. The nitrogen and organic removal of the reactors were analyzed during the experimental period. Moreover, to promote the simultaneous heterotrophic nitrification and aerobic denitrification process, the development of the bacterial communities was characterized through microbial population sequencing analysis. This study promotes the use of MBBRs and the simultaneous heterotrophic nitrification and aerobic denitrification process in reject water treatment for two reasons.
Firstly, the slowly degradable particulate and colloidal organics in the reject water will be biodegraded. Secondly, the simultaneous heterotrophic nitrification and aerobic denitrification process reduces the nitrogen load caused by recycling reject water to the main inlet, so that the treated reject water causes less disturbance in the treatment process. Experiment Setup Two pilot moving bed biofilm reactors (MBBRs), from now onwards called MBBR R1 and MBBR R2, made of stainless steel and polycarbonate, were set up in series for this experiment (Figure 2). Each MBBR has a length, breadth, and height of 0.35, 0.35, and 0.55 m, respectively, resulting in a total volume of 67.4 L. The biofilm carriers used were of the BTW S® (Biowater Technology AS, Tønsberg, Norway) type, with dimensions of 14.5 × 18.5 × 7.3 mm and a protected surface area of 650 m²/m³. The total protected area was calculated using the working volume of the reactors and the carrier filling, which was 60% in each reactor. The water temperature in both reactors was set at 30 (±2) °C. The water was heated by an aquarium heater (EHEIM 300 W, max 1000 L, Germany), and to avoid temperature loss to the surroundings the reactors were covered with black PVC/NBR rubber plastic insulation sheets. MBBR R1 was aerated continuously, while the aeration in MBBR R2 was intermittent. The aeration in both reactors was controlled and regulated by an air flow meter with the air flow rate set to 22 L/min. The detailed design and operating parameters of the pilot-scale sequential MBBR reactors during the experimental period are given in Table 1. Reject Water Characteristics and Chemical Analysis The feed for both reactors was reject water from the centrifuge of the anaerobic digester (Figure 1). The reject water coming directly from the centrifuge was stored in a 1 m³ HDPE plastic IBC tank and pumped to MBBR R1 using a Watson Marlow 520 s peristaltic pump (WATSON MARLOW, Falmouth, UK). The effluent of MBBR R1 is the inlet of MBBR R2. The setup of the pilot reactors is shown in Figure 2.
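As a quick check of the design figures above, the total protected biofilm area per reactor follows directly from the working volume, the filling ratio, and the carriers' specific protected area. A minimal Python sketch (the helper name is ours; the values are those stated in the text):

```python
def protected_carrier_area(working_volume_l: float,
                           filling_ratio: float,
                           specific_area_m2_per_m3: float) -> float:
    """Total protected biofilm area (m^2) = working volume (m^3)
    x carrier filling ratio x protected specific area (m^2/m^3)."""
    return (working_volume_l / 1000.0) * filling_ratio * specific_area_m2_per_m3

# Values from the study: 67.4 L working volume, 60% filling, 650 m2/m3 carriers.
area = protected_carrier_area(67.4, 0.60, 650.0)
print(f"Protected biofilm area per reactor: {area:.1f} m^2")  # ~26.3 m^2
```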
To analyze the characteristics of the reject water, three samples were collected twice a week: the inlet to MBBR R1, the outlet of MBBR R1, and the outlet of MBBR R2. The samples were collected using a standard procedure and kept in a refrigerator before complete analysis of all the physical, inorganic, and organic chemical constituents in the reject water. The analyses included pH, total and soluble chemical oxygen demand (tCOD and sCOD), NH4-N, NO2-N, NO3-N, total solids (TS), total suspended solids (TSS), total volatile solids (TVS), volatile suspended solids (VSS), and alkalinity. The tCOD and sCOD were measured by chemical wet oxidation in a closed glass vial using a Spectroquant® Pharo 300 UV/VIS photometer (Darmstadt, Germany). For the tCOD analysis, the samples were first homogenized with an overhead stirrer for 2 to 3 min. Two milliliters of the sample was pipetted into Spectroquant COD cells with a measuring range of 300 mg/L to 3500 mg/L.
To measure sCOD, the sample was centrifuged at 10,000 rpm for 30 min and then filtered through a 0.45 µm pore-size filter (GxF multilayered, Acrodisc® PSF syringe filters) before analysis. The COD method corresponds to US standard 5220 D [22]. For ammonium nitrogen (NH4-N) analysis, the samples were centrifuged and then diluted 50× with Milli-Q water. A volume of 0.1 mL of the sample was pipetted into a Spectroquant ammonium-nitrogen cell test with a measuring range of 4.0 mg/L to 80 mg/L. The method is analogous to US standard 4500-NH3 [22]. For the nitrite-nitrogen (NO2-N) and nitrate-nitrogen (NO3-N) analyses, samples were centrifuged and filtered before analysis. Alkalinity was measured as mg CaCO3/L in the photometer at 605 nm. The pH of the samples was measured with a Beckman 390 pH meter after calibration with buffer solutions of pH 4.0 and 7.0. Temperature and dissolved oxygen (DO) were measured using a WTW Oxi 3310 (Weilheim, Germany) oxygen meter. Total suspended solids (TSS), volatile suspended solids (VSS), and pH were also measured according to US standards 2540 D, 2540 E, and 4500-H [22], respectively. Reject Water Element Analysis The reject water contains various metallic constituents. To identify them, inductively coupled plasma-mass spectrometry (ICP-MS) was applied. The samples were first diluted in 5% HNO3 and analyzed on an Agilent 8800 Triple Quadrupole ICP-MS (ICP-QQQ) with an SPS 4 autosampler. The analysis results were quantified against certified reference materials (CRM) and inorganic internal standards. Biomass Growth on Carriers The biomass growth per unit protected surface area (g/m²) was calculated from the measurement of total suspended solids (TSS) in g/L per the carrier surface area of 650 m²/m³. The biomass on the carriers was measured using ten carriers sampled every week from each reactor. The carriers were dried at 105 °C for 24 h, and the dried and cooled carriers were weighed (first weight, m1). After the first weight was measured, the carriers were washed thoroughly with sodium hypochlorite solution (NaOCl) and tap water to remove the attached biomass, dried again at 105 °C for 24 h, and weighed (second weight, m2). Biomass per unit protected surface area was calculated as

W = m × VC / A,

where W is the biomass per unit protected surface area (g/m²), m = m1 − m2 is the biomass per carrier (g), VC is the number of carrier pieces per m³, and A is the protected surface area (m²/m³). The specific nitritation rate (SNR), which is the nitrite produced per total carrier surface area per day (mg NO2-N/m²·d), and the specific denitrification rate (SDR), which is the total ammonium converted to nitrogen gas per total carrier surface area per day (mg N2/m²·d), in MBBR R1 and MBBR R2 were calculated as

SNR = Q × (NO2-Nout − NO2-Nin) / A,
SDR = Q × [(NH4-Nin − NH4-Nout) − (NO2-Nout − NO2-Nin) − (NO3-Nout − NO3-Nin)] / A,

where Q is the flow (L/day) and A is the total surface area of the biocarriers (m²). The ammonia removal efficiency (ARE) and the nitrite accumulation rate (NAR) were calculated as

ARE (%) = 100 × (NH4-Nin − NH4-Nout) / NH4-Nin,
NAR (%) = 100 × NO2-Nout / (NO2-Nout + NO3-Nout).

For the microbiome genomics analysis, samples from both reactors were collected according to the DNA extraction procedure, in such a way that no cross-contamination occurred during sampling. The samples were stored in a cold refrigerator in separate kits to avoid any further cross-contamination before the genomic analysis.
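The performance indices defined above are plain ratios and can be sketched in a few lines of Python. The function names are ours, and the flow and nitrite values used in the example are assumptions for illustration, not measurements from the study:

```python
def biomass_per_area(m_g: float, vc_per_m3: float, a_m2_per_m3: float) -> float:
    """W = m * VC / A: grams of biomass per m^2 of protected carrier surface."""
    return m_g * vc_per_m3 / a_m2_per_m3

def are_percent(nh4_in: float, nh4_out: float) -> float:
    """Ammonia removal efficiency (%) from influent/effluent NH4-N (mg/L)."""
    return 100.0 * (nh4_in - nh4_out) / nh4_in

def nar_percent(no2_out: float, no3_out: float) -> float:
    """Nitrite accumulation rate (%): NO2-N share of the oxidized N formed."""
    return 100.0 * no2_out / (no2_out + no3_out)

def snr(q_l_per_day: float, no2_out: float, no2_in: float, area_m2: float) -> float:
    """Specific nitritation rate (mg NO2-N per m^2 of carrier per day)."""
    return q_l_per_day * (no2_out - no2_in) / area_m2

# Illustrative values in the range reported for MBBR R1 (568 mg/L NH4-N in,
# ~296 mg/L out); the flow and NO2-N figures below are assumptions.
print(f"ARE = {are_percent(568.0, 296.3):.1f} %")
print(f"NAR = {nar_percent(216.0, 55.0):.1f} %")
print(f"SNR = {snr(200.0, 216.0, 0.0, 26.3):.0f} mg NO2-N/m^2/d")
```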
The standard protocol for DNA extraction and amplification from soil samples, which produces DNA pure enough for PCR amplification, was applied [23]. The samples were first well mixed and transferred to separate 30 mL test tubes, while the remaining material was stored as glycerol stocks at −80 °C as a reserve. The 30 mL samples were centrifuged at 4000 rpm for 10 min at 4 °C. The supernatants were transferred to new tubes for chemical analysis, while the precipitated cell mass was used for making pellets. Cell mass was taken from the pellet for DNA extraction using a FastDNA SPIN Kit for Soil and, whenever needed, was re-suspended in extraction buffer. DNA was extracted from all samples with sufficient yield and purity for further 16S analysis. Microbiome Gene Amplification by Polymerase Chain Reaction (PCR) The DNA was amplified by polymerase chain reaction (PCR), and the targeted regions of the bacterial 16S rRNA gene were amplified using primers. The 16S amplicon PCR succeeded in all samples, and an agarose gel image (600 bp) showed the amplified 16S fragment of every sample. The recovered PCR products were sequenced by Illumina MiSeq sequencing (i.e., 16S metabarcoding and microbial annotation). The sequences were clustered into different operational taxonomic units (OTUs) based on similarity [24]. Data Analysis The data generated from the biochemical analyses and onsite measurements were processed in Microsoft Excel for data visualization, mass balance analysis, and standard plotting. The mean of each biochemical and physical parameter over time was used for the statistical comparisons between the different measurement periods. Ammonium Transformation in the MBBR Reactors The average inlet reject water ammonium concentration fed to MBBR R1 during the experimental period was 568 (±76.7) mg/L. This resulted in an average specific ammonium loading rate (SALR) of 51.1 (±6.9) mg/m²·d per square meter of carrier surface. In the biological treatment process, close to half of the ammonia in MBBR R1 was converted to nitrite, and the effluent ammonia was reduced to 296.3 (±81.7) mg/L. Hence, in MBBR R1 the ammonia removal efficiency (ARE) was 48.8%, while the nitrite accumulation rate (NAR) and nitrate production rate (NPR) were 79.7% and 20.3%, respectively (Figure 3). The effluent of MBBR R1, fed as the inlet of MBBR R2, had a SALR of 25.8 (±7.3) mg/m²·d. Likewise, the ARE, NAR, and NPR in MBBR R2 were 32.3, 80.8, and 19.2%, respectively. In terms of the specific nitritation rate (SNR) per carrier surface area per day, MBBR R1 was 66% higher than MBBR R2 (Table 2), whereas the specific denitrification rate per carrier surface area in MBBR R2 was 25% higher than in MBBR R1. This shows that nitritation was relatively higher in MBBR R1, while denitrification was relatively higher in MBBR R2.
The Reject Water Element Composition The metallic and nonmetallic element analysis of the reject water from the effluents of both MBBR R1 and MBBR R2 is shown in Figure 5. There was a slight difference in the concentrations of the metallic and nonmetallic elements found in the two reactors. However, in both reactors calcium (Ca), potassium (K), sodium (Na), magnesium (Mg), and iron (Fe) were the major metallic elements in the reject water, in the order mentioned. To a lesser extent, nonmetallic elements such as sulfur (S) and phosphorus (P) were also found. Biomass on the Carriers The biomass growth on the carriers in reactors MBBR R1 and MBBR R2 was in the range of 74-128 g/m² and 48-128 g/m², respectively. There was a large variation in biomass accumulation in both reactors over time. However, the carriers in MBBR R1 accumulated more biomass on average than the carriers in MBBR R2. Therefore, with similar carriers having equal total surface areas, the biofilm concentration was in most cases higher in MBBR R1 than in MBBR R2. Microbial Community in the Reactors The overall microbial community composition in both MBBR R1 and MBBR R2, clustered into different operational taxonomic units (OTUs), is shown in Figure 6. There was a very diverse bacterial community, and a slight difference in bacterial species abundance and richness between the two reactors. The major dominant microbial families in MBBR R1 were Cloacamonaceae, Alcaligenaceae, and Comamonadaceae. In MBBR R2 the most dominant microbial family was Alcaligenaceae, but Cloacamonaceae and Comamonadaceae were also present, in addition to other microbial families such as Microbacteriaceae and Anaerolinaceae. However, a substantial proportion of the OTUs represented unknown bacterial populations.
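For readers who process such 16S data themselves, family-level relative abundance of the kind plotted in Figure 6 is typically computed by normalizing OTU counts per sample. A minimal pandas sketch with invented counts (not the study's sequencing data) follows:

```python
import pandas as pd

# Hypothetical OTU counts aggregated at family level (illustrative only;
# the actual counts would come from the MiSeq 16S run described above).
counts = pd.DataFrame(
    {"MBBR_R1": [320, 410, 250, 40, 30, 150],
     "MBBR_R2": [150, 520, 230, 120, 90, 140]},
    index=["Cloacamonaceae", "Alcaligenaceae", "Comamonadaceae",
           "Microbacteriaceae", "Anaerolinaceae", "unassigned"])

# Relative abundance (%) per reactor: each column divided by its total.
rel = 100 * counts / counts.sum()
print(rel.round(1))
```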
Ammonium Transformation to Nitrite and Nitrate The ammonium in both reactors (i.e., MBBR R1 and MBBR R2) was transformed largely to nitrite, but also to nitrate and nitrogen gas (Figure 3). It has been reported that nitrogen removal in MBBR reactors depends mainly on the type of microbial community and the functional features of the bacterial population. Autotrophic nitrification is the most common nitrification process in wastewater treatment, carried out by chemolithoautotrophic AOB and NOB communities [11,21,25]. The most recognized autotrophic AOB genus is Nitrosomonas, in the family Nitrosomonadaceae, whereas the NOB genus is Nitrobacter, in the family Nitrobacteraceae. In our study, the operational taxonomic units (OTUs) and taxonomic identities of the bacterial biomass in the sequencing analysis showed that neither of these autotrophic nitrifier genera was found in either reactor (Figure 6). Therefore, we concluded that the AOB and NOB were completely inhibited by the high concentration of free ammonia or by other unknown toxic constituents in the system. It has been reported that AOB and NOB species may not survive in wastewater containing high free ammonia or other toxic compounds [6][7][8][9]. A free ammonia (FA) concentration higher than 8-120 mg/L inhibits AOB, while FA concentrations of 0.08-0.82 mg/L already hinder NOB activity [26]. For instance, FA as low as 0.6 mg/L inhibits NOB [21]. In this study, the FA was much higher in both reactors: 14.9 (±13.6) mg/L and 2.9 (±4.4) mg/L in MBBR R1 and MBBR R2, respectively. On top of that, the reject water may contain toxic compounds that could have inhibited AOB and NOB activity [8]. Therefore, the most plausible nitrification process in both reactors was heterotrophic nitrification together with aerobic denitrification. The types of microbial communities found in the sequencing analysis support this conclusion. The microbial community analysis showed that the families with the highest OTU percentages, and thus dominant, in MBBR R1 were Cloacamonaceae, Alcaligenaceae, Comamonadaceae, and Cryomorphaceae. Similarly, the dominant microbial families in MBBR R2 were Alcaligenaceae, Comamonadaceae, Microbacteriaceae, Cloacamonaceae, Anaerolinaceae, and others (Figure 6). Heterotrophic aerobic ammonia oxidation, known as heterotrophic nitrification, is a biological nitrogen removal process carried out by a diverse and wide range of bacterial communities using organic substrates as energy sources to oxidize ammonia [11,25,27]. It has also been reported in several studies that many heterotrophic nitrifying bacteria are likewise capable of aerobic denitrification [25].
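The FA concentrations quoted above are conventionally derived from total ammonia nitrogen, pH, and temperature using the equilibrium expression of Anthonisen et al. (1976). The paper does not print the equation, so the sketch below shows the standard form; the pH used in the example is an assumption made here for illustration (reactor pH is reported in Table 1, which is not reproduced in the text):

```python
import math

def free_ammonia_mg_per_l(tan_mg_per_l: float, ph: float, temp_c: float) -> float:
    """Free ammonia (NH3, mg/L) from total ammonia nitrogen (TAN) using the
    widely used Anthonisen et al. (1976) equilibrium expression."""
    kb_over_kw = math.exp(6344.0 / (273.0 + temp_c))
    return (17.0 / 14.0) * tan_mg_per_l * 10.0**ph / (kb_over_kw + 10.0**ph)

# Illustrative: effluent NH4-N of MBBR R1 (~296 mg/L) at 30 degC with an
# assumed pH of 7.8; this does not reproduce the study's reported FA values.
print(f"FA = {free_ammonia_mg_per_l(296.3, 7.8, 30.0):.1f} mg/L")
```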
The dominant bacterial family in both reactors, Alcaligenaceae, is capable of simultaneous nitrification and denitrification. In a selective enrichment study, Kalniņš [6] used this family, obtained from industrial wastewater, for simultaneous nitrification and denitrification of wastewater. Four strains representing the Alcaligenaceae family have been isolated from a green water system for their ability to nitrify ammonia and nitrite aerobically [10]. The reject water contains sufficient soluble and slowly degradable COD, which was used by these groups of bacteria (Figure 4). Hence, heterotrophic nitrification has the advantage of simultaneous organic removal, especially for reject water that contains a large fraction of degradable organics. Aerobic Bacterial Denitrification The most common denitrification is the conversion of nitrate to nitrogen gas by facultative chemo-organoheterotrophic bacterial communities under anoxic conditions [11,25,28]. In our experiment, both reactors were well aerated (Table 1), yet there was substantial denitrification in both reactors (Table 2). Some anoxic microenvironments could exist inside the biofilm on the carriers [21,29]. However, the microbial community analysis showed the existence of an oxygen-tolerant, aerobically denitrifying bacterial community. Many heterotrophic nitrifiers, such as the Alcaligenaceae family, can also carry out aerobic denitrification. Aerobic denitrifiers tend to work efficiently at 25-37 °C and pH 7-8, when the dissolved oxygen concentration is 3-5 mg/L and the C/N load ratio is 5-10 [30]. The most extensively characterized aerobic denitrifying bacterium, Paracoccus denitrificans, reduced 27% of added nitrate to gaseous nitrogen in the presence of oxygen [11,25,31]. In both reactors, bacteria in the families Alcaligenaceae, Comamonadaceae, Microbacteriaceae, Cloacamonaceae, and Anaerolinaceae can carry out denitrification in the presence of oxygen or in any anoxic microenvironment created on the carriers. Most of the denitrifiers reported in solid-phase denitrification are affiliated with the family Comamonadaceae [32]. Moreover, several families significantly correlated with denitrification rates, such as Anaerolinaceae and Microbacteriaceae, suggesting that these families potentially play an important role in denitrification [33]. The Cloacamonaceae family of bacterial communities is common in anaerobic digesters as a denitrifier [34]. Aerobic denitrification is a good alternative to conventional denitrification for its unique advantage of allowing simultaneous nitrification and denitrification in one aerated reactor [30,31]. Moreover, for reject water with a high C/N ratio, a combination of nitrification and denitrification in one aerated reactor has the advantage of simultaneous organic removal. Biomass Growth and Metallic Elements The biomass concentration on the carriers in both reactors was in the range of 48-128 g/m² during the operational period. Biomass accumulation in both reactors varied widely and did not stabilize over time. Several conditions, such as carrier type (i.e., size, shape, and specific surface area), filling ratio, and the hydrophilicity and electrophilicity of the bio-carriers, affect biofilm growth and stable accumulation on the carriers [21,35]. Moreover, some nitrifying bacteria form thin biofilms on the carriers because of their poor production of extracellular polymeric substances (EPS).
However, studies have shown that the nitrifying biofilm formation rate can be enhanced with the aid of EPS produced by heterotrophic bacteria [21,36]. In a study aiming to improve the hydrophilicity and electrophilicity of carriers, iron oxide (Fe2O3) was used because of its positive surface charge [21]. Iron affects biofilm formation in some other bacteria as well; in particular, ferrous (Fe2+) and ferric (Fe3+) iron stimulate biofilm formation [37]. In this study, the reject water had a substantial amount of Fe (Figure 5), and this may have helped biomass formation. However, excessive biofilm accumulation and scaling on the bio-carriers deserve due attention. Scaling on the biofilm carriers can occur when there are high concentrations of ammonium, phosphorus, and metal ions, causing the biofilm carriers to sink to the bottom of the reactors. Scaling reduces carrier motion and requires higher energy consumption, lowering process efficiency and increasing operational cost. In this study, the excess Fe3+ and Ca2+ ions may have the potential to form mineral precipitates and scaling on the biofilm carriers [2,38]. Conclusions The ammonium in both reactors was transformed largely to nitrite, but also to nitrate and nitrogen gas. The combined ammonia removal efficiency (ARE) of the two reactors was 65.9%, while the nitrite accumulation rate (NAR) and nitrate production rate (NPR) were 80.2% and 19.8%, respectively. Heterotrophic aerobic ammonia oxidation, known as heterotrophic nitrification, and oxygen-tolerant aerobic denitrification were the biological mechanisms identified for the ammonia removal in both reactors. The dominant bacterial family found in both reactors, Alcaligenaceae, and other related bacterial species are capable of simultaneous nitrification and denitrification. Moreover, the simultaneous heterotrophic nitrification and denitrification removed over 28% of the reject water's tCOD across the two reactors. Hence, simultaneous heterotrophic nitrification and aerobic denitrification has many potential advantages in reject water treatment. In addition to ammonia removal, heterotrophic nitrification and denitrification offer the additional advantage of organic removal, especially for reject water that contains a large fraction of slowly degradable organics, because heterotrophic nitrifiers and aerobic denitrifiers use organic substrates as energy sources to oxidize ammonia. The very diverse bacterial communities identified call for a strategic and targeted enrichment of heterotrophic nitrifiers and aerobic denitrifiers for future applications. In conclusion, the dominant microbial families with potential for the application of simultaneous heterotrophic nitrification and aerobic denitrification are Cloacamonaceae, Alcaligenaceae, Comamonadaceae, Microbacteriaceae, and Anaerolinaceae.
Effect of food portion on masticatory parameters in 8‐ to 10‐year‐old children Abstract The objective of this study was to explore differences in bite size and the amount of intraoral processing of four different foods between a reference and a double portion in 8‐ to 10‐year‐old children, and also to explore whether there were differences depending on the child's weight status. The study was undertaken in 8‐ to 10‐year‐old children (n = 89). Body mass index was determined, and weight status was established based on Centers for Disease Control and Prevention (CDC) guidelines. A reference portion (half a banana, half a large peeled carrot, a slice of loaf cake, and half a salami stick) and a double portion of each food were offered to the children in randomized order in two different sessions. Three consecutive bites were taken and averaged. The variables in this study were bite size (g), number of cycles until swallowing, sequence duration, and cycles/g. Comparisons were performed with Mann–Whitney, Kruskal–Wallis, and Wilcoxon tests; regressions and correlations were also run. Bite size was ≈13% larger with the double portion (p ≤ .05 for salami, banana, and loaf cake). Cycles/g decreased for all foods with the double portion, although only significantly for banana and loaf cake. Normal-weight and obese children had larger bite sizes (p ≤ .05) of banana than overweight children, while only obese children had larger bites of loaf cake with the double portion. In conclusion, the bite size of foods in 8‐ to 10‐year‐old children increases (13%) when the portion size is doubled, and the larger bite size leads to fewer cycles/g (8%). These effects differ among foods. These parameters do not depend on weight status. Fisher, Liu, Birch, and Rolls (2007) reported that the total consumption of a child increases with a larger portion size; energy intake increases as well (Mooreville et al., 2015). Some of the studies on the portion size effect have found that this effect is stronger in obese adults (Hill & McCutcheon, 1984; Zijlstra et al., 2011) and obese children. Obesity is a complex multifactorial disease, a serious global health problem, and an important cause of morbidity and mortality. There is a high incidence of obesity among children (WHO, Global Health Observatory data repository, n.d.). Among the many factors influencing weight status are food sources and the quality of nutrients, frequent snacking, ample screen time, less physical activity, an incorrect parental perception of the child's weight status, social, behavioral, or psychological factors, birth weight, and commercial advertising (Aljassim & Jradi, 2021; Lee & Yoon, 2018; Parsons, Power, Logan, & Summerbell, 1999). It is difficult to sort out the relationships between these potential factors. Genetic factors are also known to play a role in determining an individual's predisposition to obesity (Lin & Li, 2021; Singh, Kumar, & Mahalingam, 2017); however, children may also "inherit" parental dietary behaviors and food practices (Larsen et al., 2015). The mechanisms of the portion size effect are not clear (English et al., 2015). Some of the possible drivers that have been suggested are visual cues, anchoring and adjustment, learning, and reward, as well as previous experiences/expectations (English et al., 2015; Steenhuis & Poelman, 2017).
One of the facets of the portion size effect that has received little attention is the relationship between variations in food portion size and intraoral processing (i.e., the size of the bites, the number of chews needed to prepare food for swallowing, and chews/g). The scarce information on the influence of portion size on bite size indicates that when the portion size is doubled in adults, their bite size is larger (Burger, Fisher, & Johnson, 2011). In a study that included only overweight women, increasing the portion size led not only to a larger bite size but also to a faster eating rate (g/min) (Almiron-Roig et al., 2015). To the best of our knowledge, information is practically nil in relation to the effect of a larger bite size on intraoral processing in children. Two studies did not identify differences in chewing cycles performed over 15 min with a larger portion size (Fisher, 2007; Fisher et al., 2003). A recent study of children chewing two different portions of an artificial test food reported that food breakdown is lower with a larger bolus size, since fewer chews per gram are performed (Wintergerst & Gómez-Zúñiga, 2022). Because more information on bite size, and to an even greater extent on other parameters of intraoral processing related to the portion size effect, is needed in children, the objective of this study was to explore differences in bite size and the amount of intraoral processing of four different foods with a reference and a double portion in 8- to 10-year-old children, and also to explore whether there were differences depending on the child's weight status. | Design and study participants Bite size depending on the portion size was tested with a within-subject crossover design; the comparison of bite size based on weight status was performed as a cross-sectional study. Participants in the study were 8- to 10-year-old children from two primary schools of the same socioeconomic level in the same city. Children were excluded if they had any diagnosed systemic disease, dental pain, large cavities or a very loose tooth, evident craniofacial abnormalities, a severe malocclusion (such as a clear class III or a posterior crossbite), behavioral problems that could complicate the testing procedure, or an allergy or definitive aversion to any of the foods. This study was conducted in conformity with the ethical guidelines of the World Medical Association Declaration of Helsinki and was approved by the Ethics Committee of the authors' affiliated institution (CIE/0810/11/2018). Written informed consent was obtained from all the parents before the study; children gave their written and verbal assent. | Test foods and portion sizes The foods used for this study were banana (Tabasco), loaf cake (Panque Marmoleado Bimbo), raw carrot, and a salami stick (Peperami Zwan) (Figure 1). These foods were chosen because of their different textures: two hard foods (carrot and salami) and two soft ones (banana and loaf cake). These foods are eaten as a snack, for lunch, or during a full meal. The data on bite size were obtained over two different sessions, 1 week apart, with the portion size ("reference" vs. "double") randomized using a die-rolling app (Dado, Atheris Apps). When the number rolled was odd, the portion size was "reference" (half a banana, 54.1 ± 10.9 g; half a large peeled carrot, 36.5 ± 10.1 g; a slice of loaf cake, 31.5 ± 4.0 g; and half a salami stick, 10.5 ± 0.9 g).
When the number was even, the portion size was "double" (a whole banana, 105 ± 13.1 g; a large peeled carrot, 67.0 ± 16.0 g; two slices of cake, 65.4 ± 6.0 g; and a complete salami stick, 20.5 ± 1.4 g). The foods were always offered salty first and then sweet, since this is the way food is generally offered in a meal, but the order within those constraints was randomized: when the number rolled was odd, the order was carrot, salami, banana, and cake; when it was even, the order was salami, carrot, cake, and banana. Portion sizes were determined based on the results of a prior study in 8- to 10-year-old children (Wintergerst et al., 2016). In this study, we define portion size as the amount of food offered to the children (Fisher, Goran, Rowe, & Hetherington, 2015). | Experimental procedure Sessions were undertaken between 1 and 3 hr after either breakfast or lunch to avoid a possible hunger effect. The project was presented to the parents of 412 eight- to ten-year-old children during parent/teacher meetings; 318 of them agreed to their child's participation and signed the informed consent forms. Since parents had given their informed consent in a parent-teacher meeting, we first verified that the children met the selection criteria; after screening, only 91 children were included. Children then watched a 79-s video of a girl explaining the experimental procedure so that they would feel comfortable and understand the procedure. If they gave their assent, children were weighed in light clothing using a portable electronic scale (Tanita BF-689) to the nearest 0.1 kg. Height was measured with a tape attached to the wall while children were standing straight, looking forward with their backs to the wall, with no shoes and heels touching the wall. These parameters were used to determine body mass index (BMI) [body weight (kg)/height squared (m²)]. Children's weight status was determined according to the Centers for Disease Control and Prevention (CDC) guidelines (Kuczmarski et al., 2002): obese (≥95th percentile), overweight (85th-94th percentile), and normal weight (5th-84th percentile). Before starting the experimental procedure, children cleaned their hands with alcohol gel for 60 s. Children were seated on a chair with no head support, in front of a table. The experimental procedure consisted of children taking the food from a plate placed on top of a scale (BEB, capacity 120 g, readability 0.001 g; BOECO, Germany). The initial weight was registered, and after the child took a bite (with no specific instructions on the bite), the food was returned to the plate and the new weight was registered. The difference between the two weights was taken as the bite size. Children were asked to chew normally when instructed to start and to lift their hand the moment they had swallowed the food completely. The sequence was filmed with a cell phone (iPhone 11 Pro). Bite size (for each food and portion size) was determined as the average of three consecutive bites. Children could rest between repetitions if they requested it. The testing time lasted approximately 10 min per session. At the end of the session, the uneaten portions of food for each child were placed in a bag labeled with the child's name and given to their teachers to hand out at the end of the school day. The first author was standardized by the second author in the counting of chewing cycles with natural foods.
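The bite-size bookkeeping described above reduces to successive differences of the scale readings. A minimal sketch with hypothetical readings (not data from the study; the helper name is ours):

```python
from statistics import mean

def bite_sizes(scale_readings_g):
    """Bite size = weight before the bite minus weight after the food is
    returned to the plate, for consecutive scale readings."""
    return [before - after
            for before, after in zip(scale_readings_g, scale_readings_g[1:])]

# Hypothetical readings (g) for one child and one food: the initial weight,
# then the weight after each of three consecutive bites.
readings = [54.10, 48.95, 43.60, 38.80]
bites = bite_sizes(readings)
print(f"bites = {bites}, mean bite size = {mean(bites):.2f} g")

# Cycles per gram for a hypothetical sequence of 18 chewing cycles.
print(f"cycles/g = {18 / mean(bites):.1f}")
```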
Reliability between and within researchers was established over 15 chewing sequences with each test food (intraclass correlation coefficients, two-way mixed-effects model, absolute agreement, means of raters). There was excellent between-researcher reliability for carrot, salami, and banana (.960-.998) and good reliability with cake (.858); within-researcher reliability was excellent (.972-.998). A chewing cycle was defined as a cycle with an opening and a closing phase, as in typical cycles, excluding those clearly used only to shift or shape the bolus (e.g., long cycles with the tongue pressing the cheeks or lips; 0-2 cycles per sequence). To determine the number of cycles required by the children before swallowing, as well as the sequence duration, the videos were downloaded to a computer. The VLC media player software (VideoLAN, version 3.0.11), at a playback speed of 0.50, was used to analyze and count the number of cycles. Sequence duration was determined by running the video at normal speed. The number of cycles/g of the test food was determined by dividing the number of chews by the child's average bite size for each test food. | Statistical analysis Data were captured in an Excel sheet and later analyzed using IBM SPSS Statistics for Windows, version 28 (IBM Corp., Armonk, NY). Descriptive statistical procedures were performed. Data distribution was assessed by inspecting skewness and kurtosis. BMI was normally distributed, but the other variables were not consistently normally distributed across all foods, portion sizes, or weight statuses. Data are expressed as mean ± SD or as the median and interquartile range (IQR) of three repetitions. Data were compared (portion sizes, weight status) using nonparametric tests (Wilcoxon signed-rank, Kruskal-Wallis, or Mann-Whitney tests). Spearman correlations of BMI with bite sizes and with cycles/g were assessed, and linear regression models were tested (with BMI as the independent variable (IV) and bite size with the reference and double portions, as well as cycles/g, as dependent variables (DV); with bite size with the reference portion as the IV and bite size with the double portion as the DV; and with the bite size of each food as the IV and cycles/g as the DV). Statistical significance was set at p ≤ .05.
FIGURE 1. Test foods in the study. The reference portion consisted of half a carrot, half a salami stick, half a banana, and a slice of loaf cake.
| RESULTS Characteristics of the study groups are found in Table 1. Medians for the reference and double portions for bite size, sequence duration, number of cycles needed to reach the swallowing threshold, and cycles/g are presented in Tables 2-5. Bite size was larger with the double portion for all foods, although the difference was statistically significant only for salami, banana, and loaf cake (15%, 20%, and 9%, respectively) (Table 2). Sequence duration was longer only for banana (1%) (Table 3). Cycles/g decreased for all foods, but the difference was statistically significant only for banana and loaf cake (10% and 9%, respectively) (Table 5). The values for the different nutritional groups for the different variables tested are also presented in Tables 2-5. Normal-weight and obese children had statistically larger bite sizes of banana with the double portion (14% and 40%, respectively), while only obese children had significantly larger bite sizes with loaf cake (24%) (Table 2).
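As a side note on the procedures described under Statistical analysis above, the nonparametric comparisons and correlations can be sketched with scipy. The data below are simulated for illustration and are not the study's measurements:

```python
import numpy as np
from scipy.stats import wilcoxon, spearmanr

rng = np.random.default_rng(0)

# Simulated paired bite sizes (g) for one food: reference vs. double
# portion for the same 89 children (values are illustrative only).
ref = rng.normal(5.0, 1.2, size=89)
dbl = ref * 1.13 + rng.normal(0.0, 0.6, size=89)  # ~13% larger on average

# Paired comparison between portions, as in the Wilcoxon signed-rank tests.
stat, p = wilcoxon(ref, dbl)
print(f"Wilcoxon signed-rank: p = {p:.4f}")

# Spearman correlation of BMI with bite size, as in the correlation analysis.
bmi = rng.normal(18.5, 3.0, size=89)
rho, p_rho = spearmanr(bmi, dbl)
print(f"Spearman rho = {rho:.2f}, p = {p_rho:.3f}")
```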
Only obese children had significantly longer sequence durations with salami stick and banana (22% and 8%, respectively), as well as fewer cycles/g with banana (31%) (Tables 3 and 5). Obese children had fewer cycles/g with the double portion of loaf cake than the normal-weight and overweight children (Table 5). Differences between the reference and double portions were also compared between the nutritional groups (data not shown); there was only a significant difference for bite size with banana in obese children (3.95 g, IQR 6.3, vs. 1.3 g, IQR 6.2, p = .023, and 0.61 g, IQR 2.7, p = .004, Mann-Whitney, for normal-weight and overweight children, respectively). Correlations of BMI with bite size and with cycles/g for both food portions were not significant, except for bite size with the reference portion of banana (−.240). Linear regressions with bite size with the double portion as the dependent variable are presented in Table 6. The variance explained by the small portion increases from carrot, to salami stick, to banana, and is highest with loaf cake (53%). The regressions with the bite size of each food as the independent variable and cycles/g as the dependent variable were all significant and only slightly higher for the double portion than for the reference portion, but the variance explained was less than 30% for all foods (data not shown). Linear regressions indicate that the variance in bite sizes with the two portion sizes cannot be explained by BMI (data not shown).
Note (table footnote): Bite size is expressed in grams. Values with statistically significant differences between the reference and double portion (Mann-Whitney) are in bold. When comparing each variable between nutritional groups (for example, bite size with the reference portion between normal, overweight, and obese children), no statistical differences were found.
Note (table footnote): When comparing each variable between nutritional groups (for example, the number of cycles to reach the swallowing threshold with the reference portion between normal, overweight, and obese children), no statistical differences were found.
| DISCUSSION The results of this study indicate that portion size influences bite size in 8- to 10-year-old children and that the 13% larger bite size with a double portion of food leads to less intraoral processing (i.e., fewer cycles/g). Contrary to what was expected, bite size did not vary consistently between children of different weight status. Different factors affect bite size. The type of food is one of them; in the current study, bite size differed among all the foods. Bite size was largest for banana, followed by carrot; Hiiemae et al. (1996) found a larger bite size for banana than for apple or biscuit in adults. This trend is similar in adults. The bite size of obese women was evaluated with different portion sizes of chili-con-carne sauce with rice; the finding was that for every 100 g increase in the portion, bite size increased by 10%, although in that study subjects were asked to finish the portion. Although intake increases with increased portion sizes, it does so mainly because of high-energy-dense food, and weight status does not appear to drive this effect (Mooreville et al., 2015). Most studies evaluate the portion size effect over a single meal, but in a study including all meals over 5 consecutive days in a childcare setting, Smethers et al. (2019) identified that children's body size is positively related to the consumption of larger portions, even after energy needs are taken into account.
The increased energy intake from the larger portions persisted over the entire 5-day period, and the increase was mainly due to a higher consumption of high-energy-density foods and less of low-energy-density foods. In our study, bite size increased more with the high- than with the low-energy-dense foods. Fisher et al. (2003) reported a tendency for preschool children with a higher BMI to have larger bite sizes regardless of the portion size, but the same effect was not found in the 2007 study using the same test food in 2- to 9-year-old children (Fisher, 2007). In a study that included 4-year-old children, statistically significant differences in bite size (1.8 vs. 2.2 g, p = .002) were found between normal-weight and overweight children (Fogel et al., 2017). However, the testing procedure they used was different, since they included 9 foods in a buffet with children serving themselves freely; they counted the bites over the whole meal, giving them a better opportunity to detect a difference, which we could not identify with specific food items and only 3 bites. The relationship between bite size and nutritional status is also unclear in adults; some studies have not found an influence of weight status on bite size (Burger et al., 2011; Spiegel, 2000), while others have (Hill & McCutcheon, 1984; Zijlstra et al., 2011). Mattfeld, Muth, and Hoover (2017) reported an increase of 0.20 g in bite size per BMI point in a cafeteria setting, with adults freely selecting the amount and type of food from more than 300 different foods.
TABLE 5. Medians and interquartile ranges (in parentheses) for cycles/g for the different foods for the reference and double portions.
We did not find the expected differences in sequence duration or in the number of cycles to reach the swallowing threshold between the reference and double portions, or among children of different nutritional status. Differences were found in these parameters when these same children were tested chewing two different bolus sizes of an artificial test food, but there the increase in bolus size was larger (25% instead of 13%); likewise, no differences were found for these two variables among the nutritional groups (Wintergerst & Gómez-Zúñiga, 2022). The reasons suggested for the portion size effect are many (Steenhuis & Poelman, 2017). One possibility is that we assume that the food on the plate is what we are expected to eat or should eat (the appropriateness mechanism, according to Steenhuis & Poelman, 2017), especially as children. Food-related parenting practices definitively influence the way children eat (Laessle, Uhl, & Lindel, 2001; Loth, 2016). An evolutionary theory indicates that one should eat when food is available (Kersbergen, German, Westgarth, & Robinson, 2019). If we assume that we should enhance intake when there is more food available, a strategy to try to finish the food, probably over the same time period, could be to increase bite size and eat faster, which would necessarily reduce oral processing. This speculation would go along with our findings: a larger bite size, with children performing fewer chews per gram with the double portion size, although the difference was only significant for banana and loaf cake. In relation to chews/g, the smaller the ratio, the greater the rate at which food is delivered as a source of metabolic energy to the gastrointestinal tract (Lucas & Luke, 1984).
Two studies in children reported no differences in bite frequency (chews during 15 min) between the reference and the double portion sizes (Fisher, 2007; Fisher et al., 2003). A study in adults did find that portion size and eating rate (g/min) were positively correlated (Almiron-Roig et al., 2015), indicating that oral processing time per amount eaten is reduced, as in our study. It is important to mention that when these same children chewed two different portions of an artificial test food, chews/g decreased as the bolus size increased, and the effect of fewer cycles/g led to a larger particle size at the moment of swallowing (Wintergerst & Gómez-Zúñiga, 2022). Therefore, although we did not evaluate the median particle size of the chewed food at swallowing threshold, we can assume that the median particle size was larger with the double portion, because there were fewer chews/g for all foods with the double portions, and this effect was present for all nutritional groups. A fast eating rate is significantly associated with greater risk of being overweight in children (Garcidueñas-Fimbres, Paz-Graniel, Nishi, Salas-Salvadó, & Babio, 2021) and is associated with increased adiposity in adolescents (Fagerberg et al., 2021). Fogel et al. (2020) reported that the association between risk factors for obesity and adiposity at 6 years of age was moderated by eating behaviors such as bite size and eating rate, and in their 2017 study they identified that overweight children ate faster (g/min). We did not find that obese children eat faster, although when we compared the differences in cycles/g for the reference and double portions for the three nutritional groups, we identified that cycles/g were slightly lower for the obese than for the overweight and normal weight children. Thus, the delivery of each gram of food, with its potential metabolic energy, to the gastrointestinal tract is faster for these children than for normal weight children (e.g., cycles/g for cake with the double portion). It was interesting to find that, under careful inspection, there are more significant differences for the obese children which were not seen in the other nutritional groups; this leads us to think that there are differences in the way obese children eat, but we were not able to detect important differences with our study design. In adults, it has been reported that as obesity increases, bite size and eating rate increase (amount/second), although body size is also an important contributor (Hill & McCutcheon, 1984). On the contrary, White et al. (2015) matched overweight or obese adults with normal BMI participants and, based on EMG activity, chewing rate did not differ between high and normal BMI participants while eating two rice meals and one pizza meal. The rheology and palatability of the food items used in different studies definitively influence study results. We acknowledge the limitations of the current study. The most important one may be that we only studied three consecutive bites for each child under standardized conditions. Other studies have evaluated the portion size effect over a whole meal with mixed or amorphous foods, or over several meals, although our approach allowed us to have data for each individual food. The study was performed at a school, and although not a home environment, children habitually eat their lunch at school, so we considered the setting more appropriate than if the study had been performed in a laboratory.
Overweight and obesity are among the main factors that contribute to non-communicable diseases. Because larger portion sizes lead to more daily total energy intake in children, the development of healthy food portion sizes is considered to be central to obesity prevention in childhood (WHO Technical Staff, 2014). It is surprising that there is scarce research relating the microstructure of eating to the portion size effect, particularly bite size, chewing rate, and amount of intraoral processing in children. Consequently, although complex, there is a need for further research in this area, with many factors to consider such as rheology, palatability, and energy density, without neglecting the important role that marketing and parents play in establishing portion norms early in children's lives.

CONCLUSIONS

The bite size of food for 8- to 10-year-old children increases 13% when the portion size is doubled. The results of this study contribute valuable information on the portion size effect on cycles/g, which is an important variable for the microstructure of eating. The 13% larger bite size leads to 8% fewer cycles per gram, which indicates less intraoral processing. These effects differ among carrot, salami stick, banana, and loaf cake. Bite size and oral processing did not vary consistently among children of different weight status; larger sample sizes or different methodologies might be needed to further explore whether they are truly not different in obese children. Further research is needed in relation to the portion size effect and intraoral processing in children.
Antibacterial Activity of T22, a Specific Peptidic Ligand of the Tumoral Marker CXCR4

CXCR4 is a cytokine receptor used by HIV during cell attachment and infection. Overexpressed in the cancer stem cells of more than 20 human neoplasias, CXCR4 is a convenient antitumoral drug target. T22 is a polyphemusin-derived peptide and an effective CXCR4 ligand. Its highly selective CXCR4 binding can be exploited for the cell-targeted delivery and internalization of associated antitumor drugs. Although T22 shares chemical and structural traits with antimicrobial peptides (AMPs), its capability as an antibacterial agent remains unexplored. Here, we have detected T22-associated antimicrobial activity and inhibition of biofilm formation over Escherichia coli, Staphylococcus aureus and Pseudomonas aeruginosa, in a spectrum broader than that of the reference AMP GWH1. In contrast to GWH1, T22 shows neither cytotoxicity over mammalian cells nor hemolytic activity, and it is active when displayed on protein-only nanoparticles through genetic fusion. Given the pressing need for novel antimicrobial agents, the discovery of T22 as an AMP is particularly appealing, not merely as an addition to the expanding catalogue of antibacterial drugs: the recognized clinical uses of T22 might allow its combined and multivalent application in complex clinical conditions, such as colorectal cancer, that might benefit from the synchronous destruction of cancer stem cells and local bacterial biofilms.

Introduction

The peptide T22, also known as [Tyr5,12,Lys7]-polyphemusin II, is an 18-mer derivative of the horseshoe crab cationic antimicrobial peptide (AMP) polyphemusin I, in which three amino acid replacements confer precise binding to the cell surface chemokine receptor CXCR4 [1,2]. Since CXCR4 is an HIV co-receptor [3], T22 was developed as an anti-HIV peptide potentially effective in antiretroviral therapies, blocking the fusion between the viral envelope and the cell membrane and thus preventing viral infection [1,4]. From another point of view, and taking advantage of its selective CXCR4 binding, T22 has been largely exploited as a targeting agent for precision therapies against diverse CXCR4+ human cancers. Among others, these include leukemia, lymphoma, head and neck cancer and colorectal cancer [5-14], in which CXCR4 is overexpressed in metastatic cancer stem cells. In this context, when T22 is engineered as an N-terminal peptide in H6-tagged proteins it promotes, due to its cationic character [15], protein self-assembly into homomeric nanoparticles [16], which include around 10 monomers positioned in a regular, toroidal architecture [17]. The multivalent display of T22 on the particle surface and the nanometric size of these constructs (usually ranging from 12 to 30 nm, depending on the domain composition of the fusion protein) enhance CXCR4 binding and the consequent penetrability into CXCR4+ cells, while preventing the renal filtration of chemically coupled drugs [18]. By a combination of all these properties, T22 ensures the architectonic stability of the protein material in the bloodstream and allows a selective intracellular accumulation of T22-empowered protein-only nanoparticles and associated drugs into CXCR4-overexpressing cancer stem cells [17-19].
Then, upon the systemic administration of the reporter protein T22-GFP-H6 and derived cytotoxic constructs, a precise in vivo biodistribution is observed, with the destruction of CXCR4-overexpressing cancer tissues and metastatic foci in the absence of side toxicities [18]. Although polyphemusins generically display potent antimicrobial activities [4,20-23], these functionalities and their interactivity with bacterial cell membranes largely depend on the precise amino acid sequence, which is highly sensitive to even a few amino acid substitutions [24,25]. This is because even point mutations can alter amphipathicity and hydrophobicity, features that have been postulated to be pivotal in maintaining the right balance between toxicity and antimicrobial activity [26-28]. Although several structural variants of polyphemusin peptides have been tested for antimicrobial properties [24,25], T22 has never been explored in this regard. Considering the growing need for new antimicrobial agents and the proven clinical potential of T22 in cell-targeted drug delivery [11,18,29], the detection of any new antimicrobial activities in this peptide would be of broad interest and deserves a thorough investigation. Moreover, these functionalities might be conserved in T22 peptides displayed on multimeric protein nanoparticles, since antimicrobial activities largely benefit from nanostructured and multivalent presentations [30,31]. Additionally, the combination of anticancer and antimicrobial properties could be of special relevance in many cancers and cancer-linked conditions in which bacteria have a predominant or even triggering role [32-36].

Bacterial Growth and Determination of the Minimum Inhibitory Concentration

The effects of the different antimicrobial agents were evaluated against E. coli ATCC 25922, S. aureus ATCC 29737 and P. aeruginosa ATCC 27853. The assay was performed using a broth microdilution method. In 96-well plates, after a two-fold dilution process, each well contained a specific amount of the corresponding peptide, ranging from 2 to 32 µmol/L for GWH1 and 2 to 64 µmol/L for T22, in Mueller Hinton Broth Cation-adjusted medium (MHB-2, Sigma-Aldrich, Saint Louis, MO, USA). Then, 50 µL of MHB-2 containing 10^6 colony forming units per mL (CFU/mL) was inoculated in each well. After inoculation, the plates were incubated without agitation at 37 °C for 18 h. Bacterial growth was measured by OD620. The effect of protein nanoparticles on bacterial growth was analyzed following the same protocol, with concentrations ranging from 2 to 16 µmol/L, in S. aureus ATCC 29737. Maximal growth was achieved in control wells with no protein, and each concentration was evaluated in duplicate. To determine the minimum inhibitory concentration (MIC) of the agents, the lowest concentration showing no bacterial growth, evaluated by visual inspection, was taken (a sketch of this readout is given below). The raw numerical data for all the experiments can be found in the dataset in the Supplementary Materials.

Time-Killing Kinetic Assay

Different concentrations of GWH1 and T22 were distributed in 96-well plates and incubated with a suspension (in Mueller Hinton Broth Cation-adjusted medium, MHB-2, Sigma-Aldrich, Saint Louis, MO, USA) containing 10^6 CFU/mL of E. coli ATCC 25922 or S. aureus ATCC 29737. Plates were incubated without agitation at 37 °C.
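As flagged above, the MIC readout reduces to finding the lowest concentration of a dilution series without growth. The following is a minimal sketch, assuming hypothetical OD620 readings; the numerical growth threshold is an illustrative stand-in for the visual inspection actually used in the study.

# Hypothetical OD620 readings for a two-fold dilution series of one peptide;
# concentrations in µmol/L, highest first. Values are illustrative only.
concentrations = [64, 32, 16, 8, 4, 2]
od620 = [0.04, 0.05, 0.05, 0.31, 0.62, 0.88]

GROWTH_THRESHOLD = 0.1  # assumed cutoff standing in for visual inspection

def mic(concs, ods, threshold=GROWTH_THRESHOLD):
    """Lowest concentration with no detectable growth.

    Assumes growth is monotonic with dilution, as in a
    well-behaved broth microdilution assay.
    """
    candidates = [c for c, od in zip(concs, ods) if od < threshold]
    return min(candidates) if candidates else None

print(f"MIC = {mic(concentrations, od620)} µmol/L")  # -> MIC = 16 µmol/L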
At the indicated times (0, 0.5, 1, 2, 3, 4, 5 and 24 h), an aliquot of 10 µL (out of a total of 200 µL per well) was serially diluted (10-fold) in a different 96-well plate and subsequently seeded in LB plates to evaluate the bacterial viability by CFU counting. Each concentration was evaluated in triplicate and each dilution was seeded in duplicate; therefore, a maximum of six individual counts were used to determine the final CFU for each concentration. A control was included to evaluate bacterial growth in the absence of the peptides.

Evaluation of Biofilm Formation

Biofilms were formed by addition of 10^6 CFU/mL of the bacterial suspension (E. coli ATCC 25922 or S. aureus ATCC 29737) in sterile, flat-bottomed, 96-well polystyrene microwell plates (100 µL per well) and incubated in a static condition for 18 h at 37 °C. To determine the antibiofilm activity, different concentrations of the peptides GWH1 and T22 were added to the wells to prevent cell adherence. After incubation, the total biomass of the biofilm was analyzed using the crystal violet (CV) staining method [41]. The contents of the wells were discarded and washed three times with distilled water to remove the planktonic bacteria. Then, biofilms formed by adherent sessile bacteria on the plate wall were fixed by air-drying at 60 °C for 60 min and stained for 15 min with 150 µL of CV solution at 0.1%. The stained biofilms were again washed with distilled water and dried for 30 min at 37 °C. Finally, the adhered biofilms were extracted with 200 µL of 30% acetic acid. The biofilm quantification was determined by the photometric measurement of the CV intensity at 550 nm using the multilabel plate reader VICTOR3 (PerkinElmer, Inc., Waltham, MA, USA). Each concentration was evaluated in duplicate.

Mammalian Cell Viability Assay

The potential cytotoxicity of the peptides was tested in murine embryo fibroblasts (NIH3T3 cells; Waltham, MA, USA) and in cervical cells. All cell lines were supplemented with 10% fetal bovine serum (Gibco, Thermo Fisher Scientific, Waltham, MA, USA) and incubated in a humidified atmosphere at 37 °C and 5% CO2. A total of 5000 cells/well for fibroblasts and 3000 cells/well for cervical cells were cultured in opaque-walled 96-well plates for 24 h at 37 °C until reaching 70% confluence, and were then exposed to the peptides at 8, 16, 32 and 64 µmol/L for 48 h. After incubation, the CellTiter-Glo® Luminescent Cell Viability Assay (Promega, Madison, WI, USA) was used to determine the potential peptide cytotoxicity. The luminescent signal, proportional to the amount of ATP present in the sample, was measured in a conventional microplate reader VICTOR3 (PerkinElmer, Inc., Waltham, MA, USA). The cell viability experiments were performed in triplicate.

Hemolysis Assay

Freshly drawn human erythrocytes were harvested by centrifugation for 5 min at 1500 × g and washed three times with PBS (137 mM NaCl, 2.7 mM KCl, 10 mM Na2HPO4, 1.8 mM KH2PO4). Subsequently, a work solution was prepared by diluting the washed erythrocytes with PBS (1%, v/v). In a 96-well conical-bottom plate, the 1% (v/v) erythrocyte suspension was incubated for 1 h at 37 °C with different concentrations (16, 32 and 64 µmol/L) of the GWH1 and T22 peptides. After incubation, the plates were centrifuged for 5 min at 1500 × g, and the supernatant was transferred to a new 96-well plate to measure the absorbance at 405 nm in a multilabel plate reader VICTOR3 (PerkinElmer, Inc., Waltham, MA, USA).
Two controls were included: PBS as a non-hemolysis control and Triton X-100 as a 100% hemolysis control. Experiments were performed in triplicate.

Measurement of the Nanoparticle Size

The size distribution of protein samples was determined by dynamic light scattering (DLS). Average values were obtained after the independent measurement of protein samples in triplicate, at 633 nm, in a Zetasizer Nano ZS (Malvern Instruments Ltd., Malvern, UK). A software tool (accessed July 2021) [50] was used to calculate the surface hydrophobicity and what the authors call hydrophobic free energy (a solvent-accessible-area-based estimate of the non-polar component of the change in solvation free energy upon folding). Superposition of structures for analysis or representation in figures was based on the main chain N, CA, C and O atoms, with the McLachlan algorithm [51] as implemented in ProFit v3.3 (http://www.bioinf.org.uk/software/profit/, accessed on 22 March 2021) and using residue equivalences as obtained with jCE, the java implementation of the CE method [52].

Results

The generated T22 model reproduces an early NMR structure of T22 [4], which revealed the conservation of the hairpin structure that promotes the interaction with the membrane and the antimicrobial activity of PM1 and other AMPs [28]. A closer analysis of the T22 structure (Figure 1) demonstrates a similarity of traits with other AMPs, including length, hydrophobicity, net charge and amphipathicity, as reflected by the Hydrophobic Moment (HM) vector. Although the majority of its properties are naturally closer to those of other β-hairpin-forming AMPs (Figure 1A, all but GWH1), in particular, as expected, the close relative PM1 and its derivative PV7 (Figure 1A,B), it stands out that T22 presents an HM magnitude (a quantity that increases with the imbalance of the distribution of polar and nonpolar surface areas in the molecule, i.e., amphipathicity) closest to GWH1. When the corresponding HM vectors are aligned with the membrane normal, T22 and GWH1 present very similar tilts (Figure 1C; the alignment of the HM vector and the membrane normal provides an indication of the orientation that the peptide may adopt when inserted in the membrane).
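The HM analysis above can be reproduced in outline. The following is a minimal sketch of the classical hydrophobic moment calculation, assuming the Eisenberg consensus hydrophobicity scale and a simple fixed residue periodicity; the scale choice, the 180° β-strand angle, and the example sequence (a T22-like 18-mer reconstructed from the Tyr5,12/Lys7 substitutions mentioned in the Introduction) are illustrative assumptions, not the exact protocol of the study.

import math

# Eisenberg consensus hydrophobicity scale (one common parameterization)
EISENBERG = {
    "A": 0.62, "R": -2.53, "N": -0.78, "D": -0.90, "C": 0.29,
    "Q": -0.85, "E": -0.74, "G": 0.48, "H": -0.40, "I": 1.38,
    "L": 1.06, "K": -1.50, "M": 0.64, "F": 1.19, "P": 0.12,
    "S": -0.18, "T": -0.05, "W": 0.81, "Y": 0.26, "V": 1.08,
}

def hydrophobic_moment(seq, angle_deg=100.0):
    """Magnitude of the hydrophobic moment per residue.

    angle_deg = 100 is the canonical alpha-helix periodicity;
    ~180 is often used for beta-strands such as hairpin arms.
    """
    delta = math.radians(angle_deg)
    s = sum(EISENBERG[a] * math.sin(i * delta) for i, a in enumerate(seq))
    c = sum(EISENBERG[a] * math.cos(i * delta) for i, a in enumerate(seq))
    return math.hypot(s, c) / len(seq)

# Assumed T22-like sequence, for illustration only
example = "RRWCYRKCYKGYCYRKCR"
print(f"muH (beta periodicity): {hydrophobic_moment(example, 180.0):.2f}")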
In view of these AMP-like physicochemical properties, T22 was tested for its antimicrobial activity over liquid cultures of three bacterial pathogens, namely Escherichia coli, Staphylococcus aureus and Pseudomonas aeruginosa. GWH1, of similar length (Figure 1A), was used as a control. GWH1 is a non-natural peptide, developed as an AMP [53,54], that adopts an amphipathic helical structure when bound to a membrane, and its GFP fusion construct (GWH1-GFP-H6) self-assembles similarly to T22-GFP-H6 [55]. Upon exposure to bacterial cultures, both peptides promoted a clear drop of optical density in a dose-dependent manner (Figure 2A), with E. coli and S. aureus being the most sensitive species. GWH1 was superior to T22 in terms of antibacterial activity, with lower MIC values in all cases (Figure 2B). While the biological impact of GWH1 was immediate, T22 required longer times to reach comparable disruptive effects over bacterial cells (Figure 3A,B). In addition, both peptides inhibited biofilms formed by E. coli and S. aureus (Figure 3B), which we value as a promising feature regarding the potential applicability of T22 as an AMP. Due to their interaction with biological membranes, many AMPs show hemolytic or cytotoxic activities over mammalian cells, hindering a widespread use of these materials as safe drugs [56,57]. In this context, and as expected, GWH1 showed a mild cytotoxicity (Figure 4A) and a dose-dependent hemolysis (Figure 4B) that compromise its clinical use. In contrast, T22 shows only a moderate or absent cytotoxicity in several cell lines (Figure 4A) and a complete absence of hemolytic activity up to the very high doses tested here (Figure 4B), which represents a clear competitive advantage over the control peptide GWH1. If T22 were to keep its antimicrobial activities when presented in assembled, protein-only nanoparticles, it would have a potential dual application as a CXCR4-targeting agent and AMP. It is widely recognized that bacterial infections not only represent further complications in solid tumors [58-60] but also participate in tumor formation or act as triggering agents in several human neoplasias [32,33]. Such triggering effects are especially suspected in organs such as the colon, which are continuously exposed to microbiome components that might largely contribute to, or modulate, the initiation and progression of colorectal cancer [61-65]. Particularly, the formation of E. coli biofilms has recently been pointed out as an oncogenic driver in colorectal cancer development [66], and as shown above (Figure 3B), T22 is a good inhibitor of E. coli biofilm formation. On the other hand, T22, in the form of fusion proteins assembled as nanoparticles, has proved to be highly effective in the targeted delivery of antitumoral drugs. In this context, the P. aeruginosa exotoxin A has been genetically inserted in a T22-based protein construct, thus generating the built-in cytotoxic, CXCR4-targeted nanoparticle T22-PE24-H6. In animal models of metastatic human cancers, T22 confers selectivity for CXCR4-overexpressing cancer stem cells while the bacterial toxin PE24 causes cell death and cancer remission [6,8]. The combination of the toxin and T22 is then a clinically promising concept [18,67]. Envisaging a dual role of T22 in protein constructs, the T22-empowered fusions T22-GFP-H6 and T22-PE24-H6, assembled as regular nanoparticles (Figure 5A), were tested for their potential activity as antibacterial agents against S. aureus (Figure 5B).
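The biofilm inhibition reported above reduces to a percent-inhibition calculation over the crystal violet readout described in the Methods. The sketch below illustrates this; the OD550 values, blank handling and duplicate averaging are illustrative assumptions, not the study data.

# Hypothetical OD550 crystal violet readings (duplicates) per condition.
readings = {
    "blank":     [0.08, 0.09],   # medium only
    "control":   [1.20, 1.14],   # biofilm, no peptide
    "T22_16uM":  [0.52, 0.48],
    "GWH1_16uM": [0.61, 0.66],
}

def mean(vals):
    return sum(vals) / len(vals)

blank = mean(readings["blank"])
control = mean(readings["control"]) - blank   # blank-corrected biomass

for name, vals in readings.items():
    if name in ("blank", "control"):
        continue
    biomass = mean(vals) - blank
    inhibition = 100.0 * (1.0 - biomass / control)
    print(f"{name}: {inhibition:.0f}% biofilm inhibition")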
Interestingly, upon the display of T22 in oligomers (Figure 5C), the antibacterial capacity of the free peptide is not only maintained but even tends to be enhanced, particularly for T22-PE24-H6 (compare data from Figures 2A and 5B). In the oligomers, 10 copies of the T22 loop are predicted to be exposed to the solvent, with their HM vectors pointing in the proper sense and direction to allow interaction with the membrane (Figure 5C). In this orientation, T22 would keep the β-hairpin structure associated with AMP activity (Figure 5D). The observation of AMP activity linked to T22 in the form of multimeric nanoparticles, in which the peptide is genetically fused to the building block, opens interesting routes for the engineering of antimicrobial peptides in more effective formulations, easily reachable through simple genetic engineering.

[Figure 5 caption: Protein concentration on the X axis refers to monomers. Significant differences over the GFP control protein are indicated as * p < 0.01, one-way analysis of variance (ANOVA) followed by Tukey's multiple comparisons test. (C) A proposed model for the T22-GFP-H6 nanoparticle according to a previous approach [17]. Each T22 peptide has been colored differently and its calculated HM vector has been drawn in black. (D) Left: detail of (C) for a single nanoparticle monomer. Right: close-up of the T22 region with the PM1-selected model superposed (RMSD 1.34, calculated with PyMOL's "super" function).]

Discussion

The therapies for solid cancers are based on the resection of the primary tumor and further chemotherapy with cytotoxic, low-molecular-weight drugs, administered systemically. The lack of drug targeting is associated with severe side effects [68,69], limiting the usable doses and minimizing the local drug concentration, which usually remains insufficient to prevent recurrence and metastasis [70,71]. Tumor-targeted nanomedicines are pointed out as innovative ways to enhance drug selectivity, increase local drug concentrations and minimize side effects [72-77]. This should result in a higher efficacy at low drug doses, which should concomitantly enhance quality of life and survival expectancy. The expression levels of the cytokine receptor CXCR4 are associated with invasiveness and aggressiveness in more than 20 human neoplasias [78-89], including colorectal cancer and breast cancer. This makes this cell-surface protein, occurring in metastatic cancer stem cells, a good target for precision therapies [2,10,11,14,84,90,91]. Colorectal cancer is among the most prevalent CXCR4+ cancers in men and women, with growing incidence and worldwide spread [92-95].
Early and advanced lesions in the colon mucosa are endoscopically detected [96-98] and resected [97-100], and the treatment is completed with systemically administered chemotherapy, which exploits the cytotoxic activities of several low-molecular-weight drugs such as irinotecan, 5-fluorouracil and capecitabine, among others [101-104]. T22 assists the self-assembly of given protein constructs in the form of regular protein-only nanoparticles [15,17]. This peptide, as an amino-terminal protein fusion, also targets these constructs to selectively bind and penetrate CXCR4-overexpressing cancer cells [15]. Therefore, we have adapted T22-empowered nanoparticles to deliver Floxuridine [18], Auristatin E [11,29] or several protein toxins [8,13] for the selective destruction of colorectal cancer tissues. In the present study, we have demonstrated that T22 also shows a modest antimicrobial activity (Figures 2 and 3) and a capacity to inhibit biofilm formation broader than that of other conventional AMPs, such as GWH1, over Gram-negative and Gram-positive species. GWH1 and T22 are both amphipathic peptides, that is, with hydrophilic and hydrophobic sides. However, the distribution of charges on their surfaces is completely different, a fact that generates a much clearer difference between sides in GWH1 than in T22 (Figure 1D). The magnitudes of their hydrophobic moment (HM) vectors, a measure of amphipathicity, are in both cases high, which has been correlated with membrane pore-formation capacity [27], a common feature of AMPs. Amphipathicity, however, is also related to toxicity over mammalian cells [105-107]. It has been postulated that the right balance between amphipathicity and hydrophobicity is the key to attaining high antimicrobial activity and low toxicity [28], although it has also been described that detailed surface electrostatics [108] and the HM angle [47,109] highly influence the outcome. On the other hand, it has been proposed that an HM-vector-magnitude threshold, also modulated by the other factors mentioned, could exist that defines the onset of toxicity [27]. Thus, while T22 and GWH1 share features expected in many AMPs, such as a high HM-vector magnitude and a certain tilt angle relative to the membrane normal, the slightly lower amphipathicity, higher net charge and lower hydrophobicity of T22 may result in the absence of generic toxicity and hemolysis (Figure 4B), while retaining a moderate antibacterial activity (Figure 2) and a potent biofilm inhibition capacity (Figure 3). These results are relevant not only when considering T22 and its fusion constructs as AMPs, but also when combining the AMP activity with its functionality as a targeting agent in advanced nanomedicines to treat colorectal cancer. In this type of cancer, chemotherapy upon resection is applied by intravenous infusion. However, a recent study [110] proposes the administration of 5-fluorouracil-loaded nanoparticles against colorectal cancer via the intestinal mucosa. The concept of surface chemotherapy of colorectal cancer lesions via gastric administration is supported by independent studies stressing the possibility of mucosal treatment of this type of cancer through various categories of polymeric materials [111-115].
In contrast to systemic administration, such an approach would allow, using appropriate agents, a combined treatment to kill cancer cells and concomitantly inhibit participating bacterial biofilms at the local level. If tumor cell-targeted, such a dual treatment would have localized effects on damaged mucosal areas, with enhanced precision and effectiveness. Since attaching small chemical drugs to T22-empowered protein-only nanoparticles does not interfere with the targeting abilities of this peptide [29,55], the set of findings presented here opens a new line of exploratory research aimed at combining targeted drug delivery and overlapping biofilm inhibition at the local level through the mucosal delivery of nanoparticles [110]. This possibility is exemplified here by the pleiotropic character of T22 as a structural agent for nanoparticle formation, as a targeting agent and as an AMP. These notions are relevant in the context of the increasingly recognized roles of bacterial biofilms in colorectal cancer development and progression [66,116,117], and regarding the progressive identification of the bacterial species and consortia involved [118], which highlights E. coli as one of the relevant contributor species through several virulence factors [66,119,120]. Importantly, and due to a distinctive microbiota composition [121-123], biofilm formation appears to be specifically relevant in right-sided colorectal cancer [121], for which antibiofilm drugs could be especially effective. As the microbiota components involved in cancer development and progression are more precisely identified, highly focused studies on the potency and real in vivo efficacy of dual-acting protein agents such as T22 should be conducted.

Conclusions

A biologically significant antimicrobial activity with associated biofilm destruction has been found, for the first time, associated with T22, a short peptide used to selectively target nanomedicines for CXCR4+ human cancers. Such activity is maintained in the nanostructured forms of the peptide, ideal for drug delivery, in the absence of toxicity over human cells. Even being moderate, the antibacterial activity of T22 covers a spectrum broader than that shown by the reference antimicrobial peptide GWH1. This discovery and the supporting general concepts open the possibility to design nanomedicines for human neoplasias, such as colorectal cancer, that show an important bacterial component. In this context, a local dual performance combining a selective (or even broad) antimicrobial impact and a highly selective antitumoral activity could be extremely interesting as a new way to develop more effective, multifunctional anticancer therapies, or preventive approaches from the mucosal side of the tumor that might complement the currently applied systemic therapies.

U.U. was supported by a Miguel Servet contract (CP19/00028) from ISCIII, co-funded by the European Social Fund (ESF, investing in your future), and by ISCIII (PI20/00400), co-funded by FEDER (A way to make Europe). O.C.S. and X.D. received support from the Spanish Ministry for Science and Innovation (PID2019-111364RB-I00). A.V. was granted an ICREA ACADEMIA award.

Data Availability Statement: Data are available in Table S1.
Payer-Addressable Burden of Crohn's Disease in Members Treated with Biologics in the United States: Actuarial Analysis Findings from RAINBOW

Payer-addressable burden (PAB) reflects how real-world disease-associated costs impact the per member per month (PMPM) budget of a health plan, and can help to delineate drivers of PMPM costs and inform cost-management strategies for diseases with a high cost burden, such as Crohn's disease (CD). We aimed to evaluate the U.S. PAB of CD managed with biologics. Weighted mean costs per member with CD in the commercial health plan population between 2017 and 2019 were evaluated from a health plan actuarial perspective. In addition to the overall population of members with CD treated with adalimumab, infliximab, vedolizumab, or ustekinumab, the subpopulations of members who were naive to biologic therapies at treatment initiation and/or treatment-adherent members were also analyzed. Members treated with vedolizumab contributed the lowest PMPM costs. A similar number of members were treated with vedolizumab and ustekinumab, yet PMPM costs associated with ustekinumab were more than double those of vedolizumab. Biologic naivety and treatment adherence drove lower CD-related PMPM costs. The analyses we present here highlight that treatments and patient subgroups with lower PMPM costs are important focus areas for payers in terms of identifying strategies to manage the budget for CD in a U.S. plan population.

INTRODUCTION

Inflammatory bowel disease (IBD), which includes Crohn's disease (CD) and ulcerative colitis (UC), was diagnosed in over 3 million adults (1% of the population) in the United States in 2015, according to Dahlhamer et al. (2016). It was estimated that about 200 per 100,000 adults had CD in 2016, following a steady increase since 2007 (Ye et al. 2020). There are several therapies and treatment strategies for CD that aim to induce and subsequently maintain remission (Lichtenstein et al. 2018). Biologic treatments, which have greatly improved outcomes for many patients with IBD (Samaan et al. 2019), are generally reserved for patients with moderate to severe CD that has responded inadequately to conventional therapies used earlier in treatment pathways (Lichtenstein et al. 2018), although guidelines from the American Gastroenterological Association published in 2021 recommend the early introduction of a biologic, rather than delaying biologic use until specific prior therapies have failed (Feuerstein et al. 2021). Infliximab and adalimumab are tumor necrosis factor (TNF) antagonists, recommended for use in the induction and maintenance of remission in patients with moderate to severe CD (Lichtenstein et al. 2018; Feuerstein et al. 2021). Vedolizumab, an anti-integrin therapy, and ustekinumab, an anti-interleukin therapy, are also recommended for use in patients with moderate to severe CD (Lichtenstein et al. 2018; Feuerstein et al. 2021). CD is associated with a high health care cost (Park et al. 2020; Rao et al. 2017; Ganz et al. 2016) and resource burden (Manceur et al. 2020); biologics used to treat CD may have a varying impact on overall health plan costs, depending on the
timing of their use in the course of disease (Pillai et al. 2020) and the treatment adherence of members with CD (Feagan et al. 2014). It is therefore pertinent for U.S. payers to understand the impact of different biologics on costs at a population level and to evaluate their real-world utilization in the management of budget for members with CD. Payer-addressable burden (PAB) is used to represent the impact of disease-associated costs on a national care plan, including both pharmacy and medical expenses, and aims to delineate key drivers of costs in the member population of a health plan to facilitate improved allocation of budgets (Schwark Pratt et al. 2019). PAB is estimated based on analysis of health care administrative claims; in this study, per member per month (PMPM) cost was used to represent PAB. Unlike other retrospective claims database analyses, PAB analyses consider the payer perspective, encompassing all real-world scenarios that have a combined effect on disease-related costs, which in turn drive PMPM costs at the population level. PAB analyses also differ from financial or economic modeling of PMPM costs because they use a single source of data to analyze both the health care resource utilization and the costs that plan members generate in the real world, with different access and clinical choice dynamics. The purpose of the RAINBOW (acronym drawn from actuaRiAl analysIs of payer burdeN of Biologic treated crOhn patients in real World) study was to elucidate the impact of costs incurred by members with CD who were treated with biologics on the U.S. payer cost burden of a health plan, in three successive years.

Study Design

This study was an actuarial retrospective claims analysis of disease-related direct costs based on payer-allowed resource utilizations over 2017, 2018, and 2019.

Member Identification

These analyses used data from medical and pharmacy claims of adult members (over 18 years of age) with commercial health plan coverage from Optum's proprietary deidentified Normative Health Information database. Administrative claims data for members with pharmacy and medical coverage were used to determine medical and pharmacy utilization and associated cost patterns for CD treatment. The overall study population included members with CD who had at least one claim for a biologic received in a year. Biologics included in the analysis were vedolizumab intravenous (IV), ustekinumab subcutaneous (SC; after an initial IV loading dose), infliximab IV, and adalimumab SC. Biologic prescriptions were identified using CD diagnosis codes listed in the International Statistical Classification of Diseases and Related Health Problems, 10th Revision (ICD-10). Members with diagnosis codes for both CD and UC were included only if they had more claims for CD than for UC. Both biologic-experienced and biologic-naive members were included in the overall population. Biologic-naive members were defined as those who had not received any other biologic treatment within the current year and no biologic treatments in the 12-month period before the initial treatment date in the current year (a sketch of this classification is given below).
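As flagged above, the biologic-naive definition can be expressed as a simple claims filter. The sketch below is illustrative only: the record layout, field names and index-date logic are assumptions, not the actual database schema or algorithm used in RAINBOW.

from datetime import date, timedelta

# Hypothetical biologic claims: (member_id, drug, service_date).
claims = [
    ("A1", "vedolizumab", date(2018, 3, 10)),
    ("A1", "vedolizumab", date(2018, 5, 2)),
    ("B2", "ustekinumab", date(2018, 4, 1)),
    ("B2", "infliximab",  date(2017, 11, 20)),  # prior biologic -> experienced
]

def is_biologic_naive(member_id, year, claims):
    """Naive: no other biologic earlier in the current year, and no
    biologic claims in the 12 months before the index (first) claim."""
    member = sorted((c for c in claims if c[0] == member_id),
                    key=lambda c: c[2])
    in_year = [c for c in member if c[2].year == year]
    if not in_year:
        return False
    index_drug, index_date = in_year[0][1], in_year[0][2]
    lookback_start = index_date - timedelta(days=365)
    for _, drug, d in member:
        if lookback_start <= d < index_date:
            return False          # any biologic in the 12-month lookback
        if d.year == year and d < index_date and drug != index_drug:
            return False          # other biologic earlier in the year
    return True

for m in ("A1", "B2"):
    print(m, is_biologic_naive(m, 2018, claims))   # A1 True, B2 False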
Data Analyses

The analyses, evaluating CD-specific annual allowed direct claims costs per member and PMPM costs, were conducted over three successive 12-month periods, from January 2017 through December 2019. Weighted mean annual allowed direct claims costs described the sum of all medical and pharmacy costs. Costs captured were the total paid by the health plan and the total paid by members for medical or pharmacy claims. The following were included in medical allowed costs: inpatient, outpatient, physician, and other costs; pharmacy costs included the cost of health care professional (HCP)-administered drugs and self-administered drugs. For HCP-administered drugs (i.e., infliximab and vedolizumab), administration costs were included within outpatient and physician costs. PMPM costs were calculated by dividing the total annual costs of members with CD by the plan's entire population months of coverage (i.e., the number of individuals participating in the insurance plan each month) for that year. The analyses were performed from a health plan actuarial perspective. Analyses were performed on the overall CD population and on subpopulations of treatment-adherent and/or biologic-naive members, to delineate the drivers of PMPM. Treatment-adherent members were defined as members with at least 80% of days covered by prescription for the respective biologic (including self-administered and HCP-administered drugs) within the coverage year (a sketch of both calculations follows below). Demographics are also presented stratified by age group.

RESULTS

In total, 9,180, 10,269, and 11,259 adult members with CD treated with one of four biologics (vedolizumab, ustekinumab, infliximab, or adalimumab) for 2017, 2018, and 2019, respectively, were included in the analyses. Table 1 displays the demographics for the population analyzed. The majority of members were aged 18-44 years. Overall, more members had previously received a biologic therapy at initiation of biologic treatment than had not (Table 1). Between 2017 and 2019, there was an increase in the number of members treated with all biologics. The proportion of members treated with vedolizumab increased from 10.66% to 13.54% of total analyzed members. Similarly, during the same time period, the proportion of members treated with ustekinumab increased from 7.14% to 15.77%. The proportion of members treated with infliximab or adalimumab decreased over the same period. The proportion of members who were classed as biologic naive and were treated with vedolizumab remained relatively stable between 2017 and 2019 (35.75%, 35.92%, and 34.58% in 2017, 2018, and 2019, respectively), while the proportion of members treated with ustekinumab, infliximab, or adalimumab who were biologic naive decreased by about 13%, 3%, and 7%, respectively, from 2017 to 2019. Table 2 presents the treatment adherence of members with CD. The proportion of treatment-adherent members was greater than 60% for all of the four biologics across all years and was higher in the biologic-naive population than in the overall population. In the overall population, from 2017 to 2019, the proportion of treatment-adherent members increased for all the biologics analyzed; the greatest increase in adherence was observed for those treated with vedolizumab. Furthermore, a greater proportion of members treated with vedolizumab were treatment adherent than those treated with ustekinumab and adalimumab in all years. Adherence to infliximab was similar to that for vedolizumab.
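As flagged above, the two headline metrics reduce to simple arithmetic. The sketch below computes a PMPM cost and a proportion-of-days-covered (PDC) adherence flag; the 80% cutoff is the study's definition, while the dollar amounts, member-month total and coverage days are purely illustrative assumptions (the PMPM output coincides with the vedolizumab figure in the text only as an example).

# PMPM: total annual CD-related costs divided by the plan's member-months.
total_cd_costs = 4_920_000.00      # illustrative annual allowed costs ($)
plan_member_months = 12_000_000    # illustrative plan-wide months of coverage
pmpm = total_cd_costs / plan_member_months
print(f"PMPM = ${pmpm:.2f}")       # -> PMPM = $0.41

# PDC adherence: >= 80% of days in the coverage year covered by prescription.
days_covered = 300                 # illustrative days supplied in the year
days_in_year = 365
pdc = days_covered / days_in_year
adherent = pdc >= 0.80
print(f"PDC = {pdc:.2f}, adherent = {adherent}")   # -> PDC = 0.82, True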
The proportion of biologic-naive members who were treatment adherent slightly increased between 2017 and 2019 for vedolizumab (76.57% and 79.88%), infliximab (77.75% and 78.02%), and adalimumab (69.65% and 73.38%) but decreased for ustekinumab (72.12% and 68.00%). The proportion of adherent members within each age group was similar, although adherence was lower for those treated with infliximab who were aged 65-74 years than for those aged 18-64 years, and for members treated with adalimumab aged 18-44 years than for those aged 45-74 years, over all years analyzed. Pharmacy costs were the main driver of total CD-related costs in all populations evaluated, primarily comprising prescription drug costs for members treated with ustekinumab or adalimumab and HCP-administered drug costs for members treated with vedolizumab or infliximab (Tables 3 and 4). Pharmacy costs per treated member were lower for those treated with vedolizumab and infliximab than for those using ustekinumab and adalimumab in the overall population (Table 3), with the highest pharmacy costs observed for ustekinumab. The same trend was observed for members who were treatment adherent and/or biologic naive (Tables 3 and 4). Overall, lower mean biologic drug costs were observed in the biologic-naive member population than in the overall population (Table 3). Among biologic-naive members who were treatment adherent, of the four biologics, mean total CD-related costs were lowest for those treated with vedolizumab in 2017, and for those treated with infliximab in 2018 and 2019 (Table 4). Overall, the CD-related impact on PMPM costs, that is, the PAB, was lowest for vedolizumab out of the four biologics analyzed between 2017 and 2019 (Table 3). Similar mean numbers of members were treated with ustekinumab and vedolizumab (1,215 and 1,243 for ustekinumab and vedolizumab, respectively); however, PMPM costs for members treated with ustekinumab were more than double those of members managed with vedolizumab ($0.86 and $0.41 for ustekinumab and vedolizumab, respectively). This trend was similar between adalimumab and infliximab, whereby a comparable mean number of CD members were treated with adalimumab and infliximab overall, but the impact on PMPM cost for adalimumab was approximately one-third greater than for infliximab ($1.45 and $1.12 for adalimumab and infliximab, respectively). In the overall population, PMPM costs increased from 2017 to 2019 for all treatments, with the greatest increase observed for ustekinumab (Table 3). In
2019, there were about half the number of members treated with ustekinumab as with infliximab; however, PMPM costs for ustekinumab were higher (Table 3). Adherent members (Table 4) had lower PMPM costs than the overall population (Table 3). A greater number of members were adherent to vedolizumab treatment than to ustekinumab in 2017-2019. Notably, members adherent to vedolizumab treatment had lower PMPM costs than ustekinumab-adherent members in 2018 and 2019, with the same PMPM costs observed for the two treatments in 2017 (Table 4). Similarly, overall, there were more treatment-adherent members receiving infliximab than adalimumab; however, PMPM costs were lower for the infliximab population (Table 4). We observed about threefold lower PMPM costs for biologic-naive members compared with the overall population, with these costs appearing to be more stable for the biologic-naive than for the overall population between 2017 and 2019 (Table 3). These costs were further reduced by adherence to treatment (Table 4). In the subpopulation of biologic-naive, treatment-adherent members, PMPM costs were lower for vedolizumab than ustekinumab, despite a greater number of members being treated with vedolizumab (Table 4).

DISCUSSION

The impact of CD on PMPM costs, that is, the PAB, is important for payers in that it provides insights into areas for potential cost-saving assessments that may inform more effective strategies targeting biologic utilization in the real world. The current actuarial analysis presented the combined effect of pharmacy and medical costs at the PMPM level, with an opportunity to delineate the scenarios of biologic naivety and treatment adherence that potentially have a positive impact on PMPM costs. In our analyses, treatment costs contributed most to overall costs for members with CD, and were highest for ustekinumab; however, the PMPM reflects the combined impact of both medical and treatment costs. In a population of members with CD treated with one of four different biologics, those with relatively lower treatment costs, that is, those treated with HCP-administered vedolizumab or infliximab, had savings that potentially offset the incurred medical costs, resulting in a positive impact on PMPM costs. Given the high costs for members treated with ustekinumab, the PMPM cost is likely to increase if more members are treated with ustekinumab than with the other biologics evaluated in these analyses. Potentially, the impact on PMPM costs would be lower if more members were treated with vedolizumab than with adalimumab, ustekinumab, or infliximab. Consistent with our analyses, an analysis evaluating claims for IBD between 2007 and 2016 (Park et al. 2020), and a study using the Medical Expenditure Panel Survey (Ganz et al. 2016), demonstrated that costs for patients with IBD are driven largely by medication costs. Similar findings were also demonstrated in a retrospective claims study conducted from 1999 to 2017: a high proportion of health care costs was attributable to medication costs for patients with CD who were treated with biologics (Manceur et al. 2020).
Biologic-naive status and treatment adherence drove lower CD-related costs. In our study, a greater proportion of all members treated with vedolizumab were treatment adherent compared with those treated with ustekinumab or adalimumab, while proportions were lower than for infliximab in 2017 and 2018, and higher than for infliximab in 2019. The same trend was observed for those who were biologic naive. Our analyses included members treated with IV vedolizumab and infliximab, and SC adalimumab and ustekinumab. The greater adherence observed in our study for vedolizumab and infliximab may therefore be a result of the need for physician visits for administration of the medication. Adherence to vedolizumab and infliximab likely had the most positive impact on PMPM costs. Adherence to a biologic potentially resulted in a greater medical benefit, thereby offsetting the higher pharmacy costs incurred. In our analyses, lower PMPM costs were associated with treatment-adherent and biologic-naive members compared with the overall CD member population, suggesting that approaches to cost management should be focused in these areas. In a U.S. retrospective observational study, vedolizumab treatment was associated with better treatment persistence, lower rates of increased dosing frequency, and lower rates of a composite health care resource utilization endpoint than infliximab in patients with CD who were biologic naive (Patel et al. 2019), in support of our observations on overall treatment adherence and costs for vedolizumab compared with infliximab. In addition, a retrospective analysis of real-world claims data observed that patients who received an intervention to improve adherence to the anti-TNF agent certolizumab pegol had significantly fewer all-cause hospitalizations and lower total health care costs than those without (Wolf et al. 2018). A 2018 U.S. budget impact model evaluated the effect of including vedolizumab as a first-line biologic treatment option (i.e., along with the existing preferred first-line biologics infliximab and adalimumab) for patients with UC or CD in a hypothetical formulary, demonstrating large potential overall and PMPM cost savings over 3 years in comparison with its utilization as a second-line biologic (Wilson et al. 2018), which may align with our observation of lower PMPM costs in biologic-naive members. The current actuarial analysis, unlike budget impact model analyses, includes a real-world assessment of biologic utilization, with an opportunity to assess the dynamics of formulary access and clinical choices that impact PMPM costs. These analyses therefore facilitate a better understanding of the different drivers of PMPM costs at the health plan level, when members with CD are treated with biologics, than other types of real-world claims analyses provide. With varying formulary access, some biologics are used preferably to others early in the management of CD. Furthermore, a biologic that results in greater treatment adherence is likely to have a positive impact on medical costs, despite an incrementally higher pharmacy cost. There are some limitations of this study. Our analyses did not include longitudinal follow-up of members for cost differences or adjustment for confounders. Additionally, these analyses represent costs before rebates, which may vary by health plan or pharmacy benefit manager and may affect formulary positioning. These analyses should therefore be considered from the perspective of a U.S.
payer, regardless of plan-negotiated pharmacy rebates. Furthermore, our ability to measure treatment adherence was limited to the proportion of days covered as evaluated by prescriptions, and thus did not take into account the extent to which patients were actually taking the medication as prescribed. Finally, the definition of biologic-naive members differed from the clinical definition, in that members classed as biologic naive within our analyses may have previously received a biologic, but not within a certain time frame; however, data were limited to those available within the study period, so we cannot determine the full treatment history of members included within the analyses.

CONCLUSIONS

These analyses provide U.S. payers with a cross-sectional view of the financial burden at the health plan level when members with CD are treated with biologics, and capture the real-world burden encompassing all forms of clinical practice and resource utilization for members with CD enrolled in a commercial national plan.

[Table 1: Demographics of overall and biologic-naive members. Table 2: Clinical characteristics of overall and biologic-naive members. Table 3: CD-related costs in overall members between 2017 and 2019, as weighted mean costs per treated member per year (proportion of total allowed amount, %). Table 4: CD-related costs in biologic-naive and/or treatment-adherent members between 2017 and 2019, as weighted mean costs per treated member per year (proportion of total allowed amount, %). Table notes: CD, Crohn's disease; HCP, health care professional; PMPM, per member per month; Rx, prescription; NS, not enough sample; SD, standard deviation. Inpatient, outpatient, physician, pharmacy (HCP-administered and prescription drugs), and other costs (e.g., durable medical equipment and transportation) may not sum to the total allowed amount owing to rounding. The allowed amount is the portion of submitted charges covered under plan benefits, after discounts and excluded expenses and before employee and member responsibility (e.g., benefits limitations, co-pay amounts); costs are presented before rebates.]
Giant vortices in combined harmonic and quartic traps

We consider a rotating Bose-Einstein condensate confined in combined harmonic and quartic traps, following recent experiments [V. Bretin, S. Stock, Y. Seurin and J. Dalibard, cond-mat/0307464]. We investigate numerically the behavior of the wave function which solves the three-dimensional Gross-Pitaevskii equation. When the harmonic part of the potential is dominant, the vortex lattice evolves into a giant vortex as the angular velocity Ω increases. We also investigate a case not covered by the experiments or the previous numerical works: for strong quartic potentials, the giant vortex is obtained for lower Ω, before the lattice is formed. We analyze in detail the three-dimensional structure of vortices.

* Electronic address: aftalion@ann.jussieu.fr
† Electronic address: danaila@ann.jussieu.fr

I. INTRODUCTION

The existence and formation of quantized vortices have recently been widely studied in Bose-Einstein condensates [1-4, 6, 7]. One type of experiment consists in rotating the magnetic trap confining the atoms. For a harmonic trapping potential (1/2)mω⊥²r² and a rotation frequency Ω close to 0.7ω⊥, vortices start to appear and arrange themselves into a lattice [5]. As Ω is increased, the number of vortices increases as well. In the case of a harmonic trap, the confinement and the centrifugal force prevent the condensate from rotating at a frequency Ω beyond ω⊥. The regime of fast rotation is especially interesting since it provides a setting for a large number of vortices and eventually giant vortices [8,9]. Theoretical and numerical studies have considered stiffer potentials than the harmonic one, behaving like rⁿ or r² + r⁴ [10-13]. This type of trapping, which eliminates the singular behavior at Ω = ω⊥, has recently been achieved experimentally by superimposing a blue-detuned laser beam on the magnetic trap holding the atoms [14]. The resulting potential is the superposition of the harmonic trap and the Gaussian potential of the laser beam, of amplitude U0 and waist w. For r/w sufficiently small, the resulting potential can be approximated by a harmonic-plus-quartic form. The purpose of this paper is to find the stable states (vortex lattice, vortex array with hole, and giant vortices) of the condensate with this type of trapping potential and to analyze their three-dimensional structure. We consider a case similar to the experiments and previous theoretical settings, where the amplitude U0 of the superimposed laser is small, so that the coefficient of the r² term is positive. But we are especially interested in the case where the laser beam has a sufficiently large amplitude that the coefficient of the harmonic part changes sign. The point is that this case of a quartic-minus-harmonic potential allows the observation of giant vortices at lower angular velocities than previously, and the structure of the vortices is different.

II. NUMERICAL APPROACH

We consider a pure BEC of N atoms confined in a trapping potential V_trap, rotating along the z axis at angular velocity Ω. The equilibrium of the system corresponds to local minima of the Gross-Pitaevskii energy in the rotating frame, where g_3D = 4πℏ²a/m and the wave function φ is normalized to unity, ∫_D |φ|² = 1. For numerical purposes, it is convenient to rescale the variables; in particular, the scaled rotation frequency is Ω̃ = Ω/(εω⊥). In this scaling, the trapping potential takes a dimensionless form characterized by the parameters α and k. Note that we take ω⊥ (which is the frequency of the original harmonic potential V_h), and not ω⊥|1 − α|, as the scaling frequency for Ω.
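The display equations of this section (the trap potential, its quartic approximation, and the rotating-frame energy) did not survive extraction. The following LaTeX block is a hedged reconstruction from the surrounding text, assuming the Gaussian laser potential of Ref. [14] with amplitude U0 and waist w and the standard Gross-Pitaevskii functional; the exact normalization, the equation numbering, and the definition of the dimensionless quartic coefficient k are assumptions.

```latex
% Hedged reconstruction; expand e^{-x} \approx 1 - x + x^2/2 with x = 2r^2/w^2.
\begin{align}
  V(r) &= \tfrac{1}{2}\, m\omega_\perp^2 r^2 + U_0\, e^{-2r^2/w^2}
        \;\approx\; U_0 + \tfrac{1}{2}\, m\omega_\perp^2 (1-\alpha)\, r^2
        + \frac{2U_0}{w^4}\, r^4,
  \qquad \alpha = \frac{4U_0}{m\omega_\perp^2 w^2}, \\
  E(\phi) &= \int_{\mathcal D} \frac{\hbar^2}{2m}\,\lvert\nabla\phi\rvert^2
        + V_{\mathrm{trap}}\,\lvert\phi\rvert^2
        + \frac{g_{3D}}{2}\,\lvert\phi\rvert^4
        - \Omega\,\phi^{*} L_z\,\phi,
  \qquad L_z = -i\hbar\,(x\,\partial_y - y\,\partial_x).
\end{align}
```

With this expansion, α > 1 is precisely the condition under which the r² coefficient turns negative, i.e., the quartic-minus-harmonic regime discussed above.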
For numerical applications, we choose ε = 0.02, β = ωz/ω⊥ = 1/7, and k/α = 0.25, which fit the experimental values of Ref. [14]. In [14], α = 0.25, but we will take bigger values, since our aim is to understand the influence of α when it gets bigger than 1. Then, we use the dimensionless energy introduced in [15], written in terms of the Hamiltonian H and the angular momentum L_z along the z axis. Using a hybrid Runge-Kutta-Crank-Nicolson scheme described in Ref. [15], we compute critical points of E(u) by solving the norm-preserving imaginary-time propagation of the corresponding equation, where με is the Lagrange multiplier for the constraint ∫_D |u|² = 1, with u = 0 on ∂D. Here, D is a rectangular domain containing the condensate. A typical simulation uses a refined grid of 200 × 200 × 140 nodes, which is sufficient to achieve grid independence for all considered numerical experiments. We first compute the steady state corresponding to a nonrotating (Ω = 0) condensate, using as initial condition u = √ρ_TF, the Thomas-Fermi profile obtained by neglecting the kinetic energy term. Depending on the choice of α, the Thomas-Fermi density profile can display three different shapes, as shown in figure 1. The corresponding steady solutions obtained for Ω = 0 (which will be used as initial conditions for the subsequent runs with Ω > 0) are displayed in figure 2. We can distinguish three cases:

• α < 1 (weak quartic case) is the case closest to the experiments and is strongly influenced by the harmonic part. As Ω increases, the effective trapping potential V_eff(r) = V(r) − ε²Ω²r² starts to have a Mexican hat structure. A vortex lattice appears for intermediate values of Ω and turns into a lattice with a hole for large Ω.

• α ≳ 1 (intermediate quartic case): the density profile has a depletion close to the center at Ω = 0, but no hole; the density profile starts to have a hole for intermediate values of Ω.

III. DESCRIPTION OF THE RESULTS

Depending on the values of α and Ω, we observe different types of configurations: vortex-free configurations where the amplitude of the wave function takes into account the shape of the effective trapping potential, vortex lattices, vortex arrays with a hole, and giant vortices. We first consider a case where the potential V has a Mexican hat structure. The isosurface of lowest density of the solution is plotted in figure 3, the top view in figure 4, and the view in the middle plane z = 0 in figure 5. For Ω small, the density has a depletion close to the center of the condensate, but no hole and no vortices. For larger Ω (Ω/ω⊥ ≥ 0.16), vortices are nucleated. For 0.16 ≤ Ω/ω⊥ < 0.24, the density of the solution is zero close to the top and bottom of the condensate, but not at the center, which gives rise to a special structure of vortices: the vortices arrange themselves along two concentric circles. The inner circle is made up of vortices which are isolated in the center of the condensate but reconnect towards the top of the condensate (see the details in figure 6). The outer circle is made up of almost straight U vortices that reconnect to the inner circle close to the top and bottom of the condensate. As Ω increases, the number of vortices on each circle increases. In figure 4(b), the inner vortices seem to be bigger, but this is just an effect of the projection and the bending: the view at z = 0 (figure 5) allows one to check that all vortices have the same size.
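As a minimal illustration of the norm-preserving imaginary-time (gradient-flow) relaxation described above, here is a two-dimensional Python sketch. It is not the paper's three-dimensional hybrid Runge-Kutta-Crank-Nicolson scheme: all parameter values are illustrative, and periodic finite differences stand in for the Dirichlet condition u = 0 on ∂D.

```python
import numpy as np

# 2-D sketch of norm-preserving imaginary-time propagation for the
# Gross-Pitaevskii energy in a rotating frame; every parameter below is
# illustrative rather than taken from the paper.
N, L = 128, 8.0                  # grid points per direction, half-box-width
dx = 2 * L / N
x = np.linspace(-L, L, N, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
R2 = X**2 + Y**2

alpha, k, g, Omega = 1.2, 0.3, 500.0, 0.4        # illustrative parameters
V = 0.5 * (1.0 - alpha) * R2 + 0.25 * k * R2**2  # quartic-minus-harmonic trap

def lap(u):      # 5-point periodic Laplacian
    return (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
            np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u) / dx**2

def dX(u):       # centered x-derivative
    return (np.roll(u, -1, 0) - np.roll(u, 1, 0)) / (2 * dx)

def dY(u):       # centered y-derivative
    return (np.roll(u, -1, 1) - np.roll(u, 1, 1)) / (2 * dx)

def Lz(u):       # angular momentum operator L_z = -i (x d/dy - y d/dx)
    return -1j * (X * dY(u) - Y * dX(u))

# Thomas-Fermi initial condition: rho_TF proportional to max(mu - V, 0)
mu0 = 2.0
u = np.sqrt(np.maximum(mu0 - V, 0.0) / g).astype(complex)
u /= np.sqrt(np.sum(np.abs(u)**2) * dx**2)

dt = 1e-4
for _ in range(20000):           # iterate until the energy stagnates
    Hu = -0.5 * lap(u) + V * u + g * np.abs(u)**2 * u - Omega * Lz(u)
    u = u - dt * Hu              # one gradient-flow (imaginary-time) step
    u /= np.sqrt(np.sum(np.abs(u)**2) * dx**2)   # restore the unit norm
```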
For Ω/ω⊥ ≥ 0.24, the density profile of the solution is zero in the center of the condensate, which creates a giant vortex: the straight vortices that were close to the center on the inner circle have merged into a giant vortex. There are also isolated vortices regularly scattered on a circle around the giant vortex. As Ω increases, the number of vortices inside and outside the giant vortex increases and the length of the isolated vortices decreases, as can be seen in figure 3. Note that the isolated vortices are U vortices that reconnect to the giant vortex at the center, not to the boundary of the condensate as in the case of harmonic trapping [15]; that is, their bending is concave, not convex. For Ω/ω⊥ = 0.48 (see figure 7), the number of vortices has increased and there are two outer circles of vortices around the giant vortex: one circle of U vortices that reconnect to the giant vortex, and one circle of vortices that reconnect to the outer boundary of the condensate. The two circles have different concavities in their bending, as illustrated in figure 7.

For stronger quartic potentials, the effective potential has a Mexican hat structure for all Ω, and the density profile of the solution always has a hole in the center, as illustrated in figure 8. The vortex phase profiles (figure 10) show that the phase singularities do not completely overlap in the center of the vortex. This feature has already been observed in two-dimensional numerical simulations of a fast-rotating condensate by Kasamatsu et al. [13], who described the giant vortex as a hole containing single quantized vortices of such low density that they are discernible only by their phase defects.

The weak quartic case (α < 1) is the case closest to the experiments [14]. The special feature of this case is that one has to reach larger values of Ω in order to obtain giant vortices. The density profiles of the solutions are shown in figures 11 and 12. Figure 12 shows the three-dimensional structure of the vortices. There are isolated single quantized vortices, forming a lattice. From Ω/ω⊥ = 0.48, the vortices near the center of the condensate start to merge, leading to a central structure similar to that displayed in figure 6(a). For Ω/ω⊥ ≥ 0.56, the central vortices have merged into a giant vortex, while the lattice still exists around it. Similarly to the experiments, the hole is obtained for large values of the angular velocity (Ω/ω⊥ ≥ 0.56). It is interesting to note from the side view of the condensate (figure 12) that most vortices of the lattice are straight, but some bent (U-shaped) vortices exist. The U vortices are connected either to the outer boundary of the condensate (bending outwards, figure 12a,c) or to the giant vortex (bending inwards, figure 12d).

IV. CONCLUSION

We have studied stable configurations of the Gross-Pitaevskii energy when the trapping potential is modified to include a quartic minus a harmonic term. For weak quartic potentials, the solution evolves from a vortex lattice to a vortex array with a hole as the angular velocity Ω is increased. For stronger quartic potentials, giant vortices are obtained for lower Ω, at a stage where the lattice is not so dense. The typical structure is a central giant vortex with an outer circle of vortices around it. We believe that a criterion depending on the radius of the condensate and the radius of the annulus should characterize the final structure of the giant vortex: whether or not there is a circle of vortices around the giant vortex, and its precise location.
The form of the potential considered in our simulations was inspired by recent experiments [14]. We have checked that keeping the exponential part of the potential, instead of its quartic-minus-harmonic approximation, does not change the qualitative behaviour of the solutions. This suggests that if this situation could be achieved experimentally, it would allow the observation of giant vortices at lower velocities than previously, that is, before reaching the fast-rotation regime.
Antibacterial Activity of Polyphenols: Structure-Activity Relationship and Influence of Hyperglycemic Condition

Polyphenols are plant-derived natural products with well-documented health benefits to human beings, such as antibacterial activities. However, the antibacterial activities of polyphenols under hyperglycemic conditions have rarely been studied, although they could be relevant to antibacterial efficacy in disease states such as diabetes. Herein, the antibacterial activities of 38 polyphenols under mimicked hyperglycemic conditions were evaluated, and the structure-antibacterial activity relationships of the polyphenols were analyzed. The presence of glucose apparently promoted the growth of the bacterial strains tested in this study: the OD600 values of the tested strains increased from 1.09-fold to 1.49-fold upon the addition of 800 mg/dL glucose. The polyphenols showed structurally dependent antibacterial activities, which were significantly impaired under the hyperglycemic conditions. The results from this study indicated that high blood glucose might promote bacterial infection, and that the hyperglycemic conditions resulting from diabetes are likely to suppress the antibacterial benefits of polyphenols.

Introduction

Polyphenols are secondary metabolites of plants and are well known as natural antioxidants. Based on their chemical structures, polyphenols can be classified as flavonoids and non-flavonoids [1]. Flavonoids possess C6-C3-C6 carbon skeletons consisting of two phenyl rings (A and B) and a heterocyclic ring (C). According to the degree of hydrogenation of the heterocyclic ring and the connection site of ring B, flavonoids can be further classified into several subclasses, such as flavones, flavonols, flavanones, and isoflavonoids [2]. Non-flavonoids mainly include stilbenes, chalcones, anthraquinones, ellagitannins, ellagic acids, and phenolic acids [3]. Polyphenols can be ingested by humans through the consumption of fruits, vegetables, and plant-derived beverages. The consumption of diets rich in polyphenols has usually been associated with beneficial effects on human health [4]. Despite controversies, epidemiological studies suggest that dietary polyphenols could lower the risk of cardiovascular disease; prevent obesity, cancer, and type 2 diabetes; attenuate brain aging and Alzheimer's disease; and maintain gut health [5,6]. These benefits have usually been associated with their diverse biological activities, such as anti-oxidation, anti-inflammation, antibacterial action, enzyme inhibition, glycation inhibition, immunomodulation, and miRNA interference [7-9]. Among these bioactivities, the antibacterial activities have attracted much interest owing to their potential for dealing with drug-resistant bacteria that are insensitive to conventional antibiotics [10]. Polyphenols, especially flavonoids, have been suggested to exert their antibacterial effects in several ways, including direct killing of bacteria and synergistic activation of antibiotics.

Characterization of the Glycation Products

In the process of protein glycation, fluorescent advanced glycation end products (AGEs) such as vesperlysines A, B, and C, pentosidine, and pyrropyridine can be used as indicators of the degree of glycation [20]. As shown in Table 1, the fluorescence intensities of gBSA and gBP at 460 nm were both significantly increased compared to the controls, indicating the formation of fluorescent glycation products after incubation.
However, some glycation products of proteins are non-fluorescent and cannot be detected by fluorescence spectra; hence, the colorimetric determination of fructosamine residues using nitroblue tetrazolium (NBT) was performed. As shown, the E_DMF value of BSA was only 1.84 mM, whereas that of gBSA increased about 12-fold to 20.71 mM. The E_DMF value of gBP was also more than threefold that of BSA. Moreover, since the glycation of proteins can modify the functional groups of chromophores, the effects of glycation on the UV spectra of BSA and BP were also investigated. Similar to the increase in fluorescence intensities, the absorbance at specific wavelengths was also increased after glycation, indicating the onset of AGE formation on the proteins [21].

Antibacterial Susceptibilities of Polyphenols

A disc diffusion assay is usually applied to evaluate antimicrobial susceptibility in vitro. Therefore, the potential antibacterial capacities of the polyphenols were qualitatively screened using this method. The area of the inhibition zone for a tested polyphenol is positively correlated with its antibacterial effect. As shown in Table 2, 15 polyphenols showed apparent inhibitory effects on the growth of certain bacteria (Φ > 8 mm), and the sensitivities of different bacteria to the same polyphenol varied. Compared to the positive controls of antibiotics, most of the polyphenols showed weaker activities, except the phenolic acids, whose antibacterial effects on VP were comparable to antibiotics. However, unlike the broad-spectrum bactericidal effects of antibiotics, the antibacterial effects of the polyphenols were selective. For example, hesperetin showed considerable antibacterial activity against the four Gram-negative bacteria (EC, ST, ES, and VP), but no activity against the Gram-positive bacterium SA. Meanwhile, remarkable inhibition zones of resveratrol were seen for EC, ST, SA, and VP (9~12 mm), but not for ES. Similarly, selective antibacterial effects have been found by disc diffusion tests for six antibacterial flavonoids from the aerial parts of Pterocaulon alopecuroides, which exhibited antibacterial activities only against Gram-positive bacteria [22].
Table 2. Results of antimicrobial susceptibility tests for polyphenols (inhibition-zone diameters, mm, by compound and hydroxylation pattern).

Minimum Inhibitory Concentration (MIC) Values and Structure-Activity Relationships

Since the disc diffusion tests only give qualitative results and may not be suitable for comparative studies on the antibacterial activities of polyphenols, the quantitative MIC values of the antibacterial polyphenols were determined.
The structure-activity relationships were discussed based on both the qualitative and quantitative results. As shown in Table 3, baicalein and myricetin show the most significant antibacterial effects among the tested flavonoids. Baicalein has a pyrogallol structure on ring A (5,6,7-OH) and myricetin has a pyrogallol structure on ring B (3',4',5'-OH), which together indicate that the pyrogallol structure is an indicator of potent antibacterial activity in flavonoids. Additionally, all the flavonols and flavanones with antibacterial activities have two hydroxyl substituents at C-5 and C-7 of ring A in common, such as quercetin, rutin, naringenin, and hesperetin. These results are in agreement with the previous report that these structures are associated with the antibacterial ability of flavonoids [23]. In addition, flavanones are more active than the corresponding flavones. For example, naringenin showed antibacterial effects on all the tested bacteria, whereas apigenin showed almost no effect. This result may indicate that saturation of the C2=C3 double bond increases antibacterial activity. Comparing the activities of flavonoid aglycones and glycosides, it could be seen that the glycosides showed lower activity than the aglycones. Moreover, resveratrol showed remarkable antibacterial activity, which vanished after the hydroxyl group at position 3 was substituted by a glucosyl group to form piceid. All the phenolic acids showed significant antibacterial activities, especially pyrogallic acid, and substituents with longer carbon chains conferred stronger antibacterial activities on the gallic acid derivatives.

Effects of Glucose on the Bacterial Growth

Diabetic patients are characterized by a higher blood glucose level than healthy individuals, and this glucose is likely to serve as a nutrient for the growth of infecting bacteria.
Herein, the tested bacteria were cultivated in Luria-Bertani (LB) medium with glucose in the range of 200 mg/dL to 800 mg/dL, and the OD600 values of the bacterial suspensions were monitored to evaluate the influence of glucose on bacterial growth. As shown in Figure 1, glucose promoted the growth of the bacteria in LB medium to varying degrees. For ST and SA, the OD600 values at stationary phase were significantly increased by adding as little as 200 mg/dL glucose (1.42-fold for SA and 1.22-fold for ST). When higher concentrations of glucose were added, the further increases in OD600 values were not apparent (1.49-fold for SA and 1.23-fold for ST at 800 mg/dL glucose). In addition, the OD600 values at stationary phase of EC and ES increased along with increasing glucose concentrations in the LB medium (1.21-fold for EC and 1.35-fold for ES at 800 mg/dL glucose). However, glucose only slightly enhanced the growth of VP (1.09-fold at 800 mg/dL glucose). These results are consistent with a previous study, which showed that the addition of glucose (less than 1000 mg/dL) to both urine and Mueller-Hinton broth enhanced the growth rate and final bacterial yield of several Escherichia coli strains [24]. Recently, it was also found that elevation of the basolateral glucose concentration promoted the growth of apical S. aureus in an in vitro airway epithelia-bacteria co-culture model [25]. Improved growth rates and cell densities of Lactobacillus species have also been observed upon increasing the glucose concentration from 0.1% to 0.5% in a defined vaginal-secretion-simulating medium in the presence of 0.005% MnCl2 [26]. Besides nourishing pathogenic bacteria, glucose might also play various other important roles in infections. It might assist the formation of bacterial biofilms, which is closely related to the occurrence of drug resistance [27]. In addition, glycolysis is required for S. typhimurium infections of mice and macrophages, and the transport of glucose is required for replication within macrophages [28]. Moreover, glucose sometimes even acts as an environmental signaling molecule which can, for example, trigger ATP secretion from bacteria and regulate the transcription of invasion-associated genes of bacteria [29].
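As a minimal arithmetic sketch of the fold-change read-out used above, the snippet below divides stationary-phase OD600 values by the glucose-free control. The control ODs are mock numbers chosen so that the printed fold changes echo those reported in the text.

```python
# Illustrative fold-change computation for stationary-phase OD600 values;
# the control dictionary is a mock-up, not measured data.
od600_control = {"SA": 0.90, "ST": 1.10, "EC": 1.00, "ES": 0.80, "VP": 1.20}
od600_glucose_800 = {"SA": 1.34, "ST": 1.35, "EC": 1.21, "ES": 1.08, "VP": 1.31}

for strain, od in od600_glucose_800.items():
    fold = od / od600_control[strain]
    print(f"{strain}: {fold:.2f}-fold change at 800 mg/dL glucose")
```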
Antibacterial Activities of Polyphenols under Hyperglycemic Conditions

Simulated hyperglycemic conditions were created by adding glucose or glycated products to the LB medium, and the MIC values of the most potent antibacterial polyphenols against the corresponding bacteria were subsequently determined. As shown in Table 4, in the presence of glucose, the MIC values of the polyphenols were mostly increased compared to the control. For instance, the MIC values of baicalein for ES and SA were 0.5 mmol/L and 0.25 mmol/L, respectively, and both were doubled in the medium with glucose. In addition, for most of the tested polyphenols, the MIC values obtained in media supplemented with BSA, gBSA, BP, or gBP were larger than those in the control. These results indicated that the proteins added to the LB medium, whether glycated or non-glycated, all inhibited the antibacterial effects of the polyphenols. Moreover, the antibacterial effects of the polyphenols in LB medium with gBSA were generally weaker than those in LB medium with BSA. Pyrogallic acid showed a MIC of 0.75 mmol/L for both ST and VP in the BSA group, and this value was doubled in the gBSA group. Comparing the MIC values of the BP group with those of the gBP group, it was also found that gBP reduced the antibacterial activity of the polyphenols more than BP did. Since glucose has been shown to promote the growth of bacteria, it is not surprising that the polyphenols in the glucose-containing medium showed inhibited antibacterial activities. Proteins, especially serum albumins, usually bind polyphenols, and it has been suggested that only the non-protein-bound fraction of polyphenols is microbiologically active [30]. Hence, the presence of proteins plausibly exerted a negative influence on the antibacterial activity of the polyphenols. Other researchers have also found impairment of antibacterial activity by plasma protein binding, which is attributed to the prevention of intra-bacterial uptake of antibiotics [31]. According to our previous study, the glycation of plasma proteins lowers their binding affinities for polyphenols, and the glycation of human serum albumin is believed to reduce its binding affinities for acidic drugs such as polyphenols and phenolic acids [19]. However, at the concentrations used in this study (>0.1 mmol/L), the ratios of polyphenol-BSA binding and polyphenol-gBSA binding are different. Thus, gBSA itself, being rich in highly reactive AGEs, may play important roles in diminishing the antibacterial effects of polyphenols. It has been revealed that AGEs are also produced, metabolized, and accumulated even in short-lived bacterial cells, and are usually secreted by energy-dependent efflux pump systems [32]. Hence, the AGEs in gBSA are likely to interact with possible AGE receptors in bacteria and stimulate the efflux pump systems that are responsible for the removal of polyphenols from bacterial cells [33]. Accordingly, the significantly decreased antibacterial effects of polyphenols in gBP can also be attributed to AGEs. Moreover, the highly reactive carbonyl species produced during the glycation process are able to trap polyphenols and inhibit their antibacterial activities [34].
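The MIC read-out behind these comparisons (the lowest concentration with no visible growth across the 0.1-2.5 mmol/L dilution series described in the Methods) can be sketched as follows; the OD values, blank, and growth threshold are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of MIC determination from a broth microdilution series.
concs = np.array([0.1, 0.25, 0.5, 0.75, 1.0, 1.5, 2.0, 2.5])  # mmol/L
od600 = np.array([0.92, 0.88, 0.61, 0.33, 0.04, 0.03, 0.03, 0.02])
blank = 0.02          # uninoculated negative control
threshold = 0.05      # OD above blank counted as visible growth

def mic(concs, od600):
    """Lowest concentration with no detectable growth, scanning upward."""
    for c, od in zip(concs, od600):
        if od - blank <= threshold:
            return c
    return None  # no inhibition within the tested range

print(f"MIC = {mic(concs, od600)} mmol/L")  # -> 1.0 mmol/L
```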
Effects of Polyphenols on the Growth of Bacteria under Hyperglycemic Conditions

Two potent antibacterial polyphenols, resveratrol (Re) and pyrogallic acid (PA), were further investigated for their inhibitory effects against the growth of EC and SA, respectively. As shown in Figure 2, with the addition of 1 mM resveratrol (the MIC), the growth of EC was completely inhibited in the pure culture medium. However, glucose apparently increased the survivability of EC in response to Re. In addition, the supplements of proteins and plasmas also significantly weakened the inhibitory effects of Re on the growth of EC. The growth rate and the final cell density of EC in the BSA group were lower than those in the gBSA group, indicating that gBSA impaired the antibacterial effect of Re. It has been reported that the growth rate of enterobactin-producing E. coli was significantly increased in an iron-limited medium with the addition of glycated BSA, which was attributed to the enhanced iron availability conferred by protein glycation [35]. Comparing the growth in the BP group with that in the gBP group, the antibacterial performance of Re was likewise reduced in the hyperglycemic environments.

The effects of PA on the growth of SA in different culture media are shown in Figure 3. PA at a concentration of 1 mM completely restrained the growth of SA in the pure medium. Glucose and proteins negatively influenced the antibacterial effects of PA. Additionally, the negative influences of gBSA and gBP were stronger than those of BSA and BP, respectively, which also supported the conclusion that the antibacterial performance of polyphenols was suppressed under hyperglycemic conditions.
These results indicated that high blood glucose may create a hotbed for bacterial infection, and that the hyperglycemic conditions resulting from diabetes are likely to suppress the antibacterial benefits of polyphenols.

Determination of the Glycation Products

The glycation products were quantified by measuring the contents of fructosamine residues, determined by the method in the literature with slight modification [37]. DMF at concentrations between 0 and 1 mM containing 50 mg/mL BSA was used for calibration. Contents of fructosamine residues in samples were monitored by comparison to the standard curve (R² > 0.99) and expressed as DMF-equivalent concentrations (E_DMF, mM). The UV spectra of the samples were scanned from 250 nm to 800 nm, and fixed-wavelength data at 330, 360, and 400 nm were obtained. The fluorescence spectra of the glycated BSA (gBSA) and glycated BP (gBP) were measured in the wavelength range of 350-700 nm upon excitation at 355 nm, and the fluorescence intensities (FI) at 460 nm were recorded. Samples were diluted for measurements if necessary. The measurements were performed in triplicate, and the results were found to be reproducible within experimental error.

Antibacterial Assay

The antibacterial activities of polyphenols against five common pathogenic bacterial strains were tested: one Gram-positive bacterium, Staphylococcus aureus ATCC 12600 (SA), and four Gram-negative strains, Vibrio parahemolyticus ATCC 17802 (VP), Escherichia coli O157:H7 ATCC 43895 (EC), Salmonella typhimurium ATCC 14028 (ST), and Enterobacter sakazakii ATCC 51329 (ES). All the isolates were provided by Dr. Yu Zhao of Shanghai Normal University.
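A minimal sketch of the DMF calibration step described above, assuming a simple linear standard curve; the absorbance values, the wavelength variable name, and the helper e_dmf are hypothetical mock-ups, not assay data.

```python
import numpy as np
from scipy.stats import linregress

# Mock calibration: DMF standards (mM) versus colorimetric absorbance.
dmf_mM = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])       # DMF standards
a_nbt = np.array([0.02, 0.11, 0.21, 0.30, 0.41, 0.50])  # mock absorbance

fit = linregress(dmf_mM, a_nbt)
assert fit.rvalue**2 > 0.99, "calibration should be linear (R^2 > 0.99)"

def e_dmf(absorbance: float) -> float:
    """DMF-equivalent fructosamine concentration (mM) from absorbance."""
    return (absorbance - fit.intercept) / fit.slope

print(f"E_DMF of sample with A = 0.25: {e_dmf(0.25):.2f} mM")
```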
The paper disc diffusion method was employed to determine the antibacterial activities of the polyphenols. In brief, a 100 µL suspension of the tested microorganism (10⁹ CFU/mL) was evenly spread on Luria-Bertani (LB) agar plates. Then, sterilized filter paper discs (Φ = 6 mm) soaked in 0.1 mol/L solutions of the tested polyphenols (dissolved in DMSO) were placed on the inoculated plates. After incubation at 37 °C for 24 h, the diameters of the inhibition zones were measured with rulers. Kanamycin and carbenicillin were used individually as positive controls, and DMSO without polyphenols was used as a negative control. Each assay was repeated at least twice.

The polyphenols showing apparent inhibition zones in the disc diffusion assay were used to investigate the antibacterial activity under hyperglycemic conditions by determination of minimum inhibitory concentration (MIC) values. The MIC was defined as the lowest concentration of polyphenol at which no bacterial growth was observed after incubation. The MIC values of the polyphenols were determined using microdilution tests with LB broth and were evaluated in the range of 0.1 mmol/L to 2.5 mmol/L. In a typical assay, bacteria in their exponential phase were added to sterilized LB broth to obtain a culture broth with a bacterial concentration of 1.0 × 10⁵ CFU/mL. Then 80 µL of BP (or BSA, gBSA, or gBP) was separately added to 680 µL of culture broth in 2 mL tubes; 40 µL of polyphenol at different concentrations was added to each tube and fully mixed. LB broths without bacteria were used as negative controls, and culture broths without polyphenols were used as positive controls. All the samples were placed in a 37 °C incubator for 24 h. After that, the MICs were determined by the microplate method [38].

To examine the effects of the polyphenols on bacterial growth rate and behavior under hyperglycemic conditions, the growth curves of the five bacteria in the presence of glucose were first studied. LB broths with glucose in the range of 200 mg/dL to 800 mg/dL were prepared in 2 mL culture tubes, and 1.0 × 10⁵ CFU/mL of the tested bacteria were inoculated. The bacteria were allowed to grow in a shaking incubator (200 rpm) at 37 °C. The optical density at 600 nm (OD600) of the bacterial suspensions was then recorded by spectrophotometer at predetermined intervals. Inoculated LB broth without glucose was used as a control. Second, the effects of selected polyphenols on the growth curves of polyphenol-sensitive bacteria in different simulated hyperglycemic environments were investigated. Briefly, 80 µL of glucose (800 mg/dL), gBSA, or gBP was added to 680 µL of inoculated LB broth, and 40 µL of polyphenol solution was then added to achieve a concentration of 1 mmol/L. Bacterial incubation and OD600 measurement were performed as in the above method. The bacterial growth curves in the presence of polyphenol alone and together with BSA or BP were also investigated. Inoculated LB broths without polyphenols were used as controls. The measurement of samples was performed in triplicate.

Conclusions

In the present study, we have demonstrated that glucose increased the growth rates of five common pathogenic bacteria to different degrees in vitro. The antibacterial performance of polyphenols was found to be significantly impaired under hyperglycemic conditions simulating the diabetic state.
These results supported the hypothesis that high blood glucose creates a hotbed for bacterial infection, and that the hyperglycemic conditions resulting from diabetes are likely to suppress the antibacterial benefits of polyphenols. This hypothesis may provide another angle for interpreting why there is an enhanced risk of infection for diabetic patients, and why diabetic infections are always difficult to treat. Additionally, it is also helpful for effectively utilizing dietary polyphenols to maintain the health of diabetic patients. Nevertheless, further in vivo studies are still needed to validate the present conclusions.
Coleon U, Isolated from Plectranthus mutabilis Codd., Decreases P-Glycoprotein Activity Due to Mitochondrial Inhibition

Multidrug resistance in cancer is often mediated by P-glycoprotein. Natural compounds have been suggested as a fourth generation of P-glycoprotein inhibitors. Coleon U, isolated from Plectranthus mutabilis Codd., was reported to modulate P-glycoprotein activity, but the underlying mechanism has not yet been revealed. Therefore, the effects of Coleon U on cell viability, proliferation, and cell death induction were studied in a non-small-cell lung carcinoma model comprising sensitive and multidrug-resistant cells with P-glycoprotein overexpression. P-glycoprotein activity and mitochondrial membrane potential were assessed by flow cytometry upon Coleon U, sodium orthovanadate (an ATPase inhibitor), and verapamil (an ATPase stimulator) treatments. SwissADME was used to identify the pharmacokinetic properties of Coleon U, while P-glycoprotein expression was studied by immunofluorescence. Our results showed that Coleon U is not a P-glycoprotein substrate and is equally efficient in sensitive and multidrug-resistant cancer cells. The decrease in P-glycoprotein activity observed with Coleon U and verapamil after 72 h is antagonized in combination with sodium orthovanadate. Coleon U induced pronounced mitochondrial membrane depolarization and showed a tendency to decrease P-glycoprotein expression. In conclusion, the delayed effect of Coleon U on the decrease in P-glycoprotein activity is due to the dependence of P-glycoprotein function on ATP produced in mitochondria.

Introduction

Non-small-cell lung cancer (NSCLC) is a common and highly lethal tumor worldwide [1]. Most NSCLC patients are diagnosed at an advanced stage of the disease; despite extensive research, intrinsic and acquired chemotherapy resistance, particularly multidrug resistance (MDR), remains a significant challenge in improving treatment outcomes, often leading to treatment failure and disease relapse [2]. Statistical data indicate that drug resistance is a major contributor to over 90% of cancer patient mortality [3].

Cell Culture

The NSCLC NCI-H460 cell line was purchased from the American Type Culture Collection (Rockville, MD, USA). P-gp-overexpressing MDR NCI-H460/R cells were selected from NCI-H460 cells by continuous exposure to stepwise increasing concentrations of doxorubicin (DOX) [21]. NCI-H460/R cells and their sensitive counterparts were maintained in RPMI 1640 medium supplemented with 10% FBS, 2 mM L-glutamine, and 10,000 U/mL penicillin, 10 mg/mL streptomycin, and 25 mg/mL amphotericin B solutions. Normal human embryonic pulmonary fibroblasts (MRC-5) were cultured in DMEM supplemented with 10% FBS, 4 g/L glucose, 2 mM L-glutamine, and a 5000 U/mL penicillin, 5 mg/mL streptomycin solution. All cell lines were subcultured at 72 h intervals using 0.25% trypsin/EDTA and seeded into fresh medium at 16,000 cells/cm². All cell lines were maintained at 37 °C in a humidified 5% CO2 atmosphere.

MTT Assay

Cell viability was assessed by the MTT assay (AppliChem GmbH, Darmstadt, Germany). NCI-H460, NCI-H460/R, and MRC-5 cells grown in 25 cm² tissue flasks were trypsinized, and 2000 cells/well were seeded into flat-bottomed 96-well tissue culture plates, followed by overnight incubation. Subsequently, the cells were treated with increasing concentrations of Coleon U (2, 5, 10, 20, and 50 µM) for 72 h. At the end of the incubation period, MTT was added to each well at a final concentration of 0.2 mg/mL for 4 h.
The MTT-containing medium was then removed, the formazan product was dissolved in 200 µL of dimethyl sulfoxide, and the absorbance was measured at 570 nm using an automatic Multiskan Sky reader (Thermo Scientific, Waltham, MA, USA).

CFSE Proliferation Assay

The CFSE proliferation assay is based on the ability of the carboxyfluorescein succinimidyl ester (CFSE) dye to covalently bind free amines of intracellular molecules via its succinimidyl group. CFSE is cell permeable and has no significant effect on proliferative capacity. It enables the monitoring of cells over a period of up to 15 divisions [22]. The fluorescence intensity of CFSE, which gradually declines during cell divisions, enables the assessment of cell proliferation rates in treated versus untreated cells. For CFSE labeling, NCI-H460 and NCI-H460/R cells were incubated for 15 min in a 0.1% FBS/PBS solution containing 1 µM CFSE at 37 °C, after which the cells were washed three times with PBS. Next, the CFSE-labeled cells were seeded in 6-well plates, allowed to settle overnight, and then treated with Coleon U (7.5 µM, 15 µM, and 30 µM). PTX (100 nM in NCI-H460 and 2 µM in NCI-H460/R) was used as a positive control. After 48 h and 72 h, the cells were trypsinized, washed in ice-cold PBS, and finally resuspended in 500 µL PBS. The fluorescence intensities of at least 20,000 cells per sample were detected on a CytoFLEX flow cytometer (Beckman Coulter, Indianapolis, IN, USA) using fluorescence channel 1 (FL1) at 525 nm. The results were analyzed in Summit v4.3 software (Dako Colorado Inc., Fort Collins, CO, USA) using subtraction analysis.

Cell Death Analysis by Flow Cytometry

Induction of cell death after Coleon U treatment was assessed with the Abcam Apoptosis Detection Kit through dual staining with Annexin V-FITC/propidium iodide (AV/PI), according to the manufacturer's instructions. NCI-H460 and NCI-H460/R cells were seeded in adherent 6-well plates at a density of 50,000 cells/well and incubated overnight. Both cell lines were treated with Coleon U (15 and 30 µM); treatment with 500 nM PTX served as a positive control.

Rho123 Accumulation Assay

The accumulation of Rho123 was analyzed by flow cytometry, utilizing the ability of Rho123, a P-gp substrate, to emit fluorescence; the intensity of the fluorescence is proportional to Rho123 accumulation [23]. NCI-H460/R cells were seeded in 6-well plates and grown overnight. Cells were treated with Coleon U (5 µM and 10 µM), verapamil (5 µM), or Na3VO4 (1 µM) as single treatments, or with combined treatments of Coleon U (5 µM and 10 µM) or verapamil (5 µM) with Na3VO4 (1 µM). The cells were incubated for 30 min and for 72 h. At the end of the incubation periods, Rho123 (2.5 µM) was added and the cells were further incubated for 30 min at 37 °C in a 5% CO2 atmosphere. After the accumulation period, the cells were pelleted by centrifugation, washed with cold PBS, and kept on ice in the dark until analysis. The samples were analyzed on a CytoFLEX flow cytometer (Beckman Coulter, Indianapolis, IN, USA). The green fluorescence of Rho123 was assessed on fluorescence channel 1 (FL1) at 525 nm. A minimum of 10,000 events were assayed for each sample. The obtained mean fluorescence intensities were analyzed in Summit v4.3 software (Dako Colorado Inc., Fort Collins, CO, USA).

Mitochondrial Membrane Potential Analysis

To analyze the effect of Coleon U on mitochondrial membrane potential, an early marker of apoptosis, the TMRE dye was used [24]. TMRE is a cell-membrane-permeable fluorescent dye that accumulates in intact mitochondria.
Depolarized or inactive mitochondria exhibit decreased membrane potential, resulting in reduced TMRE accumulation. NCI-H460/R cells were seeded in 6-well plates at a density of 50,000 cells/well. After 24 h, cells were treated with Coleon U (5 µM and 10 µM), verapamil (5 µM), or Na3VO4 (1 µM). Cells were incubated for an additional period of 72 h. CCCP was used as a positive control owing to its notable depolarizing effect, caused by uncoupling of the proton gradient across the inner mitochondrial membrane [25]. The cells were treated with 10 µM CCCP 30 min prior to TMRE staining. For TMRE staining, the cells were trypsinized, resuspended in RPMI medium containing 500 nM TMRE, and incubated for 30 min at 37 °C in the dark. After washing twice in PBS, the red fluorescence emission of TMRE was immediately detected in the FL2 channel on a CytoFLEX flow cytometer (Beckman Coulter, Indianapolis, IN, USA). A minimum of 20,000 events were assayed per sample. The obtained data were analyzed in Summit v4.3 software (Dako Colorado Inc., Fort Collins, CO, USA) using subtraction analysis.

Immunocytochemistry

Immunocytochemistry was used for quantification of P-gp expression. For immunostaining, NCI-H460/R cells were seeded into 8-well chamber slides (Nunc, Nalgene, Roskilde, Denmark) at a density of 25,000 cells/chamber. Cells were allowed to attach to the surface overnight before treatment with 30 µM Coleon U. In addition, NCI-H460/R cells were treated with PTX (500 nM), which served as a positive control for the induction of P-gp expression. After 72 h, cells were fixed with 4% paraformaldehyde for 20 min at room temperature (RT) and washed three times with PBS. Cells were then blocked with 2% bovine serum albumin (BSA) in PBS for 1 h at RT. The anti-P-gp mouse monoclonal antibody was diluted 1:1000 in 2% BSA in PBS and incubated with the cells at 4 °C overnight. The cells were washed three times with PBS before the addition of Alexa Fluor 555 goat anti-mouse secondary antibody diluted 1:1000 in 2% BSA in PBS. The secondary antibody was incubated with the cells at RT for 2 h in the dark. To label the nuclei, the cells were incubated for 2 h in the dark with Hoechst 33342 at a final concentration of 1 µg/mL at RT. Finally, cells were washed three times with PBS, mounted in Mowiol, and stored at 4 °C in the dark before imaging. Fluorescently labeled cells were imaged using the ImageXpress Pico Automated Cell Imaging System (Molecular Devices, San Jose, CA, USA). Analysis of the obtained images was performed using CellReporterXpress 2.9 software (Molecular Devices) with the Cell Scoring Analysis Protocol.

SwissADME Online Tool

WLOGP (a computed octanol-water partition coefficient, i.e., a measure of the lipophilicity of a molecule) and TPSA (topological polar surface area, a measure of the polar surface area of a molecule) values for Coleon U and verapamil were generated using the SwissADME website [26,27]. The generated "boiled-egg" plot illustrates the position of the molecules in the WLOGP-versus-TPSA plane and enables evaluation of the passive gastrointestinal absorption and brain penetration of small molecules such as Coleon U and verapamil. The white region indicates a high probability of passive absorption by the human intestine (HIA), and the yellow region (the yolk) indicates a high probability of blood-brain barrier (BBB) penetration. The yolk and white areas are not mutually exclusive.
Points colored blue denote molecules predicted to be actively effluxed by P-gp (PGP+), and points colored red denote molecules predicted to be P-gp non-substrates (PGP−).

Statistical Analyses

For the results obtained by the MTT assay, a nonparametric Kruskal-Wallis multiple comparisons test was applied, while IC50 values were calculated by nonlinear regression analysis using GraphPad Prism 8.0.2 for Windows (San Diego, CA, USA). The results obtained by the CFSE, Rho123 accumulation, mitochondrial membrane potential, and immunocytochemistry assays were analyzed in GraphPad Prism 8.0.2 using two-way ANOVA with Sidak's multiple comparisons test. For the cell death analysis, the results were analyzed in GraphPad Prism 8.0.2 using two-way ANOVA with Dunnett's multiple comparisons test.

Sensitivity of NSCLC Cells to Coleon U (Effects on Cell Viability, Proliferation, and Cell Death Induction)

The effects of Coleon U on cell viability were evaluated by the MTT assay, which detects viable mitochondria (Figure 1); the viability read-out of the MTT assay therefore relies on a preserved mitochondrial electron transport chain [28]. The results obtained after 0 h and 72 h treatments in the NSCLC NCI-H460 and NCI-H460/R cell lines and in normal human embryonic pulmonary fibroblasts (MRC-5) show that the greatest reduction in cancer cell viability in the NCI-H460 and NCI-H460/R cell lines was observed with three concentrations of Coleon U (10, 20, and 50 µM), while the greatest reduction in cell viability in normal MRC-5 cells was achieved with the two highest concentrations of Coleon U (20 and 50 µM), as shown in Figure 1A. Concentrations at or below the threshold (half-maximum absorbance, Figure 1A) were considered cytotoxic for each tested cell line. Importantly, Coleon U exerted a similar inhibitory effect on cell viability in the sensitive and MDR NSCLC cell lines, with similar IC50 values (Figure 1B,C). This indicates that the MDR phenotype of NCI-H460/R cells did not reduce the effectiveness of Coleon U, which should be considered a favorable feature of a potential anticancer compound. Additionally, the IC50 value determined for the normal MRC-5 cells was threefold higher than the IC50 values obtained in the cancer cell lines. Therefore, Coleon U has a greater inhibitory effect on the cancer cells than on the normal cells, implying selectivity towards cancer cells.
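As a sketch of the nonlinear-regression IC50 estimation described above (performed in the study with GraphPad Prism), the following Python snippet fits a four-parameter logistic curve to mock viability data; the concentrations and percentages are illustrative, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(c, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (c / ic50) ** hill)

conc = np.array([2.0, 5.0, 10.0, 20.0, 50.0])          # µM Coleon U (mock)
viability = np.array([98.0, 85.0, 62.0, 30.0, 8.0])    # % of control (mock)

# Initial guesses: 0% floor, 100% ceiling, ~15 µM midpoint, unit Hill slope.
popt, _ = curve_fit(four_pl, conc, viability,
                    p0=[0.0, 100.0, 15.0, 1.0], maxfev=10000)
print(f"estimated IC50 ≈ {popt[2]:.1f} µM")
```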
Other authors identified Coleon U as a selective activator of protein kinase C-δ [20], whose activity is related to antiproliferative effects [29]. Therefore, we assessed the antiproliferative effects of Coleon U after 48 h and 72 h by CFSE staining in NCI-H460 and NCI-H460/R cells (the live cell population was gated during acquisition on the flow cytometer). Cells were treated with three concentrations of Coleon U identified according to the MTT assay: a noncytotoxic concentration of 7.5 µM, and two cytotoxic concentrations of 15 µM (approximately IC50) and 30 µM (2 × IC50). The results showed that treatment with 7.5 µM Coleon U led to a significant increase in CFSE fluorescence in both cell lines after 72 h (Figure 2A,B). The other treatments (15 µM and 30 µM) significantly suppressed the proliferation of the non-small-cell lung carcinoma cells after both 48 h and 72 h (Figure 2A,B). The results also showed a significant decrease in cell proliferation in both cell lines after treatment with PTX (100 nM in NCI-H460 and 2 µM in NCI-H460/R). Thus, Coleon U at 7.5 µM and 15 µM showed a time-dependent antiproliferative effect, while the effect of 30 µM was similar after 48 h and 72 h. Likewise, the PTX effect was not time-dependent (Figure 2A,B).

To examine whether the induction of cell death contributes to the sensitivity of NSCLC cells to Coleon U, 15 µM and 30 µM treatments in NCI-H460 and NCI-H460/R cells were assessed by Annexin-V-FITC/propidium iodide (AV/PI) staining after 72 h (Figure 3). The results showed that treatment with 30 µM Coleon U significantly increased the proportions of necrotic, late apoptotic, and early apoptotic NCI-H460 and NCI-H460/R cells compared to controls (Figure 3A,B).
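As background for how these four populations are derived from the raw flow data, the sketch below shows the standard quadrant logic applied to each event. The thresholds and event values are hypothetical; in practice, gates are set from unstained and single-stained controls.

```python
# Minimal quadrant classification for Annexin-V-FITC / PI flow data.
# Thresholds are hypothetical; real gates come from unstained and
# single-stained control samples.

ANNEXIN_THRESHOLD = 1_000.0  # FITC fluorescence (a.u.)
PI_THRESHOLD = 800.0         # PI fluorescence (a.u.)

def classify_event(annexin, pi):
    """Assign one flow-cytometry event to a quadrant population."""
    if annexin < ANNEXIN_THRESHOLD and pi < PI_THRESHOLD:
        return "viable"
    if annexin >= ANNEXIN_THRESHOLD and pi < PI_THRESHOLD:
        return "early apoptotic"   # PS exposed, membrane still intact
    if annexin >= ANNEXIN_THRESHOLD and pi >= PI_THRESHOLD:
        return "late apoptotic"    # PS exposed and membrane permeable
    return "necrotic"              # PI-positive only

# Hypothetical events: (Annexin-V-FITC, PI) intensities.
events = [(200, 100), (5_000, 300), (8_000, 2_000), (300, 1_500)]
counts = {}
for a, p in events:
    label = classify_event(a, p)
    counts[label] = counts.get(label, 0) + 1
print(counts)  # e.g. {'viable': 1, 'early apoptotic': 1, ...}
```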
We reconfirmed that the potential of Coleon U to induce cell death was not compromised by the MDR phenotype of NCI-H460/R. However, the effect of 15 µM Coleon U was less pronounced, indicating that this concentration is less cytotoxic than the MTT assay had suggested. Therefore, the MTT assay can be considered the most sensitive method for the evaluation of Coleon U effects. It should be noted that the percentage of dying cells in the MDR NCI-H460/R cell line was significantly higher after 30 µM Coleon U treatment than after 500 nM PTX, and that the resistance to PTX was clearly mirrored in its different efficacy in NCI-H460 and NCI-H460/R cells (Figure 3A,B).

Coleon U Mechanisms Involved in P-gp Activity Modulation

P-gp, found in MDR cancer cells, reduces the concentration of certain drugs inside the cells through its transporter activity, which is mediated by ATP hydrolysis. Therefore, P-gp overexpression and activity significantly enhance the cellular ATP requirements [30]. In our previous study, Coleon U exerted a delayed effect on P-gp activity, decreasing it after 72 h and reversing doxorubicin resistance in the subsequent treatment [17]. To investigate how Coleon U affects P-gp activity, we conducted experiments using well-known inhibitors of this pump, sodium orthovanadate (Na3VO4) and verapamil. Sodium orthovanadate directly inhibits plasma membrane ATPases, including P-gp, and has shown anticancer activity against several types of cancer [31,32]. On the other hand, verapamil, a first-generation P-gp inhibitor, is a P-gp substrate that stimulates the ATPase activity of P-gp [33].
Verapamil promotes intracellular accumulation of the drug when applied with chemotherapeutic agents, as shown in various cancer cell lines: NSCLC, colorectal cancer, leukemia, and neuroblastoma [34–36]. To evaluate the ability of Coleon U to inhibit the P-gp transporter, a Rho123 accumulation assay was performed. Resistant NSCLC cells (NCI-H460/R) expressing P-gp were treated with Coleon U (5 µM and 10 µM) and with verapamil (5 µM), in single treatments or in combination with sodium orthovanadate (1 µM). The treatments were conducted at two different time points: 30 min, for the evaluation of the direct interaction with P-gp, and 72 h, for the evaluation of indirect effects on P-gp activity, such as effects on P-gp expression [37], on related proteins involved in the regulation of P-gp expression [38], on the intracellular pH [39], or on the availability of ATP as the fuel molecule for P-gp functioning [36]. Verapamil at 5 µM significantly inhibited the efflux of Rho123 in NCI-H460/R cells after 30 min, while Coleon U at both concentrations of 5 µM and 10 µM and sodium orthovanadate at 1 µM were not effective, as shown in Figure 4A. Sodium orthovanadate significantly decreased the level of Rho123 in combination with verapamil and with Coleon U (10 µM) (Figure 4A). After 72 h, sodium orthovanadate (1 µM), verapamil (5 µM), and Coleon U (5 µM and 10 µM) all increased the accumulation of Rho123 in NCI-H460/R cells. This suggests that verapamil directly interacts with P-gp as a P-gp substrate and sustains this effect over 72 h as an ATPase stimulator, while the effects of sodium orthovanadate and Coleon U on intracellular Rho123 accumulation become evident only later, after 72 h. When NCI-H460/R cells were treated with combinations of verapamil or Coleon U with sodium orthovanadate, a decrease in the fluorescent Rho123 signal was observed after 72 h (Figure 4A). This suggests an antagonistic effect between verapamil (an ATPase stimulator) and sodium orthovanadate (an ATPase inhibitor), as well as between Coleon U and sodium orthovanadate. Therefore, we assumed that Coleon U interferes with ATP metabolism, probably with ATP production in mitochondria. This is also supported by the fact that the MTT assay, which detects viable mitochondria, was more sensitive to Coleon U effects than the AV/PI assay, which discriminates viable from dead cells.
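For context, intracellular Rho123 accumulation in such assays is commonly summarized as the median fluorescence intensity of each treated sample relative to the untreated control. A minimal sketch of that calculation, with invented intensity values:

```python
# Summarizing a Rho123 accumulation assay: median fluorescence intensity
# (MFI) of each treatment normalized to the untreated control.
# All intensity values below are invented for illustration.
import statistics

samples = {
    "control":        [310, 295, 330, 305, 320],
    "verapamil 5 uM": [980, 1010, 950, 1005, 990],
    "Coleon U 10 uM": [720, 690, 740, 705, 715],
    "Na3VO4 1 uM":    [650, 630, 665, 640, 655],
}

control_mfi = statistics.median(samples["control"])
for name, intensities in samples.items():
    fold = statistics.median(intensities) / control_mfi
    # Fold > 1 means more Rho123 retained, i.e., less P-gp-mediated efflux.
    print(f"{name}: {fold:.2f}-fold vs. control")
```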
Figure 4 (caption, partial). The statistically significant difference between the treated and control groups is shown as *** (p < 0.001); the statistically significant difference between a single treatment and the corresponding combined treatment is shown as ### (p < 0.001). (B) Coleon U triggers loss of mitochondrial membrane potential in MDR NCI-H460/R cells. Mitochondrial membrane potential in NCI-H460/R cells was evaluated after 72 h treatments with Coleon U (5 and 10 µM), verapamil (5 µM), and sodium orthovanadate (Na3VO4, 1 µM) using TMRE staining. CCCP was used as a positive control. Histograms in the left panel represent the average fluorescence intensity of TMRE from three independent experiments. The statistically significant difference between the treated and control groups is shown as *** (p < 0.001). Representative flow cytometric profiles for each condition are shown in the right panel.

The inner mitochondrial membrane is essential for the generation of ATP via the mitochondrial respiratory chain. When the mitochondrial membrane potential is depolarized, the respiratory chain is blocked, leading to reduced efficiency of ATP production [30]. Decreased intracellular ATP levels can also affect the functioning of P-gp as an ATP-dependent membrane transporter [30]. Thus, Coleon U was examined for its effect on the disruption of mitochondrial potential in MDR NCI-H460/R cells. TMRE labeling was used to assess changes in mitochondrial membrane potential. The results showed that 72 h after treatment with Coleon U (5 µM and 10 µM), TMRE fluorescence intensities were significantly decreased compared with the control (Figure 4B). This suggests that Coleon U induced notable depolarization of the mitochondrial membrane in NCI-H460/R cells. Treatments with sodium orthovanadate (1 µM) and verapamil (5 µM) also significantly lowered the TMRE signal in NCI-H460/R cells. These results are consistent with previous findings in the literature that sodium orthovanadate decreases mitochondrial membrane potential and ATPase activity in sorafenib-resistant hepatocellular carcinoma cells, ultimately promoting apoptosis [32]. Our results suggest that Coleon U induces depolarization of the mitochondrial membrane potential in MDR NCI-H460/R cancer cells, which can decrease ATP production and thereby indirectly affect P-gp function. Furthermore, the disruption of mitochondrial function caused by Coleon U can potentially contribute to the activation of cell death pathways. Although the results obtained by the MTT, CFSE, and AV/PI assays showed no significant difference in Coleon U efficacy between NCI-H460 and NCI-H460/R cells, which means that Coleon U efficacy is not affected by MDR and P-gp activity, we performed a SwissADME analysis to ensure that Coleon U is not a P-gp substrate. The SwissADME results, illustrated as a "boiled egg" [27], showed that Coleon U can be easily absorbed by the human intestine and confirmed that Coleon U is not a P-gp substrate, while verapamil is (Figure 5A).
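One simple way to express the TMRE shift reported above is the fraction of "TMRE-low" (depolarized) events, gated against the CCCP-treated positive control. The sketch below shows that calculation with invented intensities; the percentile used for the cut-off is an arbitrary illustrative choice.

```python
# Quantifying mitochondrial depolarization from TMRE flow data: the
# fraction of "TMRE-low" events, with the low/high cut-off placed just
# above the bulk of the fully depolarized CCCP control.
# All intensity values are invented for illustration.

cccp_control = [90, 110, 95, 105, 120, 100, 85, 115]      # depolarized
untreated    = [900, 950, 870, 1020, 980, 940, 910, 990]  # polarized
coleon_u_10  = [850, 140, 930, 120, 160, 880, 130, 150]   # mixed population

# Threshold: e.g., the ~95th percentile of the CCCP-treated control.
threshold = sorted(cccp_control)[int(0.95 * len(cccp_control)) - 1]

def pct_tmre_low(events, cut):
    """Percentage of events with TMRE fluorescence below the cut-off."""
    return 100.0 * sum(1 for e in events if e < cut) / len(events)

for name, events in [("untreated", untreated), ("Coleon U 10 uM", coleon_u_10)]:
    print(f"{name}: {pct_tmre_low(events, threshold):.0f}% TMRE-low")
```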
To examine the above mechanisms, which may lead to a decrease in P-gp expression and contribute to the observed delayed effect of Coleon U on P-gp activity, we studied P-gp expression in NCI-H460/R cells after 72 h of treatment with Coleon U (5 and 10 µM), as shown in Figure 5B,C. Analysis of P-gp expression by flow cytometry revealed that Coleon U tended to decrease P-gp expression in NCI-H460/R cells. Specifically, treatment with 5 µM resulted in an approximately 9% decrease in P-gp expression, while treatment with 10 µM led to an approximately 12% decrease (Figure 5B). We also examined whether the cytotoxic concentration of Coleon U (30 µM) affects P-gp expression in NCI-H460/R cells (Figure 5C). To that end, we used fluorescent labeling of P-gp, with analysis performed on the ImageXpress® Pico Automated Cell Imaging System. The results obtained with the CellReporterXpress software revealed that 30 µM Coleon U significantly reduced the portion of NCI-H460/R cells expressing P-gp without changing the level of expression per cell (Figure 5C). This suggests that Coleon U exerts its cytotoxic effect on P-gp-positive cells and does not induce P-gp expression. Therefore, Coleon U has valuable characteristics for the treatment of MDR cancers. In contrast, treatment of NCI-H460/R cells with PTX resulted in a significant increase in P-gp fluorescence intensity per cell, while the portion of P-gp-positive cells was not changed (Figure 5C). This suggests a continuation of P-gp upregulation upon PTX treatment, while Coleon U may hold promise in overcoming MDR by modulating both P-gp activity and expression.

Figure 5. The effects of Coleon U on P-gp expression. (A) The "boiled-egg" plot (Brain Or IntestinaL EstimateD permeation predictive model; a plot of WLOGP against TPSA) of Coleon U and verapamil from the SwissADME online tool. The points in yellow (boiled-egg's yolk) represent molecules predicted to passively permeate through the blood-brain barrier (BBB), while the points in white (boiled-egg's white) represent molecules predicted to be passively absorbed by the gastrointestinal tract (HIA). Blue dots represent molecules predicted to be P-gp substrates, while red dots are molecules predicted not to be substrates for P-gp. (B) Flow cytometric profiles of P-gp expression in NCI-H460/R cells treated with Coleon U for 72 h. (C) The histograms in the left panel show P-gp fluorescence intensity per cell and the percentage of P-gp-positive NCI-H460/R cells treated with Coleon U and PTX for 72 h. Values are presented as mean ± SEM (n = 3). The statistically significant difference between the treated and control groups is shown as * (p < 0.05) and *** (p < 0.001). Representative immunofluorescence micrographs of anti-P-gp-labeled NCI-H460/R cells are shown in the right panel. Nuclei were counterstained with Hoechst 33342. Scale bar = 200 µm.
Conclusions

Coleon U is a natural compound with significant anticancer potential whose activity against MDR cancers has not been thoroughly investigated. Although the activation of protein kinase C-δ (which promotes antiproliferative and proapoptotic effects) was identified as the mechanism of Coleon U's anticancer action [20], the literature data on Coleon U as an
The Regulation of International NGOs: Assessing the Effectiveness of the INGO Accountability Charter

The INGO Accountability Charter is the only global, cross-sectoral regulatory initiative for international NGOs. This is the first independent study of perceptions of its effectiveness, based upon 26 in-depth semi-structured interviews with key individuals from 11 leading international NGOs. Firstly, it analyzes interviewees' beliefs about the motivations of NGOs in joining the Charter. The findings contribute to the scholarly debate about the key drivers for voluntary regulation between 'club theorists' and 'constructivists' by demonstrating that NGO behavior in this regard is both self-interested and norm-guided. Secondly, it investigates the extent to which the interviewees believe that the Charter has been effective in enhancing the accountability of its members. Their responses further underline the applicability of club theory and constructivist explanations of NGO behavior, and lead to several policy recommendations about the future direction of the Charter.

responsibilities regarding the Charter within their organizations. The latter group of participants was drawn from eleven leading NGOs from the humanitarian, development and advocacy sectors.

Accountability for NGOs has been variously defined by scholars, NGOs and standard-setting bodies over the years. Debates revolve around to whom and for what NGOs should be held accountable (for an overview, see Crack 2013). The Charter defines accountability as follows:

• Being transparent on what the organization is, what it commits to doing and progress achieved;
• Engaging key stakeholders in meaningful dialogue to enable continuous improvement for those we serve;
• Using power responsibly and enabling stakeholders to hold us to account effectively (INGO Accountability Charter 2015a).

The Charter stipulates a number of principles, guidelines and policies that member organizations should observe in order to be deemed 'accountable,' and members have to report annually against these commitments and publish the results online. Many peer regulation initiatives are simply codes of conduct that require little from their members other than a self-proclaimed commitment to the standards. However, those with 'global membership are less likely to have formal complaints mechanisms and to punish rule violators than their regional and single-country counterparts' (Tremblay-Boire et al. 2016: 713). The Charter is notable for having a complaints mechanism, an independent vetting procedure, and a sanctions clause that enables it to expel members that are non-compliant. It is therefore of interest not only because of its cross-sectoral positioning and global membership, but also because of its complaints and enforcement procedures.

The first question that this study addresses is: What do the interviewees believe motivated NGOs to join the Charter? The key drivers behind peer regulation have repeatedly been the subject of academic debate. The interviewees' responses provide the opportunity of exploring a puzzle in the literature: Why do NGOs participate in regulatory initiatives? There are two main explanations: club theory (which, put simply, argues that they do so for self-interested reasons, primarily to send a reputational signal to stakeholders) and constructivist theory (which, put equally simply, argues that NGOs are strongly influenced by shared beliefs about accountability norms).
The interview data suggest that both theoretical explanations have some traction: Organizations maintain membership of the Charter to satisfy a mixture of 'self-interested' and 'norm-guided' motivations. The second question that this study addresses is: To what extent do the interviewees believe that the Charter has been effective in enhancing the accountability of its member organizations? The interviewees were asked to evaluate the effectiveness of the Charter, based upon their experiences of engaging with the reporting process and their understandings about changes in the behavior and performance of member organizations. The study finds that the interviewees believe that the Charter provides NGOs with a defense against criticisms of poor accountability from hostile parties, and also helps members to improve performance through feedback and peer learning. However, they felt that the effectiveness of the Charter was limited due to several factors, including poor awareness of the Charter among key stakeholders, variable levels of engagement inside the member organizations, and misaligned understandings of accountability between advocacy/campaigning NGOs and humanitarian/development NGOs. I argue that their critical appraisals of the Charter attest to the influence of both self-interested and norm-based considerations.

The discussion proceeds as follows. The first section provides an overview of the rise of NGO peer regulation and outlines the main contentions of club theory and constructivist theory. The second section provides background information on the Charter, and the third section explains the methodology of the study. The fourth section explores the interviewees' opinions on the motivations for NGOs joining the Charter. The fifth and sixth sections turn to consider the perceived efficacy of the Charter, which is discussed in terms of the benefits and challenges of Charter membership. The concluding section offers some policy recommendations and suggestions for future research.

Club Theory, Constructivism and Measures of Efficacy

It has become evident that there is a pressing need for NGOs to raise their standards of accountability and to address perceptions that they are unaccountable (Schmitz et al. 2012; Thrandardottir 2015). Accountability issues are now high on the NGO policy agenda, particularly given that major donors attach more importance than ever before to transparency and evidence of 'value for money.' It is against this background that NGOs have cooperated to establish several peer regulation initiatives in recent decades. The topic of NGO regulation has attracted some interest from scholars, both within a domestic (Bies 2010; Bloodgood et al. 2014; Gugerty 2008) and an international context (Brown 2008). The literature on the efficacy of peer regulation is relatively scant, not least because of the challenges of finding a common measure of effectiveness (Crack 2016; Featherstone 2013). The scholarship therefore focuses on the factors underpinning the emergence and design of regulation mechanisms. There are two main explanatory approaches in this regard: club theory and constructivist theory. Club theory builds upon principal-agent theory, a political economy approach to understanding the problems that are posed when a principal contracts an agent to carry out certain tasks in conditions where both parties may have competing interests and asymmetrical information.
The principal will have less information than the agent and so will be uncertain about whether the agent is serving the principal's best interests, particularly when it is difficult for a principal to monitor the agent's actions and/or an agent will find it profitable to exploit the principal. There are all manner of ways in which principals and agents might try to ameliorate these problems-regulatory 'clubs' are but one. Clubs serve to provide a reputational signal to principals. Agents may join regulatory clubs to improve their performance. A strong signal is likely to be sent to principals if standards are stringent and compliance is monitored. Membership can signify high levels of accountability and performance if the club is widely regarded as credible. The voluminous literature on clubs mainly focuses on the private sector (Cornes and Sandler 1996; Sandler and Tschirhart 1997), but the perspective has also been used by nonprofit scholars (Gugerty and Prakash 2010; Potoski and Prakash 2009; Prakash and Gugerty 2010). NGOs have accountability relationships with multiple principals (donors, intended beneficiaries, supporters, etc.), and clubs offer the potential to help NGOs to build trust with these different stakeholders. Effective clubs prompt changes in NGO behavior and open possibilities of receiving certain 'rewards.' NGOs may hope that clubs could encourage donors to increase their funding. '[P]roactive voluntary regulation might dampen the demand for new laws that restrict their activities in even less desirable ways' (Gugerty and Prakash 2010: 11). According to this perspective, effective regulatory initiatives are ones that (a) have high levels of compliance; (b) send a credible and widely recognized signal to principals in order to build trust; (c) could lead to increased funding; and (d) could help to preempt the threat of government interference and/or regulation (see Table 1).

Distinct from this is the constructivist approach, which considers the influence of shared ideas and values as key to understanding what shapes forms of regulation. Constructivists acknowledge that self-interest may partly account for an NGO's decision to join a regulatory mechanism, but they are also interested in how NGOs are incentivized by a concern for shared norms and a desire to engage in social learning and to share best practice. Deloffre, for example, accounts for the design of the regulatory initiatives that were established after the Rwandan genocide as being shaped by debates among NGOs and key stakeholders 'that created a feeling of mutual engagement and commitment to defining collective accountability practice' (2016: 22). According to this constructivist perspective, an effective regulatory initiative would be one that shapes understandings about 'rightful conduct' for responsible NGOs among practitioners and stakeholders. Such understandings may correlate with donor expectations, but are not necessarily determined by the preferences of donors (Pallas et al. 2014). For constructivists, 'effective' standards would help to produce an institutional environment that promotes social learning and norm-compliant behavior, by encouraging individuals to internalize and uphold the norms (see Table 1). Although club theory and constructivism have a different focus, they are not mutually exclusive. The interpretations that each approach generates can be compatible, and together they provide a nuanced and multidimensional account of actor behavior.
As will be seen, the interviewees' opinions about the effectiveness of the Charter revealed evidence of both self-interested and norm-guided behavior among member organizations. The next section provides some background on the Charter, before the discussion proceeds to data analysis.

INGO Accountability Charter: Structure and Objectives

The INGO Accountability Charter was established by a consortium of leading NGOs and launched in 2008. It is funded by annual membership fees from NGOs. Since 2010, it has been based upon the Global Reporting Initiative (GRI), which is the world's largest sustainability reporting framework. The GRI is used by corporations and other organizations on a voluntary basis to report on their performance. The Charter commissioned the GRI to produce a 'NGO Sector Supplement,' a modified version of the guidelines designed to 'enable NGOs to demonstrably meet the same standards of transparency…that are demanded by other sectors' (Global Reporting Initiative 2011: 6). The Charter consists of ten commitments that are intended to promote the goals of 'greater transparency, accountability and effectiveness' (INGO Accountability Charter 2015b). The commitments are summarized thematically in the Charter as follows: respect for human rights; independence; transparency; good governance; responsible advocacy; participation; diversity/inclusion; environmental responsibility; ethical fundraising; and professional management (INGO Accountability Charter 2014a: 2). The Charter text goes on to delineate each theme in terms of specific undertakings. For example, the commitment to 'good governance' requires NGOs to ensure, among other things, 'publication of a clearly defined and transparent mission, governance structure and decision-making process at the governance level' (ibid: 6). Member organizations must produce an annual report to demonstrate that policies and procedures are in place to promote adherence to the Charter. The report framework consists of 36 'profile disclosures' about the organization, and 20 'performance indicators' about program effectiveness, ethical fundraising and communication, and the management of issues concerning finance, the environment, human resources and impact on wider society (INGO Accountability Charter 2014b). Members have to account for any failure to report against all the criteria. The reports are submitted to an Independent Review Panel (IRP), which is composed of 'respected accountability experts' (ibid: preamble). The IRP assesses the strength of the evidence presented and indications of institutional commitment to accountability in the reporting exercise. The Panel provides targeted feedback, advising on how the member's reporting and/or performance should be improved. Organizations are also encouraged to complete a 'gap analysis' exercise to identify areas in need of improvement and to set self-imposed targets for change. The Panel scrutinizes progress against these targets in forthcoming annual appraisals. The documentation is made available on the Charter Web site. Member organizations can be expelled if they are found to be in contravention of the Charter commitments or if they fail to submit reports without sufficient explanation.
There are nineteen full members at the time of writing: ActionAid; Amnesty International; Article 19; BRAC; Care; CBM; Civicus; Educo; European Environmental Bureau; Greenpeace; Islamic Relief; Oxfam; Plan; Sightsavers; SOS Children's Villages International; Terre des Hommes; Transparency International; World Vision; and World YWCA.

Methodology

The findings are based upon semi-structured interviews with 26 participants, conducted during August-November 2014. Sixteen of these were participants from member NGOs who were centrally involved in the decision to join the Charter and/or were closely involved with producing reports for the Charter. They were speaking in a personal capacity rather than on behalf of their organization. All of the relevant NGOs were contacted with requests for interviews, and participants from 11 of the 18 'full member' organizations responded. Ten participants were involved with the administration of the Charter, including five current/former Board members, four current/former members of the Independent Review Panel (IRP) and a representative from the Charter Secretariat. Most respondents spoke on condition of anonymity. The data should be treated with a degree of caution, since the views of the participants may reflect their interest in appearing to uphold high standards of transparency and accountability. Nonetheless, the participants did express significant reservations about the efficacy of the Charter, as shall be seen. Data were manually sorted into a list of preset codes, derived from keywords used in club theory (e.g., 'reputation,' 'brand,' 'trust') and constructivism (e.g., 'norms,' 'learning,' 'sharing'). Emergent codes were identified when analyzing the data, enabling the capture of recurrent ideas and meanings. Two validation strategies were adopted to improve the rigor of the study (Creswell 2008). Firstly, a preliminary report of the prevailing themes was circulated to participants for feedback. The quotations used in this article have all been approved by the participants concerned. Secondly, claims made by the interviewees were corroborated with document analysis. This included reports from the member organizations, along with feedback from the IRP and responses to that feedback from the NGO where provided; the minutes from Charter AGMs 2011-2015; the Charter Annual Reports 2011-2014; as well as sundry materials relating to the membership criteria and reporting requirements. The following questions were posed to interviewees: (a) What motivates NGOs to join the Charter? (b) What are the perceived benefits of being a member of the Charter? (c) What are the perceived disadvantages of membership? (d) Bearing in mind the benefits and challenges of membership that you have just described, to what extent do you feel that the Charter is effective in enhancing the accountability of member organizations? The responses to these questions are detailed below.
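To make the coding step concrete, the sketch below shows one simple way such preset-code tallying could be automated over interview transcripts. The study describes manual sorting; the folder name and keyword matching here are illustrative only.

```python
# Illustrative tally of the preset theory-derived codes across interview
# transcripts. The study itself used manual coding; this sketch only
# demonstrates the logic, and the transcript folder is hypothetical.
from collections import Counter
from pathlib import Path

PRESET_CODES = {
    # 'norm' also matches 'norms' under simple substring counting.
    "club_theory": ["reputation", "brand", "trust"],
    "constructivism": ["norm", "learning", "sharing"],
}

def code_transcript(text):
    """Count occurrences of each preset code's keywords in one transcript."""
    text = text.lower()
    return Counter({
        code: sum(text.count(kw) for kw in keywords)
        for code, keywords in PRESET_CODES.items()
    })

totals = Counter()
for path in Path("transcripts").glob("*.txt"):  # hypothetical folder
    totals += code_transcript(path.read_text(encoding="utf-8"))
print(totals)  # e.g. Counter({'club_theory': 41, 'constructivism': 37})
```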
Motivations for Joining

All the participants agreed that the key incentive for joining the Charter is the legitimacy it promises to bestow upon member organizations, given its self-proclaimed status as the 'only global, cross-sectoral accountability framework for NGOs' (INGO Accountability Charter 2015a). The high profile of the largest member NGOs was acknowledged as a key factor underpinning the credibility of the Charter and the attractiveness of membership to smaller NGOs. In the words of one interviewee: 'I think it helps your organization to build its brand, its reputation, its acceptance by the public and by other constituencies including donors' (Int.13). It was seen as an additional advantage by some that the Charter is an initiative driven by NGOs, rather than by donors, thus enabling NGOs to shape the 'accountability agenda' in a way that reflects common values and priorities across the sectors. The interviewees were not specific about how the agenda might differ if driven by donors.

Thirteen out of sixteen NGO participants admitted that joining the Charter was partly a defensive move on behalf of their organizations to ward off actual and anticipated criticisms of poor accountability from donors, the media and political opponents. Joining the Charter was a way for NGOs to seize the initiative, because it was feared that attacks on their integrity could gain traction if there was not a concrete way to demonstrate their commitment to standards of excellence. To quote an interviewee from Amnesty: 'When we are questioned by government, for instance, with questions about legitimacy and our accountability-particularly if we're pushing for greater accountability by government-there have been times when we've been able to use our membership of the Accountability Charter to strengthen our position and show how we are accountable' (Int.1). Therefore, a large part of the Charter's appeal to member NGOs is the 'insurance' it provides against possible accusations of poor accountability. This is despite the fact that, even by the Charter's own admission, it has a low profile among those parties that have an interest in holding organizations accountable (INGO Accountability Charter 2014c: 9). For example, no major donors stipulate Charter membership as a precondition of funding. The interviewees generally acknowledged that the documentation on the Charter web site is rarely accessed by external stakeholders. A participant involved in the administration of the Charter argued that this did not detract from the value of the reporting exercise, because the requirements of membership compel organizations to engage with accountability issues and thus raise standards of performance: The general public, quite frankly, are never going to sit and read those reports… I would hope that the civil society department in DFID who are actually giving out the massive amounts of money and so on would actually look at them, but I've no idea whether they do or not. But I think the fact is that they're there, and that's what's important. And also the process that the NGOs have to go through in order to put them there and to get that information is important, because that in itself drives greater accountability and transparency (Int.5). None of the interviewees suggested that donors have a meaningful appreciation of the Charter; even the Charter's web site will only go so far as to claim that it 'has a good chance of reaching donor recognition due to its unique positioning' (INGO Accountability Charter 2015a). All of the interviewees involved in the administration of the Charter spoke of the importance of increasing donor awareness of the initiative in order to maintain its relevance to existing members and to enhance the attractiveness of membership to other organizations. These were sentiments that were echoed by six participants from humanitarian/development NGOs and five participants from advocacy/campaigning NGOs.
Club theorists contend that NGOs join regulatory initiatives to send a reputational signal to principals (Connelly et al. 2011; Gugerty 2009; Prakash and Potoski 2006). The Charter's emphasis on reporting and compliance suggests that signaling is important, especially given that the more rigorous GRI framework was incorporated into the standards four years after the Charter's launch. However, by this yardstick, the Charter seemingly has little efficacy if crucial stakeholders such as donors have poor awareness of its existence, which raises the question of why organizations have continued to pay their membership dues for years. Some of the responses above indicate that club theorists are correct to identify signaling as a key incentive, particularly to evade unwelcome government interference, but the data also suggest the presence of drivers other than self-interest. The following two sections turn to consider the interviewees' perceptions of the Charter's efficacy after joining, beginning with their assessments of the benefits of Charter membership.

Perceived Benefits of Membership and Reflections on the Charter's Efficacy

Thirteen out of the sixteen NGO participants identified peer learning opportunities as one of the most valuable aspects of Charter membership. The Charter provides formal occasions for knowledge exchange; for example, it runs Webinars and Peer Advice Groups on numerous accountability-related topics. Peer learning also happens informally, such as through networking outside of meetings. Indeed, two interviewees stated that they found the formal and informal peer learning that occurs at these events more useful in developing their thinking about accountability and performance than the actual exercise of compiling the Charter report. Six NGO respondents claimed that the high-quality feedback from the IRP was one of the most significant benefits of Charter membership, and asserted that it has led to substantive improvements in practice. It was possible to identify several concrete examples of the influence of the Charter on the policies of member organizations. Perhaps the most significant is the introduction of a Complaints Handling Mechanism, which has recently been made a prerequisite of Charter membership (INGO Accountability Charter 2015c). For example, one interviewee explained that an anonymous Web-based whistle-blower system had been implemented within two years following feedback from the IRP: 'We would have probably gone into the direction of reviewing our anti-corruption policies at certain stage, no question about it, but to actually boost and to really make that an urgent matter-that is thanks to the expert panel' (Int.9). Several non-environmental NGOs have also taken measures to reduce their carbon footprint in order to comply with the performance indicators on environmental responsibility. For example, Oxfam International was commended by the IRP for reducing its greenhouse gas emissions by 8.5% from 2010 to 2013 (INGO Accountability Charter 2015f: 96). Here, the value of the peer learning opportunities that the Charter provides was in evidence, since some respondents particularly singled out Greenpeace for praise in assisting other members to take issues of environmental impact seriously.
An interviewee from a humanitarian/development NGO acknowledged that working with Greenpeace helped the organization develop climate-sensitive policies, and claimed that they may not have embraced environmental reporting without the impetus of Charter membership: Basically nobody disagrees with this being the right thing, but in terms of priority there are always so many things to be done… if you're working as a NGO with donations and you always have to, you know, justify 36 other top priorities-it really helps if you also get this kind of external push to say, ok, compared with best practice this is where you are behind (Int.9). To summarize, participants felt that the channels for feedback helped to promote internal learning, and that Charter membership helps to maintain focus on obligations to improve aspects of performance that might otherwise be side-tracked. It was said that the impending deadline of the report helped to increase the urgency of changes in practice.

These reflections suggest that an NGO's decision to join a regulatory mechanism may be partly motivated by self-interest (as club theory would predict), since there was near consensus that NGOs were spurred to join the Charter to defend their operational freedom. Club theorists could also argue that peer learning opportunities are in the interest of NGOs, if they enable them to adopt better practices, increase compliance and send a stronger reputational signal. However, there was general agreement that the Charter has a low profile among key stakeholders such as donors. None of the interviewees suggested that membership had helped them to retain or increase their levels of funding. The interview data therefore present club theorists with a problem: Why should NGOs participate in the Charter if, according to club theory, its efficacy is limited? The constructivist approach offers an alternative way to interpret the data (see Table 1). There are indications of norm-guided behavior from the policy-making level to the level of the individual staff member. There is evidence that membership does provoke progressive reforms in policy and practice. It opens channels for more informal and participatory forms of learning about best practice, and for dialogue between counterparts in different sectors that may not otherwise exist. The value that the participants claim to attach to these interactions suggests that they have internalized accountability norms. It presents a picture that accords with constructivist predictions: NGOs join regulatory mechanisms because it is widely understood to be an inherently 'right' thing to do. Efficacy is partly measured by the extent to which the Charter has shaped the understandings of practitioners about 'rightful conduct,' and helped to foster an institutional environment that promotes accountability norms. This could partly explain why NGOs abide by the Charter, even though membership does not help them to transmit a widely recognized reputational signal. The participants discussed the disadvantages of Charter membership, and their responses further underlined the applicability of both theories to understanding the key drivers behind peer regulation.

Perceived Challenges of Membership and Reflections on the Charter's Efficacy

Club theorists would expect the participants to frame their criticisms of the Charter in terms of poor signaling to principals and membership costs. These themes were indeed evident in the data.
There was consensus among all participants that the effectiveness of the Charter is impaired by its low profile among donors and within the NGO community. Also, seven NGO participants complained about allocating resources to meet the commitments of Charter membership (including six from humanitarian/development organizations). The membership fee ranges from €1000 for NGOs with an annual income of less than €1 million to €25,000 for organizations with an income of more than €1 billion (INGO Accountability Charter 2015c). The financial commitment extends to staff time devoted to compiling the Charter reports. Participants expressed weariness with bureaucracy and concerns about the time spent on potentially duplicating information for different internal and external reporting frameworks. This was noted by one participant as particularly problematic for humanitarian/development organizations that also seek to comply with other regulatory initiatives. There are far more of these in the humanitarian/development sector than in the advocacy sector. Humanitarian/development organizations also have to contend with stringent reporting requirements from donors: monitoring and evaluating impact, articulating theories of change and completing log-frames have long been a core activity of their work. The perceptions of NGO participants about the onerous nature of the reporting requirements should be weighed against efforts by the Charter to reduce the workload entailed by membership by streamlining the reporting process. The recently revised Reporting Requirements state that reports should be a maximum of 40 pages long, and that the relevant information can be embedded in the organization's annual report (INGO Accountability Charter 2015d: 5). Further, once an organization has achieved 'a sufficient level of accountability, it only has to submit full reports every two years' and submit a 4-6-page report in the interim (ibid: 2). The recent simplification of reporting requirements was universally welcomed in the interviews. Nevertheless, for club theorists, the grumbles about the resource-intensive nature of the reporting process would be expected, since agents have an interest in minimizing the 'costs' of regulation (or at least to the extent that this does not compromise the credibility of the signal sent to principals).

Another prominent theme in the data was frustration with the low profile of the Charter. NGO participants did not just complain that donors were hardly aware of the Charter, but also that NGO staff were similarly under-informed. This is particularly the case for NGOs with a large 'family' structure with many national entities. The problem is exacerbated by high levels of staff turnover, which is commonplace for NGOs and results in persistent problems with knowledge management. Respondents observed that it can be a challenging task to coordinate data collection for the Charter from country offices, and even more so given such poor levels of awareness about the purpose of the exercise. Some expressed feelings of disenchantment because so much time was invested in producing the reports, and yet readership is very low, even within their own organizations. For one interviewee, the low rate of access seriously compromised the value of the reporting process: You know, if only four people have read this, does this even remotely mean accountability? Because there's a presumption that when you've written it, people are actually going to read it and take note of it.
You know, asking the questions might influence the way we do things internally, but you want people externally to be reading and asking the questions, otherwise you think, well, is this just a scheme for full employment? Are we all just writing reports that no one else reads? (Int.7)

This desire to have an internal/external audience reveals that the participant measures the efficacy of the Charter in terms of how well it performs a 'signaling' function, in line with the predictions of club theory. Furthermore, the Charter is currently working on a Global Standard to 'generate public trust and recognition,' which indicates a common desire to signal, even though these ambitions have not yet been realized (INGO Accountability Charter 2015e).

Constructivist themes were also evident in the data. Although constructivists do not deny the presence of self-interest, they focus attention on how actors evaluate efficacy in terms of the extent to which regulatory mechanisms foster an institutional environment that promotes norm-compliant behavior. Several interviewees seemed to employ constructivist measures of efficacy by expressing cynicism about the potential of the Charter to produce positive outcomes. Three participants from humanitarian/development NGOs voiced skepticism over whether any meaningful changes were implemented in their organization as a result of feedback, and suggested that the report could be regarded as a bureaucratic exercise rather than a real driver of change. Meaningful change, it was suggested, can only occur when commitment to accountability is 'embedded in the DNA of the organization somewhere'; the Charter cannot deliver such a shift because it is 'only a reporting tool… and that's all this I think is ever going to be' (Int.7). Other participants agreed about the importance of encouraging strong engagement with the accountability agenda across the organization, and here the commitment of senior leadership was seen as key. It was observed that the Charter cannot hope to have more than limited effectiveness if accountability is not a strategic priority. Jeremy Hobbs, former Chair of the Charter Board, confirmed that the Board was aware of this problem: 'So very often the CEO intellectually gets it, but is not committed emotionally if you like. Or they are committed, but the next layer of staff are not.' The problem of uneven levels of commitment also happens in reverse. The potential of the Charter to promote change could be neutralized if it is seen as a 'pet project' of the CEO and little valued by staff at lower levels of the organization, as exemplified by the following extract from an interview with a participant from a humanitarian/development NGO: 'At the moment the CEO says we do it, so we do it. But the trouble with that approach is you don't get a very consistent buy-in across the organization' (Int.7). These candid remarks about varying levels of commitment reveal that actors evaluate the efficacy of the Charter in terms of the extent to which it fosters an environment that promotes norm internalization and norm compliance, as constructivists would predict. Constructivists would also expect that key actors would evaluate the Charter in terms of how well the standards shape expectations of 'rightful conduct.' Normative measures of efficacy were evident when interviewees complained of a disconnect between NGOs from different sectors regarding conceptions of accountability.
Participants from the advocacy/campaigning sector felt that conversations tend to revolve around service delivery, to which their organizations cannot always meaningfully contribute or from which they cannot always learn. In the words of Clare Doube, a member of the Charter's Board of Directors and the Director of Strategy and Evaluation at Amnesty International: 'Therefore, in terms of the experience sharing, peer learning aspects, I feel we sometimes don't gain as much as some of the conversations aren't really relevant for us.' Such participants also felt that the discourse about accountability that takes place under the aegis of the Charter is primarily framed around the working model of humanitarian/development organizations. The complex interplay between self-interested and normative concerns is illustrated by the following quote from Janet Dalziell, the Director of Global Development at Greenpeace International and a member of the Charter's Board of Directors: 'I really struggle with it because these concerns are so driven by the model that relies on government funding-or other very large donors-and we at Greenpeace don't have any of that, and so it's just irrelevant for us. It drives the conversation into a very Northern-focused set of obsessions and worries and discussion that I find don't actually…help us….It has all the potential to really distract us from some more overarching considerations about what accountability is and should be.' The quote reveals unease about the tension between sending signals to key stakeholders and what the participant regards as 'appropriate' accountability practice. Moreover, it illustrates how constructivism can supplement club theory by providing additional dimensions to interpretations of actor behavior.

Some participants also observed that it is relatively easier for humanitarian/development organizations to identify stakeholders than it is for advocacy organizations. 'Stakeholders,' for humanitarian and development organizations, tend to constitute a more sharply defined group of people-the users of a newly constructed well, for example, or the borrowers in a micro-finance initiative. Advocacy/campaigning organizations have a more difficult time in identifying and justifying their key constituencies and in evaluating the impact of their activities on the lives of the people that they claim to represent. This gives rise to recurrent debates about what it means to 'do good,' which are particularly tricky when NGOs claim to work on behalf of constituencies who are 'voiceless' (e.g., animals, 'future generations'). Tensions arising from competing notions of accountability are to be expected to some degree in an initiative that attempts to articulate common standards across different sectors, which is, after all, the unique selling point of the Charter. An interviewee from a campaigning NGO reflected upon the problems involved in establishing a set of cross-sectoral standards suited to a wide diversity of organizations and suggested that this could impact upon Charter recruitment: I understand the need for standardization…but I would like to have seen probably a little more openness to flexibility rather than what could be interpreted as judgments based on a framework which works for probably development but not necessarily for all organizations.
And I think that probably could be the reason why some organizations may not want to join, because fear of being judged because they don't fit into the reporting requirements-but that doesn't mean to say they're less connected to accountability than anybody else (Int.22).

In sum, the interviewees evaluated the challenges of membership using both rationalist and normative measures of efficacy-thus underlining the applicability and complementarity of club theory and constructivism in understanding the drivers behind peer regulation. Participants offered reflections on a range of diverse topics, including the integrity of the membership's involvement with the Charter. Advocacy/campaigning organizations expressed theoretical and practical concerns about the compatibility of certain accountability standards with their work. These extracts revealed disquiet about the potential of signaling to sidetrack organizations from engaging in normative debates about accountability. However, evidence of self-interested behavior can be found in the complaints about poor signaling, the cost of membership and the 'burdensome' requirements of reporting. Participants measured the Charter's efficacy both in terms of the extent to which it enables organizations to 'do accountability well,' and in terms of the extent to which it serves their interest in portraying members as credible and trustworthy.

Conclusion

The findings of this study are salient for academics and practitioners. For the former, the interview data cast light on the club theory-constructivist debate about the key drivers behind NGO peer regulation. For the latter, the participants' views on the efficacy of the Charter suggest several policy recommendations. The first question that this study sought to address was: What do the interviewees believe motivated NGOs to join the Charter? The interviewees' interpretations of what constituted 'effectiveness' were informed by their understanding of the reasons why organizations submit to peer regulation. The literature offers rival explanations for the drivers behind NGO behavior, which are linked to distinct measures of efficacy. Club theory posits that members join a regulatory mechanism to acquire an exclusive benefit: a signal of 'virtue' that is communicated to important stakeholders. It predicts that informants would regard a regulatory initiative as 'effective' if (a) NGOs are compliant; (b) it is widely recognized as a signal of credibility and helps to build trust with principals; (c) it could boost funding; and (d) it could discourage governments from encroaching upon NGOs' operational freedom. The interview data suggest that club theory has some purchase, since there was evidence of self-interested behavior. There was general agreement that organizations joined the Charter to demonstrate that they were being proactive in improving their accountability, and to send a reputational signal to donors. Moreover, the fact that the standards have been progressively strengthened lends credibility to the interpretation of the Charter as a club. Club theorists argue that agents can gain from positive 'network effects' of club membership, resulting in enhanced standing with their principals (Prakash and Potoski 2006: 33). It could be argued that if member NGOs gain from a generalized perception that they are credible organizations, it may not matter if donors are unaware of the specifics of the Charter.
Voluntary regulatory activities may have an indirect influence on principals, perhaps leading, for example, to increased funding. Future research could test such a hypothesis by interviewing donors to establish how funding decisions are made. In contrast to club theorists, constructivists consider the influence of shared ideas, norms and values as key to understanding what shapes forms of peer regulation. They would predict that informants would regard a regulatory initiative as 'effective' if (a) it shaped shared expectations about 'rightful conduct,' (b) the norms were internalized by key actors and (c) it helped to foster an institutional environment that supported norm-compliant behavior. The interview data contained themes that revolved around the integrity of the membership's involvement with the Charter. There was little indication that the participants' opinions about the efficacy of the Charter were shaped by the presence or absence of financial 'rewards,' which were not mentioned at all. Interviewees expressed their disappointment with the poor recognition of the Charter inside and outside the organization and cautioned that a reporting procedure could not deliver meaningful change alone. They stressed the importance of organizational culture and of individual engagement with accountability norms. They valued opportunities to learn from their peers. The findings suggest that organizations participate in the Charter to satisfy a mixture of 'self-interested' and 'norm-guided' motivations. The second question that this study sought to address has direct policy significance: To what extent do the interviewees believe that the Charter has been effective in enhancing the accountability of its member organizations? The interviewees discussed three main benefits of Charter membership: Firstly, it provides NGOs with a defense against actual or anticipated criticisms of poor accountability from the media and political opponents. Secondly, membership provides peer learning opportunities. Thirdly, the IRP provides high-quality feedback that can be a useful impetus to boost standards of performance. The interviewees also listed a series of challenges associated with Charter membership. Concerns were raised that the low readership of the reports makes it problematic to maintain 'buy-in' at all levels of the organization. Respondents from advocacy/campaigning NGOs felt that Charter membership is more relevant to the working model and concerns of humanitarian/development NGOs. Lastly, some participants also perceived the reporting process as resource-intensive, and several warned of the danger that reporting becomes a bureaucratic exercise rather than a real driver of change. A number of policy recommendations arise from these findings. The Charter company should invest further effort in raising the profile of the initiative among stakeholders. It should also explore ways in which it can work with members to raise awareness of the initiative among NGO staff. It should work closely with advocacy/campaigning organizations to identify ways to enhance the relevance and value of Charter membership to their work, and expand opportunities for members to engage in peer learning. The Charter company has recently attempted to simplify the reporting process by setting clear maximum limits on the amount of information required; future research into this area might investigate whether these new guidelines have helped to address perceptions that the process is overly bureaucratic. 
This study also has implications for peer regulation initiatives more generally. Firstly, NGOs should consider more extensive consultation with their principals about what constitutes an effective signal. It was notable that interviewees cited the ability to shape the accountability agenda, free of donor influence, as a benefit of Charter membership. However, they were also concerned that the Charter sent weak signals because of its low profile. There seems to be some tension between their desire for autonomy and their ambitions for greater recognition. If member organizations want to signal that they are more credible than non-members, principals should ideally not only know about the club, but also have faith in it. That may be achieved by inviting principals to contribute to how verification and certification mechanisms are designed. Creating a stronger signal serves the self-interest of member organizations, and so the initiative will be more likely to be perceived as effective in club theory terms. However, regulatory mechanisms should also promote social learning if they are to be perceived as effective by actors who are motivated by norm-guided, as well as self-interested, considerations. Organizational learning is best achieved in a forum where actors can admit to failures without fear of punishment (Crack 2013). This is difficult to achieve within a regulatory initiative, as actors may be disinclined to speak with candor if this will undermine the reputational signal sent to principals. NGOs should explain to their stakeholders that owning up to failure can actually improve accountability, as long as lessons are learned and shared with peers. The willingness to disclose evidence of under-performance should be considered a sign of credibility as long as the club facilitates dialogue about best practice. In this way, the measures of efficacy employed by club theory and constructivists can be better aligned. This article is a starting point in addressing the knowledge gap about the effectiveness of the Charter. It is a timely juncture for further research to be conducted into the Charter as it inaugurates a 'Global Standard for CSO Accountability' with other global networks and embraces an ambitious new strategy to expand its membership (INGO Accountability Charter 2015e). Accountability is a centrally important value for progressive NGOs, so it is in the interests of practitioners and stakeholders to ensure that policy is designed in accordance with a robust evidence base.
2018-12-27T00:36:22.655Z
2018-04-01T00:00:00.000
{ "year": 2018, "sha1": "788b12833968d7ace86ceeeef912fb4504aab20d", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s11266-017-9866-9.pdf", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "d2cbf25fd002a5b62977f16b605621cc66d96118", "s2fieldsofstudy": [], "extfieldsofstudy": [ "Political Science" ] }
11245164
pes2o/s2orc
v3-fos-license
Association of ISMav6 with the Pattern of Antibiotic Resistance in Korean Mycobacterium avium Clinical Isolates but No Relevance between Their Genotypes and Clinical Features The aim of this study was to genetically characterize clinical isolates from patients diagnosed with Mycobacterium avium lung disease and to investigate the clinical significance. Multi-locus sequencing analysis (MLSA) and insertion sequence pattern analysis of M. avium isolates from 92 Korean patients revealed that all isolates were M. avium subspecies hominissuis. In hsp65 sequevar analysis, codes 2, 15, and 16 were most frequently found (88/92), with similar proportions among cases; additionally, two isolates belonging to code N2 and a previously unreported code were identified, respectively. In insertion element analysis, all isolates were IS1311 positive and IS900 negative. Four of the M. avium subsp. hominissuis isolates did not harbor IS1245, and one of the M. avium isolates intriguingly harbored DT1, which is thought to be an M. intracellulare-specific element. M. avium subsp. hominissuis harboring ISMav6 is prevalent in Korea. No significant association of the hsp65 code type or ISMav6 with clinical manifestations or treatment response was found, indicating that no specific strain/genotype among M. avium subsp. hominissuis organisms was a major source of M. avium lung disease. Interestingly, the presence of ISMav6 was correlated with greater resistance to moxifloxacin. In conclusion, the genotype of Korean M. avium subsp. hominissuis isolates is not a disease determinant responsible for lung disease, and specific virulence factors of M. avium subsp. hominissuis need to be investigated further. Introduction A rise in the incidence of pulmonary disease caused by nontuberculous mycobacteria (NTM) has been reported worldwide [1,2]. Mycobacterium avium complex (MAC) is the most frequent etiology of NTM lung disease [3]. MAC initially included two species, M. avium and M. intracellulare. M. avium is the most clinically significant species for humans and animals within the MAC and is divided into four subspecies: M. avium subsp. avium, M. avium subsp. hominissuis, M. avium subsp. paratuberculosis, and M. avium subsp. silvaticum [4,5]. Although subspecies of M. avium in different geographic regions or populations may have different levels of virulence due to co-evolutionary processes, leading to varying epidemiological dominance, most cases of M. avium human disease are due to M. avium subsp. hominissuis. Recently, lymphadenitis patients in France were found to be infected by only M. avium subsp. hominissuis among M. avium subspecies [6]. More recently, a subspecies identification analysis of M. avium clinical strains in the USA showed M. avium subsp. hominissuis to be the dominant M. avium subspecies (92.6%), followed by M. avium subsp. avium (7.4%) [7]. All German M. avium strains isolated from children and adults were identified as M. avium subsp. hominissuis [8]. Many studies have emphasized the importance of taxonomy in distinguishing species and subspecies of MAC because non-sequencing methods or 16S rRNA sequencing frequently fail to distinguish closely related species [9,10]. Multi-locus sequencing analysis (MLSA) has been suggested as the new standard method for identifying Mycobacterium species that are not well discriminated by 16S rRNA gene sequences alone [11][12][13][14]. The presence and distribution of various insertion sequences (IS) among M. 
avium subspecies have provided an unprecedented opportunity to define the genomic differences between M. avium subspecies as well as to develop molecular typing methods with sufficient discriminatory power to differentiate M. avium subspecies and isolates [15]. At our institution, the rpoB PCR restriction fragment length polymorphism analysis (PRA) method was used for species identification and diagnosis of MAC lung disease until 2009 [16][17][18]. To gain better insight into M. avium lung disease in Korea, we used sequencing-based methods for subspecies identification and genotyping and compared clinical characteristics and treatment outcomes according to genotype. Furthermore, we investigated patterns of antibiotic resistance according to mycobacterial genotype as well as the presence or absence of ISMav6. Study subjects Clinical isolates from 92 patients with newly diagnosed M. avium lung disease from Jan. 2008 to Dec. 2009 at Samsung Medical Center (Seoul, Korea) were collected and stored. The data in the present study are part of an ongoing prospective observational cohort study investigating NTM lung disease (ClinicalTrials.gov Identifier: NCT00970801). The study protocol for isolate collection and genotyping analysis was approved by the institutional review board of the Samsung Medical Center (IRB approval 2008-09-016), and written informed consent was obtained from all participants. All patients met the diagnostic criteria for NTM lung disease [3]. All patients were immunocompetent and none of the patients tested positive for human immunodeficiency virus. All isolates were collected before initiating antibiotic treatment for NTM lung disease. Additionally, M. avium species initially identified by PRA based on the rpoB gene at the time of diagnosis, as previously described, were used for subsequent analysis. MLSA including hsp65, rpoB, and 16S rRNA fragments was carried out using PCR primer sets as described previously [20][21][22]. The PCR products of target genes were subjected to sequence analysis. The nucleotide sequences of these genes were compared with data reported by BLAST analysis (http://www.ncbi.nlm.nih.gov) against sequences from M. avium subspecies type and related strains. M. avium subsp. avium ATCC 25291, M. avium subsp. hominissuis 104, M. avium subsp. paratuberculosis K-10, and M. avium subsp. silvaticum ATCC 49884 were used as reference strains. For phylogenetic analysis, sequences were trimmed using the CLUSTAL-W multiple sequence alignment program [23]. Phylogenetic trees were obtained from DNA sequences utilizing the neighbor-joining method and Kimura's two-parameter distance correction model with 1,000 bootstrap replications, supported by MEGA 6.0 software [24]. hsp65 code analysis hsp65 code analysis was performed as previously described [25]. hsp65 gene PCR products were subjected to sequence analysis. The nucleotide sequences of the hsp65 gene were compared with data reported by BLAST analysis (http://www.ncbi.nlm.nih.gov) against the M. avium type and related strains. hsp65 codes were classified according to previously reported papers [25][26][27]. Insertion sequence element analysis Multiplex PCR was performed to detect three target genes, IS900, IS1311, and DT1, using previously described methods [28]. A previously described primer set was used for the IS1245 insertion element [29]. The presence of ISMav6 was determined by PCR followed by sequencing analysis using a previously described primer set [26]. 
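To make the distance computation concrete, the following is a minimal sketch of Kimura's two-parameter correction named above; it is an illustration, not the authors' pipeline (which used CLUSTAL-W for alignment and MEGA 6.0 for tree building), and the two aligned fragments are hypothetical.

```python
import math

def kimura_2p(seq1, seq2):
    """Kimura two-parameter distance between two aligned DNA sequences.

    P = proportion of transition differences (A<->G, C<->T),
    Q = proportion of transversion differences (all other mismatches);
    d = -0.5 * ln((1 - 2P - Q) * sqrt(1 - 2Q)).
    """
    transitions = {frozenset("AG"), frozenset("CT")}
    n = ts = tv = 0
    for x, y in zip(seq1.upper(), seq2.upper()):
        if x not in "ACGT" or y not in "ACGT":
            continue  # skip gaps and ambiguous bases
        n += 1
        if x != y:
            if frozenset((x, y)) in transitions:
                ts += 1
            else:
                tv += 1
    P, Q = ts / n, tv / n
    return -0.5 * math.log((1 - 2 * P - Q) * math.sqrt(1 - 2 * Q))

# Hypothetical aligned hsp65 fragments (illustration only):
print(round(kimura_2p("ATGGCGAAGACGATTGCGTAC",
                      "ATGGCGAGGACGATCGCGTAT"), 4))
```

The neighbor-joining step then operates on the matrix of such pairwise distances, which is what MEGA performs internally; bootstrap support is obtained by repeating the procedure on resampled alignment columns.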
PCR products of insertion elements were sequenced and the existence of a specific insertion element in each strain was confirmed [30]. Statistical analyses were performed using SAS 9.1 (SAS Institute Inc., Cary, NC, USA) and a P-value less than 0.05 was considered statistically significant. Subspecies identification of M. avium clinical isolates by MLSA Phylogenetic analysis based on the concatenated hsp65 and rpoB sequences from all isolates and from those of closely related species within the MAC showed that all isolates belong to M. avium subsp. hominissuis (Fig 2). Therefore, all isolates were identified as M. avium subsp. hominissuis using MLSA. Distribution of hsp65 codes in M. avium subsp. hominissuis strains In total, the 92 isolates were classified into five different hsp65 sequevars. There were no isolates classified as hsp65 code 4, the M. avium subsp. avium sequevar. Four of these sequevars were well recognized as M. avium subsp. hominissuis type and clinical strains, and one sequevar code was newly identified in this study. The new sequevar was coded N7 (following the code names given in the previous paper [N1-N3] [26] and an accepted paper [N4-N6]), and two isolates were classified as code N7. The distribution of hsp65 sequevars in the 92 isolates is shown in Table 1. The major codes were 2, 15, and 16. Detection of insertion sequence elements Relatedness of clinical characteristics, treatment response, and drug susceptibility to hsp65 codes and presence/absence of ISMav6 We analyzed clinical characteristics and treatment response among the 3 major codes (codes 2, 15, and 16). There were no significant differences in clinical features among the 3 groups (S1 and S2 Tables). We also analyzed clinical characteristics and treatment response according to the presence of ISMav6. There were no significant differences in clinical features between those with and without ISMav6 (S3 and S4 Tables). The association of genotype and the presence of ISMav6 with drug susceptibility patterns in the M. avium subsp. hominissuis isolates was evaluated for CLR and MXF. Drug susceptibility tests were performed in 72 and 71 patients for CLR and MXF, respectively. None of the hsp65 codes showed trends in drug susceptibility levels (data not shown); however, the presence of ISMav6 was correlated with greater resistance to MXF (Table 2). Discussion In this study, clinical isolates from 92 patients previously diagnosed with M. avium lung disease over a two-year period were further analyzed. Species identification was initially performed by a non-sequencing method and then species were re-identified using a sequencing method. Among the 92 isolates identified as M. avium by PRA at the time of diagnosis, all isolates were precisely identified as M. avium subsp. hominissuis. ISMav6 is a novel IS recently reported in the genetic characterization of Japanese human clinical isolates [27]. In the present study, the prevalence of ISMav6 in Korean patients with M. avium lung disease was 61%. Interestingly, more clinical isolates with hsp65 code 15 harbored ISMav6 (84%, 21/25) than isolates with hsp65 code 2 and code 16 (47% and 58%, respectively). Also, both hsp65 code 15 and ISMav6 have rarely been reported in the literature except in Japan. In Germany, one M. avium subsp. hominissuis strain with hsp65 code 15 harboring ISMav6 was reported [8]. The high proportion of ISMav6 in M. avium subsp. hominissuis strains from Korea and Japan is thought to be a specific genetic feature. 
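Table 2 is not reproduced in this text, so the following hedged sketch only illustrates how an association of this kind (ISMav6 presence versus MXF resistance) is typically tested on a 2×2 contingency table; the counts are invented placeholders, not the study's data.

```python
from scipy.stats import fisher_exact

# Hypothetical counts (rows: ISMav6 present/absent; columns: MXF-resistant/
# MXF-susceptible). The real values appear in Table 2 of the paper.
table = [[18, 25],
         [4, 24]]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")
# Under the Methods' threshold, p < 0.05 would support the reported
# correlation between ISMav6 and greater moxifloxacin resistance.
```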
Thus, both hsp65 code 15 and ISMav6 may be related to the epidemiological diversity of M. avium clinical strains. In general, DT1 is present in M. intracellulare and not present in M. avium subsp. hominissuis. One M. avium subsp. hominissuis isolate possessed DT1 in this study, which is a novel observation. Since a number of different IS elements have been described in various NTM species, species-specific IS elements have been revisited for MAC identification [28,29,35]. IS elements are mobile by nature, so there is a risk that similar elements will be found in unrelated bacteria because of mobility to or from MAC organisms. For example, natural occurrence of horizontal transfer of the M. avium-specific IS1245 to M. kansasii has been reported [36]. Thus, the use of insertion sequences as species-specific markers should be conducted more carefully because it may influence molecular diagnosis and, consequently, treatment outcomes. Kikuchi et al. reported that variable-number tandem-repeat (VNTR) genotyping of 37 M. avium clinical isolates was associated with the progression of M. avium lung disease in Japan [37]. However, our study, which included more than 100 clinical isolates, did not identify any association between the M. avium VNTR genotype and disease progression of M. avium lung disease [38]. In the present study, disease progression was defined as when patients with M. avium lung disease require antibiotic treatment due to worsening symptoms, deteriorating chest radiograph features and microbiological findings within 2 years of diagnosis [39]. There was no difference in clinical characteristics and treatment response according to hsp65 sequevar codes and ISMav6, in agreement with previous VNTR-based observations that there was no association between the genotype and clinical characteristics of Korean patients [38]. Interestingly, the presence of ISMav6 was associated with drug resistance to MXF in this study. Tatano et al. reported an association between the VNTR genotype and susceptibility to quinolones and EMB [40]. Dvorska et al. found no relationship between IS1311- and IS1245-based RFLP genotypes and drug susceptibility in MAC isolates [41]. These findings suggest that some genetic factors may influence the acquisition of drug resistance and that ISMav6 may be a genetic factor associated with drug resistance. As far as we know, this is the first study to examine the association of hsp65 code genotypes and ISMav6 with clinical features and drug susceptibility. Our results indicate that specific genotypes among M. avium subsp. hominissuis organisms are not predominantly responsible for M. avium lung disease in Korea, and further analysis of ISMav6 (i.e. the location of ISMav6 in the genome of M. avium isolates) will help identify relationships between genetic features and drug susceptibility. The present study has some limitations. First, this study was conducted at a single center and was performed on a referral basis with final analysis of only a small number of Korean patients; therefore, caution should be used when attempting to generalize our findings. Second, this study was preliminary because we did not investigate the specific genes associated with drug resistance. Thus, further precise drug resistance typing of rpoB and gyrA/B with a large number of isolates will provide a better understanding of the association between M. avium subsp. hominissuis genotypes and drug resistance. 
Nevertheless, to the best of our knowledge, this is the first report to investigate the link between ISMav6 and drug resistance to MXF in M. avium subsp. hominissuis strains from Korean patients. Future studies of informative and valuable genetic factors related to M. avium lung disease should be conducted in both the pathogen and host. Supporting Information S1 Fig. rpoB sequence-based phylogenetic tree using the neighbor-joining method with Kimura's two-parameter distance correction model. Bootstrap analyses determined from 1,000 replicates are indicated at the nodes. Bar, 0.5% difference in nucleotide sequence. GenBank accession numbers are given in parentheses. (TIF) S1
2018-04-03T02:42:15.908Z
2016-02-09T00:00:00.000
{ "year": 2016, "sha1": "a69a0704185c32b55bd6bc7e99fe114119f25510", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0148917&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "a69a0704185c32b55bd6bc7e99fe114119f25510", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
53022622
pes2o/s2orc
v3-fos-license
Prevalence of chronic kidney disease in South Asia: a systematic review Background Chronic kidney disease (CKD) is becoming a major public health problem around the world, but its prevalence has not been reported for the South Asian region as a whole. This study aimed to systematically review the existing data from population-based studies in this region to bridge this gap. Methods Articles published between December 1955 and April 2017 that reported the prevalence of CKD according to the K/DOQI practice guideline in eight South Asian countries were searched, screened and evaluated from seven electronic databases using the PRISMA checklist. CKD was defined as creatinine clearance (CrCl) or GFR less than 60 ml/min/1.73 m2. Results Sixteen population-based studies that used eGFR to measure CKD were found from four South Asian countries (India, Bangladesh, Pakistan and Nepal). No study was available from Sri Lanka, the Maldives, Bhutan or Afghanistan. The number of participants ranged from 301 in Pakistan to 12,271 in India. The majority of the studies focused solely on urban populations. Different studies used different equations for measuring eGFR. The prevalence of CKD ranged from 10.6% in Nepal to 23.3% in Pakistan using the MDRD equation. This prevalence was higher among older age groups. Equal numbers of studies reported a higher prevalence among males and among females. Conclusions This systematic review reported a high prevalence of CKD in South Asian countries. The findings of this study will help pertinent stakeholders to prepare suitable policy and effective public health interventions in order to reduce the burden of this deadly disease in the most densely populated share of the globe. Electronic supplementary material The online version of this article (10.1186/s12882-018-1072-5) contains supplementary material, which is available to authorized users. Background Globally, chronic kidney disease (CKD) is one of the leading causes of death and disability. In 1990, CKD was the 27th leading cause of death, rising to become the 18th leading cause of death by 2010 [1]. In 2013, around 1 million people died of CKD-related causes [2]. Despite being a global concern, CKD disproportionately affects people from developing countries. A systematic review conducted in 2015 reported that 109.9 million people from high-income countries had CKD (men: 48.3 million, women: 61.7 million), whereas the burden was 387.5 million in lower-middle-income countries (men: 177.4 million, women: 210.1 million) [3]. CKD is associated with a wide range of life-threatening diseases [4] and is considered one of the major risk factors for developing cardiovascular disease [5]. A study conducted in 2003 reported that patients with a glomerular filtration rate (GFR) between 15 and 59 ml/min/1.73 m2 are at 38% higher risk of developing cardiovascular disease than patients with a GFR between 90 and 150 ml/min/1.73 m2 [6]. Along with its impact on individual health, CKD also affects social life and is responsible for loss of productivity [7]. The most common form of social impact due to CKD is financial burden [7]. CKD patients are at higher risk of developing end-stage renal disease (ESRD), which requires costly management such as dialysis and kidney transplantation [8]. A study conducted in the USA revealed that the treatment cost for CKD and ESRD imposes a huge financial burden on the health care system; the average annual cost for end-stage renal disease without transplantation was nearly 75 billion US dollars in 2001 [8]. 
CKD needs to be given priority because it is a consequence of uncontrolled diabetes and hypertension, which are now considered worldwide epidemics. Despite its acute and chronic harmful consequences, CKD is rarely studied, especially in low- and middle-income countries of Asia and Africa. A few scattered studies have been conducted in India, Bangladesh, Pakistan, Nepal, and Sri Lanka; however, no systematic review is available for the South Asian region portraying the current burden of CKD. Hence, it is difficult for policy makers and public health leaders to obtain a complete picture of the CKD burden in these countries and formulate relevant policies to overcome CKD-related mortality and morbidity. Therefore, we have conducted this systematic review to identify the prevalence of CKD in South Asian countries. Search strategy We conducted a systematic review of relevant existing literature from South Asian countries using the PRISMA guideline [9]. Two researchers separately searched for potential literature in PubMed, Google Scholar, and POPLINE. In addition, they searched national online journals for India, Pakistan, Bangladesh, Nepal, and Sri Lanka. However, no national online journal was available for Bhutan, the Maldives or Afghanistan. During the search, Medical Subject Headings as well as plain text were used for the following keywords: 'epidemiology', 'prevalence', 'chronic renal insufficiency', 'chronic kidney disease', 'India', 'Bangladesh', 'Sri Lanka', 'Nepal', 'Bhutan', 'Maldives', 'Pakistan' and 'Afghanistan'. Using those key terms together with Boolean operators, a global search term was developed for the potential literature search. We also manually searched the bibliographies of all selected studies (snowballing) to identify more articles. Inclusion and exclusion criteria Inclusion criteria for this study were: a) study reported data from South Asian countries; b) study published between December 1955 (earliest publication) and 30 April 2017; c) study reported prevalence of CKD; d) study published in the English language; and e) study carried out in the general population. Exclusion criteria for this study were: a) study did not report data from South Asian countries; b) study published in a language other than English; c) conference proceedings, book chapters, editorials, and studies published only in abstract form; d) study carried out in high-risk groups of people (known cases of diabetes, hypertension, or kidney disease); e) study with a sample size of less than 200 participants; and f) study that did not determine CKD based on GFR estimation by serum creatinine-based equations. At first, two researchers (IS and RDG) searched and screened all the articles individually. The third researcher (MH) critically reviewed the overall search and screening process to ensure consistency. Finally, the full text of selected publications was assessed for eligibility by all three researchers (MH, RDG, and IS). Any discrepancies were resolved by group (MH, IS, RDG and MS) consensus throughout the whole process. Quality appraisal Three researchers (MH, IS and RDG) independently determined the risk of bias of included studies. For this purpose, we adopted a quality assessment checklist in which eight study characteristics were used to assess the quality of included studies, such as selection of representative study participants, sample size, sampling technique, response rate, exclusion rate and method used for determination of CKD. 
This checklist was prepared based on the criteria used in a systematic review on CKD conducted in Sub-Saharan Africa [10]. If the study participants were representative of the general population, we scored the study as "2"; if they were representative only of the population in question, we scored it as "1"; otherwise, we scored it as "0". One point each was awarded if the study participants were not included or excluded on the basis of specific risk factors, if the sample size was adequate (at least 384, assuming a 50% prevalence rate), if the sampling technique was random, if the response rate was > 40%, if the exclusion rate was < 10%, if the method used to diagnose CKD was mentioned, and if a consistent method for determination of CKD was used; zero points were given for each of these criteria that was not met. The points for each study were then added to obtain the final score, with a maximum of 9. A study scoring 7-9 was considered "high quality", a score of 4-6 "moderate quality", and a score of 0-3 "poor quality". All discrepancies that arose during quality assessment were resolved by consensus. Definition of CKD Chronic kidney disease (CKD) is defined as structural/functional abnormalities of the kidney or decreased GFR < 60 ml/min/1.73 m2 for 3 months [11]. We used the definition of CKD from the K/DOQI practice guideline published in 2002 by the National Kidney Foundation (NKF): CKD was defined as creatinine clearance (CrCl) or GFR less than 60 ml/min/1.73 m2 [11,12]. In the studies included in this review, three equations were used to estimate eGFR: the four-variable MDRD equation [13,14], the CKD-EPI equation [15] and the Cockcroft-Gault equation [16]. Data extraction Two authors (MH and RDG) separately extracted data from the selected articles, and for this purpose a data extraction table was developed in an Excel file. This table included (a) title, (b) journal name, (c) name of authors, (d) publication year, (e) year of data collection, (f) study objective, (g) study setting (urban/rural), (h) study design, (i) sampling strategy (random/non-random), (j) sample size, (k) study population, (l) outcome assessment (objective/subjective), (m) diagnostic criteria for CKD, (n) prevalence (overall), (o) prevalence (gender, age, location specific), and (p) authors' conclusion. After data extraction, a third author (IS) cross-checked both tables to ensure consistency. Any dispute that arose during data extraction was resolved by group consensus. Subsequently, data were analyzed using tabulation, grouping and a thematic approach. Bangladesh Three studies were identified from Bangladesh [25][26][27], all of which were conducted in Dhaka city (the capital of Bangladesh). Two studies performed community-based surveys [25,27], of which one targeted slum dwellers [27]. These two studies selected participants using random sampling techniques [25,27]. However, Fatema et al. carried out their study among participants attending a health screening camp, and their sampling technique was non-random [26]. The number of participants in the Bangladeshi studies ranged from 402 to 1000. Males were predominant in two studies (51.0% and 88.3%) [25,26]. 
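As a concrete illustration of the case definition used throughout this review, here is a minimal sketch of the four-variable MDRD estimate and the resulting CKD flag (eGFR < 60 ml/min/1.73 m2). The coefficient 186 is the commonly cited original value (175 is used with IDMS-calibrated creatinine), and the participant values are hypothetical.

```python
def egfr_mdrd(scr_mg_dl, age_years, female, black=False):
    """Four-variable MDRD eGFR in ml/min/1.73 m2.

    Uses the commonly cited coefficient 186; substitute 175 when serum
    creatinine is IDMS-calibrated. The ethnicity factor is shown only
    for completeness of the published equation.
    """
    egfr = 186.0 * (scr_mg_dl ** -1.154) * (age_years ** -0.203)
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

def has_ckd(egfr):
    # Case definition of this review: eGFR < 60 ml/min/1.73 m2
    return egfr < 60.0

# Hypothetical participant: 55-year-old woman, serum creatinine 1.4 mg/dl
e = egfr_mdrd(1.4, 55, female=True)
print(f"eGFR = {e:.1f} ml/min/1.73 m2 -> CKD: {has_ckd(e)}")
```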
Pakistan We found four studies from Pakistan, all of which were conducted in urban areas of Karachi [28][29][30][31]. Three of the four studies performed community-based surveys and selected participants using random sampling techniques [28,30,31]. However, Imran et al. conducted their study among volunteers who willingly participated in a health camp, and the sampling technique of this study was non-random [29]. Among the Pakistani studies, the lowest number of participants (301) was reported in [31]. eGFR was measured using the MDRD [28,30], CKD-EPI [29] and CKD-EPI Pakistan [31] equations in the Pakistani studies. Nepal Only one article was available from Nepal that carried out a population-based study to identify CKD prevalence (according to the K/DOQI guideline). This study adopted a community-based cross-sectional survey design and was conducted in urban Dharan [32]. One thousand individuals (male: 48%, female: 52%) who were at least 20 years old participated in this survey (Table 2) [32]. This study measured eGFR using the MDRD equation for the diagnosis of CKD [32]. Prevalence of CKD India The overall pooled prevalence of CKD among Indian adults was 10.2%. As per the high-quality studies, the highest prevalence was 17.2%, found among participants of the SEEK (Screening and Early Evaluation of Kidney Disease) study [21], and the lowest prevalence was 4.2%, found among adults aged ≥ 20 years residing in Delhi [20]. Studies that used both equations (MDRD: 15%, CKD-EPI: 13.1%) found that CKD prevalence was slightly higher when using the MDRD equation compared with the CKD-EPI equation [21,23]. Studies that used both the MDRD and CG-BSA equations found that the prevalence of CKD was markedly higher using the CG-BSA equation than using the MDRD equation (Table 3) [18,20]. Age-specific prevalence: Three studies from India reported the age-specific prevalence of CKD. Two studies reported the age-specific prevalence using the MDRD equation and the remaining one used the CKD-EPI equation. All of these studies found that the prevalence of CKD rose with increasing age (Table 3) [17,18,22]. Gender-specific prevalence: Six Indian studies reported gender-specific CKD prevalence. Three of these six studies reported a higher prevalence of CKD among men, ranging between 8.1% and 21.0% [18,21,22]. However, the remaining three studies reported that CKD prevalence was higher among female participants, ranging between 16.3% and 19.1%, than among their male counterparts [17,18,20,23] (Table 3). Bangladesh The overall pooled prevalence of CKD among Bangladeshi adults was 17.3%. As per the high-quality studies in Bangladesh, the highest prevalence of CKD was reported as 26.0% [25], whereas Fatema et al. reported the lowest prevalence (12.8%) [26] (Table 4). This discrepancy might be attributable to the age difference of study participants in these two studies: the mean age of study participants was 49.5 years in Anand et al. [25] and 37 years in Fatema et al. [26]. In the only study that focused on urban slum dwellers, CKD prevalence was found to be 16.0% using the CG/BSA method (Table 5) [27]. Age-specific prevalence: Among the three Bangladeshi studies, only Huda et al. reported the age-specific prevalence of CKD. According to this study, the prevalence of CKD was higher among people aged more than 40 years (16.5%) than among their counterparts aged between 25 and 40 years (10.7%) (Table 5) [27]. Pakistan The overall CKD prevalence among Pakistani adults was 21.2%. According to the high-quality studies, the highest CKD prevalence in Pakistan was reported as 29.9% [30] and the lowest prevalence was 12.5% [31]. 
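The review reports 'overall pooled' prevalences without stating the pooling method; one plausible reading, shown below purely as an illustration with invented study sizes, is a simple sample-size-weighted average across studies.

```python
# Hypothetical (participants, prevalence) pairs for one country;
# these are not the review's actual study data.
studies = [(12271, 0.042), (3000, 0.172), (2500, 0.079)]

weighted_cases = sum(n * prev for n, prev in studies)
total_participants = sum(n for n, _ in studies)
pooled = weighted_cases / total_participants
print(f"sample-size-weighted pooled prevalence: {100 * pooled:.1f}%")
```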
Though both of these studies were conducted among participants of similar age groups, the use of different equations for determining CKD might account for this difference. Age-specific prevalence: Among the Pakistani studies, only Alam et al. reported the age-specific prevalence of CKD. The study found the highest prevalence of CKD among elderly participants aged more than 50 years (43.6%) and the lowest prevalence among comparatively younger participants aged less than 30 years (10.5%) (Table 5) [28]. Gender-specific prevalence: All four Pakistani studies reported the gender-specific prevalence of CKD [28][29][30][31]. Alam et al. and Imran et al. reported a higher CKD prevalence among men [28,29]; however, Jessani et al. and Jafar et al. identified women as suffering from CKD more frequently than men [30,31]. In the high-quality study that used a country-specific equation for determining CKD, a slightly higher proportion of female participants was found to have CKD than their male counterparts (male: 11.6%, female: 13.3%) (Table 4) [31]. Nepal Only one Nepalese study met the eligibility criteria for this systematic review [32]. This moderate-quality study was conducted among adults aged ≥20 years residing in urban Dharan and reported a CKD prevalence of 10.6%. When segregated by age, CKD prevalence showed a rising trend with increasing age (Table 4). However, gender-specific prevalence was not mentioned in this study. Discussion To the best of our knowledge, our systematic review is the first of its type to portray the prevalence of CKD in South Asian countries. This study will, we hope, bring the attention of international, regional and national stakeholders to the magnitude of CKD and the importance of reducing the burden of this deadly disease in the most densely populated share of the globe. Our study revealed a scarcity of population-based data on CKD in South Asian countries. This finding supports the statement of a previous study that reported that data on non-communicable diseases are rarely available outside developed countries [33]. Ample inconsistencies in the characteristics of study populations, study designs, sampling techniques and methods used to determine CKD make it challenging to depict an exact figure for CKD prevalence as well as to offer a persuasive comparison of prevalence estimates in these countries. Nevertheless, according to the existing literature, one to four out of every 10 individuals in South Asia are suffering from CKD. The highest and lowest prevalences of CKD were reported from Pakistan (21.2%) and India (10.2%), respectively. The country-specific prevalences of India, Bangladesh and Nepal are similar to the global prevalence of CKD (13.4%) [34] and to the prevalence in some developed countries such as the USA and Japan (10% to 13%) [35,36]. However, the unusually high prevalence reported in Pakistan might be due to the higher minimum age set as an eligibility criterion for study participants in the Pakistani studies (> 40 years). The age-specific distribution of CKD unveiled in this systematic review also supports this finding. Studies from four different countries (India, Bangladesh, Pakistan and Nepal) revealed that the prevalence of CKD was higher among elderly people than among their younger counterparts. Age is a well-established risk factor for the development of CKD [37,38]. 
Usually, as part of the normal physiologic process, renal function (GFR) starts to decline even in a healthy individual after 30 to 40 years of age, and may deteriorate further after 50-60 years of age due to structural changes in the kidneys [39,40]. This increased prevalence of CKD among elderly individuals can also be explained by the higher prevalence of diabetes and hypertension among this group of people, which are considered important risk factors for developing CKD [17,28,29,32]. Seven studies included in our review found a higher prevalence of CKD among men, whereas the rest of the studies reported that women suffer from CKD more frequently than men. This finding is in contrast with the pattern of gender distribution of CKD across the globe. In a recently conducted systematic review on the global prevalence of CKD, two-thirds of the included studies identified that CKD was more prevalent in women than in men [34]. A population-based study conducted in Norway reported that female gender was associated with a slower decline of GFR with increasing age [41]. Women are also considered protected from CKD to some extent because of distinctive biological phenomena (glomerular structure, glomerular hemodynamics, systolic blood pressure, hormonal status) and lifestyle-related factors (dietary protein and salt intake, smoking and alcohol consumption) [42,43]. However, further research is needed to identify the gender-specific prevalence of CKD in South Asian countries. This systematic review indicates that CKD poses a huge burden on the health systems of South Asian countries (India, Bangladesh, Pakistan and Nepal). This is not unusual considering the high prevalence of diabetes and hypertension in this region [44][45][46][47][48][49][50]. However, awareness of different non-communicable diseases like diabetes, hypertension and CKD is very low among South Asian people, and people usually do not seek health care until signs or symptoms of CKD appear [44,45,51]. In addition, people commonly prefer self-treatment or rely on informal and unqualified practitioners [52][53][54]. Like those of other LMICs, the health systems of South Asian countries are not prepared to combat the huge burden of NCDs [55]. The number of human resources dedicated to the prevention and treatment of kidney diseases is also low and unevenly distributed in these countries [55]. Along with this, the poor referral systems prevailing in South Asian countries make it difficult to detect CKD cases at an early stage [56][57][58]. It is evident that untreated CKD is a risk factor for developing end-stage renal disease (ESRD) and cardiovascular diseases (CVDs), which are leading causes of death in LMICs [59][60][61][62]. CKD is also found to be associated with poor health-related quality of life and loss of productivity [63]. To combat the CKD-related burden, prevention and early detection of the disease through low-cost community-based screening programs are important, especially in the resource-constrained settings of South Asian countries. It is also a timely need for pertinent stakeholders of these countries to perform advocacy in order to offer low-cost kidney transplantation and dialysis facilities for advanced-stage CKD patients. Further research is warranted to identify the actual burden of CKD among people of different age groups, sexes, ethnicities and geographical locations, as well as among underprivileged groups residing in slums and rural areas. This systematic review is not free from limitations. 
The main limitation of this review was that the equations used for determining CKD in the included studies were not validated in South Asian populations, except for the one study carried out by Jessani et al. [31]. Moreover, the studies considered for this review adopted cross-sectional designs although, to be diagnosed with CKD, a person needs to show abnormal kidney structure or function for more than 3 months, which cannot be captured by cross-sectional studies [64]. Conclusions Chronic kidney disease is a major public health concern in South Asian countries. Studies reported that one to four out of every ten individuals in these countries are suffering from CKD, with variation attributable to discrepancies in research methodology and in the methods used for determining CKD. The prevalence of CKD rose with increasing age; however, issues such as gender and other socio-economic factors have not been fully explored, and further research is therefore warranted. The limited number of population-based studies, all using cross-sectional designs, also creates the need for further research to identify the actual burden of CKD and its distribution in these countries. It is also a timely need for relevant stakeholders of this region to develop suitable policies and effective public health interventions for the prevention, control and treatment of CKD in South Asia.
2018-11-10T06:29:28.297Z
2018-10-23T00:00:00.000
{ "year": 2018, "sha1": "7a47b62e221e1001b10b44b7db3cc3ef79bf7d76", "oa_license": "CCBY", "oa_url": "https://bmcnephrol.biomedcentral.com/track/pdf/10.1186/s12882-018-1072-5", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7a47b62e221e1001b10b44b7db3cc3ef79bf7d76", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
247011844
pes2o/s2orc
v3-fos-license
Computing a spectral sequence of finite Heisenberg groups of prime power order Let $p\geq 5$ be a prime number, let $n\geq 2$ be a natural number and let $\text{Heis}(p^n)$ denote the Heisenberg group modulo $p^n$. We study the Lyndon-Hochschild-Serre spectral sequence $E(\text{Heis}(p^n))$ associated to $\text{Heis}(p^n)$ considered as a split extension, and show that $E(\text{Heis}(p^n))$ collapses in the third page. Moreover, for a fixed $p$, the spectral sequences $E(\text{Heis}(p^n))$ are isomorphic from the second page on. Introduction Group cohomology provides a framework to analyse intrinsic algebraic properties of a given group (see [9, Section 2.1], [15] for instance) or to study automorphisms of groups (compare [11], [12] and [20]), and it also has applications in algebra and number theory (see [13] and references therein). It is also interesting to know which types of graded rings can occur as cohomology rings of finite groups and how many of them are distinct (compare [4], [7], [18]). However, computing cohomology is extremely complicated and thus there are few examples of such rings in the literature. One of the most powerful tools for computing such rings is the Lyndon-Hochschild-Serre spectral sequence (LHSss, for short), and in this paper we provide one of the first infinite families of groups of prime power order whose associated LHSss collapse in the same page. More precisely, let p denote an odd prime number, let n ≥ 1 be an integer and let G = Heis(p^n) = C_{p^n} ⋉ (C_{p^n} × C_{p^n}) be the Heisenberg group modulo p^n. Note that G is just a finite quotient of the infinite Heisenberg group Z ⋉ (Z × Z). Let moreover K denote a field of characteristic p with trivial G-action and let H^•(G) = H^•(G; K) denote the cohomology ring of G with coefficients in K. We study the LHSss E associated to G as a split extension of C_{p^n} by C_{p^n} × C_{p^n}. We show that, for all prime numbers p ≥ 5 and integers n ≥ 2, the spectral sequence E collapses in the third page, and that for such fixed p, the spectral sequences E are isomorphic from the second page on, independently of n. To obtain that result, we follow Siegel's techniques [17], where he computes the spectral sequence associated to Heis(p). We summarise the main results and give an outline of the paper below. We start by setting the notation in Section 2, and in Section 3 we describe the additive and multiplicative structure of the second page E_2 of the spectral sequence E (see Proposition 3.1 and Theorem 3.4). In Section 4, we use maps between cohomology rings to detect some of the generators in E_2 that survive to the infinity page E_∞. In Section 5, we provide a generalization of [17, Corollary 2], which is the key step to deduce the image under the second differential of the remaining generators in E_2 (see Theorem 5.1 and Propositions 5.3 and 5.4, respectively). We postpone the statement of Theorem 5.1 to Section 5, as it requires introducing a considerable amount of notation, and its proof can be found in Appendix A. In Sections 6 and 7, we describe the third page E_3 of the spectral sequence and we show that all the remaining differentials are trivial. In turn, we attain the main result of this paper. Theorem 1. Let p ≥ 5 be a prime number and let n ≥ 2 be an integer. Then, the following statements hold: (i) The LHS spectral sequence E(Heis(p^n)) collapses in the third page. (ii) For a fixed prime number p, the spectral sequences E(Heis(p^n)) are isomorphic as bigraded K-algebras from the second page on. 
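Before turning to the spectral sequence itself, the following sketch gives a concrete sanity check of the group featuring in Theorem 1: it realises Heis(p^n) as 3×3 unitriangular matrices over Z/p^nZ and verifies the defining relations of the split extension. The matrix model and the conjugation convention a^σ = σaσ^{-1} are our illustrative choices, made so that the computation reproduces the action a^σ = ab, b^σ = b used in Section 2; they are not taken from the paper.

```python
import numpy as np

p, n = 5, 2
q = p ** n  # matrix entries live in Z/p^n Z

I = np.eye(3, dtype=np.int64)
sigma = np.array([[1, 1, 0], [0, 1, 0], [0, 0, 1]])  # generates Q = C_{p^n}
a     = np.array([[1, 0, 0], [0, 1, 1], [0, 0, 1]])  # generates one factor of M
b     = np.array([[1, 0, 1], [0, 1, 0], [0, 0, 1]])  # central generator of M

def mul(*ms):
    out = I.copy()
    for m in ms:
        out = (out @ m) % q
    return out

def power(m, k):
    out = I.copy()
    for _ in range(k):
        out = mul(out, m)
    return out

def inv(m):
    return power(m, q - 1)  # each generator has order p^n, so m^(q-1) = m^(-1)

# sigma, a, b all have order p^n and b commutes with a:
assert np.array_equal(power(sigma, q), I) and np.array_equal(power(a, q), I)
assert np.array_equal(mul(a, b), mul(b, a))
# The action of sigma on M: a^sigma = ab and b^sigma = b.
assert np.array_equal(mul(sigma, a, inv(sigma)), mul(a, b))
assert np.array_equal(mul(sigma, b, inv(sigma)), b)
print(f"Heis({p}^{n}) relations verified")
```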
The description of the infinity page E_∞(Heis(p^n)) determines the dimension of the K-vector space H^k(Heis(p^n)) for every k ≥ 0. Therefore, we obtain the principal result of Section 8. Corollary 2. Let p ≥ 5 be a prime number and let n ≥ 2 be an integer. Then, the Poincaré series of H^•(Heis(p^n)) is given by an explicit rational function (see Theorem 8.2). In Section 9, we consider the case where K is a finite field of characteristic p and we obtain the next result (see Corollary 9.1). Corollary 3. Let p ≥ 5 be a prime number and assume that K is a finite field of characteristic p. Then, there are only finitely many isomorphism types of (graded) algebras in the infinite collection {H^•(Heis(p^n))}_{n≥1}. The above result is not surprising as the rank of G is two (see [18]), and it also motivates us to state a conjecture (see Conjecture 1). Acknowledgements. We would like to thank S. F. Siegel for clarifying how to compute the equalities in Proposition A.7. We would also like to thank Jon González-Sánchez for the interesting conversations regarding this project and for his support. Background and notation Throughout, let p denote an odd prime number, let n ≥ 1 be an integer and let K denote a field of characteristic p. We write G = C_{p^n} ⋉ (C_{p^n} × C_{p^n}) for the Heisenberg group modulo p^n and we set M = C_{p^n} × C_{p^n} = ⟨a, b⟩ and Q = C_{p^n} = ⟨σ⟩. Note that the element σ ∈ Q acts (on the right) on M via a^σ = ab and b^σ = b. The cohomology ring of M with coefficients in K is H^•(M) = Λ(x_1, y_1) ⊗ K[x_2, y_2], with |x_i| = |y_i| = i for i = 1, 2 (see [3, Proposition 4.5.4]). We can take x_1 = a^*, y_1 = b^*, x_2 = β_n(x_1) and y_2 = β_n(y_1), where (·)^* denotes the dual element and β_n : H^1(M) → H^2(M) is the n-th Bockstein homomorphism [14, Section 6.2, p. 197]. The (left) action of σ on H^•(M) can then be computed explicitly from these choices. For a group G̃ with normal subgroup M̃, there exists a first quadrant spectral sequence E(G̃) converging to H^•(G̃) (see [9, Section 7.2] and references therein). It is called the Lyndon-Hochschild-Serre spectral sequence (LHSss, for short), and satisfies E_2^{r,s} ≅ H^r(G̃/M̃; H^s(M̃)) for r, s ≥ 0. In the case under study, M is a normal subgroup of G with quotient Q and, for simplicity, we will denote by E the LHSss associated to the split extension. 3. Description of the second page of the spectral sequence We follow the notation in the previous section and, unless otherwise stated, we additionally assume until the end of the manuscript that n ≥ 2. We use the minimal KQ-resolution ([2, Section I.6]) to compute the cohomology groups E_2^{r,s}. Let N(σ) = Σ_{i=0}^{p^n−1} σ^i ∈ KQ; as n ≥ 2, it can be readily checked that, for all φ ∈ H^•(M), σ^p · φ = φ and N(σ) · φ = 0 hold. Since N(σ) acts trivially, the second page of the spectral sequence then takes the form E_2^{r,s} ≅ H^s(M)^Q if r is even, and E_2^{r,s} ≅ H^s(M)/(σ − 1)H^s(M) if r is odd. One can choose an element z_{2p} ∈ H^{2p}(M) that is invariant under the action of σ. Furthermore, one can define a bigraded subquotient D_2 of E_2, with separate descriptions for r even and r odd, in terms of which E_2 is expressed. Consequently, it suffices to study the structure of D_2 so that the structure of E_2 is determined. 3.1. Additive structure. The first step will be determining a basis of the K-vector space D_2^{r,s} for each r, s ≥ 0. Proposition 3.1. (i) For s ≥ 0, the basis elements of (W_s)^Q are the following: (ii) For s ≥ 1, the basis elements of (σ − 1)W_s are the following: (iii) For s ≥ 0, the basis elements of W_s/(σ − 1)W_s are the following: Proof. The proof follows verbatim that of [17, Proposition 3]. Using this result, we can write a table with the basis elements of D_2^{r,s}; using the diagonal approximation, we then describe the multiplicative structure of E_2, that is, the bigraded algebra structure of E_2 over K. 
For r, s, r′, s′ ≥ 0, let φ ∈ H^s(M) and φ′ ∈ H^{s′}(M), and let φ̄ ∈ E_2^{r,s} and φ̄′ ∈ E_2^{r′,s′} denote the corresponding classes. Lemma 3.2. Let φ̄ ∈ E_2^{r,s} and φ̄′ ∈ E_2^{r′,s′} be as above with r and r′ odd. Then φ̄φ̄′ = 0. In order to describe the multiplicative structure of E_2, we fix the following notation, and record the basic multiplicative properties as Proposition 3.3. Proof. The first claim follows from Equation (3). Using the identifications in Proposition 3.1, note that multiplication by γ_2 = 1 is simply the identity homomorphism and so the second item holds. The last statement is clear from the description of the bases in Proposition 3.1. Using the previous results, we can deduce the multiplicative structure of E_2. Theorem 3.4. The structure of the second page can be described as follows: (i) The graded commutative algebra structure of the zeroth column is given by the following tensor product: (ii) For r = 0, 1 and s ≥ 0, the basis elements of D_2^{r,s} are the following: (iii) For r ≥ 2 and s ≥ 0, we have that D_2^{r,s} ≅ D_2^{r−2,s}, via multiplication by γ_2. Proof. The first statement can be obtained as in [17, Corollary 4 (iii)] and the remaining assertions follow from Propositions 3.1 and 3.3. We encapsulate the previous result in the following table: Basis elements of E_2^{r,s} for 0 ≤ r ≤ 2 and 0 ≤ s ≤ 2p, with the multiplicative generators highlighted. Non-direct second differential computations In this section, we use restriction, inflation and the norm maps to determine some of the generators of E_2 that survive to the infinity page E_∞. We fix the following notation: the inclusion homomorphism M ↪ G induces the restriction map in cohomology and, by a slight abuse of notation, we also write res_{G→M} to denote the composite. It can be readily checked that λ_1 = res_{G→M} ∘ π(λ_1), and this yields that λ_2 ∈ Im(res_{G→M}) = E_∞^{0,2}. For ν_2, consider the inflation homomorphism inf : E_2(Heis(p)) → E_2. In particular, for ν̄_2 ∈ E_2^{0,2}(Heis(p)) defined analogously to ν_2 (see [17, Corollary 4], where Siegel uses y_2), we have that ν_2 = inf(ν̄_2). By [17, Theorem 5], ν̄_2 ∈ E_∞(Heis(p)), and since the inflation map commutes with differentials, we conclude that ν_2 ∈ E_∞. Finally, we will study the generator ν_{2p}. The subgroup L = C_{p^n}^p ⋉ M of G is normal, and so we have that L\G/M = G/LM = G/L. Applying the properties of the Evens norm map N in [9, Theorem 6.1.1], we obtain a description of N(φ) for any φ ∈ H^•(M), and we can write y_2 = res_{L→M}(ỹ_2). Therefore, we deduce that ν_{2p} ∈ E_∞. Generalisation of Siegel's result In this section, we explicitly compute the image of the second differential on the remaining generators of E_2. To that aim, we employ a generalization of Siegel's result [17, Corollary 2], which is derived from a theorem by Charlap and Vasquez [6]. To avoid technicalities in the current section, we collect most of the details and computations of the proof of Theorem 5.1 in Appendix A. We introduce the necessary notation to state our result. Let P_• → K be the minimal projective KM-resolution and let V be a KG-module with trivial M-action. Furthermore, for each g ∈ Q, write P_•^g for the KM-complex with underlying K-complex P_• and M-action given as follows: for h ∈ M and x ∈ P_•, we set h · x = h^{g^{−1}} x. Also, for every i ∈ N, we write Hom_{KM}(P_•, P_•)_i to denote ∏_{k≥0} Hom_{KM}(P_k, P_{k+i}). Theorem 5.1. Let α ∈ Hom_{KM}(P_•, P_•^{σ^{−1}})_0 be a KM-chain map commuting with the augmentation, and let τ ∈ Hom_{KM}(P_•, P_•)_1 satisfy the compatibility of Lemma 5.2; then the second differential d_2 can be computed explicitly in terms of α and τ. Proof. See Appendix A. 5.1. Chain maps α and τ. The problem of computing d_2 is reduced to finding appropriate maps α and τ satisfying the hypotheses in the previous theorem. We start by defining such maps. 
Let P′_• → K and P″_• → K be the minimal projective resolutions of K as a module over K⟨a⟩ and K⟨b⟩, respectively. For each k ≥ 0, let e′_k and e″_k be the basis elements of P′_k and P″_k, respectively. We can then write P′_k = K⟨a⟩e′_k and P″_k = K⟨b⟩e″_k, and so P_• = P′_• ⊗_K P″_•; then, for each k ≥ 0, the elements e_0^k, …, e_k^k constitute a basis of P_k as a KM-module. Using duality, with a slight abuse of notation we can identify the elements of H^•(M) as follows: Consider suitable elements ρ, κ ∈ KM and define the maps α ∈ Hom_{KM}(P_•, P_•^{σ^{−1}})_0 and τ ∈ Hom_{KM}(P_•, P_•)_1 as the homomorphisms that, for 0 ≤ j ≤ i < p^n, satisfy the following equalities: Lemma 5.2. The maps α and τ defined as above satisfy the equalities ∂α − α∂ = 0 and the corresponding identity for τ. Proof. See Appendix A.2. 5.2. Direct second differential computations. Using Theorem 5.1 and the maps in Lemma 5.2, we can now compute the second differential of the remaining generators. Proposition 5.3. The second differential of the elements µ_4, …, µ_{2p} is as follows: We can easily compute f ∘ τ to obtain that, for 0 ≤ j ≤ k < p^n, the stated values hold. The proof of the next result follows verbatim that of the previous one and we leave it to the reader. Proposition 5.4. The second differential of the element ν_3 is trivial. Third page of the spectral sequence Using the results in Sections 4 and 5.2, we can now determine the structure of the third page E_3. First, write D_3 = E_3/⟨ν_{2p}⟩ and define the relevant elements; one can easily verify that these elements have trivial second differential, and so they are in fact elements of E_3. Proposition 6.1. Multiplication by the elements ν_{2p}, γ_2, λ_2 induces vector space homomorphisms as follows: multiplication by ν_{2p} is injective for all r, s ≥ 0 and, as a consequence, it is an isomorphism for all s ≥ 2p. To infinity and beyond Our objective in this section is to show that if p ≥ 5 the spectral sequence E collapses at E_3, i.e. E_3 = E_∞. In order to achieve our goal, we will define two group automorphisms that will help us show that all the differentials starting with d_3 are trivial. Let u ∈ U(Z/p^nZ) be a generator, i.e. u^{p^{n−1}(p−1)} = 1 but u^i ≠ 1 for any 1 ≤ i < p^{n−1}(p − 1). For 0 ≤ i, j, k ≤ p^n − 1, we define group automorphisms Φ : G → G and Ψ : G → G on the generators of G. Because Φ(M), Ψ(M) ≤ M, for every m ≥ 2, there are induced automorphisms Φ* : E_m → E_m and Ψ* : E_m → E_m. These automorphisms act on the generators of D_3 by multiplying each of them by a power of u, as described in the following table: Proof. Assume by induction that, for m ≥ 3, ξ_{2p+1} ∈ E_m; we will show that ξ_{2p+1} ∈ E_{m+1}. Consider first the case m = 2j + 1 with j ≥ 1. We have that d_m(ξ_{2p+1}) is a linear combination, with coefficients t_1, t_2 ∈ K, of basis elements (equation (6)). Applying Ψ and equating coefficients with those in (6), we obtain conditions from which we deduce that t_1 = 0 for all j ≥ 1, and t_2 = 0 for all j > 1. If j = 1, applying Φ to (6) we deduce that t_2(1 − u^{p−3}) = 0, and t_2 = 0 for p ≥ 5. Therefore, ξ_{2p+1} ∈ E_{2j+1} survives to E_{2j+2}. Theorem 7.4. Let n ≥ 2 and let p ≥ 5. Then, the LHSss E associated to G collapses in the third page, i.e. E_3 = E_∞. Poincaré series In this section, we will compute the Poincaré series of H^•(G), i.e. the power series P(t) = Σ_{k≥0} (dim H^k(G)) t^k. The Poincaré series of D_∞ is given by the power series P_D(t) = Σ_{k≥0} (dim D_∞^k) t^k, and so we first need to obtain the values dim D_∞^k for each k ≥ 0. Note that, for every r, s ≥ 0, the number dim D_∞^{r,s} is computed in Theorem 6.2. 
Note that, for every $r, s \geq 0$, the number $\dim D_\infty^{r,s}$ is computed in Theorem 6.2. Indeed, for $i \geq 0$, we have explicit values, and this information can be showcased in the following table.

Proof. The values $\dim D_\infty^k$ for $0 \leq k \leq 3$ can be easily computed from the table in Figure 3. Let $4 \leq k \leq 2p$ and write $k = 2i + \varepsilon$ with $\varepsilon = 0, 1$. Then, we can compute the corresponding sums, and therefore we obtain the stated value. Let now $k \geq 2p + 1$ and write $k = 2i + \varepsilon$ with $\varepsilon = 0, 1$. Then, we can compute the corresponding values, and therefore we obtain the stated result.

As a result, we can compute the Poincaré series of $H^\bullet(G)$.

Theorem 8.2. The Poincaré series of $H^\bullet(G)$ is as displayed below.

Proof. Using Lemma 8.1, we can compute the Poincaré series for $D_\infty$. Therefore, because $E_\infty = K[\nu_{2p}] \otimes D_\infty$, we obtain the stated formula.

9. Conclusion and further questions

We follow the notation introduced in Section 2. As a consequence of Theorem 7.4, we obtain that, for a prime number $p \geq 5$, the LHSss $E$ of $G$ are isomorphic from the second page on as bigraded $K$-algebras. We have not, however, determined the ring structure of $H^\bullet(\operatorname{Heis}(p^n))$, and we encourage the ambitious reader to do so.

Assume now that $K$ is a finite field of characteristic $p$. Then, by [4, Theorem 2.1], there are finitely many liftings of $E_\infty(\operatorname{Heis}(p^n))$ to the cohomology ring $H^\bullet(\operatorname{Heis}(p^n))$. This in particular yields the following result.

Corollary 9.1. Let $p \geq 5$ be a prime number. Then, there are only finitely many isomorphism types of $K$-algebras in the infinite collection $\{H^\bullet(\operatorname{Heis}(p^n))\}_{n \geq 1}$.

The above result is in slight analogy with the previously obtained results in the area [4], [7], [8], [10], [18]. Let $\mathcal{G}(-)$ denote an affine group scheme over a ring. For example, the Heisenberg group and the group $G$ are obtained by applying such a functor $\mathcal{G}(-)$ to $\mathbb{Z}$ and to $\mathbb{Z}/p^n\mathbb{Z}$, respectively. The presentation of the cohomology rings of such groups is intrinsically hard to obtain. For instance, in [16], Quillen described the cohomology rings of the general linear groups $\mathrm{GL}_n(K)$ over a field $K$ of characteristic $p$ with coefficients in a finite field $F$ of characteristic coprime to $p$. However, the case where $K$ and $F$ have the same characteristic is widely open. Based on Corollary 9.1, we ask whether the following conjecture holds.

Conjecture 1. Let $p$ be a prime number and let $\mathcal{G}(-)$ be an affine group scheme over the $p$-adic integers $\mathbb{Z}_p$. Then, there exists a natural number $f = f(p, \mathcal{G})$, depending only on $p$ and on $\mathcal{G}$, such that for all $n \geq f$, the cohomology rings $H^\bullet(\mathcal{G}(\mathbb{Z}_p/p^n\mathbb{Z}_p); K)$ are isomorphic, where $K$ is a field of characteristic $p$ with trivial $\mathcal{G}(\mathbb{Z}_p/p^n\mathbb{Z}_p)$-action.

The first reason to support the previous conjecture is that the Quillen categories of the groups $\mathcal{G}(\mathbb{Z}_p/p^n\mathbb{Z}_p)$ are isomorphic. That is, the cohomology rings $H^\bullet(\mathcal{G}(\mathbb{Z}_p/p^n\mathbb{Z}_p); K)$ are $F$-isomorphic (see [15]). Secondly, observe that for each $n \geq 2$, there is an extension in which $\mathcal{G}^1(\mathbb{Z}_p/p^n\mathbb{Z}_p)$ denotes the first congruence subgroup of $\mathcal{G}(\mathbb{Z}_p/p^n\mathbb{Z}_p)$. It is known that $\mathcal{G}^1(\mathbb{Z}_p/p^n\mathbb{Z}_p)$ is a powerful $p$-central group with the $\Omega$-extension property and thus, for every $n \geq 2$, the cohomology rings $H^\bullet(\mathcal{G}^1(\mathbb{Z}_p/p^n\mathbb{Z}_p); K)$ are isomorphic ([19]). Moreover, the actions of $\mathcal{G}(\mathbb{Z}_p/p\mathbb{Z}_p)$ on $H^\bullet(\mathcal{G}^1(\mathbb{Z}_p/p^n\mathbb{Z}_p); K)$ are isomorphic, in the sense of [7, Definition 5.5]. In turn, the spectral sequences $E_2(\mathcal{G}(\mathbb{Z}_p/p^n\mathbb{Z}_p))$ are isomorphic as bigraded $K$-algebras. Therefore, based on [7, Conjecture 6.1], we would expect that the above conjecture holds with $f$ equal to 2.
Appendix A. Generalisation of Siegel's result

In this section, we will state a theorem by Charlap and Vasquez [6] regarding the computation of the second differential of the LHSss associated to a split extension of finite groups, and then provide a generalisation of [17, Corollary 2] for split extensions of cyclic $p$-groups. We start by introducing the necessary definitions and notation to state the aforementioned result by Charlap and Vasquez.

Let $G = Q \ltimes M$ be a split extension of $Q$ by the finite group $M$ and let $V$ be a $KG$-module with trivial $M$-action. Let $X_\bullet \to K$ be a projective $KG$-resolution, let $Y_\bullet \to K$ be the $KQ$-bar resolution and let $P_\bullet \to K$ be the minimal $KM$-resolution. If $E = E(G)$ is the LHSss associated to the split extension of $Q$ by $M$, the following identifications hold ([9, Section 7.2]):

For each $g \in Q$, we write $P_\bullet^{g}$ for the $KM$-complex with underlying $K$-complex $P_\bullet$ and $M$-action given by $h \cdot x = h^{g^{-1}} x$. Also, for every $i \in \mathbb{N}$, we write $\operatorname{Hom}_{KM}(P_\bullet, P_\bullet^{g})_i$ to denote $\bigoplus_{k \geq 0} \operatorname{Hom}_{KM}(P_k, P_{k+i}^{g})$. Then, for each $g, g' \in Q$, the Comparison Theorem guarantees (see [1, Theorem 2.4.2] and the subsequent remark) the existence of maps $A(g) \in \operatorname{Hom}_{KM}(P_\bullet, P_\bullet^{g})_0$ and $U(g, g') \in \operatorname{Hom}_{KM}(P_\bullet, P_\bullet^{gg'})_1$ satisfying the following conditions:

Theorem A.1 ([17, Theorem 1]). Let $A$ and $U$ be as above. Let $r \geq 0$, $s \geq 1$ and suppose that $\zeta \in E_2^{r,s}$ is represented by $f \in \operatorname{Hom}_{KM}(P_s, V)$. Then $d_2(\zeta)$ is represented by $(-1)^r D_2(f)$.

Although the previous result holds for a split extension by a general finite group $Q$, it requires the use of the $KQ$-bar resolution of $K$. In [17], the previous result has been extended to the minimal resolution of a cyclic group $Q$ of size $p$. We generalise Siegel's result to the case where $Z_\bullet \to K$ is the minimal $KQ$-resolution with $Z_k = KQe_k$ for $k \geq 0$, and where $Q = C_{p^n}$ is a cyclic $p$-group of size $p^n$, with $n \geq 1$.

A.1. Proof of Theorem 5.1. The aim of this section is to finish the proof of Theorem A.3. We follow the notation introduced at the beginning of Appendix A and additionally assume that $Z_\bullet \to K$ is the minimal $KQ$-resolution with $Z_k = KQe_k$, for $k \geq 0$, and where $Q = C_{p^n}$ is a cyclic $p$-group of size $p^n$, with $n \geq 1$. Under those hypotheses, the first page of the LHSss described in (7) can be identified accordingly.

In order to use Theorem A.1 for the above description of the spectral sequence, we first need explicit chain maps between the bar resolution $Y_\bullet$ and the minimal resolution $Z_\bullet$. For that purpose, we define the following maps:
(i) For $k \geq 1$ and $0 \leq i_1, \ldots, i_{2k+1} \leq p^n - 1$, let $\theta\colon Y_\bullet \to Z_\bullet$ be a $K$-map that satisfies the identifications stated below.
(ii) For $k \geq 1$, let $\eta\colon Z_\bullet \to Y_\bullet$ be a $K$-map that satisfies the identifications stated below.

Lemma A.2. The maps $\theta$ and $\eta$ defined above are $K$-chain maps.

Proof. We will study the first case carefully and omit the rest of the cases, as the steps to follow are identical. On the one hand, the first computation is straightforward. On the other hand, for $m > l + 1$, the corresponding terms clearly vanish. Furthermore, the equalities in (10) and (11) yield an expression for $2 \leq m \leq l + 1$. If $2 \leq m \leq l$, using (8), the expression (12) is reduced accordingly; likewise, if $m = l + 1$ we obtain the analogous identity.

Let us now show that $\eta$ is a chain map. Once again, we will focus on the even case and only show that $(\partial\eta - \eta\partial)(e_{2k}) = 0$ for $k \geq 1$. On the one hand, because the initial sum covers all possible exponents
$0 \leq i_1, \ldots, i_k < p^n$, it is easy to see that the mixed terms cancel, and so we have that the expansion contains the terms
\[
  [\sigma^{i_1} | \cdots | \sigma^{i_{j-1}} | \sigma | \sigma^{i_j + 1} | \sigma^{i_{j+1}} | \sigma | \cdots | \sigma^{i_k} | \sigma].
\]
On the other hand,
\[
  \eta\partial(e_{2k}) = \eta\Bigl(\sum_{i=0}^{p^n - 1} \sigma^i e_{2k-1}\Bigr)
  = \sum_{0 \leq i,\, i_1, \ldots, i_{k-1} < p^n} \sigma^i\,[\sigma | \sigma^{i_1} | \cdots | \sigma | \sigma^{i_{k-1}} | \sigma].
\]

We will now state and prove Theorem 5.1.

Proof. The proof of this result can be done by following that of [17, Corollary 2], using the chain maps from Lemma A.2 and writing $p^n$ instead of $p$ where appropriate.

A.2. Proof of Lemma 5.2. In this section, we give the explicit computations required in the proof of Lemma 5.2. To that aim, we display the equalities that will be used during our computations, while the proof of these properties is left to the reader.

Proposition A.5. The map $\alpha$ is a chain map, i.e. $\partial\alpha - \alpha\partial = 0$.

We are left to prove that the identity $\partial\tau + \tau\partial = 1 - \alpha^{p^n}$ holds. In order to do that, we first show the identities that will be used throughout the proof.
(ii) For any $i, j \geq 0$ and $m \geq 1$, we have that
\[
  \sum_{j \leq k \leq l \leq i} m^{k-j} \binom{l}{k} \binom{k}{j}
  = \sum_{j \leq l \leq i} (m+1)^{l-j} \binom{l}{j}.
\]
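Identity (ii) is a Vandermonde-type convolution; for completeness, here is a short verification using the subset-of-a-subset identity and the binomial theorem:

```latex
% Verification of (ii): use \binom{l}{k}\binom{k}{j} = \binom{l}{j}\binom{l-j}{k-j}
% and sum the inner index k with the binomial theorem:
\[
  \sum_{j \le k \le l \le i} m^{k-j} \binom{l}{k}\binom{k}{j}
  = \sum_{j \le l \le i} \binom{l}{j} \sum_{k=j}^{l} \binom{l-j}{k-j}\, m^{k-j}
  = \sum_{j \le l \le i} (m+1)^{l-j} \binom{l}{j} .
\]
```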
Long-lived entanglement generation of nuclear spins using coherent light

Nuclear spins of noble-gas atoms are exceptionally isolated from the environment and can maintain their quantum properties for hours at room temperature. Here we develop a mechanism for entangling two such distant macroscopic ensembles by using coherent light input. The interaction between the light and the noble-gas spins in each ensemble is mediated by spin-exchange collisions with alkali-metal spins, which are only virtually excited. The relevant conditions for experimental realizations with $^{3}$He or $^{129}$Xe are outlined.

Quantum entanglement describes correlations between distinct quantum systems and is often used to set borders between the quantum and classical worlds [1,2]. It is a valuable resource for quantum information and computing [3-7] and for metrology beyond the standard quantum limits [8,9]. Generating and maintaining entanglement in matter systems requires exquisite control and isolation, as achieved in ensembles of alkali-metal spins [10-12], trapped ions and atoms [13,14], quantum defects in crystals [15], and high-quality mechanical oscillators [16].

Rare isotopes of noble-gas atoms, such as $^{3}$He and $^{129}$Xe, have nuclei with nonzero spins. These spins are exceptionally isolated from the environment and can remain coherent for extremely long times, exceeding tens of hours above room temperature [17,18]. Accordingly, the collective nuclear spin of noble-gas ensembles is the longest-living macroscopic quantum object currently known. Nevertheless, while these spin ensembles could potentially maintain entanglement for record times [19,20], they do not interact with optical photons. This limits their applicability for optical quantum communication [10,21-24], or to advanced sensing applications such as hybrid optomechanical-spin systems, e.g., for gravitational-wave detection [25,26]. In 2007, Pinard and coworkers proposed to entangle $^{3}$He ensembles using incoherent collisions with metastable $^{3}$He atoms and via adiabatic state transfer with nonclassical light in an optical cavity [27]. This pioneering and rather challenging proposal was never realized.

Here we develop a readily feasible scheme for entangling two macroscopic ensembles of noble-gas spins contained in distant cells, as shown in Fig. 1. Our scheme employs the archetypal mechanism for entanglement of spin ensembles, based on continuous measurement of spin fluctuations by off-resonant Faraday rotation of probe light [24]. This mechanism was successfully employed to entangle distant alkali spin ensembles [10]. While there is no direct interaction between light and noble-gas spins, we propose to use auxiliary ensembles of alkali-metal atoms as mediators. The alkali mediators are optically accessible and couple to the noble-gas spins via coherent spin-exchange collisions [19]. We show that continuous optical measurement of the alkali spins generates entanglement between the noble-gas ensembles.

Figure 1. Collective spin-states of polarized alkali and noble-gas atoms; the shaded disks denote quantum spin fluctuations. (c) Polarization state of the linearly polarized probe, and its rotation via the indirect Faraday interaction with the noble-gas spins, as described by Eq. (6). The in-phase ($\hat x_{L,y}$) and out-of-phase ($\hat x_{L,z}$) components of the probe commute and can be simultaneously measured. Shaded purple disks denote the photon shot-noise.
At the same time, dissipation and fluctuations of the alkali spins can be circumvented by introducing a frequency mismatch, such that quantum correlations are mediated without actual excitations of the (alkali) mediators. We outline the physical conditions for experiments with $^{3}$He-K and $^{129}$Xe-Rb mixtures towards a demonstration of long-lived entanglement of macroscopic systems.

Before diving into the detailed model, we consider a simplified picture of the interaction mechanisms within each cell, presenting the emergence of the Faraday interaction between light and optically inaccessible spins.

Figure 2. Homodyne measurement of the probe (Fig. 1c) leads to (conditional) squeezing and displacement of the spin-state; $\hat k_{1y} - \hat k_{2y}$ and $\hat k_{1z} - \hat k_{2z}$ commute, and their combined uncertainty can be smaller than 1. (c) A short transverse magnetic-field pulse rotates the spin-state, yielding an unconditioned entanglement satisfying inequality (1). (d) During the memory time, application of a large magnetic field decouples the noble-gas and alkali spins. The memory lifetime is governed by the long coherence time of the noble-gas spins.

We describe quantum excitations of the alkali spins by the bosonic operators $\hat f, \hat f^\dagger$, excitations of noble-gas spins by $\hat k, \hat k^\dagger$, and the polarization state of probe light by the canonical bosonic operators $\hat x_L$ and $\hat p_L$. The probe couples to the alkali ground-level spins via the optically excited levels. These levels are subject to rapid relaxation at a rate $\Gamma_e$ due to spontaneous emission and buffer-gas broadening, leading to spin relaxation and to probe attenuation. Detuning the probe by $|\delta_e| \gg \Gamma_e$ from the optical transition circumvents this relaxation, rendering the atom-photon interaction dispersive. The excited-level spins then adiabatically follow the ground-level spins, yielding the Faraday interaction $H_{L-a} = iQ\hat p_L(\hat f^\dagger - \hat f)/\sqrt{2}$ between the probe and the alkali spins. $H_{L-a}$ describes the polarization rotation of the far-detuned probe and the resulting alkali-spin rotation at the rate $Q \propto 1/\delta_e$ [11]. The coherent coupling of the alkali spins to the noble-gas spins is described by the exchange Hamiltonian $H_{a-b} = J(\hat f^\dagger \hat k + \hat k^\dagger \hat f)$, where $J$ is the collective exchange rate due to atomic collisions [19]. The resonance conditions for this coupling are governed by the non-interacting Hamiltonian $H_0 = \omega_a \hat f^\dagger \hat f + \omega_b \hat k^\dagger \hat k$, where the difference in precession frequencies $\Delta = \omega_a - \omega_b$ is tunable with an external magnetic field.

The alkali spins are prone to fast dephasing at a rate $\gamma_a$ due to photon absorption, collisions with different atoms and with the cell walls. Here again, the detuning $\Delta$ determines to what extent this fast alkali relaxation affects the noble-gas spins. On resonance ($|\Delta| \ll \gamma_a, J$), the noble-gas spins inherit the alkali-spin relaxation [19], whereas off resonance ($|\Delta| \gg J, \gamma_a$), the interaction is dispersive, suppressing the relaxation induced by the alkali by a factor $\gamma_a/\Delta \ll 1$. The alkali spins then adiabatically follow the noble-gas spins, yielding an overall effective interaction of the Faraday form with coupling rate $QJ/\Delta$, in a frame rotating at $\omega_b$ when $|\Delta| \gg J, Q$, up to shifts proportional to $Q^2/\Delta$ and $J^2/\Delta$. We thus arrive at an indirect Faraday interaction of light with noble-gas spins via virtual excitations of alkali spins.

The concept described above can be applied for entangling two distant noble-gas spin ensembles using probe light and alkali spins [Fig. 1(a)]. Each cell contains $N_b$ noble-gas atoms with spin 1/2, initially polarized along the quantization axis $e_x$. Ensemble $i = 1$ ($i = 2$) is polarized upwards $+e_x$ (downwards $-e_x$).
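A minimal derivation sketch of this indirect coupling, written under the stated limit $|\Delta| \gg J, Q, \gamma_a$; the intermediate expression for $\hat f$ and the overall prefactor reflect our reading of the conventions above rather than formulas quoted from the text:

```latex
% With H = H_0 + H_{a-b} + H_{L-a}, the alkali mode in a frame rotating
% at \omega_b obeys
%   i\,\partial_t \hat f = \Delta\,\hat f + J\hat k + iQ\hat p_L/\sqrt{2}.
% Setting \partial_t \hat f \simeq 0 (adiabatic following) gives
\[
  \hat f \;\simeq\; -\frac{1}{\Delta}\Bigl(J\hat k
          + \tfrac{iQ}{\sqrt{2}}\,\hat p_L\Bigr),
\]
% and substituting back into H_{a-b} + H_{L-a} yields, up to the shifts
% \propto J^2/\Delta and Q^2/\Delta quoted in the text,
\[
  H_{\mathrm{eff}} \;\propto\; \frac{QJ}{\Delta}\,
      \frac{i\,\hat p_L\,(\hat k^\dagger - \hat k)}{\sqrt{2}} ,
\]
% i.e. the Faraday form H_{L-a} with \hat f \to \hat k and Q \to QJ/\Delta,
% consistent with the coupling \kappa = QJT/\Delta defined below.
```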
Given the spin operators $\hat k_i^{(n)}$ of the $n$-th noble-gas atom in the $i$-th cell, we define the normalized macroscopic spin operators $\hat k_i$, with the transverse components scaled by $M_b^{1/2}$, where $M_b$ is the noble-gas magnetization. For $M_b \gg 1$ and fully polarized ensembles ($P_b = 1$), the initial states are known as coherent spin-states (CSS). A partially polarized ensemble of spin-1/2 atoms may be seen as a mixture of $P_b N_b$ polarized atoms and $(1 - P_b)N_b$ unpolarized atoms, only reducing the coherent interaction strength [11]. The two ensembles have a definite collective spin along $e_x$ with a classical measurement outcome $\langle \hat k_{ix} \rangle = \pm M_b^{1/2}$ and negligible variance, where henceforth the symbol '$\pm$' stands for '$+$' in cell $i = 1$ and for '$-$' in cell $i = 2$. On the other hand, the transverse components of the normalized collective spin, $\hat k_{iy}$ and $\hat k_{iz}$, satisfy the commutation relation $[\hat k_{iy}, \hat k_{jz}] = \pm i\delta_{ij}$ and consequently are governed by quantum fluctuations. These operators are normalized and unitless, giving the collective spin variance in units of vacuum noise. These fluctuations, known as atom-projection noise, are zero on average and have a nonzero variance, satisfying the Robertson inequality $4\,\mathrm{var}(\hat k_{iy})\,\mathrm{var}(\hat k_{iz}) \geq |\langle[\hat k_{iy}, \hat k_{iz}]\rangle|^2 = 1$, where $\mathrm{var}(\hat k_{iy}) = \mathrm{var}(\hat k_{iz})$ for CSS. Visually, these fluctuations can be represented as a small uncertainty disk around the classical spin vector, as shown in Fig. 1.

Two spin ensembles are entangled if their quantum fluctuations are correlated, as in a two-mode squeezed state. For spins of equal magnitude $|\langle \hat k_{1x} \rangle| = |\langle \hat k_{2x} \rangle|$, a sufficient criterion for EPR-type entanglement is given by [10,28]

$\mathrm{var}(\hat k_{1y} - \hat k_{2y}) + \mathrm{var}(\hat k_{1z} - \hat k_{2z}) < 2.$  (1)

Therefore, simultaneous measurement of the nonlocal observables $\hat k_{1y} - \hat k_{2y}$ and $\hat k_{1z} - \hat k_{2z}$ generates entanglement if the total noise variance of the two cells is less than two vacuum-noise units. Such a measurement is allowed for oppositely oriented spins $\langle \hat k_{1x} \rangle = -\langle \hat k_{2x} \rangle$, for which $\hat k_{1y} - \hat k_{2y}$ and $\hat k_{1z} - \hat k_{2z}$ commute.

We measure the noble-gas spins using alkali spins and a probe field. Each cell contains $N_a$ alkali atoms, polarized to a polarization degree $P_a \leq 1$ (using auxiliary circularly polarized pump beams) along the same directions $\pm e_x$ as the noble-gas spins. We define for each cell the normalized macroscopic alkali-spin operators $\hat f_i$, where $M_a = P_a N_a (I + 1/2)$ is the alkali magnetization, and $I$ is the alkali nuclear spin. Similarly to the noble-gas spins, $\hat f_{ix}$ are considered classical, with $\langle \hat f_{ix} \rangle = \pm M_a^{1/2}$, whereas $\hat f_{iy}$ and $\hat f_{iz}$ are governed by quantum fluctuations.

The probe is a square pulse of duration $T$, propagating along $e_z$ with initial linear polarization $e_x$. We represent its state by the normalized Stokes operators $\hat S(z)$, where $\langle \hat S_x \rangle^2 = M_L$ is the total number of photons in the pulse, and $\hat S_y$, $\hat S_z$ describe the ellipticity of the polarization state, subject to quantum polarization fluctuations.

The Hamiltonian describing the interactions in the system is given by [10,19]

The first term describes a mutual precession of the alkali and noble-gas spins around each other at a rate $J$. It manifests the coherent collective coupling between these spins via multiple weak spin-exchange collisions [19]. The second term in Eq. (2) describes the dispersive interaction of the alkali spins with the far-detuned probe traversing the two cells [11]. The spin components along the optical axis ($\hat f_{1z} + \hat f_{2z}$) govern the Faraday rotation of the light polarization, while circularly polarized light ($\hat S_z$) acts back to rotate the spins via light-shifts.
The coupling rate is given by $Q = (a/T)\sqrt{M_a M_L}$, where $a \propto 1/\delta_e$ is the unitless optical-coupling coefficient [11,30] and $L$ is the length of each cell. See the Supplementary Material for detailed expressions of $J$, $Q$, and $a$ [31].

To generate entanglement, we set common precession frequencies ($\omega_a$, $\omega_b$) in the two cells by tuning the magnetic fields and the light-shifts induced by the pumps in each cell [31]. We describe the spin dynamics in a common rotating frame, defined via the rotation $R_x(\theta)$ that rotates a vector by an angle $\theta$ around $e_x$. In this frame, the alkali spins precess at frequency $\Delta = \omega_a - \omega_b$.

We now take the off-resonance regime $\Delta \gg \gamma_a, J, Q$ and first present the results for negligible relaxations. Given the interaction Hamiltonian (2), we find that the transverse fluctuations $\hat f_{iy}$, $\hat f_{iz}$ of the alkali spins adiabatically follow the noble-gas spin fluctuations and the probe polarization, where $e(t) = \sin(\omega_b t)e_y + \cos(\omega_b t)e_z$ is the optical axis in the rotating frame. Thus, the large frequency mismatch $\Delta$ renders the interaction dispersive, moderating the response of the alkali spins to both spin-exchange and back-action of light.

We use Eqs. (2)-(3) to derive the Heisenberg-Langevin equations for the transverse operators $\hat S$ and $\hat k_i$ [31]. First, we find that the difference between the noble-gas spins remains constant. Importantly, the preparation of the two cells with oppositely oriented spins eliminates the back-action effect [second term in Eq. (3)] of the probe on the operator $\hat k_1 - \hat k_2$. Second, we find that $\hat k_1 - \hat k_2$ determines the evolution of the probe polarization along the cell. Equation (5) manifests the indirect Faraday interaction between the probe and the noble-gas spins, with the outgoing polarization $\hat S_y(L)$ providing a monitor of $\hat k_1 - \hat k_2$. In particular, a simultaneous measurement of the in-phase and out-of-phase components of $\hat S_y(L)$ via homodyne detection yields the nonlocal spin components $\hat k_{1y} - \hat k_{2y}$ and $\hat k_{1z} - \hat k_{2z}$, respectively.

The procedure for entanglement generation is shown in Fig. 2. Initially, homodyne measurement of the probe, which underwent the evolution in Eq. (5), drives the noble-gas ensembles to a nonclassical two-mode squeezed state, displaced according to the measurement outcome [11]. Subsequently, feeding back the measurement outcome to rotate the spins (using a short magnetic pulse) sets the mean value of their squeezed components to zero, yielding unconditioned entanglement. To quantify this process, we define canonical operators $\hat x_{L,\alpha}$ and $\hat p_{L,\alpha}$ for the probe, with $\alpha = y, z$. These constitute two independent harmonic oscillators. The total evolution is then given by a set of input-output relations, obtained by integration of Eqs. (4)-(5) [31],

$\hat x^{\mathrm{out}}_{L,\alpha} = \hat x^{\mathrm{in}}_{L,\alpha} + \kappa\, \hat p^{\mathrm{in}}_{b,\alpha}, \qquad \hat p^{\mathrm{out}}_{b,\alpha} = \hat p^{\mathrm{in}}_{b,\alpha}.$  (6)

The input components of the probe $\hat x^{\mathrm{in}}_L$, $\hat p^{\mathrm{in}}_L$ comprise the photon shot-noise at $z = 0$, and the output components $\hat x^{\mathrm{out}}_L$, $\hat p^{\mathrm{out}}_L$ describe the probe state at $z = 2L$ after the cells.
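To make the ideal scaling concrete, here is a minimal numerical sketch in plain Python (ours, not the authors' code): it propagates vacuum-noise inputs through Eqs. (6) and through the feedback step described in the next paragraph, recovering var = $(2 + 2\kappa^2)^{-1}$ and $\xi = \ln(1 + \kappa^2)/2$.

```python
import math

def squeezed_variance(kappa: float) -> float:
    """Variance of p_b after measurement and optimal feedback.

    Ideal, lossless Eqs. (6): x_L_out = x_L_in + kappa * p_b_in and
    p_b_out = p_b_in. Feedback p_b -> p_b + G * x_L_out with vacuum
    inputs (variance 1/2 each) gives
        var = (1 + G * kappa)**2 / 2 + G**2 / 2,
    which is minimized at G = -kappa / (1 + kappa**2).
    """
    G = -kappa / (1.0 + kappa**2)
    return (1.0 + G * kappa) ** 2 / 2.0 + G**2 / 2.0

kappa = 2.0  # coupling strength quoted for the 3He-K working point below
var_p = squeezed_variance(kappa)
xi = 0.5 * math.log(1.0 + kappa**2)  # squeezing parameter
assert abs(var_p - 1.0 / (2.0 + 2.0 * kappa**2)) < 1e-12
assert abs(var_p - 0.5 * math.exp(-2.0 * xi)) < 1e-12
print(f"kappa = {kappa}: var(p_b) = {var_p:.3f}, "
      f"two-mode squeezing = {-10.0 * math.log10(2.0 * var_p):.1f} dB")
```

For $\kappa = 2$ this lossless bound is about 7 dB of two-mode squeezing; the roughly 4 dB quoted below for the $^{3}$He-K working point reflects the loss and noise parameters entering Eq. (7).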
Similarly, the noble-gas spin operatorŝ x in b ,p in b comprise the atomic projection-noise at t = 0, andx out b ,p out b describe the collective spin-state at t = T . Therefore, Eqs. (6) describe the Faraday rotation (x out L ) of the linearly polarized input light (x in L ) by the total noble-gas spin (p in b ), as shown in Fig. 1(c), with no backaction (p out b =p in b ). The unitless coupling constant κ ≡ QJT /∆ quantifies the net polarization rotation of the probe. It characterizes the measurement strength of the noble-gas spins with respect to the photon shot-noise, depending on the resonant optical-depth of the alkali ensembles [31]. For coherent light and coherent spin-states, the input uncertainties are at the classical minimum, satisfying var(x in L,α ) = var(p in L,α ) = var(x in b,α ) = var(p in b,α ) = 1/2 with α = y, z. Following the measurement, a magnetic pulse feedback is used for rotating the noble-gas spins fromp out b =p in b top in b + Gx out L . The feedback proportionality constant G can be optimally chosen to minimize var(p out b,α ) = (2 + 2κ 2 ) −1 for both α = y, z. Identifying var(p out b,α ) = exp(−2ξ)/2 as the degree of two-mode squeezing, we obtain the squeezing parameter ξ = ln(1 + κ 2 )/2. Evidently, any system with κ > 0 yields nonzero squeezing and satisfies the inequalities var p out b,α < 1/2, thus satisfying the entanglement condition in Eq. (1). We therefore conclude that our scheme correlates the spin-states of two distant noble-gas ensembles, generating unconditional entanglement. We now return to consider relaxation processes expected in realistic conditions. The mechanisms dominating the relaxation rate γ sd of the alkali spin are absorption of probe photons, collisions with noble-gas atoms, spin destruction during alkali collisions, and collisions with the cell walls [19,29,33,34]. Continuous optical-pumping at a rate R op can be used to maintain a constant alkali magnetization M a = P a N a (I + 1/2), with P a = R op /γ a and γ a = γ sd + R op . The noble gas is hyperpolarized via spin-exchange optical-pumping (SEOP) at a high magnetic field prior to the experiment [29,35]. For polarized alkali spins, the decoherence rate of the noble-gas spins is Γ b = γ b + (J/∆) 2 γ a ; it inherits a fraction (J/∆) 2 of the alkali decoherence rate γ a , which often dominates Γ b [36]. At low alkali densities, γ b is typically limited by technical magnetic inhomogeneities to γ b (minute) −1 for 129 Xe and γ b (hour) −1 for 3 He [17,18,37]. These relaxation processes are accompanied by noise, which increases the measurement variance and limits ξ. We generalize Eqs. (6) and include the relaxation and noise effects, deriving the best attainable two-mode squeezing parameter [31] Here = 4γ L L denotes the total fraction of scattered probe photons, η = 2Γ b T denotes the fraction of decohered noble-gas spins, and = 4qγ a /(J 2 T ) characterizes the ratio between the contributions of alkali spins and noble-gas spins to the projection noise. The unitless parameter q(I, P a ) ≥ 1 quantifies the increase of alkali projection-noise (variance) due to imperfect spinpolarization, where q(0, P a ) = q(I, 1) = 1 [30]. Equation (7) guarantees the generation of entanglement between the two ensembles for η 1. Notably, it has the same form as for squeezing two alkali ensembles [11] except for the additional parameter . In Fig. 3, we use Eq. (7) to plot the degree of squeezing exp(−2ξ) of the two noblegas spin-ensembles as a function of κ √ 1 − and for two values of η. 
Our entanglement generation scheme can be realized with various alkali and noble-gas mixtures within a large range of experimental parameters. Here we present a representative configuration for entangling two $^{3}$He ensembles in two cylindrical cells of length $L = 5$ cm and cross-section $A = 2$ mm$^2$. We consider a gaseous mixture of 880 Torr $^{3}$He, 70 Torr N$_2$, and a droplet of K at 250 °C. Here $R_{\mathrm{op}} = 1.6\gamma_a$ yields $P_a = 0.62$ [with $q(3/2, P_a) = 1.22$] and $P_b = 0.56$, assuming $\gamma_b^{-1} = 50$ hours. The 400-mW probe is detuned 3 THz from the optical line, and $B_1 \approx 10$ mG. Homodyne detection for $T = 200$ ms yields $\kappa = 2$, $\epsilon = 0.3$, $\eta = 0.125$, and $\varrho = 0.162$, generating 4 dB of two-mode squeezing ($\xi = 0.45$), which could live for tens of hours. The performance for this configuration is marked in Fig. 3 (orange cross). Other exemplary experimental configurations, marked in Fig. 3 and detailed in [31], yield 6 dB of squeezing for a $^{3}$He-K mixture (red cross) and 3 dB of squeezing for a $^{129}$Xe-$^{87}$Rb mixture (green cross).

The long coherence time within each noble-gas spin ensemble ideally also applies to the entanglement lifetime, even though each ensemble comprises a macroscopic number of spins. In the Holstein-Primakoff approximation, the number of spin excitations is independent of the total number of spins. Indeed, we show in [31] that the squeezed quadrature, $\mathrm{var}(\hat p^{\mathrm{out}}_b) < 1/2$, decays at a constant rate $2\Gamma_b$. The long-lived entanglement can be verified by applying an off-resonant probe pulse, measuring the two spin ensembles simultaneously by utilizing the same experimental configuration used for their generation [10]. Alternatively, the spin of each cell could be measured independently, and their cross-correlations can be found. In systems featuring strong coupling between the alkali and noble-gas spins ($J \gtrsim \gamma_a$), transfer times $J^{-1}$ of a few milliseconds are possible [19], realizing fast operations yet maintaining long coherence times. The alkali squeezed state could then be projected using a short probe pulse.

In summary, we presented a scheme for entangling the collective nuclear spins of two macroscopic noble-gas ensembles, relying on alkali spins for obtaining an indirect Faraday interaction between the noble gas and light. The role of relaxations has been considered, revealing that a sizable degree of entanglement can be generated at standard experimental conditions and maintained for extremely long times. With technologically available miniature cells [38-40] and exceptionally long coherence times, entanglement of hot spin ensembles holds promise for realizing new quantum-optics applications and enhanced sensing at ambient conditions. The scheme could potentially be extended to generate entanglement in other physical systems having hybrid electronic and optically inaccessible nuclear spins, including quantum dots, diamond color-centers, and rare-earth impurities interacting with nearby nuclear spins in the crystal.

Appendix A

In both cells ($i = 1, 2$), each alkali atom denoted by $m$ has a spin $\hat f$, where $[I] = 2I + 1$. For alkali-noble-gas mixtures polarized along $\pm e_x$, the collisional interaction leads to coherent exchange between the quantum spin fluctuations (the transverse spin components), as well as to fictitious magnetic fields along $\pm e_x$ imposed by each species on the other. Consequently, the total precession frequencies of the fluctuations of the alkali and noble-gas spins are given respectively by $\omega_{ia} = \bar\omega_{ia} \pm J\sqrt{M_b/M_a}$ and $\omega_{ib} = \bar\omega_{ib} \pm J\sqrt{M_a/M_b}$.
To synchronize the precession frequencies in both cells, we set $B_2$ and ($\Omega_2 - \Omega_1$) to satisfy $\omega_{1a} = \omega_{2a} \equiv \omega_a$ and $\omega_{1b} = \omega_{2b} \equiv \omega_b$, for any choice of $B_1$. Under these conditions, the quantum dynamics of the system derived from the Hamiltonian $H_{\mathrm{tot}}$ is described by the Heisenberg-Langevin equations for the transverse operators. Here $\gamma_L$ denotes the attenuation per unit length of the probe (including the absorption by the alkali atoms), $\gamma_a$ denotes the total decoherence rate of the alkali spins in the presence of the probe, and $\gamma_b$ denotes the slow relaxation of the noble-gas spins [S24]. The vacuum noise operators $\hat F_L$, $\hat F_{ia}$, and $\hat F_{ib}$ are associated with these decays [S11, S19].

The spin-exchange interaction allows for coherent state-exchange between the alkali and noble-gas spins within each of the two cells independently at a rate $J = g\sqrt{M_a M_b}/(AL)$. The coherent spin-exchange rate coefficient is $g$, with $g = 4.9\times10^{-15}$ cm$^3$ s$^{-1}$ for a K-$^{3}$He mixture or $g = 1.9\times10^{-13}$ cm$^3$ s$^{-1}$ for $^{87}$Rb-$^{129}$Xe [S19, S29]. At the same time, the polarization state of the probe is altered by both ensembles together: $\hat S_y$ depends on the nonlocal spin operator $\hat f_{1z} + \hat f_{2z}$, and $\hat S_z$ exerts a common back-action light-shift on the two cells. The optical coupling rate $Q = (a/T)\sqrt{M_a M_L}$ depends on $a = 2r_e c f/[A\delta_e(2I+1)]$, where $r_e = 2.8\times10^{-13}$ cm is the classical electron radius, $f \leq 1$ is the oscillator strength of the atomic transition, and $\delta_e$ is the detuning of the laser from the optical transition. Also note that the operators in Eqs. (S2)-(S4) satisfy the commutation relations $[\hat f_{iy}, \hat f_{jz}] = \pm i\delta_{ij}$ for the alkali, $[\hat S_y(z'), \hat S_z(z'')] = icT\delta(z' - z'')$ for the light, and $[\hat k_{iy}, \hat k_{jz}] = \pm i\delta_{ij}$ for the noble-gas spins.

To simplify Eqs. (S2)-(S4), we transform the system to the rotating frame of the noble-gas spins and describe the adiabatic following of the alkali in the large magnetic field limit (the off-resonance regime) $\Delta \gg \gamma_a, J, Q$. The formal transformation of the collective spin operators in each cell to the rotating frame is given below; the operators $\hat f'$ and $\hat k'$ are the stationary spin components of the alkali and noble-gas spins, respectively. The dynamics of the $y, z$ components of the alkali spins in the rotating frame is then given accordingly.

We are interested in the slow, adiabatic dynamics of $\hat f'_i$, which naturally oscillates at a rate $\Delta$. The leading order of the dynamics is thus determined by considering the instantaneous steady state $\partial_t \hat f'_i = 0$, which yields a linear relation. Here $e(t) = \sin(\omega_b t)e_y + \cos(\omega_b t)e_z$ is the optical axis in the rotating frame, and we define $\cos\psi \equiv \Delta/\sqrt{\Delta^2 + \gamma_a^2}$ and $\sin\psi \equiv \gamma_a/\sqrt{\Delta^2 + \gamma_a^2}$. Eq. (S7) describes the slow temporal dependence of the alkali spin operators on the noble-gas spins via spin-exchange, on the light circular polarization via back-action noise, and on the infiltrated vacuum white noise associated with the decay rate $\gamma_a$. The noise terms are given by $\hat F'_{ia} = R_x(\omega_b)\hat F_{ia}$, which are statistically equivalent to $\hat F_{ia}$. In the off-resonance regime $\Delta \gg \gamma_a$, we obtain $\psi \ll 1$, such that the leading term in Eq. (S7) is free of decay and noise, yielding the simple form of Eq. (3) in the main text (note that in the main text, we dropped the prime notation for brevity).
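The structure of this adiabatic-elimination step can be sketched schematically; the linear equation of motion below is an illustrative assumption standing in for the full alkali equation, with the spin-exchange, back-action, and noise drives abbreviated as $\hat s_i$:

```latex
% Schematic: if the rotating-frame alkali fluctuation obeys
%   \partial_t \hat f'_i = -(\gamma_a + i\Delta)\,\hat f'_i + \hat s_i ,
% then the instantaneous steady state \partial_t \hat f'_i = 0 gives
\[
  \hat f'_i \;=\; \frac{\hat s_i}{\gamma_a + i\Delta},
  \qquad
  \Bigl|\tfrac{1}{\gamma_a + i\Delta}\Bigr|
      = \frac{1}{\sqrt{\Delta^2 + \gamma_a^2}} ,
\]
% so the decay and noise contributions enter at relative order
% \gamma_a/\sqrt{\Delta^2 + \gamma_a^2} = \sin\psi: for \Delta \gg \gamma_a,
% \psi \ll 1 and the leading term is free of decay and noise, as in Eq. (3).
```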
Similarly, we derive the equations for the noble-gas spins in each cell in the rotating frame. The first term describes the back-action of the probe circular polarization on the noble-gas spins, mediated via its effect on the alkali. Interestingly, the back-action is the same in both cells, being polarized in opposite orientations. Here $\kappa = QJT/\Delta$ is the unitless optical coupling strength. Similarly, substituting the definitions (S17)-(S24) in Eqs. (S15)-(S16) and temporally integrating over the pulse duration $T$ yields the atomic evolution of the spins by the emerging Faraday interaction with the noble-gas spins. Equations (S25) and (S29) give Eqs. (6) in the main text.

In the presence of noise and relaxations, the modified input-output relations are given accordingly. Here we identify $\hat w_n$ ($0 \leq n \leq 4$) as standard vacuum-noise operators which correspond to normalized quantum Wiener processes, satisfying $\langle \hat w_n \rangle = 0$ and $\langle \hat w_{m\alpha}\hat w_{n\beta} \rangle = \frac{1}{2}\delta_{mn}\delta_{\alpha\beta}$ for $\alpha, \beta \in \{y, z\}$ and $0 \leq m, n \leq 4$ [S11]. To estimate the attainable degree of squeezing and choose the optimal feedback pulse, we calculate the variance of the atomic spins after the feedback, $\mathrm{var}(\hat p^{\mathrm{out}}_{A,i} + G\hat x^{\mathrm{out}}_{L,z})$, and find that it attains a minimal value of $[2(1 + \kappa^2)]^{-1}$ for the feedback proportionality constant given in Eq. (S32).

Appendix B: Entanglement lifetime

In this section, we show that the variance of the squeezed quadrature of the two ensembles decays at the rate $2\Gamma_b$. In our case, the two independently squeezed quadratures are $\hat p_{b,y}(t)$ and $\hat p_{b,z}(t)$, which according to Eq. (S15) satisfy linear damped dynamics, where $\langle \hat F_{b-} \rangle = 0$, for $\alpha, \beta \in \{y, z\}$. Integration of Eq. (S33) yields an expression in which the second integral represents a standard stochastic integration. We first note that the initial vacuum-squeezed state is not displaced, yielding $\langle \hat p_{b\alpha}(0) \rangle = 0$ and thus $\langle \hat p_{b\alpha}(t) \rangle = 0$. To calculate the variance as a function of time, we first compute the symmetrized correlators, where $\{\cdot,\cdot\}$ denotes the anti-commutator. We can then calculate the variance by

$\mathrm{var}(\hat p_{b\alpha}(t)) = e^{-2\Gamma_b t}\,\langle \hat p^2_{b\alpha}(0) \rangle + \tfrac{1}{2}\bigl(1 - e^{-2\Gamma_b t}\bigr).$

Therefore, the variance of a squeezed state initially with $\langle \hat p^2_{b\alpha}(0) \rangle < \frac{1}{2}$ decays toward the vacuum value at the rate $2\Gamma_b$ (i.e., twice the individual decoherence rate), whereas $\mathrm{std}(\hat p_{b\alpha})$ decays at a rate $\Gamma_b$. Note that if the degree of squeezing is represented on a dB scale via the definition $10\log_{10}(2\,\mathrm{var}(\hat p_{b\alpha}(t)))$, then the decay appears faster for higher degrees of squeezing, as demonstrated in Fig. S1 for different initial squeezing degrees.
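A short numerical illustration of this decay law (plain Python; the rate and initial squeezing values are example numbers chosen here, loosely motivated by the $^{3}$He discussion in the main text):

```python
import math

def var_p(t_hours: float, var0: float, gamma_b: float) -> float:
    """Squeezed-quadrature variance at time t:
    var(t) = exp(-2*Gamma_b*t) * var(0) + (1 - exp(-2*Gamma_b*t)) / 2,
    relaxing to the vacuum value 1/2 at rate 2*Gamma_b.
    """
    decay = math.exp(-2.0 * gamma_b * t_hours)
    return decay * var0 + 0.5 * (1.0 - decay)

gamma_b = 1.0 / 50.0            # example rate in 1/hours
for db0 in (3.0, 6.0):          # initial two-mode squeezing in dB
    var0 = 0.5 * 10.0 ** (-db0 / 10.0)
    for t in (0.0, 10.0, 25.0): # hours
        level = 10.0 * math.log10(2.0 * var_p(t, var0, gamma_b))
        print(f"initial {db0:.0f} dB, t = {t:4.1f} h: {level:+.2f} dB")
```

On the dB scale used here, the more strongly squeezed initial state appears to decay faster, exactly as noted for Fig. S1.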
Primary and secondary sources of formaldehyde in urban atmospheres: Houston, Texas region

We evaluate the rates of secondary production and primary emission of formaldehyde (CH2O) from petrochemical industrial facilities and on-road vehicles in the Houston, Texas region. This evaluation is based upon ambient measurements collected during field studies in 2000, 2006 and 2009. The predominant CH2O source (92 ± 4 % of total) is secondary production formed during the atmospheric oxidation of highly reactive volatile organic compounds (HRVOCs) emitted from the petrochemical facilities. Smaller contributions are primary emissions from these facilities (4 ± 2 %), and secondary production (∼3 %) and primary emissions (∼1 %) from vehicles. The primary emissions from both sectors are well quantified by current emission inventories. Since secondary production dominates, control efforts directed at primary CH2O emissions cannot address the large majority of CH2O sources in the Houston area, although there may still be a role for such efforts. Ongoing efforts to control alkene emissions from the petrochemical facilities, as well as volatile organic compound emissions from the motor vehicle fleet, will effectively reduce the CH2O concentrations in the Houston region. We do not address other emission sectors, such as off-road mobile sources or secondary formation from biogenic hydrocarbons. Previous analyses based on correlations between ambient concentrations of CH2O and various marker species have suggested much larger primary emissions of CH2O, but those results neglect confounding effects of dilution and loss processes, and do not demonstrate the causes of the observed correlations. Similar problems must be suspected in any source apportionment analysis of secondary species based upon correlations of ambient concentrations of pollutants.

Introduction

Formaldehyde (CH2O) is an oxygenated volatile organic compound (VOC) that plays an important role in the formation of ozone pollution in urban areas. Both primary sources (i.e., direct emissions from anthropogenic sources) and secondary sources (i.e., production in the atmosphere during oxidation of other, directly emitted VOCs) contribute to atmospheric concentrations of CH2O. Most secondary production of CH2O is expected to occur during the atmospheric oxidation of ethene, propene and higher terminal alkenes, such as 1-butene, 1,3-butadiene and isoprene, but CH2O is additionally formed more slowly from the oxidation of alkanes and aromatic compounds. CH2O is lost from the atmosphere through photolysis, reaction with the hydroxyl radical (OH), and deposition.

Quantifying the relative contribution of primary and secondary CH2O sources is crucial to developing effective ozone control strategies in urban areas. Photolysis of CH2O is an important source of OH radicals, which are the species that initiate atmospheric photo-oxidation, and serves as a fuel for the photochemical cycles that produce ozone. Accumulation of CH2O during nighttime hours from direct emissions could provide large CH2O concentrations at dawn that could initiate photochemistry earlier in the diurnal cycle than would be the case in their absence. Thus, emissions from primary sources are an attractive target for regulatory efforts designed to reduce urban ozone concentrations.
Urban sources of atmospheric CH2O have been investigated for decades. In Los Angeles in 1980, Grosjean (1982) measured concentrations as high as 48 ppbv, and reported measurements by others from the 1960s showing that CH2O exceeded 100 ppbv in the worst photochemical episodes in that city. Based upon the observed diurnal cycle, Grosjean (1982) concluded that both direct anthropogenic emissions and photochemical production made substantial contributions to ambient CH2O concentrations. A variety of statistical studies have attempted to quantify the relative amounts of ambient CH2O contributed by primary and secondary sources in several cities, including Vancouver (Li et al., 1997), Houston (Friedfeld et al., 2002; Rappenglück et al., 2010; Buzcu Guven and Olaguer, 2011) and Mexico City (Garcia et al., 2006). More generally, many different approaches have estimated the relative emissions of VOCs based upon their measured ambient concentrations. Only relatively few of these approaches (e.g., de Gouw et al., 2005; Liu et al., 2009) have explicitly accounted for the different rates of loss and, in the case of secondary species, formation of the VOCs. We will see here that properly accounting for loss and formation rates is particularly important for determining sources of CH2O in particular and secondary products in general.

The quantification of primary and secondary formaldehyde sources is particularly important in Houston, Texas, which is characterized by strongly elevated atmospheric CH2O concentrations (Wert et al., 2003; Ryerson et al., 2003; Martin et al., 2004). Houston is home to a very large industrial sector associated with petrochemical and petroleum refining activity, and these industrial activities are associated with the elevated CH2O concentrations. Given this industrial activity, the relative contributions from primary and secondary sources may be significantly different from most urban areas. Indeed, Olaguer et al. (2009) have argued that primary emissions from this industrial sector may make large contributions to ambient CH2O, and thus should be identified, quantified and controlled.
In this work, we present analytical methods for quantifying both primary and secondary sources of CH2O. The major primary sources of CH2O that have been suggested to be important in Houston-Galveston-Brazoria (HGB) are motor vehicles and the area's industrial facilities. Primary emissions from the industrial facilities are derived from direct flux measurements, and those from the vehicle fleet are derived from measured ambient CH2O to CO ratios under conditions dominated by vehicle emissions, combined with emission inventory estimates for vehicle CO emissions. The secondary sources of CH2O in HGB are production from primary emissions of parent VOCs emitted from these same anthropogenic sources, as well as VOCs of biogenic origin. Photochemical oxidation initiated by OH during daytime is expected to dominate this secondary production, but nighttime oxidation initiated by ozone (O3) or the nitrate radical (NO3) reacting with those emitted VOCs also contributes. The amount of CH2O produced by secondary sources is derived from the estimated yield of CH2O from reacted VOCs combined with emission inventory estimates of industrial and vehicle VOC emissions. Although our primary goal is to provide a quantitative analysis of CH2O emitted by primary sources and formed from secondary sources within the HGB ozone nonattainment area, the approach presented here is applicable to other urban areas and to other photochemical species.

The following section describes the data sets utilized in this paper, and Sects. 3 and 4 address emissions from petrochemical facilities and on-road vehicle emissions. Section 5 compares our results to other analyses and discusses the reasons for the divergent results, and Sect. 6 discusses the results and presents conclusions.

Data sets

The analysis presented here is based upon archived data sets that have been described elsewhere; only brief introductions and references to these descriptions are given here. NOAA conducted two airborne studies in the HGB region during the TexAQS 2000 (Ryerson et al., 2003; Wert et al., 2003) and TexAQS 2006 (Washenfelder et al., 2010; Peischl et al., 2010) field studies; those data are available at http://esrl.noaa.gov/csd/tropchem/. The aircraft platforms were the NCAR Electra in 2000 and the NOAA WP-3D in 2006. Airborne CH2O concentrations were acquired by NCAR employing tunable infrared laser absorption spectroscopy. During the 2000 study a tunable diode laser absorption spectrometer described by Wert et al. (2003) was employed, while the 2006 study employed a tunable difference frequency generation laser absorption spectrometer, as described by Weibring et al. (2007). Both instruments provided 1-s to 10-s CH2O measurements. Both aircraft campaigns included 1 Hz measurements of O3, nitric oxide (NO), nitrogen dioxide (NO2), total reactive nitrogen (NOy), carbon monoxide (CO), sulfur dioxide (SO2), and carbon dioxide (CO2) (Ryerson et al., 1998, 1999, 2000; Holloway et al., 2000; Daube et al., 2002). Speciated VOCs were measured by gas chromatography (GC) of whole air samples acquired during each flight (Schauffler et al., 1999). Both aircraft campaigns included speciated VOC measurements by proton transfer reaction mass spectrometry (PTR-MS) (de Gouw and Warneke, 2007), and the 2006 field campaign included ethene (C2H4) measurements at 5 s resolution with laser photoacoustic spectroscopy (LPAS) (de Gouw et al., 2009). Parrish et al. (2009) give additional details of the 2006 measurements.
Chalmers University of Technology equipped a mobile van with Solar Occultation Flux (SOF) and mobile Differential Optical Absorption Spectroscopy (DOAS) instrumentation (Mellqvist, 1999; Rivera et al., 2010; Mellqvist et al., 2010a) to measure vertical columns of CH2O, ethene, propene, and other VOCs in 2006 and in 2009. The SOF technique is based on open path Fourier Transform Infrared (FTIR) Spectroscopy using direct solar radiation as the light source, while the mobile DOAS is an open path system with scattered solar radiation as the light source. Installation in a mobile van allows continuous column concentration measurements to be performed while transecting an emission plume. These measurements, together with measured position and wind speed, make it possible to calculate emission fluxes in the plume. The accuracy of these flux determinations is estimated to be on the order of 30 %, primarily due to the uncertainty of the wind speed. The SOF results are available from Mellqvist et al. (2010b).

The University of Houston conducted extensive measurements at Moody Tower, a site on the top of a 65 m building in Houston, Texas, during the TexAQS-II radical and aerosol measurement project (TRAMP) (Lefer and Rappenglück, 2010), which was a component of the second Texas Air Quality Study (TexAQS II) (Parrish et al., 2009). Lefer and Rappenglück (2010) and references therein describe the measurements, including CH2O, CO, O3, NOy, and the photolysis rate of NO2 (jNO2). The analysis in the present paper utilizes the CH2O (measured by Hantzsch reaction fluorescence) and CO (measured by Gas Filter Correlation) data. The measurements were conducted from 13 August to 2 October 2006. The results reported here are based on 10-s averaged data that were provided to us by the TRAMP measurement team on 23 May 2008.

Baylor University deployed a Piper Aztec aircraft in the HGB region during the summer of 2006 (Baylor University, 2009; Olaguer et al., 2009). Measurements included CH2O (measured by Hantzsch reaction fluorescence), O3, NO, NO2, NOy, CO, and VOCs (measured by canister sampling with gas chromatograph/flame ionization detection analysis). The data are available from the Texas Environmental Research Consortium (TERC) website: http://projects.tercairquality.org/AQR/H063.

Formaldehyde fluxes from petrochemical facilities in HGB

In this section, we quantify the flux of secondary CH2O formed during the atmospheric oxidation of VOCs emitted from the petrochemical facilities in the HGB region, and compare it to the flux of primary CH2O emitted from these same facilities. The focus here is on the routine emissions that occur on a daily basis. It is much more difficult to address extraordinary, sporadic events, but some comments concerning literature reports of such events will be provided at the end of this section.
Quantification of formaldehyde formed from oxidation of petrochemical HRVOC emissions

Analysis of observations made during the TexAQS 2000 study (Ryerson et al., 2003; Wert et al., 2003; Kleinman et al., 2002, 2003; Daum et al., 2003) established that the petrochemical industrial facilities in Houston consistently emit large amounts of VOCs and oxides of nitrogen (NOx = NO + NO2) to the atmosphere. The VOCs characteristically include especially large concentrations of highly reactive volatile organic compounds (HRVOCs), in particular the alkenes ethene and propene. During daytime, these emissions produce plumes of elevated O3 concentrations downwind from the sources, and analysis confirmed that the initial hydrocarbon reactivity in the petrochemical source plumes is primarily due to the alkenes. These plumes also contain high (as much as >30 ppbv) concentrations of CH2O formed as a secondary product of the HRVOC oxidation (Wert et al., 2003). Figure 1 shows one example of such a plume observed downwind of the Houston Ship Channel (HSC).

The evolution of the relationship between O3 and CH2O measured aboard the NCAR Electra in the 27 August 2000 plume is illustrated in Fig. 1 and quantitatively examined in Fig. 2. The flight involved multiple, crosswind transects flown upwind and downwind from HSC. The molar enhancement ratio of CH2O to O3 produced in the plume at a particular downwind transect is given by the slope of the linear correlation between the measurements made during that transect. In Fig. 2, all linear correlations are required to pass through the estimated background concentrations of CH2O and O3 appropriate for that day: 0.5 ppbv CH2O (the concentration in background air over the Central Gulf of Mexico; Gilman et al., 2009) and 31.7 ppbv O3 (the O3 concentration at CH2O = 0.5 ppbv calculated from the CH2O-O3 correlation for the farthest upwind transect at 29.0° N). Downwind of HSC the concentrations of both species increased rapidly, and by the second transect at ∼24 km downwind (30.0° N) CH2O reached its maximum concentration and the two species were well correlated (r² = 0.88). On subsequent transects, O3 reached its maximum concentration, but the ratio of CH2O to O3 continually decreased through the farthest downwind plume transect, while the correlation continued to increase to a maximum of r² = 0.94.

Figure 3 summarizes the CH2O to O3 ratios at the downwind transects and compares the 27 August flight to a second flight conducted under similar conditions on 28 August (see Fig. 8 of Ryerson et al., 2003 and Figs. 5 and 6 of Wert et al., 2003). The photochemical evolution of CH2O in the plume illustrated in Figs. 2 and 3 reflects the short lifetimes of its HRVOC precursors (3-8 h for ethene and 1-2.5 h for propene; Wert et al., 2003). This slows CH2O production as transport proceeds. In addition, the lifetime of CH2O is also short (3 to 4 h in the sunlit lower troposphere; Seinfeld and Pandis, 1998). This leads to a rapid decrease of the CH2O concentration when production slows.
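The transect analysis just described reduces to a one-parameter least-squares fit forced through the background point. The following minimal Python sketch shows the computation; the background values are those quoted above, while the transect concentrations are hypothetical, for illustration only:

```python
def enhancement_ratio(ch2o, o3, ch2o_bg=0.5, o3_bg=31.7):
    """Molar CH2O/O3 enhancement ratio for a single plume transect.

    Slope of the least-squares line constrained to pass through the
    fixed background point (o3_bg, ch2o_bg):
        slope = sum(dx * dy) / sum(dx * dx).
    """
    dx = [x - o3_bg for x in o3]
    dy = [y - ch2o_bg for y in ch2o]
    return sum(a * b for a, b in zip(dx, dy)) / sum(a * a for a in dx)

# Hypothetical 10-s transect values (ppbv), for illustration only:
o3 = [60.0, 85.0, 110.0, 95.0, 70.0]
ch2o = [3.1, 5.6, 8.2, 6.4, 4.0]
print(f"dCH2O/dO3 = {enhancement_ratio(ch2o, o3):.3f} ppbv ppbv^-1")
```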
Given these constraints, the total quantity of secondary CH2O formed from primary HRVOC emissions can be calculated from the product of the total emissions times the yield of CH2O produced during the atmospheric oxidation of these alkenes. The total HRVOC emissions in HGB are available from emission inventories and from direct ambient measurements of HRVOC fluxes in the downwind plumes. However, since the available inventories generally underestimate the alkene emissions from these facilities by large factors, we cannot directly use the 2005 National Emission Inventory (NEI) (Ryerson et al., 2003; de Gouw et al., 2009; Mellqvist et al., 2010a). Instead we use an inventory (Brioude et al., 2011; Kim et al., 2011) that has been modified on a facility-by-facility basis to agree with the measured fluxes of ethene and propene (Mellqvist et al., 2010a). Since the lifetimes of the alkenes are generally shorter than the time for transport of air masses out of HGB, this calculation will provide a realistic estimate of the secondary source of CH2O from the petrochemical facilities.

On this basis, the results of the quantification of the secondary CH2O flux from specific petrochemical facilities and the total HGB area are given in Table 1. Assuming that OH is the primary oxidant of the alkenes, Seinfeld and Pandis (1998) give the product yields of 1.44 molecules CH2O per molecule ethene and 0.86 molecules CH2O per molecule propene. The product of the emission flux of each alkene times the product yield of CH2O from that alkene yields an estimate of the secondary CH2O formed from that alkene. A sum over the emitted alkenes gives an estimate of the total secondary CH2O. Table 1 gives the alkene fluxes directly measured from specific facilities, as well as the integration over the entire HGB region (latitude 28.9 to 30.6° N; longitude 94.4 to 96.2° W) from the emission inventory. Table 1 also gives the flux of secondary CH2O that would result from the atmospheric oxidation of those primary alkene emission fluxes.

In the above paragraphs we have formulated a simple approach to estimating the total average production of secondary CH2O from petrochemical facilities in the HGB region. This approach is based upon two assumptions: first, the total average CH2O production rate is well approximated by the rate of CH2O formed by complete OH oxidation of the ethene and propene emitted by those facilities. Second, the CH2O yields from ethene and propene are constant at 1.44 and 0.86 molecules CH2O per molecule ethene and propene, respectively. The quantification of the uncertainties in this approach is difficult. The CH2O yields from OH oxidation are well known, but the emissions of ethene and propene are uncertain.

Table 1. Summary of the measured and inventory average primary emission fluxes and estimated secondary formation rate of CH2O from petrochemical facilities in the HGB, given as 24-h averages. The indicated uncertainties are estimated 1-σ confidence limits. Units are kg h^-1 except as noted.
Area / Primary / Total: HGB 4 91 101 220 ± 90 10.6
(1) Estimated from the product of the fluxes of ethene and propene multiplied by the CH2O product yield of the respective alkene.
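The flux-times-yield bookkeeping behind Table 1 can be written out explicitly. The sketch below (illustrative Python, not the tool used in the study) converts alkene mass fluxes to molar fluxes, applies the molar yields of 1.44 and 0.86, and converts back to a CH2O mass flux; as a consistency check, it reproduces the "at least 1960 kg h^-1" estimate for the 20 May 2009 plume discussed in the next subsection. The molecular weights are standard values.

```python
MW = {"CH2O": 30.03, "ethene": 28.05, "propene": 42.08}  # g/mol
YIELD = {"ethene": 1.44, "propene": 0.86}  # molecules CH2O per molecule

def secondary_ch2o_flux(alkene_fluxes_kg_h):
    """Secondary CH2O mass flux (kg/h) from primary alkene mass fluxes:
    sum over alkenes of (mass flux / MW_alkene) * yield * MW_CH2O."""
    return sum(flux / MW[name] * YIELD[name] * MW["CH2O"]
               for name, flux in alkene_fluxes_kg_h.items())

# Consistency check for the 20 May 2009 plume: 1200 kg/h of CH2O already
# formed plus 490 kg/h of still-unreacted ethene.
future_ch2o = secondary_ch2o_flux({"ethene": 490.0})
print(f"ultimate CH2O flux >= {1200.0 + future_ch2o:.0f} kg/h")  # ~1960
```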
To minimize this uncertainty, we have based our analysis upon inventories supported by direct measurements of fluxes. However, a fraction of ethene and propene may not react before leaving HGB, leading to an overestimate. On the other hand, the contribution from oxidation of emissions of heavier alkenes, alkanes and aromatics is neglected, which would lead to an underestimate. Wert et al. (2003) present an analysis of the "CH2O production potential" of the individual VOCs measured in specific atmospheric samples. This CH2O production potential gives the total rate at which CH2O is formed from all measured VOCs during oxidation by OH radicals. For the eight most concentrated (i.e., least photochemically processed) VOC samples collected over industrial regions in HGB, the terminal alkenes, largely ethene and propene, on average composed 95 % of the total CH2O production potential. However, this percentage gives an instantaneous picture of CH2O formation early in the oxidation of the plume, while the total CH2O production derived above is an integration over the time that the emissions remain in the HGB region. It must also be noted that NO3 and O3 are important oxidants of alkenes heavier than ethene (Brown et al., 2011); however, these oxidation pathways are less important than OH, and they also produce CH2O with similar yields. In summary, the above quantification of secondary production is likely an underestimate for daytime, when contributions from heavier alkenes, alkanes and aromatics are neglected, but an overestimate for nighttime, when chemical processing is slower and some fraction of the emissions can be transported out of HGB before reacting. There are also uncertainties in the CH2O yield from the oxidation of the alkenes by NO3 and O3. Overall, the approach has been designed so that some uncertainties likely compensate for others. We judge that a conservative estimate for the 1-σ uncertainty of the quantification of the rate of secondary CH2O formation is ±40 %. This value is reflected in the uncertainties indicated in Table 1.

Direct measurement of the primary formaldehyde flux from petrochemical facilities

The most direct measurement of the primary flux of CH2O from industrial facilities in HGB is that reported by Mellqvist et al. (2010b) and Johansson et al. (2010), who deployed a mobile van just downwind of specific industrial areas to measure emission fluxes in the plumes from the facilities. Table 1 presents a summary of measurements conducted in 2009, which found relatively small fluxes of CH2O immediately downwind of the industrial facilities. Mellqvist et al. (2010b) argue that these CH2O fluxes represent mostly primary emissions, because the measurements were made so close to the facilities that transport times were short enough that secondary formation was assumed to contribute little to the observed CH2O fluxes. Mellqvist et al. (2010b) and Johansson et al. (2010) present one flux measurement that allows our determination of the quantity of secondary CH2O formation to be tested. On 20 May 2009, under easterly winds, they measured the flux of alkenes and CH2O in the coalesced plume from the HSC and Mont Belvieu areas during a transect on the west side of the HSC (see Fig. 58 of Mellqvist et al., 2010b).
The transport time was sufficient (∼2-3 h from Mont Belvieu) for substantial photochemical production of CH2O to have proceeded. The measured CH2O flux was about 1200 kg h−1, and the plume still carried a significant flux of unreacted alkenes (e.g., 490 kg h−1 of ethene). When these unreacted alkenes do react, the ethene alone will yield an additional (490/28) × 1.44 × 30 ≈ 760 kg h−1 of CH2O, so the ultimate total flux of CH2O is expected to be at least 1960 kg h−1, which agrees to within 4% with the combined 2040 kg h−1 secondary source calculated by summing the separate contributions from HSC and Mont Belvieu in Table 1.

A comparison of primary and secondary CH2O fluxes from the petrochemical facilities is included in Table 1. Summing over the three petrochemical industrial areas, 4 ± 2% of the CH2O flux is of primary origin and 96 ± 2% is of secondary origin, produced during photochemical oxidation of primary alkene emissions. We take this relative primary-secondary partitioning to be characteristic of the entire petrochemical source of CH2O in HGB.

3.3 Sporadic formaldehyde emission events from petrochemical facilities

Olaguer et al. (2009) have focused attention on sporadic episodes in the HGB area characterized by very high reported concentrations of CH2O, up to 52 ppbv (Eom et al., 2008). They argue that direct primary emissions can possibly explain these high concentrations. Here we briefly discuss the expected signature of concentrations of trace species within plumes of primary CH2O emissions, and then examine two episodes that have received particular attention (Olaguer et al., 2009). The goal is to determine whether secondary formation alone is adequate to explain the observed CH2O concentrations, or whether there is substantial evidence for significant sporadic episodes of primary CH2O emissions. A unique signature is expected for measurements made within a fresh plume of primary CH2O emissions. Initially upon emission of primary CH2O, the enhanced CH2O concentrations would not be accompanied by enhanced O3 concentrations. In contrast, secondary production of CH2O is generally accompanied by production of O3. Plumes with significantly enhanced CH2O concentrations but without correlated O3 concentration enhancements were not encountered in either of the two NOAA airborne field campaigns conducted during TexAQS 2000 (Wert et al., 2003; Ryerson et al., 2003) and TexAQS 2006 (Washenfelder et al., 2010). Figure 2 shows the relationship between CH2O and O3 found on 27 August 2000, which was typical of that found in all the research flights conducted by NOAA during the two TexAQS studies. The coincident CH2O and O3 data (14,031 10-s averages in 2000 and 146,624 1-s averages in 2006) represent over 14,000 km of flight distance in each study, from 14 days in 2000 and 12 days in 2006. Many individual plumes were examined during the analyses performed for publications based on these data (Wert et al., 2003; Ryerson et al., 2003; Washenfelder et al., 2010). The TexAQS 2006 study included nighttime flights (Brown et al., 2009), when primary emissions of CH2O would be particularly obvious, but evident plumes of primary CH2O emissions were not encountered. If concentrated plumes (i.e., several ppbv enhancements) of fresh primary CH2O emissions are present in the HGB region, they were not encountered in either of these aircraft studies.
It is, of course, impossible to prove that primary emissions never play a significant or even dominant role in some isolated episodes. A plume of primary CH2O emissions released in daytime would be expected to produce significant amounts of O3 from the photochemical processing of CH2O as long as sufficient NOx is also present, so a plume of primary CH2O emissions would soon lose its unique signature. However, it is possible to investigate whether secondary formation alone is adequate to explain specific observed episodes. Here we examine two episodes that have received particular attention.

During a morning flight on 31 August 2006, the Baylor Aztec aircraft repeatedly sampled a plume over and downwind of the HSC. This plume contained CH2O concentrations higher than the instrument could quantify (∼9 ppbv), as well as high concentrations of a variety of primary species and ozone (see Supplement and Fig. 8 of Olaguer et al., 2009). Examination of the original data set (Baylor University, 2009) demonstrates that this plume represented a very complicated air mass, with separate parts of the plume showing markedly different ratios of the primary pollutants NOx, CO, and SO2. It is also evident that relatively fresh emissions (i.e., those with a large fraction of NOy still present as NOx) were mixing with aged pollution, as indicated by high O3 concentrations approaching 200 ppbv, the highest O3 observed by the Baylor Aztec during 2006. The time resolution of the CH2O instrument (∼1 min) was not adequate to resolve the rapid concentration changes encountered by the aircraft. Hence, it is undetermined whether the high observed CH2O concentrations were associated with the fresh emissions or the aged pollution. It is apparent, however, that the observed high O3 concentrations are consistent with very high concentrations of secondary CH2O; for example, Wert et al. (2003) report CH2O > 30 ppbv in a plume with O3 ∼ 150 ppbv. Thus, the measurements reported by the Baylor Aztec in the 31 August 2006 plume do not provide strong evidence for primary emissions of CH2O as the main source of this plume. Rivera et al. (2010) report the flux of CH2O from the HSC on this same day, and conclude that its source was predominantly secondary production from VOC emissions within the HSC.

Eom et al. (2008) report the observation of a CH2O plume during the morning of 27 September 2006 at the Lynchburg Ferry USEPA site in Baytown, TX. This plume reached a maximum concentration of 52 ppbv, reportedly the maximum ambient concentration of CH2O ever observed in the HGB region. There was no conclusive evidence for the source of this CH2O.
Based upon poor correlation with O3 and other arguments, the authors argue that primary CH2O emissions may have played a role. A definitive examination of the sources of CH2O in this (or any other) plume requires consideration of the recent transport of the sampled air parcel. Meteorological analyses (see Supplement) indicate that air from the HGB region on 26 September was transported south over Galveston Bay and returned to the HGB area at the time that the 27 September plume was observed. The stagnation and recirculation transport pattern of this plume is ideal for the accumulation of high CH2O concentrations from secondary processing of the HRVOC emissions from the HSC. Until the transport and chemical processing that occurred in this plume are understood in detail, no definitive assignment of the source of CH2O in this plume is possible. In summary, no strong evidence has been presented for episodes of sporadic primary CH2O emissions from the petrochemical facilities in the HGB region.

4 Formaldehyde fluxes from on-road vehicles in HGB

In this section, we quantify the fluxes of primary CH2O emissions from on-road vehicles in the HGB region, and estimate the rate of secondary formation of CH2O during the atmospheric oxidation of the alkenes emitted by these vehicles.

4.1 Determination of the primary emission flux from on-road vehicles

To estimate the flux of primary CH2O from on-road vehicle emissions, we multiply the CH2O to CO emission ratio deduced from field observations in Houston by the total CO emission rate from on-road vehicles in HGB. This latter quantity is available from emission inventories constrained by ambient measurements. The CH2O to CO emission ratio is quantified from the relationship between the concentrations of these two species observed during the morning traffic peak. This time period is selected because traffic-related sources can dominate the ambient CH2O concentrations, and the loss of CH2O from the atmosphere is minimized because OH levels are suppressed by high NOx concentrations and photolysis is still slow. The predominant source of CO in HGB is on-road vehicle emissions, so the ambient enhancement ratio of CH2O to CO is not affected by dilution. In the following, all emission ratios are expressed as molar ratios, not mass ratios.

A preliminary analysis prepared for the TexAQS II Rapid Science Synthesis (Cowling et al., 2007) estimated that the primary emissions of CH2O from mobile sources were, as an upper limit, 0.18 to 0.30% of the CO emissions. This estimate was based upon nighttime measurements made on the NOAA research vessel Ronald H. Brown and the WP-3D aircraft (see Fig. E2 of Cowling et al., 2007). This estimate was deemed an upper limit due to the possibility that the sampled air had been photochemically processed to at least some extent during the preceding daytime period, or that some fraction of the observed formaldehyde had been produced by nighttime secondary production through O3 or NO3 reaction with primary VOCs. These findings are broadly consistent with previous determinations of the CH2O to CO emission ratio: ∼0.2 to 0.3% in Los Angeles (Grosjean, 1982), 0.10 to 0.14% in Denver, Colorado (Anderson et al., 1996), and 0.24% in Rome (Possanzini et al., 1996). Rappenglück et al. (2010) report CH2O and CO measured at Moody Tower in Houston, Texas as part of the TRAMP study (Lefer and Rappenglück, 2010).
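The product described above reduces to a one-line calculation; the sketch below shows it with placeholder inputs (a hypothetical CO emission rate and a molar ratio of the magnitude derived in the analysis that follows), not the inventory values.

    # Primary CH2O flux from on-road vehicles: molar CH2O/CO emission ratio
    # multiplied by the total CO emission rate, converted to a mass flux.
    M_CH2O = 30.03  # g/mol

    def primary_ch2o_kg_h(co_emissions_kmol_h, ch2o_to_co_molar):
        return co_emissions_kmol_h * ch2o_to_co_molar * M_CH2O

    # Hypothetical CO emission rate of 400 kmol/h and a 0.26% molar ratio:
    print(primary_ch2o_kg_h(400.0, 0.0026))  # ~31 kg/h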
As shown in Fig. 4, the relationship between the concentrations of these two species measured at all times of day is not well represented by a single linear correlation. Thus, sources other than direct emissions from the on-road vehicle fleet must be important. The large open circle and dotted black line in Fig. 4 show the CH2O-CO relationship expected if background air from the central Gulf of Mexico with 80 ppbv CO and 0.5 ppbv CH2O (Gilman et al., 2009) were transported into HGB and impacted only by on-road vehicle emissions with a CH2O to CO emission ratio of 0.3%. Virtually none of the Moody Tower data lies on this reference line, but it does define the lower envelope of the observed CH2O as a function of CO.

To obtain the best estimate of the CH2O to CO emission ratio for on-road vehicles from the Moody Tower data set, we examine the correlation between these two species in the period before and during the morning traffic peak on individual days. The time window on each day is generally selected to include a pre-sunrise CO minimum, which represents the background air on that specific day to which the traffic emissions are added, and to extend to the morning CO maximum. Only days with substantial CO enhancements (selected as peak CO exceeding 480 ppbv) are included in this evaluation. The color-coded points in Fig. 4 identify the 13 days during the TRAMP measurements when both CH2O and CO data were collected during the morning traffic peak and the peak CO exceeded 480 ppbv.

Only one (18 September) of the 13 days with strong morning CO enhancements closely approximates the reference line in Fig. 4. That day was nearly ideal for evaluating the on-road vehicle emission ratio. During the entire preceding day (a Sunday) the wind remained southerly (171 ± 19°; average ± standard deviation) and brisk (4.5 ± 1.3 m s−1). These winds brought relatively clean marine air to the Moody Tower site; for example, between midnight and 01:00 a.m. local standard time on 18 September, O3 = 9.6 ± 0.2 ppbv, CO = 93 ± 2 ppbv, NOy = 2.4 ± 0.2 ppbv, and CH2O = 0.84 ± 0.04 ppbv. Between midnight and 06:00 a.m. the wind decreased in speed and rotated through westerly to northerly. By 06:00 a.m., winds were nearly calm, allowing traffic emissions to accumulate in the resulting stagnant air. Since the petrochemical facilities lie generally east of the Moody Tower, no industrial emissions are expected to have impacted the measurements under such wind conditions (see Rappenglück et al., 2010). This expectation is supported by the measured SO2, which remained below 0.6 ppbv during the predawn period. Figure 5 shows the gradual increase in CO, NOy and CH2O during this time. (The Supplement gives similar plots for all 13 days.) From the predawn CO minimum to the morning maximum, CH2O was well correlated with CO (r² = 0.92), with a linear regression slope of 0.0026 ± 0.0003 (average ± 95% confidence limit). This linear fit is included in Fig. 4. Since little day-to-day variability is expected in the HGB on-road vehicle fleet (at least for weekdays), the best estimate for the CH2O to CO emission ratio is 0.26 ± 0.03%, which agrees with the 0.18-0.30% upper-limit estimate of Cowling et al. (2007). The 0.26 ± 0.03% estimate is also an upper limit, since secondary production of CH2O from the VOCs co-emitted with CH2O by on-road vehicles is mixed with the primary emissions, even though the meteorological conditions on 18 September limited the time that the vehicle emissions remained in the atmosphere before measurement.
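The day-by-day slope analysis amounts to a linear regression of CH2O on CO over the selected window. A minimal sketch, assuming the two time series are available as arrays (the synthetic series below is constructed to be consistent with the 18 September values quoted above, not real data), is:

    import numpy as np

    def morning_emission_ratio(co_ppbv, ch2o_ppbv):
        """Slope of CH2O vs. CO from the pre-sunrise CO minimum to the
        morning CO maximum, i.e. that day's molar enhancement ratio."""
        slope, intercept = np.polyfit(co_ppbv, ch2o_ppbv, 1)
        r = np.corrcoef(co_ppbv, ch2o_ppbv)[0, 1]
        return slope, r

    co = np.array([93.0, 150.0, 220.0, 310.0, 420.0, 480.0])
    ch2o = 0.84 + 0.0026 * (co - 93.0)       # synthetic, noise-free series
    print(morning_emission_ratio(co, ch2o))  # slope = 0.0026, r = 1.0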
The slopes derived from the linear regressions for all 13 days with strong morning CO enhancements vary widely, which reflects variability in the influence of other sources (i.e., transport of petrochemical emission plumes containing secondary CH2O) rather than variability in the vehicle fleet emissions themselves. Figure 4 shows the linear fits and Table 2 summarizes the slopes derived from those fits for all 13 days. Except for 18 September, the rush-hour data all lie well above the reference line. This is attributed to transport of CH2O to Moody Tower from other sources within the HGB area. The variability of the slopes is attributed to the degree of correlation or anti-correlation of transported plumes with the morning traffic. Figure 6 illustrates two days that exemplify high correlation and high anti-correlation. On 15 September, strong correlation (r² = 0.85) between a transported plume with high CH2O concentrations and the morning CO maximum resulted in a relatively large slope (0.0066 ± 0.0017) due to the transport of enhanced CH2O concentrations (compare the upper panel of Fig. 6 with Fig. 5, which use the same concentration scales). In contrast, on 20 September, transported air with high CH2O concentrations reached Moody Tower throughout the early morning period, with the peak arriving before the CO traffic peak, which resulted in a negative correlation with CO (r = −0.41) and a negative slope (−0.0035 ± 0.0030).

If we assume that, on average, CH2O from other (non-vehicle) sources transported to Moody Tower is uncorrelated with the morning CO traffic peak, then the linear regression slopes derived for the morning traffic peaks, averaged over a large number of days, should provide a measure of the CH2O to CO emission ratio for on-road vehicles alone. The weighted average (i.e., each day's slope weighted by the inverse of the square of its confidence limit; Bevington, 1969) of the regression slopes for all 13 days is 0.30 ± 0.02%, which is in excellent agreement with the result above for 18 September and the estimate of Cowling et al. (2007).

A recent tunnel study (Ban-Weiss et al., 2008) suggests a significantly lower CH2O to CO emission ratio for on-road vehicles. Using 2006 measurements made in a San Francisco Bay Area highway tunnel, these workers derive molar ratios of 0.062% and 0.149% for light-duty, gasoline-fueled vehicles and medium-duty/heavy-duty diesel-fueled trucks, respectively. Both of these results are significantly lower than the result from the 2006 ambient measurements presented here. The reason for the differences between the two studies is not well established, but it may reflect the specific driving conditions, the vehicle mix, and the relative absence of cold starts in the tunnel. However, the tunnel study does suggest that the result from the present work likely overestimates rather than underestimates the CH2O to CO emission ratio for on-road vehicles in HGB.
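The multi-day average quoted above is an inverse-variance weighted mean. A short sketch, with hypothetical daily slopes rather than the Table 2 values, is:

    import numpy as np

    def weighted_mean_slope(slopes, conf_limits):
        """Mean of daily slopes, each weighted by the inverse square of its
        confidence limit (Bevington, 1969), with the combined uncertainty."""
        w = 1.0 / np.asarray(conf_limits) ** 2
        s = np.asarray(slopes)
        return np.sum(w * s) / np.sum(w), 1.0 / np.sqrt(np.sum(w))

    slopes = [0.0026, 0.0066, -0.0035]   # hypothetical daily slopes
    limits = [0.0003, 0.0017, 0.0030]    # their 95% confidence limits
    print(weighted_mean_slope(slopes, limits))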
4.2 Quantification of formaldehyde formed from oxidation of on-road VOC emissions

Following a procedure similar to that of Sect. 3.1, the total amount of secondary CH2O that can form within HGB from on-road vehicle emissions can be estimated from the product of the total ethene and propene emissions from vehicles times the product yields of CH2O from these alkenes. Rather than relying upon emission inventories to provide total ethene and propene emissions, we use the measured alkene to CO emission ratios multiplied by the total CO emissions. This latter quantity is taken from emission inventories, since this aspect of inventories has been more extensively tested. The primary CH2O emission flux determined in the preceding section is also based upon the total CO emissions, so any uncertainty in this quantity will not affect the determination of the relative amounts of primary versus secondary CH2O associated with vehicle emissions. In this section, we again neglect any unreacted ethene or propene and any CH2O produced from the oxidation of alkanes, aromatics, and heavier alkenes.

Warneke et al. (2007) have derived the emission ratios of ethene and propene to CO characteristic of urban emissions using ambient measurements near the US East Coast. They find good agreement with the results of Baker et al. (2008), who analyzed measurements from 28 US cities. Both of these studies generally quantified the ratios from on-road vehicle emissions, since that is the primary source of alkenes and CO in most of these cities. Since the vehicle fleet and the hydrocarbon composition of gasoline do not vary markedly among different regions of the US, the Warneke et al. (2007) results are taken to be representative of the HGB vehicle fleet. Table 3 gives these alkene to CO ratios, as well as the secondary CH2O to CO ratio implied by these ratios combined with the product yields of CH2O from these alkenes (Seinfeld and Pandis, 1998) discussed earlier.

Table 3 includes the integration of the on-road vehicle emissions of CO, ethene, propene, and CH2O in the HGB region, which is defined here as latitude 28.9 to 30.6° N and longitude 94.4 to 96.2° W. The integration is performed on the NEI 2005 inventory provided by the EPA. However, CO emissions in the NEI 2005 inventory, which is based upon the MOBILE6 emission model, exceed measured CO concentrations by about a factor of 2 (Parrish, 2006; Brioude et al., 2011). Consequently, to obtain an accurate estimate we reduce the integrated CO emission estimate by half. The alkene and CH2O to CO emission ratios then allow total emissions of the alkenes and CH2O to be derived, which are included in Table 3 in the row labeled "best estimate". For all species except CO, these "best estimate" emissions are in good agreement (±25%) with the integrated NEI 2005 emissions.

Here again, the estimate of the secondary CH2O may be an overestimate, since some of the ethene and propene may be transported out of the HGB region before reacting to form CH2O, but it may be an underestimate because CH2O produced from the oxidation of alkanes, aromatics, and heavier alkenes is not included. The emission ratios of the alkenes to CO are estimated to be accurate to ±30% (Warneke et al., 2007), which is taken as the uncertainty for the primary emissions of the alkenes, while the estimate for the uncertainty of the secondary CH2O formation rate is taken as ±40% for reasons similar to the arguments given in Sect. 3.1.
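The same yield arithmetic as in Sect. 3.1, now driven by emission ratios rather than measured fluxes, can be sketched as follows; the CO emission rate and the molar ratios are placeholders, not the Table 3 entries or the Warneke et al. (2007) values.

    # Secondary CH2O from on-road vehicles: CO emissions scaled by the molar
    # alkene/CO emission ratios, then by the CH2O yield of each alkene.
    CH2O_YIELD = {"ethene": 1.44, "propene": 0.86}

    def vehicle_secondary_ch2o_kmol_h(co_kmol_h, alkene_to_co_molar):
        return sum(co_kmol_h * ratio * CH2O_YIELD[alkene]
                   for alkene, ratio in alkene_to_co_molar.items())

    # Placeholder inputs only:
    print(vehicle_secondary_ch2o_kmol_h(400.0,
                                        {"ethene": 0.006, "propene": 0.003}))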
Table 3 summarizes the estimated primary CH2O emitted and the secondary CH2O formed from the on-road vehicle fleet. The primary emission estimate is based upon the ambient CH2O to CO ratio measured during the morning traffic peak, and hence is an upper limit. These results indicate that no more than 28 ± 8% of the CH2O from the on-road vehicle fleet in HGB is of primary origin, with the remainder, at least 72 ± 8%, of secondary origin, produced from the oxidation of alkenes also emitted by the on-road vehicles. This estimated apportionment is expected to apply approximately to all US urban areas.

5 Comparison to other analyses

Based upon the 2000-2009 measurements and the 2005 emission inventory considered here, we have found that secondary production from alkenes emitted by petrochemical facilities and the on-road vehicle fleet is the major source of CH2O (95 ± 3% of total) in HGB (see Table 4 for a summary). Primary emissions from these sources make a much smaller contribution (5 ± 3%). Three previous studies addressed these same issues using correlations of ambient CH2O concentrations with concentrations of pollutants that are recognized as predominantly from either primary emissions (CO, SO2) or secondary formation processes (O3, PAN). All three of these studies concluded that primary emissions make much larger contributions: 37% (Friedfeld et al., 2002), 40% (Buzcu Guven and Olaguer, 2011) (with 36% from secondary sources and an additional 24% biogenic contribution), and 47% (Rappenglück et al., 2010) (with only 24% from secondary sources and the remaining 29% unattributed). These contrasting findings are attributed to two important problems that led the correlation-based approaches to inaccurate results; these same problems may affect many correlation-based source apportionment analyses of secondary pollutants.

The first problem is that the correlation-based studies explicitly or implicitly addressed source contributions to measured ambient CH2O concentrations at particular sites, while the present analysis addresses the total mass of CH2O emitted and formed within the entire HGB region. It is the emission fluxes and production rates (expressed as mass or moles per unit time) that quantify the amount of CH2O emitted or produced within HGB, and it is these quantities that determine the importance of CH2O to the photochemical production of O3 within HGB. It is critical to note that measured ambient concentrations at any particular location are affected not only by emission fluxes and production rates, but also by transport (including dilution) processes and loss rates. The relative contributions to measured ambient concentrations are directly related to the relative emission fluxes and production rates only if the loss rates and the effects of transport and dilution are identical for each of the sources.

In the case of CH2O, this direct relationship does not apply, because secondary sources are at a maximum rate during the daytime, when dilution and photochemical loss rates are also at a maximum. The diurnal cycle of CH2O in HGB provides an example of the potentially confounding effects of dilution and loss rates. Observed surface concentrations of CH2O (Fig. 7a) exhibit a relatively modest daytime maximum, but those daytime concentrations are present throughout a deep, mixed convective boundary layer (CBL). Nighttime concentrations average only a factor of 2 lower than the daytime maxima, but represent a much shallower mixed layer. After normalizing those observed concentrations for mixing height (Fig. 7b), the average daytime maximum is more than a factor of 10 higher than the average nighttime concentrations. In addition to the greater dilution of formaldehyde during the day, the lifetime of CH2O (3 to 4 h in full sun; Seinfeld and Pandis, 1998) is relatively short during the day but much longer at night. Thus, CH2O from any particular source would accumulate to higher concentrations at night than during the day, even if the emission rates and dilution effects remained constant.
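The normalization applied in Fig. 7b is simply a rescaling of the surface concentration by the ratio of the actual mixing height to a fixed reference height; the sketch below uses illustrative day and night values, not the Fig. 7 data.

    # Normalize surface CH2O to a constant 500 m mixing height (as in
    # Fig. 7b): the column burden c * h is redistributed over the
    # reference depth.
    REF_HEIGHT_M = 500.0

    def normalized_ch2o(surface_ppbv, mixing_height_m):
        return surface_ppbv * mixing_height_m / REF_HEIGHT_M

    day = normalized_ch2o(4.0, 1500.0)   # hypothetical deep daytime CBL
    night = normalized_ch2o(2.0, 300.0)  # hypothetical shallow night layer
    print(day / night)                   # = 10, a factor-of-10 contrast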
The preceding discussion indicates that CH2O from different sources is expected to experience a wide spectrum of loss rates and transport effects, depending upon the diurnal dependence of the source strength. Hence, any analysis that aims to determine the relative importance of different sources must account for these confounding effects. In the present work, careful attention is given to ensuring comparison between sources on the basis of the total mass of formaldehyde emitted or produced, not directly on observed concentrations. Figure 1 of Buzcu Guven and Olaguer (2011) shows that the source factors derived from correlation analyses can have very strong diurnal variation. Such analyses, based solely upon concentrations without accounting for varying transport and loss rates, are expected to err substantially.

A second major problem with the three earlier studies is that they are based on multivariate correlation approaches, and interpretation of the results required assumptions regarding the cause of the correlations; however, the hypothesized causes are incorrect in important respects. First, all three studies take CO, and two of the studies (Rappenglück et al., 2010; Buzcu Guven and Olaguer, 2011) take SO2, as markers for primary emissions of CH2O. They also assume that O3 (Friedfeld et al., 2002) or PAN (Rappenglück et al., 2010; Buzcu Guven and Olaguer, 2011) is a reliable marker for secondary production of CH2O. They then further assume that any correlation of CH2O with CO or SO2 indicates primary emission, and that only correlation of CH2O with O3 or PAN can indicate secondary production. However, none of the studies presents analysis to support these assumptions; in effect, they assume that correlation proves cause. They neglect to consider that ambient CH2O concentrations may well correlate with ambient concentrations of CO from mobile-source emissions and SO2 from industrial emissions because those same sources also emit large quantities of reactive VOCs that form secondary formaldehyde. None of the three studies presents any evidence regarding the actual source of the formaldehyde that correlates with the primary emission tracers.

The TexAQS 2000 aircraft data discussed above in Sect. 3 (Figs. 1-3) can illuminate the dominant cause of the correlation of CH2O with SO2. The 27 and 28 August flights sampled the plume from the HSC under similar meteorological conditions.
Figure 8 shows the CH2O vs. SO2 correlation for those two flights, with the measurements divided into relatively fresh emissions (grey points) and the more aged plume (red points). The fresh emissions show only a weak correlation, while the more aged plume shows a strong correlation, consistent with secondary formation of CH2O during transport. Stutz et al. (2011) report measurements of a plume containing substantial primary CH2O sampled close to its source in Texas City (a site noted in Fig. 1). In this plume the CH2O/SO2 ratio was 0.07-0.12, much smaller than the 0.4-1.3 ratio found downwind of the HSC (Fig. 8), which again indicates that the CH2O downwind of the HSC is of secondary origin.

Similar considerations apply to correlations of CH2O with CO. Vehicle emissions of CO and VOCs, including alkenes, accumulate together in urban air masses. Photochemical processing produces CH2O, which leads to significant correlations of the ambient concentrations of CH2O and CO. Figure 9 illustrates the development of this correlation observed in the 27 and 28 August flights. As the air moves downwind, increased concentrations of both CO (from accumulation of emissions) and CH2O (from accumulation of photochemical production) are observed. There is significant correlation of CH2O with CO (r = 0.76 for all data in Fig. 9), with higher correlations and different slopes observed downwind of the HSC (r = 0.87; red to orange points in Fig. 9) and downwind of the central urban area (r = 0.83; green to purple points in Fig. 9). Importantly, nearly all of the observed CH2O is due to secondary production, as the ratio of CH2O to CO in primary emissions from vehicles (black dotted line in Fig. 9; see Sect. 4.1) is a factor of 15 to 30 smaller than the observed CH2O vs. CO slopes. In summary, it is incorrect to assume that correlations of CH2O with either SO2 or CO necessarily indicate primary emissions of CH2O.

Similarly, neither O3 nor PAN can necessarily be taken as a tracer of secondary CH2O formation without firm analysis to justify that assumption. Further, the correlation coefficient and slope between these species and CH2O vary significantly depending upon the precursor mix and the degree of processing. The formation of both O3 and PAN requires both VOCs and NOx to be present. The photochemical processing of an emitted plume with large amounts of reactive VOCs but without NOx would be expected to form copious amounts of secondary CH2O, but little or no O3 or PAN. Alternatively, the photochemical processing of a plume with large primary emissions of both CH2O and NOx would be expected to form large amounts of O3, but any remaining unreacted CH2O that correlated with that O3 would be considered secondary. Figure 2 shows an example of the variability of the CH2O correlation with O3 within the HSC plume (east of −95.5° longitude). Downwind of the Houston central urban area (west of −95.5° longitude), the CH2O correlation with O3 is significant (r = 0.72) but with a much smaller slope (0.07 ppbv CH2O/ppbv O3) than observed downwind in the HSC plume (as large as 0.15 ppbv CH2O/ppbv O3). The coincident CH2O and PAN data from the 27 and 28 August flights are much more limited, but variability in the correlation coefficient and slope between these two species is also apparent. For example, downwind of the Houston central urban area, the CH2O vs. PAN correlation coefficient is 0.91 with a slope of 2.7 ppbv CH2O/ppbv PAN; the corresponding values downwind of the HSC are 0.77 with a slope of 4.4 ppbv CH2O/ppbv PAN.
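The quantitative point in the CO discussion above can be made explicit with one line of arithmetic: the observed ambient slopes are far larger than anything primary vehicle emissions could produce. The numbers below are illustrative, not the measured flight values.

    # Compare an observed ambient CH2O/CO slope with the primary emission
    # ratio of the vehicle fleet (Sect. 4.1); illustrative values only.
    primary_ratio = 0.0026   # molar CH2O/CO in fresh vehicle exhaust
    observed_slope = 0.06    # hypothetical ambient CH2O vs. CO slope
    print(observed_slope / primary_ratio)  # ~23, within the quoted 15-30 range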
In summary, the correlations between ambient concentrations of CH2O and those of primary pollutants (e.g., SO2 and CO) and other secondary products (e.g., O3 and PAN) arise from complex atmospheric interactions, vary substantially depending upon the mix of precursors in an air mass, and are strongly affected by transport and loss processes. Consequently, source apportionment analyses based solely on correlations cannot be expected to be reliable. The problems with such approaches are expected to be particularly severe when attempting source apportionment analyses of secondary species such as CH2O, since such a large number of processes are involved in determining the correlations between the atmospheric concentrations of the various secondary and primary species.

6 Discussion and conclusions

We have evaluated the rates of secondary production and primary emission of CH2O from petrochemical industrial facilities and on-road vehicles in the Houston, Texas region, based upon ambient measurements made in the 2000-2009 period and a measurement-constrained emission inventory based upon the EPA NEI 2005. This evaluation (summarized in Table 4) shows that by far the predominant source of CH2O (92 ± 4% of total) is secondary production formed during the atmospheric oxidation of the alkenes emitted from the petrochemical facilities that characterize the industrial activity in HGB. These same facilities also emit much smaller amounts of primary CH2O (4 ± 2% of total); these primary emissions (in contrast to the alkene emissions) are well predicted by current emission inventories. CH2O from the on-road vehicle fleet (4 ± 2% of total) is also dominated by the secondary CH2O formed from the alkenes directly emitted by the vehicles. We quantified an upper limit for the amount of primary CH2O emitted by this fleet; that amount is relatively small (28 ± 8% of the vehicle total) and is well predicted by current emission inventories. This evaluation indicates that there is no strong observational evidence for large primary CH2O emissions beyond those presently included in emission inventories. There is also no need to hypothesize such emissions for models to adequately reproduce observed CH2O or O3 concentrations within HGB. Several studies (Wert et al., 2003; Jiang and Fast, 2004; Byun et al., 2007; Kim et al., 2011) have shown reasonable agreement with observations when the ethene and propene emissions are increased according to the results of measured emissions from the petrochemical facilities.

Since CH2O is dominated by secondary production, there is no large fraction of CH2O sources in HGB that can respond to direct emission control efforts focused on primary CH2O emissions, although the Texas City source (Stutz et al., 2011) discussed above could be controlled by a focused effort. Ongoing efforts to control HRVOC emissions from the petrochemical facilities, and VOC emission controls on the motor vehicle fleet, will effectively control secondary CH2O formation in HGB.

We find no evidence that sporadic episodes of primary CH2O emissions from the petrochemical industrial facilities make a significant contribution to CH2O in HGB. Although we do not quantify other possible sources of primary emissions, such as off-road mobile sources, these are not expected to constitute major CH2O emission sources in HGB. Secondary formation of CH2O from biogenic VOCs, especially isoprene, has not been addressed, and air coming into the Houston area from forested regions to the north and east may contain a significant amount of secondary formaldehyde formed from isoprene. This biogenic secondary CH2O could play a role in initiating the photochemical processing of the ozone precursors emitted in Houston.

The correlation-based analyses of Friedfeld et al. (2002), Rappenglück et al. (2010), and Buzcu Guven and Olaguer (2011) reached conclusions in conflict with those presented here.
However, those studies are flawed because (1) they analyze ambient concentrations, not the total quantity of CH2O emitted or formed, and do not account for differential dilution and loss processes between sources; and (2) they rely only on correlations without firmly establishing the causes of those correlations. The analyses presented here indicate that the assumed causes were in fact incorrect. Similar problems must be suspected in any correlation-based analyses of CH2O sources conducted in other urban areas (e.g., Li et al., 1997; Garcia et al., 2006). Indeed, all correlation-based source apportionment analyses of secondary species must be investigated for similar problems before their conclusions can be confidently accepted.

Fig. 1. Distributions of ozone (left) and formaldehyde (right) downwind of the HSC measured by the Electra aircraft during TexAQS 2000. The data were collected between 12:00 and 18:00 local standard time and are plotted on the 27 August 2000 flight track, with the symbols sized and color-coded according to the measured mixing ratios of the respective species, as indicated by the keys above each plot. During this flight, measured winds were southerly (wind direction = 162 ± 17°) and steady (wind speed = 5.4 ± 1.5 m s−1), where standard deviations of the respective quantities are indicated. Text boxes with arrows indicate the approximate locations of specific petrochemical complexes and a measurement site referred to in the text.

Fig. 2. Relationship of formaldehyde versus ozone mixing ratios measured during the 27 August 2000 flight. The data collected at one upwind transect (29.0° N latitude) and five downwind transects from the HSC (east of −95.5° longitude) are shown by different symbols, color-coded according to latitude as indicated in the annotations. All other data are shown as grey dots. Linear least-squares fits to the data from each transect are also shown, color-coded. These fits all pass through the background mixing ratios of O3 (31.7 ppbv) and CH2O (0.5 ppbv), as explained in the text.

Fig. 3. Dependence of the slope (with 95% confidence limits) of the CH2O versus O3 relationship as a function of downwind distance from the HSC. The 27 August 2000 data are from the linear regressions illustrated in Fig. 2; the 28 August 2000 data are from a similar analysis of a second flight conducted under similar meteorological conditions. The bar with arrows indicates the location and approximate width of the HSC industrial region. The farthest downwind transect corresponds to about 6 h of transport time.

Fig. 4. Relationship between CH2O and CO observed at Moody Tower during TRAMP. Gray points include all data with CH2O mixing ratios ≤ 12 ppbv. Small circles color-coded by date indicate the morning traffic peak data discussed in the text. The solid colored lines indicate the linear least-squares fits to the respective color-coded data. The large black circle indicates the central Gulf of Mexico mixing ratios reported by Gilman et al. (2009), and the heavy dotted black line indicates the expected mixing ratio enhancements from primary emissions of CH2O and CO in a ratio of 0.3%.

Fig. 5. Time series of the photolysis rate of NO2 and the mixing ratios of NOy, O3, CH2O and CO observed during the morning of 18 September 2006 at Moody Tower. Small circles indicate the CH2O data for that day included in the linear regression illustrated in Fig. 4. Time is given as local standard time (CST).
Fig. 6. Time series observed during the mornings of 15 and 20 September 2006 at Moody Tower, in the same format as Fig. 5. Small circles indicate the CH2O data for those days included in the linear regressions illustrated in Fig. 4.

Fig. 7. CH2O concentrations and mixing heights measured aboard the NOAA research vessel Ronald H. Brown during TexAQS 2006 within the HGB area. (a) The light blue points include all 30-min averages recorded during the study, and the dark blue symbols indicate averages and standard deviations for 30-min diurnal periods. The red line indicates the average mixing height (i.e., CBL depth). (b) The calculated CH2O concentrations expected if the integrated column concentration in (a) were uniformly mixed to a constant mixing height of 500 m (after Gilman et al., 2009).

Fig. 8. Relationships between CH2O and SO2 measured by the Electra on 27 and 28 August 2000 within the plume from the HSC (taken as east of −95.5° longitude to avoid the plume from the Parish power plant that moves over the western part of the city). The track for the first flight is shown in Fig. 1, and the second flight track was similar. Data are color-coded according to whether they were collected directly over the HSC and immediately downwind (grey points, 29.7-29.8° N) or further downwind (red points, 30.0-30.3° N). The lines and annotations of the respective colors indicate the linear regressions to the data sets.

Fig. 9. Relationships between CH2O and CO measured by the Electra on 27 and 28 August 2000, in the same format as Fig. 8. The track for the first flight is shown in Fig. 1, and the second flight track was similar. Data from within the entire plume downwind of the Houston area are included. Data are color-coded by latitude range (grey points, 29.7-29.8° N; colored points, 30.0-30.3° N) and by longitude according to the color scale in the plot. The lines and annotations of the respective colors indicate the linear regressions to the data sets divided by latitude range and longitude (red east and blue west of −95.4° longitude). The dotted black line indicates the expected mixing ratio enhancements from primary emissions of CH2O and CO from the on-road vehicle fleet with a ratio of 0.3%.

Figure 3 suggests a useful approach for calculating the flux of secondary CH2O formed in plumes downwind of petrochemical facilities: the peak CH2O concentration is reached early in the plume transport, since the daytime lifetime of CH2O is only a few hours.

Table 2. Slopes derived from linear regressions of CH2O vs. CO for the selected morning vehicle traffic peak periods during 2006. Data were collected at the Moody Tower site.

Table 3. Summary of emission fluxes of CO, ethene, propene and formaldehyde estimated for the HGB on-road vehicle fleet, given as 24-h averages. The indicated uncertainties are estimated 95% confidence limits.
Table 4. Summary of the rates of secondary production and primary emission of CH2O in HGB, given as 24-h averages with estimated 1-σ confidence limits. The percentages in parentheses indicate relative contributions to the total (primary + secondary) rate. Units of absolute rates are kmol h−1, and uncertainties of primary emissions are estimated as ±30%.
Libraries in Mexico: Context and Collaboration. An Interview with Dr. Jesús Lau, President, Mexican Library Association

Strengths and challenges facing libraries of all types in Mexico and Latin America are addressed. Librarians seeking opportunities for personal and professional development will find national and international library organizations offering exciting programs for involvement and collaboration.

Introduction

Since 1992, Dr. Jesús Lau has been Director of the USBI VER Library at the Universidad Veracruzana Veracruz-Boca del Rio campus. The library offers information services to 11 faculties and a student population of 15,000. Dr. Lau's first degree is in Law, from the Universidad Autonoma de Sinaloa, located in northwestern Mexico. Immediately after obtaining this first degree, he earned a Master's degree in Library Science at the University of Denver, where he met Janet Lee while both were graduate students. Seven years later, Dr. Lau received a Ph.D. from Sheffield University in England. On the professional side, he is the President of the Mexican Library Association, 2009 to 2011. Dr. Lau is also a member of the Governing Board and of the Executive Committee of the International Federation of Library Associations and Institutions (IFLA) and serves on several editorial/advisory boards of various publications, among them Collaborative Librarianship. As part of our interview series with members of our Advisory Board, Collaborative Librarianship caught up with Dr. Lau to find out about Mexican libraries and the opportunities and challenges in collaboration.

The first group of libraries that runs well in Mexico is academic libraries. They hire the greatest number of library professionals, have the largest budgets, and they acquire the latest technology. Obviously not all libraries have the same level of development, but the medium and large academic libraries tend to have good facilities and services. The second group of libraries that runs well in Mexico is public libraries; in fact, Mexico's is the largest public library system in Latin America. This is a great achievement considering Brazil is twice the size in terms of geography and population. Again, the size and quality of services vary more or less according to the size of the town. The third type of library that has also achieved significant development is special libraries. They certainly are well organized and well funded, and they provide some of the best information services. School libraries are the least developed; they hardly exist, and those that do have limited resources. There are approximately 5,000 school libraries in the more than 120,000 schools. On a positive note, however, the Federal Government has had a reading program that has been quite successful in providing small library collections to all schools in the country, whether public or private. According to Federal Government statistics, the government has distributed more than 200 million books to schools in the last 10 years. So, although most school libraries don't formally exist, there are library collections of some description in every school in Mexico.

Now, in considering the strengths of libraries in Mexico, I would say that one strength is service orientation. Most libraries, including academic and special libraries, are open to anyone who lives in the community. The second strength is that library demand is growing, in large part due to the expansion of education over the past few decades and due to the growth of internet access in urban settings.
While the population growth rate is expected to slow because of changing demographics, internet access and use are expected to increase. The question about the future of libraries in Mexico is difficult to answer, but clearly Mexican libraries will be affected by new technological developments, especially the expansion of internet services. People increasingly rely on the internet for their day-to-day information needs. The increasing availability of Mexican information sources on the internet also fuels this demand. Libraries and librarians will have to adapt to the increased demand for web-based information. The greatest opportunity for all types of libraries is to adopt the new teaching role that society demands of them. Our users need to develop information capabilities that take advantage of the wider availability of free and open internet access, including the ability to locate, retrieve, evaluate and use information for a broad range of purposes.

CL: As President of the Mexican Library Association, tell us about recent initiatives of the Association and the importance of a professional organization for libraries and librarianship in Mexico.

Lau: The Mexican Library Association is one of the oldest associations in Latin America. It was originally founded in the early 1920s, but established its "legal existence" in 1956. During my term, 2009-2011, I am working vigorously with the Executive Committee, Chapters, and Committees to move the Association to a higher level of development, to provide the services that our members demand. The first step I took as President-Elect was to initiate consultations leading to the drafting of a long-term plan. After a year and a half, we completed the "AMBAC Strategic Plan 2015, Leadership in Action." It is the first strategic plan of the Association. The general objective is to make the Association more dynamic and relevant to its members. We are currently working on the main objectives, goals and actions, among them: (1) the creation of an Institutional Repository that includes the digitization of the proceedings of the last 40 Association annual conferences, which began in the 1950s, as well as all the publications of the Association, such as the bulletin "Noticiero de AMBAC." Members will soon be able to retrieve each publication and each paper individually. The collection of papers has been catalogued using Dublin Core metadata standards. A side project is (2) the indexing of the Association's bulletin "Noticiero de AMBAC," which dates back to 1967 and records the history of the Association and of Mexican libraries in general. It is believed to be the oldest library association bulletin in Latin America. Another key action is (3) the development of a new website that will have more content and be easier to navigate. We hope to have the first release during the summer of 2010. As action item (4), the Association hired a graphic designer to work on the corporate image of the organization so that all its activities and communications are similarly branded, including official colors, typography, layout and other design elements. And (5), we are working on strengthening communication channels with the members, including two newly created listservs: one that is open to anyone and one just for members. We also created Facebook and Twitter accounts to deliver AMBAC news. Our younger members, we assume, are Facebook and Twitter fans, so we are hoping to meet the communication expectations of a new generation of Association members.
Other initiatives include a donation campaign for modernizing the Library Association facilities; our offices in Mexico City really need to be upgraded to achieve the Association's strategic plan. The campaign will be rolled out during the Association's Annual Conference this May. We hope to raise at least 50% of the funding required for the project. A campaign is also underway to increase member participation in library conferences and meetings. We have plans in place for a very exciting Annual Conference that will take place in colonial Zacatecas, a high-altitude, former mining city located in the north-central part of Mexico, offering dry, temperate weather for that time of year. Promising to be one of the best conferences in recent years, it will convene in the new Convention Center at the heart of the city. We have an excellent program with three keynote speakers: the President of ALA, Dr. Camila Alire (also on the Advisory Board of Collaborative Librarianship), and two well-known Mexican colleagues, the Director of the Library Research Center of UNAM and a mining expert from Zacatecas.

CL: You have had many opportunities to travel throughout Latin America. What particular successes and challenges do you see affecting library collaboration within and between these countries?

Lau: I think there are three main challenges for library collaboration in Latin America: social, political and economic. Among these, the greatest is social. Social development determines a library association's capabilities in any particular country. Nations with less development also have less association growth: less interest in association affiliation and less association work, with fewer library professionals. Latin America needs greater development in collaboration leading to stronger library associations. A socially well-developed society has citizens with more social skills for collaboration and for association. Political and economic challenges also become barriers to collaboration. As probably the second greatest challenge, economic factors play an important role in collaboration, since funding is crucial to cooperative programs. Brazil, Mexico, Chile, and Colombia have more active library exchanges because they have greater economic resources. Political boundaries also frequently limit the movement of people and the exchange of library resources. In summary, Latin America needs a more association-friendly attitude, and library associations need to be more meaningful to their members. However, despite these challenges, there is collaboration in Latin America, including organizations for joint training programs, professional librarian exchanges, and shared publications. But with a region as big as Latin America, there is still a long road to go.

CL: International border communities are uniquely positioned, perhaps, to engage in library collaboration across national boundaries. What have been such opportunities and challenges between Mexico and the USA, for example?

Lau: This is a difficult question to answer, but taking into account my experience of working at Juarez University, located just across the border from El Paso, Texas, I can say that there has been excellent collaboration. The best example is the Transborder Library Forum (FORO), which, if I recall correctly, will be 20 years old next year when the Forum meets in Austin, Texas.
This is a unique conference that takes place between two countries with very different levels of library development: the USA, with impressive library development, and Mexico, a smaller and less developed country with mid-level library development. FORO was first organized after the signing of NAFTA, the North American Free Trade Agreement, between both countries as well as Canada. FORO's objective is to use NAFTA's framework for better and more productive library collaboration, and it has been the vehicle for some important bi-national projects, one being an interlibrary exchange between Southwestern libraries in the USA and various Mexican libraries. Supported by the US Embassy through the Benjamin Franklin Library, its diplomatic delivery system is used for exchanging library material. Another example of collaboration is Juarez University's reciprocal borrowing agreement with El Paso Community College and New Mexico State University in Las Cruces, an arrangement in place now for many years. There are other similar agreements between the Californias.

Some challenges that FORO and any international collaboration face are cultural in nature. American librarians tend to be more action oriented; they normally like things done quickly and efficiently. The Mexican culture, however, is more leisurely, and it relies more on face-to-face communication than on sending email messages or using other written communication. Collaboration tends to halt if the cultures are not understood by the participating parties. As a personal example, I helped host a FORO conference and booked a hotel that offered some, but limited, vendor exhibit space. American companies quickly booked almost all of the tables a month before the event, and then suddenly we realized there would not be space for Mexican vendors. We had to book another hotel that fortunately had the vendor space we needed, but this experience demonstrated the cultural difference in time management. Other cross-border challenges in library collaboration are economic in nature. American libraries have more resources, while Mexican libraries generally are smaller, normally with few or no professional staff who can take part in collaboration projects. The political environment can be another challenge, especially when a library manager wants to send someone to the USA. Getting a visa can be tricky if the librarian comes from a region where salaries are low; he or she may not be able to declare and prove enough income or savings to get the visa. However, there are still many opportunities to cooperate with American colleagues. I, myself, have greatly benefitted from opportunities for collaboration with the American library community in conferences, staff exchanges, training programs and writing for American publications. In summary, I think librarians from both sides of the border have opportunities for collaboration as long as they overcome the cultural or social barriers.

CL: As a leader on the international stage, what do you see as some emerging opportunities for library collaboration?

Lau: Communication technology enables us to do many things that we could not do before. We can now communicate easily with colleagues from most corners of the world.
The internet is a wonderful amalgamation of a great variety of telecommunication technologies, and it allows people to form and be part of networks of all types, as long as we have a disposition for collaboration and are able to overcome the aforementioned cultural factors and other issues related to language. It doesn't matter whether one is a librarian from a developing country or a rich country; one can always collaborate in some way. The main factor at the international level is willingness to spend time to make the library world a better place. The benefit of international collaboration is learning about other cultures and, specifically, learning how libraries operate. Important for me was understanding how librarians themselves live. Coming to terms with the values of colleagues, and learning the history and geography of foreign groups and societies, makes international collaboration a very enriching activity. As an example, under the direction of the President of IFLA, I joined a three-party editorial committee to compile a book on "Access to Knowledge" (A2K). One of my editorial colleagues is from Italy and the other from South Africa. A call for chapter contributions was made via the internet, and we received chapter proposals from every continent. This amazing response demonstrated the value of international collaboration and the way understanding and awareness can be broadened in the international arena.

CL: As a leader in IFLA, what role do you see it playing in the field of library collaboration?

Lau: IFLA is a premier association in collaboration. I would say that it is the largest international library association that fully engages in collaboration and cooperation among libraries and librarians. For example, IFLA offers all of its conference papers and documents free of charge through its website, www.ifla.org, not only to members but to the rest of the library world. This large collection of conference papers is a unique source of library information from most countries, its greatest asset being the contributions from developing countries that users will not find anywhere else. IFLA, however, faces a challenge in translating documents into its seven official languages. While publications are mainly in English, the international lingua franca, great effort is made to translate papers into the other languages. The job is accomplished mainly with the help of IFLA member volunteers representing their respective library associations. Another great contribution of IFLA is its standards, guidelines and manifestos for all types of libraries and library-related activities. These documents are tools that help develop potential areas of library cooperation. Its international conference, the World Library and Information Congress, is also a major venue for the exchange of library experiences and knowledge, where participants come from approximately 100 countries. It is a wonderful place to meet library colleagues from around the planet and to engage in information sharing and library development on a wide scale. Funding is one challenge facing IFLA, but like many organizations it is meeting this challenge through the work of its membership. Library associations, individual members and corporate partners all contribute to its financial viability. Participants perform professional association duties for free, for the sake of the good of the global library community.

CL: Library collaboration on the international stage perhaps is more complex and challenging than on the national or local levels.
If this is the case, what might be some of these challenges and how could they be overcome?

Lau: It is true that international collaboration is complex, but, I feel, it is easier than doing this work on a national level. One can always find an international partner who is willing to participate in a joint international task, especially in countries in North America, Europe and some of the larger Asian countries with well-developed library organizations. This may not be the case in developing countries, where on the national level one faces challenges related to language and to social and political environments. Again, on the national or regional level, the "culture of cooperation" is often weak or less developed. At the local level, collaboration can be impeded by the low number of libraries with professional librarians. For example, the library where I work is the largest in the region and belongs to the largest university in the state, but other than this university campus library, there are few libraries in the region that have the capability or impetus to fully engage in joint efforts. Often these smaller libraries are run by nonprofessionals who seem to stay in their management roles for short periods. I led a regional annual colloquium on library management (ALCI) that gathered library colleagues to discuss common management concerns; we have accomplished some objectives, but due to some of the factors noted above, we have been unable to develop big collaboration projects, like joint borrowing agreements. To sum up, the complexity of collaboration covers the local, national and then international levels, the local one being the most difficult. How can we overcome these challenges? At least in my local community, we need to have library professionals in charge of libraries, because they normally have a better idea of what collaboration can bring to their libraries. If there is little possibility of getting library professionals, we need to provide more library training, such as workshops on collaboration and cooperation, to the nonprofessional library leaders, so long as they remain in their positions for a reasonable length of time.

CL: If a librarian, new or seasoned, wants to become involved in library collaboration across national boundaries, how might one proceed? Would it entail getting involved in international library organizations? Do development organizations offer much by way of library programs? Are there other avenues of involvement?

Lau: My first recommendation for any young or seasoned librarian is to become a member of library associations, at least one local, one national and one international. Obviously, when someone becomes a member of an international association, the opportunities for wide-scale collaboration are greater because you are meeting foreign library professionals. Associations, in general, are the best place to develop skills in international collaboration and in library leadership that sees beyond one's own borders. If one cannot afford to pay international association membership fees, there are many national library associations with international committees, like those of the American Library Association and the Special Libraries Association. Joining an association and becoming part of an international committee is great training and is the best place to meet people who are interested in international collaboration.
Development organizations are also excellent resources, although they are perhaps more difficult to approach; using your information skills, you can dig up information to identify what development organizations offer (or need) in regard to international library collaboration. Another strategy is to attend international conferences, not necessarily library-related, when they are offered in your community or region. For example, I coordinate an online library management training program for Latin American university library staff. The program is organized by the Inter-American Organization of Higher Education, based in Canada, with initial funding from the Canadian Development Agency. I was able to attend an Inter-American university meeting in my own town, Veracruz, where I met the President of this association and talked to her about library projects. To my surprise, a month later I received an invitation to lead this training program, and I immediately accepted the opportunity. Being in charge of creating the training program from scratch was an enriching experience, and I have been able to reach nearly 1,000 people from most Latin American countries in the 60-hour post-graduate level course. This opportunity has also allowed me to increase my online course development skills and to work with library colleagues from North America and Latin America in the management and development of the program. An important organization for library collaboration is UNESCO. I have worked on different projects with this United Nations organization, and if you visit and follow its website announcements, I am positive you can find a project that is relevant and of interest. If you follow through, you can share your expertise and your knowledge with the rest of the international community. Another source of support for library collaboration is the foreign ministry of most governments. One of its common objectives is to encourage cooperative agreements with other nations, and often literacy programs and libraries factor into such agreements. In conclusion, there are opportunities for collaboration if one looks for them. If you get engaged in international library work, your personal life will be transformed every time you encounter and embrace a foreign culture; your personal boundaries will be expanded. International collaboration is needed if we want to make this world a more livable place.
Building models for coupled thermo-elastic problems in different software architectures using a new model exchange format

To reduce the energy consumption of machine tools while preserving productivity and quality, the deformation caused by thermal loads must be taken into account. Two approaches to reach this goal are the integration of model-predictive error correction algorithms into the machine controller and a machine design that mitigates unwanted thermo-elastic behaviour.

Introduction

The Collaborative Research Centre/Transregio 96 (CRC/TR 96) aims to increase productivity and quality while reducing energy consumption by incorporating thermo-elastic simulations into machine development and machine application. From the multitude of model developers, users and different scientific backgrounds, a demand emerges for a cross-platform approach to exchanging and using models, which is tackled by the use of a custom exchange format. The format, along with a simple use case, will be shown.

Exchange Format

The proposed exchange format uses JavaScript Object Notation (JSON), a file format widely used on the web. Therefore, libraries for parsing and writing exist for nearly all platforms and programming languages. As it is a plain-text format, version control systems can be used to keep track of model changes easily. The model structure is separated into five core sections, shown in Fig. 1. The model comprises a geometry section, containing parts with their respective meshes and material associations. The movement section describes the movements of the machine parts. In the section boundary conditions - sources - constraints, the physical information for the model is defined and ordered by physical domain. Finally, so-called properties are given, which can be constants, look-up tables or functions, and are used to parametrize boundary conditions, movements and so on. The major advantage of a custom structured format is its high flexibility, as additional information can easily be added and format changes can be incorporated quickly. As the format is used by multiple subprojects of the CRC/TR 96, a common vocabulary for the same entities, despite different modelling approaches and modeller backgrounds, has been established.

Example Use-Case

On top of the file format, a simulation tool chain, as shown in Fig. 2, was developed. The model definition to generate a JSON model instance is realised in ANSYS® Mechanical using a custom extension. The JSON model instance is fed into the open-source finite element (FE) toolbox DUNE to carry out a transient FE analysis of the problem. Additionally, the toolbox can generate a state-space system (SSS). The size of those models can be reduced using custom model order reduction (MOR) techniques from the MESS toolbox. The transient simulation for both SSS can be carried out with the same custom solver. (Fig. 2 components: JSON model instance, state-space model, MOR, reduced-order state-space model, time-step integration, simulation results, export.) The tool chain shows the major advantages of platform-independent, non-proprietary exchange formats. Open-source and closed-source software become interoperable, and problem-tailored numerical methods can be applied to industrial problems. At the same time, the development of the tools remains independent.
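To make the format concrete, the following is a minimal sketch, in Python, of what a JSON model instance flowing through this chain might look like. All keys and values here are illustrative assumptions for demonstration, not the published schema of the CRC/TR 96 format.

```python
import json

# Minimal sketch of a model instance with the core sections described
# above. All keys and values are illustrative, not the official schema.
model = {
    "geometry": [
        {"part": "column", "mesh": "column.msh", "material": "steel"},
    ],
    "movement": [
        {"part": "slide", "axis": [0.0, 0.0, 1.0], "property": "feed_profile"},
    ],
    "boundaryConditions": {
        "thermal": [
            {"type": "heatSource", "part": "spindle", "property": "motor_losses"},
        ],
        "elastic": [
            {"type": "fixedSupport", "part": "column"},
        ],
    },
    "properties": {
        # A constant property (here: heat input in W).
        "motor_losses": {"kind": "constant", "value": 150.0},
        # A look-up table property (here: feed velocity over time).
        "feed_profile": {"kind": "lookupTable",
                         "time": [0.0, 1.0, 2.0],      # s
                         "value": [0.0, 0.1, 0.0]},    # m/s
    },
}

# A plain-text artifact like this is easy to diff and version-control.
with open("model_instance.json", "w") as f:
    json.dump(model, f, indent=2)
```

Because the instance is ordinary JSON, the same file can be produced by the ANSYS® Mechanical extension and consumed by DUNE or any other tool with a JSON parser.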
In Fig. 3, the results of the FE model (a) and the SSS (b) are identical, because the problem is linear in the temperature and the same coupling method was used. The maximum difference between the reduced-order solution (c) and the full model (a) has a magnitude of 0.25 K, while the reduced model is 1000 times smaller. In turn, the time integration is at least 1000 times faster than the simulation of the full model. Those numbers show how the use of MOR methods enables the application of complex FE models for the real-time control of machine tools. Further details on the modelling techniques and more results can be found in [1] and [2].

Summary and Outlook

This paper gives an overview of a custom model description format for coupled thermo-elastic problems. The tool chain uses the format to exchange models between different tools and realize simulation tasks in the CRC/TR 96. Additionally, some results for a simplified demonstrator machine were shown. The next steps include the extension of the format to facilitate its use for coupled thermo-elastic simulations; for this, the tool chain needs some extensions with respect to the boundary conditions for elasticity. Furthermore, it is planned to release the format, along with a reference implementation in C++, which will allow its incorporation into other tools.
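As a closing aside, the core idea behind such a reduction, projecting a large linear state-space system onto a low-dimensional basis, can be sketched in a few lines. The snippet below uses a POD basis from snapshots purely as an assumption for illustration; the MESS toolbox applies more sophisticated, system-theoretic MOR methods.

```python
import numpy as np

def reduce_state_space(A, B, C, X_snapshots, r):
    """Galerkin projection of x' = A x + B u, y = C x onto an
    r-dimensional basis built from solution snapshots (POD)."""
    # Left singular vectors of the snapshot matrix form the basis V.
    V, _, _ = np.linalg.svd(X_snapshots, full_matrices=False)
    V = V[:, :r]
    Ar = V.T @ A @ V        # r x r instead of n x n
    Br = V.T @ B
    Cr = C @ V
    return Ar, Br, Cr, V

# Toy demonstration with a random stable system (illustrative only).
rng = np.random.default_rng(0)
n = 200
A = -np.eye(n) + 0.01 * rng.standard_normal((n, n))
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))
X = rng.standard_normal((n, 50))          # stand-in for snapshots
Ar, Br, Cr, V = reduce_state_space(A, B, C, X, r=10)
print(Ar.shape)                           # (10, 10)
```

Time-stepping the 10-state system instead of the 200-state one is what produces speedups of the order reported above, at the cost of a small, controllable approximation error.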
Kinematics and Dynamics Analysis of a 3-DOF Upper-Limb Exoskeleton with an Internally Rotated Elbow Joint

The contradiction between the self-weight and load capacity of a power-assisted upper-limb exoskeleton for material handling is unresolved. In this paper, a non-anthropomorphic 3-degree-of-freedom (DOF) upper-limb exoskeleton with an internally rotated elbow joint is proposed, based on an anthropomorphic 5-DOF upper-limb exoskeleton for power-assisted activity. The proposed 3-DOF upper-limb exoskeleton contains a 2-DOF shoulder joint and a 1-DOF internally rotated elbow joint. The structural parameters of the 3-DOF upper-limb exoskeleton were determined, and the differences and singularities of the two exoskeletons were analyzed. The workspace, joint torques and power consumption of the two exoskeletons were analyzed by kinematics and dynamics, and an exoskeleton prototype experiment was performed. The results showed that, compared with a typical anthropomorphic upper-limb exoskeleton, the non-anthropomorphic 3-DOF upper-limb exoskeleton had the same actual workspace, eliminated singularities within the workspace, improved the elbow joint force situation, and significantly reduced the maximum elbow joint torque as well as the elbow external-flexion/internal-extension and shoulder flexion/extension power consumption. The proposed non-anthropomorphic 3-DOF upper-limb exoskeleton can be applied to a power-assisted upper-limb exoskeleton in industrial settings.

Introduction

The exoskeleton is a human-robot interaction system that enhances the operator's strength in various environments. It has been widely used in the rehabilitation, medical, haptic interaction and power-assisted fields. The power-assisted upper-limb exoskeleton acts as a power amplifier to assist the user in performing tasks that are impossible or difficult to accomplish on human power alone; it is mainly used in rehabilitation, material handling and other fields. In industrial applications, the movement of the upper-limb exoskeleton used for material handling is in front of the body, and the weight of the load is heavy [1]. Therefore, it requires a light structure and stronger power than rehabilitation exoskeletons. Existing exoskeletons with anthropomorphic structures can complete material handling [2-6]. However, they have many joints, corresponding drivers and other components; thus, such an exoskeleton has a large volume and a complex structure, and its own weight is relatively heavy. Hence, in industrial applications, it is necessary to ensure a heavy load capacity together with a light self-weight, which is difficult for structural design.

Professor Amami Kanou of the University of Tsukuba in Japan developed the HAL-5 exoskeleton robot, which weighs 21 kg [7,8]. This robot can help rescuers, porters and other physically demanding staff, and can also help disabled and elderly people take care of themselves. Galina Ivanova and Sergey Bulavintsev of Korea University of Technology and Education developed a 7-DOF upper-limb exoskeleton that can assist the human shoulder and elbow joints in completing daily activities [9]. Joel C.
Perry and Jacob Rosen of the University of Washington designed an exoskeleton called CADEN-7 with seven active DOFs [10-12]. It can imitate the actions of 95% of healthy arms and is mainly used for rehabilitation, virtual reality simulation and power assistance. Professor Yamamoto of the Kanagawa Institute of Technology in Japan developed a wearable exoskeleton [13,14]. It can increase a person's power by 0.5-1 times and helps nurses move patients. R.A.R.C. Gopura and Kazuo Kiguchi of the University of Saga in Japan developed a 6-DOF upper-extremity exoskeleton called SUEFUL-6, which is used primarily to assist with the daily activities of the upper limbs [15]. F. Martinez and I. Retolaza in Spain designed the 5-DOF upper-limb exoskeleton IKO, which is mainly used to provide help in performing daily activities [16]. Wenbin Chen and Caihua Xiong of Huazhong University of Science and Technology in China developed a 10-DOF upper-limb exoskeleton prototype, which is mainly used to assist the body in daily activities; its shoulder joint contains 6 DOFs, with the remaining 4 DOFs at the elbow and wrist [17]. F. Xiao et al. of Hefei University of Technology designed a 6-DOF cable-driven upper-limb exoskeleton (CABexo) based on epicyclic gear trains. This exoskeleton has a parallel mechanical structure, in contrast to the traditional serial structure, but is stiffer and has a stronger carrying capacity. Comparisons between CABexo and some existing exoskeleton systems show that it is able to meet the movement needs of most disabled and elderly individuals [18]. These power-assisted exoskeletons with anthropomorphic structures fit well with the human arm in the course of work and can provide help to a user in daily life. Although they are lighter and less bulky, they have less joint torque and limited load capacity and cannot be used in an industrial setting.

The Raytheon Company developed the XOS2 exoskeleton; the upper-limb portion has 5 DOFs, including a 3-DOF shoulder and a 2-DOF elbow. The exoskeleton can perform lateral lifting, raising up, pull-down and other actions. It can amplify the user's strength and endurance and allows them, for example, to lift 200-lb loads repeatedly without tiring [2,3]. Massimo Bergamasco of the Scuola Superiore Sant'Anna in Italy developed a whole-body exoskeleton called the Body Extender system, with a weight of 160 kg [4-6]. The upper limb of this exoskeleton has 5 DOFs (including a grasp DOF). It can amplify the power of the human body 3-20 times, and each arm can lift a 50-kg load. These power-assisted exoskeletons with anthropomorphic structures have large joint torque and heavy load capacity. However, due to their complex structure and heavy weight, they are not suitable for industrial applications.

Lee et al. [1] of Hanyang University developed an upper-limb exoskeleton for material handling, called the Hanyang University EXoskeleton Assistive Robot (HEXAR). It has 3 DOFs, including a 2-DOF shoulder and a 1-DOF elbow, and is actuated by motors. The exoskeleton is used to handle heavy loads in industrial settings in front of the operator's body.
When carrying loads in front of the body, HEXAR's assistive principle is the same as that of the XOS2 developed by Raytheon and the Body Extender system developed by the Scuola Superiore Sant'Anna in Italy. They are all assisted by the anthropomorphic shoulder flexion/extension and elbow flexion/extension. The torque generated by a load on the elbow joint is borne entirely by the elbow flexion/extension actuator, which makes the elbow actuator large and heavy, so the volume and weight of the upper-limb exoskeleton can hardly be reduced.

Many of the existing power-assisted exoskeletons adopt an anthropomorphic structure similar to that of the human body, with few considerations for non-anthropomorphic structures. In this article, a typical anthropomorphic 5-DOF power-assisted upper-limb exoskeleton for material handling was designed based on the upper-limb structure of the human body. On this basis, the joints were analyzed and optimized to reduce the self-weight while ensuring the load capacity and the workspace. Then, a non-anthropomorphic 3-DOF upper-limb exoskeleton for industrial settings was proposed, in which the traditional humanoid elbow joint is internally rotated by 90°. The shoulder was modeled as a 2-DOF joint, and the elbow was modeled as a 1-DOF joint. The proposed exoskeleton can improve the force status of the elbow and reduce the required torque and power consumption of the elbow joint when carrying a load in industrial settings, and thus reduce the size and weight of the elbow joint actuator.

The paper is structured as follows. Section 2 gives the principle of the 5-DOF upper-limb exoskeleton and the 3-DOF upper-limb exoskeleton, and analyzes their differences and singularities. In Section 3, the forward and inverse kinematics analysis based on the workspace is presented. Section 4 illustrates the dynamic analysis of the joint torque and power consumption. Section 5 presents the experimental results of the exoskeleton prototypes. In Section 6, the authors draw conclusions and provide directions for future work.

Principle of the 5-DOF Upper-Limb Exoskeleton

The structure of the human arm is complex, so the joints and segments of a human arm are usually simplified into a 7-DOF kinematic system [19], as shown in Figure 1. The shoulder joint can be modeled as a 3-DOF ball-and-socket joint: flexion/extension (Z2), abduction/adduction (Z3) and internal/external rotation (Z1). The wrist joint can be modeled as a 3-DOF ball-and-socket joint: palmar flexion/dorsiflexion (Z6), ulnar deviation/radial deviation (Z7) and medial rotation (Z5). The elbow joint can be modeled as a 1-DOF hinge joint (Z4) [20-22]. The motion ranges of the joints are shown in Table 1 [23]. Here, the 3-DOF ball-and-socket wrist joint is primarily used to ensure the flexibility of the end-effector. Because the exoskeleton mainly performs lifting, pulling and pushing of a load during material handling, it does not require the end-effector to have the same DOFs as the hand. To further simplify the structure, only the wrist internal/external rotation is retained. The elbow joint is considered a 2-DOF joint, i.e., flexion/extension and pronation/supination [24].

Based on the above analysis, a hydraulically driven 5-DOF upper-limb exoskeleton was designed, as shown in Figure 2.
The 5 DOFs are as follows: shoulder flexion/extension θ^5_2, shoulder adduction/abduction θ^5_1, shoulder internal/external rotation θ^5_3, elbow flexion/extension θ^5_4 and elbow pronation/supination θ^5_5 (the superscript denotes the exoskeleton variant, the subscript the joint). While operating the 5-DOF upper-limb exoskeleton, the operator can drive the exoskeleton through the handle; the exoskeleton upper-arm and forearm fit closely with the upper arm and forearm of the human body, and a strap between the human and the exoskeleton is not required. We measured the motion range of the upper limb of the human body when performing the typical movements of material handling (raising up and lateral lifting) and, after appropriately increasing and rounding the values, obtained the motion range of the 5-DOF upper-limb exoskeleton, described in Table 2.

The 5-DOF upper-limb exoskeleton, whose degrees of freedom are similar to those of the human body, is a typical anthropomorphic upper-limb exoskeleton. The power-assisted principle of its shoulder and elbow when carrying loads is the same as that of the XOS2 [2,3], the Body Extender system [4-6] and HEXAR [1]. Thus, the workspace, joint torques and power consumption of the anthropomorphic upper-limb exoskeleton are obtained by analyzing the 5-DOF upper-limb exoskeleton.

Principle of the 3-DOF Upper-Limb Exoskeleton

The 5-DOF upper-limb exoskeleton is flexible because of its multiple degrees of freedom, but too many degrees of freedom lead to a complex structure, which increases the self-weight. Therefore, the degrees of freedom of the 5-DOF upper-limb exoskeleton are analyzed and optimized. The functions of the degrees of freedom are shown in Table 3 (for example, shoulder flexion/extension enables lifting and pull-down actions that require the upper limb to swing back and forth, while elbow pronation/supination θ^5_5 increases the flexibility of the end-effector).

The shoulder flexion/extension θ^5_2, elbow flexion/extension θ^5_4 and shoulder adduction/abduction θ^5_1 are significant for performing tasks. The shoulder internal/external rotation θ^5_3 and elbow pronation/supination θ^5_5 are mainly used to increase the workspace and flexibility. The weights of these two joints and their corresponding actuators account for 24% of the total weight of the 5-DOF upper-limb exoskeleton. In order to simplify the structure and reduce weight, the shoulder internal/external rotation θ^5_3, the elbow pronation/supination θ^5_5 and their actuators are removed, which causes the problem of a smaller workspace. To solve this problem, internally rotating the elbow flexion/extension axis by 90° around the upper arm creates a new degree of freedom: elbow external-flexion/internal-extension (elbow EF/IE) θ^3_3. The problem of end-effector flexibility is solved by tools designed for different loads.

Based on the results of the above analysis, a non-anthropomorphic 3-DOF upper-limb exoskeleton with an internally rotated elbow joint is proposed. The 3 DOFs are as follows: shoulder flexion/extension θ^3_1, shoulder adduction/abduction θ^3_2 and elbow external-flexion/internal-extension θ^3_3, as shown in Figure 3.
Different from the 5-DOF upper-limb exoskeleton, when the upper limb of the human body performs elbow flexion/extension in the sagittal plane, the 3-DOF upper-limb exoskeleton performs elbow external-flexion/internal-extension in a plane perpendicular to the sagittal plane; the exoskeleton and the human arm are not fully attached together. According to the motion range of the human joints and the motion range of the 5-DOF upper-limb exoskeleton, the motion range of the 3-DOF upper-limb exoskeleton is determined, as shown in Table 4.

As described in Figure 2, the 5-DOF upper-limb exoskeleton is an anthropomorphic structure, so the lengths of its upper-arm and forearm can be taken from the human body. The 3-DOF upper-limb exoskeleton, however, is a non-anthropomorphic structure. The lengths of its upper-arm O1A1 and forearm A1B1 are determined by the workspace of the 5-DOF upper-limb exoskeleton and the angle α between the forearm of the exoskeleton and the sagittal plane, shown in Figure 4a, so that the 3-DOF upper-limb exoskeleton has the same workspace as the 5-DOF upper-limb exoskeleton. For the 3-DOF upper-limb exoskeleton to reach the workspace of the 5-DOF upper-limb exoskeleton, the distance between the end-effector and the shoulder should be the same for both. The lengths of the upper-arm and forearm of the 5-DOF upper-limb exoskeleton are 330 mm and 370 mm, respectively. In the sagittal plane of the shoulder joint, the distance between the end-effector of the 5-DOF upper-limb exoskeleton and the shoulder joint is shown in Figure 5a, and the corresponding distance for the 3-DOF upper-limb exoskeleton is shown in Figure 5b.

In Figure 5a, O0B0 and β follow from the triangle O0A0B0:

O0B0 = sqrt(O0A0^2 + A0B0^2 - 2 O0A0 A0B0 cos θ0),   (1)
β = arcsin(O0A0 sin θ0 / O0B0).   (2)

As shown in Table 2, 62° ≤ θ0 ≤ 180°, and using Equations (1) and (2), max(β) is 54°, while O0B0 and O1B1 have the same range of variation of [362.3, 700] mm. In Figure 5b, O1B1 and α can be calculated analogously from the geometry using Equations (3) and (4). With the corresponding angle ranging from 62° to 180° (Table 3), max(α) is 53°, O1A1 is 328 mm, and A1B1 is 372 mm.

Differences Analysis

Since the elbow joint of the 3-DOF upper-limb exoskeleton is internally rotated by 90°, the upper-arm and forearm of the exoskeleton are not parallel to the human arm. This problem mainly occurs when the human elbow performs a flexion action, i.e., when the elbow is in a non-straight state. In the course of work, the impact of this non-parallelism is mainly reflected in two aspects:

1. There is an angle α between the forearm of the exoskeleton and the sagittal plane, as shown in Figure 4a. This results in relative rotation between the end-effector and the hand. When the flexion of the human elbow joint is at its maximum, the angle of relative rotation between the end-effector and the hand is 53°. Since the movements of the end-effector under human control are continuous and smooth, the angle α does not affect the manipulability of the exoskeleton. This will be verified by the experiment in Section 5.

2. The lengths of the human upper arm and forearm are taken from the human dimensions of Chinese adults in the National Standard of the People's Republic of China, as shown in Figure 4b. There is an angle β between the human forearm and the plane formed by the upper-arm and forearm of the exoskeleton. This results in ulnar deviation of the human wrist. When the elbow flexion of the human body is at its maximum, the ulnar deviation is 54°. The maximum ulnar deviation allowed by the physiological structure is 55° [23], which is greater than the angle β. Therefore, the ulnar deviation β does not affect the manipulation of the exoskeleton.

Singularity Analysis

Since a mechanical system loses some degrees of freedom at singular points, some motions cannot be achieved there, so singularities should be avoided in the design [20]. As described in Figure 2, the 5-DOF upper-limb exoskeleton has a singular position: in this configuration, the arm points obliquely upward, i.e., θ^5_2 = 135°. In this position, abduction or adduction motion cannot be achieved, because the rotation axes of θ^5_1 and θ^5_3 are collinear. Since the 3-DOF upper-limb exoskeleton does not have two or more collinear rotation axes inside the workspace, there is no singularity within the workspace [25], and it is not necessary to avoid singularity in the workspace by rotating the coordinate axes or using redundant degrees of freedom [4,20].

Kinematic Analysis

In order to verify whether the end-effector of the 3-DOF upper-limb exoskeleton can reach all points within the workspace of the 5-DOF upper-limb exoskeleton, the 5-DOF and 3-DOF upper-limb exoskeletons are analyzed with forward and inverse kinematics methods, respectively.

Forward Kinematics Analysis of the 5-DOF Upper-Limb Exoskeleton

Elbow pronation/supination θ^5_5 has no effect on the size of the workspace, so it is considered a fixed constraint in the kinematic modeling. In order to simplify the configuration, the tilt of the abduction/adduction θ^5_1 axis of the shoulder joint is removed, and the flexion/extension θ^5_2 range of the shoulder joint is changed accordingly. The kinematic model thus obtained is shown in Figure 6. The kinematics of the 5-DOF upper-limb exoskeleton is analyzed with Denavit-Hartenberg (DH) parameters, where frame 0 is the initial coordinate frame. The DH parameters are listed in Table 5.
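Since the values of Tables 5 and 6 are not reproduced here, the sketch below shows only the standard mechanics of chaining DH transforms in Python; the DH rows and joint angles are placeholders, not the exoskeleton's actual parameters.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform between consecutive DH frames
    (standard convention: rot z, trans z, trans x, rot x)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(joint_angles, dh_rows):
    """End-effector position from joint angles and (d, a, alpha) rows."""
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(joint_angles, dh_rows):
        T = T @ dh_transform(theta, d, a, alpha)
    return T[:3, 3]

# Placeholder DH rows (d, a, alpha) in metres; the real values would
# come from Table 5 or Table 6.
dh_rows = [(0.0, 0.0, np.pi / 2),
           (0.0, 0.330, 0.0),      # upper-arm, 330 mm
           (0.0, 0.370, 0.0)]      # forearm, 370 mm
print(forward_kinematics([0.3, 0.5, 0.8], dh_rows))
```

Sweeping the joint angles over the ranges in Table 2 and collecting the resulting end-effector positions is exactly how a workspace plot like Figure 7 is generated.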
The workspace of the 5-DOF upper-limb exoskeleton, obtained by forward kinematics according to the motion range in Table 2, is shown in Figure 7. Here, the x-axis is in the same direction as the exoskeleton, and the point (0,0,0) is the center of shoulder flexion/extension.

Inverse Kinematics Analysis of the 3-DOF Upper-Limb Exoskeleton

Given an arbitrary end-effector position of the 5-DOF upper-limb exoskeleton in the workspace, the corresponding joint angles of the 3-DOF upper-limb exoskeleton can be calculated using inverse kinematics. If the obtained angles are within the motion ranges of the corresponding joints, the 3-DOF upper-limb exoskeleton can reach that position, i.e., the given position is in the workspace of the 3-DOF upper-limb exoskeleton. According to the previous results, the kinematics of the 3-DOF upper-limb exoskeleton is analyzed with Denavit-Hartenberg (DH) parameters, where frame 0 is the initial coordinate frame, as shown in Figure 3. The DH parameters are listed in Table 6.

The extrema of θ^3_1, θ^3_2 and θ^3_3 obtained by inverse kinematics are listed in Table 7 [26]. The extremum range of θ^3_3 shown in Table 7 is 5° greater than the range of elbow external-flexion/internal-extension θ^3_3 shown in Table 4. The motion range of the elbow joint actuator in the 5-DOF upper-limb exoskeleton reaches 130°; it is feasible to take this value as the motion range of the elbow external-flexion/internal-extension θ^3_3, so the θ^3_3 derived from inverse kinematics meets the motion range of the 3-DOF upper-limb exoskeleton.

Based on the motion range of the 3-DOF upper-limb exoskeleton in Table 4, the workspace of the 5-DOF upper-limb exoskeleton is divided into two parts: the part that the 3-DOF upper-limb exoskeleton can reach, called the valid workspace, and the part that it cannot reach, called the invalid workspace. The inverse kinematics workspace consists of the valid and invalid workspaces (Figure 8); the area consisting of the blue points is the valid workspace, and the area consisting of the red points is the invalid workspace. Here, the x-axis is in the same direction as the exoskeleton, and the point (0,0,0) is the center of shoulder flexion/extension.

A total of 5001 points are placed in the inverse kinematics workspace, with 4621 points in the valid workspace, accounting for approximately 92.4%, and 380 points in the invalid workspace, accounting for approximately 7.6%. The invalid workspace is located behind the coronal plane of the shoulder joint, an area the end-effector of the 5-DOF upper-limb exoskeleton will not reach during work, while the valid workspace is the actual workspace of the 5-DOF upper-limb exoskeleton. The existence of the invalid workspace therefore does not affect the accessibility of the 3-DOF upper-limb exoskeleton, i.e., the 3-DOF upper-limb exoskeleton can reach all points in the actual workspace of the 5-DOF upper-limb exoskeleton.
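Point-cloud workspaces of the kind shown in Figures 7 and 8 can be approximated by sampling joint angles within their motion ranges and evaluating the forward kinematics. The sketch below builds on the previous block (it reuses forward_kinematics and dh_rows defined there); the joint limits are illustrative stand-ins for Tables 2 and 4, and the reachability test is a sampling approximation of the paper's inverse-kinematics classification.

```python
import numpy as np

# Assumed joint limits in radians (stand-ins for the tabulated ranges).
limits = [(-1.1, np.pi), (0.0, 2.0), (0.0, 130 * np.pi / 180)]

rng = np.random.default_rng(1)
points = []
for _ in range(5001):                     # same point count as the paper
    q = [rng.uniform(lo, hi) for lo, hi in limits]
    points.append(forward_kinematics(q, dh_rows))
points = np.asarray(points)

# A target is treated as reachable ("valid") if some sampled pose of
# the 3-DOF arm lands within a small tolerance of it.
target = np.array([0.4, 0.1, 0.3])
valid = np.any(np.linalg.norm(points - target, axis=1) < 0.02)
print("target reachable (within 2 cm):", valid)
```

Running this test over every point of the 5-DOF workspace and counting the hits mirrors the 92.4%/7.6% valid/invalid split reported above, up to sampling error.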
Dynamics Analysis

The upper-limb exoskeleton used for material handling mainly performs lifting, pulling, pushing, raising and other complex movements in front of the body, or lifting at the side of the body. To obtain the loading conditions on the joints of the two exoskeletons during work, they are analyzed dynamically through two typical movements: raising up and lateral lifting.

Joint Trajectories

Figure 9 shows the given end-effector trajectories, based on the structure and parameters of the 5-DOF upper-limb exoskeleton. The end-effector trajectory of raising up lies in the sagittal plane of the shoulder joint; the end-effector trajectory of lateral lifting lies in the coronal plane of the shoulder joint while the elbow joint is straight. The parameters of the two movements are shown in Table 8. O0 denotes the shoulder joint; A0 and B0 denote the initial positions of the elbow and the end-effector; A1 and B1 denote the end positions of the elbow and the end-effector; and the projections of the end-effectors on the ground are also marked in Figure 9.

Dynamics

The upper-arm and forearm of the 5-DOF upper-limb exoskeleton are 330 mm and 370 mm long, and those of the 3-DOF upper-limb exoskeleton are 328 mm and 372 mm, respectively. The load capacity required of an upper-limb exoskeleton for material handling is large; the design lifting capacity of the 5-DOF upper-limb exoskeleton is 50 kg. In order to compare the characteristics of the two exoskeletons under a heavy load, the load on both end-effectors is set to 50 kg. The load on the end-effector is much heavier than the upper-arm and forearm of the exoskeleton; therefore, the exoskeleton itself is treated as massless in the dynamic analysis. The times for raising up and lateral lifting are 5 s and 3 s, respectively, which are the rounded times for the human body to perform the actual actions. The dynamic equations of motion for the two exoskeletons are derived using the Lagrange method, and the joint torques can be solved [27]. Figures 10-13 show the joint torque curves. Because the shoulder flexion/extension (θ^3_1, θ^5_1) of the two exoskeletons does not work during lateral lifting, the joint torques and power consumption of these two joints are not considered in that process.

Since the elbow torque is mainly borne by structural components during raising up, the maximum elbow joint torque of the 3-DOF upper-limb exoskeleton is significantly reduced. According to the simulation results, the maximum joint torques of the two exoskeletons during raising up and lateral lifting are obtained (Table 9).
From Table 9, the maximum joint torques of the shoulder flexion/extension and shoulder adduction/abduction of the two exoskeletons are identical during raising up and lateral lifting, but compared with the elbow flexion/extension θ^5_4 of the 5-DOF upper-limb exoskeleton, the maximum joint torque of the elbow external-flexion/internal-extension θ^3_3 of the 3-DOF upper-limb exoskeleton is reduced by approximately 50%. From the joint torque curves and the angular velocities q̇ obtained by kinematics, the joint power consumption curves can be further obtained (Figures 14-17).

From Figures 14 and 15, the joint power consumption of the 3-DOF upper-limb exoskeleton is smoother than that of the 5-DOF upper-limb exoskeleton during raising up. During raising up, the end-effector attempts to move uniformly along the trajectory, so there are significant jumps in the power consumption curves of the two exoskeletons at the beginning and at the end. The maximum power consumption time for each joint is therefore selected in the smooth section to avoid these jumps, i.e., t = 0.1 s for the maximum elbow external-flexion/internal-extension θ^3_3 power consumption in Figure 14, and t = 0.5 s and t = 4.5 s for the maximum elbow flexion/extension θ^5_4 and shoulder flexion/extension power consumption in Figure 15. The maximum joint power consumption is shown in Table 10. From Table 10, the maximum power consumption of the shoulder flexion/extension and the elbow external-flexion/internal-extension of the 3-DOF upper-limb exoskeleton is reduced by 46% and 55%, respectively, compared to the corresponding joints of the 5-DOF upper-limb exoskeleton.
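To see these magnitudes concretely: with the arms treated as massless and the 50 kg load acting at the end-effector, the static joint torques reduce to tau = J(q)^T F, and the instantaneous joint power to P_i = tau_i * q̇_i. The sketch below is a numerical illustration of this principle, not the paper's Lagrangian derivation; it reuses forward_kinematics and dh_rows from the earlier kinematics sketch, and the joint values are illustrative.

```python
import numpy as np

def jacobian_fd(q, dh_rows, eps=1e-6):
    """Finite-difference position Jacobian of the forward kinematics."""
    p0 = forward_kinematics(q, dh_rows)
    J = np.zeros((3, len(q)))
    for i in range(len(q)):
        dq = list(q)
        dq[i] += eps
        J[:, i] = (forward_kinematics(dq, dh_rows) - p0) / eps
    return J

q = [0.3, 0.5, 0.8]                     # joint angles (rad), illustrative
qdot = np.array([0.1, 0.2, 0.4])        # joint rates (rad/s), illustrative
F = np.array([0.0, 0.0, -50.0 * 9.81])  # 50 kg load at the end-effector (N)

J = jacobian_fd(q, dh_rows)
tau = J.T @ F                           # static joint torques (N m)
power = tau * qdot                      # instantaneous joint power (W)
print(tau, power)
```

Evaluating tau and power along the raising-up and lateral-lifting trajectories and taking the maxima is, in essence, how the entries of Tables 9 and 10 are obtained.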
In conclusion, compared with the 5-DOF upper-limb exoskeleton, the maximum elbow joint torque of the 3-DOF upper-limb exoskeleton is significantly reduced, the power consumption curves are smoother, the maximum joint power consumption of the shoulder and elbow decreases significantly, and the mechanism is more feasible. Since the design and selection of the joint actuators are based on the maximum joint torque and power consumption, the volume and weight of the shoulder and elbow joints of the 3-DOF upper-limb exoskeleton can be further reduced.

Posture Analysis

The unactuated 5-DOF and 3-DOF upper-limb exoskeletons were manufactured as prototypes, as shown in Figure 18. When the handle is closest to the shoulder joint, the maximum ulnar deviation of the wrist occurs. In the experiment, the user could drive the exoskeleton smoothly to this position, i.e., the ulnar deviation β of the wrist does not affect operation, as shown in Figure 19. As described in Figure 20, the 5-DOF exoskeleton prototype was posed obliquely upward. In this position, the rotation axes of shoulder adduction/abduction θ^5_1 and shoulder internal/external rotation θ^5_3 were collinear; thus, this is the singular position in the workspace of the 5-DOF exoskeleton prototype.

Motion Analysis

The manipulability of the exoskeleton prototypes can be verified by the quality and time of a hanging action. In the experiment, as shown in Figure 21, three vertical rods were placed in front of the shoulder joint (OB = 600 mm), to the left-front of the shoulder joint (OC = 600 mm, CD = 250 mm) and to the right-front of the shoulder joint (OA = 600 mm, AD = 250 mm), and two hooks (AE = 1000 mm, AF = 1860 mm) were placed on each of the rods.
Motion Analysis
The manipulability of the exoskeleton prototypes could be verified by the quality and completion time of a hanging action. In the experiment, as shown in Figure 21, three vertical rods were placed in front of the shoulder joint (OB = 600 mm), in the left-front of the shoulder joint (OC = 600 mm, CD = 250 mm) and in the right-front of the shoulder joint (OA = 600 mm, AD = 250 mm), and two hooks (AE = 1000 mm, AF = 1860 mm) were placed on each of the rods.

Six healthy subjects (all males, height: 174 ± 4 cm) participated in the experiment. The participants wore two different exoskeleton prototypes: the 3-DOF exoskeleton prototype and the 5-DOF exoskeleton prototype. They were asked to perform two actions in one test while holding the handle, as shown in Figure 22. A complete hanging action includes the following procedure: 1. Starting from the body side, remove the load from the lower hook, hang the load on the upper hook, and return the exoskeleton prototype to the body side. 2. Starting from the body side, remove the load from the upper hook, hang the load on the lower hook, and return the exoskeleton prototype to the body side.

To ensure the accuracy of the experiment, the participants were required to perform the action as smoothly as possible while maintaining comfort. Each participant wore both exoskeleton prototypes and completed the action 10 times in front of the shoulder, left-forward of the shoulder and right-forward of the shoulder. The experimental process was recorded using a camera. The action completion times were compared using a t-test [20].
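The comparison of completion times can be reproduced with a standard two-sample t-test. A minimal sketch; the timing arrays are illustrative placeholders, not the recorded data.

```python
import numpy as np
from scipy import stats

# Hypothetical completion times (s) for the hanging action, 10 repetitions
# per prototype at one rod position (illustrative values only).
t_3dof = np.array([18.1, 17.6, 18.4, 17.9, 18.3, 18.0, 17.8, 18.2, 18.5, 17.7])
t_5dof = np.array([18.7, 18.2, 19.0, 18.5, 18.9, 18.4, 18.6, 18.8, 19.1, 18.3])

# Two-sample t-test on the mean completion times; a high p-value, as in
# Figure 23, indicates no statistical difference in manipulability.
t_stat, p_value = stats.ttest_ind(t_3dof, t_5dof)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```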
The statistical analysis results for the two exoskeleton prototypes are shown in Figure 23. The thick bars and thin lines represent the mean values and standard deviations of the data, respectively. The p-values of the t-test are located at the bottom of the graph. The maximum average time for completing the 10 actions using the 3-DOF exoskeleton prototype was 18.1 s, and it was 18.7 s using the 5-DOF exoskeleton prototype. The angle between the exoskeleton forearm and the sagittal plane does not affect the manipulability of the exoskeleton. Also, the p-values are quite high; there is no statistical difference in the time taken by the two exoskeletons to complete the action [20]. This shows that the 3-DOF exoskeleton prototype has the same manipulability as the 5-DOF exoskeleton prototype, and it further verifies that the ulnar deviation β did not affect the manipulability.

Conclusions
A non-anthropomorphic 3-DOF upper-limb exoskeleton with an internally rotated elbow joint for material handling was proposed in this paper and analyzed against a typical anthropomorphic 5-DOF upper-limb exoskeleton for power-assisted activity. The conclusions are as follows:
1. The proposed 3-DOF exoskeleton has a reduced self-weight, achieved by removing two joints and their corresponding actuators, and eliminates singularities in the workspace. In addition, it is not necessary to avoid singularities in the workspace by rotating the base coordinate axis or by designing redundant degrees of freedom.
2. The kinematics and dynamics analysis showed that the 3-DOF upper-limb exoskeleton has the same actual workspace as the 5-DOF upper-limb exoskeleton; compared with the 5-DOF upper-limb exoskeleton, the maximum joint torque of the 3-DOF upper-limb exoskeleton decreased by 50%, and the elbow external-flexion/internal-extension and shoulder flexion/extension power consumption decreased by 55% and 46%, respectively, which will further reduce the exoskeleton weight.
3. The experimental results showed that the angle α between the forearm of the 3-DOF upper-limb exoskeleton and the sagittal plane and the ulnar deviation β had no influence on the operating tasks; therefore, the proposed 3-DOF upper-limb exoskeleton with an internally rotated elbow joint had the same manipulability as the 5-DOF upper-limb exoskeleton for the hanging action.

The concept of a non-anthropomorphic 3-DOF upper-limb exoskeleton with an internally rotated elbow joint was verified using simulations and experiments in this paper. For future work, a non-anthropomorphic 3-DOF upper-limb exoskeleton with actuators and sensors will be designed and manufactured; the exoskeleton control will also be studied and discussed.

Figure 1. Kinematic model of the upper-limb of the human body.
Figure 4. Working configuration of the 3-DOF upper-limb exoskeleton. (a) Top view; and (b) right view. O_0 and O_1 represent the shoulder, A_0 and A_1 represent the elbow, B_0 and B_1 represent the end-effector, θ_0 and θ_1 represent the angle between the upper-arm and forearm, α represents the angle between the upper-arm and the sagittal plane, O_0A_0 and O_1A_1 represent the upper-arm, and A_0B_0 and A_1B_1 represent the forearm.
Figure 6. Kinematic configuration and coordinates of the 5-DOF upper-limb exoskeleton.
Figure 7. The workspace of the right arm of the 5-DOF upper-limb exoskeleton (mm): (a) isometric view; and (b) right view.
Figure 10. Joint torque curves of raising up for the 3-DOF exoskeleton.
Figure 11. Joint torque curves of raising up for the 5-DOF exoskeleton.
Figure 12. Joint torque curves of lateral lifting for the 3-DOF exoskeleton.
Figure 13. Joint torque curves of lateral lifting for the 5-DOF exoskeleton.
Figure 14. Joint power consumption curves of raising up for the 3-DOF exoskeleton.
Figure 15. Joint power consumption curves of raising up for the 5-DOF exoskeleton.
Figure 16. Joint power consumption curves of lateral lifting for the 3-DOF exoskeleton.
Figure 17. Joint power consumption curves of lateral lifting for the 5-DOF exoskeleton.
Figure 21. Schematic of the experiment.
Figure 23. Statistical analysis results of the action completion time.
Table 1. Motion range of human joints.
Table 3. Function of the DOFs of the 5-DOF upper-limb exoskeleton.
Table 8. Parameters of the end-effector trajectory.
Table 10. Maximum joint power consumption.
2019-03-31T14:05:06.331Z
2018-03-17T00:00:00.000
{ "year": 2018, "sha1": "eab9e75d85f1b807f16b389a5c36ac5b37f2dcf1", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2076-3417/8/3/464/pdf?version=1525344313", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "eab9e75d85f1b807f16b389a5c36ac5b37f2dcf1", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Engineering" ] }
59428467
pes2o/s2orc
v3-fos-license
Effect of Cooling and Shot Peening on Residual Stresses and Fatigue Performance of Milled Inconel 718

The present study highlights the effect of cooling and the post-machining surface treatment of shot peening on the residual stresses and corresponding fatigue life of milled superalloy Inconel 718. It was found that tensile residual stresses were created on the milled surface regardless of the use of coolant; however, the wet milling operation led to a lower surface tension and a reduced thickness of the tensile layer. The shot peening performed on the dry-milled specimens completely annihilated the surface tensile residual stresses and introduced a high level of surface compression. A fatigue life comparable to that of the specimens prepared by dry milling was obtained for the wet-milled specimens. This is very likely because the milling-induced surface damage, in the form of cracked non-metallic inclusions, is the predominant cause of fatigue failure. The presence of the compressive layer induced by shot peening resulted in a significant increase of the fatigue life and strength, while the extent to which the lifetime was prolonged decreased as the applied load was increased.

Introduction
Fatigue is one of the main causes of failure of various structures in turbine engines. The fatigue life of a component strongly depends on the surface condition produced by machining, since in most cases fatigue crack initiation starts at free surfaces. Residual stress is one of the most relevant practical parameters for assessing the quality of a machined surface; it is superimposed on the applied cyclic loads, altering the driving force for crack initiation and propagation during fatigue. Generally, tensile residual stresses are perceived to be detrimental to fatigue performance, whereas compressive residual stresses have a beneficial effect. The formation of residual stresses in machining processes is essentially dominated by the plastic deformation in the subsurface of the workpiece material together with the thermal impact at the surface [1]. The thermally-induced residual stresses are usually tensile; thus, sufficient cooling can effectively reduce the surface tension on a machined surface by lowering the cutting temperature, or even introduce compressive residual stresses [2]. On the other hand, mechanical surface treatments such as shot peening are nowadays widely used on machined components; they induce compressive residual stresses by producing a work-hardened layer and misfit strains between the bulk and surface material. An enhanced fatigue life and strength by shot peening have been found for a variety of engineering materials [3-5].

Inconel 718 is a polycrystalline nickel-based superalloy with wide applications in the aerospace and power generation industries because of its superior mechanical properties and good resistance to oxidizing/corrosive environments. A great number of studies have been conducted to improve the surface integrity of machined Inconel 718 through process optimization or post-machining surface treatments (such as shot peening) [6]. However, investigations of the effect of changes in surface integrity on the fatigue properties of the components are somewhat limited, although such work has great practical importance for the assessment of component life, and the knowledge obtained can in turn be used to guide surface integrity modification. The purpose of the current study is to characterize the residual stresses generated on milled Inconel 718 as influenced by the use of coolant and a subsequent shot-peening treatment. Meanwhile, the fatigue performance of the specimens corresponding to the different surface conditions has also been studied in a four-point bending mode.

Experimental work
The material used in this study was taken from a disc forging of Inconel 718 with the chemical composition given in Table 1. The forging was solution annealed at 970 °C followed by air cooling to room temperature; a two-stage ageing was then performed, first at 720 °C for 8 h, then at 620 °C for another 8 h, followed by final air cooling to room temperature.

Table 1. Chemical composition [wt%] of the Inconel 718 disc forging.
          Fe    Ni   Cr   Mo    Nb     Ti     Al    C
Min. (%)  Bal.  50   17   2.8   4.75   0.65   0.2   -
Max. (%)  Bal.  55   21   3.3   5.5    1.15   0.8   0.08

Fatigue test bars with dimensions of 10 × 10 × 80 mm³ were pre-manufactured from the heat-treated forging by wire electric discharge machining. The surface to be loaded in tension during fatigue was then machined by face milling using a 20 mm diameter cutter with two uncoated cemented carbide inserts. The cutting speed was fixed at 30 m/min (corresponding spindle speed, 382 rpm) and the depth of cut was 0.5 mm. The feed direction was along the longitudinal direction of the bar, with a feed rate of 76 mm/min. Chamfers on the tensile side were introduced in order to avoid corner crack initiation. Three groups of specimens were prepared: the specimens of the first two groups were dry milled and milled under coolant, respectively, while for the last group, the surface that had been machined by dry milling was subsequently shot peened using spherical S170 H cast steel shots with 150 to 200% surface coverage, the shot-peening intensity varying from 0.2 to 0.3 mmA.

The microstructure beneath the dry-milled, wet-milled and shot-peened surfaces was characterized on polished cross-sections prior to fatigue testing using a scanning electron microscope (SEM) together with electron channeling contrast imaging (ECCI). In addition, the in-depth residual stresses created by milling and shot peening were measured by X-ray diffraction, combined with layer removal by electrolytic polishing. Cr-Kα radiation was chosen, giving a diffraction peak at 2θ ≈ 128° for the {220} family of lattice planes of the nickel-based matrix. Peaks were measured at nine ψ-angles between ψ = ±55°, and residual stresses were calculated based on the sin²ψ method [7] with an X-ray elastic constant of 4.65 × 10⁻⁶ MPa⁻¹. Deviations in the measured residual stresses due to the layer removal were corrected for the case of a flat plate.
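The sin²ψ evaluation can be sketched as a linear fit of lattice spacing against sin²ψ. The d-spacings below are synthetic (generated from an assumed 400 MPa stress purely for illustration), and the s1 term is neglected for simplicity; this is not the authors' data or analysis code.

```python
import numpy as np

# Minimal sketch of the sin^2(psi) method, using the basic model
#   d(psi) = d0 * (1 + (1/2)s2 * sigma * sin^2(psi)),
# with s1 terms neglected. All numeric inputs are illustrative.
XEC = 4.65e-6                    # X-ray elastic constant (1/2)s2 [1/MPa]
d0 = 1.274                       # strain-free {220} spacing [angstrom], assumed
sigma_true = 400.0               # assumed surface stress [MPa] for the demo

psi = np.deg2rad(np.linspace(-55.0, 55.0, 9))   # nine psi tilts
x = np.sin(psi) ** 2
d = d0 * (1.0 + XEC * sigma_true * x)           # synthetic "measurements"

slope, intercept = np.polyfit(x, d, 1)          # linear d vs sin^2(psi) fit
sigma = slope / (intercept * XEC)               # intercept approximates d0
print(f"residual stress: {sigma:.0f} MPa")      # recovers ~400 MPa
```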
All fatigue tests were conducted at room temperature under load control using a sinusoidal waveform at a load ratio of 0.1 and a frequency of 20 Hz. The distance between the two loading rollers and between the two supporting rollers was 12 mm and 60 mm, respectively. For each of the three groups, four specimens were tested at different peak loads in the range of 8 kN to 16 kN. The corresponding peak stresses at the surface, calculated assuming purely elastic loading, were approximately 600 MPa to 1200 MPa. The yield strength of the Inconel 718 forging at room temperature, for comparison, is slightly above 1000 MPa. All specimens were fatigued until rupture, and the specimen deflection at the maximum/minimum load versus the number of cycles was recorded. A line was fitted to the initial linear part of the deflection range versus number of cycles curve and extrapolated to the larger-cycle region. The number of cycles corresponding to a 1% increase of the deflection range from the fitted line was then defined as the fatigue life in the present study. Accordingly, the lifetime of the specimens is largely dominated by the fatigue cycles spent on crack initiation. The failed specimens were examined under SEM in order to identify the preferential sites where fatigue cracks initiated.
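The quoted surface stresses can be cross-checked with the standard elastic four-point-bend beam formula, using the stated roller spans and the 10 × 10 mm² bar cross-section. This is a back-of-the-envelope check under the stated elastic assumption, not the authors' calculation.

```python
# Elastic cross-check of the quoted peak surface stresses, assuming the
# standard four-point-bend formula sigma = 6M/(b*h^2) with the stated spans.
b = h = 0.010                 # bar width and height [m]
L_out, L_in = 0.060, 0.012    # supporting and loading spans [m]

def peak_stress(P):
    a = (L_out - L_in) / 2.0      # moment arm of each outer roller [m]
    M = (P / 2.0) * a             # bending moment between inner rollers [N*m]
    return 6.0 * M / (b * h**2)   # outer-fibre bending stress [Pa]

for P in (8e3, 16e3):
    print(f"P = {P/1e3:.0f} kN -> sigma = {peak_stress(P)/1e6:.0f} MPa")
# -> ~576 MPa and ~1152 MPa, consistent with the quoted 600-1200 MPa range.
```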
Results and discussions
Fig. 1(a) and (b) shows the in-depth residual stresses induced by dry milling and wet milling. Stress components in two in-plane directions, i.e., the transverse direction (TD) and the longitudinal direction (LD) (corresponding to the cutting direction and feed direction), were measured. In general, tensile residual stresses were created on the milled surface regardless of the application of coolant, but it is clear that the wet milling operation led to a lower surface tension and a reduced thickness of the tensile layer. As the depth increases, the residual stresses gradually shift to compression before stabilizing at ~0 MPa.

Figure 1. In-depth residual stresses generated by (a) dry milling, (b) wet milling, and (c) shot peening; (d) a comparison of the full width at half maximum intensity (FWHM) obtained from the measured diffraction peaks.

The post-milling surface treatment by shot peening annihilated the high tension on the dry-milled surface and introduced a surface plateau, extending to a depth of 100 μm, with large compressive residual stresses in both TD and LD, see Fig. 1(c). The high level of surface compression was created as a consequence of the mechanically-induced plastic deformation during shot peening. This can be seen from the dramatically increased broadening of the diffraction peaks, i.e., the full width at half maximum intensity (FWHM), measured in the shot-peened surface layer, see Fig. 1(d).

The formation of the tensile residual stresses on the milled surface is most likely of thermal origin, associated with the large heat generation during machining [1]. From Fig. 1(d), one can see that the wet-milled surface underwent less plastic deformation than in the case of dry milling. A very likely explanation is that the coolant contributes to lowering the friction and dissipating the generated heat, leading to a relatively low cutting temperature. As an effect of the reduced cutting temperature, the thermally-induced residual stresses became less tensile on the surface produced by wet milling. The reduced thermal impact during wet milling is further supported by the microstructural characterization beneath the milled surfaces. Instead of a continuous thick white layer, approximately 4 to 5 μm, as observed on the dry-milled surface (Fig. 2(a)), the surface white layer appeared discontinuously, with a thickness of less than 1 μm, on the wet-milled surface (Fig. 2(b)). As suggested by Bushlya et al. [8], the development of white layers in machining of Inconel 718 normally takes place when the cutting temperature is increased, e.g., at high cutting speeds, when cutting with worn tools, or in dry machining operations. Compared with the dry-milled surface, the shot-peened surface showed significantly increased plastic deformation in the microstructure, see Fig. 2(c), which is consistent with the much higher FWHM measured in the shot-peened layer.

Figure 2. Electron channeling contrast imaging (ECCI) micrographs showing the microstructure beneath (a) the dry-milled surface, (b) the wet-milled surface, and (c) the shot-peened surface. In (a) and (b), dashed lines are drawn to compare the thickness of the superficial white layer formed in dry and wet milling.

A comparison of the fatigue performance as influenced by the use of coolant and the post-machining surface treatment of shot peening is presented in Fig. 3.
Although relatively low surface residual stresses were obtained by wet milling, the fatigue life was observed to be comparable with that of the specimens prepared by dry milling. However, in the high-cycle regime with a lower applied load, the wet-milled condition showed a slight increase in fatigue resistance, which is very likely due to a stronger effect of residual stresses. The shot peening, on the other hand, led to a large increase of the fatigue life, particularly in the high-cycle regime; the enhancement could be up to roughly two orders of magnitude compared with the lifetime of the dry-milled specimens.

Fracture surface examinations can offer insights into the fatigue failure mechanisms of the specimens with different surface conditions. Shown in Fig. 4 is an example of the typical fracture appearance observed on the fatigued specimens with either a dry-milled or a wet-milled surface. It can be clearly seen that multiple cracks initiated at the milled surface during fatigue loading, and the coalescence of these cracks led to a macroscopically fluctuant fracture surface. Close examination further revealed that the initiation of fatigue cracks took place primarily in association with the cracking of surface non-metallic inclusions (Nb-rich carbides and/or Ti-rich nitrides). Previous studies by the authors [9] have shown that the large plastic work during machining of Inconel 718 can cause cracking of non-metallic inclusions on the machined surface. In this study, substantial numbers of cracked carbides, as well as a few cracked nitrides (owing to the much lower amount of nitrides in the alloy), were also observed after the milling operations. These pre-existing surface defects provide multiple sites where fatigue cracks preferentially initiate, or from which cracks could even start to grow without an incubation period for nucleation. Based on the predominance of this failure mechanism, the comparable fatigue life of the specimens prepared by dry and wet milling is very likely attributable to the similar damage on the milled surfaces with respect to non-metallic inclusion cracking. The effect of residual stresses in this case appears to be less significant.

In the case of the shot-peened specimens, the surface compression was deep and strong enough to shift the crack initiation sites to subsurface regions corresponding to the depth of the compressive layer, see Fig. 5. The surface microstructure of the shot-peened specimens still contains a large number of cracked inclusions; however, the development of fatigue cracks from these flaws was retarded by the presence of the large compressive residual stresses. As a result, an enhanced fatigue life and strength were obtained, as shown in Fig. 3. The beneficial effect of compressive residual stresses in retarding surface cracking during fatigue and increasing the resistance of the component to fatigue failure is well consistent with previous findings in shot peening of other metallic materials [3,4]. The extent to which the lifetime was prolonged decreased as the applied load was increased, owing to residual stress relaxation in low-cycle fatigue resulting from significant cyclic strains [10].
Conclusions
The present work investigated the residual stresses generated on milled Inconel 718 as influenced by the use of coolant in machining or by post-machining surface treatment with shot peening. The corresponding fatigue performance of the specimens was also investigated. The results showed that wet milling led to reduced tensile residual stresses on the machined surface compared with dry milling. However, a comparable fatigue life was obtained for the specimens milled with or without the use of coolant. This is very likely because, for both conditions, the milling-induced surface damage with respect to cracked non-metallic inclusions dominated the crack initiation during fatigue. The shot-peening treatment annihilated the surface tension induced by milling and introduced high compressive residual stresses. The presence of the compressive layer retards surface cracking from the pre-existing cracked carbides and/or nitrides and shifts the crack initiation sites to sub-surface regions, leading to a significant increase of the fatigue life and strength for the shot-peened specimens. The extent to which the lifetime was prolonged was reduced as the applied load was increased.

Figure 3. A comparison of the obtained fatigue life as influenced by the use of coolant as well as the post-machining surface treatment of shot peening.
Figure 4. Typical fracture appearance after fatigue of the milled specimens, regardless of the use of coolant, where multiple crack initiation sites at the milled surface were observed; some of them are located by dashed lines.
Figure 5. Fatigue fracture surface of a shot-peened specimen showing a transition of the crack initiation site from surface to subsurface regions (indicated by the arrow) compared with the observation in Fig. 4.
2018-12-21T11:08:45.363Z
2017-01-01T00:00:00.000
{ "year": 2016, "sha1": "b2e76b751e685eddf4b8374f87bf972fe06ba491", "oa_license": "CCBY", "oa_url": "http://www.mrforum.com/wp-content/uploads/open_access/9781945291173/3.pdf", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "b2e76b751e685eddf4b8374f87bf972fe06ba491", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Materials Science" ] }
255121186
pes2o/s2orc
v3-fos-license
Guest-adaptive molecular sensing in a dynamic 3D covalent organic framework

Molecular recognition is an attractive approach to designing sensitive and selective sensors for volatile organic compounds (VOCs). Although organic macrocycles and cages have been well-developed for recognising organics by their adaptive pockets in liquids, porous solids for gas detection require a deliberate design balancing adaptability and robustness. Here we report a dynamic 3D covalent organic framework (dynaCOF) constructed from an environmentally sensitive fluorophore that can undergo concerted and adaptive structural transitions upon adsorption of gases and vapours. The COF is capable of rapid and reliable detection of various VOCs, even for non-polar hydrocarbon gas under humid conditions. The adaptive guest inclusion amplifies the host-guest interactions and facilitates the differentiation of organic vapours by their polarity and sizes/shapes, and the covalently linked 3D interwoven networks ensure the robustness and coherency of the materials. The present result paves the way for multiplex fluorescence sensing of various VOCs with molecular-specific responses.

The observed reflection conditions can be summarized as hk0: h + k = 2n and 00l: l = 2n, which are consistent with the PXRD pattern. The central carbon atom positions of the tetrahedral building block can be located from this potential map. Compared to the constructed structure model, the origin of the unit cell in this potential map is shifted by 1/2 along the a axis. To obtain a reasonable framework, the possible structure naturally follows the dia-cN topology. Three candidate structure models, with 6-fold, 10-fold, and 14-fold interpenetration, were examined (Supplementary Figure 20). 4 The results show that the distance between two adjacent central carbon atoms is too short in the 6-fold structural model and very long in the 14-fold one. At the same time, the 14-fold interpenetrated structure model is too crowded, which indicates that the structure would not be geometrically robust. Therefore, the 10-fold interpenetrated structural model was determined to be the proper choice. During the construction of the 10-fold interpenetrated structure model, we found that adjacent linear linking units are still somewhat close to each other if no fragment distortion is involved. This indicates that the structure needs to be slightly distorted to ensure a more reasonable geometry. The generated model was then further refined by Rietveld refinement against the PXRD intensities. The final structure of dynaCOF-330 is identified to adopt a 10-fold interpenetrated dia topology with the Pnn2 space group. In-situ PXRD at varied temperatures illustrates that the structure changes slightly upon temperature variation.

Supplementary Section 5. In-situ fluorescence spectroscopy for dynamic multi-component gas sensing.
A specially customized relative humidity and gas partial pressure controller was prepared as shown in Supplementary Figure 45. Two mass flow controllers (MFCs) with different controlling ranges (100 sccm, and 5 sccm for partial pressures of less than 5%) were connected to the n-C4H10 cylinder. Two MFCs were connected to the N2 cylinder, providing a purge gas to activate the sample or a balance gas to be mixed with n-C4H10. The N2 and n-C4H10 mixture was then passed through a water bottle with saturated MgCl2 solution to humidify the working gas to 53% relative humidity. A tee valve was used to switch between the wet and dry working gas.
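The partial-pressure control reduces to simple flow arithmetic: at ambient pressure, the n-C4H10 partial-pressure fraction equals its flow fraction in the N2-balanced mixture. A minimal sketch with illustrative values; the helper function is hypothetical, not part of the setup software.

```python
# Minimal sketch of the flow arithmetic behind the gas controller described
# above. At ambient total pressure, the n-C4H10 partial-pressure fraction
# equals its volumetric flow fraction in the N2-balanced mixture.
def mfc_setpoints(total_sccm, butane_fraction):
    """Return (butane_sccm, n2_sccm) for a target partial-pressure fraction."""
    butane = total_sccm * butane_fraction
    return butane, total_sccm - butane

# 2% n-butane at 100 sccm total flow: the 5 sccm MFC handles the low-flow
# channel, consistent with its use for partial pressures below 5%.
butane, n2 = mfc_setpoints(100.0, 0.02)
print(f"n-C4H10: {butane:.1f} sccm, N2: {n2:.1f} sccm")
```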
All the stainless-steel valves and joints were purchased from Shanghai X-tec Fluid Technology Co., Ltd, and the MFCs were purchased from Alicat Scientific (a Halma company). The film sample was prepared by drop-casting a slurry of dynaCOF-330 (10 mg dispersed in 3 mL acetone) onto a washed glass slide (1 cm × 3 cm) to measure its fluorescence response to n-butane gas. Before measurement, the COF film was dried under vacuum at room temperature for 5 hours to ensure the guest was entirely removed.

To control the temperature and pressure, a Nose-Hoover chains method 12 was applied during the whole process. The dihedral analysis was based on the production run. We chose the frame at 0.5 ps as the initial reference; the first plane is defined by three chosen carbon atoms. Every 0.5 ps thereafter, a snapshot is taken from the trajectory, and the second plane is defined by the same three atoms. The angle between the two planes is the dihedral. When this angle exceeds 90 degrees, the whole fragment has flipped upside down; therefore, for a better measure of the flexibility of the COF, we take the dihedral as 180 degrees minus the angle whenever it exceeds 90 degrees.
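The dihedral analysis can be sketched as follows, assuming the trajectory has been reduced to the coordinates of the three tracked carbon atoms per frame (frames sampled every 0.5 ps); the coordinates below are illustrative, not simulation output.

```python
import numpy as np

# Minimal sketch of the dihedral analysis described above: the angle between
# the reference plane (frame at 0.5 ps) and the current plane, defined by the
# same three carbon atoms, folded to <= 90 degrees when the fragment flips.
def plane_normal(p1, p2, p3):
    return np.cross(p2 - p1, p3 - p1)

def fragment_dihedral(ref_atoms, frame_atoms):
    n0 = plane_normal(*ref_atoms)
    n1 = plane_normal(*frame_atoms)
    cos = np.dot(n0, n1) / (np.linalg.norm(n0) * np.linalg.norm(n1))
    angle = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
    return 180.0 - angle if angle > 90.0 else angle   # fold flipped fragments

ref = [np.array(p, float) for p in ([0, 0, 0], [1.4, 0, 0], [0.7, 1.2, 0])]
frame = [np.array(p, float) for p in ([0, 0, 0], [1.4, 0, 0.2], [0.7, 1.2, 0.4])]
print(f"dihedral: {fragment_dihedral(ref, frame):.1f} deg")
```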
2022-12-26T16:02:25.200Z
2022-12-24T00:00:00.000
{ "year": 2022, "sha1": "4e2e9f764e03103b2e9514d64f84baaf1a86bc63", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "f837430f242b9e0f1e90da6a23721bcc62c00119", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [] }
249679744
pes2o/s2orc
v3-fos-license
A Case of Hypophysitis Associated With SARS-CoV-2 Vaccination

Background/Objective Although SARS-CoV-2 vaccines have been developed with multiple novel technologies and rapidly disseminated worldwide, the full profile of adverse effects has not been known. Recently, there are sporadic but increasing reports of endocrinopathy in relation to SARS-CoV-2 vaccination. Here we report a rare case of hypophysitis with acute onset of diabetes insipidus, immediately after SARS-CoV-2 vaccination. Case Report A 48-year-old female patient had been in her usual state of health until she received the first SARS-CoV-2 vaccine. Two days after vaccination, she started to have flu-like symptoms, including severe headache and myalgia, as well as persistent headache, polydipsia, and polyuria. She was diagnosed with diabetes insipidus, and magnetic resonance imaging revealed thickening of the pituitary stalk. Three months after vaccination, her symptoms had somewhat improved, but she still had pituitary stalk thickening on magnetic resonance imaging. Discussion Given the timing of the occurrence of diabetes insipidus, we believe that the patient's hypophysitis may be associated with SARS-CoV-2 vaccination. We also found 19 cases of endocrinopathy after SARS-CoV-2 vaccination by literature search. The reported endocrine organs were the thyroid, pituitary, and adrenals. Twelve cases of diabetes were also reported. Among 3 pituitary cases, diabetes insipidus was reported only in our case. Conclusion We report a rare case of SARS-CoV-2 vaccine-triggered hypophysitis, which led to diabetes insipidus. SARS-CoV-2 vaccine-related endocrinopathy seems, indeed, possible. Endocrinopathy is associated with infrequent complications; however, it may be underestimated in the post-SARS-CoV-2-vaccinated population. Further studies are warranted to better understand SARS-CoV-2 vaccine-related endocrinopathy.

Introduction
After the development and approval of vaccines against SARS-CoV-2, the dissemination has been rapid and worldwide. 1 Endocrine cells seem to be susceptible to acute elevation of the cytokine levels, which may cause endocrinopathy, 3,4 although the exact mechanism underlying this phenomenon remains unclear. For example, endocrinopathies were reported after cancer immunotherapy, which is known to elevate the cytokine levels. Specifically, administration of ipilimumab (a CTLA-4 inhibitor) increases the circulating tumor necrosis factor-α levels, and administration of nivolumab (a PD-1 inhibitor) increases both the tumor necrosis factor-α and interleukin 6 levels in treated patients. The endocrine organs involved in these immunotherapy-associated endocrinopathies are the thyroid, pituitary, and adrenals. 5,6 Interestingly, it is reported that SARS-CoV-2 vaccines also induce acute cytokine release. 7 Therefore, it is conceivable to hypothesize that SARS-CoV-2 vaccination can cause endocrinopathies, although the frequency is unknown. In agreement with this hypothesis, possible associations of SARS-CoV-2 vaccination and endocrinopathies have been sporadically reported worldwide. [8][9][10][11][12][13][14][15][16][17][18][19][20][21] Here, we report a rare case of hypophysitis with acute onset of diabetes insipidus, immediately after SARS-CoV-2 vaccination. We also review the reported endocrinopathies, which occurred after SARS-CoV-2 vaccination.
Case Report
A 48-year-old female patient with a past medical history of obesity had been in her usual state of health until she received the first SARS-CoV-2 vaccine (BNT162b2; Pfizer-BioNTech) on May 21, 2021. Two days after vaccination, she started to have flu-like symptoms, including severe headache and myalgia. She also noticed excessive thirst and urination at the same time. Prior to vaccination, she had never had an episode of polyuria and polydipsia. The myalgia resolved within 2 weeks, but the headache, polydipsia, and polyuria persisted. She visited a primary care clinic to seek medical attention; however, she was told that the symptoms could be general side effects of SARS-CoV-2 vaccination and would resolve over time. When she received the second SARS-CoV-2 vaccine (BNT162b2; Pfizer-BioNTech) on June 27, 2021, she still reported persistent polydipsia, polyuria, headache, and lethargy. The day after her second vaccination, the patient reported exacerbation of polyuria and polydipsia, with urinary frequency every hour. The daily urinary output was approximately 4 L. She also noted worsened headache, excessive fatigue, and multijoint pain. She reported a significant weight loss of a total of 18 kg since the first vaccination. She finally presented to the emergency department of our medical center on August 27, 2021 for increasing fatigue, intolerable polydipsia, polyuria, headache with nausea, emesis, and light-headedness. She denied syncopal episodes, vision changes, and altered mental status. Upon history taking, she admitted no menses since the first vaccination. Prior to that, her menses had been regular. Her family history was unremarkable for autoimmune disease. On presentation, her vital signs were stable: (1) pulse rate, 81/min; (2) blood pressure, 122/84 mm Hg; and (3) temperature, 98.6 °F. Physical examination was unremarkable. The input and output during 8 hours of stay in the emergency department were markedly discrepant, with 0.4 L of intake and 2.2 L of urine output. Laboratory evaluation revealed normal values of basic metabolic panels, including a sodium level of 142 mmol/L (reference, 133-144 mmol/L), and a normal complete blood cell count (Table 1). Because of prolonged and worsening headache, she underwent brain magnetic resonance imaging (MRI) (Fig. A), which revealed a 4-mm, round, thickened pituitary stalk (Fig. A, yellow arrow). In addition, MRI showed a partially empty sella (Fig. A, red arrow). Polydipsia, polyuria, and pituitary stalk thickening led us to further pursue a pituitary workup. Pituitary biopsy was not performed because of patient preference and difficulty in accessing the organ. The initial serum osmolality was elevated (306 mmol/kg; reference, 275-295 mmol/kg), whereas the urine osmolality was low (97 mmol/kg; reference, 100-1200 mmol/kg) (Table 2). The pituitary hormone workup revealed that the insulin-like growth factor 1 (IGF1) level was lower than the normal range (66 ng/mL; reference, 60-240 ng/mL). Considering her age, the follicle-stimulating hormone and luteinizing hormone levels were low (5.2 IU/L and 2.6 IU/L, respectively); however, the estradiol level was within the normal range (307 pg/mL). The human chorionic gonadotropin level was undetectable. The thyroid axis and prolactin levels were within the normal range (Table 2). The 250-mcg cosyntropin test showed an appropriate response without adrenal insufficiency (Table 2).
During hospitalization, she underwent the overnight water deprivation test followed by the desmopressin challenge test (Table 2 and Fig. C). The overnight water deprivation test showed hypernatremia (sodium level, 147 mmol/L; reference, 133-144 mmol/L), elevated serum osmolality (309 mmol/kg; reference, 275-295 mmol/kg), and low urine osmolality (83 mmol/kg; reference, 100-1200 mmol/kg), which were compatible with diabetes insipidus. After subcutaneous administration of 2-mcg desmopressin, the hypernatremia improved (sodium level, 139 mmol/L; reference, 133-144 mmol/L), and the urine concentration increased (urine osmolality, 468 mmol/kg; reference, 100-1200 mmol/kg). With these test results, we confirmed the diagnosis of central diabetes insipidus.

Highlights
We report a rare case of SARS-CoV-2 vaccination-associated hypophysitis. SARS-CoV-2 vaccination-associated endocrinopathy has been sporadically reported. Endocrinopathy may be underestimated in the post-SARS-CoV-2-vaccinated population.

Clinical Relevance
We report a rare case of hypophysitis with acute onset of diabetes insipidus, immediately after SARS-CoV-2 vaccination. We also summarized all currently published cases of SARS-CoV-2-related endocrinopathy and described them by onset, age, sex, and clinical course. We believe that SARS-CoV-2 vaccine-related endocrinopathy warrants further attention.

Other relevant laboratory workup results included increased levels of the following 2 inflammatory markers: (1) C-reactive protein (23 mg/L; reference, 0.0-8.0 mg/L) and (2) erythrocyte sedimentation rate (34 mm/h; reference, 0-20 mm/h) (Table 1). The levels of immunoglobulin G (IgG) and its subclasses (IgG1 to IgG4), serum angiotensin-converting enzyme, and ferritin were within the normal ranges, and the result of chest radiography was negative. She was discharged with a fixed dose of 10 mcg of DDAVP nasal sprays twice a day. With the DDAVP use, her symptoms of polyuria and polydipsia markedly improved. Two months after hospitalization, the patient underwent repeat brain MRI, which showed a persistently thickened pituitary stalk (Fig. B). At present, she still requires DDAVP intranasal sprays 10 mcg twice a day to manage her symptoms. The IGF1 level improved to the midnormal range 5 months after vaccination (Table 2). Her amenorrhea resolved 5 months after the second vaccination. Although her symptoms have improved, she still experiences some fatigue, joint pain, and brain fogginess. She has regained part of the weight lost.

Discussion
Although headache, fever, and myalgia are known side effects of SARS-CoV-2 vaccination, the full profile of adverse effects is yet to be elucidated. There are sporadic but increasing reports of endocrinopathy occurring after SARS-CoV-2 vaccination. According to our English literature search, 19 cases of post-SARS-CoV-2 vaccination endocrinopathy have been reported (Table 3). [8][9][10][11][12][13][14][15][16][17][18][19][20][21] Among them, the majority of the cases involved the thyroid (15/19 cases, 79%), 1 (5%) case involved the adrenal gland, and 3 (16%) cases, including our case, involved the pituitary. The mean age of these patients was 46 years, and 78% of them were women. In all cases, the patients were relatively healthy prior to vaccination, except 1 patient who was on treatment for colon cancer. 19
With respect to the types of vaccines, endocrinopathy was reported for all 3 types of vaccines: 9 (47%) cases involved messenger ribonucleic acid vaccines, 6 (32%) cases involved adenovirus vector vaccines, and 4 (21%) cases involved inactivated vaccines. The majority of the cases (n = 13, 68%) occurred acutely, 1 to 5 days after vaccination. In 4 cases, subacute onset of thyroiditis 2 to 3 weeks after vaccination was reported. In 3 cases, including our case, the second vaccine dose was administered despite the onset of symptoms following the first vaccine dose, and the initial symptoms worsened. In patients with diabetes, sporadic cases are reported in which glucose control acutely worsened after SARS-CoV-2 vaccination (Table 4). [22][23][24][25][26][27][28] Among them, some patients presented with diabetic ketoacidosis or a hyperglycemic hyperosmolar state. Eight of the 12 reported cases of diabetes involved middle-aged men. However, 2 recent observational studies demonstrated that SARS-CoV-2 vaccination only minimally impacted glycemic control in patients with diabetes. 27,28

The natural course and prognosis of these endocrinopathies after SARS-CoV-2 vaccination remain unknown. Among the cases involving the thyroid, 7 (47%) were reported as full recovery after 1 to 3 months. [8][9][10]16,17 In contrast, 6 cases (40%) involving the thyroid had not fully recovered at the time of the reports. With respect to the 3 pituitary cases, 2, including our case, showed partial recovery, in 1 month 12 or 5 months. For patient management, steroids were administered in 4 cases for prolonged thyroiditis. 8,9,16,17

The mechanism of endocrinopathies after SARS-CoV-2 vaccination is unknown. One potential mechanism would involve acute elevation of the cytokine levels. Endocrine cells seem to be susceptible to acute elevation of the cytokine levels, as reported in cancer immunotherapy-associated endocrinopathy. 5,6 It has been reported that SARS-CoV-2 vaccination can cause cytokine release syndrome. 7 Therefore, we speculate that the vaccination may have caused acute changes in the cytokine levels, which led to disruption of endocrine functions. However, the onset of SARS-CoV-2 vaccination-associated endocrinopathy is more acute than that of immunotherapy-associated endocrinopathy, whose onset usually takes more than several weeks. 5,6 Another suggested mechanism for post-SARS-CoV-2 vaccination thyroiditis is cross-reaction of the SARS-CoV-2 spike protein antibody and the thyroid peroxidase antibody. 29 Furthermore, it is possible that vaccination may have triggered underlying conditions in susceptible subjects.

Regarding endocrinopathy involving the pituitary, 3 cases have been reported. These cases had different clinical presentations (Table 3). Our case presented with pituitary stalk thickening and diabetes insipidus. The second case presented with hypopituitarism (secondary adrenal insufficiency and hypothyroidism) with enlargement of the pituitary gland, without diabetes insipidus. 12 The third case presented with hemorrhagic pituitary apoplexy without pituitary hormone deficiency. 21 In our case, 4 important points were noted for the differential diagnosis. First, the broad differential diagnosis of the causes of pituitary stalk thickening needed to be considered. 30
In our initial workup, we ruled out the following: (1) germinoma, by an undetectable human chorionic gonadotropin level; (2) sarcoidosis, by a low angiotensin-converting enzyme level and a negative chest radiographic result; and (3) autoimmune hypophysitis, by a normal IgG level. The second point of the differential diagnosis was the partially empty sella. This may be because of the following: (1) an elevated body mass index; 31 (2) underlying pituitary conditions, although there was no significant past medical or family history; or (3) the effect of SARS-CoV-2 vaccination. The third point was the low IGF1 level. This could be because of the high body mass index 31 or an effect of SARS-CoV-2 vaccination. We speculate that our case is likely SARS-CoV-2 vaccine-related rather than due to the high body mass index. This interpretation is based on the observation that the low IGF1 level was transient and that the level returned to normal 5 months after vaccination (Table 2). The last interesting point was the transient amenorrhea. Upon initial evaluation, the luteinizing hormone and follicle-stimulating hormone levels were relatively low, whereas the estradiol level was within the normal range (Table 2). Given that SARS-CoV-2 vaccination can induce acute changes in the cytokine levels, 7 we speculate that her postvaccination amenorrhea could be functional hypothalamic amenorrhea because of acute stress owing to SARS-CoV-2 vaccination, or possibly mild and transient central hypogonadism. 32 Interestingly, transient menstrual irregularity after SARS-CoV-2 vaccination has also been reported.

Conclusion
We report a rare case of possible SARS-CoV-2 vaccine-related hypophysitis, which led to diabetes insipidus. Endocrinopathy after SARS-CoV-2 vaccination is, indeed, possible. Endocrinopathy is associated with infrequent complications; however, it may be underestimated in the post-SARS-CoV-2-vaccinated population. Further studies are warranted to better understand endocrinopathy and its possible association with SARS-CoV-2 vaccination.
Macrocyclic complexes: synthesis and characterization

A novel series of complexes of the type [M(C28H24N4)X]X2, where M = Cr(III), Fe(III) or Mn(III), X = Cl−, NO3− or CH3COO−, and (C28H24N4) corresponds to the tetradentate macrocyclic ligand, were synthesized in methanolic media by the template condensation of 1,8-diaminonaphthalene and 2,3-butanedione (diacetyl) in the presence of trivalent metal salts. The complexes were characterized by elemental analyses, conductance and magnetic measurements, and UV/Vis and IR spectroscopy. Based on these studies, a five-coordinate square pyramidal geometry is proposed for all the prepared complexes. All the synthesized macrocyclic complexes were tested for their in vitro antifungal activity against some fungal strains, viz. Aspergillus niger and A. fumigatus. The results obtained were compared with the standard antifungal drug fluconazole.

INTRODUCTION

A number of nitrogen donor macrocyclic derivatives have long been used in analytical, industrial and medical applications. 1 Macrocyclic compounds and their derivatives are interesting ligand systems because they are good hosts for metal anions, neutral molecules and organic cation guests. 2 The metal-ion and host-guest chemistry of macrocyclic compounds is very useful in fundamental studies, e.g., in phase transfer catalysis and biological studies. 3 In situ, one-pot template condensation reactions lie at the heart of macrocyclic chemistry. 4 Therefore, template reactions have been widely used for the synthesis of macrocyclic complexes, 5 where, generally, transition metal ions are used as the templating agent. 6 The metal ions direct the reaction preferentially towards cyclic rather than oligomeric or polymeric products. 7 Synthetic macrocyclic complexes mimic some naturally occurring macrocycles because of their resemblance to many natural macrocycles, such as metalloproteins, porphyrins and cobalamine. 8,9 Transition metal macrocyclic complexes have received great attention due to their biological activities, including antiviral, anticarcinogenic, 9 antifertile, 10 antibacterial and antifungal. 11 Macrocyclic metal complexes of lanthanides, e.g., Gd(III), are used as MRI (Magnetic Resonance Imaging) contrast agents. 12 In a previous paper, the synthesis and characterization of macrocyclic complexes of Co(II), Ni(II), Cu(II), Zn(II) and Cd(II) derived from 1,8-diaminonaphthalene and diacetyl were reported. 13 Prompted by these facts, in the present paper, the synthesis and characterization of Cr(III), Fe(III) and Mn(III) macrocyclic complexes derived from 1,8-diaminonaphthalene and diacetyl (2,3-butanedione) are discussed. Complexes were characterized using various physicochemical techniques, such as IR and NMR spectroscopy, elemental analyses, and magnetic susceptibility and conductivity measurements. All the synthesized macrocyclic complexes were tested for their in vitro antifungal activities against some fungal strains, viz. Aspergillus niger (MTCC 282) and A. fumigatus (MTCC 870). The obtained results were compared with those for the standard antifungal drug fluconazole.

Materials

All the chemicals and solvents used in this study were of AnalaR grade. 1,8-Diaminonaphthalene and 2,3-butanedione were procured from Acros; the metal salts were purchased from s.d.-fine, Merck and Ranbaxy and were used as received.
Isolation of complexes

All the complexes were synthesized by the template method, i.e., by condensation of 1,8-diaminonaphthalene and 2,3-butanedione in the presence of the respective trivalent metal salt. To a hot, stirred methanolic solution (≈50 cm3) of 1,8-diaminonaphthalene (10 mmol) was added the trivalent chromium, manganese or iron salt (5 mmol) dissolved in the minimum quantity of MeOH (≈20 cm3). The resulting solution was refluxed for 0.5 h. Subsequently, 2,3-butanedione (10 mmol) was added to the refluxing mixture and refluxing was continued for 8-10 h. The mixture was then concentrated to half its volume, cooled to room temperature and kept in a desiccator overnight, whereby dark-colored precipitates formed, which were filtered, washed with methanol, acetone and diethyl ether, and dried in vacuo. The obtained yields were ≈60-70 %. The complexes were soluble in DMF and DMSO. They were thermally stable up to ≈265-290 °C, after which decomposition occurred.

Analytical and physical measurements

The microanalyses for C, H and N were performed using an elemental analyzer (Perkin Elmer 2400) at SAIF, Punjab University, Chandigarh. The magnetic susceptibility measurements were made at SAIF, IIT Roorkee, on a Vibrating Sample Magnetometer (Model PAR 155). The metal contents in the complexes were determined by literature methods. 14 The IR spectra were recorded on a FT-IR spectrophotometer (Perkin Elmer) in the range 4000-200 cm−1 using the Nujol mull method. The 1H-NMR spectra (at room temperature, in DMSO-d6) were recorded on a Bruker AVANCE II 400 NMR spectrometer (400 MHz) with Me4Si (0.0 ppm) as the reference, at SAIF, Punjab University, Chandigarh. The electronic spectra (in DMSO) were recorded at room temperature on a Hitachi 330 spectrophotometer in the 200-850 nm range. The conductivity was measured on a digital conductivity meter (HPG system, G-3001). The melting points were determined in capillaries using an electric melting point apparatus.

In vitro antifungal activity

All the newly synthesized complexes were evaluated for their antifungal activities towards A. niger and A. fumigatus by the poisoned food technique. 15
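The antifungal results reported later in the paper are given as percentage inhibition of mycelial growth, but the calculation itself is not spelled out. In the poisoned food technique this is conventionally computed from colony diameters on control and poisoned plates; the sketch below assumes that convention (the function name and example diameters are illustrative, not taken from the paper).

```python
def percent_inhibition(control_diameter_mm: float, treated_diameter_mm: float) -> float:
    """Percentage inhibition of mycelial growth in the poisoned food technique:
    I = 100 * (C - T) / C, where C and T are the fungal colony diameters on
    control and compound-amended (poisoned) plates, respectively."""
    return 100.0 * (control_diameter_mm - treated_diameter_mm) / control_diameter_mm

# Illustrative values only: a 40 mm colony on a poisoned plate vs. an 80 mm
# control colony corresponds to 50 % inhibition, roughly the level reported
# below for complexes 2 and 7 against A. niger.
print(percent_inhibition(80.0, 40.0))  # 50.0
```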
Chemistry

The analytical data suggested the formula of the macrocyclic complexes to be [M(C28H24N4)X]X2, where M = Cr(III), Fe(III) or Mn(III), X = Cl−, NO3− or CH3COO−, and (C28H24N4) corresponds to the tetradentate macrocyclic ligand. Measurements of molar conductance in DMSO showed that these chelates are 1:2 electrolytes 16 (conductance 155-185 ohm−1 cm2 mol−1). Various attempts, such as crystallization using mixtures of solvents and low temperatures, were unsuccessful in growing a single crystal suitable for X-ray crystallography. However, the analytical, spectroscopic and magnetic data enabled the possible structure of the synthesized complexes to be predicted. All complexes gave satisfactory elemental analysis results, as shown in Table I.

IR Spectra

It was noted that two bands present in the spectrum of 1,8-diaminonaphthalene at 3350 and 3390 cm−1, corresponding to the ν(NH2) group, were absent from the infrared spectra of all the complexes. Furthermore, no strong absorption band was observed near 1716 cm−1, indicating the absence of the >C=O group of the 2,3-butanedione (diacetyl) moiety. The disappearance of these bands and the appearance of a new strong absorption band in the range 1590-1629 cm−1 confirm the condensation of the carbonyl group of 2,3-butanedione with the amino group of diaminonaphthalene and the formation of a macrocyclic Schiff base, 17 as these bands may be assigned to ν(C=N) stretching vibrations. 18 The lower value of the ν(C=N) vibrations may be explained by a drift of the lone pair electron density of the azomethine nitrogen towards the metal atom, 19 indicating that coordination occurs through the nitrogen of the C=N groups. The medium intensity bands present in the region 2830-2950 cm−1 may be assigned to the ν(C-H) stretching vibrations of the methyl group of the diacetyl moiety. 20 The various absorption bands in the region 1400-1588 cm−1 may be assigned to ν(C=C) aromatic stretching vibrations of the naphthalene ring, 21,22 and the bands in the region 740-785 cm−1 to the ν(C-H) out-of-plane bending of the aromatic ring. 23 The presence of the absorption bands at 1408-1440, 1290-1320 and 1010-1030 cm−1 in the IR spectra of all the nitrato complexes suggests that the nitrate groups are coordinated to the central metal ion in a unidentate fashion. 24 The IR spectra of all the acetate complexes show an absorption band in the region 1650-1680 cm−1, assigned to the asymmetric stretching vibration νas(COO−) of the acetate ion, and another in the region 1258-1290 cm−1, assigned to the symmetric stretching vibration νs(COO−). 27,28 The bands in the spectra of all the complexes in the region 420-450 cm−1 originate from (M-N) azomethine vibrational modes and identify coordination of the azomethine nitrogen. 29 The bands present in the region 220-250 cm−1 in all the nitrato complexes are related to the ν(M-O) stretching vibration. 26,27

NMR spectra

The 1H-NMR spectrum of the zinc(II) complex shows multiplets in the region 6.65-7.32 ppm, corresponding to the aromatic ring protons 30 of the naphthalene moiety (12H). The singlet at 2.32 ppm may be assigned to the methyl protons 31 of 2,3-butanedione (12H).

Magnetic measurements and electronic spectra

Chromium complexes

The magnetic moments of the chromium(III) complexes at room temperature were found in the range 4.25-4.50 B.M., which are close to the predicted values for three unpaired electrons in the metal ion. 15
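The "predicted values" invoked here and in the following subsections are the spin-only magnetic moments, μs.o. = √(n(n+2)) B.M. for n unpaired electrons, a standard textbook relation rather than anything given explicitly in the paper. A short check for the d3 Cr(III), high-spin d4 Mn(III), and high-spin d5 Fe(III) configurations discussed in this section:

```python
import math

def spin_only_moment(n_unpaired: int) -> float:
    """Spin-only magnetic moment in Bohr magnetons (B.M.): mu = sqrt(n(n + 2))."""
    return math.sqrt(n_unpaired * (n_unpaired + 2))

for ion, n in [("Cr(III), d3", 3), ("Mn(III), high-spin d4", 4), ("Fe(III), high-spin d5", 5)]:
    print(f"{ion}: {spin_only_moment(n):.2f} B.M.")
# Cr(III), d3: 3.87 B.M.            (measured here: 4.25-4.50)
# Mn(III), high-spin d4: 4.90 B.M.  (measured here: 4.89)
# Fe(III), high-spin d5: 5.92 B.M.  (measured here: 5.81-5.90)
```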
The electronic spectra of the chromium complexes show bands at ≈9010-9320, 13030-13350, 17460-18320, 27420-27850 and 34810 cm−1. These spectral bands cannot be interpreted in terms of either a four- or six-coordinated environment around the metal atom. However, the spectra are consistent with those of five-coordinated Cr(III) complexes whose structures were confirmed with the help of X-ray measurements. 32 Based on the analytical data, spectral studies and the electrolytic nature of these complexes, a five-coordinate, square pyramidal geometry may be assigned to them. Thus, assuming C4v symmetry for these complexes, 33,34 the various spectral bands may be assigned as: 4B1 → 4Ea, 4B1 → 4B2, 4B1 → 4A2 and 4B1 → 4Eb, respectively.

Manganese complex

The magnetic moment of the manganese(III) complex was found to be 4.89 B.M., which indicates a high-spin (d4) system. 15 The electronic spectra of the manganese complexes show three d-d bands, which lie in the ranges 12350-12590, 16050-18820 and 35420-35700 cm−1. The higher energy band at 35440-35750 cm−1 may be assigned to charge transfer transitions. The spectra resemble those reported for five-coordinate, square pyramidal manganese complexes. 33,34 This idea is further supported by the presence of a broad ligand field band at 20400 cm−1, which is diagnostic for C4v symmetry; thus the various bands may be assigned as follows: 5B1 → 5A1, 5B1 → 5B2, and 5B1 → 5E, respectively. The band assignments in terms of single electron transitions may be given as: dz2 → dx2−y2, dxy → dx2−y2, and dxz, dyz → dx2−y2, respectively, in order of increasing energy.

Iron complexes

The magnetic moments of the iron complexes lie in the range 5.81-5.90 B.M., corresponding to five unpaired electrons, which is close to the predicted high-spin values for these metal ions. 15 The electronic spectra of the iron complexes show various bands at 9820-9970, 15520-15575 and 27550-27730 cm−1, which do not suggest octahedral or tetrahedral geometry around the metal atom. The spectral bands are, however, consistent with the range of spectral bands reported for five-coordinate, square pyramidal iron(III) complexes. 34,35 Assuming C4v symmetry for these complexes, the various bands can be assigned as: dxy → dxz, dyz and dxy → dz2. Any attempt at a more accurate assignment is thwarted by interactions of the metal-ligand π-bond systems, which lift the degeneracy of the dxz and dyz pair.

Biological results and discussion

The antifungal activities of all the complexes were determined against two fungal strains, A. niger and A. fumigatus, and then compared with the activity of the standard antifungal drug fluconazole. The antifungal activities (percentage inhibition) of the complexes and fluconazole are given in Table II. In the whole series, complex 1 showed the highest percentage inhibition against both fungal strains, although none of the tested compounds showed excellent restriction of fungal growth (Table II). Of all the tested complexes, 2 and 7 showed nearly 50 % inhibition of mycelial growth against A. niger, whereas complexes 1 and 4 showed nearly 55 % inhibition of mycelial growth against A. fumigatus (Table II).

CONCLUSIONS

Based on various studies, such as elemental analyses, conductance measurements and magnetic susceptibilities, as well as IR, NMR and electronic spectral studies, a five-coordinate, square pyramidal geometry, as shown in Fig. 1, may be proposed for all the synthesized complexes.
It has been suggested that chelation/coordination reduces the polarity of the metal ion, mainly because of the partial sharing of its positive charge with the donor groups within the whole chelate ring system. 36 Besides this, many other factors, such as solubility, dipole moment and conductivity, influenced by the metal ion, may possibly explain the antifungal activities of these complexes. 37 It has also been observed that some moieties, such as an azomethine linkage or a heteroaromatic nucleus introduced into such compounds, exhibit extensive biological activities; these may be responsible for the increased hydrophobic character and liposolubility of the molecules in crossing the cell membrane of the microorganism, and hence enhance the biological utilization ratio and activity of the complexes. 38

TABLE I. Analytical data of the trivalent chromium, manganese and iron complexes derived from 1,8-diaminonaphthalene and 2,3-butanedione.

TABLE II. In vitro antifungal activities of the synthesized macrocyclic compounds determined by the poisoned food method.
Intersecting Stigmas among HIV-Positive People Who Inject Drugs in Vietnam

HIV-related stigma remains a barrier to ART adherence among people living with HIV (PLWH) globally. People who inject drugs (PWID) may face additional stigma related to their behavior or identity; yet, there is little understanding of how these stigmas may co-exist and interact among these key populations. This study aims to explore the existence of multiple dimensions of HIV-related stigma, and how they may intersect with stigma related to drug injection. The study took place in Vietnam, where the HIV epidemic is concentrated among 3 key population groups; of those, PWID account for 41% of PLWH. The vast majority (95%) of PWID in Vietnam are male. Data came from in-depth interviews with 30 male PWID recruited from outpatient clinics, where they had been receiving ART medications. Deductive, thematic analysis was employed to organize stigma around the 3 dimensions: enacted, anticipated, and internalized stigma. Findings showed that HIV- and drug use-related stigma remained high among participants. All 3 stigma dimensions were prevalent and perceived to come from different sources: family, community, and health workers. Stigmas related to HIV and drug injection intersected among these individuals, and such intersection varied widely across types of stigma. The study revealed nuanced perceptions of stigma among this marginalized population. It is important for future studies to further investigate the influence of each dimension of stigma, and their interactive effects on HIV and behavioral outcomes among PWID.

Introduction

In Vietnam, the HIV epidemic remains concentrated among 3 key population (KP) groups: people who inject drugs (PWID), female sex workers (FSW), and men who have sex with men (MSM). PWID accounted for 41% of PLWH in 2014; HIV prevalence was highest among PWID, compared to other groups: 10.3% in 2013, 1,2 and 14% in 2017. 3 Furthermore, PWID are of concern since drug initiation has been shown to start at a younger age and the transition time from non-injecting to injecting use is becoming shorter, while most PWID are not aware of their HIV status. 4 Anti-retroviral therapy (ART) has been rapidly expanded across the country since 2005. 1,5 However, stigma and discrimination towards drug use prevented many PWID from accessing HIV testing. 6 Evidence from elsewhere also indicated that drug use-related stigma remained prevalent. [7][8][9] PWID are usually perceived as losing personal control over their behaviors and thus deserving of blame, 10,11 as drug use is often seen as immoral and criminal. 8,[12][13][14] Drug-related stigma was associated with less access to health care in general 15 as well as to drug-related healthcare services. 16 Stigma and dissatisfaction among healthcare providers who provided ART treatment in Vietnam have also been reported. 17,18 Most research has not investigated both HIV- and drug use-related stigmas concurrently. [19][20][21][22] The last few years have seen an increasing number of studies examining the role of stigma in ART adherence 4,[23][24][25]; yet, few have focused on PWID. 21,[26][27][28] Only Lim et al 29 and Li et al 30 attempted to quantify these stigmas, but not their joint impact; in the latter study, perceived and internalized stigma were examined among Vietnamese PLWH who use drugs, yet no interactions were investigated.
In addition, Li et al 30 only studied those who had not accessed ART; consequently, their results might not be generalizable to those who had already initiated treatment. This study was guided by the HIV Stigma Framework, 31 which posits that stigma consists of 3 distinct yet interrelated dimensions: enacted, anticipated, and internalized stigma. Enacted stigma is the extent to which PLWH believe they have experienced discrimination by others, anticipated stigma is the degree to which PLWH expect that they will be discriminated against, and internalized stigma refers to the extent to which PLWH endorse the negative beliefs and feelings about themselves. 31

Stigma intersectionality

Stigma intersectionality is defined as the simultaneous existence of stigma that is beyond additive stigmas: 2 or more co-existing stigmas can interact to present greater oppression than their simple summation. 32 HIV-positive PWID may experience co-existing stigmas that are related to HIV, drug use, and possible involvement with the criminal justice system, resulting in avoidance of treatment. 33,34 They may also experience stigma and discrimination at multiple levels. 33,[35][36][37] Turan et al's 38 and Stangl et al's 39 frameworks also emphasized that stigma could come from multiple sources, facilitating cross-condition examinations of how stigma related to different conditions can intersect. Earnshaw and Kalichman's intersectionality model 40 suggested that the impact of one stigma depends on that of the other; for example, PLWH who inject drugs may be more likely to conceal their HIV status than PLWH who do not inject drugs. 41 Calabrese et al, 42 focusing on internalized stigma, also found an interactive impact of HIV and drug-related stigmas on health service utilization. A recent study by Stringer et al 43 found evidence of impacts of substance use stigma on ART adherence, independent of HIV-related stigma. There is a need for a more comprehensive investigation of stigma intersectionality. 44 In this study, using an intracategorical methodological approach, we aim to describe stigma related to HIV and injecting drug use, the existence of multiple dimensions of stigma, and how they may intersect, through a series of in-depth interviews with HIV-positive male PWID in a concentrated epidemic. 45

Methods

Data

Data for this analysis came from a larger study on barriers to ART adherence among HIV-positive PWID in Vietnam. Male PWID, accounting for 95% of PWID in Vietnam, 2 were the target population; recruitment criteria included: (1) HIV-positive status, (2) having been in ART treatment for at least 6 months, even if not continuously, and (3) being 18 years old or older at the time of the interview. The final sample included 30 male PWID selected using quota sampling at 2 outpatient clinics (OPCs) with leading ART patient loads in Nghe An province, where the epidemic is concentrated among PWID. 2 Their age ranged from 19 to 53 (mean = 32.4); their demographic characteristics are presented in Table 1. Semi-structured interviews were conducted in Vietnamese at a place of the participant's choice, often their homes. The interview guide (see Appendix 1) included questions about challenges that participants might have encountered in accessing HIV testing and ART, and their perceptions and experiences, if any, with stigma and discrimination. There were questions about personal experiences with HIV testing as someone who injected drugs, being HIV-positive, and accessing services.
Participants were asked about how they were treated by health care providers, family, and community members, and how such treatment might differ from that of a person living with HIV who did not inject drugs. The interviews lasted between 1 and 2 hours. Each participant received the equivalent of $10 for their time. Ethical approval was obtained from the 2 universities in the United States and in Vietnam. Interviewers were senior researchers in Vietnam with qualitative method experience, whereas junior researchers transcribed the interviews verbatim. Another team member checked the transcripts against the original audio recordings for completeness and accuracy. Once the interviews were transcribed in Vietnamese, researchers at 2 universities in the US and Vietnam conducted a spot check for completeness and accuracy.

Analysis

Data analysis, using NVivo 11.0, 46 was conducted in Vietnamese in order to preserve the original meanings of responses; quotes were translated into English for presentation. Content analysis was conducted using a priori codes developed for the 3 stigma dimensions, and related concepts that emerged. A codebook was first developed by the researchers in Vietnam who conducted the interviews. Next, 5 transcripts, diverse in terms of duration of treatment and level of adherence, were coded using the initially drafted codebook. Free nodes were added whenever a new theme emerged. All researchers revised the codebook, each first coded 3 interviews, then re-grouped to discuss and revise the codebook or the codes used, if necessary. Finally, each transcript was coded independently and then merged into 1 data file. For this paper, a deductive, thematic approach was employed to organize the data around the 3 stigma dimensions.

Results

The vast majority of our participants had been in ART treatment for 2 or more years at the time of the study, several had been in treatment for less than 2 years, and only 1 had begun the treatment just over a month before the interview. Adherence was very high, in part because ART medications could be picked up by family members on behalf of the patients; 5 out of 30 participants had their medications picked up for them at the time of the interview. Our data indicated wide variations in perceptions of stigma related to HIV and drug injection among PWID living with HIV. Following Earnshaw and Chaudoir's 31 framework, we present evidence of possible intersectionality and major themes of stigma related to HIV and injecting drug use. In this section, participants' age is reported in parentheses, following their ID number.

Theme 1: Enacted stigma

There was a stark difference in enacted stigma related to HIV and drug injection. While the majority of participants reported declines in perceived HIV-related discrimination, nearly everyone reported perceived stigmatization and discrimination associated with injecting drug use. As it could come from multiple sources (spouses and family, community members, and health workers), such stigmatization remained a barrier for PWID to initiate and remain in HIV care and treatment. Declines in HIV-related perceived stigmatization were attributed to increases in the number of people living with HIV, and to increased information and education materials, printed or on TV; the latter contributed to the general public's improved knowledge of HIV. In contrast, illicit drug injection continued to be perceived as a consequence of one's losing control over one's behavior, with no returning to being a "normal" person once an individual became addicted.
Enacted stigma from families and relatives. It was common to hear about couple separation due to the man's drug use habits, like this one: "When I returned from the rehabilitation center, my wife had left me, she had someone else." (IDI3, 28). In contrast, spouses and families seemed more willing to stay to take care of their HIV-positive men; very few participants reported being separated from spouses or family members solely because of their HIV status.

Enacted stigma from the community. The most common stigmatization perceived by our participants came from the community. It ranged from a hesitant look, to avoidance of interactions, to gossip about the participant's drug use habits and perceived thefts committed by drug users. In many cases, such stigmatization extended to PWID's parents and family, which concerned our participants as much as stigmatization against themselves. One said: Our neighbors gossip about us, that makes my parents very sad. If I were not addicted, this would not have happened to my parents. . .. They said things like it was a pity that my parents were government workers but I am a drug addict. (IDI27, 28) However, it was difficult to tease out if such stigmatization was due to our participants' HIV status, drug use, or both. In a few cases, one could guess that a participant's reported experience with discrimination was more likely due to his HIV status than to drug use.

Enacted stigma from health workers. Perceived stigmatization from health workers has frequently been reported as a key barrier for PLWH who inject drugs to going to health centers. Health workers were often seen as unfriendly, unwilling to spend time and answer questions from patients. Such stories were common. It was not clear, though, if this perceived stigma was associated with a patient's drug use or his HIV status, or if it was due to a possible high patient load, as our participants mentioned.

Theme 2: Anticipated stigma

Meanwhile, a participant echoed the feelings of many about disclosing their drug use: People conceal their practice of injecting drugs because even if they only use drugs three times, they will get addicted. The wife will think that if he injects drugs once, there will be the second time, then the third time. . . and addiction is inevitable. Maybe wives find it hard to forgive for that reason. (IDI18, 28) Several also commented on how they expected more HIV-related stigma and discrimination because they got infected through drug injection, compared to those who got infected through sexual intercourse with commercial sex workers. The same man explained the reason: Who injects drugs will definitely get addicted eventually. Once addicted to drugs, they will no longer work, leading to stealing. And the society doesn't accept that. Prostitution, on the other hand, is only considered as an occupation, which does not make people addicted. (IDI18, 28)

Theme 3: Internalized stigma

Feelings of guilt and loss of hope. A prominent sub-theme that emerged was feelings of guilt, where participants blamed themselves for getting infected with HIV and creating a burden for their family. Many expressed more regrets about using drugs and felt more depressed when they compared themselves to HIV-negative drug users: Many times I feel guilty about the mistakes that I made [engaging in drug use], so now I have to accept the consequence [HIV infection]. I am very heavy-hearted right now, I think a lot about my mother.
My mother sacrificed a lot to raise me and my brother, but now both my brother and I are addicted, and now we both got this disease [HIV infection]. I wonder why I was named XX [meaning loyalty and respect for your parents], but I am the opposite. Sometimes I can see my grandmother and mother walk by our room, and my mother cries or talks to herself; I feel such a huge pain in my heart. (IDI3, 28) Another PWID blamed himself for getting addicted to injecting drugs. There also seemed to exist a vicious cycle, where PWID got infected through injecting drugs, making them feel guilty and useless to their family, which in turn contributed to their declining health and increased desires to return to injecting drugs. One man summed this up.

Avoidance and self-isolation. In general, reports of self-isolation and avoidance of social interactions due to HIV were more common than those due to injecting drug use, either out of guilt for getting infected, or out of mistaken fears of spreading HIV, like this one: "I was very, very sad when I found out that I was positive. . . I quit my job. I often feel pity for myself. . . [omitted] At home, I can help my wife cook the rice or boil the water, but I let her cook other foods or do other work. I don't dare do those things anymore." (IDI8, 33). Despite being a college-educated man, this participant was still afraid of spreading the virus through cooking, which suggested that HIV misconceptions may remain common in the public. Some participants confined themselves within the limits of their home and avoided all interactions, which could potentially make their situation and health worse, mentally and physically. Several men avoided having a partner or committing to a long-term relationship for fear that they could not become a meaningful part of anyone's future. Only a few participants reported being somewhat accepting of the situation and willing to move on. They acknowledged the mistakes that got them infected with HIV, but now that ART treatment was available, allowing PLWH to continue leading a healthy and meaningful life, they planned on doing so.

Discussion

This analysis showed that despite some declines in HIV-related stigma, stigma remained high for PWID living with HIV in Vietnam. Evidence of all 3 dimensions of stigma was prevalent and perceived to be from multiple sources: family, community, and health workers. Our interviews also suggested that there was a potential intersection between HIV and drug use-related stigmas and that the intersection varied across dimensions of stigma, consistent with previous studies in Vietnam. 21,47 On the other hand, ART adherence was nearly universal in this sample, in part because ART medications could be picked up by someone in the family, thus reducing the potential for patients themselves to experience stigmatization in public. While HIV-related enacted and anticipated stigma seemed to have declined, enacted and anticipated stigma related to drug injection remained widespread. This finding is similar to those reported by Li et al 30; however, we went beyond the previous study by documenting possible intersections of different types of stigmas among individuals who were both HIV-positive and injecting drugs. We found that a person who was living with HIV might face enacted stigma that was substantially magnified if he got infected through drug injection, compared to one not injecting drugs.
A man who was infected with HIV through other types of transmission, for example, sexual intercourse with a sex worker, was more likely to be forgiven and sympathized with, compared to one who got infected through drug injection. Enacted stigma reportedly came from multiple sources, including health workers at OPCs where PWID received treatment. While some participants justified such enacted stigma by the heavy workload of health workers, it is still possible that this structural barrier contributed negatively to the way patients were treated. Recent studies in Vietnam have also suggested that enacted and anticipated stigma was common among PWID, and that stigmatization by health care providers was an important barrier for PWID to access services. [48][49][50] For example, health care providers often described patients using terms like "lazy" and "unreliable," and viewed PWID as unable to prioritize HIV treatment and care. 49 While our participants rarely reported explicit stigmatization from health care providers, many reported providers being silent, cold, or not allowing questions from patients. Such treatment could discourage patients from engaging in treatment and care. 51 Courtesy biases cannot be ruled out; thus, stronger stigmatization from health care providers than reported is possible. While we did not investigate providers' perceptions of PLWH who inject drugs, taken together, this study and the current literature suggest a clear disconnection between PWID's and providers' views of the challenges faced by PWID living with HIV in accessing ART. It is critical to close this gap to foster a supportive environment for PWID to access and stay in treatment and care. Internalized stigmas related to HIV and drug use, however, both remained high. Together, such co-existing stigmas contributed to many participants blaming themselves for having injected drugs and for having brought HIV to their family and the community. Those who had disclosed their HIV status and drug-injecting behavior became even more depressed and regretful, while those who had not disclosed either status were more reluctant to do so. It should be noted, though, that everyone in our study had accessed ART treatment, which could have either spurred or been a consequence of HIV status disclosure within the family. As a result, we were not able to compare PWID whose HIV status was known versus unknown to family members, as in Rudolph et al's study. 47 There are a few limitations in our study. A key limitation of this analysis is that data came from a study originally designed to assess ART adherence among PLWH who inject drugs. Consequently, our sample did not include PLWH who did not inject drugs or PWID who were HIV-negative. The sample design limited our ability to compare stigma between these groups. However, our interview guides (see Appendix 1) included questions asking specifically about experiences and perceptions related to HIV vs. injecting drug use. Most of our participants were able to distinguish their experiences in this regard. Therefore, our data still provide useful insight into the intersecting stigma in this KP group. Second, the transferability of our findings is limited, since the sample was small and purposively recruited from 2 OPCs, using specific criteria. Our findings are not transferrable to female PWID living with HIV, as they may have very different experiences with stigma and discrimination due to perceived gender roles. 47,52
As female PWID account for only 5% of PWID in Vietnam, oversampling of them would be necessary to draw meaningful conclusions. Third, selection bias is possible as our participants were recruited from among OPC attendees. It is possible that their perceptions of stigma were very different from those of people not routinely attending or who have never attended OPCs, as well as those who had not disclosed their HIV status to families. Studies have shown that drug use-related stigma and the illegal nature of drug injection were significant barriers for users to get tested and access health services. 15,53,54 In Vietnam, the government's guidelines dictate that anyone in the KP groups who has tested positive for HIV will be immediately referred for ART treatment, 55 and ART retention has been consistently over 95%. 56 Consequently, we did not attempt to differentiate between PWID who had consistently been in treatment and those who had not, since it would be difficult to find the latter group. We also did not examine stigma by the duration of treatment. 50 However, since our focus was on the co-existence and intersection of stigmas instead of a quantitative assessment of stigma, we believe that the study still makes important contributions to the stigma literature. Finally, social desirability and courtesy biases could not be ruled out. Despite the limitations, the study makes several important contributions to the current literature. First, it sheds light on the nuanced perceptions of 3 stigma dimensions not previously understood in a middle-income country with a concentrated epidemic. Second, our study suggested a potential intersection between HIV-related and drug use-related stigma, which varied across stigma dimensions. Future studies should include comparison groups to allow assessment of stigma dimensions related to HIV only, injecting drug use only, or both. Still, our current study serves as a basis to inform the development of a conceptual model of intersecting stigma in Vietnam, which is critical for quantitative assessments of the influence of stigma dimensions on health outcomes. Our findings underline the need for stigma reduction strategies to be planned and well thought out with consideration of the operationalization of different dimensions of stigma within a population group.

Author's Note

An earlier analysis of this data was presented at the International AIDS Society Conference.
Ozone Inhalation Attenuated the Effects of Budesonide on Aspergillus fumigatus-Induced Airway Inflammation and Hyperreactivity in Mice

Inhaled glucocorticoids form the mainstay of asthma treatment because of their anti-inflammatory effects in the lung. Exposure to the air pollutant ozone (O3) exacerbates chronic airways disease. We and others showed that presence of the epithelial-derived surfactant protein-D (SP-D) is important in immunoprotection against inflammatory changes including those induced by O3 inhalation in the airways. SP-D synthesis requires glucocorticoids. We hypothesized here that O3 exposure impairs glucocorticoid responsiveness (including SP-D production) in allergic airway inflammation. The effects of O3 inhalation and glucocorticoid treatment were studied in a mouse model of allergic asthma induced by sensitization and challenge with Aspergillus fumigatus (Af) in vivo. The role of O3 and glucocorticoids in regulation of SP-D expression was investigated in A549 and primary human type II alveolar epithelial cells in vitro. Budesonide inhibited airway hyperreactivity, eosinophil counts in the lung and bronchoalveolar lavage (BAL) and CCL11, IL-13, and IL-23p19 release in the BAL of mice sensitized and challenged with Af (p < 0.05). The inhibitory effects of budesonide were attenuated on inflammatory changes and were completely abolished on airway hyperreactivity after O3 exposure of mice sensitized and challenged with Af. O3 stimulated release of pro-neutrophilic mediators including CCL20 and IL-6 into the airways and impaired the inhibitory effects of budesonide on CCL11, IL-13 and IL-23. O3 also prevented budesonide-induced release of the immunoprotective lung collectin SP-D into the airways of allergen-challenged mice. O3 had a bi-phasic direct effect with early (<12 h) inhibition and late (>48 h) activation of SP-D mRNA (sftpd) in vitro. Dexamethasone and budesonide induced sftpd transcription and translation in human type II alveolar epithelial cells in a glucocorticoid receptor and STAT3 (an IL-6 responsive transcription factor) dependent manner. Our study indicates that O3 exposure counteracts the effects of budesonide on airway inflammation, airway hyperreactivity, and SP-D production. We speculate that impairment of SP-D expression may contribute to the acute O3-induced airway inflammation. Asthmatics exposed to high ambient O3 levels may become less responsive to glucocorticoid treatment during acute exacerbations.
INTRODUCTION

In the era of novel biologicals being introduced into the clinic, glucocorticoids remain the main choice of asthma treatment due to their significant anti-inflammatory, immunosuppressive, and immunomodulatory effects (1,2). A subset of patients, however, is refractory to glucocorticoids (3-7), making their asthma difficult to control (7). Glucocorticoid insensitivity can be a primary genetic trait, but more commonly it is acquired during acute inflammatory exacerbations of airway disease (2,4-6,8). Epidemiologic studies indicate a causal link between air pollution and worldwide increases in asthma prevalence and severity. Inhalation of O3, a ubiquitous, oxidizing, and toxic air pollutant, induces acute exacerbations with proinflammatory mediator release, neutrophilic granulocyte influx and obstruction of airways (9-15) and substantially worsens asthma morbidity and mortality (16,17). Data obtained from studies on mice (18), dogs (19), rhesus macaques (20), healthy volunteers (21), and asthma patients (22,23) have been controversial on whether glucocorticoids are effective in inhibiting O3-induced exacerbation of airway inflammation and airway hyperreactivity in asthma. Further, the mechanisms of increased susceptibility of the asthmatic airways to O3 and how glucocorticoid action is affected by inhalation of this air pollutant remain unclear. Individual susceptibility suggests that genetic predisposition is involved in O3 responsiveness (24). This is corroborated by the strain dependence of the inflammatory response to O3 observed in mice (14,15,25). In addition, increasing evidence supports that a failure of protective immune mechanisms also likely plays an important role in shaping the O3 effects in the lung. Surfactant protein-D (SP-D), an epithelial cell product of the airways, is a critical factor in the maintenance of pulmonary immune homeostasis. We have originally raised the importance of changes in SP-D expression in resolving allergen and O3-induced airway inflammation (26) by demonstrating that a differential ability of Balb/c and C57BL/6 mice to respond to allergen (27) or O3 (28) was inversely associated with the amount of SP-D recovered from the airways of these mouse strains (28,29).
Accordingly, genetically low SP-D producer or SP-D deficient mice were highly susceptible to, and had a prolonged recovery period from, airway inflammation after allergen or O3 exposure (28,30,31). O3-inhalation induced exacerbation of Th2-type airway inflammation in allergen challenged mice was also associated with the appearance of abnormal oligomeric molecular forms of SP-D, indicating that oxidative damage can cause a conformational change with a potential loss of its immunoprotective function (32,33). While our lab and others showed that glucocorticoids are necessary for expression of SP-D in epithelial cells (34-37), we also demonstrated a feedback regulation between SP-D and the Th2 cytokines IL-4/IL-13 (30) as well as IL-6 (28), respectively. Interestingly, we found no glucocorticoid response elements in the proximal promoter region of the SP-D gene (sftpd); however, this region contains an evolutionarily conserved STAT3/6 response element in a prominent proximal location. Pertinent to this, IL-4/IL-13 (activators of STAT6) as well as IL-6 (activator of STAT3) directly upregulated SP-D synthesis in airway epithelial cells in vitro and in mice in vivo (28,30). Lastly, there are indications that STAT3 can be directly phosphorylated by H2O2 (the molecular product of O3 when mixed in water) treatment of airway epithelial cells in vitro (38). We hypothesized that exposure to O3 interferes with the effects of glucocorticoids on Af-induced airway inflammation and hyperreactivity and that O3 and glucocorticoid treatment have antagonistic effects on SP-D expression and function in the lung. To study these hypotheses we utilized our in vivo mouse model of combined Af + O3 exposure and in vitro human airway epithelial cell cultures.

In vivo Studies

Balb/c mice were obtained from the Jackson laboratories (Bar Harbor, ME) and bred in-house. All experiments were performed on 8-10 weeks old mice. Experiments where mice were sensitized and challenged with Af and exposed to air or O3 were carried out as previously described (30,39,40). In brief, mice were sensitized with 20 µg Af and alum by intraperitoneal injection (i.p.) on days 0 and 7, then challenged with 25 µg Af by intranasal (i.n.) instillation on day 13. In Figure 1, mice were treated with vehicle (dimethyl sulfoxide, DMSO) or budesonide (0.25 or 2.5 mg/kg) i.n. at the time of Af challenge. 48 h post challenge, lung function (enhanced pause, Penh) was measured using the Buxco® system. In Figures 2-5, mice followed the Af sensitization and challenge protocol as described; however, 84 h post challenge/budesonide they were exposed to 3 ppm O3 or air for 2 h. Animals were studied 96 h post Af challenge (12 h post O3). These time points were selected to mimic O3-induced exacerbation of allergic changes, because by 96 h post Af the acute allergic inflammation is resolving (Figures 2A,B) (33,40). That a 3 ppm inhaled dose in rodents results in O3 concentrations in the lungs relevant to human exposure levels has been experimentally validated by others, using oxygen-18-labeled O3 (18O3). Hatch et al. showed that exposure to 18O3 (0.4 ppm for 2 h) caused 4-5-fold higher 18O3 concentrations in humans than in rats, in all of the BAL constituents measured (41). Rats exposed to 2.0 ppm still had less 18O3 in BAL than humans exposed to 0.4 ppm. The species discrepancies between the recoverable O3 levels in the lung are not entirely clear.
It is thought, however, that as rodents are obligate nose breathers (while humans breathe through their nose and mouth), this reduces the delivered dose of O3 to the lungs of rodents. Further, Slade et al. found that after exposure to O3, mice react with a rapid decrease of core temperature, a species- and strain-specific characteristic (42). The recoverable 18O3 in the lung tissue was negatively associated with the extent of hypothermia, which significantly altered O2 consumption and pulmonary ventilation, explaining at least partly the interspecies differences seen in O3 susceptibility. In addition, in pilot studies we also performed a careful assessment of the biological effects of a range of 0.5-6.0 ppm O3 exposures. Doses lower than 3 ppm did not elicit a significant inflammatory response that would be commensurate with what is seen in humans, in regards to BAL or peripheral blood neutrophilia, upon O3 inhalation for 2 h. Doses higher than 3 ppm caused observable respiratory distress, especially in Balb/c mice. The O3 dose we used here therefore represents a level of exposure that is well-tolerated by both Balb/c and C57BL/6 mice and that causes a significant airway inflammatory response. Lung function was measured using the Flexivent® system (Scireq, Montreal, Canada) in response to increasing concentrations of inhaled methacholine.

FIGURE 2 | O3 induced airway inflammation and hyperreactivity and enhanced allergic airway changes in mice sensitized and challenged with Af. (A) Balb/c mice were exposed to air or 3 ppm O3 for 2 h and studied at the indicated time points for airway inflammation; 12 h after O3 exposure, lung function to methacholine was measured (Flexivent®) prior to BAL. (B) BAL neutrophils and eosinophils were quantified by differential cell counts on cytospin preparations multiplied by the total cell counts recovered from the BAL (Countess®). (C) O3 exposed mice and air exposed controls were studied for methacholine responsiveness 12 h later; mean ± SEM of n = 6, **p < 0.01, two-way ANOVA with Tukey's multiple comparisons test (air vs. O3 exposure). (D) Balb/c mice were sensitized to 20 µg Aspergillus fumigatus (Af) with alum (i.p.) on days 0 and 7; on day 13, mice were challenged with 25 µg Af (i.n.); 82 h post-Af challenge, mice were exposed to air or 3 ppm O3 for 2 h, then 12 h later (96 h post-Af challenge) lung function was measured (Flexivent®) and BAL was harvested. (E) BAL neutrophils (live Ly6G+ CD11b+ cells) and eosinophils (live CD11c− Siglec-F+ cells) were quantitated by FACS analysis; the absolute numbers of eosinophils and neutrophils were calculated by multiplying the percentage of cells determined by flow cytometric gating with the total numbers of cells/lung (Countess®); mean ± SEM of n = 6, **p < 0.01, Student's t-test (air vs. O3). (F) Lung function (airway resistance, Raw) was measured as indicated; mean ± SEM of n = 6, ***p < 0.001, two-way ANOVA with Tukey's multiple comparisons test (air vs. O3 exposure).

BAL and lung cells were harvested to study inflammatory cells by flow cytometry. Following collection of BAL, 10 mL ice-cold PBS was injected into the right ventricle to perfuse the lung. The lung lobes were then carefully removed and snipped into small pieces before undergoing digestion with Liberase TL (Millipore Sigma, Burlington, MA) for 40 min at 37 °C on a shaker. Digested whole lung homogenate was filtered through a 70 µm cell strainer to create a single cell suspension for flow cytometric analysis.
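The Figure 2 legend above states that absolute eosinophil and neutrophil numbers were obtained by multiplying the flow-cytometry gated percentage by the total cell count from the automated counter. A minimal sketch of that arithmetic follows; the function name and example values are illustrative, not data from the study.

```python
def absolute_cell_count(gated_percent: float, total_cells: float) -> float:
    """Absolute number of a cell subset: the percentage of gated events from
    flow cytometry (e.g. live Siglec-F+ CD11c- eosinophils) multiplied by the
    total cell count from the automated counter (Countess)."""
    return (gated_percent / 100.0) * total_cells

# Illustrative numbers only: 12% eosinophils among 5e5 total BAL cells.
print(absolute_cell_count(12.0, 5e5))  # 60000.0
```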
In cell-free BAL supernatant, a custom mouse magnetic Luminex assay was utilized to study cytokines and chemokines, while SP-D was measured by western blot, native gel electrophoresis (structure) or sandwich ELISA (quantity). All mouse procedures were reviewed and approved by the University of California, Davis, and University of Pennsylvania Institutional Animal Care and Use Committees.

Flow Cytometry

BAL and lung cells were harvested and single cell suspensions were prepared as previously described for analysis by flow cytometry (40). Fluorescently-conjugated monoclonal antibodies were purchased from Biolegend (San Diego, CA), BD Biosciences (San Jose, CA), or eBioscience (San Diego, CA). Single cell suspensions were incubated with antibodies targeting surface markers for 20 min at 4 °C in the dark. In the BAL samples, the following antibodies were used: APC-Cy7-CD11c, PE-Siglec F, PE-Cy7-CD11b, and PerCP-Cy5.5-Ly6G. In the lung digest suspensions, neutrophils and eosinophils were identified using APC-Cy7-CD11c, PE-Siglec F, and PerCP-Cy5.5-Ly6G. Live/dead Aqua was used in the panels throughout the study to exclude dead cells. Flow cytometry was performed on a Fortessa (BD Biosciences, San Jose, CA) and data were analyzed using FlowJo software (Ashland, OR).

FIGURE | (C) Native-PAGE western blot was used to study SP-D structure; native (intact) SP-D is the band that, due to its molecular size, remains at the top of the gel, while the de-oligomerized SP-D components, with their variable migratory capabilities, appear as a "smear" throughout the gel. (D) SP-D optical density by Image J analysis; ratio over the mean value of the "air+vehicle" control group. Mean ± SEM of n = 5-6; *p < 0.05 air vs. O3, #p < 0.05 vehicle vs. budesonide, two-way ANOVA with Tukey's multiple comparisons test.

Luminex Assay

Cytokines and chemokines were assayed in the BAL via a custom Magnetic Mouse Luminex Assay (R&D Systems, Minneapolis, MN). C-C motif chemokine 11 (CCL11), interleukin-23p19 (IL-23p19), IL-13, IL-6, chemokine (C-X-C motif) ligand 2 (CXCL2), and CCL20 were measured in the premixed kit. BAL fluid was first concentrated using a 2 mL, 3 k Amicon Ultra Centrifugal Filter (Millipore Sigma, Burlington, MA) spun at 3,000 g for 30 min. The assay was performed following the manufacturer's instructions.

BAL SP-D Analysis

Total protein concentration was measured by the BCA Assay (Thermo Fisher Scientific, Waltham, MA). BAL SP-D was assayed by sandwich ELISA using our in-house generated monoclonal and polyclonal antibodies, as previously described (33). BAL SP-D was also measured by native gel electrophoresis (33) to assess the tertiary structure of SP-D, which is critical to maintain its anti-inflammatory functions (43,44). Proteins were transferred to a nitrocellulose membrane (Thermo Fisher Scientific, Waltham, MA). Goat anti-mouse SP-D (1:2,000, R&D Systems, Minneapolis, MN) was the primary antibody, while donkey anti-goat antibody coupled to horseradish peroxidase (1:10,000, GE Healthcare Life Sciences, Marlborough, MA) was the secondary antibody. SP-D signal was detected using the ECL Western Blotting Substrate (Thermo Fisher Scientific, Waltham, MA) on film (ECL Hyperfilm, GE Healthcare Life Sciences, Marlborough, MA). Image J (National Institutes of Health, Rockville, MD) analysis was used to determine the optical density of SP-D bands.

FIGURE | Human primary type II alveolar epithelial (hAECII) cells were incubated for 2 h with dexamethasone (Dex), cucurbitacin I (Cu I), and RU486 as indicated; 48 h later, cells were harvested and SP-D was studied by western blot (protein, relative expression to GAPDH) and qPCR (sftpd mRNA). (C) qPCR of sftpd (fold over no dexamethasone). (D-F) SP-D protein was studied by western blot, compared to control GAPDH; optical density of SP-D and GAPDH by Image J analysis, with GAPDH subtracted from SP-D density and the ratio taken over the mean value of the "no treatment" group. Mean ± SEM of n = 6 (C) or n = 3 (D-F). *p < 0.05, **p < 0.01 vs. "no treatment"; #p < 0.05, ##p < 0.01 vs. the same concentration of Dex or Bud alone; one-way ANOVA with Bonferroni's multiple comparisons test (D-F). Western blots are representative of three independent experiments.

In vitro Studies

Human primary type II airway epithelial (hAECII) cells were acquired from normal human lung tissues from NDRI (National Disease Research Interchange). A549 cells were purchased from ATCC (Manassas, Virginia). Dexamethasone, budesonide, cucurbitacin I (Cu I), and RU486 were purchased from Millipore Sigma (Burlington, MA). A549 cells are a human type II alveolar epithelial cell line used by our laboratory and others (45-47) to model functions including expression of mRNA for SP-D. We used these readily available cells to establish conditions of SP-D mRNA expression upon treatment with ozone and budesonide (Figure 6). The budesonide effects were then recapitulated in primary hAECII cells (Figure 7). A549 cells were cultured in DMEM supplemented with 10% fetal bovine serum and 1% penicillin/streptomycin (Thermo Fisher Scientific, Waltham, MA). Primary hAECII cells were cultured in DMEM-H21 plus F-12 Ham's (1:1) supplemented with 5% fetal bovine serum, 100 U/ml penicillin, 0.1 mg/ml streptomycin, and 2 mM L-glutamine. D-valine (Invitrogen) prevented growth of fibroblasts. hAECII cells were treated with budesonide/dexamethasone for 2 h, with or without RU486 and Cu I added to the culture. A549 cells were treated with budesonide and exposed to air or O3 (300 ppb) for 2 h, generated as previously described (31). DMSO was used as the vehicle control in the in vitro studies. At the time points indicated in the figures, cells were harvested for analysis of sftpd mRNA (qPCR) or SP-D protein (western blot). Cells were harvested in TRIzol reagent (Thermo Fisher Scientific, Waltham, MA) for mRNA analysis by qPCR or RIPA buffer (Thermo Fisher Scientific, Waltham, MA) for protein analysis by western blot. For the western blots, total intracellular protein was measured by the BCA assay, then 20 µg protein was loaded in each lane. The primary antibody was goat anti-SP-D (1:500); the secondary antibody was HRP anti-goat IgG (1:1,000). For control antibodies, the primary was rabbit anti-GAPDH (1:1,000) and the secondary was HRP anti-rabbit (1:5,000). All antibodies were purchased from Santa Cruz Biotechnology (Dallas, TX). Image J (National Institutes of Health, Rockville, MD) analysis was used to determine the optical density of SP-D bands. For the qPCR, RNA was extracted from the TRIzol by chloroform layering and isopropanol precipitation, then reverse transcribed into cDNA via the QuantiTect Reverse Transcription Kit (Qiagen, Hilden, Germany). qPCR was performed on the recovered cDNA using SYBR green reagents (Applied Biosystems, Foster City, CA) on a ViiA 7 Real-Time PCR system (Thermo Fisher Scientific, Waltham, MA). Fold change was calculated using the ΔΔCt method, first normalizing values to GAPDH.
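The fold-change calculation just described is the standard ΔΔCt (2^−ΔΔCt) normalization; the sketch below assumes that standard form, with illustrative Ct values rather than data from the study.

```python
def fold_change_ddct(ct_target_treated: float, ct_ref_treated: float,
                     ct_target_control: float, ct_ref_control: float) -> float:
    """2^-ddCt fold change: normalize the target Ct to the reference gene
    (GAPDH) within each condition, then compare treated vs. control."""
    dct_treated = ct_target_treated - ct_ref_treated
    dct_control = ct_target_control - ct_ref_control
    ddct = dct_treated - dct_control
    return 2.0 ** (-ddct)

# Illustrative Ct values: sftpd 26.0 / GAPDH 18.0 in treated cells and
# sftpd 28.0 / GAPDH 18.0 in controls gives a 4-fold induction.
print(fold_change_ddct(26.0, 18.0, 28.0, 18.0))  # 4.0
```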
Human SP-D primers with the sequences 5′-ACACAGGCTGGTGGACAG-3′ (sense) and 5′-TGTTGCAAGGCGGCATT-3′ (antisense) were used to produce a 61 bp product.

Statistical Analysis

All statistics were performed using Prism v7 software (GraphPad Inc., La Jolla, CA). Data are expressed as mean ± SEM and are representative of at least two independent experiments. A Student's t-test was used to compare vehicle vs. budesonide or air vs. O3. A one-way ANOVA with Tukey's multiple comparisons test was used in the budesonide dose-response experiment (Figure 1). A two-way ANOVA with Tukey's multiple comparisons test was used when comparing all experimental groups. A p < 0.05 was considered statistically significant.

Budesonide Inhibited Airway Hyperreactivity Induced by Af Sensitization and Challenge in Balb/c Mice in a Dose-Dependent Manner

Inhaled glucocorticoids improve lung function and reduce airway inflammation in allergic asthma (1, 48, 49), but their effectiveness in acute asthma exacerbations is the subject of ongoing investigation (49-51). We developed a model in which mice were sensitized (i.p.) and challenged with Af at the time of budesonide administration (i.n.) (Figure 1A). To establish the dose-dependent effects of budesonide on methacholine responsiveness, we used Penh (enhanced pause), a non-invasive measure of airway obstruction, because it enabled us to obtain data from multiple individual animals simultaneously and to complete a study using a large number of mice. These results showed that sensitization and challenge with Af significantly increased baseline Penh and methacholine responsiveness and that budesonide significantly inhibited airway hyperreactivity to methacholine at the 2.5 mg/kg dose (Figure 1B). For the subsequent experiments presented in this paper we used this budesonide dose and confirmed its inhibitory effects on allergic airway hyperreactivity with the invasive FlexiVent® system (Figures 2C,F, 3D). These data also provided the basis for the subsequent studies on the effects of O3 on allergic airway inflammation and glucocorticoid responsiveness.

O3 Induced Airway Inflammation and Hyperreactivity and Enhanced Allergic Airway Changes in Mice Sensitized and Challenged With Af

To establish the time course of O3-induced airway inflammation, Balb/c mice were exposed to air or 3 ppm O3 for 2 h and studied at several time points afterwards (Figure 2A). Neutrophil influx into the airways peaked 12 h after O3 exposure (Figure 2B). O3-exposed mice and air-exposed controls were studied for methacholine responsiveness at the 12 h time point. O3 induced a significant increase in methacholine responsiveness compared with air-exposed controls (p < 0.01) (Figure 2C). To investigate the effects of O3 on allergic airway inflammation, Balb/c mice were sensitized and challenged with Af. In this model the acute inflammatory changes resolve by 96 h after allergen challenge. We therefore studied the mice at this time point, but also exposed them to either air or O3 12 h before (Figure 2D). BAL neutrophils and eosinophils were quantitated by FACS analysis (the gating strategy is shown in Figure 3B). O3 exposure markedly enhanced the numbers of eosinophils and neutrophils in the airways of Af-sensitized and challenged mice in comparison with air exposure (p < 0.01; Figure 2E). In addition, lung resistance to methacholine challenge was significantly amplified in the O3-exposed animals (p < 0.001; Figure 2F).
These results confirm our previous findings (39) and strongly indicate that O3 induces airway hyperreactivity on its own and that it enhances airway changes in allergen-sensitized and challenged animals.

The Inhibitory Effects of Budesonide on Af-Induced Airway Inflammation Were Attenuated, and on Airway Hyperreactivity Were Completely Abolished, by O3 Exposure in Sensitized and Challenged Mice

Experiments in dogs (19), our previous studies in healthy volunteers (52), and investigations in mild asthmatics (53) showed that glucocorticoid treatment inhibited O3-induced inflammation in the airways. However, how O3 alters the inhibitory effects of budesonide on allergic airway inflammation and hyperreactivity has not been documented. To test the hypothesis that O3 impairs the anti-inflammatory effects of budesonide, Balb/c mice were sensitized and challenged with Af as described and were intranasally treated with 2.5 mg/kg budesonide or vehicle. At 82 h post-Af challenge, mice were exposed to air or 3 ppm O3 for 2 h; 12 h later (96 h post-Af challenge), lung function was measured (FlexiVent®), and BAL and lungs were harvested (Figure 3A). BAL eosinophils (live Siglec-F+CD11c− cells) and neutrophils (live Ly6G+CD11b+ cells) and lung eosinophils (live CD45+CD11c−Siglec-F+) and neutrophils (live CD45+CD11c−Ly6G+) were analyzed by FACS from single cell suspensions as shown in Figure 3B. Budesonide significantly suppressed eosinophil (not neutrophil) numbers both in the BAL and the lung in air-exposed but not in O3-exposed mice (Figure 3C). Strikingly, the inhibition of lung resistance (upper panel) and tissue damping (lower panel) by budesonide seen in air-exposed mice (gray plain squares) was completely abolished in O3-exposed mice (gray hatched squares, p < 0.001, Figure 3D). These data indicated that O3 exposure attenuated the inhibitory effects of budesonide on Af-induced airway inflammation and completely abolished them on airway hyperreactivity in sensitized and challenged mice. Since O3 mitigated the inhibitory effect of budesonide on airway eosinophilia and airway hyperreactivity, we wanted to investigate the underlying mediator profile. We measured BAL CCL11 (eotaxin, an eosinophil chemoattractant), IL-23p19, IL-6, and CXCL2 (pro-neutrophilic mediators), CCL20 (a lymphocyte chemoattractant), and IL-13 (known to prime smooth muscle cells for airway hyperreactivity). O3 strongly induced BAL IL-6, CXCL2, and CCL20 in a budesonide-independent manner (Figure 4). Meanwhile, budesonide significantly reduced BAL CCL11 and showed a trend toward reduction of IL-13 (p = 0.07) and IL-23p19 (p = 0.05) in the BAL of air-exposed, but not O3-exposed, mice (Figure 4, lower panels). These data suggested that O3 induced pro-neutrophilic factors regardless of budesonide treatment, and that the suppressive effects of budesonide on eosinophilia and on airway hyperreactivity-inducing factors were attenuated by O3.

O3 Caused SP-D De-oligomerization and Inhibited Budesonide-Induced SP-D Expression in the BAL

We and others previously showed that SP-D plays an important role in suppressing proinflammatory mediator release in allergen- or O3-induced airway inflammation (28-30, 54) and that production of SP-D required the presence of glucocorticoids in airway epithelium (34-37).
Further, we found that O3-induced airway inflammation in allergen-challenged mice resulted in abnormal oligomeric molecular forms of SP-D, indicating that oxidative damage can cause conformational changes with a potential inactivation of SP-D's immunoprotective function (28,32,33). Here we wanted to investigate how the combination of allergen with O3 exposure would alter the glucocorticoid effects on SP-D expression. Assessment of total BAL protein levels showed that these had returned to normal by 96 h after Af challenge, indicating inflammatory resolution in air-exposed mice. In O3-exposed mice, however, BAL protein levels were significantly elevated, indicating acute inflammation that was not prevented by budesonide treatment (Figure 5A). As expected on the basis of previous investigations (34-37), budesonide significantly increased BAL SP-D in air-exposed mice. Importantly, this budesonide effect on SP-D expression was lost in O3-exposed mice (Figure 5B). By native gel electrophoresis, structurally intact SP-D was found at the top of the gel and did not separate from the well, while de-oligomerized SP-D was resolved as a smear (Figure 5C). Native SP-D density was not statistically different between the groups studied (Figure 5C). O3 caused de-oligomerization of SP-D in the BAL of mice sensitized and challenged with Af. This change was prevented by budesonide treatment (gray hatched bar, Figure 5D). Our data suggested that budesonide induction of SP-D is inhibited by O3. We speculate that in budesonide-treated mice SP-D was indirectly protected from de-oligomerization, possibly as a result of inhibition of eosinophils (the main source of iNOS and nitric oxide) in the lungs of mice (Figure 3C).

Time-Dependent Effects of O3 on Budesonide-Induced sftpd mRNA in A549 Cells in vitro

We previously showed that IL-6 directly induced SP-D in type II alveolar epithelial cell cultures (28). This is interesting because O3, while strongly inducing IL-6 (Figure 4), did not increase SP-D 12 h after exposure and in fact prevented the stimulatory effects of budesonide on SP-D release in the airways of mice (Figures 5A,B). To better understand the mechanisms by which budesonide and O3 regulate SP-D expression we used A549 cells, a readily available human type II alveolar epithelial cell line that models functions such as expression of the SP-D gene (sftpd, Figure 6A). To confirm our findings, the budesonide effects were then recapitulated in primary human type II alveolar epithelial cells (hAECII, Figure 7). O3 exposure of A549 cells inhibited sftpd expression 1.5 h later, but by 48 h post-exposure this effect was reversed into an induction of the sftpd gene (Figure 6B). Budesonide induced sftpd mRNA in A549 cells exposed to air. O3 completely prevented the budesonide induction of the sftpd gene 1.5 h later. However, by 48 h O3 and budesonide synergistically increased sftpd mRNA (Figure 6C). These results are in line with our previous in vivo study in Balb/c mice (28) and suggest that O3 acts on sftpd transcription in a bi-phasic manner, with early-phase inhibition (<12 h) and late-phase activation (>48 h). Based on these and our previous findings on IL-6, we speculated that budesonide and O3 may interact on a common signaling pathway involved in SP-D transcription in airway epithelial cells.
Glucocorticoid Receptor-Induced sftpd mRNA Transcription Is Facilitated by STAT3/6 Binding

The proximal promoter region of the human SP-D gene (sftpd) has binding elements for C/EBP, NFAT, AP1, HNF-3, and STAT3/6 that all contribute to transcription of SP-D (Figure 7A) (55). Dexamethasone induced lung SP-D in mice at the level of transcription in the absence of a full glucocorticoid response element in the proximal promoter region of sftpd. Zhang and colleagues previously reported that STAT3 can act as a co-activator in glucocorticoid receptor signaling (56). To test whether the glucocorticoid receptor works in concert with the STAT3/6 binding element to induce SP-D, we studied primary hAECII cells using a specific inhibitor of STAT3 (cucurbitacin I, Cu I) (57) and of the glucocorticoid receptor (RU486). We treated human primary type II alveolar epithelial (hAECII) cells with budesonide and dexamethasone in vitro and studied SP-D mRNA (qPCR for sftpd) and protein (western blot for SP-D) expression (Figure 7B). We used dexamethasone in the in vitro experiments as a positive control because it is a well-characterized glucocorticoid that induces SP-D (35). Indeed, dexamethasone induced sftpd expression in human primary type II alveolar epithelial cells, and this induction was abolished in the presence of Cu I or RU486 (Figure 7C). Similarly, Cu I and RU486 inhibited dexamethasone- and budesonide-induced SP-D protein in hAECII cells (Figures 7D-F). Optical density analysis by Image J confirmed that antagonism of the glucocorticoid receptor or STAT3 impaired dexamethasone-induced SP-D (Figures 7D,E) and that budesonide induced SP-D in a glucocorticoid receptor-dependent manner (Figure 7F). These data suggested that glucocorticoid-induced SP-D synthesis was dependent on glucocorticoid receptor and STAT3 activation.

DISCUSSION

We report here the effects of O3 on intranasal budesonide treatment in allergic airway inflammation and hyperreactivity, implicate alterations in SP-D expression in the O3-induced airway changes, and propose the involvement of STAT3 in glucocorticoid signaling during sftpd transcription. Our study highlights the significance of air pollution in the regulation of respiratory immunity and treatment responsiveness in asthma. Inhaled glucocorticoids are currently the main choice for asthma treatment because they can profoundly improve lung function and alleviate airway inflammation and airway hyperreactivity (1, 48, 49), but their effectiveness in acute asthma exacerbations is the subject of ongoing debate (49-51, 58-60). Studies on asthma exacerbations caused by exposure to air pollutants are limited (61,62), and the available experimental data in animals (18-20) and humans (21-23) are unclear on whether inhaled corticosteroids are effective in treating O3-induced airway inflammation and/or airway hyperreactivity in asthma. We therefore wanted to further investigate the effects of budesonide on O3-induced exacerbation of allergic airway changes. We found that in the Af sensitization and challenge model, airway hyperreactivity to methacholine was inhibited by budesonide at the 2.5 mg/kg dose. To mimic asthma exacerbation, we sensitized and challenged Balb/c mice with Af, waited 4 days for the acute inflammation to subside, and then exposed the mice to O3.
Our results show that O3 exposure induced airway hyperreactivity on its own and significantly enhanced lung resistance to methacholine and the numbers of eosinophils and neutrophils in the airways of Af-sensitized and challenged mice, confirming previous findings (39). To test the hypothesis that O3 impairs the anti-inflammatory effects of budesonide, mice were intranasally treated with 2.5 mg/kg budesonide or vehicle. Budesonide significantly suppressed eosinophil (not neutrophil) numbers both in the BAL and the lung in air-exposed but not in O3-exposed mice. Strikingly, the inhibition of lung resistance by budesonide (seen in air-exposed mice) was completely abolished by O3 exposure. In various experimental conditions budesonide was previously shown to inhibit mediators relevant to O3-induced airway changes such as IL-6 (63), CCL11 (64), CXCL2 mRNA in the lung (65), and IL-13-induced ex vivo airway hyperreactivity (66). CCL20, by contrast, was actually stimulated by budesonide in asthmatic airway epithelial cells (67), and there are no data in the literature on the effects of budesonide on IL-23p19. O3 upregulated BAL IL-6, CXCL2, and CCL20 in a budesonide-resistant manner and reversed the inhibitory effects of budesonide on CCL11, IL-13, and IL-23p19 expression in mice sensitized and challenged with Af. Induction of CCL20 by O3 is interesting because CCL20 was thought to be responsible for recruitment of neutrophils into the airways, conveying budesonide resistance (67). The role of CCL20 in O3-induced resistance to the budesonide effects, however, needs further confirmation. The reduction seen in BAL CCL11 and IL-23p19 of the budesonide-treated, air-exposed animals corresponded with decreased BAL eosinophil and neutrophil counts, while reduced IL-13 matched the observed inhibition of airway hyperreactivity in the same animals. Since O3 exposure prevented these budesonide effects, it is possible that these mediators are directly involved in the immunologic and physiologic response to combined Af and O3 exposure. On the other hand, O3-induced IL-6, CXCL2, and CCL20 were not altered by budesonide treatment and thus may be implicated in the observed neutrophilic inflammation caused by O3 under allergic conditions. Our results are significant because they reproduce a glucocorticoid-resistant airway inflammation and the hallmark characteristics of severe neutrophilic asthma exacerbation (68,69). SP-D plays an important role in suppressing proinflammatory mediator release in allergen- or O3-induced airway inflammation (29,30,54). Levels of SP-D expression in the lung correlate with disease severity in asthma (70,71). Therapeutics that boost SP-D expression are thought to improve asthma symptoms (70-72). Indeed, production of SP-D requires the presence of glucocorticoids in airway epithelium (34-37). Our previous work showed that O3 exposure induced the expression of SP-D in the BAL >48 h later, as a protective mechanism (28,33,39), but O3-induced airway inflammation in allergen-challenged mice also led to the appearance of abnormal oligomeric molecular forms of SP-D, indicating that oxidative stress can cause conformational changes that can inactivate SP-D's immunoprotective function (28,32,33). Such de-oligomerization was due to S-nitrosylation of the SH groups responsible for holding the dodecameric SP-D together (43,44,73).
S-nitrosylation of SP-D requires NO, which results from the increased iNOS activity of the large numbers of activated inflammatory cells, particularly eosinophils, in the allergen- and O3-exposed lung (33,43,44,73). Here we wanted to know whether the combination of allergen with O3 exposure would alter the glucocorticoid effects on SP-D expression and whether budesonide treatment would affect O3-induced SP-D de-oligomerization. As expected on the basis of previous investigations (34-37, 72), budesonide significantly increased BAL SP-D in air-exposed mice. Importantly, this budesonide effect on SP-D expression was lost in O3-exposed mice. In addition, O3 caused de-oligomerization of SP-D in the BAL of mice sensitized and challenged with Af. This change was prevented by budesonide treatment. We speculate that in budesonide-treated mice SP-D was indirectly protected from de-oligomerization, possibly as a result of inhibition of eosinophils (the main source of iNOS and nitric oxide) in the lungs of mice. Taken together, our data suggested that budesonide induction of SP-D is inhibited by O3, revealing a novel mechanism by which O3 antagonizes the therapeutic benefits of this inhaled glucocorticoid. We propose that budesonide enhances SP-D expression, thereby amplifying its local therapeutic effects in asthma. We previously showed that IL-6 directly induced SP-D in type II alveolar epithelial cell cultures (28). This is interesting because O3, while strongly inducing IL-6, did not increase SP-D 12 h after exposure and in fact prevented the stimulatory effects of budesonide on SP-D release in the airways of mice. To better understand the mechanisms by which budesonide and O3 regulate SP-D expression we used A549 cells, a readily available human type II alveolar epithelial cell line that models functions such as expression of the SP-D gene (sftpd). To confirm our findings, the budesonide effects were then recapitulated in primary human type II alveolar epithelial cells. O3 exposure of A549 cells inhibited sftpd expression 1.5 h later, but by 48 h post-exposure this effect was reversed into an induction of the sftpd gene. Budesonide induced sftpd mRNA in A549 cells exposed to air. O3 completely prevented the budesonide induction of the sftpd gene 1.5 h later. However, by 48 h O3 and budesonide synergistically increased sftpd mRNA. These results are in line with our previous in vivo study in Balb/c mice (28) and suggest that O3 acts on sftpd transcription in a bi-phasic manner, with early-phase inhibition (<12 h) and late-phase activation (>48 h). Based on these and our previous findings on IL-6, we speculated that budesonide and O3 may interact on a common signaling pathway involved in SP-D transcription in airway epithelial cells. Two groups independently established that glucocorticoids induce SP-D mRNA and protein in vitro and in vivo (34,35). These pioneering studies showed that hydrocortisone and dexamethasone stimulated both sftpd mRNA and SP-D protein in vitro and in vivo in the fetal rat lung. Since the proximal promoter region of the SP-D gene does not contain complete binding elements for the glucocorticoid receptor, it was hypothesized that glucocorticoids either induce expression of the sftpd gene indirectly or work in concert with other binding elements. The proximal promoter region of the human SP-D gene has binding elements for C/EBP, NFAT, AP1, HNF-3, and STAT3/6 that all contribute to transcription of SP-D (55). Interestingly, Zhang et al.
reported that STAT3 (an IL-6-responsive transcription factor) can act as a co-activator in glucocorticoid receptor signaling (56), and H2O2 treatment directly phosphorylates STAT3 in airway epithelial cells (38). We tested the role of STAT3 and the glucocorticoid receptor in SP-D mRNA (sftpd) and protein expression. Dexamethasone-induced sftpd expression in human primary type II alveolar epithelial cells was abolished by blockade of either the glucocorticoid receptor or STAT3. We established here that dexamethasone induced sftpd mRNA and SP-D protein via the glucocorticoid receptor and, critically, STAT3. Recent evidence suggests that O3-induced glucocorticoid insensitivity involves p38 MAPK, MKP-1, and IL-17A. Inhibition of p38 MAPK prevented the loss of the inhibitory effects of dexamethasone on O3-stimulated inflammation and IL-17A (18), and inhibition of IL-17A reduced dexamethasone insensitivity in a mouse model of chronic O3 exposure (74). Here we showed for the first time that STAT3 is involved in glucocorticoid-induced SP-D synthesis. Cooperation between the glucocorticoid receptor and STAT3 may be crucial for SP-D synthesis in airway epithelial cells. There are likely many pathways that contribute to BAL SP-D levels in vivo, including but not limited to budesonide treatment, O3 exposure, and BAL IL-6 expression. Since glucocorticoids are known to have numerous side effects, and since patients can become refractory after chronic administration, novel asthma therapeutics to induce SP-D may seek to directly activate STAT3 signaling (5). While prior work suggested that O3 may impair the effectiveness of budesonide, here we studied the potential role of SP-D in this pathway. We propose a novel SP-D-mediated mechanism for the anti-inflammatory and functional effects of budesonide on the lung. A better understanding of how air pollutants such as O3 might affect asthma treatment will lead to improved therapeutic approaches.

ETHICS STATEMENT

This study was carried out in accordance with the recommendations of the University of California, Davis and University of Pennsylvania Institutional Animal Care and Use Committees. The protocol was approved by the University of California, Davis, and University of Pennsylvania Institutional Animal Care and Use Committees.

Conflict of Interest Statement: The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Photochromic textile materials

Smart textiles are materials that can perceive and respond to changes in environmental conditions. Photochromic textiles, one of the smart textile products, can reversibly change color under UV irradiation. Photochromic textiles have become more attractive with the increasing interest in functional textile materials. This paper reviews the applications of photochromic dyes in the textile industry, the problems encountered during processing, and possible approaches to solving these problems.

Introduction

People's expectations of textile materials are changing with the development of technology, and accordingly, the functional properties of textile materials have become important alongside aesthetics, design, and suitability to fashion. In this context, functional textile products increase the competitive power of the companies in the sector. Chromic materials, which can change color in response to external factors, can be used both to obtain smart textiles and to create fashion effects. Chromism is a general term used for color-changing materials and is also used as a suffix. Chromic materials are also called "chameleon materials" due to their color-changing properties triggered by an external stimulus, and there are various types of chromism according to the external stimulus. Chromism has been explored since the 1900s, and its main applications are in the fields of photochromism, thermochromism, and electrochromism, in products such as dyes, cosmetics, plastics, and many optical applications. Of these types, photochromism and thermochromism are the ones generally used in the textile industry.

2. Photochromism and its applications

Photochromic materials change from colorless when indoors to colored when outdoors. Specifically, photochromics change color in response to UV light; when the UV light source is removed, their color returns to its original state (Figure 1). The most widely used class of photochromic dyes, among the many different classes, exhibits photochromism based on pericyclic reactions. These compounds, which show the photochromic effect through ring-opening/closing reactions, are divided into five groups: spiropyrans, spirooxazines, naphthopyrans, diarylethenes, and fulgides [2,3]. Photochromic eyeglasses are the best-known photochromic products. These lenses darken as the intensity of UV light increases, and thus the amount of light passing through the glass is reduced (Figure 2). Photochromic dyes can also be used in window glass; as with photochromic eyeglasses, the glass darkens with increasing UV intensity and reduces the sunlight entering the building [4]. The use of photochromic dyes in the optical industry is widespread, and applications of these dyes also exist in the plastics and cosmetics industries. In addition, one of the commercial uses of these dyes is printing inks, which can be transferred to materials such as cloth or paper by different processes, such as screen printing, flexography, and dry offset, to achieve different effects [4]. The use of photochromic dyes in the textile field dates back to the 1990s, and today there are some examples of commercial use [4,6,7].
Photochromic dyes can be added to the polymer matrix during the production of synthetic fibers, and in this way photochromic yarns can be obtained (Figure 3). Photochromic dyes are also used by the fashion industry to obtain different effects, such as photochromic t-shirts produced by printing (Figure 4) [7]. Photochromic textile materials can be used as UV sensors, changing color depending on the amount of UV light in the environment; the person using the material is thus warned that UV protection is required [4]. Photochromic dyes have also been used in nanofiber production to obtain functional nanofibers. These dyes are incorporated into the polymer solution, and the resulting nanofiber surface shows a photochromic effect. The sensitivity of the photochromic dyes increases and the time needed to respond to UV irradiation decreases, owing to the large surface area of the nanofibers. Photochromic nanofibers could find applications in fields such as optical data storage devices, optical sensors, processing media, and functional components for smart surfaces [2].

The problems and solutions in the textile applications of photochromic dyes

In studies on photochromic textiles, several problems have been encountered due to the sensitive structure of these dyes and their low water solubility [8-13]. Technologies such as encapsulation, sol-gel processing, and electrospinning can be used to solve the problems in the textile applications of photochromic dyes [14-16]. The aims of these technologies are to provide a homogeneous distribution of the photochromic dyes in the solvent or to carry the dyes in a polymer matrix. Thus, the use of photochromic dyes in textile materials can be improved with these alternative methods. Encapsulation is the coating of a core material with a shell material [17]. The encapsulation process provides advantages such as protecting the core from atmospheric conditions, increasing stability, improving processability, and extending the shelf life of the core material. Among the many different encapsulation methods, in-situ polymerization, interfacial polymerization, emulsion-solvent evaporation, and spray drying have generally been used to encapsulate photochromic dyes [15, 18-23]. Photochromic dyes based on spirooxazine, naphthopyran, and diarylethene were used as core materials, and polymers such as ethyl cellulose, polystyrene, polyurethane, polymethyl methacrylate, melamine, and chitosan were used as shell materials in these studies. Sol-gel technology can be used as another application method for photochromic textiles [16,24,25]. A sol is a stabilized suspension of colloidal solid particles in a liquid, and a gel refers to a network structure with a form between solid and liquid. Photochromic dyes can be applied to textile materials with the sol-gel method, which consists of application, drying, and condensation steps. However, photochromic dyes are sensitive to high temperature, and therefore the curing temperature of photochromic sol-gel matrices is limited [26]. Photochromic dyes exhibit ring-opening/closing reactions when exposed to UV light, and the color-change reactions may eventually fail due to dye fatigue. Spiropyrans have relatively low fatigue resistance, and spirooxazines and naphthopyrans, which have higher fatigue resistance, have therefore become more important than spiropyrans. The use of different stabilizers, such as hindered amine light stabilizers (HALS), antioxidants, and UV absorbers,
can also improve the fatigue resistance of photochromic dyes [27].

Conclusion

Competition in the textile industry is increasing, and the production of high-value-added materials is gaining importance. Smart textiles are high-value-added and therefore highly competitive materials. Photochromic materials have become one of the remarkable products in this area, but the problems experienced in their application restrict the use of these dyes in the textile industry. Many different application methods have been studied to solve these problems. In this context, the use of photochromic dyes in the textile industry is expected to remain of interest as these application problems are solved.
Blood test result communication in primary care: mixed-methods systematic review protocol

Background After testing, ensuring test results are communicated and actioned is important for patient safety, with failure or delay in diagnosis the most common cause of malpractice claims in primary care worldwide. Identifying interventions to improve test communication from the decision to test through to sharing of results has important implications for patient safety, GP workload, and patient engagement. Aim To assess the factors around communication of blood test results between primary care providers (for example, GPs, nurses, reception staff) and their patients and carers. Design & setting A mixed-methods systematic review including primary studies involving communication of blood test results in primary care. Method The review will use a segregated convergent synthesis method. Qualitative information will be synthesised using a meta-aggregative approach, and quantitative data will be meta-analysed or synthesised if pooling of studies is appropriate and data are available. If not, data will be presented in tabular and descriptive summary form. Conclusion This review has the potential to provide conclusions about blood test result communication interventions and factors important to stakeholders, including barriers and facilitators to improved communication.

Introduction

Blood tests are important for diagnosis and monitoring, but tests in themselves do not make people better unless actions based on the test result lead to a change in patient management or to reassurance. Both are dependent on test communication, which requires clear systems and processes for information sharing before and after blood testing. 4 Safe and efficient systems of test result communication are important in the current context of rising primary care workload, 5 with the average GP estimated to spend 1.5-2 hours per day reviewing and actioning test results. 6 Recent advances in IT systems, such as text messaging or online patient access to results, offer potential to improve test result communication, and NHS case studies have suggested these may reduce primary care workload, 7 but evidence to back up these claims is lacking. In England, the NHS has started rolling out online access to blood test results to patients by default, via the NHS App and other online services. This is part of a wider move towards transparency and openness in health care; however, mixed-methods studies have found that patients face challenges and confusion when viewing their test results, 8 which are currently not presented in a patient-centred way. The James Lind Alliance identified as a priority the need to provide information in patient medical records in a way that improves safety and quality of care. 9 The study protocol has been registered on PROSPERO (registration number CRD42023427433) and is reported according to the PRISMA-P guidelines. 10

Aims

To assess the factors around communication of blood test results between primary care providers (for example, GPs, nurses, reception staff) and their patients and carers.
Objectives

This study aims to answer the following research questions:

1. What interventions can be used to improve communication of blood test results to patients and carers in primary care?
2. What are patients' and carers' needs and preferences for blood test result communication?
3. What are the needs and preferences of primary care staff and providers when communicating blood test results?
4. What are the barriers and facilitators to successful communication of blood test results?

Eligibility criteria

Primary studies of any design, except case studies, that provide information on the communication of blood test results by primary care staff (for example, doctors, nurses, physiotherapists, receptionists) and providers (for example, primary care practices, primary care networks, medical health insurance providers) to adult patients and carers will be eligible for inclusion. As 'primary care' varies across the world and has no single agreed definition, 11 the authors will include all studies except those on emergency, urgent, or acute care or where the participants were inpatients. This will allow the authors to review all available evidence relevant to primary care provision. If the number of included studies is unfeasible using these criteria, then the authors will restrict this to studies where a primary care population was described, and the criteria applied will be reported in the full review report.

The authors define 'communication of blood test results' as any communication from the time of agreeing to order a test onwards, including what to expect from the results; when to expect the results; conveying the test results; how to interpret and understand the results; and conveying and understanding the next steps. This includes the systems within primary care that are aimed at ensuring communication of blood test results to patients and carers takes place. The authors will include studies where artificially generated data or hypothetical scenarios were used. Figure 1 shows the review's inclusion and exclusion flowchart. Studies that focus exclusively on point-of-care tests (this includes self-administered tests, such as blood glucose testing), genetic tests, or communication of test results from laboratories to primary care, and studies of blood tests exclusively in children, will also be excluded.

Method

Search strategy

The authors will use an iterative and flexible approach to the searches, to find both quantitative and qualitative research. The authors will search the following databases: Medline (Ovid), Embase (Ovid), PsycINFO (Ovid), CINAHL (EBSCOhost), and the Cochrane Library for primary studies. They will also hand search the reference lists of eligible full texts. In addition, the authors will search the grey literature, including NHS websites, and will contact experts in the field, as they anticipate that some relevant literature will be published as quality improvement reports rather than formal research. The search will be restricted to 2013 onwards to keep the review relevant, as communication knowledge, interventions, and technologies develop rapidly. Many of the key search terms are generic, frequently occurring words in the biomedical and healthcare literature; therefore, a set of known studies (gathered from experts in the field and informal scoping searches) will be used to help seed the full search strategy. Elements of cluster searching and key pearl citations to find other 'kinship' studies will be used. 12
The main complementary search techniques will be citation searching (forwards and backwards), following lead authors, and related projects. These studies will be used to help create a list of the most relevant keywords and database subject headings. The initial MEDLINE search created using the methods outlined above (see Appendix 1) will be adapted and optimised for use in the other search databases, taking into account their size, functionality, and subject coverage. Searches will be amended and rerun if the authors believe it will be beneficial to do so, after identifying any new search terms throughout the process of screening and study selection.

Study selection

EndNote (version 20) will be used to save and de-duplicate search results, and the total number of results before and after de-duplication for each database searched will be recorded. Two reviewers will independently screen titles and abstracts identified by the searches using Rayyan. 13 Full copies of all reports considered potentially relevant will be obtained, and two reviewers will independently assess these full texts for inclusion. Any disagreements will be resolved by consensus or discussion with a third reviewer. Any studies excluded at this stage will be recorded with the reason for exclusion.

Data extraction

Data will be extracted using forms, which will be jointly developed by the team, piloted on a small sample of studies, and adapted iteratively as necessary. One reviewer will extract data, and these will be checked by a second reviewer. Study authors will be contacted for clarification where data needed for the review are not available in the report. Interventions will be subdivided according to a framework developed by Singh et al 14 into interventions targeting: 1) interpersonal communication, defined as the verbal exchange of information between primary healthcare staff and patients; and 2) informational communication, defined as written instructions or laboratory values shared by text message, email, online portal, or printed text or leaflets. The authors anticipate that the data extraction form will include (but not be limited to) the following study characteristics:

• geographical location of study population,
• healthcare system(s) the study is conducted in,
• study aims,
• sample size,
• study design,
• details of test(s) studied,
• mode of delivery of test results,
• type of intervention studied,
• barriers and facilitators to test communication,
• outcomes reported,
• results of study, where applicable.

Quality assessment of included studies

The authors anticipate that the included studies will be of different designs. Risk of bias will be assessed for quantitative studies that aim to assess the benefits and harms of an intervention, and for the quantitative component of mixed-methods studies, using the RoB 2 tool 15 for randomised controlled trials and the ROBINS-I tool 16 for non-randomised studies of interventions. The ROBINS-E tool 17 will be used for non-randomised studies of exposure. These tools were chosen for consistency of toolset when assessing risk of bias. Qualitative studies and the qualitative component of mixed-methods studies will be assessed using the JBI tool for qualitative studies. 18
Where the study design does not fit into the designs detailed above, it will be discussed as a team and a suitable quality assessment tool chosen on a case-by-case basis, if available. Where a suitable tool does not exist, the authors will highlight the key strengths and weaknesses. All eligible studies will be included, regardless of the results of the quality assessments, and the information on quality will be used when drawing conclusions about the evidence.

Data synthesis and integration

This synthesis will be conducted following the guidance of the JBI for mixed-methods systematic reviews, using a segregated convergent approach. 19 In this approach, quantitative and qualitative evidence are first evaluated separately using a segregated approach to synthesis, followed by (if appropriate, based on the data) a mixed-methods convergent synthesis to combine the quantitative and qualitative evidence. This approach, as opposed to a convergent integrated approach, has been chosen because the authors do not know in advance whether both qualitative and quantitative evidence can be used to answer each of their review questions. If it is not appropriate to conduct the mixed-methods convergent synthesis of the evidence, then the authors will summarise the results of the two segregated syntheses in this review, or they may choose to report the two syntheses as separate reviews. The authors will report and explore care setting, healthcare system, mode of delivery of test results, and outcomes, where this information is available. They will report any gaps in the evidence on blood test communication where they identify them from synthesising and integrating the evidence.

Synthesis of qualitative evidence

The authors will use the meta-aggregative approach to qualitative synthesis, following the JBI guidance. 20,21 This pragmatic method involves extracting study findings, often as a direct quote, then creating categories of findings and, if possible, pooling the categories of findings into synthesised findings. Synthesised findings aim to convey the overall meaning of the categorised findings via statements. This approach is a good fit for systematic reviewing and will enable the authors to produce generalisable recommendation statements aimed at guiding practitioners and policy makers without seeking to re-interpret the primary studies' findings. The meta-aggregative approach uses only the primary study authors' findings in the aggregation of information from the studies. The authors will also look at the primary study evidence to identify any themes that relate to the research questions that have not been dealt with in the authors' findings. Using a thematic approach 22 in addition, if necessary, ensures that the authors will be able to use all relevant information available.
Synthesis of quantitative evidence

If the authors find two or more studies on the same intervention with the same outcomes, they will conduct meta-analyses to estimate summary measures of effect. They will calculate the mean difference between groups with 95% confidence intervals for continuous outcome data and the relative risk with 95% confidence intervals for dichotomous outcomes, where possible. The intention for this review is to generalise the results of any meta-analyses beyond the studies included; therefore, a random effects model will be the default choice of statistical model. The authors will consider applying a fixed effects model if only five or fewer studies can be included in a meta-analysis and/or statistical and study-characteristic heterogeneity (for example, population, setting, proposed mechanism of action of the interventions) is low, suggesting a common underlying effect. 23 The authors will assess the heterogeneity of studies using the X2 and I2 statistics. Prior to conducting a meta-analysis, they will create a detailed analysis plan. The authors anticipate that the included studies are highly unlikely to test the same interventions and use the same outcomes, making meta-analyses inappropriate. Where meta-analysis is inappropriate or not possible, they will synthesise the findings using the following narrative synthesis methods. Included studies will be grouped by intervention type within summary tables. Where they find studies reporting effect estimates, they will present these using the median, quartiles, and range, if possible, in the tabular summary and as bubble plots (small numbers) or box-and-whisker plots (larger numbers). Where studies do not report effect estimates, they will report the results presented in the tables and summarise these descriptively. 24 They will report these narrative syntheses using the SWiM guidance, 25 which promotes transparent reporting of narrative synthesis methods using nine key reporting items.

Mixed-methods (convergent) synthesis

The authors will attempt to integrate the separate qualitative and quantitative evidence syntheses using juxtaposition and organisation to answer the review's research questions in a configured analysis. If configuration is not appropriate, then the findings of the qualitative and quantitative evidence will be reported separately, using tabular and descriptive summaries to address each research objective of this review. The authors will 'qualitise' the quantitative evidence, which involves translating it into textual descriptions that can be integrated with qualitative data. This is less prone to error than 'quantising' qualitative evidence by ascribing numbers to the descriptions and themes found. 19

Patient and public involvement

Patient and public involvement participants are co-authors on this protocol and will be involved as co-authors for the full review. A patient and public involvement group has provided input into the design of the protocol and will be consulted for the review on: the emerging results; the dissemination strategy; the review findings; and future research plans, in order to address gaps in the current literature.

[Figure 1. Study inclusion and exclusion flowchart]
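As a concrete illustration of the effect-measure arithmetic planned under 'Synthesis of quantitative evidence' above, here is a minimal Python sketch of a relative risk with its 95% confidence interval, and of Cochran's Q and the I2 statistic across study-level estimates. The event counts and standard errors are hypothetical, not review data:

```python
import math

def relative_risk(a, n1, c, n2):
    """Relative risk and 95% CI for a/n1 events (intervention) vs c/n2 (control)."""
    rr = (a / n1) / (c / n2)
    se_log_rr = math.sqrt(1 / a - 1 / n1 + 1 / c - 1 / n2)
    lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
    hi = math.exp(math.log(rr) + 1.96 * se_log_rr)
    return rr, lo, hi

def heterogeneity(log_effects, ses):
    """Cochran's Q and the I2 statistic from study-level log effects and SEs."""
    weights = [1.0 / s ** 2 for s in ses]
    pooled = sum(w * e for w, e in zip(weights, log_effects)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, log_effects))
    df = len(log_effects) - 1
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    return q, i2

# Hypothetical data: two studies of a results-communication intervention,
# with "failure to action a result" as the dichotomous outcome.
study1 = relative_risk(12, 100, 24, 100)
study2 = relative_risk(30, 200, 45, 210)
print(f"Study 1: RR = {study1[0]:.2f} (95% CI {study1[1]:.2f} to {study1[2]:.2f})")
print(f"Study 2: RR = {study2[0]:.2f} (95% CI {study2[1]:.2f} to {study2[2]:.2f})")

q, i2 = heterogeneity([math.log(study1[0]), math.log(study2[0])], [0.35, 0.25])
print(f"Cochran's Q = {q:.2f}, I2 = {i2:.0f}%")
```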
Stereotactic body radiotherapy versus conventional/moderate fractionated radiation therapy with androgen deprivation therapy for unfavorable risk prostate cancer

Background Ultrahypofractionation using stereotactic body radiotherapy (SBRT) is an increasingly utilized technique for men with prostate cancer (PC). The comparative efficacy of SBRT plus androgen deprivation therapy (ADT) versus fractionated radiotherapy (EBRT) plus ADT in higher-risk prostate cancer is unknown. Methods Men > 40 years old with localized PC treated with external beam radiation and concomitant ADT with curative intent between 2004 and 2016 were analyzed from the National Cancer Database. Patients who lacked ADT or risk stratification data were excluded. In total, 558 men treated with SBRT and 40,797 men treated with conventional or moderately hypofractionated EBRT were included. Patients were stratified into unfavorable intermediate (UIR) and high (HIR) risk groups using NCCN criteria. Kaplan-Meier and Cox proportional hazards analyses were used to compare overall survival (OS) between RT modalities, adjusting for age, race, and comorbidity index. Results With a median follow-up of 74 months, there was no difference in estimated 6-year OS between men treated with SBRT versus EBRT regardless of risk group. On multivariable analysis, there was no difference in risk of death for men treated with SBRT compared to EBRT (UIR: adjusted HR 1.09, 95% CI 0.68-1.74, p = .72; HIR: adjusted HR 0.93, 95% CI 0.76-1.14, p = .51). On sensitivity analyses confining the cohort to men treated with NCCN-preferred dose fractionations, with no comorbidities, or < 65 years old, there remained no survival difference between treatment groups for both UIR and HIR disease. Conclusion Within study limitations, we found no difference in survival between SBRT+ADT and standard of care EBRT+ADT for UIR or HIR PC. These results support recent NCCN guideline updates, which include SBRT as a non-preferred option for higher risk men, suggesting potential use in clinical settings in the future. Prospective validation would further strengthen the evidence basis behind these recommendations.

Introduction

Hypofractionated radiation therapy for prostate cancer is an appealing and increasingly adopted approach that has advantages from radiobiologic, cost, and patient convenience standpoints [1-4]. Non-inferiority phase 3 randomized trials have confirmed the safety and efficacy of moderate hypofractionation (2.5-3 Gy per fraction) compared to conventional fractionation (1.8-2 Gy per fraction) [5-7]. Furthermore, one randomized trial showed superior biochemical control with moderate hypofractionation compared to conventional fractionation [8]. Moderate hypofractionation has now been accepted as a standard of care across all risk groups and a preferred regimen in ASTRO and NCCN guidelines [9,10]. More recent randomized trials have shown that ultrahypofractionated radiation (≤7 fractions, ≥5 Gy per fraction), or stereotactic body radiation therapy (SBRT) when delivered in ≤5 fractions with image/stereotactic guidance, is non-inferior to conventional fractionation for tumor control and toxicity, and to moderate hypofractionation for toxicity [11,12]. There is increasing interest in ultrahypofractionated radiation therapy for prostate cancer to further optimize patient convenience and cost-effectiveness [13].
The ASTRO/ASCO/AUA societal guidelines do not currently recommend routine use of ultrahypofractionated radiation therapy for men with unfavorable risk prostate cancer, with a conditional recommendation against its use in men with high risk disease [9]. Since publication of those guidelines, however, the HYPO-RT-PC randomized trial [11] showed non-inferiority of ultrahypofractionation in a cohort of intermediate and high risk men. However, androgen deprivation therapy (ADT), which is standard in the United States in these men, was not permitted in that trial. Furthermore, only 11% of enrolled men on that trial had NCCN-defined high risk disease. How ultrahypofractionation plus ADT compares with conventional/moderate fractionation plus ADT in men with higher risk prostate cancer remains unknown. Herein, we examine outcomes between these two approaches in men with UIR and HIR prostate cancer who received concomitant ADT. We hypothesize that ultrahypofractionation has outcomes similar to conventional/moderate fractionation for these men.

Methods

Men > 40 years with localized prostate cancer treated with external radiation and ADT with curative intent between 2004 and 2016 were analyzed from the National Cancer Database. Patients who received brachytherapy, surgery, chemotherapy, or immunotherapy were excluded. Patients missing ADT or risk stratification data were excluded. Those who received ADT > 180 days before or after the start of radiation were excluded. Ultrahypofractionation (SBRT) was defined as 5 fractions of ≥5 Gy per fraction, and conventional/moderate fractionation (EBRT) as ≤3 Gy per fraction with a total dose ≥60 Gy. Patients were stratified by risk using NCCN criteria: unfavorable intermediate (UIR) and high (HIR). 1 ANOVA and the chi-square test were used to compare patient/demographic characteristics. The Cochran-Armitage test for trend was used to evaluate utilization of SBRT in this cohort between 2004 and 2016. Kaplan-Meier and multivariable Cox proportional hazards analyses were used to compare overall survival (OS) between those who received EBRT versus SBRT, accounting for age, race, comorbidity index, and year of diagnosis. All analyses were computed using SAS 9.4 (SAS Institute Inc., Cary, NC). Tests were 2-sided with a 0.05 level of significance. This study received IRB exemption.

Results

Forty-one thousand three hundred fifty-five men were eligible for this analysis: 40,797 treated with EBRT and 558 treated with SBRT (Table 1). Although SBRT was minimally utilized in UIR and HIR prostate cancer between 2004 and 2016, there was a significant rise in its use over this time (p for trend <.001). There was an uptick in the use of SBRT in UIR men after 2011-2012 (Supplemental Figure). A larger proportion of men in the SBRT cohort were Black, treated at an academic center, had median household incomes ≥$46,000, were treated in the Northeast and West United States, lived > 50 miles away from the treatment facility, and resided in metro/urban rather than rural areas (Table 1). The median follow-up time was 74 months. There was no difference in estimated 6-year OS between men treated with SBRT versus EBRT regardless of risk group (SBRT versus EBRT, UIR: 93.3% versus 90.9%, log-rank p = .40, Fig. 1a; HIR: 80.8% versus 80.4%, log-rank p = .21, Fig. 1b).
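A minimal Python sketch of the cohort definitions above; the dose and fraction thresholds come directly from the Methods, while the function name and example regimens are illustrative:

```python
def classify_regimen(dose_per_fraction_gy: float, n_fractions: int) -> str:
    """Assign a radiotherapy course to a study cohort.

    Per the study definitions: SBRT (ultrahypofractionation) is 5 fractions
    of >= 5 Gy per fraction; EBRT (conventional/moderate fractionation) is
    <= 3 Gy per fraction with a total dose >= 60 Gy.
    """
    total_dose = dose_per_fraction_gy * n_fractions
    if n_fractions == 5 and dose_per_fraction_gy >= 5.0:
        return "SBRT"
    if dose_per_fraction_gy <= 3.0 and total_dose >= 60.0:
        return "EBRT"
    return "excluded"

# Examples: a common SBRT course and a common conventional course.
print(classify_regimen(7.25, 5))   # SBRT (36.25 Gy in 5 fractions)
print(classify_regimen(2.0, 39))   # EBRT (78 Gy in 39 fractions)
print(classify_regimen(4.0, 10))   # excluded (neither definition met)
```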
On multivariable analysis, accounting for age, race, and comorbidity, there was no difference in mortality for men treated with SBRT compared to EBRT (UIR: adjusted HR 1.09, 95% CI 0.68-1.74, p = .72; HIR: adjusted HR 0.93, 95% CI 0.76-1.14, p = .51).

Discussion

We found no difference in survival between SBRT+ADT and standard of care EBRT+ADT for UIR or HIR PC. ASTRO/ASCO/AUA consensus guidelines, though outdated, do not recommend routine use of SBRT for higher risk prostate cancer. Conversely, recent NCCN guidelines support SBRT for UIR and HIR patients, particularly when more protracted courses may impose social or medical hardship [10]. The NCCN notes that moderate fractionation is the preferred external beam radiation therapy regimen for all risk categories. Our results reinforce the NCCN's recent decision to endorse SBRT as an option for men with higher risk prostate cancer and may motivate ASTRO to reconsider their guidelines. More widespread SBRT use in these patients may be appropriate after publication of the HYPO-RT-PC trial, which showed non-inferiority of ultrahypofractionation compared to conventional fractionation after a median follow-up of 5 years. ADT use, which is standard for these patients in the United States, was not permitted in that study. Prospective data regarding SBRT with concomitant ADT are lacking; data showing favorable outcomes with SBRT for higher risk prostate cancer, though with inconsistent ADT use, are largely retrospective [14]. Our study corroborates institutional results regarding comparable disease control and survival with SBRT compared to conventional/moderate hypofractionation. There are several potential advantages of SBRT. For one, the alpha-beta ratio of prostate cancer may be lower than that of late normal tissue reactions [15]. If true, ultrahypofractionation could increase the therapeutic ratio and thereby offer more efficacious local therapy (a worked example of this radiobiologic arithmetic follows the Conclusion below). Second, despite the use of complex immobilization, on-board imaging, and physics resources, SBRT reduces overall costs to payers and patients, at up to half the cost per allowable Medicare fee schedules, largely due to its abbreviated treatment schedule [16,17]. In an era of rising healthcare costs, as well as anticipated Alternative Payment Models with bundled fee schedules, providers will be incentivized to utilize the most cost-effective options. Finally, with fewer treatment visits, SBRT provides a more convenient treatment option for patients compared to protracted fractionation schemes. Based on recently available level one evidence published in 2019, specifically PACE-B and HYPO-RT-PC, SBRT should be more widely accepted as an appropriate regimen for PC in patients eligible for prostate +/− seminal vesicle treatment alone, regardless of risk group. This is relevant in an era of optimal locoregional imaging, namely MRI, which can help rule out high risk features that may otherwise support larger treatment margins and/or pelvic nodal irradiation. Even for patients who may require pelvic nodal treatment, however, the SATURN trial has shown safety and promising efficacy of elective nodal irradiation utilizing ultrahypofractionation [18]. For PC there is a radiobiologic advantage of ultrahypofractionation over protracted courses utilizing smaller doses per treatment, and now there is a prospective basis for its use. One concern that may limit utilization of SBRT for localized prostate cancer is toxicity.
The HYPO-RT-PC trial [11] showed higher patient-reported urinary and bowel toxicity with ultrahypofractionation, with higher urinary toxicity extending to 1 year after completion of treatment; late toxicity, however, was similar between ultrahypofractionation and conventional fractionation. PACE-B [12], on the other hand, showed non-inferior toxicity within the first 12 weeks after treatment between SBRT and conventional/moderate fractionation for favorable risk prostate cancer. The discrepancy in acute toxicity between these two studies may be due to the accrual period of each trial, with patients enrolled on HYPO-RT-PC treated between 2005 and 2015 and those enrolled on PACE-B treated between 2012 and 2018. Approximately 80% of men on HYPO-RT-PC received 3-dimensional conformal RT; advancements in treatment delivery between these two eras, including intensity modulation, may explain the discrepancy in acute toxicity findings [19]. Furthermore, a recent multi-institutional analysis of prospectively collected data from over 2000 men treated with SBRT showed very low rates of grade 3 genitourinary and gastrointestinal toxicity after 7 years of follow up [20]. Integration of a rectal spacer or balloon, as allowed in the NRG GU005 phase 3 trial, may lower toxicity even further. Whether the addition of androgen deprivation therapy, postulated to function at least partly through radiosensitization [21], may increase acute/late toxicities when delivered with SBRT is unknown and remains a subject for future analysis; however, based on the similar toxicity seen between moderate and conventional fractionation when delivered with concomitant ADT [4][5][6], this likelihood is low. This analysis has several limitations. First, given the retrospective design using a population-based database, analyses are subject to selection biases and imbalances in unmeasured variables. However, multivariable modeling was utilized to address potential confounding. Furthermore, we completed stringent sensitivity analyses confining the cohort to those treated with modern-day doses, as well as excluding older and comorbid patients, with consistent results. Second, outcome measures in the NCDB are limited to OS, so details regarding biochemical control and toxicity are unavailable. While we believe OS is the primary outcome measure likely to influence management decisions in these higher risk patients, the unavailable endpoints remain relevant because treatment decisions often weigh patient quality of life.
Conclusion
We found no difference in survival between SBRT+ADT and standard of care EBRT+ADT for UIR or HIR PC. SBRT offers a cost-effective, convenient option for prostate cancer patients in centers able to deliver safe therapy with precise, image-guided techniques. SBRT has wide guideline support for low and favorable intermediate risk prostate cancer. For UIR and HIR prostate cancer, however, there is historically low utilization and reserved support for SBRT use, with a conditional recommendation against its use by the ASTRO/ASCO/AUA task force. The HYPO-RT-PC trial provides level one support for SBRT in unfavorable intermediate and high risk prostate cancer, but ADT was not permitted in that study. How SBRT plus ADT compares against conventional/moderate fractionated EBRT plus ADT is unknown, but our results suggest that long-term outcomes may not differ. These findings are concordant with the updated NCCN guidelines, which list SBRT as an option in men with higher risk disease.
Additional file 1 Supplemental Figure. Utilization of standard or moderately hypofractionated radiation (EBRT) versus ultrahypofractionated radiation (SBRT) in men with unfavorable intermediate (a) and high (b) risk prostate cancer receiving androgen deprivation therapy.
A Potential Anticancer Mechanism of Finger Root (Boesenbergia rotunda) Extracts against a Breast Cancer Cell Line
Breast cancer was the most common type of cancer among women worldwide in 2020 and the 4th leading cause of cancer death. Boesenbergia rotunda is an herb with high potential as an anticancer agent. This study explores the potential bioactive compounds in B. rotunda as anti-breast cancer agents using in silico and in vitro approaches. The in silico study comprised active compound analysis, selection of anticancer compound candidates, prediction of target proteins, functional annotation, molecular docking, and molecular dynamics simulation, respectively. The in vitro study was conducted using toxicity, rhodamine 123, and apoptosis assays on T47D cells. Based on the KNApSAcK database, B. rotunda contains 20 metabolites, dominated by the chalcone and flavonoid groups. Seven of them were predicted to have anticancer activity, namely, sakuranetin, cardamonin, alpinetin, 2S-pinocembrin, 7,4′-dihydroxy-5-methoxyflavanone, 5,6-dehydrokawain, and pinostrobin chalcone. These compounds targeted proteins related to cancer progression pathways such as the PI3K/Akt, FOXO, JAK/STAT, and estrogen signaling pathways. Therefore, these compounds are predicted to inhibit growth and induce apoptosis of cancer cells through their interactions with MMP12, MMP13, CDK4, JAK3, VEGFR1, VEGFR2, and KCNA3. The in vitro study confirmed that B. rotunda extract is strongly cytotoxic and induces apoptosis in a breast cancer cell line. This study concludes that Boesenbergia rotunda has potential as an anticancer candidate.
Introduction
Breast cancer was the most common type of cancer in women worldwide in 2020, followed by lung, prostate, and skin cancers. This type of cancer is the 4th leading cause of death from cancer after lung, stomach, and liver cancers [1]. Breast cancer patients are mostly of reproductive age, namely, 30 to 39 years [2]. Previous studies have reported that breast cancer deaths have been increasing over the last 25 years. Thus, effort is needed to discover drugs that are effective in treating breast cancer [3]. Breast cancer is mostly caused by obesity, alcohol consumption, genetics, and age [4]. Breast cancer cells arise from alterations of specific genes that result in dysregulation of several pathways related to cell proliferation and survival [5]. Several pathways are dysregulated in breast tumor cells, including the estrogen, PI3K/Akt, and JAK/STAT signaling pathways. The estrogen signaling pathway, which plays a role in regulating cell division, is dysregulated through overactivity of estrogen receptor alpha (ERα). ERα is activated after binding to estrogen, forms a dimer, and then attaches to the estrogen response element (ERE) in DNA [6]. EREs are found in genes related to cell growth [7,8]. The PI3K/Akt signaling pathway has a crucial role in the progression of breast cancer cells because it is involved in proliferation, survival, invasion, migration, apoptosis, glucose metabolism, and DNA repair. Mutation of the PI3K protein, especially in the ER+ breast cancer subtype, causes PI3K hyperactivation [9]. Several protein tyrosine kinase receptors upstream of the PI3K/Akt pathway, such as HER2 and EGFR, are overexpressed and mutated in breast cancer cells [10,11]. The JAK/STAT signaling pathway also has an essential role in the development of breast cancer cells.
Three major proteins play a role in this pathway: the receptor tyrosine kinase, JAK (Janus kinase), and STAT (signal transducer and activator of transcription). Alteration of these proteins causes proliferation and metastasis in breast cancer cells [12]. Various therapies have been developed to inhibit the activity of these pathways and thereby prevent breast cancer progression. Today, the most common treatment for breast cancer is chemotherapy [13]. However, chemotherapy has adverse side effects such as constipation, dyspnea, fatigue, pain, rash, and vomiting; the most dangerous side effect is peripheral neuropathy [14,15]. Cisplatin is the main chemotherapy drug for treating solid tumors; however, it has side effects that cause kidney and liver damage [16]. Doxorubicin is a DNA intercalation agent that effectively inhibits tumor progression; unfortunately, it has a cardiotoxic effect [17]. Gefitinib is quite popular for cancer treatment targeting the epidermal growth factor receptor (EGFR); however, this drug has side effects such as rash, diarrhea, and even inflammation of the lower urinary tract and bladder [18]. Other chemotherapy drugs also have side effects that are no less dangerous. In addition, chemotherapy is not available in all hospitals globally because it is expensive. Therefore, agents for cancer therapy are needed from natural sources that are cheap and have minimal side effects. Boesenbergia rotunda is an herb with high potential as an anticancer agent. B. rotunda belongs to the Zingiberaceae family, which grows in Southeast Asia, India, Sri Lanka, and southern China. In these countries, B. rotunda is known as a medicinal plant that can treat various diseases [19]. Previous research reported that B. rotunda hexane extract was toxic to liver, lung, and colon cancer cell lines [20]. Other studies stated that the ethanol extract of B. rotunda has antiproliferative activity and induces apoptosis in HeLa cervical cancer cells [21]. The anticancer activity of B. rotunda is attributed to its bioactive compounds. Sakuranetin has a cytotoxic effect on B16BL6 melanoma cells by inhibiting the PI3K/Akt signaling pathway [22]. Cardamonin and pinostrobin chalcone isolated from the rhizome of B. rotunda have a cytotoxic effect on the HT-29 colon cancer cell line [23]. No previous studies have investigated the potential anticancer activity of all the potential bioactive compounds found in B. rotunda. In silico screening followed by in vitro experiments is the most appropriate approach for an initial study of the anticancer potential of the bioactive compounds present in B. rotunda. The in silico approach is well suited to drug candidate screening because it can accelerate the discovery of drug candidate compounds by predicting their cellular and molecular mechanisms [24]. The results are more valid if supported by an in vitro approach, and there is a strong positive correlation between in silico and in vitro results [25,26]. Therefore, combined in silico and in vitro approaches are appropriate for this study, which aimed to explore the potential bioactive compounds in B. rotunda as anti-breast cancer agents.
Compound Data Mining. The compounds contained in B. rotunda were obtained from the KNApSAcK database (https://www.knapsackfamily.com/KNApSAcK/) and previous studies. KNApSAcK is a plant metabolite database containing 20,741 species and 50,048 metabolites [27].
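The next step of the pipeline, described immediately below, retrieves each mined compound's canonical SMILES from PubChem. As a hedged illustration of how this lookup could be scripted rather than done by hand, the sketch below uses PubChem's documented PUG REST interface; error handling and rate limiting are simplified.

```python
# Minimal sketch of the SMILES-retrieval step via PubChem PUG REST.
# Returns None when PubChem does not resolve the compound name.
import requests

def canonical_smiles(compound_name):
    url = ("https://pubchem.ncbi.nlm.nih.gov/rest/pug/compound/name/"
           f"{compound_name}/property/CanonicalSMILES/TXT")
    resp = requests.get(url, timeout=30)
    return resp.text.strip() if resp.ok else None

for name in ["sakuranetin", "cardamonin", "alpinetin", "pinocembrin"]:
    print(name, "->", canonical_smiles(name))
```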
Canonical SMILES of all compounds were obtained from the PubChem database (https://pubchem.ncbi.nlm.nih.gov/).
Screening Based on Druglikeness and Probable Activity. The compounds of B. rotunda obtained from the database were screened for druglikeness and probable bioactivity. Druglikeness screening was conducted using the SwissADME web server (https://www.swissadme.ch/) to identify compounds that might have medicinal properties, integrating the Lipinski, Ghose, Veber, Egan, and Muegge rules. Screening for probable activities was conducted to select compounds predicted to interact with cell signaling pathways, using the PASS Online web server (https://www.way2drug.com/passonline/). Activities were selected based on their relevance to anti-breast cancer mechanisms, such as MMP9 expression inhibitor [28], apoptosis agonist [29], JAK2 expression inhibitor [30], antineoplastic (breast cancer) [31] and proliferative disease treatment agent [32], caspase-3 stimulant [33], caspase-8 stimulant [33], topoisomerase I inhibitor [34], topoisomerase II inhibitor [34], cancer-associated disorder treatment agent [35], protein kinase C inhibitor [36], CDC25 phosphatase inhibitor [37], and CDK9/cyclin T1 inhibitor [38].
Protein Target Prediction. The compounds that met the druglikeness and probable activity parameters were used for target protein prediction. Direct targets were predicted using the SwissTargetPrediction database (https://www.swisstargetprediction.ch/); the five proteins most related to breast cancer were then selected. SwissTargetPrediction is a web server that predicts target proteins based on the similarity of a compound's structure to previously known compounds [39]. Indirect target proteins were obtained from the direct targets using the STRING 11.0 database with a confidence level of 0.4 and a maximum of 5 interactions. STRING is a database that predicts protein-protein interactions computationally [37]. The target protein network was visualized using Cytoscape 3.8.2.
Functional Annotation. Functional annotation was performed to predict the role of the target proteins in cell biology using the Database for Annotation, Visualization, and Integrated Discovery (DAVID) web server (https://david.ncifcrf.gov/) [40]. The databases used for this analysis were the Gene Ontology (GO) and the Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway databases. The GO database groups genes based on their roles in cells according to three domains, namely, molecular function, biological process, and cellular component [41]. The KEGG pathway database groups genes based on cellular pathways [42].
Molecular Docking. The three-dimensional structures of the compounds contained in B. rotunda were obtained from the PubChem database and prepared using OpenBabel [43] integrated into the PyRx software. Specific docking was conducted between each compound and the active site of each protein using the AutoDock Vina software integrated into PyRx 0.8 [44,45]. The docking results were visualized using the Biovia Discovery Studio 2019 software.
Molecular Dynamics Simulation. Molecular dynamics simulation was conducted using the YASARA (Yet Another Scientific Artificial Reality Application) software with the AMBER14 force field [46]. The system conditions were adjusted to the physiological conditions of cells (37°C, pH 7.4, 1 atm, and 0.9% salt content) and the simulation was run for 20 ns.
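The druglikeness screen above was run on the SwissADME web server; the sketch below reproduces only the Lipinski rule-of-five portion locally with RDKit, as a hedged illustration of the filtering logic. The pinocembrin SMILES is given as an example input; in practice the strings would come from the PubChem retrieval step.

```python
# Local Lipinski rule-of-five filter (one of the five rule sets the authors
# applied via SwissADME). A compound passes if MW <= 500, logP <= 5,
# H-bond donors <= 5, and H-bond acceptors <= 10.
from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski

def passes_lipinski(smiles):
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return False  # unparsable SMILES fails the screen
    return (Descriptors.MolWt(mol) <= 500
            and Descriptors.MolLogP(mol) <= 5
            and Lipinski.NumHDonors(mol) <= 5
            and Lipinski.NumHAcceptors(mol) <= 10)

compounds = {  # example entry; SMILES as retrieved from PubChem
    "pinocembrin": "C1C(OC2=CC(=CC(=C2C1=O)O)O)C3=CC=CC=C3",
}
for name, smi in compounds.items():
    print(name, "passes Lipinski:", passes_lipinski(smi))
```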
The macro programs used were md_run to run the simulations, md_analyze to analyze RMSD, and md_bindingenergy to analyze the molecular dynamics binding energy of the protein-ligand complexes.
B. rotunda Extraction. Six grams of powdered B. rotunda (Materia Medika, Batu, East Java, Indonesia) and distilled water or 96% ethanol in a ratio of 1:10 were placed in a microwave-assisted extraction (MAE) vessel (Anton Paar). The MAE was operated according to the specified protocol (5 min warming up to a holding temperature of 50°C; holding time, 10 min; 5 min cooling down; power, 1500 W). The extract was filtered using Whatman filter paper and then evaporated using a Buchi R-210 Rotavapor System (50 rpm, 37°C). The obtained extract was stored at 4°C.
Total Phenol and Flavonoid Analysis. Total phenol analysis was conducted using the Folin-Ciocalteu method with gallic acid as the standard, adopting the method of Jing et al. [47]. A total of 100 μL of B. rotunda extract or standard solution (1.5625-100 μg/mL) was added to 1.0 mL of Folin-Ciocalteu reagent, which had been diluted 10 times with distilled water. The solution was mixed with 1 mL of Na2CO3 (7.5%, w/v) and incubated in the dark for 90 min at room temperature. Total phenolic content was measured spectrophotometrically at a wavelength of 725 nm; the test was performed in triplicate, and total phenolic content is expressed as gallic acid equivalents (mg GAE/g). For the total flavonoid assay, the mixture was added to 10 μL of 1 M CH3COONa, with a 96% ethanol solution as the blank, and incubated for 40 min at room temperature in the dark. Absorbance was measured at a wavelength of 405 nm. This test was performed in triplicate, and total flavonoid content is expressed as quercetin equivalents (mg QE/g).
DPPH and NO Scavenging Assay. The antioxidant activity of the ethanol extract of B. rotunda was analyzed using the 2,2-diphenyl-1-picrylhydrazyl (DPPH) assay with ascorbic acid as the standard. One hundred microliters of extract or standard at concentrations of 31.25 to 1000 μg/mL were added to 100 μL of 0.4 M DPPH solution on a 96-well plate. The mixture was incubated at room temperature for 30 min. Absorbance was read at a wavelength of 490 nm using an ELx880 microplate reader (BioTek Instruments, USA). The assay was performed in triplicate, and the antioxidant activity of the extracts was determined from the IC50 value. Antioxidant activity through NO scavenging was analyzed using the NO scavenging assay with some optimizations [50]. A total of 60 μL of the extract was serially diluted on a 96-well plate, and 60 μL of 10 mM sodium nitroprusside (SNP) was added to each well; the control was SNP only. The SNP was dissolved in phosphate-buffered saline and incubated at room temperature under bright lighting for 150 min. Griess reagent (5% phosphoric acid, 1% sulfanilamide, and 0.1% naphthyl ethylene diamine dihydrochloride) was added to each well. Absorbance was read at a wavelength of 570 nm using the ELx880 microplate reader. The assay was performed in three replicates, and the NO scavenging activity of the extract was determined from the IC50 value.
Cell Culture Preparation. The breast cancer cell line T47D and the human fibroblast cell line TIG-1 were obtained from the Animal Physiology, Structure, and Growth Laboratory, Brawijaya University.
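The IC50 values quoted throughout these assays are read from fitted inhibition curves. The sketch below shows one common way to obtain such a value, a four-parameter logistic fit with SciPy; the concentration grid matches the dilution series above, but the percent-inhibition values are made-up placeholders, not the study's data.

```python
# Illustrative IC50 estimation: fit a four-parameter logistic dose-response
# curve (bottom, top, IC50, Hill slope) to percent-inhibition measurements.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, bottom, top, ic50, hill):
    # increases from `bottom` at low dose to `top` at high dose
    return bottom + (top - bottom) / (1 + (ic50 / x) ** hill)

conc = np.array([31.25, 62.5, 125, 250, 500, 1000])  # ug/mL, as in the assay
inhibition = np.array([12, 22, 38, 55, 71, 84])      # % inhibition, placeholder

popt, _ = curve_fit(four_pl, conc, inhibition,
                    p0=[0, 100, 200, 1], maxfev=10000)
print(f"fitted IC50 = {popt[2]:.1f} ug/mL")
```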
The cells were cultured in complete media (RPMI 1640 (Gibco, USA) for T47D and MEM (Gibco, USA) for TIG-1, each with 10% fetal bovine serum (Gibco, USA) and 1% penicillin-streptomycin (Gibco, USA)) on 60 mm dishes. Cells were incubated at 37°C and 5% CO2.
Cell Viability Assay. The T47D and TIG-1 cell lines were seeded onto 96-well plates at a density of 7,500 cells per well and incubated at 37°C and 5% CO2 for 24 h. The cells were treated with aqueous and ethanol extracts of B. rotunda at concentrations of 0, 10, 20, and 40 μg/mL for 24 h. The treatment medium was replaced with medium containing 5% WST-1 (Sigma-Aldrich, USA), and the cells were incubated for 30 min. Absorbance was measured at a wavelength of 450 nm using an ELx880 absorbance microplate reader. The IC50 value was determined from the inhibition curve. The assay was conducted in triplicate [51].
2.12. Apoptosis Assay. T47D cells were seeded onto 24-well plates (75,000 cells/well) and incubated for 24 h. The cells were then treated with various doses of the ethanol extract of B. rotunda, 0 (untreated), 20, 40, and 80 μg/mL, for 24 h. The cells were harvested, stained with annexin V and propidium iodide (PI) (BioLegend, USA), and incubated in the dark at 4°C for 20 min. The cell suspensions were run on a flow cytometer (BD FACSCalibur, USA), and data analysis was performed using the CellQuest software (BD Bioscience, USA). The analysis was conducted in triplicate [51].
Rhodamine Assay. The rhodamine assay followed the method of Wei et al. [52] with some modifications. T47D cells were seeded onto 24-well plates at a density of 75,000 cells/well and incubated at 37°C and 5% CO2 for 24 h. The cells were treated with B. rotunda ethanol extract at concentrations of 0 (untreated), 20, 40, and 80 μg/mL and incubated for 24 h. After incubation, 2 μM rhodamine 123 (Thermo Fisher Scientific, USA) was added to each well and the cells were incubated for 1 h at 37°C and 5% CO2. The cells were harvested and centrifuged at 2500 rpm and 10°C for 5 min. The pellet was resuspended in basal media, incubated for 30 min at room temperature, and then washed with phosphate-buffered saline. The rhodamine 123 staining was analyzed using the FACSCalibur analyzer, and the data were obtained using the CellQuest software. The assay was conducted in triplicate.
Functional Annotation. All target proteins have roles in cancer cell progression, as shown in Figure 2. Based on the GO analysis, the target proteins participate in cancer-related biological processes such as the epidermal growth factor receptor signaling pathway, negative regulation of apoptosis, and positive regulation of cell proliferation, among others. The target proteins are mostly located in the cytoplasm, nucleoplasm, and membrane, and function in kinase activity, protein kinase binding, protein binding, enzyme binding, and so on (Figure 2(c)). Based on the KEGG pathway analysis, these target proteins play roles in cancer-related pathways such as the PI3K/Akt, FOXO, ErbB, and JAK/STAT signaling pathways (Figure 2(b)). These results show that, based on the GO and KEGG pathway databases, the target proteins have cancer-related roles.
Molecular Docking. Molecular docking simulation was conducted for the seven bioactive compounds against their direct target proteins, and the results are shown in Table 2. The pose with the most negative binding affinity value from the docking results was selected for further molecular dynamics simulation.
The docking results showed that the seven bioactive compounds contained in B. rotunda bind to their respective target proteins at the same site as the control, indicating that the compounds are predicted to have activity similar to the control. The poses with the most negative binding affinity values are shown in Figure 3. Sakuranetin binds to the active site of MMP12 by forming 2 hydrogen bonds and 2 hydrophobic interactions, sharing residues with the control, namely, Ala182, Leu181, and His218. Cardamonin binds to the active site of CDK4 by forming 4 hydrogen bonds and 4 hydrophobic interactions and shares residues with the control, namely, Val20 and Leu147. Alpinetin forms 1 hydrogen bond and 3 hydrophobic interactions with JAK3, binding to the same residues as the control, namely, Leu956, Leu828, and Val836. 2S-Pinocembrin binds to the active site of VEGFR2 by forming 1 hydrogen bond and 6 hydrophobic interactions; this compound forms bonds at the same residues as the control, namely, Leu840, Val848, Phe104, Leu1035, and Cys1045. 7,4′-Dihydroxy-5-methoxyflavanone binds to the active site of MMP13 by forming 2 hydrogen bonds and 6 hydrophobic interactions, sharing residues with the control, namely, Thr224, Met232, Tyr223, Leu197, His201, and Val198. 5,6-Dehydrokawain binds to the VEGFR1 active site by forming 1 hydrogen bond and 9 hydrophobic interactions, at the same residues as the control, namely, Val841, Ala859, Cys912, Leu1029, Lys861, Val909, and Asp1040. Pinostrobin chalcone binds to KCNA3 by forming 2 hydrogen bonds and 2 hydrophobic interactions; this compound shares only one residue with the control, namely, Glu168.
3.6. Molecular Dynamics Simulation. Molecular dynamics simulations were performed to analyze the stability of the interactions between the proteins and the B. rotunda compounds. The parameters used were the protein-ligand complex RMSD, the ligand movement RMSD, and the molecular dynamics binding energy. The protein-ligand complex RMSD represents the stability of the complex during the 20 ns simulation. The complex RMSD results showed that all compounds had stable values similar to the control, and all complexes had RMSD values below 3 Å, which indicates stability [60,61] (Figure 4). The ligand movement RMSD represents the stability of a ligand while interacting with the protein; a ligand is considered stable if it does not move much during the simulation, indicated by a stable ligand movement RMSD value. The results showed that almost all compounds in B. rotunda had stable ligand movement RMSD values similar to the control, and sakuranetin had a more stable RMSD value than the control. The ligand movement RMSD of alpinetin bound to JAK3 increased at ∼10 ns but stabilized from ∼12 ns until the end of the simulation (Figure 5). The molecular dynamics binding energy also represents the stability of the protein-ligand interaction; the more positive the binding energy value, the more stable the protein-ligand interaction [62]. Overall, the binding energy results showed that all protein complexes with B. rotunda compounds were stable. The CDK4-cardamonin, VEGFR1-5,6-dehydrokawain, VEGFR2-2S-pinocembrin, and KCNA3-pinostrobin chalcone complexes tended to be stable because their binding energy values did not fluctuate much, although their stability remained below that of the control protein complexes.
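The stability metric used here, RMSD across trajectory frames, is a simple quantity to compute. The authors used YASARA's md_analyze macro; the sketch below shows the underlying calculation with NumPy, assuming the frames have already been superposed onto the reference structure and stored as an array of shape (n_frames, n_atoms, 3). The toy trajectory is random data, only there to make the snippet runnable.

```python
# Root-mean-square deviation of atomic positions relative to the first frame:
# RMSD(t) = sqrt(mean over atoms of |r_i(t) - r_i(0)|^2)
import numpy as np

def rmsd_series(frames):
    ref = frames[0]
    diff = frames - ref                          # per-atom displacement vectors
    return np.sqrt((diff ** 2).sum(axis=2).mean(axis=1))

traj = np.random.default_rng(0).normal(scale=0.5, size=(200, 350, 3))  # toy data
rmsd = rmsd_series(traj)
# the paper's stability criterion: complex RMSD staying below ~3 Angstrom
print("stable throughout:", bool(rmsd.max() < 3.0))
```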
The MMP12-sakuranetin and MMP13-7,4′-dihydroxy-5-methoxyflavanone complexes showed high interaction stability because their binding energies were almost the same as those of the controls. The JAK3-alpinetin complex was very stable because it had a more positive binding energy value than the control (Figure 6). Overall, the molecular dynamics simulations showed that the interactions between the proteins and the compounds contained in B. rotunda are stable and that these compounds have high potential to act as inhibitors of the related proteins.
Total Phenol and Flavonoid Content of B. rotunda Extracts. Total phenol and flavonoid assays were conducted to confirm the presence of the potentially bioactive compounds (identified in silico) in the extracts, because most of these compounds belong to the phenolic and flavonoid groups; based on the database mining, the compound classes were chalcones (8), flavonoids (6), terpenes (2), benzoic acids (2), kavalactone (1), and stilbene (1) (Figure 1(a)). The results showed differences in total phenols and flavonoids between the aqueous and ethanol extracts of B. rotunda. Total phenol in the ethanol extract was much higher (25.04 mg GAE/g) than in the aqueous extract (0.57 mg GAE/g). Total flavonoid in the ethanol extract was also higher (4.52 mg QE/g) than in the aqueous extract (1.40 mg QE/g) (Figure 7(a)). These results indicate that the phenolic and flavonoid compounds with potential as anti-breast cancer agents are predicted to be more abundant in the ethanol extract of B. rotunda.
Antioxidant Activity of B. rotunda Extracts. The antioxidant assay results confirmed the total phenol and flavonoid results. The DPPH and NO scavenging assays showed that the ethanol extract of B. rotunda had better antioxidant activity than the aqueous extract. The IC50 value of the DPPH test for the ethanol extract was 602 ± 3.00 ppm, while that of the water extract was 5072.13 ± 28.5 ppm. The NO scavenging activity of the ethanol extract was also higher than that of the aqueous extract: the ethanol extract had an IC50 value of 6.93 ± 3.46 ppm, while that of the water extract was 11.20 ± 0.43 ppm. These results indicate that the ethanol extract of B. rotunda contains more antioxidant compounds than the water extract (Figure 7(b)).
Toxicity of the Extracts to the T47D Breast Cancer Cell Line. The toxicity assay analyzed the toxic effect of the extracts on T47D cells, and the results are shown in Figure 7. From this assay alone it is not known whether the cell death caused by the extract is necrotic or apoptotic; rhodamine and apoptosis tests are necessary to determine the type of cell death.
Ethanol Extract of B. rotunda Induced Loss of MMP in T47D Cells. The effect of the ethanol extract of B. rotunda on the mitochondrial membrane potential (MMP) was evaluated using rhodamine 123 (Figures 7(d) and 7(f)). Rhodamine 123 is a green fluorescent dye that stains mitochondria with intact MMP, indicating viable cells; cells that have lost MMP are not stained by rhodamine 123, indicating that they are not viable [63]. The results showed that the higher the dose of B. rotunda ethanol extract, the more cells lost MMP, and the number of cells losing MMP increased significantly with increasing dose of the extract.
Ethanol Extract of B. rotunda Induced Apoptosis of T47D Cells. The apoptosis-inducing effect of B. rotunda ethanol extract on T47D cells was evaluated using annexin V and PI (Figures 7(e) and 7(g)).
Annexin V detects apoptosis by binding to phosphatidylserine, which is exposed on the extracellular face of the membrane when cells undergo apoptosis, and PI detects necrosis by binding to DNA [64]. The results showed that the number of T47D cells undergoing apoptosis increased with increasing extract dose, although the difference between the 20 and 40 μg/mL doses was not significant. The number of necrotic cells was not significantly different between the control and the treatments, indicating that the extract did not significantly cause necrosis in T47D cells.
Discussion
Based on the KNApSAcK database, B. rotunda contains bioactive compounds dominated by phenolic groups, namely, chalcones and flavonoids (Figure 1(a)). These phenolic compounds are predicted to have anti-breast cancer effects. The content of phenolic compounds and flavonoids in the ethanol extract of B. rotunda was higher than that in the aqueous extract (Figure 7(a)). Therefore, the compounds of B. rotunda obtained from the database are most likely present in the ethanol extract. These results are supported by the antioxidant tests using the DPPH and NO scavenging assays, in which the ethanol extract had higher antioxidant activity than the aqueous extract (Figure 7(b)). This higher antioxidant activity is most likely caused by the greater number of phenolic compounds present in the ethanol extract. Compounds in B. rotunda with high antioxidant activity are pinostrobin chalcone, alpinetin, and cardamonin [65,66]. This result is in line with previous studies stating that extraction with an ethanol solvent can recover more phenolic compounds than extraction with water [67]. Druglikeness and bioactivity pathway prediction were used to select the active compounds of B. rotunda with potential as anticancer agents. The seven compounds selected were sakuranetin, cardamonin, alpinetin, 2S-pinocembrin, 7,4′-dihydroxy-5-methoxyflavanone, 5,6-dehydrokawain, and pinostrobin chalcone. Some of these bioactive compounds were already known to have anticancer activity from previous studies, but their molecular mechanisms are partly unknown. Sakuranetin isolated from Artemisia dracunculus inhibits the proliferation of esophageal squamous cell carcinoma cells via induction of DNA damage and loss of mitochondrial membrane potential [68]. Cardamonin and alpinetin can suppress proliferation and induce apoptosis of prostate and ovarian cancer cells by modulating the STAT3 pathway [69,70]. Pinostrobin chalcone and 5,6-dehydrokawain have antiproliferative effects on various cancer cell lines [71,72]. Research on the anticancer effects of 2S-pinocembrin and 7,4′-dihydroxy-5-methoxyflavanone is still very limited. In addition, the combined action of these compounds in inhibiting the growth of breast cancer had not been explained. This study describes how these seven potential compounds may work together to provide an anti-breast cancer effect. The direct and indirect target proteins of the seven compounds contained in B. rotunda were closely related to breast cancer progression; these proteins participate in signaling pathways related to breast cancer. The PI3K/Akt and mTOR signaling pathways are activated by receptor tyrosine kinases, leading to tumor cell growth and proliferation [73]. The FOXO signaling pathway plays a role in tumor suppression; the FOXO protein regulates the expression of genes important for tumor cell growth, such as p27, CDKN1B, TNFSF10, and GADD45 [74].
The JAK/STAT signaling pathway is activated by receptor tyrosine kinases such as EGFR and the interleukin receptor, which activate the STAT3 protein. STAT3 is a transcription factor for breast cancer proliferation-associated genes such as CCND1, c-myc, BCL2, and BAX [75]. ERα and ERβ are involved in the estrogen signaling pathway: they are activated after binding to estrogen, form dimers, and bind to target genes such as CCND1, HIF1A, and IL6, which have roles in breast cancer proliferation [7]. The Wnt signaling pathway is activated when the Wnt ligand binds to the LRP and Frizzled protein complex, thereby activating β-catenin, which in turn regulates transcription of c-myc, CCND1, MMP7, and CD44 [76]. The MAPK pathway enhances the sensitivity of breast cancer cells to estradiol so that the cells grow faster [77]. The VEGF signaling pathway induces angiogenesis in breast cancer [78]. The results of the functional annotation with the KEGG pathway database are in line with those of the GO analysis. The interactions between the compounds contained in B. rotunda and the target proteins will potentially affect these pathways. The interaction between sakuranetin and MMP12 has a low (favorable) binding affinity value, as does the interaction between 7,4′-dihydroxy-5-methoxyflavanone and MMP13. Interestingly, these interactions are as stable as the controls. Therefore, the two compounds are predicted to be potent inhibitors of MMP12 and MMP13 activity, respectively. MMPs act as metastasis-promoting enzymes by degrading extracellular matrix proteins [79]. MMP12 is highly expressed in various tumor cells compared with normal epithelial cells and correlates positively with cancer cell invasion [80]. MMP12 inactivation can inhibit the growth, invasion, and metastasis of lung adenocarcinoma cells [81,82]. MMP13 shows significantly increased expression in breast cancer tissue and is predicted to play a significant role in tumor invasion and metastasis [83]. Previous studies have shown that inhibition of MMP13 activity inhibits the growth of the breast cancer cell lines MDA-MB-231 and 4T1.2 [84]. Based on this, inhibition of MMP12 and MMP13 activity correlates with inhibition of cancer cell growth. This was confirmed in the present study, which showed that the higher the dose of B. rotunda extract, the lower the number of cells (Figure 7(c)). However, the molecular mechanism relating MMP12 and MMP13 inhibition to apoptosis needs further research. Angiogenesis-related proteins are also significant targets of the compounds in B. rotunda. 5,6-Dehydrokawain and 2S-pinocembrin stably bind to VEGFR1 and VEGFR2, respectively; based on their binding positions, the two compounds have high potential as inhibitors of these proteins. VEGFR1 and VEGFR2 are receptors of the VEGF ligand with roles in angiogenesis, and both are overexpressed in breast cancer cells [85,86]. Previous studies have shown that inhibition of these two receptors not only inhibits angiogenesis but also inhibits cell growth and induces cancer cell apoptosis. Inhibition of VEGFR1 can inhibit angiogenesis in mouse models of breast cancer and decrease the viability of various breast cancer cell lines such as CAL-120, JIMT-1, MCF-7, and MDA-MB-134 [87]. Another study stated that inhibition of VEGFR1 and VEGFR2 can inhibit growth and induce apoptosis of cancer cells through regulation of the PI3K/Akt and MAPK pathways [88].
Inhibition of VEGF signaling reduces pPI3K and pAKT, key proteins in the PI3K/Akt signaling pathway, which causes cells to undergo apoptosis [89]. The inhibition of VEGFR1/2 predicted by the in silico studies, resulting in decreased cell viability and induction of apoptosis, was confirmed by the in vitro results (Figures 7(c)-7(e)). Cyclin-dependent kinase 4 (CDK4) is a protein that plays an important role in cell cycle regulation. The docking and MD results indicated that cardamonin interacts stably at the abemaciclib binding site of CDK4; this interaction, similar to that of the control, indicates that cardamonin has potential as a CDK4 inhibitor [61]. In general, cyclin D is overexpressed in breast cancer cells, but it requires CDK4 to perform its function as a cell cycle regulator [90]. When the CDK4-cyclin D complex is activated, it phosphorylates retinoblastoma (RB), causing RB to be released from the E2F transcription factor; E2F then binds to DNA and initiates transcription of the genes needed to enter the S phase [91]. Therefore, CDK4 inhibition can cause cell cycle arrest in the G1 phase [92]. In addition, CDK4 inhibition can also cause cancer cell apoptosis: previous studies have shown that CDK4 inhibition can reduce NF-kB activity, resulting in downregulation of antiapoptotic genes [93]. This mechanism may also occur in T47D cells treated with B. rotunda extract, but further research is needed. The JAK/STAT pathway has a crucial role in the development and progression of breast cancer [12]. This study showed that compounds bound to JAK3 and STAT3 are predicted to inhibit the activity of these two proteins. The lowest binding affinity value was found for the interaction between alpinetin and JAK3, and the MD results showed that this interaction is stable. Therefore, alpinetin in B. rotunda has high potential as a JAK3 inhibitor. JAK3, activated by a receptor tyrosine kinase, activates STAT3; STAT3 then forms a dimer and translocates to the nucleus to act as a transcription factor for genes related to cell proliferation and survival [75]. Therefore, inhibition of this pathway can induce cancer cell apoptosis. Previous research stated that JAK and STAT inhibition resulted in apoptosis of MCF-7 breast cancer cells [94]. The anticancer effect of B. rotunda predicted by the in silico approach was confirmed by the in vitro approach. The ethanol extract of B. rotunda had an IC50 value of 40.4 μg/mL for T47D cells and 292.7 μg/mL for TIG-1 cells, indicating that B. rotunda can kill cancer (T47D) cells selectively over normal (TIG-1) cells (selectivity index ≈ 7.2). The apoptosis-inducing effect of the B. rotunda ethanol extract on T47D cells was measured by the rhodamine 123 and apoptosis (annexin V/PI) assays. These results are in line with those of the in silico method, in which the compounds in B. rotunda are predicted to inhibit the activity of proteins related to cell survival and antiapoptosis. The apoptotic effect of the ethanol extract was evident in both the rhodamine 123 assay and the apoptosis assay: the results showed that the cells lost their mitochondrial membrane potential (MMP) when treated with the extract. Loss of MMP is an important step in inducing apoptosis because it facilitates cytochrome c exit from the mitochondria and activates apoptotic signaling [95]. The decrease in MMP is predicted to be due to the inhibition of KCNA3 activity by pinostrobin chalcone. KCNA3 is a mitochondrial ion channel that controls the mitochondrial membrane potential [96].
Apoptosis of T47D cells due to B. rotunda extract was also demonstrated in this study (Figures 7(e) and 7(g)). The apoptosis of T47D cells is predicted to be due to the seven bioactive compounds from B. rotunda that interact with breast cancer-related proteins. Moreover, this study shows that several compounds have potential antiangiogenic and anti-invasion activity; further studies are needed on the antiangiogenic and anti-invasion effects of B. rotunda extract on breast cancer cells.
Conclusion
This study focused on predicting the potential anticancer mechanism of B. rotunda against a breast cancer cell line using an in silico approach. B. rotunda contains seven compounds predicted to have anticancer effects: sakuranetin, cardamonin, alpinetin, 2S-pinocembrin, 7,4′-dihydroxy-5-methoxyflavanone, 5,6-dehydrokawain, and pinostrobin chalcone. These compounds are predicted to interact stably with the MMP12, CDK4, JAK3, VEGFR2, MMP13, VEGFR1, and KCNA3 proteins; these interactions are expected to inhibit growth and induce apoptosis in breast cancer cells. The predicted anticancer activity was confirmed by in vitro assays in which B. rotunda extract was shown to be toxic to T47D cells and to induce their apoptosis. However, further experimental studies are needed to support these findings. This study provides an important basis for further research, considering that the in silico results predicted that the compounds in B. rotunda target multiple pathways related to breast cancer progression.
Data Availability
The datasets used and analyzed during the present study are available from the corresponding author on reasonable request.
Conflicts of Interest
The authors declare that they have no potential conflicts of interest.
Improving place value ability for children with learning disability using Balok Pelangi Dienes as media
The use of Balok Pelangi Dienes aims to improve the ability to determine place value for students with learning disability. The ability to determine place value is profoundly needed as a prerequisite for acquiring other mathematical concepts, such as the operations of addition, subtraction, and others. The research was an experiment using a single-subject research approach with an A-B-A design. The subject was an 11-year-old grade 5 elementary school student. The target behavior was an increase in the ability to determine place value through reading and writing numbers. The data were recorded as the frequency of correct answers under conditions A1 (first baseline), B (intervention), and A2 (second baseline). The results showed that the frequency of the student's correct answers increased significantly after the intervention, as shown by the upward trend line and the small percentage of overlapping data. Therefore, the use of Balok Pelangi Dienes as media can improve the ability to determine place value for students with learning disability.
Introduction
Some elementary school students have difficulty understanding the concept of place value, particularly students with learning disability. Students at the primary level often struggle to determine place value [1-7], even though the concept of place value is profoundly required for, and influential on, other math concepts: it is a prerequisite for arithmetic operations such as addition and subtraction. When students are unable to determine place value, they also make errors in naming and writing multi-digit numbers, as well as in addition with the carrying technique, subtraction with the borrowing technique, column addition, and multiplication [1,2,7,8]. Determining place value is associated with how a number is written and pronounced. Since the numeral system used in Indonesia is the Hindu-Arabic system, it follows a base-10 (powers of 10) structure in which a digit's place value is determined by the position of the digits 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9, with regular names closely related to their meaning [9]. The concept requires an understanding of place value that integrates the idea of grouping by ten with procedural knowledge of how a set is recorded in the place value scheme, as well as how a number is written and pronounced. In a preliminary study, the researchers found a student with learning disability who had difficulty determining place value and who read and wrote numerals incorrectly. The student wrote 2000023 for 223 and read the number as "two-two-three" or "twenty-two-three". The student's inability to determine place value resulted in failure to round numbers to the nearest place value. The student made errors when determining numbers at a targeted place value and in other arithmetic operations such as addition, subtraction, multiplication, and division; some of these errors resulted from the inability to determine place value. For instance, in addition with carrying, the student failed to place a number in the correct position when operating with column addition: for 46 + 7 = 106, the student put the 'seven' below the 'four' instead of the 'six'.
In another case, the student borrowed and carried at the wrong number and also did not subtract the borrowed number. These findings are similar to those discussed in [2,7,8]. Learning place value is a fundamental lesson that is considered easy, but in fact many elementary school students find it hard to determine place value because the material is an abstract concept. Hence, the teaching process should follow the learning principle for math concepts of starting from the concrete, moving to the semi-concrete, and eventually reaching the abstract. The lesson does not begin with a definition; it begins with observing examples through media/props [6,10,11]. It should be noted that the learning medium needs to be prepared in accordance with the characteristics of the students, and the learning goal must also be considered. The media will support the students' understanding of abstract mathematics; therefore, skill and innovation in developing or creating learning media are necessary, especially for students with learning disability [12]. Dienes blocks are one medium used in learning place value. Dienes blocks serve to teach concepts of quantity, to compare and order objects, to represent the place value of a number (ones, tens, hundreds, and thousands), and to support the operations of addition, subtraction, multiplication, and division at the appropriate level [2,13-15]. The use of Dienes blocks in learning place value represents the basic knowledge of the powers of ten, oral representation ability, and symbolic representation [1,4]. On these grounds, Balok Pelangi Dienes, modified from Dienes blocks, is well fitted to improve the ability to determine place value, which is related to the ability to read and write numbers as well as to round numbers. The researchers modified the media by leveraging the use of sequential colors on Dienes blocks to instill the concept of place value. The order of the colors refers to the Indonesian children's song "Pelangi" ("Rainbow"), which is taught in kindergarten. The color red stands for the ones, yellow for the tens, and green for the hundreds. The researchers also used a mini suitcase as a storage container equipped with colored boxes that allow the student to organize the blocks and determine place value, including the value of zero; two mini whiteboards were also included for writing long addition forms and numerals. Based on the explanation above, this research examined the influence of Balok Pelangi Dienes as media on the concept of place value in students with learning disability.
Method
This research used a quantitative approach. The research was an experiment in the form of single-subject research (SSR), which examines the impact of a repeatedly given intervention on a single subject. The research used an A-B-A design, where condition A1 is the subject's condition before the intervention, condition B is the subject's condition under the intervention, and condition A2 is the subject's condition after the intervention. The variables consisted of an independent variable and a dependent variable. The independent variable was the media (Balok Pelangi Dienes), a modification of Dienes blocks in which the color sequence is taken from the Indonesian children's song "Pelangi".
The blocks are made of wood, plastic, or paper and are used to instill the concepts of quantity, number, place value, and the operations of addition, subtraction, multiplication, and division; they are an accessible aid for students with learning disability [1,10,16]. Meanwhile, the dependent variable was the ability to determine place value in the ones, tens, and hundreds. The ability to determine place value is the ability to determine the value of a digit in a numeral based on its position, with given names such as ones, tens, hundreds, thousands, and so on [2,9,17,18]. The subject of the research was a girl in grade 5 of elementary school. Data were collected through direct observation of the student's behavior and of the phenomena occurring while she answered questions about determining place value. Tests were given to measure the student's ability to determine place value in the baseline condition (A1), the intervention condition (B), and the second baseline condition (A2). The data collection instruments were written test questions, and the data were recorded as the frequency of correct answers.
Results and Discussion
The research was conducted over 24 meetings: 6 meetings in phase A1, 12 meetings in phase B, and 6 meetings in phase A2. The data for each phase (baseline A1, intervention B, and baseline A2) are shown in Figure 4. (Figure 4 legend: frequency of correctly answered questions; mean level; trend line; upper limit line; lower limit line; point of intersection of mid-date and mid-rate.) According to the data, there is an upward trend in the baseline phase, as the line ascends; in the intervention phase the line continues upward, and it falls slightly in the second baseline. Trend stability was assessed using a 15% stability criterion; a trend is considered stable when its stability percentage is between 80% and 90% [19]. The stability of condition A1 was 50%, which means the results were unstable (variable). The stability of condition B was 17%, also unstable (variable). In contrast, the stability of condition A2 was 100%, meaning the results were stable. The data trend in condition A1 showed the student's initial ability, seen in the slight upward slope of the line. In condition B, the steep line reflected the student's ability, which increased after the intervention using Balok Pelangi Dienes. In condition A2, the student's ability declined slightly due to the student's carelessness but remained higher than in condition A1. The range in condition A1 was 5-7, in condition B it was 11-30, and in condition A2 it was 27-30. For the level change in condition A1, the data differed between the first and last days: the student solved five problems correctly on the first day and six on the last day; hence the level rose and was marked (+). In condition B, the gap between the first day and the last day was 19 with an upward trend, meaning the target behavior improved markedly; it was marked (+) because it matched the purpose of the intervention, namely improving the ability to determine place value. In condition A2, there was an extremely slight difference, a regression in level change, and it was marked (-).
Based on the within-condition analysis, the variable to be changed was the ability to determine place value, so the number of variables changed from the baseline condition (A1) to the intervention (B) and back to the baseline (A2) was one. The change in trend stability across conditions was based on the stability trends in conditions A1, B, and A2 from the within-condition analysis; therefore, the change in trend stability from A1 to A2 was from variable to stable. The overlap between conditions A1 and B was calculated as the number of observations in condition B falling within the range of condition A1, divided by the total number of observations in condition B, and likewise for the overlap between conditions B and A2 (a computational sketch of these calculations follows this passage):
Percentage of overlap B/A1 = (data points of B within the range of A1 / total data points of B) × 100% = 0%
Percentage of overlap B/A2 = (data points of B within the range of A2 / total data points of B) × 100% = 0%
This calculation implies that the intervention using the modified Balok Pelangi Dienes as media influenced the improvement of the ability to determine place value. The findings showed that Balok Pelangi Dienes improved the student's ability to determine place value and to round numbers according to their place value. This result is similar to previous research (such as Muhammad Faisee, 2012), which showed that Dienes blocks enhanced the understanding of the concept of place value for students with mild mental disability. Using Balok Pelangi Dienes as media makes abstract material concrete and helps the student understand the concept of place value through the shape, size, and colors of the media. Moreover, it does not merely help the student solve problems in order to improve ability [2,13-15,20,21]; as explained, Balok Pelangi Dienes also serves to teach concepts of quantity, to compare and order objects, to represent the place value of a number (ones, tens, hundreds, and thousands), and to support the operations of addition, subtraction, multiplication, and division at the appropriate level. In addition, the researchers found that Balok Pelangi Dienes can also be used to teach rounding numbers, which is part of the ability to determine place value. Using Balok Pelangi Dienes as media helps students with learning disability understand the abstract concept of place value. As stated by Ormond and Steele [22,23], students with learning disability experience challenges such as difficulty paying attention amid distraction, regression in reading skills, ineffective learning and memory strategies, and difficulty completing tasks that require abstract reasoning. To improve the ability to determine the place value of units in the ones, tens, and hundreds, the researchers used Balok Pelangi Dienes, leveraging the color sequence learned in kindergarten, to support reading and writing numerals. The use of a color sequence can improve student recall by 30%-40%, give learning satisfaction, evoke motivation and emotion to learn, and improve learning outcomes [24-26]. The researchers also provided the media with a mini suitcase containing a whiteboard and boxes for placing units. The mini suitcase functioned as storage and as part of media use.
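The stability and overlap computations referenced above are mechanical once the session frequencies are recorded. The sketch below implements them in Python under the common SSR convention that the stability envelope is the phase mean plus or minus half of (0.15 × highest score); the session-by-session values are illustrative, chosen only to match the ranges reported in the text (A1: 5-7, B: 11-30), not the study's actual data.

```python
# Stability percentage and overlap percentage for A-B-A single-subject data.
def stability_percentage(scores, criterion=0.15):
    mean = sum(scores) / len(scores)
    half_width = criterion * max(scores) / 2   # half of the stability range
    inside = [s for s in scores if mean - half_width <= s <= mean + half_width]
    return 100 * len(inside) / len(scores)

def overlap_percentage(intervention, baseline):
    lo, hi = min(baseline), max(baseline)      # baseline range
    overlapping = [s for s in intervention if lo <= s <= hi]
    return 100 * len(overlapping) / len(intervention)

a1 = [5, 6, 5, 7, 6, 6]                                    # illustrative
b  = [11, 14, 17, 19, 21, 23, 25, 26, 27, 28, 29, 30]      # illustrative
print("A1 stability %:", stability_percentage(a1))
print("overlap B/A1 %:", overlap_percentage(b, a1))  # 0% suggests a clear effect
```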
The whiteboard in the suitcase aimed to improve the student's memory, since using the media requires the student to view and act simultaneously. The boxes served to affirm the place value of zero: by looking at an empty box, the student would not be confused about the place value of zeros. The use of Balok Pelangi Dienes builds an understanding of the concept of place value and improves the student's ability to determine place value, to solve problems involving place value, and to apply the rounding rules. The student became able to determine the digit at a targeted place value and to understand the concept of rounding: a ones digit greater than or equal to (≥) 5 is rounded up to one ten and a ones digit less than 5 is rounded down to zero; likewise, a tens digit greater than or equal to (≥) 5 is rounded up to one hundred and a tens digit less than 5 is rounded down to zero. The steps the researchers conducted when implementing the media were as follows. Preparation consisted of preparing the room, media, and stationery and checking the student's condition. The session then began with a prayer, delivery of the learning objectives, and a lesson on numerals, covering the number of digits that form ones, tens, hundreds, and thousands. Second, the place value and position of each digit were named based on the way the numeral is read and written. Then, the student was introduced to the media (Balok Pelangi Dienes): its sizes, colors, and the number of blocks in each unit representing its place value. The student began to compose a long addition form and decompose the result through the column addition technique. Further, the student determined the place value of a digit by observing the color and size of the Balok Pelangi Dienes and determined the value of the number. Finally, the student compared two numbers and rounded numbers using the media. The student's initial ability to determine place value was below the standard and required treatment. During the intervention using Balok Pelangi Dienes as media, the student's ability improved significantly. The frequency of correctly solved questions dropped slightly for a while due to the student's carelessness but recovered and increased in the next stage. The student's decline after the intervention was not caused by a misunderstood concept but by carelessness in solving the problems. This can be seen from her tendency to write more zeros than she should and to miscopy answers from the scratch paper. Besides, the student tended to rely on her memory of answers from previous meetings, so she often mistakenly wrote the answer for a question with almost similar numbers. When the student was given the intervention using Balok Pelangi Dienes, her ability to determine place value rose above the standard and increased significantly compared with her initial ability.
Conclusions
Based on the within-condition and between-condition data analyses, the research found that the intervention using Balok Pelangi Dienes as media influences the ability to determine place value and to round numbers for students with learning disability. This was shown by the increase in the student's ability to solve problems about determining place value.
In addition, the percentage of overlap between conditions A1 and B was 0%, and the percentage of overlap between B and A2 was also 0%, indicating a strong intervention effect.

Acknowledgments

Thanks to all the team members involved in writing this article, and thanks to the research subjects for their enthusiasm and cooperation.
Total laparoscopic vs. conventional open abdominal nerve-sparing radical hysterectomy: clinical, surgical, oncological and functional outcomes in 301 patients with cervical cancer

Objective Total laparoscopic nerve-sparing radical hysterectomy (TL-NSRH) has been considered a promising approach; however, its surgical, clinical, oncological and functional outcomes have not been systematically addressed. We present a large retrospective multi-center experience comparing TL-NSRH vs. open abdominal NSRH (OA-NSRH) for early and locally-advanced cervical cancer, with particular emphasis on post-surgical pelvic function. Methods All consecutive patients who underwent class C1 NSRH plus bilateral pelvic + para-aortic lymphadenectomy for stage IA2-IIB cervical cancer at 4 Italian gynecologic oncologic centers (Negrar, Varese, Bologna, Avellino) were enrolled. Patients were divided into TL-NSRH and OA-NSRH groups and were investigated with preoperative questionnaires on urinary, rectal and sexual function. Postoperatively, patients completed a questionnaire assessing quality of life, taking into account sexual function and psychological status. Oncological outcomes were analyzed using the Kaplan-Meier method. Results 301 consecutive patients were included in this study: 170 in the TL-NSRH group and 131 in the OA-NSRH group. Patients in the OA-NSRH group were more likely to experience urinary incontinence and (after 12 months of follow-up) urinary retention. No patient in the TL-NSRH group vs. 5 (5.5%) in the OA-NSRH group had complete urinary retention at the >24-month follow-up (p=0.02). A total of 20 (11.8%) patients in the TL-NSRH group and 11 (8.4%) in the OA-NSRH group had recurrence of disease (p=0.44), and 14 (8.2%) and 9 (6.9%) died of disease during follow-up, respectively (p=0.83). Conclusion Our study shows that TL-NSRH is feasible, safe and effective and combines adequate radicality with an improvement in post-operative functional outcomes. Oncological outcomes of laparoscopic procedures deserve further investigation.

INTRODUCTION

Despite the widespread adoption of screening programs, cervical cancer still represents the fourth most common malignancy in women worldwide (the second in developing countries) and the second most common cause of cancer-related deaths among women between 20 and 39 years old [1]. Surgery represents one of the mainstays of treatment in early-stage disease, and radical hysterectomy (RH) with nodal dissection provides very good results in patients with International Federation of Gynecology and Obstetrics (FIGO) stage IA2-IIA node-negative disease, with 5-year survival rates >90% [2]. RH is a challenging procedure and has traditionally been associated with a high incidence of peri-operative complications [3]. In addition, relevant long-term functional sequelae are associated with the procedure [4-6]. Iatrogenic denervation during radical surgery seriously interferes with pelvic function and results in considerable patient distress and impaired quality of life (QoL), particularly in young women. In recent years, nerve-sparing RH (NSRH) has been proposed to reduce postoperative morbidity without compromising oncological radicality [7-9]. The nerve-sparing concept was originally described by Kobayashi in 1960 [10], and was subsequently implemented in different techniques by several authors [5,11-13]. The results of the available series show that NSRH is equivalent to conventional RH in terms of survival, but it allows the preservation of pelvic function [14].
The advent of minimally invasive techniques has significantly improved the short-term outcomes of patients undergoing major gynecologic surgery, including RH, by enhancing recovery and decreasing pain and post-operative complications [15,16]. However, recent publications (particularly the Laparoscopic Approach to Cervical Cancer [LACC] trial) have raised serious concerns regarding the oncological safety of endoscopic surgery in cervical cancer [17,18], describing a significant and alarming increase in the rate of recurrence. Despite this deep rethinking of the role of endoscopy in this setting, total laparoscopic RH has been adopted for years and has the potential to increase the possibility of sparing autonomic neural structures during the procedure: the image magnification offered by the laparoscope has markedly improved the ability to identify and visualize thin structures, such as the pelvic autonomic nerves. Some studies have reported the outcomes of total laparoscopic NSRH (TL-NSRH), but a recent review on this issue identified only 7 non-randomized series on a limited number of patients comparing nerve-sparing vs. conventional procedures [14]. In addition, the authors of the review acknowledged that functional outcomes have seldom been described in depth in the available literature [14]. The aim of this study is to report our multi-center experience in terms of clinical outcomes and prognosis, comparing TL-NSRH vs. conventional open abdominal NSRH for early-stage or locally-advanced cervical carcinoma, performed before the publication of the LACC trial. Particular attention was paid to the description of functional results and patients' QoL.

MATERIALS AND METHODS

The present analysis is a multi-institutional, retrospective series of consecutive patients who underwent class C1 (Querleu-Morrow classification) NSRH (performed by laparoscopy or by open abdominal surgery) [19] and systematic bilateral pelvic ± additional para-aortic lymphadenectomy for the treatment of carcinoma of the uterine cervix. The study involved patients from 4 Italian institutions: the Gynecologic Oncology and Minimally-Invasive Pelvic Surgery Unit of the IRCCS (Scientific Institute for Research, Hospitalization and Healthcare)-Sacro Cuore Don Calabria Hospital, Negrar di Valpolicella (Verona); Endoscopica Malzoni, Center for Advanced Endoscopic Gynecologic Surgery, Avellino; the Department of Obstetrics and Gynecology, DIMEC, S. Orsola Hospital, University of Bologna; and the Department of Obstetrics and Gynecology, University of Insubria, Varese. The protocol for this study was in line with the Strengthening the Reporting of Observational Studies in Epidemiology statement. From January 2007 to December 2012, all consecutive patients with early-stage or locally advanced cervical cancer who underwent NSRH were enrolled. Inclusion criteria were: age >18 years, stage IA2-IIB, and provision of written informed consent. Patients with a positive history of malignancy and those with an absolute contraindication to surgery were excluded. At the time of informed consent before surgery, each patient agreed to the use of her data in future clinical retrospective studies. Data on patient characteristics, tumor classification, pathology, surgical factors and follow-up were analyzed. Clinical staging of cervical carcinoma and preoperative evaluation were based on complete physical and rectovaginal examination, routine blood and urine analysis, chest radiography, and transvaginal ultrasound.
Cystoscopy and rectoscopy were performed only on specific indication; magnetic resonance imaging (MRI) or computed tomography (CT) was performed in the vast majority of cases (when clinical examination was ambiguous or inconclusive). The stage of the disease was determined using the FIGO 2009 staging system. Patients were divided into the TL-NSRH group and the open abdominal (OA-NSRH) group, according to the type of surgical approach adopted. The surgical approach was chosen according to the availability or not of a surgical team with extensive experience in laparoscopic surgery.

Operative technique

The procedures were performed by surgeons with extensive experience in gynecologic oncology and (for the TL-NSRH group) in advanced minimally-invasive radical surgery (>50 laparoscopic type C radical hysterectomies performed). Bipolar coagulation ± an ultrasonic device was used. Regarding the radicality of parametrial resection, the surgical steps of type C1 RH, as per the Querleu-Morrow classification system, were followed [19]. The intervention was performed according to the technique previously reported by Malzoni et al. [20] (for laparoscopic NSRH) and Raspagliesi et al. [13] (for open abdominal NSRH), with technical modifications in parametrial dissection for the nerve-sparing approach, as recently published by our group (Supplementary Data 1) [21]. Ovarian preservation was performed in patients <40 years of age and with squamous histology. The indwelling urethral catheter was removed 3 days after the intervention. Intermittent catheterization was performed by patients 3 times a day until residual urine volumes of less than 100 mL were obtained at least 3 consecutive times.

Clinical outcomes and QoL

All patients were investigated with pre-operative questionnaires on urinary, rectal and sexual function (Supplementary Data 2). Post-operative complications were divided into 2 groups according to the timing of their occurrence (i.e., within or after 7 days). Postoperative bladder function was assessed by means of the post-voiding residual (PVR), evaluated by ultrasound or catheterization after voiding. According to the principles of Asimakopoulos et al. [22], bladder function was classified as normal when PVR was <100 mL and the patient voided spontaneously; conversely, it was classified as abnormal when PVR was >100 mL and self-catheterization was necessary after the first month of follow-up. In the latter case, the time to recovery of spontaneous voiding after self-catheterization was evaluated. Patients performed intermittent self-catheterization at home if they experienced voiding difficulty or urinary retention of >100 mL at the time of discharge. The number of days of self-catheterization from discharge until resolution was recorded. Rectal manometry was planned only for patients who reported severe rectal dysfunction pre-operatively. In the study plan, every patient with severe bladder and rectal dysfunction after 18 months was considered "denervated". For these patients, long-term urodynamic studies and ano-rectal manometry were planned, in order to assess the neurologic impairment. During the follow-up period (6, 12, and 18 months after the operation), patients were asked to complete a questionnaire regarding pelvic function and QoL.
The questionnaire was modified from Bergmark's series [23] and assessed QoL using a score based on 54 items taking into account sexual function (according to DSM-IV criteria) and psychological status according to the short World Health Organization QoL scores [24]. All patients provided written informed consent to participate in the survey. For bowel and bladder dysfunction, confirmation was obtained by urodynamic tests and anorectal manometry.

Neoadjuvant/adjuvant treatment and follow-up

All patients classified as FIGO stage IB2-IIB, as well as those with stage IIA >4 cm at pre-operative clinical and radiological examination, were administered 3 cycles of neoadjuvant chemotherapy (NACT) with cisplatin/carboplatin and paclitaxel ± ifosfamide prior to surgery; in case of progressive or stable disease after NACT, surgery was not accomplished and patients were submitted to chemoradiation. Adjuvant pelvic/para-aortic radiotherapy was administered to patients with positive pelvic/para-aortic nodes, parametrial/lympho-vascular spread of disease detected at definitive histology, or G3 histotypes, and in case of involved surgical resection margins (<5 mm), as per the Sedlis criteria [25]. Adjuvant chemotherapy with cisplatin was added in case of positive nodes. The follow-up period lasted from the date of surgery until January 2017. Follow-up consisted of a pelvic examination every 3 months during the first 2 years, twice a year from the third to the fifth year, then yearly. A Pap smear and a thoraco-abdominal CT scan were planned every year. Pelvic recurrences were detected by clinical examination and confirmed, as were retroperitoneal recurrences, on CT and/or MRI scans. Disease-free survival (DFS) and overall survival (OS) were calculated. DFS was defined as the period from the surgical intervention to the evidence of recurrent disease. OS was defined as the time from the date of operation to death.

Statistical analysis

Statistical analysis was performed using the Statistical Package for the Social Sciences for Windows, version 17.0 (SPSS Inc., Chicago, IL, USA). Student's t-test and the Mann-Whitney U test were used for comparison of continuous variables, as appropriate. Fisher's exact test and the χ2 test were used for categorical variables. All tests were 2-sided and statistical significance was set at p<0.05. Survival rates were calculated according to the Kaplan-Meier method. The corresponding p-value was computed using the log-rank (Mantel-Cox) test.

RESULTS

Patient groups, pre-operative and operative data

A total of 301 consecutive patients were included in this study: 170 in the TL-NSRH group and 131 in the OA-NSRH group. Preoperatively, 2 (1.2%) and 1 (0.8%) patients had urodynamically proven detrusor overactivity, and 1 (0.6%) and 0 patients had mixed urinary incontinence, in the TL-NSRH and OA-NSRH groups, respectively. Seven patients were excluded from the analysis because of stable/progressive disease after NACT. Three patients (1.8%) in the TL-NSRH group and 3 (2.3%) in the OA-NSRH group reported mild/moderate difficulty in emptying the bladder (with partial Valsalva maneuver), but at urodynamic studies no neurologic impairment of the detrusor muscle was detected. This impairment was related to urethral stenosis, confirmed by cystoscopy, and to advanced age: 5 of these 6 patients were >70 years old. No patient reported severe impairment of rectal function.
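To make the statistical workflow described in the Statistical analysis section concrete, here is a minimal sketch of the categorical testing and survival comparison, written with the open-source SciPy and lifelines packages rather than the SPSS software actually used. The counts and arrays are illustrative placeholders, not the study data.

```python
import numpy as np
from scipy.stats import fisher_exact
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Categorical comparison (2x2 table), e.g. a complication present/absent
# in two surgical groups; counts here are illustrative only.
table = np.array([[7, 163],    # group 1: events, non-events
                  [34, 97]])   # group 2: events, non-events
odds_ratio, p_value = fisher_exact(table)
print(f"Fisher's exact test: p = {p_value:.4f}")

# Kaplan-Meier estimate and log-rank test for disease-free survival.
# durations = months from surgery to recurrence or last follow-up;
# events = 1 if recurrence observed, 0 if censored. Hypothetical values.
rng = np.random.default_rng(0)
dur_a, ev_a = rng.exponential(60, 50), rng.integers(0, 2, 50)
dur_b, ev_b = rng.exponential(55, 50), rng.integers(0, 2, 50)

km = KaplanMeierFitter()
km.fit(dur_a, event_observed=ev_a, label="group A")
print(f"Median DFS, group A: {km.median_survival_time_:.1f} months")

result = logrank_test(dur_a, dur_b, event_observed_A=ev_a, event_observed_B=ev_b)
print(f"Log-rank (Mantel-Cox): p = {result.p_value:.3f}")
```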
All 301 patients underwent type C1 NSRH with systematic pelvic lymphadenectomy for invasive cervical cancer. Fifty-six of them were submitted to additional para-aortic lymphadenectomy: 27 and 39 in the TL-NSRH and OA-NSRH groups, respectively. No conversion to laparotomy was registered in the TL-NSRH group. No differences were found in terms of demographic and pathological characteristics between groups, although a tendency towards a higher stage and a lower rate of adenocarcinomas in the OA-NSRH group was noted (Table 1). No difference was noted in the proportion of patients who underwent neoadjuvant chemotherapy.

Clinical and functional outcomes and QoL

Intra-operative complications included 7 vs. 34 cases of intra-operative blood loss >500 mL in the TL-NSRH vs. OA-NSRH groups, respectively (p<0.01). Two organ injuries in the TL-NSRH group and 4 in the OA-NSRH group were observed (p=0.25): 2 cases of bladder lesions (successfully repaired laparoscopically) in the TL-NSRH group, and 2 cases of ureteral injury repaired intraoperatively with positioning of a stent, one vena cava injury and one bowel lesion in the OA-NSRH group. Post-operative complications within the first 7 days after the intervention are summarized in Table 2. Two patients (1.2%) in the TL-NSRH group presented hemoperitoneum and required reoperation the day after RH. Both re-operations were completed laparoscopically. The site of bleeding was at the level of the residual parametrium in both cases. Thirty-three cases of post-operative urinary retention occurred within 7 days in the overall population; the rate of immediate post-operative urinary retention was higher in the OA-NSRH group. All these cases resolved within 21 supplementary days of bladder self-catheterization. Post-operative complications occurring >7 days after the operation are listed in Table 3. Regarding urologic complications, hydronephrosis occurred in 4 patients (2.4%) in the TL-NSRH group and in 5 (3.8%) patients in the OA-NSRH group, due to post-operative ureteral stenosis (p=0.51). All cases were treated with cystoscopic double-J stent placement and had spontaneous resolution after 3 months. Four patients (4.7%) in the TL-NSRH group and 6 patients (4.6%) in the OA-NSRH group presented ureteral fistula within 22 days after surgery. In all cases stent placement for 3 months was sufficient to restore ureteral integrity. In 2 of these patients a laparoscopic uretero-neocystostomy with psoas hitch was necessary, due to fibrotic ureteral stenosis after the healing of the fistula. One patient in each group had a vesico-vaginal fistula, which was treated conservatively with an indwelling Foley catheter in the bladder for 2 months. After catheter removal, resolution of the fistula was observed in the patient in the TL-NSRH group, whereas the patient in the OA-NSRH group underwent re-laparotomy, fistula repair and an omental flap. The rate of lymphorrhea was higher in the TL-NSRH group, whereas the rate of self-catheterization at discharge was higher in the OA-NSRH group. The days of self-catheterization were <3 in all cases but one (6 days, in the TL-NSRH group). One (0.6%) patient in the TL-NSRH group died 27 days after surgery due to encephalitis caused by sudden deterioration of her HIV infection. Of the remaining patients, 132 (78.1%) vs. 126 (96%) gave their consent to complete the study questionnaire at the 1-month follow-up in the TL-NSRH vs. OA-NSRH groups, respectively. Data about urinary, bowel and fecal function are provided in Table 4.
The patients with preoperative urinary incontinence did not show significant worsening post-operatively. Patients in the OA-NSRH group were more likely to experience urinary incontinence and, after 12 months of follow-up, urinary retention. No patient in the TL-NSRH group vs. 5 (5.5%) in the OA-NSRH group had complete urinary retention at the >24-month follow-up (p=0.02). Anal incontinence was uncommon, and its percentage was comparable between the TL-NSRH and OA-NSRH groups. All the patients experiencing anal incontinence defined their condition as "rare". Fecal constipation was similar between groups up to the 2-year follow-up, but at the last checkpoint it was more common in the OA-NSRH group. Seventy-one patients (41.7%) and 65 (49.6%) who were sexually active before the operation in the TL-NSRH and OA-NSRH groups, respectively, accepted to complete the questionnaire about sexual activity at the 12-month follow-up. Of them, 62 (87.3%) and 57 (87.7%) recovered sexual activity after surgery in the TL-NSRH and OA-NSRH groups, respectively. Forty-eight (77.4%) and 39 (68.4%) of the women who recovered sexual activity after surgery considered their sexual life "satisfactory" in the 2 groups, respectively (p=0.30). Among the women with a satisfactory sexual life, 5 (12.8%) in the OA-NSRH group had received radiation therapy; of the 14 and 18 patients in the TL- and OA-NSRH groups with an unsatisfactory quality of sexual life, 11 (78.5%) and 15 (83.3%) had received adjuvant radiation therapy, respectively (p<0.001 for both groups). Table 5 summarizes data regarding sexual function at 12 months post-operatively. The performance or not of para-aortic lymphadenectomy did not affect functional outcomes.

Survival and follow-up

The median follow-up was 30 (6-88) and 39 (8-85) months in the TL-NSRH and OA-NSRH groups, respectively. Twenty (11.8%) patients in the TL-NSRH group and 11 (8.4%) in the OA-NSRH group had a recurrence of disease (p=0.44), and 14 (8.2%) and 9 (6.9%) died of disease during follow-up, respectively (p=0.83). No patient was lost to follow-up. Three deaths in the TL-NSRH group were unrelated to cervical cancer (1 cerebral stroke, 1 encephalitis due to HIV progression and 1 myocardial infarction). The first relapse of disease was in the pelvis in 14 patients (70%) and at distant sites in 6 (30%) patients in the TL-NSRH group. In the OA-NSRH group the first location of recurrence was the pelvis in 7 patients; in the long term, the DFS curves were superimposable. Fig. 3 reports the DFS for early-stage disease in the 2 groups (p=0.33).

DISCUSSION

A recent review and meta-analysis of studies comparing outcomes of nerve-sparing laparoscopic RH to those of standard minimally-invasive RH reported a total of 325 nerve-sparing procedures from 7 independent series [14]. The authors concluded that NSRH is more time-consuming but may be associated with similar oncological outcomes compared to non-nerve-sparing radical procedures. The evaluation of functional outcomes in the included series was extremely poor and the level of evidence was low [14]. In this context, the recommendation to perform type C1 RH whenever possible relies more on anatomical and logical considerations than on scientific evidence. The present study represents one of the largest available series of nerve-sparing laparoscopic vs. open abdominal radical hysterectomies for cervical cancer, and it includes an extremely detailed and thorough evaluation of peri-operative surgical outcomes and post-operative pelvic function.
The main findings of our study are the following. First, the risk of surgical complications of this technically challenging procedure is acceptable: we observed 25 (8.3%) cases of intra-/post-operative urinary tract lesions in the overall population, although our cohort included patients with locally-advanced disease submitted to NACT [4,26]. Second, we provide a large amount of data showing that the rate of pelvic dysfunction after this type of radical surgery is low, thus demonstrating that preservation of the neural autonomic fibers actually has a positive effect on patients' post-operative wellbeing and QoL. Third, the rate of pelvic dysfunction is lower when laparoscopic surgery is performed rather than traditional open abdominal surgery. Fourth, the oncological outcomes and survival rates of TL-NSRH compare well with the available series of non-nerve-sparing laparoscopic procedures [27]. Similar to the LACC trial [17], we observed a tendency towards earlier recurrences in the group of patients submitted to laparoscopic surgery. However, with extended follow-up, the DFS curves of laparoscopic and open abdominal procedures move closer and finally overlap. On the other hand, OS was quite similar between the 2 groups in our series. In the era after the LACC trial, the presentation of a series of laparoscopic radical hysterectomies for cervical cancer may be unpopular. However, several considerations have spurred us to publish the present article: 1) one of the main criticisms against the LACC trial is the inclusion of many centers without renowned experience in minimally invasive techniques for cervical cancer (the criterion for including a center in that trial was the evaluation by a central committee of 2 unedited videos of endoscopic RH); in contrast, the operators involved in the laparoscopic procedures of the present series have a solid background in advanced minimally-invasive surgery. 2) The data presented were collected before the publication of the LACC study. 3) Our oncological data compare well with an important historical series by Landoni et al. [2] describing the use of open surgery in cervical cancer and including more favorable cases. The very good results of open surgery in the LACC trial appear unprecedented and difficult to replicate. 4) Our thorough analysis of the outcomes of nerve-sparing surgery performed on a considerable number of patients may be helpful for pre- and post-operative counseling and could be at least partly independent of the type of surgical approach used. 5) The observation that urinary incontinence, voiding difficulties and fecal constipation, although generally infrequent, were more common among patients submitted to OA-NSRH may imply an important finding, i.e., that the magnified view of the laparoscope may allow a more subtle and precise identification of the pelvic autonomic fibers, with more efficient sparing of nerves and function. It is well accepted that bladder, colorectal and sexual dysfunction affect the long-term recovery of patients undergoing RH for cervical cancer. This type of outcome depends on the disruption of the pelvic autonomic nerves during resection of the lateral and posterior parametria. Specific steps of RH may cause surgical neuro-ablation of autonomic fibers leading to or belonging to the pelvic plexus (PP).
The neural structures injured during RH are: the superior hypogastric plexus (ortho-sympathetic), during presacral and para-aortic lymph-node dissection; the hypogastric nerves (HNs; ortho-sympathetic), during resection of the uterosacral and rectovaginal ligaments; and the pelvic splanchnic nerves (PSNs; parasympathetic) and the PP (mixed ortho- and parasympathetic), during resection of the cardinal ligament in the lateral part of the parametrium and during resection of the vesicovaginal ligaments and paracolpia in the caudad anterior part of the parametrium [28]. Thanks to the enhanced view offered by the laparoscope, the performance of nerve-sparing surgery using a minimally-invasive approach appears extremely precise and allows a careful and respectful dissection of the "pars vasculosa" from the "pars nervosa" of the parametrium [29,30], which can be resected with adequate oncological radicality while at the same time sparing the visceral efferent branches of the PP, finally resulting in less bladder, ano-rectal and sexual dysfunction. In our experience, the first step for a safe NSRH is knowledge of the anatomy of the female pelvis and pelvic ligaments. To better identify neural fibers and surgical landmarks, avascular spaces are routinely developed: typically, the medial and lateral (Latzko's) pararectal spaces are dissected in order to achieve different dissection planes, leading to the deep uterine vein and to the origin of the parasympathetic PSNs at the sacral roots [26]. According to the series by Ercoli et al. [30], it is anatomically possible to identify 3 main groups of visceral efferent fibers leaving the PP, directed to the target viscera:

a) the medial efferent bundle: a group of thin fibers directed medially toward the rectum, running through the mesorectum;

b) the cranial efferent bundle: a group of thin fibers directed cranially toward the uterus, running through the parametrium;

c) the anterior efferent bundle: a group formed by 3 or 4 main fibers directed anteriorly towards the bladder and the vagina, running through the paracervix.

Regardless of the surgical approach, the accomplishment of an adequate NSRH requires revealing the PP with its efferent groups of fibers and transecting only the uterine branches arising from the cranial efferent bundle. In this way, bilateral preservation of the HNs, PSNs and the medial and anterior efferent bundles of the PP is achieved [6,30,31]. In recent years, the importance of NSRH in cervical cancer has become even more evident, considering that more than 54% of the women diagnosed are younger than 50 years of age [1]. Since 5-year survival has been reported between 88% and 97% after RH for early-stage node-negative cervical cancer [2,17,32], the definition of procedures with a lower impact on postoperative QoL appears crucial. Unfortunately, the main limitations of the available studies on this issue are the lack of adequate analysis of post-operative pelvic dysfunction and the short follow-up period, particularly in terms of QoL and functional outcomes. Concerning bladder function, our study shows that 12-24 months after the operation no patient in the laparoscopic group and only 5 in the overall cohort complained of complete urinary retention: a prolonged period of follow-up is therefore essential to detect the real bladder dysfunction rates after RH [33]. This may reflect the fact that nerve-sparing radical procedures actually spare at least some visceral fibers of the PP, even in cases in which denervation seems evident.
However, in case of partial neural damage, a long time may be required for the surviving visceral fibers of the PP to sprout and increase their connections with the target viscera. As previously described, rectal dysfunction can occur after laparoscopic RH [34]. The internal and external sphincters contract through coordination of parasympathetic and sympathetic fibers, while defecation involves the interaction of voluntary and involuntary pathways. It is not easy to quantify the grade of the most common bowel functional complaints, such as constipation, incomplete evacuation, tenesmus, or diarrhea. However, some studies show a negative effect of RH on bowel function (higher volumes of rectal distension needed to elicit the anorectal inhibitory reflex, slow-transit constipation, tenesmus, diarrhea, fecal leakage, and flatus incontinence) [35]. Inadequate radical mobilization of the rectum and radical resection of the dorsal and lateral paracervix can result in partial damage to these neural structures [37,38]. However, that review does not take into account the rates of incontinence, constipation and soiling after RH [36]. Our results analyze these aspects in detail and show no case of "frequent" fecal incontinence at the last follow-up; 16.6% of patients complained of moderate constipation, with no impact on QoL after surgery. Of note, the majority of these patients reported constipation even before RH. Considering sexual function, it is well known that after RH women experience changes in their vaginal anatomy and function, resulting in sexual dysfunction for many patients. These changes include shortening and inelasticity of the vagina, resection of the paracolpia, and loss of ovarian function [7]. Of course, radiotherapy is a contributing factor to sexual dysfunction, with the associated loss of elasticity, vaginal dryness, shortening and consequent dyspareunia, due to fibrosis and vascular reaction of the irradiated tissues. Moreover, we must also take into account the psychological impact of a diagnosis of cervical cancer. In our series, only a small proportion of the women included were sexually active. This may be due in part to the fact that approximately one third of the patients were >60 years old, and in part to the fact that the diagnosis of cervical disease had a detrimental impact on the sexuality of the women included. Overall, our data show good results in terms of sexual function: >70% of patients reported a satisfactory sexual life and unchanged libido. A recent study by Pieterse et al. [34] on a series of 229 patients (123 NSRH; 106 conventional RH), evaluated with a validated questionnaire, showed a significant sexual deterioration both at 12 and at 24 months after surgery for the total study group (nerve-sparing and classic RH). The complaints included absence of sexual activity, a narrow or short vagina, pain during intercourse, little or no lubrication during sexual activity, and an overall unsatisfactory sexual life. However, sexual activity increased significantly after 12 and 24 months of follow-up compared with the situation before treatment. The authors concluded that there was no clear difference in self-reported sexual life between groups of women treated with different surgical procedures (nerve-sparing versus non-nerve-sparing technique). Conversely, a previous study by our group showed that NSRH is associated with significantly better sexual outcomes compared with the "classical" RH technique [7].
Regarding oncologic outcomes, we acknowledge that the concerns regarding minimally invasive surgery in cervical cancer nowadays represent a major barrier to its use in this malignancy. However, we believe that this argument deserves thorough reconsideration, and new data are needed to confirm or refute the detrimental effect of endoscopic techniques. In any case, our survival data are comparable with the majority of the available reports on cervical cancer, and therefore we believe that they can be considered encouraging [2,25,39,40]. In our opinion, the final word on the role of minimally invasive surgery in cervical cancer is still to be pronounced. Among the limitations of the present study, we mention the retrospective design, the relatively small number of patients included, and the inclusion of both early and locally-advanced stages submitted to neoadjuvant chemotherapy. The detailed collection of pre-, intra- and post-operative data, and the specific assessment of functional outcomes, mitigate the possible reporting bias due to the retrospective nature of our analysis and represent the major strengths of this study. Regarding the inclusion of patients submitted to neoadjuvant treatment, a large, multicentre propensity-matched analysis by Ghezzi et al. [39] has shown that the use of laparoscopic surgery in this specific setting is adequate and provides optimal oncological results. Another possible limitation is the inherent selection bias: we noticed a higher proportion of patients with larger tumors in the open abdominal group; this raises the problem of a possible imbalance between the 2 groups in terms of both oncologic and functional outcomes. Moreover, the exclusion of the 7 patients who were refractory to NACT may have biased our survival analysis regarding locally-advanced disease. We also acknowledge that several factors apart from the type of surgical approach affect postoperative urinary and bowel function following NSRH, such as tumor size, urinary tract infection, as well as the individual surgeon's technique. To correct for at least a part of these factors, it would be interesting to perform a study comparing pelvic function following NSRH with vs. without NACT. In conclusion, our study shows that TL-NSRH is a feasible, safe and effective procedure that combines adequate radicality with an improvement in post-operative functional outcomes. Irrespective of the surgical approach chosen (minimally-invasive or open), and outside of the aseptic setting of clinical trials, our data provide a reliable picture of the everyday clinical scenario in patients affected by early-stage as well as locally advanced cervical cancer. These data may serve as the basis for clinical counselling and future discussions on this relevant topic.
Effect of thalamic deep brain stimulation on swallowing in patients with essential tremor

Abstract Objective Deep brain stimulation (DBS) of the ventral intermediate nucleus (VIM) is a mainstay treatment for severe and drug-refractory essential tremor (ET). Although stimulation-induced dysarthria has been extensively described, possible impairment of swallowing has not been systematically investigated yet. Methods Twelve patients with ET and bilateral VIM-DBS with self-reported dysphagia after VIM-DBS were included. Swallowing function was assessed clinically and by flexible endoscopic evaluation of swallowing in the stim-ON and in the stim-OFF condition. Presence, severity, and improvement of dysphagia were recorded. Results During stim-ON, the presence of dysphagia could be objectified in all patients, with 42% showing mild, 42% moderate, and 16% severe dysphagia. During stim-OFF, all patients experienced a statistically significant improvement of swallowing function. Interpretation VIM-DBS may have an impact on swallowing physiology in ET patients. Further studies to elucidate the prevalence and underlying pathophysiological mechanisms are warranted.

Introduction

Essential tremor (ET) is the most common movement disorder, with a prevalence of 0.9%. 1,2 Thalamic deep brain stimulation (DBS) is a mainstay of treatment for severe and drug-refractory ET. 3,4 However, postoperative management may be challenging. 5 As the most frequent side effect, stimulation-induced dysarthria (SID) has been reported with an average occurrence of 9%, with values ranging up to 75%. 6 The exact pathogenesis remains unknown, but it is hypothesized that current spread affecting neighboring structures causes SID. This could be due either to interference with physiological cerebellar information or to involvement of corticobulbar fiber tracts of the internal capsule. [7-10] In addition, both the cerebellum and the corticobulbar fibers play an important role in the process of swallowing, with the latter carrying information from the motor cortex to the cranial nerve nuclei innervating the swallowing musculature, and the former being responsible for coordination, sequencing, and timing of swallowing function. [11-15] Considering this substantial neuroanatomical overlap of structures involved in the control and execution of speech and swallowing, it can be assumed that both stimulation of the internal capsule and interference with the cerebellar network might affect swallowing physiology, resulting in stimulation-induced dysphagia. However, in contrast to SID, possible impairment of swallowing function after DBS of the ventral intermediate nucleus (VIM-DBS) has not been systematically investigated yet. The aim of this study was to evaluate the impact of VIM-DBS on swallowing function in patients with self-reported dysphagia using flexible endoscopic evaluation of swallowing (FEES). In addition, information about the underlying pathology was obtained by analyzing the observed pattern of dysphagia.

Methods

We retrospectively evaluated patients with ET and VIM-DBS who had received standardized FEES in the DBS-ON and OFF conditions for swallowing assessment at the dysphagia outpatient clinics of the German University Hospitals of Frankfurt and Muenster between 2011 and 2017. In total, 12 patients were included. All subjects reported swallowing problems which had developed during the course of DBS treatment.
Detailed medical history was obtained from every subject, and there was no evidence of any other diagnosis as the underlying cause of dysphagia. Clinical examination was performed by a neurologist, with additional assessment of speech and swallowing function by a speech and language pathologist (SLP). All patients received standardized FEES in the DBS-ON condition and after deactivation of the stimulator for a variable time. 16,17 Oropharyngeal dysphagia was deemed to be present when one or more pathological findings (e.g., penetration/aspiration, residue) occurred during FEES. 18 The study was approved by the ethics committees of the Goethe University Hospital Frankfurt and the University Hospital of Muenster and was conducted according to the principles of the Declaration of Helsinki. Patients were assessed at two different time points in the following conditions: (1) during stim-ON, with clinically optimized and chronically used stimulation parameters, and (2) during stim-OFF, after the DBS had been deactivated for a variable time interval (range 1-96 h). We followed a standardized FEES protocol as published before. 18 FEES videos were rated according to a standardized dysphagia score which had been developed for assessing treatment effects on swallowing function in patients with movement disorders. [19-22] In brief, three salient parameters of swallowing function were evaluated and scored: (I) premature spillage, (II) penetration-aspiration events, (III) residue. Premature spillage was defined as occurring when the bolus spilled into the pharynx prior to volitional posterior lingual propulsion, and was distinguished from a delayed swallow by identifying purposeful transfer of the bolus into the pharynx. 23 The scores of all single ratings were added, yielding a total dysphagia sum score ranging from 0 to 108, with higher scores indicating worse function. FEES examinations were video-recorded and stored on a hard disc for later review (Muenster) or saved on an external server (Frankfurt). All videos, that is, stim-OFF and stim-ON FEES assessments, were independently scored by two raters who were blinded to the patients' clinical data and assessment conditions. For the final analysis of the results, disagreements were discussed until agreement was reached. Severity of swallowing dysfunction was classified according to a previously published scale which ranges from 0 (no dysphagia) to 3 (severe dysphagia). 22

Statistical analysis

Statistical analyses were performed with R (version 3.4.4) and SPSS 19 (IBM Corporation, Somers, NY). Dysphagia sum scores were compared between the stim-ON and stim-OFF conditions using the Wilcoxon signed-rank test. Interrater reliability was analyzed separately for every single FEES dysphagia subscore (premature spillage, penetration-aspiration events, residue) for both conditions using ranked correlation (ICC by Friedmann chi-square procedure), providing a Cronbach's alpha coefficient. Spearman's rank correlation coefficient was used to analyze the correlation between dysphagia severity and the total electrical energy delivered (TEED) by DBS, for all patients for whom the respective data were available. All tests were performed two-sided and considered significant when P-values were <0.05.

Results

Twelve patients (4 female) were included in the study (Frankfurt n = 8; Muenster n = 4). All patients suffered from action and postural tremor, whereas resting tremor was present in 4, intention tremor in 10, and head tremor in 3 patients.
Average age was 69 ± 9 years; disease duration was 33 ± 21 years; and the time from electrode implantation to dysphagia assessment was 26 ± 24 months. The time between DBS and the onset of subjective dysphagia was 12 ± 10 months. Other reported side effects were dysarthria (7/12; 58%), gait ataxia (4/12; 33%), and limb ataxia (2/12; 17%). All patients had a marked tremor reduction during stim-ON, with significant improvement in hand tremor. Dysphagia was present in all patients in the stim-ON condition (n = 12), the average FEES dysphagia sum score amounting to 16 ± 10 (range 5-42). The most common FEES findings during stim-ON were premature spillage of the entire bolus and/or of bolus parts, with the consequence of quick and uncontrolled overflow into the laryngeal vestibule, in 83% (10/12), as well as predeglutitive penetration in 58% (7/12) and predeglutitive aspiration in 25% (3/12) of cases (Table 2). 71% (5/7) of the penetration events and all of the aspiration events (3/3) were directly related to premature spillage. Of note, swallowing impairment was observed when testing all consistencies (11/12 liquid, 10/12 semisolid, 9/12 solid), with premature spillage occurring mostly during swallowing of liquid textures. 50% of the patients (6/12) showed oral residue with fragmented bolus transfer. Pharyngeal residue was observed in about 60% of subjects and was primarily present when semisolid (7/12) and/or solid (8/12) food was applied. Dysphagia was classified as mild in 42% (5/12) and as moderate in 42% (5/12) of patients, whereas 16% (2/12) of patients suffered from severe dysphagia. In the stim-OFF condition, the mean FEES dysphagia sum score decreased to 3 ± 2, which translates to an average improvement of 82% compared to stim-ON (P = 0.003, Wilcoxon signed-rank test, sum of signed ranks = 78) (Table 3). Dysphagia severity was classified as mild in 3/12, moderate in 1/12, and severe in 1/12 patients, whereas in 7/12 subjects swallowing function was evaluated as normal. Of note, swallowing completely recovered in two patients, whereas in the remaining 10 subtle pathological findings remained (range 1-8 points). Demographic and clinical data are presented in Table 1. Intraclass correlation analyses demonstrated very good interrater reliability for all single parameters (premature spillage, penetration-aspiration events, residue) and both DBS conditions, with Cronbach's alpha ranging from 0.84 to 0.90 (stim-ON) and from 0.82 to 0.95 (stim-OFF).

Discussion

In this study, we evaluated the impact of VIM-DBS on swallowing function in a sample of ET patients suffering from dysphagia. Although dysarthria is a well-known side effect of VIM-DBS, this is, to the best of our knowledge, the first systematic, instrument-based report on dysphagia as a VIM-DBS-induced adverse effect. In all investigated cases, dysphagia was confirmed using FEES when DBS was on. After DBS deactivation, dysphagia significantly improved in all patients, the mean improvement of the FEES dysphagia sum score amounting to 80%. A reason for the lack of full recovery could be age- or disease-related changes in swallowing, which we cannot completely rule out because no FEES assessment was done before surgery. 24 Likewise, the deactivation period of the neurostimulator may not have been long enough for swallowing function to recover completely. 1 At present, the amount of time the DBS needs to be turned off in order to allow a noticeable change in the patient's swallowing remains elusive and should be investigated in future studies.
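To make the quantitative analyses reported above concrete, the sketch below reproduces the two main computations: the paired Wilcoxon signed-rank comparison of stim-ON vs. stim-OFF sum scores, and a Cronbach's-alpha estimate of interrater reliability. All score values are invented placeholders, not the study data, and the Cronbach's alpha formula is the standard one, since the paper does not spell out its exact procedure.

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical paired FEES dysphagia sum scores (range 0-108) for the
# same 12 patients in both stimulation conditions.
stim_on = [16, 22, 9, 14, 30, 11, 8, 19, 42, 12, 7, 15]
stim_off = [3, 5, 1, 2, 8, 2, 1, 4, 8, 3, 0, 2]

# Two-sided paired Wilcoxon signed-rank test, as used in the study.
stat, p = wilcoxon(stim_on, stim_off)
print(f"Wilcoxon signed-rank: W = {stat}, p = {p:.4f}")

def cronbach_alpha(ratings):
    """Standard Cronbach's alpha for a (subjects x raters) matrix:
    alpha = k/(k-1) * (1 - sum of per-rater variances / variance of sums)."""
    ratings = np.asarray(ratings, dtype=float)
    k = ratings.shape[1]
    item_vars = ratings.var(axis=0, ddof=1).sum()
    total_var = ratings.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Invented subscores from the two blinded raters for the same videos.
rater1 = [4, 6, 2, 5, 9, 3, 2, 6, 12, 4, 1, 5]
rater2 = [5, 6, 3, 4, 10, 3, 2, 7, 11, 4, 2, 5]
print(f"Cronbach's alpha = {cronbach_alpha(np.column_stack([rater1, rater2])):.2f}")
```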
In analogy to SID, two pathophysiologic mechanisms can be hypothesized to underlie dysphagia in VIM-DBS, namely: (1) unintended stimulation of corticobulbar fibers in the internal capsule, 9 or (2) DBS-induced modulation of the cerebellar network due to stimulation of cerebellar-thalamic afferents. 25 The analysis of the dysphagia pattern rather supports the latter, for the following reasons. The main endoscopic finding was premature spillage with quick overflow into the laryngeal vestibule, accompanied by penetration/aspiration before swallowing. Thus, bolus control and the transition from the oral to the pharyngeal stage are affected by a lack of coordination of the muscles of the oral cavity rather than by a delayed pharyngeal response. This observation is more likely caused by interference of DBS with cerebellar circuits, resulting in an ataxic dysphagia pattern. [26-28] This view is supported by the fact that all patients reported a considerable delay between the beginning of DBS treatment and the onset of dysphagia. A similar delay in onset was also observed for progressive gait ataxia as a side effect of VIM-DBS for the treatment of ET. 1,29 In affected patients, gait ataxia improved within several days after DBS deactivation. That side effect was interpreted as a maladaptive response of distinct cerebellar subregions caused by antidromic stimulation of cerebello-thalamic afferents in the subthalamic area. 1,29 Furthermore, it is well known from lesion studies that patients suffering from stroke in the internal capsule typically show longer pharyngeal transit times with delayed triggering of the pharyngeal swallow, 30 a pattern which was not observed in our DBS cohort. Of note, all our patients were highly aware of their swallowing difficulties, although dysphagia was only mild to moderate in most cases. Patients did not suffer from sensory loss, and coughing and/or repeated swallowing were frequently observed as responses to penetration/aspiration and/or residue. If stimulation of the internal capsule were the underlying cause of these dysphagic symptoms, additional pharyngolaryngeal sensory deficits should have been observed. 31,32 Taken together, our clinical findings support the hypothesis that dysphagia more likely results from modulation of cerebellar circuits than from direct stimulation of corticobulbar fibers in the internal capsule. However, this hypothesis has to be tested in future studies, for example using diffusion tensor imaging and tractography, in order to assess the overlap of the stimulation field with crucial fiber tracts. In general, dysphagia was mild to moderate but nevertheless impacted the patients' well-being. In six patients, adjustment of the stimulation settings led to full recovery of swallowing function (Table 2). If readjustment of stimulation parameters did not result in a marked and lasting improvement of swallowing, dysphagic symptoms had to be tolerated for the sake of sufficient tremor control. Additionally, we detected a significant correlation between dysphagia severity and TEED. While our findings suggest that patients with high stimulation settings may have a higher risk of developing dysphagia, this observation must be validated in future studies because of the low number of cases. Limitations of this study include its retrospective design and the small sample size, which limits statistical power.
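The paper correlates dysphagia severity with TEED but does not spell out how TEED was computed. A commonly used approximation in the DBS literature (after Koss et al.) is TEED per second = voltage² × frequency × pulse width / impedance; the sketch below is an illustration under that assumption, with invented stimulation parameters, together with the Spearman rank correlation used in the study.

```python
from scipy.stats import spearmanr

def teed_per_second(voltage_v, frequency_hz, pulse_width_s, impedance_ohm):
    """Total electrical energy delivered per second of stimulation,
    after Koss et al.: TEED = V^2 * f * pw / Z. This formula is an
    assumption for illustration; the paper does not state its method."""
    return voltage_v**2 * frequency_hz * pulse_width_s / impedance_ohm

# Invented example settings: 3.0 V, 130 Hz, 60 us pulse width, 1000 ohm
print(f"TEED ~ {teed_per_second(3.0, 130.0, 60e-6, 1000.0):.2e} J/s")

# Spearman rank correlation between TEED and dysphagia severity (0-3);
# all values below are hypothetical placeholders.
teed = [4.2e-5, 7.0e-5, 9.8e-5, 5.5e-5, 1.2e-4, 8.1e-5]
severity = [1, 2, 3, 1, 3, 2]
rho, p = spearmanr(teed, severity)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```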
Furthermore, our hypothesis of a cerebellar pathomechanism underlying stimulation-induced dysphagia is based on clinical observations and thus remains speculative. Future studies are needed to address this issue by analyzing the anatomical relations of lead locations and the stimulation field to critical areas and fiber tracts. Nevertheless, these are the first data showing that dysphagia can be a clinically relevant adverse event of VIM-DBS in ET, which should raise special awareness in the multidisciplinary team in charge of the patient. Professional assessment of swallowing impairment should be routinely implemented in patients with ET before and after surgery. For quantification of dysphagia, the applied FEES dysphagia score is a suitable tool to evaluate treatment-induced improvement or worsening of swallowing function in patients with movement disorders and tremor beyond Parkinson's disease. Prospective, controlled studies are warranted to gather robust data on the incidence of, and the underlying pathomechanisms causing, swallowing disorders in ET patients with VIM-DBS, in order to optimize patient management.
Spontaneous Z2 Symmetry Breaking in the Orbifold Daughter of N=1 Super Yang-Mills Theory, Fractional Domain Walls and Vacuum Structure

We discuss the fate of the Z2 symmetry and the vacuum structure in an SU(N)xSU(N) gauge theory with one bifundamental Dirac fermion. This theory can be obtained from SU(2N) supersymmetric Yang-Mills (SYM) theory by virtue of Z2 orbifolding. We analyze the dynamics of domain walls and argue that the Z2 symmetry is spontaneously broken. Since unbroken Z2 is a necessary condition for nonperturbative planar equivalence, we conclude that the orbifold daughter is nonperturbatively nonequivalent to its supersymmetric parent. En route, our investigation reveals the existence of fractional domain walls, similar to fractional D-branes of string theory on orbifolds. We conjecture on the fate of these domain walls in the true solution of the Z2-broken orbifold theory. We also comment on the relation to nonsupersymmetric string theories and closed-string tachyon condensation.

Introduction

Recently, considerable progress has been achieved [1,2,3] in understanding nonsupersymmetric Yang-Mills theories which can be obtained from supersymmetric gluodynamics by orbifolding or orientifolding, following the original discovery of planar equivalence [4,5,6,7,8,9,10]. While establishing perturbative planar equivalence is quite straightforward, the issue of nonperturbative equivalence of the orbi/orientifold daughters to the parent theory (supersymmetric gluodynamics) is more complicated. The question of nonperturbative equivalence between supersymmetric (SUSY) and non-SUSY theories was raised by Strassler [11], who formulated a nonperturbative orbifold conjecture (NPO). Shortly after Strassler's work, arguments were given [12,13] that in the orbifold daughter planar equivalence fails at the nonperturbative level. In particular, Tong showed that when the orbifold theory is compactified on a spatial circle, the SYM-inherited vacuum is not the genuine vacuum of the theory [13]. It was discovered, however, that the orientifold daughter is more robust and withstands the passage to the nonperturbative level [1,2,3]. A refined proof of the nonperturbative equivalence of the orientifold daughter was worked out in Ref. [3]. Here we carry out a similar analysis for the orbifold theory. This analysis will show, in a very transparent manner, that a necessary condition for the nonperturbative equivalence to hold in the orbifold case is that the Z_2 symmetry of the (Z_2) orbifold Lagrangian is not spontaneously broken. The same conclusion was reached in [14]. As is well known [15,16], string theory suggests that, for the orbifold daughter of N = 4 SYM theory, the Z_2 symmetry is spontaneously broken above a critical value of the 't Hooft coupling. The orbifold field theory under consideration can be described by a brane configuration of type-0 string theory [10] (see Sect. 6). Type-0 strings contain a closed-string tachyon mode in the twisted sector. The tachyon couples to the "twisted" field [16] T ≡ Tr F_e² − Tr F_m² of the SU_e(N)×SU_m(N) gauge theory. 1 The subscripts e and m refer to "electric" and "magnetic", respectively. The words electric and magnetic, borrowed from the string theory terminology, are used here just to distinguish between the two SU(N)'s of the gauge group SU(N)×SU(N). The prediction of string theory [16] is that the perturbative vacuum at T = 0 is unstable.

1 Here and below the normalization of traces is such that Tr FF = …, and so on.
In the bona fide vacua a condensate of the twisted field of the form ⟨T⟩ ≠ 0 develops. Our task is to explore this phenomenon within field theory per se, with no (or almost no) reference to string theory. The SU_e(N)×SU_m(N) gauge theory with a Dirac bifundamental field is very interesting in its own right, independently of orbifolding. If we could prove that Z_2 is spontaneously broken using field-theoretic methods, this would be a tantalizing development. Below we will present arguments that such spontaneous symmetry breaking does take place; although convincing, they stop short of being a full proof. First, we generalize the analysis of Ref. [3] to demonstrate that NPO does require unbroken Z_2. Then we proceed to arguments based on consideration of domain-wall dynamics to show that the domain wall of the parent SYM theory, upon orbifolding, becomes unstable and splits into two walls: one "electric" and one "magnetic." As will be explained below, this splitting is a signal of the spontaneous breaking of the Z_2 symmetry. We then argue that the true solution of the orbifold theory has vacua in which the tachyon operator condenses. We discuss the true vacuum structure of the orbifold theory and comment on its relevance for the issue of closed-string tachyon condensation in string theory.

The paper is organized as follows. In Sect. 2 we show that NPO requires unbroken Z_2 symmetry and provide evidence that this symmetry is broken. Section 4 is devoted to a discussion of the order parameter(s) in the orbifold theory. In Sect. 5 we discuss the dynamics of fractional domain walls. Appendix A is devoted to low-energy theorems. Section 6 discusses the relation between type-0 string theory and the Z_2 orbifold field theory. In Sect. 7 we comment on the difference between the orbifold and orientifold daughters. Finally, in Sect. 8 we summarize our results and outline possible issues for future investigation.

After the first version of this paper appeared in eprint form, a related work was submitted [17]. We agree with part of the criticism presented in [17]. In particular, in Ref. [12] and in the first version of this paper, low-energy theorems were used to discriminate between the parent and orbifold daughter theories. These theorems become instrumental under the assumption of coincidence between the corresponding vacuum condensates. The vacuum condensate coincidence, imposed previously, is seemingly not necessary and, in fact, does not hold in a toy model we have recently analyzed. Relaxing this requirement makes the above low-energy theorems (and gravitational anomalies) uninformative. If one allows for unequal condensates, they cannot be used to prove (or disprove) that the Z_2 symmetry is broken. We revised the manuscript accordingly. We strongly disagree, however, with the analysis of the domain-wall issue presented in [17]. Domain-wall dynamics in Z_2 orbifold field theories is incompatible with planar equivalence.

2 The role of Z_2 in the proof of planar equivalence

Let us analyze whether or not nonperturbative planar equivalence takes place in the orbifold theory, following the line of reasoning established in [3]. We refer the reader to that paper for a detailed discussion of the procedure. Here we will consider, as a particular example, a two-fermion-loop contribution (see Fig. 1) to the partition function. Each fermion loop consists of two lines: one solid and one dashed.
The solid line denotes propagation of the (fundamental) color index belonging to the "electric" SU(N) while the dashed line denotes propagation of the color index belonging to the "magnetic" SU(N). "Electric" and "magnetic" gluon fields are marked in Fig. 1 by vertical and horizontal shadings, respectively. Each fermion loop represents, in fact, a fermion determinant evaluated in the given gluon background. A mass term is introduced for regularization. It is very important (see below). In what follows we will assume for definiteness that m is real and m > 0. We note that the determinant can equally well be written in terms of the transposed background fields A^T and F^T (i.e. in the conjugate representation), with the same convention regarding A^T and F^T as in Eq. (4). In the above expressions the gluon field is considered as background. Averaging over the vacuum gluon field is performed at the very end. The requirement that the fermion loops are connected through the gluon field enforces that only selected contractions are possible in the orbifold theory. In particular, in the diagrams of Fig. 1 the outside loops must both be either solid lines or dashed lines (Figs. 1a and 1b, respectively). The solid-dashed combination is excluded, as it represents a disconnected graph. In the parent SYM theory we deal with a single SU(2N), and all contractions are possible. In perturbation theory the contributions from the diagrams 1a and 1b are equal. The combinatorics is such that adding up 1a and 1b one exactly reproduces the two-fermion-loop contribution in SYM theory provided one performs the coupling rescaling of Eq. (6), g_D^2 = 2 g_P^2, where P and D stand for the parent and daughter (orbifold) theories. The above rescaling ensures that the 't Hooft couplings are the same in the parent and daughter theories. In perturbative planar equivalence - a solidly established fact - the vacuum angle θ plays no role since it does not show up in perturbation theory. A correspondence between parent and daughter θ's following from NPO can be derived from the holomorphic dependence of the bifermion condensates on the complexified coupling constants. If the vacuum angles in the parent and orbifold daughter theories are introduced as in Eqs. (7), then it is not difficult to show that the vacuum angles must be rescaled too [18,12,14], θ_D = θ_P/2 (Eq. (8)). Equations (6) and (8) are equivalent to the statement of correspondence between the holomorphic coupling constants.

3 Planar equivalence: what does it mean?

In establishing large-N equivalence between distinct theories, with distinct vacuum structure (and, as we will see shortly, the vacuum structure of the parent theory is not maintained upon projection to the orbifold daughter theory), we must carefully specify what this equivalence might actually mean. Any theory is characterized by a set of physical quantities that scale differently in the 't Hooft limit. For instance, particle masses are assumed to be N-independent, their residues in appropriate currents grow with N, particle widths fall with N (so that at N → ∞ all mesons are stable), the vacuum energy density scales as N^2, the number of vacua scales as N^1, and so on. When two theories with distinct vacuum structure are compared (but with the same scale parameter Λ), physical equivalence of the theories in question need not necessarily mean full general equality of all n-point functions, since such equality may come into contradiction with appropriate scaling laws. In particular, some vacuum condensates and low-energy theorems can be sensitive to the number of fundamental degrees of freedom (cf. the π^0 → 2γ constant, whose consideration led to the conclusion of three colors in bona fide QCD in the early 1970s).
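For orientation, the rescalings referred to above as Eqs. (6) and (8) can be written out explicitly. The following LaTeX lines are our reconstruction, fixed by the equality of the 't Hooft couplings stated above and by the correspondence θ_P = π ↔ θ_D = π/2 used below; they should be read as a sketch, not as a verbatim quotation of the original equations:

\[ g_D^2 = 2\, g_P^2 , \qquad \theta_D = \frac{\theta_P}{2} , \qquad \text{i.e.} \qquad \tau_D = \frac{\tau_P}{2} , \quad \tau \equiv \frac{\theta}{2\pi} + \frac{4\pi i}{g^2} . \]

Indeed, g_D^2 N = g_P^2 (2N), so the daughter inherits the parent's 't Hooft coupling, while the holomorphic coupling is halved.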
It is clear that a minimal requirement of planar equivalence is coincidence of the particle spectra in the common sector. More precisely, let us consider a vacuum V_P of the parent theory that can be mapped onto a vacuum V_D of the daughter theory and vice versa. We must verify that both V_P and V_D are stable vacua. If the spectra of particle excitations in both vacua in the common sector coincide up to 1/N corrections, we can speak of planar equivalence. Besides particle excitations, the parent and daughter theories may (and do) support extended excitations, such as domain walls. To consider domain walls we must consider pairs of vacua V_P, V'_P and V_D, V'_D which can be mutually mapped. Correspondence between the parent and daughter walls can be included in the requirement of planar equivalence. In the remainder of this section we show that evident distinctions in the vacuum structure of the parent SYM theory and its orbifold daughter necessarily lead to a mismatch in certain correlation functions. In particular, θ dependences cannot match. This does not necessarily mean a mismatch in the particle spectra, since the composite particle masses acquire a dependence on the θ parameters introduced in Eqs. (7) (at m ≠ 0, where m is a small fermion mass term, see below) only in subleading order in 1/N.

The functional-integral representation for the partition function, with the fermion determinant included, is ill-defined unless we regularize the determinant. The infrared regularization is ensured by the introduction of a (small) mass term m. Simultaneously, this mass term lifts the vacuum degeneracy, eliminating further ambiguities in the functional integral. Let us dwell on the vacuum structure of the parent and daughter theories. SU(2N) supersymmetric gluodynamics has 2N vacua labeled by the order parameter, the gluino condensate (9). (The gluino condensate in supersymmetric gluodynamics was first conjectured, on the basis of the value of his index, by E. Witten [19]. It was confirmed in an effective Lagrangian approach by G. Veneziano and S. Yankielowicz [20], and exactly calculated by M. A. Shifman and A. I. Vainshtein [21]. The exact value of the coefficient −12N in Eq. (9) (for SU(2N)) can be extracted from several sources. All numerical factors are carefully collected for SU(2) in the review paper [22]. A weak-coupling calculation for SU(N) with arbitrary N was carried out in [23]. Note, however, that an unconventional definition of the scale parameter Λ is used in Ref. [23]. One can pass to the conventional definition of Λ either by normalizing the result to the SU(2) case [22] or by analyzing the context of Ref. [23]. Both methods give one and the same result.) Its SU(N)×SU(N) orbifold daughter has N vacua (under the assumption that Z_2 is unbroken), which are labeled by the order parameter, the bifermion condensate (10). Consider an instructive example, namely θ_P = 2π (so that θ_D = π), with m real and positive; the vacuum structure is depicted in Fig. 2. P_0 is the unique vacuum of the SYM theory, while D_{±1} are the vacua of the orbifold theory. Note that at m > 0 the "vacua" P_{±1} are in fact excited (or quasistable), because their energy density exceeds that of P_0. The daughter theory has a two-fold degeneracy, a phenomenon well known at θ = π. This is the so-called Dashen phenomenon [24], with all ensuing consequences. Let us emphasize that physics at θ = π and at θ = 0 is essentially different. In particular, at θ = π spontaneous breaking of discrete symmetries (such as P-invariance) typically occurs [24]. One can consider another instructive example, θ_P = π.
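A hedged reconstruction of the order parameters may help here. The coefficient −12N is quoted in the parenthetical note above, while the daughter normalization and the identification of the instructive example with θ_P = 2π are our assumptions, inferred from the rescaling θ_D = θ_P/2 and from the two-fold (Dashen) degeneracy quoted for the daughter:

\[ \langle \lambda\lambda \rangle_k = -12N\,\Lambda^3 \exp\!\Big[ i\,\frac{\theta_P + 2\pi k}{2N} \Big], \quad k = 0,\dots,2N-1 \quad (\text{parent, } SU(2N)), \]
\[ \langle \bar\Psi (1-\gamma_5) \Psi \rangle_k \propto \Lambda^3 \exp\!\Big[ i\,\frac{\theta_D + 2\pi k}{N} \Big], \quad k = 0,\dots,N-1 \quad (\text{daughter}). \]

With a small mass term the vacuum energies then behave as E_k ∝ −m cos[(θ_P + 2πk)/2N]; at θ_P = 2π (θ_D = π) the parent has a unique minimum, P_0, while the daughter minima come as the degenerate pair D_{±1}.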
At this point the Dashen phenomenon occurs in the parent theory. There is a twofold vacuum degeneracy. At m ≠ 0 domain walls are unstable, generally speaking. However, a stable domain wall emerges at the Dashen point, as has been known for a long time. At the same time, the corresponding vacuum angle of the daughter theory at this point is θ_D = π/2. The Dashen point is not yet reached. It is clear that there is no equivalence in this aspect. Coincidence of the vacuum structure in the parent and daughter theories at N = ∞ implies, generally speaking, a much broader understanding of planar equivalence. This is the case for orientifold daughters. For orbifold daughters one has to stick to the minimal requirement specified at the beginning of this section.

4 Order parameters

We will pause here to discuss appropriate order parameters. In the parent SYM theory the order parameter is the gluino condensate (9). In the daughter theory with the spontaneously broken Z_2 the bifermion condensate (10) is insufficient for differentiation of all 2N vacua of the theory because it is Z_2-even. We must supplement it by a Z_2-odd expectation value of (1). This vacuum expectation value (VEV) is dichotomic. The bifermion condensate (10) in conjunction with ⟨T⟩ = ±Λ^4 fully identifies each of the 2N degenerate vacua of the orbifold theory. Somewhat symbolically, the vacuum structure is presented in Fig. 3. The angular coordinate represents the phase of (10), while the radial coordinate can take two distinct values representing the dichotomic parameter T. It is instructive to discuss here the Z_2-even gluon condensate ⟨F^2_e + F^2_m⟩. This operator is related to the total energy-momentum tensor of the theory through the scale anomaly (Eq. (12)); in particular, the vacuum energy density is E_vac = (1/4)⟨θ^μ_μ⟩. Since all 2N vacua are degenerate, at first sight the gluon condensate is no order parameter, since the VEV of (12) is the same in all vacua. Ever since the gluon condensate was introduced in non-Abelian gauge theories [25], people have tried to identify it as an order parameter. In a sense, in the case at hand it is! To be more precise, a nonvanishing (in the planar approximation) VEV ⟨F^2_e + F^2_m⟩ would have to be inherited from the corresponding condensate of the parent. The latter condensate vanishes due to supersymmetry of the parent theory. Hence, the Z_2-symmetric vacua in the daughter theory would have vanishing vacuum energy density. Since the Z_2-symmetric point is unstable, the bona fide Z_2-asymmetric vacua must have a negative energy density. Equation (12) then implies that in the genuine vacua the gluon condensate does not vanish (Eq. (13)). Thus, in the case at hand the gluon condensate does play the role of an order parameter, much in the same way as F^2_SYM is the order parameter for SUSY breaking in SUSY gluodynamics. Note that for this reason ⟨F^2_e + F^2_m⟩ must vanish in (planar) perturbation theory. Nonperturbatively, Eq. (13) must hold at O(N^2). This prediction from the broken Z_2 symmetry imposes a strong restriction on the low-energy effective action for the orbifold daughter. In particular, it disfavours the action suggested in [27]. Needless to say, revealing the dynamical distinctions leading to vanishing/nonvanishing of F^2 in the parent/daughter theory is of paramount importance. We are far from understanding these mechanisms. We would like to make a single remark regarding instantons, the only well-studied explicit examples of nonperturbative field configurations. In the SYM theory the instanton does not contribute to the vacuum energy because of the fermion zero modes (an instanton-antiinstanton configuration could contribute, but it is topologically unstable).
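The relation behind Eq. (12) can be sketched as follows. The coefficient 3N/(32π^2) is the first beta-function coefficient for SU(N) with one bifundamental Dirac fermion (it also appears in the low-energy theorem quoted in Appendix D), but the precise form of the fermionic terms here is our assumption:

\[ \theta^\mu_{\ \mu} = -\frac{3N}{32\pi^2} \left( F^a_{\mu\nu}F^{\mu\nu\,a}\big|_e + F^a_{\mu\nu}F^{\mu\nu\,a}\big|_m \right) + (1+\gamma_m)\, m\, \bar\Psi\Psi , \qquad E_{\rm vac} = \tfrac{1}{4}\, \langle \theta^\mu_{\ \mu} \rangle . \]

In this form a negative vacuum energy density in the Z_2-broken vacua translates into a nonvanishing gluon condensate ⟨F^2_e + F^2_m⟩ at order N^2, which is the content of Eq. (13).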
The orbifold theory exhibits a new phenomenon (to the best of our knowledge, for the first time ever): a topologically stable instanton-antiinstanton pair, connected through fermion zero modes, see Fig. 4. The stability is due to the fact that they belong to distinct gauge factors. Therefore, although the overall topological charge vanishes (all fermion zero modes are contracted), the instanton_e still cannot annihilate the antiinstanton_m.

5 Domain wall dynamics in orbifold field theory

In this section we discuss the dynamics of domain walls in the Z_2 orbifold field theory. We discuss both the four-dimensional and the world-volume dynamics. Since domain walls are "QCD D-branes" [28], the similarity between wall dynamics and D-brane dynamics is clear. In Sect. 6 we will discuss the dynamics of D-branes in type-0 string theory. We will identify the domain walls of the orbifold daughter theory with the fractional D-branes of type-0 string theory.

Why domain walls? As is well known, the occurrence of domain walls is a physical manifestation of spontaneously broken discrete symmetries. Since our consideration aims at exploring the Z_2 breaking in the orbifold daughter theory, an analysis of the domain walls is relevant. In addition, we will discuss the role of the fractional domain walls of the orbifold theory as fundamental (or constituent) domain walls of the theory in its true vacuum.

5.1 Four-dimensional perspective

Let us consider the domain walls in the Z_2 orbifold field theory. It is an SU_e(N)×SU_m(N) gauge theory with a bifundamental Dirac fermion. The theory has a global U_A(1) axial anomaly analogous to the U_R(1) anomaly in the parent SYM theory. On the basis of the U_A(1) anomaly one can deduce that the daughter theory has N degenerate vacua marked by distinct values of the bifermion condensate ⟨Ψ̄(1 − γ_5)Ψ⟩ (see Fig. 2). The domain walls can separate these N vacua. (An alternative terminology: the domain walls can interpolate between these N vacua.) Let us begin with a brief review of the SYM theory domain walls. The SYM theory contains BPS domain walls [29] that carry both tension σ and charge Q (per unit area), with σ = Q. The expressions for the tension and charge (Eq. (14)) are given by the jump of the gluino condensate across the wall [30], where z is the direction perpendicular to the wall plane. Equation (14) is a consequence of the scale anomaly. We can consider, as well, the bound state of k elementary walls. These walls interpolate between the vacua i and i + k. The exact tension for the k-wall configuration is given in Eq. (15) [29]. At N → ∞ it reduces to σ(k) → kσ(1). In other words, the walls do not interact, as their total tension is the sum of the tensions of k free 1-walls. The walls could interact via the exchange of glueballs, but there is a perfect cancellation between the contributions of even- and odd-parity glueballs [30]. In Sect. 5.2 we will see, from the world-volume theory standpoint, that the no-force result is due to Bose-Fermi degeneracy on the wall. Now we proceed to the orbifold daughter. Analogously to the parent SYM theory, the domain walls of the daughter theory carry both tension and charge, which can be evaluated by using the orbifold procedure; the corresponding expressions for the tension and charge of the orbifold-theory domain walls follow directly. In a bid to reveal inconsistencies of the NPO conjecture, and preparing for such a demonstration in Sect. 5.2, we will look at the domain walls from a slightly different angle.
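The omitted expressions, Eqs. (14)-(15), presumably take the standard form. The following LaTeX sketch is based on the known exact results for SYM theory, here written for the parent gauge group SU(2N); it is our reconstruction, not a verbatim quotation:

\[ \sigma = Q = \frac{2N}{8\pi^2}\, \Big| \langle \lambda\lambda \rangle_{z=+\infty} - \langle \lambda\lambda \rangle_{z=-\infty} \Big| , \]
\[ \sigma(k) = \frac{2N}{8\pi^2}\, |\langle \lambda\lambda \rangle|\, \big| e^{\,i\pi k/N} - 1 \big| = \frac{N}{2\pi^2}\, |\langle \lambda\lambda \rangle|\, \sin\frac{\pi k}{2N} \;\longrightarrow\; k\,\sigma(1) \quad (N \to \infty) . \]

The last limit reproduces the additivity σ(k) → kσ(1) quoted in the text, since sin(πk/2N) → πk/2N at large N.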
It is suggestive to think of the domain walls of the orbifold theory as marginally bound states of fractional "electric" and "magnetic" domain walls, with tensions and charges σ_{e,m} and Q_{e,m}. Assuming now that NPO is valid and the Z_2 symmetry is unbroken, i.e. σ_e = σ_m, we get σ_e = σ_m = σ/2, i.e., a fractional amount of tension (in full analogy with the fractional D-branes, see Sect. 6). The tensions of the fractional multi-walls follow in the same way. When k = 2, the statement reduces to the assertion that two parallel electric domain walls do not interact at N = ∞. Needless to say, the same is valid for the magnetic walls. In Sect. 5.2 we will demonstrate, using the world-volume description, that two electric domain walls do interact at N → ∞.

5.2 World-volume (2+1)-dimensional perspective

Our discussion of the world-volume domain wall dynamics in the orbifold daughter is closely related to the situation in the parent SUSY theory. The world-volume theory for k-walls in N = 1 gluodynamics was derived in Ref. [31]. It was shown to be a (2+1)-dimensional U(k) theory with a level-2N Chern-Simons term (for the bulk gauge group SU(2N)). The world-volume theory has (2+1)-dimensional N = 1 supersymmetry. Note that N = 1 SUSY in the three-dimensional SU(N) theory is dynamically broken [32] at small values of the coefficient in front of the Chern-Simons term, k_cs ≤ N/2. However, this SUSY breakdown does not happen on the world volume of multiple domain walls in the parent theory, since in this case k_cs = 2N and the gauge group is at most SU(N). All fields in the action of the theory, including the fermion fields, transform in the adjoint representation of U(k). For definiteness, we will consider the case k = 2, which is in a sense minimal, see Sect. 5.3. Now, consider the orbifold daughter theory. The world-volume theory becomes, by virtue of the orbifold procedure, a U_e(1)×U_m(1) gauge theory with a neutral scalar field and bifundamental fermions (the action (24)). As we will see momentarily, the occurrence of the Yukawa coupling (25) in the daughter theory (with no counterpart in the parent one) is a fact of special importance. We can give the following interpretation to the above expression. The daughter wall consists of a sum of electric and magnetic walls that interact with each other via the bifundamental fermions. In fact, the electric branes can be separated from the magnetic branes. To see that this is the case, note that the Yukawa term (25) in the action (24) can make the bifundamental fermion massive. Indeed, by giving vacuum expectation values to the world-volume scalars we generate a mass µ for the world-volume fermions. When µ → ∞ the fermions decouple, and we have two decoupled U(1) theories. The interpretation is clear: we can give VEVs and separate the electric domain wall from the magnetic one. The world-volume theory on the separated electric (or magnetic) domain walls is just a bosonic U(1) gauge theory with a level-N Chern-Simons term. It is not supersymmetric. There is no reason for the wall tension non-renormalization and the no-force statement. Let us discuss the force between the two walls. This is done by evaluating the Coleman-Weinberg potential in the presence of a VEV v. A similar calculation was performed in Refs. [33,34]. The result is a potential in which c_0, c_1 and c_2 are positive coefficients (independent of N), Λ is a UV cutoff, and m is an IR cutoff (the gauge boson Chern-Simons mass). We can set c_0 and c_1 to zero by a fine-tuned renormalization. However, even after renormalization a repulsive v^4 term remains. This is not surprising.
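Schematically, the mechanism just described can be summarized as follows; the identification of the scalar VEVs with the transverse positions of the e- and m-walls is our interpretation of Eqs. (24)-(26), in analogy with the D-brane picture of Sect. 6:

\[ \mathcal{L}_{\rm w.v.} \supset \bar\psi\, ( \phi_e - \phi_m )\, \psi \quad \Longrightarrow \quad \mu = \big| \langle \phi_e \rangle - \langle \phi_m \rangle \big| . \]

Separating the electric wall from the magnetic one (⟨φ_e⟩ ≠ ⟨φ_m⟩) thus gives the bifundamental fermions a mass µ; at µ → ∞ one is left with two decoupled, purely bosonic U(1) Chern-Simons theories at level N, as stated above.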
A necessary condition for the zero force is a degeneracy between bosons and fermions in the world-volume theory. This is achieved in the parent theory, where the world-volume theory is N = 1 supersymmetric. However, since the theory on the electric walls of the daughter theory is purely bosonic, we find a repulsion. This is in contradiction with the NPO conjecture. At the end of Sect. 5.1 we assumed NPO, and we reached the conclusion that there is no force between two parallel electric walls at N → ∞. However, the microscopic calculation reveals a different answer. Again, the conclusion is that the Z_2 symmetry must be broken.

5.3 The fate of electric and magnetic fractional walls as independent constituents in the true solution

By studying the fractional domain-wall dynamics we arrived at the conclusion that the Z_2 symmetry is dynamically broken. Moreover, the gauge theory has Z_2-odd vacua.

Figure 5: The tachyon field potential V(T). The Z_2 symmetry is dynamically broken in the true vacuum.

In other words, the tachyon field potential has minima at nonzero T (the tachyon field is T = Tr F^2_e − Tr F^2_m). The statement that V(T) is bounded from below is not an assumption: it can be justified by observing that the regime of large VEVs is fully controlled by semiclassical dynamics. From the field-theoretic standpoint it is clear that the only possibility open is that in the bona fide vacuum T ∼ Λ^4. At the same time, non-stabilization of the tachyon would mean T ≫ Λ^4, which is ruled out. Therefore, the tachyon field potential must look like a Higgs potential, see Fig. 5. In the parent N = 1 SYM theory with the gauge group SU(2N) there are 2N vacua, with the gaugino condensate as an order parameter. The 2N vacua, being roots of unity, can be drawn as points on a unit circle, see Fig. 2. The domain walls interpolate between the various vacua. In the daughter theory the situation is more complicated. Since each of the N "false" perturbative vacua splits into two, the vacuum structure of the gauge theory can be described as two circles, with N points on each circle, see Fig. 3. The wall inheritance from the parent to the daughter theory proceeds as follows. We first pretend that the daughter theory is planar-equivalent to SYM, and that the Z_2 symmetry is unbroken. Then we must start from a 2-wall in the parent theory; it will be inherited as the minimal wall in the daughter theory. Indeed, if Z_2 is unbroken there are only N vacua in the orbi-daughter (versus 2N in SYM). This is seen from Fig. 2. If the wall is inherited, the vacua between which it interpolates must be inherited too. Under NPO only every second vacuum is inherited. Thus, if we want to consider the wall that is inherited, we must consider e.g. the wall connecting D_{−1} and D_1 in the daughter (this is a minimal wall in the daughter), versus the wall connecting P_{−1} and P_1 in the parent (this is a 2-wall in SYM). In the parent theory the two 1-walls comprising the 2-wall do not interact with each other (at N = ∞). If we consider them on top of each other, the world-volume theory has U(2) gauge symmetry. However, nothing precludes us from introducing a separation. Then we will have U(1) on each 1-wall, U(1)×U(1) altogether. The tension of each 1-wall is 1/2 of the tension of the 2-wall; it is well-defined and receives no quantum corrections. The fact that the world-volume theory on each 1-wall is supersymmetric is in one-to-one correspondence with the absence of quantum corrections.
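A minimal Higgs-type parametrization consistent with Fig. 5 and with ⟨T⟩ = ±Λ^4 is, for instance (the dimensionless constant c and the detailed shape are not fixed by the analysis and are our assumption):

\[ V(T) = \frac{c}{\Lambda^{12}} \left( T^2 - \Lambda^8 \right)^2 , \qquad \langle T \rangle = \pm \Lambda^4 . \]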
Now, in the daughter theory, according to NPO, everything should be the same. The minimal wall splits into one electric and one magnetic wall (the electric one connects D_{−1} with the would-be vacuum which is a counterpart of P_0; the magnetic one connects the would-be vacuum which is a counterpart of P_0 with D_1; each has 1/2 of the tension). However, now the world-volume theories on the e-wall and the m-wall are not supersymmetric, so that there is no reason for the wall tension non-renormalization. In this false orbifold theory there is also no place for the "twisted" walls, since in the false orbifold theory there are no black vacua and white vacua of Fig. 3; supposedly, there is only one vacuum per given value of the bifermion condensate. A possible visualization of the situation is as follows. In the parent theory we have degenerate minima at all points P_i. In the true orbifold theory these minima become maxima (still critical points, but unstable). Near every second maximum two minima develop. These are the true vacua of the true orbifold theory, with Z_2 broken. Of course, the walls that would be inherited from SYM are all unstable, with tachyonic modes. 1-walls are transformed into electric/magnetic walls of the orbifold theory, which are still unstable and, in fact, decay. Each of them separately could decay only into a "twisted wall" connecting white and adjacent black true vacua. The "untwisted" electric+magnetic wall can decay into a minimal stable wall of the daughter theory, which connects two neighboring black vacua or two neighboring white vacua.

6 D-branes in type-0B string theory

Orbifold field theories have deep roots in string theory. The particular case of the Z_2 orbifold is related to type-0 strings. In particular, we can realize the Z_2 orbifold field theory on a brane configuration which involves D4-branes and orthogonal NS5-branes in type-0 string theory [10] (see also [36] for other realizations in type 0B). Type-0B string theory is a nonsupersymmetric closed string theory, defined by a diagonal Gliozzi-Scherk-Olive (GSO) projection that keeps the following sectors: (NS+,NS+), (NS−,NS−), (R+,R+), and (R−,R−). Note the doubling of the R-R fields and the lack of the NS-R sector (closed-string fermions). In addition, it is worth noting that the theory contains a tachyon in the (NS−,NS−) sector. Due to the doubling of the R-R fields the theory contains two types of D-branes, often called electric and magnetic branes. A combination of an electric and a magnetic brane is referred to as an untwisted brane. The untwisted brane of the type-0 string is the analogue of the type-II brane. It is useful to think about the electric and magnetic branes as fractional branes, or as the constituents of the untwisted brane. The field theory on a collection of N D-branes of type-II string theory is a supersymmetric U(N) gauge theory. The field theory on a set of N untwisted D-branes of type-0 string theory is a U_e(N)×U_m(N) gauge theory with adjoint scalars and bifundamental fermions [15]. The bosons arise from open strings that connect electric branes with electric branes or magnetic branes with magnetic branes. Fermions are due to open strings that connect electric branes with magnetic branes [37]. The situation is depicted in Fig. 6 below. Thus, type-0 string theory provides a natural framework for discussing Z_2 orbifold field theories. Indeed, type-0 string theory is a Z_2 orbifold of type-II string theory. The forces between D-branes are determined by the annulus diagram.
The short-distance force between branes of the same type (electric-electric or magnetic-magnetic) is repulsive [33]. The short-distance force between an opposite brane pair (namely, between electric and magnetic) is attractive. The latter matches the picture we presented in Sect. 5.3. The force between untwisted branes, namely between one electric-plus-magnetic pair and another such pair, is always zero, as in the supersymmetric theory [34]. The situation is described in Fig. 7 below. The above results on the forces between the various D-branes of type-0 string theory can be explained via either the closed-string channel or the open-string channel. Let us start with the closed-string channel. In order to achieve a zero force between the branes, the attractive force due to NS-NS modes has to be canceled by the repulsive force due to R-R modes. Note that the cancellation is among bosons of opposite parity; it does not involve fermions (the NS-R sector). The no-force situation is achieved in a SUSY setup (type II) or for dyonic (or untwisted) branes of type 0. However, such a cancellation does not occur in the other cases. The same phenomena can be explained via the open-string channel. Here, however, the zero force can be explained by a cancellation between bosons (the NS sector) and fermions (the R sector). If the world-volume theory on the brane is SUSY (as in type II), or if the spectrum of the modes on the brane is degenerate, the zero force can be achieved.

D-branes versus domain walls

As has already been mentioned, Witten suggested [28] that domain walls are QCD D-branes. He argued that, since the tension of the domain wall scales as N ∼ g_st^{−1} and since the QCD string can end on the wall, it is natural to conjecture such a relation. Moreover, in [28] Witten described a domain wall as a wrapped M-theory five-brane. Acharya and Vafa later suggested [31] that domain walls correspond to D4-branes wrapping an S^2. By using this realization Acharya and Vafa were able to determine the world-volume theory. Following [31], we suggest that the domain walls of the Z_2 orbifold field theory correspond to various branes of type-0 string theory. This is a very natural proposal, since the four-dimensional orbifold field theory itself can be realized on a collection of D-branes of the type-0 string [10]. We suggest the following: an electric domain wall corresponds to an electric brane, a magnetic domain wall corresponds to a magnetic brane, and, finally, the untwisted domain wall - a pair of electric and magnetic domain walls - corresponds to an untwisted brane. By using the above identifications and [31] we can obtain the world-volume theory on the various domain walls. The answer follows from an analysis of the annulus diagram in type-0 string theory [37]. For k coincident electric (or magnetic) domain walls it is a (2+1)-dimensional U(k) gauge theory with a real scalar in the adjoint representation and a level-N Chern-Simons term. The theory on a collection of k untwisted domain walls is a U_e(k)×U_m(k) gauge theory with a real scalar in the adjoint representation of each gauge factor, a Chern-Simons term for each factor, and bifundamental fermions.

Closed-string tachyon condensation

In the previous sections the breaking of the Z_2 symmetry in the orbifold field theory was proven, mostly within a field-theoretical framework, and the consequences were outlined. An obvious order parameter for the Z_2-symmetry breaking is the tachyon operator T ≡ Tr F^2_e − Tr F^2_m, see Eqs. (1), (2).
The field-theory analysis suggests that T acquires a VEV dynamically and develops a potential of the Higgs type (see Fig. 5). This conclusion is actually very natural once the relation with type-0 string theory is established. If the orbifold field theory is dual to type-0B string theory on a certain manifold (e.g. AdS_5×S^5 [16], the Klebanov-Strassler background [39], or C^3/Z_2×Z_2 [36]), then by the operator/closed-string correspondence of AdS/CFT the operator Tr F^2_e − Tr F^2_m couples to the tachyon mode of the type-0 string [16]. It is also clear that, if there is a duality between a tachyonic string theory and a gauge theory, then the gauge theory must suffer from an instability at strong coupling [16]. The situation, however, is not so simple. The tachyon mode has a negative mass squared on a flat background, at tree level. The curvature, the R-R flux or, maybe, sigma-model corrections can create a potential for the tachyon. It is very difficult to answer the question of the fate of the closed-string tachyon, especially when the theory is compactified on a non-flat manifold and in the presence of R-R flux. In the case of the Z_2 daughter of N = 4 SYM theory, which was conjectured to be dual to type-0B string theory on the AdS_5×S^5 background, Klebanov and Tseytlin [16] argued that the tachyon mass is shifted toward positive values when the 't Hooft coupling is smaller than a certain critical value. However, the full potential for the tachyon field at strong 't Hooft coupling remained unknown. Our field-theory analysis suggests a definite answer to this question. We argue that, if there is a type-0 string model which is dual to the Z_2 orbifold theory, the potential for the tachyon mode is as shown in Fig. 5. We conclude this section by quoting A. M. Polyakov [40], who discussed the fate of the tachyon of noncritical type-0 string theory in his paper "The Wall of The Cave": "Presumably, this tachyon should be of the 'good' variety and peacefully condense in the bulk."

7 Orbifolds versus orientifolds

In this short section we would like to explain the conceptual difference between orbifold field theories and orientifold field theories, i.e. why the conjectured planar equivalence [11] does hold for the orientifold field theories [3] and fails for the orbifold ones. Let us start with orbifold theories. The bold conjecture relates supersymmetric theories with the untwisted sector of the orbifold daughter. It does not address the twisted sector of the gauge theory but assumes that the twisted sector is "kosher" (or that it decouples from the dynamics of the untwisted sector). However, as was already argued in the present and previous works, a necessary condition for nonperturbative planar equivalence between a supersymmetric theory and a nonsupersymmetric orbifold daughter is that the daughter theory inherits the SUSY vacua. In this paper we demonstrated that a condensate (2) develops; hence, the vacuum structure of the orbifold theory is different from that of the parent SUSY theory. Multiple pieces of evidence for the condensate (2) were obtained, in particular via the investigation of fractional domain walls. String theorists are familiar with this phenomenon. Type-II strings on orbifold singularities of the form C^3/Z_n, as well as type-0 strings, always contain a tachyon in the twisted sector (and fractional branes). For orientifold theories the situation is conceptually different.
This nonsupersymmetric gauge theory does not contain a twisted sector and, in particular, it does not contain fractional domain walls; hence, it is guaranteed that the theory inherits its vacua from the SUSY parent. Similarly, the candidate for a string dual of the orientifold theory - Sagnotti's type-0' model [41] - does not contain a tachyon, since it was projected out by the orientifolding. Thus, either from the string-theory side or from the field-theory side, it is evident that a tachyon-free model is a much better starting point for the investigation of nonsupersymmetric gauge (or string) dynamics.

8 Conclusions

The goal of this work is to determine the vacuum structure of a nonsupersymmetric gauge theory, the orbifold theory. The problem is extremely difficult, since the answer lies in the nonperturbative regime of the theory. The Z_2 orbifold theory is obtained from N = 1 SUSY gluodynamics by orbifolding. Nonperturbative planar equivalence for such daughter-parent pairs (SUSY/non-SUSY) was suggested in Ref. [11]. While nonperturbative planar equivalence was proven for the orientifold daughter [1,2,3], with multiple consequences that ensued almost immediately, theorists continued working on orbifold daughters. The evidence reported in this paper points to nonperturbative nonequivalence. Of course, one can say that on the positive side nonperturbative nonequivalence implies spontaneous breaking of Z_2 in the orbifold daughter. Our investigation suggests a different picture in the orbifold case. Based on domain wall dynamics we arrived at Fig. 3. The N "pre-vacua" that could be inferred from the chiral condensate (10) split, due to Z_2 breaking, into 2N vacua, N "white" and N "black." Each vacuum is uniquely parametrized by two order parameters: the bifermion condensate and the tachyon vacuum expectation value (2). In a theory with multiple vacua, interpolating domain walls of distinct types exist. In the true orbifold solution we have walls connecting two white vacua (or two black ones), which can be interpreted as bound states of an electric-plus-magnetic wall pair, each of these e- and m-walls being individually unstable. We also have twisted walls interpolating between a black vacuum and a white one. Several possible directions of future research lie at the surface. It would be interesting to investigate other gauge theories applying the set of tools used in the present work. An interesting question is whether there exists at all a daughter non-SUSY orbifold theory whose vacuum structure is inherited from the parent SUSY theory. Another possible line of investigation is the derivation of an effective Lagrangian of the Veneziano-Yankielowicz type [20] that would generalize the Lagrangians of [26] and [42] to include effects due to the spontaneously broken Z_2. Examples of cross-fertilization between string theories and gauge field theories are abundant. In recent years the direction "from fields to strings" has become increasingly useful. The present work suggests another topic along these lines: studying closed-string tachyon condensation based on the analogous phenomenon in field theory. Detailed analysis of the supergravity solutions corresponding to the strong-coupling limit of the orbifold daughter theory could shed light on these issues.

The work of A.G. was supported in part by the French-Russian Exchange Program, CRDF grant RUP2-261-MO-04, and RFBR grant 04-011-00646. A.G.
is grateful to LPT at Université Paris XI, where a part of the work was done, for kind hospitality, and acknowledges a discussion with L. Yaffe. We also thank our PRD referee for useful comments.

A Trace anomaly low-energy theorems

Here we will derive and discuss low-energy theorems related to the trace anomaly. Let the parent theory be N = 1 SUSY Yang-Mills theory with SU(2N) gauge group, with Lagrangian (A.1), where λ_α is the Weyl spinor in the adjoint representation. The theory has 2N chiral-asymmetric vacua labeled by the value of the gluino condensate ⟨λλ⟩. We will have to add a small gluino mass term, which will lift the vacuum energy from zero and break SUSY, and will make the θ dependence physical and observable. The daughter theory is the gauge theory with SU(N)×SU(N) gauge group, two Weyl bifundamentals, and the rescaling laws (6), (8). The Lagrangian of the daughter theory is given in (A.2), where the covariant derivative is defined through the generators of the gauge symmetry with respect to the ℓ-th group SU(N) (here ℓ = e, m). The fermion fields carry the color assignment χ = χ^a_i, η = η^i_a, where a, b are fundamental/antifundamental indices belonging to the first (electric) SU(N) while i, j are fundamental/antifundamental indices belonging to the second (magnetic) SU(N). Then χ_1 χ_2 ≡ χ^a_i η^i_a is a gauge-invariant chiral order parameter. In the theory (A.2), using the existence of the above parameter, we will introduce a fermion mass term exactly equal to the projection of the gluino mass term of the parent theory onto the daughter one. The daughter theory is non-supersymmetric. Now both the parent theory and its orbifold daughter are endowed with appropriate (small) mass terms for the fermions. The mass terms are needed for (i) IR regularization; (ii) making the vacuum energy density E_vac ∼ O(N^2). In the massless limit E_vac ∼ O(N), and it is very hard to track subleading terms. (See, however, Ref. [26].) We will discuss only the terms linear in the mass. Using the fact that the mass term of the parent projects onto that of the daughter, one finds that the vacuum energy density of the parent is twice that of the daughter. Let us make a comment concerning the order parameters in the daughter theory. From the low-energy theorems [35] we get a relation which means that the mixed e-m correlator does not contribute to the condensate of the twisted field. In other words, the condensate of the twisted field corresponds to a difference in the interactions between the "electric" and "magnetic" domain walls. On the other hand, a similar low-energy theorem for the "untwisted" condensate shows that the mixed e-m correlator does contribute in this case. Let us remark that the mixed instanton-antiinstanton pairs mentioned in Sect. 4 can contribute to the "untwisted" condensate or the vacuum energy only.

B Topological susceptibilities

We define the θ terms in Eqs. (7). A few comments on this definition will be presented shortly. It is important that in the parent theory the vacuum energy takes the form (B.1), where E_P is the vacuum energy density in the parent theory and E_{0,P} is a positive constant (proportional to m_gluino Λ^3). The N dependence in Eq. (B.1) follows from Witten-type arguments combined with the fact that there are 2N vacua, all entangled in the process of the θ evolution in the parent theory. This entanglement leads to an apparent periodicity 2π·2N rather than 2π. Differentiating Eq. (B.1) twice with respect to θ_P, using Eq. (A.1), and setting θ_P = 0 after differentiation, we get the topological susceptibility in the parent theory, Eq. (B.2). Now let us turn to the daughter theory and discuss the θ term in the daughter theory.
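The omitted formulas (B.1) and (B.2) plausibly read as follows. This sketch follows from the 2π·2N entanglement of the 2N parent vacua described above, though the overall normalization is our assumption:

\[ E_P(\theta_P) = -E_{0,P}\, \max_k \cos\frac{\theta_P + 2\pi k}{2N} \quad \Longrightarrow \quad \chi_P \equiv \left. \frac{\partial^2 E_P}{\partial \theta_P^2} \right|_{\theta_P = 0} = \frac{E_{0,P}}{4N^2} , \]

and, by the same token, in a daughter theory with N vacua one would expect E_D(θ_D) = −E_{0,D} max_k cos[(θ_D + 2πk)/N], giving χ_D = E_{0,D}/N^2.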
We introduce θ_D in such a way that the physical 2π periodicity in θ_D is maintained, as indicated in Eq. (7); for convenience we reproduce the appropriate part here as Eq. (B.5), consistent with Eqs. (6) and (8). This is best seen upon transition to the canonically normalized kinetic terms in both theories. Indeed, then g^2_P θ_P ↔ g^2_D θ_D, and consequently the correlators of the topological densities in the two theories can be matched directly; the factor 1/2 in the last term reflects the 1/2 in the defining equation (B.5), while the factor 2 reflects the two distinct SU(N) gluons in the daughter theory. Given equation (6), the perfect match between Π_P(q) and Π_D(q) at large q^2 is achieved, as we intended. From now on we will drop the subscript θ = 0 where it is self-evident. Now assume that the orbifold planar equivalence holds nonperturbatively [11]. Then we must conclude that Π_D(q = 0) = Π_P(q = 0), (B.7) implying, in turn, that the topological susceptibilities of the two theories coincide. On the other hand, in the daughter theory - remember, it has allegedly N rather than 2N vacua - the dependence of the vacuum energy density on the θ angle takes the corresponding N-vacuum form. Differentiating twice over θ_D and setting θ_D = 0, we obtain the daughter topological susceptibility.

C Gravitational chiral anomalies

The parent SUSY gluodynamics and the daughter orbifold theory have classically conserved axial currents which are anomalous at the quantum level. In addition to the gluon anomaly of the Adler-Bell-Jackiw type, one can consider the gravitational anomaly, whose existence was first noted in [43,44]. At first we will have to establish appropriately normalized operators which are related by the orbifold projection. If the axial current in the parent theory (in the Weyl representation) is A^μ_P = g^{−2}_P Tr λ̄_α̇ (σ^μ)^{α̇α} λ_α (C.1), its orbifold counterpart is given by (C.2), where Ψ is the bifundamental Dirac spinor. With these definitions the chiral gluon anomaly takes the form given in Eq. (C.4). Now let us pass to the gravitational anomalies. For one Dirac fermion it was calculated in [43,44] (Eq. (C.5)), where R_{μνκλ} is the Riemann tensor. For simplicity we specified Eq. (C.5) to the lowest order in h_{μν}. (Otherwise, one must have the covariant derivative on the left-hand side.) To the lowest order, the right-hand side is O(h^2_{μν}). Equation (C.5) assumes the axial current normalized to unity, and the unit coupling h_{μν} θ^{μν}. Let us examine the gravitational anomalies in the parent and daughter theories, expressing the answer in terms of the right-hand side of (C.5). In the parent theory the coefficient is (1/2)·4N^2 = 2N^2, the factor 1/2 being associated with the Weyl fermions in Eq. (C.1). In the daughter theory the coefficient is N^2. This factor is the number of Dirac degrees of freedom.

D Additional low-energy theorems in the orbifold theory

The orbifold theory admits a class of low-energy theorems which have no parallel in the parent SYM theory. They seem interesting in their own right; some are presented here with a brief comment. The orbifold theory has two classically conserved currents, the axial current A^μ and the vector current V^μ. The axial current A^μ is anomalous and can be projected onto its counterpart in the SYM theory. At the same time, the vector current V^μ is anomaly-free. It has no projection. We can couple this current V^μ to an external gauge boson, a "photon." Then the orbifold theory becomes an SU(N)×SU(N)×U(1) theory. The U(1) field strength tensor will be denoted by F_{μν}.
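For completeness, the standard result behind (C.5), for a single Dirac fermion and to lowest order in h_{μν}, is (we quote the textbook coefficient; the paper's normalization conventions may differ):

\[ \partial_\mu A^\mu = \frac{1}{384\pi^2}\, \varepsilon^{\mu\nu\rho\sigma} R_{\mu\nu\kappa\lambda}\, R_{\rho\sigma}{}^{\kappa\lambda} , \]

with this right-hand side multiplied by 2N^2 in the parent theory and by N^2 in the daughter, as stated above.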
Consideration of the scale and chiral anomalies in this theory, along the lines suggested in [35], provides us with low-energy theorems for the two-photon couplings of the gluon operators, i.e. for the matrix elements ⟨0| (3N/(2^5 π^2)) F^a_{μν} F^{μν a}|_ℓ |2γ⟩ (ℓ = e, m), where the photons are assumed to be on mass shell and (k_1 + k_2)^2 → 0 (here k_{1,2} are the photons' momenta). Note that, unlike in QCD, the orbifold theory at hand has no composite Goldstone mesons. Therefore, a subtle point in the derivation of the scale anomaly (D.2) which was revealed in [45] does not show up here.
The Clostridium ramosum IgA Proteinase Represents a Novel Type of Metalloendopeptidase

Clostridium ramosum is part of the normal flora in the human intestine. Some strains produce an IgA proteinase that specifically cleaves human IgA1 and the IgA2m(1) allotype. This prolyl endopeptidase was purified from a broth culture supernatant, and N-terminal sequences of the native protein and tryptic fragments thereof were determined. A fragment of the iga gene encoding the IgA proteinase was isolated using degenerate primers in PCR, and the complete gene was obtained by inverse PCR. The identity of the iga gene was confirmed by heterologous expression in Escherichia coli. The deduced amino acid sequence indicated a signal peptide of 30 residues and a secreted proteinase of 133,828 Da. A typical Gram-positive cell wall anchor motif was identified in the C terminus. The presence of a putative zinc-binding motif His-Glu-Phe-Gly-His, together with inhibition studies, indicates that the proteinase belongs to the zinc-dependent metalloproteinases. However, the sequence of the C. ramosum IgA proteinase shows no overall similarity to other proteins except for significant identity around the zinc-binding motif with family M6 of metalloendopeptidases, and the unique sequence of the IgA proteinase in this area presumably establishes a new subfamily. The GC percentage of the iga gene is significantly higher than that for the entire genome of C. ramosum, suggesting that the gene was acquired recently in evolution.

IgA is the major class of immunoglobulin in human mucosal secretions. Two subclasses, IgA1 and IgA2, exist, and IgA2 is found in three allelic forms, IgA2m(1), IgA2m(2), and IgA2m(3), among which IgA2m(1) is expressed mainly in Caucasians (1). The IgA1 subclass is predominant in the upper respiratory tract and in serum, whereas more even proportions of IgA1 and IgA2 occur in intestinal and urogenital secretions (2,3). A number of bacterial species that colonize mucosal membranes of man produce IgA1 proteinase. This group of postproline endopeptidases cleaves one of several Pro-Ser or Pro-Thr peptide bonds in the hinge region of human IgA1, including its secretory form, S-IgA1, within an inserted stretch of 13 amino acid residues lacking in IgA2. IgA1 proteinase-producing species include the three leading causes of bacterial meningitis, Neisseria meningitidis, Hemophilus influenzae, and Streptococcus pneumoniae; three commensal streptococci, Streptococcus mitis, Streptococcus oralis, and Streptococcus sanguis; Gemella hemolysans; and several species of Capnocytophaga and Prevotella. In addition, the urogenital pathogens Neisseria gonorrhoeae and Ureaplasma urealyticum produce IgA1 proteinase (reviewed in Refs. 4 and 5). It is conceivable that IgA1 proteinases enable the bacterial species to escape specific immune defense on mucosal surfaces, although the lack of relevant animal models has precluded definitive identification of their exact biological significance (5). Notably, human IgA1 cleaving activity among these bacteria has evolved in at least three independent evolutionary lineages, emphasizing the biological importance of these enzymes. Inhibition studies and molecular characterizations have shown that the Hemophilus and Neisseria IgA1 proteinases are homologous serine-type proteinases, the streptococcal IgA1 proteinases are mutually related metalloendopeptidases, and the enzyme from Prevotella melaninogenica (formerly Bacteroides melaninogenicus) is a cysteine proteinase (reviewed in Ref. 5).
An IgA1 proteinase of N. gonorrhoeae has been shown to cleave not only human IgA1 but also LAMP1 (lysosome-associated membrane protein 1) and tumor necrosis factor α (TNFα) receptor II, features that may contribute to the pathogenesis of infections caused by these bacteria (6,7). Clostridium ramosum is a strictly anaerobic, Gram-positive, spore-forming bacterium. It is part of the commensal flora in the human intestine (8,9), and only rarely has it been associated with disease (10). Some strains of C. ramosum produce an IgA proteinase that cleaves human IgA1 and IgA2 allotype A2m(1) at a Pro-Val peptide bond at positions 221-222 just N-terminal to the hinge region (8,11,12). In IgA2m(2) and IgA2m(3), the Pro at position 221 is substituted by Arg, apparently rendering these allotypes resistant to cleavage by the IgA proteinase. Other Clostridium species may produce proteinase(s) with similar activity (13). The C. ramosum IgA proteinase is inhibited by high concentrations of EDTA, suggesting that it is a metalloproteinase (11). Otherwise, this enzyme has not yet been characterized. Here we describe the purification, cloning, and characterization of the C. ramosum IgA proteinase. Analysis of the deduced translation product of the iga gene encoding the enzyme revealed that it is a metalloendopeptidase with a putative extended zinc-binding motif HEXXHXXXGXXD. The primary structure of the IgA proteinase shows no significant overall similarity to any other known metalloendopeptidase, including any IgA1 proteinase belonging to this class of proteolytic enzymes. Notably, however, the sequence of 30 residues around the zinc-binding motif shows up to 60% identity to the equivalent region of proteinases grouped into family M6 of metallopeptidases, including the PrtV proteinase of Vibrio cholerae and immune inhibitor A of Bacillus thuringiensis. The GC% of the iga gene is significantly higher than that reported for the C. ramosum genome, suggesting that the IgA proteinase gene was acquired recently in evolution through horizontal gene transfer.

EXPERIMENTAL PROCEDURES

Bacterial Strains-C. ramosum strain AK183, which produces an IgA proteinase that cleaves both human IgA1 and the IgA2m(1) allotype, was obtained from Dr. Y. Fujiyama (Kyoto, Japan). Bacteria were grown anaerobically at 37°C in 2× YT medium (14) supplemented with 0.05% sodium thioglycolate. The 2× YT medium was used because it is devoid of high molecular weight proteins that may complicate subsequent purification of the proteinase. Escherichia coli JM109 (Stratagene, La Jolla, CA) was used as host for propagation of recombinant forms of plasmid pUC19. E. coli One Shot was used for cloning derivatives of the pCR-TOPO vector (Invitrogen, Groningen, The Netherlands), and E. coli BL21(DE3)pLysS (R & D Systems Europe, Abingdon, UK) was used for the expression cloning. The E. coli strains were grown in 2× YT or LB medium (14) supplemented with antibiotics when appropriate.

Preparation of the IgA Proteinase-The cell-free supernatant of a 5-liter culture of C. ramosum was obtained by centrifugation (6,000 × g, 30 min, 4°C). Solid (NH4)2SO4 was added stepwise to 47% saturation, which in a pilot experiment was found to precipitate the IgA proteinase, and NaN3 was added to a final concentration of 0.05%. The precipitate formed overnight at 4°C was collected by centrifugation (6,000 × g, 45 min, 4°C), dissolved in 0.05 M Tris-HCl, pH 8.2 (buffer T), dialyzed against the same buffer, and stored at −20°C until used.
The ammonium sulfate cut was further fractionated by size exclusion chromatography on a column (1.6 × 50 cm) of Superose 12 (prep grade; Amersham Biosciences) equilibrated in buffer T. Eluent fractions containing the IgA proteinase were identified by the capacity to cleave human IgA1 (see below). Fractions with maximal IgA1 cleaving activity were pooled and subjected to anion exchange chromatography on a Mono-Q column (Amersham Biosciences) equilibrated with buffer T. Protein fractions eluted with a gradient of NaCl (0-1 M) in buffer T were analyzed for IgA1 cleaving activity.

IgA Proteinase Activity Assays-IgA proteinase activity was detected by its capacity to cleave purified human myeloma IgA1. Briefly, IgA proteinase test samples were incubated with myeloma IgA1 at 0.5 mg ml−1 in buffer T at 37°C overnight, and cleavage was subsequently detected by the presence of characteristic Fc and Fd fragments as revealed by SDS-PAGE (15). This assay was used to determine the susceptibility of the recombinant IgA proteinase to inhibition by human α2-macroglobulin and various synthetic compounds, including specific inhibitors of serine peptidases (phenylmethylsulfonyl fluoride, 3,4-dichloroisocoumarin, and Pefabloc), cysteine peptidases (E-64 and iodoacetamide), and metallopeptidases (EDTA, 1,10-phenanthroline, phosphoramidon, 2-(N-hydroxycarboxamido)-4-methylpentanoyl-L-Ala-Gly-NH2 (Zincov), N-benzyloxycarbonyl-Pro-Leu-Glu-hydroxamate, and p-aminobenzoyl-Gly-Pro-D-Leu-D-Ala-hydroxamic acid). Pefabloc was purchased from Roche Molecular Biochemicals, Zincov was from Merck, 1,7-phenanthroline was from GFS Chemicals (Powell, OH), and the other compounds were from Sigma. The α2-macroglobulin was titrated on trypsin prior to use and found to be 60% active, and the incubation time before adding the IgA1 substrate was up to 5 h. For quantitative purposes, IgA proteinase activity was titrated using a previously described assay involving enzyme-linked immunosorbent assay technology (16). Briefly, serial dilutions of the test sample were incubated with an equal volume of myeloma IgA1 substrate (50 µg ml−1), after which the reaction mixtures were incubated in enzyme-linked immunosorbent assay wells precoated with antibody specific for Fcα. Subsequently, wells were incubated with enzyme-conjugated antibody to immunoglobulin light chains and developed with chromogenic substrate. In this assay, cleavage of IgA1 was reflected as a decrease in the OD signal relative to the signal measured for wells receiving IgA1 incubated without protease. Based on a regression curve fitted to a plot of OD against dilution, the IgA1 proteinase titer was calculated as the sample dilution corresponding to a 50% decrease in OD. To examine the effect of reducing agents on enzyme activity, purified IgA proteinase was incubated with dithiothreitol (1 mM) or β-mercaptoethanol (1 mM) for 1 h at room temperature, and the activities of the treated proteinases were titrated along with an untreated control sample of the enzyme. To prevent interference of the reducing agents with the enzyme-linked immunosorbent assay, reaction mixtures were diluted 1:15 with phosphate-buffered saline prior to analysis. To evaluate potential proteolytic activity of the IgA proteinase against other substrates, several proteins at 1 mg ml−1 were incubated in a volume of 50 µl at 37°C for 24 h with an amount of the partially purified recombinant proteinase capable of cleaving completely 0.5 mg of human IgA1 within 2 h.
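To make the titration endpoint concrete, here is a minimal Python sketch (ours, not the authors' software; the function name is hypothetical, and log-linear interpolation stands in for the paper's fitted regression curve) of how the 50%-decrease titer can be read off a dilution series:

import math

def titer(dilutions, ods, od_control):
    """dilutions: reciprocal dilution factors (e.g. 2, 4, 8, ...);
    ods: matched OD readings; od_control: OD for IgA1 incubated without protease."""
    target = 0.5 * od_control
    pts = sorted(zip(dilutions, ods))  # OD rises with dilution (less cleavage)
    for (d1, o1), (d2, o2) in zip(pts, pts[1:]):
        if (o1 - target) * (o2 - target) <= 0:  # target OD crossed in this interval
            x1, x2 = math.log2(d1), math.log2(d2)
            frac = 0.0 if o2 == o1 else (target - o1) / (o2 - o1)
            return 2 ** (x1 + frac * (x2 - x1))
    return None  # target OD never crossed within the titration range

print(titer([2, 4, 8, 16, 32], [0.21, 0.35, 0.61, 0.93, 1.10], 1.20))  # ~7.8

The example readings are illustrative only; a real titration would use the wells and controls described above.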
The proteins used were human IgG, IgD, IgE, IgM, α2-macroglobulin, α1-proteinase inhibitor, α1-antichymotrypsin (all from Athens Research and Technology, Athens, GA) and fibrinogen, bovine albumin, carboxymethylated lysozyme, collagen types I and IV, oxidized insulin α-chain, and gelatin from Sigma. After incubation, the reaction was stopped by boiling in reducing sample buffer, and the integrity of the proteins was analyzed by SDS-PAGE.

Amino Acid Sequencing-The most active fractions eluted from the Mono-Q column were subjected to reducing SDS-PAGE, blotted onto ProBlott membranes (PerkinElmer Life Sciences), and stained with Coomassie Blue. The band corresponding to the presumed IgA proteinase was excised, and the N-terminal sequence was determined using an ABI 477A/120A protein sequencer (PerkinElmer Life Sciences). For generation of tryptic peptides, the band in the polyacrylamide gel stained with Coomassie Blue was excised from several lanes and digested in situ as described (17). In brief, after washing with a mixture of ammonium bicarbonate and acetonitrile, pieces of the gel were shrunk with acetonitrile and dried completely. A solution containing modified trypsin (Promega, Madison, WI) was allowed to soak into the gel pieces. After incubation overnight at 37°C, generated peptides were recovered by extraction and separated by narrow-bore RP-HPLC using the SMART system (Amersham Biosciences). Amino acid sequencing of selected peptides was performed as described above.

Southern Blot Analysis-Unless otherwise stated, DNA manipulations were performed according to Sambrook et al. (14). Whole-cell DNA from C. ramosum was isolated, digested with EcoRI, and subjected to agarose gel electrophoresis and Southern blot analysis, including hybridization at high and low stringency, as described previously (18). As probes we used a 5.1-kb fragment of the S. sanguis strain ATCC 10556 iga gene (18) and a PCR product of genomic DNA from C. ramosum strain AK183 amplified with the primers 5′-AACGTGTTTTCGGGCAGATGA-3′ and 5′-TGATAGTCTTGCATCGCTTTC-3′, identified in the present study, using the Expand Long Template PCR System as recommended by the supplier (Roche Molecular Biochemicals). The DNA probes were purified after agarose gel electrophoresis (QIAEX II Gel Extraction Kit; Qiagen, Valencia, CA) and labeled with [32P]dCTP (Random Labeling Kit; Roche Molecular Biochemicals).

Sequencing the IgA Proteinase Gene-Several degenerate primers with sequences corresponding to reverse translation of the obtained amino acid sequences of the peptides analyzed were purchased from DNA Technology (Aarhus, Denmark). The primers were combined in pairs of a forward and a reverse one and used in PCRs containing 100 ng of genomic DNA and 30 pmol of each primer, using Ready To Go PCR beads (Amersham Pharmacia Biotech), and subjected to the following cycling parameters for 30 cycles: 94°C for 1 min, 55°C for 1 min, 72°C for 2 min, with an initial denaturation step at 94°C for 5 min and a final extension at 72°C for 8 min. The resulting PCR products were cloned into the E. coli plasmid vector pCR-TOPO using the TOPO-TA Cloning Kit (Invitrogen). For inverse PCR, the genomic DNA was digested with either MspI or EcoRI, and 50 ng of the resulting fragments was circularized in a 20-µl reaction volume (Rapid DNA Ligation Kit; Roche Molecular Biochemicals).
The self-ligated mixture was purified with Wizard Minicolumns (Promega) and used as template in the inverse PCR with primers pointing outwards, using either Ready To Go PCR beads or the Expand High Fidelity PCR System (Roche Molecular Biochemicals) and the same cycling parameters as described above, except that the annealing temperature was 60°C. The inverse PCR products were cloned into the E. coli plasmids pCR-TOPO or pUC19. The ExoIII/S1 Deletion Kit (MBI Fermentas, Lithuania) was used for construction of plasmid clones with nested, unidirectional deletions for sequencing the inserts of recombinant plasmids using the universal M13 primers. Plasmid DNAs for sequencing were prepared as recommended by the supplier of the sequencing kit, and PCR products were purified with Wizard Minicolumns (Promega). Individual sequence reactions on plasmid DNA were performed with the Taq DyeDeoxy-Terminator cycle sequencing kit (PerkinElmer Life Sciences), whereas the Thermo Sequenase Dye Terminator Cycle Sequencing Kit (Amersham Biosciences) was used for sequencing PCR products. Sequence reactions were analyzed with an ABI PRISM 377 DNA sequencer (PerkinElmer Life Sciences). As sequencing primers we used the universal M13 sequencing primers as well as oligonucleotides designed on the basis of preceding sequences. The DNA sequence was determined for both strands of the iga gene. Computer analysis of the sequences was performed with programs included in the GCG package (Genetics Computer Group, Madison, WI). BLAST and PSI-BLAST at NCBI (available on the World Wide Web at www.ncbi.nlm.nih.gov/BLAST/) were used for data base searching.

Expression of the ORF (open reading frame)-For expression of the ORF in E. coli, we used the vector pGEX-5T, which is designed to express a recombinant fusion protein consisting of a histidine hexapeptide and glutathione S-transferase followed by the amino acid sequence of interest (19). The primers 5′-CCGATGACCATTGGATCCGCATCAAAGC-3′ and 5′-GCTTAAAGGTCTATTCTCGAGTTATTCAGCG-3′ were used in a PCR on genomic DNA from strain AK183 to amplify a fragment encoding the presumed secreted form of the IgA proteinase; in addition, the primers add a BamHI and an XhoI restriction site, respectively. For the PCR, we used Pwo polymerase as recommended by the supplier (Roche Molecular Biochemicals). The vector and the PCR product were digested with BamHI and XhoI, ligated, and transformed into E. coli JM109. Colonies harboring the correct recombinant plasmid, termed pGEX-5T-iga, were identified by restriction analysis of plasmid DNA. The plasmid DNA was subsequently used to transform E. coli BL21(DE3)pLysS, and expression of the recombinant protein was induced in a culture with an A600 of 0.6 by adding IPTG (isopropyl-1-thio-β-D-galactopyranoside) to a final concentration of 1 mM. After an additional 4 h of growth, the cells were pelleted, resuspended in phosphate-buffered saline, pH 7.3, and disrupted by mild sonication. The supernatant of the lysate was tested for IgA1 cleaving activity as described above. The recombinant IgA proteinase was partially purified using affinity chromatography on glutathione-Sepharose (Amersham Pharmacia Biotech). Briefly, the E. coli cells from 3 liters of culture grown for 4 h at 25°C after induction of expression by IPTG were suspended in 50 ml of phosphate-buffered saline and disrupted using a French press. Cell debris was removed by ultracentrifugation (105,000 × g for 60 min), and the recombinant IgA proteinase was separated using an on-column cleavage and purification procedure as recommended by the manufacturer.
Thrombin was removed using benzamidine-Sepharose (Amersham Biosciences). Site-directed Mutagenesis-Three mutants of the C. ramosum IgA proteinase, H539A, D550A, and E551A, were generated. Mutator oligonucleotide primers were designed to introduce restriction enzyme sites to facilitate subsequent screening for mutated DNA products. For the H539A mutation, the codon CAC for histidine was replaced by GCC, a codon for alanine. In addition, a silent mutation in codon Ala538 was introduced by changing GCA into GCG, creating an Eco52I restriction site, CGGCCG. Outward facing primers for the H539A mutagenesis were 5′-AACCGTGTCCAAACTCGGCCGCAAAAGTTT-3′ and 5′-TGCTCGGTCTCGGTGATGAATA-3′. For the D550A mutation, the codon GAT for aspartic acid was replaced by GCT, a codon for alanine. In addition, a silent mutation in codon Gly549 was introduced by changing GGT into GGG, creating an Eco88I restriction site, CTCGGG. Outward facing primers for the D550A mutagenesis were 5′-ATCCGTTACTGTATTCAGCCCCGAGACCGA-3′ and 5′-ATTTGCTTGACGATAAGGAACTTAA-3′. For the E551A mutation, the codon GAA for glutamic acid was replaced by GCA, a codon for alanine. This change created an Mph1103I restriction site, ATGCAT. Outward facing primers for the E551A mutagenesis were 5′-CAAATATCCGTTACTGTATGCATCACCGAG-3′ and 5′-CTTGACGATAAGGAACTTAAATCAC-3′. A 1.2-kb SalI-HindIII fragment of plasmid pGEX-5T-iga was cloned into HindIII- and SalI-digested pTZ19R, generating pSH1200. Desired mutations and restriction sites were introduced into pSH1200 by inverse PCR using Pfu polymerase as recommended by the supplier (Fermentas) and the primer pairs described above. Following temperature cycling, the PCR products were treated with DpnI restriction endonuclease to digest parental DNA template methylated by the Dam methylase. After heat inactivation of DpnI, the DNAs were used to transform E. coli JM109 supercompetent cells. The mutants were selected by restriction analysis of plasmid DNA and confirmed by sequence analysis. The 1.2-kb SalI-HindIII fragments with desired mutations were cloned into SalI- and HindIII-restricted pGEX-5T-iga. The resulting plasmids, termed pGEX-iga-H539A, pGEX-iga-D550A, and pGEX-iga-E551A, had single amino acid substitutions at amino acids His539 to Ala, Asp550 to Ala, and Glu551 to Ala, respectively. The plasmids were transformed into E. coli BL21(DE3)pLysS, and expression of recombinant proteins was induced as described above. Purification, Characterization, and Amino Acid Sequence Analysis of the C. ramosum IgA Proteinase-Differences in the proportion of secreted compared with cell-associated forms of the IgA1 proteinase produced have been observed for different species and strains of bacteria (20). Here we found that in the early stationary phase the majority of the IgA1 cleaving activity in C. ramosum strain AK183 was secreted into the medium (data not shown). The C. ramosum IgA proteinase was purified from culture supernatant by a combination of ammonium sulfate precipitation, size exclusion, and anion exchange chromatography. Eluent fractions containing the proteinase were identified by their ability to cleave human IgA1, releasing intact Fc and Fd fragments (analyzed by SDS-PAGE), and the activity in fractions was determined by titration of the ability to cleave human IgA1 (analyzed by the enzyme-linked immunosorbent assay-based assay). The titer of IgA proteinase activity in the initial 5 liters of culture supernatant was 8, and in the peak activity fraction (0.5 ml) upon anion exchange it was 128.
This modest increase in activity suggested a loss of enzyme activity during the process of purification. Because C. ramosum is a strictly anaerobic bacterium, we speculated that the enzyme might regain its activity if subjected to reducing conditions. However, we found that preincubation with neither 1 mM β-mercaptoethanol nor 1 mM dithiothreitol had any influence on its capacity to cleave human IgA1. The loss of activity during purification remains unexplained. The IgA proteinase was active at neutral pH, and it retained activity upon storage at −20°C for several weeks. It has been previously shown that 100 mM EDTA inhibits the activity of the C. ramosum IgA proteinase (11). We found that the IgA1 cleaving activity was completely inhibited by 0.5 mM EDTA, suggesting that the enzyme is a metalloproteinase. More detailed enzymatic characterization of the IgA proteinase activity was performed using the partially purified recombinant form of the enzyme (see below). Reducing SDS-PAGE analysis of the Mono-Q fractions revealed that the intensity of a band corresponding to a protein of 130 kDa correlated with the IgA1 proteinase activity, suggesting that this band represented the IgA proteinase (Fig. 1). The 130-kDa protein as well as tryptic peptides derived from it and purified by HPLC were subjected to N-terminal amino acid sequence analysis. The N-terminal sequence of the 130-kDa protein was determined to be AXKPDIKVXDYVKMGVYNN, while the N-terminal sequences EYGFHYFISPSD, FEDGXEIPNTAGG, and EYTGAY were obtained for three of the tryptic peptides. (Footnote 1: the abbreviations used are: ORF, open reading frame; IPTG, isopropyl-1-thio-β-D-galactopyranoside.) None of the sequences shared significant similarity to other bacterial IgA1 proteinases or to any known proteins, as revealed by searching the GenBank data base. The iga Gene Sequence from C. ramosum Strain AK183-Although the molecular mass and catalytic mechanism of the C. ramosum IgA proteinase was similar to that of the IgA1 proteinase from streptococcal species (18, 21-23), the C. ramosum iga gene encoding the IgA proteinase showed no homology to the streptococcal iga genes. Even when using hybridization at very low stringency conditions, genomic DNA from C. ramosum strain AK183 did not hybridize with the iga gene from S. sanguis in a Southern blot analysis (data not shown). To isolate the C. ramosum iga gene, the N-terminal amino acid sequences obtained for the putative IgA proteinase and the tryptic fragments of it were used to design degenerate primers for PCR amplification of a part of the C. ramosum iga gene using genomic DNA from strain AK183 as template. Forward primer 5′-ATGGGIGTITAYAAYAAY-3′ was deduced from reverse translation of the amino acid sequence MGVYNN from the N-terminal sequence of the mature protein, and reverse primer 5′-RAARTARTGRAAICCRTAYTC-3′ was deduced from the sequence EYGFHYF obtained for one of the tryptic peptides. A single amplicon of ~1.2 kb was produced. The nucleotide sequence of this fragment was determined and used for design of primers for inverse PCR to obtain the complete iga gene sequence. Combined, a sequence of 4242 nucleotides was determined. To correct for errors that may occur due to imperfect fidelity of the DNA polymerases in the PCRs and which would be carried over in the cloning procedures applied in the sequencing strategy (see "Experimental Procedures"), the sequence obtained was used to design primers for PCR amplification of overlapping fragments of the AK183 iga gene.
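Before continuing with the sequencing results, it may help to make the degenerate-primer design just described concrete. The sketch below reverse-translates a peptide into IUPAC degenerate codes; the codon table is the standard mapping but covers only the residues needed here, and the code is illustrative rather than the authors' actual tool.

```python
# Minimal degenerate codon table (IUPAC: N = A/C/G/T, Y = C/T, R = A/G).
# Only the residues needed for this demo are included; extend as required.
DEGENERATE_CODON = {
    "M": "ATG", "G": "GGN", "V": "GTN", "Y": "TAY",
    "N": "AAY", "E": "GAR", "F": "TTY", "H": "CAY",
}

def reverse_translate(peptide: str) -> str:
    """Reverse-translate a peptide into a degenerate primer sequence."""
    return "".join(DEGENERATE_CODON[aa] for aa in peptide)

# The forward-primer peptide MGVYNN from the N-terminal sequence above.
print(reverse_translate("MGVYNN"))   # ATGGGNGTNTAYAAYAAY
# In the actual primer, the fully degenerate N positions were synthesized
# as inosine (I), giving 5'-ATGGGIGTITAYAAYAAY-3'.
```

A reverse primer, such as the one deduced from EYGFHYF, would additionally require reverse-complementing the degenerate sequence.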
Direct sequencing of the amplicons revealed a total of five errors in the initial sequence. The sequence contained a large ORF with the potential of encoding a protein of 1,234 amino acids. The N-terminal sequence of the 130-kDa protein as well as the N-terminal sequences of the tryptic fragments of it were all identified within the primary structure deduced from the ORF (Fig. 2). The ORF was preceded by typical promoter elements (Fig. 2). The sequence GGAAGT, six nucleotides upstream of the proposed ATG start codon, is similar to the Shine and Dalgarno sequence GGAGGT (24) and is in a suitable position for a ribosome binding site (25). Thirty-five nucleotides upstream of the proposed ATG start codon, the sequences TATAATA and TTGAC separated by 17 nucleotides match the −10 and −35 promoter elements, respectively (26, 27). Another possible ATG start codon was located 15 nucleotides downstream of the first one. A possible transcriptional terminator in the form of an inverted repeat structure was identified downstream of the ORF (Fig. 2). A Southern blot analysis of genomic DNA of C. ramosum AK183 restricted with PstI, which has no recognition sites in the iga gene sequence determined, and hybridized with a 4-kb fragment containing the ORF showed a single band of 14 kb (results not shown), suggesting that the iga gene is a single copy gene in C. ramosum strain AK183. Interestingly, the GC percentage of the iga gene was 43 compared with an overall GC percentage of 26 in the C. ramosum genome (28). This difference strongly suggests that the IgA proteinase gene in C. ramosum was acquired recently in evolution through horizontal gene transfer from another bacterium with a higher GC percentage. Expression of the IgA Proteinase in E. coli and Characterization of the Recombinant Protein-To verify that the ORF in fact represented the C. ramosum iga gene, we performed heterologous expression in E. coli. The sequence encoding the presumed mature IgA proteinase (positions 537-4151 in Fig. 2) was amplified by PCR and cloned into the E. coli expression vector pGEX-5T. This vector is designed to express a recombinant fusion protein consisting of a histidine hexapeptide and glutathione S-transferase followed by the amino acid sequence of interest. The plasmid construct, termed pGEX-5T-iga, was transformed into E. coli BL21(DE3)pLysS. Intracellular expression of the fusion protein was induced by IPTG, and after incubation the cells were disrupted by sonication. The resulting lysate showed IgA proteinase activity (Fig. 3), demonstrating that the ORF sequenced was the iga gene. In addition, N-terminal sequencing of the Fc fragment generated by the recombinant fusion protein revealed the sequence VPSTP. This sequence is identical to that previously reported for Fc induced by the C. ramosum IgA proteinase (11), indicating that the specificity of the recombinant proteinase was identical to the native one. Relatively high expression of the active recombinant form of the IgA proteinase enabled us to perform a more detailed characterization of the enzyme activity. First, we confirmed that the IgA proteinase is highly specific for human IgA, since none of the other human immunoglobulins, including IgG, IgD, IgE, and IgM, were susceptible to cleavage.
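Stepping back briefly to the GC-content comparison above (43% for the iga gene versus 26% genome-wide), the calculation behind that horizontal-transfer argument is trivial to reproduce; a minimal sketch, with a hypothetical toy fragment standing in for the real sequence (the specificity experiments continue below):

```python
def gc_percent(seq: str) -> float:
    """Percentage of G+C bases in a DNA sequence."""
    seq = seq.upper()
    return 100.0 * sum(base in "GC" for base in seq) / len(seq)

# Hypothetical toy fragment; the published comparison is 43% GC for the
# iga gene versus ~26% GC for the C. ramosum genome overall.
print(round(gc_percent("ATGGCATCAAAGCCTGATATCAAAGTT"), 1))  # 37.0
```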
In addition, none of the other proteins tested (fibrinogen, albumin, collagen type I and IV, and two serpins, α1-proteinase inhibitor and α1-antichymotrypsin) were cleaved by the IgA proteinase even after incubation for 24 h at an enzyme concentration sufficient to cleave 0.5 mg of human IgA1 in 2 h (data not shown). Especially significant was the lack of an effect on the serpins, since this group of proteins possesses a surface-exposed loop (the reactive site loop), which is readily cleaved even by nontarget proteinases with a restricted specificity like periodontain (29) and collagenase (30). These data, together with the lack of activity against unstructured polypeptides such as gelatin, carboxymethylated lysozyme (Fig. 4), and oxidized insulin α-chain, indicate an exquisite specificity of the C. ramosum IgA proteinase. Such a narrow specificity limited to the hinge region of the IgA molecule is also a common feature of other IgA1 proteinases (4, 5). It has been assumed that the C. ramosum IgA proteinase is a metalloproteinase solely on the basis of its inhibition by EDTA (11). To extend enzyme characterization, we investigated the effect of a broad range of class-selective inhibitors of serine proteinases, cysteine proteinases, and metalloproteinases on the activity of the partially purified recombinant IgA proteinase. At 1 mM concentration, none of the compounds specific for the first two groups of peptidases, including phenylmethylsulfonyl fluoride, 3,4-dichloroisocoumarin, Pefabloc, E-64, and iodoacetamide, had any effect on IgA1 cleavage (results not shown). Also, among several metalloproteinase inhibitors tested, only metal-chelating compounds, 1,10-phenanthroline and EDTA, inhibited the enzyme activity (Fig. 5, lanes 3 and 4). Significantly, a nonchelating isomer of phenanthroline (1,7-phenanthroline) had no effect (Fig. 5, lane 8). Neither phosphoramidon nor Zincov, a compound specifically designed to inhibit metalloproteinases, had any effect on IgA1 cleavage (Fig. 5, lanes 5 and 6). The IgA proteinase activity was also insensitive to inhibition by other hydroxamate-based compounds such as N-benzyloxycarbonyl-Pro-Leu-Glu-hydroxamate (Fig. 5, lane 7) and p-aminobenzoyl-Gly-Pro-D-Leu-D-Ala-hydroxamic acid as well as α2-macroglobulin (results not shown). Taken together, the inhibition profile of the C. ramosum IgA metalloproteinase reiterates the unique character of the enzyme. The Amino Acid Sequence of the C. ramosum IgA Proteinase-The deduced amino acid sequence of this novel proteinase, when compared with the N-terminal sequence determined for the secreted protein, indicates that the first 30 amino acids of the primary translation product comprise the signal peptide. This is in perfect agreement with the predictions made on the basis of the primary structure inferred from the iga gene sequence by the computer program SignalP (31). Taking the signal peptide into account, the deduced mature IgA proteinase contains 1,204 amino acids, has a calculated Mr of 133,828, and has an isoelectric point of 5.79. This is in agreement with the size of the purified proteinase observed in SDS-PAGE (130 kDa). In the C terminus, we identified a putative cell wall sorting signal that in other Gram-positive bacteria has been found to target surface proteins to the cell wall (32). The sequence SPQTG at positions 1196-1200 presumably constitutes the sortase recognition site.
It was previously reported that in Clostridium difficile the sortase appears to recognize SPXTG or PPXTG instead of the conventional LPXTG motif (33). In the C. ramosum IgA proteinase, a small spacer, DNSN, separated this motif from a transmembrane domain, IFLWFALLFVSAAGVTGITAY, followed by a positively charged tail, NKKKKEHAE, at the C terminus. These features are in agreement with other presumed substrates for sortase-like proteins (33). Provided that the anchor motif is functional, the sortase cleaves at the Thr-Glu peptide bond in the recognition site and covalently links the threonine, and thereby the N-terminal part of the protein, to peptidoglycan in the cell wall (32, 34). However, we found that the majority of IgA proteinase activity in C. ramosum AK183 was released into the medium. Release of surface proteins with a typical Gram-positive cell wall anchor motif has been reported for the α and β antigens present in the c protein complex of Streptococcus agalactiae (35-37), and Streptococcus mutans sheds surface antigen P1 and secretes exo-β-D-fructosidase (38, 39). The release of anchored surface proteins may be brought about by turnover of the peptidoglycan layer or by proteolytic cleavage of the proteins next to the anchoring. Provided that the cell wall anchor sorting signal in the C. ramosum IgA proteinase is functional, the mechanism by which the proteinase is released from the cell wall remains to be elucidated. A putative zinc-binding motif was identified at positions 539-543 followed by an aspartic acid residue seven positions downstream in the sequence HEXXHXXXGXXD and resembling the extended zinc-binding site typical for the metzincin group of metallopeptidases, but as a significant difference in the IgA proteinase there are four instead of three residues between the second His and Gly (40, 41). In all members of this clan with the exception of leishmanolysins, the third zinc ligand is His or Asp, located invariably six residues downstream of the HEXXH motif (42). In case of the C. ramosum IgA proteinase, there are seven residues separating the second (His) and the third (Asp) zinc ligand. Nevertheless, the sequence encompassing the zinc-binding motif is remarkably similar to that of the PrtV proteinase of V. cholerae and the immune inhibitor A of B. thuringiensis (Fig. 6), each of which is a proteolytic member of clan MA. This significant similarity includes the presence of the conserved Gly, which allows the formation of the β-turn necessary to bring the zinc ligands together in this group of metalloproteinases (43, 44). Therefore, it can be predicted that His539, His543, and Asp550 of the C. ramosum IgA proteinase polypeptide chain form the metal binding site, while Glu540 is the active site residue. To experimentally verify the prediction that His539 and Asp550 constitute part of the zinc binding motif and are therefore indispensable for the enzyme activity, we constructed and expressed mutant forms of the IgA proteinase in which these residues were individually replaced by alanine. As expected, neither of these two mutants possessed IgA1 cleaving activity (Fig. 7, lanes 4 and 5). Notably, however, the E551A mutant was fully active (Fig. 7, lane 6). These data corroborate the alignment-based predictions of the zinc-binding and catalytic residues (Fig. 6) and indicate that the IgA proteinase of C. ramosum can be included into clan MA.
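The motif assignment above can be reproduced with a simple pattern scan. A minimal sketch that searches a protein sequence for the extended zinc-binding motif as printed above (HEXXHXXXGXXD), written as a regular expression; the demo sequence is hypothetical:

```python
import re

# The motif HEXXHXXXGXXD as a regex, with each X (any residue) written "."
ZINC_MOTIF = re.compile(r"HE..H...G..D")

def find_zinc_motif(protein_seq: str):
    """Return (start, matched_span) for each motif occurrence (0-based)."""
    return [(m.start(), m.group()) for m in ZINC_MOTIF.finditer(protein_seq)]

# Hypothetical toy sequence carrying one copy of the motif.
demo = "MSTKLHEYGHIVSGSPDQRA"
print(find_zinc_motif(demo))   # [(5, 'HEYGHIVSGSPD')]
```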
The endopeptidases from clan MA are also known as metzincins, because there is a conserved Met in a turn that underlies the active site (41, 45). However, this Met is absent in the IgA proteinase, indicating the uniqueness of this enzyme, which most likely establishes a new subfamily of metallopeptidases in family M6 of clan MA. Currently, the family M6 consists of only three members listed in the MEROPS Data Base (available on the World Wide Web at www.merops.co.uk), exemplified by the PrtV proteinase of V. cholerae and immune inhibitor A of B. thuringiensis. A PSI-BLAST search in the unfinished genomes, using these proteinase sequences for comparison, revealed, however, that similar putative proteinases are encoded in the genomes of Bacillus stearothermophilus, Streptomyces coelicolor, Clostridium acetobutylicum, Shewanella putrefaciens, and Bacillus anthracis (Fig. 6). The latter species can potentially express at least four different metalloproteinases that are homologous, with one being almost identical to immune inhibitor A (94% identity in the primary structure), the only member of the M6 family with a defined biological function. This metalloendopeptidase cleaves the antibacterial proteins attacin and cecropin found in insects, disabling the immune system of lepidoptera infected by B. thuringiensis (46, 47). In this respect, it is interesting to note that the IgA proteinase has an analogous function, since its activity is also specifically aimed at the host antimicrobial defense mechanisms. The importance of IgA1 cleavage by mucosal pathogens or commensals in their ability to escape immune defense seems apparent but is difficult to establish due to the lack of a relevant animal model (5). The biological significance of IgA1 proteinase activity can, however, be inferred indirectly from the fact that nature developed these specific proteinases based on three different catalytic mechanisms. Moreover, it is now apparent that within the metalloproteinase class, the specificity to cleave human IgA is present in two evolutionary lineages, with the C. ramosum enzyme capable of cleaving both IgA1 and IgA2m(1) molecules. This seems to be a major advantage for this bacterium, because in the gut environment, a natural habitat of C. ramosum, both isotypes of IgA occur in comparable amounts. It remains to be examined whether strains of C. ramosum producing this proteinase preferentially colonize subjects homozygous for the IgA2m(1) allotype. In addition, the expression of the recombinant IgA proteinase facilitates production of a large amount of the active protein in a pure form for further studies of this intriguing molecule. FIG. 6. Conserved region in C. ramosum IgA proteinase. A comparison is shown of the region around the predicted active site and zinc-binding domains (indicated by A and Z, respectively) of C. ramosum IgA proteinase (IgAPrt), B. thuringiensis immune inhibitor A (InA; accession number X55436), and V. cholerae PrtV (PrtV; accession number Y00557), with a hypothetical secreted proteinase of S. coelicolor (ScPprt; accession number CAB51001) and those of PrtV-related proteinases obtained from the conceptual translation of sequences retrieved from genome data bases (available on the World Wide Web at www.ncbi.nlm.nih.gov/Microb_blast/unfinishedgenome.html): BaPrt1 from B. anthracis, BaPrt2 from B. anthracis, BstePrt from B. stearothermophilus, ClaPrt from C. acetobutylicum, and SputPrt from S. putrefaciens.
The sequences were aligned using the ClustalW multiple sequence alignment tool. The arrows above the sequences indicate the Gly and Met residues conserved in the metzincin family of metallopeptidases. The asterisks indicate identical residues, and dots indicate conserved residues with similar properties in members of the M6 family of metzincins. Gaps (dashes) have been introduced to optimize alignment. The numbers of the first and last amino acid in the alignment are indicated for each protein. FIG. 7 (legend, in part). In this assay, compared with Figs. 3 and 5, we used a different preparation of human IgA1 with a distinct glycosylation, and therefore the fragments migrate slightly differently in the gel. Below the SDS-PAGE gel, a Western blot analysis of appropriate lysates probed with anti-His6 antibodies demonstrates that the lack of IgA1 cleaving activity was not due to deficiency in expression of the mutated proteinase. The position of the recombinant proteinase is indicated by an arrow.
2018-04-03T00:10:44.311Z
2002-04-05T00:00:00.000
{ "year": 2002, "sha1": "a8049beda4c3ef079d283993c5988c67eaeab167", "oa_license": "CCBY", "oa_url": "http://www.jbc.org/content/277/14/11987.full.pdf", "oa_status": "HYBRID", "pdf_src": "Highwire", "pdf_hash": "e927883c68b0e245113a58f0fa80907fb1e34272", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
7190939
pes2o/s2orc
v3-fos-license
Gastrointestinal comorbidities associated with atrial fibrillation This observational study was conducted to describe the risk of gastrointestinal (GI) events among patients with atrial fibrillation (AF). We analyzed Thomson Reuters MarketScan® data from 2005 to 2009. Subjects aged ≥18 years with ≥1 AF diagnosis were selected. GI events were identified from claims with a primary or secondary diagnosis code for any GI condition. The risk of GI events was assessed using cumulative incidence (new GI events/patients with AF without GI condition at baseline) and incidence rates (IRs), calculated as the number of patients with new GI events divided by patient-years of observation. In addition, the CHADS2 score was evaluated at baseline to determine the patient's risk of stroke. A total of 557,123 AF patients were identified. The mean (median) AF patient age was 68.2 years (70); 45% were female. The cumulative incidences of any GI event and dyspepsia were 40% and 19%, respectively. The corresponding IRs were 38.8 and 14.7 events per 100 patient-years. IRs of any GI events for female and male patients were 43.6 and 35.5; for patients in the age groups <65, 65-74, 75-84, and ≥85 years, IRs were 32.3, 38.9, 44.6, and 52.7; for patients with a CHADS2 score of 0, 1-2, 3-4, and 5-6, IRs were 30.3, 41.6, 56.9, and 74.5, respectively. In this large claims database, 40% of AF patients experienced a GI event, predominantly dyspepsia. Physicians should take age and comorbidities into consideration when managing AF patients. Electronic supplementary material The online version of this article (doi:10.1186/2193-1801-3-603) contains supplementary material, which is available to authorized users. Introduction Atrial fibrillation (AF) is the most common clinical arrhythmia; an estimated 2.3 million Americans were suffering from this condition in 2010 (Fuster et al. 2001;Go et al. 2001). AF is also strongly age dependent, affecting approximately 11-12% of persons ≥80 years of age, compared with only 0.1-0.2% of persons ≤55 years of age (Go et al. 2001). AF is commonly associated with other cardiovascular diseases, including hypertension, congestive heart failure, valvular heart disease, and ischemic heart disease (Lloyd-Jones et al. 2004). However, while literature documenting cardiovascular comorbidities is plentiful, less attention has been given to the prevalence and impact of gastrointestinal (GI) conditions such as dyspepsia, gastroesophageal reflux disease (GERD), peptic ulcer diseases, and GI bleeding in patients with AF (Hernandez-Diaz & Rodriguez 2002;Locke et al. 1997;Talley et al. 1992;Talley et al. 1995). The GI tract has been documented as one of the most common locations of major bleeds attributed to a typical thromboprophylaxis regimen in stroke prevention (Coleman et al. 2012). The number of GI conditions also increases with age (Hernandez-Diaz & Rodriguez 2002;Blachut et al. 2004;Garcia Rodriguez et al. 1998;Som et al. 2010;Sostres et al. 2010). Dyspepsia, for example, is a common condition in the elderly. It is also a likely comorbidity in patients with AF. In a recent retrospective observational study, subjects with AF presenting with dyspepsia tended to have a greater health burden and lower quality of life than those without dyspepsia. Moreover, these patients were at greater risk of stroke (Lamori et al. 2012). The agents used in patients with AF to prevent stroke or treat other comorbidities are known to increase the risk of GI events.
These agents include, but are not limited to, anticoagulants, nonsteroidal anti-inflammatory drugs (NSAIDs) (e.g. aspirin), corticosteroids, and calcium channel blockers (Garcia Rodriguez et al. 1998;Bytzer 2010). Agents currently used to treat patients with GI conditions or to counteract treatment-induced GI events typically include acid secretory inhibitors, such as proton pump inhibitors (PPIs) (Bytzer 2010;McGowan et al. 2008;Yeomans et al. 1998). GI conditions, in particular GERD, also have been proposed as a potential independent trigger for AF, because of the close anatomical positioning of the esophagus and the atria, and their similar nerve innervations (i.e. vagal nerve innervation) (Huang et al. 2012). The fact that vagal nerve overstimulation has been observed in patients with GERD and has been suggested as a contributing factor in AF supports the notion of GERD-mediated AF stimulation via vagal innervation. The most compelling evidence in support of GERD-mediated AF stimulation was found in a recent nationwide population-based survey in Taiwan, where GERD was reported to be independently associated with an increased risk of developing concomitant AF (Huang et al. 2012). It is thought that the prevalence of GERD increases with age. Whether this is the case has not yet been fully elucidated; nevertheless, esophageal symptoms (i.e. severe reflux esophagitis) have been reported to be more severe in older patients (Becher & El Serag 2011). In the US, the management of AF is dictated by guidelines issued by the American College of Chest Physicians, which use the CHADS2 classification to estimate stroke risk. This is established by adding points relating to risk factors of stroke: Congestive heart failure, Hypertension, Age ≥75 years, Diabetes mellitus, and prior Stroke or transient ischemic attack or thromboembolism; a higher score denotes a greater risk (Gage et al. 2004;Singer et al. 2008). Oral anticoagulant therapy is recommended for patients with a CHADS2 score ≥2, while either warfarin or aspirin is recommended for patients with a CHADS2 score of 1 (Singer et al. 2008). In light of the prevalence of GI comorbidities in patients with AF and to better understand how they affect this population, we conducted an observational study to document the extent of GI comorbidities in patients diagnosed with AF. Data source Health insurance claims from the Thomson Reuters MarketScan® database were used to conduct the analysis. The MarketScan database, which combines two separate databases (Commercial Claims and Encounters and Medicare Supplemental and Coordination of Benefits [COB]) to cover all age groups, contains claims from ~100 employers, health plans, and government and public organizations representing about 30 million covered lives. All US census regions are represented, with the South and North Central (Midwest) regions predominating. The MarketScan data used in the current analysis covered the period from January 2005 through December 2009. Data used in the present study included health plan enrollment records, patient demographics, inpatient and outpatient medical services, and outpatient prescription drug dispensing records. Data included in the MarketScan database are de-identified and are in compliance with the Health Insurance Portability and Accountability Act (HIPAA) of 1996 to preserve patient anonymity and confidentiality. Study design A retrospective longitudinal cohort design was employed.
To be included in the study sample, patients were required to meet the following criteria: (i) have at least one primary or secondary diagnosis of AF (ICD-9-CM code 427.31), (ii) have continuous health plan enrollment during the study period, and (iii) be at least 18 years of age as of the date of the index AF (index date). In addition, patients were required to have continuous health plan enrollment for at least 180 days prior to the index date (baseline/washout period). The observation period of patients spanned from the index date through the earlier of either the health plan disenrollment date or the end of data availability. Outcome measures The main endpoint of the study included the risk of GI events. These were defined as a primary or secondary diagnosis code for any GI event (see Additional file 1 for a complete list of ICD-9-CM codes) and the subset of GI events based on the classification in the recent Randomized Evaluation of Long-Term Anticoagulation (RE-LY) study (Connolly et al. 2009), including dyspepsia (including upper abdominal pain, abdominal pain, and abdominal discomfort, as well as dyspepsia), diarrhea, vomiting, and GI bleeding. The secondary endpoints of the study included the following GI conditions: constipation, intestinal diverticula, dysphagia, esophagitis, flatulence, eructation and gas pain, gastritis and duodenitis, GERD, malignant neoplasm of the digestive organs and peritoneum, nausea alone, non-infectious gastroenteritis and colitis, other disorders of the intestine, and peptic ulcer diseases. GI-related hospitalization was also reported; this was defined as a hospitalization that had any GI-related ICD-9-CM code associated with it, either as a primary or secondary diagnosis. Statistical analyses Descriptive statistics were used to describe patient baseline characteristics. Means and standard deviations (SDs) were used to describe continuous variables; frequencies and percentages were reported for categorical variables. The prevalence of GI events was calculated as the number of patients with a GI event during the 180-day baseline and/or study follow-up period divided by the total number of AF patients. Cumulative incidence, calculated as the number of patients with a new GI event (i.e. post-index AF diagnosis only) divided by the total number of AF patients without a history of GI events at baseline, was also reported. The 95% confidence intervals (CIs) of the prevalence and cumulative incidence of GI events were computed using binomial distribution. Finally, the incidence rates (IRs) of GI events were calculated as the number of new GI cases divided by patient-years of observation, which was censored at the time of the first event. This person-time approach is used to account for different lengths of observation among study subjects in a non-experimental setting. IR was expressed as number of new cases per 100 patients per year. The 95% CIs of the IRs of GI events were computed using the Poisson distribution. All statistical analyses were performed using SAS version 9.2 (SAS Institute, Inc., Cary, NC). AF patients without a history of GI conditions at baseline For AF patients without a history of GI conditions at baseline, the cumulative incidences of any GI event, any GI event based on the RE-LY study classification, and dyspepsia were 39.9%, 26.3%, and 19.1%, respectively. The corresponding IRs were 38.8, 21.7, and 14.7 events per 100 patient-years, respectively (Figure 1).
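The person-time statistics defined in the methods above are straightforward to compute; the stratified results continue below. A minimal sketch of an incidence rate per 100 patient-years with an exact Poisson 95% CI (the study used SAS, so this Python version is illustrative, and the event and person-time numbers are hypothetical):

```python
from scipy.stats import chi2

def incidence_rate_per_100py(events: int, patient_years: float):
    """Incidence rate per 100 patient-years with an exact Poisson 95% CI."""
    rate = 100.0 * events / patient_years
    # Exact (Garwood) Poisson limits via the chi-square quantile function.
    lo = 100.0 * 0.5 * chi2.ppf(0.025, 2 * events) / patient_years if events else 0.0
    hi = 100.0 * 0.5 * chi2.ppf(0.975, 2 * events + 2) / patient_years
    return rate, (lo, hi)

# Hypothetical numbers: 388 new GI events over 1,000 patient-years
# would reproduce an IR of 38.8 per 100 patient-years.
print(incidence_rate_per_100py(388, 1000.0))
```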
The IRs of any GI event for female and male patients were 43.6 and 35.5, respectively (Figure 2). The IRs of any GI event increased with age and CHADS2 score: for patients in the age groups <65, 65-74, 75-84, and ≥85 years, IRs were 32.3, 38.9, 44.6, and 52.7, respectively; for patients with a CHADS2 score of 0, 1-2, 3-4, and 5-6, IRs were 30.3, 41.6, 56.9, and 74.5, respectively. Discussion Our analysis of real-world data demonstrates that a large proportion of patients with AF are at high risk of GI events. GI events were observed in more than half of the study population, with a prevalence of 55.4 per 100 persons. Dyspepsia was the most common GI symptom, reported in 164,892 of 557,123 patients with AF (29.6%) and accounting for 54% of all 308,823 GI events reported. Dyspepsia is regarded as a significant burden for AF patients (Lamori et al. 2012) and, in several studies of patients treated with NSAIDs and aspirin, an important reason for discontinuing treatment (CAPRIE Steering Committee 1996;Cryer et al. 2011;Niculescu et al. 2009;Ofman et al. 2003;Peto et al. 1988;Saini et al. 2009;Tournoij et al. 2009). Other common GI events included intestinal diverticula (62,638), GERD (63,159), and GI bleeding (52,979). GERD was recently found to be both a trigger for AF and associated with its development (Huang et al. 2012). Consistent with past studies of cardiovascular disorders associated with AF (Carroll & Majeed 2001), our study found cardiovascular diseases and hypertension to be frequent at baseline. Medications that can elicit GI adverse effects, ranging from dyspepsia to GI bleeding, include aspirin, other antiplatelet medications, anticoagulants, antibiotics, corticosteroids, SSRIs, NSAIDs, bisphosphonates, opioids and pain medications, calcium channel blockers, and iron-related medications, which are used to treat cardiovascular disorders and other comorbidities (e.g. depression and arthritis) (Garcia Rodriguez et al. 1998;Sostres et al. 2010;Bytzer 2010;Ashberg et al. 2010;Diego et al. 2011;Gabriel et al. 1991). In the current study, the most commonly used medications reported at baseline in AF patients, with known GI adverse effects, included pain medications (opioids), antibiotics, calcium channel blockers, and anticoagulants; 359,398 patients (64.5%) received at least one medication that may cause GI events, and that proportion rose to 71.6% after the index diagnosis of AF. This finding could partly explain why, over the entire study period, 55.4% of patients with AF had at least one GI event. Although a large proportion of patients presented with GI events in our study, only 40.5% of patients with AF received treatment to counteract these events, compared with 29.1% at baseline, where 80% of treated patients used at least one ulcer drug (i.e., PPIs or H-2 antagonists). In our study, we assumed that a number of patients with AF would have received more than one medication that could cause a GI event. The use of multiple medications by older patients reflects the multiple comorbidities in this population (Hajjar et al. 2007) and can substantially increase their risk for GI events. For example, SSRIs increase the risk of GI bleeding up to three times and, when used concomitantly with NSAIDs, up to 15 times (Ashberg et al. 2010). Warfarin used concomitantly with aspirin, anti-infective agents, or NSAIDs also has been shown to increase the risk of GI bleeding (Ashberg et al. 2010;Hallas et al. 2006;Man-Son-Hing & Laupacis 2003;Schelleman et al. 2008;Shorr et al. 1993).
Moreover, dual therapy in thromboprophylaxis has been found to increase patients' odds of experiencing a major GI bleed compared with monotherapy. Administering the antiplatelet agent clopidogrel with aspirin increased patients' odds of having a major GI bleed by 93% compared with aspirin monotherapy (Coleman et al. 2012). Given the greater risk for stroke with older age, we may assume that patients in this age group are more likely to be candidates for dual thromboprophylaxis therapy and are therefore at greater risk for the subsequent GI effects attributed to this regimen. Consistent with previous findings, in our study, advancing age was found to increase the risk of GI conditions, ranging from IRs of any GI event of 32.3 per 100 patient-years for patients aged <65 years to corresponding IRs of 52.7 per 100 patient-years for patients aged ≥85 years. A higher CHADS2 score, indicative of greater comorbidity, also was associated with a higher risk of GI conditions, ranging from IRs of 30.3 per 100 patient-years for a CHADS2 score of 0 to corresponding IRs of 74.5 per 100 patient-years for a CHADS2 score of 5-6. Notably, subjects with higher CHADS2 scores tend to be older (i.e. ≥75 years) (Oldgren et al. 2011). Given that the risk of GI events increases with age and that AF is strongly age dependent, this study highlights the importance of profiling the characteristics of patients with AF, in terms of both comorbidities and age, when making treatment decisions. We suggest that further research on GI adverse events in AF patients, specifically regarding the potential impact of AF therapy and age on GI conditions, is warranted. Moreover, the propensity for GI conditions, such as GERD, to trigger AF requires further elucidation. The possible impact of GI events and other comorbidities on the underuse of anticoagulants in AF patients also might be explored in future research. Our study has a number of limitations. First, claims databases may contain inaccuracies or omissions in coded procedures, diagnoses, or pharmacy claims; however, it would be unlikely that these have significantly impacted our results considering the large sample size and the relatively high proportion of patients having a GI event in our study. Second, antiplatelet therapy was assessed based on pharmacy dispensing claims, and because the data do not capture nonprescription medications, such as aspirin, we may have underestimated antiplatelet utilization. Third, some medications used to treat GI conditions are also available without a prescription, which may further underestimate the utilization of these agents. In addition, the observational design was susceptible to various biases, such as information or classification bias (e.g. the identification of false positives of GI events). Despite these limitations, well-designed observational studies provide valuable information, with real-life scenarios and high generalizability. Figure 2. Incidence rate of any gastrointestinal (GI) event per 100 patient-years, stratified by gender, age, and CHADS2 (Congestive heart failure, Hypertension, Age ≥75 years, Diabetes mellitus, prior Stroke or transient ischemic attack or thromboembolism) score (N = 413,168).
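As a footnote to the CHADS2 stratification used throughout this paper, the scoring rule itself is a simple sum of points; a minimal sketch (the example patient is hypothetical):

```python
def chads2_score(chf: bool, hypertension: bool, age: int,
                 diabetes: bool, prior_stroke_tia: bool) -> int:
    """CHADS2: 1 point each for Congestive heart failure, Hypertension,
    Age >= 75 years, and Diabetes; 2 points for prior Stroke/TIA or
    thromboembolism. Possible range: 0-6."""
    score = int(chf) + int(hypertension) + int(age >= 75) + int(diabetes)
    score += 2 * int(prior_stroke_tia)
    return score

# Example: a 78-year-old with hypertension and a prior TIA scores 4.
print(chads2_score(chf=False, hypertension=True, age=78,
                   diabetes=False, prior_stroke_tia=True))  # 4
```

Under the guideline thresholds cited earlier, a score of 1 would suggest warfarin or aspirin, and a score of 2 or more would suggest oral anticoagulant therapy.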
2016-05-12T22:15:10.714Z
2014-10-15T00:00:00.000
{ "year": 2014, "sha1": "a183eb0288b9c3a9901864c6cd51174990d1f068", "oa_license": "CCBY", "oa_url": "https://springerplus.springeropen.com/track/pdf/10.1186/2193-1801-3-603", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "a183eb0288b9c3a9901864c6cd51174990d1f068", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
102898334
pes2o/s2orc
v3-fos-license
Oxygen backed silicon hydride in correlation with the photoluminescence of silicon nano-crystals Converting silicon hydride (–SiH) to oxygen backed silicon hydride (–OSiH) on porous silicon leads to a shift in the wavelength of photoluminescence (PL) maximum from 670 to 605 nm, corresponding to an increase of 0.2 eV in emission energy. The results implied that silicon hydride, which links to the surfaces of silicon nano-crystals (SiNCs) via oxygen atoms, is directly responsible for the wavelength change of the PL peak. Silicon nano-crystals (SiNCs) are one of the most attractive nano-functional materials due to their electronic and photonic properties, their compatibility in biological environments, as well as their popularity in the semiconductor microelectronics industry. 1-4 Understanding the origin of the photoluminescence (PL) of SiNCs is fundamentally important, considering the possibility of their widespread applications. The surface of SiNCs containing silicon hydride groups could readily react with 1-ene compounds to form relatively stable Si-C bonds, 5,6 introducing organic functional groups on the surface, facilitating the covalent attachment of organic and biological moieties that open the potential of such inorganic nano-materials to many biological applications. 1,4,7,8 Since the discovery of the PL of SiNCs at room temperature in 1990, 9 the mechanism of the PL generation has been the centre of many studies. Various theories have been proposed in order to reveal the physical and chemical grounds. To date, however, research data have seldom led to the complete understanding of such emission. Quantum confinement (QC) theory 10 proposed in earlier years has been challenged by many experimental observations, which showed that PL was influenced by not only geometric dimensions of SiNCs, but also surface states of chemical groups, such as silicon oxide species. 11-14 Crystal defects and dangling bonds have been regarded as the causes of the PL emission, but the effect of the oxidation of surface silicon hydride was not fully accounted for. 15,16 It becomes well accepted that the PL of SiNCs is a complicated process and QC theory cannot be adequately applied for full explanation. Besides surface states of chemical groups, other factors, such as particle sizes, also play important roles in the mechanism of the PL. 7,11,17-20 Time-resolved PL spectra revealed that there are two types of PL emissions from SiNCs, namely, fast decaying "F band" blue emissions and slow decaying "S band" red emissions. 21 Although short life-time F band emissions were believed to be originated from the core of SiNCs, 20 both F and S bands were found to be influenced by oxygen on SiNCs. 22,23 Recently a long lived blue band and a UV band were also found to be associated with oxidized silicon. 24 Although the oxidation of silicon in ambient air has been known to cause wavelength shifts of PL maximum ever since the discovery of the PL of porous silicon for S band emission decades ago, the detailed mechanism was unclear. There are too many unknown factors for both S and F band emissions when oxygen is involved. Therefore, revealing the role of the oxidation on the surface of SiNCs is an important step for fully understanding the nature of PL of SiNCs.
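The 670 to 605 nm shift quoted above corresponds to roughly 0.2 eV, which follows directly from the photon-energy relation E = hc/λ; a quick numerical check:

```python
HC_EV_NM = 1239.84  # h*c expressed in eV·nm

def photon_energy_ev(wavelength_nm: float) -> float:
    """Photon energy E = hc / lambda for a wavelength given in nm."""
    return HC_EV_NM / wavelength_nm

shift = photon_energy_ev(605) - photon_energy_ev(670)
print(f"{shift:.2f} eV")  # ~0.20 eV, matching the reported blue shift
```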
Based on the fact that oxidation of silicon hydride on porous silicon significantly influences the wavelength of PL maximum, the theory of Quantum Confinement-Luminescence Centre (QC-LC) 25 was proposed to modify the QC theory, taking into account of oxidation effects. Although this model separated the excitation and emission centres, their chemical structures were not clearly defined. Various chemical structures have been considered as models in attempts to shed light on the oxygen containing species that contribute to PL. For instance, Si=O species on the surface of SiNCs was proposed to be responsible for changes in both the intensity and the wavelength of PL. The study of PL spectra, oxygen isotopes, infrared characterization, and the quantum chemistry model indicated that silanone based Si=O affected the S band of PL and hydroxylated silicon was considered to be the precursor of Si=O groups that were excited to provide PL. 10,26,27 However, other investigations challenged the idea of the Si=O structure being the centre of PL emission for the S band, 28 and attributed this oxidation state to nonradiative relaxation processes. 29 Thus, current reports on understanding of the effect of oxygen on the PL of SiNCs are far from conclusive. In this paper, four types of silicon species were prepared based on oxidation of silicon hydride in order to study the PL mechanism. Fourier transform infrared spectroscopy (FTIR) and steady state PL spectra were utilized to characterize the surface groups and the emission. Different silicon hydride species were studied in relation to the steady state emissions of porous silicon. Porous silicon samples were prepared by immersing in or floating on HF solutions for chemical etching under ambient temperature. It needs to be emphasized that such treatment is critical for PL of SiNCs. Silicon wafers (boron-doped p-type ⟨100⟩ with resistivity of 20 Ω cm−1, from Wafer World, FL, USA) were diced to the dimension of 4 mm by 4 mm and cleaned, before etching, with RCA hydrogen peroxide solutions followed by acid chemical cleaning in a 5% HF solution (ACS grade from Aladdin, Shanghai, China) for 2 minutes, and finally rinsed with de-ionized water for 5 times. The cleaned wafer was placed in the solution of 48% HF containing 1% of nitric acid (analytical grade from Aladdin, Shanghai, China) and etched for 60 minutes. The atmosphere above the solution was either air or purged nitrogen with small amounts. The etched porous silicon was washed with de-ionized water for 5 times and anhydrous ethanol for 3 times before being dried in either ambient air or nitrogen for half an hour. The obtained porous silicon was further oxidized by ozone for 15 minutes to remove silicon hydride. FTIR measurements were performed in transmission mode under the background of air with a Nicolet Avatar 380 FTIR spectrometer. An integration of 32 scans was used to acquire the spectra. Steady state PL was obtained using a PG2000-PG fibre optic spectrometer (Ideaoptics Shanghai, China) equipped with a diode laser of 488 nm as the excitation source with the integration time of 4000 ms. All the data were collected on the surface of porous silicon, which were used as models for examining the effect of oxygen on PL of SiNCs.
Four types of the oxygen containing species could be identified on the freshly prepared porous silicon surface: (I) silicon hydride -SiHx (x = 1 to 3) directly linking to the crystal substrate of silicon via Si-Si bonds; (II) oxygen backed silicon hydride -OSiHx, in which silicon hydride connects to the crystal substrate via the bond of Si-O-Si; (III) co-existence of the two groups, -SiHx and -OSiHx; (IV) when silicon hydride was completely oxidized by ozone treatment, all the silicon hydride groups were removed. Fig. 1 illustrates transmission FTIR spectra of the above four types of porous silicon samples. Attention was paid to the wavelength range of 2110 to 2210 cm−1 where silicon hydrides were well characterized. SiHx exhibited typical silicon hydride bond stretches at around 2116 cm−1, while OSiHx groups showed OSi-Hx stretches at around 2251 cm−1. In case of porous silicon samples containing both SiHx and OSiHx, two bands in the 2116 cm−1 and 2251 cm−1 regions could be observed. For completely oxidized samples with mostly Si(OH)x groups, no peak was found in the above regions, whereas an obvious broad peak of Si(OH)x at around 3400 cm−1 could be seen. Such results are in agreement with those reported before. 29,30 Freshly prepared, or etched, porous silicon, frequently referred to as "hydrogen terminated", contains silicon hydride groups on the surface. Depending on producing procedures of porous silicon, some "freshly prepared" samples may already contain certain amounts of OSiHx although they are still hydrogen terminated. Thus, hydrogen terminated groups can be SiHx, or OSiHx, or both. Differentiation of their PL would significantly help our understanding of the role of oxygen on the PL of SiNCs. PL of porous silicon samples containing the above mentioned four oxidized species of silicon hydride was further studied. Fig. 2 shows a PL peak at 670 nm with a shoulder peak at ~610 nm when there were both SiHx and OSiHx on the surface of the porous silicon, indicating emissions from a combination of processes. However, when samples contained either silicon hydride (SiHx) or oxygen backed silicon hydride (OSiHx), no shoulder peak could be found for PL emissions (Fig. 3). It could be seen from Fig. 3 that pure silicon hydride SiHx was responsible for the emission peak at 670 nm while oxygen backed silicon hydride was responsible for the peak at 605 nm. By substituting OSiHx for SiHx, the wavelength of the emission shifted towards the higher energy end by 65 nm, or about 0.2 eV. Such results suggested that oxygen backed silicon hydride might be one of the causes for the blue shift of PL of porous silicon upon oxidation. It could also be noticed in Fig. 1 that when hydrogen terminated groups were completely removed by ozone oxidation, silicon hydride was completely converted to hydroxyls, Si(OH)x, as indicated by a broad FTIR peak at around 3400 cm−1. No PL could be observed from these samples (Fig. 3). As listed in Table 1, various optical properties via different groups on the surface could be obtained by treating silicon wafers with different approaches based on HF solutions. The oxidation process might also remove other components, such as dangling bonds, that were associated with the emission. This is in agreement with previous reports that full oxidation would deplete PL of porous silicon. It is interesting to note that dipping completely oxidized samples into the HF etching solution for just 1 s would recover the strong visible PL. 31
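The four sample types defined above map directly onto the presence or absence of the FTIR bands just described; a minimal rule-based sketch (the band tolerances are assumptions for illustration, not values from the paper):

```python
def classify_porous_silicon(peaks_cm1: list[float]) -> str:
    """Assign the sample type (I-IV) from observed FTIR band positions,
    using the assignments in the text: ~2116 cm^-1 (Si-Hx stretch),
    ~2251 cm^-1 (OSi-Hx stretch), broad ~3400 cm^-1 (Si(OH)x)."""
    has = lambda center, tol: any(abs(p - center) <= tol for p in peaks_cm1)
    sihx, osihx, sioh = has(2116, 20), has(2251, 20), has(3400, 100)
    if sihx and osihx:
        return "III (SiHx + OSiHx)"
    if sihx:
        return "I (SiHx)"
    if osihx:
        return "II (OSiHx)"
    return "IV (fully oxidized, Si(OH)x)" if sioh else "unclassified"

print(classify_porous_silicon([2118.0, 2249.5]))  # III (SiHx + OSiHx)
```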
The recovery is possibly caused by the reappearance of silicon hydride after treatment. The results of this study indicated that SiNCs with hydrogen terminated silicon hydride exhibited PL peaks in different bands depending on how SiH was linked onto the substrate of silicon crystals. When SiH was directly linked to silicon crystals, PL emission was at about 670 nm. When SiH was linked to crystal silicon via an oxygen bridge in the form of -OSiH, PL showed an emission peak at about 605 nm. The influence of oxygen backed silicon hydride on PL emission suggested the importance of the linking chemistry of silicon hydride when attached to the crystal substrates. This study might provide additional evidence for the study of PL emission of silicon materials and improve our understanding of related mechanisms. Conflicts of interest There are no conflicts to declare. Fig. 3 PL spectra of porous silicon samples containing different silicon hydride groups: type I sample with pure SiHx (red spectrum) emitted at 670 nm, type II sample with OSiHx (blue spectrum) emitted at 605 nm, while no emission could be found for type IV sample with pure SiOH (black spectrum). Inset: FTIR spectra corresponding to the three types of silicon hydride groups.
2019-04-09T13:05:08.698Z
2017-09-15T00:00:00.000
{ "year": 2017, "sha1": "369247ea86b2f5d3a61ebba28d7a3aaa28596b86", "oa_license": "CCBYNC", "oa_url": "https://pubs.rsc.org/en/content/articlepdf/2017/ra/c7ra02883k", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "d682c107ccb824f2c919fe03b030a1986dab9d85", "s2fieldsofstudy": [ "Materials Science", "Physics" ], "extfieldsofstudy": [ "Materials Science" ] }
235240941
pes2o/s2orc
v3-fos-license
TAK1/AP-1-Targeted Anti-Inflammatory Effects of Barringtonia augusta Methanol Extract Barringtonia augusta methanol extract (Ba-ME) is a folk medicine found in the wetlands of Thailand that acts through an anti-inflammatory mechanism that is not understood fully. Here, we examine how the methanol extract of Barringtonia augusta (B. augusta) can suppress the activator protein 1 (AP-1) signaling pathway and study the activities of Ba-ME in the lipopolysaccharide (LPS)-treated RAW264.7 macrophage cell line and an LPS-induced peritonitis mouse model. Non-toxic concentrations of Ba-ME downregulated the mRNA expression of cytokines, such as cyclooxygenase and chemokine ligand 12, in LPS-stimulated RAW264.7 cells. Transfection experiments with the AP-1-Luc construct, HEK293T cells, and luciferase assays were used to assess whether Ba-ME suppressed AP-1 functional activation. A Western blot assay confirmed that c-Jun N-terminal kinase is a direct pharmacological target of Ba-ME action. The anti-inflammatory effect of Ba-ME, which functions by transforming growth factor β-activated kinase 1 (TAK1) inhibition, was confirmed by using an overexpression strategy and a cellular thermal shift assay. In vivo experiments in a mouse model of LPS-induced peritonitis showed the anti-inflammatory effect of Ba-ME on LPS-stimulated macrophages and acute inflammatory mouse models. We conclude that Ba-ME is a promising anti-inflammatory drug targeting TAK1 in the AP-1 pathway. Introduction Inflammation, which plays an important role in protecting the body from harmful external influences, is associated with pain, swelling, heat, redness, and various functional impairments, and presents acute and chronic responses [1-3]. Without effective treatment, acute inflammation can become chronic. Hyperactive and prolonged inflammatory responses are considered important factors in various diseases, such as autoimmune disorder, cancer, diabetes, arthritis, and several vascular diseases [4,5]. Innate and adaptive immunity are the two parts of the immune system. The innate immune mechanism, which controls the activities of inflammatory response cells, comprises macrophages, neutrophils, and dendritic cells [6]. Toll-like receptors (TLRs) are proteins that have vital roles in the innate immune system [6]. Lipopolysaccharide (LPS) is a major part of the TLR4 ligand [7]. Mitogen-activated protein kinase (MAPK) signaling was activated in the course of LPS-induced inflammation, because LPS binds to TLR4 and stimulates the recruitment of both TRIF adaptor proteins and cytoplasmic MyD88 [8]. MAPK families include extracellular signal-regulated kinase (ERK), c-Jun N-terminal kinase (JNK), and p38 kinase [9]. When TAK1 is activated, a sequential signaling cascade composed of mitogen-activated protein kinase kinases (MAPKKs) and the kinase IKK is activated [10]. MAPKKs or IKK phosphorylate the MAPKs (JNK, ERK, and p38) or the inhibitor of κBα (IκBα) to activate activator protein 1 (AP-1) [11]. Activation of AP-1 increases when the MAPK signaling pathway is activated. The AP-1 signaling pathway consists of the ATF, c-Fos, c-Jun, and JDP families [12,13]. Because many inflammatory diseases in humans occur with the activation of AP-1 [14], targeting the MAPK/AP-1 pathway is a promising and attractive therapeutic anti-inflammatory method. The onset and intensification of inflammation in the body occasionally activate macrophages and release more cytokines, such as tumor necrosis factor-alpha (TNF-α), interleukin 6 (IL-6), IL-1β, IL-12, and interferons [15].
Inflammatory genes include cyclooxygenase-2 (COX-2) and inducible nitric oxide synthase (iNOS) [16]. Cytokines and inflammatory genes are upregulated by activation of AP-1 transcription factors and nuclear factor kappa B (NF-κB). Therefore, decreasing inflammation is an important therapeutic goal and possibly prevents infection in the human body. Traditional medicine and natural extracts offer benefits to healthcare and the treatment of various diseases [17]. Recently, studies have focused on original plant extracts rich in phenolic antioxidants, examining their anti-inflammatory, antimicrobial, and anticholinesterase effects, among others. Traditional medicines, more specifically herbal extracts, have proven their effectiveness in the treatment of various diseases and are now gaining more attention among the scientific community for their potential in the development of medicine for treating urgent and contemporary diseases of the modern era. Most notably, Sytar et al. demonstrated the possible role of plant-derived natural antiviral compounds for the development of plant-based drugs against the representative coronaviruses group, specifically COVID-19, which has caused a global pandemic that is currently ongoing [18]. Thymus species, which are culinary herbs and flavoring agents in Europe, North Africa, and Asia, have also proved to be promising therapeutic agents for neurodegenerative disorders (e.g., Alzheimer's disease, which is currently ranked as the sixth leading cause of death in the United States) [19]. The anti-oxidative and anti-inflammatory effects of a variety of plants collected from 14 original research papers have been comprehensively reviewed and summarized by Allegra, providing an overview of original plant extracts' phenolic antioxidants investigated for anti-inflammatory, antimicrobial, anticholinesterase, and other effects [20]. These previous works inspired us to attempt to utilize traditional medicine and natural extracts for anti-inflammatory applications. Interestingly, Barringtonia racemosa (a traditional plant in Malaysian villages) has been employed for human breast cancer treatment, drug discovery, and development [21]. In the same manner, in this research, we explored Barringtonia augusta, which is found in the wetlands of Thailand. Extracts of B. augusta exhibit antioxidant properties [22], although the molecular mechanisms by which they inhibit inflammatory responses through the AP-1 signaling pathway are not understood. In this study, we explored the anti-inflammatory effect of B. augusta methanol extract (Ba-ME). We investigated the roles of this compound in the regulation of the AP-1 signaling pathway in an LPS-treated macrophage RAW264.7 cell line and an LPS-induced peritonitis mouse model. Effect of Ba-ME on Cell Viability and Expression Levels of Inflammatory Genes in LPS-Treated Cells We inspected the cell cytotoxicity of Ba-ME (25-50 µg/mL) in RAW264.7 and HEK293T cells by MTT assay (Figure 1a,b). The viability of RAW264.7 and HEK293T cells was not affected notably by Ba-ME treatment compared with untreated cells. We employed a reverse transcription PCR (RT-PCR) assay to examine the transcriptional level of pro-inflammatory genes. We measured the expression of the proinflammatory cytokines, such as chemokine (C-C motif) ligand 12 (CCL12), chemokine (C-X-C motif) ligand 3 (CXCL3), chemokine (C-X-C motif) ligand 9 (CXCL9), cyclooxygenase-2 (COX-2), and glyceraldehyde 3-phosphate dehydrogenase (GAPDH).
We employed a reverse transcription PCR (RT-PCR) assay to examine the transcriptional levels of pro-inflammatory genes. We measured the expression of chemokine (C-C motif) ligand 12 (CCL12), C-X-C motif chemokine ligand 3 (CXCL3), chemokine (C-X-C motif) ligand 9 (CXCL9), and cyclooxygenase-2 (COX-2), with glyceraldehyde 3-phosphate dehydrogenase (GAPDH) as the control. CCL12 and COX-2 expression were decreased by Ba-ME treatment in a dose-dependent manner (Figure 1c). Because the mRNA expression levels of these cytokines are interconnected through the AP-1 pathway, we next confirmed that Ba-ME attenuated the AP-1 pathway.

Figure 1. (a) RAW264.7 cells were treated with different Ba-ME concentrations for 24 h, and the MTT assay was performed to determine cell viability. The values are presented as mean ± SD of 3 replicates. (b) HEK293T cells were treated with different Ba-ME concentrations for 24 h, and the MTT assay was performed to determine cell viability. The values are presented as mean ± SD of 3 replicates. (c) RT-PCR was employed to detect the mRNA expression levels of CCL12, CXCL3, CXCL9, COX-2, and GAPDH in LPS-stimulated RAW264.7 cells treated with Ba-ME (0-50 µg/mL). Band intensity (the bottom panel of (c)) was measured and quantified using ImageJ. ** p < 0.01 compared with control cells.

Effect of Ba-ME on Transcriptional Activation of AP-1
Due to the regulatory role of the AP-1 transcription factor in inflammatory gene expression, we decided to inspect the suppressive effect of Ba-ME on such activation. To determine whether Ba-ME suppressed the activation of AP-1, a transfection experiment with the AP-1-Luc construct and HEK293T cells was conducted. The results showed that AP-1-mediated luciferase activity was intensified by co-transfection with TRIF and MyD88. In contrast, Ba-ME treatment inhibited this upregulation significantly (p < 0.01) and dose-dependently (Figure 2). These results indicate that AP-1 activation is a vital pharmacological target of Ba-ME.

Figure 2. HEK293T cells were transfected and additionally treated with Ba-ME (0-100 µg/mL) for 24 h. AP-1-driven luciferase activity was measured by a luminometer. ** p < 0.01 compared with control cells. The values are presented as mean ± SD of 3 replicates.
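To make the reporter readout concrete, here is a minimal sketch of how AP-1-Luc fold activation and percent inhibition are commonly computed from luminometer counts. The relative light unit (RLU) values are hypothetical, not the paper's data; they are chosen to mirror the roughly 6.5-fold induction discussed later.

```python
import numpy as np

# Hypothetical luminometer counts (relative light units), 3 replicates each.
rlu = {
    "AP-1-Luc alone":           [1.0e4, 1.1e4, 0.9e4],
    "AP-1-Luc + MyD88":         [6.6e4, 6.3e4, 6.5e4],  # ~6.5-fold induction
    "AP-1-Luc + MyD88 + Ba-ME": [2.1e4, 2.4e4, 2.2e4],
}
baseline = np.mean(rlu["AP-1-Luc alone"])
for condition, values in rlu.items():
    fold = np.array(values) / baseline  # fold activation over reporter alone
    print(f"{condition}: {fold.mean():.2f}-fold (SD {fold.std(ddof=1):.2f})")

# Percent inhibition of the MyD88-driven signal by Ba-ME:
stimulated = np.mean(rlu["AP-1-Luc + MyD88"]) - baseline
treated = np.mean(rlu["AP-1-Luc + MyD88 + Ba-ME"]) - baseline
print(f"Ba-ME inhibition of induced signal: {100 * (1 - treated / stimulated):.0f}%")
```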
Regulatory Mechanism of Ba-ME in AP-1 Pathways
Whether Ba-ME can suppress the activation and translocation of AP-1 was our next question. Figure 3a shows that the increase in nuclear levels of the AP-1 subunits c-Fos and c-Jun was inhibited by Ba-ME in a time-dependent manner (5, 15, 30, and 60 min). A similar inhibitory pattern of c-Fos and c-Jun expression (from 5 min) was confirmed in whole lysates extracted from RAW264.7 cells. It is an important and non-trivial task to establish which intracellular molecules are targeted by Ba-ME in the AP-1 signaling pathway. We measured the levels of phosphorylated MAPKs (p38, ERK, and JNK). We noticed that LPS markedly raised the phosphorylation of ERK, JNK, and p38. In contrast, the phosphorylation of JNK was strongly and time-dependently (5, 15, 30, and 60 min) suppressed by Ba-ME, whereas that of p38 and ERK in RAW264.7 cells was not (Figure 3b,c). As phosphorylation of MAPKs is crucial in regulating LPS-induced inflammatory mediators, our results indicate that Ba-ME blocks the AP-1 pathway at the level of JNK. We then analyzed the phosphorylated forms of AP-1-related proteins to identify the protein targeted by Ba-ME in inhibiting the AP-1 pathway. As shown in Figure 4a, phosphorylated TAK1 was detected after LPS induction for 2, 3, and 5 min. TAK1 is the most upstream protein in the AP-1 pathway; it drives AP-1 signaling, and its activation is required to activate macrophages. Using a Western blot assay, we observed that phosphorylated TAK1, one of the phosphorylated forms of AP-1 pathway-related proteins [11], was decreased by Ba-ME at 2, 3, and 5 min. This result indicates that Ba-ME specifically targets TAK1. Moreover, the inhibition of TAK1 kinase alleviates the activation of downstream proteins.
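Blot results like those in Figures 1c and 3 are summarized by densitometry: each band is quantified (here, with ImageJ) and normalized to a loading control before comparison across lanes. The sketch below shows only that normalization arithmetic, with hypothetical intensity values.

```python
# Hypothetical ImageJ band intensities (arbitrary units) for one blot:
# target gene (COX-2) and loading control (GAPDH) across Ba-ME doses.
cox2  = {"LPS": 1850, "LPS + Ba-ME 25": 1210, "LPS + Ba-ME 50": 640}
gapdh = {"LPS": 2010, "LPS + Ba-ME 25": 1980, "LPS + Ba-ME 50": 2040}

# Normalize each lane to its own GAPDH band, then express the ratio
# as a percentage of the LPS-only lane.
reference = cox2["LPS"] / gapdh["LPS"]
for lane in cox2:
    relative = (cox2[lane] / gapdh[lane]) / reference * 100.0
    print(f"{lane}: COX-2/GAPDH = {relative:.0f}% of LPS control")
```

A dose-dependent drop in the normalized ratio, as in this toy example, is the quantitative counterpart of the visual band fading reported in Figure 1c.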
Anti-Inflammatory Effects of Ba-ME by Targeting TAK1
Whole cell lysate immunoblotting, using HEK293T cells overexpressing TAK1, was conducted to examine the capability of Ba-ME to inhibit autophosphorylation of the target enzyme. We overexpressed the pPRK6-HA-TAK1 plasmid in HEK293T cells for 24 h and then exposed the cells to Ba-ME (150 µg/mL) for another 24 h. The p-TAK1 level was reduced by the Ba-ME treatment (Figure 4b). To assess the interaction of Ba-ME with TAK1 in intact cells, we performed a cellular thermal shift assay (CETSA) at 49 °C, 51 °C, 53 °C, 55 °C, 57 °C, 59 °C, and 61 °C. Figure 4c shows that Ba-ME treatment shifted the thermal stability of the target protein TAK1.

Figure 4. (c) After overexpressing TAK1 in HEK293T cells, a CETSA was performed with Ba-ME (150 µg/mL), with dimethyl sulfoxide used as a control. A Western blot analysis was conducted to examine the stabilization of TAK1 by Ba-ME. Solid circles, Ba-ME group; hollow circles, control group.
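A CETSA readout of this kind is often summarized by fitting the remaining soluble-protein band intensities across the temperature gradient to a sigmoidal melt curve and comparing apparent melting temperatures (Tm) with and without ligand. The sketch below uses hypothetical intensities, not the paper's densitometry.

```python
import numpy as np
from scipy.optimize import curve_fit

def melt(T, Tm, k):
    """Boltzmann sigmoid: fraction of protein remaining soluble at temperature T."""
    return 1.0 / (1.0 + np.exp((T - Tm) / k))

temps = np.array([49, 51, 53, 55, 57, 59, 61], dtype=float)  # CETSA gradient (deg C)
# Hypothetical band intensities, normalized to the 49 deg C lane:
dmso  = np.array([1.00, 0.95, 0.80, 0.45, 0.20, 0.08, 0.03])
ba_me = np.array([1.00, 0.98, 0.92, 0.75, 0.50, 0.25, 0.10])

for label, y in [("DMSO", dmso), ("Ba-ME", ba_me)]:
    (tm, k), _ = curve_fit(melt, temps, y, p0=[55.0, 1.0])
    print(f"{label}: apparent Tm = {tm:.1f} deg C")
# A higher Tm in the Ba-ME group indicates thermal stabilization of TAK1,
# consistent with ligand binding, which is the rightward shift Figure 4c shows.
```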
Ba-ME Alleviates Clinical Signs of LPS-Induced Peritonitis in a Mouse Model
An LPS-induced peritonitis mouse model was established to examine the anti-inflammatory effect of Ba-ME in vivo. Ba-ME (100 mg/kg) clearly decreased nitric oxide (NO) production (Figure 5a). Next, the mRNA expression levels and protein levels of AP-1 pathway-related factors were examined. Figure 5b indicates that the mRNA levels of COX-2 and CCL12 decreased. These observations indicate that Ba-ME can inhibit the progression of the AP-1 pathway, proving its anti-inflammatory effect in vitro and in vivo.

Discussion
Barringtonia augusta has long been used as a folk medicine and is understood to act as an antioxidant [22]. However, a molecular mechanism explaining how Ba-ME inhibits inflammatory responses through the AP-1 signaling pathway had yet to be elucidated. We focused on how Ba-ME exerts its anti-inflammatory function in vitro, using LPS-stimulated RAW264.7 cells, and in vivo, using an LPS-induced peritonitis mouse model. The viability of HEK293T and RAW264.7 cells was examined by MTT assay [23] to establish that Ba-ME produces anti-inflammatory effects without cytotoxicity (Figure 1). LPS-stimulated TLR4 signaling modulates COX-2 and pro-inflammatory cytokines by activating AP-1 pathways in macrophages [24,25]. Our goal was to determine whether Ba-ME mediates the downregulation of COX-2 and inflammatory cytokines in LPS-stimulated macrophages by suppressing AP-1 signaling. Previously, O'Neill et al. demonstrated a method for identifying the essential components of TLR signaling that employed transfection with a luciferase reporter gene construct and adaptor molecules in HEK cells [26]. As demonstrated in our previous studies [27,28], this method is reliable for investigating the functional activation of transcription factors. Following this approach, we transfected cells containing AP-1-Luc with adaptor molecules (TRIF and MyD88) to examine how Ba-ME suppresses AP-1 transcription activity (Figure 2). AP-1-mediated luciferase activity was accelerated up to a factor of 6.5 by TRIF and MyD88 co-transfection, while Ba-ME significantly and dose-dependently blocked this activity. Altogether, AP-1 activation is a pharmacological target of the extract. We exposed RAW264.7 cells to LPS and measured the phosphorylated and total levels of c-Jun and c-Fos to comprehensively examine the effect of Ba-ME on the AP-1 signaling pathway. Figure 3a shows that Ba-ME can decrease levels of phosphorylated c-Jun and c-Fos in RAW264.7 cells under LPS stimulation. It has been reported that MAPKs are able to control AP-1 activation, thus playing an important role in regulating LPS-induced inflammation [28,29]. Accordingly, our results suggest that Ba-ME specifically targets an upstream MAPK. We further analyzed the inhibitory effect of Ba-ME on MAPKs and their upstream signaling enzymes. We examined the effects of Ba-ME on the phosphorylated and total forms of p38, JNK, and ERK, because several kinds of MAPKs (p38, JNK, and ERK) activate AP-1 signaling pathways. Although the activity of p38 and ERK was not inhibited, Ba-ME inhibited the activity of JNK at 5, 15, 30, and 60 min, as shown in Figure 3c. These results indicate that the targets of Ba-ME are upstream signaling molecules in the AP-1 signaling pathway. Brief experiments performed at 2, 3, and 5 min (Figure 4a) revealed that LPS enhanced TAK1 phosphorylation, which takes place upstream of JNK. These results are consistent with the MAPK inhibitory activity of Ba-ME. To evaluate whether Ba-ME targets upstream AP-1 signaling molecules, we employed TAK1-overexpressing HEK293T cells. As shown in Figure 4b, Ba-ME suppressed the phosphorylation of TAK1. We then used CETSA experiments to examine whether TAK1 is the target of Ba-ME and to identify interactions between Ba-ME and TAK1.
The results confirmed that Ba-ME interacts with TAK1. The LPS-induced peritonitis mouse model was employed to explore the anti-inflammatory ability of Ba-ME in vivo. As shown in Figure 5, Ba-ME treatment (100 mg/kg) improved LPS-induced peritonitis. The NO production assay (Figure 5a) confirmed the suppressive effect of Ba-ME. Moreover, Ba-ME reduced inflammatory lesions, pro-inflammatory cytokines, and the activation of AP-1 pathway-related proteins in the peritonitis model. These findings agree with the effects of Ba-ME on mRNA production and active forms of AP-1 signaling molecules demonstrated above. Ba-ME was strongly protective in the mouse model of LPS-induced peritonitis. These results indicate that Ba-ME is a potential candidate component for an anti-inflammatory medicine.

Materials
Barringtonia augusta methanol extract (Lecythidaceae) was extracted from the leaf and stem of the plant from Vietnam. The phytochemical details of Ba-ME, including the HPLC profile, are presented in the Supplementary Information. RAW264.7 and HEK293T cells were used; primers for the RT-PCR target genes and GAPDH were synthesized by Bioneer Inc. Antibodies specific for the phosphorylated and total forms of c-Fos, c-Jun, p38, ERK, JNK, TAK1, and β-actin were acquired from Cell Signaling Technology (Beverly, MA, USA).

Cell Cultures
A murine macrophage cell line (RAW264.7) was cultivated in RPMI 1640 medium supplemented with 10% heat-inactivated FBS and antibiotics (penicillin and streptomycin) at 37 °C in 5% CO2. The human embryonic kidney cell line (HEK293T) was cultured in DMEM medium with 5% heat-inactivated FBS and antibiotics (penicillin and streptomycin) at 37 °C in 5% CO2.

Mice
Male C57BL/6 mice (6 to 8 weeks old, 17 to 21 g) were obtained from Deahan Biolink (Chungbuk, Korea) and treated orally with Ba-ME (100 mg/kg) or ranitidine (40 mg/kg) twice per day for 3 days. Water and pellet chow (Samyang, Daejeon, Korea) were available ad libitum. The studies (permit number for experimentation on mice: SKKUIACUC2020-06-30-1) were performed following guidelines established by the Sungkyunkwan University Institutional Animal Care and Use Committee.

Cell Viability Tests
The cytotoxicity of Ba-ME to RAW264.7 cells (5 × 10^5 cells/mL) and HEK293T cells (2 × 10^5 cells/mL) was assessed after 24 h of treatment by MTT assay [6]. The cytotoxic effect of Ba-ME (25-50 µg/mL) was evaluated by a conventional MTT assay with a final MTT concentration of 500 µg/mL. Cells were treated with Ba-ME for 24 h, and 10 µL of MTT solution was added to the cells 3 h prior to the end of the culture period. The assay was stopped by adding 15% sodium dodecyl sulfate to each well to dissolve the formazan. Absorbance at 570 nm was measured using a Synergy HT multi-mode microplate reader (BioTek Instruments, Inc., Winooski, VT, USA).

mRNA Analysis by Quantitative Reverse Transcription Polymerase Chain Reaction
RAW264.7 cells (1 × 10^6 cells/mL) were treated with Ba-ME (100-150 µg/mL), and induction was performed with LPS (1 µg/mL) after 30 min. After 6 h of induction, RNA was extracted with TRI reagent according to the manufacturer's instructions and stored at -80 °C. A 1 µg sample of total RNA was used with a cDNA synthesis kit (Thermo Fisher Scientific, Waltham, MA, USA) following the manufacturer's instructions [30,31]. The primer sequences used are listed in Table 1.
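As a quick sanity check on the in vivo dosing described in the Mice section above, the following sketch computes the per-animal Ba-ME dose and an example gavage volume for the stated body-weight range. The stock suspension strength is a hypothetical value chosen for illustration only.

```python
# Oral dose check for Ba-ME at 100 mg/kg in 17-21 g mice.
dose_mg_per_kg = 100.0
stock_mg_per_ml = 10.0  # hypothetical suspension strength in 0.5% Na-CMC

for weight_g in (17.0, 21.0):
    dose_mg = dose_mg_per_kg * weight_g / 1000.0  # mg per animal
    volume_ml = dose_mg / stock_mg_per_ml         # gavage volume needed
    print(f"{weight_g:.0f} g mouse: {dose_mg:.2f} mg Ba-ME in {volume_ml:.2f} mL")
# 17 g -> 1.70 mg in 0.17 mL; 21 g -> 2.10 mg in 0.21 mL, comfortably under
# the ~10 mL/kg volume commonly used as a guideline for oral gavage in mice.
```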
Plasmid Transfection and Luciferase Reporter Gene Activity Assays
HEK293T cells (2.5 × 10^5 cells/mL) were seeded in 24-well plates. The cells were transfected with plasmids encoding a luciferase gene (AP-1-Luc) with AP-1 promoter sites. MyD88 or TRIF genes were then co-transfected to activate the luciferase genes. Transfections were performed using the polyethylenimine (PEI) method. After 24 h, we treated the transfected cells with Ba-ME (0-150 µg/mL). The harvested cells were lysed by freezing at -70 °C for at least 3 h. A luminometer was used to measure luciferase reporter activity [32].

Cellular Thermal Shift Assays
HEK293T cells were transfected with plasmids expressing TAK1 domain-deletion genes and treated with Ba-ME (150 µg/mL) or DMSO (as a control) for 24 h. After treatment, the cells were isolated and resuspended in PBS. The suspended cells were divided into seven PCR tubes with a volume of 100 µL and equal numbers of cells. Each PCR tube was heated for 3 min at one temperature in a gradient from 49 °C to 61 °C and then cooled at 25 °C for 3 min. We performed three rounds of freezing and thawing using liquid nitrogen and room-temperature water, as reported previously [35,36]. The samples were transferred into Eppendorf tubes and centrifuged at 12,000 rpm for 30 min. Protein samples were examined by Western blot analysis.

LPS-Induced Peritonitis Mouse Model
C57BL/6 male mice (n = 5 per group) were injected intraperitoneally with 1 mL of 4% thioglycollate broth for 4 days [37,38]. Ba-ME (100 mg/kg) suspended in 0.5% Na-CMC was administered orally to the thioglycollate-injected mice daily for 5 days by gavage. Acute peritonitis was induced in the thioglycollate-injected mice by intraperitoneal injection of 1 mL of LPS (10 mg/kg); peritoneal macrophages derived from the mice were collected and plated in RPMI 1640 medium 1 day after LPS injection. Total RNA in the peritoneal exudates was isolated with TRIzol reagent according to the manufacturer's instructions and measured by quantitative reverse transcription polymerase chain reaction as described above.

Nitric Oxide (NO) Assay
Peritoneal macrophages were pre-treated with Ba-ME and then stimulated with LPS. The supernatant obtained (100 µL) was mixed with 100 µL of Griess reagent. The absorbance of this mixture was measured at 540 nm, and a standard curve was employed to calculate the concentration of NO.

Isolation of Peritoneal Macrophages
After 5 days of oral administration to the thioglycollate-injected mice, we obtained peritoneal macrophages by intraperitoneal lavage. The isolated peritoneal macrophages (1 × 10^6 cells/mL) were washed with RPMI 1640 medium and cultured for 4 h at 37 °C in 5% CO2 in a humidified incubator.

Statistical Analysis
The data are presented as the mean and standard deviation of independent replicate experiments performed in triplicate. Statistical comparisons were examined by Student's t-test and one-way analysis of variance. A p-value < 0.05 was considered statistically significant. All statistical analyses were performed using SPSS software.
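The NO assay above interpolates sample concentrations from a nitrite standard curve. As a worked illustration (with hypothetical standards and sample absorbances, not the study's data), a linear fit to sodium nitrite standards converts A540 readings into micromolar nitrite, the stable proxy for NO measured by the Griess reaction.

```python
import numpy as np

# Hypothetical Griess assay standards: sodium nitrite (uM) vs. A540.
std_conc = np.array([0, 5, 10, 20, 40, 80], dtype=float)
std_a540 = np.array([0.05, 0.09, 0.14, 0.23, 0.42, 0.80])

slope, intercept = np.polyfit(std_conc, std_a540, 1)  # linear standard curve

def nitrite_uM(a540):
    """Interpolate nitrite concentration from absorbance at 540 nm."""
    return (a540 - intercept) / slope

for label, absorbance in [("LPS", 0.55), ("LPS + Ba-ME 100 mg/kg", 0.21)]:
    print(f"{label}: {nitrite_uM(absorbance):.1f} uM nitrite")
```

A drop in interpolated nitrite for the treated sample, as in this toy example, corresponds to the reduced NO production reported in Figure 5a.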
Conclusions
Through the above experiments, we showed that Ba-ME has an anti-inflammatory effect both in vitro and in vivo. Ba-ME effectively suppressed the expression of COX-2 and CCL12 in LPS-stimulated macrophages in a dose-dependent manner. Ba-ME inhibits the activation of AP-1 signaling, as shown by luciferase assay. Further analysis of kinase activities by in vitro assays and Western blotting confirmed that Ba-ME blocks the AP-1 pathway at the level of JNK. Employing an overexpression strategy and a CETSA, we showed that Ba-ME targets TAK1 to inhibit macrophage-mediated inflammatory responses. In Figure 6, we summarize the mechanism by which Ba-ME inhibits the progression of the AP-1 pathway to achieve its anti-inflammatory effects in vitro and in vivo. Our research suggests that Ba-ME could be a potential anti-inflammatory therapeutic.

Supplementary Materials: The following are available online. Figure S1: HPLC profile of Ba-ME and standard flavonoid compounds (silibinin, genistein, and apigenin).

Author Contributions: A.T.H. conceived and designed the experiments, performed the experiments, analyzed the data, and wrote the paper. M.-Y.K. and J.Y.C. conceived and designed the experiments, analyzed the data, and wrote the paper. All authors have read and agreed to the published version of the manuscript.

Data Availability Statement: The data used to support the findings of this study are available from the corresponding author upon request.
The effectiveness of ūloa as a model supporting Tongan people experiencing mental distress

Abstract
Background: This article is based on a larger research project, which investigates the effectiveness of a culturally appropriate model, namely ūloa, when working with Tongan people. Ūloa is a communal method of fishing in Tonga, which includes all members of the community. A previous paper described the first two phases of the project: in phase one, the concept was presented to health providers and community groups; in phase two, the model was amended based on the findings of phase one. This paper reports on phase three, which trialled the model to increase awareness of ūloa within the mental health services, to raise awareness of how to work with Pacific people, to adjust the health service to suit the needs of this population, and to test the model's effectiveness. Using reflexive thematic analysis, the results highlighted a number of patterns both across and within the groups, described as napanapangamālie (harmony, balance), ngāue fakataha (working together/oneness), and toutai (fisher). These findings continue to support that the conventional biomedical approach employed in the mental health services overlooks elements of Tongan constructions of mental illness and the intersections between Tongan and biopsychosocial themes. Care that is based only on the 'medicine' rather than bringing the spiritual aspect into care planning will not serve the needs of the Tongan community.

BACKGROUND
Tangata (people) o le Moana (Pacific ocean) refers to people of Polynesian origin who first stepped foot in New Zealand over 800 years ago. This population group has a higher prevalence of mental illness, at 25% compared with 20% of the non-Polynesian (general) population in New Zealand, and has also been found to be the group with the highest levels of suicidal ideation, attempts, and plans (Oakley-Browne et al. 2006). Further, access to mental health services has been identified as a concern, with only 25% of Pacific people accessing mental health services compared with 58% of the general population within a 12-month period (Oakley-Browne et al. 2006). Additionally, Pacific children and adults were less likely to get help for mental health issues (Ministry of Health 2020). Rates of mental illness have been increasing (Ataera-Minster & Trowland 2018; Oakley-Browne et al. 2006; Vaka 2014), and management of mental illness has largely focused on designing appropriate Pacific cultural tools to inform health services about working effectively with Pacific people (Fotu & Tafa 2009; Samu & Suaalii-Sauni 2009). These numbers confirm the higher number of Pacific people with mental distress, with lower access to mental health and addiction services. Though current health services use culturally appropriate tools when working with Pacific people, there is concern that staff fail to question Pacific people about their levels of understanding and interpretation of their illness, due to their experience of mental health services being 'hostile, coercive, culturally incompetent, individualistic, cold and clinical' (p. 41) (New Zealand Government 2018). To address this failure in part, the researchers set out to explore how mental health clinicians could adjust health care to suit the needs of Pacific people at the ethnic level, and to clarify and recommend systems changes.
Hence, phase three of the larger research project trialled the ūloa model in both the mental health services and the Tongan community in an Auckland metropolitan health board, to test the increased awareness and effectiveness of this culturally appropriate Tongan model. Ūloa is a communal method of fishing in Tonga, which includes all members of the community. Ūloa is usually done in a community setting such as villages or churches; Vaka (2014, 2016) further asserts that ūloa involves all ages and genders, whereby everyone goes down to the sea to join in with one another. Using coconut leaves, the people move together in a symmetrical manner toward a collection basket; all the fish caught are distributed evenly to all villagers. Importantly, it is challenging to achieve symmetrical movements, as they are largely affected by contexts, climate, environment, the sea tides, availability of coconut leaves, and numbers of people.

METHOD
The methodology used is talanoa, which is widely used with Pacific people (Vaioleti 2006; Vaka 2014; Vaka et al. 2016). Talanoa captures Pacific ways of knowing and doing and provides a platform for tala (conversation/talking) and the ability to reach into people's noa (hearts, souls).

Sampling
Purposive sampling was used to identify participants from mental health service providers who had implemented and trialled ūloa in their area of practice. All were recruited through an intermediary person who invited them to the talanoa and acted as the contact liaison person to alleviate cultural pressures. All participants were Tongan, and one participant was employed as a cultural worker; however, it is important to note that all participants are cultural experts in their own culture. Participant interviews included staff at the local district health board, non-government organizations, and traditional healers, and, importantly, perspectives were heard from service users. The data were collected in both English and Tongan and were digitally recorded. There were four individual talanoa and one talanoa with two participants, as follows: the first talanoa was with a female community support worker from a non-government organization; the second was with a traditional healer who practises Tongan traditional treatment; the third was with a person experiencing mental distress who uses mental health services; and the fourth was with a psychiatrist from the district health board. The group talanoa was with a social worker and cultural workers from the district health board.

Ethics
This study gained ethics approval from the national Health and Disability Ethics Committee (reference number 19/STH/83) in 2019. Pseudonyms were used to protect participants' identity. Approval was also confirmed by the ethics committee at the district health board.

Analysis
Reflexive thematic analysis (TA; Braun & Clarke 2021) was used based on its methodological alignment with the philosophical aspects of talanoa, and on its use in counselling and psychotherapy research to create meaningful knowledge production. Further, by following these guidelines, the researchers were assured that they could replicate the analysis from previous phases of the larger study. TA has been used extensively in qualitative research and mental health, as it allows researchers to analyse their data independently of methodology. TA also addresses cultures effectively (Trevino et al. 2021).
Each talanoa group interview was conducted in Tongan and then transcribed, coded, and thematically analysed to identify patterns across the data, in alignment with the talanoa approach and Braun and Clarke (2021). Upon reading the data, it was evident that each talanoa told a different story. However, the analysis identified a number of patterns both across the groups and within each talanoa. Three themes were identified: napanapangamālie (harmony, balance), ngāue fakataha (working together/oneness), and toutai (fisher). Each theme is presented with data from the participants in the following section.

Rigour
The talanoa allowed the authentic voices of Tongan participants to be heard and provided a context where participants were able to talk openly, with the opportunity to confirm participants' viewpoints, which gave validity to the findings. Authenticity was tested throughout the talanoa process (McGrath & Ka'ili 2010; Vaioleti 2006) with an academic and an expert on Tongan culture. These experts were involved in analysing the data and translating the Tongan-language interview data into English.

Theme 1: Napanapangamālie (harmony, balance)
Ūloa involves movement with the aim of arriving at the collection basket in a balanced and symmetrical manner. Napanapangamālie is when this movement in the ūloa is successfully achieved. Tongan people usually talk metaphorically, saying one thing but meaning another. A Tongan concept like napanapangamālie was useful in terms of application to everyday practice. This theme captures Tongan concepts that were useful in terms of collectively moving in the same direction, in harmony and balance, to achieve napanapangamālie. The psychiatrist noted their deeper understanding of napanapangamālie:

So, over the last month you know. . . when I think back to when you explained the concept to me. . . I understand more deeply now the role that metaphor plays in the way that Tongans communicate.

The psychiatrist explained that they had been functioning within the dominant medical model; however, the opportunity to use ūloa and Tongan concepts strengthened their understanding of the role of metaphor in practice. Tongan concepts such as loto refer to inside/heart/soul. The cultural advisor discussed loto and further highlighted other Tongan concepts such as 'atamai, 'uto, and ongo:

Loto is the unseen part . . . one of the problems we have is trying to dissect unseen things. Ongo are feelings, and we all know that they come from one place, the soul. 'Atamai (brain) comes from the same place . . . mind is different from the brain. Brain is 'uto; some secular people refer to the brain as 'atamai. No, the mind is the unseen part, and I like how we talk about the heart; the mafu is the heart and loto is the soul.

It is important to understand these Tongan concepts and incorporate them into the metaphor of moving towards the collection of the fish with the basket to achieve napanapangamālie. The NGO representative discussed the relationships between loto and the mind and thinking, explaining that 'When you have a sad loto. . . that will affect your thinking'. They further added how important it is to know where you are at, and that the services acknowledge that too, while suggesting there is a lack of knowledge and awareness among Tongan people:

There are many people who do not know the services that they should go to.
Especially our Tongan people; they are not clear where to get help in this area.

One service user reported that they do not feel accepted in their own home or experience a sense of harmony and balance:

I do not usually go to our living room. When I go there, we (family) talanoa (talk) but I don't think that's my space of acceptance.

This participant highlighted the importance of harmony at home and, importantly, how mental health workers can play a larger role in supporting the individual and their family to work closely together. Phases 1 and 2 of the research reported earlier that bringing everyone together to the same level, through a shared understanding of what services can provide, will promote a feeling of acceptance and harmony; this leads to our next theme, which is about working together.

Theme 2: Ngāue fakataha (working together/oneness)
This theme discusses the importance of working together in terms of incorporating both Tongan and broader worldviews. It is important to note that the Tongan word for together is fakataha, which also means oneness, focusing on the word taha, one. These include language, different interpretations of mental distress, treatments, and the challenges of individualism and collectivism. The traditional healer highlighted the importance of working together in ūloa and its relationship to Tongan worldviews of the collective family. The cultural worker supported this idea and explained the importance of incorporating Tongan treatments into care:

The effectiveness of ūloa depend on the loto . . . and we need to work together . . . I remember when I went and got a [traditional healer] and they used traditional medicine and there are improvements in the client's mental state.

The psychiatrist discussed relationships from a Tongan perspective, and how they are important in terms of mental health:

When you talk to a young person and their families, the number one thing that it comes down to is that [mental illness] is a break down in the relationship . . . what we call vā (relationship) and tauhi vā (maintaining relationship). Without thinking about it, everything we do is about tauhi vā (maintaining relationship). You know, even the introduction before we started this conversation.

This emphasizes the importance of relationships between Tongan healers and the medical team.

Theme 3: Toutai (fisher)
Toutai is the main person who makes the decisions in ūloa. In the earlier phases of this larger study, participants, including traditional healers, staff, NGOs, and service users, strongly argued that they each should be the toutai. After implementing ūloa, all participants in phase three compromised and regarded toutai as a shared role:

You want to use your expertise to strengthen the individual to the point where they manage all the important decisions in their lives well. That the issues that cause them to experience mental illness are gone. And using the metaphor, then they can be that person that makes those decisions, [toutai]. Not just for themselves, but for the collective.

When asked, 'Who do you think should make the decision for you when you are unwell?', the service user replied:

Me, because I am the person who knows myself and I should make the decision . . . if I get bad, then call the hospital.

A social worker reported their view of toutai:

The role of practitioners is to direct . . . but we must also be flexible so that we are able to achieve ūloa. Pacific practitioners are kind to Pacific people when [we] fulfil our duties.
We just feel the ocean as we move, so we know their movements and directions.

DISCUSSION
This study set out to further develop the ūloa model, a Tongan model of care based on a communal fishing technique, and to examine how service providers in mental health practice can implement the ūloa model to deliver successful treatment outcomes for Tongan service users. Phase 1 of this study was the consultation with the Tongan community and mental health providers; Phase 2 amended the model according to the findings from Phase 1. The findings from Phases 1 and 2 emphasized the need for working together, effective communication, and the importance of using a Tongan tool like ūloa. This article has reported findings from Phase 3, which further informed and modified ūloa, followed by a trial of this novel approach in the mental health services. Three main themes were derived from the data: napanapangamālie, ngāue fakataha, and the central role of the service user as the toutai and key person in ūloa; these are discussed below.

Napanapangamālie
The participants reported that understanding the Tongan worldview is vital for culturally informed care. Though a bio-psycho-social approach to care is often cited as a focus in clinical practice, there is still a reliance on, and dominance of, biomedical explanations of mental distress, which can unwittingly exclude notions of loto (the soul), ongo (the emotional context of people's experience), 'atamai (the mind), and the service user's interpretation of the experience. Inclusion of these concepts would expand the spiritual connections between the service user and the clinician, reduce stigma, and support a shared understanding of the person's distress that can also be shared and understood by their family. For example, similar to the Tongan view, it is not uncommon for many cultures to ascribe their source of distress to the notion of 'transgression' or 'wrongdoing' in the eyes of the deities. Not only is sensitivity required in working with cultural expressions of mental distress, often understood through the western view of mental illness; the services are also required to be more informed and confident in embracing these spiritual concepts. Carter and Palmer (2017) suggest that transgression itself is a metaphor for further re-imagining of experiences; in this study, we argue further that such a disruption of the spatial, emotional, and ethical boundaries within traditional psychiatry will shape a more responsive, respectful interpretation of the world in terms of the human values and experiences held by the service user. Further, Bracken and Thomas (2013) take a post-psychiatry approach, asserting that contemporary psychiatry needs to value community development and safe spaces whereby different understandings of, and responses to, madness and distress within minority ethnic communities can be articulated by dissecting the unseen. Ūloa is about trust: that the collective action of fishing will feed the village. Therefore, to be truly culturally competent as a health professional, placing one's trust in models that propose such communication and community action, such as the ūloa model, is central for Tufunga faka-Tonga, Tongan constructions of mental health.
Ngāue fakataha
Ūloa offers the safety-net of ngāue fakataha, which supports collaborative and collective action based on relationships that create the vā, literally the space within a relationship, connecting sacredness and inclusion with harmony and balance, underpinned by mutual respect (Te Pou 2010). Forms of communication and the use of language were reported as central to ūloa and were further supported by the participants. Tongan culture is replete with metaphors, images and symbols that require minimal explanation yet carry collective understandings of distress that normalize the service user's experience. Paying attention to metaphors provides a way forward for health professionals and systems to actively collaborate as a collective, rather than taking the western, individualistic approach, to safely bring resolution to the distress of the whole person and their family. Ūloa brings to the fore the 'net' to capture the essence of the service user and to provide a collective approach to care planning, hence reducing the risk of stigma, minority stress (Velez et al. 2017), and cultural alienation (Taonui 2010). In Aotearoa, metaphors of integrating and collaborating are also evident in the future direction of integrated mental health services. For example, 'long-lining' is a fishing metaphor cited by the Health and Disability Review report (New Zealand Government 2020), which signifies a future whereby service providers are 'hooked on and in' care planning to maintain a seamless connection and strengthening of the service user's care across the specialist and primary care sectors. Both the metaphor of 'long-lining' and the 'net' in ūloa symbolize a collective safe 'holding' of the person on the care pathway to meet the needs of the service user at that time, rather than services perpetuating an individualistic and fragmented approach. Health professionals will also need to work effectively, rather than being in one corner of the net, which results in care that is disjointed. Finding a common language to join each world together into the net is important, particularly as the older and younger generations hold different understandings of mental distress and illness. Ūloa can therefore offer provision for cultural needs in one care pathway and, like other cultural models such as Fonofale (Pulotu-Endemann 2009) and Te Whare Tapa Whā (Durie 1994), provides health professionals with a shared understanding across a range of cultural expressions of distress; ūloa is well suited to join this broad church of alternative and culturally relevant approaches to mental distress (Pulotu-Endemann & Faleafa 2017).

The toutai
The service user at the centre of their care and recovery is a concept that has been part of mental health service delivery for several decades (Mental Health Commission 2001). Participants reported that the notion of toutai is a key concept in ūloa; the status of toutai is interchangeable, as is the person who holds the net to secure the catch, while others assist in bringing the weight of the fish to shore. Following the symbolism of ūloa, the service user may not immediately be the toutai, as there is a journey to be undertaken to gain understanding of their mental distress, continue their recovery, and regain their power. Though the aim is for the service user to be toutai, significant others may support them to regain this role.
For example, the roles of aiga or extended family, traditional healers, church leaders, and health professionals will be interchangeable and, like the notion of 'long-lining', the right people and relevant resources 'hook' on or off the care journey with the best interests and cultural needs of the person at the forefront of care. Vaka described the concept of hē (lost) for people with experience of mental illness as 'the mind is lost' and needing support to steer their journey through the distress. Likewise, the original work of R. D. Laing is also replete with metaphors such as navigation, mental maps, territory, and being lost, whereby the role of the helper is to support the person as a 'traveler who's been lost in a land where no one speaks his language. . . he feels completely lost. . . and sharing the problem with someone means. . . you don't feel hopeless anymore' (Laing 1990, p. 165). However, the concept of the service user as toutai may not currently 'fit' with the historical and paternalistic notion that the 'doctor knows best'. According to Kanaan (2009), the discipline of psychiatry has long been viewed as paternalistic and underpinned by the notion of 'insight', which perpetuates the impression that service users lack the capacity to make their own decisions (Cavelti et al. 2012; Hamilton & Roper 2006), thereby heightening concerns about risk to self and others if current palagi (western) protocols and care pathways are not followed. For example, one health professional participant described how they encountered a person who did not take their advice to consult with the mental health services because their voices suggested that the 'staff want to kill you, so run away from them'. From a biomedical view, this would indicate that the person is at great risk to self or others due to a command hallucination, thus increasing the professional need to address the safety and risk aspects of care, whereas a cultural- and spiritual-based view of hearing voices would make sense once explained through an anthropological lens (Larøi et al. 2014). Balancing risk versus safety is imperative to support the person's informed consent processes and to increase both choice and shared power in collaborative care planning; however, care that is collaborative relies upon talanoa (story) being shared. Unfortunately, restrictive mental health legislation, stigmatizing attitudes, and the helping responses inherent in institutional racism may mean that non-medical involvement, such as traditional healers, is brought into care planning later rather than sooner. Harris et al. (2019) argue that experiences of institutional racism are higher among Māori, Pacific, and Asian groups compared with Europeans, acting as a barrier to, and influence on, the quality of healthcare. Racism is also present in policy development and in the contracting of services to effectively meet the needs of these populations (Came et al. 2020). Systems that bring a cultural approach to care can reduce restrictive practices and increase collaborative negotiation in care planning. Partnership between health professionals and Tongan traditional healers is vital for cultural care planning (Incawayar et al. 2009), being more socially accepting of distressing experiences and often more accessible for service users and their families (Ibrahim Awaad et al. 2020).
Similar to the inclusion of the peer workforce over time (Vandewalle et al. 2018), the increased engagement of traditional healers as peers alongside health professionals will ensure an authentic and meaningful contribution to the cultural care of the person, possibly liberating professionals from the restrictive roles and patterns of care that perpetuate the stigma and racism inherent in our institutions (Came et al. 2020). The power of metaphors within a new practice of ūloa breaks from traditional approaches, and consideration needs to be given to how novel, symbolic, and metaphorical language will impact health professionals' clinical reasoning and formulation against the current western diagnostic criteria (American Psychiatric Association 2013). As Vaka et al. (2020, p. 4) argue, from a Tongan spiritual perspective the person is regarded as a whole and 'perfect form' who is now broken or damaged; the findings of this study therefore suggest that healing requires a cultural perspective to be embedded into the practice of health workers, along with the cultural humility to recognize their role in healing. However, participants in the Tongan health professionals group felt conflicted working between the traditional Tongan healing approach and the medical model, thus closing their options to explore how ūloa could be practised. Conversely, if too many health professionals are involved, the Tongan worker 'backs out'. We suggest that mental health nurses can demonstrate cross-cultural leadership in the implementation of ūloa, thus challenging the task-oriented approach. As one participant stated in the previous study, we need to refocus on the 'ripples in the water, rather than the focus only on the pebble' as the source of continuing distress for service users. Hence, the role of the toutai can also be ascribed to leading culturally informed collaborative care, so that health workers gain trust and respect by demonstrating cultural humility in their key position with the Tongan community. However, this requires health workers to defer leadership in care planning to the service user, their traditional healer, and family as a collaborative unit, 'bringing everyone in as village'. Though safety and risk will always be present in dynamic health systems, options for respite facilities where the healer lives with the person and family and brings rituals such as kava for healing are likely to reduce risk and the restrictive use of the mental health act, and ultimately to reduce discrimination and stigma, including self-stigma by the person. As argued earlier, Cavelti et al. (2012) assert that self-stigma has a detrimental effect on a person's insight, arguing that self-stigma needs to be addressed with interventions that increase self-concept to reduce dysfunctional, or troubling, beliefs related to mental distress. Ūloa and the role of the traditional healers have a vital part to play in achieving this outcome.

CONCLUSION
Health professionals' understanding of how the Tongan model of ūloa can inform their practice will provide a framework to strengthen their skills in safely navigating Tongan relationships. Napanapangamālie describes the person's worldview within the Pacific culture of voyaging and fishing the oceans; therefore, the use of the metaphor of 'navigation' is central to collaborative care. The incorporation of ngāue fakataha, effective and culturally informed communication underpinned by the vā, will support the practice of health professionals.
Like the fishing net, ūloa supports practice that attends to the whole task, increases awareness of all the cultural connections in treatment planning, and promotes restoration of the toutai role to service users, teaching them how to fish.

RELEVANCE FOR CLINICAL PRACTICE
Some limitations were experienced by the researchers, such as the challenges of working with two languages; these were handled carefully, with the researcher translating the data, followed by discussion with two peer Tongan researchers and expert translators for validation. The findings continue to show that the conventional biomedical approach used in the mental health services overlooks elements of Tongan constructions of mental illness and the intersections between Tongan and biopsychosocial themes. The notion of care based on ūloa creates an opportunity to critique the dichotomy between the biomedical and the psychosocial-spiritual approaches in current mental health and addiction care. The findings support that care based only on the 'medicine', rather than bringing the spiritual aspect into care planning, will not serve the needs of the Tongan community.
Effect of dietary tryptophan supplementation on growth performance, immune response and anti-oxidant status of broiler chickens from 7 to 21 days

Abstract
Background: This study was conducted to investigate the optimum dietary level of tryptophan (Trp) supplementation at which broiler chickens show better growth together with an efficient immune system and anti-oxidant status.
Method: One hundred and twenty (n = 120) 1-day-old broiler chicks were fed a common commercial diet from days 1 to 7. On day 7, the chicks were randomly divided into three treatment groups: Trp 0.2 [the national research council (NRC) recommended level of tryptophan], Trp 0.3 (tryptophan supplemented at 0.3%) and Trp 0.5 (tryptophan supplemented at 0.5%). All the experimental diets were iso-caloric (ME: 3,000 kcal/kg) and iso-proteic (CP: 18.5%). Weekly data on feed intake and body weight gain (BWG) were recorded to calculate the feed conversion ratio (FCR). On day 19, avian tuberculin was injected to assess cellular immunity. On day 21, two birds per replicate were killed to determine carcass and visceral organ weights. Blood serum samples were collected for analysis of the humoral immune response against sheep red blood cells and of total oxidant and anti-oxidant status by spectrophotometric methods.
Results: Feed intake, carcass and visceral organ weights remained unaffected by the dietary treatments, while BWG and FCR improved (p < .05) in broiler chicks fed the Trp 0.3 and Trp 0.5 diets. Total oxidant status was also improved (p < .05) in broiler chicks fed the Trp 0.5 diet. Likewise, broiler chicks fed the Trp 0.3 and Trp 0.5 diets had better (p < .05) total anti-oxidant status, catalase, glutathione peroxidase, glutathione reductase and arylesterase (ARE). The overall antibody response and IgG improved (p < .05) on the Trp 0.3 and Trp 0.5 diets compared with the control, whereas the IgM level remained similar across the treatments. Cellular immunity against avian tuberculin improved at 24 hr post-injection, but the effect disappeared at 48 hr.
Conclusion: The results of the present study revealed that Trp above the NRC recommended level may give better growth, immune response and anti-oxidant status in broiler chickens.

INTRODUCTION
Modern poultry production encounters various stresses, especially the nutritional stress imposed by high dietary levels of polyunsaturated fatty acids, mycotoxins, and vitamin and mineral imbalances. Moreover, the broiler chicken has been improved genetically over the years for a fast growth rate, which is associated with rapid cell proliferation; consequently, the level of reactive oxygen species (ROS) increases, leading to oxidative stress (Surai, 2015). The highly reactive and unstable nature of ROS is of great biological concern due to their detrimental effects on cellular membranes, DNA and RNA. Thus, ROS may create stress in the body, disturbing many metabolic and immunological pathways (Halliwell & Gutteridge, 1999). Many studies suggest that the deficiency of dietary nutrients, especially amino acids, can impair the immune system and cellular redox status (Li, Yin, Li, Kim, & Wu, 2007). Some indispensable amino acids, in particular methionine and tryptophan (Trp), have been reported to play an important role in the prevention of oxidative stress. Trp is converted to 5-hydroxytryptophan (5-HT), which helps preserve membrane fluidity in chickens, whereas oxidative stress impairs membrane fluidity.
5-HT has beneficial effects on the enzymatic and non-enzymatic anti-oxidant capacity (Dong, Azzam, Rao, Yu, & Zou, 2012; Yue, Guo, & Yang, 2017). Tryptophan deficiency leads to depressed body weight gain (BWG), lowered feed intake and a poor feed conversion ratio (FCR), along with compromised antibody status (Mozhdeh et al., 2010). Tryptophan is a structural component of protein as well as a major precursor of the hormones serotonin and melatonin, which play an important role in the maintenance of normal physiological processes, for example, tissue synthesis, feed intake, growth performance, FCR and immunity in broiler chickens (Bai et al., 2017). Further, Trp is also involved in niacin biosynthesis in poultry (Richard et al., 2009). Serotonin is a vital neurotransmitter that improves environmental adaptability and alleviates oxidative stress (Martin et al., 2000). It is also an important mucosal signalling molecule produced by the enterochromaffin cells in the gut and is related to numerous pathophysiological processes (Coates et al., 2004). Synthetic amino acids, especially lysine, methionine, threonine and Trp, are regularly supplemented in corn-soybean meal diets, and the response of these amino acids on growth performance has been evaluated extensively. However, there is a need to determine the optimum dietary Trp supplementation level that can enhance growth performance together with an efficient immune response. Therefore, the present study was designed to investigate the optimum level of Trp and its effect on growth performance, immune response and serum parameters in broiler chickens from days 7 to 21.

Bird management and experimental diets
One hundred and twenty (n = 120) 1-day-old broiler chicks were reared in a group and fed a commercial diet for 1 week. At the end of the first week, the chicks were divided into three groups such that each group had four replicates with 10 chicks in each replicate. The birds were brooded at 95°F for the first week, and the temperature was lowered by 5°F per week until 85°F was attained. The experimental diets (Table 1) were fed ad libitum. Clean, fresh drinking water was available at all times. The experiment lasted 21 days.

Performance data
Growth performance in terms of feed intake, BWG and FCR was recorded weekly. On day 21, two chicks per replicate were selected randomly and killed to evaluate carcass characteristics (thigh and breast meat percentage) and visceral organ weights.

Cellular response
Two birds per replicate were inoculated with 0.1 ml of avian tuberculin (Veterinary Research Institute, Lahore, Pakistan) and 0.1 ml of normal saline between the third and fourth interdigital space of the right and left foot, respectively, on day 19 of the trial. The inflammatory response was measured using a screw gauge at 24 and 48 hr after injection, and the results were interpreted following the method of Corrier and DeLoach (1990).

Humoral response
On day 14, two birds per replicate were administered intravenously a 3% suspension of sheep red blood cells to evaluate the humoral immune response in terms of overall antibody response, IgM and IgG. Seven days after injection, blood samples were collected in test tubes and centrifuged at 402 g for 20 min to collect the serum, which was frozen at −20°C until further analysis (Delhanty & Solomon, 1966).
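Before the results, the following minimal sketch shows the FCR arithmetic applied to the weekly performance records described above. The feed intake and gain values are hypothetical, not the study's data; FCR is simply feed consumed per unit of body weight gained, so lower is better.

```python
# Hypothetical weekly pen records for one replicate (per-bird basis), days 7-21.
feed_intake_g = [450.0, 780.0]   # feed consumed in weeks 2 and 3 (g/bird)
bwg_g         = [310.0, 520.0]   # body weight gain in weeks 2 and 3 (g/bird)

for week, (feed, gain) in enumerate(zip(feed_intake_g, bwg_g), start=2):
    fcr = feed / gain            # feed conversion ratio for that week
    print(f"Week {week}: FCR = {fcr:.2f}")

overall_fcr = sum(feed_intake_g) / sum(bwg_g)
print(f"Days 7-21 overall FCR = {overall_fcr:.2f}")
```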
The Trp 0.3 and Trp 0.5 diets increased (p < .05) BWG and improved FCR compared with the control group that was supplemented with 0.2% Trp (Table 2). | Carcass characteristics Carcass characteristics (thigh and breast meat) and visceral organ weights (liver, kidney, spleen, heart and intestine) remained unchanged by the dietary treatments (Table 3). | Oxidant and anti-oxidant status The anti-oxidant status of the chicks improved (p < .001) compared with the control group, while no difference was noted between the Trp 0.3 and Trp 0.5 groups. The total oxidant status was lowered (p < .05) at the 0.5% level of Trp supplementation. Likewise, catalase activity improved (p < .05) in chicks fed the Trp 0.5 diet. In addition, glutathione reductase and glutathione peroxidase levels improved (p < .001), along with ARE activity (p < .001), in chicks fed the Trp 0.5 diet compared with the control group (Table 4). | Immune response The data demonstrated that at 24 hr after avian tuberculin inoculation, Trp supplementation increased (p < .01) the inflammatory response. However, the inflammatory response had disappeared by 48 hr after injection (Table 5). The results for the humoral immune response showed that the overall antibody titre and IgG increased (p < .01) with increasing dietary Trp, but the IgM titre did not differ among the dietary treatments (Table 6). | DISCUSSION In this study, feed intake remained unchanged by the dietary treatments, but BWG and FCR were improved. Similarly, no effect was noticed on carcass traits. These findings are supported by previous studies (Duarte et al., 2013; Mr & Azam, 2014) in which Trp supplementation elicited a similar response. Regarding feed intake, Rosebrough (1996) reported that feed intake decreased when broiler chickens were fed a diet containing low crude protein and an excess level of supplemental Trp. Duarte et al. (2013) also concluded that Trp had no effect on feed intake in broiler chickens. The present result might be due to the crude protein level (18.5%) in the diet. Trp is a chief molecule controlling behaviour and physiological functions, and may ultimately be required above the national research council (NRC) recommendation for maximum weight gain and FCR in broilers (Cortamira, Seve, Lebreton, & Ganier, 1991; Dong & Zou, 2017; Rosa & Pesti, 2001). The present study showed that Trp increased cellular and humoral immunity in broiler chickens, in line with earlier reports (Sanchez, Sanchez, Paredes, Rodriguez, & Barriga, 2008; Emadi et al., 2011; Mozhdeh et al., 2010). Gershoff, Gill, Simonian, and Steinberg (1968) observed that a deficiency of Trp decreased antibody production. Esteban et al. (2004) indicated that the synthesis of serotonin and melatonin, as well as the innate immune response, can be modulated by Trp. In conclusion, Trp is the third or fourth limiting amino acid in poultry diets. The study proposes that the dietary Trp supplementation level should be greater than the NRC (1996) recommendation. Tryptophan levels of 0.3% and 0.5%, compared with 0.2%, improved growth performance but had no effect on carcass characteristics. Tryptophan above the NRC recommendation improved the anti-oxidant status and humoral and cellular immunity in broiler chickens from days 7 to 21. ACKNOWLEDGEMENT Thanks to UM Enterprises for the provision of L-Tryptophan. CONFLICT OF INTEREST There is no conflict of interest.
ETHICAL STATEMENT All the experimental procedures in this experiment were approved by the university animal ethics committee.
2019-11-07T14:10:53.878Z
2019-11-05T00:00:00.000
{ "year": 2019, "sha1": "29b4445fefce9143cdaea166407c1825af8be628", "oa_license": "CCBYNC", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/vms3.195", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d63db3192ed0be42d489545c6b52da265571c346", "s2fieldsofstudy": [ "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
251325913
pes2o/s2orc
v3-fos-license
Regulatory role of RNA N6-methyladenosine modifications during skeletal muscle development Functional cells in embryonic myogenesis and postnatal muscle development undergo multiple stages of proliferation and differentiation, which are strict procedural regulation processes. N6-methyladenosine (m6A) is the most abundant RNA modification that regulates gene expression in specific cell types in eukaryotes and regulates various biological activities, such as RNA processing and metabolism. Recent studies have shown that m6A modification-mediated transcriptional and post-transcriptional regulation plays an essential role in myogenesis. This review outlines embryonic and postnatal myogenic differentiation and summarizes the important roles played by functional cells in each developmental period. Furthermore, the key roles of m6A modifications and their regulators in myogenesis were highlighted, and the synergistic regulation of m6A modifications with myogenic transcription factors was emphasized to characterize the cascade of transcriptional and post-transcriptional regulation during myogenesis. This review also discusses the crosstalk between m6A modifications and non-coding RNAs, proposing a novel mechanism for post-transcriptional regulation during skeletal muscle development. In summary, the transcriptional and post-transcriptional regulatory mechanisms mediated by m6A and their regulators may help develop new strategies to maintain muscle homeostasis, which are expected to become targets for animal muscle-specific trait breeding and treatment of muscle metabolic diseases. Introduction The skeletal muscle is mainly composed of a large number of muscle fibers, a small amount of adipose tissue, and connective tissue, which are highly heterogeneous striated muscles that contain muscle cells, immune cells, and nerves. Accounting for approximately 40% of an animal's body weight, the skeletal muscle is the organism's largest motor and metabolic organ and plays a vital role in metabolism and energy balance (Mayeuf-Louchart et al., 2015; Cong et al., 2020). Embryonic myogenesis and postnatal muscle development are precisely regulated by multiple mechanisms that involve myoblast progenitor cell proliferation, differentiation, and fusion to form muscle fibers. Skeletal muscle development involves multiple stages of proliferation and differentiation, and research on its formation process and molecular regulation mechanisms has always been a hot topic in the field of molecular genetics. Currently, research on skeletal muscle growth and development mainly focuses on the functional identification of key genes and the post-transcriptional regulatory mechanisms mediated by non-coding RNAs (such as lncRNA, circRNA, and microRNA) (Zhao et al., 2011). In addition to being regulated by a series of specific transcription factors and signaling pathways, epigenetic modifications are also involved in a variety of biological processes in muscle development (McKinnell et al., 2008; Yang et al., 2021). As the most common methylation modification in RNA (Deng et al., 2018), N6-methyladenosine (m6A) represents a new type of post-transcriptional gene regulation, which is tissue specific and spatio-temporally specific. To date, more than 150 different RNA modifications have been identified (Yang et al., 2018), which play an important role in tissue development and homeostasis by controlling cell state transitions (Frye et al., 2018).
The m6A modification is the most abundant methylation modification in eukaryotic cells, and it is widely distributed in mRNA and non-coding RNA and can regulate gene expression through "epitranscriptomics" without changing the sequence of RNA molecules (Roundtree et al., 2017a; Helm and Motorin, 2017; Fazi and Fatica, 2019). m6A modifications in eukaryotic RNA are reversible (Jia et al., 2012) and can be deposited, removed, and recognized by a series of methyltransferase complexes (writers), demethylases (erasers), and m6A-binding proteins (readers). They are involved in the regulation of biological processes, such as disease occurrence (Han et al., 2019; Wang et al., 2020), embryonic development (Mendel et al., 2018), tissue development, and cell proliferation and differentiation (Zhang et al., 2017; Lee et al., 2019) by regulating RNA metabolic activities such as precursor RNA splicing (Haussmann et al., 2016), mRNA translocation (Fustin et al., 2013), stability (Geula et al., 2015), and translation (Coots et al., 2017). However, the post-transcriptional mechanisms of m6A modifications in the regulation of muscle development remain largely unknown. Recent studies have shown that m6A methylation plays an important role in muscle stem cell maintenance, myocyte proliferation, and cell differentiation (Kudou et al., 2017; Wang et al., 2017; Dorn et al., 2019; Mathiyalagan et al., 2019; Gheller et al., 2020; Lin et al., 2020; Zhang et al., 2020). This review aims to provide an overview of the mechanisms of myogenesis under transcriptional and m6A methylation-mediated post-transcriptional regulation and emphasizes the important role of m6A modification and its regulators in coordinating functional signaling factors at different stages of skeletal muscle development. Based on the latest research in epigenetic modifications, this study also discusses the interaction between m6A modification and non-coding RNA and explores different levels of co-regulation in myogenesis. 2 Biological characteristics of skeletal muscle development Both embryonic myogenesis and postnatal muscle development undergo a series of cell proliferation and differentiation processes from the embryonic stage to early growth and development, and mesoderm cells undergo repeated mitosis and massive proliferation to form mononuclear myoblasts. Furthermore, myoblasts proliferate and fuse into multinucleated myotubes (Velleman et al., 2013). During myogenesis, mononuclear myoblasts withdraw from the cell cycle, lose their ability to divide, and migrate from the cell center to the cell membrane to form myofibers (Picard et al., 2010). Single myoblasts that cannot be fused are isolated between the myofiber basement membrane and muscle cell membrane and finally form muscle satellite cells (MuSCs) with stem cell characteristics. The proliferative capacity of MuSCs decreases with age during animal growth and development. However, MuSCs can proliferate and undergo myogenic differentiation when myofibers are damaged, participating in their repair and regeneration, thus ensuring normal growth and development of animals. Embryonic myogenesis The genesis and development of the skeletal muscle in the animal embryo is a complex physiological and biochemical process that primarily includes muscle cell genesis, myofiber formation and maturation, and the accumulation of myofiber number (Figure 1). The skeletal muscles of vertebrates originate from the paraxial mesoderm during embryonic development (Buckingham and Vincent, 2009).
The paraxial mesoderm differentiates to produce somites, which act as the "helmsman of fate" and control the direction of myogenesis and osteogenesis (Biressi et al., 2007). The somite matures and divides again during embryo development. Driven by the sonic hedgehog (Shh) signal secreted by the neural tube and notochord, part of the dorsal epithelial cells develops into the dermomyotome (Buckingham, 2001). Pluripotent stem cells located in the somite differentiate into muscle progenitor cells (MPCs) under the influence of various signaling molecules and the embryonic environment. With the proliferation of dermomyotome cells, the number of MPCs increases continuously, and the MPCs in the middle of the dermomyotome continue to migrate downward to form the first skeletal muscle tissue: the myotome. MPCs are further exfoliated from the dermomyotome and fused with the myotome to form skeletal muscle, and some MPCs migrate to the extremities to form the limb skeletal muscle (Parker et al., 2003; Jensen et al., 2010). During primary muscle development, MPCs in the myotome expressing Pax3, Pax7, and low levels of Myf5 delaminate from somites and migrate to more distant muscle tissues to form mononuclear myoblast precursors, which are then induced by myogenic determinants to differentiate further into myoblasts (Kassar-Duchossoy et al., 2005; Bajard et al., 2006). During the development of the secondary muscle in the later stage of the embryo, myoblasts proliferate, and the expression of the muscle differentiation factor, myogenin, and other genes increases (Tajbakhsh and Buckingham, 2000). After a series of proliferative divisions, myoblasts irreversibly withdraw from the proliferation cycle. They then undergo terminal differentiation by expressing muscle-specific proteins and different types of cell adhesion factors and fuse to form fusiform multinucleated myotubes containing non-striated myofibrils (Walsh and Perlman, 1997; Sabourin and Rudnicki, 2000). The multinucleated cells already contain myofibrils composed of myosin and actin. When myofibril filaments are arranged in rows to produce transverse striations, the myotubes further develop into mature myofibers, forming skeletal muscle with relatively complete structure and function (Buckingham et al., 2003; Buckingham, 2006; Buckingham and Relaix, 2007). The embryonic stage is the main period in which myofiber number is fixed and differentiation occurs in most mammals. Postnatal skeletal muscle development The number of skeletal myofibers after birth is fixed, and muscle growth is mainly derived from increased cell volume (Sassoon, 1993). In this process, the length of myofibers increases with the length and number of sarcomeres, and at the same time, myofibrils increase the diameter of myofibers through multiple divisions. Notably, MuSCs are essential for the developmental growth of myofibers, as they can replenish the nuclei of the postnatal myocyte pool, thus contributing to the increase in myonuclei during the early postnatal stage (Pallafacchina et al., 2013; Fukada and Ito, 2021). When some MuSCs divide and proliferate, their nuclei fuse with myofibers, maintaining the relative balance between myocyte nuclei and cytoplasm and promoting myocyte enlargement, thus causing the skeletal muscle to exhibit a growth state (Pallafacchina et al., 2013; Fukada and Ito, 2021). MuSCs are pre-eminent stem cells that maintain the regeneration of postnatal myofibers and are generally in a mitotically quiescent state (Partridge, 2004; Dhawan and Rando, 2005).
However, when mature myofibers are damaged, MuSCs can be activated and re-enter the myogenic pathway, differentiate into myoblasts, and fuse to produce new myofibers after a series of proliferation and differentiation processes (Collins et al., 2005; Dhawan and Rando, 2005; Le Grand and Rudnicki, 2007; Tedesco et al., 2010) (Figure 1). The growth and development of the postnatal skeletal muscle are accompanied by the transformation and maturation of myofiber types. According to the different ATPase activities expressed in myofibers, myofibers are divided into slow oxidative, fast oxidative, fast glycolytic, and intermediate oxidative types (Schiaffino and Reggiani, 1996). It has been discovered that protein synthesis and degradation rates are higher in slow myofibers than in fast myofibers (Li and Goldberg, 1976). Furthermore, muscle activity is mainly reflected in protein metabolism, and the regulation of muscle mass and myofiber size mainly depends on the balance between protein synthesis and degradation in myofibers (Goll et al., 2008; Yin et al., 2021). When myofibers are stimulated by load or anabolic hormones after development and maturity, the total protein synthesis rate of the skeletal muscle is greater than the degradation rate, and the size of myofibers increases to promote muscle growth. When an organism is starved, diseased, or stimulated by catabolic hormones, the synthesis rate of skeletal muscle protein is reduced, resulting in the loss of the balanced state of synthesis and degradation and therefore decreased skeletal muscle mass and muscle atrophy (Burd et al., 2010; Goodman, 2014; Mckendry et al., 2021; Sartori et al., 2021). The critical role played by muscle cells Totipotent muscle cells (myocytes) exist throughout skeletal muscle growth and development during the embryonic and postnatal stages of vertebrates, and the myocytes in the somites begin to be active in the early stages of myogenesis. During various stages of animal life, myocytes promote the formation and development of muscle tissue through a series of proliferation and differentiation processes (Figure 1). Myoblasts Myoblasts are derived from muscle progenitor cells, which are the basic materials for myofiber formation. The proliferation and differentiation of myoblasts play an important role in muscle development. After muscle injury, quiescent MuSCs activate and commit to the myoblast fate and further proliferate, fuse, and eventually mature into myofibers, restoring the contraction ability of injured muscles. Therefore, myoblasts are the driving force of skeletal muscle development and an effective tool for treating many diseases with poor prognoses, such as clinical muscular atrophy. During secondary muscle development, myoblast proliferation and fusion are regulated by a variety of molecular networks. Myoblasts play a decisive role in muscle development under the direct or indirect action of a series of regulatory factors. Owing to the fundamental constitutive role of myoblasts in muscle development, maintenance, and adaptation, the study of the regulatory mechanisms of myoblast differentiation has become a key direction in skeletal muscle growth and development research, and as an in vitro model, myoblasts are widely used in studies of muscle growth, differentiation, migration, apoptosis, and other related processes (Sharples and Stewart, 2011; Cai et al., 2020).
Satellite cells MuSCs are myoblast precursor cells with the ability to proliferate and self-renew in the later stages of embryonic development and play an essential role in the growth and regeneration of newborn muscle (Chargé and Rudnicki, 2004; Endo, 2007; Relaix and Zammit, 2012; Yin et al., 2013). MuSCs are in the G0 phase and typically remain quiescent. When the skeletal muscle is stimulated externally, the basement membrane secretes the hepatocyte growth factor (HGF), which binds to receptors on the surface of MuSCs and activates quiescent-stage MuSCs (Tatsumi et al., 1998). Furthermore, under the induction of HGF and other related factors, some activated MuSCs re-enter the cell cycle and migrate along the myofiber to the damaged site (Bischoff, 1997). MuSCs proliferate massively after migrating to the injury site and generate sufficient myoblasts, which subsequently proliferate and fuse under the regulation of a series of muscle-specific factors to further develop into mature myofibers for skeletal muscle growth, maintenance, and regeneration (Halevy et al., 2004; Zammit et al., 2004; Rizzi et al., 2012). Inactive MuSCs re-enter the quiescent state and become reserve cells for the next cell cycle (Zammit et al., 2004). The self-renewal and myogenic differentiation of MuSCs maintain a dynamic balance during muscle development, which is essential for the normal function of MuSCs and the maintenance of homeostasis in the internal environment (De Luca et al., 2013). In summary, the ability of MuSCs to remain quiescent plays a crucial role in the long-term maintenance of a functional stem cell pool during skeletal muscle development and regeneration. Myoblasts and MuSCs are the most important functional cells involved in muscle growth and development. The dynamic balance between their proliferation and differentiation is critical for maintaining the fate of skeletal muscle cells, thus ensuring normal growth and development of the organism. 3 RNA m6A-modified enzyme system m6A is the most characteristic methylation modification in eukaryotic RNA, in which three key classes of proteins (writers, erasers, and readers) are involved in maintaining the dynamic balance of its modification. m6A methyltransferase (writers) m6A methyltransferase is composed of the m6A-METTL core complex (MAC) and m6A-METTL-associated complex (MACOM) (Lence et al., 2019), and multiple subunits of the complex are co-transcribed and bound to RNA to catalyze methylation. The m6A-METTL complex includes methyltransferase 3 (METTL3) and methyltransferase 14 (METTL14), which can form stable heterodimers in vitro. METTL3 is highly conserved, including the SAM-binding domain and methyltransferase active domain, which catalyzes the formation of m6A (Wang et al., 2016a). METTL14 lacks the catalytic active domain of the enzyme but can promote the binding of METTL3 and RNA. When METTL14 and METTL3 bind, the methylation catalytic ability of the complex is significantly enhanced (Wang et al., 2016b). Further studies have shown that some m6A-METTL-associated complexes play an important role in guiding the methylation of specific target sites. Wilms' tumor 1-associated protein (WTAP) lacks methylase activity, but it can promote m6A deposition by recruiting the METTL3-METTL14 complex to localize in nuclear plaques (Ping et al., 2014). Vir-like m6A methyltransferase associated (VIRMA) can promote the specific deposition of m6A in the 3′UTR (Yue et al., 2018).
KIAA1429 also has m6A methylation catalytic activity and affects splicing by regulating the level of m6A (Schwartz et al., 2014). RNA-binding protein 15 (RBM15) and its analog RBM15B contain three RNA recognition motif (RRM) domains that interact with the WTAP-METTL3 complex at specific sites to promote m6A methylation (Yang et al., 2018). The CCCH-type zinc finger protein 13 (ZC3H13) regulates the nuclear localization of the WTAP-Virilizer-Hakai complex and the self-renewal of mouse embryonic stem cells (ESCs) by promoting m6A methylation (Wen et al., 2018). Although the m6A modification mechanism of the key complex has been defined, future studies may identify additional subunits of the methyltransferase complex that promote or inhibit the occurrence of m6A by recognizing specific loci and thereby precisely regulating gene expression. m6A demethylase (erasers) The m6A modification is a dynamic and reversible regulatory process whose activity can be counteracted in gene regulation by the demethylases fat mass and obesity-associated (FTO) and ALKBH5. FTO was the first enzyme discovered to have m6A demethylation activity and belongs to the Fe(II)- and α-KG (ketoglutaric acid)-dependent ALKB dioxygenase family. FTO exhibits catalytic activity both in vitro and in vivo. During demethylation, m6A is catalyzed to form N6-hydroxymethyladenosine (hm6A) and N6-formyladenosine (fm6A), which are extremely unstable and eventually decompose into adenine (A) (Jia et al., 2012; Fu et al., 2013). In an intracranial glioma model in nude mice, mice bearing FTO shRNA-1-infected U251 cells had significantly shorter survival times, and when mice were co-infected with FTO-Mut, they survived longer. This suggests that FTO inhibits the in vivo progression of gliomas (Tao et al., 2020). An ovariectomized mouse model demonstrated that FTO can promote osteoporosis by demethylating the osteogenic marker Runx2 mRNA. In C57BL/6N mice, FTO deficiency results in weight loss, a marked reduction in white adipose tissue, and promotion of the conversion of white adipocytes to brown or beige adipocytes (Ronkainen et al., 2015). Moreover, FTO-mediated mRNA m6A demethylation can affect preadipocyte differentiation and lipid deposition by regulating the expression of fat-related genes, such as C/EBPβ, PPARγ, and ANGPTL4, and plays an important regulatory role in lipid metabolism and lipid disorders (Yang et al., 2022b). It has been shown that inhibiting the expression of FTO can considerably increase the total m6A level of polyadenylated RNA (Jia et al., 2012). Additionally, FTO also uses N6,2′-O-dimethyladenosine (m6Am) on single-stranded RNA as a substrate and shows higher demethylase activity toward it (Mauer et al., 2017). ALKBH5, also derived from the ALKB protein family, is the second m6A demethylase discovered, and its catalytic activity is similar to that of FTO; however, it can directly demethylate m6A to A in one reaction. ALKBH5 acts on specific sequences, showing a preference for m6A in consensus sequences, and can completely remove m6A methylation modifications from single-stranded RNA (Zheng et al., 2013). m6A-binding protein (readers) Following m6A modification in eukaryotes, the respective downstream biological functions require specific recognition by reader proteins to proceed normally. Currently known m6A-binding proteins include the YT521-B homology (YTH) family, IGF2BP, and HNRNP.
The YTH domain family members mainly include YTHDF1/2/3 and YTHDC1/2, which contain a YTH domain that can selectively bind to the m6A site on RNA (Theler et al., 2014; Hsu et al., 2017). YTHDF2 has a strong binding ability, which can identify m6A sites and regulate the degradation of modified transcripts (Wang et al., 2014a). Studies have shown that YTHDF2 can enter the nucleus during heat stress stimulation, prevent FTO from demethylating m6A in the 5′UTR, and promote translation in a non-cap-dependent manner (Zhou et al., 2015). YTHDF1 can interact with the translation initiation factor, eIF, to enhance the translation efficiency of m6A-modified genes by promoting ribosome enrichment of methylated transcripts (Wang et al., 2015). YTHDF3 is the first reader protein that binds to nuclear-exported m6A-modified RNA, assisting YTHDF1 and YTHDF2 in regulating the translation or degradation of target genes, respectively (Li et al., 2017; Shi et al., 2017). However, recent studies have found that YTHDF1, YTHDF2, and YTHDF3 directly affect mRNA degradation in an m6A-dependent manner but do not participate in the translation regulation of mRNA (Lasman et al., 2020). YTHDC1, located in the nucleus, is highly conserved and can selectively recruit the pre-mRNA splicing factor SRSF3 and promote its binding to m6A-modified mRNA. Furthermore, it inhibits the binding of SRSF10 to mRNA, promotes the retention of exons modified by m6A, and regulates mRNA splicing (Xiao et al., 2016). Additionally, YTHDC1 interacts with SRSF3 and nuclear RNA export factor 1 to promote the nuclear export of m6A-modified mRNA, which plays an important role in the metabolic regulation of mRNA (Roundtree et al., 2017b). YTHDC2 preferentially binds to m6A in a consensus motif to improve translation efficiency and reduce the abundance of its target mRNA (Yang et al., 2018). The insulin-like growth factor-2 mRNA-binding protein (IGF2BP) family consists of three homologous coding genes, IGF2BP1, IGF2BP2, and IGF2BP3, which contain an RNA recognition domain and a ribonucleoprotein K domain. IGF2BPs promote the stability and storage of target mRNAs in an m6A-dependent manner through the ribonucleoprotein K domain. Finally, a family of HNRNP proteins mediates the "m6A-switch" mechanism, and its member, HNRNPA2B1, can directly bind to m6A and cooperate with METTL3 to regulate alternative splicing events and primary microRNA processing. The other two members, HNRNPC and HNRNPG, do not directly bind to m6A but regulate the processing of RNA transcripts containing m6A (Yang et al., 2018). 4 Regulation of skeletal muscle growth and development by RNA m6A modifications m6A modification regulates various biological activities, such as RNA processing and metabolism in eukaryotes. Its dynamic and reversible mode of action may affect gene expression and cell fate by regulating various RNA-related cellular signaling pathways. Given the role of m6A modification in gene expression and the regulation of functional cellular mechanisms, there is emerging evidence that m6A modification and its regulators play an essential role in the growth and development of the skeletal muscle (Kudou et al., 2017; Wang et al., 2017; Gheller et al., 2020; Zhang et al., 2020; Deng et al., 2021a; Li et al., 2021; Petrosino et al., 2022) (Figure 2).
Therefore, this review focused on the cascade regulation of m6A modification during embryonic and postnatal skeletal muscle development to provide evidence for exploring a new mechanism of m6A regulation of myogenesis. Embryonic m6A methylation In recent years, many studies have shown that m6A plays an important role in regulating the embryonic development of eukaryotes. During early embryonic development in zebrafish, m6A modification promotes the maternal-to-zygotic transition through YTHDF2-dependent maternal mRNA clearance. In addition, m6A participates in the metabolism of mRNA in embryonic stem cells, maintains cell self-renewal, and regulates the state transition and pluripotency of embryonic cells as a determining factor of cell fate at the transcriptional level (Batista et al., 2014). The regulatory mechanism of m6A in embryonic cell fate, which exists in a tissue-specific form at different stages of embryonic development, has been continuously explored. It maintains the dynamic balance between biological processes by regulating functional gene transcription and post-transcriptional expression. Determining the number of skeletal muscle fibers in animal embryos involves complex regulatory mechanisms, and the expression of key muscle-specific transcription factors is precisely regulated by m6A methylation (Figure 2). Therefore, in-depth exploration of the potential mechanism of m6A in embryonic skeletal muscle development is of key interest in developmental muscle biology. Analysis of two key stages of pectoral muscle development in Dingan goose embryos revealed a negative correlation between m6A methylation and gene expression (a minimal sketch of this type of enrichment-versus-expression analysis appears below). Moreover, most m6A-modified differentially methylated genes were significantly enriched in muscle-related pathways, such as the Wnt, mTOR, and FoxO signaling pathways (Xu et al., 2021). Combined with miRNA-seq, a potential m6A-miRNA-PDK3 axis was screened, which revealed the key role of m6A-modified miRNA in muscle growth and development of Dingan goose embryos (Xu et al., 2021). Analysis of m6A distribution in goat muscles at two key developmental stages revealed that the m6A peak in the longissimus at embryonic day 75 was significantly higher than that in the newborn stage, and m6A-modified genes were mainly enriched in actin binding, myotube differentiation, MAPK, Wnt, and other signaling pathways related to skeletal muscle development (Deng et al., 2021a). During the differentiation of goat primary myoblasts, FTO expression was negatively correlated with global m6A levels. Following FTO knockdown in myoblasts, m6A levels of GADD45B mRNA were increased, whereas its protein expression and the phosphorylation level of p38 MAPK were significantly decreased, and myotube formation was attenuated. This demonstrated that FTO-mediated m6A demethylation of GADD45B activates the p38 MAPK pathway, which in turn promotes myogenic differentiation of goat skeletal muscle (Deng et al., 2021a). The expression of IGF2BP1 is continuously downregulated across the six stages of embryonic pig skeletal muscle growth and development. Combined with RIP-seq, the m6A-modified myogenic marker genes MYH2 and MyoG were identified as target genes of IGF2BP1. Loss-of-function experiments were performed in myoblasts, and it was found that knocking down IGF2BP1 significantly downregulated MYH2 and MyoG mRNA expression and significantly inhibited myotube formation.
The same phenotypic changes were observed with METTL14 knockdown, demonstrating that m6A is a key epigenetic factor in embryonic myogenesis. These results suggest that dynamic changes in m6A modification levels during embryonic skeletal muscle development play an essential role in regulating myogenesis. Postnatal m6A methylation Studies have found that m6A methylation has both shared and distinct regulatory functions at different developmental stages of the organism, and it also plays an essential role in postnatal skeletal muscle development and muscle regeneration (Figure 2). Through whole-transcriptome m6A methylation map analysis of the muscle tissue of wild boar and Landrace and Rongchang pigs, it was found that most of the nuclear-related genes containing m6A encode transcription factors, indicating that m6A modification is involved in transcriptional regulation, and the two coordinately regulate gene expression (Tao et al., 2017). This is consistent with the results of a recent study on m6A profiles in the longissimus dorsi of Landrace and Jinhua pigs (Jiang et al., 2019), revealing the potential biological role of m6A modification in regulating muscle growth and development. The critical role of m6A modulators in myogenic differentiation A previous study found that after METTL3 knockdown in proliferating C2C12 cells, the overall levels of m6A modification decreased, resulting in premature differentiation of myoblasts, thus demonstrating that METTL3-mediated m6A methylation is an important regulator of myoblast state transition (Gheller et al., 2020). Both the mRNA and protein levels of METTL3/14 and WTAP were significantly downregulated during C2C12 cell differentiation and were negatively correlated with the expression of MHC and MEF2C. Consistent with the in vitro findings, the expression of METTL3/14 is significantly downregulated in mouse embryonic hindlimb muscles during skeletal muscle growth and development (Xie et al., 2021a). Through genome-wide expression and gain/loss-of-function analysis, METTL3/14-mediated m6A methylation was found to inhibit myogenic differentiation by enhancing MNK2-ERK signaling (Xie et al., 2021a). This was consistent with the findings of Liang et al. (2021) and Gheller et al. (2020), suggesting that METTL3/14-mediated m6A methylation has a common inhibitory effect on myogenesis. However, METTL3 knockdown in proliferative C2C12 cells significantly downregulated MyoD mRNA expression and inhibited myoblast differentiation, and METTL3-mediated m6A modification stabilized MyoD mRNA levels by promoting mRNA processing, thereby maintaining myoblast myogenic potential (Kudou et al., 2017). This is consistent with the findings of Zhang et al. (2020), who showed that METTL14 knockdown downregulates MyoD mRNA expression and significantly inhibits myotube formation. Considering these conflicting reports of METTL3/14 promoting or inhibiting myoblast differentiation, m6A modification may mediate the expression of myogenic transcription factors by regulating specific signaling axes, thereby inducing myogenic differentiation. In contrast, the factors upstream of m6A-modified myogenic transcription factors may be regulated by other transcriptional elements in multiple ways, leading to the opposite outcome for myogenic differentiation.
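The embryonic and postnatal m6A profiling studies cited above (the goose, goat, and pig MeRIP/m6A-seq analyses) share a common quantification step: gene-level m6A enrichment is estimated from immunoprecipitated (IP) versus input read counts and can then be related to expression. The sketch below illustrates this step under stated assumptions: the counts are hypothetical, the log-ratio definition of enrichment is one common convention rather than the published pipelines, and scipy is assumed for the correlation test.

# A minimal sketch (hypothetical counts, not the cited datasets) of
# gene-level MeRIP/m6A-seq quantification: normalize IP and input
# libraries, express m6A enrichment as a log2(IP/input) ratio, then test
# how enrichment relates to expression across genes.
import math
from scipy.stats import spearmanr

# Hypothetical read counts per gene: (IP library, input library).
counts = {
    "MYH2":    (850, 400),
    "MYOG":    (620, 310),
    "GADD45B": (480, 150),
    "PAX7":    (210, 260),
    "MEF2C":   (330, 180),
    "TBX2":    (95,  140),
}

ip_total = sum(ip for ip, _ in counts.values())
input_total = sum(inp for _, inp in counts.values())

enrichment, expression = [], []
for gene, (ip, inp) in counts.items():
    ip_cpm = ip / ip_total * 1e6        # library-size normalization
    input_cpm = inp / input_total * 1e6
    log2_enr = math.log2((ip_cpm + 1) / (input_cpm + 1))
    enrichment.append(log2_enr)
    expression.append(math.log2(input_cpm + 1))  # input as an expression proxy
    print(f"{gene}: log2(IP/input) = {log2_enr:+.2f}")

rho, p = spearmanr(enrichment, expression)
print(f"Spearman rho(m6A enrichment, expression) = {rho:.2f}, p = {p:.3f}")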
It has been suggested that m6A modification in myogenesis has multiple functions, but studies to date have only analyzed the potential mechanism of m6A modification-mediated differentiation induced by myogenic transcription factors. Future research needs to explore the key upstream/downstream factors co-regulated by RNA m6A methylation and myogenic transcription factors to explain the specific functional mechanism of m6A modification in regulating myogenic differentiation. FTO-mediated m6A plays an important role in fat mass and lipogenesis (Song et al., 2020). Considering that the skeletal muscle participates in the body's metabolic regulation and performs functions similar to those of adipose tissue, FTO can be expected to have an important regulatory effect on myogenic differentiation. FTO expression is increased during the differentiation of mouse myoblasts into myotubes, whereas FTO silencing inhibits differentiation, and skeletal muscle development is impaired in FTO-null mice. Further exploration revealed that FTO-mediated m6A modification promoted myogenic differentiation through the mTOR-PGC-1α-mitochondrial axis. Interestingly, FTO overexpression in vitro does not significantly promote myoblast differentiation, presumably because the high abundance of endogenous FTO expression is already sufficient to support muscle differentiation. This is similar to the results of Church et al. (2010), who found that FTO overexpression did not increase lean mass in male mice. Furthermore, the potential role of m6A modification in goat skeletal muscle was explored, and it was found that FTO-mediated m6A demethylation activity upregulates CCND1 expression in a YTHDF2-dependent manner and that silencing of FTO can also induce autophagy during myogenic differentiation (Deng et al., 2021b). Moreover, the global m6A modification in goat embryonic skeletal muscle was significantly higher than that at the newborn stage. Functional studies have shown that FTO-mediated m6A modification can promote the expression of GADD45B and myogenic differentiation by activating the p38 MAPK pathway (Deng et al., 2021a). These results provide new insight that FTO promotes myogenesis by regulating the expression of related genes and may serve as a new target for myogenic differentiation. m6A methylation regulates skeletal muscle homeostasis and regeneration MuSCs are required for maintaining skeletal muscle homeostasis and regeneration after injury. One study found that the proliferative activity of MuSCs was reduced after METTL3 knockdown, resulting in a significant decrease in both m6A modification and protein expression levels of key genes of the Notch signaling pathway. Furthermore, YTHDF1 is positively correlated with the mRNA translation efficiency of Notch signaling pathway components, revealing a novel post-transcriptional mechanism whereby the METTL3-m6A-YTHDF1 axis further controls MuSC fate and muscle regeneration by regulating the Notch signaling pathway. As a key factor in the MAPK signaling pathway, the protein kinase MNK2 can target the phosphorylation-activated ERK signaling pathway and maintain muscle homeostasis (Hu et al., 2012; Maimon et al., 2014). By establishing a mouse muscle injury regeneration model, it was found that METTL3/14-MNK2 could activate MuSCs and promote their proliferation in the early stages of muscle regeneration, thereby controlling ERK signaling (Xie et al., 2021a).
These results suggest that the METTL3/14-m6A-MNK2-ERK signaling axis is required to regulate early muscle regeneration after acute injury. In a BaCl2-induced mouse skeletal muscle injury model, the global m6A levels in muscle tissue were significantly increased 3 days after injury. In addition, m6A also plays an important regulatory function during the state transition of MuSCs (Gheller et al., 2020), indicating that m6A is a key epigenetic modifier of skeletal muscle regeneration. m6A methylation regulates the skeletal muscle hypertrophic response Maintaining skeletal muscle mass is critical for an organism's health. Evaluation of the m6A modification signature of skeletal muscle hypertrophic growth in mice under mechanical overload revealed that global m6A levels and METTL3 expression were significantly increased in overloaded muscles (Petrosino et al., 2022). A myofiber-specific METTL3 mouse model was constructed using gain-of-function and loss-of-function experiments, and further validation revealed that m6A content and the myofiber cross-sectional area increased in muscles overexpressing METTL3 (a minimal sketch of this type of cross-sectional-area comparison appears at the end of this section). Moreover, METTL3 can regulate the post-transcriptional processing of the myostatin pathway, and METTL3-mediated m6A methylation affects TGF-β superfamily signaling by inhibiting the translation of the activin receptor Acvr2a mRNA, thereby promoting hypertrophic growth of the skeletal muscle (Petrosino et al., 2022). These results reveal a novel post-transcriptional mechanism that regulates muscle gene-specific expression, namely, that m6A modification regulates muscle growth through the translation of activin receptors and is required to maintain muscle mass and function in vivo. m6A may be an effective modulator for the treatment of muscle-related diseases MuSCs transition from quiescent to activated and from proliferative to differentiated states after skeletal muscle injury, and multiple m6A-modifying genes are associated with MuSC function during the differentiation process. When METTL3 is knocked down in primary murine MuSCs, the proliferation of MuSCs is slowed and the engraftment ability of their primary transplantation is enhanced, but their serial transplantation ability is lacking (Gheller et al., 2020). In addition to FTO, ALKBH5 may also be involved in regulating mRNA processing and metabolism related to skeletal muscle growth and development. ALKBH5-mediated m6A modification plays an important role in FoxO3-dependent neurogenic muscle atrophy (Liu et al., 2022). To verify the specific mechanism, m6A-seq and Co-IP combined with loss-of-function experiments were performed, and it was found that ALKBH5 activates FoxO3 signaling in an m6A-HDAC4-dependent manner in denervated muscles, resulting in loss of skeletal muscle mass and denervation muscle atrophy (Liu et al., 2022). These results suggest that ALKBH5 may be a potential therapeutic target for the treatment of neurogenic muscle atrophy. In conclusion, m6A is an essential epigenetic regulator of MuSC function; further work is needed to determine the fate of specific m6A-modified proteins based on their binding activity to muscle mRNA, which is expected to be better applied to treat muscle atrophy and other related diseases.
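The hypertrophy experiments summarized above quantify growth as myofiber cross-sectional area (CSA). As a minimal illustration, the following sketch compares hypothetical CSA measurements between control and METTL3-overexpressing fibers with a non-parametric test; the measurements, group labels, and the choice of a Mann-Whitney test are assumptions for this example, not the published analysis.

# A minimal sketch (hypothetical measurements) of a two-group myofiber
# cross-sectional-area (CSA) comparison, e.g. METTL3-overexpressing vs
# control muscle, tested non-parametrically.
from statistics import median
from scipy.stats import mannwhitneyu

# Hypothetical CSA measurements in µm² (one value per traced myofiber).
csa_control = [1850, 2010, 1920, 1770, 2100, 1880, 1950, 2030, 1820, 1990]
csa_mettl3_oe = [2250, 2400, 2180, 2310, 2520, 2290, 2440, 2360, 2210, 2480]

u_stat, p_value = mannwhitneyu(csa_control, csa_mettl3_oe,
                               alternative="two-sided")
print(f"median CSA control:   {median(csa_control):.0f} µm²")
print(f"median CSA METTL3-OE: {median(csa_mettl3_oe):.0f} µm²")
print(f"Mann-Whitney U = {u_stat:.0f}, p = {p_value:.4f}")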
Transcriptional and post-transcriptional regulation of myogenesis Transcriptional and post-transcriptional regulation at the RNA level can respond rapidly as biological processes unfold. In eukaryotic cells, the addition of a 5′ 7-methyl-guanosine cap and a 3′ poly(A) tail enables mRNA to efficiently initiate protein synthesis. However, mRNA-directed protein synthesis can be blocked by sequence-specific RNA-binding proteins (Sonenberg and Hinnebusch, 2009; Nwokoye et al., 2021). This interplay of promotion and repression reveals the importance of regulation at the RNA level. Early studies found that one of the main functions of m6A in mammalian cells is to mediate mRNA degradation, suggesting a possible negative correlation between m6A methylation and mRNA stability and transcript levels (Wang et al., 2014a; Wang et al., 2014b; Liu et al., 2014; Ping et al., 2014). Like DNA methylation and histone modification, RNA methylation, an important post-transcriptional epigenetic modification, plays an important role in regulating gene expression in specific cell types (Heck and Wilusz, 2019). Therefore, future studies are needed to elucidate the synergistic regulation of m6A and myogenic transcription factors to explain the cascade response of transcriptional and post-transcriptional regulation during myogenesis. Interactions between m6A-related proteins and myogenic transcription factors Studies have reported a critical role of MyoD mRNA methylation during myogenic differentiation. During C2C12 cell proliferation, high m6A levels in the 5′UTR can promote the efficient processing of MyoD mRNA and actively maintain its stability. Furthermore, MyoD mRNA levels and myotube formation are significantly inhibited when METTL3 is knocked down (Kudou et al., 2017). Through the analysis of m6A at six prenatal stages in pigs, the m6A reader IGF2BP1 was identified as being continuously downregulated; this reader can regulate mRNA stability and translation. The same phenotypic changes were observed after knockdown of METTL14 and IGF2BP1 in C2C12 cells, inhibiting myoblast differentiation and significantly downregulating MyHC, MyoD, and MyoG expression. These results suggest that the key transcription factor MyoD, whose transcripts carry m6A modification, is a positive regulator of skeletal muscle differentiation. The Notch signaling pathway plays an important role in regulating MuSCs and muscle regeneration and is necessary for MuSC maintenance, activation, proliferation, and differentiation (Brack et al., 2008; Buas and Kadesch, 2010; Bjornson et al., 2012; Gerli et al., 2019). It was found that the mRNA molecules of the receptor (Notch2), transcription factor (RBPJ), and activator (MAML1) of the Notch signaling pathway in myoblasts are also regulated by m6A modification. Through in vivo and in vitro validation by m6A MeRIP-seq and cellular functional assays, it was found that METTL3-mediated m6A modification significantly inhibited the translation efficiency of Notch signaling pathway components, thereby promoting MuSC-driven muscle regeneration. These results revealed a novel post-transcriptional mechanism by which m6A regulation of the mRNA methylation of key transcription factors shapes MuSC fate and muscle regeneration. MEF2C, a member of the myocyte enhancer factor 2 (MEF2) family, induces the expression of muscle-specific genes, mainly by binding to basic helix-loop-helix proteins among the myogenic regulatory factors.
It can also bind to the promoter and enhancer regions of transcription factors that assist in regulating myoblast differentiation during muscle development (Molkentin et al., 1995; Black and Olson, 1998; Kim et al., 2008). Recent studies have revealed that m6A levels of MEF2C mRNA are significantly increased during bovine myoblast differentiation, and its expression is post-transcriptionally regulated by m6A modifications. Through gain-of-function and loss-of-function analyses, METTL3 was found to regulate myogenic differentiation by promoting the translation of MEF2C mRNA in an m6A-YTHDF1-dependent manner (Yang et al., 2022a). Furthermore, it is worth noting that both the mRNA and protein levels of METTL3 were significantly increased in MEF2C-overexpressing myoblasts. Genomic analysis and ChIP-qPCR demonstrated that MEF2C binds directly to the METTL3 promoter as a transcription factor to promote its expression (Yang et al., 2022a). This positive feedback loop during myogenic differentiation explains the transcriptional and post-transcriptional cascade regulatory mechanisms. Therefore, an in-depth study of the novel mechanisms by which transcription factors coordinate gene transcription and RNA m6A modification is required to shed more light on the functional mechanisms of skeletal muscle growth and development. Mechanism of action of m6A-related proteins and non-coding RNAs during myogenesis Myogenesis is a highly coordinated process involving multiple mechanisms, the programmed occurrence of which is controlled by specific genes and epigenetic modifications. An increasing number of studies have shown that non-coding RNAs (ncRNAs) and m6A-mediated transcriptional/post-transcriptional regulation play an important role in gene expression during skeletal muscle development. Moreover, m6A has been identified as a key regulatory factor that affects the function of ncRNAs, thus participating in body growth and development. Therefore, we have summarized recent studies to elucidate the molecular mechanisms of m6A-regulated miRNAs and long non-coding RNAs (lncRNAs) in myogenesis. Methyltransferase 3 regulates muscle-specific miRNAs through transcriptional and post-transcriptional regulation miRNAs are a class of highly conserved small RNA molecules that participate in biological processes by degrading target genes or inhibiting post-transcriptional translation. During miRNA biogenesis, METTL3-mediated m6A methylation marks can promote the binding of primary miRNAs to DGCR8 and their processing, and promote the maturation of miRNAs in a global and non-cell-type-specific manner (Alarcon et al., 2015b). In addition, the m6A-labeled nuclear reader and effector, HNRNPA2B1, can interact with the DGCR8 protein to promote the processing of primary miRNAs (Alarcon et al., 2015a). These results are consistent with those of Alarcon et al. (2015b). A subsequent study found that METTL3 overexpression significantly downregulated the expression of the muscle-specific miRNAs miR-1a, miR-133a, miR-133b, and miR-206 in differentiated C2C12 cells and in a mouse model of muscle injury regeneration. Combined with the immunoprecipitation results, it was demonstrated that METTL3 inhibited muscle-specific miRNAs through m6A modification of primary miRNAs during skeletal muscle differentiation (Diao et al., 2021). These results are contrary to the previous findings of Alarcon et al. (2015b). Therefore, Diao et al. (2021) further explored the complex mechanism by which METTL3 inhibits muscle-specific miRNAs.
They found that in differentiated C2C12 cells, METTL3 overexpression significantly suppressed the expression of the myogenic transcription factors MEF2A/C and SRF, whereas the expression of the epigenetic regulators HDAC1, HDAC4, and HDAC8 was significantly upregulated. The METTL3-overexpressing C2C12 cells were then subjected to MEF2C overexpression and treatment with the HDAC inhibitor TSA, and it was noted that the expression of miR-1a, miR-133a, miR-133b, and miR-206 was significantly upregulated (Diao et al., 2021). These results suggest that METTL3 represses muscle-specific miRNA expression by repressing MEF2C and promoting HDAC family epigenetic regulators. Taken together, METTL3 can repress muscle-specific miRNAs at the transcriptional and post-transcriptional levels and plays an important role in muscle function maintenance and anti-differentiation. The critical role of methyltransferase 3-mediated lncRNA regulation in myogenesis LncRNAs are mainly involved in regulating gene expression at the transcriptional and post-transcriptional levels. It has been reported that lncRNA can directly induce the binding of chromatin remodeling proteins to target genes and change histone modification or DNA methylation status at specific genomic sites, thus regulating the expression of functional genes (Chen and Xue, 2016). In addition to the aforementioned epigenetic modifications, m6A-methylated lncRNAs have been well characterized. For example, METTL3 binds to RBM15 and RBM15B in a WTAP-dependent manner and promotes lncRNA XIST-mediated gene silencing through m6A-YTHDC1 (Patil et al., 2016). Furthermore, lncRNAs are involved in a variety of biological processes, and their post-transcriptional regulation plays an essential role in myogenesis. Recent studies have found that during myogenesis, m6A methylation levels of lncRNAs are positively correlated with the transcriptional abundance of lncRNAs, and m6A methylation of lncRNAs positively or negatively regulates their adjacent mRNAs. Functional verification showed that the METTL3-mediated m6A-modified lncRNA Brip1os is involved in muscle differentiation by negatively regulating the expression of Tbx2 mRNA (Xie et al., 2021b), revealing a novel mechanism of post-transcriptional regulation during skeletal muscle development.
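Many of the expression readouts in the gain- and loss-of-function studies above (miRNA, lncRNA, and mRNA levels after METTL3 perturbation) are relative qPCR quantities. As a minimal illustration, the following sketch implements the standard Livak 2^-ΔΔCt calculation; the Ct values, gene choices, and reference gene are hypothetical, not data from the cited papers.

# A minimal sketch of the 2^-ΔΔCt relative-quantification step that
# underlies the qPCR readouts in knockdown/overexpression studies.
# All Ct values below are hypothetical.

def fold_change(ct_target_treated: float, ct_ref_treated: float,
                ct_target_control: float, ct_ref_control: float) -> float:
    """Livak 2^-ΔΔCt: expression of a target gene in treated vs control
    samples, each normalized to a reference gene (e.g., GAPDH)."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** (-dd_ct)

# Hypothetical example: MyoD mRNA after METTL3 knockdown vs scramble control.
fc = fold_change(ct_target_treated=26.8, ct_ref_treated=18.1,
                 ct_target_control=24.9, ct_ref_control=18.0)
print(f"MyoD relative expression (knockdown / control): {fc:.2f}-fold")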
Transcriptional changes affecting the muscle state are influenced by various aspects of ncRNAs (miRNA, lncRNA, and circRNA), DNA methylation, histone modifications, and chromatin remodeling. Current studies have found that miRNAs, lncRNAs, and circRNAs carry m6A modifications and that m6A exerts facilitative and inhibitory effects by altering the expression of targeted ncRNAs. However, it remains unclear whether ncRNAs can participate in myogenic differentiation by regulating the stability of m6A-related enzymes, and the biological mechanisms by which the two crosstalk during skeletal muscle development are still unknown. Multi-omics approaches should be integrated to explore the interactions between m6A and other epigenetic modifications in skeletal muscle development and to elucidate the functional signals regulating skeletal muscle development. The role of m6A-regulated RNA processing and metabolism in myogenesis requires further investigation. In summary, METTL3/14 was found to promote or inhibit myoblast differentiation, although the cause of this heterogeneity has not been identified. Therefore, future studies should exclude analytical errors due to tissue and cell heterogeneity, apply single-cell transcriptome technology to functional studies of myoblasts, and comprehensively explore the specific functional mechanisms of the synergistic regulation of RNA m6A methylation and myogenic transcription factors. Moreover, we should focus on the specific mechanisms linking m6A and functional cellular markers and look for "classic" drugs targeting newly identified m6A-related enzymes to provide new targets and ideas for the early prevention of skeletal muscle aging and treatment of muscle metabolic diseases. Author contributions JZ and YG conceived and managed the project. BY and JL wrote the manuscript. TM, XF, and RM revised the manuscript. All the authors approved the final version of the manuscript. Funding This study was supported by the Ningxia Hui Autonomous Region Key R&D Program (2022BBF02034).
2022-08-05T13:06:34.637Z
2022-08-05T00:00:00.000
{ "year": 2022, "sha1": "2f48435492f3b758642e038af9b227113acf89ba", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Frontier", "pdf_hash": "2f48435492f3b758642e038af9b227113acf89ba", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
151713916
pes2o/s2orc
v3-fos-license
Employment, Motherhood and Wellbeing: A Discourse on the Trio within Public Organisations in Southwest Nigeria African women in public service experience some issues in their bid to juggle their jobs and parenting; however, not much is available within the literature to explain these dynamics. This study seeks to document this by understanding their experiences, the coping strategies adopted and the implications for the respondents' wellbeing and that of their children. For quantitative data, a total of one hundred and sixty questionnaires were purposively administered to mothers with infants working within the selected organizations, complemented with twenty in-depth interview schedules. Findings: a majority (40.0 percent) of the respondents were aged 31-40 years. About 73.0 percent claimed their challenges include how to combine paid employment with motherhood, while 26.9 percent attributed them to finance. Respondents mentioned that their children experience diarrhoea and malaria (22.2 percent) and loss of appetite and weakness (23.1 percent) while they are away at work. On the part of the mothers, 62.3 percent claimed that their major challenges are stress and inability to focus on their work. Respondents further argued that child spacing (7.5 percent) and support from husbands (17.5 percent) are part of their coping mechanisms. Working mothers need a more conducive atmosphere for career development and parenting. One of the variants of modernisation includes the involvement of women in paid employment. This development implies that women who over time were full-time housewives and mothers had to divide their time between paid employment and family care. This has implications for family structure, child health and development (Basu, 1992). In the last 40 years, women's involvement in the labour force has increased dramatically across the globe (Heilman and Okimoto, 2008). Nevertheless, mothers face stiffer challenges than women who are yet to give birth (Heilman and Okimoto, 2008). The impact of this situation on the social, psychological and physiological conditions of these mothers cannot be overemphasized. Correll, Benard and Paik (2007) and Gungor and Biernat (2009), while investigating the dynamics of employment among mothers, emphasized that the rigors surrounding work often undermine the perceived competence of mothers in their chosen fields. Aside from these, the economic impact on mothers has been found to be enormous. Budig and England (2001), for instance, found in their study that employed mothers in the United States on average experience a five percent wage penalty per child after controlling for other factors that affect earnings.
Ridgeway and Correll (2004) also examined the social label placed on women as being unable to do the expected job once they become mothers; the belief is that a mother is unable to keep the normal working hours due to her status as a mother, even in situations where the employed mother is discharging her duties effectively. Aside from the above, the glass ceiling is also a strong factor affecting these mothers: a situation whereby women, on account of social factors, are not allowed to rise to their desired status. Some of the reasons often associated with this include the perception that they are too feminine and, as a result, not qualified and competent enough because they are nursing women; if they choose not to be feminine, they are regarded as lacking the social skills expected of a woman (Williams, 1992). The situation is the same in Africa, and Nigeria is not left out. While studies on the conditions of mothers within the workplace are scanty, available ones suggest that juggling work and child care is a challenge to them. Evidence available regarding the plight of women within the workplace still suggests marginalization, unpleasant policies against women and the like. Women constitute 45 percent of Nigeria's estimated population of 150 million, yet their involvement in leadership positions is not proportionate to their numbers. Mothers, especially in private organizations, do not enjoy the compulsory three months' maternity leave; those working in cities struggle with the demands of urban centres, and quite a number of them suffer cultural injustices within their work environments (Sun News, 2013). Beyond that, the cultural limitations placed on women due to patriarchy also constitute a challenge for working mothers; a majority of them have to combine domestic chores with office work and child care. Evidence further suggests that the current situation may worsen (Hein and Cassirer, 2010; Leslie, 1989). The impact of this marginalization on the children cannot be ignored as well. Studies have revealed that the kind of provision for child care in developing countries has failed to meet the present needs of mothers and children (Hein and Cassirer, 2010; Leslie, 1989; Joekes, 1989). The long- and short-run implications of inadequate child care are grave; failure to pay adequate attention to child care from the beginning may expose these children to a number of challenges, ranging from malnutrition and stress to unemployment and increased poverty, among others. Going by these developments, investigating the dynamics of motherhood and the workplace vis-à-vis mother and child wellbeing thus becomes critical. Studies have shown that women form almost half of the total workforce and the figure is expected to rise sharply in the first half of the 21st century (Falzon, 2007; Orr, 1997). Further, it is an established fact that women workers are an essential part of the work system and cannot be dispensed with in any way (Barnett, Marshall & Sayer, 1992). However, in spite of their uniqueness and qualities, women still suffer challenges due to the combination of work and child care. For instance, Hoffman (1963) observed in his study that work stress has an influence on the quality of parent-child interaction. Falzon (2007) equally submitted that 78.0 percent of working women still come back home to look after their children when they close from work.
The implication of this is that mothers combine two jobs at the same time: paid employment and nursing motherhood. The children are not left out of this encounter; studies reveal that lack of contact between children and parents was directly responsible for rising levels of mental health problems, sleep disorders and other socio-psychological issues (Falzon, 2007). Ammaniti et al. (2004) also found that separating children from their parents stores up behavioural difficulties for them. This myriad of events within these spaces suggests an investigation of the methods and strategies adopted by mothers to meet the demands of the workplace and child care. Further, an assessment of these strategies in relation to the health and wellbeing of the child also becomes necessary in this study. Within the literature, combining the roles of motherhood and work has been regarded as a combination of multiple roles, and it has not attracted the required attention. Mothers working outside the home often find it challenging to combine health responsibilities and employment demands, especially among low-income women (Kaiser Family Foundation, 2003). Examining the dynamics of motherhood, the workplace and child care thus becomes important as a means of shedding further light on these issues, with the aim of finding lasting solutions to some of the challenges women face as mothers within the workplace. Research Questions The following research questions were formulated for the study: a) What are the socio-economic and demographic characteristics of the respondents? b) What are the coping strategies adopted in combining motherhood and employment? c) What are the challenges they face in combining motherhood and employment? d) What are the implications for their health and that of their children? Objectives of the Study The general objective of the study was to examine the experiences of working mothers in relation to work, motherhood and wellbeing in public organisations in Southwest Nigeria. The specific objectives were to: a) Examine the socio-economic and demographic characteristics of the respondents b) Understand the challenges respondents face in combining motherhood and employment c) Assess the implications for the mother's and child's health and wellbeing d) Assess the coping strategies of the respondents in combining motherhood and employment Brief Literature Review Women, work and child care: a brief discourse The dynamics of motherhood and work may not be strange in Africa, and in Nigeria specifically. From the pre-colonial through the colonial era, women have engaged in different forms of work as a means of survival and as part of their contributions to the maintenance of the family. Women were farmers and traders, and some in the process extended their trade beyond their localities (Omoruyi, 1994). In some cultures in Nigeria, for instance, some women were found to have excelled in their chosen careers more than their male counterparts. They were able to successfully combine their jobs with child care thanks to the flexibility of those jobs. As a matter of fact, in traditional African settings, caring for children was never a challenge to working mothers, basically because the nature of their jobs permitted that.
Aside from this, the communal lifestyle reflected in the social and housing structures of traditional African societies permitted working mothers to leave their children with older children, if available, or with elderly women within the homestead who could no longer do strenuous activities. Thus, children within a particular homestead belonged to all members of that homestead and would be cared for by all. However, with the emergence of paid employment and industrialization in Nigeria, there was a breakdown of this lifestyle such that nuclear families became separated from the extended families. The patterns of work changed drastically about fifty years ago due to industrialization, such that work became more formalized (ILO, 2010; Chete, Adeoti, Adeyinka and Ogundele, 2014). This development in the country is no doubt an extension of what obtains in developed countries. Developing countries are gradually modelling their work and family patterns after the West, yet the structures needed to make this work have not been successfully put in place. The ILO (2010), while explaining the differences at the regional and sub-regional levels among developing countries, notes that between 1950 and 1985 the proportion of women aged 15 or older in the paid labour force rose from 37 to 42 percent. This, of course, represents only a fraction of women's labour participation, as a larger percentage of women in Sub-Saharan Africa are engaged in informal employment. Notwithstanding, the percentage of economically active women is high, but it appears to have actually declined from slightly above to slightly below 50 percent in the period since 1970 (ILO, 2010). Joekes (1989) categorized the sources of non-maternal child care into four areas. The first she referred to as 'non-existent' child care, which to her is the most frequent child care practice adopted globally; this arrangement is such that children are left unattended when mothers are busy. The second type has to do with child care being provided by members of the household, especially by older siblings. The third type of child care plan relates to the exchange of child care among family members and neighbours, usually without any financial obligation attached to such exchange of services. The fourth relates to child care handled by professionals, whether at the formal or informal level, for a fee. Joekes's (1989) typology provides a description of child care across the globe, and almost all of these arrangements are practiced in this part of the world. The second and fourth child care plans are, however, very common among nursing mothers in Nigeria. The reason is simple: they are most times far from their extended families, they need to be close to their children, and the demands of modernization are also very important. Methods A total of one hundred and sixty working mothers were selected from government establishments in Ado Ekiti, the capital of Ekiti State, through a purposive sampling technique covering nine major ministries across the government secretariat. By working mothers, we refer to mothers having children between the ages of 3 months and 2 years. This sampling procedure was adopted because of the nature of the research, which required locating working mothers within the work setting. In eliciting data from the respondents, the study employed both quantitative and qualitative methods. Questionnaires containing open-ended and closed-ended questions were used.
Qualitative data involved the use of in-depth interviews to capture deeper meanings and insights into the research. A total of twenty respondents were selected for the in-depth interviews. Both quantitative and qualitative data were analyzed accordingly: the questionnaires were analyzed with the Statistical Package for the Social Sciences (SPSS) software, while the in-depth interviews were analyzed and quoted where necessary to support the questionnaire data. Three hypotheses were tested for this study. They are listed below in alternate form; the chi-square statistical technique was used to test the association between the variables: 1. There is a significant relationship between age and finding parenting and employment easy. 2. There is a significant relationship between marital status and finding parenting and employment easy. 3. Respondents who give their best to work often find motherhood and employment an easy task. Results This section (see Table 1) explains the percentage distribution of respondents' age, education, marital status and professional qualification. Findings on age revealed that respondents aged between 31 and 40 dominated the study, constituting 40.0 percent. The smallest group was respondents aged 20 and below (10.0 percent). This may not be strange considering the entry requirements of the labour force. This is further reflected in the respondents' educational qualifications, as respondents with tertiary education and professional qualifications formed the majority, with 32.5 percent and 42.5 percent respectively. Findings on marital status also revealed that a majority of the respondents were married. From Table 2, a majority of the respondents earn between 20,000 and 40,000 naira a month; this may be a reflection of the salary structure in the country. Aside from that, data from both the in-depth interviews and the questionnaires show that quite a number of them have spent between 1 and 5 years in the organization. Equally, a total of 9 departments (ministries) were involved in the study, with Women Affairs constituting 17.5 percent. The respondents (see Table 3) were asked to assess whether they were able to give their best to their workplace as mothers, and a majority of the respondents answered in the affirmative. A number of reasons were given by the respondents for their responses: they submitted that they come to work as and when due (45.9 percent) and do all tasks assigned to them (54.1 percent). Only 8.8 percent of the respondents felt they were not doing enough as working mothers. The major reasons attributed to this included the inability to attend trainings and seminars that could further equip them (42.9 percent) and the opinion that they could perform better than they were doing at the moment (57.1 percent). Hakim (1997) argues that one of the determinants of how much to work is sex-role attitudes; other studies have added intentions as key determinants of how much an individual will put into his or her work (Kan, 2007; Bolzendahl and Myers, 2004). Charles and Harris (2007) summed up the whole argument by emphasizing the tradition of wanting to do what is right as a strong determinant of giving one's best at work. This tradition may, of course, propel young mothers to want to excel in both work and parenting. This section (see Table 4) examines how the respondents are able to combine motherhood with their jobs.
From the table, 65.0 percent felt the task was challenging while the rest of the respondents (35.0 percent) claimed they found it very interesting. Explaining why they found it challenging, 46.4 percent of the respondents attributed it to the nature of their jobs, while 25.0 percent felt it was challenging because of the lack of individuals to assist them. During the IDI sessions, some of the respondents had this to say regarding why they found motherhood and employment challenging: I am the confidential secretary to my boss, he wants me to be around all the time; sometimes to go and pick up my kids after closing hour is difficult; I have to come to work very early and I may not leave until my boss is done for the day. My boss is a busy man; he doesn't close at the normal period and as his secretary you know what that means, am here till he leaves office (IDI, Female, Ado Ekiti). Another respondent: Actually I am not complaining, but not having somebody to assist me in taking care of the kids has been a major challenge; I am sure that it would have been easy assuming I have a house help or somebody to assist. My children are still very young, my husband doesn't work within the city, he comes home at fortnight, and sometimes at month end and am the only one taking care of the three of them… (IDI, Ado Ekiti). Another respondent: It is not easy… but combining the two is a necessity, you just have to do it… (IDI, Ado Ekiti). The respondents (see Table 5) were asked to identify the major challenge facing them while combining work with motherhood; 34.3 percent attributed their main challenges to closing late from work; 18.8 percent claimed that the demands within the office were too tasking for them, while 26.3 percent attributed it to the stress they face after the closing hour. The rest of the respondents (20.6 percent) claimed that going home to tackle the domestic chores was their major challenge. Some of the responses of the respondents during the IDI are submitted below: To me, working mothers having infants should be allowed to close early; in our workplace, you are allowed to close at 2 pm for three months after resuming from maternity leave, but I feel this should be extended to 3 years. This will allow mothers to have time to take care of their children till they are old enough to care for themselves, I think if this is addressed within the workplace, it will be a lot easier (IDI, Ado Ekiti). Another respondent: I am hopeful that I will get a car very soon; if this dream is realized, it will make my job easier as a mother, I would not have to go through the traffic stress, it will be easier to pack my children's lunch while leaving in the morning and I think it will be a lot easier with a car… (IDI, Ado Ekiti). Another respondent: Everybody is always eager to leave at exactly 4 pm (closing time), at this period, the whole place will be a mess due to traffic hold up, sometimes, I could spend up to 2 hours in the traffic… (IDI, Ado Ekiti). Another respondent: Getting home after work to cook and prepare for the next day is really an issue; it's like resuming for another work session after closing, even with house help, you will still need to supervise him/her to do the right thing…. It's not easy… (IDI, Ado Ekiti). A number of events are changing regarding the set-up of families across the globe (Ellison, Barker and Kulasuriya, 2009).
For instance, parents in most societies across the globe now share the responsibilities of caring for their kids. This has made it easier for mothers to combine parenting and employment. However, data explaining how this is possible in Africa are still lacking. From the data available in the Western world, a number of factors, ranging from employment patterns to the age and number of children within a family and the ability to work fewer hours, have been argued to play key roles in determining the ability of working mothers to easily combine parenting with their jobs. Further, a majority of the respondents (87.5 percent) had a crèche around their workplace and 72.5 percent of them patronized the place. The few respondents who were not patronizing the place attributed this to the fact that they do not like it and that the crèche is expensive. While little information is available regarding the state of crèche facilities for nursing mothers within the workplace, what is obvious is that a quantum leap has been taken in this regard. A number of crèche facilities are emerging across workplaces, though this may not be adequate; apart from this, the traditional methods whereby relations, namely grandmothers and older siblings, take care of the young are still in place, though these have been criticized as having a negative effect on the child (Engle, 1991; Leslie and Paolisso, 1989). Notwithstanding, these traditional methods still play key roles in providing care for the child. This segment (see Table 5) describes the percentage distribution of respondents as regards the challenges they face and the perceived impact on them and their children. Findings revealed that about 73.1 percent claimed the challenges include how to combine work with motherhood, while 26.9 percent attributed it to financial challenges in meeting their needs and those of their children. Their argument was that they engage in paid employment based on the need to get more money to take care of themselves and their children. Regarding the implications for them and their children, respondents claimed that their children experience diarrhoea and malaria (22.2 percent) and loss of appetite and weakness (23.1 percent), while 54.9 percent claimed that their children experience some emotional discomfort, like refusal to stay with child attendants. However, 23.0 percent claimed that their children did not show any negative symptoms as a result of their jobs. On the part of the mothers, 62.3 percent claimed that their major challenges as a result of combining work and motherhood include stress and inability to focus on their work. Further, 23.2 percent claimed they had frequent disagreements with their spouses over work-related issues. Some of the IDI responses are captured below to corroborate this analysis: My main issue with combining work and motherhood is the stress of crèche, some of the handlers of the centres may not be professional enough, they keep changing handlers which to me is not good for the children; by the time my child is getting used to a care giver, another one is brought, this has always made it difficult for my child to stay. I would need to pacify her before she could stay, even after leaving I would be summoned that my child needs my attention because she is always upset because of the environment. On the long run, I find it difficult to concentrate because sometimes I would be wondering if my baby has not started crying... (IDI, Ado Ekiti).
Another respondent gave her remarks: My husband feels I should quit salaried job. He feels I need more time to stay with the kids. But if I would want to do that, where would the money come from? Sometimes this generates argument between me and my husband. Though I plan to leave as soon as we are able to gather enough money to start a business... (IDI, Ado Ekiti). Several empirical works have explained the synergy between these variables (Blau and Grossberg, 1992; Ermisch and Francesconi, 2005; Berna, 2008). However, scholars have taken different stands, from neutral to extreme, which makes the literature somewhat cumbersome to analyze. A number of factors have to be taken into consideration before a definite statement can be made regarding the impact of a mother's job on her child. For instance, Mancini and Pasqua (2012) suggested that the amount of time spent on children must be taken into consideration, while Hsin (2009) laid emphasis on the level of education of the mother as an important variable to consider. The respondents (see Table 6) were asked how they coped with motherhood and employment; respondents had different means by which they achieved this. For instance, 7.7 percent of the respondents claimed they spaced the births of their children; 17.5 percent claimed they asked for the assistance of domestic help. Other respondents (17.5 percent) claimed that the assistance rendered by their spouses made it easy to cope; further, 25.0 percent argued that they relied on the facilities provided by day-care centres, while 48.8 percent argued that the working conditions in their workplaces were not arduous. The IDIs conducted further shed light on the strategies adopted by the mothers to cope with the challenges associated with motherhood and employment. Some of their comments are highlighted below: It is easier to manage work and motherhood unlike what it used to be when I had my first baby. At that time, we didn't have a crèche around my office area but now we do. It was easier to visit the place and nurse my child when he was suckling and now that he has stopped, I still pay him visits in between office hours. At closing hour, I pick our second baby in the crèche while my husband brings our first son home from school… (IDI, Ado Ekiti). Another respondent: I think the work stress in this unit is very minimal; thus, you are allowed to check on your child at the day care centre as often as you want to. So my job demands don't have an adverse effect on my role as a mother… (IDI, Ado Ekiti). Another respondent: We (myself and the rest of the family) all leave home at the same time and return at the same time in our car…, my office is closer to my house, I am usually the first person my husband drops off in the morning, so I just drop my baby in the crèche while my husband drops our other children in the school (they attend the same school). He follows the same routine in the afternoon, and to me this makes it easier rather than me doing all the work. (IDI, Ado Ekiti). Available evidence has long suggested that balancing work responsibilities and parenting is often a challenge (Parker and Wang, 2013); however, the outcome of such balancing exercises often reveals that parents are happy with the results of their efforts. The respondents were also asked how they thought the stress of combining motherhood and employment could be reduced;
24.3 percent of the respondents suggested that office hours should be reduced; 48.8 percent opined that special allowances must be paid to working mothers to compensate for the stress, while the remaining 26.9 percent felt that the present three months' maternity leave given to women after childbirth should be increased. Some of the IDI responses buttressing these arguments are captured below: Mothers having toddlers should be allowed to close early; this will enable them have ample time to take care of the kids and also prepare for work. Apart from this, they should also give them more maternity leave periods (IDI, Ado Ekiti). Another respondent gave her remarks: We are told that in developed countries, there are special allowances for nursing mothers. They should introduce that in our country as well. Though this cannot really alleviate the stress, it will go a long way in mitigating the effects as mothers will have enough money to secure the services of domestic help and further take care of the children. Test of Hypotheses This section presents findings regarding the statistical relationships among the variables tested in the study. The first hypothesis states that there is a significant relationship between age and finding parenting and employment easy to cope with. The chi-square test of association between age and finding parenting and employment easy to cope with revealed a significant association between the variables (p < 0.001). By implication, age could play an important role in determining how easy it is to combine motherhood and employment. Younger couples who have not had more than one child appear to find it easy to cope with the demands of motherhood and employment for a number of reasons. First, they have just entered the business of procreation, and it is thus still fascinating to them, unlike couples with more children. This does not, however, suggest that older women necessarily find it more difficult than younger women. Older women with infants may find it easier to cope with work if they have older children who can assist them in taking care of their infants. This, of course, is not an alien culture in Africa, where older children are expected to take care of their younger ones. Data on the relationship between marital status and finding parenting and employment easy revealed that, statistically, there was no significant association between the two variables (p = 0.012). By implication, marital status may not have strong implications for respondents' ability to juggle work and motherhood. The IDIs conducted further attest to this: I live with my parents, so my mom takes care of my daughter when am away to work, sometimes the girl doesn't even miss me because she enjoys the company of my mom more than myself. Taking care of the kid is not an issue to me at all. My mother does it all. (IDI, Ado Ekiti). Another respondent: Combining my roles as a mother and a worker is not an easy task. I have about 3 of my husband's family members living with us: my mother-in-law and two others. To cater for this set of people, including my two children plus my husband, is really a big deal. Yet, I will still be expected to report early for work the following day. (IDI, Ado Ekiti). A number of factors intervene in determining the ability of mothers to effectively combine parenting and work.
For instance, it has been widely argued within the literature that single mothers often find it difficult to combine work and parenting, yet other works have opined that a number of factors, such as the income of the single mother, play equally important roles (Alberda, 2009). From this finding, the roles of significant others, namely family members and friends, may equally make the job easier for single mothers. The general assumption regarding married mothers is that they enjoy maximum support from family members, but this may not be so in all situations. Traditionally, external influence and support are minimal for married mothers, and they may have to depend on their husbands and very close relations for support, which is usually time-bound. Findings on the third hypothesis, that respondents who give their best to work often find motherhood and employment an easy task, show that a significant relationship exists between the variables (p = 0.002). By implication, those who give their best in their work settings are likely to give their best to motherhood as well. While this may be hard to explain, the phenomenon may suggest attitudinal issues as a strong determinant of success, because women in employment tend to perform in one aspect to the detriment of the other (Carr et al., 1998). Studies have affirmed a strong relationship between work preferences and attitudes (Hakim, 2001); they have equally added that attitudes towards work are shaped by many factors, with educational attainment, ethnic and social background, employment record and age all playing important roles (Kangas and Rostgaard, 2007). Studies examining the linkage between success in work preferences and motherhood are, however, lacking. Conclusion and Recommendation This study explored how working mothers within public organisations juggle parenting and employment. A number of issues became clear. First, combining work and motherhood is a difficult process for mothers. This challenge may continue, considering the persistent demand of employers for effectiveness and productivity. Children and mothers may also continue to experience different kinds of challenges due to the demands of the workplace. Second, further studies exploring issues surrounding working mothers within work settings in developing countries, and especially in Africa, are still needed; data explaining the realities within these spaces remain largely limited to developed nations. The study recommends that both sets of stakeholders (policy makers and employers of labour) adopt means of ensuring that working mothers can operate at their optimum capacities for both parenting and career.
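As a methodological note, the chi-square tests of association applied in the hypothesis-testing section above can be reproduced outside SPSS. The sketch below is a minimal illustration using scipy; the cross-tabulated counts are hypothetical placeholders, since the study's raw contingency tables are not reported here.

```python
# Minimal sketch: Pearson chi-square test of association, as run in SPSS.
# The 2x3 table below (coping ease vs. age group) is hypothetical.
from scipy.stats import chi2_contingency

observed = [
    [22, 48, 18],  # finds parenting and employment easy: age <=30, 31-40, >40
    [30, 16, 26],  # does not find it easy
]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.4f}")
print("significant at 0.05" if p < 0.05 else "not significant at 0.05")
```

chi2_contingency returns the Pearson chi-square statistic, its p-value, the degrees of freedom, and the expected frequencies under independence, which is the same information SPSS prints for a crosstab chi-square.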
2019-05-10T13:09:09.718Z
2017-02-25T00:00:00.000
{ "year": 2017, "sha1": "bec66453da5dcf9ea21948f57b7ba75ee7fde7fe", "oa_license": "CCBY", "oa_url": "http://hipatiapress.com/hpjournals/index.php/generos/article/download/2223/1982", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "e214ec6fbf6b3aefb3129568395267261df72a9a", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Political Science" ] }
235355058
pes2o/s2orc
v3-fos-license
Decision-Making Skills: An Assessment among Adolescents in Surat City Adolescence is the period of transition between childhood and adulthood. [1] These are also years of experimentation and risk taking, of giving in to negative peer pressure. Adolescence is a period of increased potential but also one of greater vulnerability and newer responsibilities. [2] Adolescents are unique in the way they understand information and how they think about the future and make decisions in the present. [2,3-8] Life Skills Education is a novel promotional program that teaches generic life skills through participatory learning methods of games, debates, role plays, and group discussions, which help adolescents. [9-11] This study was carried out with the objectives of assessing the process of decision-making among adolescents and the factors affecting it, and of exploring the styles of decision-making among adolescents. It is part of an assessment of ten different life skills. Introduction: This study assessed the process of decision-making among adolescents and the factors affecting it and also explored the styles of decision-making among adolescents. Methodology: A cross-sectional study using purposive sampling was carried out involving 1177 college-going students aged between 17 and 19 years. The General Decision-Making Style (GDMS) inventory and a semi-structured questionnaire were used to collect data. Data were analyzed with the help of SPSS and AMOS. Exploratory and confirmatory factor analyses were run. Results: A good decision-making process was seen among 76.9% of the students. The Kaiser–Meyer–Olkin measure verified that sampling adequacy was 0.8. The scree plot and Monte Carlo parallel analysis were suggestive of four factors, which were logically intuitive, avoidant, dependent, and spontaneous styles of making decisions. Cronbach's alpha was 0.7 for the GDMS. Staying arrangement, paternal education, fantasy scale score, perspective-taking score, personal distress score, problem-solving, self-esteem, creative thinking, and coping with stress were found statistically significant with the decision-making process. On confirmatory factor analysis, a five-factor model was found to fit, with a minimum discrepancy/degrees of freedom value of 2.68, root mean square error of approximation (RMSEA): 0.038, Comparative Fit Index (CFI): 0.927, Normed Fit Index (NFI): 0.890, parsimony CFI: 0.66, and parsimony NFI: 0.634. A high correlation was observed between the rational and intuitive styles. Conclusion: The process of decision-making was found to be good, but the styles of making decisions were overlapping. Decision-making was assessed with the help of a predesigned questionnaire, which was part of the self-administered questionnaire. Data were analyzed using the Statistical Package for the Social Sciences with the AMOS module (SPSS for Windows, version 18.0, SPSS Inc., Chicago, Illinois, USA). Exploratory and confirmatory factor analyses were run. Study tool: The General Decision-Making Style (GDMS) Inventory has 25 questions rated on a five-point Likert scale. The GDMS is an appropriate, reliable, and valid scale for assessing decision-making and decision-making quality. [12][13][14][15][16] The GDMS questionnaire elicits decision-making styles in five different patterns: intuitive, rational, dependent, avoidant, and spontaneous. Results A total of 1177 college students were interviewed from 6 different colleges. Among them, 38.2% were male and 61.8% were female; 88.7% had an urban and 11.3% had a rural background for their schooling.
Most (92.2%) of the participants had studied under the Gujarat State Education Board, followed by the Central Board of Secondary Education (5.8%); 85.2% of the participants reported staying with their parents, 11% in hostels, and 3.4% in relatives' houses. There was no significant difference among these variables. Decision-making process Good decision-making skill was elicited in 76.9% of the participants, while 23.1% had fair scores. The mean, SD, and median of the decision-making process were 26.9, 3.6, and 28, respectively. Decision-making was observed to be significantly better (P < 0.05) if the participants were staying with their parents, had a more educated father, or were themselves pursuing a professional degree. It was significantly better in participants who had higher scores in perspective taking (P < 0.001), Interpersonal Reactivity Index (P = 0.001), problem-solving (P < 0.001), self-esteem (P < 0.001), creative thinking (P < 0.001), and coping with stress (P < 0.001). Backward logistic regression (LR) was used to study the determinants of the decision-making process among adolescents. The Wald statistic was significant for this model (Wald: 258.39, df = 1, P < 0.001). The result showed an overall model giving 77.1% correct predictions. The Chi-square value is 103.69 and the associated significance level is <0.05, so the present model shows decreased deviance from the base model; hence, this model is a better fit than the base model. The Nagelkerke R² value is 0.143, which indicates that about 14% of the variance in the outcome (dependent) variable, the decision-making process, is explained by this model, in which the independent predictors were critical thinking, problem-solving, and creative thinking skills. The Hosmer and Lemeshow test had a Chi-square value of 3.92 with 5 degrees of freedom (DF) and P = 0.561, which is also suggestive of a fitting model [Table 1]. Decision-making styles The decision-making styles of the participants were assessed by the GDMS. The mean, SD, and median for the intuitive style were 18.81, 3.1, and 20; dependent style 19.5, 3.4, and 20; rational style 19.9, 3, and 20; avoidant style 12.4, 4.2, and 12; and spontaneous style 14.8, 3.7, and 15, respectively. Cronbach's alpha was 0.701, which is acceptable. Results demonstrated strong agreement with the intuitive and dependent types of decision-making, backed up by rational thought processes such as double-checking of facts (86%), careful thought (91%), and a goal-oriented perspective (80%). The avoidant and spontaneous processes for decision-making were disagreed with by nearly 45% of the participants on most of the items. Results of exploratory factor analysis A principal component analysis with oblique rotation was run in SPSS version 19. The Kaiser-Meyer-Olkin measure was 0.781 (good according to Hutcheson and Sofroniou, 1999), and all KMO values for individual items were >0.7. An initial analysis was run to obtain eigenvalues for each factor in the data. Six factors had eigenvalues >1 and in combination explained 45% of the variance. Monte Carlo parallel analysis was then run to extract factors, and it justified four factors. The total variance explained by the four-factor model was 37.1%. The scree plot was also conclusive and showed inflexions that would justify retaining the four factors [Figure 1]. These four factors were retained because of the large sample size and the convergence of the scree plot and Monte Carlo parallel analysis on this value.
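The factor-retention procedure just described (a KMO adequacy check, eigenvalues of the item correlation matrix, and Monte Carlo parallel analysis against random data) can be sketched in a few lines. This is a minimal illustration, assuming the 25 GDMS item responses sit in an n-by-25 array; the simulated Likert data and the third-party factor_analyzer package used for the KMO computation are assumptions of this sketch, not part of the study's SPSS workflow.

```python
# Minimal sketch: KMO adequacy plus Monte Carlo parallel analysis.
# X stands in for the 1177 x 25 matrix of GDMS item responses.
import numpy as np
from factor_analyzer import calculate_kmo  # third-party package (assumed)

rng = np.random.default_rng(0)
X = rng.integers(1, 6, size=(1177, 25)).astype(float)  # placeholder Likert data

kmo_per_item, kmo_total = calculate_kmo(X)
print(f"Overall KMO = {kmo_total:.3f}")  # the study reported 0.781

def sorted_eigs(data):
    """Descending eigenvalues of the correlation matrix of `data`."""
    return np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]

obs = sorted_eigs(X)

# Eigenvalues of random normal data of the same shape; a factor is retained
# while its observed eigenvalue exceeds the mean random eigenvalue.
rand = np.array([sorted_eigs(rng.normal(size=X.shape)) for _ in range(100)])
above = obs > rand.mean(axis=0)  # the 95th percentile is a common alternative
n_retain = int(np.argmax(~above)) if not above.all() else len(obs)
print(f"Factors retained by parallel analysis: {n_retain}")
```

With real item data, the count of leading eigenvalues above the random threshold is the number of factors retained, which is the criterion that converged with the scree plot on four factors here.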
The items that cluster on the same factor suggest that factor 1 represents a logically intuitive style of making decisions, factor 2 represents the avoidant style, factor 3 represents the dependent style, and factor 4 represents the spontaneous style of making decisions [Tables 2 and 3]. Results of confirmatory factor analysis Hypothesis: the General Decision-Making Style questionnaire model is a five-factor structure. The model to be tested postulates a priori that the GDMS questionnaire is a five-factor structure composed of intuitive, rational, dependent, avoidant, and spontaneous styles of decision-making. • The five factors are intercorrelated, as indicated by the two-headed arrows • There are 17 observed variables, as indicated by the 17 rectangles in the path diagram • The observed variables load on the factors in the following pattern: intuitive style consists of d_intuition, d_innere_feeling, and d_instinct; dependent style consists of d_advise, d_steer, d_assistance, and d_support; rational style consists of d_double_check, d_logical, and d_options; avoidant style consists of d_putoff_uneasy, d_avoid, d_postpone, and d_put_off; and spontaneous style consists of d_spur, d_quick, and d_snap • Each observed variable loads on one and only one factor • Errors of measurement associated with each observed variable (err01-err17) are uncorrelated. Model fit summary Minimum discrepancy Focusing on the first set of fit statistics, we see the labels number of parameters, minimum discrepancy (CMIN), DF, probability value (P), and CMIN/DF. The value of 292.106 under CMIN represents the discrepancy between the unrestricted sample covariance matrix S and the restricted covariance matrix Σ(θ) and, in essence, represents the likelihood-ratio test statistic, most commonly expressed as a Chi-square statistic. The test of H0 that the GDMS is a five-factor structure, as depicted in Figure 2, yielded χ² = 292.106, with 109 DF and a probability of less than 0.01 (P < 0.01), thereby suggesting that the fit of the data to the hypothesized model is not entirely adequate. However, both the sensitivity of the likelihood-ratio test to sample size and its basis on the central Chi-square distribution, which assumes that the model fits perfectly in the population (i.e., that H0 is correct), have led to problems of fit that are now widely known. Because the Chi-square statistic equals (N − 1)Fmin, this value tends to be substantial when the model does not hold and when the sample size is large. Yet, the analysis of covariance structures is grounded in large-sample theory. Thus, findings of well-fitting hypothesized models, where the Chi-square value approximates the DF, have proven to be unrealistic in most structural equation modeling empirical research. More common are findings of a large Chi-square relative to DF, thereby indicating a need to modify the model in order to better fit the data. Thus, results related to the test of the hypothesized model are not unexpected. Indeed, given this problematic aspect of the likelihood-ratio test, and the fact that postulated models (no matter how good) can only ever fit real-world data approximately and never exactly, alternative fit statistics are needed. One of the first fit statistics to address this problem was the Chi-square/DF ratio, which appears as CMIN/DF; here it is 2.68 (standard recommended value ≤5) [Table 4]. Baseline comparisons The next set of goodness-of-fit statistics (baseline comparisons) can be classified as incremental or comparative indices of fit.
However, addressing the evidence that the Normed Fit Index (NFI) has shown a tendency to underestimate fit in small samples, Bentler (1990) revised the NFI to take sample size into account and proposed the Comparative Fit Index (CFI). Values for both the NFI and CFI range from 0 to 1.00 and are derived from the comparison of a hypothesized model with the independence (or null) model. As such, each provides a measure of complete covariation in the data, and a value >0.90 is considered representative of a well-fitting model. In this case, the value is 0.927, indicating a moderately good fit of the model [Table 5]. The Relative Fit Index (RFI) represents a derivative of the NFI; as with both the NFI and CFI, RFI coefficient values range from 0 to 1.00, with values close to 0.95 indicating superior fit (Hu and Bentler, 1999). In this case, the value is 0.846, indicating a moderate fit of the model [Table 5]. Root mean square error of approximation The next set of fit statistics focuses on the root mean square error of approximation (RMSEA). Although this index, and the conceptual framework within which it is embedded, was first proposed by Steiger and Lind in 1980, it has only recently been recognized as one of the most informative criteria in covariance structure modeling. The RMSEA takes into account the error of approximation in the population and asks the question "How well would the model, with unknown but optimally chosen parameter values, fit the population covariance matrix if it were available?". This discrepancy, as measured by the RMSEA, is expressed per DF, thus making it sensitive to the number of estimated parameters in the model (i.e., the complexity of the model); values <0.05 indicate good fit, and values as high as 0.08 represent reasonable errors of approximation in the population. Later work has elaborated on these cutpoints, noting that RMSEA values ranging from 0.08 to 0.10 indicate mediocre fit and those >0.10 indicate poor fit. Hu and Bentler (1999) suggested a value of 0.06 to be indicative of good fit between the hypothesized model and the observed data, but cautioned that, when the sample size is small, the RMSEA (and the Tucker-Lewis Index) tend to over-reject true population models. In this case, the value of the RMSEA is 0.038, which indicates a good fit of the model [Table 6]. Table 7 shows the standardized regression weights. A value above 0.7 indicates that a reasonable amount of variance can be extracted from the variable; a majority of the regression weights are >0.5. Discussion Cronbach's alpha was 0.701, which is acceptable. [17] Decision-making was affected by staying arrangement, paternal education, and the pursuit of a professional degree. Decision-making was significantly better in participants who had higher scores in perspective taking (P < 0.001), Interpersonal Reactivity Index (P = 0.001), problem-solving (P < 0.001), self-esteem (P < 0.001), creative thinking (P < 0.001), and coping with stress (P < 0.001). Empathic concern and personal distress scores had no association with decision-making skills. Backward LR suggested that the decision-making process is influenced by multiple factors such as perspective taking, problem-solving, and creative thinking. Thus, although 77% of the participants had good decision-making skills, we have to keep in mind that 23% had fair decision-making skills; hence, this group should be targeted for skill development.
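For concreteness, the global fit statistics quoted above follow directly from the model chi-square, its degrees of freedom, and the sample size via standard SEM formulas. The sketch below reproduces the reported CMIN/DF (2.68) and RMSEA (0.038) from the study's values; the independence-model chi-square and degrees of freedom are not reported in the paper, so the placeholder figures used for the CFI and NFI are hypothetical, back-solved to be consistent with the reported indices.

```python
# Minimal sketch: global SEM fit indices from chi-square statistics.
def fit_indices(chi2_m, df_m, chi2_0, df_0, n):
    cmin_df = chi2_m / df_m
    # RMSEA: per-df error of approximation in the population, floored at 0
    rmsea = (max(chi2_m - df_m, 0.0) / (df_m * (n - 1))) ** 0.5
    # CFI: improvement of the model over the independence (null) model
    d_m = max(chi2_m - df_m, 0.0)
    d_0 = max(chi2_0 - df_0, d_m)
    cfi = 1.0 - d_m / d_0 if d_0 > 0 else 1.0
    nfi = (chi2_0 - chi2_m) / chi2_0
    return cmin_df, rmsea, cfi, nfi

# Model values are this study's reported CFA statistics; the independence
# model's chi-square (2650) and df (136, i.e. 17*18/2 - 17) are hypothetical.
cmin_df, rmsea, cfi, nfi = fit_indices(292.106, 109, 2650.0, 136, 1177)
print(f"CMIN/DF={cmin_df:.2f} RMSEA={rmsea:.3f} CFI={cfi:.3f} NFI={nfi:.3f}")
# -> CMIN/DF=2.68 RMSEA=0.038 CFI=0.927 NFI=0.890
```
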
Jozef Bavoľár et al. conducted an exploratory factor analysis to assess the inner structure of the measure. The principal axis factoring method with direct oblimin rotation found five factors with eigenvalues over 1, explaining 48.59% of the shared variance. [14] The present study, in contrast, yielded four factors; the Indian cultural context and the different age group studied might be reasons behind this difference. The results of the current study support those obtained by Loo. [18] Applying the GDMS in a cross-cultural setting, a four-factor model was derived with exploratory factor analysis, with a high correlation between the intuitive and rational styles of decision-making. Confirmatory factor analysis was run using SPSS and AMOS version 18. A five-factor model was found to fit, with a CMIN/DF value of 2.68, RMSEA: 0.038, CFI: 0.927, NFI: 0.890, parsimony CFI: 0.66, and parsimony NFI: 0.634. While running the confirmatory factor analysis, a five-factor model with rational, intuitive, avoidant, dependent, and spontaneous styles was prepared, and a high correlation was observed between the rational and intuitive styles; hence, an overlap among different decision-making styles was observed. In a study conducted by Roberto et al., CFA was performed and the five-factor model showed a significant fit, Chi-square (n = 700) = 93.39, P < 0.001, with an acceptable value for the CMIN/df (3.74). The RMSEA (0.058) and the Adjusted Goodness-of-Fit Index (0.931) were indicative of fair fit. [19] Our results of confirmatory factor analysis were similar to those of this study. Similarly, in a study conducted by Peter Thunholm, the correlated five-factor model showed a significant fit, Chi-square (269, n = 206) = 520.46, P < 0.0001, and reasonable values for the fit indexes, Chi-square/df = 1.94 and RMSEA = 0.075. [20] The current study obtained similar results with CFA. Loo, in one of his studies, suggested that results from the item and scale analyses support the construct validity of this new measure. However, that study recommended further validation work, for example applying the GDMS in cross-cultural settings. [18] The results of the current study support this, because a four-factor model was derived with exploratory factor analysis in this study, with a high correlation between the intuitive and rational styles of decision-making. The present study demonstrated strong agreement with the intuitive, dependent, and rational styles, whereas there was disagreement with the avoidant and spontaneous styles. Hence, an overlap among different decision-making styles was observed. Conclusion and Recommendation The interrelationship among different life skills suggests the need for training using a comprehensive package like the "Life Skills Education Package" suggested by the WHO and UNICEF. Such life-skill-based education will contribute much to the emotional development of the youth and provide an equipped task force for countries like India, where we have a large young population. Making adolescents mentally and emotionally strong would improve their decision-making skills and help us reap this demographic dividend. Financial support and sponsorship Nil. Conflicts of interest There are no conflicts of interest.
2021-06-07T13:28:21.388Z
2021-04-01T00:00:00.000
{ "year": 2021, "sha1": "9abc6c2794b490dfb585b785fc76195b168437cc", "oa_license": "CCBYNCSA", "oa_url": null, "oa_status": null, "pdf_src": "ScienceParsePlus", "pdf_hash": "e569ab560725da0184f62aab36baefc98632b59d", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
18712837
pes2o/s2orc
v3-fos-license
Site-specific identification of heparan and chondroitin sulfate glycosaminoglycans in hybrid proteoglycans Heparan sulfate (HS) and chondroitin sulfate (CS) are complex polysaccharides that regulate important biological pathways in virtually all metazoan organisms. The polysaccharides often display opposite effects on cell functions, with HS and CS structural motifs presenting unique binding sites for specific ligands. Still, the mechanisms by which glycan biosynthesis generates complex HS and CS polysaccharides required for the regulation of mammalian physiology remain elusive. Here we present a glycoproteomic approach that identifies and differentiates between HS and CS attachment sites and provides identity to the core proteins. Glycopeptides were prepared from perlecan, a complex proteoglycan known to be substituted with both HS and CS chains, further digested with heparinase or chondroitinase ABC to reduce the HS and CS chain lengths respectively, and thereafter analyzed by nLC-MS/MS. This protocol enabled the identification of three consensus HS sites and one hybrid site, carrying either a HS or a CS chain. Inspection of the amino acid sequence at the hybrid attachment locus indicates that certain peptide motifs may encode for the chain type selection process. This analytical approach will become useful when addressing fundamental questions in basic biology, specifically in elucidating the functional roles of site-specific glycosylations of proteoglycans. Polymerization of repeating GlcA-GlcNAc units forms HS chains, whereas polymerization of repeating units of GlcA and GalNAc forms CS chains. The CS and HS chains are thereafter extensively modified by class-specific epimerases and sulfotransferases [11][12][13]. The uniquely glycosylated Ser residues of the core proteins are usually flanked by a glycine residue (-SG-), and certain features have been identified that influence the selection of HS synthesis versus CS synthesis. The presence of repetitive SG motifs seems to prime for HS synthesis, whereas a single SG motif in combination with a cluster of acidic residues in close proximity seems to prime for CS10. However, due to the structural complexity of proteoglycans and the limitations of present analytical techniques, the influence of the peptide sequence on GAG-glycosylation has not yet been fully explored. Novel strategies using tandem mass spectrometry (MS/MS) have recently been developed that provide site-specific structural information on N- and O-glycans attached to various tryptic peptides in a bottom-up strategy to characterize glycoproteins. Such strategies are referred to as 'glycoproteomics' and are typically based on an initial, specific enrichment step for glycopeptides and a subsequent analysis with nano-liquid chromatography-tandem mass spectrometry (nLC-MS/MS)14,15. Glycoproteomics has resulted in the identification of hundreds of novel N- and O-glycosylation sites, and recent methods have swiftly become essential tools in cell biology and biomedical research16,17. More recently, we introduced a similar bottom-up approach to characterize CSPGs that enabled site-specific identification of novel and established core proteins and the mapping of unforeseen structural complexity of the linkage region of bikunin18,19. With this methodology at hand, we now wanted to further develop this concept to characterize hybrid proteoglycans with respect to both CS and HS chain structures and their respective attachment sites.
Thus, a similar approach for the characterization of HSPGs had to be developed. As the first proteoglycan for study we chose perlecan, and, for reasons of availability, a commercial sample derived from Engelbreth-Holm-Swarm mouse sarcoma. Perlecan (also known as basement membrane specific heparan sulfate proteoglycan, coded for by the gene HSPG2) has been the focus of previous structural studies and is perceived as relatively well characterized20,21. The proteoglycan represents one of the largest extracellular matrix proteins identified (470 kDa) and has a large number of post-translational modifications, which include three substitutions with HS chains at the N-terminal end as well as several O- and N-linked glycosylations22. Moreover, perlecan may also be substituted with CS chains, and we recently identified a CS-attachment site at the C-terminal end (Fig. 1a), thus making perlecan a suitable model for addressing the issue of site-specific characterization of hybrid proteoglycans18,23. The mouse sarcoma perlecan sample was digested with trypsin and enriched for glycopeptides using strong anion exchange (SAX) chromatography. The resulting fractions were digested with a mixture of heparinases, which release internal disaccharide residues and leave a residual glycan structure still attached to the peptide. This residual glycan structure is expected to be composed of the linkage region, extended with varying numbers of GlcA-GlcNAc disaccharides, where the terminal disaccharide is dehydrated on the hexuronic acid to form delta hexuronic acid (ΔHexA). The generated glycopeptides were analyzed with nLC-MS/MS in positive mode, which allowed for the combined sequencing of the residual glycan structure and of the core peptide. In addition to perlecan, the method enabled site-specific identification of three additional HSPGs also present in the perlecan preparation. The combined use of heparinase and chondroitinase ABC further enabled the identification and differentiation of consensus HS-sites and a hybrid GAG-site of perlecan, carrying either a HS or a CS chain. Inspection of the amino acid sequence at the attachment loci indicated that certain peptide motifs may encode for the chain type selection process. In principle, this analytical approach may be generally used to study the attachment sites and glycan structures of native hybrid proteoglycans, the influence of altered peptide sequences in selecting HS- versus CS-biosynthesis in various cell systems, as well as elucidating the functional roles of site-specific glycosylations of various proteoglycans. Results Analysis of perlecan GAG-composition with SDS-PAGE. The relative proportion of HS and CS chains on perlecan was assessed using SDS-PAGE. To obtain defined GAG-glycopeptides for structural analysis, a perlecan sample was incubated with trypsin and passed over a SAX-column equilibrated with a low-salt buffer (0.2 M NaCl). The positively charged matrix retains anionic polysaccharides and their attached peptides, whereas neutral and positively charged peptides flow through. After a washing step, the bound GAG-glycopeptides were eluted stepwise with three buffers of increasing sodium chloride concentration (0.4 M NaCl, 0.8 M NaCl and 1.6 M NaCl). The three fractions were desalted and analyzed with SDS-PAGE followed by Alcian blue staining to visualize the presence of acidic GAG-chains. As a control, perlecan samples incubated with and without trypsin were loaded onto the gel without any prior SAX-chromatography.
The SAX-enriched GAG-glycopeptides migrated as a continuous band on the top section of the gel and were mainly recovered in the 0.8 M and 1.6 M NaCl fractions (Fig. 1b). To assess the relative proportions of CS versus HS, the enriched fractions were treated with either chondroitinase ABC or heparinase prior to the SDS-PAGE and Alcian blue staining. Digestion with chondroitinase resulted in slightly less Alcian blue staining compared with the non-treated fractions, indicating the presence of small amounts of CS chains (Fig. 1c). After heparinase digestion no staining was visible, indicating that the vast majority of the polysaccharides are of HS type (Fig. 1d). Taken together, this confirms that HS is indeed the major GAG type in the sample, and inspection of the PAGE profiles indicates that CS constitutes less than 10% of the total GAG-chains. Nevertheless, these findings support the concept that perlecan may appear as a hybrid proteoglycan. Analysis of heparan sulfate glycopeptides. We then tested whether nLC-MS/MS analysis could be used for site-specific analysis of the HS-glycopeptides. After SAX-chromatography the 1.6 M fraction was digested with heparinase and analyzed with positive mode nLC-MS/MS. To increase the likelihood of generating glycan-specific fragments, the glycopeptides were fragmented at a normalized collision energy (NCE) of 20%, as this relatively low energy level generates abundant glycosidic fragmentation24,25. Inspection of the resulting spectra revealed that the MS2-fragmentation generated intense oxonium ions at m/z 362.11, corresponding to a terminal dehydrated disaccharide structure [ΔHexA-GlcNAc]+. The MS2-spectra were therefore filtered for the presence of m/z 362.1 (m/z range 362.10-362.11), and several 362.1-peaks were indeed identified at various elution times (Fig. 2a). Examination of the peak at 43.6 min displayed a spectrum with abundant glycosidic fragmentation (Fig. 2b,c). In addition to the ion at m/z 362.11, other fragment ions were also identified, including a tetrasaccharide ([ΔHexA-GlcNAc-GlcA-Gal]+, m/z 700.19) and a pentasaccharide ([ΔHexA-GlcNAc-GlcA-Gal-Gal]+, m/z 862.25) (Fig. 2b). This indicates that the heparinase digestion generated a hexasaccharide structure, composed of the linkage region tetrasaccharide extended with a GlcA-GlcNAc disaccharide, dehydrated on the terminal hexuronic acid (ΔHexA-GlcNAc). The monoisotopic mass of the precursor ion (1362.72; 4+) equated to the mass of a peptide with the sequence DDASGDGLGSGDVGSGDFQMVYFR, derived from the N-terminal domain (amino acids 62-85) of mouse perlecan, encompassing the three previously described HS-attachment sites. The peptide was found to be modified with three hexasaccharide structures and one methionine oxidation. The measured mass (5446.8697 Da) deviated +0.66 ppm from the theoretical value. Detailed inspection also revealed several xylose shifts (132 Da) (e.g. peptide + xylose, m/z 1300.54; 2+) and galactose shifts (162 Da) (e.g. peptide + xylose + galactose, m/z 1381.56; 2+), further demonstrating the GAG-nature of the structure (Fig. 2b). Notably, fragmentation of glycoproteins often generates various types of glycosidic fragments. Here, the largest glycosidic fragment consisted of the peptide with two of the three serines substituted with one xylose and one galactose, respectively (m/z 1529.11; 2+). Further, detailed inspection in the low mass range (m/z 100-250) enabled the identification of several diagnostic HexNAc-derived oxonium ions.
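The glycosidic-fragment assignments above follow simple additive mass arithmetic: the m/z of a singly protonated oxonium (B-type) ion is the sum of the monoisotopic residue masses of its monosaccharides plus one proton, while the residual glycan adds its summed residue masses to the peptide as a neutral modification. A minimal sketch, using standard monoisotopic residue masses and the compositions named in the text:

```python
# Minimal sketch: computing the oxonium m/z values and glycan modification
# masses reported in the text from standard monoisotopic residue masses.
PROTON = 1.00728
RES = {
    "dHexA": 158.02152,   # 4,5-unsaturated hexuronic acid (lyase product)
    "HexA": 176.03209,    # e.g. GlcA
    "Hex": 162.05282,     # e.g. Gal
    "HexNAc": 203.07937,  # GlcNAc / GalNAc
    "Pent": 132.04226,    # Xyl
}

def oxonium_mz(composition):
    """m/z of a singly protonated B-type glycan fragment."""
    return sum(RES[res] * n for res, n in composition.items()) + PROTON

print(oxonium_mz({"dHexA": 1, "HexNAc": 1}))                       # ~362.11
print(oxonium_mz({"dHexA": 1, "HexNAc": 1, "HexA": 1, "Hex": 1}))  # ~700.19
print(oxonium_mz({"dHexA": 1, "HexNAc": 1, "HexA": 1, "Hex": 2}))  # ~862.25

# Residual glycans as neutral modifications added to the peptide mass:
tetra = RES["dHexA"] + 2 * RES["Hex"] + RES["Pent"]                # ~614.1694
hexa = tetra + RES["HexNAc"] + RES["HexA"]                         # ~993.2808
print(tetra, hexa)
```

The same residue masses reproduce the tetrasaccharide (614.1694 Da) and hexasaccharide (993.2808 Da) modification masses used in the Mascot searches described below.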
Such ions were the result of H2O losses from the m/z 204.09 [HexNAc] + ions into m/z 168.07 and m/z 186.08, and further decompositions into m/z 126.06 and m/z 138.05 (Fig. 2c). Furthermore, an increase of the NCE level to 30% was used to generate abundant peptide fragmentation that enabled the identification of several diagnostic b- and y-ions of the peptide (m/z 370-2000) (Fig. 2d). The m/z range was here started at a higher value to exclude the prominent ion at m/z 362.11 that otherwise might suppress the intensities of the weaker b- and y-ions. The identified HS-glycopeptide structure is shown in Fig. 2d, insert. An automated Mascot search algorithm was constructed to identify whether other HS-glycopeptides were present in the sample preparation. This proteomic analysis was allowed to include the hexasaccharide structure ΔHexAGlcNAcGlcAGalGalXyl-O- (993.2808 Da) and was used on the MS-data of the 0.4 M, 0.8 M and 1.6 M fractions. The search algorithm enabled the identification of other extracellular matrix HSPGs, including mouse collagen XVIII and agrin, found as three and one glycopeptide variants, respectively (Supplementary Fig. S1). Surprisingly, a HS-site was identified in the C-terminal end of mouse perlecan (CQQGAGYGVVESDWHPEGSGGN) at the same location where we previously identified a CS-site in human tissue fluids 18 . Furthermore, detailed evaluation of additional heparinase-generated glycopeptides revealed that the peptides could also be substituted with tetrasaccharides, thus demonstrating that the enzyme digestion may generate additional length variations. The tetrasaccharides were composed of the linkage region with a dehydrated terminal HexA residue. A glycopeptide derived from the N-terminal domain of perlecan encompassing the three previously known HS-sites (DDASGDGLGSGDVGSGDFQMVYFR) was found with three tetrasaccharide modifications (614.1694 Da), one methionine oxidation and one phosphate modification. The measured mass (4389.5071 Da) deviated +2.05 ppm from the theoretical value. Furthermore, the phosphate group was found to be located on the glycan (Supplementary Table S1). The automated Mascot search algorithm was constructed to identify whether other HS-glycopeptides were modified with tetrasaccharides in the same sample preparation. The analysis was allowed to include the tetrasaccharide structures ΔHexAGalGalXyl-O- without (614.1694 Da) or with one phosphate group attached (694.1358 Da). The search enabled the identification of HS-glycopeptides derived from mouse collagen XVIII, agrin and collagen XV, several of which had the phosphate modification on the xylose residue (Supplementary Fig. S4) (Supplementary Table S1). Furthermore, the C-terminal HS-site of mouse perlecan was also found with a tetrasaccharide and further examination identified variants related to NH3-rearrangement (Supplementary Fig. S5). Similar NH3-rearrangements were also identified for the hexa- and octasaccharides, which eluted at ~46 min in Fig. 3. Taken together, the Mascot-assisted search analysis of the sample preparation enabled the identification of seven different mouse HS-glycopeptides derived from four different core proteins (perlecan, collagen XVIII, agrin and collagen XV). Identification of a hybrid proteoglycan site. Since perlecan has been suggested to be a hybrid proteoglycan and as our initial SDS-PAGE analysis indicated the presence of CS, we wanted to determine which of the HS-sites may also be substituted with CS.
GAG-glycopeptides were enriched by SAX-chromatography and eluted with high-salt buffers as previously described. The collected fractions were divided in half; one part was digested with heparinase and the other with chondroitinase ABC. The samples were analyzed consecutively by nLC-MS/MS. The general workflow for glycopeptide enrichment, the enzyme digestions, and the subsequent MS-analysis is illustrated in Supplementary Fig. S6. The chondroitinase-digested sample contained the anticipated hexasaccharide-substituted C-terminal glycopeptide (CQQGAGYGVVESDWHPEGSGGN), indicating the presence of a CS-substitution. The identified CS-glycopeptide eluted at ~39 min (Fig. 4a), similar to that of the C-terminal HS-glycopeptide of the heparinase-digested sample (Fig. 4b), although at much lower intensity. The HCD-generated MS2-spectra of the CS- and HS-glycopeptides at m/z 1095.75; 3+ were virtually indistinguishable and both contained a prominent m/z 362.11 ion, as well as similar glycosidic and peptide fragments (Fig. 4c,d). However, GalNAc- and GlcNAc residues produce different oxonium ion profiles during HCD-fragmentation, which can be used for saccharide identification 25 . The GalNAc-derived oxonium fragments typically produce relatively higher intensities of m/z 126.06 and m/z 144.07 compared with GlcNAc-derived fragments. Conversely, GlcNAc-derived oxonium fragments typically produce relatively higher intensities of m/z 138.05 and m/z 168.07. In accordance with this concept, the chondroitinase ABC-digested sample revealed higher intensities of m/z 126.06 and m/z 144.07 compared with those of the heparinase-digested sample, and vice versa for the m/z 138.05 and m/z 168.07 ions, thereby providing direct evidence for the structural identities of the two detected GAG-structures (CS and HS). Notably, the perlecan N-terminal HS-sites were not found to be substituted with CS, illustrated by the dominance of oxonium ion peaks at m/z 138.05 and 168.07 (Fig. 2b). Taken together, this suggests that the C-terminal site of mouse perlecan is indeed a 'hybrid proteoglycan site', carrying both HS and CS. Discussion Despite great interest in HSPGs and CSPGs in biomedical research, there exists no effective method to experimentally determine their GAG attachment sites in vivo. Here we present a glycoproteomics approach that provides combined information on HS and CS linkage regions and their attachment sites, as well as the identities of the core protein. This method may facilitate studies on HS, CS and hybrid type PGs in various pathophysiological settings, as well as assist in elucidating the influence of a peptide code for GAG-biosynthesis. Due to the complex nature of PGs, the GAG chains and the core protein are typically separated prior to structural analysis and although this procedure facilitates the analysis in some aspects, it precludes site-specific glycan information. SAX-chromatography is commonly used for enrichment of GAG chains as the positively charged matrix retains the anionic polysaccharide chains 26,27 . Similar to previous work, GAG-substituted glycopeptides rather than GAG chains or complete PGs were now enriched. Additionally, we have now built on our recently published methodology for CSPG characterization in order to specifically analyse HS modifications of strict HSPGs or hybrid type PGs 18,19 .
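As a concrete illustration of the oxonium-ion profiling described above, the following minimal Python sketch assigns the HexNAc isomer of a glycopeptide spectrum from the relative intensities of its diagnostic ions (m/z 126.06 and 144.07 favouring GalNAc; m/z 138.05 and 168.07 favouring GlcNAc). The m/z tolerance and the decision margin are illustrative assumptions, not values taken from this study.

GALNAC_IONS = (126.06, 144.07)  # relatively intense for GalNAc (CS)
GLCNAC_IONS = (138.05, 168.07)  # relatively intense for GlcNAc (HS)

def _sum_near(peaks, targets, tol=0.01):
    # Sum intensities of peaks within +/- tol of any target m/z.
    return sum(i for mz, i in peaks if any(abs(mz - t) <= tol for t in targets))

def assign_hexnac(peaks, margin=1.5):
    # peaks: list of (mz, intensity) tuples from one MS2 spectrum.
    galnac = _sum_near(peaks, GALNAC_IONS)
    glcnac = _sum_near(peaks, GLCNAC_IONS)
    if galnac > margin * glcnac:
        return "GalNAc"  # consistent with a CS linkage region
    if glcnac > margin * galnac:
        return "GlcNAc"  # consistent with an HS linkage region
    return "ambiguous"

In practice such a ratio test would only be applied to spectra that already contain the m/z 362.11 ion, i.e. to candidate GAG-glycopeptides.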
Heparinase enzymes were used to reduce the length and structural complexity of the polysaccharides, which results in the release of internal disaccharides and generation of a residual saccharide structure containing the linkage region of the HS chains still attached to the peptide. As heparinases act by elimination, they generate disaccharides containing Δ4,5-HexA and a Δ4,5-HexA at the non-reducing end of the residual saccharide structure 28 . Other approaches using heparinase for HS and HSPG structural characterization have previously been described 29,30 . Some of these methods include composition analysis of the released disaccharides following heparinase digestion 31 . Such analysis provides a picture of the polysaccharides in terms of their constituent disaccharides 32 . However, as the disaccharides are derived from a mixture of HS chains of various core proteins, this strategy does not provide site-specific compositional information. An antibody that reacts with neoepitopes of the heparinase-generated saccharide structures has also been described 30 . In combination with heparinase digestion, the antibody provides a general overview of expressed HSPGs in a given sample. However, no core protein identities are obtained using such an antibody-based assay. The method described here has the advantage of providing site-specific glycan information together with core protein identities. This is all achieved in a single analysis and should thus be an important complement to previous methods. The number of identified core proteins that carry HS chains is relatively low. Only seventeen proteins are known to carry HS, which is a very limited number compared to the number of proteins known to carry N- or O-linked glycosylations [33][34][35] . Although this difference may primarily reflect that the number of potential attachment sites in the proteome is far greater for N- and O-glycans compared with that of HS, a certain degree of research bias regarding detection methods cannot be excluded. The introduction of glycoproteomics for site-specific analysis of N- and O-glycans has resulted in a significant increase in the number of identified glycoproteins 16,17 . We identified several HSPGs in a single sample preparation, illustrating that this method may potentially be routinely used to identify HSPGs in biologically relevant samples. An interactome analysis of agrin, collagen XV and collagen XVIII, identified in the perlecan-enriched sample now analyzed, demonstrates that these HSPGs interact with perlecan in a complex manner (Supplementary Fig. S7). Thus, as these components are integral parts of the basement membrane, they are likely to be co-enriched with perlecan during the extraction procedure. Glycoproteomic analysis of CSPGs has enabled the identification of several novel CS core proteins, many of which were previously defined as prohormones 18 . Detailed analysis also revealed that the linkage region contained unexpected sialic acid and fucose modifications 19 . Whether further glycoproteomic analysis of HSPGs will result in the identification of novel core proteins, unforeseen linkage region complexity, or the discovery of novel functional classes of proteoglycans remains to be determined. The term 'hybrid proteoglycan' is used to denote core proteins which carry both HS and CS chains. However, it is often unclear on a peptide level whether a certain attachment site carries HS or CS chains, or both.
The combined use of heparinase and chondroitinase ABC, as used here, enabled the identification of a hybrid site only at the C-terminal end of mouse perlecan. MS2-fragmentation generated a prominent oxonium ion at m/z 362.11 for both glycopeptides, which corresponds to the terminal dehydrated disaccharide [ΔHexAHexNAc] + of two possible hexasaccharide structures. The analysis of the HexNAc-derived oxonium ion profiles provided additional support for the saccharide identities (GlcNAc and GalNAc, respectively). Such analysis has previously been used for N- and O-glycans and provides direct evidence of the isomer identity, in contrast to the indirect evidence provided by the enzyme specificities 25,36,37 . This additional confirmation is important as the purity and specificities of the enzymes are not always optimal, and thus the possibility of cross-reactivity cannot be excluded. Definite assignment of saccharide identities is therefore only feasible for hexasaccharide or longer structures, as they generate unique fingerprint oxonium ion patterns that serve as direct proof of the isomer identities. The identification of 'hybrid sites' is likely to be biologically important as HS and CS often display opposite effects on cell function 38,39 . In neurons, HS and CS control axonal generation through the interaction with a protein tyrosine phosphatase, and while HS has been shown to promote neurogenesis, CS has an inhibitory effect 39 . One may speculate whether the separate C-terminal HS and CS polysaccharides on perlecan, as shown here, induce opposite effects on cell function. Apart from its structural role in basement membranes, perlecan also contributes to the regulation of blood vessel growth 21 . The proteolytic processing of its C-terminal domain liberates endorepellin, a bioactive domain with angiostatic activity 40 . The activity has been assigned to the laminin-like globular (LG3) domain, which is generated upon cleavage of endorepellin by bone morphogenic protein 1 (BMP1) 41 . Interestingly, BMP1 cleaves endorepellin at a position in close proximity (+3 residues) to the hybrid GAG-site. One may speculate whether the GAG-site influences the regulation of BMP1 cleavage of perlecan, and if so, whether CS and HS chains may have opposing roles in such a process. Interestingly, the fixed HS-sites of the N-terminal domain (DDASGDGLGSGDVGSGDFQMVYFR) display a different peptide motif compared to the hybrid site in the C-terminal domain of perlecan (CQQGAGYGVVESDWHPEGSGGN). Although no straightforward consensus motifs exist to predict potential GAG-sites, certain features have been identified that influence the selection of HS-synthesis and CS-synthesis, respectively. HS-sites typically contain repetitive SG-motifs and sometimes also a tryptophan residue in the vicinity of the SG-repeats 42 . In contrast, CS-sites typically contain only a single SG-attachment site with a cluster of acidic residues in close proximity 10 . Whereas the fixed HS sites in the N-terminal end of perlecan largely conform to this notion, the hybrid site in the C-terminal end conforms to neither the classical HS-promoting nor the established CS-promoting sequences. Although the CQQGAGYGVVESDWHPEGSGGN sequence bears an apparent resemblance to a CS-promoting sequence due to its single SG-dipeptide, the peptide has relatively few acidic residues and contains a tryptophan residue in the vicinity of the attachment site, which are features more expected for fixed HS-sites.
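The contrast between the two perlecan motifs can be made tangible with a rough Python sketch that scans a peptide for SG dipeptides and reports the features discussed above (acidic residues and tryptophan near each site). The window size is an illustrative assumption and, as stated above, no validated consensus motif exists, so this is a heuristic description rather than a predictor.

def gag_site_features(peptide, window=6):
    sg_positions = [i for i in range(len(peptide) - 1) if peptide[i:i + 2] == "SG"]
    sites = []
    for pos in sg_positions:
        lo, hi = max(0, pos - window), min(len(peptide), pos + 2 + window)
        context = peptide[lo:hi]
        sites.append({
            "position": pos,
            "acidic_nearby": sum(context.count(a) for a in "DE"),  # D/E residues
            "trp_nearby": "W" in context,
        })
    return {"n_SG": len(sg_positions), "sites": sites}

# Fixed N-terminal HS-sites (three SG dipeptides) vs. the hybrid C-terminal site:
print(gag_site_features("DDASGDGLGSGDVGSGDFQMVYFR"))
print(gag_site_features("CQQGAGYGVVESDWHPEGSGGN"))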
Future studies will reveal if this motif is present in other core proteins, and if so, whether they also code for hybrid GAG-sites. However, the amino acid sequence alone does not completely determine the GAG-selection process, as some sites are unoccupied and the determination of HS vs. CS-glycosylation appears only partly encoded by the peptide sequence. Serglycin, the major proteoglycan of mast cells, contains multiple clusters of SG-SG dipeptides and may carry both heparin (a highly sulfated variant of HS) and CS 43 . Mast cells derived from the peritoneum contain only heparin whereas mast cells of the lung contain both heparin and CS, suggesting that the core protein substitution varies with the differentiation and functional status of the cells 44 . It is possible that other post-translational modifications, such as N-glycans and mucin-type O-glycans, in the vicinity of the GAG-site may also influence the biosynthetic process. We have previously achieved combined site-specific characterization of CS linkage regions and mucin-type O-glycans on the very same glycopeptide of bikunin (protein AMBP) 19 . The basis of a peptide code and the influence of post-translational modifications adjacent to the attachment site is something that can be tackled afresh with our new methodological approach. A long-standing question is whether the core protein sequence influences the final structure of the synthesized HS chains. For instance, do the HS chains of the N-terminal end of perlecan display different fine-structures compared with the HS chain of the hybrid site on the C-terminal end? Interestingly, our results show that site-specific sequencing of longer structures is also feasible with our method, including analysis of octasaccharides and decasaccharides (Fig. 3). Notably, no sulfate or phosphate modifications were identified on these structures, indicating that the first proportion of the HS chain on the C-terminal site may be relatively non-modified. Our finding is in keeping with previous studies, which demonstrate that the proximal region of HS close to the protein core may sometimes be extended with non-sulfated domains of about 10 disaccharides in length 45,46 . Moreover, with the use of shorter heparinase incubation times, sequencing of longer structures is likely to be feasible. Although site-specific detailed sequencing of full-length structures will be very challenging, the analysis of, perhaps, the first 14-16 residues of the HS chains should be feasible and would provide insights into whether the attachment site peptide sequence influences the fine-structure of the polysaccharides. The idea of site-specific HS polysaccharide structures relates to the concept of specificity for HS-protein interactions and is currently a debated area 47 . As HS chains influence the spatiotemporal organization of various physiological processes, this implies a degree of specificity between the polysaccharides and the interacting ligands. Indeed, evidence shows that some activities clearly need a distinct sulfation pattern, whereas other interactions display a lower degree of specificity and require a less stringent sulfate distribution 3 . If any given core protein or peptide sequence is associated with a unique fine-structure, this information will likely contribute to our understanding of the concept of specificity and provide a novel theoretical framework for the understanding of HS-protein interactions.
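The length series just mentioned follows a simple mass ladder, sketched below in Python from the values reported above: the residual tetrasaccharide (614.1694 Da) extended by GlcA-GlcNAc repeats (monoisotopic residue masses 176.0321 + 203.0794 = 379.1115 Da). Setting n = 1 reproduces, within rounding, the hexasaccharide mass used in the Mascot searches (993.2808 Da); the octa- and decasaccharide values are the corresponding predictions, not reported measurements.

TETRA = 614.1694         # linkage-region tetrasaccharide with terminal ΔHexA, Da
DISACCHARIDE = 379.1115  # one GlcA-GlcNAc repeat (176.0321 + 203.0794), Da

def glycan_mass(n_repeats):
    # Mass of the residual glycan extended with n GlcA-GlcNAc repeats.
    return TETRA + n_repeats * DISACCHARIDE

for n, name in enumerate(["tetra", "hexa", "octa", "deca"]):
    print(f"{name}saccharide: {glycan_mass(n):.4f} Da")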
We suggest that the method presented herein opens new possibilities for site-specific characterization of HS, CS and hybrid proteoglycans. In this work we present a novel glycoproteomic approach that provides site-specific identification and differentiation of HS- and CS-sites in vivo. The use of heparinase and chondroitinase ABC enabled the identification and differentiation between consensus HS sites and hybrid GAG sites, and inspection of the amino acid sequence indicated that certain peptide motifs may encode for the chain type selection process. The method may be used to address fundamental questions in basic biology, as well as assist in elucidating the functional roles of peptide sequences in the biosynthesis of proteoglycans. Methods Perlecan GAG-peptide preparation. Twenty micrograms of the commercial perlecan sample, isolated from Engelbreth-Holm-Swarm mouse sarcoma (H4777, Sigma-Aldrich), were trypsinized using an in-solution digestion protocol. Briefly, the sample was incubated for 10 min with Protease Max surfactant trypsin enhancer (0.02% final concentration) (Promega) in 50 mM NH4HCO3. The sample was thereafter reduced with DTT (5 mM) and alkylated with iodoacetamide (15 mM). Additional Protease Max surfactant was then added (0.03% final concentration) and the sample was trypsinized overnight (37 °C) with 20 μg trypsin (Promega). The trypsin-digested sample was enriched for GAG-peptides using SAX-chromatography (Vivapure, Q Mini H), as described previously 19 . Briefly, the sample was diluted in 10 mL coupling buffer (50 mM NaAc, 200 mM NaCl, pH 4.0), applied onto the column and spun at 1000× g for 2 min. The procedure was repeated until the entire sample volume had been applied onto the column. The column was washed with 400 μL of a low-salt wash solution (50 mM Tris-HCl, 200 mM NaCl, pH 8.0) and the GAG-peptides were eluted stepwise with three buffers of increasing NaCl concentration and pH: (1) 50 mM NaAc, 400 mM NaCl, pH 4.0, (2) 50 mM Tris-HCl, 800 mM NaCl, pH 8.0 and (3) 50 mM Tris-HCl, 1600 mM NaCl, pH 8.0. The collected fractions were desalted using a PD10-column (GE Healthcare) and individually subjected to heparinase or chondroitinase ABC degradation, or left without glycosidase treatment. For the heparinase digestion, 0.3 mU each of heparinase I (H2519, Sigma-Aldrich) and heparinase III (H8891, Sigma-Aldrich) were used together. The samples were incubated for 24 hrs at 37 °C in 40 μL digestion buffer (50 mM NaAc, pH 7.0, 0.1 mM CaCl2). For the chondroitinase ABC digestion, 0.3 mU of chondroitinase ABC (C3667, Sigma-Aldrich) was used and incubated for 24 hrs at 37 °C in 40 μL digestion buffer (55 mM NaAc, pH 8.0). The actions of heparinase (I and III), chondroitinase ABC and trypsin were monitored with SDS-PAGE. Three micrograms of perlecan incubated with and without trypsin were used as controls. The samples were mixed with 5× SDS sample buffer and loaded onto a 4-20% Novex Tris-Glycine gel (Invitrogen, Carlsbad, CA). After electrophoretic separation at 100 V for 1 h, the gel was stained with Alcian blue and thereafter scanned. Mascot search for GAG-glycopeptides. The MS data were processed using Mascot Distiller and searches for GAG-glycopeptides were performed as previously described 19 . Briefly, the HCD .raw spectra files were converted to Mascot .mgf format using Mascot Distiller (version 2.3.2.0, Matrix Science, London, UK). The ions were presented as singly protonated in the output Mascot file.
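The charge-state arithmetic behind this conversion is simple and worth making explicit; a minimal Python sketch is given below. It converts an observed multiply protonated precursor m/z to the neutral monoisotopic mass, and to the singly protonated [M+H]+ value as presented in the Mascot output. Only the proton mass is assumed; note that the two-decimal m/z quoted earlier reproduces the reported glycopeptide mass (5446.8697 Da) only approximately, since the full-precision m/z is not given in the text.

PROTON = 1.007276  # monoisotopic proton mass, Da

def neutral_mass(mz, z):
    # Neutral monoisotopic mass from an observed m/z at charge z.
    return z * (mz - PROTON)

def singly_protonated(mz, z):
    # Equivalent [M+H]+ value, as written to the output Mascot file.
    return neutral_mass(mz, z) + PROTON

# The N-terminal perlecan glycopeptide reported above (m/z 1362.72, 4+):
print(round(neutral_mass(1362.72, 4), 4))  # ~5446.85 Da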
Searches were performed using an in-house Mascot server (version 2.3.02) with the enzyme specificity set to trypsin and then to "semitrypsin", meaning that the program searches for peptides that display tryptic specificity at one terminus, whereas the other terminus may be a non-tryptic cleavage site. The searches were performed on human sequences of the UniProtKB (87,613 sequences, 13/3/2013). The instrument parameter was set to consider the MH+ form of b- and y-ions and their losses of NH3 and H2O. The peptide tolerance was set to 10 parts per million (ppm) and the fragment tolerance was set to 0.01 Da. The searches were allowed to include variable modifications at serine residues of the residual hexasaccharide structure [HexA(-H2O)GlcNAcGlcAGalGalXyl-O-] (C37H55NO30, 993.2808 Da). Data evaluation. All generated GAG-glycopeptide hits were manually evaluated according to previously established criteria for CS-glycopeptides 19 . HS-glycopeptide hits were evaluated using the following criteria: (1) the presence of HexNAc-generated oxonium ions, specifically m/z 362.11; (2) at an NCE of 20%, the MS2-spectra should also display stepwise glycosidic fragmentation of the linkage region and/or the peak(s) corresponding to the deglycosylated peptide ions; (3) the accuracy of the precursor mass should not deviate more than 5 ppm from the theoretical mass value.
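A schematic Python filter implementing the machine-checkable parts of these acceptance criteria might look as follows; the peak-list data structure and the m/z tolerance are hypothetical stand-ins for whatever a spectrum parser provides.

OXONIUM = 362.11  # [ΔHexA-GlcNAc]+ diagnostic ion, criterion (1)

def ppm_error(measured, theoretical):
    return (measured - theoretical) / theoretical * 1e6

def passes_criteria(ms2_peaks, measured_mass, theoretical_mass, mz_tol=0.01, max_ppm=5.0):
    # Criterion (1): the m/z 362.11 oxonium ion must be present.
    has_oxonium = any(abs(mz - OXONIUM) <= mz_tol for mz, _ in ms2_peaks)
    # Criterion (3): precursor mass within 5 ppm of the theoretical value.
    mass_ok = abs(ppm_error(measured_mass, theoretical_mass)) <= max_ppm
    return has_oxonium and mass_ok

Criterion (2), the stepwise glycosidic fragmentation of the linkage region, was assessed manually in the study and is not captured by this simple filter.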
2018-04-03T00:58:38.535Z
2016-10-03T00:00:00.000
{ "year": 2016, "sha1": "e9f3b126f6be968f5b5e4f32ace5752aec7bfe2e", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/srep34537.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e9f3b126f6be968f5b5e4f32ace5752aec7bfe2e", "s2fieldsofstudy": [ "Biology", "Chemistry", "Materials Science" ], "extfieldsofstudy": [ "Medicine", "Chemistry" ] }
251967131
pes2o/s2orc
v3-fos-license
Trends in Population-Based Studies: Molecular and Digital Epidemiology (Review) The development of high-throughput technologies has sharply increased the opportunities to research the human body at the molecular, cellular, and organismal levels in the last decade. Rapid progress in biotechnology has caused a paradigm shift in population-based studies. Advances in modern biomedical sciences, including genomic, genome-wide, post-genomic research and bioinformatics, have contributed to the emergence of molecular epidemiology, focused on the study of the personalized molecular mechanisms of disease development and their extrapolation to the population level. The work of research teams at the intersection of information technology and medicine has become the basis for highlighting digital epidemiology, the important tools of which are machine learning, the ability to work with real-world data, and accumulated big data. The developed approaches accelerate the process of collecting and processing biomedical data and testing new scientific hypotheses. However, the new methods are still in their infancy; they require testing under various conditions, as well as standardization. This review highlights the role of omics and digital technologies in population-based studies. Introduction The global medical and demographic problems, such as population ageing, the increasing prevalence of chronic non-communicable diseases, and the pandemic of a new coronavirus infection, set new large-scale challenges for healthcare, and precision medicine has become one of the tools for solving them. Initially demanded mainly in the diagnosis and treatment of oncological diseases, it is now being introduced into all areas of medicine. Major research projects and campaigns are being initiated worldwide to develop and implement precision medicine strategies. Experts estimate the global precision medicine market to reach $87.7 billion by 2023. The leading scientific institutions are located in the USA, United Kingdom, France, and China. Since 2018, the number of publications in the field of precision medicine has amounted to about 16 thousand worldwide. Molecular and digital epidemiology is one of the main tools of precision medicine. Molecular epidemiology Genomic research Biological research has traditionally been carried out using reductionist approaches, partly due to limitations in both the experimental power of the devices and the complexity of the analytical data evaluation processes. In the last decade, the development of high-throughput technologies has led to a sharp increase in the opportunities for studying the human body at the molecular, cellular, and organismal levels [1]. The rapid progress of biotechnology has led to a paradigm shift in genomic epidemiology, from linkage analysis to genome-wide association studies (GWAS) and the widespread use of next-generation sequencing (NGS). Technological developments have improved research design, enhanced our understanding of disease etiology, and led to numerous scientific discoveries [2]. In genomics, first-generation sequencing methods could sequence the human genome for $300,000; two decades later, next-generation methods can sequence the human genome in a few hours at a cost of $1000. Measurements of characteristics such as the epigenome, transcriptome, proteome, etc. have undergone similar changes, which has allowed researchers to start studying pathologies using their characteristics at the molecular level rather than at the tissue level [3].
Therefore, both the study of individual organisms and the study of populations require computational and statistical approaches to the data of various "omics", which consider metabolism in cells, tissues and organs as a whole, as an integrated system, rather than as isolated separate processes. The reductions in the cost of genome sequencing, combined with an increase in computational power, have caused a strong revival of interest in the application of whole genome sequencing in public health [4]. Today, genomic epidemiology makes it possible to study the genomes of pathogens so as to gain a better insight into the spread of infectious diseases among populations and to respond quickly to disease outbreaks [5]. Together with phylodynamics (a combination of epidemiology, evolution, and immunodynamics), genomic epidemiology is a rapidly developing field of science that addresses key issues related to epidemic preparedness and management in real time [6]. In the beginning, genomic data were used to study a variety of viruses, particularly the influenza A virus and human immunodeficiency virus (HIV). The Ebola virus epidemic in West Africa (2013-2016) was the first major and large-scale challenge to study the virus genomes; that resulted in the discovery of their origin and the causes of such a rapid spread of the epidemic and also allowed researchers to detect subsequent sources of local outbreaks [7]. Genomic epidemiology has become a valuable source of information for scientists about the nature of threats to public health such as the Zika, Middle East Respiratory Syndrome (MERS), Ebola, and SARS-CoV-2 outbreaks [8]. These threats have required a variety of approaches, including intensive genome sequencing to understand transmission dynamics during the acute phase of epidemics (Ebola virus in the Democratic Republic of the Congo) and broader genomic "surveillance" to detect a hidden increase in prevalence (poliomyelitis) [9]. During the SARS-CoV-2 pandemic, many countries that had not previously used genomic data began to actively conduct such studies and rely on their results. Genomic technologies have made more than 2.5 million SARS-CoV-2 sequences known from over 185 countries [10], and due to the subsequent public interest in genomic epidemiology, new methodologies have been rapidly developed to fully utilize this dataset to fight against the pandemic. The transmission of all infections occurs at different spatial scales, which depend on the pathogen, the nature of the host's movement, immunity, and other factors [11]. The impact of obtaining genomic data on the formation of public health is shown in Figure 1. Genomic data can be used to characterize clinical cases of infection depending on location and time and to track outbreaks at all spatial scales: from nosocomial infections to pandemics [12]. The analysis of pathogen genomes in the context of other sequences obtained from the same outbreak, as well as their comparison with previously characterized variants, allows researchers to develop intervention strategies at the individual and population levels to minimize the burden of infectious diseases on the individual and society [13]. This comprehensive approach involving pathogen sequencing, analysis, and response is called molecular epidemiology.
In contrast to the development of individual-level treatment strategies that focus on the functional roles of host and/or pathogen mutations, outbreak-scale genomic analysis uses pathogen mutations as markers of transmission events [14]. Genomic epidemiology studies the dynamics of outbreaks and the rapid evolution of pathogens, which often accumulate mutations on the same time scale as they spread. NGS makes it possible to detect various types of genomic and epigenetic variations with high accuracy. Such sequencing allows researchers to directly study all these variations in an individual, increasing the chance of detecting mutations [15]. Although the use of NGS is still limited due to its high cost, the success of several recent projects demonstrates the great potential of this method in genomic epidemiology, especially in view of the decline in sequencing cost. With a sufficient sample size, appropriate metadata (such as location and date), and an appropriate statistical framework, pathogen genomes may assist in the identification of patterns in the spread of an epidemic even with a small number of patients studied, allowing the development of precise targeted interventions compared to traditional methods and the use of demographic data [16]. In the near future, we will also be able to estimate the prevalence of chronic noncommunicable diseases using patients' pedigree data. In 2011, the National Human Genome Research Institute (USA) published a review on genetic medicine, noting that the most effective way to improve human health is to understand normal biology (in this case, the biology of the human genome) as a basis for studying the biology of diseases, which then becomes the basis for health promotion. To date, it is still difficult to fully determine the future prospects of genetic epidemiology for improving public health [17]. When evaluating the contribution of genetic epidemiology to public health, it is equally important to understand that the etiology of diseases is complex and that genetic risk for developing pathology does not equate to genetic determinism [18]. The complex relationship between genetics and disease poses an ethical dilemma for practitioners regarding the correct interpretation of genetic test results. When performing genetic tests, it is possible to indirectly reveal disorders that will never develop into clinically manifest disease [19]. An ethical question arises: should patients be informed of these incidental findings that may have medical value? Biomarkers In the epidemiological study of diseases, metabolite concentrations are increasingly used as biomarkers that serve as indirect indicators of the rate of metabolic reactions. However, the assessment of the rate of individual reactions can provide more accurate information about the ongoing changes directly in the organ [20]. Direct measurement of the rate of metabolic reactions in situ is currently impractical in large population studies since such measurements are costly, technically complex, and require high-throughput equipment. This method is more successful when applied on a smaller scale, primarily through the use of non-invasive nuclear magnetic resonance spectroscopy (NMR spectroscopy) [21]. Metabolic pathway imaging techniques using hyperpolarized metabolites have shown promising results in the diagnosis and localization of tumors in patients with prostate cancer [22].
In a prospective clinical study involving 58 patients with chronic heart failure, the rate of adenosine triphosphate (ATP) synthesis was measured by studying the activity of cardiac creatine kinase in situ using the 31P NMR spectroscopy method [23]. ATP and creatine phosphate concentrations, as well as general clinical parameters, were used as predictors of chronic heart failure over an 8-year follow-up period. Excessive creatine kinase activity exceeded the significance of parameters such as patient age, gender, and concentrations of other metabolites in predicting heart failure events and death, including hospitalization for heart failure and ventricular assist device insertion [24]. These results relate to a relatively small group of patients, but they add weight to the case for the development of biomarkers based on the rate of metabolic pathways and reactions in the study of disease. Metabolism works as a continuously operating system of movement and transformation of molecules through reactions. Since the flow of metabolites is regularly redirected, metabolites accumulate at various points or become depleted, which results in a change in their concentration. The concentrations of metabolites reflect the effects of combined changes in reaction rates, but do not give a direct idea of the dysfunctions of the processes themselves, for example, in pathology affecting enzymes, genes, and other molecular products derived from the human genome [24]. In this regard, a systematic assessment of reaction rates on the scale required for epidemiology will require integrating metabolomic data with genomic, transcriptomic and/or proteomic information to determine enzymatic function. Due to the ability to characterize diverse variants of endogenous and exogenous metabolites in biological specimens, metabolomic approaches have quickly been recognized as an important tool in public health studies [25]. The results show that the use of small volumes of blood, urine, feces, saliva, exhaled air condensate, cerebrospinal fluid, and biopsy specimens for measuring the metabolome can provide information on possible mechanisms underlying the disease [26-30]. However, most of the existing evidence has come from case-control or crossover studies, which do not allow for a clear temporal relationship between exposure, biomarkers, and disease. Recently, the metabolic characterization of amniotic fluid, cord blood, and maternal/child urine or serum samples has been used to assess complex effects on the fetus and mother, which may potentially be associated with developmental problems. Dried newborn blood spots, used to identify metabolic biomarkers of future risk for cancer and other diseases, have been proposed as a promising sample type for metabolomic profiling [31][32][33][34]. The application of metabolomics for the study of disease risks, screening, and treatment efficacy has yielded promising initial results, although the field is still under development. These studies include ones on neurodegenerative diseases [35], type 2 diabetes [36], cancer [37], HIV, tuberculosis [38], malaria [39], and cardiovascular diseases [40]. The next important step in the application of metabolomics to study the etiology of diseases and the early detection of pathologies will be longitudinal studies, which have already shown their effectiveness in creating biological models of the environmental impact on humans [41,42].
Digital epidemiology To conduct large multicenter epidemiological studies, digital technologies are actively used to facilitate the processes of work planning, remote data collection and entry control, as well as subsequent result presentation and reuse [43][44][45][46]. Though the epidemiology of chronic noncommunicable diseases in Russia is still lagging behind that of infectious diseases [47], there is a need to create and implement digital services for the epidemiology of chronic noncommunicable diseases [48]. This need stems from the increased availability of omics technologies, the accumulation of results from many years of research, the need to compare the findings of similar studies, and the increased requirements for practical application and implementation of the results [47,49]. Digital systems for clinical research The basis for conducting research in the field of precision medicine is the formation of databases of clinical information annotated with the data on the collected biomaterials for each clinical case [50,51]. This significantly expands resource opportunities for research at the intersection of clinical areas when new members of the research team are involved or in the case of long-term work [52]. Coppola et al. [50] emphasized the importance of combining primary data with paraclinical information, including data from imaging studies, in a digital system. According to the authors, a service for visual data processing should have the options not only to display, but also to analyze data, which requires pre-processing and data markup. The selection of areas with suspected lung infiltration according to computed tomography (CT) data or with pathological signal foci in magnetic resonance imaging (MRI) pictures can be an example. Integration of genomic analysis into the data system contributes to the development of genomics and radiomics (radiomics is aimed at creating mathematical models and computer algorithms that, through the analysis of medical images, such as MRI or CT images, provide a finding about the pathophysiological features of tissues) [50,53]. According to the research teams accumulating biomedical data, imaging biobank data are to be used in accordance with already-known standards until specific standards have been developed [50,54]. Harmonization of processing will make it possible to combine data from multi-omics studies and visual materials for the integration of phenotypic and genotypic data [50,55]. Over the past 10 years, many medical institutions have collected integrated databases (integrated data repositories, IDRs) [56], which are assembled from electronic medical records [57]. Based on the accumulated data, not only are scientific hypotheses tested, but clinical decision support systems are also built [56]. Gagalova et al. [56] identified four models for the architecture of medical data collection and storage, which differ in their data sources, purposes of use, availability of storage, etc. The purpose of this work was to initiate the development of guidelines on IDR creation in hospitals. Online databases Interactive monitoring systems have gained wide popularity [58]. Over the past 20 years, many services for monitoring infectious diseases have emerged [59,60]. To monitor the situation with antibiotic resistance, many services have been created that are limited geographically as well as by the described microorganisms and assessed metrics: EARS-Net (https://atlas.ecdc.europa.eu/public/index.aspx); CDDEP Resistance Map (https://resistancemap.cddep.org/index.php);
SGSS (https://sgss.phe.org.uk/Security/Register); ATLAS (https://atlas-surveillance.com/#/login); SMART (https://globalsmartsite.com/#/auth/login). The free-access web application AMRmap (https://amrmap.ru/) [61] is a Russian development which displays data on antibiotic resistance obtained in multicenter clinical trials. The system has a section on genetic markers. The database contains information going back to 1997, and access is provided free of charge. Since 2018, the University of Bristol has been developing the EpiGraphDB project [62], a data-driven analytical platform designed for the intellectual analysis of epidemiological indicators. The project is developing approaches to the interpretation of causal relationships in the systematic automated analysis of many phenotypes using data from an array of bioinformatic resources. The university is also developing software for the statistical processing of omics studies, MR-Base being one example [63]. A large collection of sequences of biological reactions in the body is presented in the WikiPathways system [64]. Currently, this system is being actively filled with omics research data. The STRING database contains known and predicted protein-protein interactions [65]. Toom et al. [66] compared the results of an epidemiological study of headache in Estonia using an online questionnaire with the results obtained during face-to-face patient visits. The use of online questionnaires can significantly speed up the data collection process, increase population coverage, and reduce manual data entry errors. However, the authors noted that in the online survey the majority of people did not have a headache, which made the sample of people who completed the online questionnaires greatly different from the sample of patients who came for face-to-face visits. This reduced the estimated incidence of headache in the population. Also, more women, young people, married people, urban residents and people with a high level of education participated in the online survey. These characteristics of the sample are typical and should be considered limiting factors for studies using online questionnaires [67][68][69]. The integrated (online access, telephone, and paper mail) National Australian StepUp System for Dementia Research [70] is an interesting solution. In this system, patients with dementia and researchers of the diseases accompanied by cognitive deficits are registered in one of three convenient ways [70]. This accelerates the process of collecting data to test research hypotheses and to develop new approaches to combat dementia [71,72]. The authors note that the system continued to operate without interruption after the start of the pandemic of a new coronavirus infection [70]. Over the two years of the platform's operation, more than 1000 patients and 120 researchers have been registered, and more than 40 studies have been initiated [70]. For clinical trials, there are a number of free services which support the creation of electronic individual registration cards, such as REDCap [73] or Ark [74]. The use of specialized services may be limited since access is provided to the organization after the conclusion of an agreement with the copyright holders and not directly to the researcher. However, such a service ensures secure storage of personal data without third-party access, unlike many open resources, including Google Forms [75].
In the future, research services will be used to create large databases on certain nosologies, diagnostic methods, or treatments. Services are constantly evolving, and additional specialized analysis modules are being created, for example, for building a pedigree [74]. The pandemic of a new coronavirus infection caused an accelerated and forced introduction of digital technologies in all spheres of life, including all stages of research [76,77]. Since the beginning of the pandemic in 2019, many national and international online monitoring systems have been developed [78]. The challenges for these fast-growing services are their weak integration with each other, the lack of centralized management, and difficulties in the interpretation and practical application of data [79]. On the other hand, a limiting factor is the reluctance of patients to use digital questionnaires or remote methods of communication due to uncertainty about confidentiality in their use or unwillingness to become dependent on gadgets [80], which is especially common among older patients. Open data The annual increase in accumulated data requires the introduction of new guidelines for the management of captured data. One of the most common standards for such work with data is FAIR (findability, accessibility, interoperability, and reusability) [81], which has become a fundamental requirement for open science [82,83]. In their paper, Suhre et al. [84] emphasize the importance of data exchange for omics research, giving an example of a combination of GWAS and proteomic analysis. The authors consider the prospects for the creation of a database that will accumulate information about the genetic colocalization of genomic information and characteristics of the molecular phenotype of a disease (for example, gene expression and metabolomic characteristics) with clinical trial endpoints. Real world data Real world data in biomedical research refers to data captured from electronic medical records, medical registries, medical insurance companies, non-interventional clinical trials, and other sources in which information was obtained not under experimental conditions [85]. The HealthMap online system (https://www.healthmap.org/ru/) has been operating since 2006, accumulating data on disease outbreaks from open web resources [86]. In 2008, the web-based influenza surveillance system Influenzanet was launched [87,88]. Limitations in the use of these data are their redundancy (repetitions), heterogeneity (different input formats), and inconsistency (violation of the chronology of events). Chatzidimitriou et al. [89] created a database (n=20,463) on clinical cases of chronic lymphocytic leukemia (The ERIC CLL Database) filled with data from more than 90 centers and 31 countries. The authors consider the provision of standardization, the integration of retrospective data, and the assessment of the quality of input data to be necessary for the successful functioning of the distributed database [89]. Digital epidemiology as a separate field of knowledge According to Salathé [90], digital epidemiology has become a separate area of scientific knowledge. Its purpose is to understand the patterns of disease development and the dynamics of the health status of the population, as well as to determine the causes of these patterns in order to find ways to prevent the development of diseases and promote health. The broadest definition of digital epidemiology is epidemiology that uses digital data.
However, the author then specifies that digital epidemiology operates on data that was not collected with the primary purpose of conducting epidemiological studies. Such data can include electronic medical records, information from insurance funds and from city, regional, and federal health departments, as well as data from search engines, social networks, and mobile phones [90]. Google Flu Trends (GFT) became one of the first widely known digital epidemiology services, using search queries on acute respiratory symptoms for epidemiological analysis [91,92]. A serious problem was that the collected data were owned by a private company, and the analysis algorithms used were unavailable even to national healthcare systems [90]; moreover, independent testing of the capabilities of this service for epidemiological studies showed a low efficiency in assessing the incidence of infectious diseases [93]. Unofficial Internet sources can be a valuable resource for epidemiological research, but the current trend towards protecting personal data and maintaining privacy is an important limiting factor. Salathé identifies two ways to solve this problem [90]: the creation of monitoring systems by groups of scientists or professional communities, which would be more understandable and transparent for national healthcare systems and would increase the potential for their practical application; and greater involvement of the population in epidemiological studies. The rights to the data generated by individuals belong to the developers of the resource. A representative part of the population should be persuaded to share their personal health data with public health authorities for scientific research, the results of which can benefit society. Roth et al. [94] have traced the formation of digital epidemiology (Figure 2). According to the authors, machine learning methods based on data from healthcare systems or social networks (Twitter), which help determine the prognosis for survival and complications, had already been developed by 2018. It is important to note that the transformation of epidemiology leads to a change in its teaching principles [95]. Werler et al. [96] note that new curricula in epidemiology require the formation of causal thinking and the subsequent formation of a scientific hypothesis. Common mistakes made by young epidemiologists include estimating one risk factor for one outcome, inaccurate formulation of research questions, and giving greater weight in research to epidemiological and statistical approaches than to public importance. Ethical issues The development of high-precision medicine technologies entails the need to form new ethical standards [97]. Classical basic ethical principles are respect for patient autonomy and privacy [98]. In this case, ethical requirements must ensure that individuals cannot be identified in open data portals for the exchange of scientific data. The ethics of precision public healthcare regulates the interaction between patients, who have given voluntary informed consent to their attending doctor for the use of their clinical specimens in precision medicine research, and the public decision-making process that drives public health activities. The development of a new hybrid ethical paradigm is possible only with the well-coordinated work of these process participants. Conducting omics studies allows obtaining detailed information about any subject.
However, in order to plan disease control measures in a particular area or in a particular population, the following data indicating the demographic characteristics of an individual are important: geographical location, migration history, stay in prison, lifestyle and profession, etc. All these data are personal; they must not be subjected to wide dissemination, which would increase the risks of disclosing the identity of the subject. In this regard, particular attention is paid to the way the obtained information is presented. The ethics of precision medicine includes a public health ethics commitment to social justice and an emphasis on professional transparency and the trust built through it. The collected data should be transparent and aimed at improving the existing system and people's lives, not at stigmatizing social groups with high risk factors or relatively high incidence [97]. The development of electronic systems for capturing and storing data requires careful study of the risks to maintaining the security of the collected data [99]. New requirements for data management and professional confidentiality are emerging [98]. The speed, accuracy, and efficiency of big data processing offer great opportunities for public health, but entail a responsibility to adapt in a society that is committed to privacy, respect for human rights in matters of health, and social justice. Sharma et al. [100] advocate for the development of legislation to maintain the confidentiality of personal data collected during scientific and clinical research, and for auditing and the implementation of independent oversight to assess the management of the risks related to the reuse of data on research subjects. Solving this problem requires new approaches to working with patient data, taking into account the increased activity of scientific communication, the creation of open repositories, and the exchange of primary research data, which is an integral part of large epidemiological studies. However, people are motivated to participate in a study by pursuing their own interests, like the reputation of the organization with which they interact. Reuse of data by other organizations carries certain risks, which patients should be informed about before submitting voluntary informed consent to participate in a study. FAIR-Health is a new paradigm of open science that has been developed in view of the peculiarities of biomedical research [101]. This paradigm is aimed at considering the information and biomaterials collected in research to be a single resource. It is this principle that, according to Holub et al. [101], will help ensure the reproducibility of studies and the subsequent integration of results. Conclusion Modern methods of population-based studies, including both omics technology data and the results of monitoring the conditions and behavior of patients over a long period of time, provide detailed data on subjects. At the moment, a search for methods of standardizing the collected data and for their analysis and synthesis for further use is in progress. One of the major challenges to science is the integration of research results not only for rational storage, but also for the creation of dynamic digital models of subjects and processes.
10. Liu T., Chen Z., Chen W., Chen X., Hosseini M., Yang Z., Li J., Ho D., Turay D., Gheorghe C.P., Jones W., Wang C. A benchmarking study of SARS-CoV-2 whole-genome sequencing protocols using COVID-19 patient samples.
2022-09-01T15:16:16.604Z
2022-07-29T00:00:00.000
{ "year": 2022, "sha1": "a0a8e2fb4c9832c71d0b8656ed21853627153c23", "oa_license": "CCBY", "oa_url": "http://www.stm-journal.ru/en/numbers/2022/4/1791/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "83409da278b2db643dfdff017ff8bcd86368f598", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [] }
81985050
pes2o/s2orc
v3-fos-license
Bilinear Representation for Language-based Image Editing Using Conditional Generative Adversarial Networks The task of Language-Based Image Editing (LBIE) aims at generating a target image by editing the source image based on the given language description. The main challenge of LBIE is to disentangle the semantics in image and text and then combine them to generate realistic images. Therefore, the editing performance is heavily dependent on the learned representation. In this work, a conditional generative adversarial network (cGAN) is utilized for LBIE. We find that existing conditioning methods in cGANs lack representation power as they cannot learn the second-order correlation between two conditioning vectors. To solve this problem, we propose an improved conditional layer named Bilinear Residual Layer (BRL) to learn more powerful representations for the LBIE task. Qualitative and quantitative comparisons demonstrate that our method can generate images with higher quality when compared to previous LBIE techniques. INTRODUCTION The task of Language-Based Image Editing (LBIE) aims at manipulating a source image semantically to match the given description well. LBIE has seen applications to domains as diverse as Computer-Aided Design (CAD), Fashion Generation and Virtual Reality (VR) [1]. As illustrated in Fig 1 (example instruction: "The lady wore a white sleeveless dress"), using the LBIE technique, one can automatically modify the color, texture or style of a given design drawing by language instructions instead of the traditional complex processes. Nevertheless, LBIE is still challenging due to the following two difficulties: i) the model should find the areas in the image which are relevant to the given text description; ii) the relations of disentangled semantics in image and text description should be learned for a better generation of realistic images. To tackle these problems, several methods have been proposed [1,2,3,4,5], and most of them utilize generative models, e.g., GANs [6]. [1,2] divide LBIE into two subtasks: language-based image segmentation and image generation. Specifically, Zhu et al. [2] perform LBIE to "redress" the person with the given outfit description, while at the same time keeping the wearer and his posture or expression unchanged. They use a two-stage GAN that outputs a semantic segmentation map as an intermediate step, which is further used to render the final image with precise regions and textures at the second step. Some other approaches [3,4,5] can achieve LBIE without any segmentation map or explicit spatial constraints by adversarially training a conditional GAN [7]. Among them, [5] is the seminal work and it uses a concatenation operation to condition the image generation process on text embeddings. [3,4] follow this framework, and replace the concatenation operation with Feature-wise Linear Modulation (FiLM), which is a more efficient and powerful method as a generalization of concatenation. In this work, we first theoretically analyse these works, which edit the image based on fused visual-text representations using different conditioning methods. We found that all these conditioning methods can be modeled by a universal form of bilinear transformation based on [8]. However, all these methods lack representation power as they cannot learn the second-order correlation between two conditioning embeddings.
To solve this problem, we present an improved conditioning method named Bilinear Residual Layer, which strikes a compromise between representation effectiveness and model efficiency. We show, both theoretically and experimentally, that the Bilinear Residual Layer provides richer representations than previous approaches. Quantitative and qualitative results on the Caltech-200 bird [9], Oxford-102 flower [10], and Fashion Synthesis [2] datasets suggest that our approach can generate images of higher quality than previous LBIE techniques.

METHOD

In this section, we first theoretically analyze existing conditioning methods in cGANs. An improved conditional layer called Bilinear Residual Layer (BRL) is then proposed in Sec 2.2. Finally, we introduce the overall framework in Sec 2.3.

Overview of conditioning methods

Conditioning is a general-purpose operation and can be used for different tasks, e.g., conditional image generation [11,12] and cross-modality distillation [13]. The most commonly used approach in conditional GANs is concatenation. Formally, denote $I_f \in \mathbb{R}^{D}$ and $I_c \in \mathbb{R}^{D'}$ as the output of the previous layer and the conditioning feature, respectively, where $D$ and $D'$ are the dimensionalities of the two features. The concatenated representation $[I_f\; I_c] \in \mathbb{R}^{D+D'}$ can be further encoded by a matrix $W = [W_f; W_c]$, where $W_f \in \mathbb{R}^{D \times O}$ and $W_c \in \mathbb{R}^{D' \times O}$ are the corresponding weights for $I_f$ and $I_c$, and $O$ is the output dimension. Formally, we get the following transformation:

$$I_o = [I_f\; I_c]\,W = I_f W_f + I_c W_c, \tag{1}$$

where $I_o$ is the output tensor. Equation 1 suggests that concatenation-based conditioning amounts to adding a feature-wise bias $I_c W_c$ to the unconditional output $I_f W_f$. Accordingly, some other approaches [14,15] add a conditional bias directly instead of concatenating. Recently, some works [16] have validated that deep models can mimic the human attention mechanism by gating each feature with a value between 0 and 1. Inspired by this, Perez et al. [17] propose a more general conditioning method named feature-wise linear modulation (FiLM), which rescales the features by adding multiplicative interactions:

$$I_o = (I_f W_f) \odot (I_c W_c') + I_c W_c, \tag{2}$$

where $W_c' \in \mathbb{R}^{D' \times O}$ is the weight for learning the rescaling coefficients. From this formulation, we can conclude that concatenation is a special case of FiLM in which $I_c W_c' = \mathbf{1}$, where $\mathbf{1}$ is a matrix of ones. FiLM has shown its superiority over conventional concatenation and has been widely applied to multimodal interaction.

However, concatenation and FiLM only apply a linear transformation between the input and conditional features. In this work, we go a step further and generalize these linear methods to the more powerful bilinear version, which provides richer representations than linear models by learning the second-order interaction. In the bilinear model, the $i$-th feature of the output $I_o$ is calculated as

$$I_{o_i} = I_f\, W_i\, I_c^{\top}, \tag{3}$$

where $W_i \in \mathbb{R}^{D \times D'}$ is a weight matrix for the output feature $I_{o_i}$. Interestingly, we find that FiLM can be presented as a bilinear transformation. Denote the weights corresponding to the $i$-th output feature in $W_f$, $W_c'$, and $W_c$ as $w_{f_i}$, $w_{c_i}'$, and $w_{c_i}$. The FiLM transformation for the $i$-th feature, $I_{o_i} = (I_f w_{f_i})(I_c w_{c_i}') + I_c w_{c_i}$, can be represented as

$$I_{o_i} = I_f \left( w_{f_i}\, w_{c_i}'^{\top} + W_i' \right) I_c^{\top},$$

where $W_i'$ satisfies $I_f W_i' = w_{c_i}^{\top}$. Such a $W_i'$ can be constructed by choosing a nonzero element $I_{f_k}$ of $I_f$ and setting every element of $W_i'$ to 0 except the $k$-th row, which is set to $w_{c_i}^{\top} / I_{f_k}$. Obviously, the matrices $w_{f_i} w_{c_i}'^{\top}$ and $W_i'$ both have rank 1, so we have $\mathrm{Rank}(W_i) \le 2$.*
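To make the comparison concrete, the following minimal PyTorch sketch contrasts the three conditioning schemes of Eqs. (1)-(3) on vector features. The class names and dimensions are illustrative assumptions, not taken from the released code:

import torch
import torch.nn as nn

D, D_c, O = 512, 128, 256  # feature, condition, and output dims (illustrative)

class ConcatCond(nn.Module):
    """Eq. (1): I_o = I_f W_f + I_c W_c, i.e., a feature-wise conditional bias."""
    def __init__(self):
        super().__init__()
        self.W_f = nn.Linear(D, O, bias=False)
        self.W_c = nn.Linear(D_c, O, bias=False)
    def forward(self, I_f, I_c):
        return self.W_f(I_f) + self.W_c(I_c)

class FiLMCond(nn.Module):
    """Eq. (2): I_o = (I_f W_f) * (I_c W_c') + I_c W_c (scale plus shift)."""
    def __init__(self):
        super().__init__()
        self.W_f = nn.Linear(D, O, bias=False)
        self.W_c_scale = nn.Linear(D_c, O, bias=False)  # W_c' in the text
        self.W_c_shift = nn.Linear(D_c, O, bias=False)  # W_c in the text
    def forward(self, I_f, I_c):
        return self.W_f(I_f) * self.W_c_scale(I_c) + self.W_c_shift(I_c)

class FullBilinear(nn.Module):
    """Eq. (3): I_o_i = I_f W_i I_c^T; D*D_c*O parameters, hence costly."""
    def __init__(self):
        super().__init__()
        self.bilinear = nn.Bilinear(D, D_c, O, bias=False)
    def forward(self, I_f, I_c):
        return self.bilinear(I_f, I_c)

I_f, I_c = torch.randn(4, D), torch.randn(4, D_c)
for m in (ConcatCond(), FiLMCond(), FullBilinear()):
    print(type(m).__name__, m(I_f, I_c).shape)  # each -> torch.Size([4, 256])

The full bilinear layer is the only one of the three that couples every pair of input and condition features, which is exactly the second-order interaction the rank argument above formalizes.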
The constructed formulation indicates that FiLM is equivalent to a bilinear transformation whose transformation matrix $W_i$ is sparse and has rank no greater than 2. From a theoretical perspective, this illustrates that bilinear transformations can provide more fine-grained conditioning representations than concatenation and FiLM.

* Properties of rank: https://en.wikipedia.org/wiki/Rank_(linear_algebra)

Bilinear Residual Layer

We propose the Bilinear Residual Layer (BRL) for learning conditional bilinear representations, as illustrated in the dashed box of Fig 2. Similar to FiLM, we add shortcuts to guarantee the model's ability to learn an identity mapping. As a consequence, our bilinear residual layer can automatically decide whether the model needs to incorporate the conditioning information in later layers. However, the representational power of bilinear features comes at the cost of very high-dimensional model parameters, which require substantial computation and large quantities of training data to fit [18]. For example, the dimensionality of $W$ is $D \times D' \times O$, a cubic expansion. To reduce the number of model parameters, our approach adopts a low-rank bilinear method [19] to reduce the rank of $W_i$. Based on this idea, $I_{o_i}$ can be rewritten as

$$I_{o_i} = I_f\, U_i V_i^{\top} I_c^{\top},$$

where $U_i \in \mathbb{R}^{D \times d}$ and $V_i \in \mathbb{R}^{D' \times d}$ are the decomposed submatrices; they restrict the rank of $W_i$ to be at most $d \le \min(D, D')$. With the factors shared across output features as $U$ and $V$, the final feature vector $I_o$ is then obtained through the projection $P \in \mathbb{R}^{O \times d}$:

$$I_o = \left( (I_f U) \circ (I_c V) \right) P^{\top}.$$

Moreover, our bilinear residual layer is a general conditioning layer: it is applicable not only to LBIE but also to other conditional models and applications, e.g., text-to-image generation [20]. In the following sections we present the overall framework of our work, denoting the bilinear residual layer as $\mathcal{F}$ for convenience.

Overall framework

We follow the work of Dong et al. [5], which utilizes a cGAN to learn the target mapping conditioned on an image and a text description. As shown in Fig 2, the network consists of a generator G and a discriminator D. The generator has three modules: an encoding module, a fusing module, and a decoding module. The encoding module contains pre-trained encoders $\varphi$ and $\phi_{down}$, used to extract text and image features respectively. We adopt the procedure in [21] to pre-train the text encoder $\varphi$ and use the parameters of the conv1-4 layers of VGG16 as the image feature extractor $\phi_{down}$. The text and image features are then fed into the fusing module, which can be seen as a conditioning layer that reconciles the semantics of the two modalities. The final decoding module $\phi_{up}$ upsamples the fused feature into a high-resolution image. Finally, the discriminator is a classifier that takes the generated image and the text embedding as input and outputs the probability that the description matches the image.

Formally, given an original image-text pair <x, t>, t is the text matching the image x. Suppose we use a description text $\hat{t}$ to manipulate the image x; typically $\hat{t}$ is a text relevant to x. The generator transforms the image according to the text embedding $\varphi(\hat{t})$ and outputs

$$G(x, \varphi(\hat{t})) = \phi_{up}\!\left(\mathcal{F}\!\left(\phi_{down}(x), \varphi(\hat{t})\right)\right).$$

The discriminator D is trained to distinguish semantically mismatched image-text pairs. To this end, we take a mismatching text $\bar{t}$ as a negative sample.
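As a concrete illustration of the low-rank factorization and the shortcut, here is a minimal PyTorch sketch of a BRL-style layer, assuming spatial image features and a single text embedding broadcast over locations. The channel sizes, the use of 1x1 convolutions for U, V, and P, and the placement of the shortcut are illustrative assumptions rather than the released implementation:

import torch
import torch.nn as nn

class BilinearResidualLayer(nn.Module):
    """Sketch of a low-rank bilinear residual layer.

    Computes ((I_f U) o (I_c V)) P^T at every spatial location and adds a
    shortcut so the layer can fall back to an identity mapping.
    """
    def __init__(self, img_ch=512, txt_dim=128, rank=64):
        super().__init__()
        # 1x1 convolutions play the role of U and P on spatial feature maps.
        self.U = nn.Conv2d(img_ch, rank, kernel_size=1, bias=False)
        self.V = nn.Linear(txt_dim, rank, bias=False)
        self.P = nn.Conv2d(rank, img_ch, kernel_size=1, bias=False)

    def forward(self, feat, txt):
        # feat: (B, C, H, W) image features; txt: (B, txt_dim) text embedding.
        u = self.U(feat)                             # (B, d, H, W)
        v = self.V(txt).unsqueeze(-1).unsqueeze(-1)  # (B, d, 1, 1), broadcast
        bilinear = self.P(u * v)                     # Hadamard product, project
        return feat + bilinear                       # residual shortcut

layer = BilinearResidualLayer()
out = layer(torch.randn(2, 512, 16, 16), torch.randn(2, 128))
print(out.shape)  # torch.Size([2, 512, 16, 16])

The Hadamard product of the two rank-d projections is what keeps the parameter count linear in d instead of the cubic D x D' x O of the full bilinear form.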
The original pair <x, t>, the current editing pair <x, t̂>, and the negative pair <x, t̄> are fed into the discriminator D to minimize

$$\mathcal{L}_D = -\,\mathbb{E}_{(x,t)}\log D(x, \varphi(t)) - \mathbb{E}_{(x,\bar{t})}\log\!\left(1 - D(x, \varphi(\bar{t}))\right) - \mathbb{E}_{(x,\hat{t})}\log\!\left(1 - D\!\left(G(x, \varphi(\hat{t})), \varphi(\hat{t})\right)\right), \tag{9}$$

where the objective of the first and second terms is to classify original and negative real-world image-text pairs. The third term pushes D to identify the synthesized image with its editing text as mismatching. Alternately with the training of D, the generator G is trained to generate images that are more semantically consistent with the editing text t̂:

$$\mathcal{L}_G = -\,\mathbb{E}_{(x,\hat{t})}\log D\!\left(G(x, \varphi(\hat{t})), \varphi(\hat{t})\right). \tag{10}$$

In this work, t̄ and t̂ are selected from the text descriptions of other images in the dataset.

EXPERIMENTS

We conduct experiments on the Caltech-200 bird dataset [9], the Oxford-102 flower dataset [10], and the Fashion Synthesis dataset [2]. The bird dataset has 11,788 images from 200 classes of birds; we split it into 160 training classes and 40 testing classes. The flower dataset has 8,189 images from 102 classes of flowers; we split it into 82 training classes and 20 testing classes. The fashion dataset has far more classes, with 78,979 images in total; we choose 3,200 of its 4,119 classes for training and the rest for testing.

Implementation details

The source code has been released.† Our encoder $\varphi$ for text descriptions is a recurrent network. Given an image-text pair <x, t>, the method in [21] is used to pre-train the text encoder by minimizing a pair-wise ranking loss. This pre-trained text encoder encodes the description t into a visual-semantic text representation $\varphi(t)$, which is then used in the adversarial training process detailed in Sec 2.3. The image encoder receives images of size 64x64 as input and outputs features of dimension 16x16x512. The text encoder encodes descriptions into text embeddings of dimensionality 128. Our fusing module consists of four (i.e., N in Fig. 2) bilinear residual layers. To implement the low-rank bilinear method, we duplicate the text embeddings to dimension 16x16x128 so as to match the spatial size of the image features. The dimensions of both text and image features are then reduced to d (cf. Section 2.2) using 1x1 convolutions. The decoding module consists of several upsampling layers that transform the learned representations into 64x64 images. For the discriminator, we first apply convolutional layers to encode the images into feature representations. We then concatenate the image representation with the text embeddings and apply two convolutional layers to produce the final probabilities. Note that we use concatenation for conditioning in the discriminator to limit its capacity and prevent mode collapse.

† https://github.com/vtddggg/BilinearGAN_for_LBIE

To train the generator and discriminator, we adopt the Adam optimizer with a momentum term of 0.5 and a learning rate of 0.0002. We set the batch size to 64 for all three experiments and the number of training epochs to 600 for bird and flower synthesis and 200 for fashion synthesis. The parameters of the VGG part were fixed while training the generator. Training takes about one day to converge on a single Tesla P100 GPU.

Qualitative comparison

We compare our proposed model with the baseline [5] (i.e., concatenation) and FiLM on the three datasets. The results are shown in Fig 3 (example descriptions from the figure: "The lady was wearing a blue short-sleeved blouse."; "This little bird is mostly white with a black superciliary and primary."; "This flower has petals that are yellow at the edges and spotted orange near the center.").
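For clarity, here is a minimal sketch of the alternating objectives in Eqs. (9) and (10), assuming a discriminator that outputs match probabilities in (0, 1); the function names and the use of binary cross-entropy are illustrative assumptions rather than the released training code:

import torch
import torch.nn.functional as F

def discriminator_loss(D, G, x, t_emb, t_edit_emb, t_neg_emb):
    """Eq. (9): real matching pair vs. mismatched pair vs. edited fake."""
    real = D(x, t_emb)                               # original pair <x, t>
    neg = D(x, t_neg_emb)                            # negative pair <x, t_bar>
    fake = D(G(x, t_edit_emb).detach(), t_edit_emb)  # editing pair, no G grads
    ones, zeros = torch.ones_like(real), torch.zeros_like(real)
    return (F.binary_cross_entropy(real, ones)
            + F.binary_cross_entropy(neg, zeros)
            + F.binary_cross_entropy(fake, zeros))

def generator_loss(D, G, x, t_edit_emb):
    """Eq. (10): make the edited image match its editing text for D."""
    fake = D(G(x, t_edit_emb), t_edit_emb)
    return F.binary_cross_entropy(fake, torch.ones_like(fake))

Detaching the generated image in the discriminator step is the usual way to alternate the two updates: D sees the fake as a fixed input, while G receives gradients only through its own loss.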
The baseline method fails to transform the detailed attributes described in the text because its learned representations are not powerful enough to capture fine-grained information. For example, the images generated by the baseline when editing flowers show that the model has learned the colors yellow and orange but is unaware of where these colors belong. Moreover, when the original image has a complex background (e.g., the 3rd and 4th samples in the first row), the baseline falls into mode collapse and outputs the same meaningless image. In contrast, our method captures the specific semantic changes in detail, which is attributable to our richer bilinear representations: it correctly disentangles semantically related objects from cluttered images and prevents mode collapse. As a consequence, our approach successfully generates meaningful images that conform to the text description.

Quantitative comparison

We choose the inception score (IS) for quantitative evaluation. The inception score is a well-known metric for evaluating GANs and is computed as

$$IS = \exp\!\left( \mathbb{E}_{x}\, D_{KL}\!\left( p(y \mid x)\, \|\, p(y) \right) \right),$$

where x denotes one generated sample and y is the label predicted by the Inception model; models that generate diverse yet clearly recognizable images obtain a higher IS (a minimal computation sketch is given after the conclusion). The results are shown in Table 1. To explore the influence of the rank constraint d, we set d = 2, 64, 256, yielding three variants: Bil-R2, Bil-R64, and Bil-R256. Bil-R256 obtains the highest IS on all three tasks. Interestingly, the baseline method has a higher IS than FiLM on the Oxford flower dataset, because flower editing is simple and not very dependent on the power of the learned representation. For the more complicated bird and fashion editing tasks, our method obtains the highest IS and its performance improves as d increases. The experimental results suggest that the learned bilinear representation is more powerful and does help to generate images of higher quality.

CONCLUSION

In this work, we propose a conditional GAN based encoder-decoder architecture to semantically manipulate images using text descriptions. A general conditioning layer called the Bilinear Residual Layer (BRL) is proposed to learn more powerful bilinear representations for LBIE; BRL is also applicable to other common conditional tasks. Our evaluation results on the Caltech-200 bird, Oxford-102 flower, and Fashion Synthesis datasets achieve plausible effects and outperform the state-of-the-art methods on LBIE.
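As referenced in the quantitative comparison, the IS formula can be computed from per-sample class probabilities as in this minimal NumPy sketch, assuming the Inception softmax outputs are already available:

import numpy as np

def inception_score(probs, eps=1e-12):
    """IS = exp(E_x KL(p(y|x) || p(y))) from an (N, num_classes) array of
    per-sample class probabilities (e.g., Inception softmax outputs)."""
    p_y = probs.mean(axis=0, keepdims=True)      # marginal label distribution p(y)
    kl = probs * (np.log(probs + eps) - np.log(p_y + eps))
    return float(np.exp(kl.sum(axis=1).mean()))  # exp of the mean per-sample KL

# Toy check: identical, uniform predictions give the minimum score of 1.0.
uniform = np.full((10, 5), 0.2)
print(inception_score(uniform))  # ~1.0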
2019-03-18T15:13:55.000Z
2019-03-18T00:00:00.000
{ "year": 2019, "sha1": "a42483529d8ef2063b99bc71c0b3d865e7734cf9", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1903.07499", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "ad7023189d090fc092aef8a9419c9a9be09ace71", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
231879222
pes2o/s2orc
v3-fos-license
Influenza swine flu virus: A candidate for the next pandemic?

A rising number of pneumonia cases had been reported in Wuhan, China, since December 2019. Upon further investigation, it was discovered that the cause of these cases was a novel coronavirus strain, severe acute respiratory syndrome coronavirus-2 (SARS-CoV-2). Shortly after, the World Health Organization (WHO) declared it a public health emergency of international concern (PHEIC) in January 2020 [1], and it was consequently declared a pandemic two months later, affecting millions of lives worldwide. As of July 2, 2020, approximately 11 million people had been infected with this disease, with over half a million having succumbed to it. Coronavirus disease-19 (COVID-19) is a respiratory illness that mainly spreads through airborne droplets via coughing or sneezing [2]. Disease severity ranges from asymptomatic infection to life-threatening complications such as myocarditis, acute cerebrovascular disease, deep venous thrombosis, ischemic stroke, and pulmonary embolism [3]. It is due to these characteristics, in addition to the unavailability of a vaccine and the rapid human-to-human transmission rate, that traditional measures of combatting the virus were and are still being employed [4].

While research scientists are still devising an appropriate treatment for SARS-CoV-2 disease, they are now concerned about another possible infectious outbreak with influenza pandemic potential that may further burden the already struggling health care systems globally. Very recently, on June 30, 2020, a new strain of influenza swine flu virus, named genotype 4 (G4) reassortant Eurasian avian-like (EA) H1N1 (G4 EA H1N1) virus, was discovered by researchers in China. A recently published study in the Proceedings of the National Academy of Sciences was based on influenza virus surveillance of pigs across 10 Chinese provinces between 2011 and 2018.
The findings of this study revealed that 10.4% of swine workers and 4.4% of the general population exposed to infected pigs tested positive for antibodies to the G4 EA H1N1 virus, and a higher seropositive rate of 20.5% was observed among those aged 18-35 years during the last three years of the study, indicating that the virus has acquired increased human infectivity. The researchers concluded that further mutations in the G4 virus may enhance its adaptation to humans, and that its widespread circulation in pig farms may facilitate human exposure and hence human-to-human transmission leading to a pandemic. Further research indicated that this G4 strain contains genetic material from the 2009 H1N1 strain, responsible for causing the pandemic that year, in addition to triple-reassortant (TR) derived internal genes. These viruses bind with high affinity to the human-like SAα2,6Gal receptors, a prerequisite for infecting human cells. Additionally, G4 EA reassortant viruses replicate efficiently and produce much higher progeny titers in human airway epithelial cells, including normal human bronchial epithelial (NHBE) cells and alveolar epithelial (A549) cells, the primary target cells in human influenza virus infection. Finally, these viruses showed increased replication and pathogenicity in ferrets, suggesting severe infection, coupled with high transmission among ferrets via both direct contact (DC) and respiratory droplets (RD), exhibiting their capability to readily infect humans [5].

Scientists strongly recommend that the virus be quickly controlled within pigs and that the human population, particularly workers in the swine industry, be kept under surveillance [5]. Measures to contain this emerging virus must be taken immediately, as it is believed that people may have little to no immunity to it and the current influenza vaccine does not offer protection against this strain. Furthermore, as most countries have started opening up for business despite the ongoing coronavirus pandemic, this new virus could also spread swiftly. Health authorities have already been battling an overwhelming number of SARS-CoV-2 cases, which has overburdened the medical system worldwide. Health care authorities and pharmaceutical companies are engaged in several collaborative and accelerated efforts to devise an efficient vaccine and management options to curtail the COVID-19 outbreak and combat the disease. Additionally, health care workers are already overwhelmed, as they carry the highest risk of infection due to close contact with COVID-19 patients, thus straining health care systems [6]. Moreover, the lockdown and disruption of normal activities, along with the uncertainty, have placed considerable psychological stress on the general population, notably anxiety and depression [7]. Another pandemic would lead to a further spike in mental health disorders, which can in turn lead to adverse outcomes such as self-harm or suicide. Likewise, the global economy is suffering immensely as a result of the coronavirus pandemic: the International Monetary Fund (IMF) predicts that it will shrink by 3% this year, the worst contraction since the Great Depression of the 1930s [8]. Therefore, another pandemic would have a substantial impact on the global economy, with potentially disastrous consequences such as famine, drought, poverty, and even war. In conclusion, even amid the COVID-19 pandemic, scientists need to stay vigilant.
Now the goal is to ensure that the G4 EA H1N1 virus does not infect humans. A contingency plan must be formed and put in place in hospitals so as not to collapse an already overwhelmed health care system. Most importantly, the situation should be closely followed so that adequate measures, such as adapting the flu vaccine to the virus, can be taken immediately. World leaders must also work in conjunction with scientists and health care workers in taking effective measures to not just contain but also combat any potential viruses. Furthermore, the media industry must work towards providing authentic information verified by credible sources to create awareness and avoid public hysteria. Perhaps the lessons learned from the COVID-19 pandemic can be applied here by containing the virus in its initial stages to prevent losing more lives.
2021-02-10T22:53:23.281Z
2021-01-16T00:00:00.000
{ "year": 2021, "sha1": "45496478718bdd44cb3e04937d1ba8024c8e6a87", "oa_license": "CCBY", "oa_url": "https://doi.org/10.7189/jogh.11.03011", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "45496478718bdd44cb3e04937d1ba8024c8e6a87", "s2fieldsofstudy": [ "Medicine", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
262917687
pes2o/s2orc
v3-fos-license
Journal of Social and Political Sciences

Every individual, and ultimately every group, tries to identify himself or herself as distinct from and superior to others through certain symbols. Human beings use this identification process as a tool for psychological or biological advantage, in other words, for survival benefit. In this way, identity becomes a means of benefit. There are many types of identity symbols, and cultural identity is one of them. Sharing the same identity symbols creates a distinct culture area. The culture area is a geographical concept of culture. This article examines identity-making politics in modern Nepal based on different cultural elements, particularly caste and ethnicity. The article therefore proposes the concept of 'caste and ethnic areas,' as distinct from the culture areas widely used in anthropological literature. Data were collected from primary and secondary documents and from observations of Nepalese politics over a long period. A retrospective research design was applied in this study.

Background

It is taken for granted that caste and ethnicity are components of culture on the one hand, and that every caste and ethnic group may have similar kinds of culture on the other. However, a particular ethnic identity may subsume many different cultural identities (Guneratne, 2002). Again, ethnicity itself symbolizes geographical and cultural groups. An ethnic group or nationality is a social group with its own mother tongue, native area, and religious tradition (Gurung, 2003). Here, we propose the terminology 'caste and ethnic area' rather than 'culture area' in the context of Nepal, though we also draw on 'culture area' because it is well established in geography and anthropology. It is said that there is a political interest within cultural consciousness (Tamang, 2062). The purpose of this article is to describe the identity-making politics of Nepal based on various caste and ethnic areas. Though various caste/ethnic groups have diffused into different parts of Nepal, they remain concentrated in their traditional areas (Gurung, 2003). Caste and ethnic areas have been described on the basis of shared cultural elements such as religion, language, and historical habitation.

Methodology

This study was based on a retrospective design. The research drew on data collected from secondary materials and from observations of Nepalese politics over a long period. Personal experience was another method of data collection in this study. The article does not describe in full detail every ethnic area or the form and content of the various autonomous culture-based states proposed by different political and ethnic organizations; rather, it offers a bird's-eye view of the matter.

Conceptual Clarity

The term 'culture area' was probably first used by O. T. Mason (McGee and Warms, 2013). A culture area is a geographical unit of culture (Kroeber, 1939). A norm or standard form of tribal culture readily distinguishable from others is called a type of culture that has its own geography, and the segregation of cultures of the same type will form a geographical area characterized by that type (Wissler, 1965). The concepts of culture region, cultural region, culture area, cultural area, and culture sphere are used by different scholars in anthropology and geography with similar meanings. However, the concept of the culture area originated with museum curators and ethnologists during the late 1800s as a means of arranging exhibits.
It was the classification of museum collections along natural geographical lines instead of evolutionarily schematic ones, according to Boas, as quoted by Kroeber (1939). A variety of features characterize our planet, both physical (for example, climates, landforms, and natural vegetation) and human (for example, cities, towns, customs, religion, agriculture, transportation systems, and industries), the latter constituting culture. A culture region is a portion of the Earth's surface that has common cultural elements; it is the place where certain cultural traits or cultural communities are located. There are many cultures on Earth, and each contributes to global diversity and to culture regions. Cultural geography is the study of these cultural differences that characterize people and land. A culture region is identified on the basis of one or more cultural elements such as religion, language, subsistence system, and political and social organization. Every culture region may have some kind of cultural landscape. The cultural landscape consists of the material aspects of culture that characterize the Earth's surface, including buildings, shrines, signage, sports and recreational facilities, economic and agricultural structures, crops and agricultural fields, transportation systems, and other physical things.

Anthropologists also use the terminology 'culture area' with a meaning similar to 'culture region.' The term is used to describe areas within which the ways of life of the residents are relatively distinctive and homogeneous (Berreman, 1963). The concept of the culture area is a means to an end, and the end may be either an understanding of culture process or of the historic events of culture (Kroeber, 1939). The anthropological concept of kulturkreis, however, is not synonymous with culture region. A culture area is an area or region encompassing a group of cultures, usually contiguous, which share a set of traits that distinguish them from the cultures in other such areas, and also the group of cultures within such an area (Weiss, 1973). A culture area is defined as a more or less contiguous ethnographic area inhabited by peoples who share cultural traits to an extent that distinguishes them from other societies (McGee & Warms, 2013). Nevertheless, every cultural region may contain a certain diversity. Particular names can be coined for a region based on its dominant cultural elements. A culture region is not primordial but historical; for various reasons it may change, disappear, expand, or contract. Identifying and mapping culture regions shows us the particular geographical areas where particular cultural traits or cultural communities are located. Human beings must make survival strategies suited to the environment and terrain in which they live. People of different regions may face different problems and prospects and may have different knowledge, perspectives, and experiences for tackling them. For this reason, cultures may differ, and ultimately culture regions as well.

Berreman (1963) described the different culture areas of India, Burma, Nepal, and Pakistan based on cultural tradition and culture type (similar culture rather than a real continuity) and broadly categorized them into four cultural traditions within two culture types, as follows:

Culture type 1. Aryan or Indo-Iranian: a. Indian or South Asian tradition; b. Afghan-Iranian or Southwest Asian tradition.
Culture type 2. Tibeto-Burman: a. Tibetan tradition; b. Southeast Asian or Burman tradition.
Nepalese Context

Geographically, Nepal is divided into several segments by mountains, rivers, and terrain. Geographical diversity parallels ecological diversity in terms of climate, physical features, landscape, and altitude. Historically, different caste and ethnic groups have come into Nepal and settled here. Politically, for various reasons including geographical ones, Nepal was divided into different petty states over its long history. Though Nepal is an isolated land, cut off with sharp finality from both north and south, it has always been a 'melting pot' for peoples of both the north and the south. These historical, political, and geographical facts, which are not mutually exclusive, create different culture regions within the country and influence the present identity politics of Nepal.

Nepalese culture regions were broadly categorized by Berreman (1963) according to the country's three geographical elevations. South Asian culture (Indo-Aryan language, Hinduism, and settled agriculture), continuous with the plains of North India, is found among the people of the Terai, except the Tharu. People of the western part of the low and middle Himalayas practice South Asian culture, while those of the eastern part practice a combined culture with Tibetan, Indian, and aboriginal characteristics that may be continuous with Southeast Asian hill cultures. Tibetan culture (Tibetan language, Lamaistic Buddhism, and a combination of pastoralism and settled agriculture), found among the Himalayan people, is likewise continuous with Tibet. Table 1 also broadly categorizes the caste/ethnic regions of Nepal.

The national population census of 2011 likewise showed that Nepal is a multiethnic and multilingual state. Among slightly more than 26 million Nepali people, there are more than a hundred ethnic and lingual groups. In the words of Stiller (1975, p. 13), "This area was always an area where the Mongolian people from the north and the Indo-Aryan people from the south met and mingled." Although different ethnic groups have their own traditional homelands, owing to various opportunities and challenges people within the country have also migrated and intermingled. Any country may fit into many different culture regions. Nepal has been called the 'ethnic turntable of Asia' (Hagan, 1971). More than 123 caste/ethnic groups live within the small territory of Nepal. Different censuses and studies show that these groups include not only caste and ethnic groups but also language groups (for example, Bengali) and religious groups (Churaute, Muslim, and Sikh). The ethnic label is applied either by outsiders or the state (e.g. Tamang) or by the people themselves (e.g. Magar) in Nepal (Guneratne, 2002). Under the Tharu ethnic identity, many cultural groups live in what they claim as Tharuwan. Because a given caste/ethnic group is traditionally concentrated in a certain area, it is customary to call such a place the land of that particular caste or ethnic group, for example, Kirant Pradesh for the hill region east of the Sunkoshi river, Magarat for the land between the Karnali and Gandaki rivers, Khasan for the region west of the Karnali river, Bhot for the high Himalayan region, and Tharuwan for the whole Terai (Bista, 2001). Shrestha (1981) demarcates the habitation of different caste and ethnic groups into three layers: the core area, the middle area, and the peripheral zone. Kroeber (1963) holds that culture areas are mostly designated by geographical names, though they also denote particular cultures.
Nepalese cultural areas, however, are expressed by ethnic names such as Limbuwan, Khambuwan, and so on.

Identity Politics

The Nepali state not only organized people into different castes and other groups within the Varna framework but also tried to Hinduize them in different historical periods, and people likewise felt proud to be Hindus. The present identity politics of Nepal, however, based on caste and ethnic areas defined by history, language, and traditional habitation, is a cultural revival that partially culminated in the establishment of the federal state. The concept of caste/ethnicity is related to cultural change inspired by politics (Gurung, 2066). There are both positive and negative aspects to political mobilizations based on ethnicity (Sah, 2013). According to Lawoti (2007), between 1770 and 1979 there were at least twenty-five ethnic and regional mobilizations against the state, most of which occurred among ethnic Limbus and Rais in the eastern hills. After the establishment of democracy in 1950, the first regionalist movement was launched by the Nepal Terai Congress, demanding an autonomous Terai state (Thapa, 2009). After the re-establishment of democracy in 1990, various ethnic organizations based on particular regions and ethnicities were established, and eight of them came under the single umbrella of the Nepal Federation of Nationalities. Owing to this cultural consciousness, people began to de-Hinduize after the re-establishment of democracy in 1990, and the state has also been accommodating the demands of different groups. In this way, Nepal today is in the process of de-Hinduizing the kingdom, and thus the 'rules of the game' are changing (Skar, 1995).

The main demand of regional organizations after the re-establishment of democracy in 1990 was an autonomous state based on ethnicity in their respective geographical areas. But the claims of different organizations to separate states sometimes overlapped. The Madhesh uprising, a 21-day mass movement joined by large sections of the Madheshi population, was an unprecedented event parallel to Janandolan II of April 2006. It was a landmark in bringing region-based ethnonationalism forward as one of the prominent issues in the national discourse on restructuring the Nepali state. The State Restructuring Commission was formed on 14 July 2010 to provide suggestions regarding the federal division of Nepal (BBC News/Nepali, 2010). However, the members of the commission could not reach a consensus on the federal division of the country. Among the nine members, six, including the president, suggested 11 provinces. Seven of these would be based on ethnicity: Kirat, Magarat, Tamsaling, Newa, Tamuwan, Limbuwan, and Tharuwan. Three provinces would be based on geography: Karnali-Khaptad, Mithila-Bhojpura-Koch-Madhesh, and Lumbini-Avadh-Tharuwan. The last would be a non-geographical province for Dalits. The remaining three members of the commission submitted a separate report suggesting six provinces, based on strength and viability and demarcated by rivers (BBC News/Nepali, 2012).

The Nepal Communist Party (Maoist) started its armed revolt, the 'people's war,' in 1996. Although it was a class war, the Maoists raised the issues of caste and ethnicity in Nepal. The election of a Constituent Assembly (CA) was the Maoists' bottom line when they negotiated with the government.
They practiced the federal division of Nepal within their party organization during the wartime. Finally, the alliance of seven parties and the Maoists launched the people's movement of April 2006. Under the pressure of the movement, the direct rule of King Gyanendra ended. The election of the CA was held in 2008. More than 60 percent of the newly elected CA members were associated with left-oriented politics, and the assembly was truly inclusive because its members came from different social dimensions, breaking out of the mold of Nepal's socio-political culture of "institutionalized exclusion" (Manchanda, 2008). However, the first CA could not draft a constitution for the country. After nearly four years of political negotiation, it was dissolved in May 2012 before it could finalize the long-awaited constitution (Pokharel & Rana, 2013).

The Communist Party of Nepal (Maoist) sketched out a federal structure of thirteen provinces during the 2008 CA election, consisting of two regional and eight ethnic provinces, with the Madhes ethnic state further subdivided into three linguistic units (Thapa, 2009). They were Seti-Mahakali, Bheri-Karnali, Magrat, Tharuwan, Tammuwan, Tambasaling, Kirat, Limbuwan, Kochila, Newa, Abadh, Bhojpura, and Mithila. The renamed party, the United Communist Party of Nepal (Maoist) (2013, p. 11), proposed 11 provinces in its commitment paper (manifesto) for the second Constituent Assembly election, replacing Abadh, Bhojpura, and Mithila, as proposed in 2008, with a single Madhesh province; the rest remained the same as in 2008. The Communist Party of Nepal-Unified Marxist-Leninist (UML) came out most strongly against the recognition of identity as a basis of federalism. Another major party, the Nepali Congress, expressed no clear vision about either federal boundaries or their bases (Thapa, 2009). Finally, the Constitution of Nepal, promulgated by the constituent assembly in 2015, established a federal system of government and divided Nepal into seven provinces. The concept of a federal state based on cultural identity, however, did not materialize.

The main movement after the conflict in Nepal is related to the demand for a caste/ethnic-based federal state (Snaidarman, 2013). Different movements run by indigenous organizations gradually developed into movements for regional autonomy (Gurung, 2013). This kind of identity-based federal state became a highly controversial issue among other political parties and the public. The American continent can be divided into different areas, not only cultural areas but also natural areas and historical areas, in the sense that they are culturally, geographically, and historically uniform (Kroeber, 1939). Whatever the ethnic-based provincial states demanded by different organizations of Nepal, these are cultural/ethnic areas in the sense that culture and ethnicity are relatively uniform within each; historical areas in the sense that each is demarcated on the basis of a separate state that existed before the unification of Nepal; geographical areas in the sense that geography is relatively uniform within each; and traditional areas in the sense that some are the traditional homelands of certain ethnic groups. Sometimes, however, historical areas and geographical areas overlap, as in the Limbuwan culture areas: Limbuwan activists demand a Limbuwan state covering different geographical zones (Himalayan, hilly, and Terai) on a historical basis.
There is no inherent reason why peoples of one broad cultural tradition should comprise a political entity (Berreman, 1963).

Conclusion

The social composition of Nepal can be identified on the basis of geographic origin or homeland. Caste and ethnic activists have demanded different autonomous caste or ethnic states in different geographical regions, based on the historical habitation of similar caste and ethnic groups. Likewise, the demand for a separate Madhes Pradesh is based on both culture and geography, while the demand for Tharuhat, based on ethnicity, lies in the same region. It is true that some ethnic and cultural groups still concentrate in certain geographical areas, but their populations are not in the majority there.
2022-03-31T16:07:17.514Z
2022-03-30T00:00:00.000
{ "year": 2022, "sha1": "cdefa914fd5735765ef084d326f88a7015c3bf70", "oa_license": "CCBY", "oa_url": "https://zenodo.org/record/6388201/files/jsp0915%20aca.pdf", "oa_status": "GREEN", "pdf_src": "ScienceParsePlus", "pdf_hash": "8135109b9fee911b0554f429003db6e6132ab78f", "s2fieldsofstudy": [ "Sociology" ], "extfieldsofstudy": [] }