Dataset columns: id (string, 3-9 characters); source (1 class); version (1 class); text (string, 1.54k-298k characters); added (date, 1993-11-25 05:05:38 to 2024-09-20 15:30:25); created (date, 1-01-01 00:00:00 to 2024-07-31 00:00:00); metadata (dict).
234031904
pes2o/s2orc
v3-fos-license
Liver Congestion Assessed by Hepatic Vein Waveforms in Patients With Heart Failure Background It has been reported that the pattern of hepatic vein (HV) waveforms determined by abdominal ultrasonography is useful for the diagnosis of hepatic fibrosis in patients with chronic liver disease. We aim to clarify the clinical implications of HV waveform patterns in patients with heart failure (HF). Methods We measured HV waveforms in 350 HF patients, who were then classified into 3 categories based on their waveforms: those with a continuous pattern (C group); those whose V wave ran under the baseline (U group); and those with a reversed V wave (R group). We performed right-heart catheterization, and examined the rate of postdischarge cardiac events, such as cardiac death and rehospitalization due to worsening HF. Results The number of patients in each of the 3 HV waveform groups was as follows: C group, n = 158; U group, n = 152; and R group, n = 40. The levels of B-type natriuretic peptide (R vs C and U; 245.8 vs 111.7 and 216.6 pg/mL; P < 0.01) and mean right atrial pressure (10.5 vs 6.7 and 7.2 mm Hg; P < 0.01) were highest in the R group compared with the other groups. The Kaplan-Meier analysis found that cardiac event-free rates were lowest in the R group among all groups (log-rank P < 0.001). In the multivariable Cox proportional hazard analysis, the R group was found to be an independent predictor of cardiac events (hazard ratio, 4.90; 95% confidence interval, 2.23-10.74; P < 0.01). Conclusion Among HF patients, those with reversed V waves had higher right atrial pressure and were at higher risk of adverse prognosis. Introduction Systemic venous congestion causes multiple organ failure in patients with heart failure (HF). 1 HF-related liver damage or dysfunction assessed by liver function testing, such as total bilirubin, 2,3 γ-glutamyl transferase, 4,5 alkaline phosphatase, 3,6 and cholinesterase, 7 or by scores such as the Model for End-Stage Liver Disease excluding INR (MELD-XI) score, [8][9][10] the non-alcoholic fatty liver disease (NAFLD) fibrosis score, 11 and the Fibrosis-4 (FIB4) index, [12][13][14] is reportedly associated with prognosis. In addition, regarding liver imaging in HF patients, it has been reported that novel liver elastography determined by abdominal ultrasonography is an indicator of liver congestion due to increased right-sided filling pressure (ie, central venous pressure [CVP]). [15][16][17][18][19] However, measurement of liver elastography requires specific abdominal ultrasonographic equipment and is not easy for cardiologists. Meanwhile, hepatic vein (HV) waveforms have been reported to be useful in the diagnosis of liver fibrosis in patients with chronic liver disease. [20][21][22][23] With regard to intrarenal venous congestion, it has recently been reported that the Doppler intrarenal venous flow pattern is simply classified by the presence or absence of systolic and diastolic interruption and reflects CVP. 
[24][25][26] These interruptions were associated with an elevated A wave and an elevated V wave in intrarenal venous waveforms, and the elevated V wave was caused by increased CVP and right ventricular dysfunction. We hypothesized that (1) HV waveforms determined by standard abdominal ultrasonography or echocardiographic equipment can be measured more easily than liver elastography; (2) HV waveforms reflect CVP more sensitively than intrarenal waveforms, as the liver is located closer to the heart than the kidneys; and (3) HV waveforms reflect liver dysfunction. Thus, in this study, we aimed to identify liver congestion using HV waveforms determined by abdominal Doppler ultrasonography and to examine their prognostic significance in HF patients. Subjects and study protocol This was a prospective observational study. We encouraged patients and attending physicians to perform abdominal ultrasonography to evaluate liver disease or damage in a stable condition before hospital discharge. Of 645 decompensated HF patients who were hospitalized in Fukushima Medical University between April 2018 and March 2020, 388 underwent abdominal ultrasonography. The diagnosis of decompensated HF was made by each patient's attending cardiologist based on the established HF guidelines. [27][28][29] Blood samples, abdominal ultrasonography, and echocardiography were obtained at hospital discharge. The patient flow is described in Figure 1. The exclusion criteria included patients who were positive for hepatitis B surface antigen and/or hepatitis C antibody, those with obvious chronic liver diseases (eg, cirrhosis, liver tumors), those who were receiving maintenance dialysis, and those who were lacking or presented with poor HV waveforms. Finally, 350 patients were enrolled in the study. These patients were classified into 3 categories based on their HV waveforms: those with a continuous pattern (C group), those whose V wave ran under the baseline (U group), and those with a reversed V wave (R group). First, we compared the clinical features and the results from laboratory tests, echocardiography, and right-heart catheterization (RHC) among the 3 groups. Second, the patients were followed up until July 2020 for cardiac events as composites of cardiac death or unplanned rehospitalization for HF treatment. Cardiac death was defined as death from ventricular fibrillation, acute coronary syndromes, or worsening heart failure. For patients who experienced ≥ 2 events, only the first event was included in the analysis. Because the patients visited the hospital monthly or every other month, we were able to follow up with all patients. Status and dates of death were obtained from the patients' medical records. Those administering the survey were blind to the analyses, and written informed consent was obtained from all study subjects. The study protocol was approved by the Ethics Committee of Fukushima Medical University and was carried out in accordance with the principles outlined in the Declaration of Helsinki. Reporting of the study conforms to the Strengthening the Reporting of Observational Studies in Epidemiology guidelines and the Enhancing the Quality and Transparency of Health Research guidelines. 
Abdominal ultrasonography and HV waveforms All examinations were performed by experienced sonographers, using an Aplio i800 (Canon Medical Systems, Tokyo, Japan) with a 1.8-6.4 MHz convex transducer. HV was identified in reference to continuity with the inferior vena cava (IVC), direction of blood flow, and other identifiers. HV waveforms were obtained using a pulsed-wave Doppler device. The tests were undertaken by approaching the right HV 3-5 cm proximal to the IVC from the right intercostal space (Fig 2). In HF patients with atrial fibrillation, HV waveforms were measured during 5 relatively stable beats, and the C wave was used instead of the A wave. 20,25 Based on previous studies, 20-23 we focused on the shapes and positions of the V wave. We classified HV waveforms in accordance with the shape and position of the V wave into 3 patterns (Fig 2) and divided the total 350 HF patients into 3 groups: those in whom the flow pattern was continuous or the V wave was ambiguous (C group), those in whom the V wave ran under the baseline (U group), and those who had a reversed V wave (R group). The inter- and intraobserver variability of the HV waveform classification was assessed using Cohen's kappa method. Figure 1. Patient flow: a total of 388 patients who were hospitalized in Fukushima Medical University Hospital for decompensated heart failure underwent abdominal ultrasonography between April 2018 and March 2020. Exclusion criteria (n = 38): hepatitis B surface antigen (n = 13), hepatitis C antibodies (n = 4), cirrhosis (n = 5), hepatic tumor (n = 2), receiving maintenance dialysis (n = 10), lacking or poor hepatic venous waveform (n = 10). A total of 350 patients were finally enrolled; right-heart catheterization was partly performed (n = 220). Echocardiography Echocardiography was performed blindly by experienced echocardiographers using standard techniques. 30 Echocardiographic parameters such as left ventricular ejection fraction, right atrium area, right ventricular area, IVC diameter, tricuspid regurgitation pressure gradient, and tricuspid annular plane systolic excursion were measured. Left ventricular ejection fraction was calculated using Simpson's method. 
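The inter- and intraobserver agreement analysis mentioned in the HV waveform subsection above (Cohen's kappa for the 3-pattern classification) can be reproduced with standard tooling. Below is a minimal Python sketch, assuming two hypothetical arrays of categorical readings for the same patients (two observers for interobserver agreement, or two sessions of one observer for intraobserver reproducibility); the study reports kappa values of 0.85 and 0.92 but does not state which software was used.

```python
# Minimal sketch: Cohen's kappa for agreement on the 3-pattern HV waveform
# classification (C / U / R). The reading arrays below are hypothetical
# placeholders; in practice they would come from two independent observers
# (interobserver) or two reading sessions of the same observer (intraobserver).
from sklearn.metrics import cohen_kappa_score

observer_1 = ["C", "U", "R", "U", "C", "C", "R", "U", "C", "U"]
observer_2 = ["C", "U", "R", "C", "C", "C", "R", "U", "C", "U"]

kappa = cohen_kappa_score(observer_1, observer_2)
print(f"Cohen's kappa: {kappa:.2f}")  # values near 0.8-0.9 indicate strong agreement
```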
RHC RHC was partly performed in 220 patients based on the clinical judgment of the attending physician (eg, hemodynamic assessment in conjunction with an evaluation of coronary artery disease, valvular disease, myocardial disease, or arrhythmia) within 3 days of abdominal ultrasonography. RHC was performed with the patients in a stable condition, in a resting supine position under fluoroscopic guidance, at room temperature, using a 7F Swan-Ganz catheter (Edwards Lifesciences, Irvine, CA). 11,19 Cardiac output was calculated based on the thermodilution method. Statistical methods Normally distributed data are presented as mean ± standard deviation, and nonnormally distributed data are presented as median (25th percentile, 75th percentile). The characteristics of the 3 groups were compared using analysis of variance, Kruskal-Wallis tests, and χ2 tests, depending on the type and distribution of the data. Kaplan-Meier analysis with a log-rank test was used to assess cardiac event rates. Cox proportional hazard analyses were used to evaluate HV waveforms as predictors of cardiac events, adjusted for general confounding factors in HF patients (ie, age, sex, hemoglobin, creatinine, B-type natriuretic peptide, left ventricular ejection fraction, IVC diameter, and tricuspid regurgitation pressure gradient). A P value of < 0.05 was considered statistically significant for all comparisons. These analyses were performed using a statistical software package (SPSS version 27.0, IBM, Armonk, NY). Results The number of patients in each of the 3 HV waveform groups was as follows: C group, n = 158; U group, n = 152; and R group, n = 40. Regarding assessment of HV waveforms, the kappa value of interobserver variability was 0.85 and that of intraobserver reproducibility was 0.92. The comparisons of patient characteristics among the 3 groups are summarized in Table 1. There were no significant differences in age, sex, body mass index, HF etiology, or comorbidities among the 3 groups. Regarding laboratory data, there were no significant differences in the levels of liver function testing, including aspartate aminotransferase, alanine aminotransferase, alkaline phosphatase, gamma-glutamyl transferase, and cholinesterase, except for total bilirubin (0.9 vs 0.7 and 0.8 mg/dL; P = 0.03). In contrast, levels of B-type natriuretic peptide were highest in the R group (245.8 vs 111.7 and 216.6 pg/mL; P < 0.01). With respect to the echocardiographic parameters, all were comparable among the groups except for IVC diameter (18.9 vs 14.3 and 16.5 mm; P < 0.01). With regard to RHC, mean right atrial pressure was highest in the R group (10.5 vs 6.7 and 7.2 mm Hg; P < 0.01). During the follow-up period of a median of 304 days (range, 6-824 days), 50 cardiac events occurred, including 45 rehospitalizations due to worsening heart failure and 5 cardiac deaths. The Kaplan-Meier analysis (Fig 3) showed that the cardiac event-free rate was lowest in the R group among the groups (log-rank P < 0.001). In addition, cardiac event-free rates were lowest in the R group among all groups, regardless of the presence or absence of atrial fibrillation (Fig 4; log-rank P < 0.001 for both). In the multivariable Cox proportional hazard analysis, the presence of a reversed V wave was found to be an independent predictor of cardiac events (Table 2; hazard ratio, 4.90; 95% confidence interval, 2.23-10.74; P < 0.01). 
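The survival portion of the Statistical methods subsection above (Kaplan-Meier curves with a log-rank test and a covariate-adjusted Cox model) was run in SPSS; for readers who want to reproduce an analogous analysis in Python, a minimal sketch using the lifelines package is shown below. The dataframe file name and column names (follow-up time, event flag, HV waveform group, and the listed confounders) are hypothetical illustrations, not the study's actual variables.

```python
# Minimal sketch of the described survival analysis using lifelines.
# The CSV file and all column names are assumed / illustrative.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import multivariate_logrank_test

df = pd.read_csv("hv_waveform_cohort.csv")  # hypothetical file, one row per patient

# Kaplan-Meier curves per HV waveform group (C / U / R) with a log-rank test
kmf = KaplanMeierFitter()
for group, sub in df.groupby("hv_group"):
    kmf.fit(sub["followup_days"], event_observed=sub["cardiac_event"], label=group)
    kmf.plot_survival_function()
logrank = multivariate_logrank_test(df["followup_days"], df["hv_group"], df["cardiac_event"])
print("log-rank P =", logrank.p_value)

# Cox proportional hazards model adjusted for the confounders listed in the text
covariates = ["age", "sex", "hemoglobin", "creatinine", "bnp", "lvef", "ivc_diameter", "trpg"]
cox_df = pd.get_dummies(df[["followup_days", "cardiac_event", "hv_group"] + covariates],
                        columns=["hv_group"], drop_first=True)  # C group as the reference
cph = CoxPHFitter()
cph.fit(cox_df, duration_col="followup_days", event_col="cardiac_event")
cph.print_summary()  # hazard ratios with 95% CIs, analogous to Table 2
```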
Discussion This study is the first to report that HV waveforms with the reversed V wave pattern (R group) are associated with higher levels of B-type natriuretic peptide and increased CVP (higher mean right atrial pressure and larger IVC diameter), rather than with liver dysfunction, and with higher cardiac event rates in HF patients. HV waveforms are reported to be useful for the diagnosis of liver fibrosis in patients with chronic liver disease. [20][21][22][23] Although several classifications of HV waveforms have been reported, there is no established classification with universally accepted evidence, [20][21][22][23] and their association with hemodynamics in HF patients has, to date, not been examined. A normal HV waveform is a triphasic waveform consisting of 4 waves: a retrograde A wave, an antegrade S wave, a transitional V wave, and an antegrade D wave. 20,23,31 The A wave is caused by an increase in right atrial pressure, which itself is caused by the atrial contraction that occurs at the end of diastole. 20 The S wave reflects a decrease in right atrial pressure caused by suction, that is, inflow into the right atrium as the atrioventricular septum descends during early- to mid-systole. 20 The V wave represents an increase in right atrial pressure that occurs during continued systemic venous return against the closed tricuspid valve. At the opening of the tricuspid valve and the transition from systole to diastole, the wave peaks and shifts to the D wave. 20 The lowest point of the D wave is the maximum diastolic flow velocity. The V wave corresponds to atrial overfilling. 20 Blood flow toward the heart appears below the baseline, and reversed blood flow from the heart toward the liver appears above the baseline. After end-systole, as the ventricular contraction intensity decreases and the closed tricuspid valve begins to return to its original resting position, the atrium fills, blood flow velocity (from the liver to the heart) decreases, and a temporary equilibrium is reached, forming the V wave. 20 Therefore, greater congestion and right atrial volume overload at the end of systole lead to deeper reversed V waves. The S wave is smaller in patients with high CVP and right ventricular pressure overload, and if the A, S, and V waves are all retrograde, they may fuse into a single retrograde wave and become a biphasic waveform, alternating with the D wave. 20,21 In patients with severe tricuspid regurgitation, systolic reverse flow in the HV is sometimes observed, depending on right ventricular function and right atrial compliance. 20,21,32 Based on these reports, in this study we divided the HV waveforms into 3 categories according to the position of the V wave above or below the baseline as a new classification. Similar to previous reports regarding renal congestion, [24][25][26] Doppler HV waveforms in this study were found to be associated with CVP. Moreover, measurement of HV waveforms appeared superior to that of the intrarenal venous flow pattern, in terms of ease of measurement and accuracy of CVP estimation, because of the liver's closer anatomic proximity to the heart. In addition, contrary to our expectations, Doppler HV waveforms were not associated with liver function testing. However, this study's relatively small sample size may have affected the statistical significance. With regard to HV waveforms, it has been suggested that their significance differs depending on the disease. 20,33 In patients with chronic liver disease, HV waveforms are useful for the estimation of liver fibrosis. 
[20][21][22][23] In patients with liver cirrhosis, the continuous HV waveform (namely, the C group in this study) is mainly caused by intrahepatic fat deposition, inflammatory or fibrotic changes, and changes in the compliance of the venous wall, suggesting the presence of severe liver fibrosis. [34][35][36] In postoperative Fontan patients with right-sided HF, liver fibrosis is caused by long-lasting liver congestion, and the HV waveform tends to be a monophasic and continuous waveform (C group in this study) with increased CVP. 22,37,38 Contrary to the above-mentioned results, 22,37,38 HF patients with increased CVP in this study presented with a reversed V wave pattern (R group). Although we could not fully explain the reason for this discrepancy, the lack of distinct liver disease or severe liver fibrosis might have had an effect. Therefore, the results of this study are in contrast to those of previous studies 22,37,38 on HV waveform classification in patients with liver cirrhosis. In addition, it was reported recently that the ratio of the S and D wave amplitudes of HV waveforms is useful for the diagnosis of cardiac disease (eg, right-sided HF, tricuspid regurgitation, pulmonary hypertension). 20,21 The S wave becomes smaller, or retrograde waves mix with the A and V waves (similar to the R group in this study), in patients with high CVP and right ventricular pressure overload. 21,39 However, it may be difficult to define the ratio of the amplitudes of the S and D waves in all cases. In the case of antegrade S waves, the S/D wave ratio could be predictive of CVP. We here report the utility of a simple HV waveform pattern, which requires neither measurement nor calculation, for estimating right atrial pressure and prognosis. It has been reported that IVC diameter and the collapsibility index are related to right atrial pressure 27,40,41 and prognosis in HF patients. 42 Concordant with the results of previous studies, 27,[40][41][42] the IVC diameter in this study was larger and right atrial pressure was higher in the R group, which was associated with worse prognosis. However, it remains controversial whether IVC diameter indicates right atrial pressure or prognosis in HF patients, [43][44][45] and a cutoff for IVC collapsibility has not been established. 40,41,46 Implantable hemodynamic monitors are accurate alternatives to RHC [47][48][49] and are potentially useful to avoid rehospitalization due to worsened HF, 50,51 because increases in intracardiac and pulmonary arterial pressure precede clinical decompensation. 52,53 However, these sensors are invasive, and noninvasive hemodynamic indicators are required for daily clinical settings. In this regard, lung ultrasonography can detect lung congestion and left atrial pressure 54 and shows moderate correlation with RHC. 55 Similarly, HV waveforms may be helpful in the noninvasive estimation of right atrial pressure and prognosis. Limitations First, because of the prospective study design and small sample size, the study might be underpowered. However, the study population was much larger than those of previous studies. 21,22,33,56 Second, although documented liver disease was excluded in this study, we could not fully exclude the presence of subclinical liver diseases, which may have affected the HV waveforms. Third, the relationships between HV waveforms and other assessments, such as liver biopsy, or other imaging, such as computed tomography and magnetic resonance imaging, were not examined in this study. 
Fourth, we used clinical variables obtained during hospitalization, without considering changes in HV waveforms, other parameters, or treatments after discharge. Fifth, although we encouraged abdominal ultrasonography, we could not perform it in all patients for various reasons (eg, patient refusal). Additionally, whether RHC was to be performed was decided by each patient's attending physician. There might therefore be potential patient selection bias. Sixth, we did not evaluate the S/D ratio of the HV waveform or IVC collapsibility in the present study. 40,41 Therefore, further studies with larger populations are needed. Conclusion Among HF patients, those with reversed V waves had higher right atrial pressure and were at higher risk of adverse prognosis.
2021-05-10T00:02:48.639Z
2021-02-07T00:00:00.000
{ "year": 2021, "sha1": "7075a3829e17f8ec0319877d9667fce1fbb3f0ca", "oa_license": "CCBYNCND", "oa_url": "http://www.cjcopen.ca/article/S2589790X21000366/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "4a24847bd59cbe88cdb98b1415742c8a5d68d499", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
246431905
pes2o/s2orc
v3-fos-license
A three-stage ensemble boosted convolutional neural network for classification and analysis of COVID-19 chest x-ray images For the identification and classification of COVID-19, this research presents a three-stage ensemble boosted convolutional neural network model. A conventional segmentation model (ResUNet) is used to increase the model's performance in the initial step of processing the CXR datasets. In the second step, the CNN is used to extract the features from the pictures in the training dataset. In the third stage, the retrieved features are combined by voting using machine learning (ML) techniques. There are 5178 abnormal CXR images and 4310 normal CXR images used in this investigation. The suggested model outperforms standalone CNN and ML models, achieving 99.35% accuracy, with precision, recall, and F1-score all at or above 98%. It is argued that the suggested model provides a rigorous and trustworthy evaluation of clinical decision-making in the setting of a public health crisis. Introduction Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) was discovered in Wuhan, China, in January 2020, and the World Health Organization (WHO) has labelled it a pandemic ( WHO, 2020 ). In March 2020, the World Health Organization (WHO) declared this pandemic a Public Health Emergency of International Concern ( Bhagavathula, Aldhaleei, Rahmani, Mahabadi & Bandari, 2020 ). To date, more than 36 million individuals have been infected with the extremely infectious COVID-19 virus, which has resulted in over a million fatalities ( Dong, Du & Gardner, 2020 ). As a result of a shortage of centres, poor health care, and ineffective analytical methods, all countries, whether developed, developing, or undeveloped, are battling the epidemic. High fever, cough, and other severe respiratory symptoms are telltale indicators of COVID-19 infection ( Zu et al., 2020 ). Because of this, it is critical to diagnose the infection in order to prevent its spread. RT-PCR, Computed Tomography (CT), and Chest X-Ray (CXR) are the three primary screening methods employed in COVID-19 identification ( Wang et al., 2020 ). Using the blockchain, Manoj et al. (2020) have developed a system that maintains track of every individual's COVID data in the country while keeping track of whether an individual has been tested positive, is suspected of having the virus, or is otherwise healthy. Reverse transcription-PCR (RT-PCR) is one of the most often used assays for COVID-19 detection, which uses PCR to amplify DNA for analysis. COVID-19 can be detected since the infection is exclusively RNA-based ( Corman et al.; Quang, Xie & Dan, 2016 ). Kieu, Tran, Le, Le and Nguyen (2018) proposed a deep learning model that can identify CXR pictures with an exceptional density. It was possible to detect normal or abnormal densities in upper body X-ray pictures using three CNN models (CNN-128F, ...). In this dataset, 400 CXR pictures have been used, with a 3:1 ratio of training to testing. For normal photographs, the code 0 was used; for abnormal images, the code 1. Extensive testing reveals a precision of 96%. Many researchers have created numerous deep neural network versions for feature extraction from COVID-19-infected chest X-ray pictures, ensuring remarkable precision, sensitivity, and specificity; this is an ongoing effort. In the absence of COVID-19 photos, several of them used a tiny dataset to train their model. 
In order to minimise over-fitting, Wang et al. (2021) suggested a deep rank-based average pooling network, which draws on pooling variants such as strided convolution, l2-norm pooling, average pooling, and max pooling. Zhang, Zhang, Zhang and Wang (2021) proposed a multiple-input deep convolutional attention network (MIDCAN) that combines chest CT and CXR pictures and processes both images concurrently. As a result, the current approaches might use some refinement in the form of more effective training and assessment. Our approach to detecting COVID-19 in CXR pictures employs a three-stage ensemble Boosted Convolutional Neural Network. In medical image processing, the full image does not need to be examined; consequently, the CNN can concentrate on local regions while also improving its overall output. Pre-processed and segmented CXR pictures are used in the benchmark dataset. After that, in the feature extraction phase, the CNN model extracts deep features from CXR pictures in order to identify those afflicted with COVID-19 infection. Finally, a collection of classifiers is assembled for classification. Classification may be done using any of the ML models, although a model that performs exceptionally well on one dataset is not guaranteed to perform as well on all future sets of data. This means that different models will provide different results when applied to different datasets. As a result, in this study, four ML classifiers were integrated to form an ensemble of classifiers, which ensures superior results for datasets of varying sizes and resolutions. The following is a summary of the rest of the paper. Section 2 explores the use of deep learning to detect COVID-19 in CT scans and CXR radiography pictures. The dataset and the proposed three-stage ensemble Boosted Convolutional Neural Network model and its structure are described in detail in Section 3. The suggested method's performance is evaluated in Section 4 using performance metrics and a comparison with systems currently in use. Section 5 concludes the paper. Related works For a number of years now, deep learning approaches have demonstrated their capacity to solve a wide range of problems, particularly in computer vision. Non-invasive clinical adjuncts such as the chest X-ray (CXR) play an important role in the first assessment of various lung disorders ( Chandra, Verma, Jain & Netam, 2020; Ke et al., 2019 ). Using clinical imaging system deep feature extraction techniques, the advances in computing capability made possible by the availability of large labelled datasets have been widely publicised. CXR pictures, one of the most widely used imaging procedures in clinical practise, have been utilised to examine these techniques' usefulness in detecting and assessing cardiac, thoracic, and pulmonary issues ( Monshi, Poon & Chung, 2020; Tang et al., 2020 ). Because of this, Anthimopoulos, Christodoulidis, Ebner, Christe and Mougiakakou (2016) developed a deep CNN for classifying lung image patches into seven distinct categories based on six distinct patterns of interstitial lung disease and healthy tissue. Emphysema on CXR pictures may be quantified using a CNN-based method provided by Campo, Pascau and Estépar (2018). In order to identify pneumonia in CXR pictures, Jaiswal et al. (2019) developed a deep neural network that uses global and local information. 
In Pasa, Golkov, Pfeiffer, Cremers and Pfeiffer (2019), a deep network architecture for tuberculosis diagnostics has been shown. Others in this field use transfer learning algorithms to perform a reliable learning step over a restricted dataset, utilising characteristics extracted from other large datasets in the same domain. As a result, the first technique may classify patients into two groups: those with COVID-19 and those with other comparable respiratory issues. As an alternative, a second approach uses the patient's CXR pictures to identify any other present or normal respiratory system issues besides COVID-19. The third strategy allows for categorisation among the clinical categories at the same time, which is very useful. When these methodologies are used, it is much easier for researchers to detect COVID-19, abnormal, and normal scenarios. However, despite the poor quality of the CXR pictures caused by the portable equipment, the offered methods provided global accuracy of 79% to 90%, helping clinical decision-making by enabling the reliable analysis of portable radiography. Using five different neural network designs, Chouhan et al. (2020) offer a new ensemble technique based on transfer learning. For COVID-19 screening on CXR images, Zhang et al. (2020) suggest a deep anomaly detection design. Using a deep learning model, Ozturk et al. (2020) advocated emphasising critical regions in CXR pictures to aid professionals in the early detection of COVID-19 disease. It is proposed that CXR pictures should be classified into normal, pneumonia, TB, and COVID-19 classes ( Shelke et al., 2020 ). COVID-19 is also divided into three categories: mild, medium, and severe. In order to find COVID-19 in chest X-ray pictures, Yoo et al. (2020) propose a deep learning-based decision-tree classifier. Initially, the photos are classified as normal by a first decision tree, which is followed by a second tree that detects images with irregularities indicative of TB, and a third tree that identifies clinical findings associated with COVID-19. In order to detect intubation or death in patients with COVID-19, Li et al. (2020) used a convolutional Siamese neural network on CXR pictures. Moura et al. (2020) have developed a deep learning method based on CNN that uses data augmentation and regularisation approaches to handle the imbalanced COVID-19 data. Pre-processing tests were conducted in three distinct ways. In order to ensure the reliability of the system, a thorough investigation is carried out into a variety of issues that might affect its performance. It has a 91.5 percent accuracy rate and an 87.4 percent recall rate. In Wang et al. (2020), the COVID-Net architecture was proposed for chest radiography pictures. An open data collection of 13,975 CXR pictures, dubbed COVIDx, was used to create the network. The COVID-19 class has just 358 samples derived from 266 people. The obtained precision was 93.3%. For COVID-19 medical diagnosis using CXR pictures, Singh and Singh (2021) used an automated wavelet-based depth-wise convolution network. The neural network used to evaluate the CXR pictures is enhanced using this technique's depth-wise convolutional layers. Multiresolution analysis is integrated into the network using wavelet decomposition. The input pictures' frequency subbands are supplied into the network to determine the current state. 
The network classifies the input picture into categories like "normal," "viral pneumonia," and "COVID-19," amongst others. Grad-CAM is used to visualize the model's output in order to aid in diagnosis. Measures such as accuracy, sensitivity, and F1 score are used to analyse performance and verify that the method accurately identifies the disease. An iteratively pruned deep learning ensemble was used to detect pulmonary manifestations of COVID-19 on CXR, according to Rajaraman et al. (2020). Open CXR datasets are used to train and evaluate custom convolutional neural networks and ImageNet-pretrained versions for learning modality-specific feature representations at the individual patient level. For the related task of classifying CXRs as normal, bacterial pneumonia, or COVID-19 viral abnormalities, the learnt knowledge is transferred and fine-tuned. Integration of forecasts from the best-performing pruned models is achieved through a variety of different methods. Better predictions were achieved by the application of modality-specific knowledge transfer, iterative model pruning, and ensemble learning. We anticipate that, using chest radiographs, the suggested approach may be quickly implemented for COVID-19 screening. Methodology CXR pictures from a variety of sources were compiled to test the feasibility of the suggested strategy. For the sake of generalisation and to avoid over-fitting, the CXR pictures were scaled and normalised, and a dataset of preprocessed images was created. The CXR pictures were then segmented to increase the suggested model's accuracy. Training, validation, and testing sets were created from the dataset's photos. Training and validation data are used to train the intended CNN model. To begin, all of the training pictures are used to extract their features, which are then combined into a feature vector using a Fully Connected Layer (FCL). Four ML classifiers receive the feature vectors from the training CXR pictures. The grid search technique is applied to each ML classifier in order to fine-tune the hyper-parameters that make up an ideal ML classifier. All the ML classifiers are eventually linked together to form an ensemble of classifiers. The ensemble classifier is a collection of classifiers that compares the results of classifications made on the same data by a number of different models; in the proposed system, the votes of the Decision Tree, Random Forest, AdaBoost, and SVM are counted. As a result, class labels are decided by a majority vote of the group of classifiers, and the data may be more accurately labelled. Fig. 1 depicts the suggested training model's overall working principle, and accuracy, precision, recall, and F1 score are used to gauge its effectiveness. Dataset description We built a new database including CXR images from a variety of sources, such as Github ( Cohen et al., 2020 ), SIRM, TCIA, and Radiopaedia, to improve the proposed model's classification and generalisation capability for COVID-19 identification ( Saha et al., 2018 ). Pneumonia cases as well as COVID-19 positives and negatives make up the newly built CXR image database. It is not clear how many people were infected, but the average age of those afflicted was between 50 and 55 years old. At the time of the study, the database contained 4260 COVID-19-infected images, 918 images of other illnesses such as viral and bacterial pneumonia (MERS, SARS, and ARDS), and 4310 normal CXR images. 
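As a concrete illustration of the pre-processing just described (resizing and normalising the CXR images, then splitting them into training, validation, and test sets), here is a minimal Python sketch. The directory layout, file patterns, and the use of a 7:2:1 split (reported in the next subsection) are illustrative assumptions; the paper does not publish its loading code.

```python
# Minimal sketch: load CXR images, resize/normalise, and split 7:2:1.
# Paths, file patterns, and the 224x224 target size are illustrative assumptions.
import glob
import numpy as np
from tensorflow.keras.preprocessing.image import load_img, img_to_array
from sklearn.model_selection import train_test_split

def load_folder(pattern, label):
    images, labels = [], []
    for path in glob.glob(pattern):
        img = load_img(path, target_size=(224, 224))     # resize to the CNN input size
        images.append(img_to_array(img) / 255.0)          # normalise pixel values to [0, 1]
        labels.append(label)
    return images, labels

covid_x, covid_y = load_folder("data/covid/*.png", 1)     # hypothetical folders
normal_x, normal_y = load_folder("data/normal/*.png", 0)
X = np.array(covid_x + normal_x, dtype="float32")
y = np.array(covid_y + normal_y)

# 7:2:1 split: hold out 30% first, then divide that 30% into validation (2/3) and test (1/3)
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.3, stratify=y, random_state=42)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=1/3, stratify=y_rest, random_state=42)
print(X_train.shape, X_val.shape, X_test.shape)
```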
All other viral and bacterial pneumonia illnesses are omitted from the proposed detection model for COVID-19. In total, 9488 CXR pictures were used in the study. Fig. 2 shows a few examples of the CXR pictures that were used for training, validation, or testing, using a 7:2:1 ratio, as shown in Table 1. Proposed model 3.2.1. Stage 1: pre-processing and segmentation process CXR images are segmented using UNet ( Ronneberger, Fischer & Brox, 2015 ) with a ResNet backbone (ResUNet) to extract the lung regions in this initial step. The design incorporates a contracting path for capturing context and a symmetric expanding path for precise localisation, both of which are key components. Essentially, the concept is to improve a standard contracting network with successive layers of upsampling operators in place of pooling operators; in this way, these layers increase the resolution of the output. High-resolution features from the contracting path are combined with the upsampled results to localise features precisely. In Fig. 3, the findings reveal that the lung masks may be predicted more accurately with the help of the subsequent convolution layer. 3.2.1.1 UNet. Semantic segmentation should use low-level details while keeping high-level semantic information to achieve a better result ( Ronneberger et al., 2015 ). It is difficult to train a deep neural network, especially when there are only a few training candidates. Using a pretrained network after fine-tuning it on the target dataset is one way to address this problem. Using extensive data augmentation, as in UNet, is another option ( Ronneberger et al., 2015 ). We feel that U-Net's approach, in addition to data augmentation, helps alleviate the training issue. The intuition is that copying low-level features to the corresponding higher levels facilitates information propagation, allowing signals to spread between levels more easily; this not only promotes backward propagation during training, but also complements the high-level semantic features with low-level detail. Similarities may be drawn between this and the residual neural network ( He et al., 2016 ). U-Net's performance can be increased by using a residual unit instead of the plain unit described in this work. Residual unit. Going much deeper would improve the performance of a multi-layer neural network, despite interfering with training, and a degradation issue may possibly occur ( He et al., 2016 ). To get rid of these troubles, He et al. (2016) proposed a residual neural network to ease training and solve the degradation problem. The residual network is composed of a series of stacked residual units. Each residual unit can be expressed in a general form as in Eqs. (1) and (2): y_r = h(x_r) + F(x_r, W_r) (1) and x_{r+1} = f(y_r) (2), where x_r and x_{r+1} are the input and output of the r-th residual unit, F(·) is the residual function, f(·) is the activation function, and h(x_r) is an identity mapping, typically h(x_r) = x_r. Fig. 4 shows the distinction between a plain and a residual unit. There are multiple combinations of batch normalization (BN), ReLU activation, and convolutional layers in a residual unit; He et al. offer a comprehensive discussion of the effects of the various combinations. The ResUNet additionally utilizes a full pre-activation residual unit to construct the deep residual U-Net. ResUNet. Combining the characteristics of U-Net and the residual neural network, ResUNet forms a semantic segmentation neural network. 
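To make Eqs. (1) and (2) concrete, here is a minimal Keras sketch of a full pre-activation residual unit (BN, ReLU, convolution, repeated twice, plus an identity shortcut). The filter count and the 1 × 1 projection used when shapes change are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch of a pre-activation residual unit: y = h(x) + F(x), x_next = f(y).
# Filter count and stride are illustrative; a 1x1 convolution is used as h(x)
# whenever the spatial size or channel count changes.
import tensorflow as tf
from tensorflow.keras import layers

def residual_unit(x, filters, stride=1):
    shortcut = x
    # F(x): two BN -> ReLU -> 3x3 convolution blocks (full pre-activation)
    y = layers.BatchNormalization()(x)
    y = layers.Activation("relu")(y)
    y = layers.Conv2D(filters, 3, strides=stride, padding="same")(y)
    y = layers.BatchNormalization()(y)
    y = layers.Activation("relu")(y)
    y = layers.Conv2D(filters, 3, strides=1, padding="same")(y)
    # h(x): identity mapping, or a 1x1 projection when the shapes differ
    if stride != 1 or x.shape[-1] != filters:
        shortcut = layers.Conv2D(filters, 1, strides=stride, padding="same")(x)
    return layers.Add()([y, shortcut])

inputs = tf.keras.Input(shape=(224, 224, 3))
outputs = residual_unit(inputs, filters=64, stride=2)
tf.keras.Model(inputs, outputs).summary()
```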
There are two advantages to this combination: (1) the residual unit simplifies training of the network; (2) the skip connections within a residual unit and between the low and high levels of the network aid information transmission without degradation, making it possible to build a neural network with fewer parameters yet improved semantic segmentation efficiency. ResUNet has a seven-layer architecture, as shown in Fig. 5, for CXR image segmentation of the lungs. Encoding, bridging, and decoding make up the network's three main components; the bridge, the second portion of the process, links encoding and decoding. Segmentation at the semantic level is performed as the final step in the process. All three components are constructed from residual units, each including two 3 × 3 convolution blocks and an identity mapping. BN, ReLU activation, and a convolutional layer are all included in each convolution block. The unit's input and output are linked together via the identity mapping. In the encoding path, there are three residual units. Instead of employing a pooling operation, each unit reduces the feature map dimension: the first convolution block uses a stride of 2 to shrink the feature map by 50%. The decoding path, correspondingly, contains three residual units. Before each unit, the upsampled feature maps from the lower level are concatenated with the feature maps from the corresponding encoding path. After the last level of decoding, the multi-channel feature maps are projected onto the desired segmentation using a 1 × 1 convolution and a sigmoid activation layer. U-Net has 23 layers, whereas our convolutional network has 15. Stage 2: feature extraction process In the second step, features are extracted from the segmented CXR pictures using a CNN network. Convolutional, pooling, and fully connected layers (FCL) are the three main layer types of a CNN. Table 2 displays the types of layers in the CNN model and provides an explanation for each one. The RGB colour pictures (224 × 224 × 3) are put into the CNN model to train the proposed model; the three channels of the input image are available for processing. A kernel of dimension f_h × f_w functions as a filter that slides over each row and column of a window in the convolutional layer. As the name suggests, it is a "feature identifier": low-level characteristics like edges and contours are extracted by these filters. Convolution is performed on a portion of the picture by the filter. The convolution process involves a step-by-step element-wise multiplication and summation of the filter values and the image's pixel values. The filter's values are referred to as weights or parameters, and the model must be trained to learn these weights. More convolutional layers improve the model's ability to extract deep image properties, making it possible for the model to detect all of an image's features. A receptive field is the sub-image currently covered by the filter. Convolution begins at the start of the picture and continues until the entire image has been covered by the filter. Across the picture, convolution generates a matrix of values. The operation is expressed as in Eq. (3): (I × F)(x, y) = Σ_i Σ_j I(x + i, y + j) F(i, j) (3), where I is the input, F is the filter of size f_h × f_w, and the operation is represented by the operator (×). An additional parameter, the stride, specifies how far the filter shifts at each step. All convolutional layers have their stride set to 1 in this model. 
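The encoder-bridge-decoder arrangement just described can be sketched in Keras as follows. This is a compact illustrative assembly under stated assumptions: an input of 224 × 224, three encoder residual units that downsample with stride 2, a bridge, three decoder units with upsampling and skip concatenation, and a final 1 × 1 convolution with a sigmoid. The filter widths are assumptions, since the paper reports the layer count but not every width.

```python
# Minimal ResUNet-style sketch: residual encoder, bridge, residual decoder with
# skip concatenations, and a 1x1 convolution + sigmoid for the lung mask.
# Filter widths (32/64/128/256) are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers

def res_block(x, filters, stride=1):
    shortcut = x
    y = layers.BatchNormalization()(x)
    y = layers.Activation("relu")(y)
    y = layers.Conv2D(filters, 3, strides=stride, padding="same")(y)
    y = layers.BatchNormalization()(y)
    y = layers.Activation("relu")(y)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    if stride != 1 or x.shape[-1] != filters:
        shortcut = layers.Conv2D(filters, 1, strides=stride, padding="same")(x)
    return layers.Add()([y, shortcut])

def build_resunet(input_shape=(224, 224, 1)):
    inputs = tf.keras.Input(shape=input_shape)
    # Encoding path: three residual units; downsampling via stride 2 instead of pooling
    e1 = res_block(inputs, 32, stride=1)
    e2 = res_block(e1, 64, stride=2)
    e3 = res_block(e2, 128, stride=2)
    # Bridge connecting encoder and decoder
    b = res_block(e3, 256, stride=2)
    # Decoding path: upsample, concatenate with the corresponding encoder output
    d3 = layers.Concatenate()([layers.UpSampling2D()(b), e3])
    d3 = res_block(d3, 128)
    d2 = layers.Concatenate()([layers.UpSampling2D()(d3), e2])
    d2 = res_block(d2, 64)
    d1 = layers.Concatenate()([layers.UpSampling2D()(d2), e1])
    d1 = res_block(d1, 32)
    # Project the multi-channel feature maps onto a binary lung mask
    mask = layers.Conv2D(1, 1, activation="sigmoid")(d1)
    return tf.keras.Model(inputs, mask)

model = build_resunet()
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()
```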
As the stride increases, the spatial extent of the output, in terms of height and width, decreases. An excessively high stride value might lead to difficulties, such as the filter stepping beyond the input and the output dimensions shrinking too much. There are a number of ways to deal with these issues, such as zero padding (sometimes referred to as "same" padding), which pads the input with zeros around its borders and keeps the output size constant; if the stride length is 1, the required padding is given by Eq. (4): p = (f − 1) / 2 (4), where f represents the height (or, equivalently, the width) of the filter, both being the same size. In stage two, valid padding was used instead of zero padding, so the output image is not the same size as the input, and its size is reduced after the convolution. Numerous filters in the convolutional layer are used to extract several features. In the first layer, 32 filters were used. The number of filters increased gradually in the successive layers, from 32 to 128, from 128 to 512, and so on. The resulting quantity is called the activation map or feature map. In Table 3, the output shape of every layer is shown, and the size of the output volume is determined by Eqs. (5), (6), and (7): O_h = (I_h − f_h + 2p) / s + 1 (5), O_w = (I_w − f_w + 2p) / s + 1 (6), and O_d = n (7), where I_h means height, I_w represents width, f_h is the filter height, f_w is the filter width, s denotes the stride size, p denotes padding, and n is the number of filters. For the 1st convolutional layer, I_h = 224, I_w = 224, f_h = 3, f_w = 3, s = 1, p = 0, and n = 32. From Eqs. (5), (6), and (7), the following values can be obtained: O_h = O_w = (224 − 3 + 0)/1 + 1 = 222 and O_d = 32, that is, an output volume of 222 × 222 × 32. Non-linear activation is applied to the convolution results. The convolutional layer carries out a linear computation (element-wise multiplication and summation); hence, nonlinearity is introduced on top of the linear procedure with this activation. ReLU (rectified linear unit) activation is applied to the convolution output. The ReLU function is expressed in Eq. (8): ReLU(X) = max(0, X), where X represents the output of the convolution procedure. ReLU sets all negative values to zero. ReLU is used in this suggested model because it enhances the nonlinearity of the design and helps make the computation significantly faster without affecting model fidelity; it also reduces the vanishing gradient problem, in which the lower layers learn very slowly. Following two convolution layers, a max-pooling layer is employed. This layer reduces the input's spatial dimension (both height and width). The layer's stride is 2 in the preferred design, which uses a 2 × 2 filter. The pooling window slides over the input volume and outputs the maximum value in each receptive area. With this layer, the relative placement of a feature in relation to other features is more important than its precise location. It reduces computational cost by preventing overfitting and reducing the number of weights. The dropout layer is then used; it arbitrarily drops out some activations by setting them to zero. Because the model must still predict the true class label of a picture even when some activations are removed, this layer prevents the model from over-fitting to the dataset. The dropout layers employ a rate of 0.25. Before the FCL, a flatten layer transforms the 2D feature maps into a 1D feature vector, in which the images' finer details are accentuated. 
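Putting the stage-2 description together (3 × 3 convolutions with filter counts growing from 32 upward, ReLU, 2 × 2 max pooling with stride 2 after pairs of convolution layers, dropout of 0.25, a flatten layer, a 64-unit fully connected layer, and a softmax output over the two classes), a minimal Keras sketch could look like the following. The exact number of convolution blocks and the 512-filter stage are assumptions based on the ranges quoted in the text, not a reproduction of the authors' Table 2/3 architecture.

```python
# Minimal sketch of the stage-2 CNN feature extractor / classifier.
# Layer counts and filter widths follow the text loosely and are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(224, 224, 3)),
    # Block 1: two 3x3 convolutions (valid padding, stride 1), then 2x2 max pooling
    layers.Conv2D(32, 3, activation="relu"),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(pool_size=2, strides=2),
    layers.Dropout(0.25),
    # Block 2: more filters to capture deeper features
    layers.Conv2D(128, 3, activation="relu"),
    layers.Conv2D(128, 3, activation="relu"),
    layers.MaxPooling2D(pool_size=2, strides=2),
    layers.Dropout(0.25),
    # Block 3
    layers.Conv2D(512, 3, activation="relu"),
    layers.Conv2D(512, 3, activation="relu"),
    layers.MaxPooling2D(pool_size=2, strides=2),
    layers.Dropout(0.25),
    # Flatten to a 1D vector, then the 64-unit FCL whose activations feed stage 3
    layers.Flatten(),
    layers.Dense(64, activation="relu", name="dense_features"),
    layers.Dense(2, activation="softmax"),  # COVID-19 vs normal
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
```

After training, the 64-dimensional activations of the layer named dense_features (a hypothetical name) can be pulled out with tf.keras.Model(model.input, model.get_layer("dense_features").output) and handed to the stage-3 classifiers described next.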
In order to classify COVID-19, the FCL takes the 1D feature vector and then transfers the resulting activations to a second FCL. Softmax activation is applied at the output layer, so the model predicts the class as either COVID-19 or "normal." Table 4. Tuned hyper-parameters of ML classifiers: Random Forest - bootstrap: True (method for sampling, with or without replacement); max depth: 100 (max number of levels in the decision tree); max features: 2 (maximum features for splitting); min samples leaf: 4 (minimum data allowed in a leaf node); min samples split: 10 (minimum data allowed for a split); Support Vector ... Stage 3: ensemble of classifiers The features created by the first FCL of the CNN were collected for each image. Table 3 shows that the first FCL had 64 neurons and was called "dense 2"; as a result, the dimension of the feature vectors is 1 × 64. For each training image, 64 deep features were extracted from the image. To train the ML classifiers, 3220 photos are used in the training collection, and the resulting input is 3220 × 64. Table 4 shows the hyper-parameters of the ML classifiers that were tuned using the input data. The ML classifiers' hyper-parameters were tuned using the grid search technique. Five-fold cross-validation was used on the various ML models to select the best classifiers. All the ML classifiers are linked together to form an ensemble of classifiers. To arrive at the final class label, the results of the four separate machine learning models are tallied and the label is decided by a majority of the classifiers in the ensemble. This improved efficiency and performed better than any one classifier could have done on its own. The votes of the individual models are thus utilised to generate the classifier's decisions during ensembling. The testing stage of the ML classifiers is shown in Fig. 6. Experimental analysis The proposed three-stage ensemble boosted CNN extracts features from CXR images using the CNN, and an ensemble of four different ML classifiers identifies COVID-19. The CNN model was trained, and a grid search technique was employed to tune the hyper-parameters of the ML classifiers. The efficiency of the designed CNN, the ML classifiers, and the ensemble of classifiers was examined for each class label in terms of accuracy, recall, precision, and F1-score, as described in Eqs. (9), (10), (11), and (12), respectively: Accuracy = (TP + TN) / (TP + TN + FP + FN) (9); Recall = TP / (TP + FN) (10); Precision = TP / (TP + FP) (11); F1-score = 2 × (Precision × Recall) / (Precision + Recall) (12), where TP, TN, FP, and FN represent true positive, true negative, false positive, and false negative, respectively. Experimental setup The proposed method was implemented using Python 3.7x with additional libraries such as Pandas, TensorFlow, and Keras for training the CXR COVID-19 dataset. The system ran the Windows 10 operating system on an 11th-generation Intel Core i5. Table 1 shows the use of the picture datasets given in Section 3.1 for training, validation, and testing. A total of 9488 photos were used in the research. There were 4261 pictures with COVID infection, of which 2982, 852, and 426 were used for training, validation, and testing, respectively; of the 918 images of other diseases, 756, 108, and 54 were used for training, validation, and testing, respectively. A total of 4310 normal photos were considered for training, validation, and testing, with 3017, 862, and 431 images being used for each step. COVID-19 was identified using a variety of performance indicators, including accuracy, precision, and recall, as well as the F1 score. 
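The stage-3 procedure (grid-search tuning of Decision Tree, Random Forest, AdaBoost, and SVM on the 64-dimensional CNN features, followed by hard majority voting, evaluated with the metrics of Eqs. (9)-(12)) can be sketched with scikit-learn as below. The feature matrix is generated randomly here as a stand-in for the real 64-dimensional vectors, and the parameter grids are small illustrative examples rather than the authors' full Table 4 grids.

```python
# Minimal sketch of the stage-3 ensemble: grid-searched DT / RF / AdaBoost / SVM
# combined by hard (majority) voting on 64-dimensional CNN features.
# The random feature matrix below is a placeholder for the real extracted features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier, VotingClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

rng = np.random.default_rng(0)
X = rng.normal(size=(3220, 64))          # placeholder for the 3220 x 64 feature matrix
y = rng.integers(0, 2, size=3220)        # placeholder labels (COVID-19 vs normal)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Illustrative (not Table 4) hyper-parameter grids, tuned with 5-fold cross-validation
grids = {
    "dt": (DecisionTreeClassifier(), {"max_depth": [10, 50, 100]}),
    "rf": (RandomForestClassifier(), {"n_estimators": [100, 300], "max_depth": [50, 100]}),
    "ada": (AdaBoostClassifier(), {"n_estimators": [50, 200]}),
    "svm": (SVC(), {"C": [1, 10], "kernel": ["rbf", "linear"]}),
}
tuned = []
for name, (estimator, grid) in grids.items():
    search = GridSearchCV(estimator, grid, cv=5)
    search.fit(X_tr, y_tr)
    tuned.append((name, search.best_estimator_))

# Hard voting: the final class label is the majority vote of the four classifiers
ensemble = VotingClassifier(estimators=tuned, voting="hard")
ensemble.fit(X_tr, y_tr)
pred = ensemble.predict(X_te)
print("accuracy :", accuracy_score(y_te, pred))   # Eq. (9)
print("recall   :", recall_score(y_te, pred))     # Eq. (10)
print("precision:", precision_score(y_te, pred))  # Eq. (11)
print("F1 score :", f1_score(y_te, pred))         # Eq. (12)
```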
Precision, recall, accuracy, and F1 score for the several classifiers applied to each picture category, such as COVID-19 and Normal, were computed for the proposed technique, and the results are shown in Table 5. The classifiers of the ensemble boosted CNN have been evaluated and are shown in Table 6. For COVID-infected pictures, the CNN approach obtains 97.83 percent precision, 97.83 percent recall, and an F1 score of 98.90 percent, while 100% was reached for accuracy, precision, recall, and F1-score for COVID-infected images. When the RF model is used to analyse COVID photos, it has a 95.66% accuracy rate; it also has a 95.65% recall. The performance of the proposed method was compared to state-of-the-art methods ( Islam, Islam & Asraf, 2020; Singh & Singh, 2021; Zhang et al., 2021 ) in terms of accuracy, precision, recall, and F1-score, and the results obtained have been summarized in Table 7. The proposed three-stage ensemble boosted CNN model outperforms the state-of-the-art methods. The method proposed by Islam et al. (2020) is slightly better than the proposed method in the cases of accuracy and sensitivity; this is negligible because there is no significant difference. At the same time, the proposed method is superior in the case of precision and F1-score. By and large, the proposed ensemble boosted CNN model performs better than the existing methods. Conclusion For the classification of COVID-19, this article utilised a three-stage ensemble boosted convolutional neural network. CXR pictures were enhanced and segmented using the ResUNet algorithm in stage 1 of the proposed method, in order to improve its performance. The training dataset was fed into a CNN, which was then used to extract the features. Using machine learning techniques and the voting mechanism, the retrieved features were ensemble-trained in stage 3. In this investigation, 5178 sick and 4310 healthy CXR pictures were used. Conventional CNN and ML methods are outperformed by the new approach. Accuracy, precision, recall, and F1 score were used to evaluate the suggested method's overall performance, and the results showed that it obtained 99.35% accuracy, 100% precision, 98.71% recall, and a 99.35% F1 score. In the event of a public health crisis, the suggested technique provides for a long-term and credible review of the medical decision-making process. The suggested technique produced superior classification results for the CXR than the current methods when applied to several categories of CXR pictures, such as Normal, COVID-19, TB, and BP. Melanoma photos and their different phases can be classified using the suggested technique in the future. Furthermore, it can be applied to a wide range of malignancies, including brain tumours. Compliance with ethical standards Funding: This study was not funded by any funding agency.
2022-02-01T14:09:35.021Z
2022-01-01T00:00:00.000
{ "year": 2022, "sha1": "e28a33363a07a2b8c20a58548735256ec144fcbd", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.ijcce.2022.01.004", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "73c41a79b78d13cc47a5313bed1680fe906ad334", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Medicine" ] }
38502940
pes2o/s2orc
v3-fos-license
Crystal structure of 2-benzylamino-4-(4-bromophenyl)-6,7-dihydro-5H-cyclopenta[b]pyridine-3-carbonitrile The packing of the title compound features N—H⋯N hydrogen bonds, which form inversion dimers, and weak aromatic π–π stacking interactions. In the title compound C22H18BrN3, the cyclopentane ring adopts an envelope conformation with the central methylene C atom as the flap. The dihedral angles between the central pyridine ring and the pendant benzyl and bromobenzene rings are 82.65 (1)° and 47.23 (1)°, respectively. In the crystal, inversion dimers linked by pairs of N—H⋯Nn (n = nitrile) hydrogen bonds generate R2 2(12) loops. These dimers are linked by weak π–π interactions [centroid-centroid distance = 3.7713 (14) Å] into a layered structure. Chemical context Cyanopyridine derivatives exhibit useful anticancer and antiviral activities (Cocco et al., 2005; El-Hawash & Abdel Wahab, 2006). 3-Cyanopyridine derivatives have been reported for their wide range of applications, such as their antimicrobial, analgesic, anti-hyperglycemic, antiproliferative and antitumor activities (Brandt et al., 2010; El-Sayed et al., 2011; Ji et al., 2007). As part of our ongoing work in this area, we synthesized the title compound, which contains a pyridine-3-carbonitrile group, and we report herein on its crystal structure. Structural commentary The molecular structure of the title compound (I) is shown in Fig. 1. The nitrile atoms C31 and N3 are displaced from the mean plane of the pyridine ring by 0.1016 (1) and 0.1997 (1) Å, respectively. The cyclopentane ring fused with the pyridine ring adopts an envelope conformation with atom C8 as the flap, deviating by 0.3771 (1) Å from the mean plane defined by the other atoms (C5/C6/C7/C9). The amino group is nearly coplanar with the pyridine ring, as indicated by the torsion angle N2-C2-C3-C4 = −178.0 (16)°. Steric hindrance rotates the benzene ring (C22-C27) out of the plane of the central pyridine ring by 82.65 (1)°. This twist may be due to the non-bonded interactions between one of the ortho H atoms of the benzene ring and atom H21B of the CH2-NH chain. Synthesis and crystallization A mixture of cyclopentanone (1 mmol), 4-bromobenzaldehyde (1 mmol), malononitrile (1 mmol) and benzylamine was taken in ethanol (10 ml), to which p-TSA (1 mmol) was added. The reaction mixture was heated under reflux for 2-3 h. The reaction progress was monitored by thin-layer chromatography. Figure 1. The molecular structure of the title compound, with displacement ellipsoids drawn at the 30% probability level. Figure 2. Partial packing diagram of compound (I); for clarity, H atoms bound to atoms not involved in hydrogen bonding are not shown. Table 1. Hydrogen-bond geometry (Å, °). Refinement Crystal data, data collection and structure refinement details are summarized in Table 2. The NH and C-bound H atoms were placed in calculated positions and allowed to ride on their carrier atoms: N—H = 0.86 Å, C—H = 0.93-0.97 Å, with Uiso(H) = 1.5Ueq(C) for methyl H atoms and = 1.2Ueq(N,C) for other H atoms. The best crystal investigated was of rather poor quality and very weakly diffracting, with no usable data obtained above 49° in 2θ. Nonetheless, the structure solved readily and refined to give acceptable uncertainties on the metrical data.
HOW DO THE POLICE OVERCOME MONEY LAUNDERING? AN ANALYSIS OF THE ROLE OF THE CENTRAL JAVA REGIONAL POLICE DEPARTMENT IN MONEY LAUNDERING CASES

The main purpose of this research is to describe the role of the Indonesian National Police in tackling the crime of money laundering through the investigations carried out by the Central Java Regional Police Department, and to analyse the factors that influence the implementation of those investigations. The study is descriptive, in line with its problem and purpose, and uses a sociological-juridical method with a qualitative research design. The research found that (1) the investigation of money laundering works effectively and quickly when it is based on Article 74 of the Money Laundering Law and is carried out through systematic case management, which supports efficient and effective work so that the handling of a case can proceed faster and in a measurable way; the aim is to make it easier for investigators to trace wealth derived from criminal acts, a task that is inseparable from the collection of evidence in accordance with Articles 183 jo. 184 of the Criminal Procedure Code and Article 73 of the law on criminal procedure; and (2) the factors that influence investigators in investigating money laundering are the law itself, law enforcement officials, the facilities that support law enforcement, and the community.

INTRODUCTION
Money laundering has recently become a worldwide phenomenon and an international challenge. It is generally defined as the process of transforming the proceeds of crime, such as corruption, drugs, gambling, smuggling, and other serious offences, so that those proceeds appear to be clean wealth because their origin has been concealed. Many countries have difficulty preventing money laundering, and Indonesia is among them, even though Indonesia has applied an anti-money laundering regime since 17. According to many reports and publications, the proliferation of money laundering in Indonesia rests on several factors. One is the free foreign exchange regime, which allows anyone to hold foreign exchange and to use it for many activities without being required to report it to Bank Indonesia. In addition, law enforcement is weak and law enforcement officials lack professionalism; moreover, the demands of globalisation, especially global developments in the financial services sector resulting from liberalisation, have allowed criminals to enter open financial markets. Advances in information technology, especially the use of the internet, make it easy for transnational organised crime to operate (Arifin, 2018; Muhtada & Arifin, 2019). The provisions on bank secrecy are often applied strictly, even though the money laundering law has already limited those provisions. Bank customers can still use pseudonyms or remain anonymous, which is strongly influenced by the weak application of the know-your-customer (KYC) principle by the financial services industry (Arifin, 2018; Wibowo, 2018). Money laundering can also be carried out by layering, which makes the detection of laundering activity very difficult for law enforcement. In this scheme, money that has been placed in a bank is transferred to other banks, both within the country and abroad.
Because such transfers are carried out several times, the money can no longer be traced by law enforcement; detection is further hampered by legal provisions on the confidentiality of the relationship between a lawyer and his client and between an accountant and his client. Yenti Garnasih (2003) stated that there are at least two major problems in the enforcement of this anti-money laundering law, namely bank secrecy and proof. From the aspect of bank confidentiality, customers have the right to privacy and are protected by the law on bank secrecy. From the aspect of proof, money laundering is not a single crime but a double one: prosecuting an act of money laundering requires proving two forms of misconduct at once, namely proving the money laundering (the follow-up crime) itself and proving that the money is illegal. In other words, enforcement against the crime of money laundering cannot be carried out if the other supporting elements are absent.

METHOD
The research used an empirical legal method and was conducted at the Regional Police Department of Central Java, Indonesia (POLDA Jateng). The study examined the role of the Central Java Regional Police Department in overcoming money laundering cases. The research questions are as follows: 1. How is the investigation of money laundering implemented at the Central Java Regional Police to prevent money laundering in Central Java? 2. What factors affect investigators in the investigation of money laundering?

INVESTIGATION
According to IPDA Arif Setyawan, Panit I of Unit 2, Subdit 4, Ditreskrimsus of the Central Java Regional Police, the tasks of investigators handling money laundering at the Central Java Regional Police are to receive complaints from the public about suspected money laundering, to receive reports from the Financial Transaction Reports and Analysis Centre (PPATK), and to conduct investigations of money laundering in the same manner as investigations of other criminal offences, that is, by finding sufficient preliminary evidence of money laundering when investigating criminal acts within their authority. Article 75 provides that, where investigators find sufficient preliminary evidence of money laundering and of its predicate (original) crime, the investigator combines the investigation of the predicate crime with the investigation of money laundering and gives notice of the predicate crime. These opinions and arguments are based on the general explanation of Law Number 8 of 2010, which states that, in its development, money laundering has become increasingly complex, crosses the boundaries of state jurisdiction, uses more varied modes, and makes use of the financial system and financial institutions. Article 69 of the Money Laundering Law (UU PPTPPU) provides that, in order to carry out investigation, prosecution, and examination in court of money laundering, it is not obligatory to first prove the predicate crime. Article 75 emphasises sufficient preliminary evidence of the money laundering case and of its origin; such evidence is obtained through the investigation procedure. The results of the recent money laundering investigations comprise six cases, of which only three were shown to be money laundering, while the three other cases were not.
Thus, of the six TPPU (money laundering) cases reported by the PPATK, only three fulfilled the elements of the money laundering article and could proceed to the public prosecutor. According to IPDA Arif Setyawan of Unit 2, Sub-Directorate 4, Ditreskrimsus of the Central Java Regional Police, the cases can be broken down into three cases that fulfilled the elements and were supported by evidence, and three cases that did not qualify because the suspicious transaction reports were not supported by complete evidence. Under Articles 83 and 86 of the Money Laundering Law, investigators must protect reporters and witnesses. A further issue is that the definition of bank secrecy remains unclear. With the enactment of Law No. 10 of 1998 on 10 November 1998, which amended Law No. 7 of 1992 concerning Banking, terms such as "financial condition" still lack clarification (Hakim, 2015; Husein, 2004; Amirullah, 2003; Arifin & Choirinnisa, 2019). The definition of bank secrecy in that amendment states that bank secrecy is everything related to information concerning depositing customers and their deposits. This raises questions: must everything connected with savings funds and deposits be kept secret by the bank, for example the customer's name, address, account number, card number, hobbies, and family, and does this obligation extend to all customers?

B. Factors Affecting Investigators in the Investigation of Money Laundering
Based on an interview with IPDA Arif Setiawan, Panit I of Unit 2, Subdit 4, Ditreskrimsus of the Central Java Regional Police, the factors that affect investigators in the investigation of money laundering are: 1. the law; 2. law enforcement officials; 3. facilities that support law enforcement; and 4. the community. One of the difficulties that hampers the investigation of money laundering is a juridical constraint: although access is regulated in Article 72 of the Money Laundering Law, it takes a long time to obtain permission from officials such as banks. Moreover, in practice the account holder can move money from a deposit to another bank within a very short period. The rules governing bank secrecy have not yet been implemented skilfully at the investigation level. This accords with the view of Husein (2004) that, even though Law No. 8 of 2010 provides that the "general interest" can be used as a reason to open up or break through the provisions of bank secrecy, implementation in the field has progressed only slowly and with limited effectiveness. Banking services that continue to develop allow taxpayers, debtors (guarantors), and suspects or defendants to move funds within minutes to the accounts of other parties, such as intermediaries or their relatives. Based on an interview with IPDA Arif Setiawan, Panit I of Unit 2, Subdit 4, Ditreskrimsus of the Central Java Regional Police, the efforts to deal with the obstacles faced by investigators in money laundering cases are as follows: 1. Against the juridical obstacles:
a. Bank secrecy provisions. To overcome the juridical obstacle posed by the regulations on bank secrecy, an agreement is reached by bringing the parties (investigators, banks, and customers) together in one place. b. Obligation to protect reporters and witnesses. To overcome the legal problems relating to the obligation to protect reporters and witnesses in money laundering investigations, three steps are taken: first, the report of the money laundering offence is treated as a direct finding of the criminal investigation; second, protection is carried out secretly, so that the report is not immediately published; and third, for reasons of security and safety, the handling is placed at National Police Headquarters under the direct supervision of the police. c. The perception of investigators regarding the crime of money laundering is not yet complete; this juridical obstacle is addressed as described in the summary below. d. The information from the PPATK is not yet complete. To overcome this juridical constraint, investigators coordinate with the PPATK to present witnesses and evidence through PPATK mediation so that the information is no longer incomplete. After that, the investigators examine the material and conduct interviews to determine who can be named as suspects and witnesses, and then carry out the necessary coercive measures.

2. Against the non-juridical constraints: a. The reporter is not necessarily a victim. To overcome the non-legal issue that reporters of money laundering have not yet been determined to be victims of the crime, guarantees are provided to the complainant by treating the criminal acts reported as a direct finding of the police. b. The capacity of investigator human resources is limited. To overcome this non-legal constraint, the capability of human resources is improved by: 1) sending investigators to take part in seminars on the crime of money laundering; 2) sending investigators to follow special investigative education on money laundering; 3) sending investigators to continue their studies in postgraduate legal programmes; and 4) sending investigators to follow training in foreign countries such as the United States.

In summary, the non-juridical obstacles are: a) the reporter has not necessarily been victimised; and b) the number and capability of investigator human resources are limited. The efforts to handle the obstacles faced by investigators in money laundering cases are as follows: a. Against the juridical obstacles: 1) The bank secrecy provisions are addressed by bringing the parties (investigators, banks, and customers) together in one place. 2) The obligation to protect reporters and witnesses is addressed by treating the report of money laundering as a direct police finding, and protection is carried out in the same manner or by placing it at Police Headquarters under the direct supervision and care of the police.
3) Investigators' perceptions of money laundering (TPPU) are not yet complete; this is addressed by socialising Law No. 8 of 2010 concerning TPPU among investigators and by issuing special guidance on the crime of money laundering. 4) Information from the PPATK is not complete; this is addressed by coordinating with the PPATK to present witnesses through PPATK mediation so that the information is no longer incomplete. b. Against the non-juridical obstacles: 1) The reporter has not yet been determined to be a victim; this is addressed by providing guarantees to the reporter and by ensuring that the criminal acts reported are treated as a direct finding of the police. 2) The limited capacity of investigator human resources is addressed by increasing their investigative capability through seminars, advanced studies, or overseas training, such as in the United States.

The author also suggests that money laundering investigations, and their handling in the justice sector, should be implemented effectively in line with the principle of fast, low-cost, and simple justice. The government needs to establish an institution that supports money laundering investigations, such as a PPATK office in every region or province. To avoid differing interpretations caused by the lack of clarity on duties and authority in the Money Laundering Act, the articles that regulate authority should be made explicit rather than implicit, so that the authorities concerned can exercise the intended powers more effectively. To optimise TPPU investigations, it is necessary to increase the number of investigative personnel qualified to investigate money laundering in every institution or agency that has been given the authority to investigate money laundering, and to intensify training and education activities.
Allocation of synchronized phasor measurement units for power grid observability using advanced binary accelerated particle swarm optimization approach

Large-scale power grid observability remains a challenge because of deteriorating infrastructure and the incorporation of renewable energy sources. A smart grid that makes use of cutting-edge technology, such as the phasor measurement unit (PMU), is an excellent option for monitoring networks and keeping them up to date with the latest information. Recently, however, the considerable investment required at deployment locations has slowed the adoption of PMUs. Because PMUs are expensive, it is necessary to deploy them in the best possible places on large-scale power grids. The optimal PMU placement problem (OPPP) is most commonly formulated as a 0-1 knapsack problem. Considering this, the development of an effective optimization technique that can handle such problems has emerged as an appealing topic in recent years. In this paper, a meta-heuristic algorithm based on the binary particle swarm optimization algorithm (BPSO), a binary accelerated particle swarm optimization (BAPSO), is offered for solving the OPPP. Earlier research has shown that BPSO is likely to get stuck in local optima, which is why most previous studies evaluated their techniques only on small-scale test systems. The technique suggested here searches for the optimal solution by employing two topologies, one global and one local, analogous to BPSO. This work determines the optimal PMU positions for a large network in a reasonable amount of time by fine-tuning the acceleration factor. Additionally, in order to employ fewer PMUs, an integration strategy was put into place for the radial buses. According to the computational findings, the suggested method provides OPPP solutions within a reasonable period and compares favourably with prior solutions published in reliable publications.

Introduction
The power networks of today are being run under difficult conditions in order to supply the rapidly growing demand for electrical resources and to keep commercial activity going in the midst of a very dynamic, deregulated market. Therefore, power grid monitoring, preservation, and control become increasingly important for enhanced system operation, maintenance, planning, and energy trading. As a result, the PMU has evolved as a valuable piece of equipment for measuring phasors of voltage and current, synchronized with signals collected using GPS technology. PMUs can enhance operations such as bad data detection, corrective action schemes, state estimation, stability control, and disturbance monitoring. When it comes to installing PMUs on the electrical grid, one of the most important issues to be taken into account is the expense of doing so. It is, therefore, of the utmost importance to determine the optimum positions of PMUs such that reliability is maintained while the costs involved are minimized. Recently, many methods to analyze the OPPP employing various optimization algorithms have been presented. Deterministic and stochastic algorithms are the two categories of optimization strategies that can be used to solve the PMU placement problem.
The integer linear programming (ILP)-based formulation to evaluate the OPPP was initially suggested in [1]. This formulation, in which linear constraints are established based on a binary bus-to-bus connection matrix, evaluates the observability of power networks considerably more simply and straightforwardly. An ILP method was suggested in [2]; this method took traditional measurements as well as zero injection buses (ZIB) into consideration. The use of a permutation matrix is included in the suggested method, which helps reduce the nonlinear limitations. There is also an explanation of the idea of partial observability in [3]; in addition, PMU malfunction was incorporated into the strategy that was presented. The bus observability index (BOI) and the system observability redundancy index (SORI) were described so that the optimal PMU employment set could be obtained.

To address effective solutions for the OPPP while taking into consideration the impact of ZIBs as well as line and PMU outages, a mixed ILP (MILP) is presented in [4] and [5]. A technique based on integer programming and genetic algorithms (GA) was developed in [6] to install PMUs in order to obtain full monitorability of the power network. A combination of GA with a simulated annealing strategy was offered by Kerdchuen and Ongsakul in [7] as a way of obtaining a solution for the OPPP. In [8], researchers investigated a unique cellular GA-based approach for the OPPP that takes into consideration the availability of channel capacity as well as single-line loss. Ahmadi et al. [9] recommended using conventional BPSO to decide on the OPPP with and without ZIBs; a measurement of redundancy is presented as a method for ranking the solutions.

In the research conducted by Chakrabarti et al. [10], an enhanced particle swarm optimization (EPSO) for power grids, as described by Valle et al. [11], was applied to the OPPP. Further velocity update rules are implemented by EPSO if the particles cannot identify a viable solution. Similar to the study conducted by Chakrabarti et al. [10], the authors of [12] suggested a novel velocity update equation to solve the OPPP using BPSO. In addition to the velocity update equation, the authors created additional observability techniques for ZIBs, a PMU failure, and a line failure. In reference [13], the authors introduced the exponential BPSO as a novel way of controlling the inertia mass of BPSO; the authors assert that it improves the search ability of the method. Wang et al. [14] presented a hybrid technique for the OPPP that combines simulated annealing and BPSO. In order to place PMUs in power distribution systems optimally, a tri-objective strategy has been presented in [15]; its goals are to reduce the number of PMU channels, state estimation uncertainty, and sensitivity to line parameter tolerances. Observability propagation depth and probabilistic observability are taken into account in [16] to improve the formulation for the best placement of PMUs in power grids.
A two-stage approach to optimize the placement of PMUs was proposed in [17, 18] to achieve complete system visibility while minimizing cost, taking into account objectives such as cost minimization, redundancy, and efficiency maximization, as well as constraints such as zero injection buses, single PMU failure, single-line outage, and flow measurements. Article [19] addresses the issue of incomplete observability under single PMU loss (N - 1) contingencies and proposes an enhanced two-archive algorithm and a fuzzy decision-making method for PMU placement optimization. In ref. [20], the author presented a BPSO technique for the optimal allocation of PMUs in connected power networks, demonstrating its effectiveness and superiority compared to other methods through testing on various test systems; in that work, however, an integration technique is not used for the radial buses, and the approach entails both considering and ignoring ZIBs.

In this article, the author proposes a meta-heuristic algorithm based on BPSO, a BAPSO, to solve the OPPP in large-scale power grids, aiming to find the best locations for deploying expensive PMUs and to achieve grid observability while considering the challenges posed by deteriorating infrastructure and cost constraints. The algorithm combines global and local search topologies and fine-tunes the acceleration factor to determine the optimal PMU positions efficiently; it also incorporates an integration strategy for radial buses to reduce the number of PMUs required, providing solutions within a reasonable time frame compared with previous research. As BAPSO is a meta-heuristic algorithm, it is expected to generate multiple PMU placement sets; to judge the quality of these sets, among sets with the same number of PMUs the one with the highest SORI value is chosen as the optimal result, since a PMU placement set with higher measurement redundancy is considered better than one with lower redundancy. BAPSO is proposed to determine the minimum number and optimal locations of PMUs for complete monitoring of the power grid, taking into account factors such as normal operation and zero injection measurements.

Method used for the optimal PMU placement problem
In general, the primary goal of the OPPP is to obtain the fewest number of PMUs necessary, along with their locations, to ensure full observability of the power grid. As a result, the generalized objective function used for the identification of the OPPP in this work is [21]

  min F = Σ_{i=1}^{n} c_i y_i,   subject to   H(Y) = A_PMU · Y ≥ B_PMU,   (1)

where n is the number of buses, c_i is the vector of PMU price coefficients, Y is the binary design-variable vector with components y_i that decide the placement of a PMU on the ith bus, and H and B_PMU are, respectively, the transformation (constraint) matrix and requirement vector that may be modified according to the contingency cases. A_PMU = [A_{i,k}]_{n×n} is the binary connectivity matrix that describes the bus-to-bus connections, whose entries are

  A_{i,k} = 1 if i = k or buses i and k are connected, and A_{i,k} = 0 otherwise.   (5)

Y provides the decision on PMU placement:

  y_i = 1 if a PMU is installed at bus i, and y_i = 0 otherwise.   (6)

B_PMU = [b_1, b_2, ..., b_n]^T is the n×1 column vector that specifies the redundancy, that is, the number of times each bus must be observed, as required for the specific case; under normal operation each entry is at least 1.
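To make the constraint A_PMU · Y ≥ B_PMU concrete, the short Python sketch below builds the binary connectivity matrix for the IEEE 14-bus topology and checks a candidate placement against it. This is an illustration of the formulation above rather than the authors' code; the branch list is the commonly published IEEE 14-bus line data, and the candidate set {2, 6, 7, 9} is the placement reported in Table 2.

import numpy as np

# Commonly published IEEE 14-bus branch list (bus pairs, 1-indexed).
branches = [(1, 2), (1, 5), (2, 3), (2, 4), (2, 5), (3, 4), (4, 5),
            (4, 7), (4, 9), (5, 6), (6, 11), (6, 12), (6, 13), (7, 8),
            (7, 9), (9, 10), (9, 14), (10, 11), (12, 13), (13, 14)]
n = 14

# Binary connectivity matrix A: A[i][k] = 1 if i == k or buses i and k are connected.
A = np.eye(n, dtype=int)
for i, k in branches:
    A[i - 1, k - 1] = A[k - 1, i - 1] = 1

# Candidate placement vector Y: y_i = 1 if a PMU sits on bus i.
y = np.zeros(n, dtype=int)
for bus in (2, 6, 7, 9):            # placement reported for the 14-bus case
    y[bus - 1] = 1

boi = A @ y                          # times each bus is observed
b_pmu = np.ones(n, dtype=int)        # base case: every bus observed at least once
print("fully observable:", bool(np.all(boi >= b_pmu)))   # True
print("SORI:", int(boi.sum()))                            # 19, matching the value quoted later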
Radial bus
It is noted that installing a PMU on a bus that is linked to more than one neighbouring bus gives greater coverage of the connected power grid than installing it on a bus with very few adjacent buses, in particular a radial bus [22]. Hence, if the PMU-equipped bus is radial, the PMU can only monitor two buses: the radial bus and its neighbour. Radial buses are therefore excluded from prospective OPPP solutions, since a PMU placed there would measure only the voltage phasors of that bus and one associated bus.

Modeling of ZIB
The consideration of ZIBs may help further reduce the number of PMUs necessary to achieve maximal observability of the power grid. Several methods for coping with ZIBs have been suggested in previous research. The bus integration approach is one of the strategies established to cope with the characteristics of the ZIB [22]. The bus integration strategy requires an integration process between the ZIB and one of its neighbouring buses. As a consequence, during the process of integration, the constraints placed on both buses may be combined into a single constraint. This reduces the number of constraints that need to be satisfied to guarantee that the installed PMUs observe every bus. It is assumed that if all buses connected to the ZIB are observable except one, then the unobservable bus can be construed as observable. Because of this, the bus integration implies that if the merged constraint is satisfied, the bus chosen to be integrated will also be observable.

The IEEE 14-bus system is taken into consideration to explain the bus-integration strategy. This system is illustrated in Fig. 1; note that bus 7 is a ZIB and is coupled with buses 4, 8, and 9. To identify a candidate bus to integrate with the ZIB, one of the following processes may be used: (i) randomly integrate the ZIB with one of its neighbouring buses; in this example, bus 7 is integrated with one of its neighbours, for instance buses 7 and 9 are combined into one; (ii) integrate the ZIB with the neighbouring bus that has the fewest buses attached to it; using this strategy, bus 7 is integrated with bus 8, which has only one connected bus; or (iii) integrate the ZIB with the adjacent bus that has the largest number of connected buses; in contrast with the previous strategy, bus 7 is then integrated with bus 4, which has five connected buses, whereas bus 9 has only four. A minimal code sketch of how such an integration can be expressed on the constraint matrix is given below.
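As a rough illustration of the bus-integration idea noted above, the sketch below merges the observability constraint row of a ZIB with that of one chosen neighbour, so that one fewer constraint has to be satisfied. This is one plausible reading of the strategy, not the authors' implementation; the function name and the example indices are assumptions.

import numpy as np

def integrate_zib(A, zib, neighbour):
    """Merge the observability constraint row of a ZIB with that of one
    adjacent bus: the combined row is satisfied if either original row is.
    Buses are 0-indexed; A is the binary connectivity matrix."""
    A = A.copy()
    # Logical OR of the two constraint rows -> a single merged constraint.
    merged = np.clip(A[zib] + A[neighbour], 0, 1)
    A[neighbour] = merged
    # Drop the ZIB's own row; one fewer constraint has to be satisfied.
    return np.delete(A, zib, axis=0)

# Example with the 14-bus matrix A from the previous sketch:
# bus 7 (index 6) is a ZIB; integrating it with bus 8 (index 7), its
# neighbour with the fewest connections, leaves 13 constraint rows.
# A_reduced = integrate_zib(A, zib=6, neighbour=7)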
When dealing with the presence of ZIBs, the bus-integration approach may be used to establish the minimum number of PMUs required; despite this, there are a few drawbacks to keep in mind: (i) if a PMU is required on an integrated bus, this may mean that the PMU has to be installed on the ZIB, on the bus chosen to be integrated with the ZIB, or on both buses; because of this, a further monitoring test needs to be conducted to determine which of the two buses should receive the PMU; and (ii) every time an integration procedure is carried out, the topology of the system is modified, which for a large-scale power grid may make the topology more complicated.

OPPP rules without ZIB
Rule 1: A PMU installed at a specific bus can measure not only the voltage phasor of that bus but also the current phasors of all the lines connected to it. In Fig. 2, bus {1} is the PMU-equipped bus; here V1, I12, I13, and I41 can be measured directly by the installed PMU.

Rule 2: If the voltage at one end of a line and the line current at that end are known, the voltage at the other end can be calculated. Considering Fig. 3, assume that the line currents I12, I13, and I41 are known; then Ohm's law can be used to compute the voltages at buses {2}, {3}, and {4}. The values of V2 and V3 are obtained from V1 by subtracting the potential drop induced by the current flowing over the line:

  V2 = V1 - Z12 I12,   V3 = V1 - Z13 I13,   V4 = V1 + Z41 I41,

where I41 is taken as flowing from bus 4 towards bus 1.

Rule 3: If the voltages at both ends of a line are known, Ohm's law can be used to calculate the line current flowing between the buses. Given (Fig. 4) that V1 and V2 are known, the line current I12 is

  I12 = (V1 - V2) / Z12.

Fig. 4: Modeling of PMU placement rule 3.

OPPP rules with ZIB
A bus is known as a ZIB when neither load nor generation is connected to it. As a result, the sum of the line currents at a ZIB is zero. If the group formed by a ZIB and its neighbours has Nz members, then monitoring Nz - 1 of these buses is enough to turn the remaining unobservable bus into an observable one. Because of this, when ZIBs are considered, the number of buses that need to be observed drops by one for each ZIB present in the power grid, which in turn reduces the minimum number of PMUs required for total observability. The following PMU observability criteria are used to analyze topological observability with ZIBs.

Rule 4: If exactly one unobservable bus is adjacent to an observable ZIB, then that bus can be deemed observable. Refer to Fig. 5, where bus {2} is an observable ZIB. Assuming that V1, V2, and V3 are known, the line currents I12 and I23 can be determined using Rule 3. Applying KCL at bus {2}, I12 = I23 + I24, so I24 and V4 are obtained as

  I24 = I12 - I23,   V4 = V2 - Z24 I24.

Rule 5: If an unobservable ZIB is connected only to observable buses, then the ZIB can be considered observable. Consider Fig. 6, where bus {2} is an unobservable ZIB connected to the observable buses {1}, {3}, and {4}; the voltage of bus {2} is then obtained from KCL at bus {2}:

  (V1 - V2)/Z12 + (V3 - V2)/Z23 + (V4 - V2)/Z24 = 0.
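The rules above can also be applied purely topologically, without solving any circuit equations. The sketch below is a small deduction loop that assumes the usual topological reading of Rule 1 (a PMU bus and its neighbours are observable) and of the ZIB rules (in the group formed by a ZIB and its neighbours, a single remaining unobserved member becomes observable); it is an illustration, not the authors' code.

def observable_buses(adj, pmu_buses, zibs):
    """adj: dict mapping each bus (1-indexed) to the set of its neighbours;
    pmu_buses: buses carrying a PMU; zibs: zero injection buses."""
    obs = set()
    # Rule 1: a PMU bus and every bus incident to it are observable.
    for b in pmu_buses:
        obs.add(b)
        obs |= adj[b]
    # ZIB rules applied repeatedly: in the group {ZIB} | neighbours,
    # if exactly one member is still unobserved, KCL makes it observable.
    changed = True
    while changed:
        changed = False
        for z in zibs:
            unseen = ({z} | adj[z]) - obs
            if len(unseen) == 1:
                obs |= unseen
                changed = True
    return obs

# With the 14-bus adjacency, PMUs at buses {2, 6, 9} and ZIB bus 7, the loop
# marks bus 8 observable through the ZIB, consistent with the three-PMU
# result quoted later for the 14-bus system when ZIBs are considered.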
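The linearly decreasing inertia weight and the velocity threshold described above translate directly into code; a minimal sketch with the stated values w_max = 0.9 and w_min = 0.4 follows. The clamping rule is the standard interpretation of a velocity threshold and is not claimed to be a verbatim copy of the authors' implementation.

import numpy as np

W_MAX, W_MIN = 0.9, 0.4

def inertia_weight(m, m_max):
    # Linearly decreasing inertia: w(m) = w_max - (w_max - w_min) * m / M_max
    return W_MAX - (W_MAX - W_MIN) * m / m_max

def clamp_velocity(v, v_max):
    # Velocity threshold: |v_ij| is not allowed to exceed v_max_j.
    return np.clip(v, -v_max, v_max)

# Example: halfway through 1000 iterations the inertia weight is 0.65.
# print(inertia_weight(500, 1000))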
The present study presents a meta-heuristic optimization algorithm named BAPSO, which builds upon the BPSO algorithm by incorporating global and local topologies. The BPSO algorithm is known to face the issue of premature convergence and tends to get stuck in local minima. However, the newly introduced mutation strategies in BAPSO can effectively prevent agents from being quickly trapped in local optima, especially when dealing with complex combinatorial problems. BAPSO has the capability to explore the entire solution space for a global search and to conduct a local search, leading to the identification of global minima [27]. The evolutionary equation of BAPSO introduces an acceleration factor a into the velocity update. Compared to the conventional PSO, the evolution equation of BAPSO involves the additional parameter a, while it includes one more parameter, w, than the PSO with the contraction factor. Despite this, BAPSO has demonstrated impressive results in solving complex OPPPs for large-scale interconnected power grids within a reasonable time. The proposed BAPSO in this study shares a similar structure with PSOCF. The equation of PSOCF is given as follows [28]:

  v_i(m+1) = χ [ v_i(m) + c1 r1 (Pbest_i - x_i(m)) + c2 r2 (Gbest - x_i(m)) ],   χ = 2 / | 2 - φ - sqrt(φ(φ - 4)) |,   φ = c1 + c2.

For φ = 3.9, the term sqrt(3.9(3.9 - 4)) does not exist, so PSOCF could not be used; BAPSO, however, is used in this work for solutions of the OPPP of large-scale interconnected power grids and obtained satisfying results, because the acceleration factor a lies outside the space of the contraction factor χ and the optimization of BAPSO proceeds as usual. Although BAPSO is similar to PSOCF, PSOCF does not have adaptability as good as that of BAPSO [27]. The pseudo-code of BAPSO is summarised in Algorithm 1: a loop over all n particles and all j dimensions generates the new velocity and position of each particle at every iteration.

PSO was originally intended to handle unconstrained optimization, but it has the potential to solve constrained problems with modifications. To locate the global minimum while accounting for constraints, BAPSO employs a constraint-handling approach that updates both a particle's best position and the swarm's global best position. To steer the search toward the feasible region, a feasibility term is included, which quantifies the extent of the overall constraint violation. The choice of the global best (Gbest) topology in BAPSO depends on the dimension of the search space. In order to enable BAPSO to work with binary problems, the initial Gbest is represented as a binary column vector [25]. The population size is selected according to the network size [29]. The initial inertia parameter could be selected as w = (0.9 - 0.7 × rand). The objective is evaluated with a number of moving particles at each iteration. As observed, the BAPSO starts iterating to find the global minimum point, while the velocity tends to drift towards v_max or -v_max; the value of v_max is carefully selected [25]. When the population size is insufficient, the algorithm can get stuck in a local minimum or has to perform more iterations to arrive at the correct solution. The particle is positioned within the binary search space [27], and its current velocity and position affect its future position. BAPSO is capable of conducting both global and local searches of the solution space without being confined to local minimum points. To attain better convergence, an inertia weight is utilized to maintain a balance between global and local searches.
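Because the exact BAPSO evolution equation is not reproduced above, the sketch below shows one plausible binary accelerated update consistent with the description: the inertia-weighted velocity terms are scaled by an acceleration factor a, and a sigmoid transfer maps velocities to bit probabilities, as in standard BPSO. The value of a, the sigmoid mapping, and the function name are assumptions made only for illustration.

import numpy as np

rng = np.random.default_rng(0)

def bapso_step(x, v, pbest, gbest, w, a=1.2, c1=2.0, c2=2.0, v_max=4.0):
    """One iteration for a binary swarm.
    x: (pop, n) 0/1 positions; v: (pop, n) velocities.
    'a' is the acceleration factor assumed to scale the whole update."""
    r1 = rng.random(x.shape)
    r2 = rng.random(x.shape)
    v_new = a * (w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x))
    v_new = np.clip(v_new, -v_max, v_max)          # velocity threshold
    prob = 1.0 / (1.0 + np.exp(-v_new))            # sigmoid transfer (BPSO-style)
    x_new = (rng.random(x.shape) < prob).astype(int)
    return x_new, v_new

# Each row of x is one particle, i.e. one candidate PMU placement vector Y.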
Particles
For the OPPP, every particle carries a promising solution. The objective of this work is to determine the minimum number and strategic locations of PMUs that maximize the observability of the power grid. As a result, the configuration of each particle is designed to indicate the availability of a PMU on a particular bus. When determining the OPPP for a 7-bus system (as shown in Fig. 7), the construction of each particle is depicted in Fig. 8. Each dimension of the particle is linked to a specific bus of the power grid, and each particle is encoded according to these dimensions. A value of 1 at bus 2 indicates that a PMU is installed at that bus, while a value of 0 denotes that there is no PMU installed at bus 2.

Redundancy measurement
In order to determine the most effective sets of PMU placements, the BOI and SORI redundancy measurement concepts, as described in reference [30], are utilized. The BOI of a bus is the number of times that bus is observed by the installed PMUs, while the SORI is the sum of all BOI values. The solution sets that have the least number of PMUs and the greatest sum of BOI, represented by the SORI, are considered the most optimal. The BOI is the performance metric and can be calculated as

  BOI_i = Σ_{k=1}^{n} A_{i,k} y_k,   (27)

while the SORI is

  SORI = Σ_{i=1}^{n} BOI_i.   (28)

Fitness function
The BAPSO involves particles that carry potential solutions to the OPPP, and in order to determine the best solution, a fitness function is used to evaluate each solution during the search. The fitness function must meet three important criteria: ensuring power grid observability, determining the minimum number of PMUs needed for full observability, and measuring redundancy. Following these guidelines, the fitness function for identifying the desired target can be expressed as in [22] (Eq. (29)), where w1 (= -2), w2 (= 1), and C (= 0.01) are the weight parameters, N_obs is the total number of observable buses, N_PMU is the number of PMU-equipped buses, and R1 is the redundancy measurement. The fitness function of Eq. (29) is comprised of three components: (i) the count of observable buses, (ii) the count of PMUs, and (iii) the redundancy measurement. The first component determines the number of buses that can be monitored through the placement of the installed PMUs: N_obs is the number of buses i for which BOI_i ≥ 1. The second component determines the quantity of PMUs: N_PMU = Σ_{i=1}^{n} y_i. The third component, the redundancy measurement R1, is obtained from the SORI of Eq. (28).
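A compact sketch of the redundancy and fitness evaluation is given below. BOI and SORI follow directly from Eqs. (27) and (28); because Eq. (29) itself is not reproduced in the text, the way the three weighted terms are combined here (a minimised score in which w1 = -2 rewards observable buses, w2 = 1 penalises PMU count, and C = 0.01 rewards redundancy) is an assumption used only for illustration.

import numpy as np

def boi(A, y):
    # BOI_i: how many times bus i is observed by the installed PMUs.
    return A @ y

def sori(A, y):
    # SORI: sum of all BOI values.
    return int(np.sum(boi(A, y)))

def fitness(A, y, w1=-2.0, w2=1.0, c=0.01):
    """Assumed composition of the three components described in the text:
    number of observable buses, number of PMUs, redundancy measurement."""
    b = boi(A, y)
    n_obs = int(np.sum(b >= 1))       # buses observed at least once
    n_pmu = int(np.sum(y))            # PMUs placed
    r1 = int(np.sum(b))               # redundancy taken as the SORI
    return w1 * n_obs + w2 * n_pmu - c * r1   # lower is better under this reading

# For the 14-bus matrix A and y encoding PMUs at buses {2, 6, 7, 9}:
# n_obs = 14, n_pmu = 4, SORI = 19  ->  fitness = -28 + 4 - 0.19 = -24.19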
Results and discussion
The OPPP is solved using a modified particle swarm optimization approach in this study. The traditional BPSO method is limited by premature convergence and is prone to getting stuck in local minima. The proposed BAPSO method, on the other hand, can carry out both global and local searches to locate global minima. It effectively prevents agents from quickly becoming trapped in local optima, which is particularly useful in addressing complex combinatorial problems.

In order to implement the proposed method effectively, it is necessary to determine appropriate parameter values such as the population size. To this end, various trial runs have been conducted on all the test systems studied for solving the OPPP, and the optimal results are presented here. The population size is four times the number of buses, which is sufficient for solving the OPPP in the present work. The maximum number of iterations has been set to 250 for smaller systems such as the IEEE 14-bus, IEEE 30-bus, New England 39-bus, and IEEE 57-bus systems, while it is 1000 for larger systems such as the IEEE 118-bus, IEEE 300-bus, and NRPG 246-bus systems. MATLAB R2013a software was used to conduct the simulations, and the computer used had an Intel Core i3-5005U (2.0 GHz, 3 MB L3 Cache) processor and 8 GB of RAM. The number and location of radial and zero injection buses are shown in Appendix "Number and location of a radial and zero injection buses", while the connection of ZIBs is displayed in Appendix "Connection of ZIBs". The parameter values used in the proposed method for the PMU placement problem are listed in Table 1.

The parameter values used to solve the OPPP were carefully selected through extensive testing to ensure feasible performance. The proposed BAPSO algorithm was found to converge faster than the standard BPSO algorithm for every bus system. The results obtained were satisfactory, and the proposed method achieved adequate computational time, which was only slightly longer than that of the standard BPSO algorithm. Interestingly, the computational time was found to be superior to that of existing studies. Additionally, the handling of ZIBs and radial buses in the OPPP minimized the number of PMUs necessary for observability of the entire power network.

According to Table 2, it is possible to ensure the observability of the power grid under normal operation for the standard 14-bus system by placing PMUs at the optimum locations with the minimum number required. After considering the number of trials together with the redundancy measurement, buses {2, 6, 7, and 9} are found to be the most promising set for the OPPP. Here, solution set 2 has the maximum SORI, that is, 19. The entries of the BOI indicate how many times each bus is observed by the PMU-equipped buses, as explained in the section "Redundancy measurement".

Table 3 provides the details on the minimum number and optimum locations of PMUs required to achieve full observability of the power grid during normal operation for the 30-bus system. The set of buses {..., 6, 9, 10, 13, 16, 17, 19, 20, 22, 23, 25, and 29} is the most promising set of optimal PMU locations, as depicted in Table 4 for the NE 39-bus system, to make the power grid fully observable under normal operating conditions. The solutions to the OPPP for the 118-bus, 246-bus, and 300-bus systems are presented in Table 6.
Table 7 indicates that the number of PMUs required for achieving maximum observability increases with the expansion of the power grid. As the size of the network increases, the computational time also increases. Table 8 displays the optimal positions and minimum number of PMUs needed when taking ZIBs into account. It is worth mentioning that the inclusion of ZIBs in the simulations reduces the number of PMUs required for observing all buses. For instance, in the 14-bus system under normal operating conditions, four PMUs are required for maximum network observability, but with the consideration of ZIBs, only three PMUs are needed. Tables 9 and 10 compare the results obtained from the BAPSO technique with those obtained by Guo [31], Chakrabarti et al. [32], Milosevic et al. [33], Manousakis et al. [34], and Sodhi et al. [35] for the IEEE 14-, 30-, 57-, 118-, NE 39-, NRPG 246-, and 300-bus systems, without and with ZIBs, respectively. In this study, the BAPSO technique was employed to determine the optimal number of PMUs and their positions while maximizing the redundancy measurement, ensuring full observability of the power grid. The BAPSO approach was applied to the IEEE networks, and the results were compared with those obtained using various programming methods proposed in the previous literature. The comparative analysis demonstrates that the BAPSO technique provides alternative solutions whose objective function values are minimal and in full agreement with those defined by the current programming techniques for each case study.

Table 11 presents a comparison between the computational time of the proposed method and the results obtained using the BPSO algorithm in recent studies. The study found that an increase in the number of buses resulted in a longer computational time. However, the proposed approach significantly outperformed the previous studies in terms of computational time. This demonstrates that the proposed approach not only yields high-quality solutions but also operates at a faster computational pace.
Conclusions and scope for future work
The purpose of this paper is to introduce a novel BAPSO algorithm that incorporates global and local topologies to solve the OPPP and to enhance the learning and convergence behaviour of the search. The proposed algorithm has numerous benefits, such as simplicity, ease of implementation, and the lack of need for algorithm-specific parameters; it requires only common controlling parameters, such as the number of generations, the population size, and the tuning of the acceleration coefficient. The efficacy of the BAPSO algorithm in achieving an OPPP solution is demonstrated using IEEE bus systems. In binary PSO, the population size is a crucial factor in achieving optimal execution time and solution consistency; however, increasing the population size also increases the total execution time. The study finds that the algorithm's average execution time and performance are directly proportional to the size of the population and the maximum number of iterations. In a large-scale network, conventional BPSO can generate a set of optimum solutions, but not within a reasonable timeframe. Conversely, the proposed BAPSO approach offers a fast OPPP solution, both ignoring and considering ZIBs, for large-scale power grids. The results indicate that the proposed algorithm outperforms other meta-heuristic algorithms available in the state-of-the-art literature.

The suggested method may be developed further in a number of ways, including, among other things:
(a) Performance assessment: compare the proposed BAPSO technique to other optimization algorithms utilized for PMU allocation. This assessment should encompass a wide range of test systems with different sizes and complexity to illustrate the usefulness and efficiency of the suggested technique.
(b) Resilience analysis: evaluate the suggested allocation method's resilience by taking into account power-system parameter uncertainties, including demand fluctuations, line outages, and generator failures. Investigate the PMU allocation scheme's capacity to adjust to such dynamic events and provide dependable observability under challenging circumstances.
(c) Network topology incorporation: look into incorporating network topology limitations into the PMU allocation procedure. In order to obtain optimal PMU placement, take into account the effects of network structure, such as the presence of radial or meshed networks, and design a strategy that integrates topological considerations.
(d) Wide-area monitoring: investigate the best locations for PMUs when using them for wide-area monitoring applications, taking into account local or global power grids. Create a framework that considers geographic and connectivity factors in order to improve situational awareness and system stability in massive power systems.
(e) Cybersecurity considerations: examine the possible hazards and vulnerabilities related to the installation of PMUs in the electrical grid. To lessen the danger of cyberattacks and unauthorized access to vital power-system infrastructure, look into ways to protect the security and integrity of PMU data and suggest solutions for safe PMU installation.
(f) Cost-effectiveness study: conduct a thorough cost-effectiveness study to assess the financial advantages of the suggested PMU allocation strategy, considering factors such as the price of PMUs, installation, communication setup, and upkeep. Create optimization models that strive to achieve the required level of observability while minimizing the total cost.
(g) Real-time implementation: examine the viability and practicality of putting the suggested PMU allocation technique into use during the real-time operation of the power system, considering computational effectiveness, communication needs, and SCADA-system integration. Create methods for real-time PMU allocation changes and ongoing power-system monitoring.
(h) Application to renewable energy integration: expand the suggested PMU allocation methodology to accommodate the specific difficulties involved in integrating renewable energy sources, such as solar and wind, into the power grid. Develop methods for the best PMU deployment in grids with a high concentration of renewable energy sources by looking at the effects of distributed generation and intermittent power generation on observability needs.
(i) These suggestions can act as a springboard for further study, enabling the development and improvement of the suggested PMU allocation strategy and eventually advancing the observability and stability of the power grid.

Algorithm 1: Pseudo-code for BAPSO
Objective function f(y), y = [y1, y2, ..., yn]^T
Initialize locations y_i and velocities v_i of n particles
Find Gbest from min{f(y_1), ..., f(y_n)} at m = 0
while (stopping criterion not met)
    m = m + 1 (pseudo-time or iteration counter)
    Assess the objective function at the new positions y_i(m+1)
    Determine the present optimum Pbest for every particle
    Find the current global best Gbest
end while
Output the final results Pbest and Gbest

Table captions referenced in the text: Table 1, configuration settings for the optimization method; Table 2, optimum locations of PMUs for the 14-bus system under normal operation; Table 3, optimum locations for the 30-bus system; Table 4, optimum locations for the 39-bus system; Table 5, optimum locations for the 57-bus system; Table 6, optimum locations for the 118-bus, 246-bus, and 300-bus systems; Table 7, optimum locations of PMUs under normal operation; Table 8, optimum locations considering ZIBs; Table 9, comparison of obtained results with existing methods without ZIBs; Table 10, comparison with ZIBs; Table 11, comparison of computational time with existing methods; Table 13, buses connected with ZIBs for all test systems. In Tables 3 and 4, bold values mark the results considered best in terms of time efficiency. In Table 11, "-" means not reported, "Ig." and "Cons." mean ignoring and considering, and "Comp." means computational.

List of symbols: v_i(m), velocity of particle i at iteration m; v_ij(m+1), velocity component of the ith particle along the jth direction at the (m+1)th iteration; v_max_j, maximum absolute value of velocity allowed along the jth direction in the parameter space; w_1, weight parameter for the number of buses observed; w_2, weight parameter for the number of PMUs; C, weight parameter for the measurement redundancy; M_max, maximum number of iterations used in PSO; pop, population size; r_1 and r_2, random numbers uniformly distributed between [0, 1] to maintain swarm diversity; R_1, redundancy measurement; N_obs, total number of observable buses; c_i, vector of PMU price coefficients; H, transformation matrix that may be modified according to the contingency cases; Pbest, personal best position of a particle discovered so far.
Live-cell imaging of nuclear–chromosomal dynamics in bovine in vitro fertilised embryos Nuclear/chromosomal integrity is an important prerequisite for the assessment of embryo quality in artificial reproductive technology. However, lipid-rich dark cytoplasm in bovine embryos prevents its observation by visible light microscopy. We performed live-cell imaging using confocal laser microscopy that allowed long-term imaging of nuclear/chromosomal dynamics in bovine in vitro fertilised (IVF) embryos. We analysed the relationship between nuclear/chromosomal aberrations and in vitro embryonic development and morphological blastocyst quality. Three-dimensional live-cell imaging of 369 embryos injected with mRNA encoding histone H2B-mCherry and enhanced green fluorescent protein (EGFP)-α-tubulin was performed from single-cell to blastocyst stage for eight days; 17.9% reached the blastocyst stage. Abnormalities in the number of pronuclei (PN), chromosomal segregation, cytokinesis, and blastomere number at first cleavage were observed at frequencies of 48.0%, 30.6%, 8.1%, and 22.2%, respectively, and 13.0%, 6.2%, 3.3%, and 13.4%, respectively, for abnormal embryos developed into blastocysts. A multivariate analysis showed that abnormal chromosome segregation (ACS) and multiple PN correlated with delayed timing and abnormal blastomere number at first cleavage, respectively. In morphologically transferrable blastocysts, 30–40% of embryos underwent ACS and had abnormal PN. Live-cell imaging may be useful for analysing the association between nuclear/chromosomal dynamics and embryonic development in bovine embryos. Using time-lapse cinematography analysis in cattle, we recently found that morphokinetic indicators (MKIs) such as timing, number of blastomeres at first cleavage, and number of blastomeres at onset of the lag-phase, are useful markers for evaluating embryo viability after transfer to a recipient 7,9 . Therefore, using MKIs may be a better assessment method than using IETS morphological grading, in terms of reliability and objectivity of evaluation of bovine IVF embryos 9 . However, biological information obtained by time-lapse cinematography is limited and lipid-rich dark cytoplasm prevents PN/nuclear observation in bovine IVF embryos. Techniques have been developed to visualise nuclear/chromosomal dynamics in mice by long-term live-cell imaging 10,11 . This consists of mRNA injection and time-lapse fluorescence confocal microscopy. Live-cell imaging revealed that almost all mouse embryos with abnormal chromosome segregation (ACS) during the first mitosis have no viability after transfer 11 . Here, we performed live-cell imaging in bovine IVF embryos. To evaluate the relationship between nuclear/chromosomal abnormalities and embryonic development and morphological blastocyst quality, we injected zygotes with mRNA encoding α-tubulin tagged with enhanced green fluorescent protein (EGFP) as a microtubule marker and histone H2B fused with mCherry as a chromatin marker. This method enabled the evaluation of nuclear/chromosomal integrity in the presence of dark cytoplasm. Impact of live-cell imaging on in vitro bovine embryo development. To evaluate the impact of live-cell imaging on embryonic development, we first investigated the effect of mRNA injection. In cultivation using a conventional incubator, there was no effect of mRNA probe injection on blastocyst development in bovine zygotes (Table 1). 
Subsequently, to determine optimal imaging conditions, embryos injected with mRNA probes of histone H2B-mCherry and EGFP-α-tubulin were exposed to a laser for various durations at different wavelengths. The best blastocyst yield was obtained when the embryos were exposed for 50 msec at 488 nm and 100 msec at 561 nm (Table 1). This developmental competence was not inferior to that of embryos that were not exposed to live-cell imaging (P > 0.05). In a preliminary assessment of embryo transfer, pregnancy with foetal heartbeat was diagnosed at days 31 and 45 (Supplementary Movie S1). This optimised imaging condition was used in subsequent analysis. Relationship between abnormal fertilisation and first cleavage and subsequent in vitro developmental competence. The developmental competence of the populations of embryos with abnormalities in the number of PN, chromosome segregation, cytokinesis at first cleavage, and number of blastomeres after first cleavage was analysed (Fig. 1B). Forty-eight percent of embryos (177/369) had an abnormal number of PN [0 PN = 13.6%, 1 PN = 19.5%, and multi-PN ( ≥ 3) = 14.9%]; 13.0% of them developed to the blastocyst stage. This value was significantly lower than that of embryos with a normal number of PN, 22.4% (43/192) of which developed into blastocysts (P = 0.019). ACS occurred in 30.6% of embryos, most of which stopped developing before the eight-cell stage, with only 6.2% of them developing into blastocysts. Eight point one percent of embryos exhibited abnormal cytokinesis at first cleavage and almost all of these embryos stopped developing before the eight-cell stage (Fig. 1B). Treatment with okadaic acid during oocyte maturation, which induces cytokinesis defects via inhibition of protein phosphatase type 1 and type 2 12 , increased abnormal cytokinesis and also inhibited embryonic development from cell-stages two to eight (Supplementary Tables S1 and S2). At the end of first cleavage, 22.2% of embryos had an abnormal number of blastomeres and 13.4% of these developed into blastocysts. This was comparable to blastocyst development of embryos with normal blastomere numbers (19.2%, P = 0.231). Multivariate statistical analysis indicated that blastocyst development was related to the presence of one PN and ACS at first cleavage (Supplementary Table S3). These results suggested a relationship between abnormal first cleavage and inferior embryo development. Factors relevant to timing and blastomere number at first cleavage and blastomere number at lagphase. A multivariate statistical analysis indicated that the timing of the first cleavage was related to embryos with one PN and ACS (Supplementary Table S4). The timing of the first cleavage in oocytes with one PN ( Fig. 2A) or ACS (Fig. 2B) was significantly slower compared to that of oocytes without these abnormalities (P = 0.008 and P < 0.001, respectively). Multi-PN number was related to an abnormal number of blastomeres, which was defined as three or more blastomeres observed at the end of the first cleavage (multi-division) (Supplementary Table S5). As shown in Fig. 2C, 47.6% of the embryos with multi-division had multi-PN, whereas only 5.6% with normal blastomeres had multi-PN. Interestingly, only two PN were involved in syngamy, and these nuclei formed mitotic spindles at each position and segregated in multi-PN embryos, undergoing multi-division (Supplementary Movie S2). 
The blastomere number at the onset of the lag-phase (identified as a temporary developmental arrest during the fourth or fifth cell cycle) was related to multi-PN and ACS (Supplementary Table S6). Of the embryos with three to five and six to eight blastomeres at lag-phase, 6.2% and 11.5%, respectively, had multi-PN, while 29.2% of those with nine to sixteen blastomeres had multi-PN (Fig. 2D). On the other hand, of the embryos with six to eight and nine to sixteen blastomeres at lag-phase, 12.4% and 16.7%, respectively, had undergone ACS at first cleavage (Fig. 2E). [Figure 2 legend: Asterisks indicate a significant difference between groups based on the Wilcoxon rank sum test (*P = 0.008, P < 0.001). A hierarchical relationship between more than three blastomeres (multi-division) and three or more pronuclei (multi-PN) is shown by a mosaic plot (C). Hierarchical relationships between the number of blastomeres at lag-phase and multi-PN (D) and ACS (E) are shown by mosaic plots.]

Relationship between morphologically graded embryo quality and nuclear/chromosomal abnormalities. We examined the incidence of nuclear/chromosomal abnormalities in blastocysts graded as morphologically transferable. According to IETS criteria, the proportions of blastocysts that exhibited ACS, an abnormal number of PN, and both ACS and an abnormal number of PN were 7.3%, 36.4%, and 1.8%, respectively (Fig. 3A). On the other hand, among blastocysts assessed by MKIs, 7.3% and 26.8% exhibited ACS and an abnormal number of PN, respectively (Fig. 3A). Interestingly, IETS criteria indicated that 9.1% of blastocysts had multi-PN, whereas no multi-PN blastocysts were observed among those assessed by MKIs (Fig. 3B). Therefore, blastocysts graded as morphologically normal according to IETS criteria and MKIs may have nuclear/chromosomal abnormalities. However, embryo quality assessment using morphokinetic indicators may reduce the risk of selecting embryos with multi-PN.

Discussion
In this study, we succeeded in performing non-invasive long-term live-cell imaging of bovine IVF embryos with fluorescence confocal laser microscopy. This technique allowed visualisation of the nuclear/chromosomal dynamics of bovine embryos for eight days, and determination of various biological factors involved in the relationship between nuclear/chromosomal abnormalities and subsequent in vitro embryonic development and morphological embryo quality. In human ART, the number of PN is the most important criterion for predicting the developmental competence of embryos. However, the lipid-rich dark cytoplasm of bovine embryos has hindered observation of PN. In the present study, we observed that embryos with an abnormal number of PN had impaired development compared with embryos with two PN. The blastocyst formation rates of oocytes with zero, one, two, and multi-PN were 18.0% (9/50), 9.7% (7/72), 22.4% (43/192), and 12.7% (7/55), respectively (Supplementary Table S7). The lower developmental competence of embryos with an abnormal number of PN, such as one PN and multi-PN, is consistent with that reported in a previous human study 13. Embryos with an abnormal number of PN may be derived from in vitro maturation (IVM)/in vitro fertilisation (IVF) failure 14,15. Further improvement of IVM and IVF technologies will be required to prevent deficient cytoplasmic maturation and abnormal fertilisation [16][17][18]. ACS during in vitro embryonic development has been demonstrated to be a promising indicator of embryo viability in mouse ART 11,19.
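The hierarchical relationships summarised in Fig. 2D,E can be tabulated and drawn as mosaic plots directly in base R. The sketch below is purely illustrative: only the proportions (6.2%, 11.5%, and 29.2% of embryos with multi-PN in the three lag-phase classes) are taken from the text, while the group sizes are hypothetical placeholders, since the per-class totals are not given here.

# Hedged sketch: rebuilding a contingency table from reported proportions and
# hypothetical group sizes, then displaying it as a mosaic plot (cf. Fig. 2D).
lag_class <- c("3-5", "6-8", "9-16")     # blastomeres at the onset of the lag-phase
n_group   <- c(65, 52, 24)               # hypothetical group sizes (not reported in the text)
p_multiPN <- c(0.062, 0.115, 0.292)      # proportions with multi-PN, as reported
multi_pn  <- round(n_group * p_multiPN)
counts    <- cbind(multi_PN = multi_pn, other = n_group - multi_pn)
rownames(counts) <- lag_class
mosaicplot(counts, main = "Blastomeres at lag-phase vs. multi-PN",
           xlab = "Blastomeres at lag-phase", ylab = "Pronuclear status", color = TRUE)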
When okadaic acid induced severe ACS in bovine IVF embryos, subsequent development decreased (Supplementary Tables S1 and S2, Supplementary Fig. S1, and Supplementary Movie S3). ACS was also observed during first cleavage in embryos under normal bovine IVF conditions, and these embryos likewise exhibited lower blastocyst competence; however, the incidence of ACS (30.6%) was higher than that in mouse IVF embryos (1.7-3.5%) 11. In mouse studies, ACS was caused by double-strand DNA breaks in the sperm genome 11, and the incidence rate was increased by exposing the sperm to freeze-thaw cycles 11. In the present study, frozen semen was used for IVF, which may be a contributing factor to the high incidence of ACS 11. It has been well documented that the timing of first cleavage is associated with blastocyst formation and pregnancy success 7,9,20,21. We observed a relationship between delayed timing of first cleavage and ACS. In a similar study using a pig embryo model, there was a correlation between the presence of double-strand DNA breaks, delayed embryo cleavage, and decreased blastocyst formation 22. In somatic cells, micronuclei, often observed during ACS, are formed as a result of DNA damage, including double-strand DNA breaks, and cause delayed and prolonged mitosis, which is regulated by the spindle assembly checkpoint (SAC) 23 to allow DNA repair. Hence, the delayed timing of the first cleavage may be a result of SAC activation by DNA damage sustained pre- or post-fertilisation. Previous studies reported that embryos which cleave directly into three to four cells (multi-division) at first cleavage have a high incidence of chromosomal abnormalities and low viability after transfer 7,9,24. In the present study, we confirmed that multi-PN was involved in multi-division. A study of human embryos also revealed that most tri-pronuclear embryos cleaved directly into three cells 24. Approximately half of the instances of multi-division occurred in multi-PN embryos, whereas multi-PN was scarcely observed in embryos with two blastomeres. Furthermore, among embryos that reached the blastocyst stage, no multi-PN was observed in embryos with two blastomeres (Supplementary Fig. S2). Thus, the number of blastomeres at first cleavage may be useful for predicting the occurrence of multi-PN. The lag-phase, which occurs in the fourth or fifth cell cycle when a longer Gap 2 phase is inserted, corresponds to embryonic genome activation in cattle. Previously, we showed that a small number of blastomeres at the lag-phase was related to a higher incidence of apoptosis in blastocysts 9. Live-cell imaging revealed a relationship between ACS and a low blastomere number at the lag-phase. It has been reported that DNA damage, such as double-strand breaks, may cause ACS 11 and induce apoptosis in embryos 25, which could explain the high incidence of apoptosis observed. In this study, nuclear/chromosomal abnormalities, such as ACS and an abnormal number of PN, were observed in embryos graded as morphologically transferable by both IETS criteria and MKIs. Thus, it may be difficult to judge nuclear/chromosomal abnormalities on the basis of morphological evaluation alone. Indeed, karyotyping of blastocysts revealed that 14.3% (2/14) of in vivo embryos and 12.5% (1/8) of embryos selected by live-cell imaging contained mixoploids, whereas embryos selected by IETS criteria and MKIs contained mixoploids at rates of 42.9% (12/28) and 27.6% (8/29), respectively (Supplementary Fig. S3).
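The karyotyping proportions quoted above (mixoploid blastocysts in 2/14 in vivo, 1/8 imaging-selected, 12/28 IETS-selected, and 8/29 MKI-selected embryos) involve small counts, for which Fisher's exact test is a natural choice. The study itself does not report a formal test of these proportions, so the R sketch below is only an illustration of how such a comparison could be run.

# Hedged sketch: Fisher's exact test on the mixoploidy counts quoted in the text.
karyo <- matrix(c( 2, 14 -  2,   # in vivo embryos
                   1,  8 -  1,   # selected by live-cell imaging
                  12, 28 - 12,   # selected by IETS criteria
                   8, 29 -  8),  # selected by MKIs
                nrow = 4, byrow = TRUE,
                dimnames = list(selection = c("in vivo", "imaging", "IETS", "MKI"),
                                karyotype = c("mixoploid", "normal")))
fisher.test(karyo)                          # overall test across the four groups
fisher.test(karyo[c("imaging", "IETS"), ])  # pairwise: imaging- vs IETS-selected blastocysts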
Interestingly, since multi-PN embryos were not included among the MKI-selected embryos, monitoring embryo development with time-lapse cinematography, instead of the morphological "snapshot" evaluation used in accordance with IETS criteria, may reduce the risk of selecting embryos with multi-pronuclei, which are clinically discarded from transferrable embryos in human ART 13. In conclusion, this live-cell imaging technique could be useful for analysing the association between nuclear/chromosomal dynamics and embryo development in bovine embryos whose nuclei/pronuclei are not observable by visible light microscopy.

Materials and Methods
Ethics statement. This study was approved by the Ethics Committee for the Care and Use of Experimental Animals at Tokyo University of Agriculture and Technology, located in Tokyo, and the NARO Institute of Livestock and Grassland Science Animal Care Committee in Tsukuba, Japan. All animals received humane care according to guideline numbers 6, 22, and 105 of the Japanese Guidelines for Animal Care and Use.

Chemicals. Unless specified otherwise, all chemicals were purchased from Sigma-Aldrich (St Louis, MO, USA).

Oocyte collection. Ovaries from Japanese Black or Japanese Black × Holstein breeds were collected from a local slaughterhouse, transported to the laboratory, washed, and stored in physiological saline. Cumulus-oocyte complexes (COCs) were aspirated from small follicles (2-6 mm in diameter) using a 10-mL syringe equipped with a 19-gauge needle 26.

Oocyte shipping and oocyte in vitro maturation. COCs were shipped to the IVF laboratory while maturation proceeded in vitro (Fig. 4A). [Figure 4 legend: Schematic representation of the experimental design. At Tokyo University of Agriculture and Technology, Tokyo, Japan, COCs were collected from slaughterhouse-derived ovaries. The collected COCs were shipped while undergoing in vitro maturation (IVM) to Kindai University, Wakayama, Japan, where live-cell imaging was performed for this study. After arrival, in vitro fertilisation (IVF) was conducted for 6 h. mRNAs were injected into oocytes in which polar bodies were observed, and live-cell imaging was performed for 8 days with the CV1000. The obtained images were retrospectively analysed. A map was drawn with the JapanPrefecturesMap function in the Nippon package of R statistical software (A) 30. 3D live-cell imaging of 369 embryos injected with mRNA encoding histone H2B-mCherry and EGFP-α-tubulin was performed from the one-cell to the blastocyst stage for 8 days. Maximum intensity projections (MIP) were used for 2D image construction. Red and green represent histone H2B-mCherry (nuclei/chromosomes) and EGFP-α-tubulin (microtubules), respectively (B). 2D/3D images of 8-cell embryos were constructed with ImageJ/Fiji and Volocity software. By optimising the imaging conditions, it was possible to capture images up to the topmost nuclei/chromosomes along the z-axis in lipid-rich bovine embryos (C). The number of pronuclei (PN), the timing, chromosome segregation, and cytokinesis at first cleavage, the number of blastomeres at the end of first cleavage, and the number of blastomeres at the onset of the lag-phase were retrospectively scored (D).] The IVM medium was 25 mM HEPES-buffered TCM199 (M199; Gibco, Paisley, Scotland, UK), supplemented with 5% calf serum (CS; Gibco) and 0.1 IU/mL recombinant human follicle-stimulating hormone (FSH) (Follistim; MSD, Tokyo, Japan).
COCs were transferred to flat-bottom microtubes (TreffLab, Degersheim, Switzerland) containing 500 μL of IVM medium (20-40 COCs/tube) and then covered with 300 μL of paraffin oil. The tubes were placed in a cell transport device (Fujihira, Tokyo, Japan) adjusted to 38.5 °C and shipped. Upon arrival at the IVF laboratory, the tubes were placed in a CO2 incubator (Astec, Fukuoka, Japan) at 38.5 °C in a humidified atmosphere of 5% CO2 in air, and culture was continued for up to 22 h of IVM. The developmental competence of oocytes derived from this shipping system was similar to that of oocytes matured in a conventional IVM system with a CO2 incubator (Supplementary Table S8).

In vitro fertilisation (IVF). IVF was performed as described previously 9. After 22 h of IVM, ejaculated sperm samples from Japanese Black bulls were thawed and then centrifuged in 3 mL of 90% Percoll solution (GE Healthcare, Uppsala, Sweden) at 750 × g for 10 min. After centrifugation, the pellet was re-suspended and centrifuged in 6 mL of sperm washing solution (Brackett and Oliphant solution, BO) 27, supplemented with 10 mM hypotaurine and 4 U/mL heparin (Novo-Heparin Injection 1000; Aventis Pharma Ltd., Tokyo, Japan), at 550 × g for 5 min. The pellet was then re-suspended in sperm washing solution and BO solution supplemented with 20 mg/mL bovine serum albumin (BSA) to achieve a final concentration of 3 × 10⁶ sperm/mL. This suspension (100 µL) was aliquoted into 35 mm dishes under paraffin oil as fertilisation droplets. COCs were washed twice in BO supplemented with 10 mg/mL BSA and cultured in the fertilisation droplets in a personal multi-gas incubator (APM-30DR, ASTEC Inc., Fukuoka, Japan) for 6 h at 38.5 °C in a humidified atmosphere of 5% CO2 in air.

Live-cell imaging. The preparation of mRNAs encoding EGFP-α-tubulin and histone H2B-mCherry was described previously 28. mRNA was synthesised using the RiboMAX™ Large Scale RNA Production Systems-T7 (Promega, Madison, USA). The 5′ end of each mRNA was capped using Ribo m7G Cap Analog (Promega). Synthesised RNAs were purified by phenol-chloroform treatment and gel filtration using a MicroSpin™ G-25 column (GE Healthcare). After insemination, oocytes were completely denuded of cumulus cells and spermatozoa by pipetting with a glass pipette in phenol red-free Charles Rosenkrans 1 medium with amino acids (CR1aa) 29, supplemented with 5% CS and 0.3% BSA, used as the in vitro culture (IVC) medium. mRNAs (5 ng/μL each) were injected into the ooplasm within four hours of IVF using a piezo manipulator, in HEPES-buffered CZB medium. The oocytes were transferred to 5-μL droplets of IVC medium on a film-bottom dish (Matsunami Glass Ind., Ltd., Osaka, Japan). Embryos were imaged three-dimensionally (3D) using a box-type confocal laser microscope with a stable incubation chamber (CV1000, Yokogawa Electric Corp., Tokyo, Japan) set at 38.5 °C in 6% CO2, 5% O2, and 89% N2 with saturated humidity. We preliminarily confirmed that a culture volume of 5 μL could support embryo development for 8 days (Supplementary Table S9). To prevent movement of the embryos during imaging, four oocytes were first stuck to the bottom of each droplet of 2.5 μL of protein (BSA and CS)-free CR1aa, and an additional 2.5 μL of CR1aa containing double the amount of proteins was then added; this exploits the fact that embryos adhere to the bottom of the dish in macromolecule-free media.
Images were taken at 10-min intervals for 8 days using the following laser parameters for EGFP-α-tubulin and histone H2B-mCherry, respectively: excitation at 488 nm and 561 nm, emission filters of 525/50 nm and 617/73 nm, laser power of 0.05 mW and 0.10 mW, exposure times of 50 msec and 100 msec, gain of 100% for both, a range of 150 μm for both, and 31 slices for both. The 2D/3D images were constructed with the ImageJ/Fiji image analysis platform or Volocity software (PerkinElmer, Inc., Massachusetts, USA) (Figs 1A and 4B,C, and Supplementary Movie S4). The 2D/3D images were retrospectively analysed for several parameters (Fig. 4D).

Statistical analysis. All statistical analyses were performed with R statistical software (The R Foundation, version 3.2.4). Unless specified otherwise, all functions were used from the stats package. Blastocyst formation rates were compared using a chi-squared test. The timing of first cleavage was compared using a Wilcoxon rank sum test. The variables reflecting the timing of and blastomere number at first cleavage were identified using a multivariate regression model and a multivariate logistic regression model, respectively. The variables reflecting blastomere number at the lag-phase were identified using a cumulative logistic regression model with the clm function in the ordinal package (version 2015.6-28). The cut-off value for the timing of first cleavage was determined with the roc and coords functions in the pROC package (version 1.8). A p-value < 0.05 was considered significant.
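To make the last two steps of the statistical analysis concrete, the sketch below shows how the named functions could be called. Only the package and function choices (clm from ordinal, roc and coords from pROC) follow the Methods; the per-embryo data frame, its variable names, and the simulated values are hypothetical stand-ins for the real observations.

# Hedged sketch: cut-off determination and cumulative logistic regression as named in the Methods.
library(pROC)     # roc(), coords()
library(ordinal)  # clm()

set.seed(1)
n <- 120
embryos <- data.frame(
  first_cleavage_h = rnorm(n, mean = 28, sd = 4),   # hypothetical timing of first cleavage (h post-IVF)
  blastocyst       = rbinom(n, 1, 0.18),            # 1 = reached the blastocyst stage
  multi_PN         = rbinom(n, 1, 0.15),
  ACS              = rbinom(n, 1, 0.30),
  lag_blastomeres  = factor(sample(c("3-5", "6-8", "9-16"), n, replace = TRUE),
                            levels = c("3-5", "6-8", "9-16"), ordered = TRUE)
)

# Cut-off value for the timing of first cleavage via ROC analysis
roc_obj <- roc(response = embryos$blastocyst, predictor = embryos$first_cleavage_h)
coords(roc_obj, x = "best", best.method = "youden")  # threshold, specificity, sensitivity

# Cumulative logistic regression for blastomere number at the lag-phase
summary(clm(lag_blastomeres ~ multi_PN + ACS, data = embryos))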
YUCCA8 and YUCCA9 overexpression reveals a link between auxin signaling and lignification through the induction of ethylene biosynthesis

Auxin is associated with the regulation of virtually every aspect of plant growth and development. Many previous genetic and biochemical studies revealed that, among the proposed routes for the production of auxin, the so-called indole-3-pyruvic acid (IPA) pathway is the main source of indole-3-acetic acid (IAA) in plants. The IPA pathway involves the action of 2 classes of enzymes, tryptophan-pyruvate aminotransferases (TRYPTOPHAN AMINOTRANSFERASE OF ARABIDOPSIS 1 (TAA1)/TRYPTOPHAN AMINOTRANSFERASE RELATED (TAR)) and flavin monooxygenases (YUCCA). Both enzyme classes appear to be encoded by small gene families in Arabidopsis, consisting of 5 and 11 members, respectively. We recently showed that it is possible to induce transcript accumulation of 2 YUCCA genes, YUC8 and YUC9, by methyl jasmonate treatment. Both gene products were demonstrated to contribute to auxin biosynthesis in planta.1 Here we report that the overexpression of YUC8 as well as YUC9 led to strong lignification of plant aerial tissues. Furthermore, new evidence indicates that this abnormally strong secondary growth is linked to increased levels of ethylene production.

Auxin is associated with the regulation of an incredible wealth of different processes related to plant growth and development. These processes range from the promotion of cell elongation, induction of cell division activity of cambia, and initiation of adventitious and lateral roots, to contributions to photo- and gravitropic responses and fruit development.2 Accumulation of auxin in plants is known to produce a number of auxin-related phenotypes. Elevated auxin levels translate into, for example, increased apical dominance, elongated hypocotyls and petioles, as well as epinastic cotyledons.3,4 Very recently, we were able to uncover a direct and intimate crosstalk between jasmonate signaling and auxin homeostasis.1 With the conducted experiments, we were able to demonstrate that the transcription of 2 YUCCA genes, YUC8 and YUC9, is significantly induced by oxylipins. Jasmonate, either applied exogenously or produced endogenously as a response to wounding, has been shown to substantially trigger YUC8 and YUC9 transcript accumulation. In order to address the question of whether these 2 YUCCA isoenzymes also contribute to auxin biosynthesis in planta, we took a genetic approach and generated several independent 35S-driven gain-of-function lines for both YUC8 and YUC9 (YUC8ox and YUC9ox).1 When we analyzed the chemotype and phenotype of the overexpression lines, we discovered significantly increased free IAA levels in the overexpressors when compared with wild-type plants, as well as clearly auxin-related phenotypes. Consistent with other auxin overproducer lines, YUC8ox and YUC9ox are characterized by elongated hypocotyls and petioles, as well as epinastically growing cotyledons. In addition, the lines showed longer and narrower leaf blades than wild-type Arabidopsis. However, not all of the observed phenotypes could be directly attributed to the significantly increased amount of endogenous IAA that was detected. Intriguingly, some of the strong overexpression lines showed aberrant secondary growth of the stem (Fig. 1). Indeed, in some cases the secondary growth was so pronounced that the stem was no longer able to accommodate the increased growth, and the epidermis cracked open from the bottom to the top (Fig. 1A-D).
As can be estimated from Figure 1E, the stem diameter of the overexpressor lines reached about twice that of wild-type stems. In fact, this can be attributed to more pronounced cell expansion growth, particularly of the cortex, vascular bundle, and parenchyma cells, in the overexpressor lines, which confirms the previous finding that the transient overexpression of both YUC8 and YUC9 results in an induction of cell expansion by 2- to 2.5-fold.1 In addition, we observed that the overexpressors lost much less of their size when slowly dried at ambient temperatures (Fig. 1F). Comparing weight loss after drying, we did not observe significant differences between the overexpressor lines and the wild type, indicating that the examined lines lost the same amount of water by evaporation. Remarkably, the fresh weight of the YUC8ox and YUC9ox lines (2.51 ± 0.05 g) was only slightly higher than that of the wild type (2.24 ± 0.03 g). Nevertheless, shrinkage of the YUC8ox and YUC9ox lines was apparently reduced relative to wild-type Arabidopsis. The mechanical strength and plasticity of plants is to a great extent attributable to their cell walls. Cell walls are multilayered structures unique to plants that surround every cell, providing sufficient rigidity to counteract the turgor pressure.5 The biosynthesis of this extracellular matrix, which constitutes the boundary of the cell, is a highly complex process that requires multiple coordinated enzymatic reaction steps.6 With respect to the described findings, we hypothesized that increased deposition of stabilizing biopolymers, e.g., lignin and embedded cellulosic compounds, in the secondary cell walls is likely to be the causal link to the abnormal phenotype of the overexpression lines relative to the wild type. Staining of various plant parts using phloroglucinol in the presence of alcohol and HCl 7, in fact, confirmed this hypothesis and revealed a substantially increased degree of lignification in the YUC8ox and YUC9ox lines in comparison to the wild-type controls (Fig. 1G-I). It is widely accepted that both plant development and plant stress responses can be regulated by the mutual interplay and congruence of plant hormones. Various examples of crosstalk between phytohormones and the underlying molecular bases can be found in the literature.
[8][9][10][11] For example, secondary plant growth can be stimulated by the interplay of auxin and strigolactone signaling, which seems to steer cambial activity.12 However, ethylene signaling is also assumed to contribute to radial (horizontal) growth by modulating vascular cell division.13 The stimulation of ethylene emission from plant tissues by exogenously applied as well as endogenously produced auxin is a well-established phenomenon.14,15 Ethylene, for its part, also affects auxin biosynthesis and transport-dependent local auxin distribution.16 Remarkably, recent findings have significantly augmented the current insight into this intimate relationship between auxin and ethylene biosynthesis. Auxin and ethylene production are metabolically linked by a pyridoxal-phosphate-dependent aminotransferase, REVERSAL OF sav3 PHENOTYPE (VAS1), which catalyzes the transamination of IPA to l-tryptophan and 2-oxo-4-methylthiobutyric acid, specifically using methionine as the amino donor. Given that vas1 mutants accumulate more auxin and 1-aminocyclopropane-1-carboxylate (ACC) under normal growth conditions, VAS1 seemingly controls the amounts of these 2 plant hormones.17 Overall, it seems as if there is a circle of mutual activation and a tight metabolic link between the 2 plant hormonal pathways. Most relevant for our experiments, however, was the discovery that overproduction of IAA in transgenic plants induces the concomitant overproduction of ethylene.15 To examine whether YUC8- and YUC9-mediated IAA overproduction also affects ethylene biosynthesis, we tested primary root growth inhibition by an ethylene biosynthesis inhibitor, 2-aminoisobutyric acid (AIB), in the wild type and in the YUC8ox and YUC9ox lines (Fig. 2). Although higher concentrations of AIB ultimately suppressed primary root growth in wild-type Arabidopsis as well as in YUC8ox and YUC9ox, the 2 overexpression lines clearly showed hyposensitivity toward AIB, pointing toward an increased resistance that is mediated by the stimulated formation of ethylene. This result was confirmed by quantitative transcript analyses, which highlighted transcript accumulation of a number of ethylene biosynthesis- and signaling-related genes in YUC9ox (Table 1). Hence, we conclude that the YUC8 and YUC9 overexpression-mediated overproduction of IAA, in turn, triggers the induction of ethylene production and signaling, which in combination stimulate secondary growth and the deposition of lignin into the cell walls. Bearing in mind that jasmonates are capable of inducing the accumulation of YUC8 and YUC9 transcripts,1 and are thus probably the initiator of a jasmonate/auxin/ethylene cascade, there is already evidence that the coaction of ethylene and jasmonate is integrated through the ethylene-stabilized transcription factors EIN3 and EIL1, which physically interact with JASMONATE-ZIM-domain (JAZ) proteins that repress EIN3/EIL1.18 This leads to the emergence of a picture in which all 3 plant hormone signaling pathways may contribute to the stimulation of lignification in the overexpression lines. There is mounting evidence that changes in lignification are linked to various plant hormone actions.19 For instance, the cellulose synthase mutant cev1 and the V-type ATPase mutant vha-a3 show ectopic lignification alongside increased levels of JA-regulated genes and JA precursors.20,21 In vha-a3, the AtMYB61 transcription factor 22 appears to be misexpressed, and suppression of AtMYB61 can restore the mutant phenotype.
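Hyposensitivity toward AIB, as described above, can be expressed quantitatively as a flatter dose-response of primary root length in the overexpressor lines than in the wild type. The R sketch below is a hedged illustration of one way to test this with a genotype-by-dose interaction term; the data frame, AIB concentrations, and root-length values are entirely hypothetical, since the original report describes the assay but not the exact analysis model.

# Hedged sketch: testing a genotype-dependent AIB dose-response with an interaction term.
set.seed(2)
aib  <- rep(c(0, 10, 50, 100, 500), each = 3 * 10)          # hypothetical AIB doses (uM)
geno <- rep(rep(c("WT", "YUC8ox", "YUC9ox"), each = 10), times = 5)
root <- 40 - 8 * log10(aib + 1) +                           # baseline inhibition with dose
        ifelse(geno == "WT", 0, 3 * log10(aib + 1)) +       # weaker inhibition in the ox lines
        rnorm(length(aib), sd = 2)
d <- data.frame(genotype = factor(geno, levels = c("WT", "YUC8ox", "YUC9ox")),
                aib = aib, root_length_mm = root)
# A significant genotype:log10(aib + 1) interaction indicates that the slope of root-growth
# inhibition differs between genotypes, i.e. hyposensitivity of the overexpressors.
summary(lm(root_length_mm ~ genotype * log10(aib + 1), data = d))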
It is therefore possible that JA signaling is linked to lignin biosynthesis through the transcriptional regulation of AtMYB61. In addition, it has been shown that another MYB-type transcription factor, AtMYB32, is strongly upregulated by IAA,23 while the KNOX gene family transcription factor BREVIPEDICELLUS (BP), which negatively regulates lignin biosynthesis, is effectively repressed by IAA.24 Finally, ethylene has also been associated with the regulation of lignin biosynthesis. Characterization of elp1, a mutant in the chitinase-like protein AtCTL1, revealed that the phenotype of the mutant was due to ectopic deposition of lignin and increased ethylene production.25 A similar lignin deposition phenotype has also been found in mutants of 2 leucine-rich-repeat receptor-like kinases, which seemingly link cell wall biosynthesis with ACC activity in Arabidopsis.26 Consistent with this circumstantial evidence, the transcription of AtMYB61 and AtMYB32 responds to both IAA and methyl jasmonate treatment, while BP additionally responds to ACC treatment in a transient manner (http://jsp.weigelworld.org/expviz/expviz.jsp). Although the underlying molecular mechanisms for the cross-regulation of these transcription factors are largely unknown, it may be reasonable to think that their coordinated expression, orchestrated by the 3 plant hormones, is a possible determinant of lignin formation. How AtCTL1 and the 2 receptor-like kinases feed into this picture remains, however, uncertain. So far, there is only an indication that their mutation translates into altered lignin deposition and a concomitant increase in ethylene and ACC levels, respectively. Changes in the transcript profiles of both YUC8ox and YUC9ox have not yet been assessed, and nothing is known about the differential regulation of lignin synthesis-related genes in these mutants. It will be interesting to determine the molecular basis for the abnormally strong secondary growth and aberrant lignin deposition in YUC8ox and YUC9ox.

Disclosure of Potential Conflicts of Interest
No potential conflicts of interest were disclosed.
Training and development methods and organizational performance: A case of the local government organization in Central Region, Ghana

Abstract
Purpose: To investigate the relationship between training and development (T&D) methods and organizational performance in the local government sector, as well as to contribute to knowledge.
Research Methodology: The study employed a quantitative approach and a correlational design, a census sampling method to sample 215 employees, a structured questionnaire to collect data, multiple linear regression to test the hypotheses, and the Statistical Package for Social Science (SPSS) version 20 to input, transform, and analyze the data.
Result: The results show that training and development (T&D) methods (job orientation, job rotation, workshop & conference, and classroom lectures) had a significant relationship with organizational performance. The findings also revealed that the T&D methods of job orientation, workshop & conference, and classroom lectures have a significant relationship with the quality of service delivery. However, job rotation has no relationship with quality service delivery.
Limitation: The study's main weakness is that it only looked at four training and development methods and their impact on organizational performance and service quality. Another limitation is that it covered only one metropolitan assembly.
Contribution: In this regard, metropolitan and municipal assembly executives and managers should consider employing training and development methods that provide stronger predictions to boost the organization's performance and the delivery of quality services to communities. They should train and develop employees within the organization as soon as political power changes hands, because policy adherence is critical to the organization's performance.

Introduction
People are involved in every element of an organization, and their efforts contribute to the organization's success. Employees are an organization's most valuable asset and need to be trained and developed based on organizational and individual needs. As a result of the ever-changing organizational environment, lifelong learning is a vital coping mechanism. Organizational environments vary over time, necessitating ongoing training and development of staff skills and capacities in order to increase work performance and growth and the ability to adapt quickly to changing economic circumstances, so that the institution remains competitive (Amin, Saeed, & Lodhi, 2013). Staff training and development are essential for keeping up with current events and practices (Tsaur & Lin, 2004). Effective training programs, according to Elnaga and Imra (2013), are essential to developing the desired knowledge, skills, and capacities of employees in order for them to perform well on the job. Training and development, according to Abiodun (1999), is the systematic acquisition of knowledge, skills, and attitudes by employees in order for them to perform satisfactorily on a task or job. Training and development is the process of altering an organization's personnel through planned and unplanned learning in order for the institution to gain and sustain a competitive advantage (Harrison, 2000). Organizations provide training to prepare workers to accomplish their jobs as desired, according to Elnaga and Imra (2013), in order to maximize their employees' potential.
It has been observed that most institutions, through long-term planning, invest in developing new skills in their workforce, enabling them to cope with the uncertain conditions they may face in the future, thereby improving employee performance through higher levels of motivation and commitment; when employees recognize their organization's interest in them through training programs, they are more likely to participate. Training is essential for establishing a flexible, motivated, and devoted team (Amin, Saeed, Lodhi, Mizna, Simra, & Iqbal, Tehreem, 2013). Recognizing the importance of training and development, the local government sector needs to host training and development programs for its employees on a regular basis. Because the sector is known to have a fairly fluid workforce and is organized as a political institution, with the majority of the workforce either appointed by the government or voted in by electorates, training and development programs for staff in the local government sector are extremely important. Because each administration in Ghana is given a four-year tenure by the 1992 republican constitution, the majority of these workers are replaced or retained after that time. As a result, employee training is required for an institution with such a fluid and dynamic workforce, as newly appointed and existing personnel must undergo on-the-job or off-the-job training. According to Agu (2002), local government training and development in Nigeria are in the deep woods and plagued by enormous problems. The ramifications of the nation's local governments' massive neglect of effective training and development programs are clear. It is critical for local government sector institutions to provide systematic and adaptable training and career development programs for their personnel. By focusing on the specific skills required for current needs, training has been shown to assist employees in their current occupations and help them reach current performance criteria. Excessive political appointment within local governing institutions has been blamed for consistently poor performance in delivering fundamental quality services to communities as a whole. Yang (2010) believes that the inefficient provision of basic public services by local governments has led to an inefficient supply of basic services to the masses and to the deployment of 'fellow citizens' to positions for which they were not qualified within the local government structure. According to Mwesigwa, Bogere, and Anastassova (2021), both elected and appointed officials within these structures may not accurately represent the outcome of the political system. This could be because the electorate has few opportunities to interact with their leaders in order to make proposals and receive feedback on policy outcomes. The management and performance of local governments are thus issues of both timely and enduring importance to researchers, policy-makers, and citizens alike (Sharpe, 1970). According to Mwesigwa (2021), Ugandan public organizations lack adequate skill in managing community expectations and community pressures under dynamic conditions. According to Heathfield (2012), providing correct staff training and development at the right time brings significant benefits to the institution and improves performance. Training and development are important factors in determining an organization's optimal performance.
It is a good policy for a local government organization to invest in the training and development of employees' skills, knowledge, and abilities to boost individual and, ultimately, organizational performance. Prior studies in the area of training and development (T&D) and organizational performance have focused on sectors such as banking (Gunu, Oni, Tsado & Ajayi, 2013; Oladimeji & Olanrewaju, 2016; Engetou, 2017; Ojoh & Okoh, 2015; Falola, Osibanjo & Ojo, 2014; Emeti, 2015), oil and gas (Raza, 2015), and pharmaceuticals (Hafeez & Akbar, 2015). Furthermore, a few Ghanaian researchers have looked at employee performance in industries such as mining (Ali, 2014), banking (Appiah, 2010; Agyei, 2014), security (Okyireh & Okyireh, 2016), insurance (Hogarh, 2012; Ofobruku & Nwakoby, 2015), and communication (Tetteh, Sheng, Yong, Narh & Sackitey, 2017). However, studies on training and development and organizational performance in the local government sector appear to be rare. Furthermore, there appears to be a dearth of literature on the relationship between training and development methods and both organizational performance and quality service delivery in the Ghanaian setting. To address these gaps, the study looked into the relationship between training and development (T&D) methods and organizational performance in the local government sector, with the goal of adding to the body of knowledge.

Objectives of the Study
The following research objectives steer this study: 1. To investigate the relationship between training and development (T&D) methods and organizational performance in the local government sector. 2. To investigate the relationship between training and development (T&D) methods and quality service delivery in the local government sector.

Literature Review
Human Capital Theory
Many studies have demonstrated that training and development relate to performance, job satisfaction, service delivery, and similar outcomes, and such work is embedded in human capital theory. Human capital theory, which states that training and development have a favorable impact on employee performance, innovation, and careers, is one of the theories proposed in the literature. The theory of human capital is the brainchild of Adam Smith (Schuller & Field, 1998). According to the literature, Schultz further developed it in 1961 (Schultz, 1981). It is hypothesized that investment in training, education, and skills is an important component of economic growth, alongside investment in physical plant and equipment (Schuller & Field, 1998). In addition, Bohlander, Snell, and Sherman (2001) describe human capital as the knowledge, skills, and capacity of people that are of economic value to an organization. Schultz (1993) described the term "human capital" as an essential factor in the improvement of a company's assets and staff, both to raise productivity and to maintain competitive advantage; that is, it is a tool used for improving productivity so that the organization can sustain its competitiveness. Boadu, Fokuo-Dwomo, Boakye, and Kwaning (2014) add that investment in training and development thus acts as a catalyst for improved development performance among district assemblies. Becker (1993) contends, however, that there are different sorts of capital, including education, development, and computer training. Human capital theory encourages the provision of education or training to workers, which in turn boosts their productivity and earnings (Becker, 1964).
Some academics (Levin & Kelley, 1994; Thurow, 1975) have vehemently criticized human capital theory, arguing that economists and other social scientists have overestimated the payoffs from increased training and development and overlooked additional contributors to improved productivity, such as training, contracting conditions, and management practices. In this view, productivity is mainly a feature of jobs rather than of a worker's talent, because trained workers are easier to train further. However, human capital theory has proven durable and remains the basic theoretical structure used to understand investment in human capital, from both a personal and a company standpoint (Bassi & McMurrer, 2006). The theory was significant to this study because human capital is a primary driving component of an organization's activities and must be trained and developed to achieve excellent performance and accomplish organizational goals. The theory is well known in the literature as one of the best theories for determining the influence of independent variables on their resulting dependent variables.

Concepts of Training and Development
Training and development is a dynamic, flexible, and complicated concept with no universally accepted definition (Cloete & Mokgoro, 1995). This has provided researchers and scholars with the opportunity to conceptualize training and development in relation to the type of organization under investigation or the study area.

Training
Training is the process of equipping new or current personnel, especially in the public and political service, with the knowledge and expertise to fulfill their organizational objectives based on public policy direction. Cheminais, Bayat, van der Waldt, and Fox (1998) define training as "planned and purposeful activities that improve knowledge, skills, insight, attitude, behavior, values, and working and thinking habits of public servants or prospective public servants in order for them to perform designated or intended tasks more efficiently." Training is defined as the systematic development of skills, standards, concepts, and attitudes leading to better work performance (Goldstein, 1993). Employee training also involves systematic planning and behavioral change through instructional events, programs, and instructions that help people to get the necessary information, expertise, and skills to operate successfully (Armstrong, 2006). Buckley and Caple (2000) describe training as an endeavor to change or build knowledge, skills, or attitudes through the learning experience, in order to attain efficient performance in a particular activity or set of activities.

Development
Development is the act of increasing and gaining the knowledge needed to carry out specific tasks or responsibilities in a position. According to Tailor (2000), development is the process of broadening people's options and increasing their level of well-being. It is a comprehensive, integrated process in which economic and political forces interact in dynamic and diverse ways to improve the lives and opportunities of the poorest people. Development is the continuation of education and training in order to gain the experience, skills, and attitude necessary to be appointed to the highest positions (Cheminais, Bayat, van der Waldt, & Fox, 1998). It is only through development that individuals or groups acquire knowledge, abilities, values, and conduct from learning experiences of every sort.
Development is oriented more toward the profession than the current job, with an emphasis on the individual's long-term development and potential. Employee development can be pursued in a variety of ways, according to Katcher and Snyder (2003), including training, evaluation, education, and even feedback received from the communities they serve.

Difference Between Training and Development
Although much of the literature draws no evident distinction between training and development, some scholars have devoted effort to addressing this gap. According to Olusoji, Adedayo, and Akaighe (2017), the difference is that training provides new employees with a learning process in which they acquire the key skills needed to get the job done, while development is the process by which existing employees enhance their skills. Training is a short-term capacity-enhancement procedure that takes from 3 to 6 months, whereas development is a continuous process that is typically carried out over the long term; the focus of training is on skills and knowledge development for present jobs. Training of employees is an essential aspect of human capital production (Tzafrir, 2005). Training is specifically intended to ensure that the employee continues to deliver the greatest results in a favorable way. Training focuses on identifying, ensuring, and enabling individuals to carry out their existing employment through planned learning and core competencies (Buckley & Caple, 2000). Employees trained for personal and organizational purposes become more efficient. The effects of training on employee performance often promote growth in both the employee and the organization (Katcher & Snyder, 2003). Kibibi (2011) has split training and development into two primary groups, namely on-the-job and off-the-job. Ojoh and Okoh (2016) likewise emphasized that two main types of training normally exist, namely on-the-job and off-the-job, but Lisk (1996) identifies macro and micro training and development. Both on-the-job and off-the-job training, according to Amoah-Mensah and Darkwa (2016), are generic terms for training and development classifications rather than training and development approaches per se: an organization can choose to train its employees on the job or off the job; on-the-job training takes place inside (internal to) the organization, whilst off-the-job training takes place outside (externally). This study posits that, as previously stated, training and development differ based on the sort of organization providing the training, and the type of training and development methodology to be employed is determined by the needs of the individual and the organization.

Off-the-Job Training
Formal training may also include a day-release type of training, which allows employees to take one or two working days off weekly or monthly to attend formal lectures (Ojoh & Okoh, 2016; Ali, 2014). This sort of training takes place outside of the workplace, according to Ojoh and Okoh; however, attempts are made in some cases to simulate the actual working environment, and training outside the workplace can be classroom-based, with seminars, lectures, and films. It may involve vestibule training, in which an employee works on the actual equipment and materials but in a room other than the one in which he or she usually works. The reason is to minimize the pressures of the workplace that could limit learning.
The method enables the use of a wider range of training activities, including apprenticeships, lectures, assistantships, internships, special studies, films, televised conferences or discussions, case studies, role playing, simulation, programmed instruction, and laboratory training (Cole, 2002).

Job Orientation
Job orientation is the introduction of new employees into their positions by teaching them the skills and knowledge needed in their present roles. Orientation is provided to newly selected personnel soon after they have been employed. New employees also need orientation to enable them to build confidence and perform better in order to meet the expectations set for them. The duration may vary depending on the situation, from a few days to a few weeks (Ali, 2014). Milkovich and Boudreau (2004) view orientation as an ongoing process that takes time to complete. Organizations orient their new employees for three reasons. First, orientation allows the new employee to learn about work practices. Second, the new employee is oriented to relationships with other employees. Finally, it makes the new employee feel that he or she is a member of the organization. Kumar and Siddika (2017) describe orientation as another approach to training and development, which comprises familiarizing new employees with, and training them for, their new role in an organization; in this procedure, they are exposed to the nature of their new employment. A successful orientation offers benefits both to employees and to the institution as a whole (Richards, 2017). Orientation is a key means of reducing turnover and hence of reducing the costs of running a firm (Klein & Weaver, 2000).

Job Rotation
Job rotation is based on the knowledge, skills, and abilities of an individual, according to Jorgensen, Davis, Kotowski, Aedla, and Dunning (2005). It is about training new employees, enabling them to get to know the work and the entire business as far as values, rules, and regulations are concerned (Olaniyan & Ojo, 2008). Tuei and Saina (2015) advance that job rotation is when the trainee moves laterally from one task to another, which affords the employee the opportunity to acquire skills; job rotation thereby makes the trainee a multi-skilled employee and increases performance. This strategy is excellent for increasing an individual's experience of organizational activities, turning specialists into generalists, developing personal know-how, enabling employees to gain new information, and encouraging fresh ideas (Ali, 2014).

On-the-Job Training
According to Ojoh and Okoh (2016), this sort of training takes place when the supervisor or senior officer takes time out of his or her schedule to coach or instruct a subordinate. It could take the form of job rotation, in which employees are allowed to move from one unit or department to another, working on a succession of jobs and gaining a variety of abilities. Job rotation is especially common in service businesses such as banks and insurance companies. Kibibi (2011) states that on-the-job techniques are procedures used in the workplace, while employees are working, to learn specific skills.
This strategy is essential for improving the understanding of personnel who have insufficient academic qualifications for job performance. It is also viewed as training within the context of the organization's policy. Firms train employees using four main on-the-job techniques: orientation, job instruction, job rotation, and coaching (Ali, 2014; Laing, 2009). On-the-job training can be a continuous procedure that does not significantly impede routine business operations.

Classroom Lecture
The classroom lecture is a formal technique through which employees gain the skills and knowledge essential to carry out future work or tasks. It is conducted away from the job and mainly takes place in a classroom, where specialists and academics can impart knowledge and experience based on research. Sutherland (1976) remarked that a lecture is the process by which a trainer teaches or orally disseminates information or ideas to trainees, who have little or no engagement. The knowledge can come from the trainer's own lectures, investigations, and experiences. According to Ahammad (2013), this approach is used to convey a great deal of knowledge to many people, particularly when the subject is comprehensive. Ojoh and Okoh (2015) stressed that the training circumstances are traditionally controlled by the trainer. Effectiveness can be assessed against the objectives or by the knowledge gained, and the method is inexpensive and versatile in application with respect to time and group size. The lecture on its own, however, will do little to change attitudes and makes little contribution to capacity building. Some lectures may be boring, while others may be entertaining without being instructive.

Conferences and Workshops
This sort of training is generally done within one or two weeks outside of the working environment, which allows employees to learn from experts, professionals, and consultants. Regarding the conference, Saakshi (2005) indicated that it is a strategy used to help employees solve problems. It is an informational working session in which small groups of individuals meet for a short period to discuss a particular area of concern. Workshops are a kind of training in which skills are gained outside the organization, with trainees taken away from their working environments, and the skills gained can be put to use immediately at the workstation. In this format, the trainer lectures on a specific subject and handles questions and discussions. The leader of the conference must have the skills needed to guide the debate meaningfully without losing sight of the subject or topic (Kibibi, 2011). This form of training and development is characterized as an approach involving presentations to a large audience by more than one person. It is more affordable because a group of employees is instructed simultaneously, in large audiences, on a certain topic. However, a drawback of these methods is that it is not easy for every individual trainee to understand the subject as a whole, because not all trainees learn at the same pace during the training sessions, and trainers tend to focus on particular trainees who appear to understand the subject faster than others, at the expense of the rest (Kumar & Siddika, 2017).

Training and Development Process
A variety of training literature (Cuming, 1980; Hanif, 2013; Benedicta, 2010) has traditionally indicated that training involves a systematic approach by the organization, generally following a series of training policies supported by the identification of training needs, the design of training, and the evaluation of and feedback on training programs.
The training and development process refers to the procedures or stages through which personnel and organizational needs are addressed so that the organization can attain its desired objectives. According to Desimone, Werner, and Harris (2002), the training and development process involves four phases or stages: training needs assessment, training design, training implementation, and monitoring and evaluation of training. These are carried out in a sequential manner in order to achieve the desired result.

Training and Development Needs Assessment
A training need is defined by Cole (2002) as any shortfall in knowledge, understanding, or attitude on the part of the employee in relation to the job or to expectations of corporate change. Barbazette (2006) considers the assessment of training needs to be the process of collecting information in order to educate personnel to meet organizational demands. Onah (2008) indicated that, in order to conduct the training and development process, it is necessary to analyze several types of information within an organization, including organization-level data (management, products/services offered, communication channels, and personnel requirements) and occupational standards (i.e., standards agreed nationally for different levels of responsibility). Noe (2013) points out that the assessment of training needs concerns the process of finding out whether or not training is required; three analyses are conducted: person (personnel), organizational, and task (job) analyses. The training needs analysis emphasized by Armstrong (2003) is likewise divided into three levels: first, the needs of the organization as a whole (corporate needs); second, the needs of departments, teams, occupations, or groups within the organization; and third, the individual needs of employees. McConnell (2003) argues that an examination of training needs is necessary when changes are made to a system or to work, new technologies are introduced, new standards are implemented, work or performance quality declines, skills and knowledge are lacking, or incentives are lacking.

Design of the Training and Development
Details of the training program, such as the identity of the trainer, the methodology, skills, materials, and so on, are defined at this stage. Training and development design, according to Noe (2013), has to do with the components or activities included in the training program to maximize the likelihood of a high degree of knowledge transfer. Training design focuses on the definition and identification of the goals and scope and the methods and media to follow. The design of training and development should include the opinions of management, supervisors, and employees and their full participation (Brown & Harvey, 2000). Zaccarelli (1997) explains the training planning process as follows: the participants, the trainers, the methodologies and procedures to be employed, the training level, and the venue should all be discussed. The training plan provides guidance to the trainer and trainee so that the program can be carried out successfully. It covers those who participate in the training, the person who administers the program, the resources necessary, and the content to be followed. The training lesson is developed once the plan for the program has been determined.

Implementation of Training and Development
Training and development implementation mostly involves putting the plan or design into effect. The implementation of training and development is the start of the training and development program, according to Hailemichael (2014).
Implementation requires reporting and readiness on the part of the organization, with trainers prepared to deliver and trainees ready to learn. The program should also start on time, as agreed. Resources such as money, vehicles, instructional aids, and learning materials are provided and ready for use. Facilities such as classrooms, equipment, lighting systems, the physical surroundings, and the overall environment should also be learning-friendly. This means that each training program must be planned, implemented systematically, and adapted to enhance performance and productivity in order to achieve its objectives (Armstrong, 2008).

Monitoring and Evaluation of Training and Development
Evaluation and monitoring are processes used to measure the efficacy and efficiency of a training and development program. Evaluation is therefore the means of measuring a training program's effectiveness. Assessment is vital to determine whether the training program is effective and whether its objectives are accomplished. This is a crucial stage that evaluates not only the quality of the training provided but also the training plan, to see whether future revisions would improve its results. Training assessment may take a number of forms, including questioning, observation, questionnaires, and tests, according to Beardwell and Holden (1993). It is designed to capture longer-term and broader impact, providing an answer as to how much of the training was retained and used by the trainee at the workplace after a period of time. The duration may be several weeks, months, or even longer. Evaluation promotes training by providing feedback for trainers, participants, and employers, and it gauges the competence level of employees (Pynes, 2004). Training programs can be evaluated at four main levels, according to Kirkpatrick and Kirkpatrick (2006). The first level measures the reactions of participants to the training programme; Kirkpatrick and Kirkpatrick refer to this step as customer satisfaction. The second level assesses whether learning occurred as a result of the training: have the participants gained the skills or information that are part of the goals? The third level assesses the extent to which the participants who attended the training program show behavioral change on the job. The use of performance assessments aimed at gauging the new competencies is another strategy at this level of assessment. The fourth level tries to measure the ultimate results achieved from staff participation in the training. Training is ideally associated with the performance of employees, and at this level the assessment aims to gauge the effect of the training on the organization. Satisfactory end-points may include fewer supervisor complaints, higher staff productivity, reduced client complaints, a reduction in job accidents, higher funding amounts, better board relationships, and fewer discriminatory workplace situations. A final step is to examine whether the benefits of training outweigh its direct and indirect costs.

Local Government Organization
In Ghana, local government organizations are known as Metropolitan, Municipal, and District Assemblies (MMDAs). They are underpinned by Ghana's Republican Constitution of 1992. The Constitution states unequivocally in Chapter 20, Article 240 (1), that "Ghana shall have a system of local government and administration that shall, as far as practicable, be decentralized."
There are several departments that implement decisions made by the general assembly of each local government. Accounting, auditing, planning, and engineering are just a few of the functions performed by officials in this decentralized organization (Crawford, 2004). Local government organizations provide important services to citizens and must live up to the public's expectations by providing high-quality, efficient services (Buccus, Hemson, Hicks & Piper, 2007). The goal is to improve people's quality of life by providing essential quality services and creating an enabling environment to ensure the organization's growth. Stakeholders, civil society organizations, and individuals demand accountability and openness from local government institutions that receive taxpayer funds, with an emphasis on efficiency (Brusca & Montesinos, 2016). The argument that public sector organizations frequently have numerous goals, as a result of which they are connected with many dimensions of performance, has received a lot of attention in the literature (Boyne, 2010; Halligan, Bouckaert, & Dooren, 2010).

Performance

Performance denotes progress toward goal achievement and is intended to strengthen municipalities' abilities to be more responsive, effective, and sensitive to constituent demands, while also being efficient in utilizing the limited available resources to address those demands (Putman, 1993; Turk, 2016). Performance refers to how well an organization achieves its policy objectives or other intended effects. Performance signifies effectiveness (equity, empathy, ecology), efficiency, economy, and ethics in public administration (Doherty & Horne, 2002). It is also argued that in order to define organizational performance, conceptual frameworks of performance must be properly specified so that all essential stakeholders can agree on what performance comprises (Dess & Robinson, 1984).

Organizational Performance

The definition of organizational effectiveness or performance is a contentious issue (Mitchell, 2012). It is always conceptualized from the perspective of the researcher, the research area, the organizational perspective, and other scholars in the field of study. Organizational performance is commonly defined as the extent to which a corporation achieves its goals (Miles, 1980; Price, 1972). Organizational performance has also been used to determine the extent to which organizations, considered as social systems, met their goals (Georgopoulos & Tannenbaum, 1957). Organizational performance is concerned with the extent to which public and private agencies are able to carry out their core mandates by performing appropriate administrative and operational functions in ways that seek to achieve both short- and long-term objectives (Kim, 2005). Organizational performance is frequently a feature of internal control systems that involve periodic evaluation of performance standards and how operational objectives are met (Kloot & Martin, 2000). Organizational performance is defined by Boyne (2003) as the 3Es: economy, efficiency, and effectiveness of public services. The cost of procuring specific service inputs (facilities, staff, and equipment) of a given quality is referred to as the economy. Efficiency is defined as the technical cost per unit of output, as well as the responsiveness of services to public preferences, which results in measures of user satisfaction (Jackson, 1982). The actual achievement of institutional service objectives is referred to as effectiveness (Boyne, 2002).
Empirical Review and Hypotheses

The impact of training and development on organizational performance was researched by Khan, Khan, and Khan (2011). Using a sample size of 100, questionnaires, and descriptive statistics, the study indicated that training and development have a favourable effect on organizational performance. Gunu, Oni, Tsado, and Ajayi (2013) examined training and development as an instrument for organizational performance in a case study of selected banks in Nigeria; on the basis of a survey design, questionnaires, descriptive statistics, and Pearson product-moment correlations, the study showed a beneficial association between training and development and banks' performance in Nigeria. Ampong, Nkuah, and Okyere (2020) asked whether training and development are important in providing quality service at Sinapi Aba Trust (SAT), Kumasi; using a non-experimental design with interviews and questionnaires and a sample of 60, they found that training and development enhance employee performance and have a positive impact on the SAT delivery system. Altarawneh (2005) explores the existing training and development (T&D) practices, policies, and roles in Jordanian banking organizations. The study demonstrates that in the majority of the businesses there is a lack of systematic assessment of employee needs and of an effective evaluation process. A multi-method approach was adopted with a purposive sample of 15 senior managers and 38 HRM managers. T&D enhances skills, knowledge, attitudes, and behaviour but does not raise the commitment and satisfaction of employees. T&D does not affect the organizations in terms of profit, innovation and change, sales, absenteeism, turnover, job satisfaction, and cost savings, but it promotes customer satisfaction, quality service, and productivity. The study by Mpofu and Hlatywayo (2015) examined the association between staff quality training and development and service delivery in a selected township. A quantitative approach was used, employing a questionnaire, a sample of 150 employees, and ANOVA for analysis. The results showed the necessity for efficient training and development methods and processes for employees to obtain better employee results, improving the provision of fundamental services in the communities. Gupta (2017) investigated the relationship between training, development, and organizational performance (the case of the Commercial Bank of Ethiopia), using stratified simple random sampling of 125 staff, questionnaires, and descriptive and inferential analyses, based on a cross-sectional, quantitative approach. The study concluded that training and development influence the performance of the organization. Based on the above, it is hypothesized that training and development (T&D) methods influence organizational performance (H1) and quality service delivery (H2).

This study's conceptual framework is anchored in human capital theory. According to this theory, education, training, and development give workers useful knowledge and skills, which in turn boost their productivity, service delivery, and incomes. Human capital is the transfer of training to the welfare of an organization and its staff so that the organization can carry out its work successfully. The theory indicates that training and development methods (job orientation, job rotation, workshop & conference, and classroom lectures) are hypothesized to influence organizational performance and quality service delivery, as demonstrated in the framework.

Research Design

The study took place at the Cape Coast Metropolitan Assembly, which has thirteen (13) departments.
They are central administration, internal audit, finance/revenue, social welfare/community development, urban roads, budget, transport, legal, waste management/environment, public works, environmental health, planning, and procurement & stores. The population for the study comprised 215 employees across the 13 departments of the Cape Coast Metropolitan Assembly. These thirteen departments are where the work processes for delivering the objectives of the organization take place. The population includes employees of the Cape Coast Metropolitan Assembly, and the entire population was used for this study. The quantitative research approach was employed to allow the researcher to achieve objective and reasonable findings. The quantitative method allows researchers to employ mathematical approaches to achieve objective and logical deductions (Creswell, 2009). Creswell (2014) further states that the quantitative research approach is utilized to examine objective ideas by investigating the relationships between variables; the variables can, in turn, be measured with instruments so that numerical data can be examined using statistical procedures. A descriptive, quantitative correlational design was used, as both Creswell (2008) and Lappe (2000) argue that in correlational studies results can be anticipated and relationships between variables can be explained. There were 215 people in the survey from the 13 departments, and the method of census sampling was used. Prasad (2015) stressed that the census approach ensures a high level of precision and a practical description of the phenomenon with no element of bias, since every element of the population is taken into account rather than selected by probability. A census is one way of using the complete population as a sample, according to Singh and Masuku (2014). A census is more appealing for small populations, despite the fact that financial considerations make it impracticable for large populations. A census removes sampling error and provides information on all members of the population. In addition, some costs, such as questionnaire design and development of the sampling frame, are fixed, and in small populations virtually the entire population would have to be sampled anyway to attain a desired level of precision.

Data Collection and Analysis

Structured questionnaires were used as the research instrument since they are a reasonable method for collecting data from the required number of respondents. According to Abawi (2017), a questionnaire is appropriate for quantitative data collection because it allows for subjective and objective data collection as well as participant privacy protection. The survey instrument was divided into three parts, A, B, and C. Section A deals with personal data (demographics), such as gender, education, department, and experience of the personnel. Section B covers the training and development methods, namely job orientation, job rotation, workshops & conferences, and classroom lectures. Section C dealt with organizational performance and service delivery. The independent variables, the training and development (T&D) methods, were rated on a five-point Likert scale, with 1 indicating least satisfied, 2 indicating less satisfied, 3 indicating satisfied, 4 indicating much satisfied, and 5 indicating most satisfied. Organizational performance and service delivery were also evaluated on a five-point Likert scale, with 1 denoting little influence, 2 denoting less influence, 3 denoting influence, 4 denoting much influence, and 5 denoting the most influence.
Likert-type scales were used because they allow the researcher to calculate Cronbach's alpha coefficient of internal consistency (Gay, Mills, & Airasian, 2006). Instructors in the area reviewed the structured questionnaire for face and content validity, and their feedback was incorporated into the final instrument before administration. The instrument was also examined for internal consistency reliability using the Cronbach's alpha index; the result was 0.75, indicating that the instrument was dependable enough to produce reliable and valid data. The data collection took two months and involved 215 questionnaires. The two hypotheses were tested using multiple linear regression. The Statistical Package for the Social Sciences (SPSS), version 20.0, was used for data entry, data transformation, outputs, and analysis.

Organizational Performance Measures

Every organization must have a performance measurement system because such a system is critical in developing strategic plans and evaluating the success of organizational objectives (Ittner & Larcker, 1998). According to Armstrong (2006) and Hakala (2008), various authors in the field of business and human resource management suggest quality, customer satisfaction, timeliness, absenteeism/tardiness, and achievement of goals as indicators of organizational performance. However, in the perspective of Matei and Elis-Bianca (2013), the results of local government and the public sector are measured in a wide variety of ways, such as efficient services, equity, nondiscriminatory treatment, diversity in management, respect for rights, democracy, fairness, and dignity. Organizations in the public sector use performance management and measurement systems to improve the efficiency and effectiveness of service delivery (Wouter, Bouckaert, & Halligan, 2015). Some organizations in the public sector, such as those in the health and education sectors, use performance measurement systems. Local government organizations, on the other hand, are notorious for opposing the use of performance measurement systems, which research has shown to improve organizations' efficiency, effectiveness, and outcomes (Holzer, Fry, Charbonneau, Van, Wang, & Burnash, 2009). This implies that organizational performance measures are determined by the organization and its stakeholders in order to achieve the organization's targets or goals.

Table 1 shows that of the 215 respondents, 126 (58 percent) were male and 89 (42 percent) were female. Thirteen participants (6 percent) had secondary school education, 19 (9 percent) were technically educated, 51 (24 percent) were diploma graduates, 100 (46 percent) were bachelor's degree holders, and 33 (15 percent) were postgraduates.
Seventy respondents (33 percent) were in central administration, 16 (7 percent) in internal audit, 8 (3 percent) in finance/revenue, 9 (4 percent) in social welfare/community development, 6 (3 percent) in urban roads, 2 (1 percent) in budget, 9 (4 percent) in transport, 2 (1 percent) in legal, 19 (9 percent) in waste management and environment, 25 (12 percent) in public works, 34 (16 percent) in environmental health, 11 (5 percent) in planning, and 4 (2 percent) in procurement and stores. The table further indicated that 36 respondents (17 percent) had been in the establishment for less than one year, 55 (25 percent) had worked for 1-5 years, 65 (30 percent) for 6 to 10 years, and 60 (28 percent) for 11 years or more. The table also showed that 100 respondents (47 percent) were between the ages of 18 and 30 years, 50 (23 percent) between 31 and 40 years, 45 (21 percent) between 41 and 50 years, and 20 (9 percent) were 51 years or above. This shows that male respondents outnumbered female respondents in the organization. Central administration had the most respondents, followed by environmental health and public works. The largest group of respondents had 6 to 10 years of experience. In terms of educational level, bachelor's and postgraduate respondents predominated.

Hypotheses Testing: Multiple Linear Regression Analysis

Regression analysis was conducted to determine the statistical significance of the relationship between the independent variables, the training and development methods (job orientation, job rotation, workshop and conference, and classroom lectures), and the dependent variables, organizational performance and quality service delivery, in the local government sector.

H1: Training and development (T&D) methods influence organizational performance.

Table 3 demonstrates that the training and development methods job orientation (β = .389, P = .000), workshop and conference (β = .192, P = .024), and classroom lecture (β = .236, P = .008) all showed a significant relationship with the dependent variable quality service delivery. The hypothesis that training and development (T&D) methods influence quality service delivery was therefore supported, because the p-values for job orientation, workshop & conference, and classroom lecture were less than the alpha (α) value of 0.05. However, one independent variable, job rotation (β = -.010, P = .887), showed no significant link with quality service delivery and was not supported because its p-value was greater than the alpha (α) value of 0.05.
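As a rough companion to the analysis above, the sketch below shows how the reliability check and the multiple linear regression could be reproduced in Python rather than SPSS. It is only an illustration under stated assumptions: the CSV file, the column names, and the five Likert items used for the alpha calculation are hypothetical and are not taken from the study's data.

```python
# Illustrative sketch (not the study's SPSS workflow): Cronbach's alpha for a
# Likert scale and a multiple regression of performance on the four T&D
# methods. File and column names below are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total)."""
    k = items.shape[1]
    item_variances = items.var(ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

df = pd.read_csv("survey_responses.csv")  # one row per respondent

# Internal consistency of, e.g., the organizational performance scale.
performance_items = df[["perf_q1", "perf_q2", "perf_q3", "perf_q4", "perf_q5"]]
print("Cronbach's alpha:", round(cronbach_alpha(performance_items), 2))

# Multiple linear regression: organizational performance on the four methods.
model = smf.ols(
    "org_performance ~ job_orientation + job_rotation"
    " + workshop_conference + classroom_lecture",
    data=df,
).fit()
print(model.summary())  # unstandardized coefficients, p-values, R-squared
```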
Discussion

The purpose of this research was to investigate the relationship between training and development (T&D) methods and organizational performance in the local government sector. Two hypotheses were formulated based on the two objectives that support the study. The statistical results show that H1 is supported, indicating that training and development (T&D) methods (job orientation, job rotation, workshop & conference, and classroom lecture) have an effect on organizational performance and that training and development methods are strongly related to organizational performance. Furthermore, the data have shown that the training and development methods job orientation, workshop & conference, and classroom lecture have an influence on quality service delivery, and H2 is supported. This means that improvement in these methods will improve organizational performance and quality service delivery. However, there is no relationship between the training and development (T&D) method of job rotation and quality service delivery, and improvement in this method alone will not improve the quality of services in the local government sector. Previous empirical investigations indicate that training and development strategies have an impact on organizational performance. This study's findings support the conclusions of Khan, Khan, and Khan (2011), Gunu, Oni, Tsado, and Ajayi (2013), and Gupta (2017) that training and development approaches influence organizational performance. The findings also support Ampong, Nkuah, and Okyere (2020), Altarawneh (2005), and Mpofu and Hlatywayo (2015) in showing that training and development techniques influence quality service delivery. To ensure effectiveness and efficiency in a politically active organization, emphasis needs to be placed on job orientation, workshops/conferences, and classroom lectures. More attention should also be paid to job orientation, workshop/conference, and classroom lectures, and less to job rotation, in order to attain high-quality services for communities as a whole.

Conclusion

Previous research has underlined the importance of training and development (T&D) methods for employee and organizational performance in other sectors such as banking, oil and gas, and agriculture, but research on T&D in the local government sector appears to be scarce. As a result, it is vital to investigate and establish any issues pertaining to training and development methods as well as organizational performance in the local government sector. The objective of this research was to investigate the relationship between training and development (T&D) methods and organizational performance in the local government sector in the Central Region of Ghana. The sample was drawn from the Cape Coast Metropolitan Assembly in Ghana's Central Region. The study concludes that the training and development (T&D) methods job orientation, job rotation, workshop & conference, and classroom lecture had a significant relationship with organizational performance, and that the T&D methods job orientation, workshop & conference, and classroom lecture had a significant relationship with quality service delivery.

Implication

This study's findings have theoretical and practical implications. The results suggest that when training and development are carried out with the proper investment objectives and are not politically motivated, they will improve employee performance, organizational performance, and quality service delivery. To improve organizational performance and quality service delivery at the Metropolitan Assembly, there is room to improve training and development approaches as well as the delivery process.
Management should ensure that there is sufficient employee engagement in T&D initiatives, through improved content and delivery processes, to improve organizational performance, as the majority of the organization's personnel are political appointees and only a few are non-political appointees. Specific content and delivery mechanisms should be developed for different levels of employees to ensure their readiness to take on tasks and accept change. The management of the organization should guarantee that significant employee participation in T&D initiatives is achieved by improving content and delivery processes to enhance organizational effectiveness, because in a politically charged climate like the study area, where the winner takes all, policy direction and service delivery may otherwise be delayed.

Limitations and Way Forward

For academic research, defining the study's constraints is crucial. As a result, before proceeding, it is critical to acknowledge the current study's limitations. To begin, one of the study's key shortcomings is that it only examined four training and development methods. Another limitation of the study is that it only looked at how training and development improve organizational performance and quality service delivery in one metropolitan assembly. Finally, only a few studies have been conducted on the effects of training and development on organizational performance, and none on the local government sector.
Environmental impact assessment of nanofluids containing mixtures of surfactants and silica nanoparticles

Due to the widespread use of nanoparticles in surfactant-based formulations, their release into the environment and wastewater is unavoidable and poses a toxicity risk to biota and/or wastewater treatment processes. Because of concerns over the environmental impacts of nanofluids, studies of the fate and environmental impacts, hazards, and toxicities of nanoparticles are beginning. However, interactions between nanoparticles and surfactants and the biodegradability of these mixtures have been little studied until now. In this work, the environmental impacts of nanofluids containing mixtures of surfactants and silica nanoparticles were evaluated. The systems studied were hydrophilic silica nanoparticles (sizes 7 and 12 nm), a nonionic surfactant (alkyl polyglucoside), an anionic surfactant (ether carboxylic acid), and mixtures of them. The ultimate aerobic biodegradation and the interfacial and adsorption properties of surfactants, nanoparticles, and mixtures during biodegradation were also evaluated. Ultimate biodegradation was studied below and above the CMCs of the individual surfactants. The interfacial and adsorption properties of surfactant solutions containing nanoparticles were influenced by the addition of silica particles. It was determined that silica nanoparticles reduced the capability of the nonionic surfactant alkyl polyglucoside to decrease the surface tension. Thus, silica NPs promoted a considerable increase in the surfactant CMC, whereas the effect was opposite in the case of the anionic surfactant ether carboxylic acid. Increasing concentrations of surfactant and nanoparticles in the test medium caused decreases in the maximum levels of mineralization reached for both types of surfactants. The presence of silica nanoparticles in the medium reduced the biodegradability of binary mixtures containing nonionic and anionic surfactants, and this effect was more pronounced for larger nanoparticles. These results could be useful in modelling the behaviour of nanofluids in aquatic environments and in selecting appropriate nanofluids containing nanoparticles and surfactants with low environmental impact.

Introduction

Despite the many applications and numerous advantages of surfactants in industrial and economic fields, from an environmental point of view they are considered an important contaminant of aquatic environments, and high volumes of these substances are released daily into this medium. Once used, the surfactants reach treatment plants through urban and industrial wastewater and in certain cases are directly discharged into surface waters. During treatment of wastewater, a high percentage of the surfactants present in the aquatic environment are eliminated by aerobic biodegradation and adsorption onto particulate material, while the metabolites generated and the remaining nondegraded surfactants are dispersed in different environmental compartments. A growing problem is currently arising at wastewater treatment plants due to the mixing of domestic wastewater and hospital and industrial effluent containing significant surfactant loads, and the mixtures may contain surfactants with different properties. The concentrations of surfactants present in domestic wastewater can vary between 1 and 10 mg/L, while they can reach levels of 300 mg/L in industrial wastewater (Siyal et al. 2020).
Sewage treatment plants can lower the concentrations of surfactants to 1-3 mg/L, but surfactants are still present in active sludge, and this leads to significant environmental impacts (Bautista-Toledo et al. 2014). The growing concern in recent years regarding the design of nonpolluting detergents has led to the development and use of more environmentally friendly surfactants, such as the ether carboxylic acid derivatives and alkyl polyglucosides (APG) analysed in this study. The consumption of these surfactants is increasing year by year due to their remarkable environmental profiles. The fast-moving consumer goods industry demands products with low environmental impact, and consumers pay special attention to their components. Recent market reports (Fact.Mr. 2021) predict a growth of 0.6% in APG consumption this year, and through 2031 the market for APG is anticipated to expand at a high CAGR (compound annual growth rate) of close to 8%. This trend of replacing traditional surfactants with new biobased surfactants will continue to increase in the next few years. Therefore, a detailed study of new surfactants in combination with other surfactants and/or nanoparticles is mandatory for predicting their environmental impact. The surfactant ether carboxylic acid is used in cleaning and cosmetic products that come in contact with the skin. These surfactants improve the foaming capacity of surfactant formulations and decrease levels of irritation (Jurado et al. 2011) when compared with other anionic surfactants. Alkyl polyglucosides have great advantages over other classes of surfactants. Their natural origin is the source of their good physical and environmental properties. Moreover, alkyl polyglucosides present high compatibility and foam production, excellent cleaning efficiency, wettability, and ocular and dermatological safety and have been proven to be readily biodegradable under aerobic conditions (Jurado et al. 2002; Zgoła-Grześkowiak et al. 2008). All of this makes them potential components for a variety of domestic and industrial applications (Pantelic and Cuckovic 2014; Tasic-Kostov et al. 2014). The special properties of small particles (1 nm to 1 μm) and the advantages they offer in processes related to catalysis, new materials, or biomedicine have led to increased use in consumer products such as detergents (Ma et al. 2008). Scientific interest in recent years has focused on silica nanoparticles (Slowing et al. 2010; Mamaeva et al. 2013), and several detergent formulations and related formulations containing silica particles have been patented (Orlich et al. 2007). Nanoparticles are present in many formulations and applications due to their physicochemical properties, low toxicity, stability, and capacity for functionalization with a range of polymers and molecules (Ríos et al. 2018a). Silica nanoparticles are frequently mixed with surfactants for oil recovery, nanofluid production, immobilization of enzymes, and removal of dyes, detergents, or foam stabilizers (Maestro et al. 2014; Zhu et al. 2015; Patra et al. 2016; Plomaritis et al. 2019). As with surfactants, particles of colloidal size can accumulate spontaneously at liquid-gas or liquid-liquid interfaces, where they act as stabilizers of emulsions and foams (Eskandar et al. 2011). Simple algorithms have recently been used to estimate potential concentrations of NPs from consumer products.
However, the concentrations estimated by applying these models are significantly lower than the results of many published studies (Tiede et al. 2009). When nanoparticles are used together with surfactants, synergistic effects can be observed in the production of emulsions and stable foams, so it is of great interest to study these interactions from an environmental point of view. Due to the widespread use of nanoparticles in formulations in recent years, their release into the environment and wastewater is unavoidable (Huang et al. 2017) and brings toxicity to biota and/or wastewater treatment processes. Because of increasing concern about the environmental impacts of these materials, studies of the toxicity, hazards, fate, and environmental impact of nanoparticles are beginning (Liu et al. 2014; Skorochod et al. 2016; Ríos et al. 2018b). The interactions between nanoparticles and surfactants, as well as the biodegradability of surfactant mixtures, have not been sufficiently studied until now. A recent paper by Bimová et al. (2021) summarized the possible toxic effects of nanomaterials on the environment and living organisms due to their use in different technologies, environmental sectors, and medicine. However, that work did not include any reference to nanoparticle-surfactant mixtures; in our view, this is consistent with the lack of knowledge in this particular field. Predictability of the joint effects of solutions containing surfactants and nanoparticles is of great interest for adequate assessments of environmental risk due to the growing usage of nanoproducts, nanomaterials, and nanofluids. Biodegradability tests can produce variable results attributable to changes in the inoculum, its origin, and its ratio, which can result in false negatives (Lundgren et al. 2013). In this sense, "positive" results can be considered sufficient evidence of biodegradability, which cannot generally be said of negative results. The OECD 301 series of ready biodegradability tests is considered the standard for screening purposes (OECD 1992). Ready biodegradability tests are conservative in nature and stringent enough to allow the assumption of rapid and complete biodegradation of compounds in aquatic environments (OECD 1992). This work is focused on the biodegradation of anionic and nonionic surfactants, and their relative risk profiles are compared with those of mixtures of surfactants and of surfactants with nanoparticles, given the high production volumes and the massive and dispersed use of surfactant-based formulations. The aerobic biodegradability of nanofluids has been studied: solutions containing silica nanoparticles in combination with an anionic surfactant (ether carboxylic acid) and a nonionic surfactant (alkyl polyglucoside), whose individual environmental impacts have been previously assessed (Jurado et al. 2013; Lechuga et al. 2016; Ríos et al. 2017), as well as mixtures of them. In addition, with the goal of gaining insight into environmental behaviour and other aspects related to interfacial phenomena and cleaning efficiency, the effects of nanoparticles on the surface tensions, interfacial tensions, and critical micellar concentrations (CMCs) of the surfactants and mixtures were measured.

Silica nanoparticles

Two types of hydrophilic silica nanoparticles (Aerosil 380 and Aerosil 200, Evonik Industries AG, Essen, Germany) were used. Table 1 shows the physicochemical properties of the nanoparticles used in this study, including mean diameter (D_m), specific surface area (S), tapped density (d), and pH.
Nanoparticles were observed by TEM using an ultrahigh-resolution scanning transmission electron microscope (S/TEM) with a high-angle annular dark-field imaging (HAADF) system (FEI TITAN G2 60-300). The TEM analyses showed amorphous structures for both nanoparticles, which tended to be spherical in shape (Fig. 1), with Aerosil 380 and Aerosil 200 showing sphericity values of 0.851 and 0.943, respectively.

Surfactants

The nonionic surfactant alkyl polyglucoside (APG) was supplied by Sigma-Aldrich (St. Louis, USA), and the anionic surfactant ether carboxylic acid (EC) was provided by KAO Corporation (Tokyo, Japan). Table 2 summarizes their main characteristics. Surfactant solutions were studied at two concentrations, 25 mg/L and 50 mg/L. A binary mixture of these surfactants in a 1:1 (w/w) proportion was also studied at a total surfactant concentration of 50 mg/L.

Sample preparation

A magnetic stirrer was used to wet the silica particles with the aqueous medium, and then dispersion and deagglomeration were performed by ultrasonication for 30 min (Sonorex RK 106 S, Bandelin, Berlin, Germany) in 1 L of ultrapure water. Subsequently, the surfactant was added to obtain the desired concentration. Ultrasonic cavitation helped to disperse the particles since it generates high shear that breaks particle agglomerates. The interfacial tension, surface tension, and biodegradability of surfactant solutions with silica nanoparticles were assessed as described in the following sections.

Surface and interfacial tension

Surface and interfacial tensions were determined for the nanoparticles and surfactants. Additionally, during the biodegradability tests, surface and interfacial tension were determined over time. Surface tension was measured at 25 °C using the Wilhelmy plate method with a Krüss KSV tensiometer equipped with a 2-cm platinum plate (Krüss GmbH, Hamburg, Germany). The platinum plate was cleaned by heating it to a reddish orange colour with a burner prior to use. Standard deviations were calculated by carrying out successive measurements, resulting in values less than 0.1 mN/m. The interfacial tensions (IFT) between dodecane and the aqueous solutions were determined at 25 °C with a pendant drop tensiometer (KSV CAM 200, KSV Instruments Ltd, Finland). Measurements were performed in triplicate. The critical micellar concentration (CMC) was calculated by plotting the surface tension against the surfactant concentration (0 to 5·10^3 mg/L); the break point in the plot indicates the formation of micelles. CMC results for the anionic and nonionic surfactants are shown in Table 3.

Biodegradation and adsorption tests

Ultimate ready biodegradability tests followed the OECD 301E test guideline (OECD 1992). Ready biodegradability was determined for solutions containing individual surfactants and mixtures of surfactants. Reference assays with a readily biodegradable surfactant (linear alkylbenzene sulfonate) were used as a positive control to check the activity of the microbial population present in the test medium. The biodegradation tests are based on the removal of organic compounds measured as dissolved organic carbon (DOC) (OECD 2001). The inoculum for this test was wastewater: a mixed aerobic culture of faecal microorganisms consisting, for the most part, of total coliforms, faecal coliforms, and enterococci. The microbial activity of the supernatant was determined to be 10^5 to 10^6 CFU/mL. Supernatant microbial sludge was added to the test medium.
Biodegradation was determined from the residual surfactant concentration over time by measuring dissolved organic carbon (DOC) in samples filtered through a 0.45-μm Millipore membrane. In the reference tests, the initial concentration of surfactant was 5 mg/L in all cases, and the average biodegradability reached at the end of the test was 98.34%; this fulfilled the 90% criterion set by the OECD within 5 days for soft standards and thus indicated the validity of the assay. Test surfactant concentrations ranged from 25 to 50 mg/L in order to ensure at least 40 mg ThOD/L (theoretical oxygen demand). The test temperature was maintained at 25 °C ± 1 °C, with deviations of less than 1 °C. All test vessels were stirred constantly with magnetic stir bars at 125 sweeps/min. All glassware was cleaned using a solution of ammonium persulfate in H2SO4 (98%). Adsorption experiments were carried out under the same conditions as the biodegradation tests but in the absence of microorganisms.

Surface and interfacial tension

Surface and interfacial tensions of nanoparticle dispersions in Milli-Q® water were determined in the concentration range 0-1000 mg/L at 25 °C. For both nanoparticle dispersions, the surface and interfacial tensions did not change with concentration, and the values were approximately 44.6 ± 0.4 mN/m for interfacial tension and 71.3 ± 0.6 mN/m for surface tension (Table 3), which are close to those between dodecane and pure water. Therefore, the silica particles were not surface active, and they did not show a preference for the water-air or water-dodecane interface due to their hydrophilic character. These results are consistent with the surface tension data obtained by Ma et al. (2008) and Vatanparast et al. (2018) for Levasil® silica solutions. Anionic and nonionic surfactants decrease the surface tensions of air-water interfaces and the interfacial tensions of liquid-liquid interfaces. As shown by the results in Table 3, the inclusion of negatively charged hydrophilic silica nanoparticles (diameters of 7-12 nm) in surfactant solutions modified their interfacial properties. Because the silica nanoparticles themselves lack surface-active character, the differences in interfacial properties relative to those of the single-surfactant system were attributed to nanoparticle-surfactant interactions (Vatanparast et al. 2018). In the case of the anionic surfactant, the silica nanoparticles increased the surface activity and therefore the efficiency of the EC surfactant due to repulsive coulombic interactions between the surfactant and the nanoparticles, which promoted surfactant adsorption at the air-water interface. Similar results were found by Ma et al. (2008) for systems of SDS with nanoparticles of 13 nm diameter. For solutions containing nanoparticles and the nonionic surfactant APG, the nanoparticles decreased the efficiency and increased the interfacial tensions, and their effects at the liquid-liquid interface were similar to those at the air-water interface. The surface and interfacial tensions of surfactant solutions containing nanoparticles were measured, and CMCs were determined at 25 °C. Solutions containing nanoparticles and the nonionic surfactant (APG) showed a larger CMC than the pure surfactant solution, whereas solutions containing nanoparticles and the anionic surfactant (EC) showed considerably reduced CMCs. Similar results were obtained by Ríos et al. (2018a) for anionic-nonionic surfactant systems and silica nanoparticles.
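To make the break-point procedure used for the CMC concrete, the sketch below fits one straight line on each side of every candidate break in a surface tension versus log-concentration curve, keeps the split with the smallest residual, and reports the intersection of the two lines as the CMC. This is only one possible implementation of the graphical method described above; the numerical values are placeholders, not measurements from this study.

```python
# Illustrative sketch (not the authors' code) of CMC estimation from the
# break point of a surface tension vs. log10(concentration) curve.
# The data values below are placeholders, not data from this study.
import numpy as np

conc = np.array([10, 25, 50, 100, 250, 500, 1000, 2500, 5000])  # mg/L
gamma = np.array([58, 52, 46, 40, 34, 30, 29.5, 29.2, 29.0])    # mN/m
log_c = np.log10(conc)

def cmc_from_breakpoint(log_c, gamma):
    """Fit one line before and one after each candidate break point, keep the
    split with the lowest total squared residual, and return the CMC (mg/L)
    as the intersection of the two fitted lines."""
    best = None
    for i in range(2, len(log_c) - 2):          # need >= 3 points per branch
        m1, b1 = np.polyfit(log_c[:i + 1], gamma[:i + 1], 1)
        m2, b2 = np.polyfit(log_c[i:], gamma[i:], 1)
        res = (np.sum((gamma[:i + 1] - (m1 * log_c[:i + 1] + b1)) ** 2)
               + np.sum((gamma[i:] - (m2 * log_c[i:] + b2)) ** 2))
        if best is None or res < best[0]:
            best = (res, m1, b1, m2, b2)
    _, m1, b1, m2, b2 = best
    x_break = (b2 - b1) / (m1 - m2)             # intersection of the two lines
    return 10 ** x_break

print(f"Estimated CMC: {cmc_from_breakpoint(log_c, gamma):.0f} mg/L")
```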
The modifications of surface and interfacial tensions were the same when using the same surfactant with different nanoparticles, and the CMCs were similar or on the same order of magnitude (Fig. 2). The decrease in CMC with the anionic surfactant was due to repulsive electrostatic forces operating between the particles and the anionic surfactant, which favour diffusion of surfactant molecules towards the interface (Zargartalebi et al. 2015). Silica particles make the Gibbs free energy of adsorption and micellization more negative (Ma et al. 2008), which promotes adsorption and aggregation into micelles. The decrease in CMC was greater for anionic surfactant solutions containing the smaller nanoparticles. In the case of the nonionic surfactant, the effect was the opposite because adsorption and electrostatic forces were much weaker; in this case, the effects on micellization and Gibbs free energy were negligible.

Biodegradability of surfactants and silica nanoparticles

The aerobic biodegradation of ether carboxylic acid and alkyl polyglucoside solutions in combination with 250 mg/L hydrophilic fumed silica nanoparticles was studied. The initial surfactant concentrations in the biodegradability tests were below or above the CMC, at 25 and 50 mg/L. Surfactant adsorption onto materials considerably influences the environmental impact of surfactants, and some authors have studied this phenomenon (van Compernolle et al. 2006). Adsorption tests were carried out in experiments with the anionic and nonionic surfactants and mixtures of these surfactants with A200 and A380 nanoparticles. During the tests, the presence of surfactant was determined by DOC measurements. Additionally, the surface and interfacial tensions were analysed during the entire adsorption experiment (Fig. 3). Abiotic tests were carried out with dilute HgCl2 to check for adsorption, and it was found that the residual concentrations of surfactant remained at approximately 99% during the biodegradation period. The surface and interfacial tensions were approximately constant, which confirmed that there was no adsorption of the surfactants onto the nanoparticles. This was observed independently of the ionic character of the surfactant and of the nanoparticle size. Thus, the adsorption tests presented in this work indicate that the contribution of abiotic processes to the degradation of the surfactant can be neglected, even in the presence of nanoparticles. For solutions with nanoparticles and surfactants, the surface and interfacial tensions were lower for the larger nanoparticles. If the adsorption experiments are compared according to surfactant character, it is observed that the interfacial and surface tensions were lower for the anionic surfactant. Figure 4 shows the time course of biodegradability over the degradation period for solutions of APG and EC with initial surfactant concentrations of 25 mg/L. The tests were carried out on surfactant solutions without nanoparticles (25-50 mg/L surfactant solutions) and on surfactant solutions containing nanoparticles at a concentration of 250 mg/L. The results showed that the effects produced by the nanoparticles were highly dependent on the initial surfactant concentration in the test medium. Generally, the results showed that the presence of nanoparticles reduced primary and final biodegradation. The reduction in biodegradability due to the presence of nanoparticles, when the nanoparticle concentration was increased from 0 to 250 mg/L, was 7.06% for the anionic surfactant and 10.67% for the nonionic surfactant.
Regardless of the presence of nanoparticles in the solutions, the anionic surfactant was more biodegradable than the nonionic surfactant. Table 4 shows that EC and APG were readily biodegraded. A surfactant can be considered biodegradable if one of the tests indexed in Annex III of Regulation (EC) No. 648/2004 (Regulation (EC) 2004) exhibits a minimum ultimate biodegradation level of 60% after 28 days. The surfactants EC and APG fulfilled this requirement and yielded 91.8% and 80.64% DOC removal, respectively, for initial concentrations of 25 mg/L, and 80.15% and 60.49% for initial concentrations of 50 mg/L. Surface and interfacial tensions were also determined for solutions containing nanoparticles during the biodegradation process. As the biodegradation process proceeded, increases in surface and interfacial tensions confirmed the disappearance of surfactant from the medium, which was consistent with the degradation curves obtained (Fig. 4). In the case of the nonionic surfactant (APG), the increases in the surface and interfacial tensions during biodegradation were more gradual than those for the anionic surfactant, which may be because the degradation metabolites of APG have a certain interfacial activity. APG follows a central scission biodegradation pathway in which ω-oxidation and central scission lead to dicarboxylic acids (Jurado et al. 2011), and the interfacial activity is associated with this process (Lee and Hildemann 2013). Regardless of the presence of nanoparticles in the biodegradation tests, it was observed that the anionic surfactant had higher surface and interfacial activities than the nonionic surfactant. This is directly related to the low adsorption of the anionic surfactant during the nanoparticle adsorption tests and the higher biodegradability of EC compared with APG. The higher hydration capacity of the polar head of the anionic surfactant makes adsorption more difficult than it is for nonionic surfactants. For APG, the adsorption of surfactant onto the nanoparticles drove the nanoparticles towards the interfaces due to the increased hydrophobicity (Ravera et al. 2006). On the other hand, APG formed suspensions, and only a small part of the surfactant may be susceptible to biodegradation and available to bacteria (Zgoła-Grześkowiak et al. 2008), consistent with its lower biodegradation relative to that of EC. When comparing the influence of nanoparticle size on biodegradability, it was observed that the larger particles caused greater biodegradability, independent of the character of the surfactant (Fig. 4). Both surfactants decreased the diameters of the nanoparticle aggregates and increased their effective concentrations. To corroborate this, parameters characteristic of the biodegradation profiles (Jurado et al. 2007) were calculated, including the latency time (t_L), half-life time (t_1/2), mean biodegradation rate (V_M), percentage of primary biodegradation reached at 50 h of assay (B), and mineralization (Min). Table 4 summarizes the values of these characteristic parameters obtained for the biodegradation profiles. Equations 1 and 2 show the dependence of biodegradation (B) and mean biodegradation rate on nanoparticle concentration for APG and A200 nanoparticles. The nanoparticles affected the acclimation time of the microorganisms, t_L; this value varied between 15.96 h for the APG-A200 assay and 78.24 h for the EC-A200 assay. The latency time and half-life time were notably augmented for nanofluids containing A200 nanoparticles.
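To make the characteristic parameters above concrete, the sketch below shows one way they could be extracted from a biodegradation curve built from DOC measurements. The threshold used for the latency time and the definition of the mean rate are assumptions for illustration, not necessarily the exact definitions of Jurado et al. (2007), and the data values are placeholders.

```python
# Illustrative sketch (not the authors' code): extracting t_L, t_1/2, B and
# Min from a DOC-based biodegradation curve. Thresholds and the V_M
# definition below are assumptions; the data are placeholders.
import numpy as np

t_h = np.array([0, 12, 24, 48, 72, 120, 168, 240])               # time, h
doc = np.array([40.0, 39.5, 37.0, 30.0, 20.0, 10.0, 6.0, 4.0])   # mg C/L

biodeg = 100 * (1 - doc / doc[0])       # % DOC removal relative to t = 0

def time_at(level, t, y):
    """Time at which the curve first reaches `level`, by linear interpolation."""
    idx = np.argmax(y >= level)
    if y[idx] < level:
        return np.nan                   # level never reached
    if idx == 0:
        return t[0]
    frac = (level - y[idx - 1]) / (y[idx] - y[idx - 1])
    return t[idx - 1] + frac * (t[idx] - t[idx - 1])

t_L = time_at(10, t_h, biodeg)          # latency time, assumed as 10 % removal
t_half = time_at(50, t_h, biodeg)       # half-life time: 50 % removal
B_50h = np.interp(50, t_h, biodeg)      # biodegradation reached at 50 h
mineralization = biodeg[-1]             # final level at the end of the assay
V_M = mineralization / t_h[-1]          # mean biodegradation rate, %/h (assumed)

print(f"t_L = {t_L:.1f} h, t_1/2 = {t_half:.1f} h, B(50 h) = {B_50h:.1f} %, "
      f"Min = {mineralization:.1f} %, V_M = {V_M:.3f} %/h")
```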
The presence of nanoparticles in the biodegradability tests did not alter the shape of the resulting curve, except for the anionic surfactant EC with A200 nanoparticles: in that case the curve was exponential, the biodegradation process became slower (B = 2.55%), and a long lag phase was observed, although the final mineralization level reached was the highest among the tests carried out with nanoparticles (Min = 87.02%). Therefore, it was possible to establish a dependence of biodegradability on silica particle size. The zeta potential (ZP) of the A200 particles was less negative than that of the A380 particles, revealing the greater stability of the smaller nanoparticles. On the other hand, the TEM image of the A380 silica nanoparticles used in this study corroborated the aggregation phenomenon, which has a direct effect in lowering biodegradation. The influence of the initial surfactant concentration on biodegradability was also demonstrated (Fig. 5). In general, surfactants biodegrade more easily at lower initial concentrations, in the presence or absence of nanoparticles, and this was the case for the two surfactants studied here. The minimum level of 60% ultimate biodegradation after 28 days was not reached for the anionic and nonionic surfactants when the initial concentration of surfactant was 50 mg/L in the presence of nanoparticles, regardless of their size. This phenomenon is reflected in the characteristic parameters calculated for the biodegradation profiles, such as V_M, t_L, t_1/2, and B. The average biodegradation rate V_M and the biodegradability B were greater for lower initial concentrations, while the latency time t_L and half-life time t_1/2 were shorter. For nanofluids containing A200, the reduction in biodegradability attributed to an increase in surfactant concentration was more pronounced for the nonionic surfactant.

Biodegradability of surfactant-nanoparticle mixtures

The biodegradabilities of anionic/nonionic surfactant mixtures were evaluated to understand the interactions and synergies among different kinds of surfactants. Surfactants are used as cosurfactants in many formulations; therefore, the ecotoxicological and interfacial interactions in binary mixtures with a 1:1 weight ratio of the ether carboxylic derivative surfactant and the alkyl polyglucoside were investigated. Mixtures of surfactants in detergents, household care products, and industrial formulations generally include nonionic/nonionic, cationic/cationic, anionic/anionic, and amphoteric/amphoteric surfactant pairs. However, it has been demonstrated that the synergistic effects between surfactants increase with increasing charge difference (Werts and Grady 2011), meaning that synergisms between nonionic/nonionic or anionic/anionic pairs are smaller than those between nonionic/anionic surfactants (Kume et al. 2008). The level of biodegradation for APG-EC binary mixtures is lower than that for solutions with single surfactants. This negative synergistic effect may be explained by reductions of the electrostatic repulsions between the head groups of the anionic surfactant upon inclusion of nonionic head groups, which results in lower aggregate stability and thus an increase in CMC for binary mixtures of anionic-nonionic surfactants. This occurs both in the presence and absence of nanoparticles (Fig. 6). The data for binary mixtures indicated that the lowest biodegradation level appeared for the APG-EC mixture with the larger nanoparticles.
These results may guide formulations of commercial surfactant mixtures with augmented biodegradability, especially if the surfactants EC and APG are incorporated.

Conclusions

This work investigated whether silica nanoparticles modify the biodegradability of surfactants and other surfactant properties, particularly their interfacial and adsorption behaviour. Binary mixtures of nonionic and anionic surfactants were also investigated. The nonionic and anionic surfactants studied (APG and EC, respectively) decreased the surface tension of the air-water interface. The inclusion of negatively charged hydrophilic silica nanoparticles reduced the efficiency of the nonionic surfactant and considerably increased its CMC, but the effect was the opposite for the anionic surfactant. Increasing concentrations of surfactant and nanoparticles in the test medium resulted in decreases in the relative maximum mineralization for both surfactants. These results imply that the surfactants assayed may be considered safe for the environment when formulated as nanofluids, with or without nanoparticles, at a low initial surfactant concentration of 25 mg/L. Measurements of binary mixtures indicated that the mixture with the lowest biodegradability was the APG-EC mixture with the larger nanoparticles. Since biodegradation is the main mechanism for removing organic compounds, knowledge of the biodegradability of surfactants in combination with other additives is necessary for understanding the environmental behaviour of these mixtures before designing a detergent formula. These results can lead to a useful methodology for the development of more biodegradable formulations.

Funding
Funding for the open access charge: University of Granada / CBUA. Research funded by the Spanish Ministry of Economy and Competitiveness (Ref. Project CTQ2015-69658-R). This project had a duration of 4 years, during which the experimentation, interpretation of the data, and writing of this article were carried out.

Consent for publication: Not applicable.

Competing interests: The authors declare no competing interests.

Fig. 6 Biodegradation profiles for binary mixtures of surfactants and nanoparticles
Effect of Foliar Application of Nutrients on Growth, Yield and Fruit Quality of Pomegranate (Punica granatum L.) Cv. Bhagwa

A field experiment was conducted to study the effect of foliar application of nutrients on growth, yield and fruit quality of pomegranate (Punica granatum L.) cv. Bhagwa at the experimental farm of the Horticultural Research and Training Station and Krishi Vigyan Kendra, Kandaghat, Solan, Dr YS Parmar University of Horticulture and Forestry, Nauni, Solan, Himachal Pradesh (India), during the year 2016-2017. The experiment was laid out in a Randomized Block Design (RBD) with four foliar applications of potassium nitrate, KNO3 (0.5%, 1% and 1.5%); calcium chloride, CaCl2 (0.5%, 1% and 1.5%); boric acid, H3BO3 (0.2%, 0.4% and 0.6%); and their combinations. The first spray was applied one month after fruit set, and the remaining three were applied at one-month intervals. Among the various treatments, significant increases in plant height, plant spread, plant volume, fruit size, fruit weight, fruit yield, total soluble solids and total sugars, and reductions in fruit drop and fruit cracking, were recorded with the application of KNO3 (1%) + CaCl2 (1%) + H3BO3 (0.4%). Leaf N, P, K, Ca and Mg were also significantly affected by the foliar application of KNO3 (1%) + CaCl2 (1%) + H3BO3 (0.4%). Therefore, the combined foliar application of KNO3 (1%), CaCl2 (1%) and H3BO3 (0.4%) was found to be the best treatment for the improvement of growth, yield and fruit quality of pomegranate.

INTRODUCTION

Pomegranate (Punica granatum L.) belongs to the family Punicaceae and is a delicious dessert and table fruit of tropical and subtropical regions of the world. Its fruit has wide consumer preference for its attractive juice and sweet-acidic, refreshing arils and is also valued for its nutritional and medicinal properties. There is a growing demand for good quality fruits, both fresh and processed as juice, syrup and wine. Pomegranate is native to Persia (Iran), Afghanistan and Baluchistan [1]. It is cultivated on a large scale in Spain, Morocco, Egypt, Afghanistan, Pakistan and India. In India, commercial plantations of pomegranate exist in Maharashtra, Gujarat, Rajasthan, Karnataka and, to a limited extent, in Andhra Pradesh, Tamil Nadu, Madhya Pradesh, Uttar Pradesh, Punjab, Haryana and Himachal Pradesh. Its wild forms are found in the lower hills of Himachal Pradesh. The total area under pomegranate in India is 209 thousand hectares, with an annual production of 2442 thousand metric tonnes [2]. During past decades, pomegranate has been introduced in the foothills of Himachal Pradesh, comprising the sub-tropical and valley areas of the Shivalik hills (mainly in the mid-hill areas of Solan, Sirmour, Shimla, Chamba, Mandi, Kullu and Kangra districts), which hold tremendous scope for pomegranate cultivation. In Himachal Pradesh, the total area under pomegranate is 2771 hectares, with an annual production of 3215 tonnes [3]. The quality of fruits produced in this region is inferior because of lower summer temperatures coupled with higher humidity during the later stages of fruit development. Preharvest fruit cracking associated with soil moisture fluctuation in summer is another problem, which causes substantial economic loss to the growers. The primary objective was to increase the productivity of quality fruits of pomegranate in this region. The 'Bhagwa' cultivar of pomegranate is presently under commercial cultivation in the state.
This cultivar is a heavy yielder and possesses highly desirable fruit characters. It matures in 170-180 days. Fruits are medium to big in size, attractive, smooth and glossy with a thick, dark saffron skin, and the arils are sweet in taste with a red colour; hence they fetch a very good price in the market. It is suitable for the long-distance market as it has a thick rind and better keeping quality. The growth habit of the tree is spreading [4]. Considering all these attributes, the 'Bhagwa' cultivar is good for commercial cultivation in the pomegranate growing regions of Himachal Pradesh. Foliar application of plant nutrients has various beneficial effects on pomegranate; therefore, foliar sprays of nutrients in adequate quantity should be applied at the appropriate time for optimum growth, yield, fruit quality and control of fruit cracking. Foliar fertilization has the advantage of uniform distribution of fertilizer materials and quick response to the applied nutrients. Potassium is involved in a number of physiological processes, the activation of numerous enzymes and the regulation of the cation-anion balance [5]. Potassium promotes the translocation of photosynthates (sugars) for plant growth and storage in fruits and roots. Calcium has a major role in the formation of the cell wall membrane and its plasticity, affecting normal cell division by maintaining cell integrity and membrane permeability. Calcium is an activator of several enzyme systems in protein synthesis and carbohydrate transfer. Boron has been associated with lignin synthesis, the activity of certain enzymes, seed and cell wall formation, and sugar transport. Keeping in view the above importance, the present study was conducted to find out the effect of foliar application of nutrients and their combinations on growth, yield and fruit quality of pomegranate cv. Bhagwa.

The observations were taken in respect of vegetative growth, yield, quality and leaf nutrient content of pomegranate. The vegetative growth was recorded once before the start of the experiment and again at the termination of the experiment. Fruit yield was determined on the basis of the total weight of fruits harvested from the trees under each treatment and was expressed in kg per plant. To measure fruit size (length and breadth), five randomly selected fruits were measured with the help of a digital vernier calliper, and fruit length and breadth were expressed in centimetres. To record fruit weight, five randomly selected fruits were weighed with the help of a weighing pan and the average fruit weight was calculated and expressed as weight per fruit in grams. Fruit cracking was calculated by counting the number of cracked fruits out of the total number of fruits at the time of harvest and was expressed in per cent. Total sugars and reducing sugars were determined by the method suggested by [6], using Fehling's A and B solutions. For the estimation of leaf nitrogen, leaf samples were digested in concentrated sulphuric acid by adding digestion mixture (potassium sulphate 400 parts, copper sulphate 20 parts, mercuric oxide 3 parts and selenium powder 1 part). In the case of P, K, Ca and Mg, leaf samples were digested in a diacid mixture containing nitric acid and perchloric acid in the ratio of 4:1 [7]. Mature and disease-free leaf samples of the current season's growth were collected from around the periphery of the trees on 15th August. The data generated from the present investigation were appropriately computed, tabulated and analysed by using MS-Excel and the OPSTAT programme.
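As described next, the data were analysed by analysis of variance for a Randomized Block Design at the 5 per cent level of significance; the analysis was actually performed in OPSTAT, but a minimal Python sketch of an equivalent test is given below. The CSV file and column names are illustrative assumptions, not the study's actual data files.

```python
# Hedged sketch: RBD-style ANOVA on a hypothetical long-format table of plot observations.
# Assumed columns: 'treatment' (e.g. 'KNO3_1+CaCl2_1+H3BO3_0.4'), 'block', 'fruit_yield_kg'.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.read_csv("pomegranate_rbd.csv")  # illustrative file name, not the study's data

# Randomized Block Design: treatment effect tested with replications entered as blocks.
model = ols("fruit_yield_kg ~ C(treatment) + C(block)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Treatment means, for comparison against the reported best combination.
print(df.groupby("treatment")["fruit_yield_kg"].mean().sort_values(ascending=False))

# A treatment effect is declared significant at the 5 per cent level when its
# p-value in the ANOVA table is below 0.05.
```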
The data were subjected to analysis of variance as outlined by [8] for a Randomized Block Design. One-way analysis of variance was performed to determine the significance of differences between the treatment means at the 5 per cent level of significance.

Vegetative Growth
Foliar application of different nutrients, alone or in combination, resulted in a significant increase in plant height, spread and volume. However, the maximum shoot extension growth (31.66 cm) was recorded in the KNO3 (1.5%) + CaCl2 (1.5%) + H3BO3 (0.6%) treatment, as presented in Table 1. These findings are in line with the earlier reports of [9,10,11]. The increase in vegetative growth might be due to the fact that potassium nitrate is a good source of nitrogen, which is an essential constituent of proteins, nucleic acids and chlorophyll and enhances the metabolic processes of plants, leading to an increase in vegetative growth. Boron has a role in nitrogen metabolism, hormone movement and cell division, which resulted in more plant growth.

Fruit Yield
The foliar applications of different nutrients, alone or in combination, also increased the fruit yield significantly, as shown in Table 1. The highest fruit yield (32.96 kg/plant) was recorded with the foliar spray of the KNO3 (1%) + CaCl2 (1%) + H3BO3 (0.4%) treatment, followed by KNO3 (1.5%) + CaCl2 (1.5%) + H3BO3 (0.6%). The increase in fruit yield with the foliar application of nutrients may be attributed to increased fruit size, fruit weight and minimum fruit drop; in addition, greater cell division, cell elongation and translocation of photosynthates and metabolites from leaves to the developing fruits resulted in higher fruit yield. The highest fruit yield recorded with the foliar spray of KNO3, CaCl2 and H3BO3 may be attributed to better uptake and mobilization of nutrients to the sink, leading to better fruit development. These findings are also supported by the earlier reports of [12] with the application of KNO3 (2%) in pomegranate, of [13] in apple with the application of H3BO3 (0.1%) + CaCl2 (0.4%), and of [14], who found that 0.4% boric acid in combination with zinc sulphate also increased fruit yield significantly in pomegranate.

Fruit Size and Weight
The maximum fruit length (9.03 cm) and fruit weight (383.88 g) were recorded with the spray of KNO3 (1%) + CaCl2 (1%) + H3BO3 (0.4%). However, the maximum fruit diameter (9.28 cm) was recorded with the spray of KNO3 (1.5%) + CaCl2 (1.5%) + H3BO3 (0.6%), as shown in Table 1 and Table 2. The possible reason for the increase in fruit length and diameter might be that mineral nutrients have an indirect role in hastening the processes of cell division and cell elongation, due to which the size of the fruit might have improved. The increase in fruit weight might be attributed to a rapid increase in the size of cells, or to the fact that foliar application of boron increased the fruit weight by maintaining the level of auxins in various parts of the fruits, which helped in increasing fruit growth [15]. These findings are in conformity with those of [16,17] in apple with the application of K and B.

Fruit Cracking
In the present investigation, the different nutrients applied alone or in combination had a significant effect on fruit cracking (Table 1). The minimum fruit cracking (1.75%) was obtained with the spray of the intermediate level of KNO3 (1%) + CaCl2 (1%) + H3BO3 (0.4%).
These results are in agreement with those of [18,19,20], who recorded less fruit cracking in pomegranate by using 0.2% boric acid. The reduction in fruit cracking may be attributed to the physiological role of boron in the synthesis of pectic substances in the cell wall, which strengthened the tissues and prevented fruit cracking. The role of calcium in binding the tissues, especially in the middle lamella, is also important in reducing fruit cracking [21]. This can further be attributed to a synergism with boron, which may help calcium metabolism in the cell wall [22].

Juice Content
In the present investigation, the application of different nutrients alone or in combination had a significant effect on the juice content of fruits (Table 2). A significant increase in juice content (71.33%) was recorded in fruits produced by plants sprayed with KNO3 (1.5%). The results are in line with those of [23], who recorded an increase in the juice content of pomegranate with the application of potassium nitrate. The increase in juice content might also be because of the smaller seeds and higher juice content of the variety 'Bhagwa' [24].

Total Soluble Solids (TSS) and Titratable Acidity
The results of the present study revealed that the application of different nutrients alone or in combination had a significant effect on TSS and titratable acidity (Table 2). The TSS content (13.50 °Brix) was improved with the treatment combination of KNO3 (1%) + CaCl2 (1%) + H3BO3 (0.4%). [25] also observed an increase in TSS with the application of calcium in apple, as did [26] in 'Red Delicious' apple using calcium chloride and boric acid. The highest TSS recorded with foliar application of nutrients might be due to lesser utilization of sugars in metabolic processes as a result of reduced respiration [27]. The increase in total soluble solids might also be due to the fact that boron helped in trans-membrane sugar transport. The minimum titratable acidity (0.36%) in the present study was recorded with the application of H3BO3 (0.4%). These results are in agreement with the findings of [28], who also recorded a reduction in fruit acidity with the application of 0.4% borax in litchi.

Total, Reducing and Non-reducing Sugars
In the present study, total, reducing and non-reducing sugars were significantly affected by foliar nutrient applications alone or in combination (Table 2). Higher contents of total sugars (12.85%) and non-reducing sugars (2.33%) were recorded with the treatment of the intermediate level of KNO3, CaCl2 and H3BO3 in combination, i.e. KNO3 (1%) + CaCl2 (1%) + H3BO3 (0.4%). However, the higher content of reducing sugars (10.58%) was observed in treatment KNO3 (1.5%). The increase in sugar content might be due to rapid translocation of sugars from leaves to the developing fruits. Boron facilitated sugar transport within the plant; it was also reported that borate reacts with sugars to form a sugar-borate complex, and boron also acts as a switch in the degradation of glucose either by glycolysis or by the pentose sugar pathway [29]. An increase in leaf K concentration also enhances the rate of photosynthesis, and this could be one of the reasons for the increased sugar content in the fruits.

Leaf Nutrient Composition
It is clear from the data presented in Table 2 that the foliar application of different nutrients alone or in combination exerted a significant effect on the nitrogen, phosphorus and potassium content of the leaf. The maximum leaf nitrogen (1.92%) was found in the trees treated with KNO3 (1.5%) + CaCl2 (1.5%) + H3BO3 (0.6%).
In the case of phosphorus, the treatment KNO3 (1%) + CaCl2 (1%) + H3BO3 (0.4%) significantly improved leaf phosphorus (0.22%). Treatment KNO3 (1.5%) recorded the maximum leaf potassium content (1.78%), while the minimum leaf nitrogen (1.54%), leaf phosphorus (0.12%) and leaf potassium (0.98%) were recorded in the control. The secondary elements calcium and magnesium in leaves also varied significantly with the application of different levels of potassium, calcium and boron. The maximum leaf calcium (2.29%) was recorded in the treatment CaCl2 (1.5%), and the leaf magnesium concentration (0.66%) was highest in KNO3 (1%) + CaCl2 (1%) + H3BO3 (0.4%). However, the minimum calcium (1.47%) and magnesium (0.51%) were recorded in the control. The above findings are in conformity with those of [30,31]. The increase of these nutrients in the leaves might be associated with calcium and boron application, as both are involved in several physiological processes in plants [32]. Potassium is involved in many biochemical reactions that are essential for enzyme activities [33].

CONCLUSION
On the basis of the results obtained from the present investigation, it is concluded that the combined foliar application of KNO3 (1%), CaCl2 (1%) and H3BO3 (0.4%) (T11) proved to be the best treatment. Combined application of KNO3 (1%), CaCl2 (1%) and H3BO3 (0.4%) resulted in significant improvement in growth, yield, fruit quality and leaf nutrient content of pomegranate. This treatment showed the greatest increase in plant height, spread and volume, together with the highest fruit size, fruit weight, fruit yield, TSS and total sugars, a reduction in fruit cracking, and an increase in leaf N, P, K, Ca and Mg content in pomegranate cv. Bhagwa.
2019-04-03T13:10:19.125Z
2020-07-28T00:00:00.000
{ "year": 2020, "sha1": "ca51de5b05e7964dda1776358bcdf61e2826a451", "oa_license": null, "oa_url": "https://www.journalcjast.com/index.php/CJAST/article/download/30807/57791", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "c98a7828230a9c075d5731090d37361f34a9a0db", "s2fieldsofstudy": [ "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Biology" ] }
18296812
pes2o/s2orc
v3-fos-license
A multi-scale approach to the computer-aided detection of microcalcification clusters in digital mammograms

A computer-aided detection (CADe) system for the identification of microcalcification clusters in digital mammograms has been developed. It is mainly based on the application of wavelet transforms for image filtering and neural networks for both the feature extraction and the classification procedures. This CADe system is easily adaptable to different databases. We report and compare the FROC curves obtained on the private database we used for developing the CADe system and on the publicly available MIAS database. The results achieved on the two databases show the same trend, thus demonstrating the good generalization capability of the system.

Introduction
Microcalcifications appear as small bright circular or slightly elongated spots embedded in the complex normal breast tissue imaged in a mammogram. Especially when they are grouped in clusters, microcalcifications can be an important early indication of breast cancer. Computer-aided detection (CADe) systems can improve the radiologists' accuracy in the interpretation of mammograms by alerting them to suspicious areas of the image containing possibly pathological signs. The main problem one has to deal with in developing a CADe system for mammography is the strong dependence of the method, the parameters and the performance of the system on the dataset used in the set-up and testing procedures. The approach we adopted for our CADe system is mainly based on the exploitation of the properties of wavelet analysis and artificial neural networks. The use of wavelets in the pre-processing step, together with the implementation of an automatic neural-based procedure for the feature extraction, allows the analysis scheme to be generalized to databases characterized by different acquisition and storing parameters.

CADe scheme
The CADe scheme can be summarized in the following main steps:
• INPUT: digitized mammogram;
• Pre-processing of the mammogram: identification of the breast skin line and segmentation of the breast region with respect to the background; application of a wavelet-based filter in order to enhance the microcalcification signal;
• Feature extraction: decomposition of the breast region into several N×N pixel-wide sub-images to be processed one at a time; automatic extraction of the features from each sub-image;
• Classification: clustering of the processed sub-images into two classes, i.e. those containing microcalcification clusters and those containing normal tissue;
• OUTPUT: merging of contiguous or partially overlapping sub-images and visualization of the final output by superimposing rectangles indicating suspicious areas on the original image.

Tests and results
The CADe system was set up and tested on a private database of mammograms collected in the framework of the INFN (Istituto Nazionale di Fisica Nucleare)-founded CALMA (Computer-Assisted Library for MAmmography) project [1]. The digitized images are characterized by an 85 µm pixel pitch and a 12-bit resolution, thus allowing up to 4096 gray levels. The dataset used for training the CADe consists of 305 mammograms containing microcalcification clusters and 540 normal mammograms. The system performance on a test set of 140 CALMA images (70 with microcalcification clusters and 70 normal images) has been evaluated in terms of FROC analysis [2], as shown in fig. 1.
In particular, as shown in the figure, a sensitivity value of 88% is obtained at a rate of 2.15 FP/im. In order to test the generalization capability of the system, we evaluated the CADe performance on the publicly available MIAS database [3]. Since the MIAS mammograms are characterized by a different pixel pitch (50 µm instead of 85 µm) and a shallower dynamic range (8 bits per pixel instead of 12) with respect to the CALMA mammograms, we had to define a tuning procedure for adapting the CADe system to the MIAS database characteristics. A scaling of the wavelet analysis parameters allows the CADe filter to generate very similar pre-processed images on both datasets. The remaining steps of the analysis, i.e. the characterization and the classification of the sub-images, have been directly imported from the CALMA CADe neural software. The performance the rescaled CADe achieves on the images of the MIAS database has been evaluated on a set of 42 mammograms (20 with microcalcification clusters and 22 normal) and is shown in fig. 1. As can be noticed, a sensitivity value of 88% is obtained at a rate of 2.18 FP/im.

Conclusions
The implementation of the wavelet transform in the pre-processing step of the analysis and the use of an auto-associative neural network for the automatic feature extraction make our CADe system tunable to different databases. The main advantage of this procedure is that this scalable CADe system can be tested even on very small databases, i.e. databases that do not allow the learning procedure of the neural networks to be properly carried out. The strong similarity in the trends of the FROC curves obtained on the CALMA and on the MIAS databases provides clear evidence that the CADe system we developed can be applied to different databases with no appreciable decrease in the detection performance.
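As an illustration of the wavelet-based filtering idea used in the pre-processing step of the CADe scheme above, a hedged sketch using the PyWavelets library follows. The wavelet family, the decomposition level and the strategy of suppressing the coarse approximation to enhance small bright spots are assumptions made purely for illustration, not the actual filter parameters of the CALMA system.

```python
# Hedged sketch: enhance small, high-frequency structures (microcalcification-like spots)
# by zeroing the coarse approximation of a 2-D wavelet decomposition and reconstructing.
import numpy as np
import pywt

def wavelet_enhance(image: np.ndarray, wavelet: str = "db4", level: int = 3) -> np.ndarray:
    coeffs = pywt.wavedec2(image.astype(float), wavelet=wavelet, level=level)
    coeffs[0] = np.zeros_like(coeffs[0])  # suppress the low-frequency breast background
    enhanced = pywt.waverec2(coeffs, wavelet=wavelet)
    return enhanced[: image.shape[0], : image.shape[1]]  # crop any reconstruction padding

# Example on a synthetic 12-bit array (up to 4096 gray levels), standing in for a mammogram.
img = np.random.randint(0, 4096, size=(512, 512))
filtered = wavelet_enhance(img)
```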
2014-10-01T00:00:00.000Z
2007-01-22T00:00:00.000
{ "year": 2007, "sha1": "80f70dc3d1abe4506fca2679bfc43aca38e29b08", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "3e15cdb304dc991f24a91597fe00307cb24382a2", "s2fieldsofstudy": [ "Computer Science", "Medicine" ], "extfieldsofstudy": [ "Computer Science", "Physics" ] }
204824156
pes2o/s2orc
v3-fos-license
Scalable Neural Dialogue State Tracking

A Dialogue State Tracker (DST) is a key component in a dialogue system, aiming at estimating the beliefs of possible user goals at each dialogue turn. Most of the current DST trackers make use of recurrent neural networks and are based on complex architectures that manage several aspects of a dialogue, including the user utterance, the system actions, and the slot-value pairs defined in a domain ontology. However, the complexity of such neural architectures incurs a considerable latency in the dialogue state prediction, which limits the deployment of the models in real-world applications, particularly when task scalability (i.e. the number of slots) is a crucial factor. In this paper, we propose an innovative neural model for dialogue state tracking, named Global encoder and Slot-Attentive decoders (G-SAT), which can predict the dialogue state with a very low latency time, while maintaining high-level performance. We report experiments on three different languages (English, Italian, and German) of the WoZ2.0 dataset, and show that the proposed approach provides competitive advantages over state-of-the-art DST systems, both in terms of accuracy and in terms of time complexity for predictions, being over 15 times faster than the other systems.

INTRODUCTION
Spoken dialogue systems, or conversational systems, are designed to interact with and assist users using speech and natural language to achieve a certain goal [1]. A consolidated approach to build a task-oriented dialogue system involves a pipeline architecture (see Figure 1), where each component is trained to perform a sub-task, and the combination of the modules in a given sequence aims at handling the complete task-oriented dialogue. In such a pipeline, a spoken language understanding (SLU) module determines the user's intent and the relevant information that the user is providing, represented in terms of slot-value pairs. Then, the dialogue state tracker (DST) uses the information of the SLU together with the past dialogue context and updates its belief state [2,3].
(Fig. 1. A typical flow in a task-oriented dialog system [4].)
In this framework a dialogue state indicates what the user requires at any point in the dialogue, and it is represented as a probability distribution over the possible states (typically a set of pre-defined slot-value pairs specific to the task). The dialogue policy manager then decides on the next system action based on the dialogue state. Finally, a natural language generation (NLG) component is responsible for the generation of an utterance that is returned as a response to the user. In this paper we focus on the dialogue state tracker component, whose role is to track the state of the dialogue based on the current user utterance, the conversation history and any additional information available to the system [1]. Deep neural network techniques, such as recurrent neural networks and convolutional neural networks, are the current state-of-the-art models for DST [5,6,7,8,9], showing a high capacity to generalize from annotated dialogues. As an example, the GLAD model (Global-Locally Self-Attentive Dialogue State Tracker [9]) employs independent slot-specific encoders, consisting of a recurrent and a self-attention layer, for each of the user utterance, the system action and the slot-value pairs.
Another DST system, GCE (Globally Conditioned Encoder [10]), simplifies the GLAD neural architecture by removing the slot-specific recurrent and self-attention layers of the encoder, but still requires separate encoders for the utterance, the system action and the slot-values. Although the neural network models mentioned above achieve state-of-the-art performance, the complexity of their architectures makes them highly inefficient in terms of time complexity, with a significant latency in their prediction time. Such latency may soon become a serious limitation for their deployment into concrete application scenarios with an increasing number of slots, where real time is a strong requirement. Along this perspective, this work investigates the time complexity of state-of-the-art DST models and addresses their current limitations. Our contributions are the following:
• we have designed and implemented an efficient DST, consisting of a Global encoder and Slot-Attentive decoders (G-SAT);
• we provide empirical evidence (on three languages of the WOZ2.0 dataset [7]) that the proposed G-SAT model considerably reduces the latency time with respect to state-of-the-art DST systems (i.e. it is over 15 times faster), while keeping the dialogue state prediction in line with such systems;
• further experiments show that the proposed model is highly robust whether pre-trained embeddings are used or not, and in the latter case it outperforms state-of-the-art systems.
The implementation of the proposed G-SAT model is publicly available. The paper is structured as follows. Section 2 summarizes the main concepts behind the definition of dialogue state tracking. Section 3 reports the relevant related work. Section 4 provides the details of the proposed G-SAT neural model, and, finally, Sections 5 and 6 focus on the experiments and the discussion of the results we achieved.

DIALOGUE STATE TRACKING
A Dialogue State Tracker (DST) estimates the distribution over the values V_s for each slot s ∈ S based on the user utterance and the dialogue history at each turn in the dialogue. Slots S are typically provided by a domain ontology, and they can either be informable (S_inf) or requestable (S_req). Informable slots are attributes that can be provided by the user during the dialogue as constraints, while requestable slots are attributes that the user may request from the system. The dialogue state typically maintains two internal properties:
• joint goal - indicating a value v ∈ V_s that the user specifies for each informable slot s ∈ S_inf;
• requests - indicating the information that the user has asked the system for, from the set of requestable slots S_req.
For example, in the restaurant booking dialogue shown in Figure 2, extracted from the WOZ2.0 dataset [7], the user specifies a constraint on the price range slot (i.e. inform(price range=moderate)) in the first utterance of the dialogue, and requests the phone number and the address (i.e. request(address, phone number)) in the second user utterance. The set of slot-value pairs specifying the constraints at any point in the dialogue (e.g. (price range=moderate, area=west, food=italian)) is referred to as the joint goal, while the requested slot-value pairs for a given turn (e.g. request(address, phone number)) are referred to as the turn request.

Latency in Dialogue Systems
An effective dialogue system should be able to process the user utterance and respond in real time in order to achieve a smooth dialogue interaction between the user and the dialogue system itself [11,12].
As a consequence, time latency in dialogue systems is very crucial, as it directly impacts the user experience. In real-world applications, task-oriented dialogue systems typically follow a pipeline architecture (as shown in Figure 1) where multiple components need to interact with each other to produce a response for the user, so the time complexity of each component plays a key role. In particular, the DST component is a bottleneck, as it needs to track the user's goals based on the dialogue and provide its output to the other components of the whole process. Though end-to-end (E2E) approaches for dialogue systems have attracted recent research, dialogue state tracking still remains an integral part of those systems, as shown by [13,6,14]. Current DST models use recurrent neural networks (RNN), as they are able to capture temporal dependencies in the input sentence. An RNN processes each token in the input sequentially, one after the other, and so can incur significant latency if not modeled well. Apart from the architecture, the number of slots and values of the domain ontology also affects the time complexity of the DST. Recent works [7,9,8] use RNNs to obtain very high performance for DST, but are nevertheless quite limited as far as the efficiency of the models is concerned. For instance, the GCE model [10] addresses time complexity within the same architectural framework used by GLAD [9], although the prediction latency of the model is still quite high, at least for a production system (more details in Section 5). This limitation could be attributed to the fact that both GLAD and GCE use separate recurrent modules to output representations for the user utterance, the system action and the slot-value pairs. These output representations then need to be combined using a scoring module, which scores a given slot-value pair based on the user utterance and the system action separately. In this work, we investigate approaches that overcome the complexity of such architectures and improve the latency time without compromising the DST performance.

RELATED WORK
Spoken dialogue systems typically consist of a spoken language understanding (SLU) component that performs slot-filling to detect slot-value pairs expressed in the user utterance. This information is then used by the dialogue state tracker (DST) [2,3]. Recent research has focused on jointly modeling the SLU and the DST [5,6]. For such joint models, deep neural network techniques have been the choice of use because of their proven ability to extract features from a given input and their generalization capability [5,6,7,8]. Following this research line, [5] proposed a word-based DST (based on a delexicalisation approach) that jointly models SLU and DST, and directly maps from the utterances to an updated belief state. [7] proposed a data-driven approach for DST, named neural belief tracker (NBT), which learns a vector representation for each slot-value pair and compares them with the vector representation of the user utterance to predict if the user has expressed the corresponding slot-value pair. The NBT model uses pre-trained semantic embeddings to train a model without a semantic lexicon. GLAD (Global-Locally Self-Attentive Dialogue State Tracker) [9] consists of a shared global bidirectional-LSTM [15] for all slots, and a local bidirectional-LSTM for each slot. The global and local representations are then combined using attention, which is then used by a scoring module to obtain scores for each slot-value pair.
GLAD also relies on pre-trained embeddings, and since it consists of multiple recurrent modules, the latency of the model is quite high. In [10], a Globally Conditioned Encoder (GCE) is used as a shared encoder for all slots, aiming to address this issue by proposing a single encoder with global conditioning. While this approach reduced the latency of GLAD, it still has a considerable time complexity for real-world applications, which is discussed in Section 6. StateNet, proposed by [8], uses an LSTM network to create a vector representation of the user utterance, which is then compared against the vector representation of the slot-value candidates. [8] is the current state of the art for DST; however, it can only be used for domains for which pre-trained embeddings exist, and it can only be modelled for informable slots and not for requestable slots. [16] used a convolutional neural network (CNN) for DST and showed that, without pre-trained embeddings or semantic dictionaries, the model can be competitive with the state of the art.

THE G-SAT MODEL
The proposed approach, G-SAT (Global encoder and Slot-Attentive decoders), is designed to predict slot-value pairs for a given turn in the dialogue. For a dialogue turn, given the user utterance U, the previous system action A and the value set V_s for slot s ∈ S, the proposed model provides a probability distribution over the slot-value set V_s. The model consists of a single encoder module and a number of slot-specific decoder (classifier) modules. The encoder consists of a recurrent neural network that takes as input both the user utterance U and the previous system action A, and outputs a vector representation h. The classifier then receives the representation h and the slot-values v ∈ V_s and estimates the probability of each value in a given slot.

Encoder
The encoder takes in the previous system action A and the current user utterance U as inputs, and processes them iteratively to output a hidden representation for each token in the input, as well as the context vector. Let the user utterance at time t be denoted as U = {u_1, u_2, ..., u_k} with k words, and let A denote the previous system action. The system action A is converted into a sequence of tokens that include the action, slot and value (e.g. confirm(food=Italian) → confirm food Italian) and is denoted as A = {a_1, a_2, ..., a_l}. In case of multiple actions expressed by the system, we concatenate them together. The user utterance U and the system action A are then concatenated, forming the input X to the encoder, where [ ; ] denotes concatenation. Each input token in {x_1, x_2, ..., x_n} is then represented as a vector by an embedding matrix E ∈ R^(|v|×d), where |v| is the vocabulary size and d is the embedding dimension. This representation is then input to a bidirectional-LSTM [15] that processes the input in both forward and backward directions to yield the hidden representations, where LSTM_f(.) and LSTM_b(.) are the forward and backward LSTMs, and →h_t and ←h_t are the corresponding hidden states of the forward and backward LSTMs at time t. The representation h_t for each token in the input is obtained by combining the forward and backward hidden states at that position, and the overall input representation h_L is obtained from the final hidden states of the two directions. Since our model uses a shared encoder for all slots, the outputs of the encoder h_t and h_L are used by slot-specific classifiers for prediction on the corresponding slots.

Classifier
The classifier predicts the probability of each of the possible values v ∈ V_s of a given slot s ∈ S.
It takes in the hidden representations h_t and h_L of the encoder, and the set of possible values V_s for a given slot s, and computes the probability of each value being expressed as a constraint by the user. Initially, each slot-value is represented by a vector v, using the same embedding matrix E as the encoder. For slot-values with multiple tokens, their corresponding embeddings are summed together to yield a single vector. The embeddings are then transformed, through a slot-specific parameter matrix W_s learned during training, to obtain a representation Z_s of the slot values, where V_s = {v_1, v_2, ...} for slot s. The encoder hidden state h_L is then transformed using the ReLU activation function to obtain a slot-specific representation U_s of the user utterance. Based on the slot-specific input representation U_s, an attention mechanism weights the hidden states of the input tokens h_t to provide a context vector C. Depending on the slot type (informable or requestable), the final layer of the classifier varies.

Informable slots
The context vector C and the slot-value representations Z_s are then used to obtain the probability ψ_p^i of each slot-value pair being expressed as a constraint, where score_none is the score for the none value and is learned by the model.

Requestable slots
For requestable slots, we model a binary prediction for each possible requestable slot, where ψ_p^r contains the probability of each requestable slot being requested by the user.

EXPERIMENTS
In this section we describe the dataset and the experimental setting used for the dialogue state tracking task.

Datasets
We use the WOZ2.0 [7] dataset, collected using a Wizard of Oz framework and consisting of written text conversations for the restaurant booking task. Each turn in a dialogue was contributed by different users, who had to review all previous turns in that dialogue before contributing to the turn. WOZ2.0 consists of a total of 1200 dialogues, out of which 600 are for training, 200 for development and 400 for testing. [17] translated the WOZ2.0 English data to both German and Italian using professional translators. We experiment on the three languages (English, German, Italian) of the WOZ2.0 dataset.

Evaluation metrics
We evaluate our proposed model both in terms of performance and prediction latency. The performance of the model is evaluated using the standard metrics for dialogue state tracking, namely joint goal and turn request [18].
• Joint Goal: indicates the performance of the model in correctly tracking the goal constraints over a dialogue. The joint goal is the set of accumulated turn-level goals up to a given turn.
• Turn Request: indicates the performance of the model in correctly identifying the user's request for information at a given turn.
The prediction latency of the model is evaluated using time complexity.
• Time complexity: indicates the latency incurred by the model in making predictions. The time complexity is indicated as the time taken to process a single batch of data.

Experimented Models
We compared our G-SAT model against seven DST models. The Delexicalisation model [7] uses a delexicalisation approach (i.e. replacing slot-value tokens with generic terms) and requires large semantic dictionaries, while all the other approaches are data driven.
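Before continuing with the remaining data-driven baselines, the shared encoder and slot-attentive classifier described in Section 4 can be summarised in a short PyTorch sketch. Layer names, dimensions and the exact scoring details are illustrative assumptions reconstructed from the description above, not the authors' released implementation; the survey of compared models then continues below.

```python
# Hedged sketch of the G-SAT design: one shared BiLSTM encoder over the concatenated system
# action and user utterance, plus a per-slot attention-based classifier over candidate values.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedEncoder(nn.Module):
    def __init__(self, vocab_size: int, emb_dim: int = 128, hidden: int = 64):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        self.bilstm = nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)

    def forward(self, tokens):                       # tokens: (batch, seq_len)
        emb = self.embedding(tokens)
        h_t, _ = self.bilstm(emb)                    # per-token states: (batch, seq, 2*hidden)
        half = h_t.size(2) // 2
        h_last = torch.cat([h_t[:, -1, :half],       # final forward state
                            h_t[:, 0, half:]], -1)   # final backward state
        return h_t, h_last

class SlotAttentiveClassifier(nn.Module):
    """One instance per slot; scores every candidate value plus an implicit 'none'."""
    def __init__(self, embedding: nn.Embedding, enc_dim: int, emb_dim: int):
        super().__init__()
        self.embedding = embedding                   # shared with the encoder
        self.value_proj = nn.Linear(emb_dim, enc_dim)
        self.slot_proj = nn.Linear(enc_dim, enc_dim)
        self.score_none = nn.Parameter(torch.zeros(1))

    def forward(self, h_t, h_last, value_token_ids): # value_token_ids: (n_values, n_tokens)
        z = self.value_proj(self.embedding(value_token_ids).sum(dim=1))    # value vectors Z_s
        u = F.relu(self.slot_proj(h_last))                                 # slot-specific U_s
        attn = torch.softmax(torch.einsum("bd,btd->bt", u, h_t), dim=-1)   # token weights
        context = torch.einsum("bt,btd->bd", attn, h_t)                    # context vector C
        scores = context @ z.t()                                           # (batch, n_values)
        none = self.score_none.expand(scores.size(0), 1)
        return torch.softmax(torch.cat([scores, none], dim=-1), dim=-1)

# Note: enc_dim must equal 2*hidden of the encoder (128 with the defaults above).
```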
The neural belief tracker (NBT) [7] builds on the advances in representation learning and uses pre-trained embeddings to overcome the requirement of handcrafted features. The convolutional neural network (CNN) model [16] is the only approach that does not use pre-trained embeddings, although it also uses the development data for model training. GLAD [9], GCE [10] and StateNet PSI [8] are based on recurrent neural networks and use pre-trained embeddings. To facilitate comparison, our G-SAT approach is trained with the same pre-trained embeddings as used in GLAD and GCE.

Implementation
We use the PyTorch [19] library to implement our model (G-SAT). The encoder of the model is shared across all slots and a separate classifier is defined for each slot. The number of hidden units of the LSTM is set to 64 and a dropout of 0.2 is applied between the different layers. We use the Adam optimizer with a learning rate of 0.001. The embedding dimension of the default model is set to 128, and the embeddings are learned during training. In order to have a fair comparison with the other models that use pre-trained embeddings, we also experiment with our approach using pre-trained GloVe embeddings (of dimension 300) [20] and character n-gram embeddings (of dimension 100) [21], as used in GLAD, leading to embeddings of size 400. The turn-level predictions are accumulated forward through the dialogue, and the goal for slot s is None until it is predicted as value v by the model. The implemented model is run with 10 different random initializations for each language, and the scores reported in Section 6 are the mean and standard deviation obtained in the experiments.

RESULTS AND DISCUSSION
In this section we initially discuss the model performance in terms of joint goal and turn request; later we show a comparison of the time complexity of the models.

DST Performance
The joint goal and turn request performance of the experimented models (as reported in their respective papers) are shown in Table 1. We can see that the proposed G-SAT architecture is comparable to the other models and outperforms both GLAD and GCE on the joint goal metric. This shows that G-SAT is highly competitive with the state of the art in DST. To investigate the behaviour of the different models without any pre-trained embeddings, we use the official implementations of GLAD and GCE, and perform the experiments such that embeddings are learned from the data. We increased the epochs from 50 (default) to 150 for the GLAD and GCE experiments, as we noticed that the models did not converge within 50 epochs (since the embeddings are also learned here). The other parameters of the models are set as in the default implementation. Since the core of the StateNet PSI model is to output a representation on which a 2-norm distance is calculated against the word vector of the slot-value, which is fixed, it is not suitable for training the embeddings. For this reason we do not experiment with StateNet PSI. In addition, the StateNet PSI model can only predict informable slots (i.e. it cannot predict requestable slots), unlike the other approaches considered in our experiments. Table 2 shows the joint goal performance of the models on both the development and test data for three different languages. We can see that our model (G-SAT) outperforms both GLAD and GCE on the three languages of the WOZ2.0 dataset when no pre-trained resources are available, and that the model performance is consistent across both the development and the test data.
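The joint goal metric above relies on the forward accumulation of turn-level predictions described in the Implementation paragraph; a hedged sketch of that accumulation and of a joint-goal accuracy count is given below. The dictionary layout is illustrative, not the paper's evaluation script.

```python
# Hedged sketch of the forward accumulation of turn-level goals into a joint goal, and of
# joint-goal accuracy over a dialogue.
def accumulate_joint_goals(turn_predictions):
    """turn_predictions: list over turns of {slot: value} dicts; value is None if unseen."""
    joint, history = {}, []
    for turn in turn_predictions:
        for slot, value in turn.items():
            if value is not None:        # a slot keeps its most recently predicted value
                joint[slot] = value
        history.append(dict(joint))
    return history

def joint_goal_accuracy(predicted_history, gold_history):
    correct = sum(p == g for p, g in zip(predicted_history, gold_history))
    return correct / len(gold_history)

# Example: the goal for 'food' stays unset until the model first predicts a value for it.
turns = [{"price range": "moderate"}, {"food": "italian", "area": "west"}]
print(accumulate_joint_goals(turns))
```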
Table 3 shows the turn request performance of each model for the three languages. Even in this case the G-SAT model is very competitive on the three languages compared to both the GLAD and GCE models. In addition, since predicting a requestable slot is a much easier task than predicting an informable slot, we note that all three models show very high performance.

Time Complexity Performance
The time complexity for GLAD, GCE and our model (G-SAT) is shown in Figure 3. All models are executed with a batch size of 50, under the same environment and hardware (a single GeForce GTX 1080Ti GPU). As the pre-processing and post-processing of each model can vary based on the implementation and the approach, we report only the time complexity for the model execution after it is loaded and ready to be executed. GLAD requires 1.78 seconds of training time for each batch of data, and 0.84 seconds to predict for a batch. Since GCE does not require a separate encoder for each slot, as in GLAD, it reduces the training time to 1.16 seconds, and the prediction time to 0.52 seconds. Our approach has a significant advantage in execution time, requiring only 0.06 seconds for training and 0.03 seconds for prediction of each batch. We notice that, while the time complexity of GLAD and GCE reported in [10] coincides with our results for training, the results on the test data differ considerably. In fact, the time complexity for GCE reported in [10] was 1.92 seconds, while in our experiment we found that GCE instead processes 1.92 batches/second, leading to 0.52 seconds/batch. We also considered the number of parameters of the three models, as they have a direct impact on the memory footprint and the training/execution time. Only the trainable parameters for each model are reported, under the default setting. Since both GLAD and GCE use pre-trained embeddings, their parameter count does not include the embedding size, while our approach includes the embeddings as parameters, as they are learned by the model. The GLAD model has ∼14M trainable parameters, while the GCE model has ∼5M parameters. Since GCE has a single encoder, compared to a different encoder for each slot as in GLAD, it reduces the model size to almost one third. On the other hand, our G-SAT approach has only ∼460K parameters, making it suitable for low-memory-footprint scenarios. To sum up, GCE has over 11 times as many parameters as the proposed model, while GLAD has over 31 times as many.

Discussion
Both GLAD and GCE, by default, use embeddings of size 400, while our G-SAT model has a default embedding size of 128. We therefore also investigated the effect of the embedding dimension on these different models, to understand whether the results are consistent, or whether the choice of the embedding size has a significant role in the performance of the models (as the embeddings are learned during training). First, we ran our approach with the same embedding size as GLAD and GCE, i.e. dimension 400. In this case G-SAT achieved 88.6 and 86.7 on the dev and test sets for English, respectively, still outperforming GLAD (dev: 88.4, test: 84.6) and GCE (dev: 89.0, test: 85.1). In a second experiment, we reduced the embedding dimension of both GLAD and GCE to 128 and trained the models. The performance of GLAD (dev: 87.1, test: 84.6), GCE (dev: 87.8, test: 85.6) and G-SAT (dev: 89.0, test: 87.6) again showed the same trend.
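The seconds-per-batch figures above are measured after the models are loaded and ready to run; a hedged sketch of one common way to time a loaded PyTorch model on a GPU, synchronising CUDA so that asynchronous kernels are fully counted, is shown below. The model and batch objects are placeholders, not the actual systems being compared.

```python
# Hedged sketch: average seconds per batch for an already loaded model, with CUDA
# synchronisation so that asynchronous GPU kernels are fully counted in the timing.
import time
import torch

@torch.no_grad()
def seconds_per_batch(model, batches, device="cuda"):
    model.to(device).eval()
    if device == "cuda":
        torch.cuda.synchronize()
    start = time.perf_counter()
    for batch in batches:
        model(batch.to(device))          # 'batch' is a placeholder for the model's input
    if device == "cuda":
        torch.cuda.synchronize()
    return (time.perf_counter() - start) / len(batches)
```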
CONCLUSION
In this paper we addressed time complexity issues in modelling an effective dialogue state tracker, such that it is suitable for use in real-world applications, particularly where the number of slots for the task becomes very high. We proposed a neural model, G-SAT, with a simpler architecture compared to other approaches. We provided experimental evidence that the G-SAT model significantly reduces the prediction time (being more than 15 times faster than previous approaches), while still performing competitively with the state of the art. As for future work, we would like to investigate our approach in the case of multi-domain dialogue state tracking, where the DST must track multiple domains and the number of slots is much higher than in single-domain datasets.

REFERENCES
[1] Matthew Henderson, "Machine learning for dialog state tracking: A review," in The First International Workshop on Machine Learning in Spoken Language Processing, 2015.
2019-10-22T13:01:00.000Z
2019-10-22T00:00:00.000
{ "year": 2019, "sha1": "9efa00f14ebe23c924a5bf602102dee3af8a5724", "oa_license": null, "oa_url": "https://arxiv.org/pdf/1910.09942", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "9efa00f14ebe23c924a5bf602102dee3af8a5724", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
238032731
pes2o/s2orc
v3-fos-license
Are Dark Triad Traits related with Intimate Partner Violence and Stalking Behavior? A survey on an Italian sample

Machiavellianism has been studied most commonly in the personality literature (McHoskey, Worzel, & Szyarto, 1998); it is defined by high self-interest and tendencies toward deception, exploitation and manipulation of others, and by a cynical perspective on both life and interpersonal relationships (Christie and Geis 1970); Machiavellian individuals tend to be viewed as ambitious, strategic, capable, and amoral. Finally, individuals with a high level of the Narcissistic trait tend to focus extensively on themselves; they are characterized by a sense of self-absorption, dominance, grandiosity and devaluation of others (Emmons 1987). Over the past several years there has been an increase in research studying the usefulness of these traits (for a review, see Furnham et al. 2013). Recent surveys have found that the Dark Triad traits are differently informative in predicting workplace, interpersonal, mating and antisocial behaviour, such as aggressiveness and financial misbehaviour (e.g., Jones and Paulhus 2010; Lee and Ashton 2005; Malesza and Ostaszewski 2016a, b). However, the role of the Dark Triad traits in Intimate Partner Violence (I.P.V.) or stalking behavior is poorly researched. Carton and Egan found in their study that psychopathy had the strongest associations and most predictive relationships with both psychological abuse and physical/sexual abuse (Carton and Egan, 2017); Satoru likewise found that only psychopathy uniquely predicted IPV perpetration, and the Dark Triad personality is considered a proximal risk factor for I.P.V. behavior (Satoru 2017, 2019). A broad range of risk factors have been implicated in IPV and stalking, and they are typically identified by comparing the characteristics of individuals who engage in the behaviour of interest to those who do not. In contrast to the empirical evidence base relating to I.P.V., the stalking literature (about Dark Triad traits) is less comprehensive (for a review see Dixon and Bowen 2012). In this study, we address the relationship between the Dark Triad traits and Intimate Partner Violence (I.P.V.) or Stalking behavior in an Italian sample.

Aims of the Study
Having found studies demonstrating the role of the Dark Triad of Personality in adverse and aggressive behavior, we wondered whether these traits could be correlated with Intimate Partner Violence behavior and with Stalking behavior. The Dark Triad traits, at a sub-clinical level, have been widely studied in the international literature (James et al. 2014; Jonason et al. 2013b; Petrides et al. 2011), with consistent research results reporting mutual positive correlations, in particular in the introductory study of the Short Dark Triad questionnaire (SD3), with Pearson correlation coefficients of, respectively: Machiavellianism/Narcissism = .23, Machiavellianism/Psychopathy = .37, Narcissism/Psychopathy = .20 (Jones and Paulhus 2014; Paulhus and Jones 2011). In the present study the SD3 questionnaire was anonymously administered to an Italian sample, in order to investigate any correlations between the Dark Triad traits and subjects' admission of having hit their partner (Intimate Partner Violence, I.P.V.) or of stalking behaviour (as an ex-partner). The sample was also analyzed by dividing it into age groups and social-professional roles, verifying any correlations between the Dark Personality traits and the selected age group or the subject's social-professional role.
Finally, through true/false items, we investigated any correlation between the Dark Triad traits and subjects' admission of having suffered a criminal conviction, having been involved in brawls (twice or more) or having had financial troubles.

The Short Dark Triad Questionnaire - SD3
The SD3 (Jones and Paulhus 2014) is a self-report questionnaire developed to assess the three dimensions of the Dark Triad personality model; it is a 27-item scale with nine items in each subscale, scored on a 5-point Likert scale (ranging from strongly disagree = 1 to strongly agree = 5), with statements that reflect the aforementioned dimensions of the Dark Triad. The psychometric properties of the original SD3 revealed acceptable internal consistency for every dimension and convergent validity with external variables, with Cronbach's alpha of the subscales ranging from .74 to .78 (Furnham et al. 2013; Jones & Paulhus 2014; Lee & Ashton 2005; Paulhus & Williams 2002). An already validated Italian version of the SD3 questionnaire was chosen for administration to our sample (Somma, Paulhus, Borroni, & Fossati, 2020).

Participants and Procedure
The Italian version of the SD3 (Somma, Paulhus, Borroni, & Fossati, 2020) was anonymously administered to a sample randomly distributed throughout the Italian territory. The sample is composed as follows: total subjects = 541 (female = 300; male = 241), aged between 18 and 75 years, divided into age groups for the purpose of the study: age range 1 = 18-25 years; age range 2 = 26-35 years; age range 3 = 36-45 years; age range 4 = over 46 years. Moreover, participants were asked to answer questions related to:
• Social-professional role; the following categories were identified: 1) Unemployed, 2) Student, 3) Employee, 4) Self-Employed, 5) Executive, 6) Retired
• (through a true/false item) Admission of having hit one's partner on several occasions (I.P.V.)
• (through a true/false item) Admission of stalking behavior (as an ex-partner)
• (through a true/false item) Admission of having suffered a criminal conviction
• (through a true/false item) Admission of having been involved in a brawl twice or more
• (through a true/false item) Admission of having had financial troubles
The collected data were processed using SPSS v25 software (IBM SPSS 2017); Pearson's correlation was used to analyze the associations among the different variables, while a regression analysis (backward elimination method) was further performed to examine whether the Dark Triad traits were able to predict I.P.V. and Stalking behavior.

Results
Descriptive statistics of the sample and the mutual intercorrelations among the Dark Triad traits resulting from the administration of the SD3 to our sample are presented in Table 1 and Table 2. These results (Tab. 2) agree with the aforementioned literature concerning the mutual correlations between the dark traits of personality. No positive correlation was noted between the Dark traits and the examined age groups; on the contrary, we noted a negative correlation (p < .01, two-tailed) between Psychopathy and the 46+ age group.
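The correlation and regression analyses above were run in SPSS v25; purely as an illustration of the backward elimination procedure mentioned in the Procedure section, a hedged sketch using pandas and statsmodels follows. The data file and column names are assumptions about how the questionnaire responses might be coded, not the study's actual dataset.

```python
# Hedged sketch: backward elimination of Dark Triad predictors for an admitted-I.P.V. outcome,
# dropping the least significant predictor until all remaining p-values fall below alpha.
import pandas as pd
import statsmodels.api as sm

def backward_elimination(df: pd.DataFrame, outcome: str, predictors: list, alpha: float = 0.05):
    remaining = list(predictors)
    while remaining:
        X = sm.add_constant(df[remaining])
        model = sm.OLS(df[outcome], X).fit()
        pvals = model.pvalues.drop("const")
        worst = pvals.idxmax()
        if pvals[worst] < alpha:                 # every remaining predictor is significant
            return model, remaining
        remaining.remove(worst)                  # least significant predictor is removed
    return None, []

# Illustrative usage with assumed column names.
# df = pd.read_csv("sd3_sample.csv")
# model, kept = backward_elimination(df, "ipv_admission",
#                                    ["machiavellianism", "narcissism", "psychopathy"])
```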
Then we proceeded to verify any probable correlation between the Dark Triad traits and the subject's social-professional role. Results are shown in Table 4. No positive correlation between social-professional roles and Dark Triad traits was observed. However, the negative correlation between Narcissism and unemployment is worth noting, and will be discussed in the conclusions paragraph. To perform an in-depth analysis of the data, a partial correlation was then computed for each Dark Personality dimension with the selected variables, partialling out the effects of the remaining two dimensions. Results are shown in Tabs 6, 7 and 8.

Discussion
In this study we explored the dark personality traits in an Italian sample through the anonymous administration of the Italian version of the Short Dark Triad questionnaire (Somma, Paulhus, Borroni, & Fossati, 2020), to investigate the association between the Dark Triad traits and the subject's admission of having hit one's partner (I.P.V., Intimate Partner Violence) or the subject's admission of Stalking behavior (as an ex-partner). Moreover, by dividing the sample into 4 age groups and social-professional roles, we investigated any correlations between the Dark Personality traits and the age group or social-professional role. Finally, through true/false items, we investigated any correlation between the Dark traits and the subject's admission of having suffered a criminal conviction, having been involved in brawls (twice or more) or having had financial troubles. Our results about the mutual correlations among the Dark Personality traits (see Tab. 2) agree with the international literature (Fehr et al. 1992). Proceeding with our study of the sample and listing subjects according to their social-professional role (Unemployed, Student, Employee, Self-Employed, Executive, Retired), we verified any probable correlation with the Dark Triad traits. The results do not show any positive correlation between the Dark traits and the individual's social-professional role; they reveal only a negative correlation (p < .05, two-tailed) between Narcissism and unemployed status (see Tab. 6). Such results raise a question: is it the high Narcissism trait that drives the subject toward a well-defined social-professional role, acceptable for the "self" and for society, or is it the difficulty in finding a job, and therefore being unemployed, that lowers the level of such a dark trait? A first consideration prompts us to think that this hypothesis may have a foundation. A high level of the Narcissistic trait, because of its peculiarity, can lead the subject to engage himself in this sense, since he tends toward a grandiose idea of himself and of a social role that he perceives as high. All this can make a high level of the Narcissistic trait hardly consistent with unemployed status. More studies are desirable for a better understanding. Moreover, we investigated any probable correlation between the Dark traits and the subject's admission of: 1) having suffered criminal convictions, 2) having been involved in brawls (two times or more), 3) having had financial troubles (see Tab. 8-9-10). After partialling out, for each dark trait, the effects of the remaining two, results show that only the Psychopathy trait has a positive correlation with the subject's admission of having suffered criminal convictions (p < .05, two-tailed) and with the subject's admission of having been involved in brawls (two times or more) (p < .01, two-tailed).
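The partial correlations just described examine each Dark Triad dimension while holding the other two constant; one hedged way to compute such a coefficient outside SPSS is from the residuals of two auxiliary least-squares fits, sketched below with illustrative array names.

```python
# Hedged sketch: partial correlation of one trait (x) with an outcome (y), controlling for the
# remaining two traits (columns of z), computed from the residuals of least-squares fits.
import numpy as np

def partial_corr(x: np.ndarray, y: np.ndarray, z: np.ndarray) -> float:
    z1 = np.column_stack([np.ones(len(x)), z])                 # design matrix with intercept
    res_x = x - z1 @ np.linalg.lstsq(z1, x, rcond=None)[0]     # x with z partialled out
    res_y = y - z1 @ np.linalg.lstsq(z1, y, rcond=None)[0]     # y with z partialled out
    return float(np.corrcoef(res_x, res_y)[0, 1])
```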
Lastly, we kept in mind the international literature demonstrating the role of the Dark Personality traits in aggressive behavior (e.g., Jones and Paulhus 2010; Lee and Ashton 2005; Malesza and Ostaszewski 2016a, b), I.P.V. (Satoru 2017, 2019), and psychological abuse (Carton and Egan, 2017). Then, with a similar methodology, we proceeded to verify whether the Dark Triad personality traits and the presence of I.P.V. are good predictors of stalking behavior. Linear regression results show that the Machiavellianism and Narcissism traits are not related with Stalking behavior and they were excluded from the model as not predictive; on the contrary, the Psychopathy trait and the presence of I.P.V. are good predictors of Stalking behavior (see Tab. 13-14-15). Given the linear regression results in our sample about the role of the Dark Triad traits and the presence of I.P.V. in the prediction of Stalking behavior, we proceeded to verify whether there were gender differences, and for this reason the same linear regression methodology was repeated, in the first step analyzing only male subjects, and in the second step only female subjects. In the male group the linear regression results showed a convergence with the results of the entire sample, confirming Psychopathy and the presence of I.P.V. as good predictors of Stalking behavior, and excluding both Machiavellianism and Narcissism as not predictive (see. On the other hand, in the female group the results differ from the previous ones (see and confirm only Psychopathy as a good predictor of Stalking behavior.

Limitation
The first limitation of this study is the size of the sample which, although quite representative, is still limited, especially in the male group. A further limitation concerns the use of a self-report questionnaire that, although anonymously administered, still suffers from the influence of social desirability and self-perception, and might not reflect the subject's behavior accurately.

Compliance with Ethical Standards
Ethical Approval: All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards.
Informed Consent: Informed consent was obtained from all individual participants included in the study.

Table 1 - Descriptive statistics of the Sample
Table 4 - Dark Triad traits and social-professional role - bivariate Pearson correlation coefficients; *p < .05 level (two-tailed)
Table 5 illustrates Pearson's correlation coefficients between the Dark Triad traits and the subject's admission of: criminal conviction / brawl / financial troubles. Results show positive correlations between both the Narcissism (p < .05, two-tailed) and Psychopathy (p < .01, two-tailed) dimensions and the following variables: admission of having suffered a criminal conviction and admission of having been involved in brawls. No significant correlation was noted between the Dark Triad personality traits and the admission of having had financial troubles. The significant correlation between Machiavellianism and the admission of having been involved in brawls (p < .01, two-tailed) is also relevant.
Table 6 - Partial correlation coefficients - Machiavellianism and: criminal conviction / brawls / financial troubles

By partialling out the effects of the other Dark Personality traits (Tab. 6 and Tab. 7), no significant correlation is found between either Machiavellianism or Narcissism and the selected variables. These data lead us to note that the intercorrelations among the Dark Personality traits have a strong influence on the interpretation of the data themselves, considering that, once the effects of each dimension are partialled out with respect to the remaining two, only the Psychopathy trait retains a positive correlation with the selected variables (Tab. 8).

Table 8 - Partial correlation coefficients - Psychopathy and: criminal conviction / brawls / financial troubles

More specifically, the Psychopathy trait has a positive correlation (p < .05, two-tailed) with the admission of having suffered a criminal conviction, and a stronger correlation (p < .01, two-tailed) with the admission of having been involved in brawls. No significant correlation was found between Psychopathy and the admission of having had financial troubles. We then arrived at the main aim of the study and proceeded to verify any probable correlation between the Dark Triad traits and the subject's admission of having hit one's partner on several occasions (I.P.V.) or of stalking behavior (as an ex-partner). The first step is the Pearson correlation: Table 9 shows the results of the bivariate Pearson correlations among the Dark Triad traits and I.P.V./stalking behavior. The results in Table 9 show a relation between both the Machiavellianism and Psychopathy traits and both I.P.V. and stalking behavior, whereas no significant correlation has been noted between the Narcissism trait and either I.P.V. or stalking behavior.

Table 9 - Dark Triad traits and admission of I.P.V./stalking behavior - bivariate Pearson correlation coefficients

Keeping in mind the mutual intercorrelations between the Dark Triad traits, and to perform an in-depth analysis of the data in order to verify the hypothesis that these traits are predictors of Intimate Partner Violence, a linear regression (backward elimination method) was performed with the Dark Triad traits as independent variables and I.P.V. as the dependent variable. The results of the linear regression are shown in Tables 10, 11 and 12. They show that the Machiavellianism and Psychopathy traits are related to I.P.V. behavior (Model 2), while the Narcissism trait was removed from the model as not predictive of I.P.V. (Model 1).

Table 14 - Linear regression, Dark Triad traits, I.P.V. and stalking behavior - ANOVA*

These results (Tab. 13, 14 and 15) show that, in our sample, the Psychopathy trait and the presence of Intimate Partner Violence (I.P.V.) are good predictors of stalking behavior (Model 3), while the Machiavellianism and Narcissism traits were excluded from the model as not predictive of stalking behavior (Models 1 and 2). Knowing that the stalking literature on the Dark Triad traits is less comprehensive (for a review see Dixon and Bowen 2012), we wondered whether these dark traits of personality are related to Intimate Partner Violence or to stalking behavior. The results of the Pearson correlations (see Tab. 9) show that Machiavellianism and Psychopathy are both related to the admission of Intimate Partner Violence and to the admission of stalking behavior (p < .01, two-tailed, for Psychopathy and p < .05, two-tailed, for Machiavellianism); Narcissism, by contrast, is not related to I.P.V. or stalking behavior. We also note a relation between I.P.V.
and stalking behavior (p < .01, two-tailed). Keeping in mind the mutual intercorrelations between the Dark Triad traits, and to perform an in-depth analysis of the data in order to verify the hypothesis that these traits are predictors of Intimate Partner Violence, a linear regression (backward elimination method) was performed with the Dark Triad traits as independent variables and I.P.V. as the dependent variable. The results show that the Machiavellianism and Psychopathy traits are related to Intimate Partner Violence, while the Narcissism trait was removed from the model (as not predictive) and is confirmed to have no relation with I.P.V. behavior.
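The backward-elimination regression reported in these tables can be sketched as follows; the variable names, the 0.05 removal threshold and the statsmodels-based implementation are illustrative assumptions rather than the authors' actual procedure.

```python
# Minimal sketch (not the authors' code): OLS with backward elimination,
# predicting stalking behavior from the Dark Triad traits and I.P.V.
# Column names and the 0.05 removal threshold are assumptions for illustration.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("dark_triad_survey.csv")  # hypothetical data file
predictors = ["machiavellianism", "narcissism", "psychopathy", "ipv"]
y = df["stalking"]

def backward_eliminate(y, X, threshold=0.05):
    """Drop the least significant predictor until all p-values fall below threshold."""
    cols = list(X.columns)
    while cols:
        model = sm.OLS(y, sm.add_constant(X[cols])).fit()
        pvals = model.pvalues.drop("const")
        worst = pvals.idxmax()
        if pvals[worst] < threshold:
            return model, cols            # all remaining predictors are significant
        cols.remove(worst)                # remove the non-predictive trait and refit
    return None, []

model, kept = backward_eliminate(y, df[predictors])
print("Retained predictors:", kept)       # e.g., psychopathy and ipv in the final model
if model is not None:
    print(model.summary())
```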
2021-08-27T16:19:45.176Z
2021-01-08T00:00:00.000
{ "year": 2021, "sha1": "2ec51c63e50f3d42727cec815f30a343d8bc1da7", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.31124/advance.13514900.v1", "oa_status": "GREEN", "pdf_src": "ScienceParsePlus", "pdf_hash": "ce8f2b8c7c379ab8deec46bd25b942d6abf8b1c6", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Psychology" ] }
16981508
pes2o/s2orc
v3-fos-license
Psychosocial treatment and interventions for bipolar disorder: a systematic review Background Bipolar disorder (BD) is a chronic disorder with a high relapse rate, significant general disability and burden and with a psychosocial impairment that often persists despite pharmacotherapy. This indicates the need for effective and affordable adjunctive psychosocial interventions, tailored to the individual patient. Several psychotherapeutic techniques have tried to fill this gap, but which intervention is suitable for each patient remains unknown and it depends on the phase of the illness. Methods The papers were located with searches in PubMed/MEDLINE through May 1st 2015 with a combination of key words. The review followed the recommendations of the Preferred Items for Reporting of Systematic Reviews and Meta-Analyses statement. Results The search returned 7,332 papers; after the deletion of duplicates, 6,124 remained and eventually 78 were included for the analysis. The literature supports the usefulness only of psychoeducation for the relapse prevention of mood episodes and only in a selected subgroup of patients at an early stage of the disease who have very good, if not complete remission, of the acute episode. Cognitive-behavioural therapy and interpersonal and social rhythms therapy could have some beneficial effect during the acute phase, but more data are needed. Mindfulness interventions could only decrease anxiety, while interventions to improve neurocognition seem to be rather ineffective. Family intervention seems to have benefits mainly for caregivers, but it is uncertain whether they have an effect on patient outcomes. Conclusion The current review suggests that the literature supports the usefulness only of specific psychosocial interventions targeting specific aspects of BD in selected subgroups of patients. Background Our contemporary understanding of bipolar disorder (BD) suggests that there is an unfavorable outcome in a significant proportion of patients [1,2]. In spite of recent advances in pharmacological treatment, many BD patients will eventually develop chronicity with significant general disability and burden. The burden will be significant also for their families and the society as a whole [3,4]. Today, we also know that unfortunately, symptomatic remission is not identical and does not imply functional recovery [5][6][7]. Since pharmacological treatment often fails to address all the patients' needs, there is a growing need for the development and implementation of effective and affordable interventions, tailored to the individual patient [8]. The early successful treatment, with full recovery if possible, as well as the management of subsyndromal symptoms and of psychosocial stress and poor adherence are factors predicting earlier relapse and poor overall outcome [9,10]. In this frame, there are several specific adjunctive psychotherapies which have been developed with the aim of filling the above gaps and eventually improve the illness outcome [11], but it is still unclear whether they truly work and which patients are eligible and when [12][13][14][15][16][17][18][19]. The current study is a systematic review of the efficacy of available psychosocial interventions for the treatment of adult patients with BD. Methods Reports investigating psychotherapy and psychosocial interventions in BD patient samples were located with searches in Pubmed/MEDLINE through May 1, 2015. Only reports in English language were included. 
The Pubmed database was searched using the search terms 'bipolar' and 'psychotherapy' or 'cognitive-behavioral' or 'CBT' or 'psychoeducation' or 'interpersonal and social rhythm therapy' or 'IPSRT' or 'family intervention' or 'family therapy' or 'group therapy' or 'intensive psychosocial intervention' or 'cognitive remediation' or 'functional remediation' or 'Mindfulness'. The following rules were applied for the selection of papers: 1. Papers in English language. 2. Randomized controlled trials. This review followed the recommendations of the Preferred Items for Reporting of Systematic Reviews and Meta-Analyses (PRISMA) statement [20]. Results The search returned 7,332 papers, and after the deletion of duplicates 6,124 remained for further assessment. After these papers were screened on the basis of title and abstract, the remaining papers were assessed in full and eventually 78 were included in the analysis (Figure 1). The number of papers reported for each intervention includes RCTs, post hoc analyses and meta-analyses together. Cognitive-behavioural therapy (CBT) The efficacy of CBT in BD was investigated in 14 studies which utilized CBT as adjunct treatment to pharmacotherapy or treatment as usual (TAU). They utilized some kind of control intervention which should not be considered an adequate placebo. It is also interesting that the oldest study was conducted in 2003. This first study lasted 12 months, concerned 103 BD-I patients during the acute depressive phase, and randomized them to 14 sessions of CBT or a control intervention. There was no placebo condition. These authors reported that at end point fewer patients in the CBT group relapsed in comparison to controls (44 vs. 75%; HR = 0.40, P = 0.004), and that CBT patients had shorter episode duration, fewer admissions and mood symptoms, and higher social functioning [21].
Finally, the use of combined CBT and pharmacotherapy in 40 patients with refractory bipolar disorder suggested that the combination group had less hospitalization events in comparison to the group in the 12-month evaluation (P = 0.015) and lower depression and anxiety in the 6-month (P = 0.006; P = 0.019), 12-month (P = 0.001; P < 0.001) and 5-year (P < 0.001, P < 0.001) evaluation time points. However it is interesting that after the 5-year follow-up, 88.9% of patients in the control group and 20% of patients in the combination group showed persistent affective symptoms and difficulties in social-occupational functioning [27]. The use of CBT in BD comorbid with social anxiety disorder is of doubtful efficacy [28], while there are some preliminary data on the efficacy of an Internet-based CBT intervention [29] as well as recovery-focused addon CBT [30] and CBT for insomnia [31] in comparison to TAU. The review of the available data so far give limited support for the usefulness of CBT during the acute phase of bipolar depression as adjunctive treatment in patients with BD, but definitely not for the maintenance phase. During the maintenance phase, booster sessions might be necessary, but the data are generally overall negative. Probably, patients at earlier stages of the illness might benefit more from CBT. Unfortunately the type of patients who are more likely to benefit from CBT constitutes a minority in usual clinical practice. Psychoeducation The basic concept behind psychoeducation for BD concerns the training of patients regarding the overall awareness of the disorder, treatment adherence, avoiding of substance abuse and early detection of new episodes. The efficacy of psychoeducation in BD was investigated in 30 studies, all of which utilized psychoeducation as adjunct treatment to pharmacotherapy or TAU. All these studies utilize some kind of control intervention which should not be considered as an adequate placebo. It is also interesting that the oldest study was conducted in 1991. The earliest psychoeducational study was open and uncontrolled and reported that giving information about lithium improved the overall attitude towards treatment [32,33]. A similar small study was conducted a few years later and reported similar results [34]. However, the first study on the wide teaching of patients to recognize and identify the components of their disease with emphasis on early symptoms of relapse and recurrence and to seek professional help as early as possible had not been conducted until 1999. It included 69 patients for 18 months and compared psychoeducation (limited number of sessions; 7-12) vs. TAU. It reported a significant prolongation of the time to first manic relapse (P = 0.008) and significant reductions in the number of manic relapses over 18 months (30 vs. 52%; P = 0.013) as well as improved overall social functioning. Psychoeducation had no effect on depressive relapses [35]. In a more systematic way, the efficacy of the adjunctive group psychoeducation was tested by the Barcelona group. Their trial included 120 euthymic BD patients who were randomly assigned to 21 sessions of group psychoeducation vs. non-specific group meetings. The study included a follow-up with a duration of 2 and 5 years. The results suggested that psychoeducation exerted a beneficial effect on the rate of and the time to recurrence as well as concerning hospitalizations per patient. 
At the end of the 2-year follow-up, 23 subjects (92%) in the control group fulfilled the criteria for recurrence versus 15 patients (60%) in the psychoeducation group (P < 0.01). This beneficial effect was high and was not reduced over time. The literature suggests that psychoeducation should be broad and that enhanced relapse prevention alone does not seem to work. This was the conclusion from another study with a different design. That study reported that only occupational functioning, but not time to recurrence, improved with an intervention consisting of training community mental health teams to deliver enhanced relapse prevention [39]. Additionally, a study with a 12-month follow-up and with a similar design to the first study of the Barcelona group, but with 16 sessions, reported no differences between groups in mood symptoms, psychosocial functioning and quality of life. It did find, however, that there was a difference in the subjectively perceived overall clinical improvement by subjects who received psychoeducation. The authors suggested that characteristics of the sample could explain this discrepancy, as patients with a more advanced stage of disease might have a worse response to psychoeducation [16]. In accordance with the above, a post hoc analysis of the original Barcelona data revealed that patients with more than seven episodes did not show significant improvement with group psychoeducation in time to recurrence, and those with more than 14 episodes did not benefit from the treatment in terms of time spent ill [40]. A 2-year follow-up in 108 BD patients investigated psychoeducation plus pharmacotherapy vs. pharmacotherapy alone. Psychoeducation consisted of eight 50-min sessions of psychological education, followed by monthly telephone follow-up care and psychological support. The results suggested that psychoeducation improved medication compliance (P = 0.008) and quality of life (P < 0.001) and was associated with fewer hospitalizations (P < 0.001) [41]. Another study randomized 80 BD patients to either the psychoeducation or the control group and reported that the psychoeducation group scored significantly higher on functioning levels (emotional functioning, intellectual functioning, feelings of stigmatization, social withdrawal, household relations, relations with friends, participating in social activities, daily activities and recreational activities, taking initiative and self-sufficiency, and occupation) (P < 0.05) compared with the control group after psychoeducation [42]. A prospective 5-year follow-up of 120 BD patients suggested that group psychoeducation might be more cost-effective [43]. In support of the cost-effectiveness of psychoeducation was one trial in 204 BD patients which compared 20 sessions of CBT vs. 6 sessions of group psychoeducation and reported that overall the outcome was similar in the two groups in terms of reduction of symptoms and likelihood of relapse, but psychoeducation was associated with a decrease of costs ($180 per subject vs. $1,200 per subject for CBT) [44]. Currently, there are some proposals of online psychoeducation programmes, but results are still inconclusive or pending [45,46]. More complex multimodal approaches and multicomponent care packages have been developed, and usually psychoeducation is a core element. One of these packages also included CBT and elements of dialectical behaviour therapy and social rhythms and has shown a beneficial effect after the 1-year follow-up in comparison to TAU [47].
Another included a combination of CBT plus psychoeducation and reported that it was more effective in comparison to TAU in 40 refractory BD patients concerning hospitalization and residual symptoms at 12 months follow-up [27]. A collaborative care study on 138 patients and follow-up of 12 months also gave positive results [48]. One multicentred Italian study assessed the efficacy of the Falloon model of psychoeducational family intervention (PFI), originally developed for schizophrenia management and adapted to BD-I disorder. It included 137 recruited families, of which 70 were allocated to the experimental group and 67 to the TAU group. At the end of the intervention, significant improvements in patients' social functioning and relatives' burden were found in the treated group compared to TAU [49]. In general, the beneficial effect seems to be present concerning manic but not depressive episodes [50,51], while a benefit on social role function and quality of life seems also to be present [50]. The comparison of 12 sessions of psychoeducation vs. TAU in 71 BD patients reported that at 6 weeks, the intervention improved treatment adherence [52], while another on 61 BD-II patients reported no significant effect on the regulation of biological rhythms when compared to standard pharmacological treatment [53]. No significant effect was reported concerning the quality of life by another recent study on 61 young bipolar adults [54]. On the contrary, a trial on 47 BD patients reported that a psychoeducation programme designed for internalized stigmatization may have positive effects on the internalized stigmatization levels of patients with bipolar disorder [55]. There is preliminary evidence that a Web-based treatment approach in BD ('Living with Bipolar'-LWB intervention) is feasible and potentially effective [56]; however, other Web-based attempts returned negative results [57]. Automated mobile-phone intervention is another option and it has been reported to be feasible, acceptable and might enhance the impact of brief psychoeducation on depressive symptoms in BD. However, sustainment of gains from symptom self-management mobile interventions, once stopped, may be limited [58]. One meta-analysis of 16 studies, 8 of which provided data on relapse reported that psychoeducation appeared to be effective in preventing any relapse (OR: 1.98-2.75; NNT: 5-7) and manic/hypomanic relapse (OR: 1.68-2.52; NNT: 6-8), but not depressive relapse. That meta-analysis reported that group, but not individually, delivered interventions were effective against both poles of relapse [59]. In summary, the literature suggests that interventions of 6-month group psychoeducation seem to exert a long-lasting prophylactic effect. However this is rather restricted to manic episodes and to patients at the earlier stages of the disease who have achieved remission before the intervention has started. Although the mechanism of action of psychoeducation remains unknown, it is highly likely that the beneficial effect is mediated by the enhancement of treatment adherence, the promoting of lifestyle regularity and healthy habits and the teaching of early detection of prodromal signs. Interpersonal and social rhythm therapy (IPSRT) Interpersonal and social rhythm therapy is based on the hypothesis that in vulnerable individuals, the experience of stressful life events and unstable or disrupted daily routines can lead to affective episodes via circadian rhythm instability [18]. 
In this frame, IPSRT includes the management of affective symptoms through improvement of adherence to medication and stabilizing social rhythms and the resolution of interpersonal problems. Four papers investigating its efficacy were identified. The first study concerning its efficacy in BD included 175 acutely ill BD patients and followed them for 2 years. It included four treatment groups, reflecting IPSRT vs. intensive clinical management during the acute and the maintenance phase. The results revealed no difference between interventions in terms of time to remission and in the proportion of patients achieving remission (70 vs. 72%), although those patients who received IPSRT during the acute treatment phase survived longer without an episode and showed higher regularity of social rhythms [60]. In spite of some encouraging findings from post hoc analysis, there were eventually no significant differences between genders and concerning the improvement in occupational functioning [61]. More recently, a 12-week study in which unmedicated depressed BD-II patients were randomized to IPSRT (N = 14) vs. treatment with quetiapine (up to 300 mg/day; N = 11), showed that both groups experienced significant reduction in symptoms over time, but there were no group-by-time interactions. Response and drop-out rates were similar [62]. Finally, one 78-week trial investigated the efficacy of IPSRT vs. specialist supportive care on depressive and mania outcomes and social functioning, and mania outcomes in 100 young BD patients. The results revealed no significant difference between therapies [63]. Overall, there are no convincing data on the usefulness of IPSRT during the maintenance phase of BD. There are, however, some data suggesting that if applied early and particularly already during the acute phase, it might prolong the time to relapse. Family intervention The standard family intervention for BD targets the whole family and not only the patient and includes elements of psychoeducation, communication enhancement and problem-solving skills training. It also includes support and self-care training for caregivers. Fifteen papers concerning the efficacy of family intervention in BD were found. The first study on this intervention took part in 1991 and reported that carer-focused interventions improve the knowledge of the illness [64]. Since then, there have been a number of studies which in general support the use of adjunctive family-focused treatment. There are different designs and approaches which were tested in essentially open trials. One intervention design consists of 21 1-h sessions which combine psychoeducation, communication skills training and problem-solving training. The sessions take place at home and included both the patient and his/her family during the post-episode period. The treatment has shown its efficacy vs. crisis management in 101 BD patients in reducing relapses (35 vs. 54%) and increasing time to relapse (53 vs. 73 weeks, respectively) [65,66]. It was also reported to reduce hospitalization risk compared with individual treatment (12 vs. 60%) [67]. It is important that the benefits extended to the 2-year follow-up were particularly useful for depressive symptoms, in families with high expressed emotion and for the improvement of medication adherence [66]. 
Similar results were reported by a study of 81 BD patients and 33 family dyads, which reported that the odds ratio for hospitalization at 1-year follow-up was related with high perceived criticism (by the patients from their relatives), poor adherence and with the relatives' lack of knowledge concerning BD (OR: 3.3; 95% CI 1.3-8.6) [68]. Adjunctive psychoeducational marital intervention in acutely ill patients was reported to have a beneficial effect concerning medication adherence and global functioning, but not for symptoms [69]. Neither adjunctive family therapy nor adjunctive multifamily group therapy improves the recovery rate from acute bipolar episodes when compared with pharmacotherapy alone [14]. These interventions could be beneficial for patients from families with high levels of impairment and could result in a reduction of both the number of depressive episodes and the time spent in depression (Cohen d = 0.7-1.0) [70]. In this frame, in those patients who recovered from the intake episode, multifamily group therapy was associated with the lowest hospitalization risk [71]. Another format included a 90-min duration, delivered to caregivers of euthymic BD patients; after 15-months, it was reported to have both reduced the risk of recurrence in comparison to a control group (42 vs. 66%; NNT: 4.1 with 95% CI 2.4-19.1) and also to have delayed recurrence [72]. It was particularly efficacious in the prevention of hypomanic/manic episodes and also in the reduction of the overall family burden [73]. It had been shown before that carer-focused interventions improve the knowledge of the illness [64], reduce burden [74] and also reduce the general and mental health risk of caregivers [75]. Another format of intervention included 12 sessions of group psychoeducation for the patients and their families. It has been found superior to TAU in 58 BD patients concerning the prevention of relapses, the decrease of manic symptoms and the improvement of medication adherence [76]. Finally, the comparison of family-based therapy (FBT) vs. brief psychoeducation (crisis management) in 108 patients with BD reported that the outcome depended on the existing levels of appropriate self-sacrifice [77]. Overall, the literature supports the conclusion that interventions which focus on families and caregivers exert a beneficial impact on family members, but the effect on the patients themselves is controversial. The effect includes issues ranging from subjective well-being to general health, but it is almost certain that there is a beneficial effect on issues like treatment adherence. Intensive psychosocial intervention There are three papers investigating various methods of intensive psychosocial intervention. 'Intensive' psychotherapy has been tested on 293 acutely depressive BD outpatients in a multi-site study. Patients were randomized to 3 sessions of psychoeducation vs. up to 30 sessions of intensive psychotherapy (family-focused therapy, IPSRT or CBT). The results suggested that the intensive psychotherapy group showed higher recovery rates, shorter times to recovery and greater likelihood of being clinically well in comparison to patients on short intervention [78]. The functional outcome was also reported to be better after 1 year [79]. A second trial randomized 138 BD patients to receive collaborative care (contracting, psychoeducation, problem-solving treatment, systematic relapse prevention and monitoring of outcomes) vs. TAU. 
The results suggested that collaborative care had a significant and clinically relevant effect on the number of months with depressive symptoms, as well as on severity of depressive symptoms, but there was no effect on symptoms of mania or on treatment adherence [48]. Cognitive remediation (CR) and functional remediation (FR) Cognitive remediation and functional remediation tailored to the needs of BD patients include education on neurocognitive deficits, communication, autonomy and stress management. There are five papers on the efficacy of CR and FR. One uncontrolled study in 15 BD patients applied a type of CR and focused on mood monitoring and residual depressive symptoms, organization, planning and time management, attention and memory. The results suggested that there was an improvement of residual depressive symptoms, executive functions and general functioning. Patients with greater neurocognitive impairment had less benefit from the intervention [80]. The combination of neurocognitive techniques with psychoeducation and problem solving within an ecological framework was tested in a multicentre trial in 239 euthymic BD patients with a moderate-severe degree of functional impairment (N = 77) vs. psychoeducation (N = 82) and vs. TAU (N = 80). At end point, the combined programme was superior to TAU, but not to psychoeducation alone [81,82]. Finally, a small study in 37 BD and schizoaffective patients tested social cognition and interaction training (SCIT) as adjunctive to TAU (N = 21) vs. TAU alone (N = 16). There was no difference between groups concerning social functioning, but there was a superiority of the combination group in the improvement of emotion perception, theory of mind, hostile attribution bias and depressive symptoms [83]. A post hoc analysis using data of 53 BD-II outpatients compared FR vs. psychoeducation and vs. TAU, but the results were negative [84]. Mindfulness-based interventions Mindfulness-based intervention aims to enhance the ability to keep one's attention on purpose in the present moment and non-judgmentally. Specifically for BD patients, it includes education about the illness and relapse-prevention, combination of cognitive therapy and training in mindfulness meditation to increase the awareness of the patterns of thoughts, feelings and bodily sensations and the development of a different way (nonjudgementally) of relating to thoughts, feelings and bodily sensations. It also promotes the ability of the patients to choose the most skilful response to thoughts, feelings or situations. There are eight studies on the efficacy of mindfulness-based intervention in BD. The first study concerning the application of mindfulness-based cognitive therapy (MBCT) in BD tested it vs. waiting list and included only eight patients in each group. The results suggested a beneficial effect with a reduction in anxiety and depressive symptoms [85]. A second study included 23 BD patients and 10 healthy controls and investigated MBCT vs. waiting list and the results were compared with those of 10 healthy controls. The results suggested that following MBCT, there were significant improvements in BD patients concerning mindfulness, anxiety and emotion regulation, working memory, spatial memory and verbal fluency compared to the waiting list group [86]. The biggest study so far concerning MBCT included 95 BD patients and tested MBCT as adjunctive to TAU (N = 48) vs. TAU alone (N = 47) and followed the patients for 12 months. 
The results showed no difference between treatment groups in terms of relapse and recurrent rates of any mood episodes. There was some beneficial effect of MBCT on anxiety symptoms [87,88]. Recently, the focus has expanded to analyze the impact of MBCT on brain activity and cognitive functioning in BD, but the findings are difficult to interpret [86,89,90]. A study which applied dialectical behaviour therapy in which mindfulness represented a large component also reported some positive outcomes [91]. One study on mindfulness training reported negative results in BD patients [92]. In conclusion, the literature does not support a beneficial effect of MBCT on the core issues of BD. There are some data suggesting a beneficial effect on anxiety in BD patients. So far, there are no data supporting its efficacy in the prevention of recurrences. Discussion The current review suggests that the literature supports the usefulness only of psychoeducation for the relapse prevention of mood episodes and unfortunately only in a selected subgroup of patients at an early stage of the disease who have very good if not complete remission of the acute episode. On the other hand, CBT and IPSRT could have some beneficial effect during the acute phase, but more data are needed. Mindfulness interventions could only decrease anxiety, while interventions to improve neurocognition seem to be rather ineffective. Family intervention seems to have benefits mainly for caregivers, but it is uncertain whether they have an effect on patient outcomes. A summary of the specific areas of efficacy for each of the above-mentioned interventions is shown in Table 1. An additional important conclusion is that concerning the quality of the data available: the studies on BD patients suffer from the same limitations and methodological problems as all psychotherapy trials do. It is well known that this kind of studies suffers from problems pertaining to blindness and the nature of the control intervention. Additionally, the training of the therapist and the setting itself might play an important role. It is quite different to apply the same intervention in specialized centres than in real-world settings in everyday clinical practice. Even worse, research is not done in a standardized way and the gathering of data is far from systematic. The studies are rarely registered, adverse events are not routinely assessed, outcomes are not hierarchically stated a priori and too many post hoc analyses have been published without being stated as such. There is a lack of replication of the same treatment by different research groups under the same conditions. There are different theories on the mechanisms responsible for the efficacy of the psychosocial treatments. One suggestion concerns the enhancement of treatment adherence [93], while another proposes that improving lifestyle and especially biological rhythms, food intake and social zeitgebers could be the key factors [60]. Also, it has been proposed that the mechanism concerns the changing of dysfunctional attitudes [23], the improvement of family interactions [94] or the enhanced ability for the early identification of signs of relapse [35]. Overall, it seems that psychosocial interventions are more efficacious when applied on patients who are at an early stage of the disease and who were euthymic when recruited [14,95]. 
According to these post hoc analyses, a higher number of previous episodes [13,40] as well as a higher psychiatric morbidity and more severe functional impairment [96] might reduce treatment response, although the data are not conclusive [97]. Also, a differential effect has been proposed with neuroprotective strategies being better during the early stages [98] and rehabilitative interventions being preferable at later stages [99]. It is unclear whether IPSRT and CBT are efficacious during the acute episodes, but there are some data in support [13,60,78]. Maybe specific family environment [70,100]. Probably, there were subpopulations who especially benefited from these treatments [13,70], but these assumptions are based on post hoc analyses alone. It should be mentioned that most of the research concerns pure and classic BD-I patients, although there are some rare data concerning special populations such as BD-II [36,62], schizoaffective disorder [101,102], patients with high suicide risk [85,103,104] and patients with comorbid substance abuse [105,106]. It is interesting to note that the literature suggests that the benefits of psychosocial interventions if achieved could last for up to 5 years [36,107], although some patients might need booster sessions [23,108]. The complete range of the effect these interventions have is still uncharted. Although it is reasonable to expect a beneficial effect in a number of problems, including suicidality, research data on these issues are virtually non-existent [103,104]. Conclusions In conclusion, the literature supports the notion that adjunctive specific psychological treatments can improve specific illness outcomes. Although the data are rare, it seems reasonable that any such intervention should be applied as early as possible and should always be tailored to the specific needs of the patient in the context of personalized patient care, since it is accepted that both the patients and their relatives have different needs and problems depending on the stage of the illness. Authors' contributions KNF SM, SM and ET carried out the literature search and the interpretation of the results. KNF wrote the first draft and all the other authors contributed to the revision including the final draft. All authors read and approved the final manuscript. 1 Aristotle University of Thessaloniki, Thessaloníki, Greece. 2 Division of Neurosciences, 3rd Department of Psychiatry, School of Medicine, Aristotle University of Thessaloniki, 6, Odysseos Street (1st Parodos, Ampelonon Str.), Pournari Pylaia, 55535 Thessaloníki, Greece. 3 Thessaloníki, Greece.
2017-11-02T07:56:25.281Z
2015-07-07T00:00:00.000
{ "year": 2015, "sha1": "b9205b863caec03cc49295a16fea76799d3e7de5", "oa_license": "CCBY", "oa_url": "https://annals-general-psychiatry.biomedcentral.com/track/pdf/10.1186/s12991-015-0057-z", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b9205b863caec03cc49295a16fea76799d3e7de5", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
231615109
pes2o/s2orc
v3-fos-license
Phosphoproteomic Landscape of AML Cells Treated with the ATP-Competitive CK2 Inhibitor CX-4945 Casein kinase 2 (CK2) regulates a plethora of proteins with pivotal roles in solid and hematological neoplasia. Particularly, in acute myeloid leukemia (AML), CK2 has been highlighted as an attractive therapeutic target and prognostic marker. Here, we explored the impact of CK2 inhibition on the phosphoproteome of two cell lines representing major AML subtypes. Quantitative phosphoproteomic analysis was conducted to evaluate changes in phosphorylation levels after incubation with the ATP-competitive CK2 inhibitor CX-4945. Functional enrichment, network analysis, and database mining were performed to identify biological processes, signaling pathways, and CK2 substrates that are responsive to CX-4945. A total of 273 and 1310 phosphopeptides were found differentially modulated in HL-60 and OCI-AML3 cells, respectively. Although the regulated phosphopeptides belong to proteins involved in multiple biological processes and signaling pathways, most of these perturbations can be explained by direct CK2 inhibition rather than by off-target effects. Furthermore, CK2 substrates regulated by CX-4945 are mainly related to mRNA processing, translation, DNA repair, and cell cycle. Overall, we showed that the CK2 inhibitor CX-4945 impinges on mediators of signaling pathways and biological processes essential for primary AML cell survival and chemosensitivity, reinforcing the rationale behind the pharmacologic blockade of protein kinase CK2 for AML targeted therapy. Introduction Protein phosphorylation is an essential post-translational modification in most cellular processes, making protein kinases promising therapeutic targets for a wide variety of disorders, including cancer [1,2]. Among the protein kinases involved in cell signaling networks, casein kinase 2 (CK2) is responsible for about 25% of the whole cellular phosphoproteome [3]. CK2 is a constitutively active and ubiquitously expressed Ser/Thr-protein kinase composed of two catalytic subunits (α or its isoform α') and two regulatory subunits (β) [4]. The CK2 consensus sequence (pS/pT-x1-x2-E/D/pS/pT, in which x1 ≠ P) is a small motif characterized by several acidic residues in the proximity of the phosphorylatable amino acid, as well as by the absence of basic residues in those positions [5]. Concerning CK2 substrates, about one third are involved in gene expression and protein synthesis, while many are signaling proteins implicated in cell growth, proliferation, and survival [3,6]. Moreover, a small number of CK2 substrates are classical metabolic enzymes or are associated with viral life cycles [3]. Protein kinase CK2 has been linked to basically all the hallmarks of malignant disease [7,8]. Accordingly, several CK2 inhibitors have been described, including small organic compounds designed to target the ATP-binding site on the CK2 catalytic subunit, flavonoids, and a synthetic cell-permeable peptide termed CIGB-300, originally designed to block CK2-mediated phosphorylation through binding to the phosphoacceptor domain of the substrates [9][10][11]. Additionally, a cyclic peptide that antagonizes the interaction between the CK2 α and β subunits and antisense oligonucleotides that reduce CK2 alpha subunit transcription have also been explored [12,13]. However, only the ATP-competitive inhibitor CX-4945 and the synthetic peptide CIGB-300 have advanced to human clinical trials and shall provide proof-of-concept for CK2 as a suitable oncology target [14,15].
Acute myeloid leukemia (AML) is one of the most frequent hematologic malignancies and high-expression of CK2α subunit has been connected to a worse prognosis in AML patients with normal karyotype [16,17]. Actually, CK2 is implicated in multiple signaling pathways, all of them essential for hematopoietic cell survival and function, and leukemic cells have been demonstrated to be more sensitive to downregulation of protein kinase CK2 [18,19]. The latter becomes particularly relevant since AML stand among the most aggressive and lethal types of cancer and are often characterized by resistance to standard chemotherapy as well as poor long-term outcomes [20]. In recent years, quantitative phosphoproteomic approaches have been useful to explore the cellular response to kinase inhibition in different types of cancer cells [21]. In fact, the proteomic and phosphoproteomic patterns associated with prognosis of AML patients and its progression from diagnosis to chemoresistant relapse has been recently described, studies that suggested the importance of CK2 for chemosensitivity in human AML primary cells [22,23]. Besides, the CK2-dependant phosphoproteome has been explored by quantitative phosphoproteomic using not only CK2 inhibitors in HEK-293T, HeLa, and NCI-H125 cells, but also through genetic manipulation of CK2 subunits in C2C12 cells [24][25][26][27]. However, the impact of CK2 inhibition has not been widely assessed in AML cells, since to our knowledge no previous phosphoproteomic studies have been conducted for CK2 inhibitors in this particular hematological pathology. Considering the above, we decided to explore the CK2-regulated phosphoproteome and the consequent signaling networks perturbations induced after exposure of AML cells to CK2 inhibitor CX-4945. Mass spectrometry (MS)-based phosphoproteomics profiling allowed us to gauge the global impact of CX-4945 in human cell lines representing two differentiation stages and major AML subtypes. Cell Culture Human AML cell lines HL-60 and OCI-AML3 were originally obtained from the American Type Culture Collection (ATCC, Manassas, VA, USA) and the German Collection of Microorganisms and Cell Cultures (DSMZ, Braunschweig, Germany), respectively. Both cell lines were cultured in RPMI 1640 medium (Invitrogen, Carlsbad, CA, USA) supplemented with 10% (v/v) fetal bovine serum (FBS, Invitrogen, Carlsbad, CA, USA) and 50 µg/mL gentamicin (Sigma, St. Louis, MO, USA). Cells were maintained under standard cell culture conditions. Sample Preparation and Phosphopeptide Enrichment HL-60 and OCI-AML3 cells (10 7 cells per each condition, three biological replicates) were treated or not with 5 µM CX-4945 (Selleck Chemicals, Houston, TX, USA) for 8 h. After collected by centrifugation and washed with PBS, cells were resuspended in lysis buffer containing 2% SDS and 50 mM DTT. Samples were boiled at 95 • C for 10 min and proteins were extracted by multienzyme digestion filter-aided sample preparation (MED-FASP) with overnight lys-C and tryptic digestions [28]. Phosphopeptides were then enriched from each digestions using TiO 2 beads as previously described [29]. For enrichment, "Titansphere TiO 2 10 µm" (GL Sciences, Inc., Tokyo, Japan) was suspended in 200 µL of 3% (m/v) dihydroxybenzoic acid in 80% (v/v) CH 3 CN, 0.1% CF 3 COOH and diluted 1:4 with water and later used at a 4:1 ratio (mg beads: mg peptides). Next, 2 mg TiO 2 (per mg peptides) was added to each sample and incubated at room temperature under continuous agitation for 20 min. 
The titanium beads were sedimented by centrifugation, and the supernatants were collected, mixed with another portion of the beads and incubated as above. The bead pellets were resuspended in 150 µL of 30% (v/v) CH3CN containing 3% (v/v) CF3COOH and transferred to a 200 µL pipet tip plugged with one layer of Whatman glass microfiber filter GFA (Sigma, St. Louis, MO, USA). The beads were washed 3 times with 30% (v/v) CH3CN, 3% (v/v) CF3COOH solution and 3 times with 80% (v/v) CH3CN, 0.3% (v/v) CF3COOH solution. Finally, the peptides were eluted from the beads with 100 µL of 40% (v/v) CH3CN containing 15% (m/v) NH4OH and were vacuum-concentrated to ∼4 µL. Phosphopeptides were further desalted by the StageTip procedure [30]. NanoLC-MS/MS and Data Analysis Chromatographic runs for phosphopeptides and non-phosphopeptides were performed on a homemade column (75 µm ID, 20 cm length). For phosphopeptides, a gradient was used from 5% buffer B (0.1% formic acid in acetonitrile) up to 30% in 45 min, then increasing to 60% in 5 min, and up to 95% in 5 min more. For non-phosphopeptides, the gradient started at 5% buffer B and went up to 30% in 95 min, then increasing to 60% in 5 min, and up to 95% in 5 min more. An EASY-nLC 1200 system coupled to a QExactive HF mass spectrometer (both from Thermo Fisher Scientific, Waltham, MA, USA) was used, with the nanocolumn kept at 60 °C. Peptides were detected in the mass range 300-1650 m/z using data-dependent acquisition, and each mass spectrum was obtained at 60,000 resolution (20 ms injection time) and followed by 15 MS/MS spectra (28 ms injection time) at 15,000 resolution. Identification of peptides and proteins was based on the match-between-runs procedure using MaxQuant software (v1.6.2.10) [31], considering oxidation (M), deamidation (NQ), N-terminal acetylation (proteins) and phosphorylation (STY) as variable modifications. No fixed modifications were considered, as cysteines were not modified. Alignment of chromatographic runs was allowed with default parameters (20 min time window and a matching time window of 0.7 min between runs). Filtering and quantification of phosphopeptides were performed in the Perseus computational platform (v1.6.2.2) [32]. Reverse and potential contaminant hits were removed, and only phosphosites with a localization probability above 0.75 were retained for further analysis. Student's t test was employed to identify statistically significant changes (p-values lower than 0.05) in phosphorylation and protein levels, after filtering for two valid values in at least one group. An additional fold-change (treated vs. control) cutoff of 1.5 was also applied. Enzyme-Substrate Relationship and Kinome Network Analysis Enzyme-substrate-site relations were retrieved using the integrated protein post-translational modification network resource iPTMnet [39]. iPTMnet is based on a set of curated databases such as PhosphoSitePlus (http://www.phosphosite.org (accessed on 2 February 2021)) and PhosphoEML (http://phospho.elm.eu.org (accessed on 2 February 2021)), which annotate experimentally observed post-translational modifications [40,41]. Besides, the KEA2 web tool (https://www.maayanlab.net/KEA2/ (accessed on 2 February 2021)) was used, first to retrieve information about kinases responsible for phosphoproteome modulation after CK2 inhibition, and second to identify which of these kinases were enriched based on the phosphoproteomic profile [42].
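As an illustrative aside, the kind of kinase-enrichment calculation performed by KEA2 (a Fisher exact test, as described below) can be sketched with a 2×2 contingency table. The helper function, the phosphosite identifiers and all counts below are hypothetical and are not taken from the study's data or from KEA2's implementation.

```python
# Minimal sketch (assumption-laden, not KEA2's code): one-sided Fisher exact test
# asking whether substrates of a given kinase are over-represented among the
# phosphosites regulated by CX-4945. All identifiers and counts are hypothetical.
from scipy.stats import fisher_exact

def kinase_enrichment(regulated, background, kinase_substrates):
    """2x2 table: regulated vs. not regulated, substrate of the kinase vs. not."""
    reg_sub = len(regulated & kinase_substrates)
    reg_non = len(regulated - kinase_substrates)
    bg_sub = len((background - regulated) & kinase_substrates)
    bg_non = len((background - regulated) - kinase_substrates)
    table = [[reg_sub, reg_non], [bg_sub, bg_non]]
    odds_ratio, p_value = fisher_exact(table, alternative="greater")
    return odds_ratio, p_value

# Hypothetical phosphosite identifiers
regulated = {"SITE_A", "SITE_B", "SITE_C"}
background = regulated | {"SITE_D", "SITE_E", "SITE_F", "SITE_G"}
ck2_substrates = {"SITE_A", "SITE_B", "SITE_F"}

print(kinase_enrichment(regulated, background, ck2_substrates))
```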
KEA2 is based on an integrative database of kinase-substrate interactions derived from disparate sources, including the literature [42]. The software computes a Fisher exact test to identify significantly enriched kinases (p-values lower than 0.05) [42]. To represent the kinome network, the interactions among the protein kinases associated with the phosphoproteomic profile, according to KEA2 and iPTMnet annotations, were retrieved using the Metascape gene annotation and analysis resource (http://metscape.org (accessed on 2 February 2021)) [43]. This bioinformatics software compiles information from different integrative databases and applies the MCODE algorithm to extract highly connected regions or complexes embedded in protein networks [44]. Identification and Analysis of CK2 Substrates In addition to bona fide CK2 substrates, we searched for candidate substrates based on: (1) the presence of the CK2 consensus sequence (pS/pT-x1-x2-E/D/pS/pT, x1 ≠ P) [5], (2) the enzyme-substrate predictions retrieved from the NetworKIN database [45], (3) the dataset of high-confidence CK2 substrates reported by Bian et al. [46] and (4) the phosphoproteins which interact with CK2 according to Metascape database information [43]. Substrates that met at least two of these criteria were selected as the most reliable for further functional analysis. All identified substrates (bona fide and putative) were represented in a network context and classified according to biological processes annotated in the GO database [33,34], and the STRING database (http://string-db.org/ (accessed on 2 February 2021)) was used to identify interactions between proteins [47]. In this analysis, only database and experimental evidence were used as sources of interaction data, and the confidence score was fixed at 0.4. All protein-protein interaction networks (kinome network and CK2 substrate network) were visualized using Cytoscape software (v.3.5.0) [48]. Profiling the CX-4945-Responsive Phosphoproteome in AML Cells Advances in high-throughput technologies and in bioinformatic tools for subsequent data analysis make it possible to explore, in a wide-scale fashion, the cellular response to the inhibition of protein kinases. In particular, phosphoproteomic studies provide solid evidence regarding the kinase-substrate and kinase-kinase relationships involved in the complex networks regulating cellular processes in health and disease. Hence, we decided to explore the CK2-regulated phosphoproteome in AML cells using MS-based phosphoproteomic analysis of HL-60 and OCI-AML3 cells treated or not with 5 µM of the CK2 inhibitor CX-4945 for 8 h (Figure 1A). Of note, the inhibitory effect of CX-4945 on CK2 enzymatic activity has been previously demonstrated by the reduced phosphorylation of bona fide CK2 substrates and by immunoblotting with an antibody against the pan-CK2 phosphorylated motif [25,49]. In addition, as measured using the AlamarBlue assay, CX-4945 showed a similar dose-dependent inhibitory effect on HL-60 and OCI-AML3 cell proliferation, with IC50 values of 7.49 ± 1.55 µM and 4.69 ± 1.59 µM, respectively (Figure S1A). AML is a highly heterogeneous disease, and the selected cell lines derive from the most common AMLs (i.e., acute promyelocytic and acute myelomonocytic leukemia), together accounting for roughly two thirds of all AML cases [50].
Moreover, in spite of the similar antiproliferative effect attained by CX-4945 in both AML cell lines, our results and previous studies have shown that HL-60 cells appear to be less sensitive to CX-4945-induced apoptosis when compared to other AML cell lines (Figure S1A,B) [51]. Thus, the selected cell lines not only represent major AML subtypes, but also different niches that can be found in the clinical setting, considering their differential sensitivity to CK2 inhibition with CX-4945. Using this experimental approach, phosphoproteomic analysis of HL-60 led to the identification of 3365 phosphopeptides corresponding to 3077 unique phosphopeptides (90% pS, 9.8% pT and 0.2% pY) on 1618 phosphoproteins (Figure 1B). Similarly, in OCI-AML3 cells 3177 phosphopeptides were identified, corresponding to 2976 unique phosphopeptides (87.8% pS, 11.9% pT and 0.3% pY) on 1645 phosphoproteins (Figure 1B). In parallel, proteomic analysis led to the identification of 6636 and 6670 proteins in HL-60 and OCI-AML3, respectively (Figure 1B). On the whole, we identified a total of 4267 unique phosphopeptides and 7515 proteins, with 1786 phosphopeptides and 5791 proteins overlapping between both AML cell lines (Figure 1B). Changes in phosphorylation and protein levels between untreated and CX-4945-treated cells were assessed using Student's t test, and a p-value < 0.05 was considered statistically significant. We also applied a fold-change (treated vs. control) threshold of 1.5 (|FC| ≥ 1.5) to define the down- and up-regulated phosphopeptides and proteins. In HL-60 cells 275 phosphopeptides on 224 proteins were significantly modulated, while in OCI-AML3 cells the number was almost 5-fold higher, with 1324 phosphopeptides on 847 proteins (Figure 2A, Table S1). In both cellular contexts, treatment with CX-4945 elicited a global decrease of protein phosphorylation, based on the distribution of down- and up-regulated phosphopeptides in volcano plots (Figure 2A). On the contrary, proteomic analysis indicated that in both cell lines CK2 inhibition showed no bias towards protein down-regulation (Figure 2B, Table S2). In fact, the proteome analysis evidenced that changes in phosphorylation upon CX-4945 treatment were mostly independent of protein abundance, since only eight down-regulated proteins (two in HL-60 cells and six in OCI-AML3 cells) had phosphorylation sites significantly inhibited (Figure 2B). Those proteins were not considered differentially phosphorylated after CK2 inhibition and, consequently, were not included in the functional interpretation of the phosphoproteomic profiles. (Figure 2 legend: red points indicate phosphopeptides/proteins that met the statistical significance cut-off (|FC| ≥ 1.5, p-value < 0.05); black points indicate phosphopeptides whose decreased phosphorylation was due to reduced abundance of the corresponding protein in the proteomic analysis, with down-regulated proteins also indicated in black.) In summary, after normalization with the proteome dataset, a total of 273 and 1310 significantly modulated phosphopeptides were identified in HL-60 and OCI-AML3 cells, respectively (Figures 1B and 2A). Remarkably, such a difference indicates that CX-4945 has a more pronounced effect on CK2-dependent signaling in OCI-AML3 cells, which suggests that the molecular perturbations induced by this inhibitor could depend on the AML cellular background.
However, CX-4945 had a similar dose-dependent inhibitory effect on HL-60 and OCI-AML3 cells proliferation ( Figure S1A), suggesting that despite the divergence concerning the molecular impact of protein kinase CK2 inhibition, there is no differential sensitivity of AML cells towards the overall antiproliferative effect of CX-4945. Enrichment Analysis of Differentially Modulated Phosphoproteins For better understanding of putative biological processes perturbed after CK2 inhibition in AML cells, the differentially modulated phosphoproteins were classified in terms of their biological functions using the information from the GO database [33,34]. Analysis was performed using DAVID web-based tool and GO terms list was further submitted to REViGO for redundancy reduction [35][36][37]. Significantly represented biological processes in both phosphoproteomics profiles include mRNA processing, regulation of viral process and protein sumoylation ( Figure 3). Moreover, phosphorylation sites differentially modulated in HL-60 are located on phosphoproteins related to mRNA splicing, cellular response to DNA damage and ribosome biogenesis, while in OCI-AML3 covalent chromatin modification, nuclear transport, regulation of cell proliferation and gene expression are significantly represented (Figure 3). Of note, apoptotic signaling pathway was only identified as significantly enriched in OCI-AML3 cells. Consistently, our results and previous studies have evidenced that HL-60 cell line displays refractoriness to CX-4945 induced apoptosis ( Figure S1B), probably owing to the absence of p53 protein (HL-60 cells are p53 null) and the lower CK2 protein level and activity in comparison to other AML cell lines [51]. In such studies it was demonstrated that CK2 inhibition not only triggers apoptotic cell death in AML cell lines, but also in freshly isolated blasts from AML patients [51]. Recently, another phosphoproteomic study in non-small cell lung cancer (NSCLC) cell line NCI-H125 using the clinical-grade synthetic peptide CIGB-300, found mRNA processing and ribosome biogenesis as biological processes modulated after CK2 inhibition [26]. Protein folding, cytoskeleton organization, microtubule formation and protein ubiquiti-nation were also significantly modulated after treatment with CIGB-300 [26]. According with both studies, CK2 inhibition by CX-4945 or CIGB-300 modulates a common set of biological processes but also each drug exerts its own mechanism of action by modulating a unique array of phosphoproteins. Since this effect could be a consequence of the different neoplastic backgrounds explored in each study (AML and NSCLC), a phosphoproteomics study of AML cells treated with CIGB-300 is currently underway to validate our hypothesis. Noteworthy, proteins involved in cellular response to DNA damage appeared differentially phosphorylated in HL-60 cells treated with CX-4945 ( Figure 3). Accordingly, CK2-mediated phosphorylation has been verified to regulate proteins with critical role in DNA damage response and DNA repair pathways [52]. In fact, phosphoproteomic analysis of cells treated with radiomimetic compound or ionizing radiation to induce DNA double-stranded breaks showed a dynamic response for a significant number of CK2 phosphorylation motifs [53,54]. Furthermore, combination of CK2 inhibitors with DNA-targeted drugs evidenced a synergistic interaction in cancer models, owing to the suppression of DNA repair response triggered by such chemotherapeutic agents [55,56]. 
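As a side note on how over-representation of a GO term can be assessed, the sketch below uses a plain hypergeometric test; DAVID itself relies on a modified Fisher exact statistic (EASE score), so this is only a simplified illustration. The term-specific counts are hypothetical, while the totals of 224 modulated and 6636 identified proteins are taken from the HL-60 dataset described above.

```python
# Minimal sketch (not DAVID's implementation): plain hypergeometric over-representation
# test for one GO term among the modulated phosphoproteins. The 18 and 400 below are
# hypothetical counts; 224 and 6636 correspond to the HL-60 figures quoted in the text.
from scipy.stats import hypergeom

def go_enrichment_pvalue(k, n_list, K, N):
    """P(X >= k): k annotated hits in a list of n_list proteins, K annotated out of N background."""
    return hypergeom.sf(k - 1, N, K, n_list)

p = go_enrichment_pvalue(k=18, n_list=224, K=400, N=6636)
print(f"Over-representation p-value: {p:.3e}")
```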
Interestingly, a number of modulated phosphorylation sites in AML cells belong to proteins implicated in the regulation of viral processes (Figure 3). The relevance of CK2 in viral infections has been well documented, and a number of viral and cellular proteins essential for the viral replicative cycle and pathogenesis are listed as bona fide CK2 substrates [57]. On the whole, CK2 inhibition with CX-4945 impacted a broader set of biological processes in OCI-AML3 cells, in agreement with the higher number of differentially modulated phosphopeptides in this cell line (Figures 2A and 3). However, as pointed out above, this divergence does not impinge on the antiproliferative effect exerted by CX-4945.
Sequence Analysis of Phosphopeptides Identified in AML Cells
Protein kinases recognize structural and sequence motifs which, in conjunction with other factors such as subcellular co-localization or protein complex formation, determine their specificity [58]. In particular, CK2 phosphorylation is specified by multiple acidic residues located mostly downstream of the phosphoacceptor amino acid, the one at position n + 3 playing the most crucial role. In addition, a proline residue at position n + 1 acts as a negative determinant for protein kinase CK2 phosphorylation [3,5]. In our study, approximately 21% of the phosphopeptides identified in HL-60 and OCI-AML3 cells fulfill the CK2 consensus sequence (Figure 4A and Figure S3). This proportion of putative CK2 substrates is in accordance with previous phosphoproteomic analyses [24,59]. In HL-60 cells, the majority of phosphopeptides containing the CK2 consensus sequence (83.3%) were unaffected by CX-4945 treatment. A total of 107 phosphopeptides containing the CK2 consensus sequence (16.7%) were significantly modulated in treated HL-60 cells, of which 14.4% showed decreased and 2.3% increased phosphorylation with respect to untreated cells (Figure 4A). In contrast, in OCI-AML3 cells treated with CX-4945 the majority of phosphopeptides containing the CK2 consensus sequence (53.9%) showed decreased phosphorylation, whereas 45.8% were unaffected and 0.3% showed increased phosphorylation (Figure 4A). This result reinforces the differential impact of CX-4945 on CK2-dependent signaling, evidenced above by the higher number of total phosphopeptides with decreased phosphorylation in treated OCI-AML3 cells (1310 out of 2976) (Figure 2A).
Figure 4 caption: (A) Phosphopeptides identified in HL-60 and OCI-AML3 cells that either contain or do not contain the CK2 consensus sequence; for the former category, the percentages of phosphopeptides that are significantly increased, decreased, or unchanged in their phosphorylation levels are reported in lateral pie charts. (B) Sequence logos corresponding to phosphopeptides significantly down-phosphorylated in AML cells treated with CX-4945; logos were generated using the WebLogo tool with the MaxQuant amino acid sequence window as input [38]. (*) Phosphopeptides with decreased phosphorylation due to reduced protein abundance were not considered differentially regulated.
CK2 substrates have different rates of phosphorylation turnover: some are promptly reduced after 6 h of treatment with CX-4945, but others are more resistant to dephosphorylation, requiring much longer treatment times (up to 24 h) and higher concentrations of the inhibitor [24]. We think this could explain the proportion of putative CK2 phosphopeptides that remained unaffected after 8 h of treatment with CX-4945 in AML cells.
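The CK2 consensus classification used above (acidic residue at n + 3; proline at n + 1 as a negative determinant) and the position-specific counts behind the sequence logos can both be expressed as simple operations over the aligned sequence windows. The sketch below is an illustration only; the exact consensus definition and window length used by the authors may differ.

```python
from collections import Counter

ACIDIC = set("DE")

def is_ck2_consensus(window):
    """Phosphosite is the central residue of an odd-length window.
    Minimal rule: acidic residue at n+3 and no proline at n+1."""
    c = len(window) // 2
    return (window[c + 3] in ACIDIC) and (window[c + 1] != "P")

def residue_fraction(windows, offset, residue):
    """Fraction of windows carrying `residue` at `offset` from the phosphosite."""
    c = len(windows[0]) // 2
    counts = Counter(w[c + offset] for w in windows)
    return counts.get(residue, 0) / len(windows)

# Hypothetical 15-mer windows for down-regulated phosphopeptides
down = ["AAAAAAASDDEDEEA", "GGGGGGGTPAGDAAA", "KKKKKKKSEAEAAAA"]
print([is_ck2_consensus(w) for w in down])
print("E at n+3:", residue_fraction(down, +3, "E"))
print("P at n+1:", residue_fraction(down, +1, "P"))
```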
Even more, in C2C12 cells devoid of CK2 catalytic activity (CK2α/α (−/−) ) was demonstrated that not all the phosphopeptides conforming the CK2 consensus sequence have reduced phosphorylation levels, suggesting that other kinase(s) could fulfill the phosphorylation of these sites in the absence of CK2 [27]. CK2 consensus is a quite distinctive motif where phosphoacceptor amino acid is surrounded by acidic residues [5]. As demonstrated by sequence logo analysis, the positions up-and down-stream of phosphorylated sites in peptides that significantly decreased after treatment with CX-4945 are predominantly occupied by acidic residues ( Figure 4B). Furthermore, 30% and 16% of the phosphopeptides down-regulated by CX-4945 had a glutamic acid at position n + 3 in HL-60 and OCI-AML3 cells, respectively ( Figure 4B). Basic residues are less represented or practically absent at positions spanning between n + 1 to n + 4. All these features are consistent with the previously reported linear motif preference of CK2. Notably, phosphopeptides containing the S/T-P motif were also down-phosphorylated in AML cells after CK2 inhibition with CX-4945 ( Figure 4B). In fact, 35% and 53% of the significantly down-phosphorylated peptides had a proline at position n + 1 in HL-60 and OCI-AML3 cells, respectively ( Figure 4B). This motif is targeted by the large and heterogeneous category of proline-directed kinases and has been previously reported that such motif is incompatible with direct phosphorylation by CK2 [60]. Thus, the downregulation of phosphopeptides containing S/T-P motif could be interpreted as off-target effect of CX-4945 or just an indirect result of CK2 inhibition, i.e., perturbations of other kinases involved in signaling networks where CK2 is also implicated. Considering that this effect has been associated not only to CX-4945, but also to others CK2 inhibitors [24][25][26], we reasoned that decrease in phosphorylation such phosphopeptides is just a consequence of signaling propagation following CK2 inhibition. Network Analysis of Kinases Associated with AML Phosphoproteomic Profiles To identify kinases responsible for the phosphoproteomic profile modulated in HL-60 and OCI-AML3 cells, an enzyme-substrate network was constructed using iPTMnet and KEA2 bioinformatic resources [39,42]. A total of 37 differentially modulated phosphopeptides in HL-60 cells (|FC| ≥ 1.5, p-value < 0.05) were attributed to 31 kinases including CK2 with the higher number (10 phosphopeptides) ( Figure 5, Figure S2 and Table S4). A broader picture was observed in OCI-AML3 phosphoproteome, in which 207 differentially modulated phosphopeptides were associated to 73 kinases. As expected, CK2 enzyme was again among the most represented kinases with 29 phosphopeptides (Figure 5, Figure S2 and Table S4). Kinases significantly associated with the phosphoproteomic profile were also identified using KEA2 bioinformatic tool [42]. In addition to CK2, members of the CDKs and MAPKs families like CDK1, CDK2, MAPK9 and MAPK14 were also significantly associated with the OCI-AML3 phosphoproteome ( Figure S2). These results are in accordance with sequence logo analysis, which indicates that CK2 and proline-directed kinases motifs are the most frequent among the phosphopeptides down-regulated after CK2 inhibition in AML cells. An interaction network of protein kinases associated with the phosphoproteomic profile modulated in HL-60 and OCI-AML3 cells was represented using the Metascape bioinformatic software ( Figure 5) [43]. 
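The bookkeeping behind such a network is straightforward: each differentially modulated phosphosite is mapped to its annotated upstream kinase(s), edges connect kinases to affected sites, and node size reflects the number of affected targets per kinase. A minimal networkx sketch is shown below; the kinase-substrate pairs listed here are illustrative placeholders, whereas in the study the mappings came from iPTMnet and KEA2.

```python
import networkx as nx
from collections import Counter

# Illustrative (kinase, phosphosite) annotations; real ones come from iPTMnet/KEA2
annotations = [
    ("CSNK2A1", "HMGA1_S103"), ("CSNK2A1", "HMGN1_S7"), ("CSNK2A1", "EIF3J_S11"),
    ("CDK1", "NPM1_S4"), ("CDK1", "TOP2A_S1213"), ("MAPK1", "STMN1_S25"),
]
# Hypothetical set of phosphosites found down-phosphorylated after CX-4945
down_sites = {"HMGA1_S103", "HMGN1_S7", "NPM1_S4", "STMN1_S25"}

G = nx.Graph()
targets_per_kinase = Counter()
for kinase, site in annotations:
    if site in down_sites:               # keep only modulated phosphosites
        G.add_edge(kinase, site)
        targets_per_kinase[kinase] += 1

for kinase, n in targets_per_kinase.most_common():
    print(f"{kinase}: {n} down-phosphorylated target site(s)")   # node 'size'
```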
The kinome network also includes kinases that were themselves identified in AML cells after CK2 inhibition, with either non-differentially modulated phosphopeptides (green nodes) or down-phosphorylated peptides (blue nodes). For instance, the dual-specificity tyrosine-phosphorylation-regulated kinase DYRK1A is known to promote cell proliferation and survival [61]. DYRK1A is auto-phosphorylated on S529, a modification that enhances 14-3-3β protein binding and consequently increases the kinase's catalytic activity [62]. DYRK1A S529 was found to be down-phosphorylated in our study, suggesting inhibition of this kinase in HL-60 cells. In fact, S369 of cyclin-L2, a known DYRK1A substrate involved in RNA processing of apoptosis-related factors [63], was also found down-phosphorylated in HL-60 cells (Figure S2). CK2 has direct interactions with 13 and 27 kinases related to the phosphoproteomic profiles identified in HL-60 and OCI-AML3 cells, respectively (Figure 5). These kinases include nine bona fide CK2 substrates, three of them (MAPK1, MAPK9 and CDK1) related to both phosphoproteomic profiles (Figure 5). Although none of the CK2 phosphosites belonging to these kinases were identified in the present study, the results suggest signal propagation downstream of these proteins. For instance, CK2 phosphorylates mitogen-activated protein kinase 1 (MAPK1) at S246 and S248, an event that promotes MAPK1 nuclear translocation and phosphorylation of target transcription factors [64]. A total of 19 phosphopeptides that are substrates of MAPK1 were found down-phosphorylated in OCI-AML3 cells after CK2 inhibition (Figure S2). In addition, CK2 phosphorylates cyclin-dependent kinase 1 (CDK1) at S39 and thereby regulates the cell cycle [65]. Accordingly, the enzyme-substrate network evidenced inactivation downstream of CDK1, since at least 43 phosphosites targeted by CDK1 were down-phosphorylated in OCI-AML3 cells. Such phosphopeptides belong to proteins related to chromatin remodeling, mitotic spindle assembly, and DNA repair (Figure S2).
Figure 5 caption: Protein-protein interaction network of kinases associated with the phosphoproteomic profiles differentially modulated by CX-4945 in HL-60 and OCI-AML3 cells. Protein clusters were identified with the MCODE algorithm and the related biological processes and signaling pathways are indicated. For each protein kinase, the node size is proportional to the number of target phosphopeptides that appeared differentially phosphorylated in response to CK2 inhibition. Kinases significantly associated with the phosphoproteomic profiles, according to the KEA2 results, are highlighted with a red border. In addition, kinases indicated with a red line are bona fide CK2 substrates, whereas green and blue nodes correspond to kinases identified in our analysis with non-differentially modulated phosphopeptides or down-phosphorylated peptides, respectively.
Highly connected regions in the kinome networks associated with the HL-60 and OCI-AML3 phosphoproteomic profiles were identified using the MCODE algorithm [44]. Clusters representing cell proliferation (MAPK targets) and cell cycle appeared as a common denominator in the kinome networks of both AML cell lines (Figure 5). Accordingly, we found that CX-4945 impairs AML cell proliferation and cell-cycle progression (Figure S1A,B). In contrast, signaling pathways mediated by VEGF and PI3K/AKT appeared only in the OCI-AML3 kinome network (Figure 5).
Protein kinase CK2 is known to up-regulate the PI3K/AKT pathway, in part by phosphorylating and activating AKT [66]. Of note, the PI3K/AKT pathway is constitutively active and sustains the viability of primary acute lymphoblastic leukemia (ALL) cells, a signaling alteration that results from CK2 overexpression and hyperactivation [67]. AML and ALL are hematological diseases with several features in common, and previous studies have shown that the antineoplastic effect of CX-4945 in both malignancies is mediated by attenuation of the PI3K/AKT pathway [51,68-70]. Accordingly, we found a number of AKT substrates down-phosphorylated in OCI-AML3 cells after CK2 inhibition with CX-4945, whereas in HL-60 cells the PI3K/AKT pathway did not appear significantly represented in our analysis, perhaps explaining their refractoriness to CX-4945-induced apoptosis. These findings are in agreement with Annexin V/PI staining and with immunodetection of the phosphorylation status and total protein levels of PI3K/AKT mediators (Figure S1C,D). Importantly, previous phosphoproteomic results from primary AML cells have indicated that, at the time of diagnosis, patients who relapse after chemotherapy have higher CK2, MAPK and CDK activity than patients with a relapse-free evolution [22]. However, the high CK2 activity seen at diagnosis in relapsed patients was no longer observed in chemoresistant cells [23]. Aasebø et al. pointed out that the proteome and phosphoproteome profiles change considerably from first diagnosis to first relapse; therefore, CK2 could be important for inducing treatment-resistant clones but dispensable for the survival of clones that have already become resistant to therapy [23]. Remarkably, in our study substrates of CK2, MAPKs and CDKs were found down-phosphorylated after CX-4945 treatment of the AML cell lines, with the modulation of MAPK and CDK signaling probably being a downstream consequence of CK2 inhibition (Figure 5, Table S4).
Identification of CK2 Substrates Modulated by CX-4945 in AML Cells
Besides the bona fide CK2 substrates annotated in the iPTMnet and KEA databases [39,42], we searched for additional candidate CK2 substrates in AML cells. Based on the presence of the CK2 consensus sequence, 39% and 26% of all differentially modulated phosphopeptides in HL-60 and OCI-AML3 cells, respectively, could be putative CK2 substrates responsive to CX-4945. However, phosphosites recognized by other protein kinases, such as the Ser/Thr protein kinase Chk1 or the cAMP-dependent protein kinase catalytic subunit alpha (PKACA), can also contain an acidic amino acid at position n + 3 (Figure S3). Indeed, we observed that arginine is frequent at position n − 3 from the phosphorylated residue (Figure 4), a motif that is recognized by basophilic kinases [59]. Therefore, we searched for additional evidence supporting phosphoproteins containing the CK2 consensus sequence as candidate CK2 substrates. First, differentially phosphorylated proteins identified in AML cells were screened as candidate CK2 substrates using the NetworKIN database [45]. This database includes enzyme-substrate interactions predicted not only from the consensus sequence recognized by the enzyme, but also from a protein association network that models the context of substrates and kinases, which improves prediction accuracy [45]. Second, the phosphoproteomic profile differentially modulated in AML cells after CK2 inhibition was compared with a dataset of high-confidence CK2 substrates reported by Bian et al. [46].
These authors identified in vitro CK2 substrates by combining kinase reactions on immobilized proteomes with quantitative phosphoproteomics and, to reduce false positives, compared the in vitro phosphosites with in vivo phosphorylation sites reported in databases [46]. Lastly, differentially modulated phosphoproteins that interact with CK2 were identified using Metascape, which performs interactome analysis based on integrative protein-protein interaction databases such as InWeb_IM and OmniPath [43]. Taking into account the four levels of prediction (CK2 consensus sequence, NetworKIN prediction, CK2 substrates predicted by Bian et al. [46], and interaction with CK2), we identified a total of 117 and 359 candidate CK2 substrates differentially modulated after CK2 inhibition in HL-60 and OCI-AML3 cells, respectively (Table S5). This dataset was then filtered to retain those substrates showing the concomitant occurrence of two or more criteria associated with CK2 phosphorylation. Applying this workflow, 64 phosphosites on 53 proteins were identified in HL-60 cells as the most reliable CK2 substrates modulated after treatment with CX-4945, whereas 168 phosphosites on 120 proteins were identified in OCI-AML3 cells (Figure 6, Table S5). The list includes the CK2 substrates previously confirmed as bona fide according to the iPTMnet and KEA databases [39,42]. Remarkably, for 67% and 71% of the high-confidence CK2 substrates modulated in HL-60 and OCI-AML3 cells, respectively, no related enzyme was annotated in the iPTMnet database. In addition, to our knowledge, the phosphosites S280 of coilin and T180 of inosine-5'-monophosphate dehydrogenase 2 (IMPDH2) are reported here for the first time. Coilin is an integral component of Cajal bodies (subnuclear compartments), whereas IMPDH2 catalyzes the first and rate-limiting step of the de novo guanine nucleotide biosynthesis pathway [71,72]. Interestingly, both proteins regulate cell growth and have been related to malignant transformation [72,73]. However, validation of coilin S280 and IMPDH2 T180 as phosphorylation sites targeted by CK2, and the biological roles of these post-translational modifications, require further experimentation.
Functional Characterization of CK2 Substrates Identified in AML Cells
Phosphoproteins identified as candidate CK2 substrates are related to transcription, mRNA splicing, rRNA processing, translation, DNA repair and cell cycle in both AML cell lines (Figure 6). However, the number of potential CK2 substrates differentially modulated after CK2 inhibition is higher in OCI-AML3 cells than in HL-60 cells. As noted before, this could explain the lower sensitivity of HL-60 cells to the cytotoxic effect of CX-4945 in comparison with other AML cell lines [51]. In fact, we identified candidate CK2 substrates related to apoptosis only in the phosphoproteomic profile of OCI-AML3 cells (Figure 6). This subset includes three tumor suppressors: erythrocyte membrane protein band 4.1-like 3 (EPB41L3 S88), programmed cell death protein 4 (PDCD4 S457) and death inducer-obliterator 1 (DIDO1 S809). However, the effect of CK2-mediated phosphorylation on the function of these proteins remains to be determined. CK2 inhibition in AML cells could impact the transcriptional machinery by modulating the phosphorylation of several candidate substrates.
Such CK2 candidate substrates in OCI-AML3 phosphoproteomic profile are centered around the RNA polymerase II subunit A (POLR2A) according to protein-protein interactions gathered from STRING database ( Figure 6) [47]. Three components of the PAF1 complex which interacts with RNA polymerase II during transcription were identified as candidate CK2 substrates: RNA polymerase II-associated factor 1 homolog (PAF1 S394), RNA polymerase-associated protein LEO1 (LEO1 S296, S630, S658 and T629) and RNA polymerase-associated protein CTR9 homolog (CTR9 T925). PAF1 complex is required for transcription of Hox and Wnt target genes [74]. Therefore, down-phosphorylation of these candidate substrates could modulate the Wnt signaling pathway. Supporting this hypothesis, previous studies highlights that CK2 is a positive regulator of Wnt signaling pathway and CK2 inhibition by CX-4945 has been associated with Wnt/β-catenin inhibition [75,76]. Substrates related to transcription include bona fide CK2 targets such as the nonhistone chromosomal protein HMG-14 (HMGN1) and the high mobility group protein HMG-I/HMG-Y (HMGA1) [77][78][79]. The phosphorylation level of both proteins (HMGN1 S7, S8, S89; HMGA1 S103) decreased after CK2 inhibition by CX-4945 ( Figure 6). Importantly, AML patients that relapsed after chemotherapy have an increased phosphorylation level of HMGN1 S7 [22]. In general HMG proteins modulate chromatin and nucleosome structure, participate in transcription, replication, DNA repair, and extracellular HMGN1 has been described to function as an alarmin that contributes to the generation of innate and adaptative immune responses [80,81]. The biological effect of CK2 phosphorylation of HMGN1 and HMGA1 is currently unknown, although, previous studies suggest that phosphorylation of HMGN1 could interfere with its nuclear localization [78]. The most densely down-phosphorylated protein among the candidate CK2 substrates is the protein IWS1 homolog (IWS1) which was identified with eight phosphopeptides in OCI-AML3 cells ( Figure 6). This protein recruits a number of mRNA export factors and histone modifying enzymes to the RNA polymerase II elongation complex and modulates the production of mature mRNA transcripts [82,83]. As illustrated by Figure 6, several candidate CK2 substrates related to mRNA splicing were down-phosphorylated after CK2 inhibition in AML cells, including members of the spliceosome complex. Among those proteins are heterogeneous nuclear ribonucleoproteins (HNRNPC, HNRNPL), serine and arginine rich splicing factors (SRSF2, SRSF11) and pre-mRNA processing factors (PRPF3 and PRPF40A) ( Figure 6). In particular, CK2 phosphorylation of heterogeneous nuclear ribonucleoproteins C1/C2 (HNRNPC) it known that regulates its binding to mRNA [84,85]. In agreement with our results, was previously demonstrated that CK2 inhibition by quinalizarin and CIGB-300 modulates a subset of CK2 substrates related to transcription, RNA processing and mRNA splicing [24,26]. To note that at the time of diagnosis, phosphoproteins containing CK2 phosphoacceptor sites and related to RNA processing have an increased phosphorylation level in relapse AML patients when compared to those which have a relapse-free evolution [22]. Another phosphoproteomic study comparing pairing samples of AML patients at the time of diagnosis and first relapse found that also RNA-splicing and -binding proteins were up-phosphorylated at first relapse [23]. 
CK2 phosphorylation of proteins related to rRNA processing and translation has been well documented [3]. Among the proteins probably subject to CK2 regulation in AML cells are members of the nucleolar ribonucleoprotein complex (NAF1 S315; DKC1 S451, S453, S485, S494; NOP56 S520, S570) ( Figure 6). According to information gathered from STRING database [47], such proteins interacts with phosphoproteins related to ribosome biogenesis (RIOK2 S332, S337; BMS1 S639; LTV1 T171) which were identified mainly in OCI-AML3 cells ( Figure 6). The effect of CK2 regulation of these proteins remains to be elucidated. However, the results highlight the important role of CK2 in regulating protein biosynthesis to support the high proliferative rate of tumor cells. In line with this result, a cluster of eukaryotic translation initiation factors (EIF) was down-phosphorylated after CK2 inhibition ( Figure 6). This cluster contains two members of the EIF3 complex: EIF3J S11 and EIF3C S39. EIF3J is a known CK2 substrate and its phosphorylation on S127 promotes assembly of EIF3 complex and activation of the translational initiation machinery [86]. Besides, CK2 phosphorylates EIF2β on S2, a phosphopeptide also identified in our study, and such modification stimulates EIF2β function in protein synthesis [87]. Down-phosphorylation of proteins related to the translational machinery after CK2 inhibition could add a beneficial impact at the clinical evolution of AML patients, since protein translation has been associated with increased relapse risk [22,23]. Another function attributed to CK2 is the regulation of the cellular DNA damage response [52]. After CK2 inhibition in AML cells, the biological process of DNA repair appeared significantly represented in the phosphoproteomic profiles ( Figure 3). A recent study demonstrated that proteins related to DNA repair have increased phosphorylation levels in relapse AML patients [22]. Among those phosphoproteins associated with such unfavorable chemotherapy outcome, we identified in our study that treatment of AML cells with CX-4945 down-phosphorylates TRIM28 S19, TP53BP1 S523/S525 and LIG1 S66, this latter a known CK2 substrate (Table S1) [88]. Besides, others known and putative CK2 substrates related to DNA repair were also found down-phosphorylated in our study, like the DNA damage recognition and repair protein (XPC S94) ( Figure 6). In particular, CK2 phosphorylation of XPC at S94 promotes recruitment of ubiquitinated XPC to the chromatin which is important for nucleotide excision repair following ultraviolet induced DNA damage [89]. Previous studies demonstrated that CK2 inhibition by CX-4945 inactivates the function of other essential DNA repair proteins, supporting the synergistic interaction of this inhibitor with chemotherapeutic agents that induce DNA damage [55]. Worthy of note, we identified members of the heat shock protein 90 (HSP90) chaperone proteins differentially modulated in OCI-AML3 phosphoproteomic profile. CK2 mediated phosphorylation of HSP90 is required for its chaperone activity toward client kinases, some of them involved in human cancers [90,91]. Phosphosites from HSP90-alpha (HSP90AA1 S263) and HSP90-beta (HSP90AB1 S226) were both down-phosphorylated after CK2 inhibition in OCI-AML3 cells ( Figure 6). Thus, modulation of HSP90 by CX-4945 in OCI-AML3 cells could be in part responsible for the signal propagation downstream of CK2 inhibition and the pronounced effect over the kinome network in this cell line. 
In agreement with our findings, besides attenuation of PI3K/AKT pathway, disruption of unfolded protein response (UPR) have also been pointed as a mediator of CX-4945-induced apoptosis in ALL cell lines and primary lymphoblasts [69,70]. Importantly, in such effect the reduction of chaperoning activity of HSP90 appears to play a critical role [69,70]. Moreover, in multiple myeloma (MM) cells, another hematological malignancy having common features with AML, has been documented that CK2 inhibition causes apoptotic cell death through alterations of the UPR pathway [92]. In summary we found that the phosphoproteomic profiles modulated after CK2 inhibition with CX-4945 in AML cell lines, contain protein mediators of signaling pathways and biological processes previously described in primary AML cells (Figure 7) [22,23,51,68]. Therefore, our findings, in conjunction with Quotti Tubi et al. results and AML patients phosphoproteomic data from Aasebø et al., support the rationale of protein kinase CK2 pharmacologic inhibition for AML targeted therapy, an approach that could significantly improve the outcome in AML therapeutics. Conclusions Our study provides the first quantitative phosphoproteomic analysis exploring the molecular impact of the ATP-competitive CK2 inhibitor CX-4945 in human cell lines representing two differentiation stages and major AML subtypes. Here, we identified a total of 273 and 1310 unique phosphopeptides as significantly modulated in HL-60 and OCI-AML3 cells, respectively. Modulated phosphopeptides are mainly related to mRNA processing and splicing, response to DNA damage stimulus, protein sumoylation and regulation of viral processes. In addition, the network analysis illustrated how the relationship of CK2 with other kinases could orchestrate the perturbation of AML cells phosphoproteome. In this complex cellular response, phosphorylation mediated by other kinases besides CK2 could be interpreted as a consequence of signal propagation downstream of CK2 inhibition, rather than off-targets effects. Additionally, using database mining and prediction tools, in HL-60 cells we identified 64 phosphosites on 53 proteins as high confidence CK2 substrates responsive to CX-4945, whereas 168 phosphosites on 120 proteins were identified in OCI-AML3 cells. Such substrates not only explain the variety of cellular effects exerted by CX-4945, but also reinforce the instrumental role of protein kinase CK2 in AML biology. Besides, selected cells lines not only represent two major AML subtypes, but also different niches that can be found in the clinical practice if we consider the differential sensitivity to CK2 inhibition with CX-4945 displayed by these cell lines. Finally, our results, in conjunction with previous findings in primary AML cells, support the suitability of using CK2 inhibitors for AML targeted therapy, a pharmacologic approach that could significantly improve the outcome in AML patients. Supplementary Materials: Supplementary materials can be found at https://www.mdpi.com/ 2073-4409/10/2/338/s1. Figure S1. CK2 inhibitor CX-4945 impairs proliferation and viability of AML cells. Figure S2. Enzyme-substrate network of differentially modulated phosphopeptides identified in AML cells using annotations from iPTMnet and KEA2. Figure S3. Sequence logos of phosphopeptides targeted by protein kinases representing five kinase groups (CAMK, Atypical, CK1, AGC and other) in the human kinome. Table S1. Phosphoproteomic profile of AML cells treated with the CK2 inhibitor CX-4945. Table S2. 
Proteins differentially modulated in AML cells treated with the CK2 inhibitor CX-4945. Table S3. Phosphopeptides that fulfill the CK2 consensus sequence in AML phosphoproteomic profiles. Table S4. Data mining of kinases associated to differentially phosphorylated peptides in AML phosphoproteomic profiles. Table S5
Photon correlations for colloidal nanocrystals and their clusters Images of semiconductor `dot in rods' and their small clusters are studied by measuring the second-order correlation function with a spatially resolving ICCD camera. This measurement allows one to distinguish between a single dot and a cluster and, to a certain extent, to estimate the number of dots in a cluster. A more advanced measurement is proposed, based on higher-order correlations, enabling more accurate determination of the number of dots in a small cluster. Nonclassical features of the light emitted by such a cluster are analyzed. Many quantum information protocols, such as quantum key distribution, linear optical quantum computation, and quantum metrology, require on-demand singlephoton sources [1,2]. Among the sources available now in the laboratory, there are atoms [3], molecules [4], nitrogen vacancies in diamond [5], and epitaxial semiconductor quantum dots [6]. Colloidal quantum dots [7], despite their disadvantages such as blinking [8], bleaching [9], and a noticeable probability of two-photon emission, are still one of the most promising types of singlephoton emitters as they can be operated at room temperature and can be synthesized relatively easily. Especially promising are dot-in-rods (DR) [10], which have a higher probability of single-photon emission, show reduced blinking [11], and emit single photons with a high degree of polarization [12]. DRs can also merge into clusters. A cluster of DRs is not a single-photon emitter but it can be used for quantum information purposes since it still emits nonclassical light. In this work we perform measurements that allow one to distinguish between a single DR and a cluster. As a criterion, we use Glauber's second-order correlation function (CF) and the brightness of emission. Another method of distinguishing between a single DR and a cluster is defocused microscope technique [12], based on the dipole-like angular distribution of the DR emission; however, this method only allows one to recognize clusters consisting of differently oriented DRs. As an extension of our correlation measurement, we propose the study of higher-order correlation functions. Such a measurement will allow one to resolve the number of DRs in a cluster and also observe nonclassical behavior of its emission. For our experiment we used an intensified CCD (ICCD) camera. This device is equivalent to an array of single-photon detectors and can be used for spatially resolved measurements of Glauber's CFs. Alternatively, the pixels can be used for photon-number, rather than spatial, resolution, so that higher-order CFs can be measured. The ICCD camera can be used in either analog or photon-counting mode. In the photon-counting mode, used in our experiment, a certain level of the dimensionless readout signal S (proportional to the integral electric charge acquired in a pixel) is chosen as a threshold, and any signal exceeding this value is interpreted as a single-photon event in a corresponding pixel. If the threshold S th is taken too high, the resulting quantum efficiency becomes low, which fortunately does not affect the normalized CF measured in our experiment [13]. If the threshold is too low, the noise is increased. The experimental setup is shown in Fig. 1. CdSe/CdS DRs with 4.6 nm core diameter, 29 nm shell length and 11 nm shell width were dissolved in toluene with the concentration 10 −13 mol/l and coated onto a glass substrate. 
After drying, the sample was placed over an NA1.3 oil immersion objective (IO) with the DRs on the side opposite to the objective. The excitation was performed by irradiating with cw diode laser light at 405 nm. To slow down the bleaching of the DRs, the laser beam was modulated with a frequency of 30 Hz and a pulse duration of 550 µs. Excitation was possible both through the same objective IO and through an NA0.75 objective (O) placed on top of the sample; this way a larger number of DRs could be excited. The excitation rate was chosen to be close to the saturation level. The emission from the DRs, centered at 630 nm, was collected by the immersion objective and sent to the registration part of the system by means of a dichroic mirror (DM). Afterwards, the beam was split in two by a beamsplitter (BS), and both beams were directed to the Princeton Instruments PI-MAX3:1024i ICCD camera (ICCD) at a small angle. The path length difference between the beams was 20 cm, corresponding to a time delay far below the smallest time scale in the experiment (10 ns). A lens (L) with a focal length of 30 cm was placed in front of the camera so that the photocathode was in its focal plane, producing two images of a group of DRs and their clusters (parts A and B of the image shown in Fig. 2, left). An interference filter with a width of 40 nm and a central wavelength of 650 nm was placed in front of the camera to reduce the contribution of stray light. The ICCD camera was gated synchronously with the laser pulses, and the gate width could be varied between 10 ns and 40 ns. In order to optimize the single-photon detection, binning of pixels was performed (every group of 4x4 pixels was joined into a single 'superpixel') and the readout threshold was chosen. For this, an area around a single DR was selected and the number of single-photon counts over this area ('the signal') was compared with the corresponding number for an identical empty area ('the noise'). The threshold value S_th = 685 corresponded to the largest signal-to-noise ratio. However, in the experiment this value was slightly adjusted (within the range 685 < S_th < 700) in order to minimize the error in the correlation function measurement. The resulting signal-to-noise ratio always exceeded 3. For a measurement of the second-order CF, a set of 10^6 frames was acquired. Because of the limited gate rate, the whole data acquisition took many hours. The unavoidable small displacements of the images during this time (due to small temperature variations and mechanical vibrations) were taken into account by taking a long frame after every 10^4 standard frames. These 'control' frames allowed us to trace the displacement of the DR images and to make appropriate corrections to the coordinates of the pixels chosen in each frame. Five datasets were taken, with gate times of 10, 15, 20, 30, and 40 ns. For each dataset, the mean number of single-photon events per 'superpixel' per frame was less than 0.1. The probability of two photons being counted as one was therefore negligible. The second-order CF at zero delay and displacement was calculated as g^(2)(0,0) = ⟨N_A N_B⟩ / (⟨N_A⟩⟨N_B⟩), where the angular brackets denote averaging over all frames and N_A,B are the numbers of single-photon events for an area associated with a given object, either a DR or a cluster, in the fields A and B (N_A,B = 0, 1). Besides, for each object seen in the image (Fig. 2, left),
the dimensionless 'brightness' B was calculated from the average number of counts N_A,B per gate, the gate time T_g, and the normalized intensity Ĩ = αI of the excitation beam at the position of the object; here α is a normalization factor, the same for all measurements, chosen in such a way that the most frequent objects have a value of B close to 1. The distribution of the objects with respect to the brightness value B is shown in Fig. 2, right. One can see that, despite the low total number of objects in the image, several groups can be distinguished: roughly, B < 1.25, 1.25 < B < 2.5, and 2.5 < B < 4. The comparison with the CF measurements below will show that these groups very likely correspond to a single DR and to clusters of 2 and 3 DRs, respectively. Due to the mechanical vibrations in the system as well as to the bleaching and blinking of the DRs, the brightness of each object changed during the data acquisition time. This was taken into account by normalizing the resulting CF g^(2)(0,0) of each object to its value at a time delay of T = 30 ms, g^(2)(T,0). The latter was calculated by taking frames separated by 30 ms, which is much larger than the lifetime. The values of g^(2)(0,0) were also corrected for the level of the noise, for which g^(2)(0,0) = 1. The results of the g^(2) measurement with the gate time T_g = 10 ns are shown in Fig. 3. The measurement error is estimated from the number of registered single-photon events. For all objects, no anti-bunching is observed at low threshold values (Fig. 3a). This is because at low threshold the noise is too high, and its Poissonian statistics masks the anti-bunching. At higher threshold values, we see that an object that most probably is a single DR (B = 0.74) manifests significant anti-bunching: g^(2) = 0.35 ± 0.1. However, this value is relatively high because of the large gate width (T_g = 10 ns). The effect of the averaging over T_g can be described as follows. For cw excitation [11,12], the CF g^(2)(t_1 − t_2) depends on the registration times t_1, t_2 of the first and second photons according to Eq. (3), where k is the decay rate and p the probability of two-photon emission. In order to determine these parameters, g^(2)(t_1 − t_2) was measured in a different setup, with a single DR excited at the saturation level and the emission registered by two avalanche photodiodes followed by a coincidence circuit. The results (Fig. 4, top) show that k = 0.1 ns^-1 and p = 0.22 once the noise contribution to g^(2)(0) is eliminated. Integration of Eq. (3) over t_1 and t_2 within the limits from 0 to T_g yields Eq. (4). From this expression, the expected value of the bunching parameter measured for a single DR with a gate time of 10 ns is 0.43 (dashed line in Fig. 3), which is in agreement with the measured value.
Fig. 4 (bottom) caption: g^(2)_int averaged over all 'dim' objects (B < 1.25) as a function of the gate (integration) time T_g. Solid line: fit using Eq. (4).
For a brighter object (B = 2.35), which we associate with a cluster of 2 DRs, the value of g^(2) measured at optimal threshold values is 0.65 ± 0.1 (Fig. 3b). Theoretically, the existence of two single-photon emitters in a cluster can be taken into account by using the general formula describing g^(2) in the presence of m independent contributions (modes) [14], where g^(2)_1 is the value for a single emitter. For m = 2 and g^(2)_1 = 0.43, we get g^(2)_2 = 0.72, in agreement with the data in Fig. 3b. Finally, Fig. 3c shows the bunching parameter for an object with B = 4.91, for which we assume m = 4; this gives the theoretical prediction g^(2)_4 = 0.86, again in good agreement with the measurement (0.85 ± 0.1).
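The numerical statements above (a gate-averaged bunching parameter of 0.43 for k = 0.1 ns^-1, p = 0.22 and T_g = 10 ns, and the cluster values 0.72 and 0.86 for m = 2 and m = 4) can be reproduced with a short script. The functional forms used below, g^(2)(τ) = 1 − (1 − p)·exp(−k|τ|) for the single-emitter CF and g^(2)_m = 1 − (1 − g^(2)_1)/m for m independent, equally bright emitters, are our assumptions for Eq. (3) and the m-mode formula, chosen because they are the standard expressions and they reproduce the quoted numbers.

```python
import numpy as np
from scipy import integrate

k, p, Tg = 0.1, 0.22, 10.0   # decay rate (1/ns), two-photon probability, gate (ns)

def g2(tau):
    """Assumed single-emitter correlation function under cw excitation."""
    return 1.0 - (1.0 - p) * np.exp(-k * np.abs(tau))

# Gate-averaged bunching parameter: normalized double integral over the gate window
val, _ = integrate.dblquad(lambda t1, t2: g2(t1 - t2), 0.0, Tg, 0.0, Tg)
print(f"gate-averaged g2 (single DR): {val / Tg**2:.3f}")        # ~0.43

def g2_cluster(m, g2_single=0.43):
    """Assumed scaling for m independent, equally bright single-photon emitters."""
    return 1.0 - (1.0 - g2_single) / m

print(f"m = 2: {g2_cluster(2):.3f}")                              # ~0.72
print(f"m = 4: {g2_cluster(4):.3f}")                              # ~0.86
```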
It follows that our hypothesis about the numbers of DRs in the clusters, based on their brightness, agrees very well with the bunching parameter measurements. This is a good indication of the relevance of our method for evaluating the number of nonclassical emitters in a cluster. Figure 4 (bottom) shows the results of averaging the measured bunching parameter over all objects for which B < 1.25 and which are therefore identified as single DRs. The threshold was chosen to be 693. We see that the value of the bunching parameter grows with the gate width. The dependence is well fitted by Eq. (4). The errors take into account both the number of single-photon events and the spread of the data obtained for different DRs. In total, 10 DRs contributed to the plot, but only 3 of them for T_g = 10 ns, 2 for T_g = 15 ns, 2 for T_g = 20 ns, and 3 for T_g = 40 ns, as the statistical error was too large for the rest. It is also interesting to consider the measurement of higher-order CFs for a DR cluster. Indeed, for a cluster of N DRs, the Nth-order CF will be nonzero, g^(N)(0,0) ≠ 0, while the next-order one will show the analog of antibunching, g^(N+1)(0,0) ≈ 0. This is because a cluster of N single-photon emitters cannot emit more than N photons within the lifetime. From such measurements, one can get more information about the number of DRs in a cluster. Note that here it is not required that all DRs emit photons into the same radiation mode. Even for a product state of N photons simultaneously emitted into different modes, the nonclassicality condition [15] will be satisfied. Moreover, if all DRs in the cluster emit photons into the same mode, even with small (but equal) probabilities, another nonclassical feature can be observed. Indeed, the cluster then emits an N-photon Fock state. In the presence of losses leading to a finite detection efficiency η, one can account for the latter using the beamsplitter model, with the transmission coefficient t ≡ √η and the reflection coefficient r ≡ √(1 − η). Then the light state becomes the state (8), and the probability to register k photons follows from it. Regardless of the detection efficiency, this probability distribution satisfies another nonclassicality condition [15], as can be verified for the state (8). (Note that for a Poissonian state, both (11) and (6) will give 1.) The nonclassical behavior will be especially noticeable for k close to N. In conclusion, we have observed spatially resolved images of several single-photon emitters ('dot-in-rods') and their clusters. Using a single-photon photodetector array (ICCD camera), we were able to measure the bunching parameter for each individual object and to distinguish a single emitter from a cluster of emitters. By assuming that the brightness of an object scales as the number of DRs in it, we identified two-dot, three-dot, and four-dot clusters. This assumption is confirmed by the results of the correlation function measurement. Finally, we propose the measurement of higher-order correlations as a method for a better determination of the number of single-photon emitters in a cluster. This work was supported by ERA-Net.RUS (project Nanoquint) and by the Russian Foundation for Basic Research, grant 12-02-00965. O. A. S. acknowledges support from the Dynasty Foundation. We are grateful to M. Sondermann and V. Salakhutdinov for helpful discussions.
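To illustrate the lossy-detection argument discussed above: if a cluster of N emitters produces an N-photon Fock state and each photon reaches the detector independently with efficiency η (the beamsplitter model), the number k of registered photons is binomially distributed. The sketch below computes that distribution and a generic sub-Poissonian check (Mandel Q < 0); the specific inequalities (6) and (11) referred to in the text are not evaluated here, so this is only an assumed, simplified consistency check.

```python
from math import comb

def detection_probabilities(N, eta):
    """P(k) for registering k photons from an N-photon Fock state
    through a channel of transmission eta (beamsplitter loss model)."""
    return [comb(N, k) * eta**k * (1 - eta)**(N - k) for k in range(N + 1)]

def mandel_q(probs):
    """Mandel Q parameter: negative values indicate sub-Poissonian statistics."""
    mean = sum(k * p for k, p in enumerate(probs))
    var = sum((k - mean) ** 2 * p for k, p in enumerate(probs))
    return (var - mean) / mean

for eta in (0.1, 0.5, 0.9):
    probs = detection_probabilities(4, eta)          # e.g. a four-DR cluster
    print(f"eta = {eta}: P(k) = {[round(p, 3) for p in probs]}, "
          f"Q = {mandel_q(probs):.2f}")
# Q = -eta for any eta > 0, i.e. the registered light stays sub-Poissonian
# (nonclassical) regardless of the detection efficiency.
```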
Inhibition of mycelial growth and conidium germination of Colletotrichum sp. for organic and inorganic products Objective: To evaluate the effect of hydrogen peroxide, potassium sorbate, sodium bicarbonate, and chitosan on mycelial growth and in vitro germination of Colletotrichum sp., to be used for future management of anthracnose disease in postharvest cv. Ataulfo mango fruit. Design/Methodology/Approach: The effectiveness of the treatments was evaluated using the poisoned culture method. The evaluated concentrations of hydrogen peroxide and potassium sorbate were 1.0, 0.8, 0.6, 0.4, 0.2, 0.16, 0.12, 0.08, and 0.04%; sodium bicarbonate, 1.0, 0.8, 0.6, 0.4 and 0.2%; and chitosan, 2.5, 2.0, 1.5, 1.0 and 0.5%. A 6-day disk of Colletotrichum sp. mycelial growth was placed in each poisoned culture medium. The inhibition of mycelial growth and the germination of Colletotrichum sp. conidia were evaluated. The experimental design was completely randomized with five repetitions for mycelial growth and four for conidium germination. The results were analyzed using the Kruskal-Wallis test and the comparison of average ranges. The CE50 and CE95 of each product was estimated using Probit analysis with the results of mycelial growth inhibition. Results: The mycelial growth inhibition (100%) of the Colletotrichum sp. strain was reached starting at concentrations of 0.16, 0.2, 1.0, and 2.5% for hydrogen peroxide, potassium sorbate, sodium bicarbonate, and chitosan, respectively. The inhibition of conidium germination was only observed in treatments with hydrogen peroxide and potassium sorbate. The CE50 and CE95 for hydrogen peroxide was 0.1 and 0.12%; for potassium sorbate, 0.10 and 0.19%; for sodium bicarbonate, 0.16 and 0.88%; and for chitosan, 1.20 and 2.18%. Findings/Conclusions: The evaluated treatments represent an effective and viable ecological alternative for the control of Colletotrichum sp., causal agent of anthracnosis in mango fruit. INTRODUCTION One of the diseases with greatest economic importance in mango (Mangifera indica L.) farming is anthracnosis or cankers, caused by the fungus Colletotrichum gloeosporioides (Sharma and Kulshrestha, 2015). During postharvest, this disease appears as small, rounded lesions, brown to black in color, with undefined outlines that are slightly sunken into the fruit's flesh. The lesions increase in size as the fruit ripens until joining together, and in severe cases, they cover the entire surface (Siddiqui and Ali, 2014). The control of anthracnosis in postharvest mango fruit is generally done with synthetic fungicides. However, due to the demands of the international market, these have ceased to be used due to the possible risks for human health and environmental contamination (Landero-Valenzuela et al., 2016). Because of the restriction of pesticide use, currently the market has proposed control alternatives such as hydrothermal treatment, use of inorganic salts, storage in controlled and modified environments, biological control strategies, products of organic origin, and vegetable extracts, among others (Dessalegn et al., 2013). Among the products of organic origin, the use of chitosan has shown an inhibitory effect on the development of the disease in postharvest mango fruit cv. Tommy Atkins (Gutiérrez-Martínez et al., 2017). 
However, there are reports that the effectiveness of chitosan depends on the pathogenic strain evaluated, the molecular weight of the product, the concentration used, and its degree of deacetylation, among other variables (Bautista-Baños et al., 2006; Li et al., 2008). Other organic alternatives for the management of anthracnosis are sodium bicarbonate and potassium sorbate, which have shown total control of the disease in postharvest papaya (Ferreira et al., 2018) and olives (El-Sayed et al., 2014). Hydrogen peroxide has been reported as an inorganic alternative whose use in the laboratory has shown promising results for control of the pathogen (Muangdech, 2014). Based on the above, this study evaluated the in vitro biological effectiveness of hydrogen peroxide, potassium sorbate, sodium bicarbonate, and chitosan on the mycelial growth and germination of Colletotrichum sp., with the aim of applying the findings in future studies on the management of anthracnosis in postharvest mango fruit cv. Ataulfo.
MATERIALS AND METHODOLOGY
The study was carried out in the Phytopathology Laboratory of the Rosario Izapa Experimental Field, belonging to the INIFAP and based in Tuxtla Chico, Chiapas. The pathogenic strain (6523) of Colletotrichum sp. used in this study was obtained from mango (Mangifera indica L.) inflorescences with symptoms of anthracnosis, collected in Huehuetán, Chiapas, Mexico. This strain was selected based on its previous evaluation for pathogenicity and aggressiveness (Martínez-Bolaños et al., unpublished data). The evaluation of treatment effectiveness was done using the poisoned culture method. For this, individual flasks (one per treatment) were prepared with potato-dextrose-agar (PDA) medium and sterilized at 120 °C for 15 min; each treatment was then added once the medium had cooled to approximately 40 °C, and the growth medium was poured into Petri dishes. The evaluated products were hydrogen peroxide and potassium sorbate at concentrations of 1.0, 0.8, 0.6, 0.4, 0.2, 0.16, 0.12, 0.08, and 0.04%, and sodium bicarbonate at concentrations of 1.0, 0.8, 0.6, 0.4, and 0.2%. Each product/dose combination was considered one treatment. Additional treatments consisted of low-molecular-weight chitosan (Sigma-Aldrich) at five concentrations (0.5, 1.0, 1.5, 2.0, and 2.5%) (Ghaouth et al., 1991), for which a PDA medium was prepared; after its solidification, 1000 µL of each chitosan concentration were added to form a film approximately 1 mm thick on the growth medium. After the growth medium solidified, a disk (5 mm diameter) of the strain's mycelial growth (6 days old) was deposited on the medium's surface, in the central area; finally, the dishes were incubated at room temperature (25 ± 2 °C) for a period of 6 d. As a control treatment, mycelial growth disks were placed on PDA without any added product. A completely randomized experimental design was used with five repetitions per treatment. The evaluated response variable was the percentage of effectiveness of each treatment, expressed as the percentage of inhibition of mycelial growth (PIMG) of the Colletotrichum sp. strain relative to the untreated control. To evaluate the effect of each treatment on the germination of fungal conidia, two additional Petri dishes were used per treatment, and 100 µL of a conidial suspension of strain 6523 (1 × 10^5 conidia/mL) were deposited and dispersed on the surface of the poisoned growth medium.
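The PIMG defined above is conventionally computed from the growth of the colony on the treated medium relative to the untreated control. The exact formula is not shown in this excerpt, so the sketch below assumes the standard form PIMG = ((C − T)/C) × 100, where C and T are the control and treatment colony diameters; the diameters listed are hypothetical.

```python
def pimg(control_diameter_mm, treatment_diameter_mm):
    """Percentage of inhibition of mycelial growth (assumed standard formula)."""
    c, t = control_diameter_mm, treatment_diameter_mm
    return (c - t) / c * 100.0

# Hypothetical colony diameters after 6 d of incubation (mm), five replicates
control_mean = sum([82, 80, 85, 81, 83]) / 5
treated = [12, 10, 15, 11, 13]
print([round(pimg(control_mean, t), 1) for t in treated])
```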
The Petri dishes were incubated at room temperature (25 ± 2 °C) for 24 h; 100 conidia were then counted and the total germination percentage was determined under a compound microscope (40×). A conidium was considered germinated when the length of its germination tube was greater than that of the conidium itself. The results on the inhibition of growth and conidium germination of Colletotrichum sp. were analyzed using the Kruskal-Wallis test and a comparison of average ranges (P ≤ 0.05), given that the errors were not normally distributed. The effective concentration of each product required to inhibit 50 and 95% of mycelial growth (CE50 and CE95, respectively) was estimated using Probit analysis.
RESULTS AND DISCUSSION
The concentrations of hydrogen peroxide, potassium sorbate, sodium bicarbonate, and chitosan demonstrated a significant inhibitory effect on the mycelial growth of Colletotrichum sp. (P ≤ 0.05). Hydrogen peroxide inhibited mycelial growth of the fungus by more than 95% at doses of 0.12 to 1.0% (Table 1 and Figure 1). These results were statistically different from those of the other concentrations (comparison of average ranges). At the lowest dose of this inorganic product (0.04%), effectiveness was low (15.7%). Similar results were obtained with potassium sorbate, which totally inhibited the development of the pathogen at concentrations from 0.2 to 1.0% (Figure 2). Concentrations below 0.2% produced 14.8 to 83.0% inhibition. With respect to sodium bicarbonate, total inhibition of mycelial growth of the fungus was reached only at the highest dose (1.0%), followed by the 0.8% concentration with 92.9% effectiveness (statistically different from the 1.0% concentration by the average ranges test), while at the lowest dose (0.2%) effectiveness was 54.2% (Figure 3). Finally, chitosan at the highest concentration (2.5%) did not allow mycelial growth of the pathogen, although there was no statistical difference from the 2.0 and 1.5% concentrations, which produced 90.6 and 85.9% inhibition. At the lowest dose of chitosan (0.5%), the effect was minimal (8.2%) (Figure 4). In the germination tests of Colletotrichum sp. conidia, a significant effect was observed for the concentrations of hydrogen peroxide, potassium sorbate, and sodium bicarbonate (P ≤ 0.05) compared to the control, with total inhibition of germination with the different concentrations of hydrogen peroxide and potassium sorbate, and 43.5% inhibition of germination with sodium bicarbonate at 1.0%. Finally, no inhibitory effect on conidium germination was observed at any of the chitosan concentrations or in the control. Similar results for the effectiveness of hydrogen peroxide in inhibiting mycelial growth of C. gloeosporioides were reported by Muangdech (2014) using concentrations of 0.5% and 0.25%. The inhibitory effect of hydrogen peroxide can be attributed to its capacity to produce highly reactive oxygen free radicals, which attack cellular components, causing membrane rupture, enzymatic inhibition, nucleoside oxidation, disruption of protein synthesis and, finally, cell death (Finnegan et al., 2010). Its inhibitory effect on fungal cells has been shown in different fungal species and is attributed to the peroxidase enzyme.
Together with an adequate concentration of peroxide as an oxygen donor, this enzyme directly affects the proteins of the spores and mycelium by forming a lignin barrier in the cell walls, thereby limiting the development of the fungus (Joseph et al., 1998). The effectiveness of potassium sorbate in inhibiting the growth of Colletotrichum was previously reported by Jabnoun-Khiareddine et al. (2016), who obtained total inhibition of the mycelial growth of C. coccodes with potassium sorbate at concentrations of 0.5, 1.0, and 1.5%. The principal mode of action of most potassium-salt-based compounds is a reduction in the turgor pressure of the fungus, which causes collapse and contraction of the hyphae (Fallik et al., 1997a; Palmer et al., 1997). The sensitivity of Colletotrichum sp. to sodium bicarbonate in the present study is consistent with that reported by Hasan et al. (2012), who observed greater sensitivity of C. gloeosporioides with increasing concentration of this compound. Those authors reported more than 60% inhibition of mycelial growth at a concentration of 1.0%, and total inhibition at 2.0, 2.5, and 3.0%. However, the effect of sodium bicarbonate on spore germination observed in this study differed partially from that reported by Hasan et al. (2012), who mentioned an inhibitory effect only at doses above 2.0%, while for Colletotrichum sp. in this study inhibition was observed starting at 1.0%. The inhibitory and antifungal effect of sodium bicarbonate is attributed to its different modes of action. Sodium bicarbonate has the capacity to raise the pH of its surroundings, it can deactivate the extracellular enzymes of fungi, and it can interact directly with cellular membranes and disrupt cellular physiology (Palou et al., 2001). Additionally, the salts in sodium bicarbonate increase osmotic stress, reducing the turgor pressure of fungal cells, which results in the collapse of hyphae and spores (Fallik et al., 1997b; De Costa and Gunawarhana, 2012). The results obtained with chitosan for the inhibition of mycelial growth of Colletotrichum sp. are similar to those reported by Berumen et al. (2015). However, they differ from the findings of those authors with regard to the effect on conidium germination: they reported an effect at concentrations of 1.0, 1.5, and 2.0%, while no effect was observed for this variable at the doses evaluated in this study. The inhibition of the growth of this fungus is due to the free amino groups of chitosan, which produce changes in cellular permeability and a disequilibrium in the ionic homeostasis of K+ and Ca2+, among others, causing the hyphae to atrophy, deform, and collapse (Jun et al., 2011; Peña et al., 2013). In addition to the aforementioned changes, it has been shown that chitosan produces a physical barrier against diverse pathogens in different fruits, while also increasing firmness and delaying ripening in strawberry, tomato, peach, and papaya (Luna et al., 2001; Bautista-Baños et al., 2003). Hydrogen peroxide reached the lowest CE50 and CE95 for the mycelial growth of Colletotrichum sp., with 0.1 and 0.12%, followed by potassium sorbate with 0.1 and 0.19%, while low-molecular-weight chitosan had the highest CE50 and CE95, with 1.2 and 2.18%, respectively (Table 2).
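The CE50 and CE95 values above come from Probit analysis, i.e., a regression of the probit-transformed inhibition proportion on log10(dose), from which the doses giving 50% and 95% inhibition are back-calculated. The sketch below is a simplified version (least squares on probit-transformed means rather than a full maximum-likelihood fit), and the dose-response values are hypothetical.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical dose-response data: product concentration (%) vs. mean inhibition fraction
doses = np.array([0.04, 0.08, 0.12, 0.16, 0.2, 0.4])
inhibition = np.array([0.16, 0.45, 0.70, 0.93, 0.98, 0.995])

# Probit transform (clip to avoid infinities at 0 or 1), regress on log10(dose)
probits = norm.ppf(np.clip(inhibition, 1e-4, 1 - 1e-4))
slope, intercept = np.polyfit(np.log10(doses), probits, 1)

def effective_concentration(quantile):
    """Dose giving the requested inhibition fraction under the fitted line."""
    return 10 ** ((norm.ppf(quantile) - intercept) / slope)

print(f"CE50 ~ {effective_concentration(0.50):.3f}%")
print(f"CE95 ~ {effective_concentration(0.95):.3f}%")
```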
CONCLUSIONS
The inhibitory effect observed on mycelial growth and conidium germination of Colletotrichum sp. with the use of hydrogen peroxide, potassium sorbate, sodium bicarbonate, and chitosan suggests their possible use as ecological alternatives for the postharvest management of anthracnosis in mango fruit cv. Ataulfo.
2022-03-11T16:07:11.723Z
2022-02-28T00:00:00.000
{ "year": 2022, "sha1": "fea941489861306c8a8eb4e73d5c237831663e95", "oa_license": "CCBYNC", "oa_url": "https://revista-agroproductividad.org/index.php/agroproductividad/article/download/2051/1783", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "9d47ae54ca09a2d1280f16c7b47e5788fd07e8aa", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [] }
115215515
pes2o/s2orc
v3-fos-license
Ab initio reconstruction of transcriptomes of pluripotent and lineage committed cells reveals gene structures of thousands of lincRNAs RNA-Seq provides an unbiased way to study a transcriptome, including both coding and non-coding genes. To date, most RNA-Seq studies have critically depended on existing annotations, and thus focused on expression levels and variation in known transcripts. Here, we present Scripture, a method to reconstruct the transcriptome of a mammalian cell using only RNA-Seq reads and the genome sequence. We apply it to mouse embryonic stem cells, neuronal precursor cells, and lung fibroblasts to accurately reconstruct the full-length gene structures for the vast majority of known expressed genes. We identify substantial variation in protein-coding genes, including thousands of novel 5′-start sites, 3′-ends, and internal coding exons. We then determine the gene structures of over a thousand lincRNA and antisense loci. Our results open the way to direct experimental manipulation of thousands of non-coding RNAs, and demonstrate the power of ab initio reconstruction to render a comprehensive picture of mammalian transcriptomes. INTRODUCTION A critical task in understanding mammalian biology is defining a precise map of all the transcripts encoded in a genome. While much is known about protein-coding genes in mammals, recent studies have suggested that the mammalian genome also encodes many thousands of large ncRNA genes1-3,4. Recently, we used a chromatin signature, combining Histone 3 Lysine 4 tri-methylation modifications (H3K4me3) that mark the promoter region and Histone 3 Lysine 36 tri-methylation modifications (H3K36me3) that mark the entire transcribed region ( Supplementary Fig. 1), to discover the genomic regions encoding ~1600 large intergenic ncRNAs (lincRNAs) in four mouse cell types4, and ~3300 lincRNAs across 6 human cell types5. Defining the complete gene structure of these lincRNAs is a pre-requisite for experimental and computational studies of their function. We previously gained initial insights by hybridizing total RNA to tiling microarrays defined across the K4-K36 region4. This provided a coarse list of putative exonic locations, but could not define the precise gene structures and exon connectivity. Advances in massively-parallel cDNA sequencing (RNA-Seq) have opened the way to unbiased and efficient assays of the transcriptome of any mammalian cell6,7, [8][9][10]. Recent studies in mouse and human cells have mostly focused on using RNA-Seq to study known genes6, 8,7,10,11, and depended on existing annotations. They were thus of limited utility for discovering the complete gene structure of lincRNAs or other non-coding transcripts. An alternative strategy is to use an ab initio reconstruction approach9,12-14 to learn the complete transcriptome of an individual sample from only the unannotated genome sequence and millions of relatively short sequence reads. A complete ab initio transcriptome reconstruction of a sample will (1) identify all expressed exons; (2) enumerate all the splicing events that connect them; (3) combine them into transcriptional units; (4) determine all isoforms, including alternative ends, and (5) discover novel transcripts. A successful ab initio method should be applicable to large and complex mammalian genomes, and should be able to reconstruct transcripts of variable sizes, expression levels and protein-coding capacity. 
Despite early successes in yeast9, ab initio reconstruction of a mammalian transcriptome has remained an elusive and substantial computational challenge. There has been important recent progress, including (1) efficient gapped aligners (e.g., TopHat13) that can map short reads that span splice junctions ('spliced reads'); (2) use of such gapped alignments to identify novel splicing events9,13; (3) exon identification methods14; and (4) genomeindependent assembly of unmapped reads to sequence contigs (e.g., Abyss12). Each of these methods provides an important component towards reconstruction, but none can reconstruct the complete transcriptome of a mammalian cell, due to scaling issues9, limitations in handling splicing14, or inability to identify transcripts with moderate coverage12. Here, we present Scripture, a comprehensive method for ab initio reconstruction of the transcriptome of a mammalian cell that uses gapped alignments of reads across splice junctions (exploiting recent increases in read length) and reconstructs reads into statistically significant transcript structures. We apply Scripture to RNA-Seq data from mouse embryonic stem cells (ESC), neural progenitor cells (NPC), and mouse lung fibroblasts (MLF) and correctly identify the complete annotated full-length gene structures for the vast majority of expressed known protein coding genes. The reconstruction of the three transcriptomes reveals substantial variation in protein coding genes between cell types, including thousands of novel 5′-start sites, 3′ ends, or additional coding exons. Many of these variant structures are supported by independent data. We also discover the gene structure and expression level of over 2000 non-coding transcripts, including hundreds of transcripts from previously identified lincRNA loci, over a thousand additional lincRNAs with similar properties, and hundreds of multi-exonic antisense ncRNAs. We show that lincRNAs have no significant coding potential, and that they are evolutionary conserved. Our results open the way to direct experimental manipulation of this new class of genes and highlight the power of RNA-seq along with an ab initio reconstruction to provide a comprehensive picture of cell specific transcriptomes. RNA-seq libraries We used massively parallel (Illumina) sequencing to sequence cDNA libraries from polyA(+) mRNA from ESC, NPC and MLF cells, with 76 base paired-end reads. For the ESC library, we generated a total of 152 million paired-end reads. Using a gapped aligner13, 93 million of these were alignable (497Mb aligned bases, 262X average coverage of known protein coding genes expressed in ESC). We obtained similar numbers for the NPC and MLF libraries (Methods). In ESC, 76% of these reads map within the exonic regions of known protein-coding genes, 9% are in introns of known protein coding genes, and 15% map in intergenic regions. We found a strong correlation between expression levels of protein-coding genes as measured by RNA-Seq and Affymetrix expression arrays (r=0.88 for all genes, Supplementary Fig. 2). Scripture: a statistical method for ab initio reconstruction of a mammalian transcriptome We next developed Scripture, a genome-guided method to reconstruct the transcriptome using only an RNA-Seq dataset and an (unannotated) reference genome sequence. Scripture consists of five steps (Fig. 1, Supplementary Note 1, Methods). First, we use reads aligned to the genome, including those with gapped alignments13 spanning exon-exon junctions ('aligned spliced reads', Fig. 
1c). 'Spliced' reads provide direct information on the location of splice junctions within the transcript, and ~30% of 76 base reads are expected on average to span an exon-exon junction. From the aligned spliced reads, we construct a connectivity graph (Fig. 1d), where two bases in the genome are connected if they are immediate neighbours either in the genomic sequence itself or within a spliced read. We use agreement with splicing motifs at each putative junction to orient the connection (edge) in the connectivity graph9,13 (Fig. 1d). Second, to infer transcripts, we use a statistical segmentation approach4 and both spliced and non-spliced reads to identify paths in the connectivity graph with mapped read enrichment compared to the genomic background (Fig. 1e). This is done by scoring a sliding window using a test statistic for each region, computing a threshold for genome-wide significance, and using the significant windows to define intervals. Third, from the paths, we construct a transcript graph connecting each exon in the transcript (Fig. 1f). Each path through the graph is directed and represents one oriented (strand-specific) isoform of the gene (Fig. 1e). Alternative spliced isoforms are identified by considering all possible paths in the transcript graph. Fourth, we augment the transcript graph with connections based on paired-end reads and their distance constraints, allowing us to join transcripts or remove unlikely isoforms (Fig. 1g, below). Finally, we generate a catalogue of transcripts defined by the paths through the transcript graph. Paired-end reads in transcriptome reconstruction and resolution of alternative spliced isoforms Paired-end information, consisting of reads that came from the two ends of the sequenced RNA fragment, provides valuable additional information in the reconstruction. First, the presence of paired-ends linking two regions shows that they appear in the same transcript; such a connection might not otherwise be apparent because low expression levels or non-alignable sequence might prevent a continuous chain of overlapping sequence reads (spliced or unspliced) across the transcript. We thus augment the transcript graphs with paired-end information, where available, to (indirectly) link nodes in the graph. We use these indirect links (Fig. 1g) to add edges between disconnected graphs, add internal nodes (exons) that might have been missed within a path (transcript), and add extra support for existing edges. This refines the structure of our transcripts and increases our confidence in them, especially in lowly-expressed transcripts that are more likely to have coverage gaps. Second, the distribution of library insert sizes constrains the distance between the paired end reads; these distance constraints can be used to infer the relative likelihood of some potential transcripts (for example, those in which the paired ends would be much closer or much further than expected). We infer the distribution of insert sizes for a given library from the position of read pairs on transcripts from those genes for which there is only a single transcript model (i.e., no detectable alternative splicing, Methods). For example, in the ESC library, this distribution matches well with the experimentally determined sizes. Using this distribution we assign likelihoods to each connection, filtering unlikely ones (Methods). 
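As a concrete illustration of the first steps above, the sketch below builds a toy connectivity graph from gapped ("spliced") alignments and enumerates candidate exon paths. It is a simplified stand-in for Scripture, not its actual implementation: the read blocks are hypothetical, exons are taken directly from merged read blocks, and the statistical segmentation and paired-end refinement steps are omitted. The two-read support threshold for a junction follows the criterion stated in the Methods.

```python
from collections import defaultdict

# Each spliced read is given as the exonic blocks it covers: ((start, end), ...),
# 0-based, end-exclusive; the gap between consecutive blocks is a putative intron.
spliced_reads = [
    ((100, 150), (300, 360)),
    ((120, 150), (300, 360)),
    ((320, 360), (500, 540)),
    ((330, 360), (500, 540)),
    ((320, 360), (700, 740)),
    ((335, 360), (700, 740)),
]

# Step 1: connectivity -- keep a junction (last base of one block -> first base of the
# next block) only if it is supported by two or more spliced reads.
junction_support = defaultdict(int)
for blocks in spliced_reads:
    for (s1, e1), (s2, e2) in zip(blocks, blocks[1:]):
        junction_support[(e1, s2)] += 1
junctions = {j for j, n in junction_support.items() if n >= 2}

# Step 2: take exons as merged read blocks (a stand-in for the statistical segmentation)
def merge_blocks(blocks):
    blocks = sorted(blocks)
    merged = [list(blocks[0])]
    for s, e in blocks[1:]:
        if s <= merged[-1][1]:
            merged[-1][1] = max(merged[-1][1], e)
        else:
            merged.append([s, e])
    return [tuple(b) for b in merged]

exons = merge_blocks([b for blocks in spliced_reads for b in blocks])

# Directed exon-to-exon edges implied by the supported junctions
edges = defaultdict(list)
for donor, acceptor in junctions:
    src = next(ex for ex in exons if ex[1] == donor)
    dst = next(ex for ex in exons if ex[0] == acceptor)
    edges[src].append(dst)

def paths_from(exon):
    """Depth-first enumeration of all exon chains (candidate isoforms) starting at `exon`."""
    if exon not in edges:
        return [[exon]]
    return [[exon] + tail for nxt in edges[exon] for tail in paths_from(nxt)]

for path in paths_from(exons[0]):
    print(path)    # two candidate isoforms sharing the first junction
```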
Reconstruction of full-length gene structures We applied Scripture to our mouse ES RNA-Seq dataset, and compared our reconstructions to protein-coding gene annotations15. Scripture identified 16,389 nonoverlapping, multiexonic transcript graphs which correspond to 15,352 known multi-exonic genes (Methods). 88.4% of reconstructed genes are covered by a single graph (no fragmentation of the reconstructed transcript) and 8.0% are covered by two transcript graphs (fragmentation of the transcript to two separate pieces in the reconstruction). Focusing on the 13,362 genes with a significant expression level (P<0.05, Methods), Scripture reconstructed the fulllength structure of the longest known splice isoform (from 5′ to 3′ end, including all exons and splice junctions, Fig. 2a) for 10,355 of them (~78%). All of our reconstructed transcripts for known multi-exonic transcripts also had the correct orientation (strand), allowing us to reconstruct genes that overlap one another on opposite strands (Fig. 2a). Complete transcript structures are recovered across a very broad range of expression levels (Fig. 2b,c) for both single and multi-exonic genes. For example, Scripture accurately reconstructs the full-length transcript of ~73% of the known protein-coding genes at the second quintile of expression, and ~94% of the genes from the top quintile. Furthermore, the average proportion of bases constructed for each transcript was high (Fig. 2c). Even for the bottom 5% of expressed genes, we recover on average 62% of each of these transcripts' bases ( Fig. 2c). For single-exon genes, we recover on average 80% of the transcribed bases. We obtained similar results in the other two cell types (19,835 and 20,407 transcript graphs for 14,212 and 13,351 known genes in NPC and MLF, respectively). Most of the genes that are not fully reconstructed are those with low expression levels; it should be possible to reconstruct most of these by generating additional RNA-Seq data. The few highly expressed genes that are not fully reconstructed are either the result of alignment artifacts caused by recent processed pseudogenes or stem from novel transcriptome variations, missing from the current annotation (explored in detail below). Novel transcriptome variations in annotated protein-coding genes Given that the vast majority of the Scripture reconstructions of protein-coding genes are extremely accurate, we next investigated the differences between the reconstructed transcriptome and the known gene annotations (Supplementary Table 1). We focused on transcripts with (i) novel 5′ start sites; (ii) novel 3′ ends; and (iii) previously unidentified exons within the transcriptional units of known protein-coding genes. In each category, we first discuss below the reconstructed transcripts in ESC and then consider the results for the NPC and MLF. (i) Alternative 5′ start sites are supported by H3K4me3 marks-We found 1804 transcripts in ESC that match the annotated 3′-end but have an alternative 5′ start site, derived from an additional exon not overlapping the annotated first exon. We distinguish between internal alternative 5′ start sites (1397 cases, Fig. 3a) that occur downstream of the annotated start, and external alternative 5′ start sites (407 cases, Fig. 3b) that occur upstream of the annotated start. 90% of the internal 5′-start sites and 75% of the external 5′ start sites contain an H3K4me3 modification, a mark of the promoter region of genes16 ( Supplementary Fig. 3). 
These alternative start sites are on average 21kb upstream of the annotated site, substantially revising the annotated promoters. Notably, ~60% of the transcripts with an alternative start site (internal or external) had no reconstructed isoform starting at the annotated 5′-start site. We observed similar results from NPC and MLF (Fig, 3a,b, Venn diagrams, Supplementary Table 1). Altogether, we identified 2813 internal 5′ start sites (2302 are supported by K4me3 in their respective tissues), and 807 external 5′ start sites in at least one cell type. In particular, 33% of these novel 5′ ends are likely unique to ESC. (ii) Alternative 3′ UTRs are supported by polyadenylation motifs-There are 551 (~4%) ESC-reconstructed transcripts with an alternative 3′-end downstream of the annotated 3′-end (mean distance 30 kb downstream, Fig. 3c). Of these, 275 (~50%) have evidence of a polyadenylation motif within the novel 3′ exon, which is only slightly lower than for annotated 3′ ends (60%), and much higher than for randomly chosen size-matched exons (6%). The frequency of the polyadenylation motif supports the accuracy of the reconstruction. To conservatively distinguish between upstream (early) termination and incomplete reconstruction, we designated novel 3′ ends only in those cases that did not overlap any of the known exons in the annotated transcript and that contained complete 5′ start sites. We identified 759 transcripts with upstream 3′-ends in ESC (Fig. 3d), 44% of them containing a poly-adenylation motif, supporting their biological relevance. For the vast majority (90%) of these transcripts, Scripture also reconstructed an isoform that contained the annotated 3′ end. We observed similar results for NPC and MLF (Fig. 3c,d, Venn diagrams, Supplementary Table 1). Altogether, we identified 940 downstream 3′ ends and 1850 upstream 3′ ends in at least one cell type. (iii) Additional coding exons are highly conserved and preserve ORFs-We found 534 transcripts in ESC with at least one additional previously unannotated internal coding exon spliced into annotated protein-coding transcripts (Fig. 3e). These transcripts contained 588 novel internal exons, ranging in length from 6bp to 3.5kb (median 111bp, 60-224 20%-80% quantiles). Of these additional exons, 322 (54.5%) are present in all versions of the reconstructed transcript in ESC. The vast majority (83%) of these novel exons retain the reading frame of the transcript, and are as highly conserved as known coding exons ( Supplementary Fig. 4), consistent with their coding capacity. We validated the presence of the novel exons within 5 of 5 tested transcripts, using RT-PCR followed by Sanger sequencing (Methods). We observed similar results in MLF (124 transcripts, 144 exons) and NPC (325 transcripts, 363 exons) (Fig. 3e, Venn diagram). The majority (~70%) are present in all versions of the reconstructed transcript within a cell type. Altogether, we identified 960 novel internal exons in at least one cell type (Fig. 3e, Venn diagram). Discovery of the complete gene structures of hundreds of previously identified lincRNA loci We next turned to identifying the gene structures of transcripts expressed from known lincRNAs loci. We had previously identified 317 lincRNA loci based on K4-K36 domains in ESC cells4. When applied to ESC RNA-Seq data, Scripture reconstructed multi-exonic gene structures for 250 (78.8%) of them (Fig. 4a). This is comparable to the proportion (78.5%) reconstructed for protein-coding genes with K4-K36 domains in ESC. 
Scripture reconstructed 87% (160/183) of ESC lincRNAs for which we previously identified an RNA hybridization signal from tiling microarrays. We discuss possible reasons for the few remaining discrepancies in Supplementary Note 2. The reconstructed lincRNA transcripts in ESC have 3.7 exons on average, an average exon size of 350 bp, and an average mature spliced size of 3.2 kb (compared to 9.7 exons, exon length of 291 bp, and average length of 2.9kb for protein coding genes). The Scriptureidentified strand information for each lincRNA is consistent with that inferred from the location of K4me3 modification, and with the orientation determined from a strand-specific RNA-Seq library which we generated independently (Methods). The majority of lincRNAs likely represent 5′ complete transcripts based on overlap with H3K4me3 (82%) and 3′ complete transcripts based on presence of a polyadenylation motif (~50%, comparable to 60% for protein-coding genes and far above background of 6%). Similarly, Scripture successfully reconstructed lincRNA gene structures for K4-K36 lincRNA loci in MLF and NPC (232 of 289 in MLF and 224 of 270 in NPC). Most are likely 5′ complete (69% in MLF and 81% in NPC based on overlap with H3K4me3) and many may be 3′ complete based on detectable 3′ polyadenylation sites (18% in MLF and 37% in NPC). In addition, we successfully reconstructed another 116 lincRNAs previously identified only in mouse embryonic fibroblasts but which were now reconstructed in at least one of the other three cell types. Altogether, we identified gene structures for 609 previously defined lincRNA loci in at least one of the three cell types. Discovery of novel lincRNAs In addition to the previously identified lincRNAs, we found another 1140 multi-exonic transcripts that map to intergenic regions (591 in ESCs, 318 in MLF, and 528 in NPC). The majority of these transcripts do not appear to encode proteins, and are designated as noncoding, based on their Codon Substitution Frequency (CSF) scores17-18 (Methods) across the mature (spliced) RNA transcript (88%, Fig. 5a), and the lack of an open reading frame (ORF) larger than 100 amino acids (80%, Fig. 5b). Careful review of the remaining ~12%, reveals 66 loci that are likely to be novel protein coding genes (high CSF score, ORF >200 amino acids, and very high evolutionary conservation, Supplementary Fig. 5). Most of the novel lincRNA loci were not identified in our previous study due to the stringent criteria we imposed when using chromatin maps to identify lincRNAs. Specifically, we required that a K4-K36 domain extend over at least 5 Kb and be well-separated from the nearest known gene locus4. Indeed, the vast majority of novel intergenic transcripts (76%) were enriched for a K4-K36 domain (a comparable proportion as for expressed proteincoding genes) but failed to meet the other two criteria or were too weak to be identified at a genome-wide significance (without knowing their locus a priori). On average, the genomic loci of the novel lincRNAs are closer to neighboring genes, have smaller genomic sizes (~3.5Kb average) and shorter transcript lengths (859bp). Of the lincRNAs that did not have a chromatin signature that reached genome-wide significance, ~40% showed chromatin modifications enriched at a nominal significance level (compared to 57% for protein coding genes). On average, the lincRNAs are expressed at readily detectable levels, albeit somewhat lower than those of protein-coding genes. 
The median expression level of the reconstructed lincRNAs (as estimated by RPKM, Methods) is approximately 3-fold lower than that of protein-coding genes (Fig. 5d), with ~25% of lincRNAs having expression levels higher than the median level for protein-coding genes (Fig. 5d). The novel lincRNAs identified in this study are expressed at somewhat lower levels than those from chromatin identified loci, consistent with the fact that chromatin enrichment is positively correlated with expression levels (Fig. 5d). We compared the novel lincRNA genes to a collection of ~35,000 mouse cDNA and found evidence that ~43% of our lincRNAs were present in this collection1. This is comparable to the reported fraction (40%) of known transcripts covered by the same cDNA catalogue1. The remaining lincRNAs are unique to this study. These were likely previously missed due to the different cell types and limited coverage of the previous study1. Most lincRNAs are evolutionarily conserved, with 22% of bases under purifying selection The reconstructed full-length gene structures of lincRNAs allow us to accurately assess their evolutionary sequence conservation in each exon and in small windows. To this end, we identified the orthologous sequences for each lincRNA across 29 mammals and estimated conservation by a metric (ω, Methods) reflecting the total contraction of the branch length of the evolutionary tree connecting them19. We calculated ω over the entire lincRNA transcript, as well as over individual exons. Based on our high resolution gene structures, the lincRNA sequences show significantly greater conservation than random genomic regions or introns (Fig. 5c), comparable to 8 known functional lincRNAs20,21,22, and lower than protein-coding exons. The results are consistent with our previous estimates of conservation4. Interestingly, conservation levels are indistinguishable between the chromatin defined lincRNAs4 and the novel ones identified only in this study (Fig. 5c), consistent with membership in the same class of functional large ncRNA genes. These conservation levels are considerably higher than those reported for a previous catalogue of large non-coding RNAs1. We also determined the specific regions within each lincRNA that are under purifying selection and thus likely to be functional, by computing ω within short windows (Methods). On average, 22% of the bases within the lincRNAs lie within conserved patches (comparable to 25% for the 8 known functional lincRNAs, much higher than 7% for intronic bases and lower than 77% of protein coding bases, Supplementary Fig. 6). These conserved patches provide a critical starting point for functional studies23. Variations in lincRNA expression and isoforms A substantial fraction (~41%) of the novel lincRNAs reconstructed in at least one cell type shows evidence for expression in at least two of the three cell types. This is comparable to the 45% of the previously identified lincRNAs present in at least 2 out of the 3 cell types. In contrast, 80% of expressed protein coding genes are expressed across two of the three cell types. This is not merely a result of the lower overall expression of lincRNAs, since the fraction of cell-type specific lincRNAs is higher than that of tissue specific protein-coding genes in every expression quantile ( Supplementary Fig. 7). Thus, lincRNAs are likely to be more tissue-specific than protein coding genes. A substantial portion of lincRNA loci also produce alternative spliced isoforms. 
For example, within ESC we identified two or more alternative spliced isoforms for 25% of lincRNA genes, comparable for 30% of protein coding genes (15% of lincRNAs in MLF have alternative spliced isoforms, and 14.7% in NPC). Altogether, 28.8% of the 1749 lincRNA loci have evidence for alternative isoforms in any of the three cell types. Identification of hundreds of large antisense transcripts Scripture reconstructed hundreds of transcripts that overlap known protein-coding gene loci but are transcribed in the opposite orientation and likely represent anti-sense transcripts. To determine orientation, we required that any identified antisense transcript be multi-exonic (Methods). Using these criteria, we identified 201 antisense multi-exonic transcripts in ESC (Fig. 4b); these transcripts have an average 5 exons per transcript and an average transcript size of 1.7Kb. On average, the antisense transcripts overlap the genomic locus of the sense protein coding gene by 1023 bp (83% of the transcript length), and most (64%) overlap at least one sense exon, but this overlap is substantially lower (766 bp, 48%). Some of these antisense transcripts (79, ~40%) were identified by a previous cDNA sequencing study1,24, but the majority (122, ~60%) were previously unidentified. Most (~85%) of anti-sense transcripts are non-protein coding by both ORF analysis (Fig. 5b) and CSF scores (Fig. 5a). Four of the newly identified antisense transcripts had a large, conserved open reading frame and are likely novel, previously unannotated protein coding genes. We validated the reconstructed ESC anti-sense transcripts by three independent sets of experimental data. First, the majority of the anti-sense loci carry an H3K4me3 mark at their 5′-end (Fig. 4b), consistent with their independent and antisense transcription (e.g., 64% of the 164 transcripts where it is possible to detect an independent H3K4me3 mark, because the 5′-end of the anti-sense transcript does not overlap the 5′-ends of the sense gene). Second, we generated and sequenced a strand-specific library in ESC (17.5M reads, Illumina, Methods), and found a significant number of reads on the anti-sense strand in >90% of cases (the remaining are likely missed in this limited sequencing due to lower expression). Finally, we confirmed 5 of 5 tested anti-sense transcripts using RT-PCR to unique exons of the antisense transcript (Methods) followed by Sanger sequencing. We obtained similar results for anti-sense transcripts in MLF and NPC (112 and 202 multiexonic antisense transcripts, respectively). Altogether, we identified 469 antisense transcripts expressed in at least one cell type, only 125 of which (27%) were previously identified in large scale sequencing of mouse cDNAs24. The remaining 344 (73%) were previously unidentified by this study, likely reflecting the distinct cell types used in this study, and the limited coverage of previous catalogues. The 469 anti-sense transcripts are expressed at comparable levels to the novel lincRNAs (Fig. 5d), but show substantially lower sequence conservation. Indeed, the antisense ncRNAs showed very little evolutionary conservation as estimated by the ω metric for the portions that do not overlap protein-coding exons on the sense strand, suggesting that the antisense ncRNAs are a distinct class from the lincRNAs (Fig. 5c). DISCUSSION Despite the availability of the genome sequence of many mammals, a comprehensive understanding of the mammalian transcriptome has been an elusive goal. 
In particular, the computational tools needed to reconstruct all full-length transcripts from the wealth of short read data were largely missing. A recent study proposed to overcome this limitation experimentally by using very long reads (e.g.. 454 sequencing), as a scaffold for short read reconstruction25. This is applicable, albeit at a substantial cost, for highly expressed genes, but would require extraordinary depth to cover more lowly expressed ones. Here, we present Scripture, a novel computational method to reconstruct a mammalian transcriptome with no prior knowledge of gene annotations. Scripture relies on longer reads that span splice junctions to connect discontiguous (spliced) segments, resolve multiple splice isoforms, and leverages paired-end information to refine these transcripts. Scripture can identify short but strongly expressed transcripts as well as much lower expressed transcripts for which there is aggregate evidence along the entire transcript length. While Scripture does rely on a reference genome sequence, many of its components can also be used in the development of methods for assembly of transcripts from read data only. We applied Scripture to RNA-Seq data from pluripotent ES cells and differentiated lineages and showed that we can accurately reconstruct the majority of expressed annotated protein coding genes, at a broad range of expression levels, as well as uncover a large number of novel isoforms in the protein-coding transcriptome. This variation may play key regulatory roles, defining new cell-type specific promoters, UTRs and protein-coding exons. We leveraged Scripture's sensitivity and resolution to reconstruct the gene structures and strand information of hundreds of lincRNAs and multi-exonic antisense transcripts, many of whom are only moderately expressed. Scripture identified over a thousand lincRNAs across the three cell types studied. The substantial majority of the lincRNAs identified were not previously found by classical largescale cDNA sequencing1. Many of these lincRNAs could not be reliably identified solely on the basis of chromatin structure, owing to their proximity to protein-coding genes or their short genomic lengths. Overall, we find that the ratio of expressed protein-coding to noncoding genes in these cell types is ~10:1, but that the total number of RNA molecules is more heavily biased toward the protein-coding fraction (~30:1), similar to previous observations26. Scripture identifies precise gene structures for the majority of previously found lincRNA loci (as well as for the newly discovered ones), a pre-requisite for further studies. For example, we used these to identify the specific regions within each lincRNA that are under purifying selection (conservation), a starting point for experimental and computational investigation. Taken together our results highlight the power of ab initio reconstructions to discover novel transcriptional variation within known protein coding genes, and provide a rich catalog of precise gene structures for novel non-coding RNAs. The next step is clearly to apply this approach to a wide range of mammalian cell types, to obtain a comprehensive picture of the mammalian transcriptome. Data Availability The sequencing data in this study is available at the NCBI Gene Expression Omnibus (GEO) under accession number GSE20851. 
The Scripture method is implemented as a stand-alone Java application and is available at www.broadinstitute.org/software/Scripture/, along with all assembled transcripts in both GFF and BED file formats. Additionally, all transcript graphs are available in the dot graph language. RNA Extraction & Library Preparation RNA was extracted using the protocol outlined in the RNeasy kit (Qiagen). Extracts were treated with DNase (Ambion 2238). Polyadenylated RNA was selected using Ambion's MicroPoly(A)Purist kit (AM1919M) and RNA integrity was confirmed using a Bioanalyzer (Agilent). We used a cDNA preparation procedure that combines a random priming step with a shearing step8-9,28 and results in fragments of ~700 bp in size. We previously found9,28 that this protocol provides relatively uniform coverage of the whole transcript, thus assisting in ab initio reconstruction. Specifically, a 'regular' RNA sequencing library (non-strand-specific) was created as previously described28, with the following modifications. 250 ng of polyA+ RNA was fragmented by heating at 98°C for 33 minutes in 0.2 mM sodium citrate, pH 6.4 (Ambion). Fragmented RNA was mixed with 3 μg random hexamers, incubated at 70°C for 10 minutes, and placed on ice briefly before starting cDNA synthesis. First-strand cDNA synthesis was performed using SuperScript III (Invitrogen) for 1 hour at 55°C, and second-strand synthesis using E. coli DNA polymerase and E. coli DNA ligase at 16°C for 2 hours. cDNA was eluted using the Qiagen MinElute kit with 30 μl EB buffer. DNA ends were repaired using dNTPs and T4 polymerase (NEB), followed by purification using the MinElute kit. Adenine was added to the 3′ end of the DNA fragments to allow adaptor ligation, using dATP and Klenow exonuclease (NEB; M0212S), and purified using MinElute. Adaptors were ligated and incubated for 15 minutes at room temperature. Phenol/chloroform/isoamyl alcohol (Invitrogen 15593-031) extraction followed to remove the DNA ligase. The pellet was then resuspended in 10 μl EB buffer. The sample was run on a 3% agarose gel (Nusieve 3:1 agarose) and a 160-380 base pair fragment was cut out and extracted. PCR was performed with Phusion High-Fidelity DNA Polymerase with GC buffer (New England Biolabs) and 2 M betaine (Sigma). [PCR conditions: 30 sec at 98°C; (10 sec at 98°C, 30 sec at 65°C, 30 sec at 72°C) for 16 cycles; 5 min at 72°C; hold at 4°C.] Products were run on a polyacrylamide gel for 60 minutes at 120 volts. The PCR products were cleaned up with Agencourt AMPure XP magnetic beads (A63880) to completely remove primers, and the product was submitted for Illumina sequencing. The "strand-specific" library was created from 100 ng of polyA+ RNA using the previously published RNA ligation method29 with modifications from the manufacturer (Illumina, manuscript in preparation). The insert size was 110 to 170 bp. RNA-Seq library sequencing All libraries were sequenced using the Illumina Genome Analyzer (GAII). We sequenced 3 lanes for ESC corresponding to 152 million reads, 2 lanes for MLF corresponding to 161 million reads, and 2 lanes for NPC corresponding to 180 million reads. Alignments of reads to the genome All reads were aligned to the mouse reference genome (NCBI 37, MM9) using the TopHat aligner13.
Briefly, TopHat uses a two-step mapping process: it first uses Bowtie30 to align all reads that map directly to the genome (with no gaps), and then maps all the reads that were not aligned in the first step using gapped alignment. TopHat uses canonical and non-canonical splice sites to determine possible locations for gaps in the alignment. Generation of connectivity graph Given a set of reads aligned to the genome, we first identified all spliced reads, as those whose alignment to the reference genome contains a gap. These reads and the reference genome are used to construct connectivity graphs. Each connectivity graph contains all bases from a single chromosome. The nodes in the graph are bases, and the edges connect each base to the next base in the genome as well as to all bases to which it is connected through a 'spliced' read (Fig. 1). In the analysis presented, we defined an edge between any two bases in the chromosome that were connected by two or more spliced reads. The connectivity graph thus represents the contiguity that exists in the RNA but that is interrupted by intron sequences in the reference genome. Identification of splice site motifs and directionality We restricted our analysis to spliced reads that mapped across donor/acceptor splice sites, either canonical (GT/AG) or non-canonical (GC/AG and AT/AC). We oriented each mapped spliced read using the orientation of the donor/acceptor sites it connected. Construction of transcript graphs The 'spliced' edges in the connectivity graph reflect bases that were connected in the original RNA but are not contiguous in the genome. To construct a transcript graph, we 'thread' the connectivity graph (which was constructed only from the genome and spliced reads) with the non-spliced (contiguous) reads, to provide a quantitative measure of the reads supporting each base and edge. We then use a statistical segmentation strategy to traverse the graph topology directly and determine paths through the connectivity graph that represent a contiguous path of significant enrichment over the background distribution (below). In this segmentation process, we scan variable-sized windows across the graph and assign significance to each window. We then merge significant paths into a transcript graph. Specifically, for a window of fixed size, we slide the window across each base in the connectivity graph (after augmenting it with the non-spliced reads). If a window contains only contiguous non-spliced reads, then it represents a non-spliced part of the transcript. However, if the window hits an edge in the connectivity graph connecting two separate parts of the genome (based on two or more spliced reads), then the path follows this edge to a non-contiguous part of the genome, denoting a splicing event. Similarly, when alternative splice isoforms are present, if a base connects to multiple possible places, then all windows across these alternative paths are computed. Using a simple recursive procedure we can compute all paths of a fixed size across the graph. Identification of significant segments To assess the significance of each path, we first define a background distribution. We estimate a genomic background distribution by permuting the read alignments in the genome and counting the number of reads that overlap each region, together with the frequency with which each count occurs. Given a distribution for the real number of counts over each position, we scan the genome for regions that deviate from the expected background distribution. First consider a fixed window size w.
We slide this window across each position (allowing for overlapping windows), and compute the probability of each observed window based on a Poisson distribution with λ=wnp. Since we are sliding this window across a genome of size L, we correct our nominal significance for multiple testing, by computing the maximum value observed for a window size (w) across a number of permutations of the data. This distribution controls the family-wise error rate, defined as the probability of observing at least one such value in the null distribution31. Notably, we can estimate this maximum permutation distribution well by a distribution known as the scan statistic distribution32, which depends on the size of the genome that we scan, the window size used, and our estimate of the Poisson λ parameter. This method provides us with a general strategy to determine a multiple testing corrected P-value for a specified region of the genome in any given sample. We use this method to compute a corrected significance cutoff for any given region. Finally, to identify significant intervals, we scan the genome using variable sized windows, computing significance values for each and filtering by a 5% significance threshold. For each window size, we merge the significant regions that passed this cutoff into consecutive intervals. We trim the ends of the intervals as needed, since we are computing significant windows (rather than regions) and it is possible that an interval need not be fully contained within a significant region. Trimming is performed by computing a normalized read count for each base in the interval compared to the average number of reads in the genome. We then trim the interval to the maximum contiguous subsequence of this value. We test this trimmed interval using the scan procedure and retain it only if it passes our defined significance level. We work with a range of different window sizes in order to detect paths (intervals) with variable support, Small windows have power to identify short regions of strong enrichment (e.g. short exon which is highly expressed), whereas long windows capture long contiguous regions with often lower and more 'diffuse' enrichment levels (e.g. a longer lower expression transcript, whose 'moderate evidence' aggregates along its entire length). Estimation of library insert size We estimated the insert size distribution by taking all reconstructed transcripts for which we only reconstructed a single isoform and computing the distribution of distances between the paired-end reads that aligned to them. Weighting of isoforms using paired end edges Using the size constraints imposed by the length of the paired ends, we assigned weights to each path in the transcript graph. We classified all paired ends overlapping a given path and assigned them to all possible paths that they overlapped. We then assigned a probability to each paired end of the likelihood that it was observed from this transcript given the inferred insert size for the pair in that path. We used an empirically determined distribution of insert sizes, estimated from single isoform graphs. We then scaled each value by the average insert size. We refer to this scaled value as our insert distribution. For each paired end in a path, we computed I, the inferred insert size (the distance between nodes following along the full path) minus the average insert size. We then determined the probability of I as the area in our insert distribution between −I, I. 
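A stripped-down version of the window scoring described above is sketched below: window counts are compared against a Poisson background with λ = w·n·p, and a family-wise 5% threshold is taken from the distribution of the maximum window count over permutations. The coverage array is synthetic, and the real method uses the scan statistic approximation rather than explicit permutations of the read alignments.

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(0)

# Synthetic per-base read-start counts along a small "genome"
genome_len = 10_000
coverage = rng.poisson(0.2, size=genome_len)
coverage[4_000:4_200] += rng.poisson(3.0, size=200)      # one "expressed" region

w = 100                                    # window size
n_reads = coverage.sum()                   # total reads; p = 1/genome_len per base
lam = w * n_reads / genome_len             # Poisson rate under a uniform background (lambda = w*n*p)

# Sliding-window counts via a cumulative sum
csum = np.concatenate(([0], np.cumsum(coverage)))
window_counts = csum[w:] - csum[:-w]

# Family-wise threshold: distribution of the maximum window count under permutation
n_perm = 200
max_counts = np.empty(n_perm)
for i in range(n_perm):
    pc = np.concatenate(([0], np.cumsum(rng.permutation(coverage))))
    max_counts[i] = (pc[w:] - pc[:-w]).max()
threshold = np.quantile(max_counts, 0.95)

significant_starts = np.where(window_counts > threshold)[0]
best = window_counts.max()
print(f"lambda = {lam:.2f}, FWER 5% threshold = {threshold:.0f}, "
      f"nominal Poisson P of best window = {poisson.sf(best - 1, lam):.2e}")
print("first significant window starts:", significant_starts[:5])
```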
This area gives the probability of obtaining the observed paired-end insert distance given the empirical distribution of paired-end insert sizes. To aggregate these into weights for each path, we simply weight each paired end by the probability of observing it from the given path. Paired ends that equally support multiple isoforms will count equally for all, but paired ends with biases toward some isoforms and against others will provide weighted evidence for each isoform. We assign this weight to each isoform path. This score is normalized by the number of paired ends overlapping the path. We filter out paths with little paired-end support (normalized score < 0.1). Determination of expression levels from RNA-Seq data Expression levels are computed as previously described8. Briefly, the expression of a transcript is computed in Reads Per Kilobase of exonic sequence per Million aligned reads (RPKM), defined as RPKM = 10^9 · r / (t · R), where r is the number of reads mapped to the exonic region of the transcript, t is the total exonic length of the transcript (in bases), and R is the total number of reads mapped in the experiment. Array expression profiling in ESC cells Microarray hybridization data was obtained from our previous studies including ESC, NPC16 and MLF4. Comparisons to known annotation The reconstructed transcripts were compared to the RefSeq genome annotation15 (NCBI Release 39). To determine whether a known annotation of a protein-coding gene from RefSeq was fully reconstructed, we first compared the 5′ and 3′ ends of the reconstructed vs the annotated transcript. If these overlapped, we further verified that all exons in the annotated transcript matched those in the reconstructed version. To score the portion of an annotated transcript covered by our reconstructions, we found the reconstructed transcript whose exons covered the largest fraction of the annotated transcript, and reported the portion of the annotation that it covered. ChIP-seq profiles in ESC cells and determination of K4 and K36 regions To determine regions enriched in chromatin marks from ChIP-seq data we used our previously described method4, applied to ESC, MLF, and NPC data4,16. Determination of external and internal 5′ start sites We identified alternative 5′ start sites by comparing the 5′ exon of our reconstructed transcripts to the location of the 5′ exon of the annotated gene overlapping it. If the reconstructed 5′ start site resided upstream of the annotated 5′ end, we termed it an 'external start site'. For the novel 5′ ends that are downstream of the annotated 5′ end (internal), we required a few additional criteria to avoid reconstruction biases due to low coverage. First, we required that the novel internal 5′ end does not overlap any of the known exons within the known gene. Second, we required that the reconstructed gene contains a complete 3′ end. To determine the presence of H3K4me3 modifications overlapping the promoter regions defined by these novel start sites, we computed regions of enriched K4me3 genome-wide (as previously described) and intersected the location of the novel 5′ exon (both internal and external) with the location of a K4me3 peak. Determination of premature/extended 3′ end To determine novel 3′ ends, we compared the locations of the 3′ exons of our reconstructed transcripts to those of annotated genes. If the reconstruction extended past the annotated 3′ end, we classified it as an extended 3′ end.
If the reconstruction ended before the annotated 3′ end we required that it not overlap any known exon and have a fully reconstructed 5′ start site. Determination of sequence conservation levels We used the SiPhy19 algorithm and software package (http://www.broadinstitute.org/ genome_bio/siphy/) to estimate ω, the deviation ('contraction' or 'extension') of the branch length compared to the neutral tree based on the total number of substitutions estimated from the alignment of the region of interest across 20 placental mammals (build MM9, http://hgdownload.cse.ucsc.edu/goldenPath/mm9/multiz30way/). For global (whole transcript) conservation, we estimated ω for each protein coding, lincRNA and antisense transcript exon and compared it to similarly sized regions within introns. To identify local regions of conservation within a transcript, we computed ω for all 12-mers within the transcript sequence, and assigned a p-value for each 12-mer based on the chi-square distribution, as previously described19. We then took all 12-mers showing significance at p< 0.05, collapsed overlapping 12-mers, and identified constrained regions within the transcript (e.g. Supplementary Fig. 6). ORF determination We estimated maximal supported open reading frames (ORFs) for each transcript built by scanning for start codons and computing the length (in nucleotides) until the first stop codon was reached. CSF Scores To further estimate the coding potential of novel transcripts, we evaluated whether evolutionary sequence substitutions were consistent with the preservation of the reading frame of any detected peptide. In a nutshell, if a transcript encodes a protein, we expect a reduction in frame shifting indels, non synonymous changes and, in general, any substitution that affects the encoded protein. To assess this, we used Codon Substitution Frequency (CSF) method as previously described17-18. RT-PCR validations Primers were obtained for a randomly selected set of predicted lincRNA, protein coding genes, antisense transcripts, and intron primers (Supplementary Table 2); all begining with M13 primer sequence. RNA from ESC cells was extracted using Qiagen's RNeasy kit (74106). A one-step cDNA/RT-PCR reaction was run using Invitrogen's one-step RT-PCR kit (12574-018), following the manufacturer's instructions, with the following PCR protocol: 55°C for 30 minutes, 94°C for 2 minutes (94°C for 15 seconds, 64°C for 30 seconds, 68°C for 1 minute -40 cycles) 68°C for 5 minutes, 4°C forever. Samples were separated on a 3% agarose gel, and all bands were cut out and gel extracted using the QIAquick Gel Extraction Kit 28706. 30ng of DNA were mixed with 3.2pmol M13 forward or M13 reverse primer for sequencing. Reads (black bars) originate from sequencing a contiguous RNA molecule. Shown are transcripts from two different genes (blue and red boxes), one with seven exons (blue boxes) and one with three exons (red boxes), which are adjacent in the genome (black line). The grayscale vertical shading in subsequent panels is shown for visual tracking. (c) Spliced reads. Scripture is initiated with a genome sequence and spliced aligned reads (dumbbells) with gaps in their alignment (thin horizontal lines). Scripture uses splice site information to orient splice reads (arrow heads). (d) Connectivity graph construction. Scripture builds a connectivity graph by drawing an edge (curved arrow) between any two bases that are connected by a spliced read gap. (Edges are color coded to relate to the original RNA and eventual transcript). 
(e) Path scoring. Scripture scans the graph with fixed-sized windows and uses coverage from all reads (spliced and non-spliced, bottom track) to score each path for significance (p-values shown as edge labels). (f) Transcript graph construction. (a) A typical Scripture reconstruction on mouse chr9. Top (red) -RNA-Seq read coverage (from both non-spliced and spliced reads); middle (black) -three transcripts reconstructed by Scripture, including exons (black boxes) and orientation (arrow heads); bottom (blue) -RefSeq annotations for this region. All three transcripts are fully reconstructed from 5′ to 3′ ends capturing all internal exons; notice that Scripture correctly reconstructed the overlapping transcripts Pus3 and Hyls1. (b) Fraction of genes fully reconstructed in different expression quantiles (5% increments) in ESC. Each bar represents a 5% quantile of read coverage for genes expressed (mean read coverage is noted in blue). The height of each bar is the fraction of genes in that quantile that were fully reconstructed. For example, ~20% of the transcripts at the bottom 5% of expression levels are fully reconstructed; ~94% of the genes at the top 95% of expression are fully reconstructed. (c) Portion of gene length reconstructed in different expression quantiles in ESC. Shown is a box plot of the portion of each transcript's length that was covered by a Scripture reconstruction in each 5% coverage quantile. The black line in each box is at the median, the rectangle spans the 25% and 75% coverage quantiles; the whiskers depict the annotations in the quantile most and least covered by our reconstruction. For example, at the bottom 5% of expression, Scripture reconstruct a median length of 60% of the full length transcript. Shown is the cumulative distribution of CSF scores (a) and maximal ORF length (b) for protein coding transcripts (black), lincRNAs (blue) and multi-exonic anti-sense transcripts (green). (c) Conservation levels for exons from protein coding transcripts, lincRNAs, multiexonic antisense transcripts and introns. Shown is the cumulative distribution of sequence conservation across 29 mammals for exons from protein-coding exons (black), introns (red), exons from previously annotated lincRNA loci (blue), exons from newly annotated lincRNA transcripts (grey), and exons from multi-exonic antisense transcripts (green). (d) Expression levels of protein coding, lincRNAs and multi-exonic antisense transcripts. Shown is the cumulative distribution of expression levels (RPKM) in ESC for protein coding transcripts (black), transcripts from previously annotated lincRNA loci (blue), transcripts from newly annotated lincRNA loci (gray), and multi-exonic antisense transcripts (green).
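Returning to the "ORF determination" step described in the Methods above (scan for a start codon and count nucleotides to the first in-frame stop), a minimal sketch is given below. The example sequence, the 100-amino-acid helper and the choice to count the stop codon in the length are illustrative conventions, not necessarily those used by Scripture.

```python
STOP_CODONS = {"TAA", "TAG", "TGA"}

def max_orf_length_nt(seq: str) -> int:
    """Longest open reading frame, in nucleotides, from an ATG to the first in-frame stop codon."""
    seq = seq.upper()
    best = 0
    for start in range(len(seq) - 2):
        if seq[start:start + 3] != "ATG":
            continue
        for pos in range(start + 3, len(seq) - 2, 3):
            if seq[pos:pos + 3] in STOP_CODONS:
                best = max(best, pos + 3 - start)   # count the stop codon in the length
                break
    return best

def looks_noncoding(seq: str, aa_cutoff: int = 100) -> bool:
    """Apply a 100-amino-acid ORF cutoff (300 nt of coding sequence), as used in the text."""
    return max_orf_length_nt(seq) < 3 * aa_cutoff

print(max_orf_length_nt("CCATGAAATTTGGGTAAGG"))   # ATG AAA TTT GGG TAA -> 15
print(looks_noncoding("CCATGAAATTTGGGTAAGG"))     # True
```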
2016-05-04T20:20:58.661Z
2010-04-13T00:00:00.000
{ "year": 2010, "sha1": "5272f35538c6b69806971b19d2f9b59640daedf9", "oa_license": "unspecified-oa", "oa_url": "https://europepmc.org/articles/pmc2868100?pdf=render", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "8bf238fff4eb50b11b7763158c9e6ced0203c815", "s2fieldsofstudy": [ "Biology", "Computer Science" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
119208038
pes2o/s2orc
v3-fos-license
Bayesian electron density inference from JET lithium beam emission spectra using Gaussian processes A Bayesian model to infer edge electron density profiles is developed for the JET lithium beam emission spectroscopy system, measuring Li I line radiation using 26 channels with ~1 cm spatial resolution and 10~20 ms temporal resolution. The density profile is modelled using a Gaussian process prior, and the uncertainty of the density profile is calculated by a Markov Chain Monte Carlo (MCMC) scheme. From the spectra measured by the transmission grating spectrometer, the Li line intensities are extracted, and modelled as a function of the plasma density by a multi-state model which describes the relevant processes between neutral lithium beam atoms and plasma particles. The spectral model fully takes into account interference filter and instrument effects, that are separately estimated, again using Gaussian processes. The line intensities are inferred based on a spectral model consistent with the measured spectra within their uncertainties, which includes photon statistics and electronic noise. Our newly developed method to infer JET edge electron density profiles has the following advantages in comparison to the conventional method: i) providing full posterior distributions of edge density profiles, including their associated uncertainties, ii) the available radial range for density profiles is increased to the full observation range (~26 cm), iii) an assumption of monotonic electron density profile is not necessary, iv) the absolute calibration factor of the diagnostic system is automatically estimated overcoming the limitation of the conventional technique and allowing us to infer the electron density profiles for all pulses without preprocessing the data or an additional boundary condition, and v) since the full spectrum is modelled, the procedure of modulating the beam to measure the background signal is only necessary for the case of overlapping of the Li line with impurity lines. Introduction Edge electron density profiles have been recognised as one of the key physical quantities in magnetic confinement devices for controlling and understanding edge plasma phenomena, such as edge localised modes (ELMs) [1], L-H transitions [2] and turbulence transport [3]. Lithium beam emission spectroscopy (Li-BES) systems, capable of providing the profiles of edge electron density, have thus been widely used at various devices (TEXTOR [4,5], ASDEX Upgrade [6,7], W7-AS [6], and JET [8,9,10]). Li-BES system is a type of beam diagnostics that injects neutral lithium atoms into the plasma and measures Li I (2p-2s) line radiation caused by spontaneous emission processes from the first excited state (1s2 2p1) to the ground state (1s2 2s1) of the neutral lithium beam atoms. The Li I line intensity can be expressed as a function of the plasma density by a multi-state model [11] which describes the relevant processes between lithium atoms and plasma particles. The profiles of edge electron density can be inferred from the measured profiles of the Li I line intensity. The integral expression of the multi-state model which calculates a profile of electron density [4] from the measured Li-BES data has been used conventionally at many devices [5,6,8,9]. This method, however, has a limitation that profiles of absolute electron density (based on the absolute calibration factor) can be obtained only if either a singular point is found or an additional boundary condition is provided in the data. 
Consequently, this method involves some weaknesses: i) preprocessing of the data is usually required to find the singular point, ii) the singular point cannot be found accurately, iii) a small change of the location of the singular point can cause a large difference of the density profile and iv) an additional boundary condition, which is required if the singular point does not exist, cannot be properly fixed because of the difficulty of obtaining all the populations of the different states of the neutral Li beam atoms. Another method utilising Bayesian probability theory to analyse the Li-BES data was reported at ASDEX Upgrade [7], using non-spectral APD (Avalanche Photo Diode) detectors and made impressive progress. Our method fits the full Li beam emission spectrum and uses Gaussian processes to model and regularise the electron density profiles, rather than using the non-spectral data and the combination of splines with a regularising weak monotonicity constraint used in [7]. Our proposed method requires neither preprocessing of the data, inner boundary information nor a profile monotonicity regulariser. The method comprises two parts. The first part is obtaining the profile of the Li I line intensity. The JET Li beam emission spectrum is here modelled as a single Li I emission line and a background signal, convolved with an instrument function and filtered through an interference filter. The interference filter and instrument function need to be separately estimated and the noise on the spectra is modelled by an electronic offset as well as photon statistics and electronic noise. We infer interference filter and instrument functions based on separate measurements (which are required only once in a while as they do not vary much shot-to-shot base) using Gaussian processes. We use Gaussian processes because we do not know the parametric form, i.e., analytical expression of these functions. Having the interference filter and instrument functions, we then infer intensities of Li I line radiation, background and the electronic offset simultaneously. This provides the advantage of removing the necessity of beam modulations to obtain separate background measurements within a plasma shot. Furthermore, as the fitted background intensity is likely to be dominated by Bremsstrahlung radiation, our method opens a possibility to obtain the effective charge Z eff . The second part of our method infers the profile of edge electron density based on the intensity profile of Li I line radiation using the multi-state model. During this second part, the absolute calibration factor of the system is inferred directly from the measurements, removing the need for the singular-point method mentioned above. All modelling and analyses are performed using a Bayesian scheme within the Minerva framework [12]. Sec. 2 describes the models we use: the multi-state model describing how to obtain electron density information from the Li I line radiation intensity and the spectral model of the raw data, forming together the forward model of the JET Li-BES system. Sec. 3 explains how the interference filter and instrument functions are inferred and the procedure for obtaining the intensity of the Li I line radiation and electron density profile. Conclusions are presented in Sec. 4. Multi-state model Li-BES system measures the intensities of the Li I (2p-2s) line radiation from the neutral lithium beam penetrating into the plasma. 
The Li I line radiation is produced by spontaneous emission from the first excited state (1s² 2p¹) to the ground state (1s² 2s¹) of the neutral lithium beam atoms. The Li I line intensity is a function of the population of the first excited state, which can be expressed in terms of the plasma density via a multi-state model. The change of the relative populations in time according to the multi-state (collisional-radiative) model [4] is

dN_i(t)/dt = Σ_s n_s Σ_{j=1}^{M_Li} a^s_ij(v^s_r) N_j(t) + Σ_{j=1}^{M_Li} b_ij N_j(t),   (1)

which describes population and de-population of the states of the neutral lithium atoms caused by processes between lithium beam atoms and plasma particles in addition to spontaneous emission. N_i is the relative population of the i-th state with respect to the total number of neutral lithium beam atoms at the position where the lithium beam enters the vacuum vessel. For instance, N_1 = 0.7 and N_2 = 0.1 mean that 70% and 10% of the initial neutral lithium beam atoms are in the ground and first excited states, respectively. M_Li is the number of states of the neutral lithium atoms, and we consider nine different states in this paper; thus, M_Li = 9. n_s is the plasma density of species s, where s = e and s = p denote electrons and protons, respectively. a^s_ij (i ≠ j) > 0 is the net population rate coefficient due to plasma species s from the j-th state to the i-th state, increasing the relative population of the i-th state, while a^s_ii < 0 is the net de-population rate coefficient including excitation, de-excitation and ionisation effects leaving the i-th state. All population and de-population rate coefficients caused by plasma species s depend on the relative speed between the neutral lithium beam atoms and plasma species s, which is denoted as v^s_r. b_ij is the spontaneous emission rate coefficient, or Einstein coefficient. It becomes easier to solve Eq. (1) if it is expressed in terms of the beam coordinate z: d/dt = d/dz · dz/dt. Realising that dz/dt is the velocity of the neutral lithium beam atoms v_Li, we obtain

v_Li dN_i(z)/dz = Σ_s n_s(z) Σ_{j=1}^{M_Li} a^s_ij(v^s_r(z)) N_j(z) + Σ_{j=1}^{M_Li} b_ij N_j(z).   (2)

Here, we assume that v_Li is constant over the penetration range of the beam into the plasma. The relative speed v^s_r(z) is not directly measured but can be approximated using other quantities. The relative speed between the neutral lithium beam atoms and electrons, v^e_r(z), is dominated by the electron temperature T_e, since the typical (thermal) speed of electrons is much faster than that of the neutral lithium beam atoms. The relative speed between the neutral lithium beam atoms and protons, v^p_r(z), can be approximated by the lithium beam velocity in the case of JET Li-BES, since the lithium beam energy is ∼55 keV, which is much higher than the ion temperature. Other species are not considered in this work. Thus, the multi-state model becomes

dN_i(z)/dz = (1/v_Li) [ Σ_{j=1}^{M_Li} ( n_e(z) a^e_ij(T_e(z)) + n_p(z) a^p_ij(v_Li) ) N_j(z) + Σ_{j=1}^{M_Li} b_ij N_j(z) ],   (3)

with the initial condition

N_1(z = 0) = 1,  N_i(z = 0) = 0 for i ≠ 1,   (4)

where we assume that all the lithium beam atoms are neutral and in the ground state (i = 1) at the initial position where the beam enters the tokamak vacuum vessel, corresponding to z = 0. The rate coefficients have been obtained from the Atomic Data and Analysis Structure (ADAS) [13] and the reference [14]. Fig. 1 shows an example of steady-state relative populations of the first excited state N_2 as a function of electron density and temperature for a beam energy of 50 keV. Note that this multi-state model does not consider the population of ionised lithium atoms, which leave the beam due to the strong magnetic field of JET.
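As a concrete illustration of how Eqs. (3)-(4) are solved numerically along the beam, the following minimal Python sketch integrates the relative populations with a classical Runge-Kutta (RK4) scheme, as is done later for the density inference. The rate-coefficient matrices a_e, a_p and the Einstein-coefficient matrix b are hypothetical placeholders here; in the actual analysis they are built from ADAS data and evaluated at the local T_e and at the beam energy.

import numpy as np

M_LI = 9  # number of neutral Li states considered in the paper

def dN_dz(z, N, n_e, T_e, a_e, a_p, b, v_li):
    # Beam-coordinate form of the multi-state model, Eq. (3):
    # dN_i/dz = (1/v_Li) [ sum_j (n_e a^e_ij + n_p a^p_ij) N_j + sum_j b_ij N_j ],
    # with quasi-neutrality n_p = n_e assumed, as in the text.
    A = n_e(z) * (a_e(T_e(z)) + a_p) + b      # (M_LI x M_LI) total rate matrix
    return A @ N / v_li

def rk4_populations(z_grid, n_e, T_e, a_e, a_p, b, v_li):
    # Integrate N(z) with RK4 from the initial condition of Eq. (4): N_1(0) = 1.
    N = np.zeros(M_LI)
    N[0] = 1.0
    profile = [N.copy()]
    for z0, z1 in zip(z_grid[:-1], z_grid[1:]):
        h = z1 - z0
        k1 = dN_dz(z0, N, n_e, T_e, a_e, a_p, b, v_li)
        k2 = dN_dz(z0 + h / 2, N + h / 2 * k1, n_e, T_e, a_e, a_p, b, v_li)
        k3 = dN_dz(z0 + h / 2, N + h / 2 * k2, n_e, T_e, a_e, a_p, b, v_li)
        k4 = dN_dz(z1, N + h * k3, n_e, T_e, a_e, a_p, b, v_li)
        N = N + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        profile.append(N.copy())
    return np.array(profile)  # column 1 is the N_2(z) profile entering the Li I prediction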
Therefore, electron loss processes such as ionisation and charge-exchange simply attenuate the total population of the neutral lithium beam atoms, i.e., Spectral model The JET Li-BES system measures spectra, including the Doppler shifted Li I line radiation from the 26 different spatial positions, covering a few nanometres in wavelength using the transmission grating spectrometer (dual entrance slit with interference filter for preselection of passband, details in [10]). A charge coupled device (CCD) camera detects the photons for integration time of approximately 10 ms. More detailed description of the JET Li-BES system can be found elsewhere [9,10]. A spectrum from each spatial position contains four types of signals (in addition to noise): i) Li I line, ii) a background dominated by Bremsstrahlung radiation, iii) an electronic offset and iv) impurity lines. Doppler broadening of the Li I line radiation is negligible since the lithium beam is a mono-energetic beam (∼ 0.02 nm broadening occurs for the beam temperature of ∼ 10 eV, and the dispersion of the CCD pixel is ∼ 0.04nm/pixel), therefore we treat the Li I line as a delta function in the spectrum. A measured spectrum S (λ) from each spatial position can be expressed as where A is the intensity of Li I line radiation, B the background level and Z the electronic offset, which are all inferred together with their uncertainties through Bayesian inference. The instrument function C (λ) and interference filter function F (λ) are inferred through a Bayesian scheme using Gaussian processes from separate measurements [15]. Here, λ is the wavelength corresponding to a CCD pixel index [9]. Gaussian processes are probabilistic functions defined by a multivariate Gaussian distribution whose mean and covariance function specifies the mean and the covariance between any two points in the domain [16]. This constrains the variability of the function without any analytic specification, i.e., in a non-parametric way. Gaussian processes were introduced in the fusion community in [17] and are implemented as a standard representation of profile quantities in the Minerva framework [12]. It has been used for current tomography [17,18], soft x-ray tomography [19], and representing profile quantities [17,20,21]. The covariance function of a Gaussian process is defined as a parametrised function whose parameters, so called hyperparameters, determine aspects of the function such as overall scale and length scale. The hyperparameters are selected based on the measurements by maximising the evidence through Bayesian model selection. A detailed description of the Bayesian inference and modelling of the JET Li-BES data with Gaussian process can be found elsewhere [15]. Forward model Our goal is to find all possible profiles of the edge electron density n e consistent with the spectral observations. For this, we consider the forward model as shown in Fig. 2. The edge electron density profile n e is modelled as a set of values at given positions, with a prior given by a Gaussian process with given overall scale and scale length hyperparameters, discussed in more detail in Sec. 3.3. Edge n e profiles are mapped onto flux surface coordinates ψ calculated by the EFIT equilibrium code. Electron temperature T e , required for the rate coefficients a s ij , is measured by the High Resolution Thomson Scattering (HRTS) system [22] and mapped onto the same flux surface coordinates. 
This will allow us to calculate a relative population of the first excited state of the neutral lithium beam atoms, i.e., N 2 , based on the multi-state model Eq. (3) with a quasi-neutrality condition, i.e., n e = n p . Here, we assume that impurity densities are low enough to be ignored §. Once we have N 2 , we can predict the Li I line radiation intensity, A in Eq. (5), where the detailed procedure is provided in Sec. 2.3.1. This model provides a prediction of the measured Li I line radiation A * , given the free parameters of an electron density n e and an absolute calibration factor α, by where A (n e , α) is a model prediction with specific values of the free parameters, n e and α. σ is the uncertainty associated with the observation A * . This is our basic form of the forward model in this paper and is the likelihood in Bayes formula (Eq. (16) [12]. The free parameters are shown with red circles and observations as a blue circle. The rectangular boxes represent operations or constants. The electron density n e and temperature T e are mapped onto the EFIT estimated flux surfaces. The relative populations of the neutral lithium beam atoms are calculated from the multi-state model, and profiles of the Li I line radiation intensities are predicted given edge n e profiles and an absolute calibration factor, alpha (α). All the possible edge n e profiles whose predicted Li I line intensity profile agree with the observation (blue circle) within their uncertainties are found through a MCMC scheme. of the relative population of the first excited state due to the spontaneous emission as the beam travels a distance of ∆z denoted as |∆N 2 | is where ∆z can be considered as the observation length. Since one spontaneous emission produces one photon, the total number of emitted photons N em ph corresponding to Li I line radiation over the integration time ∆t with the lithium beam current I Li is The emitted photons falling into the solid angle of the collection optics pass through various mirrors, lens and grism before being detected by the CCD camera. We denote all these effects of optics including the solid angle as an effective transmittance of the system, T . Then, the number of photons detected by (or arrived to) the CCD camera N det ph (z) is Also, we define Q as the count per photon of the CCD camera. Q describes the number of counts produced by the CCD camera when one photon arrives at the CCD detector. Then, the CCD output count due to the Li I line radiation N Li CCD which we measure is and this is, by definition, equal to the Li I line intensity A multiplied by the spectrally integrated signal of the instrument function C (λ) and the interference filter function F (λ) in Eq. (5). We finally obtain where α is the absolute calibration factor which is taken as a free parameter in our forward model in addition to the n e profile as shown in Fig. 2. Note that we have included the magnitude of the relative calibration factors in the instrument function C (λ). Uncertainties The main measurement error is due to the Poisson distributed photon statistics. On top of that, there is an additional electronic noise which is measured before a pulse starts and is here taken as a Gaussian distribution. To be able to determine a level of photon noise, it is necessary to find the value of Q in Eq. (10) so that the measured N Li CCD can be converted to the detected number of photons N det ph which is the quantity following a Poisson distribution. 
With the aim of determining the value of Q, we shine a uniform-intensity light-emitting diode (LED) onto the CCD camera while varying the intensity of the LED, with all other conditions fixed as if these were actual Li-BES measurements during plasma discharges. The arithmetic mean of the CCD output counts, N̄_CCD, and its associated variance, σ²_CCD, are

N̄_CCD = Q N̄_ph + N̄^DC_CCD + Z̄_CCD,   (12)

σ²_CCD = Q² σ²_ph + σ²_e,   (13)

where N̄_ph is the mean number of photons detected by (arriving at) the CCD camera and N̄^DC_CCD the mean CCD output counts due to the dark current of the CCD. Here, Z̄_CCD is the mean CCD offset. σ²_ph and σ²_e are the variances due to photon statistics and electronic noise, respectively. Note that we treat fluctuations in the dark current as part of the electronic noise because they exist in the absence of detected photons. With N̄_ph = (N̄_CCD − N̄^DC_CCD − Z̄_CCD)/Q from Eq. (12) and N̄_ph = σ²_ph owing to the Poisson distribution, recasting Eq. (13) we get

σ²_CCD = Q (N̄_CCD − N̄^DC_CCD − Z̄_CCD) + σ²_e.   (14)

Notice that N̄_CCD and σ²_CCD can be directly measured with the LED on, and by varying the intensity of the LED we can determine the value of Q. Fig. 3(a) shows a graph of the measured σ²_CCD vs. N̄_CCD, using a total of 4,175 (167 pixels from 25 channels) independent data points, with the variances and means estimated from 332 independent time points. The slope is the value of Q we seek, and we find Q = 1.247 ± 0.005. To find the electronic noise level σ²_e, we switch on all the electronics and measure the fluctuations in N_CCD without any photons reaching the CCD, i.e., N_ph = 0. Here, N_CCD and N_ph are individual measurements rather than their means. Fig. 3(b) shows such measurements for all 26 spatial channels (different colours). Fig. 3(c) is the histogram of N_CCD. The variance is estimated to be 160 with a mean of 4342. Therefore, σ²_e ≈ 160. As can be seen from the histogram, the dark current fluctuations are approximately Gaussian shaped. Furthermore, as we find that the mean value of the offset, i.e., 4342, appears consistently for all channels, we always subtract this offset value from the measured signal before performing any analyses on the data. Any residual offset is captured by Z in Eq. (5). When the number of counts is large, a Poisson distribution can be approximated by a Gaussian distribution. Since the detected number of photons N^det_ph is larger than 100, we take the photon statistics to follow a Gaussian distribution as well. Therefore, the variance σ² in Eq. (6) is

σ² = σ²_ph + σ²_e.   (15)

Bayesian inference
For our case, we have a spectrum S(λ) described by three free parameters: the Li I line radiation intensity A, the background B dominated by Bremsstrahlung radiation, and the electronic offset Z. The instrument function C(λ) and the interference filter F(λ) in Eq. (5) are inferred separately using Gaussian processes. In the Bayesian scheme, we calculate the probability distribution of a free parameter W given observation D, known as the posterior p(W|D). The posterior is given by Bayes formula

p(W|D) = p(D|W) p(W) / p(D),   (16)

where p(D|W), p(W) and p(D) are the likelihood, prior and evidence, respectively. The likelihood is a model for the observations given the free parameters, as described in Eq. (6). The prior quantifies our assumptions about the free parameters before we have observations. The evidence is typically used for model selection and is irrelevant if one is only interested in estimating the free parameters. A detailed description of Bayesian inference can be found elsewhere [23]. To minimise possible confusion, we define the notation used in this section in Table 1.
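Returning briefly to the noise model of Sec. 2.3.2, the photon-transfer estimate of Q and of the electronic noise can be sketched as follows. The arrays led_blocks (repeated frames at each LED intensity) and dark_frames are hypothetical stand-ins for the calibration data; offsets and dark current are assumed to have been subtracted already, so the fit follows Eq. (14).

import numpy as np

def estimate_Q(led_blocks):
    # Photon-transfer estimate following Eq. (14):
    # sigma^2_CCD = Q * (mean counts above offset/dark) + sigma^2_e,
    # so a straight-line fit of per-pixel variance against per-pixel mean,
    # collected over several LED intensities, gives Q as the slope.
    means, variances = [], []
    for frames in led_blocks:                 # frames: array of shape (n_frames, n_pixels)
        means.append(frames.mean(axis=0))
        variances.append(frames.var(axis=0, ddof=1))
    x = np.concatenate(means)
    y = np.concatenate(variances)
    Q, intercept = np.polyfit(x, y, 1)        # slope -> Q, intercept -> sigma^2_e
    return Q, intercept

def electronic_noise_variance(dark_frames):
    # With no photons on the CCD the measured fluctuations are purely electronic,
    # so the frame-to-frame variance estimates sigma^2_e directly.
    return dark_frames.var(ddof=1)

The intercept of the straight-line fit can then be cross-checked against the dark-frame estimate of the electronic noise.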
As the JET Li-BES system obtains spectra from 26 different spatial positions, the channel index corresponds to the spatial position and the pixel index to the wavelength. The predicted signal at the i-th channel and j-th pixel is denoted as S_ij, and D_ij represents the observed signal. Using these notations, we will find the most probable prediction of the line intensity, background and offset at the i-th channel by calculating the posterior p(A_i, B_i, Z_i|D_i), where the predicted signal at the i-th channel and j-th pixel is

S_ij = F_ij (A_i C_ij + B_i) + Z_i.   (17)

In the following subsections, we describe how to infer the two unknown functions, the interference filter and instrument functions (F_i, C_i), and the free parameters (A_i, B_i, Z_i).

Interference filter and instrument functions
To infer the i-th channel interference filter function F_i, we illuminate the fibres with uniform LED light. Since there is no Li I line radiation (A_i = 0) and the electronic offset (Z_i) is negligible, as shown in Fig. 4, the predicted signal is

S_i = F_i B_i,   (18)

where B_i is the uniform LED light intensity. According to Bayes formula, the posterior is

p(F_i|D_i) ∝ p(D_i|F_i) p(F_i),   (19)

where the likelihood is the multivariate Gaussian

p(D_i|F_i) = (2π)^(−N_pixel/2) |Σ̌|^(−1/2) exp( −(D_i − S_i)ᵀ Σ̌⁻¹ (D_i − S_i) / 2 ).   (20)

Here, S_i = F_i B_i as in Eq. (18), and N_pixel is the total number of CCD pixels for the i-th channel. Σ̌ is an N_pixel × N_pixel square diagonal matrix containing the variances of the measured signal at each pixel of the CCD camera,

Σ̌ = diag(σ²_1, ..., σ²_{N_pixel}),   (21)

where σ²_j = σ²_ph,j + σ²_e,j at the j-th pixel, as Eq. (15) is used in Eq. (6). σ²_ph,j and σ²_e,j can be estimated as described in Sec. 2.3.2. Note that Σ̌ is different for different channels. The prior p(F_i) in Eq. (19) needs to be specified. Since we do not know the parametric form, i.e., an analytical expression, describing the interference filter of the i-th channel, F_i, as a function of wavelength (pixel index), we use a Gaussian process prior for F_i:

p(F_i) = N(0, Ǩ).   (22)

Here, 0 is a column vector whose entries are all zeros. The N_pixel × N_pixel matrix Ǩ, which varies channel by channel, is defined by a squared exponential covariance function with the value at the j-th row and k-th column of

Ǩ_jk = σ²_f exp( −|x_j − x_k|² / (2ℓ²) ) + σ²_n δ_jk,   (23)

where δ_jk is the Kronecker delta. x is a vector of the CCD pixel indices, thus |x_j − x_k| is the difference in pixel index between the j-th and k-th pixels. σ²_f is the signal variance and ℓ the scale length. σ²_n is a small number added for the numerical stability of the model. The hyperparameters σ²_f and ℓ govern the characteristics of the Gaussian process Eq. (22), and we find their values by maximising the evidence p(D_i). A more detailed description can be found elsewhere [15]. Fig. 4(b) shows the MAP estimates of the filter functions for all channels of the JET Li-BES system. Note that we normalise all the filter functions to have a maximum value of one, as what we need is the shape of the filter functions in the wavelength (pixel index) domain. This does not create any problems because the relative sensitivities among the channels are captured by the instrument functions as relative calibration factors, while α in Eq. (11) takes care of the absolute calibration factor. To infer the i-th channel instrument function C_i, we use beam-into-gas shots. During the beam-into-gas shots, neutral lithium beam atoms are injected into the tokamak filled with a neutral deuterium gas whose pressure is less than 10⁻⁴ mbar. Because there is no plasma, there is a negligible background signal caused by Bremsstrahlung (B_i = 0). For this case, the posterior is p(C_i|D_i), with the predicted signal

S_ij = F_ij C_ij + Z_i,   (24)

where the interference filter function F_i is set to the MAP estimate of p(F_i|D_i) in Eq. (19).
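The Gaussian-process ingredients used above for F_i (and below for C_i), namely a zero-mean prior with the squared-exponential covariance of Eq. (23) and hyperparameters selected by evidence maximisation, can be sketched in a simplified regression setting as follows. Here the data y are treated directly as noisy samples of the unknown function on the pixel grid x, which is a simplification of the full spectral model; noise_var holds the per-pixel variances of Eq. (21).

import numpy as np

def sq_exp_K(x, sigma_f, ell, sigma_n=1e-6):
    # Squared-exponential covariance, Eq. (23):
    # K_jk = sigma_f^2 exp(-|x_j - x_k|^2 / (2 ell^2)) + sigma_n^2 delta_jk
    d = x[:, None] - x[None, :]
    return sigma_f**2 * np.exp(-0.5 * (d / ell) ** 2) + sigma_n**2 * np.eye(len(x))

def log_evidence(x, y, noise_var, sigma_f, ell):
    # Log marginal likelihood of the data under the GP prior plus Gaussian noise;
    # this is the quantity maximised to select (sigma_f, ell).
    K = sq_exp_K(x, sigma_f, ell) + np.diag(noise_var)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return -0.5 * y @ alpha - np.log(np.diag(L)).sum() - 0.5 * len(x) * np.log(2.0 * np.pi)

def gp_posterior_mean(x, y, noise_var, sigma_f, ell):
    # Posterior mean of the function values at the observed pixels
    # (equal to the MAP estimate, since the posterior is Gaussian).
    K = sq_exp_K(x, sigma_f, ell)
    return K @ np.linalg.solve(K + np.diag(noise_var), y)

def select_hyperparameters(x, y, noise_var, sigma_f_grid, ell_grid):
    # Simple grid search over the evidence; gradient-based optimisation would also work.
    scores = [(log_evidence(x, y, noise_var, sf, l), sf, l)
              for sf in sigma_f_grid for l in ell_grid]
    _, sf_best, ell_best = max(scores)
    return sf_best, ell_best

For the filter functions, the inferred curves are subsequently normalised to a maximum of one, as described above, since only their shape is needed.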
Due to the small deuterium pressure inside the tokamak during the beaminto-gas experiments, a strong beam attenuation is not expected. According to [9], there is no indication of any beam attenuation, so the emitted photons N em ph should not vary along the beam. The variation of the observed intensities must therefore be due to differences in T , Q, and ∆z in Eq. (11). Assuming the Li I line emission is constant over the beam, C i will give us these relative calibration factors. Since the electronic offset is not negligible for some channels as shown in Fig. 5, we calculate posterior of both instrument function and offset p (C i , Z i |D i ). The likelihood p (D i |C i , Z i ) is taken as the Gaussian with the mean given by Eq. (24). We let the prior p (C i ) to have the form of Eq. (22) with the covariance function Eq. (23). Again, the hyperparameters are set such that the evidence is maximised. The prior p (Z i ) is a normal distribution with a zero mean and a very large variance (10 6 ). Fig. 5(a) compares the observation and instrument function (MAP) for channel 18. Fig. 5(b) shows the instrument functions (MAP) for all channels, which also capture the relative calibration factors. Line intensities We inferred F i and C i from Sec. 3.1 and are left with three free parameters A i , B i and Z i in Eq. (17). The posterior p As we have three independent free parameters, the prior p (A i , B i , Z i ) is where all three priors are Gaussian distributions with a zero mean and very large variance (10 6 Edge electron density profiles To infer the electron density profile, we take the MAP estimate of the Li I line intensities with their variances (A ± σ A ). The posterior is given by p (n e , α|A, σ A ) ∝ p (A|σ A , n e , α) p (n e , α) , where the absolute calibration factor α and the edge electron density profile n e are the free parameters. The likelihood p (A|σ A , n e , α) is given by where N ch = 26 is the total number of the channels.Σ A is the N ch × N ch diagonal matrix with the entry of (σ i A ) 2 at the i th row and i th column. We calculate N 2 using the Runge-Kutta method (RK4) from the model Eq. (3) with the initial condition Eq. (4). We give n e and α independent priors, where p (α) is uniform between 1 and 1000. For p (n e ), based on a large database of existing profiles, we can estimate the hyperparameters for the Gaussian process prior. From this we set the hyperparameters σ f and for the covariance matrixǨ to be 20.0 and 0.025, respectively. We note that these values for the hyperparameters are not rigorously obtained by maximising the evidence due to the requirement of too much computation time. Nevertheless, these values give good fit to the data. A possible improvement would be to marginalise over these hyperparameters as in [17]. The posterior of n e and α is explored by a Markov Chain Monte Carlo (MCMC) sampling scheme. Fig. 7(a) and (c) show the MAP estimate of the edge electron density profiles (red) with their associated uncertainties, which cover 95% of the samples from posterior, i.e., the shortest 95% interval. For the sake of comparison, n e profiles from the HRTS system (blue) and results from the conventional analysis of the JET Li-BES system (yellow) [9,10] are also shown in the same figures. Fig. 7(b) and (d) show the MAP estimates of the Li I line intensities from the previous section (blue), i.e., A in Eq. (26), and prediction (red), i.e., αN 2 , for Fig. 7(a) and (c), respectively. 
It is clear from these results that we have inferred a proper absolute calibration factor α even though we have not used the singular point method [4]. The range of the density profile inference has been extended to the full observation range which was not possible with the conventional data analysis method. We stress that we have not used a separate background measurement via Li neutral beam modulations because our method is capable of providing intensities of Li I line and background radiations simultaneously. Finally, we also have not made an assumption of monotonic profile, either. In some cases, we observe a difference between the profiles inferred from the Li-BES and HRTS systems (Fig. 8). Calibration of the spatial position for the Li-BES may be questioned. However, this calibration is performed with relatively high reliability [9]. We do suspect that it may have been caused by the EFIT reconstruction. The Li-BES system injects neutral lithium beam atoms vertically from the top of the JET at major radius R = 3.25 m and covering the vertical position Z = 1.67 ∼ 1.40 m approximately; whereas the HRTS system observes electron density along the laser penetrating horizontally at the midplane (R = 2.9 ∼ 3.9 m and Z = 0.06 ∼ 0.11 m). The flux coordinate mapping provided through EFIT may well be inaccurate when comparing the midplane with the top of the vessel. We leave further investigation of this issue to future work. In Fig. 7(a) and (c) and Fig. 8(a) and (c) we can see that the uncertainties of the electron densities in the inner region is larger than those of the outer region. This result cannot be explained solely by the number of detected photons as attested by Fig. 7(b) and (d) and Fig. 8(b) and (d). This trend of larger uncertainties in the inner region is also observed in ASDEX Upgrade [7,24]. Here, we provide two qualitative reasons to explain this trend. As shown in Fig. 1, the relative population of the first excited state N 2 becomes less sensitive to the change of n e as it increases. Typically, n e is larger in the inner region than the outer region, therefore the similar level of uncertainty in N 2 corresponds to a larger uncertainty of n e in the inner region. In addition, the neutral Li beam attenuation as it penetrates into the plasmas can cause this trend of increasing uncertainties: consider two separate measurements of the absolute number of the first excited state which both give the same value of 200±20 where the total number of neutral beam atoms is 500 in one case and 1000 in another case. Then, the relative population N 2 is (200 ± 20)/500 = 0.4 ± 0.04 for the former case and (200 ± 20)/1000 = 0.2 ± 0.02 for the latter case. It is evident that the former case has the larger uncertainty than the latter case even if the absolute numbers of the first excited state are the same for both cases. Therefore, the beam attenuation, i.e., decrease of the total number of beam atoms, can cause the larger uncertainty of n e in the inner region [24]. Finally, we note that there can be additional effects from the uncertainties of the absolute calibration factor [4,8]. Conclusion In this paper, we have presented a Bayesian model to obtain edge electron density profiles based on the measured JET Li-BES spectra. The model has been implemented in the Minerva Bayesian modelling framework. Our scheme includes uncertainties due to photon statistics and electric noise estimated from the measured data obtained with the transmission grating spectrometer. 
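The MCMC exploration referred to above can be illustrated with a generic random-walk Metropolis-Hastings sketch. The paper does not state which sampler the Minerva framework uses, so this is a stand-in rather than the actual implementation; log_posterior is assumed to combine the Gaussian likelihood over the 26 Li I intensities (via the multi-state model) with the Gaussian-process prior on n_e and the uniform prior on α, with all free parameters packed into a single vector theta.

import numpy as np

def metropolis_hastings(log_posterior, theta0, step, n_samples, seed=0):
    # Random-walk Metropolis-Hastings: propose theta' = theta + step * N(0, I),
    # accept with probability min(1, p(theta'|D) / p(theta|D)).
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    logp = log_posterior(theta)
    samples = np.empty((n_samples, theta.size))
    n_accepted = 0
    for k in range(n_samples):
        proposal = theta + step * rng.standard_normal(theta.size)
        logp_prop = log_posterior(proposal)
        if np.log(rng.uniform()) < logp_prop - logp:
            theta, logp = proposal, logp_prop
            n_accepted += 1
        samples[k] = theta
    return samples, n_accepted / n_samples

# After discarding a burn-in, the sample cloud gives both the MAP region and the
# credible intervals; np.percentile(samples, [2.5, 97.5], axis=0) is a simple
# equal-tailed approximation to the 95% intervals quoted for the density profiles.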
The instrument effects such as the interference filter function and instrument function are inferred from separate measurements using Gaussian processes whose hyperparameters are selected by evidence maximisation. Also the electron density profiles are modelled using Gaussian processes, whose hyperparameters are determined from the JET historical electron density profiles. Inference is done through maximisation of the posterior (MAP) and Markov Chain Monte Carlo Method (MCMC) sampling. The Li I line and background intensities are simultaneously inferred as well as their associated uncertainties, thereby eliminating extra effort of measuring background intensity via Li neutral beam modulations.
Pharmaceutical Coating and Its Different Approaches, a Review Coating the solid dosage form, such as tablets, is considered common, but it is a critical process that provides different characteristics to tablets. It increases the value of solid dosage form, administered orally, and thus meets diverse clinical requirements. As tablet coating is a process driven by technology, it relies on advancements in coating techniques, equipment used for the coating process, evaluation of coated tablets, and coated material used. Although different techniques were employed for coating purposes, which may be based on the use of solvents or solvent-free, each of the methods used has its advantages and disadvantages, and the techniques need continued modification too. During the process of film coating, several inter-and intra-batch uniformity of coated material on the tablets is considered a critical point that ensures the worth of the final product, particularly for those drugs that contain an active medicament in the coating layer. Meanwhile, computational modeling and experimental evaluation were actively used to predict the impact of the operational parameters on the final product quality and optimize the variables in tablet coating. The efforts produced by computational modeling or experimental evaluation not only save cost in optimizing the coating process but also saves time. This review delivers a brief review on film coating in solid dosage form, which includes tablets, with a focus on the polymers and processes used in the coating. At the end, some pharmaceutical applications were also discussed. Introduction Around 1500 BCE, the first reference to the term pill as a solid dosage form came into existence. The first source of pills in ancient Egypt was recorded to be written on papyruses. The pills were made from bread dough, grease, and honey. Pills were made of simple hand-using ingredients like spices or plant powders. In ancient Greece, medicines were termed katapotia [1]. Roman scholars termed Pills as pilula (little ball). In medieval times, pills were coated using slippery substances obtained from plants. By 1800, gelatin capsules were invented. William Brockedon made a machine that can formulate lozenges and pills with the help of pressure on suitable dies [1]. This device compresses the powder without using adhesive into tablets. Professor Brockedon, 1844 in England, developed the first compressed tablets. These tablets were hard, and no reference was found concerning their disintegration time and solubility. In 1871, Messrs Newbery had purchased Professor Brockedon's business. The Brockedon method of tablet compression was used by Philadelphian Jacob Dunton to formulate tablets of different formulations, including quinine [2]. In 1872, two brothers, Mr. Henry Bower and John Wyeth built an advanced machine that was not only more advanced than the previous one but also reduced the cost of producing tablets. In 1878, Dr. Robert R. Fuller from New York, for the first time, suggested the concept of loading these molds with medicated milk sugar. Mr. Fraser, in 1883, started to fabricate molded tablets in a completely new concept that we use today. From the start of the 1940s to the 1990s, synthetic and semisynthetic polymers were used for enteric coating. Dextroamphetamine sulfate was the first manufactured by Kline, Smith, and French as sustained release products using the Spansule method [2,3]. 
Film coatings and film coating compositions based on polyvinyl alcohol EP1208143B1 3 Film coatings and film coating compositions based on dextrin US6348090B1 Definition and Scope of Pharmaceutical Coating The coating is defined as a procedure in which the desired dosage form may be a granule or tablet coated with an outer dry film to obtain specific objectives such as masking taste or protecting against environmental conditions. The coating material may be composed of coloring materials, flavorants, gums, resins, waxes, plasticizers, and a polyhydric alcohol. In the modern era, polymers and polysaccharides were principally used as coating materials along with other excipients like plasticizers and pigments. Many precautions must be considered during the coating process to make the coating durable and steady. According to the International Council for Harmonisation (ICH) guidelines, organic solvents are avoided in the formulation of pharmaceutical dosage forms due to their safety issues [21]. Tablets that are susceptible to degradation by moisture or oxidation must be coated by using the FC technique. This technique could increase its shelf life, mask its bitter taste, and make a smoother covering, which makes swallowing easier. Chitosan and other mucoadhesive polymers were also used for coating tablets to adhere these tablets to mucous membranes and achieve sustained drug release in localized areas [22]. In recent times, coating of the dosage form by using biopolymers has been extensively studied [23]. Active pharmaceutical ingredients (APIs), which are sensitive to light, can be protected by coating with opacifying agents. Similarly, enteric-coated tablets reach the intestine after an extended time and possibly help maintain the efficacy level of acid labile APIs [24]. Objective of Coating Common forms of tablet coating are FC and SC. The coating helps maintain the physical and chemical integrity of the active ingredient; meanwhile, it also controls the drug release as it is controlled or continues to be released at a specific target site. Additionally, the coating was used to enhance the elegance of the pharmaceuticals, and the sophistication of appearance was enhanced by printing or making them with attractive colors [25]. Benefits of Coating Coating provides stability to the tablets in handling and prevents them from sticking together. The coating also improves the mechanical strength of the dosage form, causes the dosage form smoother and more suitable for swallowing purposes. Pharmaceutical companies could print their marks, symbols, or abbreviations on the tablets and mask a disagreeable color or odor of the tablets. The release of the active ingredient can even be controlled with the help of coatings. Coated dosage forms could be site-specific. The coating prevents acid-sensitive drugs from having a negative impact on the intestine. The drug release rate in the gastrointestinal tract (GIT) could be controlled by controlling the dissolution rate of the tablet [26]. Drawbacks of FC The drawbacks of FC are represented in Table 2 [27]. Table 2. Representing drawbacks of FC. Flaw Definition Possible Reason Treatment Blistering Blistering refers to the detachment of film from the surface of the object (like a tablet) and results in the formation of blisters. 
The possible reason for this defect could be the entrapment of gases in the layer of the film during the process of spraying (mainly when the process is overheated) That defect could be treated by designing the drying conditions to be mild. Chipping Chipping states a condition where the film becomes dented, chipped from the edges. Possibly due to a decreased rotation of the drum or flow of fluidizing air in the coating pan. The operator must be careful at the pre-heating stage and not over-dry the tablets. Otherwise, the tablets encourage the defect by becoming brittle. Picking It is defined as the adhered film on the tablet's surface that may be torn away, resulting in the sticking of tablets. The main cause of such defect is the production of wet tablets, which may stick together. The condition may be treated by reducing the volume of applied liquid or by increasing the temperature of dry air. Pitting In this type of defect, specific pits have appeared on the surface of the dosage form without any visual disappearance of the FC. The reason for such a problem appeared due to the melting point of the materials used is less than the temperature of the tablet core used in the tablet formulation. Adjusting the temperature during the process of tablet core results in the removal of such defects. To solve such instability caused by the ingredients, a reformation with different additives and plasticizers is the best way to solve the problem. Film Coating It is a process in which a thin coat of a polymer material is coated with oral solid dosage forms, including particles, granules, and tablets. Coating thickness may range from 20 to 100 µm [28]. Organic Film Coating Based on the material used for the coating perspective, the binding material can be changed accordingly. Organic film coating may include water-based paints, lacquers, and enamel [29]. Aqueous Film Coating The disadvantages of SC have led to the development of aqueous FC methods. Previously, these methods employed organic solvents, but due to the safety issue of these solvents, a better and more cost-effective method was developed in which the solvent was switched by aqueous-based FC [30]. These are applied as a thin film on the surface of the dosage form to obtain numerous benefits, including modified release, environmental protection, and taste masking. The coating depends on several factors, including tablet shape, the liquid used for coating purposes, equipment used for coating, and characteristics of the tablet surface. The coated film must be smooth in appearance, stuck smoothly with the tablet's surface, and maintains physical and chemical stability. Based on the solubility of the water and the former film polymer used, the coating could be done by the solution or dispersion method [25] (Table 3). The Present Trend of Aqueous FC in Pharmaceutical Oral Solid Dose Forms Despite the purpose and rational use of FC techniques, the aqueous coating could possibly be reported as the most widely used method for coating purposes. Aqueous coatings are used for conventional and delayed-release systems [32] (Table 4). Table 4. Factors that affect the quality of film coating [31]. Factors Affecting the Quality of Film Coating Factors that Affect the Coating with the Interaction of Substrate Drying process The viscosity of the coating liquid influences the coalescence of droplets Interaction between core and coating material There exists an influence of solid contact on the viscosity of the coating and the roughness of dry coating. 
Uniform distribution of coating There appeared a great influence of surface tension on the spreading of coating material across the surface of the coated material, wetting the surface of the substrate, and evenly distributing the liquid in the form of a thin film over the substrate. Polymers Used in Pharmaceutical Coating Polymers play a vital role in coating technology; sometimes they are used for modifying the delivery of dosage forms, for taste masking, and as film forming agents. Some of the polymers used for such purposes are illustrated in Table 5. 3.1.1. CAP To achieve enteric coating or controlled release of tablets or capsules, CAP (cellulose acetate phthalate, also known as Cellacefate; the chemical structure is shown in Figure 1), one of the cellulose esters, is commonly used. To provide a delayed action regarding drug absorption, CAP disintegrates at a pH greater than 6, which makes it a natural choice of polymer for enteric coating. It is hygroscopic, which makes it vulnerable to moisture uptake and penetration of GI fluid [37]. The molecular weight of CAP represents another parameter that affects the properties of the polymer. The properties of the polymer vary with factors like viscosity, surface tension, conductivity, and rheology. A polymer with lower molecular weight yielded beads, while fibers with large diameters were yielded by a high molecular weight polymer. Polymers with high molecular weight were utilized for electrospinning to achieve the required formulation-based viscosity. The viscosity of a solution directly reflects the chain entanglement of the polymer chains, and in electrospinning the chain entanglement of the polymer plays a vital role [38]. Cellulose Acetate Trimellitate (CAT) Both CAP and CAT are similar other than the occurrence of the carboxylic group on the aromatic ring of CAT (as represented in Figure 2). Manufacturers quote values of 22% for acetyl and 29% for trimellityl content, respectively. This polymer proves its enteric coating property by dissolving at pH 5.5 in the upper part of the intestine. Dissolution studies further demonstrated that both CAP and CAT exhibit similar solubility properties in organic solvents. Meanwhile, regarding aqueous solvents, studies have demonstrated that, to achieve full enteric properties, ammoniacal solutions of CAT were utilized with water. The plasticizers recommended to be used with aqueous solvents include acetylated monoglyceride, diethyl phthalate, and triacetin [40]. Methylcellulose (MC) One of the most commonly and commercially used polymers is MC. The polymer is a cellulose ether and has several industrial applications.
It is a cellulose derivative with a structure comprising a methyl group on the anhydro-D-glucose moiety, which substitutes the hydroxyl group (OH) at positions C-2, 3, and 6 (as represented in Figure 3). One of the most important ethers of the methyl family is methyl cellulose (MC). Structurally, it contains a methoxy content that accounts for approximately 27.5-31.5% of the whole MC. An aqueous solution of MC shows heat-related gelling properties. It is soluble in water. Its average molecular weight ranges between 10,000-220,000 daltons. It is most commonly used as a coating agent, binder, and disintegrant in oral solid formulations. Furthermore, it is also used for sustaining drug release [41]. The polymer exhibits exceptional amphiphilic and physicochemical properties. The solubility of the polymer shifts from water-soluble towards organo-soluble, depending upon the degree of substitution of the OH groups from three to zero. Meanwhile, by increasing the temperature of the polymer towards a critical temperature, singular thermal behavior was observed, which reduces the viscosity and produces an aqueous solution. With a constant rise in temperature, the lower critical solution temperature (LCST) of the polymer MC was observed, which produced a thermoreversible gel with augmented viscosity. Below the LCST, MC is highly water soluble, while the polymer becomes insoluble at temperatures exceeding the LCST. That could be the possible reason that the saturated solution of the polymer converts to a solid state upon heating [24]. Ethylcellulose (EC) EC is not directly water soluble; it is made water and fluid soluble after the addition of other additives like HPMC (as represented in Figure 4). It is a partial derivative of cellulose ether (O-ethylated). EC is available in various molecular grades, which vary in viscosity. EC is prepared by the reaction of alkali cellulose with ethyl chloride, and the substitution of ethoxy groups is controlled throughout this reaction. In pharmaceutical formulations, EC is used as a binder, taste masking agent, and modified release agent [43]. The polymer is non-toxic, colorless, and tasteless and is widely used in organic solvents. EC can resist drug release. EC can also be used to incorporate materials by employing direct compression or wet granulation. Different microencapsulation techniques were used for the encapsulation of EC microparticles. It is one of the most widely used water-insoluble polymers for coating solid dosage forms [44]. Colorectal capecitabine-based microspheres were developed by Kumbhar et al. with the help of natural polysaccharide polymers to enhance cost-effectiveness.
Microspheres were developed using a single emulsification technique with calcium chloride (CaCl2), loaded with pectin, and further coated with EC using the solvent evaporation technique. Furthermore, characterization of the microspheres was carried out, including particle size, Fourier-transform infrared spectroscopy (FTIR), scanning electron microscopy (SEM), differential scanning calorimetry (DSC), drug release, and entrapment efficiency. Drug release studies observed that less than 20% of the drug was released in an acidic medium. An initial burst of drug release was observed, but by the end of the 12th hour a total drug release of 85.33-95.55% was observed due to the coating with EC. It was concluded that capecitabine-based microspheres loaded with pectin and coated with EC can also be used effectively in the treatment of colon cancer and can replace conventional therapy [45]. Hydroxyethyl Cellulose (HEC) It is a cellulose-based polymer used for its gelling and thickening properties. HEC (the chemical structure is represented in Figure 5) is further used in the hydrophilization process, which increases the solubility profiling of drugs within GI fluids. HEC has a molecular weight of 90 kDa, improved water solubility and a neutral nature, making it an excellent candidate for drug carrier systems. Regarding its demand, its high biocompatibility, chemical stability, and exceptional thickening property make it a good candidate for pharmaceutical formulations.
Before the formulation of a carrier system, the characteristics of both the drug and the carrier must be examined carefully [47]. It is further used in cleaning solutions, household products, and cosmetics due to its water-soluble and non-ionic nature. HEC produces crystal-clear gels and solidifies the water phase of cosmetic emulsions. This polymer has a big disadvantage: it forms agglomerates or lumps when it first gets moistened with water. One of the grades of HEC, termed the R grade, is used for solution formation because no lumps are formed as it comes into contact with moisture, which ultimately enhances solubility and the processing time of the reaction [31]. Chowdary et al. developed a bilayer film-coated tablet of paliperidone. The tablet was further characterized by in vitro drug release studies. The tablet core was formulated with varying concentrations of polyox. An enteric coating with cellulose acetate and a sub-coating using HEC were optimized. Different influencing factors, like the composition of the tablet core and the ingredients of the coating, were investigated. The formulations were optimized by comparing the results of in vitro drug release studies [48]. Hydroxypropyl Methylcellulose (HPMC) HPMC is a synthetic alteration of a natural polymer (chemical structure shown in Figure 6). It is white to slightly off-white, odorless, and tasteless. It is a water-soluble polymer and can also be used in the controlled release delivery of tablets. It is also used for coated and uncoated matrix tablets. Upon hydration of the matrix with water, the polymeric chains disentangle [50]. Drug release follows a two-way mechanism; in the first step, the drug diffuses from the gel layer of the polymer, while in the second mechanism, the release of the drug follows erosion of the swollen layer. As a result of the presence of cellulose ether, it is possibly used for the controlled release of oral drug delivery. HPMC can further be used for aqueous and solvent film coating. Matrix-based tablets could be developed using wet granulation or direct compression [51]. In another study conducted by Ifat Katzhendler et al., the release of naproxen and naproxen sodium was studied by varying the molecular weight of HPMC. It was concluded from the results that when used alone, naproxen decreases the drug's solubility, while naproxen sodium increases the system's pH and ultimately increases drug loading; hence, drug release also increases [50].
Polyvinyl Pyrrolidone (PVP) It is a water-soluble polymer; its molecular weight ranges between 40,000 to 600,000 Daltons and it can be distinguished into different grades. PVP is manufactured by polymerizing vinyl pyrrolidone in isopropyl alcohol or water (the chemical structure of which is represented in Figure 7). Due to the presence of the polar amide group alongside the hydrophobic alkyl group, it is highly water-soluble. Due to its high degree of compatibility, it is an excellent candidate for a drug carrier system. PVP is a non-carcinogenic, non-toxic, and temperature stable polymer. PVP exhibits a superior drug carrier system [53]. Different grades of PVP were used to enhance the bioavailability of poorly water-soluble drugs. In essence, it is used in tablet manufacturing as a binder. Granules produced by wet granulation using this polymer exhibit greater binding strength, low friability, and good flowability compared to other binders [54]. Tang et al. prepared paliperidone tablets using simple manufacturing and then coated them to produce a sustained effect. Tablets were evaluated and investigated for their in-vitro drug release behavior. Tablets were coated using a highly viscous HPMC K 100 M and HPC coat. The in-vitro drug release parameters were evaluated considering different factors that include the core tablet composites, the material used for FC, and the formulation parameters. Gravimetric analysis was used to determine the drug release mechanism. The data obtained from drug release profiling were then fitted to the Peppas model. Drug releases at different intervals were then plotted in graphical form; the drug release was represented in the form of a slope at various time points. The results showed that the preparation could achieve better ascending drug release once the weight relation of paliperidone was 5:1 (core:layer). The fraction of HPMC and HPC was 33%. The ascending drug release was probably due to the penetration of solvent into the coated paliperidone tablets with the subsequent dissolution of the drug from the viscous polymers HPMC and HPC due to erosion of the matrix. Both erosion and diffusion mechanisms of drug release were followed. It is concluded that coated tablets prepared by compression can possibly be used for ascending controlled drug release over 24 h [55].
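As an illustration of the Korsmeyer-Peppas ("Peppas") analysis mentioned above, the release exponent n and the rate constant k of Mt/M∞ = k·t^n can be obtained from a simple log-log regression. The time points and release fractions below are hypothetical and are not data from the cited study.

import numpy as np

# Hypothetical cumulative release data (fraction released Mt/Minf at time t in hours)
t = np.array([1, 2, 4, 6, 8, 12, 18, 24], dtype=float)
release = np.array([0.09, 0.15, 0.26, 0.35, 0.44, 0.60, 0.78, 0.92])

# Korsmeyer-Peppas model: Mt/Minf = k * t**n  ->  log(Mt/Minf) = log(k) + n*log(t).
# The power law is usually fitted only to the first ~60% of release.
mask = release <= 0.6
n_exp, log_k = np.polyfit(np.log(t[mask]), np.log(release[mask]), 1)
k = np.exp(log_k)
print(f"release exponent n = {n_exp:.2f}, rate constant k = {k:.3f} (h^-n)")

# For cylindrical tablets, n <= 0.45 indicates Fickian diffusion and
# 0.45 < n < 0.89 anomalous (coupled diffusion/erosion) transport, which is how a
# mixed erosion-diffusion mechanism such as the one reported above would show up.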
Shellac Due to its structural novelty (as shown in Figure 8), shellac is considered to have unique properties. It is composed of an ester complex with polyhydroxy polybasic acids. Shellac has various applications, including as an adhesive, insulator, film forming agent, and thermoplastic agent. As shellac is obtained from animal origin and is completely different from other polymers, the absence of resins, aromatic compounds, phenolic compounds, oxidized polyterpenic acids, and resinotannols gives it unique properties [57]. Shellac contains an acidic group with a high acidic dissociation value. Due to this, it is not easy for the group to dissociate in a gastric environment, which causes a decreased dissolution effect in the stomach (pH 2). With the modification of the shellac chemical structure by the addition of sodium carbonate (an alkaline group), the performance of shellac in the stomach was enhanced. In one study, nanoparticles and nanofibers of ketoprofen were formulated, incorporated with shellac, and characterized (SEM, XRD, FTIR [58]). Results showed that the nanocomposites were suitable for the controlled release of ketoprofen [59]. Sodium Carboxymethyl Cellulose (SCMC) It has a cross-linked form known as croscarmellose sodium (Figure 9). It has excellent swelling properties and is hydrophilic with excellent absorbing properties. Commercially, SCMC is available with varying degrees of substitution (DS) ranging from 0.7 to 1.2, with a corresponding sodium content of 6.5-12% of total weight.
SCMC is extremely hygroscopic in nature and absorbs more than 50% water content. Tablets formulated using SCMC tend to harden with time [43]. Croscarmellose sodium enhances the bioavailability of numerous formulations, giving excellent disintegration and dissolution characteristics. In oral formulations, croscarmellose sodium is used as a disintegrant. In the pharmaceutical industry, it is used to develop tablets by direct compression, and it is also employed as an insecticide in the paper and textile industries. It behaves as a protective colloid to prevent water loss [60]. Shinde et al. tried to develop sustained release swellable matrix tablets using diltiazem hydrochloride as a model drug. The purpose of the dosage form was to improve the dissolution profile of the drug, as the drug is more soluble in the upper part of the GI tract [61]. Zein It is a natural polymer derived from plant origin and is more beneficial than synthetic polymers. It has applications for controlled drug release and biomedical purposes. Zein is highly nutritive due to the presence of numerous components, which include proteins. It comprises 50% corn protein and 6 to 12% protein according to its dry weight. About 25% of this protein is present between the bran and germ, while 75% of this protein is present in endosperm tissues. Zein is also used in vaccines, tissue engineering, and gene delivery. It is used as a biopolymer due to its two basic properties: biodegradability and biocompatibility [23]. A complete illustration of the zein structure has not been achieved until now, but with the help of chromatographic techniques, some of its characteristics were discovered in the 80s. With the help of the small-angle X-ray scattering (SAXS) technique, the helical structure (with ten successive folds) of zein was revealed [63]. Zein is obtained in α-, β-, δ-, and γ-forms depending on the molecular weight and extraction method used. It is further used in various industrial fields, including adhesives, inks, the food industry, ceramics, chewing gums, candy formation, and plastic packaging materials (Figure 10). Initially, zein was used as a protective material on coated materials because it is more resistant to humidity and abrasion and is heat tolerant. Due to its low cost, it was also used as a taste-enhancing agent in immediate release dosage forms.
It was concluded from a study that there appeared to be no influence of the coating process on the hardness of the core. However, tablets coated with zein (FC) showed a higher strength compared to HPMC and CAP [63]. Zein exhibits excellent physical characteristics, which is why it is used in different formulations, including gels, fibers, films, nanoparticles, and for the controlled release of drugs in tablets. Products prepared using zein have improved shelf life because zein is resistant to water, heat, and abrasion [40]. Van et al. inspected zein as a coating material by preparing prednisolone for colon-specific drug delivery. Suitable proportions of zein and Kollicoat MAE 100P were prepared and tested to confirm the strengthening capacity of zein films. For the colon-specific dosage form, zein exhibited immediate release of the drug substance as soon as it reached the basic medium of the intestine. Furthermore, the formulations were characterized by FTIR, and it was evident that different ratios of zein and Kollicoat MAE 100P experience physical interactions [64].

Eudragit L-100-55

It is a copolymer obtained from the esters of methacrylic acid and acrylic acid, where the functional group (R) is responsible for its physicochemical properties (chemical structure represented in Figure 11). Eudragit is anionic, white in color, and has free-flowing properties. It is used for enteric coating purposes and dissolves at a pH of 5.5 or more [66]. One of the pharmaceutical industry's most commonly used classes of pH-sensitive polymers is the Eudragits because of their solubility over various pH ranges. At a pH higher than 5.5, Eudragit L100-55 controls the release of the pharmaceutically active ingredient. Eudragit L100 and Eudragit L100-55 differ by the substitution of a methyl group rather than an ethyl group. The difference in the functional groups eventually imparts a change in the dissolution profile of the two polymers at different pH values [67]. Alsulays et al.
developed enteric coated tablets of lansoprazole to improve their physical and chemical properties by using a new technique named hot-melt extrusion. Kollidon 12PF was used as the polymer, Lutrol F68 as a plasticizer, and magnesium oxide (MgO) as an alkalizing agent. An amorphous state of lansoprazole appeared and presented better drug release when extruded with Kollidon 12PF and Lutrol F68. At the same time, incorporating MgO improved the extrudability of lansoprazole and its release, resulting in more than 80% drug release within the buffer zone [68] (Figure 11).

Other Additives

Plasticizer

These are low molecular weight materials that are added to enhance the mechanical strength of a polymer [70]. Plasticizers weaken the intermolecular forces of the polymers, thereby reducing their rigidity and improving their coalescence properties while making films [70]. They can reduce the glass transition temperature of amorphous polymers, decrease the interactions of different polymers, and reduce the brittleness of films [70]. They alter the plasticity of film-forming polymers (FFP) in two basic ways: external and internal plasticizing. External plasticizing involves the use of plasticizers, while internal plasticizing is due to a modification in chemical structure that ultimately changes the physical properties.
External or internal plasticizers are used in an optimum range of 1-50%, but most commonly about 10% plasticizer is used. Polyethylene glycol and HPMC were the polymers most commonly and effectively used. Triacetin, a less commonly used plasticizer, protects the formulation by creating a moisture barrier against the aqueous coat [12].

Colorants and Opacifiers

To improve product identification, enhance the appearance of products, and decrease the risk of counterfeit products, colorants are added to the formulations. Opacifiers are used in those products that are degraded by light. The typical concentration of colorants used in film coating formulations (FCF) ranges from about 2% w/w for dark shades down to 0.01% w/w for light shades. Each country has its own regulatory approved opacifiers and colorants; some of them are mentioned in Table 4. Colorants may be water-insoluble, known as pigments, or water-soluble, known as dyes, as represented in Table 6 [12].

Table 6. Opacifiers and colorants used in FC [12].
Natural colorants: beta-carotene, riboflavin, carmine lake
Water-soluble dyes: FD&C yellow no 5, FD&C blue no 2
Inorganic pigments: titanium dioxide, iron oxides
D&C lakes: D&C red no 30 lake, D&C yellow no 10 lake
FD&C lakes: FD&C yellow no 5 lake, FD&C blue no 2 lake

Issues Related to Aqueous Film Coating

The FC process must be carried out at a temperature above the polymer's Tg. Additionally, the quantity and quality of the pigment and plasticizer in the coating process influence many of the mechanical, barrier, and physicochemical properties, and other factors discussed in Table 7 [12].

Table 7. Effect of plasticizer and pigments in FC [12] (plasticizer / pigment).
Elastic modulus: reduced / increased
Tensile strength: reduced / reduced
Film permeability: depends on the physicochemical properties of the plasticizer used / decreased until the pigment volume reaches a critical concentration
Hiding power: little or no effect / increased, but dependent upon the refractive index and light absorption characteristics of the pigment
Viscosity of the coating material: increases, directly related to plasticizer molecular weight / increased
Tg: reduced / slight or no effect
Adhesion of the films: generally increases under ideal conditions / slightly affected

Equipment Used for Tablet Coating

Equipment is generally used to coat the tablet surface with a thin film that acts as the coating material. The general purpose of the film is to protect the tablet from physical or chemical harm and to mask unpleasant odor and taste. The coating also protects the tablet from the harsh gastric environment, promotes sustained drug release, and enhances the appearance of the tablet [71]. Equipment used for coating purposes is constructed on simple principles: the coating is applied to the tablets in solution form while the rotor is moving horizontally or vertically. During rotation, a stream of hot air is also introduced, which promotes the evaporation of the solvent. Continued movement of the bed causes an even distribution of the coating material over the tablets and even drying [26]. Some of the important parameters of the coating process are as follows:

Configuration of Coating Material

Coating material usually consists of a solvent carrier system and the dissolved coating material meant to be coated on the tablets.
The solvent carrier system evaporates with the help of the drying mechanism during the film coating process. The heat is delivered with the inlet air used to evaporate water, while the exhausted air contains more water due to the evaporation process. Thus, the exhausted air is cooler than the inlet air until the entire drying process is completed.

Capacity of Air

It represents the amount of solvent or water removed during the coating process. It depends on the rate and extent of air that flows through the bed of the tablets (a rough mass-balance sketch is given after this group of subsections).

Efficiency of the Equipment Used

The coating material's adherence to the coating pan's walls determines the equipment efficiency. In the case of sugar coating, the efficiency is very low, while a satisfactory equipment efficiency reaches about 60%.

Surface Area of the Tablets

The coating parameters are affected by the tablet surface area and size. The smaller the tablet size, the larger the surface area per unit weight [31].

Standard Coating Pan

Conventional pan systems, or standard coating pans, which are similar, are mostly employed by pharmaceutical industries. Specifically, they are designed for coating purposes in such a manner that the circular pan is considered a drum, which is metallic and has a diameter of 6-80 inches. The drum is tilted from the top of the bench at an angle of approximately 45 degrees. An electric motor fitted in standard coating pans rotates the drum on its horizontal axis, which tumbles the batch of tablets. It is a fast process and decreases the drying time. These conventional coating pans can further be used for sugar or film coating purposes with slight modifications, which include the use of an immersion sword, an immersion tube, or a Pellegrini baffled pan and diffuser, as illustrated in Figure 12. On the contrary, the equipment has some disadvantages: the use of organic solvents might be risky, and the air supply, if unregulated, can complicate the process. Furthermore, as drying occurs on the surface of the tablets only, this might lead to improper coating and mixing of the tablets [31].
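As a rough illustration of the "Capacity of Air" point above, the sketch below estimates how much water a drying-air stream can carry away from the tablet bed. The airflow rate and humidity ratios are invented for illustration and are not values from the text; a real coater would take them from psychrometric measurements of the inlet and exhaust air.

```python
def water_removal_capacity(air_flow_kg_min, humidity_in, humidity_out):
    """Approximate water removed per minute (kg) by a drying-air stream.

    humidity_in / humidity_out are humidity ratios (kg water per kg dry air)
    of the inlet and exhaust air; the exhaust air leaves cooler and wetter
    than the inlet air, as described in the text.
    """
    return air_flow_kg_min * (humidity_out - humidity_in)


# Illustrative (invented) numbers: 20 kg/min of dry air entering at a humidity
# ratio of 0.005 and leaving at 0.020 kg water per kg dry air.
print(water_removal_capacity(20.0, 0.005, 0.020))  # about 0.3 kg of water per minute
```

Under these assumptions, the spray rate of an aqueous coating suspension would need to stay below roughly 0.3 kg of water per minute to avoid over-wetting the tablet bed.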
Immersion Sword

It is a technique used to increase the drying productivity of a conventional pan coating apparatus. In this process, a perforated metal sword is inserted into the bed of tablets. Due to the presence of the perforated sword, this system allows the circulation of just one flow of dry air through the middle portion of the sword and resists many flow points of air.

Immersion Tube System

The commercially available immersion tube system consists of an additional tube immersed in the bed of the tablet coating machine. The function of the tube-nozzle is to provide both the coating solution and hot air concurrently. It is a long tube with a spray nozzle at its tip. It is designed in such a manner that heated air leaves the system by flowing in an upwards direction through conventional ducts. The drying time and efficiency of a standard coating pan can be enhanced with the simple inclusion of an immersion tube system. This technique may be used for film and sugar coating [10].

Baffled Diffuser and Pan

The drying efficiency of standard pans used for coating purposes was improved using the Pellegrini baffled pan and diffuser technique. One of the possible reasons is that they improve the drying and tumbling behavior of the coating equipment. The tablet coater was successfully used to evenly distribute the drying air over all coated tablets. Ordinary coating pans with a baffled diffuser and pan were only suitable for sugar coating purposes due to drying capacity limitations [46].

Perforated Coating Pan

Among other coating techniques, many pharmaceutical companies widely adopted perforated coating pans. These coating pans consist of a fully or partially perforated drum. Like other pans, the drum of this coating pan rotates on its horizontal axis and is equipped with an air-atomized spray nozzle and airflow controller. Unlike other coating pans, perforated pan coaters have an effective drying system, as illustrated in Figure 13. Moreover, they have a high capability in the tablet coating process. They are used for both sugar coating and aqueous film coating. Perforated coating pans appeared to be efficient for film and sugar coating, compared to conventional pans, due to their high coating capacity, numerous airflow patterns, and increased tablet drying [47].
AccelaCota System

In this coating system, hot air is passed directly from the top part of the drum, falls directly on the bed of the tablets, and is exhausted from the drum through the perforations present at the bottom of the drum. The material coated on the tablets is evenly distributed in the drum through spraying nozzles. Meanwhile, the presence of baffles in the drum improves the tumbling of the tablets and provides free mixing. It is used effectively for both coating (FC, SC) and drying processes [10].

Dria Coater Pan

This type of coating pan has perforated ribs that are present on the inner periphery of the coating drum. The working principle of the Dria coater is similar to that of the Accela coating machine. Meanwhile, the air used for drying purposes enters from below the coating drum and flows through the tablets in an upward direction, eventually leaving the system from the back of the tablet coating pan [31].

Glatt Coater

One of the most advanced technologies, having a shorter processing time and higher coating capacity, is known as the Glatt coater. It is designed so that one can easily direct the drying air inside the tablet coating drum. Generally, it consists of an exhaust system, and the air, after passing over the tablet bed, exits through it. It has a unique design that reduces the turbulence produced by the spray nozzle, which ultimately ensures a smooth coating on the surface of the tablets. Furthermore, the pan is fitted with baffles, which protect the tablets from damage during mixing and enhance their mixing simultaneously. The Glatt coater is also constructed with a perforated system like other coating pans. Spray nozzles are situated at the top of the drum, aiming toward the tablet bed and atomizing the fluid used for coating purposes [72].

Fluidized-Bed Coater

The coating mechanism in these coaters follows the fluidization principle; an increased amount of air enters through the center of the column, which raises the tablets in the center and drives the coating process. The fluid used for coating purposes is sprayed using spray nozzles placed at the top or bottom of the equipment [73]. It has a working mechanism similar to other bed coaters. It consists of a vertical cylinder in which the tablets are suspended in the chamber and dried due to an upthrust of drying air. A fluidization process occurs, which causes the tablets to move outward, upward, and then downward. The spray nozzle is then used to spray the desired coating fluid onto the tablets, either from the bottom or top of the fluidized bed coater, as shown in Figure 14 [74].
High-Pressure Airless Systems

This system is used to pump the liquid, without the need for air, at a very high pressure of about 250-300 psig. The nozzle used for this process is very small, about 0.009 to 0.02 inches. The spray rate and degree of atomization are regulated by the orifice size, the fluid pressure, and the viscosity of the liquid [31].

Low-Pressure Air-Atomized System

This system uses a low fluid pressure of about 5-50 psig to pump the fluid through a 0.020-inch or larger orifice. Some major parameters that regulate the spray rate and the atomization process are the fluid cap orifice, the air pressure, the design of the air cap, and the viscosity of the fluid [75].

Evaluation Parameters of FC Tablets

Hardness and Friability

Hardness and friability tests are conducted to ensure that the tablets' mechanical strength persists during handling, transportation, storage, and usage. The hardness of the prepared tablets is measured using a manual or automatic hardness tester, with units of kg/cm2. The friability of the formulated tablets is determined using a friabilator. The apparatus consists of a plastic body in which the tablets are rotated at 25 rpm and subjected to shock and abrasion from a height of 6 inches. The weight of the tablets before and after the test is determined, and the friability is calculated using Equation (1):

% friability = [(W1 − W2)/W1] × 100 (1)

Here, W1 represents the original weight of the tablets, and W2 represents the weight of the tablets after the test is completed. The friability value must not be greater than 1% [76].

Uniformity of weight

Uniformity of dosage form represents the even distribution of drug substances and excipients in all dosage units. The amounts of the ingredients (active and excipients) must be within the range claimed on the label. Content uniformity and weight variation are both parameters used to determine uniformity of dosage units [77].

Disintegration time

According to pharmacopeial recommendations, one of the vital evaluation parameters for all capsules, granules, and tablets is the disintegration test. This test evaluates the performance and quality of a dosage form in terms of its ability to disintegrate completely over time.
For instance, if a tablet is highly compressed or a gelatin-based capsule does not comply with pharmacopeial recommendations, the disintegration time of the dosage form increases. This test also ensures the consistency and uniformity of the contents within all batches. In case of any variation, or if any sample does not comply, suitable actions must be taken according to the results [31]. Disintegration tests are carried out in a disintegration apparatus recommended according to USP guidelines. One dose unit is introduced at a time, and temperature conditions and rpm are maintained accordingly [78].

In vitro dissolution studies and release kinetics

In vitro dissolution and release kinetics are evaluated to determine the amount of drug released from the dosage form. The amount of API released from a dosage form determines how much active drug is available for absorption at the site of action. As dissolution is directly related to bioavailability, increased dissolution ensures increased bioavailability of the API. Mathematical models were used to investigate the drug release process. The goal of the system is to maintain therapeutic concentrations of the drug in the blood or the target organ, and these mathematical tools better describe the kinetics of drug release from the dosage form.

Stability testing

Stability studies of pharmaceutical formulations are conducted to ensure the formulation's efficacy, safety, and quality. Accelerated stability testing is conducted for 6 to 12 months, and additional tests are performed for 3 months while the product is stored at 50 °C with 75% relative humidity (RH) [31]. Stability studies ensure that the finished product withstands the temperature variations encountered from the manufacturing process through to use by the patient.

Modified Drug Release

In most cases, to achieve patient compliance or to improve drug efficacy, modified drug release systems are used [79]. Consequently, the tablets are film coated using suitable polymers that retard or control the drug release. Some of the approaches for a modified drug delivery system are as follows.

Delayed Drug Release

One of the major advantages of EC is to increase the gastric stability of the dosage form by protecting it from the harsh gastric environment. Polymers with pH-dependent solubility are mostly used for EC. They also tend to prevent the premature release of drugs in the stomach. Some drugs, including the proton pump inhibitors (omeprazole, esomeprazole, lansoprazole, rabeprazole, and pantoprazole), are acid labile and need EC to prevent degradation in the stomach and ensure proper drug release [56]. Likewise, Gobinath et al. [80] formulated EC tablets using pantoprazole as a model drug with Eudragit and CAP. Tirpude and Puranik proved that rabeprazole's performance improves by using EC with two different enteric polymers: an outer coating of cellulose and an inner coating of acrylic polymer [81]. Enteric coating of granules has also been used to formulate time-dependent drug delivery systems that release APIs at different times, one of which dissolves in the upper and the other in the lower portion of the intestine. The Food and Drug Administration (FDA) has now officially accepted a formulation of dexlansoprazole that uses two different types of enteric-coated granules with different pH-dependent dissolution profiles, one of which releases 1-2 h after administration and the other after 5-6 h [82].
Using such formulations in once-daily dosing controls gastric acid for a longer time and prolongs drug absorption [82]. Macromolecules, including proteins and peptides, have low permeability and stability when administered orally. Thus, enteric coating of the formulations has been considered to overcome such issues and to enhance drug release [83]. Wong et al. [84] prepared oral tablets using insulin as a model drug; the tablets were then enteric coated using cellulose acetate hydrogen phthalate together with other additives, including an absorption enhancer (chitosan) and an enzyme inhibitor (sodium glycocholate). This tablet showed maximum drug release with insulin-dependent Glut-4 translocation and decreased or no drug release at acidic pH [84]. Likewise, many other oral formulations containing hormones or insulin have been considered or are present on the market [83,85].

Colon-Targeted Drug Release

Colon-specific drug delivery systems are used to treat numerous diseases, including irritable bowel syndrome (IBS), Crohn's disease, and colon cancer [86][87][88]. Such a colon-specific delivery system could be used to administer proteins and peptides through this route, and their bioavailability could be enhanced [88]. Pathological conditions, motility, pH, and fluid content change along the GI tract towards the colon, so the materials used for coating purposes are more complex than those for conventional oral dosage forms [87][88][89]. Ibekwe et al. [90] developed a new colon-targeted system, triggered by bacteria and pH, in a single-layer matrix film. To facilitate site-specific delivery, the prepared tablets were coated using a pH-responsive polymer [91]. Dodoo et al. [92] also developed probiotics and then coated them to determine their effectiveness when delivered to the colon. Goyanes et al. [93] formulated budesonide-based colonic tablets for the controlled release of the active drug. The tablets were formulated in capsule form, each containing 9 mg of the active drug. The tablets were coated with Eudragit L100 and fabricated using 3D printing technology. They were further evaluated by scanning electron microscopy (SEM) to investigate the outer coating, and drug release profiling was also done to confirm the release behavior of these tablets. The release of the active drug starts in the small intestine about 1 h after dosing, and the process continues in a sustained manner under the conditions of the distal intestine and colon [93].

Chronotherapeutic Drug Release

The release of APIs can be programmed or delayed for a specific period to meet chronotherapeutic needs, particularly for symptoms that follow circadian rhythms [94,95]. Diseases such as bronchial asthma, cardiovascular disease, sleep disorders, and rheumatoid arthritis, whose symptoms are likely to appear in the early morning or at night, are the best examples. Enteric coating has also been used to achieve chronotherapeutic drug release. Luo et al. formulated a fixed-dose combination of pravastatin sodium and telmisartan in an enteric-coated tablet that matches the circadian rhythms of hypertension and cholesterol synthesis and is administered once daily before bed [64]. The enteric coating prevents early release of the drug from the tablet at acidic pH in the stomach, but it finally releases the drug at pH 6.8.
Similarly, a delivery system that provides delayed release of therapeutic moieties after a bedtime dose is therapeutically suggested, matching the circadian variation in blood pressure and cholesterol synthesis. This system has the benefit of providing maximum therapeutic effect [90].

Sustained Drug Release

The rate of drug release can be controlled by the amount of polymer applied during surface coating, as well as by the tortuosity, permeability, and thickness of the coating membrane; by altering these factors, different drug release profiles can be achieved. To achieve sustained drug release, the coating materials are pH-independent and water-insoluble [90]. Optimized drug release has also been attempted using hydrophilic and hydrophobic polymers in combination. A commonly used antidepressant, venlafaxine HCl, has a very short half-life of about 5 h, so to reduce its dosing frequency, a sustained release formulation was developed. Jain et al. [96] formulated organic and aqueous-based reservoir-type coated tablets using venlafaxine as a model drug for sustained drug release. In such a formulation, polyacrylate was used as the coating agent, while ethyl cellulose was used as an aqueous dispersion. Wan et al. [97] prepared loxoprofen sodium-based sustained release pellets via a double-layered coating. These pellets consist of a dissolution-rate-regulating sublayer of HPMC with a pH modifier (citric acid) and an outer rate-controlling coating of EC applied as an aqueous dispersion on the surface of the drug-loaded pellets [97].

Taste Masking

In the case of geriatric and pediatric patients, one of the major hurdles in medication intake is unpleasant taste. Bitterness is the main cause of medication rejection. Thus, one of the key parameters to improve patient compliance is to mask the unpleasant taste [98]. Meanwhile, masking the taste must not have any negative effect on the dosage form, such as affecting the bioavailability of the drug, causing irritation of the mucosa or dryness of the mouth, or obstructing swallowing [99]. Different methods are employed for taste masking, including surface coating, the addition of flavoring agents, complexation, salt formation, and chemical modification [98]. Among all these methods, one of the most effective and commonly used is FC [98,99]. Many synthetic and natural polymers are available for taste masking. Hydrophilic polymers such as cellulose ethers, hydrophilic block copolymers, and starch derivatives, as well as gel-forming and lipophilic polymers, are used for taste masking [100]. Polymers may be used alone or in combination; commonly, hydrophilic and hydrophobic polymers are combined in different concentrations [98,100]. In a study conducted by Nishiyama et al. [101], FC was used to mask the unpleasant taste of lafutidine. Orally disintegrating tablets were prepared using a water-soluble and a water-insoluble polymer (hypromellose and ethyl cellulose). The polymer ratio affected the properties of the tablets, including their tensile strength, drug release, lag time, and water permeability [101].

Active Film Coating

Active film coating is a process of coating tablets or granules with a solution or suspension that contains APIs. The coating is done to improve product stability, prevent interactions between APIs, and enable the development of fixed-dose combinations [102,103].
Hydrophilic drugs easily dissolve in a solution or water-based coating suspension and can then easily be sprayed onto core tablets. Hence, the coating process is easier for hydrophilic drugs than for lipophilic drugs [102]. Moreover, to protect spray nozzles from clogging, the particle size of water-insoluble drugs must be very small. Meanwhile, the coating process must be homogeneous to obtain acceptable uniformity of content [78]. There are some challenges in active coating, which include determining the end point of coating at which the targeted potency is attained [88,104], confirming the weight variation of each tablet [105], and maximizing the efficiency of the coating process [102,106,107]. During the FC process, random tablets are selected and weighed to determine the weight gain and the quantity of API deposited on the tablet cores during the in-process assay [103]. Based on this assay, further quantities of coating suspension or solution are added to attain the desired potency. A linear relationship was observed between the coating time and the amount of API deposited when the coating conditions, particularly the spray rate, remained constant [102,106,107]. The uniformity of the contents can be affected by various factors, which include the air temperature, the spray rate, the pan speed, the residual moisture, and the atomization pressure [106]. Thus, it is important to understand the factors in the coating process that affect content uniformity [102].

FC in the Field of Nanotechnology

Researchers have strived to formulate and optimize magnetic nanoparticles in recent years, which have proved helpful in biotechnology, computing, and drug delivery. The application and performance of such dosage forms are highly influenced by their proper synthesis and design. To date, many nanoparticles using metals such as copper, iron, magnesium, and manganese, and their oxides, have been developed effectively. Conditions including the coating surface, shape, particle size, surface charge, and magnetic properties of the particles are monitored during the synthesis process. After choosing a suitable method for synthesis, the shape, size, colloidal stability, and surface coating of the nanoparticles are controlled within the optimum range. The efficiency of the coating process depends on the coating system (especially its mechanical properties), the concentration and type of the suspended material, and the treatment of the metal surface before the process is conducted. Generally, the coating solution consists of additives, a pigment, a filler, and a binder. Ideal coatings possess good stability, low permeability, and cost-effectiveness [108].

Marketed Available Products

Some of the marketed FC products are presented in Table 8.

Conclusions

Tablets are the most common and one of the oldest dosage forms. Before the invention of proper machines for their manufacture, tablets were made by hand. To mask the unpleasant taste of different active constituents, to protect them from atmospheric conditions, or to protect them from the harsh gastric environment, coatings were applied. Different coating techniques are employed for the coating of dosage forms, and each coating technique has advantages and disadvantages. FC is a critical but common process that provides a dosage form with different functionalities, thereby meeting diverse therapeutic needs.
FC has emerged as the most suitable and lightest-weight coating approach. In the pharmaceutical industry, FC not only masks unpleasant taste and increases patient compliance, but also protects the APIs from direct contact with water and thus enhances their stability.

Current Limitations and Potential Challenges in the Field of FC

FC is associated with some challenges, addressed as follows.
• Due to the coating of the dosage forms, the processing time can be increased. The issue can be minimized by using a dry (solid) coating method.
• Water, used as a universal solvent, may initiate chemical reactions if not removed effectively. However, some modern formulation procedures use solid coating methods to resolve such issues effectively.
• It is also possible that harsh coating or water-removal conditions might affect the dissolution rate of the formulated dosage form. Thus, specialized coating formulations with specific pressure and temperature requirements are used to minimize such issues [110].

Funding: This research received no external funding.
Autoantibody Profiling for Lung Cancer Screening: Longitudinal Retrospective Analysis of CT Screening Cohorts

Recommendations for lung cancer screening present a tangible opportunity to integrate predictive blood-based assays with radiographic imaging. This study compares performance of autoantibody markers from prior discovery in sample cohorts from two CT screening trials. One hundred eighty non-cancer and 6 prevalence and 44 incidence cancer cases detected in the Mayo Lung Screening Trial were tested using a panel of six autoantibody markers to define a normal range and assign cutoff values for class prediction. A cutoff for minimal specificity and best achievable sensitivity was applied to 256 samples drawn annually for three years from 95 participants in the Kentucky Lung Screening Trial. Data revealed a discrepancy in quantile distribution between the two apparently comparable sample sets, which skewed the assay's dynamic range towards specificity. This cutoff offered 43% specificity (102/237) in the control group and accurately classified 11/19 lung cancer samples (58%), which included 4/5 cancers at time of radiographic detection (80%), and 50% of occult cancers up to five years prior to diagnosis. An apparent ceiling in assay sensitivity is likely to limit the utility of this assay in a conventional screening paradigm. Pre-analytical bias introduced by sample age, handling or storage remains a practical concern during development, validation and implementation of autoantibody assays. This report does not draw conclusions about other logical applications for autoantibody profiling in lung cancer diagnosis and management, nor its potential when combined with other biomarkers that might improve overall predictive accuracy.

A panel of six autoantibody markers was used to assay samples from the Mayo Clinic CT screening trial, to gather normal distribution values, and to generate a cutoff value that might be used to improve efficiency of lung cancer screening. Established cutoff values were applied to 285 samples from 95 participants of a regional CT screening study in the 5th district of Kentucky (Appalachia). The primary objective of the study was to determine the ability of an autoantibody profile to detect lung cancers at the time of or before CT scan. The uniformity of sample collection and study entry criteria was an important standard for analysis within and between the two screening sample cohorts. Class prediction in sample sets comprised predominantly of occult lung cancers (prior to radiographic detection) is a unique aspect of this analysis. Accurate classification of stage I screening-detected cancers was a secondary metric. Samples were collected under protocols approved by accredited Institutional Review Boards (Mayo Clinic IRB and University of Kentucky IRB). All subjects provided written informed consent prior to any research procedures. This research was approved by the respective IRBs and was conducted according to Institutional Review Board regulations and oversight.

Mayo cohort

The Mayo Lung Screening Trial performed five annual CTs on 1520 subjects with a minimum 20 pack-year smoking history, age 50-75, and no other malignancy within five years of study entry [16,17]. Cancer rates were 2.6% at 3 years, rising to 4% at 5 years of screening. A single blood sample was drawn at study entry. The sample cohort was comprised of 180 non-cancer controls, six stage I prevalence lung cancers, and 44 lung cancers diagnosed 12 to 60 months from blood draw [16,17].
Kentucky cohort

The Marty Driesler Lung Screening Project was a community-based CT screening study that accrued 254 at-risk subjects from Eastern Kentucky between 2005 and 2008 [18]. Eligibility criteria included age 55 to 75 years, a 30 pack-year history of smoking, and no other malignancy within five years of study entry. The cancer rate was 2.6%. All subjects provided written informed consent prior to any research procedures. Since analysis of all available samples was cost prohibitive, a sample set of two hundred fifty-six samples from ninety-five participants was constructed by an independent investigator and analyzed in a blinded fashion. The test cohort of nineteen lung cancer samples included five stage I screening-detected lung cancers (three prevalence, two incidence), and four lung cancers diagnosed clinically one to five years after the last serial screening CT and corresponding blood sample. One case of head and neck cancer was diagnosed during the screening period, and six other non-thoracic malignancies were diagnosed up to five years from the last lung cancer screening CT. All cancer cases are summarized in Table 1. One or more non-malignant pulmonary nodules were noted in 56% of the study cohort. Dominant non-malignant radiographic findings included emphysema, mediastinal adenopathy and granulomatous disease.

Assay composition and procedures

Marker discovery, measurement and statistical analysis have been described previously [7][8][9]. The marker panel was comprised of six individual tumor-associated autoantibodies that offered robust discrimination between cancer and non-cancer samples in prior analysis; these six also provided consistent performance as a combined measure in a single assay based on receiver operating characteristic area under the curve. T7-phage-expressed capture proteins were derived from cDNA tumor libraries [7][8][9]. These putative autoantibody markers corresponded to apurinic/apyrimidinic endonuclease-1 (APEX1), nucleolar and coiled-body phosphoprotein 1 (NOLC1), splicing factor 3a (SF3A3), paxillin (PXN), BAC clone R-580E16 (unknown protein product) and mitochondrial 16S ribosomal RNA (MT-RNR2) [7,8 and unpublished]. All phage-expressed capture proteins were covalently bound to Luminex microspheres for multiplex analysis using commercially available protocols. Autoantibody levels were quantified using biotinylated anti-human IgG and R-phycoerythrin-labeled streptavidin. The mean absolute fluorescence for each marker was calculated from triplicate measurements for each sample. No-sample controls included in each run consistently measured near zero. A single absolute fluorescence value was generated for each sample using the sum from the individual markers. A cutoff value of 640, corresponding to the lower quartile (setting specificity at 25%), would be expected to maximize capacity for detecting cancer at the earliest stages of disease while still providing an improved ratio of scans performed to cancers detected. That cutoff was applied to class prediction in the Kentucky CT screening cohort. Relevant points of data analysis included distribution in the at-risk population and comparability to the Mayo Clinic cohort, consistency of annual measures from individual subjects, and accurate classification of cancer samples at the time of and prior to radiographic detection.
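The per-sample scoring just described (mean of triplicate reads per marker, summed across the six markers, then compared with the 640 FU cutoff) can be sketched in a few lines of Python. The marker names are those listed above, but the fluorescence values and the use of a strict greater-than comparison at the cutoff are illustrative assumptions, not details taken from the study.

```python
import statistics

CUTOFF_FU = 640  # lower-quartile cutoff derived from the Mayo non-cancer samples

def composite_score(triplicates_by_marker):
    """Sum over markers of the mean absolute fluorescence of triplicate reads."""
    return sum(statistics.mean(reads) for reads in triplicates_by_marker.values())

def classify(triplicates_by_marker, cutoff=CUTOFF_FU):
    """Return the composite score and a positive/negative call at the cutoff."""
    score = composite_score(triplicates_by_marker)
    return score, ("positive" if score > cutoff else "negative")

# Hypothetical sample: the six markers named in the text with invented triplicates.
sample = {
    "APEX1": [110, 118, 114], "NOLC1": [95, 90, 101], "SF3A3": [150, 142, 148],
    "PXN": [80, 85, 78], "R-580E16": [120, 130, 125], "MT-RNR2": [160, 155, 158],
}
print(classify(sample))  # composite score of roughly 720 FU -> "positive"
```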
Results

The additive sum of absolute fluorescence from the six markers was used as an intuitive measure of overall autoantibody reactivity to provide a single value for each sample, define the distribution in the at-risk population, and assign cutoffs for cancer prediction in an independent cohort. The median value across 180 non-cancer samples from the Mayo Clinic sample cohort was 1126 fluorescent units (FU), with 25%/75% quartile values of 640 and 2076 FU, respectively; there was one extreme outlier. A cutoff of 640 fluorescent units offered 88% sensitivity across fifty cancer samples in the Mayo cohort, which included accurate classification of 6/6 established stage I cancers and 38/44 samples drawn one to five years prior to radiographic appearance. By comparison, the median value across 237 non-cancer samples from the Kentucky cohort was 726 fluorescent units (FU), with 25%/75% quartile values of 461 and 1249 FU, respectively, which is roughly one third lower than measured in the Mayo Clinic sample cohort. A contingency chart (Table 2) shows class prediction in the Kentucky cohort at the predetermined cutoff of 640 FU, and also bears out the effect of inflated cutoff values on sensitivity and specificity that resulted from the discrepancy between the training and testing cohorts. The cutoff of 640 FU accurately classified 102/237 non-lung cancer samples (43%) and 11/19 cancer samples (58%), which included 4/5 stage I lung cancers (80%) and 7/14 occult cancer samples (50%) drawn one to five years prior to radiographic appearance. Class prediction and the temporal relationship of sample draw to cancer diagnosis are summarized in Table 1. Squamous and adenocarcinoma histologies were both represented among the true positives; there was nothing uniquely apparent about the false negative samples. Other cancers accounted for 13/135 false positive measures (Table 1). Six of the seven independently diagnosed non-thoracic malignancies in the KY cohort measured positively in one or more annual samples. The single highest value came from a subject lost to follow-up after prevalence screening who was diagnosed with extranodal marginal zone B-cell lymphoma (MALT) five years after enrollment. Benign intrathoracic findings were common to subjects with false positive and true negative measures. The majority of false positives represented persistent elevations across serial screening cycles. Among the 130 false positive samples (>640 FU) in subjects with at least two annual samples, only six (4.6%) were singular events within the series of two or more annual measures.

Discussion

The primary objectives were to confirm the principles and precepts of autoantibody profiling and assess the potential of an autoantibody profile to increase the efficiency and diagnostic accuracy of screening CT. Samples from the Mayo Clinic CT screening trial were used to define the range and distribution of a composite measure within a screening population, and to assign a cutoff value that would allow maximum sensitivity for lung cancers at and below the detectable limits of CT scanning. Distribution measures and relative cutoffs for cancer detection were tested in an independent screening cohort from the 5th district of Kentucky. A cutoff set on the lower quartile of the 180 non-cancer controls in the Mayo cohort provided reliable detection of established stage I cancers and the capacity to detect a percentage of incidence cancers prior to radiographic appearance in both cohorts.
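For reference, the sensitivity and specificity figures quoted above follow directly from the contingency counts reported for the Kentucky cohort (11/19 cancer samples above the cutoff, 102/237 non-lung-cancer samples below it). A minimal sketch of that arithmetic:

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity and specificity from the counts of a 2x2 contingency table."""
    return tp / (tp + fn), tn / (tn + fp)

# Kentucky cohort at the 640 FU cutoff: 11 true positives, 8 false negatives,
# 102 true negatives, 135 false positives (237 - 102).
sensitivity, specificity = sens_spec(tp=11, fn=8, tn=102, fp=135)
print(f"sensitivity = {sensitivity:.0%}, specificity = {specificity:.0%}")  # 58%, 43%
```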
The observed frequency of serially positive and serially negative values across annual repeats in the Kentucky screening cohort suggests that autoantibody levels have a specific biologic basis even when there is no clinically apparent significance to the measure. The assay does not appear specific for lung cancer, although the variety of non-thoracic malignancies precludes any conclusion about histologic specificity. Inflated cutoff values that resulted from the notable discrepancy in the quartile distributions between the two cohorts skewed the dynamic range towards specificity in the Kentucky cohort. Although demographics, differences in the eligibility criteria of the two studies, and numerous independent clinical variables could account for this discrepancy, neither cohort is adequately sized for multivariable stratification. Conversely, the observed differences in two independent but uniformly collected, moderately large and relatively comparable sample sets point strongly to sample age, processing, handling and/or storage as a source of pre-analytical error. Specifically, distribution analysis and assignment of cutoff values based on archived samples from two high-risk cohorts seem likely to have identified a biological effect that might not have been recognized with alternate study designs. Despite the presumption that autoantibodies are resilient biomarkers, there is a paucity of data on the consistency of autoantibody measures under various storage conditions and durations. Albeit limited, the literature indicates serum antibody levels increase in cryopreserved samples over years of storage, possibly related to antigen-antibody complex dissociation and protein degradation [19,20]. Importantly, the current data show how the validation process can be encumbered by variables unique to archived sample sets, which must be considered when transitioning from laboratory-based analysis to implementation in population-based applications. Even when given allowance for quantifiable pre-analytical error and the effect of inflated cutoff values on predictive accuracy in the validation set, the data discourage more advanced validation. The appeal of detecting occult disease with a lead-time advantage over CT scanning is tempered by excessive false negative rates, and certainly restricts this assay's utility in selecting individuals that most warrant serial imaging [5]. Interpretation of positive measures is further confounded by the apparent lack of specificity for thoracic malignancy. Provisional assessment of the small number of radiographically detectable cancers in post hoc analysis approximates that of autoantibody profiles independently validated by other groups testing for established cancers [21][22][23]. If by extension we assume the best achievable sensitivity for stage I cancer is 80%, with a corresponding specificity of 40%, expanding our analysis to sample sets with a larger number of established cancers does not seem warranted. Also similar to other assays in the literature, a provisional sensitivity of 40% for established disease corresponds to a specificity >90% [21][22][23][24]. Adjusting the cutoff for high specificity seems only to deviate further from a conventional screening paradigm. If used to further stratify cases by probability of cancer, however, a cutoff that favors specificity could mitigate inter-reader variability and reduce the number of false negative readings on screening CT scans [25,26].
A highly specific assay might also help discriminate benign from malignant nodules identified during screening, even though predictive value will be compromised by the promiscuity of the assay for both occult and radiographically apparent disease [27]. In summary, this report does not draw conclusions about future utility of this approach, but this validation study does not seem to support use of this assay as a primary population-based screening tool. Combining additional investigation with knowledge of this assay's performance may identify other logical areas for autoantibody profiling in lung cancer diagnosis and management.
The Globular Cluster Systems of NGC 1400 and NGC 1407

The two brightest elliptical galaxies in the Eridanus A group, NGC 1400 and NGC 1407, have been observed in both the Washington T_1 and Kron-Cousins I filters to obtain photometry of their globular cluster systems (GCSs). This group of galaxies is of particular interest due to its exceptionally high M/L value, previously estimated at ~3000h, making this cluster highly dark-matter-dominated. NGC 1400's radial velocity (549 km/s) is extremely low compared to that of the central galaxy of Eridanus A (NGC 1407 with $v_\odot$ = 1766 km/s) and the other members of the system, suggesting that it is a foreground galaxy projected by chance onto the cluster. Using the shapes of the globular cluster luminosity functions, however, we derive distances of 17.6 +/- 3.1 Mpc to NGC 1407 and 25.4 +/- 7.0 Mpc to NGC 1400. These results support earlier conclusions that NGC 1400 is at the distance of Eridanus A and therefore has a large peculiar velocity. Specific frequencies are also derived for these galaxies, yielding values of S_N = 4.0 +/- 1.3 for NGC 1407 and S_N = 5.2 +/- 2.0 for NGC 1400. In this and other respects, these two galaxies have GCSs which are consistent with those observed in other galaxies.

Introduction

NGC 1400, a bright E0/S0 galaxy located in a large sub-condensation of the Eridanus cluster, has recently drawn attention to itself and its curious environment. The southern Eridanus group of galaxies extends between 3h 15m ≲ R.A. ≲ 4h and −26° ≲ Dec. ≲ −15°, and exhibits a large degree of internal sub-clustering (see Figure 1 from Willmer et al. 1989). Its most concentrated clump of galaxies is Eridanus A (3h 40m, −19°), which includes roughly 50 galaxies within a radius of ∼0.8°. At an estimated distance of 16.4 Mpc (Tonry 1991), the large central E0 galaxy of the Eridanus A sub-cluster (NGC 1407) has a heliocentric velocity typical of the majority of the members of the group. NGC 1400, however, reveals an anomalously low redshift (549 km/s) compared with those of its neighbouring galaxies, causing some uncertainty as to its true distance. Table 1 lists the members of Eridanus A for which heliocentric velocities have been determined. The mean radial velocity of these galaxies is 1643 ± 95 km/s, or 1765 ± 93 km/s if NGC 1400 is not considered a member. This straightforward question as to the true distance of a potential Eridanus A member has resulted in what may prove to be an extraordinary find: the sub-cluster has an abnormally high M/L value. Of the 10 Eridanus A galaxies listed in Table 1, the two early-type galaxies NGC 1407 and NGC 1400 account for nearly 80% of the light. Adding the blue luminosities of the Eridanus galaxies within 0.8° of NGC 1407 (including NGC 1400) and calculating the mean of four virial mass estimators, Gould (1993) finds M/L ∼ 3125h. This value is anomalously large; typically, groups or clusters of galaxies have M/L values of several hundred, making Eridanus A one of the darkest clusters known. Considering only the virial theorem mass, the exclusion of NGC 1400 from the cluster mass calculation reduces M/L by roughly a factor of 2. Consequently, even if NGC 1400 has not virialized within the cluster, Eridanus A's M/L is still excessively high. If NGC 1400 is at the distance of Eridanus A, it may yet have originated from some other location, giving rise to a peculiar motion which is not due to the mass concentration of the sub-cluster.
Since there are no other nearby mass concentrations large enough to account for NGC 1400's motion, this possibility is unlikely (see Gould 1993). Therefore, if NGC 1400 is at the distance of Eridanus A, it is likely to be dynamically associated with the sub-cluster. Is NGC 1400 indeed at the distance of Eridanus A, or is it instead a foreground galaxy mistakenly associated with the group due to projection effects? Five independent studies have attempted to determine whether NGC 1400 is at the distance of Eridanus A. The results of four of these studies are summarized in Table 2 (see discussion in Gould 1993 and references therein). One additional piece of evidence presented by Gould is the globular cluster luminosity function (GCLF), credited as showing that NGC 1400 and NGC 1407 are at the same distance. However, these GCLFs are secondary results obtained by Tonry (1991) during the surface-brightness-fluctuation (SBF) analysis, and no rigorous investigation of the globular cluster systems of the two bright galaxies in Eridanus A has yet been performed. We therefore elect to use the globular cluster luminosity functions of the two brightest ellipticals in Eridanus A, NGC 1400 and the central galaxy NGC 1407, to provide new estimates of their distances. In §2 and §3 we discuss the observations and data reduction, while in §4 we present the GCLFs and distance determinations. In addition, the inferred globular cluster radial profiles and scaled population sizes of these two galaxies are investigated in §5 and §6 respectively, in search of any signature of the peculiar environment in which they are found. Observations The data for this study of the globular cluster systems (GCSs) of NGC 1400 and NGC 1407 were acquired as part of two separate observing runs. The first set of observations was obtained on the nights of November 14 and 16, 1993, with the Cerro Tololo Inter-American Observatory (CTIO) 4-m telescope. Use of the Tek 2048 × 2048 CCD camera at prime focus with an image scale of 0.″46 per pixel yielded an image size of 15.′7 × 15.′7. The readout noise was 6 electrons/pixel, and the gain was set to 3.2 electrons per ADU. A total of three images, each one including both NGC 1400 and NGC 1407, were taken using the Washington T1 filter (Canterna 1976; Harris & Canterna 1977). The seeing was mediocre at best, ranging from 1.″5 to 2.″0 over the two nights. The second set of data was collected on January 2 and 3, 1995, with the Canada-France-Hawaii 3.6-m telescope (CFHT). The Loral3 detector (2048 × 2048) was used at a nominal gain of 1.45 electrons/ADU and with a readout noise of 9 electrons/pixel. This detector, mounted behind the re-imaging optics of the MOS instrument, yielded an image scale of 0.″32 per pixel and a frame size of 10.′9 × 10.′9. Six images, three each of NGC 1407 and NGC 1400, were obtained using the Kron-Cousins I-band filter. The seeing varied from 1.″1 to 1.″9 over the two nights. A log of the observations is presented in Table 3. In addition to the program fields, standard star fields, dome flatfields and bias frames were also obtained. Data Reduction Preprocessing of the raw frames included bias subtraction and division by a mean dome flatfield exposure. Frames taken in comparable seeing were combined to form composite images with improved signal-to-noise.
Table 1: Galaxies of Eridanus A with known velocities (Willmer et al. 1989).
The resultant composite images with substantially different seeing were reduced individually using the standard Sun/Unix implementation of IRAF (Image Reduction and Analysis Facility, distributed by the National Optical Astronomy Observatories, which is operated by AURA under contract with the NSF). Figure 1 shows an example of a preprocessed frame in the T1 filter. The data reduction was largely automated with the aid of an IRAF script and the DAOPHOT subroutines (Stetson 1987). A median filtering procedure was applied in order to create a model of the diffuse galaxy light which was then subtracted from the original image to better reveal the underlying globular cluster system. The fitting of a model stellar profile (a "point spread function", or PSF) was carried out upon the filtered image to yield the uncalibrated photometry list. The fraction of objects successfully detected in each frame as a function of magnitude is known as the completeness function (Harris 1990). Using the IRAF ADDSTAR task, a total of 10^3 artificial stars of known magnitudes were added to each frame in 10 trials, thus increasing the population of objects in an individual frame by roughly 20% per trial. In order to compensate for the variation in object detection with radial distance from the inner galaxy regions, separate completeness functions were determined for a series of annuli centred on the galaxy. The uncertainties associated with the computed completeness function (f) are derived assuming a binomial distribution, σ_f = [f(1 − f)/n_add]^1/2, where n_add refers to the number of artificial stars added to a given bin (Bolte 1989). The detection limits (50% recovery levels) for the program frames are listed in Table 4. Using the known magnitudes of the artificial stars, we found that the systematic errors in the recovery process are negligible. An estimate of the internal uncertainty as derived from the scatter of the artificial stars was within 0.01 ≲ σ ≲ 0.05 for all frames over the magnitude range of interest. For galaxies as distant as NGC 1400 and NGC 1407, individual globular clusters are indistinguishable from stars under typical seeing conditions. A galaxy's GCS reveals itself as a statistical overabundance of faint, star-like features concentrated around the galaxy. An effective means of identifying non-stellar objects employs the CLASSIFY routine (Harris 1990). This routine calculates radial moments using weighted intensity sums around each object (see Harris et al. 1991 and references therein). We chose to use the effective radius parameter r−2 as a discriminator: an intensity-weighted radial moment computed from r, the radial distance of the jth pixel from the object centroid, and Ij, the intensity of the jth pixel after subtraction of the local sky value. The summations were performed over those pixels within a maximum radius of 10 pixels which had intensities greater than the 3.5σ detection threshold. For objects of a given magnitude, non-stellar objects such as faint background galaxies tend to exhibit more extended wings and faint cores, thus deviating towards larger values of r−2. Upper limits were chosen such that roughly 95% of the artificial stars were correctly classified. In total, between 20% and 40% of the detected objects in each frame were identified as non-stellar and were removed from each photometry list. Photometric calibrations were performed in the standard fashion.
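To make the artificial-star bookkeeping concrete, the following minimal Python sketch (array names and the bin grid are illustrative, not the original reduction script) returns the recovery fraction f per magnitude bin together with the binomial uncertainty [f(1 − f)/n_add]^1/2 used above.

```python
import numpy as np

def completeness(mag_added, recovered, bin_edges):
    """Recovery fraction and binomial error per magnitude bin.

    mag_added : magnitudes assigned to the artificial stars
    recovered : boolean array, True where the artificial star was re-detected
    bin_edges : magnitude bin edges (e.g. steps of 0.4 mag)
    """
    f = np.full(len(bin_edges) - 1, np.nan)
    f_err = np.full(len(bin_edges) - 1, np.nan)
    for k, (lo, hi) in enumerate(zip(bin_edges[:-1], bin_edges[1:])):
        in_bin = (mag_added >= lo) & (mag_added < hi)
        n_add = in_bin.sum()
        if n_add == 0:
            continue                        # no artificial stars fell in this bin
        f[k] = recovered[in_bin].sum() / n_add
        f_err[k] = np.sqrt(f[k] * (1.0 - f[k]) / n_add)
    return f, f_err
```

The 50% recovery levels of the kind quoted in Table 4 then correspond simply to the magnitude at which f first drops below 0.5.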
Fitting of transformation equations to the photometric magnitudes from short exposures of standard star fields (Geisler 1996; Porter) observed during the same nights as the program fields yielded relations between the instrumental magnitudes t1(ap) and i(ap) and the known magnitudes T1 and I of the standards, including colour terms in C − T1 and V − I. Since only T1 and I frames were obtained for the program data, the colour indices C − T1 and V − I were set to 1.5 and 1.0 respectively, typical values for globular clusters. The rms residuals for the transformations are ±0.05 in T1 and ±0.04 in I. Determining the GCLFs The observed globular cluster luminosity functions for NGC 1400 and NGC 1407 were obtained by binning the objects in the final photometry list into 0.4 mag intervals. We subtract the local sky LF (containing field objects and none of the GCS) from the observed LF (comprising both GCs and field objects) to reveal the GCLF for the target galaxy. No separate background frames were observed, hence the background density of objects was estimated from outer regions of the target galaxy frames where the GC density has dropped to zero (as determined from the radial profiles presented in the next section). The galaxy LF and that for the background region were divided by their respective completeness corrections in order to compensate for increasing detection incompleteness at fainter magnitudes. Subtraction of the corrected background LF from the galaxy LF yields the globular cluster LF for each frame. The error bars on the luminosity distributions reflect the uncertainty on the inferred number of objects per magnitude bin (Bolte 1989). Since the completeness (f) is the ratio of the observed number of objects (n_obs) to the number inferred in a magnitude bin (n), Eq. 1 allows this uncertainty to be rewritten in terms of the known quantities for each bin. Once the final cluster luminosity distributions of each data frame were calculated, the observed GCLFs corresponding to the same galaxy and filter were averaged, weighted by the uncertainty on the number counts. Unfortunately, for NGC 1400, the CTIO T1 frames in best seeing were not long exposures (see Table 3), while the deeper exposures were in poorer seeing than enjoyed at CFHT. For these reasons the NGC 1400 GCLF was not well delineated in T1; we chose to work with the I observations exclusively. The final background-subtracted, completeness-corrected and averaged GCLFs in T1 and I for NGC 1407 and in I for NGC 1400 are presented in Figures 2, 3 and 4. The globular cluster luminosity function is defined as the number of clusters per unit magnitude (m) and can be described to first order by a Gaussian function (Eq. 5), N(m) = A exp[−(m − m_0)^2 / 2σ^2], where A is a normalization factor, m_0 represents the turnover (peak) magnitude and σ is the dispersion (Hanes 1977). To fit the Gaussian function of Eq. 5, a nonlinear least-squares fitting procedure was applied. Since the data do not extend past the turnover magnitude, it was not possible to leave all three parameters (A, m_0, σ) unconstrained during the fitting. Consequently, a series of values of 1.0 ≲ σ ≲ 1.5 were adopted and held fixed while the scale factor and turnover magnitude were permitted to vary. These values of σ span the range typically found for GCLFs (Harris 1991). The best-fit results are shown in Tables 5, 6 and 7. Note that the uncertainties quoted here reflect the formal errors associated with the least-squares fit.
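A minimal sketch of the constrained fit just described, assuming the background-subtracted, completeness-corrected counts per 0.4 mag bin and their errors are already available as arrays (names are illustrative). The dispersion is held fixed and only the normalization and turnover magnitude are varied, as in the fits of Tables 5-7.

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_gclf(mag, counts, count_err, sigma_fixed=1.2):
    """Fit N(m) = A * exp(-(m - m0)**2 / (2*sigma**2)) with sigma held fixed."""
    def gauss(m, A, m0):
        return A * np.exp(-((m - m0) ** 2) / (2.0 * sigma_fixed ** 2))

    # Start from the highest observed bin and a turnover one magnitude fainter.
    p0 = [counts.max(), mag[np.argmax(counts)] + 1.0]
    popt, pcov = curve_fit(gauss, mag, counts, p0=p0,
                           sigma=count_err, absolute_sigma=True)
    (A, m0), (A_err, m0_err) = popt, np.sqrt(np.diag(pcov))
    return A, A_err, m0, m0_err
```

Repeating the fit over a grid of fixed σ values between 1.0 and 1.5 reproduces the kind of sensitivity test described in the text.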
Distance Determinations Comparing the observed brightness distributions for the globular cluster systems of NGC 1400 and NGC 1407 with those known from local calibrators, we can calculate distances to these galaxies. With an adopted absolute turnover magnitude of M_V^0 = −7.27 ± 0.23 (a mean value for the 10 largest galaxies in Table 2 of Harris 1991) and assuming σ = 1.2 for the GCLF shapes, we estimate distances of 17.6 ± 3.1 Mpc to NGC 1407 and 25.4 ± 7.0 Mpc to NGC 1400 (Table 8). Note that the turnover magnitudes have been corrected for foreground absorption. For NGC 1407, we adopted galactic extinction corrections of A_T1 = 0.11 and A_I = 0.06, based on a value of A_B = 0.17 (RC3). NGC 1400's value of A_B = 0.14 yielded corrections of A_T1 = 0.09 and A_I = 0.06 for this galaxy. The distances we have calculated in Table 8 are inconsistent with the hypothesis that NGC 1400 is a foreground galaxy roughly 3 times closer than NGC 1407, as implied by its anomalously low recessional velocity. Further experimentation shows that the choice of dispersion (σ) has little bearing on the relative distances obtained for these galaxies. The distance ratios fall within the range d(N1400)/d(N1407) = 1.31 ± 0.04 regardless of adopted σ, suggesting that NGC 1400 is in the background with respect to NGC 1407. Adopting the worst-case scenario in which σ = 1.5 for NGC 1407 and σ = 1.0 for NGC 1400, we find that d(N1400)/d(N1407) = 0.63, still placing NGC 1400 considerably further than the distance implied by its low recessional velocity. From this we may again conclude that, within the reasonable range of σ, our observations of the GCS of NGC 1400 are not consistent with it being a factor of 3 closer than NGC 1407. If NGC 1400 were at a distance of 5.49 h^−1 Mpc, the turnover magnitude of the globular cluster luminosity function would be expected to appear at T1^0 = 21.6 and I^0 = 21.0, assuming h = 0.8. With NGC 1400 GCS luminosity distribution data down to completeness limits of T1 = 22.6 and I = 22.4, our GCLFs should thus extend well beyond the turnover in such a case. Since the observed GCLFs for this galaxy do not show any evidence of having reached the turnover magnitude, this reaffirms our conclusion that NGC 1400 is not a foreground galaxy one third the distance of Eridanus A. Cluster Radial Profiles The surface density distribution of GCs plotted as a function of galactocentric radius reveals the projection of the spatial structure of the GCS. The annular bins used to construct the profiles were 75 pixels wide and concentric on the galaxy centroids. The number counts in each radial bin, corrected for completeness and divided by the total area of the annulus sampled, were taken to represent the surface density at the geometric mean annular radius (r = √(r_in r_out)). Subtracting the local background density (σ_bgd), found by adopting the mean value of the surface density beyond the radius at which the radial profile is no longer decaying, we obtain the distribution of the GC population alone. Non-linear least-squares fits of the following relations were performed on the radial profiles: log σ_cl(r) = a − b r^1/4 and log σ_cl(r) = c + d log r. The first model profile is a form of the empirical de Vaucouleurs law, and the second represents a scale-free power law (Harris 1986). The curves superimposed upon the radial profiles shown in Figures 5 to 8 represent the best fits to the data with the parameters and formal errors listed in Table 9.
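As an illustration of the scale-free power-law fit, the sketch below performs the fit in log-log space, weighting by the propagated errors on log σ_cl. This is a generic weighted linear fit under assumed array names, not the fitting code actually used for Table 9.

```python
import numpy as np

def powerlaw_slope(radius, density, density_err):
    """Weighted fit of log10(sigma_cl) = c + d*log10(r); the slope d is the
    profile exponent alpha discussed in the text."""
    keep = density > 0                      # cannot take the log of non-positive densities
    x = np.log10(radius[keep])
    y = np.log10(density[keep])
    sig_y = density_err[keep] / (density[keep] * np.log(10.0))  # error on log10(density)
    (d, c), cov = np.polyfit(x, y, 1, w=1.0 / sig_y, cov=True)
    return d, np.sqrt(cov[0, 0]), c         # slope, formal slope error, intercept
```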
In all of the radial profile plots, we can clearly identify the presence of a centrally concentrated globular cluster system. Surface intensity profiles of the galaxy halo light are also provided on the GC radial profile plots. These halo profiles were determined by fitting elliptical isophotes over the galaxy region, yielding the relative intensity of light as a function of radius. After subtracting the local sky level, the halo intensity was arbitrarily scaled to the amplitude of the GCS radial distribution for easy comparison. Specific Frequencies The specific frequency (S_N) of a galaxy is defined as the number of globular clusters per unit halo light, normalized to an absolute magnitude of M_V = −15: S_N = N_t 10^0.4(M_V + 15), where the total population of GCs surrounding a galaxy is given by N_t (Harris & van den Bergh 1981). This parameter conveniently reflects the size of the GC population, independent of galaxy luminosity. To calculate specific frequency we must first estimate the total number of clusters which make up the GCS of the galaxy. We use the radial density profiles from §5 to count the total number of clusters surrounding the galaxy down to the limiting magnitude (N_obs). We divide this result by the fraction of the GCLF which was observed to infer the total number of clusters in the system.
Table 8: Galaxy distances assuming σ = 1.2, corrected for absorption.
To define the shapes of the GCLFs, we adopt a Gaussian form (Eq. 5) assuming σ = 1.2 and the best-fit parameters from Tables 5, 6 and 7. Absolute V magnitudes are also required in order to determine S_N. According to RC3, NGC 1407 has a corrected total blue magnitude B_T = 10.5 ± 0.2 and colour (B − V)_T = 0.97 ± 0.01, yielding V_T = 9.5 ± 0.2. For NGC 1400, the RC3 values are B_T = 11.87 ± 0.13 and (B − V)_T = 0.92 ± 0.01, giving V_T = 10.95 ± 0.13 for this galaxy. To translate V_T into absolute magnitude M_V we must adopt distances to the galaxies in question. For both galaxies, we assume a distance of 20.5 ± 1.2 Mpc based on the mean recessional velocity of the Eridanus A galaxies and H_0 = 80 km s^−1 Mpc^−1. The results of the specific frequency calculations appear in Table 10. The specific frequencies calculated for NGC 1407 in the two filters are in agreement within their respective uncertainties, and we therefore adopt a mean value of S_N = 4.0 ± 1.3 for NGC 1407 along with that of S_N = 5.2 ± 2.0 for NGC 1400. The Question of Distance The primary motivation of this study was to determine whether NGC 1400 is at a distance comparable to that of NGC 1407 and the rest of Eridanus A, or if it is instead a foreground galaxy as implied by its anomalously low recessional velocity. The globular cluster luminosity functions are not consistent with the foreground placement of NGC 1400, which indicates that this galaxy has a large peculiar velocity with respect to Eridanus A, the origin of which remains unknown. We may now add globular cluster systems to the evidence that NGC 1400 does indeed lie at roughly the distance of Eridanus A, and should be counted as a member of the cluster. The distances we have calculated assuming σ = 1.2 for the GCLF are slightly higher but still within two standard deviations of those derived by Tonry (1991) of 16.3 ± 1.0 Mpc (NGC 1400) and 16.4 ± 1.0 Mpc (NGC 1407), and are well within the range of potential distances previously quoted in the literature (see Table 2).
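The specific-frequency arithmetic above reduces to a few lines. The sketch below assumes a Gaussian GCLF with the adopted σ and turnover, a completeness-corrected cluster count down to the limiting magnitude, and the adopted distance; function and variable names are illustrative rather than taken from the paper.

```python
import numpy as np
from scipy.stats import norm

def absolute_magnitude(V_total, distance_mpc):
    """M_V = V_T - 5*log10(d / 10 pc)."""
    return V_total - 5.0 * np.log10(distance_mpc * 1.0e6 / 10.0)

def specific_frequency(n_obs, m_limit, m0, sigma, V_total, distance_mpc):
    """S_N = N_t * 10**(0.4*(M_V + 15)).

    N_t is inferred by dividing the observed count by the fraction of a
    Gaussian GCLF (turnover m0, dispersion sigma) brighter than m_limit.
    """
    frac_observed = norm.cdf(m_limit, loc=m0, scale=sigma)
    n_total = n_obs / frac_observed
    M_V = absolute_magnitude(V_total, distance_mpc)
    return n_total * 10.0 ** (0.4 * (M_V + 15.0))
```

As an illustrative check, V_T = 9.5 at 20.5 Mpc gives M_V ≈ −22.1, so a total population of a few thousand clusters corresponds to S_N of a few, the regime found for both galaxies.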
GCS Shapes and Specific Frequencies The spatial profiles of the globular cluster systems of NGC 1400 and NGC 1407 shown in §5 seem rather unexceptional in shape. The radial distributions in Figures 5 and 6 reveal that the projected density of NGC 1407's GCS falls off with the same slope as the halo light intensity for this central elliptical galaxy of Eridanus A. Figures 7 and 8 indicate that the light profile for the NGC 1400 halo may be slightly steeper than the cluster density distribution, but this effect is not uncommon. It is somewhat unusual for GCS radial profiles to follow the halo light profile as closely as seen for NGC 1407; generally, the cluster surface density is more distended (Harris 1991). According to Merritt (1983, 1984), the cluster population and halo structure of a system dominated by dark matter will remain largely unchanged (to first order), not significantly altered by galaxy interaction processes. We are therefore likely seeing the cluster distributions as they were originally formed, with GCSs which are roughly as centrally concentrated as the halo stars.
Fig. 5.-The background-subtracted radial density profile of the NGC 1407 GCS (in T1) with de Vaucouleurs and scale-free power law fits provided (slopes given by b and d respectively). Also shown is the galaxy halo intensity profile. The background density level was found to be σ_bgd = 5.9 ± 0.3.
Fig. 6.-As in Figure 5. The background density was σ_bgd = 9.8 ± 0.6.
There exists an apparent relationship between the integrated V magnitude of a galaxy and the shape of its GCS density profile (Harris 1986). The parameter generally used to describe the shape of the radial distribution is the slope of the logarithmic profile. This parameter was given the label "d" in the power-law fitting of Section 5, but is commonly referred to as α. A plot of the slope parameter as a function of galaxy magnitude is shown in Figure 9, with data and references presented in Table 11. A linear fit to the data yields a relation consistent with that determined in Harris (1993) (see the correction appearing in Kaisler et al. 1996). The two galaxies in this study follow the overall trend whereby higher-luminosity galaxies exhibit more extended globular cluster systems. The measured slopes, α(NGC 1407) = −1.99 ± 0.36 and α(NGC 1400) = −1.96 ± 0.56, compare reasonably well with the predicted values of α = −1.53 and −1.95 respectively. If we do not include the data points for the dwarf ellipticals or the spiral galaxies, a linear regression yields a very similar relation; the omission does not significantly alter the fit, and both of the linear fits are shown in Figure 9. The specific frequencies calculated for NGC 1407 (S_N = 4.0 ± 1.3) and NGC 1400 (S_N = 5.2 ± 2.0) are not unusual. Normal elliptical galaxies generally have S_N ∼ 2-5, with variations from this range most likely attributable to differences in galaxy environment and GC formation mechanisms.
Table 11: Correlation between radial density and galaxy luminosity.
Fig. 7.-The NGC 1400 GCS radial density profile (in T1), as in Figure 5. The background density was σ_bgd = 5.7 ± 0.4.
GC Colours Although the T1 and I filters do not provide a sufficient baseline to obtain cluster metallicities, we can estimate mean colours for the globulars surrounding NGC 1407 and NGC 1400. Using the regions of overlap between the T1 and I images, we determine T1 − I colours for the GCs detected in both filters.
The colour distributions are shown in Figure 10; the lack of a well-defined peak in the NGC 1400 GCS colour distribution is likely due to the poor quality of the T1 photometry for this galaxy, with its smaller cluster population. NGC 1407's cluster population is found to have a mean of T1 − I = 0.53 and a median value of 0.54, while for NGC 1400 we obtain mean and median colours of T1 − I = 0.54 and 0.55, respectively. These values are consistent with typical GC colours observed in other galaxies, a finding which confirms the identification of the GCSs. Conclusions The principal results of this study of the globular cluster systems of NGC 1400 and NGC 1407 can be summed up in the following points: 1. From the shapes of the globular cluster luminosity functions, we determine distances to these galaxies which place NGC 1400 at or beyond the distance of the Eridanus A group. This finding is in agreement with conclusions made using other methods previously cited in the literature. 2. The shapes of the GCS radial density profiles and the specific frequencies of the two systems reveal no obvious abnormalities. This implies that, if Eridanus A is as dominated by dark matter as its estimated M/L value indicates, no anomalies are evident from the GC spatial distributions and population sizes of its two largest galaxies. With a distance comparable to that of NGC 1407, NGC 1400 must have a high peculiar velocity in order to account for its exceptionally low radial velocity. Gould (1993) demonstrates that if the distance of NGC 1400 is consistent with that of Eridanus A, it must be bound to the sub-cluster due to a lack of other nearby mass concentrations large enough to generate its high peculiar motion. The exact cause for the large peculiar velocity remains as yet unknown. It is possible that NGC 1400 has a large component of its velocity moving it towards the core of Eridanus A (ie: it has a large transverse velocity), reducing its net radial velocity. Perhaps the velocity dispersion of the cluster has been severely underestimated since we only have velocity data for 10 of the 50 or so members, and that by some coincidence the other galaxies in the sample have significant velocity components perpendicular to our line of sight. If this is the case, NGC 1400's high peculiar velocity might not be particularly anomalous. The second enigma surrounding the Eridanus A subcluster is its abnormally high M/L ratio. It is possible that it is merely a dark cluster -many such clusters could exist which have so far avoided detection. More expansive surveys at higher limiting magnitudes in combination with reliable cluster-finding algorithms may reveal the presence of more dark clusters. The question remains: if indeed this cluster contains a great deal of dark matter, where did it come from, and why is NGC 1400 the only member (so far) to show such a high peculiar velocity? Furthermore, why is there no evidence of strange effects on the radial distributions and population sizes of the member galaxy GCSs, for example, given this unusual environment? A more extensive analysis of the GCS of the two Eridanus A galaxies NGC 1407 and NGC 1400 could be provided by additional deeper multicolour photometry as well as spectroscopic observations. This data may contribute to a better understanding of the nature of the galaxies and their environment. There is an obvious lack of redshift measurements for the majority of the Eridanus A galaxies (see Table 4 of Ferguson & Sandage (1990) for the complete membership list). 
This makes it difficult to determine an accurate M/L ratio for the cluster, as well as to derive any conclusions regarding the dynamical processes at work within Eridanus A. A more complete database of member galaxy velocities may shed some light on this dark cluster.
2014-10-01T00:00:00.000Z
1996-11-06T00:00:00.000
{ "year": 1996, "sha1": "b05c95f6810f0a6a08eb6e2f5d4f09e2b98ca9ca", "oa_license": null, "oa_url": "http://arxiv.org/pdf/astro-ph/9611052", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "4e4a4cf45f60292e79bc0612997ad4da1e5cea77", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
118256213
pes2o/s2orc
v3-fos-license
On Invarinat Theory of $\theta$-groups This paper is a contribution to Vinberg's theory of $\theta$-groups, or in other words, to Invariant Theory of periodically graded semisimple Lie algebras. One of our main tools is Springer's theory of regular elements of finite reflection groups INTRODUCTION This paper is a contribution to Vinberg's theory of θ-groups, or in other words, to Invariant Theory of periodically graded semisimple Lie algebras [Vi1], [Vi2]. One of our main tools is Springer's theory of regular elements of finite reflection groups [Sp], with some recent complements by Lehrer and Springer [LS1], [LS2]. The ground field k is algebraically closed and of characteristic zero. Throughout, G is a connected and simply connected semisimple algebraic group, g is its Lie algebra, and Φ is the Cartan-Killing form on g; l = rk g. Int g (resp. Aut g) is the group of inner (resp. all) automorphisms of g; N is the nilpotent cone in g. For x ∈ g, z(x) is the centraliser of x in g. Let g = ⊕ i∈Zm g i be a periodic grading of g and θ the corresponding m th order automorphism of g. Let G 0 denote the connected subgroup of G with Lie algebra g 0 . Invariant Theory of θ-groups deals with orbits and invariants of G 0 acting on g 1 . Its main result is that there is a subspace c ⊂ g 1 and a finite reflection group W (c, θ) in c (the little Weyl group) such that k[g 1 ] G 0 ≃ k[c] W (c,θ) . We say that the grading is N-regular (resp. S-regular) if g 1 contains a regular nilpotent (resp. semisimple) element of g. The grading is locally free if there is x ∈ g 1 such that z(x) ∩ g 0 = {0}. The same terminology also applies to θ. In this paper, we obtain some structural results for gradings with these properties and study interrelations of these properties. Section 1 contains some preliminary material on θ-groups and regular elements. In Section 2, we begin with a dimension formula for semisimple G 0 -orbits in g 1 . We also prove two "uniqueness" theorems. Recall that Int g is the identity component of Aut g, and it operates on Aut g via conjugations. Given m ∈ N, we prove that each connected component of Aut g contains at most one Int g-orbit consisting of automorphisms of order m that are either N-regular or S-regular and locally free. In Section 3, we show that θ-groups corresponding to N-regular gradings enjoy a 1. VINBERG'S θ-GROUPS AND SPRINGER'S REGULAR ELEMENTS Let θ be an automorphism of g, of finite order m. The automorphism of G induced by θ is also denoted by θ. Let ζ be a fixed primitive m th root of unity. Then θ determines a periodic grading g = i∈Zm g i , where g i = {x ∈ g | θ(x) = ζ i x}. Whenever we want to indicate the dependence of the grading on θ, we shall endow 'g i ' with a suitable superscript. If M is a θ-stable subspace, then M i := M ∩ g i . Recall some standard facts on periodic gradings (see [Vi1,§1]): • Φ(g i , g j ) = 0 unless i + j = 0; • Φ is non-degenerate on g i ⊕ g −i (i = 0) and on g 0 . In particular, g 0 is a reductive algebraic Lie algebra and dim g i = dim g −i ; • If x ∈ g i and x = x s + x n is its Jordan decomposition, then x s , x n ∈ g i . Let G 0 be the connected subgroup of G with Lie algebra g 0 . The restriction of the adjoint representation of G to G 0 induces a representation ρ 1 of G 0 on g 1 . The linear group ρ 1 (G 0 ) ⊂ GL(g 1 ) is called a θ-group. The theory of orbits and invariants for θ-groups, which generalizes that for the adjoint representation [Ko] and for the isotropy representation of a symmetric variety [KR], is developed by E.B. 
Vinberg in [Vi1]. A Cartan subspace of g 1 is a maximal commutative subspace consisting of semisimple elements. Let c ⊂ g 1 be a Cartan subspace. Set N(c) 0 = {g ∈ G 0 | Ad (g)c = c} and Z(c) 0 = {g ∈ G 0 | Ad (g)x = x for all x ∈ c}. The group N(c) 0 /Z(c) 0 is said to be the little Weyl group of the graded Lie algebra, denoted W (c, θ). The following is a summary of main results in [Vi1]. Theorem. (i) All Cartan subspaces in g 1 are G 0 -conjugate; Despite its maturity, the theory of θ-groups still has a vexatious gap. A long-standing conjecture formulated in [Po,n.7], to the effect that any θ-group has an analogue of the section constructed by Kostant for the adjoint representation (see [Ko,n. 4]), is still open. (In [KR], such a section was also constructed for the isotropy representation of a symmetric variety. So that the problem concerns the case m ≥ 3.) Kostant's section for the adjoint representation is an instance of a more general phenomenon in Invariant Theory, a so-called Weierstrass section. The reader is referred to [VP,8.8] for the general definition of a Weierstrass section and a number of related results. In my opinion, it is more natural to use term Kostant-Weierstrass sections, or KW-sections in the context of θ-groups. It was shown in [Pa,Cor. 5] that a KW-section exists whenever g 0 is semisimple. Below, we will discuss some aspects of KW-sections in a more general situation. If W be a finite reflection group in a k-vector space V , then v ∈ V is called regular if the stabiliser of v in W is trivial. Let σ be an element of finite order in N GL(V ) (W ). Then σ is called regular (in the sense of Springer) if it has a regular eigenvector. The theory of such elements is developed by Springer in [Sp]; for recent results, see [LS1]. Let f 1 , f 2 , . . . , f l be a set of algebraically independent homogeneous generators of k , with suitable roots of unity ε i . Given a root of unity ζ, we let V (σ, ζ) denote the eigenspace of σ corresponding to the eigenvalue ζ. The following is a sample of Springer's results, see Theorem 6.4 in [Sp]. 1.2 Theorem. Suppose V (σ, ζ) contains a regular vector. Then w ∈ W , then σ and wσ are conjugate by an element of W . We will apply Springer's theory in the context of θ-groups, when V = t is a θ-stable Cartan subalgebra of g, W = W (t) is the Weyl group of t, and σ = θ| t . Obviously, such σ normalises the Weyl group. MISCELLANEOUS RESULTS ON PERIODIC GRADINGS 2.1 Proposition. x] holds for all x ∈ g 1 . Proof. Consider the Kirillov form K x on g. By definition, K x (y, z) = Φ(x, [y, z]). From the invariance of Φ one readily deduces that Ker K x = z(x). Since x ∈ g 1 , we have K x (g i , g j ) = 0 unless i + j = −1. It follows that This already implies (ii). If m is arbitrary, then one obtains a good conclusion only for semisimple elements. Indeed, if z(x) is reductive, then dim z(x) k = dim z(x) −k for all k. Hence dim g k − dim z(x) k does not depend on k. Definition. A periodic grading (or the corresponding automorphism) of g is called Sregular, if g 1 contains a regular semisimple element of g; N-regular, if g 1 contains a regular nilpotent element of g. It is called locally free, if there exists x ∈ g 1 such that z(x) 0 = {0}. Our aim is to prove two conjugacy theorems for periodic gradings. 2.2 Theorem. Let θ 1 , θ 2 be automorphisms of g having the same order and lying in the same connected component of Aut g. Suppose the corresponding periodic gradings are S-regular and locally-free. 
Then θ 1 , θ 2 are conjugated by means of an element of Int g. Recall that x ∈ N is called semiregular, if any semisimple element of the centraliser Z G (x) belong to the centre of G. The corresponding orbit and sl 2 -triple are also called semiregular. The semiregular sl 2 -triples in simple Lie algebras were classified by E.B. Dynkin in 1952. Theorem. Let θ ′ , θ ′′ be automorphisms of g having the same order and lying in the same connected component of Aut g. Suppose there exists a semiregular nilpotent orbit O ∈ g such that O ∩ g ′ 1 = ∅ and O ∩ g ′′ 1 = ∅. Then θ ′ , θ ′′ are conjugated by means of an element of Int g. Proof. In case O is the regular nilpotent orbit, a proof is given in [An]. It goes through in our slightly more general setting. For convenience of the reader, we give it here. N-REGULAR PERIODIC GRADINGS Let O reg be the regular nilpotent orbit in g. Recall that a periodic grading (or automorphism) of g is N-regular, if O reg ∩ g 1 = ∅. Since O reg is semiregular, Theorem 2.3 says that any connected component of Aut g contains at most one Int g-orbit of N-regular automorphisms of a prescribed order. To give a detailed description of the N-regular periodic gradings, some preparatory work is needed. For any γ ∈ Γ(g) := Aut g/Int g, let C γ denote the corresponding connected component of Aut g. The index of (any element of) C γ is the order of γ in Γ(g). The index of µ ∈ Aut g is denoted by ind µ. Thus, ordγ = ind C γ = ind µ for any µ ∈ C γ . Since Int g ≃ G/{centre}, the group Γ(g) acts on k[g] G (or on g/ /G = Spec k[g] G ). Let µ ∈ Aut g be arbitrary. Denote by µ the corresponding (finite order) automorphism of g/ /G. 3.1 Lemma. The action of Γ(g) on g/ /G is effective. In other words, the order of µ equals ind µ. Proof. It is clear that the order of µ divides ind µ. To prove the converse, we have to show that if µ is trivial, then µ is inner. Without loss of generality, one may assume that µ is a semisimple automorphism. Then, by a result of Steinberg [St,Thm. 7.5], there is a Borel subalgebra b ⊂ g and a Cartan subalgebra , the restriction of µ to t is given by an element of W (t). On the other hand, the relation µ(b) = b shows that µ| t permutes somehow the simple roots corresponding to b. It follows that µ acts trivially on t and therefore µ is inner. In the following theorems, we describe N-regular periodic gradings and give some relations for eigenvalues and eigenspaces of θ. (Antonyan). Fix m ∈ N, and consider a connected component C γ ⊂ Aut g. Then C γ contains an N-regular automorphism of order m ⇐⇒ ind C γ divides m. Theorem In other words, if a connected component of Aut g contains elements of order m, then it contains an N-regular automorphism of order m. Proof. If ind C γ does not divide m, then C γ does not contain automorphisms of order m. To prove the converse, we first fix a Borel subalgebra b ⊂ g and a Cartan subalgebra t ⊂ b. Let ∆ be the root system of (g, t). Let Π = {α 1 , . . . , α l } be the set of simple roots such that the roots of b are positive. For each α i ∈ Π, let e i be a nonzero root vector. Recall that the finite group Γ(g) is isomorphic to the symmetry group of the Dynkin diagram of g [VO,4.4]. This means that each C γ contains an automorphism θ γ such that θ γ (t) = t and θ γ (e i ) = c i eγ (i) , i = 1, . . . , l, whereγ is a permutation on {1, . . . , l} and c i ∈ k \ {0}. The permutationγ represents an automorphism of the Dynkin diagram of g and the order ofγ equals ind µ. 
Conjugating θ γ by Ad (t) for a suitable t ∈ T , we can obtain arbitrary coefficients c i . Therefore we may assume without loss of generality that c 1 = · · · = c l = ζ. Then θ = θ γ is N-regular, and of order m. Indeed, e 1 +· · ·+e l is a regular nilpotent element lying in g (θ) 1 . Next, θ m is inner and θ m (e i ) = e i for all i. Hence θ m = id g . Let F 1 , F 2 , . . . , F l be homogeneous algebraically independent generators of k The numbers m 1 . . . , m l are called the exponents of g. Given θ ∈ C γ ⊂ Aut g, we may choose the F i 's so that θ(F i ) = ε i F i (i = 1, . . . , l) for some roots of unity ε i . We shall say that the ε i 's are the factors of θ. Note that the multiset {ε 1 , . . . , ε l } depends only on the connected component of Aut g, containing θ. If t is an arbitrary Cartan subalgebra, then So, the ε i 's are also factors in the sense of Springer [Sp,§ 6]. Given m ∈ N, we shall exploit two sequences indexed by elements of Z m . Set In this way, we obtain the numbers satisfying the relation i∈Zm k 3.4 Theorem. Suppose θ ∈ Aut g is N-regular and of order m. Let {ε 1 , . . . , ε l } be the factors of θ and {e, h, f } a θ-adapted regular sl 2 -triple. Then [Ko,Theorem 6], there exists a basis x 1 , . . . , x l for z(e) such that [h, i . (Another proof for z(h) can be derived from [Sp]. Since h is regular semisimple, z(h) is a θ-stable Cartan subalgebra. Set σ = θ| z(h) . Then σ is a regular element in the sense of Springer. Indeed, σ normalizes the Weyl group N G (z(h))/Z G (z(h)) and h is a regular eigenvector of σ. By [Sp,6.5(i)], the eigenvalues of σ on z(h) are equal to ε −1 i , i = 1, . . . , l.) (iii) It follows from (i) and (ii) that dim z(e) i = l i and dim z(f ) −i = k i . It remains to observe that dim z(e) i = dim z(f ) −i , since Φ is θ-invariant and yields a nondegenerate pairing between z(e) and z(f ). (iv) Since x i is a highest weight vector in R(m i ), it follows from (i) that the eigenvalues of θ in g are So, the problem of computing the required differences becomes purely combinatorial. Notice that each ε i is an m th root of unity, so that each eigenvalue is a power of ζ. Let us calculate separately the contribution of each submodule R(2m j ) to the difference D i := dim g i+1 −dim g i . Usually, two consecutive eigenvectors with eigenvalues ζ i+1 and ζ i occur together in R(2m j ); i.e., this has no affect on the difference in question. The exceptions can only occur near the eigenvalues of the highest and the lowest weight vectors in R(2m j ). Thus, taking the sum over all irreducible a-submodules yields Since G·e is open and dense in N , G 0 ·e is a nilpotent orbit in g 1 of maximal dimension. By Theorem 1.1(v), dim G 0 ·e is also maximal among dimensions of all G 0 -orbits in g 1 . Thus, In the next claim, we regard {0, 1, . . . , m − 1} as a set of representatives for Z m . Corollary Proof. Write the relations of Theorem 3.4(iv) in the form dim Together with the equality m−1 i=0 dim g i = dim g, these form a system of m linear equations with m indeterminates {dim g i }. Utility of N-regularity is explained by the fact that this allows us describe the algebra of invariants k[g 1 ] G 0 and guarantee the existence of a KW-section. Let us briefly recall the last subject. An affine subspace A ⊂ g 1 is called a KW-section if the restriction of π 1 : g 1 → g 1 / /G 0 to A is an isomorphism. By Theorem 1.1(iii), such an A contains a unique nilpotent element. So that A is of the form v + L, where {v} = A ∩ N and L is a linear subspace of g 1 . Theorem. 
Suppose θ is N-regular and of order m. Let {e, h, f } a θ-adapted regular sl 2 -triple. Then (i) The restriction homomorphism k[g] G → k[g 1 ] G 0 is onto. Moreover, k[g 1 ] G 0 is freely generated by the restriction to g 1 of all basic invariants F j such that ζ m j +1 ε j = 1. (ii) e + z(f ) 1 is a KW-section in g 1 . Proof. (i) Choose a numbering of basic invariants so that the relation ζ m i +1 ε i = 1 holds precisely for i ≤ a. Observe that a = k −1 . It is immediate that F i vanishes on g 1 unless i ≤ k −1 . For, is a polynomial algebra in dim c variables, i.e., in our case in k −1 variables.) A standard fact of sl 2 -theory says that z(f ) ⊕ [g, e] = g. By a famous result of Kostant [Ko], (dF i ) e are linearly independent as elements of g * and their images in z(f ) * form a basis for z(f ) * . Therefore, restricting the differentials of basic invariants to z(f ) 1 , one obtains a basis for z(f ) * 1 . The preceding exposition shows (dF i ) e = (dF i ) e | g 1 = 0 unless 1 ≤ i ≤ k −1 . On the other hand, it follows from Theorem 3.4(i) that dim z(f ) 1 = k −1 . Hence (dF i ) e (1 ≤ i ≤ k −1 ) are linearly independent andF i are algebraically independent. Furthermore, the linear independence of differentials implies that eachF i is a member of minimal generating system for k[g 1 ] G 0 , since e lies in the zero locus of all homogeneous G 0 -invariants of positive degree. This completes the proof. (ii) It is a standard consequence of the fact that {F i } generate k[g 1 ] G 0 and (dF i ) e (1 ≤ i ≤ k −1 ) are linearly independent, see e.g. [Pa,§ 3]. Remark. If θ is inner, then ε i = 1 for all i, and the previous exposition simplifies considerably. In this case, we also have k i = #{j | m j ≡ i (mod m)}. Corollary. Suppose θ is inner and N-regular. Then k[g 1 ] G 0 is freely generated by those F i | g 1 whose degree is divisible my m. APPLICATIONS AND EXAMPLES In this section, we demonstrate some applications of Springer's theory of regular elements to θ-groups. Maintain the notation of the previous section. In particular, to any θ ∈ Aut g, of order m, we associate the factors ε i (i = 1, . . . , l), which depend only on the connected component of Aut g that contains θ, and then the numbers k i = k i (θ, m) (i ∈ Z m ), which are defined by Eq. (3.3). It is shown in the previous section that the k i 's play a significant rôle in the context of Nregular gradings. Now we show that these numbers also relevant to S-regular gradings. 4.2 Theorem. Let g = i∈Zm g i be an S-regular grading and θ the corresponding automorphism. Then Proof. 1. Let x ∈ g 1 be a regular semisimple element. Set t = z(x). It is a θ-stable Cartan subalgebra of g. Set σ := θ| t . Because σ originates from an automorphism of g, it normalizes W (t), the Weyl group of t. Furthermore x is a regular eigenvector of σ whose eigenvalue is ζ. Thus, σ is a regular element of GL(t) in the sense of Springer, and we conclude from [Sp,6.4(v)] that the eigenvalues of σ are equal to ζ −m i ε −1 i (i = 1, . . . , l). It follows that dim t i = k −i . Since Φ| t is nondegenerate and σ-stable, k −i = k i . It is also clear that t 1 is a Cartan subspace of g 1 . 2. These two relations follow from Proposition 2.1(i) applied to x. 3. As the grading is locally-free, k 0 = dim z(x) 0 = 0. Letθ be an N-regular automorphism of order m that lies in the same connected component of Aut g as θ (cf. Theorem 3.2). Let g = ⊕ igi be the corresponding grading. Let c ⊂g 1 be a Cartan subspace. By Theorem 3.4(v), dim c = k −1 . 
Lett be anyθ-stable Cartan subalgebra containing c. Then, by the definition of a Cartan subspace, we have c =t 1 . Conjugatingθ by a suitable inner automorphism, we may assume that t =t. Setσ =θ| t . Since θθ −1 is inner by the construction, σσ −1 ∈ W (t). Thus, we have the following: σ has finite order, σW (t)σ −1 = W (t), σ = wσ for some w ∈ W (t), and dim t 1 = dimt 1 . Since t 1 contains a regular vector, Theorem 6.4(ii),(iv) from [Sp] applies. It asserts that σ andσ are conjugate by an element of W (t). It follows that dim t i = dimt i for all i, and t 1 contains a regular vector, too. Thus,θ is S-regular and locally free as well. Finally, applying Theorem 2.2 to θ andθ, we conclude that these two are conjugate by an element of Int g. Hence θ is also N-regular. Remark. If θ is not assumed to be locally free, then part (iii) can be false, see example below. Combining Theorem 3.6(ii) and Theorem 4.2(iii), we obtain 4.3 Corollary. If θ is S-regular and locally free, then the corresponding θ-group admits a KW-section. Examples show that N ∩ g 1 , the null-fibre of π 1 , is often reducible. Any KW-section, if it exists, must meet one of the irreducible components of N ∩ g 1 . It turns out, however, that some components are 'good' and some are 'bad' in this sense. It may happen that there is only one irreducible component that can be used for constructing a KW-section. It is worth noting in this regard that, in case θ is involutory, all irreducible components of N ∩ g 1 are 'good', see [KR,Theorem 6]. Example. Let g be a simple Lie algebra of type E 6 . Consider two inner automorphisms θ 1 , θ 2 of g that are defined by the following Kac's diagrams: The reader is referred to [VO,4.7] or [Vi1, § 8] for a thorough treatment of Kac's diagrams of periodic automorphisms. Here we give only partial explanations: • (The conjugacy class of) a periodic inner automorphism of a simple Lie algebra g is represented by the corresponding affine Dynkin diagram, with white and black nodes. • The semisimple part of g 0 is given by the subdiagram consisting of white nodes. • Dimension of the centre of g 0 equals the number of black nodes minus 1. • The order of θ is equal to the sum of those coefficients of the affine Dynkin diagram that correspond to the black nodes. • Each black node represents an irreducible g 0 -submodule of g 1 , so that the number of black nodes is equal to the number of irreducible summands of g 1 . (We do not give here a general recipe for describing the g 0 -module g 1 .) It follows that both automorphisms under consideration have order 4, G 1 has 2 summands: tensor product of simplest representations of all simple factors (dimension 18) plus 2-dimensional representation of A 1 . The weights of k * on these summands, say µ 1 and µ 2 , satisfy the relation 3µ 1 + µ 2 = 0 (in the additive notation). We have dim g 1 and therefore dim G·x i = 72 by Proposition 2.1. Notice that 72 = dim g − rk g. Thus, both θ 1 and θ 2 are S-regular but not locally free. Clearly, these are not conjugate. This proves that the assumption of being locally free cannot be dropped in Theorem 2.2. It is not hard to compute directly that the degrees of basic G (i) 0 -invariants are equal to 8, 12 for θ 1 and 4, 8 for θ 2 . As the degrees of E 6 are 2, 5, 6, 8, 9 and 12, we see that the restriction is not onto. Hence θ 2 is not N-regular. This proves that the assumption of being locally free cannot be dropped in Theorem 4.2(ii). By the way, θ 1 is N-regular. 
Let θ be an arbitrary periodic automorphism and let G 0 : g 1 be the corresponding θgroup, with a Cartan subspace c ⊂ g 1 and the little Weyl group W (c, θ). The isomorphism 1.1(iv) means that G 0 ·x ∩ c = W (c, θ)·x for all x ∈ c. It is not however always true that G·x ∩ c = G 0 ·x ∩ c for all x ∈ c. A similar phenomenon can be seen on the level of Weyl groups, as follows. Let t be a θ-stable Cartan subalgebra such that t 1 = c. Write W for the Weyl group N G (t)/Z G (t). Set W 1 = N W (c)/Z W (c). It is easily seen that W (c, θ) is isomorphic to a subgroup of W 1 (as all Cartan subalgebras of z g (c) are Z G (c)-conjugate), but these two groups can be different in general. We give below a sufficient condition for the equality to hold. Let π : g → g/ /G be the quotient mapping. By [Ko], it is known that, for ξ ∈ g/ /G, the fibre π −1 (ξ) is an irreducible normal complete intersection in g of codimension l. The complement of the dense G-orbit in π −1 (ξ) is of codimension at least 2. Using a result of Richardson [Ri] and an extension of Springer's theory to non-regular elements [LS1], we prove normality of some G-stable cones in g associated with θ-groups. Theorem. Suppose θ satisfies the relation dim g 1 / /G 0 = k −1 . Then, for any Cartan subspace c ⊂ g 1 , we have This variety is irreducible, normal, and Cohen-Macaulay. Furthermore, its ideal in k[g] is generated by the above basic invariants F i , i.e., π −1 (π(c)) is a complete intersection. Proof. Let t be a θ-stable Cartan subalgebra containing c and W the corresponding Weyl group. Set σ = θ| t . By [LS1,5.1], W 1 (which is not necessarily the same as either W σ or W (c, θ)) is a reflection group in c and the functions F i | c with ε i ζ d i = 1 form a set of basic invariants for W 1 (our k −1 is a(d, σ) in [LS1]). In other words, k[c] W 1 is a graded polynomial algebra and the restriction mapping k[t] W → k[c] W 1 is onto. This means that Theorem B in [Ri,§ 5] applies here, and we may conclude that X := π −1 (π(c)) is normal and Cohen-Macaulay. Furthermore, Lemma 5.3 in loc. cit. says that X is irreducible; and the argument in p. 250 in loc. cit shows that the ideal of X is generated by the required basic invariants F i . We have proved before that the hypothesis of Theorem 4.5 is satisfied for S-regular or N-regular gradings. However, in these cases some more precise information is available. Proof. There are two isomorphisms given by restriction Since θ is N-regular, res g,g 1 : k[g] G → k[g 1 ] G 0 is onto by Theorem 3.6(i). It follows that the restriction mapping res t,c : k[t] W → k[c] W (c,θ) is onto, too. In the geometric form, the ontoness of res g,g 1 yields the closed embedding g 1 / /G 0 ֒→ g/ /G. Because the points of such (categorical) quotients parametrise the closed orbits and the closed G 0 -orbits in g 1 are those meeting c, the above embedding is equivalent to the fact that G·x ∩ g 1 = G 0 ·x for all x ∈ c. This gives (i). Similarly, the ontoness of res t,c yields the equality W (c, θ)·x = W·x∩c for all x ∈ c. Since W (c, θ)·x ⊂ W 1 ·x ⊂ W ·x ∩ c, part (ii) follows. Let us prove (iii). Set X := π −1 (π(c)). It is a closed G-stable cone in g. By [Ri,5.3], X is irreducible. Since G·g 1 is irreducible and G·g 1 ⊂ X, it suffices to verify that dim X = dim G·g 1 . Because each fibre of π is of dimension dim g − l and dim π(c) = dim c, we obtain dim X = dim c + dim g − l. On the other hand, regular elements are dense in g 1 , since θ is N-regular. 
Therefore G·g 1 contains a dim c-parameter family of G-orbits of dimension dim g − l. This yields the required equality. In view of 3.4(v), Theorem 4.5 applies here. Remark. For the S-regular locally free gradings, the coincidence of W (c, θ) and W 1 was proved in [Vi1,Prop. 19]. (In that case W 1 = W σ for σ = θ| z(c) .) Therefore, in view of Theorem 4.2(iii), the equality 4.6(ii) is an extension of that result of Vinberg. Proof. The first equality stems from the presence of regular elements in g 1 (cf. the proof of 4.6(iii)). Since regular semisimple elements are dense in g 1 , G 0 ·c = g 1 . This gives the second equality. In view of 4.2(i), Theorem 4.5 applies here. 4.8 Examples. 1. Consider again the automorphism θ 2 from Example 4.4. As we already know, θ 2 is not N-regular and the ontoness of res g,g 1 fails here. The latter shows that 4.6(i) does not hold. Since c contains regular elements, Z W (c) = {1}. It is easily seen that W 1 is isomorphic to W σ , the centraliser of σ = θ| t in W . Springer's theory [Sp,§ 4] says that W σ ⊂ GL(c) is a finite reflection group whose degrees are those degrees of W that are divisible by m, i.e., 8, 12. Hence #W 1 = 96 and #W (c, θ) = 48. Thus, 4.6(ii) fails, too. However, π −1 (π(c)) is normal, Cohen-Macaulay, etc., and the reason is that θ 2 is S-regular. 2. It really may happen that θ is neither N-regular nor S-regular, but the equality dim g 1 / /G 0 = k −1 holds. Let g be a simple Lie algebra of type E 7 . Consider the inner automorphism θ that is determined by the following Kac's diagram: θ: Then the order of θ is 4, G 0 = A 3 × A 3 × A 1 , and dim g 1 = 32. Here k −1 = dim g 1 / /G 0 = 2, so that Theorem 4.5 applies. On the other hand, (dim g − rk g)/4 / ∈ N. Hence θ is not S-regular. It can be shown that the maximal nilpotent orbit meeting g 1 is of dimension 120 (its Dynkin-Bala-Carter label is E 6 ), i.e., θ is not N-regular. EXPONENTS AND COEXPONENTS OF LITTLE WEYL GROUPS We keep the previous notation. Here we briefly discuss some other consequences of [LS1], [LS2] for θ-groups. Recall the definition of (co)exponents. IfW is a reflection group in c, then (k[c] ⊗ N)W is a graded free k[c]W -module for anyW -module N. The exponents (resp. coexponents) ofW are the degrees of a set of free homogeneous generators for this module, if N = c * (resp. N = c). As is well known, if {d i } are the degrees of basic invariants in k[c]W , then {d i − 1} are the exponents. The theory of Lehrer and Springer gives a description of coexponents for the subquotient W 1 = N W (c)/Z W (c) under the constraint dim c = k −1 , while the theory of θgroups deals with the group W (c, θ) = N G 0 (c)/Z G 0 (c). By 3.4(v) and 4.6(ii), we know that dim c = k −1 and W 1 = W (c, θ) whenever θ is N-regular. Thus, N-regularity allows us to exploit the theory of Lehrer and Springer in the context of θ-groups. However, to use that theory in full strength, we need the constraint that θ is S-regular, too. 5.1 Proposition. Suppose θ ∈ Aut g is N-regular and let {e, h, f } be a θ-adapted regular sl 2 -triple. (i) The exponents of W (c, θ) correspond to the eigenvalues of θ on z(e) −1 . More precisely, m j is an exponent of W (c, θ) if and only if ε j ζ m j = ζ −1 ; (ii) If θ is also S-regular, then the coexponents of W (c, θ) correspond to the eigenvalues of θ on z(e) 1 . More precisely, m j is a coexponent of W (c, θ) if and only if ε j ζ m j = ζ. Proof. (i) This part is essentially contained in Theorem 3.6(i). 
(ii) A description of coexponents for subfactors of the form N W (c)/Z W (c), if c contains a regular vector, is due to Lehrer and Springer. However, an explicit formulation was given only in the "untwisted" case (see Theorem C in [LS2]), which in our setting correspond to the case where θ is inner. A general statement can be extracted from the discussion in [LS2,§ 4], especially from Propositions 4.6, 4.8, and Theorem D. The connection with θ-eigenvalues on z(e) follows from Theorem 3.4. Remark. If θ is N-regular, then formulas of Section 3 shows that k −1 = dim c = dim z(e) −1 and k 1 = dim z(e) 1 . S-regularity guarantee us that k −1 = k 1 and, even stronger, that k −i = k i for all i (see (4.2)). However it can happen that k 1 > k −1 . For instance, if θ is an N-regular inner automorphism of E 6 , of order 5, then k 1 = 2 and k −1 = 1. In this case, it is not clear how to characterise the coexponents for W (c, θ). Another difficulty may occur if θ is S-regular, but not N-regular. Here k −1 = k 1 , but the groups W 1 and W (c, θ) can be different. Example. The exceptional Lie algebra g = E 6 has an outer automorphism θ of order 4 such that G 0 = A 3 ×A 1 . This automorphism is determined by the following Kac's diagram (the underlying graph is the Dynkin diagram of type E (2) 6 ) : Here g 1 is the tensor product of a 10-dimensional representation of A 3 (with highest weight 2ϕ 1 ) and the simplest representation of A 1 . We have dim g 0 = 18 and dim g 1 = 20. It is easily seen that the G 0 -representation on g 1 is locally free, hence dim g 1 / /G 0 = dim c = 2. Using Proposition 2.1(i), we conclude that θ is S-regular and then, by Theorem 4.2(iii), that θ is also N-regular. In order to use the preceding Proposition, one has to know the factors {ε i }. In this case the pairs (m i , ε i ) (1 ≤ i ≤ 6) are (1, 1), (4, −1), (5, 1), (7, 1), (8, −1), (11, 1). Then an easy calculation shows that the exponents of W (c, θ) are 7, 11 and the coexponents are 1, 5. Then looking through the list of the irreducible finite reflection groups, one finds that here W (c, θ) is Group 8 in the Shephard-Todd numbering. (A list including both the exponents and the coexponents is found in [OS, Table 2]).
2019-04-12T09:12:33.952Z
2003-07-17T00:00:00.000
{ "year": 2003, "sha1": "14a843a95be0845a1302dfdd70eed4cf51fa9d1b", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "84f8886b774b9c29b562b6b189538025310e1ab2", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
261668962
pes2o/s2orc
v3-fos-license
Wong-type dermatomyositis with interstitial lung disease and anti-SRP and -PM/Scl antibodies treated with intravenous immunoglobulin
DM: dermatomyositis; IHC: immunohistochemistry; ILD: interstitial lung disease; IVIg: intravenous immunoglobulin; PM/Scl: polymyositis/scleroderma; PRP: pityriasis rubra pilaris; SRP: signal recognition peptide
INTRODUCTION Dermatomyositis (DM) is an idiopathic inflammatory disorder with a heterogeneous presentation. Patients exhibit varying degrees of concomitant cutaneous, muscle or other distinct extracutaneous manifestations that often allow for a clinical diagnosis. Wong-type DM is a rare variant with clinical and pathological overlap between DM and pityriasis rubra pilaris (PRP).1 The atypical clinical presentation of Wong-type DM often obscures and delays the diagnosis and workup. Herein, we present a case of refractory Wong-type DM with anti-signal recognition peptide (SRP) and anti-polymyositis/scleroderma (PM/Scl) antibodies, interstitial lung disease (ILD), and treatment response to intravenous immunoglobulin (IVIg). CASE REPORT A 52-year-old female with a history of poorly controlled psoriasiform plaques over her upper and lower extremities (Fig 1) presented for reevaluation of her cutaneous disease following recent onset of severe ILD with positive antinuclear (1:320), SRP, and PM/Scl-100 antibodies. Diagnosed with pathology-confirmed PRP at age 12, she had failed numerous systemic immunomodulators and topical treatments including isotretinoin, methotrexate, cyclosporine, adalimumab, secukinumab, and most recently, ustekinumab. At initial presentation, she denied any muscle pain, weakness, dysphonia, or dyspnea. Physical examination revealed psoriasiform plaques over the bilateral extensor forearms and elbows as well as the anterior, lateral, and posterior thighs. Subtle features consistent with DM were also observed, including heliotrope sign, erythema of the upper chest (Fig 2), and erythema over the knuckles. Skin biopsy revealed psoriasiform dermatitis with overlying confluent parakeratotic and orthokeratotic scale in a checkerboard pattern without epidermal atrophy or interface change, consistent with PRP (Fig 3, A and B). However, immunohistochemistry was positive for deposition of intravascular C3 and C5-9, while direct immunofluorescence was negative for IgA, IgG, and IgM, findings consistent with DM (Fig 3, C). Additional laboratory evaluation showed a low-titer positive cyclic citrullinated peptide along with normal creatine kinase and aldolase. This overlap of clinical, histologic and laboratory features of DM and PRP is consistent with Wong-type DM.
Given the severe and refractory disease, the patient was promptly initiated on monthly IVIg. Given concerns for muscle involvement, methylprednisolone

DISCUSSION

Wong-type DM is a rare DM variant in which there is clinical and pathological overlap with PRP. Wong-type DM has been reported in children and adults, with no clear consensus on whether it is an overlap syndrome of DM and PRP or a unique presentation of an already protean disease.2,3 Onset of PRP-like lesions varies and has been noted to occur prior to, simultaneously with, or after the diagnosis of DM. This variant often exhibits hyperkeratotic follicular papules with characteristic islands of sparing, as seen in PRP, but may present with classic cutaneous features of DM as well.4 Our case varies from typical Wong-type DM in that the primary PRP-like lesions are large psoriasiform plaques rather than scattered papules. The prominent linear distribution on the legs is also atypical for PRP.

The pathologic presentation classically reveals compact orthokeratosis alternating with parakeratosis in a checkerboard pattern, as in PRP, though it can include DM features of vacuolar interface dermatitis or mucin deposition.1 Recently, columnar dyskeratosis, defined as nonfollicular epidermal invaginations containing keratotic plugs with scattered dyskeratotic cells, has been postulated as a histological feature suggestive of Wong-type DM.5 To the best of our knowledge, this is the first reported case that demonstrates simultaneous findings of PRP on hematoxylin and eosin staining with concomitant immunohistochemistry findings consistent with DM. Unfortunately, we are unable to investigate the initial biopsies this patient received with her diagnosis in childhood. Ideally, this would delineate whether this was indeed Wong-type DM masquerading as PRP since childhood or whether she developed DM superimposed on already present PRP.

As there are fewer than 45 cases of Wong-type DM in the literature, there remains no established connection with myositis-specific autoantibodies or risk of systemic involvement. Myositis-specific autoantibodies are associated with particular DM phenotypes and are useful in prognostication of potential pulmonary involvement or association with a visceral malignancy. Our case expands the characterization of Wong-type DM to include anti-SRP and anti-PM/Scl antibodies in the setting of ILD. These autoantibody subtypes and ILD are both uncommon in juvenile-onset DM. Traditionally, anti-SRP and anti-PM/Scl have been associated with necrotizing myopathy and connective tissue disease overlap syndromes with ILD, respectively; these known relationships may provide valuable insights to guide extracutaneous workup in Wong-type DM patients. Further evaluation of myositis-specific autoantibodies in cases of PRP may also expedite diagnosis of unrecognized Wong-type DM. Additionally, although the prevalence of malignancy in Wong-type DM remains unclear, 4 previous reports indicate that Wong-type DM may present as a paraneoplastic phenomenon, particularly with gynecological malignancies.6 Malignancy work-up in this case was negative.
Given the paucity of Wong-type DM cases, management and treatment remain anecdotal. Cutaneous DM has traditionally been treated first-line with various immunosuppressants. More recently, a seminal breakthrough in the treatment of DM was provided by the ProDERM trial, which investigated the use of IVIg in patients with active DM. Although the primary clinical endpoint was reduction of myositis disease activity, there was clinically significant cutaneous improvement based on reductions in the Cutaneous DM Disease Area and Severity Index.7 Thus, IVIg provides an alternative treatment option for refractory or severe disease. Although the mechanism of action remains nebulous, IVIg is believed to play a multifaceted role in immunomodulation. To our knowledge, our case is the third report of refractory Wong-type DM successfully treated with IVIg,8,9 and the first in the setting of ILD.2,3

This case draws attention to a rare clinical entity, Wong-type DM, that often masquerades as PRP. Although seldom encountered, keeping Wong-type DM in the differential, especially when confronted with severely recalcitrant PRP associated with pulmonary symptoms, could allow for expedited diagnosis. Our case additionally highlights a unique presentation of Wong-type DM with anti-SRP and anti-PM/Scl antibodies associated with ILD and supports the therapeutic potential of IVIg in this rare condition.

Fig 1. Psoriasiform plaques overlying the extensor surfaces of the upper (A) and lower (B) extremities.

Fig 2. Cutaneous features of dermatomyositis, including heliotrope sign with violaceous erythema of the upper eyelids (A) and V-sign with erythema of the upper chest (B).
Parallel-in-time quantum simulation via Page and Wootters quantum time

In the past few decades, researchers have created a veritable zoo of quantum algorithms by drawing inspiration from classical computing, information theory, and even from physical phenomena. Here we present quantum algorithms for parallel-in-time simulations that are inspired by the Page and Wootters formalism. In this framework, and thus in our algorithms, the classical time variable of quantum mechanics is promoted to the quantum realm by introducing a Hilbert space of "clock" qubits which are then entangled with the "system" qubits. We show that our algorithms can compute temporal properties over $N$ different times of many-body systems by only using $\log(N)$ clock qubits. As such, we achieve an exponential trade-off between time and spatial complexities. In addition, we rigorously prove that the entanglement created between the system qubits and the clock qubits has operational meaning, as it encodes valuable information about the system's dynamics. We also provide a circuit-depth estimation of all the protocols, showing an exponential advantage in computation times over traditional sequential-in-time algorithms. In particular, for the case when the dynamics are determined by the Aubry-Andre model, we present a hybrid method for which our algorithms have a depth that only scales as $\mathcal{O}(\log(N)n)$. As a by-product, we can relate the previous schemes to the problem of equilibration of an isolated quantum system, thus indicating that our framework enables a new dimension for studying dynamical properties of many-body systems.

I. INTRODUCTION

The field of quantum foundations studies the fundamental principles of quantum theory, such as the nature of quantum states, the interpretation of measurements, the equilibration and thermalization of isolated systems, and the emergence of classicality [1][2][3][4][5].
Another important question that has recently attracted wide attention within this field is that of the role of time in quantum mechanics (QM): it is clear that ever since its inception, time in QM has been treated as an external classical parameter, in asymmetry with other quantum observables. For instance, in the canonical quantization procedure, one promotes the position and momentum variables to operators and the Poisson bracket to a commutator [33]. This quantization is implemented at a fixed time value, so the variable t appearing in the Schrödinger equation is the same as that appearing in the classical equations of motion. While seemingly innocuous, it is believed that the imbalance between time and space could be a critical issue in developing a quantum theory of gravity [34][35][36]. At the same time, such asymmetry inevitably limits the range of applicability of quantum information and computation tools, as asking questions like "what is the entanglement between the space and time coordinates?" is an entirely moot point within the conventional quantum mechanical framework. It is tempting to fix the space-time asymmetry by promoting iℏ d/dt to a quantum operator conjugate to some quantum time observable T, such that [T, H] = iℏ. However, this approach has the critical issue that it forces T and H to have exactly the same eigenspectrum, which is generally incompatible (such an argument is often attributed to Pauli [37]). Despite this apparent difficulty and other subtleties, there are several proposals to treat time on an equal footing with other physical quantities [13,14,[38][39][40][41][42]. Here we will focus on the so-called Page and Wootters (PaW) mechanism [38]. In this framework, the universe is composed of a quantum system of interest plus an ancillary clock quantum system, such that the joint state of the universe, the history state, is a stationary state. The previous "Pauli objection" is circumvented since the operator T acts on the clock system, implying [T, H] = 0. Remarkably, as long as the system and the clock are correlated in a specific way, the unitary evolution of the system can be restored by conditioning on the clock states. In this way, measures of system-time entanglement become rigorous quantifiers of the amount of distinguishable evolution undergone by the system during its history [8]. The foundational discussion surrounding the role of time has many intriguing ramifications, even when focusing solely on the PaW mechanism. The interested reader can refer to Appendix A for pertinent discussions. However, the primary objective of this work is computational: in this manuscript, we provide a translation of the PaW mechanism into a useful quantum computational scheme where the quantum aspects of time are captured by clock qubits. This allows us to develop quantum algorithms for studying temporal averages of several dynamical properties of a quantum system. Specifically, given an n-qubit quantum system that is evolving under the action of a time-independent Hamiltonian H, we consider the problem of approximating the infinite-time average of some time-dependent dynamical quantity by a discrete sum over N different times. In a standard setting, we can estimate said discrete average by sequentially running N different quantum circuits (one for each time in the average).
However, by leveraging the history state of the Page and Wooters formalism we propose a quantum algorithm for parallel-intime simulations that uses log(N ) (any logarithm in this manuscript is taken in base 2) ancillary clock qubits and that allows us to evaluate the temporal average with a single quantum circuit. The previous shows that using the history state leads to an exponential trade-off between temporal complexity (running multiple circuits) and spatial complexity (using more qubits). In addition, we also show that the entanglement between the system and the clock qubits carries operational meaning since it serves as a bound for the infinity time average of the Loschmidt echo and for the temporal variance of expectation values. These results imply that the history state encodes valuable information in its correlations that can be used to study and understand the system's dynamics and equilibration. Given this operational meaning of the entanglement, we present two different schemes to compute the linear entropy of the history state, one based on the state-overlap circuit [43], and another one leveraging classical shadows and randomized measurements [44,45] We also present a depth study of the circuits showing a clear advantage of using parallel-in-time protocols over the conventional sequential-in-time approaches where time is not mapped to clock qubits. Moreover, we propose a scheme to further reduce the circuit depth needed to prepare the history state via Hamiltonian diagonalization [46,47]. Here, we show that by leveraging tools from variational quantum algorithms [48,49] and quantum machine learning [50][51][52] to variationally diagonalize H, one can significantly further reduce the required circuit depth. For the special case when H is given by the Aubry-Andre model, we show that all of our algorithms can be implemented with a depth that only scales as O(log(N )n), i.e., as the product of the number of clock and system qubits. Finally, we perform simulations which showcase the performance of our algorithms for studying temporal averages of systems evolving under an Aubry-Andre model. II. QUANTUM TIME FORMALISM AND ITS DISCRETIZATION Let us consider an n-qubit quantum system with associated Hilbert space H S . Then, let H be a time-independent Hamiltonian under which the system evolves. The dynamical evolution of the system is determined by the Schrödinger equation where we have set ℏ = 1. It is well known that the solution of Eq. (1) is given by and where |ψ 0 ⟩ is some initial state of the system. As previously discussed, and as shown in Fig. 1(a), there exists an inherent asymmetry between the space and time variables in quantum mechanics. Namely, the t variable over which we take a derivative is a fully classical parameter that is external to the quantum system. An alternative to fully incorporate time in a quantum framework is to introduce a new Hilbert space H T spanned by some states |t⟩ (see Fig. 1(b)) such that T |t⟩ = t|t⟩ and [T, P T ] = iℏ, which in the time basis leads to P T ≡ −iℏ d dt . Note that P T is not the Hamiltonian of the system and in fact [T, H] = 0 (as they act on distinct Hilbert spaces). Evolution is then recovered from an extended Schrodinger equation, involving both the system and the clock Hilbert spaces, which is given by J |Ψ⟩ = 0, for J = P T ⊗ 1 1 S + 1 1 T ⊗ H and |Ψ⟩ ∈ H T ⊗ H S . Here 1 1 T and 1 1 S respectively denote the identities on H T and H S . 
In general, the extended Schrodinger equation, together with an initial condition, leads to entanglement between the system and the time Hilbert space. The previous scheme can also be regarded as the mathematical basis of the Page and Wootters (PaW) mechanism. Under this framework the universe state |Ψ⟩ is stationary (as J |Ψ⟩ = 0) while the unitary evolution of the subsystem S emerges by conditioning on the rest. In our previous notation this means that given a universe state we can recover the state of the system as |ψ(t)⟩ = ⟨t|Ψ⟩ (assuming ⟨t|t ′ ⟩ = δ(t − t ′ )). More notably, one can readily see that J |Ψ⟩ = (i∂ t − H)|ψ(t)⟩ = 0 precisely recovers the standard Schödinger equation with the index t being demoted from a quantum state label to a time parameter. In order to make the states |Ψ⟩ accessible to conventional (discrete) qubit-based quantum computers, one needs a In standard quantum mechanics, one promotes x and p to quantum operators, but the time variable t is treated as a classical parameter that is external to the quantum system being studied. Quantum algorithms for studying dynamical properties based on this framework are implemented for some fixed time t. If we want to compute an average of N times, we need to repeat the run N different sequential-in-time experiments. b) In the PaW formalism, time is treated as a quantum variable, with its own associated Hilbert space. In this work we present quantum algorithms for parallel-in-time simulations which trade circuit repetitions for ancilla clock qubits. c) After a proper entangling protocol one has access not only to properties of the system at a given time, but also to their complete history. This information can be retrieved by performing measurements at the end of the circuit, which now can involve either the clock qubits, the system qubits or both. Measurements on the system which are conditioned to a certain time value give properties of the system at a given time. More interestingly, if one only measures on the system and complete ignore the clock's values, temporal averages are obtained. This is a consequence of the entanglement between the system and the clock which induces a useful quantum channel when the clock is treated as an environment. Because of the quantum nature of the simulated clock and system, many other measurements can be proposed, meaning that the different protocols we discuss in this manuscript do not exhaust all the possibilities opened by this computational framework. proper discrete time framework. Fortunately, it is easy to guess the form of a discrete time history state. Namely, we start by introducing a finite dimensional Hilbert space H T , which we denote as the time or clock -Hilbert space with basis |t⟩ satisfying ⟨t ′ |t⟩ = δ tt ′ for t = 0, . . . , N − 1. A discrete history state is then defined as the state with |ψ(εt)⟩ = U (εt)|ψ 0 ⟩ ∈ H S . Here, we have ε = T /N the time-spacing for a given time-window T , while t denotes a discrete dimensionless index (so that εt is a physical time interval). In analogy with the continuum case one can recover the state of the system at a given time by conditioning as |ψ(εt)⟩⟨ψ(εt)| = Tr T [|Ψ⟩⟨Ψ|Π t ]/⟨Ψ|Π t |Ψ⟩ for Π t = |t⟩⟨t| ⊗ 1 1 S . In this way the unitarily evolved state is recovered for the time values allowed by H T . Notice that this operation is different from a direct partial trace over the clock states which generally yields a mixed state. 
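To make the discrete construction above concrete, the following NumPy sketch (ours, not part of the original manuscript; variable names such as `n_clock` and `history_state` are purely illustrative) builds the discrete history state of Eq. (4) for a toy Hamiltonian, checks that conditioning on a clock value |t⟩ recovers the unitarily evolved state U(εt)|ψ0⟩, and shows that tracing out the clock instead yields a mixed system state.

```python
import numpy as np
from scipy.linalg import expm

# Toy setting: a 2-qubit system evolving under a fixed Hamiltonian H.
n_sys = 2                      # system qubits
n_clock = 3                    # clock qubits -> N = 2**n_clock time steps
N = 2 ** n_clock
eps = 0.3                      # time spacing (time window T = N * eps)

rng = np.random.default_rng(0)
A = rng.normal(size=(2**n_sys, 2**n_sys)) + 1j * rng.normal(size=(2**n_sys, 2**n_sys))
H = (A + A.conj().T) / 2       # random Hermitian Hamiltonian

psi0 = np.zeros(2**n_sys, dtype=complex)
psi0[0] = 1.0                  # initial system state |00>

# Discrete history state |Psi> = (1/sqrt(N)) sum_t |t> (x) U(eps*t)|psi0>
history = np.zeros((N, 2**n_sys), dtype=complex)
for t in range(N):
    history[t] = expm(-1j * H * eps * t) @ psi0
history_state = history.reshape(-1) / np.sqrt(N)       # vector in H_T (x) H_S

# Conditioning on clock value t recovers |psi(eps*t)> (up to normalization)
t = 5
conditioned = history_state.reshape(N, -1)[t]
conditioned /= np.linalg.norm(conditioned)
print(np.allclose(conditioned, expm(-1j * H * eps * t) @ psi0))   # True

# Tracing out the clock instead gives a mixed system state (purity < 1 in general)
rho_S = sum(np.outer(history[t], history[t].conj()) for t in range(N)) / N
print(np.real(np.trace(rho_S @ rho_S)))
```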
It turns out that the partial trace induces a quantum channel which also encodes useful information about the system's dynamics and its (eventual) equilibration. In fact, one can think about the history states as a purification of that particular quantum channel. This is related to the system-time entanglement as we discuss in Section IV and Appendix B. Here we note that for the case of N being a power of two, the discrete history state can be prepared with the quantum phase estimation-like circuit of Fig. 2. For N being a power of two, one requires log(N ) ancillary or clock qubits (we henceforth assume the logarithms to be base 2). As such, the clock Hilbert space H T is of dimension dim(H T ) = 2 log(N ) = N . This result has been reported recently in [8] and [9], where the discrete history state of Eq. (4) has also been extensively studied. The advantages of encoding history states in a quantum computer become clear once one starts considering measurements on the end of the circuit which are different from simple conditioning: while conditioned measurements allow one to recover properties of the system at a given time, new genuinely quantum possibilities become accessible through the clock qubits. A small summary of such possibilities is provided in Fig. 1(c). . Circuit for preparing history states. As shown above, the initial state to the clock qubits is |0⟩ ⊗ log (N ) while that of the system is |ψ0⟩. The action of the Hadamard gates is to map the initial state to |+⟩ ⊗ log(N ) ⊗ |ψ0⟩. Here, we find it convenient to write |+⟩ ⊗ log (N ) III. FROM QUBIT-CLOCKS TO PARALLEL-IN-TIME SIMULATIONS Here we discuss how the mathematical formalism of qubit-clocks presented in the previous section can be leveraged to create novel quantum algorithms aimed at studying averages of dynamical-in-time properties of quantum systems. In particular, in this section we focus in developing parallel-in-time-type algorithms that estimate time averages of physical quantities. A. Setting Given a time-independent Hamiltonian H acting on nqubits, and its associated time evolution operator U (t) = e −iHt , we consider the problem of estimating general quantities of the form where Here, ρ is an n-qubit state acting on the d-dimensional Hilbert space H S (with d = 2 n ), O 1 and O 2 are two operators, and ω ∈ R. To illustrate the relevance of the quantity F (ρ, O 1 , O 2 , ω) in Eq. (5) let us consider several special cases. First, let ω = 0 and O 2 = 1 1, which leads to We can see that F (ρ, O 1 ) simply corresponds to an infinite temporal average of the observable O 1 . These quantities are crucial to understanding the dynamical properties of closed quantum systems and in particular their equilibration [4, 53,54]. They are also relevant to the study of quantum quench processes in field theories [55] and signatures of non-equilibrium quantum phase transition through infinite-time averages of Loschmidt echos [56][57][58][59][60]. Next, when ω = 0, we have Here we can recognize ⟨O 1 (t)O 2 ⟩ ρ as a two-point correlation function (also known as a dynamical Green's function). Two-point correlation functions are used to describe the behavior of a system under perturbations, and are a widely used tool in quantum many-body systems and condensed matter physics [61][62][63][64]. The infinite-time average of ⟨O 1 (t)O 2 ⟩ ρ has been recently considered in [65] to study thermodynamics properties of closed quantum systems such as the emergence of dissipation at late times. 
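To make the first special case concrete, the short sketch below (ours; an exact statevector calculation rather than one of the paper's quantum circuits) evaluates the discrete-time average of ⟨O1(t)⟩ for a random Hamiltonian and compares it with Tr[ρ̄ O1], where ρ̄ is the initial state dephased in the energy eigenbasis; for a generic nondegenerate spectrum the two agree in the large-N, large-T limit.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 8                                            # toy Hilbert-space dimension (n = 3 qubits)
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
H = (A + A.conj().T) / 2                         # generic Hamiltonian, nondegenerate spectrum
B = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
O1 = (B + B.conj().T) / 2                        # Hermitian observable

psi0 = rng.normal(size=d) + 1j * rng.normal(size=d)
psi0 /= np.linalg.norm(psi0)

evals, V = np.linalg.eigh(H)
c = V.conj().T @ psi0                            # amplitudes in the energy eigenbasis

# Discrete-time approximation of F(rho, O1): (1/N) sum_t <psi(eps t)| O1 |psi(eps t)>
N, eps = 2**10, 0.7
avg = 0.0
for t in range(N):
    psi_t = V @ (np.exp(-1j * evals * eps * t) * c)
    avg += np.real(psi_t.conj() @ O1 @ psi_t) / N

# Infinite-time prediction: Tr[rho_bar O1], with rho_bar dephased in the energy eigenbasis
rho_bar = V @ np.diag(np.abs(c) ** 2) @ V.conj().T
print(avg, np.real(np.trace(rho_bar @ O1)))      # the two agree for large N and T = N*eps
```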
Finally, we note that the general function F (O 1 , O 2 , ω) corresponds to a Fourier transform of the two-point correlation function, which is commonly referred to as the dynamical structure factor in the condensed matter community [66,67]. Crucially, the dynamical structure factors are used to study dynamical properties of a given system and have the properties of being experimentally accessible [68,69], and usually being hard to compute via classical simulations [67]. While the importance of Eq. (5) is clear, the computation of F (O 1 , O 2 , ω) might not be straightforward. On the one hand, the classical simulation of some quantum mechanical dynamical process is generally expected to be exponentially expensive in classical computers. Such scaling can be mitigated by using a quantum computer. Here, there are several schemes capable of computing fixed-time quantities of the form ⟨O 1 (t)O 2 ⟩ ρ [66,[70][71][72]. Still, the issue remains that one needs to perform the time average. In practice, this can be achieved via the discrete-time approximation where we have ε = T /N (for simplicity, we will henceforth assume that N is a power of 2). That is, for a given (finite) time window T , we are computing the average over N points separated by a spacing ε. As shown in Fig. 3, the spacing ε determines the level of accuracy in the approximation, as a smaller ε leads to a more precise discretization of the integral and a better approximation of the true infinite-time average. On the other hand, the final time T determines the resolution of the approximation, as a larger T allows for a longer time interval to be averaged over, capturing more information about the system's behavior over time. One can see that both the resolution and the accuracy can be improved by a larger number of discrete time steps N . B. Sequential and parallel-in-time protocols Let us now consider the task of estimating F (O 1 , O 2 , ω) when O 1 and O 2 are Pauli operators by either sequential-or parallel-in-time simulations. Here, by sequential, we mean that each term in the sum in Eq. (9) is estimated on a . The algorithm show can be used to individually compute each term in the summation. That is, the circuits can be used to estimate quantities of the form ⟨O1(t)O2⟩ρ. Then, one can combine those expectation values classically (as well as add the appropriate phases e −iωεt ) to estimate the quantity F (O1, O2, ω) up to precision δ. The colored dashed gate is replaced with an identity (an S † gate) to compute the real (imaginary) part of ⟨O1(t)O2⟩ρ. This approach requires a quantum device with (n + 1)-qubits and O(N/δ 2 ) different experiments. quantum device by running some finite number of "experiment". For instance, consider the circuit in Fig. 4, as explicitly shown in the Supplemental Information, it can be used to estimate an expectation value of the form ⟨O 1 (εt)O 2 ⟩ ρ . Thus, we have that the following proposition holds. The proof of Proposition 1, as well as that of all other main results, is presented in the Supplemental Information. Clearly, the fact that we need to sequentially estimate ⟨O 1 (εt)O 2 ⟩ ρ for each t = 0, . . . , N −1, leads to a complexity in the number of experiments (i.e., number of calls to the quantum computer) that scales as O(N ). As we now show, this complexity can be reduced by using a scheme based on the discrete history state formalism, which allows us to directly estimate the whole sum of Eq. (9). That is, the following result holds. 
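The following sketch (ours; exact statevector arithmetic rather than a sampled estimate) spells out the discrete sum of Eq. (9): each term ⟨O1(εt)O2⟩ρ weighted by e^{-iωεt} is what the sequential protocol of Fig. 4 estimates one circuit at a time, whereas the parallel-in-time protocol of Fig. 5 targets the full sum directly. The helper `discrete_F` and the toy two-qubit Hamiltonian are illustrative placeholders.

```python
import numpy as np

def discrete_F(H, O1, O2, psi0, N, eps, w):
    """Exact statevector evaluation of the discrete sum in Eq. (9):
    (1/N) * sum_t exp(-i w eps t) <psi0| O1(eps t) O2 |psi0>, with O1(t) = U(t)^dag O1 U(t)."""
    evals, V = np.linalg.eigh(H)
    total = 0.0 + 0.0j
    for t in range(N):
        U = V @ np.diag(np.exp(-1j * evals * eps * t)) @ V.conj().T
        O1_t = U.conj().T @ O1 @ U               # Heisenberg-picture O1 at time eps*t
        total += np.exp(-1j * w * eps * t) * (psi0.conj() @ O1_t @ O2 @ psi0)
    return total / N

# Toy usage: a two-qubit Hamiltonian with O1 = Z x I and O2 = X x I.
Z = np.array([[1, 0], [0, -1]], dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)
H = np.kron(X, X) + 0.5 * (np.kron(Z, I2) + np.kron(I2, Z))
psi0 = np.zeros(4, dtype=complex)
psi0[0] = 1.0
print(discrete_F(H, np.kron(Z, I2), np.kron(X, I2), psi0, N=64, eps=0.2, w=0.0))
```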
Note that while Proposition 1 and Theorem 1 are derived and proved for the case of O 1 and O 2 being unitary operators, one can readily generalize the previous results for the case when they are instead expressed as a linear combination of Pauli operators. In particular, if for U µ being a Pauli operator, then the experiment complexities in Proposition 1 and Theorem 1 respectively change as O(N M 1 M 2 /δ 2 ) and O(M 1 M 2 /δ 2 ). Here, we Figure 6. Algorithm for sequential-in-time estimation of the Loschmidt echo of Eq. (13). We show an algorithm which computes, up to precision δ, the overlap between |ψ0⟩ and U (εt) |ψ0⟩ for t = 0, . . . , (N − 1). The algorithm is based on Bell-basis measurements as described in [43]. Once these overlaps are estimating, we can average them classically to estimate discrete-time temporal average of the Loschmidt echo L(ψ0). This approach requires a quantum device with (2n)qubits and O(N/δ 2 ) different experiments. again recover an exponential temporal-to-qubit resource trade-off by using the parallel-in-time algorithm. Next, let us consider ρ = O 1 = |ψ 0 ⟩⟨ψ 0 |, O 2 = 1 1 and ω = 0. In this special case, The quantity on the right hand side is the infinite-time Loschmidt echo average [56,57,59,60], which we denote asL(ψ 0 ). We see that Similarly, for its discrete-time approximation L(ψ 0 ), we can write Figure 7. Algorithm for parallel-in-time estimation of the Loschmidt echo of Eq. (13). We show an algorithm which computes, up to precision δ, the overlap between the discrete history state |Ψ⟩⟨Ψ| and 1 1 ⊗ |ψ0⟩⟨ψ0|. As shown in Eq. (14), the overlap between these two states is equal to L(ψ0). The algorithm is based on Bell-basis measurements as described in [43]. This approach requires a quantum device with (2n + log(N ))-qubits and O(1/δ 2 ) different experiments. It is clear that while L(ψ 0 ) can technically be computed with the circuits in Figs. (4) and (5), this requires expanding O 1 = |ψ 0 ⟩⟨ψ 0 | into a linear combination of unitaries, and such summation will generally contain exponentially many terms. To mitigate this issue, we also present two results which allow us to estimate Eq. (13) by either sequential-, or parallel-in-time simulations. First, let us consider the following proposition. Again, we can see from Theorem 2 that performing a parallel-in-time simulation allows us to exponentially reduce the experiment complexity (from linear in N to being N -independent) at the cost of log(N ) ancillas. Similarly to Proposition 2, the proof of Theorem 2 simply spans from computing the overlap between the discrete history state |Ψ⟩⟨Ψ| and 1 1 T ⊗ |ψ 0 ⟩⟨ψ 0 |. Explicitly, we have IV. ACCESSING DYNAMICAL INFORMATION VIA SYSTEM-TIME ENTANGLEMENT Thus far, we have seen that using the history state allows us to push the complexity of running multiple experiments onto ancillary clock-qubit requirements. However, as we will now show, the entanglement present between the time and system qubits in the history state has operational meaning and contains information that we can use to learn about the system's dynamics. Moreover, we will unveil a rigorous and explicit connection between these correlations and the equilibration problem. Protocols for obtaining these quantities from variations of the previous circuits are also provided in this section. A. 
Properties, relation to the problem of equilibration and to temporal fluctuations of observables First, let us again recall that the discrete history state is a bipartite state between the system Hilbert space H S and the time, or clock Hilbert space H T . That is, |Ψ⟩ ∈ H T ⊗ H S . Moreover, it is apparent from Fig. 2 and Eq. (4) that history states are in general entangled across the systemtime partition. We will henceforth refer to the correlations between the system qubits and the clock qubits as systemtime entanglement (following [8]). It is important to note that, in general, (4) is not in the Schmidt's decomposition [76] of |Ψ⟩ (as the states |ψ(t)⟩ are not necessarily orthogonal). However, there exists a basis in which we can write the history state as where √ p l are the so-called Schmidt coefficients, and {|l⟩ S }, and {|l⟩ T } are orthonormal sets of states in H S and H T , respectively. A simple way to quantify the systemtime entanglement is through the linear entropy, defined as where ρ T (S) = Tr S(T ) [|Ψ⟩⟨Ψ|] is the reduced state of the history state in the clock (system) qubits. Here, we denote as Tr S(T ) the partial trace over the system (clock) qubits. In principle, one can also consider other entropies such as the Von-Neumann entropy. However, the linear entropy has the desirable property of being efficiently computable in a quantum device (see below). There is a deep connection between the system-time entanglement and dynamical properties of the system, in particular to the problem of its equilibration: Let us recall first that given an arbitrary (for simplicity) pure state |ψ⟩ = k c k |k⟩ the infinite-time average of the associated density matrix is where one assumes large (infinite) T and with H|k⟩ = E k |k⟩. In other words, if the state of the system is averaged over large enough times it loses all coherences in the energy basis. Under experimentally realistic conditions it is feasible to identify this state with the stationary equilibrium state [4]. For "most" observables this actually holds for short times T [54], meaning that a finite time window average of observables is also an interesting quantity in general. The quantum time formalism gives a new interpretation to the loss of coherences induced by a time average: since the system is "entangled with time", we lose information by ignoring the "clock qubits". This loss induces precisely the (dephasing) quantum channel ρ →ρ in the large T and small ε limit, a result that can be derived directly from a continuum quantum time formalism [77]. For discrete time the following result holds. Theorem 3. Let |Ψ⟩ be the discrete history-state in Eq. (4). The partial trace over the clock induces a quantum channel which in the large time limit implies ρ S →ρ. Moreover, for any ε and N the following majorization relation holds:ρ withρ a discretization of Eq. (17). Furthermore, for a periodic evolution with period τ generated by a Hamiltonian with M distinct eigenvalues (i.e., e −iHτ = 1 1) and given a history state with log(M ) clock qubits and time window T = τ , we have While phrased in a rather abstract way, this result has many interesting corollaries with clear operational meaning. The reason for this is that roughly speaking the history state is providing a way to prepare the equilibrated state of a quantum system: one simply needs to prepare the history state and ignore the clock-qubits. In fact, this is the reason why the previous for evaluating time averages work. 
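A small numerical check (ours) of the statements above: the reduced system state of the discrete history state, ρ_S = (1/N) Σ_t |ψ(εt)⟩⟨ψ(εt)|, approaches the dephased state ρ̄ as the time window grows, and its purity (equal to Tr ρ_T²) stays above the infinite-time Loschmidt-echo average Σ_k |c_k|⁴, consistent with the majorization relation of Theorem 3 and the bound discussed below. The spectrum, spacing, and variable names are generic placeholders.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 8                                                  # toy system dimension
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
H = (A + A.conj().T) / 2
evals, V = np.linalg.eigh(H)

psi0 = rng.normal(size=d) + 1j * rng.normal(size=d)
psi0 /= np.linalg.norm(psi0)
c = V.conj().T @ psi0                                  # amplitudes in the energy eigenbasis

rho_bar = V @ np.diag(np.abs(c) ** 2) @ V.conj().T     # dephased (equilibrium) state
L_inf = np.sum(np.abs(c) ** 4)                         # infinite-time Loschmidt-echo average

eps = 0.5
for n_clock in (2, 4, 6, 8, 10):
    N = 2 ** n_clock
    rho_S = np.zeros((d, d), dtype=complex)            # Tr_T|Psi><Psi| = (1/N) sum_t |psi_t><psi_t|
    for t in range(N):
        psi_t = V @ (np.exp(-1j * evals * eps * t) * c)
        rho_S += np.outer(psi_t, psi_t.conj()) / N
    purity = np.real(np.trace(rho_S @ rho_S))          # equals Tr[rho_T^2]
    print(n_clock, np.linalg.norm(rho_S - rho_bar), purity >= L_inf - 1e-12)
```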
Moreover, the system time entanglement entropies are in fact a lower bound to the entropies of the state in equilibrium, as it follows directly from Theorem 3 and basic majorization properties. Furthermore, one can rediscover the quantum time formalism from the natural purification of this approximate dephasing channel. The interested reader can refer to Appendix B where the proof of Theorem 3 is provided together with a more detailed discussion. With the previous in mind, let us consider again the task of estimating the infinite-time Loschmidt echo average in Eq. (11). We recall thatL(ψ 0 ) quantifies the degree of reversibility of the time evolution and is an indicator of the stability of the quantum system. Moreover, it is easy to see thatL i.e., the infinite-time average of the Loschmidt echo is the purity of the dephased stateρ. We can now use these considerations and Theorem 3 to obtain the following result. Corollary 1. Let |Ψ⟩ be the discrete history-state in Eq. (4), and let E 2 be the linear entropy of the system-time partition. Then, for any T and N we have Corollary 1 has several important implications. First, it bounds the amount of entanglement between the system and the clock qubits. In particular, it shows that the system-time entanglement can only be large if the infinitetime average of the Loschmidt echo value is small. Conversely, ifL(ψ 0 ) is large, E 2 has to be small. Second, let us remark that Eq. (21) is valid for all values of T , but most notably, also for all values of N . For large N and T the equality is reached asymptotically, and we have that Eq. (21) becomes Tr ρ 2 T ≡L(ψ 0 ). Moreover, as we will see below, our numerical analysis shows that Tr ρ 2 T can provide a better approximation toL(ψ 0 ) than L(ψ 0 ), implying that there exists no simple general relation between E 2 and L(ψ 0 ). We can understand the intuition behind Corollary 1 as follows. Let |ψ 0 ⟩ be a stationary state of the unitary evolution. For instance, let |ψ 0 ⟩ be an eigenstate of H with eigenenergy E 0 , so that U (εt) |ψ 0 ⟩ = e −iεtE0 |ψ 0 ⟩. Then, the discrete history state becomes Equation (22) reveals that |Ψ⟩ is separable. It is also not hard to verify that in this case L(ψ 0 ) = 1. On the other hand, if |ψ 0 ⟩ evolves through N orthogonal states ⟨ψ(εt)|ψ(εt ′ )⟩ = δ tt ′ then Eq. (4) is already the Schmidt decomposition of |Ψ⟩ and the state is maximally entangled. The previous toy model shows that if the state is quasi stationary (i.e., large Loschmidt echo), we can expect small values of entanglement. Similarly, if the state is significantly changing during the evolution (e.g., small Loschmidt echo value), then the history state will likely possess large amounts of entanglement. We note that the relation between the distinguishability of the evolved state and the system time-entanglement was first reported in [8]. However, the connection with the Loschmidt echo was not explored therein. The result in Corollary 1 can be further strengthened for the special case where the time evolution is periodic. That is, when for some τ , and where we assume that H has M distinct eigenvalues, for M being a power of two. Now, we find that the following result holds. Corollary 2. For a periodic evolution with period τ generated by a Hamiltonian with M distinct eigenvalues, as in Eq. 
(23), then for a history state with log(M ) clock qubits and time window T = τ , we have Corollary 2 shows that for periodic Hamiltonians the system-time entanglement is exactly the same as the infinite-time average of the Loschmidt echoL(ψ 0 ), as well as the discrete-time approximation L(ψ 0 ). As shown in Appendix B, tracing out induces now a completely dephasing channel in the energy eigenbasis so that ρ S =ρ =ρ. The previous results connecting the system-time entanglement with the Loschmidt echo allows us to derive even more operational meaning to E 2 as a bound for temporal fluctuations of observable. In Ref. [4], it was shown that given an observable O,L(ψ 0 ) provides a bound on temporal fluctuations of observables as (the difference between the largest and smallest eigenvalues of O in the subspace of states satisfying ⟨n|ψ⟩ ̸ = 0), and where σ 2 O denotes the temporal variance Here we have used the notation defined in Eq. (7) with F (O) ≡ ⟨O⟩ (at a given time) while the "overline" denotes temporal-average. Eq. (25) shows that small temporal Loschmidt echo averages imply a small temporal variance of the observable O, and vice versa. In other words, a system with a smallL(ψ 0 ) can only exhibit smaller temporal fluctuations in its observables compared to a system with a large Loschmidt echo. It should be clear to see that Theorem 1 readily implies the following corollary. Corollary 3. Let O be an observable, and let σ 2 O denote its temporal variance as in Eq. (26). The system-clock entanglement provides bound on temporal fluctuations as Corollary 3 shows a clear physical meaning of the systemtime entanglement. Namely, if E 2 is small, then the system is stable and predictable. This follows from the fact that the temporal variances of expectation values will be small. Conversely, if the system-time entanglement is large, then the system can be unstable and unpredictable, as evidenced by potentially large observable fluctuations. B. Protocols for computing the system-time entanglement The previous theorems and corollary shed light on the exciting possibility of understanding the dynamics of the system through the system-time entanglement. However, in order for these results to be truly useful, one needs to be able to measure E 2 from the history state. As we can see in Eq. (16), we need to estimate Tr ρ 2 S or Tr ρ 2 T . While mathematically, it makes no difference whatsoever which subsystem we focus on, as their purity is the same (see Eq. (15)), in practice it can be substantially easier to work with one system or the other. As heuristically evidenced by our numerics (see below), the discrete history state with a number of clock qubits log(N ) much smaller than the system size n produces results which accurately reproduce the infinity time average Here we consider the task of evaluating Eq. (16). By taking two copies of the history state, we can estimate Tr ρ 2 T up to δ precision via the state-overlap circuit in [43]. This approach requires a quantum device with (2n + 2 log(N ))-qubits and O(1/δ 2 ) different experiments. properties of the system dynamics. Thus, we will henceforth assume that log(N ) ≪ n. This assumption implies that we can compute E 2 , and therefore learn about the system, by just looking at the clock qubits. We now present two methods for estimating Tr ρ 2 T . When using the circuit in Fig. 8 one prepares two copies of the history state |Ψ⟩ and then performs the state overlap circuit of Ref. [43]. On the other hand, when using the circuit in Fig. 
9 one can estimate E 2 with a single copy of |Ψ⟩ by using classical shadows, or randomized measurements [44,45]. For instance, one can prepare the history state and performs a random unitary on each qubit, followed by a measurement on the computational basis. The measurement outcomes are stored and then combined classically to estimate Tr ρ 2 T . To finish this section, we note that by comparing Propo- Figure 9. Algorithm for estimating E2 via randomized measurements. Here we consider the task of evaluating Eq. (16). We start with a copy of the history state, and we apply a random unitary (indicated by a colored gate) to each qubit. Then we perform measure each qubit in the computational basis and record the measurement outcome. These constitute the so-called "classical shadows" of ρT . As shown in [44], this procedure allows us to estimate Tr ρ 2 T up to δ precision with a quantum device with (n + log(N ))-qubits and O(N/δ 2 ) different experiments. sition 2, Theorem 2, and Theorem 4, the method to estimate either L(ψ 0 ) or E 2 with the least computational requirement (assuming log(N ) ≪ n) is that of Fig. 9. Namely, here we can compute E 2 up to δ precision with a quantum computer with (n + log(N )) << 2n qubits and with O(N/δ) experiments. This result then showcases the power of using the history state as it allows us to study physical properties of the system (such as boundingL(ψ 0 ) or the temporal variances ∆O 2 ) with less requirements than we would otherwise need. V. DEPTH-ESTIMATION AND PARALLEL-IN-TIME ADVANTAGES In the previous sections we have presented several methods where we used the history state to study temporal averages of quantities of the form of F (O 1 , O 2 , ω) in Eq. (9). At the same time, we have shown how to compute the system-time entanglement, a new quantity with many interesting applications. Crucially, these techniques require being able to implement the phase estimation-like circuit for preparing history states in Fig. 2 as a sub-routine. In this section, we give an explicit estimation of the depth of this preparation circuit based on two different implementations: a direct Lie-Trotter product formula [78] approach and a Hamiltonian diagonalization scheme which we implemented variationally [46]. The aim is to compare the associated running times of the algorithms with the ones obtained in sequential methods, focusing only on the parts of the protocols involving evolution gates. A. Direct Trotterization approach We start by recalling that U (εt) = e −iHεt for some Hamiltonian of interest H. Usually, if one wishes to implement U (εt), the standard approach is to break the evolution into a smaller, easier to implement evolutions U (ε), and then repeat it t times. That is, one has U (εt) = U (ε) t . In this way, the depth of the circuit needed to implement U (εt) grows with t. To be more specific, let us assume that one is employing a Lie-Trotter decomposition-based product formula. This was the first example of quantum advantage for quantum simulations [78] and remains a relevant and straightforward technique to this day [79]. The basic idea is to decompose a given Hamiltonian H = l j=1 h j as U (ε) ≈ j e −ihj ε . We will further assume that each h j is local, i.e., it acts nontrivially on at most O(1) qubits. Importantly, note that the locality condition implies that l is at most polynomially growing with the system size. 
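For readers who want to see the product formula in action, here is a small exact-matrix sketch (ours; the chain, couplings, and sizes are placeholders) comparing the first-order Lie-Trotter approximation of U(τ) against the exact propagator as the number of time steps grows.

```python
import numpy as np
from scipy.linalg import expm

# First-order Lie-Trotter product formula for H = sum_j h_j on a small chain:
#   U(tau) = exp(-i H tau)  ~  ( prod_j exp(-i h_j eps) )^t,   eps = tau / t.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def kron_list(ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

n = 4
terms = []
for j in range(n - 1):                       # nearest-neighbour XX couplings
    ops = [I2] * n
    ops[j], ops[j + 1] = X, X
    terms.append(kron_list(ops))
for j in range(n):                           # local Z fields
    ops = [I2] * n
    ops[j] = 0.5 * Z
    terms.append(kron_list(ops))
H = sum(terms)

tau = 1.0
U_exact = expm(-1j * H * tau)
for t_steps in (1, 4, 16, 64):
    eps = tau / t_steps
    U_step = np.eye(2**n, dtype=complex)
    for h in terms:
        U_step = expm(-1j * h * eps) @ U_step
    U_trotter = np.linalg.matrix_power(U_step, t_steps)
    print(t_steps, np.linalg.norm(U_trotter - U_exact, 2))   # error shrinks roughly as tau^2 / t_steps
```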
The evolution operator up to time τ = tε can be approximated with t copies of these gates as U (τ ) ≈ j e −ihj ε t which is also logarithmic in the system size. A rough estimation of the error involved was provided in [78] by assuming that the main contribution to the error comes from the second-order term in the Lie-Trotter formula. Under this assumption, the number of times steps required for guaranteeing a fixed precision grows as t ∝ τ 2 . Then, the total number of gates, denoted as # g (τ ), scales as # g (τ ) = γlτ 2 for γ a constant dependent on the precision and the particular Hamiltonian. More general bounds were found in [80] which allows us to write # g (τ ) = γlτ α . This is the estimation we will be using. Under certain scenarios, this bound can be improved e.g. by using the Lie-algebraic structure associated with the given Hamiltonian [81], and in general, the "actual" error scaling of such product formulas remains poorly understood [79]. For our purpose the previous bound will suffice: we want to compare the total number of gates required in the sequential (Figures 4 and 6) versus the parallel-in-time approach (Figures 5 and 7) assuming the same Trotterization scheme is applied to both. The interest in this quantity relies on the fact that the total number of gates employed in each simulation protocol is what determines the total time span required to complete the computation [82]. By estimating this quantity in each protocol we can establish whether it is more convenient to use a sequential or parallel-in-time approach. Theorem 5. Consider the total number of gates required for implementing the evolution in the sequential approaches # seq , and the total number for the parallel in time approaches # par . They scale as for β ∈ O(1) a constant independent both of the system and clock size. Theorem 5 is a consequence of the fact that # seq is given by the sum over the amount of gates of each run so that # seq ∼ N α+1 . Instead, in the parallel approach the total number gates only involves a sum over the log(N ) gates of the same run, thus giving # seq ∼ N α βl. The extra factor βl comes from the fact that those gates need to be controlled. Remarkably, the depth scaling with the number of times of the parallel-in-time approach is the same as the one of a single Trotter evolution up to time τ ≡ εN . See Appendix D for the details and the proof of the previous theorem. Something really interesting has happened: in the parallel approach we have an increase in depth which is logarithmic in the system size, but we have reduced the total number of gates exponentially in the number of clock qubits (with respect to a sequential approach). We can then state the following: Proposition 3. Given log(N ) clock-qubits and system of dimension d, the parallel-in-time approach outperform the computational times of the sequential approach for Remarkably, the condition for a convenient clock size is doubly logarithmic in the system's Hilbert space dimension d. Typically, a modest number of qubits for the clock, much smaller than the system size, is sufficient to improve computational times. Finally, let us remark that Proposition 3 will hold under rather general conditions, since it is based on the fact that a sum over N terms is involved in the estimation of # seq , while a sum over log (N ) terms is required for # par (see proof in Appendix F). 
However, the scaling of the number of gates # par we provided in Theorem 5 is still based on a pessimistic bound and linked to product formulae: the generic bounds we used for Trotterization can overestimate by far [83,84] the actual errors, which depend on the specific initial states and observables involved in the complete protocols. This means that actual implementations of the parallel-in-time protocols, whether based on product formulae or more advanced methods, might be much more efficient. The important message is that the advantages over sequential-in-time protocols, as stated in Proposition 3, hold more generally (see also below). B. Hamiltonian diagonalization and Cartan decomposition approach In this section, we repeat the circuit depth analysis in another relevant scheme, namely assuming one has access to a diagonalization of the Hamiltonian. In particular, we will also discuss how one can obtain such diagonalization variationally via the algorithm presented in [46]. Let us recall that there always exists a unitary W (whose columns are the eigenvectors of H) and a diagonal matrix D (whose entries are the eigenvalues of H) such that Without loss of generality, we can expand D is some basis of mutually commuting operators where [h µ , h µ ′ ] = 0 for all µ, µ ′ . If one has access to W and D, then the unitary evolution can be expressed as The power of Eq. (32) can be seen from Fig. 10, where it is shown that we can use the diagonalization of H to implement U (εt) at fixed depth. Namely, the circuit depth for µ e −icµεthµ and for µ e −icµεthµt ′ is exactly the same, we just change the parameters associated to each time evolution generated by h µ . This means that in a sequential approach, each run requires O(n) gates independently of the evolution time, and assuming for simplicity that each h µ acts as a one-body operator. The total number of gates is then # seq = O(nN ). Moreover, the benefits of diagonalizing the Hamiltonian H are amplified when using this technique in the circuit for preparing the history state. As shown in Fig. 11(a), we can see that instead of controlling log N gates U (2 j−1 ε) (where for j = 1, . . . , log N ), the history state can be prepared by first acting on the system qubits with the noncontrolled unitary W , followed by log N controlled gates e −iD2 j−1 ε , and finally by implementing a non-controlled unitary W † . This further reduces the depth required to prepare the history state. First, we do not need to control W , nor W † . Second, we note that controlling e −iDεt (for any t) is equivalent to controlling each term e −icµεthµ (since the h µ are mutually commuting). For instance, we can see in Fig. 11(b) that if the h µ are single-qubit Pauli operators acting on each qubit, then implementing a controlled e −iDεt gate, just requires controlling n single qubit rotations. This gives the following Theorem. Theorem 6. By replacing the history state preparation subroutine with the diagonalized Hamiltonian as in Eq. (30) and Fig. 11, we can prepare the history state with a total number of gates i.e. logarithmic in both the number of times and system size. The parallel-in-time advantage over the sequential approach condition now becomes N > β log (N ), which is independent of the system size and is virtually always reached. Of course, one could argue that if we have classical access to the diagonalized Hamiltonian, then we could just expand the initial state state and the measured operators in the energy eigenbasis to compute any expectation value. 
However, this kind of expansion will not be tractable for large problem sizes. Instead, we will show below that if W is accessible in a quantum computer, then one can still leverage the diagonalization for depth reduction. Finally, we note that if we want to study the entanglement in ρ T as in Theorems 1 and 4 (see also Figs. 8 and 9), then the final unitary W † in Fig. 11(a) can be omitted. This is due to the fact that the entanglement is invariant under local unitaries [85], and hence W † cannot change the entanglement nor the spectral properties of ρ T . It is worth highlighting the fact that the main challenge for using Eq. (32) is that it requires access to the decomposition in Eq. (30), and that W and D might not be readily accessible. However, one can still attempt to variationally On the left, we implement U (εt) by expanding the evolution into shorter-time evolutions U (ε) (which we can then implement via Trotterization), and then perform U (εt) = U (ε) t . This comes at the cost of increasing the depth as t is increased. On the right, we use the diagonalization of H as in Eq. (30) to express U (εt) = W e −iDεt W † . Moreover, it can be seen by expanding D in a basis of mutually commuting operators (as in Eq. (31)) that the circuit implementation of e −iDεt has the same depth for any t (see also Eq. (32)). learn them [48]. For example, one can use the Variational Hamiltonian Diagonalization algorithm in Ref. [46] which is aimed at training a parametrized ansatz for the diagonalization of H. The ansatz is composed of two parts: 1) A parametrized unitary W (α), and 2) A diagonal Hamiltonian D(β) such that One can quantify how much H(α, β) approximates the target Hamiltonian H by defining the cost function where ∥X∥ HS = Tr[X † X] is the Hilbert-Schmidt norm. Clearly, the cost is equal to zero if H = H(α, β). Thus, the parameters β and α are trained by solving the optimization task arg min Here, where a quantum computer is used to estimate the term in C(α, β) [46], while classical optimizers are used to train the parameters. In this variational setting, it is extremely important to pick an ansatz (i.e., a given unitary W (α), and diagonal Hamiltonian D(β)) which do not lead to trainability issues such as barren plateaus [52,[86][87][88], where the cost function gradients are exponentially suppressed with the problem size. One of the leading strategies to mitigate such issues is to use the so-called problem-inspired ansatzes, where one creates ansatzes with strong inductive biases [89][90][91] based on the problem at hand. Recently, one such method was developed which is exploits the Cartan decomposition of the Lie algebra generated by the target Hamiltonian H [92][93][94]. Below we explain such method. Consider the Hamiltonian of interest H. Then, without loss of generality we assume that it can be expressed as a sum of Hermitian traceless operators {H i } as where a i ∈ R. Then, let g = ⟨{iH i }⟩ Lie be the Lie closure of the set of operators [95]. Note that, by definition, iH ∈ g. The result in Ref. [47], provides an efficient-indim(g) circuit for the simulation of any e −iHt for any set of coefficients {a i }. To understand the technique of Ref. [47], we recall that a Cartan decomposition of the Lie algebra g refers to the decomposition of g into two orthogonal subspaces g = k ⊕ m, where k is a Lie subalgebra, i.e., [k, k] ⊆ k, whereas m is not: [m, m] ⊆ k. 
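As a toy illustration of the variational diagonalization just described (ours, not the paper's implementation): we take the cost to be the squared Hilbert-Schmidt distance C(α, β) = ‖H − W(α)D(β)W†(α)‖²_HS, which is one natural reading of the description above, build H so that an exact diagonalization within the ansatz family is guaranteed to exist, and minimize the cost classically. The generators, diagonal operators, and optimizer are all placeholders, not the ansatz used in the paper.

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

gens = [np.kron(X, Y), np.kron(Y, X), np.kron(Y, I2), np.kron(I2, Y)]   # W(alpha) generators
diag_ops = [np.kron(Z, I2), np.kron(I2, Z), np.kron(Z, Z)]              # commuting D(beta) terms

def W(alpha):
    out = np.eye(4, dtype=complex)
    for a, G in zip(alpha, gens):
        out = expm(-1j * a * G) @ out
    return out

def cost(params):
    alpha, beta = params[:len(gens)], params[len(gens):]
    D = sum(b * P for b, P in zip(beta, diag_ops))
    diff = H - W(alpha) @ D @ W(alpha).conj().T
    return np.real(np.trace(diff.conj().T @ diff))      # squared Hilbert-Schmidt distance

# Build H inside the ansatz family so that cost = 0 is achievable by construction.
rng = np.random.default_rng(3)
true_params = rng.normal(size=len(gens) + len(diag_ops))
D_true = sum(b * P for b, P in zip(true_params[len(gens):], diag_ops))
H = W(true_params[:len(gens)]) @ D_true @ W(true_params[:len(gens)]).conj().T

print(cost(true_params))                                 # 0 up to round-off
res = minimize(cost, x0=0.1 * rng.normal(size=true_params.size), method="BFGS")
print(res.fun)   # typically near 0; a few random restarts may be needed for harder instances
```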
Moreover, these two orthogonal subspaces satisfy [k, m] = m, and m contains the maximal commutative subalgebra, also known as the Cartan subalgebra, h of g. Note that for any pair of element ih 1 and ih 2 in h, we have [h 1 , h 2 ] = 0. The Cartan decomposition provides us an ansatz to diagonalize H as follows. First we note that H always admits a decomposition of the form with K ∈ e k and h ∈ h. Reference [47] provides us with an ansatz for Eq. (34) as we can now parametrize and optimize over the Lie group e k and the algebra h. That is, we simply pick where iB ν belongs to a basis of k, and with ih µ belonging to a basis of h. Taken together, Eq. (39) and (40) provide a problem-inspired ansatz for the diagonalization of H which we can use to solve Eq. (36). Fig. 2 for preparing the history state changes when using the Hamiltonian diagonalization. In particular, we see that instead of controlling the log N gates U (2 j−1 ε) for j = 1, . . . , log N we now need to implement non-controlled unitaries W and W † , and simply control the fixed-depth log N gates e −iD2 j−1 ε for j = 1, . . . , log N . We note that if we only care about the entanglement in ρT as in Figs. 8 and 9, then the final unitary W † can be omitted (the entanglement is invariant under local unitaries [85]). b) For the special case when the diagonal Hamiltonian D is expressed as a sum of single-qubit operators acting on each qubits (i.e., hµ in Eq. (31) is a one-body operator acting on the µ-th qubits), then controlling the gates e −iD2 j−1 ε simply requires controlling single-qubit gates. Let us here exemplify the Cartan decomposition-based method. Consider a general XY Hamiltonian of the form Here, one can prove that g = ⟨i{X j X j+1 , Y j Y j+1 , Z j }⟩ Lie ∼ = so(2n). Thus, the ensuing Cartan decomposition is where we used the notation Here, we can see that the ansatz for the diagonal part of the ansatz in Eq. (34) is simply On the other hand, it is clear from Eqs. (39) and (42) that a drawback in the proposal of Ref. [47] is that it requires us to implement gates which are obtained by exponentiation of highly non-local operators (e.g. e −iα X1Yn ), a task which can be hard to implement and lead to deep circuits. Hence, we propose a different parametrization for W (α). Consider the following set of local operators. We can prove that the following proposition holds. Proposition 4. iG is a generating set of the algebra k. The key implication of Proposition 4 is that we can generate any unitary in the unitary subgroup e k by exponentiating only the local operators in G. This means, that one can diagonalize H using an ansatz of the form We explicitly show the form of this ansatz in Fig. 12. Note that since the operators in Eq. (47) are two-body, then the circuit for W (α) only requires local two-qubit gates. Hence, such construction significantly reduces the circuit requirements over that in Ref. [47]. The question still remains to how large L needs to be. Here, we can leverage recent results from the quantum machine learning literature which state that by taking L ⩾ dim(k)/(2n − 2) it is generally sufficient to guarantee that any K ∈ e ik will be expressible [90]. Moreover, in this regime the ansatz is said to be overparametrized. In this overparametrization regime, the optimization of Eq. (36) becomes much easier to solve as many spurious local minima disappear [90,96]. Putting the previous results together, and assuming we can efficiently solve Eq. (36), we can derive the following theorem. Theorem 7. 
Let H be an XY Hamiltonian of the form in Eq. (41). Then, let D(β) be a diagonal operator as in Eq. (45), and let W (α) be a unitary as in (47). By replacing the history state preparation subroutine with the trained Figure 12. Ansatz for W (α). By using Proposition 4 we propose an ansatz for the diagonalizing unitary W (α) which only uses local gates acting on neighbouring qubits. We shown here a single "layer" of an n = 4 ansatz which is repeated L times. diagonalized Hamiltonian as in Eq. (34) and Fig. 11, we can implement the circuits used in Theorems 1, 2, and 4 with circuit depths in O(log(N )n). The results in Theorem 7 showcase the extreme power of diagonalizing H via its Cartan decomposition as we can implement all the circuits in Figs. 5, 7, 8, and 9 with a depth that only scales as the product of the number of system and clock qubits. VI. NUMERICAL SIMULATIONS In this section we first provide numerical simulations that showcase how the discrete-time approximations (computable via our algorithms) can capture the behaviour of their continuum time counterparts. Similarly, we also show numerically that the system-time entanglement provides a new way to understand dynamical properties of the system. Next, we will demonstrate how the variational Hamiltonian diagonalization algorithms can be used to reduce the depth of the history state preparation circuit, as discussed in Section V B 1. In all of our experiments, we consider a system of n-qubits evolving by a unitary generate by the timeindependent non-uniform XX model, whose Hamiltonian reads to show that in the thermodynamic limit this model exhibits a delocalization-localization transition at the critical point λ = J. Indeed, it is well known that such transition induces sharp changes in long-time dynamical properties such as the Loschmidt echo average [60]. Our goal is then use this paradigmatic model as a test-bed to show that our proposed discrete-time average of the Loschmidt echo can capture the behavior of their continuum time counterparts. A. Discrete time averages and system-time entanglement To study the discrete-time average of the Loschmidt's echo we have considered a chain of n = 200 sites with J = 2, α = √ 5−1 2 , and a number of clock qubits ranging from 1 to 10, corresponding to a maximum number of N = 1024 times. Note that with this choice, the system dimension is equal to 2 200 and hence, much larger than the clock Hilbert space dimension, N . To study the effects of the window size, we have also considered values of ε spanning from 0.05 up to 1.95 with spacing 0.1 (see Fig. 3). The initial state of our simulations is |ψ 0 ⟩ = (s + 99 + s + 100 + s + 101 )| ↓↓ . . . ↓⟩/ √ 3, where s + j denotes the creation operator at site j. As such, at t = 0, the state is only partially delocalized in the middle of the chain. All simulations, including the computation of the exact infinite-time averageL(ψ 0 ), where performed via Jordan-Wigner diagonalization, and we refer the reader to Appendix G for additional details. In Fig. 13 we first present a two-dimensional plot of the error between the infinite-time average of the Loschmidt echoL(ψ 0 ) and its discrete-time approximation L(ψ 0 ) (|L(ψ 0 )− L(ψ 0 )|) averaged over λ ∈ (0.1, 3.5) (with spacing ∆λ = 0.05), for different values of ε and n. Here, we can see that, as expected, the error is reduced by increasing the number of clock qubits. The improvement follows two tendencies. 
First, there is an overall improvement when increasing T (i.e., when moving up in the log(N ) axis for fixed ε), as this corresponds to better accuracy. On the other hand, for constant T it is beneficial to reduce ε (i.e., increase resolution), as shown by the blue solid curves. We further explore the effect of fixing ε and increasing log (N ) in Fig. 14 a). Therein we showL(ψ 0 ), as well as its discrete-time approximation L(ψ 0 ) for different number of clock qubits as a function of λ for fixed resolution ε = 0.45 (vertical dashed line in Fig. 13). First, we note that the infinite-time Loschmidt echo captures the delocalizationlocalization transition occurring at λ = J = 2. In particular, for λ < 2 we see thatL(ψ 0 ) is small, indicating a delocalized phase. On the other hand, for λ > 2 the evolved state is localized asL(ψ 0 ) is large. Next, let us note that as log(N ) increases, L(ψ 0 ) quickly becomes a good approximation for its infinite-time counterpart (as expected from Fig. 13). However, Fig. 14 also reveals that L(ψ 0 ) capture the delocalization-localization transition even for a small number of clock qubits. Already for log(N ) = 6 the inflection point of L(ψ 0 ) approaches the critical value λ = 2. Next, we study how the system-time entanglement, as measured through the subsystem purity Tr ρ 2 S for ρ S = Tr T [|Ψ⟩⟨Ψ|], approximates the infinite-time average Loschmidt echo (see Corollary 1). In Fig. 14 b) we plot L(ψ 0 ), as well as Tr ρ 2 S , for different number of clock qubits as a function of λ. Again, we see a clear convergence towardsL(ψ 0 ) as the number of clock qubits are increased. This result shows that the subsystem purity provides an excellent approximation ofL(ψ 0 ). Moreover, one can also observe that the system-time entanglement clearly captures the delocalization-localization transition. This fact can be readily understood from the fact that in the localized phase the state does not change considerably with time, and hence a small amount of entanglement is expected. This example perfectly exemplifies the fact that the system-time entanglement in the history state carries valuable information about the system dynamics. Moreover, since we know that Tr ρ 2 S = Tr ρ 2 T , then one can estimate the reduced state tomography by studying only the reduced state on the log(N )(≪ n) clock qubits. Figures 14 a) and b) show that both the discrete-time Loschmidt echo and the subsystem purity provide good approximations ofL(ψ 0 ). To better compare their performance, we show in Fig. 14 c) curves forL(ψ 0 ), L(ψ 0 ) and Tr ρ 2 S for the same chain of n = 200 spins, but for ε = 1.25, i.e., for less accuracy (see Fig. 3). In this regime, one can see that while L(ψ 0 ) suffers from undesired oscillations, Tr ρ 2 S can still provide a good approximation for the same number of qubits. In particular, Fig. 14 c) shows that L(ψ 0 ) can be smaller thanL(ψ 0 ) in unpredictable ways (due to insufficient resolution), meaning that L cannot be strictly used to provide strict bounds such as the one in Corollary 1. While Tr[ρ 2 S ] oscillates as well, this quantity never crosses the black points, in agreement with our bounds. Here we also observe that the systemtime entanglement provides a better convergence in the localized region. On the other hand, the entanglement curves are above the L curves in the delocalized sector. Notice however that this discrepancy can be mitigated by increasing the number of qubits. Finally, we note that in Fig. 
14 c) we also depict the differences L(ψ 0 ) −L(ψ 0 ) and Tr ρ 2 S −L(ψ 0 ) , which confirm that Tr ρ 2 S is always strictly larger thanL(ψ 0 ), whereas L(ψ 0 ) can indeed be smaller that the infinite-time average. Finally, as an example of Corollary 3 we also numerically show how the system-time entanglement provides a bound for the fluctuation of observables. We use as an example the observable O = s + L/2 s − L/2+1 + s + L/2+1 s − L/2 and as the initial state |ψ 0 ⟩ = (s + L/2 + s + L/2+1 )| ↓↓ . . . ↓⟩/ √ 2. In this case, the bounds of Eq. (27) becomes Fig. 15 we plot the numerical results for a chain of n = 100 sites and log(N ) = 9 qubit clocks (i.e., 512 times). We see that while the bound is not tight, bothL(ψ 0 ) and Tr ρ 2 S are capable of clearly separating the different phases. As expected from our bounds, the system-time entanglement provides a less tight but strict bound. However, given that one can experimentally compute the system-time entanglement efficiently in quantum computers, this bound is still useful for practical purposes. Moreover, it is important to highlight again the fact that the system-time entanglement is obtained from a discretetime formalism (in contrast toL(ψ 0 ) which requires infinite time averages). As such, our new notion of systemtime entanglement provides valuable and strict information about the system's observable dynamics and its eventual equilibration (a feature not available for the discrete time Loschmidt echo L(ψ 0 )). B. Diagonalization via Cartan decomposition In this section we show how one can use the variational Hamiltonian diagonalization (to diagonalize the Hamiltonian in Eq. (48) and thus reduce the depth of the history state preparation circuit. We will take the ansatz for D(β) and W (α) as appearing in Eqs. (45) and (47). Thus, as depicted in Fig. 12, W (α) consists of L layers of two qubit gates generate by XY and Y X arranged in a brick wall fashion, whereas the diagonal part D(β) is just a sum of Pauli Z operators on each qubit. To train the parameters Figure 15. Observable fluctuations as a function of λ for a chain of n = 100 sites. We depict the observable fluctuations σ 2 O , the infinity-time average of the Loschmidt echoL(ψ0) and the reduced subsystem purity Tr ρ 2 S . We consider log(N ) = 9 qubit clocks, and we take ε = 0.5 α and β), we will optimize the Hilbert-Schmidt cost function defined in Eq. (35). Details of the simulation can be found in Appendix G. We considered a chain of n = 6 sites, and set J = 2, and α = √ 5−1 2 . Moreover, we diagonalized the Hamiltonian for λ = 1, 2, 3, thus allowing us to show the success of the algorithm in each important region of the phase diagram. In Fig. 16 we show the training curves (loss function versus iteration step) for an ansatz with L = 18 layers the three different values of λ that we considered. Here we can clearly see that as the number of iterations increases, the cost function value goes to zero, indicating that we can accurately diagonalize the target Hamiltonian. Notably, we can see that all the trained curves converged to the solution, meaning that the optimizer did not get stuck in a local minima. Such extremely high optimization success rate can be understood from the fact that the circuit is overparametrized [98], i.e., it contains enough parameters to explore all relevant directions. In fact, using the results from [98] we know that a circuit with a set of generators G will be overparametrized if the number of parameters is dim(⟨iG⟩ Lie ). 
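Concretely, the training setup just described can be sketched with dense matrices at n = 6. This is a schematic reconstruction for illustration only: the exact prefactors of Eq. (48), the ordering of the brick-wall layers, and the choice of classical optimizer are assumptions here (they are not fully reproduced in the extracted text), and the brute-force finite-difference optimization below is slow but adequate at this size.

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

# --- Pauli utilities -------------------------------------------------------
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def op_on(op, j, n):
    """Embed a single-qubit operator on site j of an n-qubit register."""
    out = np.array([[1.0 + 0j]])
    for k in range(n):
        out = np.kron(out, op if k == j else I2)
    return out

def bond(a, b, j, n):
    """Two-body operator a_j b_{j+1}."""
    return op_on(a, j, n) @ op_on(b, j + 1, n)

# --- Target Hamiltonian: representative non-uniform XX chain (the exact
# --- normalization of Eq. (48) is an assumption of this sketch) ------------
n, J, lam = 6, 2.0, 1.0
alpha_AA = (np.sqrt(5) - 1) / 2            # quasi-periodicity parameter of the model
H = sum(J * (bond(X, X, j, n) + bond(Y, Y, j, n)) for j in range(n - 1))
H = H + lam * sum(np.cos(2 * np.pi * alpha_AA * (j + 1)) * op_on(Z, j, n) for j in range(n))

# --- Ansatz of Eqs. (45) and (47): brick-wall XY/YX layers and a diagonal Z part
gens = [bond(X, Y, j, n) for j in range(n - 1)] + [bond(Y, X, j, n) for j in range(n - 1)]

def W_ansatz(angles, layers):
    """Product of exponentials of the local generators (layer ordering simplified)."""
    W = np.eye(2 ** n, dtype=complex)
    for layer in angles.reshape(layers, len(gens)):
        for a, G in zip(layer, gens):
            W = expm(-1j * a * G) @ W
    return W

def D_diag(betas):
    """D(beta) = sum_j beta_j Z_j, cf. Eq. (45)."""
    return sum(b * op_on(Z, j, n) for j, b in enumerate(betas))

def cost(params, layers):
    """Hilbert-Schmidt cost of Eq. (35)."""
    a, b = params[: layers * len(gens)], params[layers * len(gens):]
    M = H - W_ansatz(a, layers) @ D_diag(b) @ W_ansatz(a, layers).conj().T
    return float(np.real(np.trace(M.conj().T @ M)))

layers = 3                                  # overparametrization threshold reported for n = 6
rng = np.random.default_rng(1)
x0 = 0.1 * rng.normal(size=layers * len(gens) + n)
res = minimize(cost, x0, args=(layers,), method="BFGS")
print(res.fun)   # expected to approach 0 for an expressive (overparametrized) ansatz
```

Returning to the parameter count of the ansatz: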
Importantly, we can use Proposition 4 to know that dim(⟨iG⟩ Lie ) = n(n − 1) = 30. Since the ansatz W (α) contains 2(n − 1) parameters per layer (see Fig. 12), then we can overparametrize it with L = ⌈n/2⌉. Indeed, in Fig. 17 we show the minimal cost achieved versus the number of layers L, and as expected we see a computational phase transition at L = 3: For smaller number of layers, the ansatz is underparametrized and can get stuck in local minima, but for L ⩾ 3 it is overparametrized and training becomes easier. As the plot shows, once the model is overparametrized, further increasing the number of layers does not lead to any improvement in the minimum loss value achievable. Lastly, in Fig. 18 we show the success of the Figure 18. Testing the success of the Hamiltonian diagonalization algorithm. The left panel shows the initial "diagonalized" HamiltonianD(α) = W (α)HW † (α) where α is a vector of initial random parameters αi ∈ [0, 2π). The right panel instead featuresD(α * ) obtained applying the trained diagonalizing unitary to its target Hamiltonian H. Here α * are the optimal parameters of the random successful run at hand. diagonalization by applying a successfully trained diagonalizing unitary W (α) to its target Hamiltonian H, obtaining a perfectly diagonal matrix and verifying the success of the algorithm. VII. DISCUSSIONS The simulation of quantum systems has widely been considered the most important application of quantum computation since its conception [99]. Traditionally, the focus of quantum simulations has revolved around computing quantum states and physical quantities at a given time, harnessing the exponential growth of the Hilbert space of qubits to mimic the behavior of many-body systems. However, many fundamental quantities, such as correlation functions or the equilibrium state of a quantum system, are associated with large temporal sums of the previous. In this manuscript we have shown that by treating time itself quantum mechanically, which in a computational scheme corresponds to using clock-qubits, those quantities become readily accessible. This result arises from a fruitful analogy between the recent quantum time discussions in quantum foundations and quantum gravity fields, and the fields of quantum information and computation. More importantly, by developing quantum-time inspired algorithms, we have disclosed new important connections between the correlations contained in history states and the problem of equilibration of an isolated quantum system, thus unveiling a link to statistical mechanics as well. In particular, we have shown that the system-time entanglement is a good measure of equilibration, and as a by product, how the formalism provides a way to prepare approximate equilibrium states. Whether under proper conditions this can provide a useful scheme for studying thermalization as well is left for future investigations. These considerations show that, in addition to the practical applications of the various proposed algorithms, the framework we presented offers new insights that can be applied to diverse areas of many-body physics. The system-time entanglement is also indicative of the need for entangling gates to prepare history states. In this work we have considered a fixed architecture which even for a direct Trotterization approach provides an exponential advantage over sequential in time schemes. We have also considered a further simplification via a variational Hamiltonian diagonalization approach. 
Interestingly, the fact that the entanglement is state and evolution dependent suggest that one can consider more adaptive preparation schemes, where the entanglement is used as a measure of "compressibility" (or complexity) of the history state. While we leave the study of these possibilities for future investigations, we should mention that a global variational scheme has been proposed recently [30] based on the Feynman-Kitaev Hamiltonian [6]. As with any variational protocol, knowledge of the structure of the solution is valuable for choosing proper ansatzes, thus rendering our efficient protocol particularly relevant for any near-term implementation as well. At the same time, all of our protocols can be easily updated by replacing the history state preparation subroutine. The newly disclosed parallel-in-time advantages clearly hold and can be readily extended to any related proposal including [30]. In all the protocols we have considered, the circuits give information about the history of a closed system. However, these protocols can be extended to consider the history of states which at some points in time are being subject to measurements. This becomes feasible by following the recent treatment of Ref. [7] that incorporates ancillary memories (following Von Neumann) and describe the history of the whole (history of the system+ancillas). This framework opens many new interesting possibilities for novel quantum algorithms. A similar treatment may be applied to quantum evolutions associated to non-unitary channels (open systems). On the other hand, even if we focus in closed systems there is much to explore yet: most of the parallel protocols in the paper involve measurements on the system side. We also know that adding a projective measurement in the time basis of the clocks qubits yields predictions at a given time. However, the most characteristic features of quantum mechanics arise when one is considering measurements in different bases meaning that the full potential of quantum time effects still need to be explored. In addition, multiple copies of history states may be used to study higher momenta of mean values, thus opening even more possibilities. Going even further, it has been recently discussed [12][13][14]31] that the PaW formalism is not enough for achieving a fully symmetric version of quantum mechanics. Hence, we can speculate that in the near future, and motivated by the current results, these and other proposals may allow to provide further informational and computational insights related to the time domain. APPENDICES FOR "PARALLEL-IN-TIME QUANTUM SIMULATION VIA PAGE AND WOOTTERS QUANTUM TIME" Here we present derivations, additional details for the main results, and further general discussions on quantum time notions. The appendices are organized as follows: In Appendix A we give an overview of quantum time approaches and describe connections between our work and the literature. In Appendix B we explain in more detail the relation between quantum time and the equilibration problem and prove the related Theorems in the main body. In Appendix C we give detailed proofs of all the different circuits we proposed in the main body. In Appendix D we prove Theorem 5 about circuit depths in the sequential and parallel-in-time schemes. In Appendix E we prove Proposition 4 about the algebra of the XY model. In Appendix F we prove Theorem 7 regarding the depth of the total protocols involving both Hamiltonian diagonalization and history states. 
Finally, in Appendix G we provide details about the methods employed in the numerics (sec. VI). Appendix A: Connection to literature and Overview of Quantum Time approaches Here we briefly discuss the broader conceptual picture of "quantum time-related proposals". This should also clarify the range of applicability of our current ideas. Regarding the Page and Wootters approach [38], it is worth noting that their original idea was to replace dynamics with quantum correlations. The physical picture was a static universe from which evolution emerges from a convenient separation between the system, and the rest. In this sense the "time" Hilbert space corresponds to a "cosmological" clock. One main motivation for these ideas was the discussion about time in quantum gravity [36], where the Wheeler-deWitt equation [102] suggests a static universe. Such equation appears as a constraint induced by the "gauge" freedom in the choice of coordinates in general relativity, and can be understood within the framework of Dirac's generalization of Hamiltonian dynamics to constrained systems [103,104]. However, there are important conceptual and mathematical differences between the PaW's and Dirac's approaches which are at the core of our ability to leverage the first and not the second to develop useful computational schemes. To clarify this statement, let us briefly discuss a simple but archetypal [105] application of Dirac's approach: in classical mechanics, one can accommodate time itself and its conjugate variable p t in an extended phase-space by introducing a new variable τ which parametrizes phase-space variables, including t = t(τ ), p t = p t (τ ). The Poisson brackets are also extended by imposing {t, p t } = 1 with p t ̸ = H which for a single particle and x 0 ≡ t, p t ≡ p 0 completes the algebra {x µ , p ν } = δ µ ν , which now is explicitly covariant. Part of the quantization scheme now consists of the replacement which for µ = ν = 0 is just [T, P T ] = iℏ (when the system is a particle), i.e. x 0 ≡ T with [T, H] = 0 for H some Hamiltonian of the particle. On the other hand, the independence of physical quantities on the way the τ -parametrization is chosen leads to J |Ψ⟩ = 0 after quantization. This reparametrization invariance is analogous to the general covariance of general relativity, and the constraint is analogous to the Wheeeler-deWitt equation. However, contrary to our main body discussion, in Dirac's approach the constraint equation defines the so-called physical Hilbert space which is regarded as different from the previous "kinematic" Hilbert space. Moreover, the physical Hilbert space is not treated as a subspace of the latter, instead the kinematical one appears only as an auxiliary step in the whole quantization process but not in the final construction. As a consequence, the time operator is ruled out in the final formalism for not being a physical observable (see e.g. [106] for discussions about this procedure and the related "Hilbert space problem"). As we have shown in the main body, in the PaW approach the interpretation and treatment of the constraint are completely different: the time operator is not disregarded since it corresponds to an actual observable of the clock system. The complete Hilbert space e.g. as defined by (A1) is preserved, while the state of the system is recovered by conditioning [107]. 
The theoretical ideas underlying our manuscript clearly exploit the "extended" Hilbert space associated with the PaW approach and are mostly inspired by the recent developments in the context of quantum information [7,8,11,12], and in particular in the discrete-time formulation [8], which was further extended in [9]. Let us also stress that in for the purposes of our manuscript it is not relevant whether explicit covariance is achieved via the definition of a time operator. This means that we can consider arbitrary many-body systems as we have shown in the main body. This also includes relativistic systems (properly discretized quantum field theories) but their simulation corresponds to the evolution as seen from a fixed reference frame: in this case there is no simple rule to relate a history state in a given reference frame to another. In this sense, we regard the PaW formalism as non-explicitly-relativistic. There is however a simple situation where such rule is straightforward: for a single relativistic particle Lorentz transformations can be introduced explicitly and geometrically (as space-time "rotations" independently of the theory) [11,12]. For this to be actually achieved one needs to preserve the kinematical space and deal with an extended inner product, a result which is again highlighting the advantages of the extended scheme. This approach was recently developed in [11] for Dirac's particles and in [12] for scalar particles where the new inner products were successfully related to the "physical" ones while preserving the extended Hilbert spaces. As a consequence, the associated history states can still be realized by the means presented in this manuscript (adopting some discretization of space-time), and in principle, Lorentz transformations may be introduced as non-local gates acting on both the system and time qubits. Another interesting feature of these single-particle quantum time formalisms (which preserve the extended Hilbert space) is that their "second quantization", as introduced in [12][13][14] naturally leads to a new approach to many-body scenarios and to quantum field theories in which space and time are explicitly on equal footing (see also the closely related "quantum mechanics of events" proposal [31]). In particular, the approach in [13] also leads to a redefinition of the Path Integral formulation [14]. In fact, the real challenge to treat space and time on equal footing at the Hilbert space level in relativistic settings has to do with another more subtle asymmetry in the treatment of time [13,14,40,108]: joint systems separated in space are described by the tensor product of the corresponding Hilbert spaces. No such rule is applied to time. This asymmetry is particularly evident in the case of quantum field theories, in which case space is treated as a site, which e.g. for bosons is equivalent to a tensor product structure in space. While time is a parameter, it is not a site index and there is no associated tensor product structure. This is manifest in the equal time canonical algebra imposed on the fields which requires a fixed foliation. We notice also that this is an obstacle in defining a notion of "time-like entanglement" (see however [13,14,[109][110][111] for recent related discussions), a consideration which applies to all QM: the previous asymmetry is present in any quantum mechanical system, as also discussed in [10,13,14,40,42,112]. 
For these reasons, it is not sufficient to define a quantum time operator to solve "the problem of time" or more precisely to have an explicit space-time symmetric version of QM. This renders the aforementioned second quantization of "PaW particles" particularly interesting. While its description clearly exceeds the purpose of this manuscript, we can speculate that these developments may provide additional quantum computational and informational tools, just as the PaW approach inspired the various algorithms of our manuscript. In order to relate this state withρ we can use the energy basis |k⟩ so that H|k⟩ = E k |k⟩. Any initial state has an expansion |ψ 0 ⟩ = k c k |k⟩. Notice that even if there is degeneracy we can always write a fixed pure state as before, with |c k | 2 := j |ψ kj | 2 , |k⟩ := c −1 k j ψ kj |kj⟩ , and H|kj⟩ = E k |kj⟩, i.e. j is a degeneracy index. The quantity c k |k⟩ is the projection of |ψ⟩ onto the subspace of states with energy E k . Thus we obtain which is precisely the discrete time version ofρ as defined in Eq. 17, i.e. Notice that while the purity ofρ coincides withL, the purity of ρ S is not L. Remarkably, the natural discrete-time generalization ofL = Tr ρ 2 is Tr ρ 2 S and not L: the system-time entanglement provides the proper discrete-time version of time average related bounds (it is worth remarking that the discrete-time entanglement provides strict bounds to the infinite and continuum time averages). This is captured by Theorem 3 and its corollaries. We are now in a position to give the proof of Theorem 3: Proof. Let us first rewrite ρ S as with Being a quantum state, we can diagonalize ρ S in some basis as In fact, it is clear that this is what is obtained by tracing over the clocks with |Ψ⟩ in its Schmidts decomposition (15). We can combine these two expansions to write The quantity |⟨l|k⟩| 2 defines a double stochastic matrix thus yielding the desired majorization relation thus implying the desired majorization relation between states (we recall thatρ = k |c k | 2 |k⟩⟨k|). Notice that similar ideas have been used in [8]. It is also clear that by taking the limits ε → 0 and then T → ∞ the energy dephasing is recovered: the first limit can be considered by writing with dt ≡ ε. The large T limit now is the familiar limit used conventionally (which yields a delta), thus leading again to the asymptotic relation Instead, in the periodic case we can use the fact that the periodicity condition requires E k ≡ E l k = 2πl k /T for l k an integer. This means that the last equality holding only in the periodic case. By replacing this in Eq. (B5) we obtain precisely ρ S =ρ. Notice that the Corollaries 1 and 2 follow immediately from the Schur-concavity of functions defining any entropy. These Corollaries are the particular case of the linear entropy. Interestingly, new bounds can be obtained by just considering other entropies. Let us also make more precise the main body statement that the quantum time formalism gives a new interpretation to the loss of coherences induced by a time average: as we said, since the system is "entangled with time", by ignoring the "clock qubits" we lose information. For discrete time, the information loss induces ρ →ρ which is a quantum channel with Krauss operators K t := e −iHtε / √ N such thatρ As usual this quantum channel can be purified by using the isometric extension [114] U [K t ] := t |t⟩ ⊗ K t so that the channel is recovered by tracing over the "environment" (here the clock) in a global state U [K t ]|ψ⟩. 
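As a small numerical consistency check (an illustration with our own variable names, not taken from the references), one can verify on a random few-qubit example that the channel (B9) built from the Kraus operators K_t coincides with the reduced system state obtained by tracing the clock out of the purified state U[K_t]|ψ⟩:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)

# Random Hermitian H and pure initial state on 3 qubits; N = 16 clock times.
dim, N, eps = 8, 16, 0.3
A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
H = (A + A.conj().T) / 2
psi0 = rng.normal(size=dim) + 1j * rng.normal(size=dim)
psi0 /= np.linalg.norm(psi0)
rho = np.outer(psi0, psi0.conj())

# Kraus operators K_t = e^{-iHt eps}/sqrt(N) of the channel (B9).
K = [expm(-1j * H * t * eps) / np.sqrt(N) for t in range(N)]
rho_bar = sum(Kt @ rho @ Kt.conj().T for Kt in K)

# Purification: |Psi> = sum_t |t> (x) K_t|psi_0>, i.e. the isometric extension U[K_t]|psi_0>.
Psi = np.zeros(N * dim, dtype=complex)
for t in range(N):
    clock_t = np.zeros(N)
    clock_t[t] = 1.0
    Psi += np.kron(clock_t, K[t] @ psi0)

# Tracing out the clock recovers the channel output.
Psi_mat = Psi.reshape(N, dim)                  # indices (clock time, system)
rho_S = Psi_mat.T @ Psi_mat.conj()
assert np.allclose(rho_S, rho_bar)
print(np.real(np.trace(rho_S @ rho_S)))        # purity Tr[rho_S^2] = Tr[rho_T^2]
```

The purified global state U[K_t]|ψ⟩ appearing in this construction is the object discussed next.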
This global state is precisely the history state of Eq. (4), i.e., We see that one can "rediscover" the quantum time formalism from the natural purification of the channel (B9). Just as in the general case, where a nontrivial channel is induced by correlations between the system and an environment, the system is correlated (entangled) with time. Moreover, the quantum channel's theory [114] implies that there is a unitary V such that One possible unitary is provided by the circuit of figure 2. We should remark however that the operation of tracing over the clock degrees of freedom is very different from measuring on the clock register and conditioning the state of the system. When conditioning one has access to the clock, as it is required for implementing a projection at a given time state. In summary, the history state contains both information about evolution at specific times, recovered from conditioning, and about the "equilibration channel" (B9), recovered by ignoring the clock. The final Hadamard gate produces the state Then, probability of measuring the ancilla qubit in the zero state is Similarly, the probability of measuring the ancilla qubit in the one state is Combining Eqs. (C6) and (C7) shows that the expectation value of the Z operator on the ancilla qubit is By adding, an S † gate in place of the colored dashed gate in Fig. 4 one finds Since this procedure needs to be repeated N times, and since we want to estimate the expectation values up to precision δ, then one needs to perform O(N/δ 2 ) experiments. Proof of Theorem 1 Proof. Consider the circuit in Fig. 5. For ease of calculation, we will assume that ρ = |ψ 0 ⟩⟨ψ 0 | for some initial state |ψ 0 ⟩. We also recall that both O 1 and O 2 are Pauli operators. The input state to the circuit is |0⟩ log N |ψ 0 ⟩ |0⟩, where we recall that |0⟩ log N is the initial state of the clock-qubits. The action of the Hadamard gates is to map Here we have used the identity |+⟩ t=0 |t⟩ which follows by expressing t in its binary form t = log N j=1 t j 2 j−1 . Assuming the colored dashed gate in Fig. 5 is an identity, we next have a controlled O 2 operation. This produces the state Thus, the probability of measuring the ancilla qubit in the zero state is Similarly, the probability of measuring the ancilla qubit in the one state is If, one wishes to estimate ⟨Z⟩ up to precision δ, then one needs to perform O(1/δ 2 ) measurements. This results in O(1/δ 2 ) experiments. Proof of Proposition 2 Proof. Consider the circuit in Fig. 6. We can readily see that the circuit therein is nothing but the circuit for computing the overlap between two quantum state ρ and σ derived in Ref. [43] and also shown in Sup. Fig. 1. To understand the algorithm of Ref. [43], let us consider first the case of ρ and σ being single qubit states. We know that the following identity holds Supplementary Figure 1. Algorithm for computing the overlap between two quantum states. We show an algorithm which takes as input two arbitrary n-qubit quantum states ρ and σ and which estimates the overlap Tr[ρσ]. In a) we show the algorithm for the case when ρ and σ and single qubit states, and in b) the generalization for larger qubit sizes. We can see that in all cases the circuit depth is equal to two, and hence independent of n. where SWAP denotes the SWAP operator, whose action can be defined in the computational basis as SWAP |ij⟩ = |ji⟩. 
Then, let us define the Bell basis: It is not hard to see that the SWAP operator is diagonal in the Bell basis, and that it can be expressed as Thus, we can estimate the expectation value of the SWAP operator over the state ρ ⊗ σ ⟨SWAP⟩ ρ⊗σ = Tr[(ρ ⊗ σ)SWAP] = P (Φ 1 ) + P (Φ 2 ) + P (Φ 3 ) − P (Φ 4 ) , where here we defined the probabilities P (Φ i ) = ⟨Φ i |ρ ⊗ σ|Φ i ⟩. That is, P (Φ i ) denotes the probability of measuring the state ρ ⊗ σ in the Bell basis, and obtaining the measurement outcome |Φ i ⟩. This result shows that we cane estimate Tr[ρσ] by preparing ρ ⊗ σ and measuring in the Bell basis. Crucially, one can readily prove the such a measurement can be performed with the circuit in Sup. Fig. 1(a), which is composed of a CN OT gate and a Hadamard gate in the first qubit (i.e., with the inverse of the circuit used to prepare a Bell state). A similar result will follow when ρ ⊗ σ are n-qubit states, but now one has to measure the expectation value of the operator n j=1 SWAP j A ,j B , where here SWAP j A ,j B denotes the operator that swaps the j-th qubit of ρ with the j-th qubit of σ. Since each operator SWAP j A ,j B can be expanded in a (local) Bell basis, the circuit used to measure in the eigenbasis of n j=1 SWAP j A ,j B , and hence to estimate Tr[ρσ] is precisely that which is shown in Sup. Fig. 1(b). Finally, since we are computing an expectation value, if we want to reach a precision δ for each N , we need to run O(N/δ 2 ) experiments. Proof of Theorem 2 Proof. The proof of this theorem follows very closely that of Proposition 2. In particular, we can see from Fig. 7 that we are performing a SWAP test, i.e., measuring the overlap via Bell basis measurements, between the reduced state ρ S of the history state on the system qubits, and the state |ψ 0 ⟩⟨ψ 0 |. On the other hand, one can readily see by expanding ρ S as in Eq. (B1) that the overlap between ρ S and |ψ 0 ⟩⟨ψ 0 | is precisely L(ψ 0 ). Hence, we can use the circuit in Fig. 7 to estimate L(ψ 0 ) up to δ precision with O(1/δ 2 ) experiments. in order to write which up to the border term is the Aubry-Andre Hamiltonian [97], with {c j , c † j ′ } = δ jj ′ and the other anticommutators vanishing. Here σ indicates the parity dependence of H σ on the sector of states in which acts, with σ = ±1 ≡ e iπN [116] and with the convention c n+1 = c 1 . As usual the vacuum state |0⟩ is mapped to | ↓ . . . ↓⟩ with Z j = 2c † j c j − 1. Notice that for each parity we can write H σ = c † M σ c with c = (c 1 c 2 . . . ) t and M σ an n × n matrix. For general "single particle" (sp) states |ψ⟩ = j ψ j c † j |0⟩ ≡ ψ 1 | ↑↓ . . . ⟩ + ψ 2 | ↓↑ . . . ⟩ + . . . this allows us to write with ψ = (ψ 1 ψ 2 . . . ) t . Thus we only need to exponentiate a matrix of n × n dimensions, rather than the original Hamiltonian of size 2 n × 2 n . The states we employed in Section VI are mapped to sp states, allowing us to reach large values of n. In addition, one has to perform the sums and/or integrals in time. Let us also mention that the spectrum of the model exhibits remarkable (fractal) properties [97,117,118] rendering numerical treatments almost mandatory for our purposes. To obtain L(ψ 0 ) we basically summed the expression (G2) over discrete times. Instead, in order to obtain the exact Loschmidt's echo average we used for |k⟩ of the eigenbasis H. Notice that we don't need the complete basis |k⟩ but only those eigenstates corresponding to sp excitations so that |k⟩ ≡ ϕ k c † |0⟩ implying ⟨k|ψ⟩ = ϕ † k ψ, for |ψ⟩ a sp state. 
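To make the Jordan-Wigner reduction concrete, a minimal single-particle sketch of the discrete-time Loschmidt average L(ψ0) and of its exact infinite-time counterpart L̄(ψ0) could look as follows. It is illustrative only: the exact prefactors of the single-particle matrix corresponding to Eqs. (48) and (G1) are assumed, the spectrum is assumed non-degenerate, and the chosen λ is just one point of the scan used in the figures.

```python
import numpy as np

# Single-particle (sp) reduction of Appendix G: for one-excitation states the
# Loschmidt amplitude is psi^dagger e^{-iMt} psi, with M only n x n.
n, J, lam, eps = 200, 2.0, 2.5, 0.45
alpha = (np.sqrt(5) - 1) / 2
M = np.zeros((n, n))
for j in range(n - 1):                       # nearest-neighbour hopping
    M[j, j + 1] = M[j + 1, j] = J
M += np.diag(lam * np.cos(2 * np.pi * alpha * np.arange(1, n + 1)))  # quasi-periodic field

# Initial one-excitation state spread over the three central sites.
psi = np.zeros(n, dtype=complex)
psi[[99, 100, 101]] = 1 / np.sqrt(3)

evals, phi = np.linalg.eigh(M)               # sp eigenmodes
c = phi.conj().T @ psi                       # overlaps <k|psi_0>

def loschmidt(t):
    amp = c.conj() @ (np.exp(-1j * evals * t * eps) * c)
    return np.abs(amp) ** 2

N = 2 ** 6                                    # log(N) = 6 clock qubits
L_discrete = np.mean([loschmidt(t) for t in range(N)])
L_bar = np.sum(np.abs(c) ** 4)                # infinite-time average (non-degenerate spectrum)
print(L_discrete, L_bar)
```

Increasing the number of clock qubits (i.e., N) brings the discrete average closer to L̄(ψ0), as in Figs. 13 and 14.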
For computing the infinite temporal variance of observables we used a similar strategy. One can prove that for a general observable and state [4], σ²_O = Σ_{k≠k'} |ρ_{kk'}|² |O_{kk'}|², with O_{kk'} = ⟨k|O|k'⟩ and ρ_{kk'} = ⟨k|ρ|k'⟩. In the main-body example we used a single-particle operator which in fermionic notation can be written as O = c†_{L/2} c_{L/2+1} + h.c. = Σ_{i,j} M_{ij} c†_i c_j. Thus one can replace |O_{kk'}|² with |M_{kk'}|² := |φ†_k M φ_{k'}|², and |ρ_{kk'}|² with |φ†_k ψ|² |φ†_{k'} ψ|². Regarding now the numerical computation of E_2, notice first that the quantity Tr ρ²_T = Tr ρ²_S also depends on the overlaps of the state at different times. For sp states these overlaps have the expression ⟨ψ_t|ψ_{t'}⟩ = ψ† e^{−iM_σ(t'−t)ε} ψ, meaning again that we only need to exponentiate a matrix of n × n size. For time-independent Hamiltonians only the time differences matter. This allows one to write the ρ_S purity as the single sum Tr ρ²_S = (1/N²) Σ_{d=−(N−1)}^{N−1} (N − |d|) |⟨ψ_0|e^{−iHdε}|ψ_0⟩|², which holds for general time-independent Hamiltonians. For our numerics, we have combined Eqs. (G6) and (G2); a small self-contained sketch of this computation is given below.
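Concretely (a self-contained illustration; the prefactors of the single-particle matrix are assumed, λ is an arbitrary point of the scan, and the remaining parameters follow the Fig. 15 setup with n = 100, ε = 0.5 and log N = 9):

```python
import numpy as np

# Purity of rho_S from time-difference overlaps only:
# Tr[rho_S^2] = (1/N^2) sum_d (N - |d|) |<psi_0| e^{-iH d eps} |psi_0>|^2 .
n, J, lam, eps, N = 100, 2.0, 2.5, 0.5, 2 ** 9
alpha = (np.sqrt(5) - 1) / 2
M = np.diag(lam * np.cos(2 * np.pi * alpha * np.arange(1, n + 1)))
for j in range(n - 1):
    M[j, j + 1] = M[j + 1, j] = J

evals, phi = np.linalg.eigh(M)
psi = np.zeros(n, dtype=complex)
psi[[n // 2 - 1, n // 2]] = 1 / np.sqrt(2)    # the two central sites, as in Fig. 15
c = phi.conj().T @ psi

def echo(d):
    amp = c.conj() @ (np.exp(-1j * evals * d * eps) * c)
    return np.abs(amp) ** 2

purity = sum((N - abs(d)) * echo(d) for d in range(-(N - 1), N)) / N ** 2
L_bar = np.sum(np.abs(c) ** 4)                # infinite-time average (non-degenerate spectrum)
print(purity, L_bar)
```

In such runs one finds Tr[ρ_S²] ⩾ L̄(ψ0), in line with the majorization argument above and with Corollary 1.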
2023-08-25T06:42:19.325Z
2023-08-24T00:00:00.000
{ "year": 2023, "sha1": "d944e3b494b0f2e3faed381de9f575177aa8de96", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "cfe5b0f0be481d9dbbf057f1ceaed6e05bafc784", "s2fieldsofstudy": [ "Physics", "Computer Science" ], "extfieldsofstudy": [ "Physics", "Mathematics" ] }
249073437
pes2o/s2orc
v3-fos-license
An Investigation on the Detection of Human Leucocyte Antigen HLA Class I Loci (A, B, C) and Class II Loci (DR, DQ) Allele Frequency in Nepalese Population by Next Generation Sequencing Introduction : Human Leucocyte antigen (HLA) has offered a tremendous contribution to the human population by providing definite and undeniable facts of immense magnitude in human genetics, disease dynamics, and transfusion and transplantation. Materials and Methods : Blood samples of 90 unrelated healthy populations residing in Kathmandu, a central region of Nepal, were collected. DNA was extracted from the blood samples, and the allele frequency for HLA class I loci (HLA –A,-B,-C) and II loci (DRB1 and DRQ1) was studied by ion torrent Next Generation Sequencing platform using GenDx NGSgo R workflow. Further, comparing the most frequently detected HLA alleles in the Nepalese population with populations of neighboring countries was also done. Results : A total of 10 HLA *A alleles, 18 HLA*B alleles,11 HLA*C alleles,11 HLA*DRB1, and 4 HLA*DQB1 were detected. The most common alleles detected were HLA A*01(16.67%), A *33(31.67%), HLA B*35(13.33%) B*44(11.67%) HLA C* 04(16.67%),C* 07(23.33%), C*15(16.67%).HLA - DR*07(16.67%),DR*15(25.0%) HLA-DQ *05(38.33%) respectively. Comparison with a population of the neighboring countries and Caucasian population revealed that these common alleles were also present in high frequency in North Indian Hindus and some frequencies with Mongolian and Caucasian population but not with the Chinese population. Discussion : We believe that this data is the first report of HLA class I loci (HLA A, B, C) and II loci (DR*B1 and DQ*B1) in a healthy population from Nepal, and this provides helpful information with diverse applications in Nepal. IntroductIon Nepal, an Asian landlocked country, is located between China to the north and India to the east, west, and south. It has a territory that extends roughly 90 to 150 miles from north to south and 500 miles from east to west. [1] Despite its size, the country has a prodigious geographic diversity covering as low as 59 meters and covers as high as 8848 meters. The country is divided mainly into 3 belts which are Terai, Pahad, and Himal. [2] Furthermore, Nepal is a cultural mosaic comprising Tibeto-Burman and Indo -Aryan linguistic families, which is the consequence of the migrations from east, west, north, and south respectively over 2000 years. [3,4] The Indo-Aryan family constitute Nepali, Maithili, Bhojpuri, and Tharu, whereas Tibeto Burman families included Tamang, Newari, Magar, Rai-kiranti, Gurung, and Limbu. [3,5] Kathmandu metropolitan is the capital city of Nepal and has played a central role in representing the cultural, caste/ ethnic mosaic of the nation. It historically represents Newar settlement but has also witnessed a high influx of population due to migration in the city. [6,7] There are reports that among other castes/ethnic groups present in Nepal, the newars, brahmins, chhertri, and tamang are the most predominant casts/ethnic group present in Kathmandu. (8) Thus, from the above, Nepal has a heterogeneous group of population. Known to be highly polymorphic, closely linked genes that can split into many allelic types, the human leukocyte antigen (HLA) holds immense importance in physiological and pathological conditions. The HLA complex is considered a potent marker of population genetic analysis, paternity determination, and various disease-associated studies. 
[9][10][11][12] Besides this, the HLA profile has played a key role in the selection of donors for successful transplantation and transfusion. [13] Minimal data on HLA profiling of Nepali population in some conditions along with worldwide data have been documented previously. [14] However, there are early reports of Nepali population migration to India, mostly known as Gorkhas [15][16][17] but very scarce information is available regarding HLA typing in such population. [18] However, the report on the healthy population residing in Nepal is yet to be done. In the present investigation, the frequency of HLA A1, A2, B1, B2C1, C2, DR1, and DQ1was studied in a healthy Nepali population from Kathmandu city, and it was further compared with the neighboring countries of Nepal and the Caucasian population to understand the frequency of prevalence of the HLA types and genetic diversity in Nepalese population present in Kathmandu valley. Methods And MAterIAls The below-mentioned work is a preliminary pilot study conducted on the healthy control population, a part of the HLA class I and II work done in hepatitis B positive patients. Ethical approval The ethical approval was taken by the ethical committee of the Review Board of Nepal, The Nepal health research council (Ref Number:138/2018). Furthermore, written informed consent was taken from all the study participants before commencing the study. Characteristics of Healthy Subjects This was a pilot study in which 90 healthy individuals, 60 males and 30 females (30.75+-8.59), were recruited, corresponding to 90 Hepatitis B infected patients. The healthy individuals were negative for the serological markers of Hepatitis B virus infection (HBV), HIV, and Hepatitis C virus infection (HCV). None of the participants were related to each other. DNA Extraction and HLA Typing According to the manufacturer's instructions, the DNA was extracted from the stored blood samples using the QI amp DNA mini kit( Qiagen, Hilden, Germany). Then, extracted DNA was quantified by NanoDrop ( Thermo Scientific, USA). The extracted and quantified DNA was then sent to Supratech Laboratories Pvt. Limited, Ahmadabad, India, to type class I and class II HLA molecules. This included HLA A, HLA B, HLA C, HLA DR, and HLA DQ. Finally, the amplification of each DNA sample was carried out by Next-Generation sequencing (NGS) with the Ion Torrent NGS platform using GenDx NGSgo R workflow. Statistical Data Analysis The HLA allele frequency of classes I and II was calculated and then compared with the chi-square test and/or Fishers exact test. Further, Statistical analysis was performed by IBM SPSS statistic version 20. dIscussIon To the best of our knowledge, this study is the first study of an eight-digit HLA study of both HLA classes I and II in the Nepalese population from Nepal. However, only two digit data analysis was performed to prevent biasness in results due to fewer sample sizes. HLA has been known to play a major role in the genetic diversity of a population and organ transplantation, its association with infectious disease, in understanding drug reactions, and its relevance to adaptive immunity and vaccine development. Regarding allele A, a comparative study with the neighboring countries shows a high frequency of A*0101 in the north Indian population. [19,20] Further, A*3303 is also reported to be high in Asia and the Japanese and Korean populations, indicating their origin from the Mongolian population or the Mongolian pool. 
{18, [21][22][23][24][25][26] A*33 have also been reported to show greater diversity with the existence of several unique alleles. [27] However much more detailed work has to be done in the Nepali population in this regard. HLA A*24 is also one of the frequent alleles similar to the north Indian population. [28] A*0101 was not detected in an earlier study in the Nepalese population. [32] The absence or presence of such allele could be due to the precise use of the Next Generation Sequencing technique used in our study. Our investigation also shows the presence of allele A*0211, A*0206, and B*2705 in the Nepalese population.These have also been found to be the frequent alleles found in Asian Indians. [42,43] The data is further compared with china (Tibet), the neighboring country of Nepal. The most frequent allele is A*110101/1121N in the Chinese population, which is detected in lower Nepali populations. However, HLA*0101 is reported as an allele of high frequency in Caucasians and Jews and is often found oftenly in Nepali populations. [29] Besides this, the Caucasian population shows a high frequency of alleles A 150101(32.9%) and 070101 (30.1%), which is very different from the observations available here in the Nepali population. Regarding allele B, The allele B*44 and B*35 are also most common alleles in North Indian population and Gurkha population. [18,30,31] Some of these alleles have also been reported previously from a Renal transplant study conducted in Nepal [32] and found to be the frequent alleles in the Caucasian population. [33,34] Our study reveals the prevalence of the allele HLA B*3503 in high frequency. The allele, as mentioned above, has also been associated with the progression to AIDS. [40] There are reports that HLA B*3503 makes the population more susceptible to the rapid progression to AIDS after infection. [19] Regarding HLA C, the common alleles found were C*04(16.67%), C*07(23.33%), and C*15(16.67%). Of all these, Cw 04 is one of the most common HLA C alleles reported in all population. [29] Regarding the allele DR, our comparison with other neighboring countries of Nepal revealed that the allele DR*1501 was common in the North Indian population but is infrequently found in the Tibet region of China, which is on the close border with Nepal. Also, DR*1501 are common alleles in Asia as well as in Uttar Pradesh, a region that lies in India and the close border with Nepal. [35,36] Regarding allele DQ, DQ*05(38.33%) was the most frequently detected allele in the Nepalese population. DQ0301 and DQ0501 have also been reported frequently in the North Indian populations and caucasain populations, respectively. [37][38][39] Similarly, DQB1*0301 alleles have been associated with persistent Hepatitis B infection in African Americans. [41] Prevalence of this allele in high frequency in a Healthy population can be an alert sign for the individuals to be cautious. However, more extensive data is needed to for the best implication of its application. conclusIon The HLA data for the healthy population of Nepal shows a variety of heterogeneity. However, the data regarding HLA from Nepal and the countries neighboring Nepal is limited, and there is a strong need to type the HLA loci of these populations to understand better the role of HLA and its effect on the diverse population. Therefore, the paper under discussion will provide information on the genetic diversity prevalent in Nepal's Kathmandu valley.
2022-05-27T15:13:29.106Z
2021-07-09T00:00:00.000
{ "year": 2021, "sha1": "df7078b5f58ebb612d489efa01c3c2bbece17a15", "oa_license": "CCBYNC", "oa_url": "https://www.japsr.in/index.php/journal/article/download/147/80", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "d5a9af845f8c8d17f021b61eef3d645f5b2d24d5", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [] }
16166603
pes2o/s2orc
v3-fos-license
Do Malignant Bone Tumors of the Foot Have a Different Biological Behavior than Sarcomas at Other Skeletal Sites? We analyze the delay in diagnosis and tumor size of malignant bone tumors of the foot in a retrospective study. We compared the oncological and surgical long-term results with identical tumor at other anatomical sites in order to analyze the biological behavior of sarcomas that are found in the foot. Thirty-two patients with a histologically proven malignant bone tumor (fifteen chondrosarcomas, nine osteosarcomas, and eight Ewing sarcomas) between the years 1969 and 2008 were included. The median follow-up was 11.9 years. The overall median time gap between the beginning of symptoms and diagnosis in the study group was 10 months. Ewing sarcoma presented with the longest delay in diagnosis (median of 18 months), followed by osteosarcoma (median of 15 months) and chondrosarcoma (median of 7.5 months). The delay in diagnosis of these tumors was significantly longer than that of equivalent tumors at other skeletal sites, but the 5- and 10-year survival rates and the occurrence of distant metastases were comparable. In contrast, the average size of foot tumors was 5- to 30-fold less than that of tumors analyzed at other skeletal sites. This study indicates that sarcomas of the foot demonstrate a distinct biological behavior compared to the same tumor types at other skeletal sites. Introduction Bone tumors of the foot are rare and represent only 3%-6% of all bone tumors [1][2][3][4][5]. They are benign in 75%-85% of cases and malignant in 15%-25% [2,5,6]. The bone most commonly affected is the calcaneus, followed by metatarsal and phalangeal bones [1,7]. Chondrosarcoma is the most frequent malignant tumor of the foot, followed by Ewing sarcoma and osteosarcoma [1,2]. Although there is no thick soft tissue layer to potentially cover a developing mass, a relatively long delay in diagnosis has been reported for such tumors [8]. However, despite a high rate of misdiagnoses, which may lead to incorrect first-line treatment, foot sarcomas rarely develop metastases [5,9]. It was hypothesized that this might be due to a less aggressive behavior of bone tumors at the foot compared to other sites of the skeletal system [7,9]. Although amputation of the foot is hardly an acceptable surgical solution for many patients with sarcomas, the resection margins commonly contain residual tumor tissue after initial excision and biological reconstruction. The desire to make a functionally optimal reconstruction and the complexity of this anatomical region can easily lead to an inadequate resection. Wide surgical margins, however, are an important factor for the oncological outcome of malignant bone tumors [9,10]. The aim of this retrospective study was to evaluate the delay in diagnosis, the tumor size, and the long-term survival rate of patients with malignant bone tumors of the foot. To our knowledge, there is a lack of information regarding these factors in the literature. The results were compared with data from equivalent tumors in the literature both at the foot and also at other skeletal sites. Materials and Methods After approval of the local ethical committee (Reference no. EK 143/08), we retrieved records of 32 patients diagnosed between 1969 and 2008 with a primary bone tumor of the foot from the database of the Bone Tumor Reference Center Sarcoma (BTRC) in Basel. 
The dataset included age, gender, histology, grade, anatomical site, size (volume) of the tumor, metastases, recurrence, and treatment modalities. In order to obtain detailed information on the chronology of symptoms and patient survival rate, a questionnaire was sent to the patients' general physicians. All patient data are provided in Table 1. We distinguished between low-(G1) and high-grade (G2 + G3) sarcomas, and all diagnoses were confirmed by a reference pathologist. The tumor volume was calculated roughly respecting its geometrical shape (ellipsoidal or cylindrical) from plain X-rays and computed tomography (CT)/magnetic resonance imaging (MRI) scans, depending on the tumor configuration and presence/absence of a soft tissue component. The interval between diagnosis and the events local recurrence-free survival (LRFS) and metastasisfree survival (MFS) were calculated. Delay in diagnosis was defined as the time period between the first clinical symptoms and the diagnosis, which was based on histology after biopsy. Adequate treatment of high-grade tumors was considered to be bioptic diagnosis followed by neoadjuvant chemotherapy (in cases of Ewing sarcoma and osteosarcoma) and wide or radical resection (for all kinds of sarcomas). Intralesional resections were considered to be inadequate treatment in all cases. Surgical procedures were classified as radical, wide, marginal, and intralesional, according to Enneking's classification [10]. Data analysis was performed using SPSS 11.5 software (SPSS Inc., Chicago, IL, USA). Data description was primarily based on median and quartile values for continuous endpoints. Binary endpoints were characterized by frequencies. Interindividual comparisons between patient subgroups were based on the two-sample Wilcoxon test for continuous endpoints and Fisher's exact test for binary endpoints. Survival analysis was based on the Kaplan-Meier method and Logrank test. In addition to the overall survival rate (OS), LRFS and MFS were calculated as a function of various clinical parameters. values < 0.05 were considered statistically significant. Delay in Diagnosis. The overall median delay in diagnosis of our cases was 10 months (IQR 3-18 months, range 3-128 months). Ewing sarcoma showed the longest delay between onset of symptoms and diagnosis ( Table 2). Patients with a delay in diagnosis of >12 months and <12 months did not show a significant difference in the 5-year (86% versus 74%) and 10-year (63% versus 54%) survival rates ( = 0.24). The rate of metastasis when correlated to a delay in diagnosis of >6 or <6 months and >12 or <12 months revealed no significant influence of the delay in diagnosis on the occurrence of subsequent metastasis ( = 0.69 for 6 months and 0.44 for 12 months). Tumor Size, Survival Rate, and Treatment 3.2.1. Chondrosarcoma. The median size of the low-grade chondrosarcomas of the foot was 3.1 mL (IQR 2.0-4.5 mL, range 1.2-158 mL), and all patients with low-grade chondrosarcomas ( = 9) were alive at last follow-up. The 5-and 10-year survival rates of these patients were 100% and 86%, respectively (Table 3). Two patients with high-grade chondrosarcoma treated with intralesional resection had local recurrences and subsequently amputation in both cases. Both patients died of metastatic disease. Ewing Sarcoma. The overall survival in patients with Ewing sarcoma was 37.5%, including two patients with no evidence of disease (NED) and one patient alive with disease (AWD). The median size was 14.4 mL (IQR 4.5-36, range 0.9-60). 
The 5- and 10-year survival rates were 71% and 28%, respectively (Table 3). All patients (n = 8) were treated with neoadjuvant chemotherapy according to the current protocols. Two patients with Ewing sarcoma presented with metastases at the time of diagnosis. In one patient, chemotherapy and surgical treatment of the metastases were successful. The second patient developed recurrent metastases after 55 months, received radiotherapy, and died 2 months later. The remaining six patients developed distant metastases after a median of 42 months (range 8-70). One patient died 2 months after occurrence of systemic spread without further treatment. Three patients were treated with chemotherapy and the remaining two with radiotherapy following surgery. Five of these six patients died after a median of 8 months (range 2-30). The one surviving patient was treated by resection of the lung metastases and additional chemotherapy. There were two local recurrences, one of which appeared after a marginal and the second after a radical resection. These patients were treated with amputation or radiotherapy, and both died of metastatic disease. Osteosarcoma. The overall survival rate of patients with low-grade osteosarcoma (n = 4) was 75%, and the median tumor size was 50 mL (IQR 8-101, range 2.5-134). Both 5- and 10-year survival rates of these patients were 67% (Table 3). The only nonsurvivor of this group developed metastatic disease after 7 months and died 19 months later. None of the patients with osteosarcoma presented with metastases at the time of diagnosis. After a median of 39 months (IQR 15.3-60, range 4-63), a total of five patients developed distant metastases. In three cases, local surgery was performed, and in the remaining cases, chemotherapy was applied. Only one patient treated surgically was still alive after follow-up of 11.5 years, and the other patients died from metastatic disease. All osteosarcoma patients without metastases were still alive at the time of the latest follow-up. One patient with low-grade osteosarcoma developed local recurrence after an intralesional resection and was further treated with amputation. This patient refused to undergo the recommended chemotherapy and is still alive without significant impairment of his daily activities. Four of five patients with local recurrence received inadequate prior treatment. In only one case, a local recurrence occurred despite adequate therapy. Three patients developed subsequent distant metastases. Overall Treatment. Twenty-three patients (72%) underwent adequate treatment. Of the nine patients receiving inadequate therapy, seven received insufficient local resection (intralesional/marginal resection). The latter comprised two low-grade chondrosarcomas and five high-grade sarcomas (two chondrosarcomas, one osteosarcoma, and two Ewing sarcomas). One patient with osteosarcoma refused to undergo chemotherapy, and one patient with Ewing sarcoma received inadequate neoadjuvant chemotherapy. Metastases. Twelve patients developed distant metastases after a median of 27.5 months (IQR 13-51.3, range 3-70). Four patients presented with metastases already at the time of diagnosis (P = 0.039). Patients with metastases at the time of diagnosis had worse 5- and 10-year survival rates (40% and 20%) than those without (89% and 65%). Patients with late metastases had a significantly lower survival rate compared to patients without metastases (58% versus 100% after 5 years and 17% versus 88% after 10 years; P = 0.01).
Discussion

In recent years, we have observed several patients with malignant bone tumors of the foot with a long delay in diagnosis. In this study, we wanted to elucidate whether such a delay may reflect characteristic biological differences between bone sarcomas of the foot and their counterparts at other skeletal sites. To our knowledge, there is only one study in the literature reporting on the delay in treatment of tumors of the foot, but not in comparison to tumors at other sites of the skeletal system [1]. Because the foot has only a thin soft tissue envelope, one would expect swelling caused by a tumor to lead to immediate clinical recognition. However, we observed a long overall delay in diagnosis in the foot, especially in high-grade tumors. Ewing sarcomas, which usually are rapidly and aggressively growing lesions, showed the longest delays (median of 18 months). This is 2-6 times longer than the delays in Ewing sarcomas located at other sites of the skeleton [2,11-13]. These findings are consistent with Adkins et al. [14] and Metcalfe and Grimer [15], who report delays of 11.75 and 14 months, respectively. In addition, the sizes of sarcomas in our patients were considerably smaller than those at other sites. Delays in the diagnosis of osteosarcomas in our study (median of 15 months), as with Ewing sarcomas, were considerably longer (4.5- to 14-fold) than reported for osteosarcomas at other sites [2,11,12,16,17]. Likewise, the volume of these tumors was much smaller than reported for tumors at other sites. In contrast to Ewing sarcomas, the more slowly growing chondrosarcomas showed the shortest median delay in diagnosis at 7.5 months. This is almost comparable to the delay in diagnosis of chondrosarcomas at sites other than the foot. Several authors argue that the rarity of bone tumors in this particular anatomical location is a major cause of the long delay in diagnosis of bone tumors of the foot [8,9,18]. In our opinion, this argument is not very convincing, since bone tumors are rare anyway. First symptoms such as pain and swelling are unspecific and frequently misinterpreted as being of inflammatory or posttraumatic nature. The variety of differential diagnoses explains the long delay in diagnosis of bone tumors in general but not the striking difference between tumors of the foot and those at other skeletal sites. Zeytoonjian et al. [9] found a death rate of 8% in primary malignant bone tumors of the foot compared with 27% in tumors at other anatomical locations. In this study, the death rate (34%) was higher but in the same range as that of sarcomas at other sites. However, despite the higher death rate, the long delay, and a relatively large proportion of cases with inadequate treatment, the OS is not significantly worse. It has been assumed that primary malignant bone tumors of the foot may have less deleterious effects than those located at other sites, but this is not completely understood [1,3,4,9,19]. The results of our study indicate that these tumors in the foot may have certain basic biological differences from those at other sites. The delay in diagnosis of primary malignant bone tumors of the foot is, especially for high-grade tumors, considerably longer than that at other sites (Table 2). In contrast, the average tumor volume is significantly smaller than reported for other sites (Table 4).
For chondrosarcomas localized in the rest of the skeleton, the size is 20-30-fold higher, for osteosarcomas 3-10-fold, and for Ewing sarcomas 5-6-fold, according to the literature [20-24]. The difference is even more striking if the time of development is taken into consideration. Based on these assumptions, a rough calculation of 12 months of tumor development in chondrosarcomas would, for example, result in a tumor volume of 30 mL at the foot and of 800 mL at other sites. Although the evidence of such estimations is not very strong, the difference is so obvious that it allows the assumption that tumors of the foot exhibit a different biological behavior and grow much more slowly. This could explain the long delays in diagnosis. The survival rate of malignant bone tumors of the foot is affected by metastases at the time of diagnosis, the occurrence of distant metastasis, and local recurrence of the primary tumor. In these respects, sarcomas in the foot do not differ from those at other sites. Such factors normally worsen the prognosis significantly, but this was not found in our study. However, the long delay in diagnosis found in this study did not correlate with a higher rate of primary metastases. The risk of developing a local recurrence is eight times higher with an inadequate compared with an adequate therapy. Local recurrence is associated with a significantly decreased survival rate and a higher occurrence of metastases; consequently, the prognosis worsens. In our series, there was a significantly lower survival rate for patients with distant metastases (P = 0.01). In summary, patients with a local recurrence have a worse survival rate, accompanied by a higher rate of distant metastases. This phenomenon is well known in the literature [19,25]. As expected, comparing adequate versus inadequate treatments indicated a positive influence of adequate treatment on the survival rates in this study. These rates imply a clearly better, though not statistically significant, prognosis (P = 0.26). One reason for the high number of patients with inadequate treatment is the long follow-up of the study; diagnoses and treatments were performed in the 1970s and 1980s. Since then, treatment regimens have changed markedly (for example, multimodal therapy regimes including chemotherapy) and have led to significant improvements in the outcome for patients with sarcomas [2,3,14,19,26-28]. The main cause of inadequate therapy was an insufficient surgical procedure. In 7 of 9 patients with inadequate therapy in our study, an intralesional or marginal resection was performed, most likely because of the specific anatomical challenges in this location (e.g., small compartments). Compared with intralesional, marginal, or wide resections, we found a significantly lower rate of local recurrences and higher survival rates in patients who underwent radical surgical treatment. This is in accordance with the results of other studies that also consider radical surgery the best option for local tumor control [9,18,28,29]. Despite radical resection, patients with foot sarcomas usually do not have significant functional restrictions after surgery and rehabilitation. An unknown factor is the latency of the tumor (i.e., the time between the emergence of the first tumor cell and the appearance of symptoms). It is quite probable that the latency of sarcomas of the foot is shorter than that at other sites.
In such a case, tumors in long bones and the trunk would be larger at clinical manifestation than comparable tumors of the foot. Likewise, the longer latency at other sites could be attributed to masking by the relatively thick soft tissue layers in the leg and trunk. The indeterminacy of latency is a weakness in the calculation of tumor growth before diagnosis. As cell growth is exponential, detectable increases in tumor volume require much more time in small compared with large tumors. Nevertheless, the observed differences between tumor growth in the foot and at other sites are striking. It is likely that, despite the unknown latency factor, this reflects a differential biological behavior. One major limitation of this study is that almost one half of the patients were diagnosed and treated before the end of the 1980s, when chemotherapeutic regimes and imaging modalities improved dramatically. Further limitations derive from the retrospective design and the small patient population. Sarcomas of the foot are rare, but the number of patients in this series is within the range (6-87 patients) of those in other reports [1,3,6,8,15]. In contrast, the long median follow-up of 11.9 years is a strength of this study. In conclusion, primary malignant bone tumors of the foot appear to grow more slowly and to be less aggressive than those at other anatomical locations. We observed a long delay in diagnosis of foot sarcomas, which is in contrast to the general assumption that the thin soft tissue layer of the foot should allow immediate clinical recognition. Interestingly, despite the delay in diagnosis, the prognosis is similar to that of tumors at other skeletal locations. From a systematic comparison of reported delays in diagnosis and tumor volumes at other sites, we conclude that malignant tumors of the foot grow at an approximately 10-20-fold slower rate than tumors at other sites of the body, and this property indicates a distinct biological behavior of bone tumors in this special anatomic location.
2018-04-03T05:53:21.594Z
2013-03-20T00:00:00.000
{ "year": 2013, "sha1": "95ea7263851d60a4b711d9755fb33831baafb9bd", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1155/2013/767960", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "a407c8aa6eeb8edf4a312fa0567ffa029afee150", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
260758628
pes2o/s2orc
v3-fos-license
Brain Volumes and Cognition in Patients with Sickle Cell Anaemia: A Systematic Review and Meta-Analysis

Cognitive decline is a major problem in paediatric and adult patients with sickle cell anaemia (SCA) and affects the quality of life. Multiple studies investigating the association between quantitative and qualitative neuroimaging findings and cognition have had mixed results. Hence, the aetiology of cognitive decline in this population is not clearly understood. Several studies have established cerebral atrophy in SCA children as well as adults, but the relationship between cognition and brain volumes remains unclear. The purpose of this systematic review was therefore to evaluate the literature on regional brain volumes and their association with cognitive outcomes. We also meta-analysed studies which compared regional brain volumes between patients and controls. Studies report that patients with SCA tend to have lower grey matter volumes, including total subcortical volumes, in childhood as compared to controls, which stabilise in young adulthood and may be subject to decline with age in older adulthood. White matter volumes remain stable in children but may be reduced in young adulthood. Age and haemoglobin are better predictors of cognitive outcomes than regional brain volumes.

Introduction

While patients with sickle cell anaemia (SCA) are at risk of stroke and show persistent cerebrovascular damage [1], the effect of SCA pathology on brain volume reduction is not well defined. Earlier studies visually inspecting brain volumes showed some cortical atrophy [2,3]. With more recent advances in MRI techniques, the literature on brain volumes in this population is now increasing. However, there is still a poor understanding of how SCA pathology affects regional brain volumes, and the pathophysiological mechanisms underlying the reduction of brain volumes are still unclear.

Moreover, cognitive deficits which are commonly noted in patients with SCA, including a reduction in intelligence quotient (IQ), are well documented in adult as well as paediatric patients with SCA [4], and also in adults with anaemia in the general population [5]. Several studies have investigated various factors, such as the presence of silent cerebral infarcts (SCI) as well as transcranial Doppler velocities, to explain the cognitive deficits; however, very few studies have found a significant association. For example, studies looking at transcranial Doppler (TCD) velocities have found an association between mid-range to higher TCD velocities and short-term verbal recall [6]. In nine-month-old infants with SCA, higher TCD velocities and anaemia were associated with neurodevelopmental delay [7]. However, no association between TCD and IQ was found in older SCA children [8,9]. Investigations into MRI abnormalities have reported an association between IQ and SCI or lacunae, but the association diminished when age was added as a covariate [10]. Cognitive decline in sickle cell anaemia patients persisted even in the absence of SCI [4]. Another study at higher field strength found no association between lesion quantification and IQ [11].
Cerebral atrophy is a common finding in SCA populations [12,13]. However, the relationship between brain volume and cognition is not clearly established. There was an association between working memory index (WMI) and subcortical volume in adult SCA patients [14], while in our original paper [15], processing speed index (PSI) was associated with white matter volume (WMV) in adult and paediatric males with SCA. Other studies [16,17] on the relationship between brain volumes and cognition do not report similar findings.

MRI studies in typically developing children suggest associations with cognition. Grey matter density in regions such as the cingulate gyrus, orbitofrontal gyrus, cerebellum, and thalamus is associated with working memory, attention, and response selection [18]. Another study of brain volumes in typically developing children found an association between grey matter volume (GMV) in the anterior cingulate cortex and IQ in older children [19].

This suggests a case for associations between cognitive processes and regional brain volumes in patients with SCA. However, it remains to be established whether brain volumes are reduced in patients with SCA and, if so, whether that explains the reduced cognitive scores in this patient population. Hence, this systematic review aimed to objectively evaluate the literature on total and regional brain volumes in patients with SCA and their association with cognition. We also intended to meta-analyse regional brain volumes between patients with SCA and controls.

Search Strategy

Six databases (PubMed, Embase, Web of Science, Medline, PsycInfo, and Scopus) were used to conduct literature searches using terms including "brain volume" and "MRI" paired with "Sickle cell." All studies from any time point until February 2023 were assessed based on rigorous inclusion/exclusion criteria (Table 1). Articles were eligible if they reported total or regional brain volumes based on MRI assessment. Case studies, editorials, conference abstracts, and reviews were excluded. References from excluded articles were searched for eligible studies. Authors were contacted for additional data. Studies in languages other than English were included only if they were translated into English.

Critical Appraisal

The Critical Appraisal Skills Programme (CASP) [20] Case-Control Checklist was used to assess the quality of the cross-sectional studies, while the CASP Cohort Study Checklist was used for longitudinal studies. Quality assessment was focused on the appropriateness of the control group (i.e., sibling/community controls or normative databases), the validity of the neuropsychological tools, and the MRI methodology. All articles were graded on 11-12 questions with yes (1) or no (0) responses. Total scores were calculated, and articles were categorised as good (66% and above), satisfactory (36-65%), or poor quality (0-35%).

Meta-Analysis

Studies which compared the same regional brain volumes (grey matter volumes, white matter volumes, and subcortical volumes) between patients and controls were included in the meta-analysis. A random-effects meta-analysis was performed to estimate a summary estimate of the standardised mean difference (Cohen's d), and heterogeneity was assessed using the Q-test. The amount of variation related to the heterogeneity was represented using the I² statistic. All statistical analyses were performed using the Comprehensive Meta-Analysis software (V3).
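The pooling itself was run in the Comprehensive Meta-Analysis software; purely as an illustration of the computation described above (standardised mean differences with DerSimonian-Laird random-effects weighting, Cochran's Q, and I²), a minimal sketch in Python is given below. The numbers in the example are hypothetical and are not taken from the included studies.

import numpy as np

def cohens_d(m1, sd1, n1, m2, sd2, n2):
    # Standardised mean difference (group 1 vs group 2) and its approximate variance.
    sp = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    var_d = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))
    return d, var_d

def random_effects(d, v):
    # DerSimonian-Laird random-effects pooling of effect sizes d with variances v.
    d, v = np.asarray(d, float), np.asarray(v, float)
    w = 1.0 / v                                    # fixed-effect weights
    d_fixed = (w * d).sum() / w.sum()
    q = (w * (d - d_fixed) ** 2).sum()             # Cochran's Q
    df = len(d) - 1
    c = w.sum() - (w ** 2).sum() / w.sum()
    tau2 = max(0.0, (q - df) / c)                  # between-study variance
    w_re = 1.0 / (v + tau2)
    pooled = (w_re * d).sum() / w_re.sum()
    se = (1.0 / w_re.sum()) ** 0.5
    i2 = 100.0 * max(0.0, (q - df) / q) if q > 0 else 0.0
    return {"pooled_d": pooled, "ci95": (pooled - 1.96 * se, pooled + 1.96 * se),
            "tau2": tau2, "Q": q, "df": df, "I2": i2, "Z": pooled / se}

# Hypothetical example: (mean, SD, n) for patients and controls in three studies.
effects = [cohens_d(598, 60, 45, 640, 58, 40),
           cohens_d(620, 55, 30, 650, 50, 28),
           cohens_d(610, 52, 25, 633, 49, 30)]
print(random_effects([e[0] for e in effects], [e[1] for e in effects]))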
PRISMA Statement

The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) 27-item checklist was used to report the results of this systematic review.

Results

The literature search resulted in 451 articles. After removing 61 duplicates, 390 articles were screened based on titles and abstracts. Reasons for exclusion are given in the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) [21] flow chart (Figure 1). Thirty-five full-text articles were assessed for eligibility based on the inclusion and exclusion criteria. Twenty articles were retained for the systematic review, of which six were included in the meta-analysis.

Characteristics of the Study

Of the twenty articles that analysed brain volumes, thirteen (65%) were cross-sectional studies, four (20%) were prospective cohort studies, and three (15%) were longitudinal studies. Most studies (60%) had control groups. The sample size ranged from 25 to 312 patients, and control groups ranged from 21 to 71 participants. The mean ages for the studies ranged from 8.4 to 34.3 years. Five studies performed analyses stratified by the presence of SCI. Publication years for the studies ranged from 1996 to 2023, of which two studies were published before 2000 and the rest between 2000 and 2023. Sixteen studies were conducted in the USA, three studies were from the UK, and one study was from Tanzania. Most studies analysed mean haemoglobin, age, and sex, as well as intracranial volumes, as covariates, while some studies analysed socioeconomic status measured by education deciles and family income, as well as blood oxygen measures (SpO2).

Ten studies (50%) analysed brain volumes alongside cognitive outcomes, and one study analysed pain prevalence alongside grey matter volume (GMV). Most studies used the Wechsler Scales for intelligence to measure cognitive function. Two studies used K-BITS, and the Tanzanian study used Raven's Progressive Matrices to measure IQ. One study analysed executive functioning alongside intelligence quotient measurement using the Delis-Kaplan Executive Function Scales (D-KEFS) as well as the Test of Everyday Attention (TEA).

The magnetic field strength of the MRI scanners ranged from 0.5 Tesla to 3.0 Tesla, except for one study that analysed hippocampal volumes at 7 Tesla. Various methods of analysing brain volumes were used. The studies (15%) published before 2000 visually inspected and graded atrophy based on severity. Studies after 2000 have used various automated and semi-automated techniques and software to evaluate regional brain volumes. Four studies used various versions of SPM, of which two studies report grey matter and white matter densities using Voxel-Based Morphometry (VBM) analysis. Two studies used FSL, four used FreeSurfer, two used SIENA, two used BrainSuite, one study used Photoshop, and one study used Surgical Navigation Technology (see Supplementary Materials).
Critical Appraisal

The quality of the studies was assessed using the two CASP checklists for cohort studies and case-control studies. Eighty-five per cent of the studies were graded good, while two studies were graded satisfactory. The results of the critical appraisal are summarised in Table 2.

Meta-Analysis

Eight studies out of twenty compared regional brain volumes between patients and controls. Of these, two studies had missing data and were excluded from the analyses. Three studies compared regional grey matter volumes between patients and controls, three studies compared white matter volumes (WMV) between patients and controls, and five studies compared subcortical volumes between patients and controls. A separate random-effects meta-analysis was conducted for each ROI. The results of the meta-analyses are summarised in forest plots (Figure 2). There was a significant mean effect size for the three studies comparing GMV between patients and controls of −0.597 (95% CI = −0.861 to −0.332; heterogeneity: Tau² = 0.013, Q = 2.581, df(Q) = 2, I² = 22.5%, Z = −4.42). For WMV, the standardised mean difference between patients with SCA and controls was non-significantly lower: −0.636 (95% CI = −1.556 to 0.283; heterogeneity: Tau² = 0.618, Q = 32.17, df(Q) = 2, I² = 93.7%, Z = −1.35). Five studies analysed the mean difference in subcortical volumes between patients and controls. The standardised mean difference was −1.478 (95% CI = −2.966 to 0.012; heterogeneity: Tau² = 2.843, Q = 225.48, df(Q) = 4, I² = 98.22%, Z = −1.94), suggesting a trend for lower total subcortical volumes in patients.
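For reference, the I² values quoted above follow directly from the Q statistics and their degrees of freedom via the standard definition I² = (Q − df)/Q × 100%; for the GMV analysis, for example,

$$I^{2} = \frac{Q - df}{Q} \times 100\% = \frac{2.581 - 2}{2.581} \times 100\% \approx 22.5\%,$$

and the WMV (Q = 32.17, df = 2) and subcortical (Q = 225.48, df = 4) analyses give approximately 93.8% and 98.2% in the same way.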
A. Grey Matter Volumes

Out of twenty studies, ten analysed grey matter volumes; six of these analysed grey matter volume differences between patients with SCA (two studies stratified by SCI) and controls, and four analysed grey matter volumes only in patients with SCA. Studies comparing GMV in children with SCA and controls tended to report lower grey matter volumes in the frontal and parietal lobes [22,30]. GMV was further reduced in patients with SCA who tended to have SCI lesions and vasculopathy [1,27]. One longitudinal study analysed grey matter volume change over four time points. The authors of this study reported that grey matter volumes were reduced by 411 mm³ and were linearly associated with age in children with SCA, while GMV was reduced by 227 mm³/yr in controls and was quadratically associated with age [23]. The SIT trial data compared brain volume change longitudinally between transfused patients and patients receiving standard treatment; GMV was reduced by 0.9%/yr, and transfusion status did not affect the reduction in brain volume percentage [13,28]. Two studies in adults did not report any differences in GMV [16,25].

Four studies (40%) assessed the association between GMV and IQ; the results are inconsistent, attributable to differences in methodologies. However, all studies make a meaningful contribution to the existing literature. Two of the four studies did not recruit any control groups and used K-BITS to measure IQ [22,24]. The cross-sectional study by Chen et al. (2009) used GAMMA to investigate the regions of GMV that correlated strongly with IQ. They found that decreased GMV in the frontal lobes (including the frontal medial orbital gyrus and the superior frontal gyrus), the parietal lobe (including the supramarginal and angular gyri), and the temporal lobe (including the parahippocampal gyrus, the superior, middle, and inferior temporal gyri, and the fusiform gyrus) was associated with lower IQ in neurologically intact children [22]. The longitudinal observational study examined the effect of volumes on cognition in the same cohort over a period of 5 years and divided the patients into decline and non-decline groups. Patients in the decline group tended to have higher IQ at baseline, but also lower GM volumes in five out of six regions associated with K-BITS decline. These children also had a higher incidence of SCI, which predicted a K-BITS decline over the 5 years [24]. However, these results had low generalisability, as the sample size was low in both studies. Other studies looking at this association were from unpublished datasets. The analysis did not reveal any associations between GMV and IQ measured on the Wechsler Scales for Intelligence in paediatric or adult SCA patients [15], nor were any associations seen with Raven's Progressive Matrices in Tanzanian children with SCA [27]. Although IQ remained lower in patients across all studies, GMV was significantly lower only in Tanzanian SCA children [27].
B. White Matter Volumes

Eleven studies out of twenty analysed WMV. Seven studies compared WMV between patients and controls, of which one was stratified by SCI [1]. Results for reduced WMV were contradictory. Four studies reported reduced WMV, while three studies did not report any reductions. Patients with SCA and SCI tended to have reduced white matter densities in regions along the MCA territories [1]. One study in young adults found a reduced WMV of 8.1% in the right hemisphere and 6.8% in the left hemisphere [26]. Among the studies that did not report reduced WMV in patients, only one article reported results in paediatric patients [31]. In a longitudinal study of children with SCA, WMV increased at a lower rate as compared to controls. WMV was linearly associated with age in patients with SCA but was quadratically associated with age in controls [23].

Only one published study found that WMV predicted IQ in male adolescents and young adults with SCA [26]. In this study, tensor-based morphometry (TBM) was used to create a mean deformation index for WMV, which was higher in male patients, as was the burden of SCI, and positively correlated with IQ and haemoglobin in SCA patients. Additionally, lower WMV correlated positively with anaemia severity in the bilateral frontal, temporal, and parietal lobes. Two other studies revealed no association between WMV and IQ in SCA patients or controls. WMV, however, was associated with PSI only in male patients in one of the studies [15]. Haemoglobin appeared to be a predictor of WMV and IQ in both male and female patients [26], suggesting that early exposure to anaemia with compensatory cerebral haemodynamic mechanisms influences WMV. Oxygen is carried by haemoglobin in red blood cells. When haemoglobin is lower, cerebral blood flow (CBF) increases to compensate for lower oxygen, also increasing the risk of stroke. While GMV in young adults is relatively preserved, the white matter may remain hypoxic, resulting in lower volumes and an increased risk of microstructural damage [15,26].
C. Subcortical Volumes

Six studies examined total subcortical volumes, while one study evaluated hippocampal subfields at 7T in patients with SCA and controls [29]. Subcortical volume reduction was noted in only 2/5 studies in SCA children and was not associated with cognitive outcomes in any of the five. Both studies noted significant reductions in the hippocampus, amygdala, and pallidum bilaterally, while some regions, such as the right thalamus and accumbens, were spared [17,27]. Moreover, SCA patients with SCI appeared to have lower subcortical volumes [17,27].

Adult studies have had mixed results regarding subcortical volumetric reductions. Two studies did not show any reductions in subcortical volume [15] or total hippocampal volume [16], nor were these associated with IQ, performance IQ [16], or working memory [15]. All four studies noted anaemia severity as a strong predictor of neurocognitive scores as well as volumetric reduction in SCA. Mackin et al. [14] found that adult SCA patients tended to have lower basal ganglia and subcortical volumes. They also showed an association with reduced WMI in SCA patients. Contrary to other studies, haemoglobin was not associated with cognitive outcomes in SCA patients or controls in this study. The authors noted that subcortical volumes and cognition may be affected by inflammation and other disease-related pathologies such as multiorgan dysfunction, sleep apnoea, arthritis, and chronic pain [14]. Age significantly predicted volumetric reduction and cognitive decline in SCA patients [15,16,27]. Hippocampal volume is particularly vulnerable to atrophy with age, which may be associated with cognitive decline in older adults [14].

D. Total Cortical Atrophy

Two studies before 2000 reported total cortical atrophy in patients with SCA [2,3]. These studies used visual inspection of atrophy on MRI scans and presented the prevalence of atrophy in patients with SCA. All studies report data on paediatric patients with SCA. One study [2] reported MRI-related outcomes in 312 patients, of which 15 patients showed only atrophy while 20 showed atrophy and infarction. Focal atrophy was noted in 5/9 patients and was seen in the frontal, occipital, and temporal lobes. They also noted that atrophy was mainly seen in patients over the age of 30 years [2]. Similarly, another study noted generalised atrophy in 2/146 patients aged 12 years and above [3]. Moser et al. compared patients by genotype status and found that 8/25 patients who showed atrophy were of the SS genotype.

Discussion

In this article, we reviewed the literature on regional brain volumes in patients with SCA. Studies had differing objectives and used various scanner types and MRI processing techniques, making it challenging to derive robust conclusions. However, certain underlying trends do exist in the literature. Consideration of the dynamic nature of brain development is of utmost importance in understanding the implications of SCA disease pathology for regional brain volumes. Brain volume development tends to be delayed in patients with SCA, which may be a result of abnormal cerebral haemodynamics at an early age. Hence, regional brain volumes, specifically GMV, tend to be lower in paediatric patients, stabilise during young adulthood, and may be vulnerable to increased damage during adulthood. More than volumes, haemoglobin and oxygen delivery are influential in cognitive functioning.
Grey Matter Volumes

Out of the six papers that compared GMV between patients and controls, four showed reduced volumes in patients with SCA. Three of the six studies involved paediatric patients, and all of these noted reduced GMV. Another longitudinal study in children with SCA also noted a linear reduction of GMV with age in paediatric patients with SCA as compared to their typically developing peers, who show a more stabilised quadratic relationship between GMV change and age. In contrast to the reduced volumes in young children and adolescents [1,27,31], studies in adults with SCA did not show reduced GMV [15,16,25]. While this may seem like a discrepancy in the literature on the surface, this "catch-up" of GMV could be attributed to delays in the early development of GMV. In typically developing children between the ages of 3 and 15 years, GMV tends to reduce with age due to developmental processes such as synaptic formation as well as synaptic pruning [33]. These developmental processes start early (around the age of 3), and increasing myelination could result in steady declines of GMV until the ages of 9-11 years [31,33]. In young children with SCA, this initial process of synaptic formation and pruning is likely delayed, resulting in a delay of GMV decline beyond the age of 9 years, which seems accelerated when compared to their TD peers in late childhood (9 years) [23,31]. Both studies that recruited young adults did not have patients below the age of 12 years, which makes this hypothesis plausible [16,25]. However, more longitudinal studies in larger samples and wider age groups need to be conducted to investigate GMV change in patients with SCA.

Chen et al. [22] found an association between grey matter volumes and IQ variability in low-IQ SCA patients, while other studies did not report any associations with cognition. This discrepancy with other studies could be explained by one of the recruitment factors. Chen et al. (2009) observed this relation only in neurologically intact SCA children with below-average IQ, which is a small subset of SCA patients, reducing the generalisability of the results. Other studies comparing the association of GM volumes with IQ did not find any association between the two [15,27]. However, Chen et al.'s contribution towards a prognostic model of IQ decline in SCA children is commendable [24].

White Matter Volumes

Most studies in this systematic review noted reduced WMV in patients with SCA. One of the studies found reduced WMV only in male patients [26]. Choi et al. (2019), in their study, noted that anaemia severity was associated with lower WMV in watershed areas of the brain. WM density has also been reported as lower in SCA patients compared with controls [1]. Haemoglobin levels are a marker of oxygen delivery to the brain. In anaemia, cerebral blood flow increases to compensate for lower oxygen delivery [34]. Increased CBF also increases the risk of white matter structural damage, mainly seen in watershed areas, as the ceiling for further CBF increase in response to increased metabolic demand is exceeded [11]. Males tend to have larger brain volumes compared to females [31]. Hormonal factors, such as high oestrogen levels, may have a protective mechanism in females, leaving males more vulnerable to SCA disease severity, potentially explaining reduced WMV in males with SCA [26].
WMV was associated with IQ in only one study, while no other studies reported any association between WMV and cognition [26]. Likely, WMV is not very influential in cognitive processes, but the microstructural integrity of white matter tracts could influence the processing speed underlying other cognitive output [35]. White matter tracts, studied using DTI, are particularly vulnerable to hypoxic damage, especially along the watershed areas [11], and are associated with processing speed, which may contribute to poor cognition in patients with SCA.

Subcortical Volumes

All studies in paediatric populations tend to show reduced subcortical volumes in the patient group. Studies with adults show mixed findings. One study showed no reductions in total subcortical volumes in patients and controls with SCA, while another study that looked at the basal ganglia and thalamus reported reduced volumes in the patient group. Two studies looked at hippocampal volumes, one of which investigated 7T MRI data. Both these studies report reduced hippocampal volumes. Developmental trends similar to those described for GMV may explain the discrepancy in the literature. While developmental delay may explain reduced subcortical volumes in paediatric patients with SCA [31], accelerated ageing and poor cerebrovascular mechanisms may explain reduced volumes in older patients with SCA [14,29]. Only one study in adults showed an association between reduced basal ganglia volumes and reduced working memory and performance IQ in general, which could be attributed to age-related effects [14].

Influence of Cerebral Haemodynamics

Most studies supported the role of haemoglobin in predicting neurocognitive outcomes. In SCA patients as young as nine months old, anaemia predicted neurodevelopmental delay measured on Bayley's Infant Neurodevelopmental Scale [7]. Haemoglobin levels were also associated with short-term verbal memory in SCA children [36]. Vichinsky et al. [16] also found that the severity of anaemia was associated with age-related decline in SCA adults. Studies of oxygen extraction fraction and CBF have shown a negative association with processing speed index and working memory [34]. Steen et al. [31] also found that 27% of the variability in IQ was related to haemoglobin levels in SCA children. All this evidence suggests that, compared to brain volumes, haemoglobin levels are a better predictor of IQ.

Limitations

Most studies in this review were conducted on the same cohorts in high-income countries. Brain development, as well as cognitive trajectories, in low- and middle-income countries may be notably different from those in high-income countries, influenced by factors such as nutrition, maternal education, socioeconomic status, and sex [37]. Only one paper in this review looked at research from a low- or middle-income country, limiting the generalisability of the results. This review also considered evidence from the authors' original paper, which may introduce bias. Studies in this review used varying neuropsychological tests and MR data processing steps, making direct comparison challenging.
Conclusions

In summary, cognitive difficulties in SCA persist regardless of SCI presence [4]. Lower GM volumes may be associated with IQ decline in neurologically intact SCA children [22,24]. This suggests that brain morphology may be a marker of cognitive decline in neurologically intact SCA children. However, further analyses with bigger sample sizes must be considered to increase the generalisability of the findings. Haemoglobin is the best predictor of IQ in SCA. Efforts to measure haemoglobin and targeted interventions to keep haemoglobin levels steady would be effective strategies to reduce cognitive decline. Male SCA patients may be more vulnerable to severe disease as compared to female patients [26]; increasing haemoglobin should be considered as a part of the treatment plan [34]. Age-related cerebral atrophy is a marker of SCD pathology in both paediatric and adult patients [28], but it is unclear whether it explains cognitive difficulties and decline. Further research can explore the relationship between cerebral atrophy and cognitive decline as a function of age.
2023-08-10T15:03:20.954Z
2023-08-01T00:00:00.000
{ "year": 2023, "sha1": "c2bd28ab8bab4cf6ba0101cfbb88cf261b82f77f", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2227-9067/10/8/1360/pdf?version=1691491856", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "a3da31c26561ed32975e196bf719879446d8ec7d", "s2fieldsofstudy": [ "Psychology", "Medicine", "Biology" ], "extfieldsofstudy": [] }
268319578
pes2o/s2orc
v3-fos-license
SPATIAL MAPPING OF COCONUT PLANTATION IN MINAHASA REGENCY, NORTH SULAWESI PROVINCE USING REMOTE SENSING DATA

The production of coconut/copra in Minahasa Regency in 2019 was 21,350 tons from an area of 18,240 Ha. The area of coconut plantations in Minahasa Regency in 2018 was 18,470 Ha (Central Bureau of Statistics, Minahasa Regency in Figures, 2020). There was thus a decrease in the area of coconut plantations in Minahasa Regency of 230 Ha. This research was conducted in Minahasa Regency, North Sulawesi Province, in April-October 2021. This study aims to interpret the SPOT 6 2019 satellite imagery of Minahasa Regency using a visual interpretation method with ArcGIS software. Image interpretation with visual techniques is the division of land cover classes by direct delineation on satellite images according to the pattern, hue, and compactness of pixels in the image. The results of visual interpretation of satellite images show that coconut plantations in Minahasa Regency cover 19,622 Ha. Coconut plantations are spread across all sub-districts in Minahasa Regency. The largest coconut plantation area is in Tombulu District (6,635.58 Ha), followed by Mandolang District (2,807.35 Ha) and Tombariri District (2,147.57 Ha). The smallest coconut plantation area in Minahasa Regency is in East Tondano District (0.16 Ha), followed by North Tondano District (1.56 Ha) and Kawangkoan District (5.35 Ha).

INTRODUCTION

Coconut is one of the plantation commodities that has an important role in the national economy, with the main product being copra. All parts of the plant can be utilized, so the coconut plant is known as the Tree of Life. In addition, coconut is a social crop because more than 98% of it is cultivated by farmers. In the midst of the Covid pandemic, coconut has been one of the prima donnas of agriculture, with VCO (Virgin Coconut Oil) products claimed to be able to kill the coronavirus.

The development of coconut plants must continue to be pursued because this commodity has several comparative and competitive advantages which are not found in other palm trees. Apart from being a source of food, this commodity is a source of renewable energy. Some of the main coconut products cannot be replaced by competing plant products, including palm oil. These products are coconut milk, desiccated coconut, coconut sap, and coir.

Plantation production is not only a provider of raw materials for the processing industry but also contributes to environmental conservation (protective plants). Coconut as a plantation commodity has a fairly high economic value, so it is much sought after by the community. Intercropping or integration of coconut plants with other commodities is one solution. Farmers earn additional income on land that is cultivated for coconut plantations. This effort may not directly increase the income from coconut plantations. Integration of coconut with other commodities can increase land productivity. However, research has shown that coconut plantations that are integrated with other commodities are able to increase the productivity of coconut plants.
In North Sulawesi, the area of coconut plantations in 2019 was 265,300 Ha. In 2018, the area of coconut plantations in Nyiur University. This research will contribute to the achievement of the NUSRAT research roadmap and to agricultural development in North Sulawesi Province, especially in Minahasa Regency. Up-to-date land cover information is needed by policymakers and relevant stakeholders for sustainable land resource management. The general methods used to obtain land cover information are field surveys and the use of remote sensing data and Geographic Information System (GIS) technology.

The problems faced when monitoring changes based on field surveys are the size of the study area, the length of time required, and the cost of the survey. As a result, the monitoring carried out is not effective because it cannot keep up with the rate of land cover change, especially in the tropics. Remote sensing technology provides up-to-date, quality, efficient, and relatively inexpensive land cover data with wide area coverage for effective inventory and monitoring of land cover changes (Jensen, 1996). The Food and Agriculture Organization (FAO) has adopted remote sensing technology in conducting Forest Resources Assessments since the 1990s (FAO, 2007b). Remote sensing and GIS data have been used to monitor land cover changes in the Tondano watershed area of North Sulawesi (Rotinsulu et al., 2018).

This study aims to interpret satellite images (current year 2019) using the visual interpretation method with ArcGIS software. The results of the image interpretation will then be used for spatial mapping of coconut plantations in Minahasa Regency. The results of the spatial mapping will be used for future land use planning for coconut plantations.

Remote Sensing Data

The remote sensing data used are SPOT image data for 2019. The SPOT (Satellite Pour l'Observation de la Terre) satellites form a constellation used for earth observation. Together with SPOT 1 and SPOT 3, the SPOT 2 satellite is a French satellite developed in cooperation with Belgium and Sweden. Each SPOT series provides two identical high-resolution optical imaging instruments, namely panchromatic (P) and multispectral (XS: Green, Red, and Near Infrared). SPOT-6 has a resolution of 1.5 meters panchromatic and 8 meters multispectral (Blue, Green, Red, Near-IR).

Ground Reference Data

For visual interpretation of coconut plantations, a survey was carried out to record the actual conditions of coconut plantations in the field at several plantation locations spread across Minahasa Regency. Location data were recorded using the Global Positioning System (GPS). In addition, visual observations were recorded using a digital camera. Field documentation is presented in the following figure. To collect field data, tools and materials are needed, including topographic base maps, GPS, digital cameras, writing instruments, compasses, and batteries. The research procedure in the form of a flow chart can be seen in Figure 1.
Image Interpretation Data Analysis

Classification is the process of grouping pixels into classes/groups that have homogeneous spectral characteristics (Campbell, 2002). Image classification was carried out using remote sensing software and GIS (ArcGIS). Image classification is a method for assigning each pixel in a digital image to one of several classes. The land cover study uses image classification methods to delineate land cover classes that represent land cover conditions on the earth's surface. Image classification techniques are divided into three approaches.

Visual interpretation of satellite images

Visual interpretation of the satellite imagery produced a map of the spatial distribution of coconut plantations in Minahasa Regency (Figure 5 and Figure 6). Visual interpretation of coconut plantation land is based on several elements, namely:

1. Pattern. A pattern is a series of geological forms, topography, vegetation, or other earth surface phenomena. Coconut plantations have a specific pattern, planted with a certain spacing, so that it can be seen from the image that the pattern is lined up (Figure 3).

2. Shape. Shape is a qualitative measure of the length, width, and height of an object. Interpretation of form provides important information related to the type, quality, and quantity of singular or plural objects. Coconut plantations can be recognized from the specific canopy shape in the image, as shown in Figure 3 below.

3. Location. Location is the position of the object in a certain coordinate system or the location of an object compared to other objects. Object location information is very useful in interpretation. Coconut plantations are located in various locations in Minahasa Regency. By taking coordinates through a field survey, it is easier to interpret the image.

4. Texture. Texture is the roughness or smoothness of the visualization of the surface of the object in the image. Coarse texture indicates heterogeneity in the crowding of objects on the ground. The interpretation of a vegetation cover with a coarse texture provides clues to variations in the type and size of the vegetation. This condition allows a more detailed interpretation of the vegetation cover, for example as dense forest or mixed gardens (Figure 5).

5. Association. Association is the relationship of a phenomenon with other phenomena around it. Expanded objects with fine textures, associated with the presence of several road networks and settlements, can provide interpretive information that points to cultivated areas or gardens (Figure 4 and Figure 5).

Table 1 shows the results of the visual interpretation of the satellite images: coconut plantations in Minahasa Regency cover 18,351.52 Ha. Coconut plantations are spread across all sub-districts in Minahasa Regency. The largest coconut plantation area is in Tombulu District (6,635.58 Ha), followed by Mandolang District (2,807.35 Ha) and Tombariri District (2,147.57 Ha). The smallest coconut plantation area is in East Tondano District (0.16 Ha), followed by North Tondano District (1.56 Ha) and Kawangkoan District (5.35 Ha). The spatial map of coconut plantations in Minahasa Regency is presented in Figures 4, 5 and 6. The decline in the area of coconut plantations in North Sulawesi, especially in Minahasa Regency, is thought to be due to the conversion of coconut plantation land to residential or industrial areas. The research conducted by Rotinsulu et al. (2018) in the Tondano watershed area, which includes Minahasa Regency, North Minahasa Regency, and Manado City, shows that within a period of 13 years (2002-2015) there was conversion of forest land to agriculture, and of agriculture to urban/settlement areas. Conversion of agricultural land to settlements occurred especially in areas bordering Manado City, with the development of residential/housing land.
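The district-level areas above were tabulated from the delineated polygons in ArcGIS. Purely as an illustration, a minimal sketch of an equivalent tabulation outside ArcGIS is shown below, assuming the visually delineated coconut polygons have been exported to a shapefile with a sub-district attribute; the file name and the "district" field are hypothetical and are not part of the study's workflow.

import geopandas as gpd

# Hypothetical export of the visually delineated coconut-plantation polygons.
coconut = gpd.read_file("coconut_plantations_minahasa.shp")

# Reproject to a metric CRS so areas come out in square metres
# (UTM zone 51N, EPSG:32651, covers Minahasa Regency).
coconut = coconut.to_crs(epsg=32651)

# Polygon area in hectares, summed per sub-district.
coconut["area_ha"] = coconut.geometry.area / 10_000
per_district = (coconut.groupby("district")["area_ha"]
                .sum()
                .sort_values(ascending=False))

print(per_district.round(2))
print("Total (Ha):", round(per_district.sum(), 2))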
CONCLUSION

1. A spatial map of the distribution of coconut plantations in Minahasa Regency can be produced through visual interpretation of imagery using the interpretation elements of pattern, shape, association, and texture.

2. Coconut plantations are spread across all sub-districts in Minahasa Regency. The largest coconut plantation area is in Tombulu District (6,635.58 Ha), followed by Mandolang District (2,807.35 Ha) and Tombariri District (2,147.57 Ha). The smallest coconut plantation area is in East Tondano District (0.16 Ha), followed by North Tondano District (1.56 Ha) and Kawangkoan District (5.35 Ha).

Figure 2. SPOT image of Minahasa Regency in 2019.
Figure 3. Image display of coconut plantations in Minahasa Regency (certain patterns and unique canopy shapes).
Table 1. Area of coconut plantations in Minahasa Regency (visual interpretation results).
2024-03-11T16:59:59.788Z
2022-12-19T00:00:00.000
{ "year": 2022, "sha1": "c29e4a40fae0a9de811e9af62c198dc52538ea32", "oa_license": "CCBYNC", "oa_url": "https://ejournal.unsrat.ac.id/v3/index.php/samrat-agrotek/article/download/44520/40549", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "805c89a01c0c44838449d5048991dd6f2fd54374", "s2fieldsofstudy": [], "extfieldsofstudy": [] }
19609038
pes2o/s2orc
v3-fos-license
What do kidneys and embryonic fish skin have in common? Both have to cope with fluctuations in osmotic pressure and acidity. Flynt et al. now show how a microRNA (miRNA) molecule acts as a crucial part of the osmoregulatory machinery.

[Figure: Fish lacking miR-8 microRNAs are unable to cope with osmotic pressure and can develop edemas (arrow).]

How your brain grows might come down to how your cells divide. Lake and Sokol report that mouse protein Vangl2 controls the asymmetrical cell division and developmental fate of progenitor neurons. Vangl2 (aka Strabismus in flies) is a component of the PCP (planar cell polarity) pathway that is active in a variety of tissues and organisms. Mice that lack Vangl2 have a number of neurological defects, including incomplete neural tube closure and reduced brain size. Lake and Sokol wondered how Vangl2 might influence brain development. In the cerebral cortex, neurons are born from a pool of progenitor cells, and the time of their birth determines their fate. The research duo found that Vangl2-lacking mouse embryos had large numbers of early-born neurons and few remaining progenitor cells. This hinted that Vangl2-lacking neurons were differentiating prematurely, a suspicion confirmed in vitro. The progenitor pool is maintained by asymmetrical division: one daughter cell becomes a neuron, the other self-renews. This fate asymmetry is thought to depend on the orientation of cell division, and the authors observed an increase in the number of symmetrically dividing progenitors in the brains of Vangl2-lacking mouse embryos. Also, Vangl2-lacking cells in culture showed symmetrical distribution of a spindle-orienting factor that in normal cells distributes asymmetrically. Such similarities between Vangl2-lacking cells in vitro and in vivo will facilitate ongoing studies of the PCP pathway in neurogenesis.

Speedy versus sluggish cells

Some cells live a fast-paced life, traveling far and wide. Others are more sedentary and stay closer to home. Ou and Vale now report molecular differences that underlie these lifestyle choices. Cell migration studies in living multicellular organisms are not easy. Ou and Vale used spinning disc confocal microscopy, a technique that allows fine focusing and rapid image capture, to follow the paths of individual fluorescent cells in the bodies of worms. Images were captured from up to 10 worms at once over a period of hours, and the microscope automatically moved to specific focal points for each cell in each worm. Because the worms (and cells) were alive and moving, however, Ou had to readjust the focal points every 15 minutes or so. The cells of interest were neuroblasts, which are known to vary in their migration distances. The authors now report that these cells also vary in their speed, and faster cells ultimately go farther. Compared with their sluggish sisters, fast cells boosted their levels of a cytoskeletal regulator, lowered their levels of an extracellular matrix attachment factor, or did both. Essentially, they revved the engine and/or took off the brakes. The team now plans to use its microscopy setup to investigate how neuroblasts move in multiple cell migration mutants.

What do kidneys and embryonic fish skin have in common?

Both have to cope with fluctuations in osmotic pressure and acidity. Flynt et al. now show how a microRNA (miRNA) molecule acts as a crucial part of the osmoregulatory machinery. miRNAs are small noncoding RNAs that bind to gene transcripts, preventing their translation into proteins.
There are potentially thousands of miRNAs encoded in the genomes of higher eukaryotes, and predicting their target transcripts is tricky, as binding occurs via imperfect sequence matches. Researchers like Flynt and colleagues are taking a one-at-a-time approach to identify miRNA targets and functions, starting with the most highly conserved miRNAs. Among these is the miR-8 family, which has several conserved members in vertebrates. In fish, the team observed, miR-8 family members were abundant in cells called ionocytes. These cells are dotted throughout the skin and participate in osmoregulation. Without miR-8, ionocytes looked normal but couldn't cope with pH changes or osmotic stress; in the latter case the fish developed edemas due to water retention. miR-8, it turns out, was targeting an mRNA that encodes a protein called Nherf1 (Na+/H+ exchange regulatory factor 1). Originally identified in renal brush border membrane extracts, Nherf1 acts as an adaptor between the plasma membrane and the cytoskeleton. In ionocytes lacking miR-8 family members, membrane trafficking of ion channels and other proteins was disrupted. miR-8 is predicted to target nherf1 in mammals too. The team now plans to see whether intercalated cells of the kidney, the functional equivalents to ionocytes, use the same osmotic regulation pathway.

[Figure: Early-born neurons (green) are more abundant in the Vangl2-lacking mouse embryo brain (bottom) due to premature progenitor differentiation.]
[Figure: Fish lacking miR-8 microRNAs are unable to cope with osmotic pressure and can develop edemas (arrow).]
2019-03-22T16:09:10.585Z
2009-04-06T00:00:00.000
{ "year": 2009, "sha1": "b0ec45ab5f8973a77799a16c1b9345edd530e278", "oa_license": null, "oa_url": "https://rupress.org/jcb/article-pdf/185/1/2/1341386/jcb_1851iti.pdf", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "5f6ba1a8ca6a7416f2ceed189ea5109fde7401cb", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology" ] }
259283208
pes2o/s2orc
v3-fos-license
Simultaneous detection and quantification of multiple pathogen targets in wastewater

Wastewater-based epidemiology has emerged as a critical tool for public health surveillance, building on decades of environmental surveillance work for pathogens such as poliovirus. Work to date has been limited to monitoring a single pathogen or small numbers of pathogens in targeted studies; however, few studies consider simultaneous quantitative analysis of a wide variety of pathogens, which could greatly increase the utility of wastewater surveillance. We developed a novel quantitative multi-pathogen surveillance approach (35 pathogen targets including bacteria, viruses, protozoa, and helminths) using TaqMan Array Cards (TAC) and applied the method on concentrated wastewater samples collected at four wastewater treatment plants in Atlanta, GA from February to October of 2020. From sewersheds serving approximately 2 million people, we detected a wide range of targets including many we expected to find in wastewater (e.g., enterotoxigenic E. coli and Giardia in 97% of 29 samples at stable concentrations) as well as unexpected targets including Strongyloides stercoralis (a human threadworm rarely observed in the USA). Other notable detections included SARS-CoV-2, but also several pathogen targets that are not commonly included in wastewater surveillance like Acanthamoeba spp., Balantidium coli, Entamoeba histolytica, astrovirus, norovirus, and sapovirus. Our data suggest broad utility in expanding the scope of enteric pathogen surveillance in wastewaters, with potential for application in a variety of settings where pathogen quantification in fecal waste streams can inform public health surveillance and selection of control measures to limit infections.

Introduction

Wastewater-based epidemiology (WBE) incorporates a range of tools intended to complement traditional public health surveillance, optimally providing timely and actionable data on pathogens circulating in populations of interest. Historically, wastewater monitoring has been used as a surveillance tool for individual pathogens including poliovirus [1,2], hepatitis A [3], Vibrio cholerae [4], and Salmonella enterica serotype Typhi [5], as well as for chemical analytes (e.g., drug use) [6]. This strategy has gained global prominence in the detection and quantification of SARS-CoV-2 RNA in wastewater [7-9], specifically focusing on community prevalence [7,10,11], apparent trends in infections over time and space [12], and emerging variants [13,14]. Advantages and limitations of wastewater as a surveillance matrix have been widely discussed since 2020 [15-17].
The need to expand wastewater monitoring to screen multiple pathogens or variants is a valuable approach to better understand the possibility of emerging pathogens or circulating strains in a particular population. In addition to a rapidly expanding array of sequencing techniques to more completely characterize the microbial composition of environmental samples, more sensitive quantitative or semiquantitative multiple-target detection approaches exist [18,19] and some have been subjected to cross-method comparisons for pathogen detection and quantification [20][21][22]. Such tests could complement the highly sensitive and precisely quantitative emerging digital PCR techniques now considered the gold standard for single-pathogen detection in wastewater, either as a screening method serving as a precursor to more in-depth work on targets of interest or to gain information on a wide range of pathogens of interest. Emerging and re-emerging infectious diseases [23] - including those with pandemic potential [24] - represent ongoing risks to society, and wastewater surveillance can fill critical gaps in data to inform public health responses [25]. Based on the demonstrated potential for WBE to complement traditional diagnostic public health surveillance for a diverse array of pathogens, we implemented a customized multi-parallel molecular surveillance tool for simultaneous detection and quantification of 35 common pathogenic bacteria, viruses, protozoa, and helminths in wastewater. Such approaches can expand the existing WBE platform by screening for many more pathogens - including rare or emerging microbes of interest - enhancing monitoring to inform public health response. We demonstrate the utility of this method in an analysis of primary untreated influent samples from four wastewater treatment plants in metro Atlanta, Georgia, USA.
Sample Collection
We collected one-liter primary influent grab samples (n=30) in high-density polyethylene (HDPE) plastic bottles from four wastewater treatment plants (anonymized as WWTP A, B, C, D) in Atlanta, GA between March 20th, 2020 and November 5th, 2020, between 9:30 and 11:00 AM. We obtained permission for sample collection from each WWTP manager prior to sampling. Flow values from the WWTPs ranged from 14 to 80.2 million gallons per day. All samples were transferred to the laboratory on ice and stored at -80°C until further processing was completed. Initial sample processing began on November 8th, 2021. Frozen samples were thawed in a 5 L bucket of water located in a 4°C walk-in fridge for up to 3 days or until thawed. Samples were then recorded for temperature and pH, and a 50 mL aliquot was taken for total suspended solids measurements (S1 Table). Each 1 L sample was spiked with 10 µL of Calf-Guard (Zoetis) resuspended vaccine, containing attenuated bovine coronavirus (BCoV), and 10 µL of MS2 (10^5/µL), which served as the process recovery controls. A 1:100 ratio of 5% Tween 20 solution was added to the sample bottle as recommended by InnovaPrep for processing wastewater samples [26]. A graduated 1 L bottle was used as a reference for the total volume in each sample bottle. Samples were mixed by inverting the bottle 3-4 times. A subset of samples (n=4) was processed using three different methods to establish a reasonable workflow for the remaining samples: (1) direct extraction, (2) InnovaPrep Concentrating Pipette (CP) Select, and (3) skim milk flocculation (SMF).
Direct Extraction
We directly extracted 200 µL of wastewater influent with the DNeasy PowerSoil Pro manual extraction kit (Qiagen, Hilden, Germany). Technical representatives indicated the kits co-purify DNA and RNA, and others have compared DNA kits with DNA + RNA kits with similar performance [27].
InnovaPrep Concentrating Pipette
150 mL from the wastewater influent sample was transferred into a 500 mL conical centrifuge tube. Samples were centrifuged for 20 minutes at 4800 x g. The 500 mL conical tube was placed under the CP Select, and the fluidics head lowered into the sample. The sample supernatant was filtered using a 0.05 µm unirradiated hollow-fiber CP tip and eluted using the InnovaPrep FluidPrep Tris elution canister. Processing times and eluted volumes were recorded. For each day samples were run, one negative control consisting of 100 mL of DI water was also filtered and processed.
Skim Milk Flocculation
With the remaining wastewater sample, we proceeded to use the SMF method [28]. We combined 1 mL of a 5% skimmed milk solution per 100 mL of wastewater sample (average volume = 750 mL) and adjusted the pH of the skimmed-milk-wastewater solution to between 3.0 and 4.0 using 1 M HCl. Samples were placed on a shaker plate at room temperature (20-25°C) at 200 RPM for two hours. After shaking, samples were centrifuged at 3500 x g at 4°C for 30 minutes. The supernatant was discarded and the pellet was archived at -80°C until two batch extractions of 15 samples were completed within one week. A subset of 4 samples was directly extracted, and the TaqMan Array Card (TAC) results from CP, SMF, and the direct extractions were compared to determine an optimal concentration method prior to full-scale downstream processing. Additional details can be found in S2 Table. In the methods trial, SMF resulted in a greater number of pathogen detections and was therefore used for the subsequent full-scale analyses. In the SMF workflow, skim milk pellets were processed for RNA using the Qiagen DNeasy PowerSoil Pro manual extraction kit. One extraction blank was run using nuclease-free water for each batch of sample extractions. Extracts were placed in the -80°C freezer until reverse transcriptase real-time (quantitative) polymerase chain reaction (RT-qPCR) or digital PCR (dPCR) processing followed within one week. Skim milk pellets were run on TAC, with 7% run in duplicate. All CP eluants were extracted for RNA using Qiagen AllPrep PowerViral manual kits following manufacturer instructions, to be further processed using dPCR. CP and dPCR were used for process controls and fecal indicators in the full-scale analyses.
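As an aside to the preparation steps above, the reagent additions scale linearly with sample volume (a 1:100 v/v addition of 5% Tween 20 and 1 mL of 5% skim milk per 100 mL of sample, followed by pH adjustment to 3.0-4.0). A minimal sketch of that bookkeeping is given below; the function and field names are illustrative and are not taken from the original study's workflow.

```python
def smf_reagent_volumes(sample_ml: float) -> dict:
    """Reagent additions for the sample prep and SMF steps described above.

    Assumes the stated ratios: a 1:100 (v/v) addition of 5% Tween 20 and
    1 mL of 5% skim milk per 100 mL of sample; pH is then adjusted with 1 M HCl.
    Names are illustrative, not from the original study.
    """
    return {
        "tween20_5pct_ml": sample_ml / 100.0,    # 1:100 v/v addition
        "skim_milk_5pct_ml": sample_ml / 100.0,  # 1 mL per 100 mL of sample
        "target_ph_range": (3.0, 4.0),           # adjusted with 1 M HCl
    }

# Example: the average SMF input volume reported in the Results was ~750 mL
print(smf_reagent_volumes(750))  # ~7.5 mL Tween 20 and ~7.5 mL skim milk
```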
Molecular Analysis
Two PCR platforms were used to process extracts: the first was an RT-qPCR QuantStudio (QS) 7 Flex (ThermoFisher Scientific, Waltham, MA) and the second a dPCR QIAcuity Four (Qiagen, Hilden, Germany). All skim milk pellets were analyzed using the QS7 Flex. The QS7 works in conjunction with a custom TAC, which is prespecified with lyophilized primers and probes for 35 enteric pathogen targets (see S3 Table). The card was designed to include bacterial, viral, protozoan, and helminth targets that may be circulating in the United States as well as the leading etiologies of diarrhea among children globally [29,30]. Cq values < 40 were considered positive for the target and confirmed through clear amplification signals in the amplification and multicomponent plots. We prepared our TAC by combining 38 µL of template with 62 µL of AgPath-ID One-Step RT-PCR Reagents (Applied Biosystems) and assessed TAC performance through an 8-fold dilution series (10^9-10^2 gene copies/reaction) using 2 plasmids (one for DNA and one for RNA targets) that were linearized, transcribed, cleaned, and quantified as described in [29]. The samples were analyzed in single, not replicate, reactions on the same TAC. Additional MIQE details are found in S4 Table. All CP eluant samples were analyzed using the dPCR QIAcuity Four platform (Qiagen, Hilden, Germany). On the dPCR platform, previously designed and optimized multiplex assays were used for bovine coronavirus (BCoV), pepper mild mottle virus (PMMoV), and human mitochondrial DNA (mtDNA) [31] (see S5 Table, S1 Text, and S1 Fig). Gene copy concentration results for PMMoV and mtDNA were used as normalization markers for the TAC pathogen data, so that we divided the sample gene copy concentrations per liter by the normalization marker gene copy concentrations per liter.
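To make the normalization step just described concrete, the sketch below divides a pathogen concentration by a fecal-marker concentration measured on the same sample. The pathogen value used in the example is hypothetical; the marker magnitudes echo the averages reported later in the Results.

```python
def normalize(pathogen_gc_per_l: float, marker_gc_per_l: float) -> float:
    """Dimensionless ratio of pathogen to normalization-marker concentration."""
    if marker_gc_per_l <= 0:
        raise ValueError("marker concentration must be positive")
    return pathogen_gc_per_l / marker_gc_per_l

# Hypothetical pathogen concentration; marker magnitudes follow the averages
# reported for these samples (mtDNA ~5.2e4 and PMMoV ~1.9e7 gene copies/L).
sars2_gc_per_l = 1e5
print(normalize(sars2_gc_per_l, 1.9e7))  # PMMoV normalization: smaller ratio
print(normalize(sars2_gc_per_l, 5.2e4))  # mtDNA normalization: larger ratio
```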
Data Analysis
When multiple gene targets for a single microbial taxon were detected, we used the highest-concentration gene target to calculate summary statistics and supporting figures. We used R Studio version 4.2.1 and specific R packages to complete all data cleaning (dplyr v1.1.2), analyses (janitor v2.2.0, gtsummary v1.7.1) and graphs (ggplot2 v3.4.2). All TAC data were analyzed using QuantStudio Design and Analysis Real-Time PCR software (v2.6.0, Thermo Fisher Scientific). Equivalent sample volumes (ESV) have previously been described as the original sample volume processed and analyzed in a PCR reaction [32], and we calculated ESVs on this basis for each sample. The 95% limits of detection (LODs) were calculated for each assay using probit models [33]. We translated these 95% analytical LODs (aLODs) into a 95% matrix LOD (mLOD) using the previously calculated effective volumes (EV) for SMF:
mLOD = (1/EV) × aLOD
Results
TAC results were generated using skim milk pellets extracted with the PowerSoil Pro manual kit to process the influent samples. The average SMF pellet was 2.2 mL and the average wastewater influent volume processed for SMF was 688 mL. Supplemental data on the other methods performed (direct extraction or InnovaPrep CP pellet) are provided in S2 Table.
Enteric Pathogen Measurement by Skim Milk Flocculation
The log10-transformed gene copy concentrations by pathogen class and specific enteric pathogen (Fig 1) demonstrate the wide range of pathogens detected in Atlanta wastewater influent (n=30).
Pathogen concentrations normalized by mtDNA and PMMoV
Quantitative log10 gene copies per liter of wastewater influent are reported before (S9 Table) and after normalization (S10-11 Tables), with mtDNA normalization resulting in overall higher log10 ratios. In Fig 3, we note a considerably smaller ratio when using PMMoV normalization over mtDNA. These ratios reflect the increased PMMoV concentrations in wastewater influent compared to mtDNA concentrations.
Standard Curves
The standard curves for this custom TAC included two assays (Adenovirus 40/41 and Hepatitis A) with poor standard curve performance (r² < 0.95), which were therefore excluded from all analyses. Of the remaining 40 enteric targets, the DNA control was phocine herpes virus and the RNA control was MS2. For performance metrics (S12 Table), reasonable linearity was detected for all included assays, with an average R² value of 0.997 across all assays, the lowest R² of 0.967 for STEC (stx2), and the highest R² of 1 for Acanthamoeba spp., Balantidium coli, E. coli O157:H7, Giardia spp., Plesiomonas shigelloides, Salmonella spp., and STEC (stx1). The lowest-efficiency assay was Astrovirus at 87%, while the highest was Entamoeba spp. at 104%.
Effective Volume
The effective volume, which does not account for recovery efficiency, is calculated as the proportion of the original wastewater sample assayed in a single qPCR reaction. The effective analyzed wastewater volume for InnovaPrep CP was 0.155 mL (SD 0.0605) per reaction, and for SMF it was 0.410 mL of wastewater per reaction (SD 0.121).
Limit of Detection and matrix LOD
The 95% aLOD was calculated for each assay in S12 Table.
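As a sketch of how the analytical-to-matrix LOD conversion above plays out with the effective volumes just reported, the snippet below applies mLOD = aLOD / EV. The example aLOD values simply span the range quoted in the Discussion (0.6-291 gene copies per reaction) and are not tied to specific assays.

```python
def matrix_lod(analytical_lod_gc_per_rxn: float, effective_volume_ml: float) -> float:
    """Convert a 95% analytical LOD (gene copies/reaction) into a matrix LOD
    (gene copies per mL of wastewater): mLOD = aLOD / EV."""
    return analytical_lod_gc_per_rxn / effective_volume_ml

# Effective volumes reported above (mL of original wastewater per reaction)
EV_SMF, EV_CP = 0.410, 0.155

# Example aLODs spanning the reported range; values are illustrative only
for alod in (0.6, 10.0, 291.0):
    print(f"aLOD {alod:>6} gc/rxn -> mLOD {matrix_lod(alod, EV_SMF):8.1f} gc/mL (SMF), "
          f"{matrix_lod(alod, EV_CP):8.1f} gc/mL (CP)")
```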
Discussion
Wastewater surveillance sampling, processing, storage, and analysis methods have advanced rapidly since the emergence of SARS-CoV-2. Most studies have examined primary influent [34,35] and solids [36,37]. Sampling methods have also varied from grab, composite, and more recently passive techniques [38]. In addition to testing different matrices, many laboratories have implemented various methods to concentrate SARS-CoV-2 in wastewater using ultracentrifugation, polyethylene glycol precipitation, electronegative membrane filtration, and ultrafiltration [28,39], but few have considered a concentration step followed by a simultaneous, multi-parallel quantitative assay or multiple pathogen detection assays. The possibility of high-plex, high-throughput platforms is of particular interest to stakeholders looking to expand wastewater monitoring nationally in the US and abroad. For example, the CDC has expanded upon the previously single-plex N1 assay for SARS-CoV-2 to include influenza A and/or B for increased testing capacity [40]. Practical applications of surveillance suggest that downstream analyses of 3 or 4 samplings per week could provide useful results regarding trends, but the specific design would have to be driven by local public health trends and goals [41][42][43].
TAC performance metrics
We compared our traditional metrics such as R² trends of standard curves and found that our TAC results are within a reasonable R² range for almost all assays (R² > 0.96), except for two explicitly excluded due to poor standard curve performance. Our calculated 95% LODs also indicate a broad range of analytical sensitivities across all pathogen targets. While the lowest detections were at 0.6 gene copies per reaction, we also have targets on the higher end of 291 gene copies per reaction for ETEC. While other studies indicate a loss of sensitivity when using TAC, there was still an 89% detection rate compared to singleplex assays run [44].
Prevalence of bacteria, protozoan, and viral targets
Our qPCR data indicated 10^4-10^6 gene copies per liter for SARS-CoV-2 prior to normalization efforts, which is comparable to other studies [45]. Researchers had previously detected Giardia duodenalis, Cryptosporidium spp., and Enterocytozoon bieneusi at 82.6%, 56.2%, and 87.6%, respectively, in combined sewer overflows (CSO) around China [46]. These molecular surveillance findings were also similar to ours at 97% (n=29/30) for Giardia spp. (not specifically Giardia duodenalis), 27% (n=8/30) for Cryptosporidium spp., and 53% (n=16/30) for E. bieneusi. Our data showed the presence of Strongyloides stercoralis in urban wastewater, a human parasite typically associated with rural, underserved settings [47]. This finding is an example of the utility of screening for uncommon or unexpected targets, revealing novel information that can supplement existing public health surveillance. Groundwater and runoff can intrude into wastewater collection systems through inflow and infiltration (I&I), which may be relevant for fungi and a possibility for other microbial species to mix with wastewater flows [48]. Other potential sources into wastewater may include animal waste and commercial and/or industrial waste. These influent flows and their sources are difficult to determine, but routine surveillance - including with the addition of source-tracking - may provide additional insight into influent pathogens, their possible origins, and their utility in understanding infection transmission and control in the sewershed.
Value of multiple detections on TAC
Multi-parallel detection of pathogens of interest using TAC can be helpful in long-term surveillance or monitoring of pathogens, including in rapid screening programs or where numerous pathogens may be of interest. Apart from known, emerging, or suspected pathogens, antimicrobial resistance genes or other PCR-detectable targets of public health relevance can be included in TAC design. One key premise of WBE and monitoring is the potential value of using the method as an early detection tool for the onset of a potential outbreak [49,50], yet most detection methods take a needle-in-a-haystack approach versus a wider screening that could be especially applicable to state health departments or in routine monitoring. Most clinical testing is conducted one sample at a time, and a high-throughput method for simultaneous testing could expand the early warning potential to many other pathogens. The customizability of TAC has proven useful in other applications such as surveillance of respiratory illness [51,52], acute febrile illness for outbreak or surveillance purposes [53], and improving etiological detection of difficult neonatal infectious diseases in low-resource clinical settings [54]. Some studies have focused on applications combining nucleic acid detection with quantitative microbial risk assessments [55], but none have considered such a broad set of applications to wastewater monitoring and surveillance, although some have applied these methods qualitatively on fecal sludge samples [56,57]. It is possible to create a multiplex assay for digital PCR, the leading technology for wastewater monitoring, for up to five different genes, but no other platform provides quantitative data on up to 48 gene targets during a single experimental run. TAC methods can fill a critical gap in existing molecular monitoring tools. As a method yielding quantitative estimates of potentially dozens of targets, it offers complementary advantages over emerging digital PCR platforms (greater sensitivity and lower limits of quantification, but fewer targets) and sequencing methods (many more targets, but high limits of detection and generally not quantitative). TAC should be considered where targets are present in high numbers - like in wastewaters and fecal sludges - and where many pathogens are of interest. The application of improved methods for the detection and quantification of enteric pathogens in wastewater, in addition to other enteric pathogens of interest, can then be translated into relevant intervention and monitoring efforts [21]. As SARS-CoV-2 surveillance in wastewater reaches scale [7,34,58], detection and quantification of other pathogens has been proposed. Researchers have expanded on wastewater monitoring to focus increased surveillance on other respiratory viruses such as human influenza and rhinovirus [59], norovirus [60], or as an outbreak detection tool for influenza [61], and are also considering other emerging infections such as monkeypox [62].
Value of sensitivity of dPCR
The current and suggested methodology for processing wastewater samples on a molecular platform is dPCR, due to its low limits of detection and quantification. While these efforts make sense when focused on one particular pathogen, dPCR is less feasible and consumes considerable resources when considering a truly practical monitoring system for wastewater. Time, technical staff labor, and resources are always a challenge for laboratories, and especially for public health laboratories that have been tasked with monitoring wastewater for SARS-CoV-2. We can expect enteric targets to be present in wastewater, but further identifying which enteric pathogens are present and their concentrations with respect to each other would be a useful application towards building a wastewater monitoring system. While SARS-CoV-2 was detected through TAC, we were also interested in detecting additionally relevant targets, including BCoV, PMMoV, and mtDNA, which were not previously included on the TAC. The normalization of pathogen concentrations using mtDNA consistently lowered concentrations across samples and may be useful as a normalization variable instead of or in addition to PMMoV. While PMMoV has been widely used for normalization of wastewater data [63,64], we found the normalization efforts did not drastically reduce the noise-to-signal ratio. While several studies have used PMMoV as a normalization marker for SARS-CoV-2 [12,65,66], fewer studies have considered human mitochondrial DNA markers, and those that have found the marker to have strong correlations with clinical case counts [67]. Additional studies have also considered the use of crAssphage [12,64], HF183 [41,68], and Bacteroides ribosomal RNA (rRNA) and human 18S rRNA as other normalization markers to explore for wastewater fecal concentration data [12]. Normalization techniques using a variety of biological (PMMoV, HF183, crAssphage) and chemical markers (ammonia, total Kjeldahl nitrogen, total phosphorus, biochemical oxygen demand) have been proposed as a way of accounting for non-human inputs to sewers (i.e., dilution effects) and improving correlation with clinical data and comparability between sites. However, the effects of normalization with a variety of techniques on correlations with clinical data have been mixed [41,63,[69][70][71]. Our observations are consistent with those of previous studies: neither normalization with mtDNA nor with PMMoV reduced the coefficient of variation for single analytes.
Limitations
Wastewater sample recovery for SARS-CoV-2 has been successful when using fresh samples, but for many WWTPs and their partners it may be unrealistic to complete same-day processing for logistical reasons [72]. This work demonstrated the recovery of pathogen targets using archived grab samples, which makes this approach open to a broader range of applications such as retrospective analyses where clinical data are available or can be linked to these environmental surveillance results. However, more research is needed to understand which recovery methods work best and can be performed efficiently for archived samples. While we did not optimize methods for recovery across all targets, it will be increasingly important to consider such methods when screening for multiple targets and depending on target selection [68,73,74].
A major limitation to interpreting this work is limited data on using multiple TAC targets and their incorporation into predictive models.Researchers have gained interest in calculating community-specific or dorm-specific fecal shedding rates specifically for SARS-CoV-2 [75,76], but there was no specific information on the fecal shedding rates for this particular population to consider a modeling approach to relate pathogen concentration and clinical case data for asymptomatic individuals.Additionally, sewersheds of different sizes may have specific challenges in determining accurate shedding rates.Robust data on enteric shedding rates is not widely available for high-income countries, but efforts to estimate these variables and their uncertainties have been attempted [77]. TAC methods are also limited by the number of gene target detections one can consider.With the option of detecting many pathogens comes with a need for determining the most relevant genes of interest. While TAC can run up to 48 unique targets, the total amount of template that enters each individual well is ~ 0.6 μL.This low template volume, compared to a 2-5 μL of template included in other molecular assays can affect the overall limits of detection for this platform.While singleplex assays may have lower limits of detection, the likelihood of optimizing a multiplex for up to 46 or more agents is unrealistic; therefore, giving TAC a considerable advantage as a high parallel, multiple detection platform [44]. Additionally, these targets and the QA/QC involved require dedicated time and effort to include relevant targets that may change based on future applications.The need for additional replicates run to produce robust analytical limits of quantification are encouraged for future work.Using this multiple pathogen detection tool does not account for variant changes and may not be suitable for all applications.Our findings indicate TAC offers a multi-parallel platform for screening wastewater for a diverse array of enteric pathogens of interest to public health with strong potential for screening other targets of interest including respiratory viruses and antibiotic resistance genes. Fig 2 . Fig 2. Log 10 gene copies per liter of wastewater influent using the InnovaPrep Concentrating Pipette (CP) method.The dashed line represents the limit of detection when calculated as 3 partitions out of the total Fig 3 . Fig 3. A) Pathogen data normalized by mtDNA.B) Pathogen data normalized by PMMoV.The dashed Table 2 Fig 1. Log 10 concentrations of enteric pathogens per liter of wastewater influent using the SMF method and PowerSoil Pro Manual extraction.Of the SMF samples, the bacterial targets of highest concentration were ETEC and enteropathogenic E. coli (EPEC -atypical), whereas viral targets were mainly astrovirus and norovirus GI/GII.Somewhat unexpected protozoan targets detected were Cyclospora cayetanensi (3/30) and Entamoeba histolytica (6/30).Both Cryptosporidium spp.and Giardia spp.were detected at means of 5.0 log 10 and 6.5 log 10 , respectively.Of the total samples, we detected SARS-CoV-2 RNA in 50% of samples (n=15) at concentrations between 3.0 log 10 -6.0 log 10 gene copies per liter of wastewater influent. ). Enteric bacteria, specifically enterotoxigenic E. 
coli (ETEC), were detected most frequently and at higher gene copy concentrations compared to helminths and viruses.Notable protozoan detections were Acanthamoeba spp.(28/30), Balantidium coli (29/30), Entamoeba spp.(29/30), and Giardia spp.(29/30).stercoralis in one wastewater sample (S2 Fig and S6 Table). .Prevalence of pathogens [n by column (%)] detected in wastewater influent from four treatment plants in Atlanta, Georgia -using SMF method ) Concentrating Pipette and normalization markers A total of n=30 CP samples were processed for PMMoV, mtDNA, and BCoV.Fig 2 demonstrates the log 10 gene copies per liter of wastewater influent and indicates PMMoV concentrations exceed mtDNA concentrations.The average concentrations for BCoV dPCR reactions was 43.3 gene copies (gc)/μL, PMMoV was 1602 gc/μL, and mtDNA was 4.33 gc/μL.The average concentrations of log 10 gene copies/liter per reaction of wastewater was 5.2 x 10 4 for mtDNA and 1.9 x 10 7 for PMMoV.All positive controls and non-template controls performed without suspicion and additional details on control performance is included in S2 Text and in the dMIQE checklist (S7 Table).Additionally, BCoV as a process control yielded a 29% average recovery with a standard deviation of 28, with recovery by sample available as S8 Table. stx1 and stx2.†EnteropathogenicE. coli (EPEC); enteroinvasive E. coli(EIEC).dPCR for Table and includes the minimum, maximum, mean, standard deviations, standard error, and confidence intervals.These results indicate average gene copies per mL of wastewater influent as low as 1.591 for average Cq of 19.3 gene copies per reaction [CI 2.04].With a Cq difference of 1.5, we can reasonably conclude inhibition was not a major issue with our sample matrix since samples and controls had Cq difference less than 2.
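Two of the quality-control quantities used above lend themselves to a compact illustration: the process-control recovery (BCoV averaged 29%) and the inhibition check based on a sample-versus-control Cq difference of less than 2. The spike and measurement numbers below are hypothetical and were chosen only to reproduce those magnitudes.

```python
def percent_recovery(measured_gc: float, spiked_gc: float) -> float:
    """Process-control recovery as measured / spiked * 100."""
    return 100.0 * measured_gc / spiked_gc

def likely_inhibited(sample_cq: float, control_cq: float, threshold: float = 2.0) -> bool:
    """Flag likely PCR inhibition when the sample Cq lags the control by >= threshold."""
    return (sample_cq - control_cq) >= threshold

print(percent_recovery(2.9e4, 1.0e5))  # hypothetical spike/measurement -> 29.0
print(likely_inhibited(20.8, 19.3))    # Cq difference of 1.5 -> False (no inhibition)
```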
Antibacterial Screening of Bacterial Isolates Associated with Mangrove Soil from the Ngurah Rai Mangrove Forest, Bali
In this study we report the cultivation of bacteria associated with mangrove soil from the Ngurah Rai Mangrove Forest, Bali. Mangrove soil samples were serially diluted using sterile artificial seawater, spread onto Starch Casein M agar and incubated at 28°C for 28 days. Cultivation of mangrove soil samples yielded 165 bacterial colonies, from which 68 isolates were selected and purified based on different morphologies. Of these 68 isolates, 22 isolates displayed antibacterial activities ranging from weak to strong inhibition against at least one of four bacterial indicators, namely Staphylococcus aureus, Streptococcus mutans, Escherichia coli and Klebsiella pneumoniae, using the perpendicular streak method. Overall, 19 out of 22 bacterial isolates displayed weak antibacterial potential and two isolates exhibited moderate antibacterial activity. The isolate SA4 was the only bacterium with strong antibacterial potential, with a measured clear distance ≥ 10 mm against all four bacterial indicators. Sequence analysis based on a 16S rRNA gene fragment assigned the isolate SA4 as Bacillus subtilis strain BIL/BS-168. Overall, this study confirmed the untapped potential of antibacterial activities from bacteria associated with mangrove soil.
INTRODUCTION
Bacterial resistance against antibiotic drugs is currently an emerging global health threat that requires immediate action (World Health Organization, 2018; Centers for Disease Control and Prevention, 2018). Antibiotic resistance arises because bacteria develop resistance mechanisms through mutation, horizontal gene transfer and enzyme inactivation (Munita and Arias, 2016). As a rough estimation, resistance against antibiotics will reach 10 million cases in 2050 and is predicted to result in a financial loss of 100 trillion USD due to increased costs for hospitalization and loss of productivity. To date, overused and misused antibiotics are the main factors in the increasing rate of antibiotic resistance (Ventola, 2015). One of the efforts to overcome the increasing rate of resistance of pathogenic bacteria is to discover new sources of novel antibiotics with much stronger efficacy compared to current drugs that are available on the market (Roca et al., 2015; Cheng et al., 2016). Exploration of bacteria with potent antibacterial activity has been mainly focused on terrestrial ecosystems (Elbendary et al., 2018; Singh et al., 2016; Mohamed et al., 2017; Assis et al., 2014). However, a number of studies have indicated a high rate of de-replication of active compounds that have been previously reported (Debbab et al., 2010). Therefore, exploration of marine habitats should also be given more priority, as bacteria in these habitats are rather unexplored and may yet yield novel compounds (Debbab et al., 2010). Mangrove forests are characterized by extreme salinity, high and low tides, wind pressure, muddy substrate and low oxygen concentration (Friess, 2016). Therefore, microbes in this habitat tend to be able to adapt to harsh environmental conditions (Booth, 2018). The group of actinobacteria in mangrove forests synthesize a variety of secondary metabolites to survive in this extreme condition (van der Heul et al., 2018; Bentley et al., 2002). Bioactive compounds of mangrove origin have been shown to display therapeutic potential and could be developed as new drugs, including antibiotics.
A number of studies have isolated bacteria from mangroves, and these isolates could inhibit a wide range of pathogenic bacteria (Azman et al., 2015; Lee et al., 2014; Jiang et al., 2018). The Ngurah Rai mangrove forest is the biggest mangrove ecosystem in Bali, with over 19 mangrove plant species inhabiting the area, dominated by four main species, namely Rhizophora mucronata, Avicennia marina, Rhizophora apiculata and Sonneratia alba (Balai Pemantapan Kawasan Hutan Wilayah VIII Denpasar, 2018). To date the diversity of bacteria in the Ngurah Rai Mangrove Forest is rather unexplored; therefore this research aims to isolate and pre-screen bacterial isolates with antibacterial activities.
Cultivation of bacterial isolates from mangrove soils
Ten grams of each mangrove soil sample were pretreated with wet heating in a water bath at 60°C for 15 minutes after combining sterile artificial seawater and distilled water (1:1 v/v) (Azman et al., 2015; Lee et al., 2014; Jiang et al., 2018). Subsequently, 1 mL of each soil sample suspension was serially diluted (10^-1 to 10^-3) in 9 mL sterile artificial seawater. One hundred µL of each diluted soil sample was spread using a sterile cotton swab (Onemedia) onto starch M-protein agar (63 gram/L, HiMedia, India), which was supplemented with 100 μg/mL nalidixic acid and 25 μg/mL nystatin. Agar plates were sealed with parafilm and incubated at 28°C for 28 days. Observation of colonies grown on agar was performed every two days, and the total colonies observed on agar media were counted. Bacterial colonies with different morphologies (colour, form, elevation) were picked from each agar plate and purified individually by streaking onto ISP-2 agar media (4.0 gram/L yeast extract, 10 gram/L malt extract, 4 gram/L dextrose, 20 gram/L bacto agar). Each of the purified bacterial isolates was stained using Gram staining and observed under a microscope to identify its morphological form.
Antibacterial prescreening
Antibacterial activities of each purified isolate were pre-screened using the perpendicular streak method (Boontanom and Chantasari, 2020), with a slight modification, against four bacterial indicator strains representing Gram-negative bacteria (Escherichia coli and Klebsiella pneumoniae) and Gram-positive bacteria (Staphylococcus aureus and Streptococcus mutans). In brief, each pure bacterial isolate was streaked as a five cm vertical line and incubated for 48 hours at 37°C until fully grown. Bacterial indicator strains were then streaked five cm perpendicular to the original line of the bacterial isolate (Figure 2). Antibacterial potential was determined based on the distance formed between an isolate and a bacterial indicator according to four categories: no activity (a bacterial indicator shows no distance from an isolate), weak (1-4 mm), moderate (5-9 mm) and strong (≥ 10 mm).
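A minimal sketch of the scoring rule just described (no activity, weak 1-4 mm, moderate 5-9 mm, strong ≥ 10 mm) is given below; the function name and example distances are illustrative and are not data from the study.

```python
def antibacterial_category(clear_distance_mm: float) -> str:
    """Classify perpendicular-streak inhibition from the clear distance (mm),
    using the cut-offs stated in the prescreening section above."""
    if clear_distance_mm < 1:
        return "no activity"
    if clear_distance_mm <= 4:
        return "weak"
    if clear_distance_mm <= 9:
        return "moderate"
    return "strong"

# Illustrative distances; isolate SA4 showed >= 10 mm against all four indicators
for d in (0, 3, 7, 12):
    print(d, "mm ->", antibacterial_category(d))
```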
Genetic analysis of bacterial isolates with strong antibacterial activities
Bacterial isolates with strong antibacterial activity were genetically identified using polymerase chain reactions followed by Sanger sequencing. The genomic DNA of selected bacterial isolates was extracted using the Bacteria DNA preparation kit from Jena Bioscience (Jena, Germany), following the manufacturer's instructions. Genomic DNA of each of the selected isolates was amplified by targeting a 16S rRNA gene fragment using the primer pair 27F: 5'-AGAGTTTGATCMTGGCTCAG-3' and 1492R: 5'-GGTTACSTTGTTACGACTT-3' (Lane, 1991). The 50 µL PCR master mix contained 25 µL My Taq HS Red Mix (2x), 1 µL (20 µM) of each forward and reverse primer, 22 µL of sterile DNA/RNA-free water and 1 µL of genomic DNA. The PCR cycling consisted of pre-denaturation at 95°C for 5 minutes; 30 cycles of denaturation at 95°C for 1 min, annealing at 55°C for 1 min, and extension at 72°C for 1.5 min; and a final extension for 7 minutes. The amplified PCR product was analysed by gel electrophoresis on 1% agarose supplemented with SYBR Safe for 45 minutes (80 volt/200 watt) and visualised under UV light in a gel documentation machine. Subsequently, PCR products were sent to PT Genetika Science (https://ptgenetika.com/) for Sanger sequencing. The obtained sequence data were submitted to the NCBI BLAST database (https://blast.ncbi.nlm.nih.gov/Blast.cgi) to identify the most closely related bacterial sequences.
Bacterial isolates with antibacterial activities
A total of 68 bacterial isolates were selected from the 165 bacterial colonies observed on agar plates after 28 days of incubation, based on morphological observations. Of these 68 isolates, only 22 isolates showed potential antibacterial activities against at least one bacterial indicator using the perpendicular streak method. These bacterial isolates were grouped as Gram-positive with rod morphology (Table 1). The level of antibacterial inhibition varied among the 22 bacterial isolates. In general, the majority of isolates weakly inhibited at least one indicator bacterium. Two isolates (SA1 and RM10) displayed moderate antibacterial activity. SA4, however, was the only bacterial isolate with strong antibacterial potential against all four bacterial indicators (Figure 3). The perpendicular streak was selected as the prescreening method because the approach is rather straightforward and has been applied effectively to determine isolates with antibacterial activities (Boontanom and Chantasari, 2020; Balouiri et al., 2016). It can be assumed that isolate SA4 potentially synthesizes active antibacterial molecules, based on the substantial distance formed between the isolate and the bacterial indicators. Antibacterial substances produced by an active bacterial isolate diffuse through the agar media so that growth of other bacteria is inhibited (Balouiri et al., 2016). Nucleotide sequence comparison of isolate SA4 against the NCBI BLAST database indicated that the top hit was Bacillus subtilis strain BIL/BS-168, with 97.36% identity. Bacillus subtilis in general has been regarded for its ability to produce antimicrobial compounds and has been applied in food preservation and crop protection (Caulier et al., 2019). For example, bacteriocin is one example of an active compound responsible for antimicrobial activities in B. subtilis (Sharma et al., 2018). However, at this stage it is still unknown what type of antibacterial substances could be produced by the SA4 isolate. Further research should therefore focus on identifying the type of antibacterial compounds synthesized by the SA4 isolate. In addition, genetic analysis should also be done to analyze the metabolic pathways that are responsible for synthesizing antibacterial compounds in the SA4 isolate. The fact that the remaining 21 bacterial isolates showed only weak to moderate antibacterial activities in the perpendicular streak assay does not automatically exclude the full potential of these isolates. It could be that each isolate requires a different time to accumulate its antibacterial substances.
Therefore, liquid fermentation followed by organic extraction of these isolates using different solvents, chosen depending on the polarity of the expected compounds, should be done to fully unravel their true antibacterial potential or other therapeutic capabilities.
CONCLUSION
In conclusion, this study obtained 22 bacterial isolates with antibacterial potential from mangrove soil of the Ngurah Rai mangrove forest. All of these isolates were characterized by rod shape and Gram-positive cell walls under microscopic observation. Isolate SA4 displayed the strongest antibacterial potential based on the perpendicular streak method, with a clear distance zone above 10 mm. Isolate SA4 was closely related to Bacillus subtilis strain BIL/BS-168, with 97.36% sequence identity based on 16S rRNA gene sequences. Further studies should focus on extrapolating the antibacterial potential of these 22 isolates, especially isolate SA4, by performing organic extractions and screening against different bacterial pathogens. Apart from antibacterial activities, other therapeutic potential of the obtained isolates should also be explored, such as antifungal, antioxidant, or anticancer activities, in order to fully unravel the biological capabilities of these isolates.
The analysis of Venus' physical surface using methods of fractal geometry
In this paper, the work on investigating fractal structures on Venus was performed on the basis of observations taken by the "Magellan" spacecraft (NASA). The uncertainties in some data produced by "Magellan" were filled by the information that had been collected before, in the "Venera 15", "Venera 16", and "Pioneer" missions. During the implementation of the work a digital map of Venus' surface was built, and its spatial model was created. It is worth noting that the choice of the basic level surface on Venus is defined by a certain value of potential, or a point on its surface through which the geoid passes. The model of Venus' physical surface was created using the harmonic expansion into spherical functions of altimetry data from the "Magellan" mission. In the present paper, for determining and analyzing fractal dimensions the Minkowski mathematical algorithm, which is a simplified variant of the Hausdorff-Besicovitch dimension and provides high reliability and accuracy, was used. As a result, fractal correlations of Venus' geoid anomalies in both longitude and latitude as well as the mean value of fractal dimensions were calculated. The following values of mean fractal dimension for the Venus surface are obtained: in latitude, Dβ = 1.003; in longitude, Dλ = 0.98. Based on these values, we may conclude that the topographic model of Venus' physical surface is close to a spherical figure. The comparison between the obtained Venus fractal parameters and those of the Earth shows good agreement.
Introduction
Currently, the main approaches to the study and description of processes in planetary systems are statistical and fractal methods. In particular, the robust method allows investigating the structure of complex objects taking into account their specific character, while fractal geometry allows studying not only the structure, but also the connection between the structure and the processes of its formation [1]. In this respect, the problem of developing methods for recognizing fractal structures of planetary objects is relevant. As variations of Venus' physical surface represent a complex multi-parameter system [2], its analysis should be conducted by means of complex physics methods, one of whose directions is fractal analysis [3]. For the studies of Venus, data from the Magellan spacecraft (NASA) [4] were used. The equipment of this artificial satellite allowed scanning almost the entire surface of Venus using a radar with a synthesized aperture in the S-band (12 cm) and a microwave radiometer, as well as investigating topography with a dedicated radar altimeter [5]. The comparison between the obtained Venus fractal parameters and those of the Earth confirms the conclusion in [6], in which the geological evolution of Earth and Venus is analyzed. Some of the main methods available for analyzing complex astronomical systems are the robust methods [7][8][9][10][11]. It is necessary to take into account that such methods cannot be used in all cases, as not all celestial structures are of stochastic nature (including shape, physical parameters, etc.). To study complex systems, fractal analysis can be used. For complex objects, the fractal method allows determining values of the fractal dimension (FD) and self-similarity coefficients. On the basis of these parameters it is possible to study connections between a celestial body's evolutionary parameters and its structure.
Therefore, the study of celestial objects by means of fractal geometry methods is a relevant and modern task. Both the structure of Venus and its gravitational field relate to complex multiparameter systems. For studying such systems, it is necessary to use the theory of complex physics, which includes the fractal geometry methods [3]. In this work, the regression model of the Venusian structure was created on the basis of altimetry data of the "Magellan" mission (NASA) [13]. The general aim of the "Magellan" mission was the study of Venusian chemical parameters, its inner and outer structure, and planetary properties [5].
Building a model of Venus' physical surface using harmonic analysis
To develop the topographic model of Venus, the harmonic expansion of altimetry data from the "Magellan" mission into spherical functions was implemented using the formula [15,16]
h(λ, β) = Σ_(n=0..N) Σ_(m=0..n) [C̄_nm cos(mλ) + S̄_nm sin(mλ)] P̄_nm(sin β) + ε,   (1)
where h(λ, β) is the altitude function dependent on longitude and latitude; λ, β are the longitude and latitude (known parameters); C̄_nm, S̄_nm are the normalized harmonic amplitudes; P̄_nm are the normalized associated Legendre functions; and ε is a random regression error. This mathematical expression was also used for the processing of other astronomical observations [17]. Similarly, one may explore Venusian gravity anomalies [18,19]. The average profiles were created by modeling slices of the Venus surface, cutting the Venus sphere by meridian planes every 20° in longitude from 0° to 360°; for each such longitude value, the latitude changed from -60° to +70°. As an example, such a model of a slice of the surface of Venus by the meridian plane at 180° longitude is shown in figure 1. The Venusian gravity potential models (GPM) were created similarly to the topography models (1). For the GPM, averaged Venusian profiles were created for every longitude value at 20° intervals (from 0° to 360°); for each such fixed longitude, the latitude changed from -70° to +70°. As an example, such a model of the profile of the Venusian gravity anomalies in the meridian plane at 30° longitude is shown in figure 2. According to equation (2), for each of the averaged profile models the values of D were obtained. In equation (2), N is the number of cubes of a set size that could be placed onto the averaged profile model. For N to be an integer, the division into cubes of a set size starts with the value of 24. The calculated values for the structure and gravity models are shown in figure 3.
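Equation (2) is not reproduced in the extracted text above, but from the description (the number N of equal-sized cubes covering an averaged profile, evaluated over a series of cube sizes) it is presumably the standard Minkowski box-counting relation, in which D is the slope of log N against the logarithm of the inverse cube size. The sketch below is a generic implementation of that idea for a one-dimensional height profile and is not the authors' code.

```python
import numpy as np

def box_counting_dimension(profile: np.ndarray, sizes=(2, 4, 8, 16, 32, 64)) -> float:
    """Estimate the Minkowski (box-counting) dimension of a 1-D height profile.

    The profile is rasterised onto an n x n grid; for each box size we count
    the boxes containing at least one sampled point of the curve and fit
    log N against log(1/size). Generic sketch of the named method only.
    """
    n = len(profile)
    h = (profile - profile.min()) / (np.ptp(profile) + 1e-12) * (n - 1)
    counts, scales = [], []
    for s in sizes:
        boxes = {(i // s, int(hi) // s) for i, hi in enumerate(h)}
        counts.append(len(boxes))
        scales.append(s)
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(scales)), np.log(counts), 1)
    return float(slope)

rng = np.random.default_rng(0)
profile = np.cumsum(rng.standard_normal(1024))  # synthetic rough profile
print(box_counting_dimension(profile))          # roughly 1.2-1.5 for such a curve
```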
Distribution of fractal similarity coefficient along Venus' surface
The structure of an object under consideration may be presented as an ordered set containing N² elements, indexed by i, j = 1 … N. A partial order on a finite set is defined by a Hasse diagram (figure 4). Elements of the set have certain properties ξ (size, colour, volume, shape, etc.) inherent only to the elements of this set. If there is more than one common property, the set can be described by several fractal properties. The set may be represented as a union of disjoint subsets indexed by integers α and n, and upper and lower bounds of the set exist with respect to the properties ξ. The fractal property of the set with respect to a given property ξ is defined by the slope of log Γ_ξ against the logarithm of the covering scale, where Γ_ξ is the number of non-overlapping cubes covering the corresponding subset. The self-similarity coefficient is then defined relative to D_0, the fractal dimension of a self-similar set. An analysis of figure 5 shows that the distribution of the self-similarity coefficient over various zones of Venus' surface varies from -0.8 to +0.8, which points to the fractality of Venus' surface and also to a change in its structure from one zone to another. It is worth emphasizing that the following models were investigated: in section 2, the averaged profiles of Venus (i.e. plane models), and in section 3, the surface of Venus (i.e. a 3D model).
Summary and conclusions
The analysis of modern methods for solving problems of Venus topography on the basis of data produced in space missions was conducted. In particular, altimetry was investigated. Multiple processing of the space dataset was found necessary due to the constant improvement of the processing methods on which global models of Venus are based. This direction has become particularly important after the appearance of observational data based on space measurements. As a result, the fractal dimensions in both latitude and longitude were obtained for the Venusian physical surface model. The values of averaged fractal dimensions were calculated: the averaged fractal dimension of the Venusian physical surface model in longitude was D = 1.039, and in latitude D = 1.063.
Heavy ion results from the CMS Collaboration
The first heavy ion run at the LHC occurred in November of 2010 and was followed by a second run in late 2011 that increased significantly the available event sample, achieving an integrated luminosity of 150 μb^-1. Heavy ion collisions at the LHC are expected to produce a partonic medium which has a higher energy density and a longer lifetime than could be created at RHIC. This work gives an overview of what has been learned about the nature of the hot and dense medium created in high energy heavy ion collisions using new data from the CMS experiment. Specifically, azimuthal anisotropy at high transverse momentum, a collection of nuclear modification factor measurements for different particle species and identified jets, differential jet properties, and quarkonia measurements are discussed.
Introduction
High energy heavy ion collisions provide a unique environment to study nuclear matter at the extremes of temperature and energy density. They allow for experimental tests of the theoretical framework for strong interactions provided by Quantum Chromodynamics (QCD). The first heavy ion collisions delivered by the LHC in 2010 allowed experimental access to the highest energy density medium created in the laboratory. The results obtained from these early PbPb data collected by the CMS experiment [1] at a center-of-mass energy of 2.76 TeV have shown that the matter created in these collisions is indeed strongly interacting, explosive, and exhibits collective behavior [2,3]. In the following year, CMS continued taking PbPb data at the same energy, recording an event sample corresponding to an integrated luminosity of about 150 µb^-1 (more than a 10-fold increase from the first run). This sample extended significantly the kinematic reach for rare probes and processes, particularly in the "hard" sector (high energy or transverse momentum) and heavy flavor studies. In this work an overview of selected recent CMS heavy ion results from the 2011 run is presented. Only mid-rapidity measurements are included here, specifically, azimuthal anisotropy at high transverse momentum (pT), a collection of nuclear modification factor measurements for different particle species and identified jets, jet shapes and fragmentation functions, and quarkonia measurements. A complete collection of all published and submitted CMS heavy ion papers and preliminary heavy ion results can be found at [4].
Experimental Setup
Although the CMS detector was not originally conceived specifically for heavy ion physics studies, the versatile system is exceptionally well equipped to explore a wide range of related physics topics. CMS is a multi-layer detector, providing nearly 4π coverage with a silicon tracking system nested inside high-granularity electromagnetic (ECAL) and hadronic (HCAL) calorimeters, in turn enclosed by the muon tracker system. A set of forward detectors (HF, TOTEM, CASTOR, and ZDC) extends acceptance to over 8 units of pseudo-rapidity (η). These forward detectors allow tests of the limiting fragmentation, gluon saturation and color-glass-condensate ideas in new regions of parton momentum fraction x.
The silicon tracker design allows excellent performance in the high-multiplicity environment typical of high energy heavy ion events, and provides acceptance for tracks with momentum above 300 MeV/c with a momentum resolution better than 2% for 100 GeV/c tracks. The tracker alone makes the study of many physics topics possible; for example, charged particle multiplicity, azimuthal distributions, and spectra and correlations of charged particles, which help constrain initial state gluon densities, collective flow and medium viscosity in models. The track pointing resolution of the silicon system also permits statistical identification of displaced vertices, essential for heavy flavor studies. The ECAL granularity (0.087 × 0.087 at central rapidities) allows direct identification and reconstruction of jets and di-jets. Combination of the ECAL, HCAL and tracker information allows detailed studies of the medium effects on jet properties. Unsurpassed capabilities of the muon system allow one to resolve quarkonium states and disentangle various production mechanisms and medium-induced suppression. Last, but not least, the CMS High Level Trigger utilized during the heavy ion data taking extended the energy reach for jets to above 300 GeV, provided access to ultra-central collisions, and significantly improved the reach for detailed studies of J/ψ, Z0, and ϒ production as a function of transverse momentum and centrality, which is defined as a fraction of the total geometric cross section, with 0% denoting the most central collisions (zero impact parameter) and 100% the most peripheral collisions.
Azimuthal Anisotropy
The strong azimuthal anisotropy observed at RHIC in the angular correlations of final state particles with respect to the event plane is found to be well described by hydrodynamic calculations with little or no viscosity. These measurements have established some of the most important properties of the medium, provided strong evidence for its partonic nature and led to the establishment of its "perfect liquid" properties (see, for example, [5] and references therein). It is recognized that the collective expansion of the medium under path-dependent pressure gradients is responsible for the development of the final state azimuthal anisotropies of softer hadrons, while at high pT jet quenching, or path-dependent energy loss, produces variations of particle densities with respect to the event plane orientation. Within the first year of operations CMS results confirmed the near-perfect liquid behavior in the new energy domain. Here, the extension of these measurements with the run 2011 data is discussed. To characterize anisotropy in the particle azimuthal distributions with respect to a relevant event plane, typically a Fourier expansion of the relative azimuth (∆φ) correlation for pairs of charged hadrons is performed. It is expected in theory that the anisotropies of the azimuthal distribution, studied in terms of the v_n coefficients of the Fourier expansion, will be sensitive to the early partonic properties of the system. Until recently, experimental measurements were focused on just one harmonic of the Fourier expansion, v_2, often termed "elliptic flow". Under the simplistic assumption of smooth initial conditions, the higher terms, and all odd terms in general, were expected to be subdominant or extinguished by symmetry. Theoretical developments in recent years led to recognition of the importance of the initial state nonuniformity caused by density/geometry fluctuations, which result in significant anisotropies of higher orders.
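As a toy illustration of how the v_n coefficients of the Fourier expansion mentioned above can be extracted from pair (Δφ) correlations, the sketch below uses the simple Q-vector form of a two-particle estimator. It is a pedagogical example only, not the CMS analysis, which relies on event-plane and cumulant methods with acceptance corrections and non-flow suppression.

```python
import numpy as np

def vn_two_particle(phis: np.ndarray, n_max: int = 6) -> dict:
    """Toy two-particle estimate of azimuthal harmonics from one event.

    V_nDelta = (|Q_n|^2 - M) / (M*(M-1)) with Q_n = sum_j exp(i*n*phi_j);
    under the usual factorisation assumption v_n ~ sqrt(V_nDelta).
    """
    m = len(phis)
    out = {}
    for n in range(1, n_max + 1):
        qn = np.sum(np.exp(1j * n * phis))
        vn_delta = (np.abs(qn) ** 2 - m) / (m * (m - 1))
        out[n] = float(np.sqrt(vn_delta)) if vn_delta > 0 else 0.0
    return out

# Toy event: accept/reject a flat phi distribution to imprint a 10% v_2 modulation
rng = np.random.default_rng(1)
phi = rng.uniform(0, 2 * np.pi, 50000)
keep = rng.uniform(0, 1, phi.size) < (1 + 0.2 * np.cos(2 * phi)) / 1.2
print(vn_two_particle(phi[keep]))  # the n=2 entry should come out near 0.10
```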
The evolution of the flow harmonics of all orders with initial eccentricities and energy densities provides experimental constraints on the initial state properties and on the shear viscosity over entropy density ratio (η/s) during the subsequent system evolution. CMS has studied in detail the centrality and transverse momentum dependence of the azimuthal anisotropy coefficients v_2, v_3, v_4, v_5 and v_6 in minimum bias PbPb collisions at √s_NN = 2.76 TeV. Figure 1 shows results of this study [6] for the second through fifth harmonics for a mid-central bin, corresponding to 30-35% of the total geometric cross-section. Finite values observed for the odd harmonics (v_3, v_5) illustrate the effects and importance of initial state fluctuations. The hierarchy of v_n magnitudes observed is consistent with theoretical expectations from hydrodynamic evolution of the system. Comparisons with model calculations for the second and third harmonics with various input shear viscosity over entropy density ratios [7] favor η/s values that are non-zero yet close to the expected quantum limit. At low transverse momentum, the second order harmonic, v_2, is the dominant anisotropic term of the Fourier expansion for most of the impact parameter range of the collisions, due to the initial state spatial eccentricity. The magnitudes of v_2 and v_3 become comparable in ultra-central collisions. As noted earlier, the strength of the elliptic flow is found to be approaching the ideal hydrodynamic limit in the low transverse momentum region. It has been found that the elliptic anisotropy persists to much higher momenta than would be considered reasonable for a hydrodynamic treatment of the system. At high pT the path-length-dependent partonic energy loss is considered the primary source of the observed v_2. Figure 2 shows recent CMS results from run 2011 data for two selected centrality bins. For both selections, the 10% most central and 40-50% mid-central events, significant anisotropy persists to transverse momenta of at least 40 GeV/c. High precision measurements of v_2 vs. pT, to a large extent enabled by the successful implementation of a corresponding High Level Trigger algorithm during the 2011 run, allow one to constrain significantly the path-length dependence of energy loss in the models, and/or determine the dominant mechanisms for jet quenching. Linear or quadratic dependencies are expected in perturbative QCD for collisional or radiative energy loss scenarios (respectively), while more exotic theories predict stronger (cubic) trends. In Fig. 2 a direct comparison with one available perturbative calculation for the radiative energy loss mechanism [9] is shown to capture the data trends for both centrality bins quite closely. Additional studies are underway to further quantify the agreement and explore the sensitivity to different types of initial conditions tested.
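The different path-length scalings mentioned above can be illustrated with a deliberately crude toy model: partons emitted at azimuth φ traverse an elliptical medium, lose energy according to L(φ)^n, and the surviving yield is Fourier-analysed to give a v_2. All numbers (the medium semi-axes and the opacity κ) are arbitrary choices for illustration, and none of this represents the CMS measurement or the calculation of Ref. [9].

```python
import numpy as np

def toy_v2_from_energy_loss(n_exponent: float, kappa: float = 0.3,
                            a: float = 2.0, b: float = 3.0) -> float:
    """Toy high-pT v2 from path-length dependent suppression exp(-kappa * L^n)
    in an ellipse with in-plane semi-axis a and out-of-plane semi-axis b."""
    phi = np.linspace(0.0, 2.0 * np.pi, 2000, endpoint=False)
    path = np.sqrt((a * np.cos(phi)) ** 2 + (b * np.sin(phi)) ** 2)
    surviving = np.exp(-kappa * path ** n_exponent)
    return float(np.sum(surviving * np.cos(2 * phi)) / np.sum(surviving))

for n in (1, 2, 3):  # collisional-like, radiative-like, and "exotic" scalings
    print(n, round(toy_v2_from_energy_loss(n), 3))
```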
Jet quenching

The effects of the jet energy loss, or quenching, that lead to the azimuthal anisotropies of particle distributions with respect to the event plane at high transverse momenta, as discussed above, can also be studied through a comparative analysis of inclusive particle distributions in PbPb and pp collisions at the same energy. To compare the relative particle abundances produced by the two different systems, nuclear modification factors, R AA , are constructed for each centrality bin of the PbPb data by dividing the transverse momentum spectrum measured in this bin by the pp reference spectrum, scaled appropriately by the corresponding nuclear overlap function, T AA (detailed definitions are provided in [13]). The exact T AA values are model-dependent, but experimental confirmation of the overlap functions is obtained by constructing nuclear modification factors for probes that do not interact strongly and therefore are not expected to suffer energy loss in the colored medium. Figure 3 (a) shows R AA measurements for isolated photons [10] and W- and Z-bosons [11,12] as a function of the transverse mass. All three of these measurements are found to be consistent with unity, confirming that the calculated T AA values give the appropriate scaling.

It is evident from the measurements of R AA for charged hadrons, also presented in Fig. 3 (a), that the suppression of relative hadron production extends to the edge of the kinematic regime covered [13]. In the most central PbPb events the suppression levels off at about 50% above 50 GeV/c, with no pronounced p T dependence up to 100 GeV/c. A similar level of suppression is consistently seen in the jet R AA shown in Fig. 3 (b) [14] as a function of jet p T .

The exact mechanisms of parton energy loss remain under discussion, as various scenarios could accommodate the inclusive data. Differential measurements, such as di-jet energy balance [15], jet shape measurements, and comparative analysis of jet fragmentation patterns in PbPb and pp data, are essential experimental tools to constrain the model parameters and to gain deeper insights into the nature of QCD interactions in the medium.

Figure 4 shows preliminary CMS results for jet fragmentation functions from run 2011. This measurement of the detailed energy distribution among the jet fragments updates the earlier CMS work [16] by extending the measurement to lower-p T hadrons (down to 1 GeV/c). For each centrality bin studied (from 50-100% peripheral events to the top 10% most central PbPb collisions) the measured fragmentation function is compared to a reference derived from pp data at the same energy. The reference is obtained by smearing (to account for resolution differences) and reweighting (to account for possible differences in the spectral distributions) the measured jet fragmentation function for pp collisions (for details see [17]). In this work the jets are selected to have transverse momentum of at least 100 GeV/c, which ensures nearly perfect jet trigger efficiency. Distributions of jet fragments are plotted as a function of ξ = ln(1/z), with z = p track || / p jet , where p jet is the jet momentum and p track || is the component of the track momentum parallel to the jet axis. To better visualize the differences, the ratio of PbPb to pp fragmentation functions is taken for each centrality bin and shown in the lower panel of Fig. 4.
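For reference, the nuclear modification factor and the fragmentation variable used in this discussion are conventionally defined as follows. These are the standard definitions assumed here rather than formulas quoted verbatim from the CMS papers; N_coll denotes the average number of binary nucleon-nucleon collisions and σ_inel^pp the inelastic pp cross section:

```latex
R_{AA}(p_T) = \frac{1}{\langle T_{AA}\rangle}\,
\frac{\mathrm{d}^2 N^{\mathrm{PbPb}} / \mathrm{d}p_T\,\mathrm{d}\eta}
     {\mathrm{d}^2 \sigma^{pp} / \mathrm{d}p_T\,\mathrm{d}\eta},
\qquad
\langle T_{AA}\rangle = \frac{\langle N_{\mathrm{coll}}\rangle}{\sigma^{pp}_{\mathrm{inel}}},
\qquad
\xi = \ln\frac{1}{z}, \quad z = \frac{p^{\mathrm{track}}_{\parallel}}{p^{\mathrm{jet}}} .
```

With this normalisation, R AA = 1 corresponds to binary-collision scaling (no net medium modification), which is why the photon, W and Z measurements discussed above serve as an experimental cross-check of ⟨T AA⟩.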
In the most peripheral PbPb data the fragmentation functions are found to be very similar between the two systems (with a corresponding ratio of 1). For more central PbPb collisions the jet fragmentation functions become progressively different from the pp reference at intermediate and high ξ values (e.g. for softer jet fragments, between about 1 and 3 GeV/c). The observed excess of charged hadrons in the high-ξ region for central events can be taken as evidence of fragmentation softening due to in-medium interactions.

The redistribution of jet energy towards softer hadrons observed in the fragmentation function measurements is complemented by studies of the angular distribution of fragments within the jet. The differential jet shape measurements were carried out for the same data selection as in the fragmentation analysis: the reconstructed jets were required to have transverse momentum of at least 100 GeV/c within a jet cone of radius R = 0.3. The charged tracks within the jet were measured down to a transverse momentum of 1 GeV/c. The jet shapes ρ(r) are defined as the fraction of the jet energy within a differential cone dr at a given radial distance r from the jet axis. Preliminary results of the differential jet shape measurements from run 2011 are shown in Fig. 5. The upper panel presents the centrality evolution of the measurement, starting from the 50-100% most peripheral events (on the left) to the top 10% most central events (on the right). On each plot, a reference distribution, derived from pp data at the same energy, is shown for comparison. Again, the lower panel shows the ratio of the differential jet shapes from PbPb and pp data for each of the centrality bins above, to help visualize the differences. The peripheral events show no variations between the two systems, with the ratio of differential jet shapes consistent with unity (within errors) across the entire range of radii. A deviation from unity begins to develop at large r in more central events, becoming most pronounced in the most central bin. The excess of hadrons at large distances from the jet axis found in jets from central PbPb collisions indicates appreciable broadening of the jet structure.

Heavy flavor measurements

The higher integrated luminosity of the run 2011 data allowed major advances in the studies of heavy flavor production and in-medium interaction effects for c and b quarks. These new data enabled the first detailed studies of the centrality dependence of quarkonia production. Relative suppression of different quarkonia species is one of the long-standing signature predictions of quark-gluon plasma formation. The suppression, expected to originate from colour screening of the binding potential of quarkonium states by the abundant gluons and light quarks, is considered one of the most direct pieces of evidence for deconfinement. Additionally, studies of quarkonia suppression in heavy ion collisions are expected to aid our understanding of the degree of medium thermalization. It is predicted [18,19] that different quarkonium states will "melt" sequentially, depending on the interplay between their binding energies and the medium temperature. Because of this interplay, the degree of melting of the various quarkonia states has also long been proposed as an experimental QGP thermometer [20].
Figure 6 (a) shows the invariant-mass spectrum of µ + µ − pairs from minimum bias PbPb collisions at 2.76 TeV. The superb resolution of the CMS muon system, illustrated by the figure, enables clean separation of the quarkonia states (with many states clearly visible even before the combinatorial background is subtracted). Panel (b) in the same figure zooms in on the mass range of the ϒ family. The data points show the di-muon invariant mass distribution with the peaks for the ϒ(1S), ϒ(2S) and ϒ(3S) states from the minimum bias PbPb data [21]. The ϒ(1S) and ϒ(2S) peaks are clearly visible, while the ϒ(3S) peak is hard to identify. The solid line shows the form fit to the data used to extract the integrated yields for each state. The reference measurement from pp data at the same energy is also shown in Fig. 6 (b) as a dashed line. This dashed line is obtained from an invariant mass fit to the pp data, and the otherwise fixed shape is then scaled to match the ϒ(1S) yield from the PbPb collisions. Comparing the PbPb and pp fit lines, the nearly complete melting of the ϒ(3S) and the significant suppression of the ϒ(2S) relative to the expected in-vacuum abundances is evident. The last panel of Fig. 6 summarizes the available suppression measurements for various quarkonia states from the CMS experiment. The nuclear modification factors are presented to test the sequential melting prediction. Minimum bias PbPb results [22,23,24] show the lowest R AA value (strongest suppression) for the ψ(2S) and ϒ(2S), and the highest one (least suppressed) for the ϒ(1S), with the J/ψ measurement falling in between (we note that only an upper limit on the ϒ(3S) R AA is estimated at this point). The R AA ordering is consistent with the ordering of the binding energies of the quarkonia states [25] and with the sequential melting scenario in the QGP medium.

Summary

In this work a brief review of selected CMS results from PbPb collisions at a center-of-mass energy of 2.76 TeV from run 2011 is presented. This recent data set, corresponding to an integrated luminosity of 150 µb −1 , significantly improves the physics reach of the CMS heavy ion studies. Additionally, many smaller cross-section measurements have benefited from the CMS High Level Trigger capabilities fully implemented in the run.

Detailed studies of medium properties via angular correlation analyses confirmed the significance of the initial-state fluctuations imprinted on the final-state particles. The centrality and transverse momentum dependence of the higher-order Fourier harmonics (v n ) provides a precise set of measurements to constrain the medium properties, specifically the η/s ratio.

Various measurements presented here update the earlier findings on jet quenching. Nuclear modification factors for colorless probes (W, Z and isolated photons) are found to be consistent with unity within (admittedly) large errors, confirming the relevance of the binary scaling hypothesis for hard probe production. At the same time, strong suppression of charged hadrons is established with significantly improved precision. No appreciable p T dependence is found in their R AA values above 50 GeV/c, which are about 0.5 for the most central collisions. This constant R AA ≈ 0.5 extends up to 100 GeV/c for reconstructed tracks and up to 300 GeV for jets.

We observe jet modifications in response to the medium in the jet shape and fragmentation function measurements. While in peripheral events both jet properties are found to be consistent with the reference derived from pp data at the same energy, broadening of the jet shapes and softening of the jet fragmentation functions gradually set in as the collisions become more central.
The increase in the recorded integrated luminosity made possible new observations in the heavy flavor sector. The first detailed centrality dependence studies of nuclear modification factors for quarkonia states have been initiated. The new data have made it possible to firmly establish the sequential melting of the ϒ states; the hierarchy of suppression was found to be consistent with expectations based on binding energies, and the overall suppression is found to increase with collision centrality.

Figure 1: Transverse momentum dependence of the Fourier expansion coefficients for mid-central events corresponding to 30-35% of the total geometric cross-section. The lines illustrate expectations from a hydrodynamic model with different η/s settings from [7].

Figure 2: Elliptic anisotropy (v 2 ) as a function of transverse momentum for the 10% most central collisions (a) and events from the 40-50% centrality bin (b). Theoretical calculations from [9] are compared with the data. The model results for radiative energy loss are shown for Glauber (solid line) and Color Glass Condensate (dashed line) initial conditions.

Figure 3: a) Collection of CMS nuclear modification factor measurements for a variety of particle species: inclusive charged hadrons, isolated (direct) photons, W- and Z-bosons, b-quarks. The centrality selection for each measurement is listed on the figure. b) Jet R AA for the 5% most central PbPb events as a function of jet p T .

Figure 4: Top panel: fragmentation functions for different centrality bins of PbPb collisions at 2.76 TeV are compared with the corresponding reference from pp data at the same energy. The centrality selection ranges from 50-100% peripheral events (leftmost plot) to 0-10% most central collisions (on the right). The jets are selected with p T > 100 GeV/c. Fragmentation functions are built with all tracks with p T > 1 GeV/c. Bottom panel: ratio of PbPb and pp fragmentation functions for the corresponding centrality bin from the upper panel. Error bars represent statistical errors, while bands estimate the systematic uncertainty of the measurement.

Figure 5: Top panel: differential jet shapes for several centrality bins of PbPb collisions at 2.76 TeV are compared with the corresponding reference from pp collisions at the same energy. The centrality selection ranges from 50-100% peripheral events (leftmost plot) to 0-10% most central collisions (on the right). The jets are selected with p T > 100 GeV/c. Jet shapes are studied with all tracks with p T > 1 GeV/c inside the jet cone of R < 0.3. Bottom panel: ratio of PbPb and pp differential jet shapes for each corresponding centrality bin from the upper panel. Error bands show the estimated systematic uncertainty of the measurement.

Figure 6: a) Invariant-mass spectrum of µ + µ − pairs from minimum bias PbPb collisions at 2.76 TeV. b) ϒ(1S), ϒ(2S) and ϒ(3S) measurements from minimum bias PbPb data. The solid line shows the form fit to the data used to extract the integrated yields for each state. The dashed line is the result of the form fit to the pp reference data matched to the ϒ(1S) yield. c) Nuclear modification factors (R AA ) for various quarkonia states in minimum bias PbPb collisions.
2019-04-22T13:10:51.164Z
2013-10-22T00:00:00.000
{ "year": 2013, "sha1": "d9617f730915103c60b0aab9031c5e20a218aead", "oa_license": "CCBYNCSA", "oa_url": "https://pos.sissa.it/184/033/pdf", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "d9617f730915103c60b0aab9031c5e20a218aead", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
248936824
pes2o/s2orc
v3-fos-license
Saving time and effort: Best practice for adapting existing patient-reported outcome measures in hepatology

It is increasingly recognised that collecting patient-reported outcome measures (PROMs) data is an important part of healthcare and should be considered alongside traditional clinical assessments. As part of a more holistic view of healthcare provision, there has been an increased drive to implement PROM collection as part of routine clinical care in hepatology. This drive has resulted in an increase in the number of PROMs developed for use in various liver conditions. However, the development and validation of a new PROM is time-consuming and costly. Therefore, before deciding to develop a new PROM, researchers should consider identifying existing PROMs to assess their appropriateness and, if necessary, make adaptations to existing PROMs to ensure their rigour when used with the target population. Little is written in the literature on how to identify and adapt existing PROMs in hepatology. This article aims to provide a summary of the current literature and guidance regarding identifying and adapting existing PROMs in clinical practice.

What are patient reported outcome measures?

Patients are treated by healthcare providers with the primary goal of improving their health and wellbeing. Historically, this improvement in health has been judged by improvement in biochemical, histological, radiological or clinical assessments. This approach does not always correlate with improvement from the patient perspective. From the patient perspective, improving health is reflected in the documentation of their symptoms and their experience of healthcare provision, which are more appropriately collected directly from the patient [1]. With a move towards shared decision-making and patient-centred care, there is growing recognition within the healthcare community of the importance of the patient perspective and the need to consider patient-reported outcomes (PROs) as a key component of a holistic approach to patient care. The U.S. Food and Drug Administration (FDA) defines a PRO as "any report of the status of a patient's health condition that comes directly from the patient, without interpretation of the patient's response by a clinician or anyone else" [2]. The European Medicines Agency states that "Any outcome evaluated directly by the patient himself and based on the patient's perception of a disease and its treatment(s) is called a patient-reported outcome (PRO)" [3]. Patient-reported outcome measures (PROMs) can be broadly classified as generic or disease-specific instruments. Generic PROMs assess general aspects of health and can be applied across multiple conditions. Disease-specific PROMs, on the other hand, assess specific aspects that are related to a particular condition. PROMs are designed to measure aspects of health that either cannot be directly observed or are not feasible to observe [4]. Broadly speaking, collection of PROMs from the patient can be classified into the following main categories based on the outcomes measured: Health status and quality of life: patients' health and well-being as indicated by patient report; Patient satisfaction: patient-reported satisfaction with their medical treatment or care; Resource use: patients' reported use of health services and resources; Patient knowledge questionnaires: patients' understanding of medical conditions and their treatment.

What is currently driving the use of PROMs?
PROMs were initially developed for research use, and many regulatory authorities such as the European Medicines Agency (EMA) and the FDA advocate their use [2,5,6]. Recent consensus guidance also recommends the inclusion of PROMs in clinical trial designs [7]. The collection of PROMs aligns well with the increased drive within healthcare organisations for value-based healthcare, whereby organisations aim to achieve the best possible outcomes for patients with the available resources [8]. As more clinicians recognise the benefit of collecting PROMs in addition to measuring clinical outcomes, PROMs have seen increased use in routine clinical practice [9]. As a consequence of the drive to collect PROMs, there has also been an increase in the number of PROMs developed, validated, and used. The King's Fund report reflects on this as "a growing recognition throughout the world that the patient's perspective is highly relevant to efforts to improve the quality and effectiveness of health care" and notes that PROMs are likely to become "a key part of how all health care is funded, provided, and managed" [10]. This was illustrated in the United Kingdom when, in 2009, the Government implemented the routine collection of PROMs in England for four routine elective surgical procedures - hip and knee replacement, groin hernia repair, and varicose vein surgery (https://digital.nhs.uk/data-and-information/data-tools-and-services/dataservices/patient-reported-outcome-measures-proms) - in order to compare performance between providers. It is likely that routine PROMs collection will be extended to more conditions in the future.

LITERATURE SEARCH TO IDENTIFY BEST PRACTICE FOR THE ADAPTATION OF EXISTING PROMS

In order to identify relevant literature regarding best practice and guidance for the adaptation of existing PROMs, we undertook a scoping review of the literature. This scoping review aimed to explore the extent of the literature within the PROMs field regarding best practice/guidance for PROM adaptation without describing findings in detail [11,12]. We undertook a review of the literature to identify key papers/guidelines for the adaptation of existing PROMs. We searched PubMed and the Cochrane database (https://www.cochrane.org/). In order to limit the search, we searched for literature in the English language published within the last 10 years. Reference lists of relevant identified publications were also hand-searched to identify further relevant literature. We also undertook a Google™ search to identify relevant publications. Details of the search strategy are presented in Table 1. The searches were conducted on 17 February 2020. The inclusion and exclusion criteria are listed in Table 2. We carried out an initial title screening, then abstract screening to identify relevant papers that fitted the inclusion criteria, which we then reviewed fully. We identified specific themes related to the adaptation of existing PROMs which we regarded as recommendations/good practice, and we have structured the paper according to these identified themes.

FINDINGS

Supplementary Table 1 illustrates the publications identified as part of the scoping review of the literature. The guidance identified within these publications is organised under the specific headings of: defining the requirements of a PROM, identifying and appraising existing tools, adapting existing PROMs, issues of content validity and getting the right people involved.
Defining the requirements of a PROM

In order to provide meaningful information, PROMs need to be appropriately developed and validated according to robust criteria. The psychometric validation of PROMs can be complex and time-consuming and requires evidence of numerous facets including validity, reliability and responsiveness [13,14]. Given the growth in the number of available PROMs, even within the same condition, the old adage "don't reinvent the wheel" should be the first principle applied before taking the decision to embark on the development of a new PROM. Consequently, to enable researchers to appraise the quality of existing measures with the aim of ascertaining whether a new measure is needed, researchers must first establish a clear definition of what is required of the PROM. The requirements of the PROM need to be identified at the outset [15]. Consideration should include what the PROM aims to measure, whether the PROM should be generic or disease-specific, the clinical condition of interest, the specific population to which the PROM will be applied, and whether it will be used as part of routine clinical care or research. These factors will help to determine whether existing PROMs are suitable or can be adapted. A useful overview and starting point for deliberations is provided by Luckett and King [16]. A generic PROM may allow comparison of patient outcomes across different conditions; however, it will have less focus on the specific symptoms relating to a condition. A disease-specific PROM will have a more defined focus on the condition itself and will be more sensitive to changes in the condition and its associated symptoms over time, but it may be longer and therefore the burden to the patient may be greater. If a disease-specific PROM is required, one needs to define the specific population. For example, a PROM developed to measure pruritus in primary biliary cholangitis may not be suitable for measuring pruritus in intrahepatic cholestasis of pregnancy, as the patient experience of the symptom may differ across these two clinical conditions [2,16]. It is also important to consider whether the PROM will be used within a routine clinical setting or a research setting [17]. In a clinical setting, where time may be limited, the burden to the patient and the feasibility of completing the PROM need consideration. Within a research setting, time may not be as limited and longer, more detailed PROMs can be considered [17]. The proposed method of administration of the PROM is also important, and authors planning on using a PROM should ensure that it has been appropriately validated for their proposed administration method [2]. Issues such as respondent and administrator burden - length, formatting, font size, instructions, privacy, literacy levels, etc. - also need to be considered [2]. Figure 1 provides an overview of the first steps required before choosing to develop a new PROM. Luckett et al [16] have provided useful principles to consider when selecting a PROM: Selection of PROMs should be considered early during study design - selection should be driven by the research objectives, samples, treatment and available resources; For the primary outcome, choose as 'proximal' a PROM as will add to knowledge and inform practice - 'proximal' (symptoms) vs 'distal' (overall quality of life); Identify candidate PROMs primarily on the basis of scaling and content - which items/scales offer the best coverage of the impacts of interest, and which aspects of the score distribution will be most meaningful to consider?
Appraise the reliability, validity and 'track record' of candidate PROMs - look beyond articles that focus on evidence of validity and reliability; Look ahead to practical considerations - patient and staff burden, methods of administration, cost, availability of translated versions, guidelines for scoring and interpretation; Use measures that are suited to the clinical task being delivered and to the aims of your clinical work and the population you work with; Use valid and reliable measures in research that are relevant to the research question, and consider patient burden when using measures [17,18].

Identifying and appraising existing tools

Once one has defined the scope of the PROM, possible candidate measures can be identified. This process will determine whether there is a need for a new PROM. It will also allow for the identification of PROMs that could be adapted, shortened, translated or expanded. Given the large number of available PROMs, there are several ways in which possible candidate PROMs may be identified. Identifying systematic reviews of PROMs for a particular clinical area may prove to be particularly fruitful, as good reviews will assess the methodological quality [13-16] of the PROMs identified and provide a summary of the PROMs that offer the most promise. In addition to undertaking literature reviews, there are also databases and online resources that can be searched to identify existing PROMs. Some of these resources are generic and cover many conditions, whilst others provide a resource for disease-specific PROMs. Table 3 provides some examples of resources that can be used to identify candidate PROMs for adaptation. If these strategies do not identify any PROMs, conducting a new systematic review may uncover PROMs for consideration or items/questions in existing PROMs that could be included in the development of a new PROM [14]. Prinsen et al [13] have formulated a useful ten-step process for conducting such systematic reviews of PROMs [13,14]. Such a systematic review should be conducted in accordance with the guidelines outlined by the internationally recognised COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN). COSMIN provides detailed information and tools to aid this process on its website (https://www.cosmin.nl/tools/guideline-conductingsystematic-review-outcome-measures/). This will ensure that the methodological rigour of the PROMs identified is appropriately appraised. Once PROMs have been identified, the tools should be reviewed for their content and appropriateness for the desired application [13-22]. This process will also help to identify relevant questions/items that could be used to develop a new PROM or adapt an existing one.

Adapting existing PROMs

Researchers need to consider the existing PROM literature to determine whether an adequate instrument exists to assess and measure the concepts of interest. If no PROM exists, a new PROM can be developed or, in some situations, a PROM can be adapted by modifying an existing instrument [2]. Examples of instrument modifications include: (1) Making minor cultural/language adaptations within the same source language; (2) Undertaking a cross-cultural adaptation that includes translation into a different language; (3) Including additional items/questions; and (4) Shortening the original instrument.
Such PROM modification may be necessary to enable the PROM to be used with a different population or with a different population age group (for example, modification of an adult PROM for a paediatric population), to facilitate its use in a different language, for use in a different disease stage or treatment (for example a different cancer stage, or a newly diagnosed condition rather than a pre-existing condition), or to reduce patient burden. The FDA states that when a PROM is modified, evidence of adequacy for its new intended use should be provided and that "additional qualitative work may be adequate" to test such modifications [2]. Such changes include: Changing an instrument from paper to electronic format; Changing the application to a different setting, population or condition; Changing the order of items, item wording, response options, or recall period, or deleting portions of a questionnaire; Changing the instructions or the placement of instructions within the PROM. Snyder et al [21] outline some requirements for revalidating a PROM when changes such as these are made to an existing instrument. The search for PROMs may identify existing instruments that have proven validity for the population being studied and can be applied without requiring any adaptation. Alternatively, a PROM may be identified that appears appropriate but requires modification. Before engaging in any adaptation, it is important first to contact the PROM developer/copyright holder to ask for permission to make changes to the original PROM. Wild et al [22] have recently published guidance from the International Society for Quality of Life Research (ISOQOL) Translation and Cultural Adaptation Special Interest Group (TCA-SIG) regarding the copyright of PROMs. Failure to gain appropriate permissions for use and adaptation may result in legal challenges due to breaches of copyright. The authors present recommendations to prevent future conflict that include: Protecting the copyright of the original PROM; Writing a contract; Taking care when publishing; Establishing rules; Making the copyright notice visible; Maintaining copyright of the PROM and any derivatives with the original author; Centralising distribution; Getting legal counsel; Clarifying the copyright situation with respect to legacy PROMs. It is therefore prudent for researchers considering the adaptation (including the translation) of existing PROMs to identify and obtain agreement from the copyright holder prior to any adaptation [22]. The COSMIN initiative (https://www.cosmin.nl/) aims to develop methodology and practical tools for selecting the most suitable outcome measurement instrument. Its mission statement is: "to improve the selection of outcome measurement instruments of health outcomes by developing and encouraging the use of transparent methodology and practical tools for selecting the most suitable outcome measurement instrument in research and clinical practice". The COSMIN website provides a link to the COSMIN Database for Systematic Reviews, which can be searched to identify literature reviews that have been undertaken within specific clinical areas. The database provides a summary of each review and the PROMs that formed part of it, with links to the original publications. Examination of these reviews is useful in assessing whether an existing PROM may be appropriate to use.
Many of these reviews will also present a synthesis of each PROM with an assessment of its methodological quality and validity according to criteria outlined in one or more recognised guidelines.

CROSS-CULTURAL ADAPTATION

If a suitable PROM is identified and has appropriate content validity for the population of interest, but was developed and validated in a different language, cross-cultural adaptation represents an efficient way of adapting an existing PROM. Cross-cultural adaptation manages language translation and cultural adaptation issues with the aim of ensuring that a PROM is sensitive to the linguistic and cultural needs of the target population [22]. A PROM that has undergone rigorous cross-cultural adaptation is suitable for use in multinational and multicultural studies. It is important that any cross-cultural adaptations of PROMs are undertaken rigorously. Guidance regarding the process of cross-cultural adaptation has been described in a wide range of publications [22-25]. The lack of 'gold standard' guidance for cross-cultural validation prompted the Patient Reported Outcome (PRO) Consortium to update and develop further guidelines for best practice in the translation process [26]. These guidelines are based on the ISPOR Task Force guidelines, updated with greater detail through a further consensus process [22]. The aim of cross-cultural adaptation is to provide equivalence between the source PROM and the adapted version. Equivalence has many definitions; however, most current guidelines follow the universalist approach proposed by Harachi et al [27], which gives consideration to the influence of culture on how people respond to any given item on a questionnaire. Questions therefore not only require linguistic translation, but must also be adapted to fit the culture of the target country [26]. For example, a question about difficulty using a fork when eating may not be applicable in a country where forks are not used for eating [28]. Equivalence can be divided into five categories plus a summary category [22,28] (see Table 4), and this has formed the basis of many guidelines for the cross-cultural adaptation of outcome measures. Ultimately, all of the available guidelines are broadly based on a core set of principles that need to be considered when cross-culturally adapting an existing PROM: (1) Preparation. The initial stage of the process is to identify the team that will be responsible for the work and to identify suitable translators.

The degree of cross-cultural adaptation required varies depending on the proposed use of the adapted PROM. The intended use of the PROM may influence the number of the above steps that require completion [29]. Table 5 illustrates five different scenarios in which differing adaptation needs arise [25,29]. These range from a situation in which no adaptation is required (i.e., the questionnaire is used in the same population, in the same culture and language as originally designed), to full translation and cross-cultural adaptation (i.e., where the questionnaire is to be used in a different country and language).

ADDING TO EXISTING PROMS

If an existing PROM is identified as largely meeting the requirements for the population of interest but, following patient and expert consultation and/or exploration of the literature, it is perceived to be lacking in one or more key areas, there is the potential to adapt the PROM by adding new questions/items. There are various ways in which items can be sourced [17,28]: By asking patients.
Patients can be asked to identify additional items and domains that do not exist in the current version of the PROM. Patients are essential to item generation, ensuring item content is both relevant and provides full coverage of the target construct. Qualitative methods such as patient focus groups, interviews and surveys are useful for generating potential new items [30-32]; By evaluating the PROMs identified as a result of reviewing the literature or online resources. This can be an efficient way to generate new items. There are benefits to sourcing items in this way, most notably that there are likely to be a limited number of ways to ask questions about a specific problem such as abdominal pain, vomiting, etc. Moreover, items in existing PROMs have been repeatedly used and validated in many studies and trials; By identifying possible items from clinical observations. These items can be derived by clinicians based on their experience; By asking experts. This is a commonly used approach to generating new items. Methods similar to those used with patients (for example interviews, focus groups and surveys) can be used for gathering information about possible items for inclusion. Although useful for generating items, expert involvement should be used in tandem with other methods and should not be used in place of patient input; By utilising item banks. Item banks are a source of validated items that can be added to existing PROMs. One such item bank, the Patient-Reported Outcomes Measurement Information System (PROMIS™) initiative, was established in 2004 with the main goal of developing and evaluating, for the clinical research community, a set of publicly available, efficient and flexible measurements of PROs [33]. PROMIS™ (http://www.healthmeasures.net/explore-measurement-systems/promis/introto-promis/List-of-adult-measures) provides item banks that offer the potential for efficient (minimising item number without compromising reliability), flexible (enabling optional use of interchangeable items), and precise (having minimal error in estimates) measurement of commonly studied PROs [33]. The PROMIS group has developed and tested several hundred items measuring 11 health domains [33]. These core PROMIS domains reflect common, generic symptoms and experiences that are likely to apply to people in a variety of contexts or with a variety of diseases [33]. With additional validation, these banks may provide a common metric for the represented constructs across a range of patient groups, thereby reducing the large number of different measures currently used in research and allowing researchers to compare these constructs across patient groups in different studies [33].

SHORTENING OF EXISTING PROMS

Although many single-item and short-form symptom measures exist, one reason for adapting an existing PROM is to shorten it and reduce the number of items it includes. This can result in reduced patient burden and facilitate the use of a PROM as part of routine clinical care. As with other aspects of adaptation, it is essential to ensure that a shortened PROM is comprehensible to patients, includes all the relevant items and is fit for purpose. Like cross-cultural adaptation and adding existing items to a PROM, shortening will require further psychometric testing according to recognised criteria [30].
Issues of content validity

Where any adaptation is planned, the PROM will still need to show evidence that it is 'fit for purpose' with the intended population [33]. This guidance has since been updated to include further recommendations from the ISPOR good practice task force [34]. Additional guidance regarding content validity and its consideration with respect to PROM development and adaptation has also been published; it includes best practices for undertaking qualitative research to explore content validity, including the differences between establishing content validity for new measures compared with existing measures [14,35]. Assessment of PROM content is an important process when adapting an existing PROM, and this should involve engagement with, most importantly, patients and also clinicians.

Getting the right people involved

Having identified a candidate PROM for adaptation, it is important to ensure that it is appropriate for the patient population being studied. This is particularly important to undertake if the PROM is being adapted for use with a new clinical population. Pre-testing the PROM with patients, clinicians, and subject-matter experts will provide evidence of the PROM's content validity and help to ensure that any problems are rectified prior to applying the PROM in a large-scale study or implementing the instrument in routine clinical practice.

GETTING PATIENTS INVOLVED

In 2009, FDA guidance suggested that an important first step in establishing that a measure is fit for purpose is to develop a conceptual framework for the PROM and generate relevant items on the basis of direct input from patients with the clinical disease [2,34,35]. Recent guidance [36,37] highlights the various roles that patients and patient advocates can play in PROM studies. These include: PROM design and selection - bringing knowledge of the disease, symptoms and attributes of care with the greatest impact on patients' lives; PROM implementation and administration - the patient can bring insights based on their experience to guide practical decisions around PROM administration and implementation; Linguistic and cultural input - patients can contribute to the language used in the PROM to ensure it is straightforward and understandable to patients. Guidance has also been provided regarding how patients can be recruited to PROM studies, how to engage with them, how to define their role, and the provision of training and remuneration [38]. In addition, a framework for fully incorporating public involvement (PI) into PROMs has recently been published [38] which illustrates the extent to which patients can be involved in the adaptation process (see Table 6). Existing measures can be reviewed to ensure they match the domains of interest and to determine whether further modification may be required [16]. Recent research that explored the level of involvement of patients in the development of PROMs has concluded that what patients consider important can differ from what health professionals regard as important [30,31]. Content validity is often cited as a PROM's most important measurement property because, unless the PROM can be shown to be measuring the construct of interest from the patient perspective, all other measurement properties may be considered inconsequential [13]. This highlights the importance of engaging with patients as part of the PROM adaptation process.
A variety of qualitative methods can be used by researchers to engage with patients with the aim of maximising a candidate PROM's content validity (relevance and comprehensiveness) and to pre-test an adapted PROM for the comprehensibility and acceptability of its instructions to respondents, its items and its response format(s). In a recent study examining the developers' perspective on including patients, the methods used were interviews and/or focus groups, cognitive interviews and feedback questionnaires [30,31]. Maitland and Presser advocate a diverse range of methods, both qualitative and quantitative, for appraising the quality of PROM items and the ability of the items to generate reliable and valid responses [39]. Interviews and focus groups are often used to gain insight into the experiences of the target population in relation to the construct of interest and, therefore, can be used to generate content for new or additional questions. Cognitive interviews, on the other hand, are normally used to refine item candidates and their response scales. Cognitive interviews capture problems with the cognitive processes associated with item response [40], thereby enabling the developers to evaluate the relevance, comprehensiveness, comprehensibility and acceptability of the instrument's items and response scales. Feedback questionnaires can also provide insight into patients' experience of using a health status questionnaire. The QQ10 is one such validated, self-completed questionnaire. It is made up of 10 items scored using a 5-point Likert scale (0 = strongly disagree to 4 = strongly agree) covering two factors, "value" and "burden". It contains specific items developed to assess a PROM's content validity (i.e., relevance, comprehensiveness) from the patient perspective [41,42].

GETTING EXPERTS INVOLVED

The assessment of a candidate PROM from the expert clinical, researcher and academic perspective is also important. This can be achieved via focus groups and interviews, by questionnaire survey methods or by employing expert review panels [34]. Ideally, these panels should include clinicians with experience of treating the defined population, PROMs methodologists and researchers. The COSMIN standards recommend a minimum sample size of seven professionals for studies evaluating a PROM's content validity [14]. Experts can also be utilised to calculate content validity indices (CVIs) based on ratings of item relevance. A minimum of three experts is recommended for the purposes of calculating a CVI [43]. A CVI is a consensus indicator of the content validity of an item or scale [44]. It represents the proportion of reviewers who agree that an item is content valid, adjusted for chance agreement. If all reviewers are in agreement, the CVI value for an item (I-CVI) will be 1.00.

Employing different psychometric methods in PROM development and adaptation

Traditionally, the development and psychometric evaluation of PROMs has been based on classical test theory (CTT). CTT is probably still the most commonly applied method in validation studies [20,45]. CTT assumes that the expected value of all the random error will equal zero [46]. There are, however, some disadvantages with CTT, such as sample dependency. This is where the item and scale statistics can, in theory, only apply to the specific group of patients who took the test, and as such further validation is required for a different population [29].
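Returning briefly to the content validity index described above, a small worked illustration may help. The numbers here are hypothetical, and the calculation assumes the commonly used convention in which experts rate each item's relevance on a 4-point scale and ratings of 3 or 4 count as 'relevant':

```latex
\text{I-CVI} = \frac{\text{number of experts rating the item 3 or 4}}{\text{total number of experts}},
\qquad \text{e.g. } \frac{6}{7} \approx 0.86,
\qquad
\text{S-CVI/Ave} = \frac{1}{k}\sum_{i=1}^{k} \text{I-CVI}_i .
```

So an item endorsed by six of seven experts has an I-CVI of about 0.86, and only unanimous endorsement yields the value of 1.00 mentioned above; the scale-level index (S-CVI/Ave) is simply the mean of the item-level values across the k items. A modified kappa statistic is sometimes reported alongside the I-CVI to adjust for chance agreement, as noted above.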
A further disadvantage of CTT is the assumption of item equivalence, whereby all items are assumed to contribute equally to the final test score and no item weightings are applied [46,47]. As a result of the disadvantages of CTT, the modern psychometric methods of item response theory (IRT) and Rasch measurement theory (RMT) have been developed [48]. Rather than considering the questionnaire as a whole, as in the case of CTT, these methods allow analysis at the individual item level [49]. They also provide sample-free measurements (i.e., the results are applicable to all similar groups once the validation process has occurred). In IRT, additional model parameters are used to model the relationship between the individual's trait, the item property and the probability of endorsing an item. The assumption in IRT is that the "probability of answering any item in the positive direction is unrelated to the probability of answering any other item positively for people with the same amount of the trait" [28,29]. RMT differs somewhat in that the data are assessed to see whether they fit the Rasch model. RMT allows for the creation of linear, interval-level measurement from categorical data. In the case of non-fitting data (items or persons), the data can be examined further to understand why they do not fit, or they can be removed from the data set. Rasch analysis can be used to examine the properties of previously constructed scales as well as in the construction of new scales, and is important in creating interval scales [50]. Although there has been a general shift towards using IRT for developing and validating PROMs in recent years, there are some drawbacks to its use compared with CTT. One issue relates to the sample size required. It is recommended that sample sizes based on CTT should be large enough for the descriptive and exploratory pursuit of meaningful estimates from the data; starting with a sample of 30 to 50 subjects may be reasonable in some circumstances [51]. At later stages of psychometric testing, various recommendations have been given for exploratory factor analyses, such as at least five cases per item and a minimum of 300 cases, or a sample size of at least 10 times the number of items being analysed, so that a 20-item questionnaire would require at least 200 subjects [51]. For IRT, sample sizes of a minimum of 150-250 patients have been proposed, with around 500 patients recommended for the latter stages of validation [51,52]. In addition to the inflated sample size recommendations for IRT, additional expertise in the study team is often required, and this may consequently result in greater development costs. Furthermore, strict assumptions in the model can mean that items may be rejected even when they have good content validity if they do not fit the IRT model. CTT should therefore not be disregarded and, indeed, most authorities will agree that aspects of both CTT and IRT have a role to play in the validation process of a modern PROM [53].

The need to assess the impact of health treatment on patients and to demonstrate the value of the care provided to the patient by the provider is now recognised [9]. There is constant pressure on healthcare providers to improve the quality of healthcare provided and make it more patient-centered [54]. Given how much money is spent on treatment, it is important to assess whether the treatment given offers value for money.
Clinical applications of PROMs can be divided into: Clinical research and trials: health regulatory bodies such as the FDA and the National Institute for Health and Care Excellence (NICE) require PROMs to be incorporated into the assessment of new treatments, health technologies or medical devices; Quality improvement projects: PROMs can be very helpful in assessing the impact of a new service or project from the patient perspective. However, PROMs must be integrated into clinical practice with strong incentives to encourage their routine use in such quality improvement projects; Clinical practice: measuring PROMs in clinical practice contributes to patient-centeredness and measures clinical effectiveness from a patient perspective. There has been widespread adoption of PROMs within the research field, especially since the FDA and EMA recommended that PROMs should form part of the outcome assessment for new drug trials [2,5]. Reporting guidance for PROMs has also now been incorporated as an extension to CONSORT reporting for trials [7]. The value of collecting PROMs data routinely is now recognised as an important part of driving the delivery and organisation of healthcare and can thereby help to improve healthcare quality [9]. Although individual hospitals and clinicians have started to implement routine PROM collection, widespread adoption is largely restricted to England, Sweden and parts of the United States [9,55]. Routine PROM collection has now been implemented in England following some elective surgical procedures (https://digital.nhs.uk/data-and-information/data-tools-and-services/data-services/patientreported-outcome-measures-proms). Their potential for use in other clinical areas, such as oncology [56,57] and multiple sclerosis, is now also recognised. The routine collection of PROMs is not, however, without its challenges [1,9,55-60]. Some of the practical challenges to routine integration include: the selection of the most appropriate tool; difficulties with patient completion (for example, lack of comprehension, elderly and frail or sick populations); clinical reluctance; achieving high rates of patient participation; operational difficulties; lack of clarity about the PROM; time pressure for patients and clinicians; lack of human resources; recognition of the three dimensions of quality (safety, effectiveness and experience); attributing outcomes to the quality of care; providing meaningful outputs from PROMs data for differing audiences; and avoiding misuse of PROMs [1,9,55,59-65].
McDowell and Jenkinson [66] have developed a series of key strategic priorities that should be considered when implementing PROMs in real-world situations: Ensure international collaboration across multiple stakeholders to agree on a standardised approach to PROM assessment; Develop a comprehensive standard set of recommendations, methods and tools that are applicable to the generation of real-world evidence; Formulate a clear governance process, including an ethical framework for how patients should be consented, who selects patients, who has access to data and how data will be used; Establish standard sets of PROMs, electronic tools and administration schedules; Develop and use electronic PROMs where possible; Minimise workload and technical complexity for patients and clinicians; Consider the objectives of the PROM assessment, timings, length of follow-up, strategies for managing missing data and inclusion of diverse patient populations; Ensure data collection adheres to the FAIR (findable, accessible, interoperable and reusable) guidance; Provide guidance on interpretation and use of the data; Ensure both patients and clinicians gain value from PROM collection, tailored to their needs.

FUTURE OF PROMS

With new treatments and technologies, mortality is falling and more patients are living with their illness for longer. As such, there is a growing need to develop and implement PROMs to facilitate the translation of clinical research into practice, in keeping with the principles of shared clinical decision-making as part of routine clinical practice. The increased use of digital media presents an exciting opportunity for PROM capture and adaptation. By utilising new technologies to aid PROM capture and support interpretation, more clinicians may be encouraged to use PROMs as part of their routine clinical care. For example, innovative delivery methods using app- or web-based technology [for example, through data platforms such as REDCap - Research Electronic Data Capture (https://www.project-redcap.org/)] are helping to streamline data capture from patients by facilitating PROM completion on tablets, mobile phones and the internet. Employing digital media also allows novel methods such as ecological momentary assessment (EMA) [64-68] to be used. EMA refers to a collection of methods often used in behavioural medicine research in which a patient repeatedly reports on, for example, their symptoms or quality of life close in time to when they experience them and in their own environment [64]. EMA data can be collected in various ways, including written diaries, electronic diaries and telephone. EMA using mobile phones, for example, could facilitate the collection of PROM data in real time and overcome some of the inherent problems of PROMs, such as patient recall accuracy. The burden associated with the routine collection of PROMs data in practice can be reduced by simplifying data collection using techniques such as computerised adaptive testing (CAT). CAT involves using a computer to administer a PROM, one question at a time. The CAT then uses an algorithm to choose the subsequent question based on the previous answer given. For example, if a PROM is assessing hand function and, in response to the first question, a patient answers that their hand function is 'normal', then there is little to be gained from asking increasingly granular questions about hand problems, which may be more appropriate for someone who has 'non-normal' hand function.
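To make the branching idea concrete, the sketch below shows a toy, rule-based version of the hand-function example. The item texts, response options and skip rules are hypothetical, and real CAT engines select items using IRT-based criteria (for example, maximum information) rather than hard-coded rules, so this is an illustration of the logic only:

```python
from typing import Callable, Dict, Optional

# Hypothetical item bank for the hand-function example; not taken from any real PROM.
ITEMS = {
    "overall": "Overall, how is your hand function? (normal / limited)",
    "buttons": "How much difficulty do you have fastening buttons? (none / some / unable)",
    "writing": "How much difficulty do you have writing with a pen? (none / some / unable)",
}

def next_item(answers: Dict[str, str]) -> Optional[str]:
    """Pick the next question from the answers collected so far (toy skip logic)."""
    if "overall" not in answers:
        return "overall"                 # always start with the broad screening item
    if answers["overall"] == "normal":
        return None                      # nothing gained from more granular items
    for item in ("buttons", "writing"):  # drill down only when function is limited
        if item not in answers:
            return item
    return None                          # all relevant items have been asked

def administer(respond: Callable[[str], str]) -> Dict[str, str]:
    """Run the adaptive questionnaire; `respond` supplies the answer to each item."""
    answers: Dict[str, str] = {}
    while (item := next_item(answers)) is not None:
        answers[item] = respond(item)
    return answers

if __name__ == "__main__":
    # A respondent reporting 'normal' function answers a single question...
    print(administer(lambda item: "normal"))
    # ...while a respondent reporting limitations is routed through the granular items.
    print(administer(lambda item: "limited" if item == "overall" else "some"))
```

In a real implementation the stopping rule would typically depend on the precision of the estimated score rather than on exhausting a fixed item list.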
By pre-selecting questions, the PROM score can be determined without having to ask all of the questions [61,62,69,70]. In the future, CAT and electronic PROMs could also be administered by virtual assistants (such as Siri, Alexa, or similar) using voice recognition software, avoiding the need for manual form filling and further reducing the manpower required for data collection [59-62].

CONCLUSION

The process of developing a new PROM is often complex and resource-intensive. If possible, researchers should first consider whether any existing PROMs could be suitable candidates for use, or whether they could be adapted. This review provides a general introduction to PROMs and some background regarding the recent drive to collect PROM data. It then reports on findings from a scoping review that identified good practice and issues that should be considered prior to adapting existing PROMs. These issues are organised under the specific headings of: defining the requirements of a PROM, identifying and appraising existing tools, adapting existing PROMs, issues of content validity and getting the right people involved. The review ends with some insights into different psychometric methods, the clinical use of PROMs and future PROM developments.
Oxytocin-induced increase in N,N-dimethylglycine and time course of changes in oxytocin efficacy for autism social core symptoms Background Oxytocin is expected as a novel therapeutic agent for autism spectrum disorder (ASD) core symptoms. However, previous results on the efficacy of repeated administrations of oxytocin are controversial. Recently, we reported time-course changes in the efficacy of the neuropeptide underlying the controversial effects of repeated administration; however, the underlying mechanisms remained unknown. Methods The current study explored metabolites representing the molecular mechanisms of oxytocin’s efficacy using high-throughput metabolomics analysis on plasma collected before and after 6-week repeated intranasal administration of oxytocin (48 IU/day) or placebo in adult males with ASD (N = 106) who participated in a multi-center, parallel-group, double-blind, placebo-controlled, randomized controlled trial. Results Among the 35 metabolites measured, a significant increase in N,N-dimethylglycine was detected in the subjects administered oxytocin compared with those given placebo at a medium effect size (false discovery rate (FDR) corrected P = 0.043, d = 0.74, N = 83). Furthermore, subgroup analyses of the participants displaying a prominent time-course change in oxytocin efficacy revealed a significant effect of oxytocin on N,N-dimethylglycine levels with a large effect size (PFDR = 0.004, d = 1.13, N = 60). The increase in N,N-dimethylglycine was significantly correlated with oxytocin-induced clinical changes, assessed as changes in quantifiable characteristics of autistic facial expression, including both of improvements between baseline and 2 weeks (PFDR = 0.006, r = − 0.485, N = 43) and deteriorations between 2 and 4 weeks (PFDR = 0.032, r = 0.415, N = 37). Limitations The metabolites changes caused by oxytocin administration were quantified using peripheral blood and therefore may not directly reflect central nervous system changes. Conclusion Our findings demonstrate an association of N,N-dimethylglycine upregulation with the time-course change in the efficacy of oxytocin on autistic social deficits. Furthermore, the current findings support the involvement of the N-methyl-D-aspartate receptor and neural plasticity to the time-course change in oxytocin’s efficacy. Trial registration: A multi-center, parallel-group, placebo-controlled, double-blind, confirmatory trial of intranasal oxytocin in participants with autism spectrum disorders (the date registered: 30 October 2014; UMIN Clinical Trials Registry: https://upload.umin.ac.jp/cgi-open-bin/ctr_e/ctr_view.cgi?recptno=R000017703) (UMIN000015264). Background Intranasal administration of oxytocin is a potential novel treatment for autism spectrum disorder (ASD) core symptoms, which currently have no established therapy [1,2]. Although the beneficial effects of singledose oxytocin on measures of ASD core symptoms have been consistently reported across studies [3][4][5][6][7][8], previous studies on the repeated administration of oxytocin have reported inconsistent findings, impeding further development of oxytocin as an approved medication [9]. Recently, we found a progressive deterioration in the efficacy of oxytocin [10,11] and proposed that this phenomenon may account for the reported inconsistencies in the effect of repeated administration. 
Elucidating the mechanisms underlying the time-course change in the efficacy of repeated oxytocin administration may help advance the development of oxytocin-based therapy for ASD core symptoms. Uncovering the interaction of oxytocin with other molecular systems is key to optimizing oxytocin-based therapies, including the identification of co-therapeutic agents [12]. We previously reported differential neurochemical effects of repeated oxytocin administration compared with acute treatment. Repeated administration specifically impacted the glutamatergic system, including the N-methyl-D-aspartate (NMDA) receptor [10,13]: repeated oxytocin administration, unlike acute oxytocin, significantly decreased glutamatergic metabolite levels in the medial prefrontal cortex of participants with ASD. The decreases were inversely and specifically associated with oxytocin-induced improvements of medial prefrontal functional MRI activity during a social judgment task, and not with changes during placebo administration. Furthermore, in wild-type mice, we found that repeated administration of oxytocin, unlike acute oxytocin, reduced medial frontal transcript expression of NMDA receptor type 2B. Previous animal studies also support the existence of interactions between oxytocin and glutamatergic neurotransmission [14,15]. In addition, the time-course change in the efficacy of repeated oxytocin was detected with our unique dataset employing 2-week longitudinal assessments of objectively quantified measures of ASD social deficits [11]. Our facial expression analysis was based on videos recorded during only a few minutes of activity in the ADOS to quantify ASD-related social deficits [11,16], whereas the entire ADOS requires 40-60 min to administer [17]. Although the ADOS and gaze observation [18] are not optimized for longitudinal and repeated assessments in individuals with ASD, facial expression analysis is easily repeatable in longitudinal assessments [11,16]. Previous studies have suggested that the efficacy of oxytocin deteriorates over time [19,20], possibly reflecting an underlying molecular mechanism such as downregulation of oxytocin receptors [21,22] or of the glutamatergic system [10]. However, to the best of our knowledge, the relationship between the time-course change in efficacy and oxytocin-induced changes in molecular pathways has not yet been examined. In addition, potential links between oxytocin and molecular systems other than the glutamatergic system have not been examined. In the present study, we explored the interaction between oxytocin and molecular systems by analyzing oxytocin-induced changes using high-throughput metabolomics, which can quantify various metabolites related to the glutamatergic system as well as other molecular systems, such as the cholinergic and serotonergic systems. As the metabolomic panel, we selected a capillary electrophoresis system with an Agilent 6210 time-of-flight mass spectrometer (CE-TOFMS, Agilent Technologies, Santa Clara, CA, USA), in which the detection limits for most amino acids and anionic species were improved several-fold on average, and as much as 65-fold, over previously reported values for the CE-quadrupole mass spectrometer [23].
Metabolite concentrations were quantified using plasma samples collected from participants before and after repeated administration of oxytocin or placebo in our previous multi-center, parallel-group, placebo-controlled, double-blind, confirmatory trial of intranasal oxytocin in adult males with high-functioning ASD [11,24]. To the best of our knowledge, no previous study conducted metabolomic analyses before and after oxytocin administrations. Based on previous studies [10,13], we hypothesized associations of the effects of oxytocin with changes in amino acids associated with glutamatergic transmission and also explored these relationships in other metabolites. Furthermore, by utilizing repeatable and quantifiable behavioral outcome measures, we explored the molecular mechanisms underlying the time-course changes in oxytocin efficacy on ASD. Experimental design and participants In the current study, we analyzed plasma samples collected from participants in our previous multi-center, parallel-group, placebo-controlled, double-blind, confirmatory trial of intranasal oxytocin in adult males with high-functioning ASD. The trial sites were the University of Tokyo Hospital, Nagoya University Hospital, Kanazawa University Hospital, and University of Fukui Hospital in Japan (UMIN000015264) [24]. The details of this trial are described elsewhere [11,24]. Briefly, the inclusion criteria of this trial were as follows: (1) 18-54 years of age; (2) male; (3) diagnosis of autistic disorder, Asperger's disorder, or pervasive developmental disorders not otherwise specified (PDD-NOS) based on DSM-IV-TR; (4) score exceeding the cut-off value (i.e., [10]) for qualitative abnormalities in social reciprocity on Autism Diagnostic Interview-Revised (ADIR) [25]; and (5) full IQ above 80 and verbal IQ above 85 based on WAIS-Third Edition (WAIS-III) [26]. The exclusion criteria were as follows: (1) primary psychiatric diagnosis other than ASD; (2) instable comorbid mental disorders (e.g., instable mood or anxiety disorder); (3) changes in medication or doses of psychotropics within 1 month before randomization; (4) current medication with more than two psychotropics; (5) current pharmacological treatment for comorbid attention-deficit/hyperactivity disorder; (6) history of repeated administrations of oxytocin; (7) history of hyper-sensitivity to oxytocin; (8) history of traumatic brain injury with loss of consciousness for longer than 5 min or seizures; or (9) history of alcohol-related disorders, substance abuse, or addiction. Open to the public recruitment and the processes testing eligibility are explained in detail elsewhere [24]. A total of 106 men with high-functioning ASD were recruited between January 2015 and March 2016. Among these participants, 94 were psychotropic-free other than oxytocin during the all trial period, while 12 continued their medications with psychotropic during the trial period (four antidepressants, four antipsychotics, two mood stabilizers, and two hypnotics). The diagnosis for subtypes of participants with ASD was autistic disorder (N = 83), Asperger's disorder (N = 12), and PDD-NOS (N = 11). Intervention The participants received administrations of oxytocin (48 IU/day) or placebo in the morning and afternoon during 6 weeks [24]. The placebo contained all of the inactive ingredients in order to control for any effect of substances other than oxytocin. 
On the last day of the 6-week administration period, data, including peripheral blood and clinical evaluations including Autism Diagnostic Observation Schedule (ADOS) [17], were collected from the participants. These endpoint clinical assessments were started 15 min after the last administration of intranasal oxytocin or placebo. All participants were sufficiently trained with identical instructions for intranasal administration, and the procedure of intranasal administration was evaluated at each 2-week assessment point. A self-report daily record was utilized to record treatment adherence. Randomization and masking Drug administration was randomly assigned the participants to the oxytocin or placebo group in a one-to-one ratio by the manager of randomization and masking based on a computer-generated randomized order. The randomization was stratified based on the trial site and median score of ADIR (< 18 or ≥ 18, defined based on the results from our preliminary trial [27]). Spray bottles with the same visual appearance were utilized to store both active drug and placebo (Victoria Pharmacy, Zurich, Switzerland). The manager covered the labels of spray bottle to keep oxytocin or placebo blind to all the clinicians, assessors, their families, and participants. Registration, allocation, and data management procedures were defined separately [24]. The main outcome of the current study The main outcome of the current study was metabolite concentrations in plasma samples collected at baseline, immediately before the first administration of oxytocin or placebo, and at endpoint, 60 min after the last administration of oxytocin or placebo at 6 weeks from baseline. Peripheral blood samples were collected from the participants, while they were fasting (> 3 h without any meals or nutritious drinks) during the daytime. The blood sampling procedure was conducted by experienced physicians. Plasma was isolated with centrifugation at 1,600 g for 15 min at 4 °C and stored within 30 min after blood collection at − 80 °C until assay (see details in Additional file 1: "Standard operation paper for blood collection and processing in JOIN-Trial_in_English"). The plasma samples were collected from January 2015 to April 2016 and assayed with CE-TOFMS in January 2018. To the plasma samples (100 μL), 450 μL of methanol containing 10 mM each of methionine sulfone and 10-camphorsulfonic acid were added and mixed well. Then, 500 μL chloroform and 200 μL of Milli-Q deionized water (EMD Millipore, Billerica, MA, USA) were added. The solution was centrifuged at 2,300g for 5 min at 4 °C. Then, to remove proteins, a 400-μL aliquot of the supernatant was centrifugally filtered with a 5-kDa cut-off filter (Human Metabolome Technologies Inc., Tsuruoka, Japan). The filtrate was centrifugally concentrated in a vacuum evaporator and dissolved in 50 μL of Milli-Q water containing reference compounds before mass spectrometry analyses. Plasma samples were measured using a capillary electrophoresis system with an Agilent 6210 time-of-flight mass spectrometer (CE-TOFMS, Agilent Technologies, Santa Clara, CA, USA) [28]. Customized proprietary software (MathDAMP) was utilized to process raw data files acquired from CE-TOFMS [29]. To identify target metabolites, their mass-to-charge ratio (m/z) values and migration times were matched with the annotation table of the metabolomics library (The Basic Scan metabolomics service of Human Metabolome Technologies Inc.) [30]. 
The relative area was defined by dividing all peak areas with the area of the internal standard. The definition of relative areas allowed avoidance of mass-spectrometry detector sensitivity bias and injection-volume bias across multiple measurements and normalization of the signal intensities. Based on the peak area of internal controls of each metabolite, the absolute quantities of 110 pre-determined major metabolites can be measured with analysis by CE-TOFMS in our system. We used the absolute quantities obtained with CE-TOFMS as metabolite concentrations in plasma samples. Other outcome measures of oxytocin efficacy To examine their relationship to metabolite concentrations, we also included six additional outcomes found to be significant effects of oxytocin in this trial [11,24] as well as in previous trials [11,27]. The six clinical and behavioral indices of oxytocin efficacy were as follows: (i) ADOS repetitive behavior = changes in the ADOS repetitive score between baseline and 6-week endpoint of oxytocin administration (endpoint − baseline). ADOS is a standard diagnosis tool for ASD but recently has been increasingly adopted as a primary outcome in ASD-related trials [24,27,[31][32][33][34][35]. (ii) Gaze fixation time on socially relevant regions = changes in the percentage of gaze fixation time on the eye region of a talking face presented on a video monitor, between baseline and 6-week endpoint (endpoint − baseline), which were measured with Gazefinder, a validated all-in-one eyetracking system, for a few minutes subsequent to the ADOS sessions using the standardized and validated method described details in elsewhere [18,24,36,37] (JVC KENWOOD Corporation, Yokohama, Japan). (iii, iv, v, and vi) log-PDF mode of quantified facial expression production of a neutral face during 0-6, 0-2, 2-4, and 4-6 weeks = changes in the natural logarithm of the mode of the probability density function of neutral facial expression intensity during a semistructured situation conducted during a few minutes of social interaction in the "Cartoons" activity in ADOS module 4. The data were quantified using a dedicated software program [38][39][40] (FaceReader version 6·1, Noldus Information Technology Inc., Wageningen, The Netherlands) using a validated method previously described in detail elsewhere [11,16]. In addition to baseline and the 6-week endpoint, facial expression was assessed every 2 weeks as changes in log-PDF mode of neutral facial expression between each assessment point (i.e., (iii) 6 weeks-baseline, (iv) 2 weeks-baseline, (v) 4 weeks-2 weeks, and (vi) 6 weeks-4 weeks). The log-PDF mode for neutral facial expression is considered to reflect variation in facial expression [16] and can be characterized as a repeatable, objective, and quantitative measure of ASD-related social deficit. Classification of participants according to time-course change in the efficacy of oxytocin To investigate the molecular mechanisms underlying the time-course change in the efficacy of repeated oxytocin administration, we defined a subgroup of the oxytocinadministered group comprising participants exhibiting a prominent time-course change. The rationale of this classification was based on our previous findings on the time course of oxytocin-induced quantitative changes in facial expression in ASD, which exhibited maximum efficacy at 2 weeks and deterioration of efficacy from 2 to 6 weeks [11]. 
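As a rough illustration of how such a summary statistic can be computed, the sketch below estimates the density of frame-wise neutral-expression intensities with a Gaussian kernel and takes the natural log of the peak density value. Reading the "mode" as the peak density value is an interpretation consistent with the statement above that the measure reflects variation in facial expression; the synthetic data and the KDE-based estimator are placeholders and not the actual FaceReader-based trial pipeline.

```python
# Sketch of a "log-PDF mode" outcome for neutral facial expression intensity.
# Synthetic data; illustrative only, not the trial analysis pipeline.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(42)

def log_pdf_mode(intensities: np.ndarray) -> float:
    """Fit a Gaussian KDE to frame-wise neutral-expression intensities and
    return ln of the peak (modal) density value; a higher value indicates a
    more concentrated distribution, i.e. less variation in expression."""
    kde = gaussian_kde(intensities)
    grid = np.linspace(0.0, 1.0, 1001)  # intensity is bounded in [0, 1]
    return float(np.log(kde(grid).max()))

# Hypothetical frame-wise intensities (e.g. ~30 fps over a few minutes)
baseline = np.clip(rng.normal(0.60, 0.10, 5000), 0.01, 0.99)
week2 = np.clip(rng.normal(0.50, 0.15, 5000), 0.01, 0.99)

change = log_pdf_mode(week2) - log_pdf_mode(baseline)
# A negative change (lower peak density, more varied expression) would be
# read as improvement under the convention used in this trial.
print(f"Change in log-PDF mode (2 weeks - baseline): {change:+.3f}")
```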
Using this classification, we expected to detect metabolites related to the characteristics of participants with prominent time-course changes in the clinical effects of oxytocin. Individuals showing a reduction of the log-PDF mode of neutral facial expression (i.e., improvement in an ASD core symptom) from baseline to 2 weeks and an increase of the log-PDF mode of neutral facial expression (i.e., deterioration in an ASD core symptom) from 2 to 6 weeks were classified as participants exhibiting a time-course change (Fig. 2c). Statistical analysis Demographic and clinical information was compared using independent t-tests between the placebo- and oxytocin-administered groups and between the placebo-administered group and the oxytocin-administered group exhibiting the time-course change. We analyzed the effects of oxytocin on metabolite concentrations using independent t-tests comparing changes in metabolite concentrations from baseline to endpoint over the 6-week administration period between the oxytocin-administered and placebo-administered groups. Furthermore, because the change in metabolite levels over the 6-week oxytocin administration period could be associated with both clinical improvement and potential attenuation of oxytocin effectiveness, differences in changes in metabolite levels were also examined between the oxytocin-administered group displaying the time-course change in efficacy and the placebo-administered group. The independent t-tests were conducted for each metabolite with absolute quantities successfully measured by CE-TOFMS in at least 80% of all subjects (≥ 67 subjects) [41]. The Benjamini-Hochberg false discovery rate (FDR) correction for the number of metabolites tested was applied, and FDR-corrected p values of < 0.05 were considered statistically significant. For the oxytocin-administered group, we calculated Pearson's correlation coefficients for 6-week changes in outcomes versus changes in metabolite concentrations (for metabolites identified as showing significant differences between the oxytocin- and placebo-administered participants). The outcomes used in the correlation analysis were the 6-week change in ADOS repetitive behavior, the 6-week change in gaze fixation time on socially relevant regions, and the change in log-PDF mode of neutral facial expression from baseline to 6 weeks. Furthermore, to clarify the relationships between the detected metabolite change and the time-course change in efficacy, changes in the log-PDF mode of neutral facial expression between each assessment point (i.e., 2 weeks − baseline, 4 weeks − 2 weeks, and 6 weeks − 4 weeks) were calculated and correlated with changes in metabolites using Pearson's correlation coefficient. The Benjamini-Hochberg FDR correction for the number of outcomes tested was applied to adjust the results, and the statistical significance level was defined as FDR-corrected p values of < 0.05. STATA version 14.0 and GraphPad Prism 8.4.1 were employed to conduct all statistical analyses. To assess whether the association between the efficacy of oxytocin and changes in the variability of quantified neutral facial expression between 0 and 2 weeks was mediated by changes in the level of DMG (a metabolite that exhibited a significant increase related to oxytocin administration), linear regression models were fitted according to the Baron and Kenny procedures for mediation analysis [42]. Demographic information of participants The detailed flow of participants is shown in Fig. 1.
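As an illustration of the per-metabolite testing and Benjamini-Hochberg correction described above, the following sketch runs an independent t-test for each metabolite on synthetic data and applies the FDR adjustment; it is illustrative only and not the trial's STATA/GraphPad analysis.

```python
# Minimal sketch, with synthetic data, of per-metabolite independent t-tests
# followed by Benjamini-Hochberg FDR correction.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_metabolites, n_oxt, n_plc = 35, 43, 40  # sample sizes chosen to mirror the trial

# Synthetic baseline-to-endpoint concentration changes (rows = metabolites)
delta_oxt = rng.normal(0.0, 1.0, size=(n_metabolites, n_oxt))
delta_plc = rng.normal(0.0, 1.0, size=(n_metabolites, n_plc))
delta_oxt[0] += 0.8  # pretend one metabolite (e.g. DMG) increases with oxytocin

# Independent t-test for each metabolite
pvals = np.array([
    stats.ttest_ind(delta_oxt[i], delta_plc[i]).pvalue
    for i in range(n_metabolites)
])

# Benjamini-Hochberg FDR correction (adjusted p-values)
order = np.argsort(pvals)
ranked = pvals[order] * n_metabolites / np.arange(1, n_metabolites + 1)
adj = np.minimum.accumulate(ranked[::-1])[::-1]  # enforce monotonicity
qvals = np.empty_like(pvals)
qvals[order] = np.minimum(adj, 1.0)

for i in np.where(qvals < 0.05)[0]:
    print(f"metabolite {i}: p = {pvals[i]:.4g}, FDR-corrected p = {qvals[i]:.4g}")
```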
Two participants in the oxytocin group and one in the placebo group did not complete the trial because of withdrawal of consent or discontinuation of administration. Among the remaining 103 participants, after exclusion of subjects lacking ADOS [17] video recordings at any assessment point, 44 subjects in the oxytocin group and 40 subjects in the placebo group remained. One subject in the oxytocin group, not classified as exhibiting attenuation of oxytocin efficacy, was unable to provide a blood sample. In the end, a total of 83 individuals with ASD were analyzed to investigate relationships between the paradoxical attenuation of oxytocin efficacy and metabolite concentration changes (Fig. 1). Twenty of the 44 subjects in the oxytocin-administered group were classified into the time-course change group (Fig. 2). This classification of individuals with time-course attenuation was based on our previous findings on the time course of oxytocin-induced quantitative changes in facial expression in ASD, which showed maximum efficacy at 2 weeks and deterioration of efficacy from 2 to 6 weeks [11] (Fig. 2c). No significant differences between the oxytocin- and placebo-administered participants, or between the time-course change and placebo groups, were detected in background information, except for age between the time-course change group and the placebo group (p = 0.02) (Table 1). CE-TOFMS measurement of metabolite concentrations Using CE-TOFMS [28] analysis, which can measure absolute quantities of metabolite concentrations, 50 of the 110 pre-selected metabolites were detected in the plasma samples. Of these 50 metabolites, three were not detected in the plasma samples collected at baseline or endpoint. Furthermore, 12 were excluded based on the rate of successful measurements (i.e., less than 80%, the threshold employed in a previous study utilizing the same metabolomic panel [41]), while 35 metabolites were measured in all (i.e., 100%) of the 166 plasma samples (Additional file 2: Figure 1). It has been reported that a large amount of missing data (greater than 10%) can bias the results of subsequent statistical analyses in medical research [43]. Thus, we used the concentrations of these 35 metabolites for further analyses. A significant increase in N,N-dimethylglycine (DMG) was detected in the subjects administered oxytocin compared with those given placebo, at a medium effect size (P FDR = 0.043, d = 0.74, N = 83) (Fig. 2d). Although the citric acid level was decreased during the 6-week administration of oxytocin compared with placebo (P = 0.029, d = 0.49, N = 83), the statistical significance did not survive correction (P FDR = 0.51). No significant effects of oxytocin on changes in concentration of the remaining 33 metabolites were found (P FDR > 0.57, Additional file 3: Table 1). Additional analyses confined to psychotropic-free subjects (N = 72) and subjects diagnosed with autistic disorder (N = 62) confirmed that the statistical conclusions were not changed by excluding subjects with any psychotropic medication (N = 11) or subjects diagnosed with Asperger's disorder or pervasive developmental disorders not otherwise specified (PDD-NOS) (N = 21). Metabolite concentration changes in participants with ASD Next, to clarify whether the concentration change was related to clinical improvement or attenuation of efficacy, we examined the effects of oxytocin on metabolite levels in the subgroup of ASD individuals with time-course attenuation in efficacy.
This subgroup analysis revealed a significant effect of oxytocin on DMG levels (P FDR = 0.004, d = 1.13, N = 60) (Fig. 2d), but not on the levels of the remaining 34 metabolite levels (P FDR > 0.80, Additional file 4: Table 2). Notably, the effect size of oxytocin on DMG levels was larger in the time-course change subgroup than in the oxytocin-administered group as a whole. Although the age of the time-course change group was significantly older than that of the placebo-administered group, the analyses, controlling age as covariate, did not impact the statistical conclusion (Additional file 5: Table 3). Additional analyses confined to psychotropic-free subjects (N = 54) and subjects diagnosed with autistic disorder (N = 45) also confirmed that the statistical conclusions were preserved. We further conducted correlational analyses to clarify the relationship between the increased DMG levels and the clinical and behavioral effects of oxytocin. The analyses showed that the increase in DMG was significantly correlated with improvement indexed as change from baseline to 2 weeks in log-PDF mode of neutral facial expression (P FDR = 0.006, r = − 0.485, N = 43) (Fig. 3a, Additional file 6: Table 4). Furthermore, the increase in DMG was also significantly related to change from 2 to 4 weeks in log-PDF mode of neutral facial expression in the opposite direction (P FDR = 0.032, r = 0.415, N = 37) (Fig. 3b). In contrast, no significant correlation between the increase in DMG and clinical or behavioral improvements, indexed as changes from baseline to 6 weeks in ADOS repetitive behavior, gaze fixation time on socially relevant regions, and log-PDF mode of neutral facial expression (P FDR > 0.65, Additional file 6: Table 4). In addition, no significant correlation was found between any clinical or behavioral change and change in DMG level in the placebo-administered group (P FDR > 0.23). Additional correlational analyses confined to psychotropic-free subjects and subjects diagnosed with autistic disorder also confirmed that the statistical conclusions were preserved. The correlation between changes in oxytocin level and DMG level was additionally tested, revealing no significant correlation (p = 0.32). The mediation analysis revealed that neither direct (p = 0.096) nor indirect effects (p = 0.235) were statistically significant, although the total effect of oxytocin on facial expression was significant (p = 0.026). Furthermore, by testing the mediating effect of DMG on the 6-week clinical effects of oxytocin, we confirmed that there was no significant indirect (i.e., mediating) effect of DMG (ADOS repetitive behavior: p = 0.609; gaze fixation time on socially relevant regions: p = 0.741) and that there were significant direct and total effects of oxytocin on these clinical measures (ADOS repetitive behavior: direct effect p = 0.017, total effect p = 0.015; gaze fixation time on socially relevant regions: direct effect p = 0.004, total effect p = 0.014). Discussion The current parallel-group comparison of metabolites changes between the oxytocin-and placebo-administered groups revealed a significant increase in plasma DMG levels during the 6-week intranasal oxytocin treatment period. This change was prominent in the participants exhibiting a time-course change in oxytocin efficacy. 
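For reference, the Baron-Kenny procedure underlying the mediation analyses reported above can be sketched as three regressions. The implementation below uses synthetic data, hypothetical effect sizes and ordinary least squares; it is illustrative only and not the trial's analysis code.

```python
# Sketch of Baron-Kenny mediation: does DMG change mediate the effect of
# treatment (oxytocin vs placebo) on a facial-expression outcome?
# Synthetic data and plain OLS; illustrative only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 80
treatment = rng.integers(0, 2, n)                    # 0 = placebo, 1 = oxytocin
dmg_change = 0.5 * treatment + rng.normal(0, 1, n)   # candidate mediator
outcome = 0.4 * treatment + 0.2 * dmg_change + rng.normal(0, 1, n)

def fit(y, regressors):
    """OLS of y on an intercept plus the given regressors."""
    X = sm.add_constant(np.column_stack(regressors))
    return sm.OLS(y, X).fit()

total = fit(outcome, [treatment])               # step 1: total effect (path c)
a_path = fit(dmg_change, [treatment])           # step 2: treatment -> mediator (path a)
joint = fit(outcome, [treatment, dmg_change])   # step 3: direct (c') and mediator (b) paths

print("total effect c :", round(total.params[1], 3), "p =", round(total.pvalues[1], 3))
print("path a         :", round(a_path.params[1], 3), "p =", round(a_path.pvalues[1], 3))
print("direct effect c':", round(joint.params[1], 3), "p =", round(joint.pvalues[1], 3))
print("path b         :", round(joint.params[2], 3), "p =", round(joint.pvalues[2], 3))
print("indirect (a*b) :", round(a_path.params[1] * joint.params[2], 3))
```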
Furthermore, the increase in DMG was associated with behavioral changes in autistic characteristics of quantified facial expression (i.e., improvements from baseline to 2 weeks and deteriorations from 2 to 4 weeks), although the increase in DMG was not related to improvements in clinical or behavioral outcomes during the 6-week administration period as a whole. Here, we found a significant increase in DMG induced by oxytocin administration in the participants with ASD. DMG, a nutrient supplement and a partial agonist for NMDAR glycine binding sites, is the N,N-dimethylated derivative of glycine. DMG is a natural amino acid found in certain foods, such as beans, cereal grains, and liver. DMG has been marketed as vitamin B15 since 1975 and was subsequently isolated as a single nutritional supplement to serve as an athletic performance enhancer [44]. DMG is an important intermediate in amino acid metabolism from choline and glycine betaine to sarcosine and glycine [45]. DMG can modulate NMDA receptors (NMDAR), because sarcosine (monomethylglycine) and glycine act as NMDAR co-agonists by occupying co-agonist (glycine) binding sites in the NMDAR [46,47]. A putative functional partial agonist for glycine sites of the NMDAR produces psychotropic effects [48]. A previous study reported that NMDAR is critical for the development and rescue of ASD-like phenotypes observed in Shank2-mutant mice and that, by modulating NMDAR, metabotropic glutamate receptor 5 may provide a novel treatment target for ASD [49]. [Fig. 3 caption: in participants with autism spectrum disorder, one participant (baseline to 2 weeks) and seven participants (2 to 4 weeks) were excluded because of recording failure, defocused video images, or a poor facial recognition rate at at least one assessment point; regression lines (solid) and 95% confidence bands (dashed) were fitted using simple linear regression.] DMG derivatives also exhibit pharmacological activities in the central nervous system, decreasing oxidative stress [50], improving immune responses [51], and exhibiting anticonvulsant activity in animal models [52]. Psychotropic effects of DMG have also been reported in animal studies, including antidepressant-like effects with reduction of ketamine-induced psychotomimetic behaviors [53] and a preventive effect against NMDAR inhibitor-induced impairment in social recognition memory [54]. Several studies, including randomized controlled trials, have reported lower levels of plasma DMG [55] and clinical effects of administration of DMG [56][57][58][59] in individuals with ASD, although the results remain controversial. Our current study provides the first clinical evidence for a relationship between changes in DMG and oxytocin treatment in subjects with ASD. Together with previous animal studies showing interactions between central oxytocin and NMDAR, such as central oxytocin release stimulated by NMDAR glycine site agonists [10,60,61], the current study supports the potential combination therapy of DMG or an NMDAR modulator and oxytocin for ASD. DMG is a partial agonist at the glycine binding site of NMDA receptors. However, although DMG alone did not alter NMDA receptor-mediated excitatory field potentials, DMG acts as an agonist at the glycine binding site of NMDA receptors in combination with glutamate [62].
Therefore, the agonist effect of DMG on NMDAR can be decreased with decreased medial prefrontal glutamate-glutamine concentration, and decreased NMDAR expression has been reported to occur during chronic administration of oxytocin [10]. As demonstrated in our previous study [11], the behavioral effects of oxytocin can be observed at a maximum of 2 weeks, with deterioration at 4 weeks, in a 6-week treatment period. Taken together, these findings suggest that in the acute phase (e.g., 0-2 weeks), increases in DMG induced with oxytocin administration can act as a partial agonist at the glycine binding site of NMDAR with glutamate [62]. In addition, administered oxytocin has clinical effects during the acute phase. In the chronic phase (e.g., 2-4 weeks), increases in DMG induced with administered oxytocin do not have an agonist effect on NMDAR under a decrement of glutamate [10]. As decreases in NMDAR induce decreased secretion of oxytocin [63], the clinical effects of oxytocin may also decrease. Although this interpretation is speculative, the current finding of increased DMG levels associated with oxytocin administration and its relationships with the emergence of positive behavioral effects of oxytocin between 0 and 2 weeks and the inverse deterioration of the effects of oxytocin on behavioral symptom, an autistic characteristic of facial expression, between 2 and 4 weeks is consistent with the interpretation. Furthermore, this notion is also consistent with the results of mediation analyses showing that both DMG increases and behavioral changes were associated with oxytocin administration in a parallel way, rather than clinical effects of oxytocin being mediated by increased DMG. Hence, our findings suggest that DMG and its interactions with NMDAR and glutamate are associated with modulation of oxytocin secretion and that the modulated secretion is associated with both emerging and deteriorating clinical effects of oxytocin, and further explain the lack of consistency in beneficial effects of chronic oxytocin in previous studies. Future studies will be needed to test this hypothesis in a design involving longitudinal assessment of DMG, glutamate, NMDAR, oxytocin levels, and behavioral evaluations. The increase in DMG was most prominent in the ASD participants exhibiting a time-course change in oxytocin efficacy. In addition, although the increase in DMG was not related to clinical or behavioral improvements during the 6-week administration period as a whole, the increase was associated with improvements from baseline to 2 weeks and also with deterioration from 2 to 4 weeks, assessed as behavioral changes in quantified characteristics of autistic facial expression. The mediation analyses revealed that oxytocin affected DMG levels and quantified facial expression in a parallel way. Collectively, our findings indicate that both the upregulation of DMG and time-course changes in quantified social behavior are associated with the efficacy of oxytocin for ASD. Together with previous animal studies on the relationship between oxytocin efficacy and NMDAR-dependent neural plasticity [10,64,65], our present clinical study further supports an association between NMDAR and neural plasticity in time-course changes, such as improvement in and subsequent deterioration of oxytocin efficacy. Previous animal studies support a relationship between oxytocin and neural plasticity via glutamatergic transmission-oxytocin enhances excitatory synaptic transmission [66] and facilitates long-term potentiation [64]. 
Our recent human clinical trial and animal study [10] further supports a relationship between NMDAR and oxytocin: Repeated administration of oxytocin downregulates medial prefrontal glutamatergic metabolites (i.e., N-acetylaspartate and glutamate-glutamine), measured with 1 H-magnetic resonance spectroscopy, compared with acute oxytocin [13]. The decreases in these metabolite levels were negatively and specifically correlated with oxytocin-induced improvements in medial prefrontal function. Furthermore, we showed that repeated administration of oxytocin decreased expression of the transcript for NMDA receptor type 2B in the medial prefrontal region, in contrast to acute oxytocin, in wild-type mice [10]. The current study further shows a link between changes in NMDA and time-course change in the efficacy of repeated administration of oxytocin in individuals with ASD. The present study with a peripheral metabolomics supports the possibility that changes in blood DMG level can briefly monitor the efficacy and its time course of oxytocin. Previous studies have suggested that metabolomics analyses are likely to sensitive for interactions among metabolite levels and the presence of a disorder such as ASD as well as factors such as severity of the disorder, comorbid conditions, diet, supplements, sex, genome, and other environmental factors [67]. Thus, metabolic signatures for psychiatric disorders could promote the identification of biomarkers for disease, for progression of disease or for response to therapy. In addition, it was proposed that metabolomics provides powerful tools for the process of drug discovery and development by providing detailed biochemical knowledge about drug candidates, their mechanism of action, therapeutic potential, and side effects [68,69]. We found no significant correlations between increased DMG and changes in plasma oxytocin levels. A previous study reported that a substantial increase in oxytocin plasma levels 30 min after intranasal administration and group mean oxytocin plasma levels returned to baseline by 90 min post-administration [70]. In contrast, the time course of DMG levels after oxytocin administration is currently unknown. Because the time course of DMG changes is unlikely to exactly match that of oxytocin, the lack of correlation between the changes in DMG and oxytocin levels quantified with blood collected at 60 min after administration of oxytocin is not surprising. Limitations There are several potential limitations to the current study. First, the participants in this study were all Japanese, adult, males with high-functioning ASD. Therefore, although the uniformity in demographic backgrounds enhanced the ability to detect metabolomics changes in the current study, the current findings should carefully be generalized to other clinical or non-clinical populations. Second, the metabolites changes caused by oxytocin administration were quantified using peripheral blood, and therefore may not reflect central nervous system changes. Further study is needed to clarify the interaction between oxytocin and molecular systems in the central nervous system. Third, considering the potential effects of nutrition on metabolite levels, we confirmed that there was no difference in BMI between the oxytocin and placebo groups. Furthermore, we tested correlations between BMI and changes in DMG levels and found no significant correlations. 
On the day of blood collection, all participants were fasting (> 3 h without consuming any meals or nutritious drinks) before collection from a peripheral vein, to reduce the effects of nutrition. However, it was reported that oxytocin can influence body weight, namely through reduction in food intake as well as increases in energy expenditure and/or lipolysis [71]. Because we did not measure body weight changes or quantify dietary content during the trial period, the possibility that changes in food intake during the 6-week administration period affected DMG levels cannot be completely ruled out. Future study is expected to collect information about individual's diet and add it as a covariate in the analysis. Fourth, although blood samples collected from 2 and 4 weeks after the start of treatment need to be analyzed to further support the "time-course" relationship with N,N-DMG levels, metabolomic analysis on peripheral blood is difficult to repeat, mainly because of the high burden of repeated blood collection on clinical trial participants and the substantial financial cost of metabolomic analyses. Future study with repeated blood collections is needed to see whether there is a consistent/reliable increase in DMG levels following repeated administrations of oxytocin. Conclusions In conclusion, the present high-throughput metabolomic analysis of plasma from a large-scale multi-center randomized controlled trial provides clinical evidence for an association between oxytocin-related increase in DMG and time-course changes in the efficacy of oxytocin for ASD social core symptoms. The results further support a contribution of NMDAR and neural plasticity to the time-course change. Our findings might suggest a potential optimization of oxytocin-based combinatorial therapy of an NMDAR modulator and oxytocin for ASD, such as for individuals showing deterioration in efficacy.
Remarks on mass and angular momenta for $U(1)^2$-invariant initial data We extend Brill's positive mass theorem to a large class of asymptotically flat, maximal, $U(1)^2$-invariant initial data sets on simply connected four-dimensional manifolds $\Sigma$. Moreover, we extend the local mass-angular momenta inequality of Ref. [1] for $U(1)^2$-invariant black holes to the case of a nonzero stress-energy tensor with positive matter density and an energy-momentum current invariant under the above symmetries. Introduction In [2] Brill proved a positive energy theorem for a certain class of maximal, axisymmetric initial data sets on R^3. Brill's theorem has been extended by Dain [3] and Gibbons and Holzegel [4] to a larger class of 3-dimensional initial data. Subsequently, Chruściel [5] generalized the result to any maximal initial data set on a simply connected manifold (with multiple asymptotically flat ends) admitting a U(1) action by isometries. Moreover, in [4] a positive energy theorem was proved for a restricted class of maximal, U(1)^2-invariant, four-dimensional initial data sets on R^4. The first purpose of this note is to generalize this latter result to a larger class of 4+1 initial data. In particular, our result extends the work of [4] in three main directions: (1) We consider the general form of a U(1)^2-invariant metric (i.e. we do not assume the initial data has an orthogonally transitive U(1)^2 isometry group) on asymptotically flat, simply connected, four-dimensional manifolds Σ admitting a torus action. (2) The orbit space B ≅ Σ/U(1)^2 of Σ belongs to a larger class Ξ, which is defined below in Definition 2.2. The boundary conditions on the axis and fall-off conditions at spatial infinity are weaker than those considered in [4]. In particular they include the data corresponding to maximal spatial slices of the Myers-Perry black hole. (3) The manifold Σ may possess an additional end (either asymptotically flat or asymptotically cylindrical of the form R × S^3). Such Σ arise, for example, as complete initial data for black hole spacetimes. The existence of non-trivial topology is also required for initial data to carry non-vanishing angular momenta. The results also hold for data satisfying (1) and (2) on R^4. The second main result of this work is to extend the local mass-angular momenta inequality proved in [1] to the non-vacuum case with positive energy density and vanishing energy current in directions tangent to the generators of the isometry group. This result naturally extends the result of [6] to the 4+1-dimensional setting. Positivity of mass An asymptotically flat maximal initial data set (Σ, h, K, µ, j) must satisfy the Einstein constraint equations, where µ is the energy density, j is an energy-momentum current, and R_h and |K|^2_h are respectively the Ricci scalar curvature and the full contraction of K with respect to h. Σ is assumed to be a complete, oriented, simply connected, asymptotically flat spin manifold with an additional asymptotic end. We now briefly review the discussion in [7]. As proved in [8,9], if the manifold-with-boundary M is a spatial slice of the domain of outer communications of an asymptotically flat black hole spacetime admitting a U(1)^2 action, then Σ ≅ R^4 # n(S^2 × S^2) − B for some integer n, where B is a four-manifold with closure B̄ such that ∂B = H and H is a spatial cross-section of the event horizon. We obtain a complete manifold Σ by doubling M across its boundary ∂M [7].
For example, complete initial data for the non-extreme Myers-Perry black hole has Σ ∼ = R × S 3 , which has two asymptotically flat ends. For extreme black hole initial data, a spatial slice of the domain of outer communications is already complete (the horizon is an infinite proper distance away from any point in the interior). Complete initial data for the extreme Myers-Perry black hole again has Σ ∼ = R × S 3 , although the geometry is now cylindrical at one end. Note that initial data for non-extreme and extreme black rings have different topology [7]. We consider U(1) 2 = U(1) × U(1) invariant data with generators ξ (i) for i = 1, 2. Σ is therefore equipped with a U(1) 2 action and further L ξ (i) K = L ξ (i) h = 0. It proves useful to represent our space of functions on the two-dimensional orbit space B ≡ Σ/U(1) 2 . in general the action will have fixed points (i.e. on points where a linear combination of the ξ (i) vanish). A careful analysis [10] establishes that B is an analytic, simply connected manifold with boundaries and corners and can be described as follows. Define the Gram matrix λ ij = ξ (i) ·ξ (j) . On interior points of B the rank of λ ij is 2. The boundary is divided into segments. On each such segment the rank of λ ij is one and there is an integer-valued vector v i such that λ ij v j = 0 on each point of the segment (i.e. the Killing field v i ξ i vanishes on this segment). On corner points, where adjacent boundary segments meet, the rank of λ ij vanishes. Moreover, if v s = (v 1 s , v 2 s ) t and v s+1 are vectors associated with two adjacent boundary segments then we must have det(v s , v s+1 ) = ±1 [10]. Finally, we note that since Σ has two asymptotic ends, the two-dimensional orbit space is an open manifold with two ends. Note that at interior points, the orbit space is equipped with the quotient metric The orbit space B is a simply connected, analytic two-manifold with (smooth) boundaries and corners, with two ends. By the Riemann mapping theorem, it can be analytically mapped to the upper half plane of C with a point removed on the real axis (if the point is removed anywhere else, then the region will not be simply connected). The boundary of B is mapped to the real axis with the above point removed by Osgood-Caratheodory theorem [11], which we take to be the origin without loss of generality. We assume that (2.2) admits the global representation where U = U(ρ, z), v = v(ρ, z) are smooth functions and ρ ∈ [0, ∞) and z ∈ R. The asymptotically flat end corresponds to ρ, z → ∞ and the point (ρ, z) = (0, 0) corresponds to the second asymptotic end. We will impose appropriate decay conditions on (U, v) below. The boundary is characterized by ρ = 0 in this representation. The boundary segments, where a particular linear combination of Killing fields vanish, are then described by the intervals I s = {(ρ, z)|ρ = 0, a s < z < a s+1 } where a 1 < a 2 < · · · < a n are referred to as 'rod points'. Asymptotic flatness requires that there are two semi-infinite rods I − = {(ρ, z)|ρ = 0, −∞ < z < a 1 } and I + = {(ρ, z)|ρ = 0, a n < z < ∞} corresponding to the two symmetry axes of the asymptotically flat region. Further details on the orbit space can be found in [7]. Now note det λ(0, z) = 0 on corner and boundary points and smoothness at fixed points requires det λ = ρ 2 + O(ρ 4 ) as ρ → 0. Furthermore since Σ is asymptotically flat, this implies det λ has to approach the corresponding value in Euclidean space outside a large ball (i.e. 
det λ ∼ r 4 as r → ∞ where r is a radial coordinate in R 4 ). Let φ i be coordinates with period 2π such that the L ξ i φ j = δ j i . Then ξ (i) = ∂ φ i . The four-manifold (Σ, h) may be considered as the total space of a U(1) 2 principal bundle over B, where we identify the fibre metric with λ ij . We use Greek indices α, β = 1, ..., 4 to label local coordinates on Σ. The simplest case is R 4 with its Euclidean metric which in our coordinate system has the representation Asymptotically flat metrics must approach δ 4 with appropriate fall-off conditions. In particular we have det λ → ρ 2 as ρ, z → ∞. This suggests we set λ ij = e 2v λ ′ ij where det λ ′ = ρ 2 and v satisfies appropriate decay conditions at the ends and boundary conditions on the axis. These decay conditions are most appropriately expressed in terms of new coordinates (r, x) defined by where 0 ≤ r < ∞ and −1 ≤ x ≤ 1. The axis Γ now corresponds to two lines I + ≡ {(r, x)|x = 1} and I − ≡ {(r, x)|x = −1} . Note that if the space has a second asymptotic end, then the point r = 0 is removed. In this representation, the Euclidean metric on R 4 takes the form We consider initial data (Σ, h) which are a natural generalization of the well-known Brill data for three-dimensional initial data sets. Motivated by the above discussion, we define this class as follows: Definition 2.1 (Generalized Brill data). We say that an initial data set (Σ, h, K, µ, j) for the Einstein equations is a Generalized Brill (GB) initial data set with local metric where (x 1 , x 2 ) = (ρ, z), det λ ′ = ρ 2 and U = V − 1 2 log 2 ρ 2 + z 2 if it satisfies the following conditions. (1) (Σ, h) is a simply connected Riemannian manifold and M end is diffeomorphic to (2) The second fundamental form satisfies and λ ′ ij satisfy the following decay conditions, which are best expressed in terms of the (r, x) chart given by (2.6): where h c = e 2V dx 2 4(1−x 2 ) +σ ij dφ i dφ j is a metric on N. (d) as ρ → 0 and w = w i ∂ ∂φ i is the Killing vector vanishes on the rod I s λ ′ ij w j = O(ρ 2 ), and others λ ′ ij = O(1) . and to avoid conical singularities on the axis Γ we have We remark that any sufficiently smooth, asymptotically flat metric on a simply connected 3-manifold with additional asymptotic ends obtained by removing points form R 3 and admitting a U(1) isometry can be written in the above form, with i = 1 [5]. It is natural to expect a similar result holds in the present case, up to some additional conditions. Note that the one-forms A i = A i a dx a may be considered as a local connection on the U(1) 2 bundle over B. The initial data sets defined above encompass a large class of possible data sets, which include in particular initial data for extreme and non-extreme black rings. It proves useful to restrict attention to a subclass of data, which includes initial data for the Myers-Perry black hole. Let a fixed GB data set have orbit space B with rod points a 1 , a 2 . . . a n . Via the transformation (2.6) these points map to I + and I − . We arrange these points in order of increasing r and denote by b s , for The I F is the asymptotically flat end and I E is another asymptotic end or just the origin of half plan (ρ, z). Remark 2.1. The regions B s correspond to annuli in the (ρ, z) representation of B and (finite, infinite, or semi-infinite) rectangles on the (y, x) representation where y = log r. Remark 2.2. The geometry of a second asymptotic end of data belonging to Ξ must have N = S 3 (or a Lens space quotient). 
This follows from the classification of orbit spaces N/U(1) 2 obtained in [10] when distinct Killing fields vanish on I + and I − . The ADM energy 4 and momenta for a generalized Brill data set (Σ, h, K, µ, j) are given by where S 3 r refers to a three-sphere of coordinate radius r with volume element ds = r 3 4 dxdφ 1 dφ 2 in the Euclidean chart outside a large compact region and n is the unit normal. Then we have the following positive mass theorem. 4 We will refer to this as the 'mass' hereafter. (a) Orbit space as half plane Orbit space as infinite strip Figure 1. The orbit space can be subdivided into subregions B s which are half-annuli in the (ρ, z) plane and rectangles in the (y, x) = (log r, x) plane. In this case n = 6. The dashed line I E can represent origin or in the case of black holes is another asymptotic end. Moreover, we have m < ∞ if and only if we have Proof. Consider the GB data (Σ, h, K, µ, j). We can write the metric in conformal form as where Φ = e v . Then by the asymptotic decay properties of GB data at the asymptotically flat end we have Then the integrand in the expression for the ADM mass (2.8) is where we used Φ = e v = 1 + o 1 (r −1 ) as r → ∞ in first equality. The second equality follows from U(1) 2 -invariant symmetry of v and definition of I F = {(r, x) : r = ∞, −1 ≤ x ≤ 1}. Now we find the ADM mass of the conformal metrich. Lemma 2.2. Consider a GB data (Σ, h, K, µ, j) with the rescaling (2.11). Then Proof. Consider the flat metric in Cartesian coordinates (y i ) 1 − x 2 , z = r 2 2 x, φ 1 , φ 2 )) for GB conformal metric with transformation First we write the conformal metric in the (r, x, φ 1 , φ 2 ) chart: We compute the ADM mass of each one of these terms : • C I : This is a conformally flat metric and by (2.14) we obtain (2.21) m C I = 1 16π lim r→∞ Sr −3∂ r e 2V − 1 ds. Then by definition of ADM mass (2.8) we obtain • C III : This is similar to C II and we have Hence the ADM mass of B I + B II is where in the second line we used part (3)-a of Definition 2.1. We consider the term B III We prove ADM mass of the D I and D II parts are zero and the argument for the other terms are similar. As in the argument used for C II and C III , we consider D I as the following metric Then the integrand appearing in the ADM mass expression is Now consider D II as a metric (D II ) ab = 1 2 r 2 (1 + x)dφ 1 A 1 z dz = 1 2 (y 1 dy 2 − y 2 dy 1 ) A 1 z d y 2 1 + y 2 2 − y 2 3 + y 2 4 = A 1 z z (y 1 dy 2 − y 2 dy 1 )(y 1 dy 1 + y 2 dy 2 ) − A 1 z z (y 1 dy 2 − y 2 dy 1 )(y 3 dy 3 + y 3 dy 3 ). Then the ADM mass is Therefore, the ADM mass of the conformal metric is zero, that is mh = 0. Returning to the mass of GB data we have Then we define three one-form ω, χ 1 and χ 2 where ∆ 3 is Laplace operator respect to δ = dρ 2 + dz 2 + ρ 2 dφ 2 be metric on R 3 and ∆ 2 = ∂ 2 ρ + ∂ 2 z . Now by asymptotes of GB data set, we list the behaviour of χ 1 and χ 2 at boundary of the orbit space ∂B = Γ ∪ I F ∪ I E where Γ = I + ∪ I − . The first equality follows from Stokes theorem and the last equality follows from equation (2.31) and orientation of (r, x) chart. We next compute the scalar curvature ofh αβ . After a conformal rescaling we have where ∇ is the derivative with respect to δ ab and Rh is Ricci scalar ofh. Now similar to the calculation in [13] we compute 5 the Ricci tensor ofh αβ : Here D a and 2R ab are the Levi-Civita connection and Ricci tensor with respect to q ab = e 2U δ ab . 
Then the scalar curvature is By equations (2.41) and (2.45) we have The inequality follows from H ij , R h ≥ 0. Now we use the argument of Section 5 of [7] to establish positivity of m over each annulus B s . Fix B s and without loss of generality we can select the following parameterization of the 3 independent functions contained in λ ′ ij and v: (2.49) where v s = ∂φ1 s and w s = ∂φ2 s vanish on I + ∩ B s and I − ∩ B s , respectively such that where for fixed s we have det(α j sk ) = det = ±1 [10]. Recall that this relation must hold between two bases that generate the U(1) 2 action. The functions V s 1 , V s 2 and W s are C 1 functions whose boundary conditions on the axis are induced from those of λ ′ ij and v in Definition 2.1. In particular, we have det λ ′ = ρ 2 and to remove conical singularities on I ± by Definition 2.1-(3d) we require: ij and v are continuous across the boundary of B s , this will impose boundary conditions on the parameterization functions in adjacent subregions. Then we have The final inequality follows from [7,14] (see also [4] . We prove it by the technique we used to prove positivity of m in each B s . Fix B s and a parametrization (2.49). Then by (2.52) we have To show this, one should expand the derivatives with respect to r and x and use an argument similar to that given in [4,14]. The details are straightforward but tedious. Since W s = 0 on I ± , we have W s ≡ 0. Also by equations (2.49) and (2.54), we have ∇v = 0 and by Definition 2.1, v vanishes at infinity. This implies v ≡ 0. Note that in particular this implies there could not be another asymptotic end as r → 0, since v ∝ − log r in that case. Moreover, by definition of v in the parametrization (2.49) and v = 0, we have V s 1 = −V s 2 =constant. This means for each B s we have where k = j and k, j = 1, 2. If we consider the last annulus B n ′ which extends to spatial infinity, i.e. I F , then by the asymptotic conditions of λ ′ ij in Definition 2.1 and ∇V n ′ 1 = 0, we obtain V n ′ 1 = V n ′ 2 ≡ 0. Moreover, if we consider the common boundary of B n ′ −1 and B n ′ , by the continuity of V s 1 through boundary of B s and (2.50), we have where for fixed k, α l (n ′ −1)k = (α 1 (n ′ −1)k , α 2 (n ′ −1)k ) and α tl (n ′ −1)k = (α 1 (n ′ −1)k , α 2 (n ′ −1)k ) t . These conditions arise by expressing λ ′ ij in B n ′ −1 (2.55) in the fixed basis ξ (i) using the transformation (2.50). Since V n ′ −1 1 =constant in the above equation and right hand side is a function of x for some α l (n ′ −1)k , then we reach to a contradiction and this implies n ′ = 1. This is equivalent to Σ having the trivial orbit space, i.e. B Σ = B R 4 . Moreover, we obtain λ ′ ij = σ ij = r 2 2 diag(1 + x, 1 − x) and by straightforward computation it implies (2.57) − det ∇λ ′ 2ρ 2 = 0, Then, the equation (2.46) reduces to (2.58) ∆ 2 V = 0, V vanishes on axis and infinity . By maximum principle on open set O R,ǫ = {(ρ, z) : ǫ < ρ < R}, we have V ≡ 0 as R → ∞ and ǫ → 0. By (2.53) the one form β i = A i ρ dρ + A i z dz is close and simply connectedness of Σ implies that there exists a function ψ i such that β i = dψ i , i.e. β i is exact. Then the metric has the following global representation where γ i are new rotational angles with period 2π. Hence, h is flat metric and Σ = R 4 . It is natural to expect this positivity result should extend to GB data that do not belong to Ξ. We will return to this point in the final section. 
Mass-angular momenta inequality In [1] a local version of a mass-angular momenta inequality for a class of asymptotically flat, maximal, U(1) 2 -invariant, vacuum black holes was shown. The U(1) 2 isometry group was assumed to act orthogonally transitively (i.e. there exist two-dimensional surfaces orthogonal to the surfaces of transitivity at every point). There is a question regarding the extension of our proof to the non-vacuum case and considering the general U(1) 2 -invariant metric equation (2.7). The main problem in the non-vacuum case is whether angular momenta are conserved quantities and twist potentials exist globally. The ADM angular momenta related to the Killing vector ξ (i) for the GB data set (Σ, h, K, µ, j) is This is a well-defined quantity and it is a conserved quantity in U(1) 2 -invariant vacuum spacetimes. With matter source we show under appropriate conditions it remains a conserved quantity. In the previous section we showed that the ADM mass has lower bound, the right hand of equation (2.48). By the Hamiltonian constraint equation we have In order to prove a local mass angular mometa inequality following the argument of [1] we need to first show the global existence of the potentials where ⋆ is the Hodge star operator with respect to h. Lemma 3.1. Consider the GB initial data set (Σ, h, K, µ, j). If ι ξ (i) j = 0, then J (i) are conserved and global twist potentials Y i exist. Proof. Let N ⊂ Σ and S 1 , S 2 are two 3 dimensional surfaces with isometry subgroup Thus the angular momenta are conserved quantities. For the second part, let then by the Killing property of ξ (i) and constraint equation we have ⋆d⋆S (i) = −ι ξ (i) divK = −ι ξ (i) j = 0. Therefore, since Σ is simply connected the potentials Y (i) globally exist. Note that the above result can be extended to D-dimensional initial data with U(1) D−2 commuting Killing vectors [14]. Recall that t − φ i symmetric data consists of the subclass of GB initial data with the property that h αβ → h αβ and K αβ → −K αβ under the diffeomorphism φ i → −φ i [15]. It can be shown that for vacuum (µ = j = 0) t − φ i -symmetric data, the metric takes the form (2.7) with A i a = 0 and the extrinsic curvature is determined fully from the twist potentials Y i [7]. Thus this data is characterized by five scalar functions, or equivalently, the triple u = (v, λ ′ , Y ), where v is a function, λ ′ is a positive definite symmetric 2 × 2 matrix, and Y is a column vector [7]. Explicitly, for vacuum t − φ i symmetric data, we can express the extrinsic curvature as (1) , ξ α (2) ) t is a column vector and S = (S 1 , S 2 ) t is a column vector with components S i defined by (3.3) [14]. This motivates the following definition. Definition 3.1. Let (Σ, h, K, µ, j) be a GB initial data set with µ ≥ 0 and ι ξ (i) j = 0. We define the associated reduced data to be the vacuum t − φ i -symmetric data characterized by the triple u = (v, λ ′ , Y ) where (v, λ ′ ) is extracted from the original data and Y is defined in (3.3). The ADM mass of a given GB data set is bounded below by the ADM mass of its associated reduced data. This can be shown as follows. Let introduce the co-frame of one forms {θ α } θ a = e v+U dx a , θ i+2 = e v dφ i + A i a dx a , so that the metric can be expressed as with associated dual frame of basis vectors where x a = (ρ, z).Then we have where ǫ ab is the volume form on the flat two-dimensional metric. Noting K bi = K(e b , e i ) = K(θ b , e i ) we read off Noting that in this basis, where Y = (Y (1) , Y (2) ) t . 
Using (3.2) and (2.48) we arrive at Then it follows directly form the results of [7] that we can rewrite the right hand side of equation (3.13) as 6 (3.14) M ≡ π 4 B − det ∇λ ′ 2ρ 2 + e −6v ∇Y t λ ′−1 ∇Y 2ρ 2 + 6 |∇v| 2 dµ + π 4 rods Is log V s dz . which defines the mass functional M = M (v, λ ′ , Y ) and V s is defined in Definition 2.1-3d. M evaluates to the ADM mass for vacuum, t − φ i symmetric data. Thus we have shown that m ≥ M = m R where m R is the ADM mass of the associated reduced data. One would expect the mass functional is positive definite for all orbit spaces on asymptotically flat Σ with positive scalar curvature. However, positivity of M has been only established for B ∈ Ξ [7]. Thus we have the following conjecture. Conjecture 3.2. Consider GB initial data set then M (v, λ ′ , Y ) is a non-negative functional for any orbit space. We setū = (v,λ ′ ,Ȳ ) whereλ ′ is a symmetric 2 × 2 matrix such that detλ ′ = 0. Considerū as a perturbation about some fixed initial data u 0 defined in Definition 3.2 . This should consist of five free degrees of freedom, and the apparent restriction detλ ′ = 0 is simply a gauge choice that preserves the condition det λ ′ = ρ 2 under the perturbation. Let ρ 0 > 0 and Ω ρ 0 ≡ {(ρ, z, ϕ)|ρ > ρ 0 } and select the perturbationȲ and λ in C ∞ c (Ω ρ 0 ). Now for a (unbounded) domain Ω, we introduce the following weighted spaces of C 1 functions with norm and similar to [1] we define the extreme class of initial data Definition 3.2. The set of extreme class E is the collection of data arising from extreme, asymptotically flat, R × U(1) 2 invariant black holes which consist of triples u 0 = (v 0 , λ ′ 0 , Y 0 ) where v 0 is a scalar, λ ′ 0 = [λ ij ] is a positive definite 2 × 2 symmetric matrix, and Y 0 is a column vector with the following bounds for ρ ≤ r 2 (1) (2) C 1 ρI 2×2 ≤ λ 0 ≤ C 2 ρI 2×2 and C 3 ρ −1 I 2×2 ≤ λ −1 0 ≤ C 4 ρ −1 I 2×2 in Ω ρ 0 (3) ρ 2 ≤ X 0 in R 3 where X 0 = det λ 0 and X 2 0 ≤ C ′ ρ 4 in Ω ρ 0 where lim ρ 0 →0 C ′ = ∞ (4) |∇v 0 | 2 ≤ Cr −4 , |∇ ln X 0 | 2 ≤ Cρ −2 in R 3 and ∇λ 0 λ −1 0 2 ≤ Cρ −2 in Ω ρ 0 (5) V =V (x)r −2 + o 1 (r −2 ) and 1 −1V (x)dx = 0 as r → ∞. 6 There is a sign mistake in [7] because of orientation. The sign of summentaon over rods should be positive. This definition was motivated by studying the geometry of the initial data for the extreme Myers-Perry and black ring solutions. In has been established that such geometries are local minimizers of the mass amongst suitably nearby data with the same orbit space [1]. We can now state our second result: Theorem 3.3. Let (Σ, h, K, µ, j) be a GB initial data set with mass m and fixed angular momenta J (1) and J (2) and fixed orbit space B ∈ Ξ satisfying µ ≥ 0 and ι ξ (i) j = 0. Let u = (v, λ ′ , Y ) describe the associated reduced data as in Definition 3.1 and write u = u 0 +ū where u 0 is extreme data with the same angular momenta and orbit space of the GB initial data set. Ifū ∈ B is sufficiently small then for some f which depends on the orbit space B. Moreover, m = f (J (1) , J (2) ) for GB initial data set in a neighbourhood if and only if the data are extreme data and µ = j = 0. Proof. First, consider the GB data with µ ≥ 0 and ι ξ (i) j = 0. Then by Lemma 3.1, there exist global potentials Y i such that |K| h satisfies in inequality (3.12) and it yields m ≥ M(u), where u is the associated reduced data. Second, since u = u 0 +ū, then all the assumptions of Theorem 1.1 of [1] hold and it follows that there exists ǫ > 0 such that if ū B < ǫ, then m ≥ M(u 0 ). 
Finally, by [1] it follows the inequality is saturated if and only if the data is extreme data.
2015-12-06T18:40:43.000Z
2015-08-10T00:00:00.000
{ "year": 2015, "sha1": "f1fa42bf18ce9b07a3cb9cdac9ba193144b2b403", "oa_license": null, "oa_url": null, "oa_status": "CLOSED", "pdf_src": "Arxiv", "pdf_hash": "f1fa42bf18ce9b07a3cb9cdac9ba193144b2b403", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Mathematics" ] }
182908490
pes2o/s2orc
v3-fos-license
Assessment of vasectomy awareness in different communities of Gujurat Vasectomy is an important surgical method of birth control, which is of significance in overpopulated countries, such as India and china, its importance is dictated by a significant number of factors including its superstitious perceptions which deter the men from undergoing such a procedure, leading to woman who sometimes, are forced to undergo contraceptive procedures for the same reasons, we are discussing here, the causes and how to further improve its accessibility and spread knowledge about vasectomy , to prevent overpopulation and further propagate family planning. Introduction A vasectomy is considered a permanent method of birth control. A vasectomy prevents the release of sperm when a man ejaculate. It is a very effective contraceptive method only 1 to 2 women out of 1,000 will have an unplanned pregnancy in the first year after their partners have had a vasectomy [1]. There are several types of vasectomies and the procedures to go about these may vary. These procedures are incision and no-incision (or "no-scalpel") methods. Vasectomies are usually done to men who make a decision to no longer continue his biological family, believe or are told that other methods of contraception are unacceptable, do not want to pass down a hereditary illness or disability has a partner whose health would be threatened by a future pregnancy ,has concerns along with his partner about the side effects of other methods of contraception, agrees with his partner agree that their family is complete, and no more children are wanted ,wants to spare his partner the surgery and expense of tubal sterilization (sterilization for women is more complicated and more costly) In India, men are often paid to get a vasectomy under the terms of different government schemes usually for reasons such as population control [2,3]. However, despite the certainty, safety and even cost benefit of vasectomy, Indian men seldom undergo the procedure. This is mostly due to social stigma, fear of the unknown, and possibly because many people are averse to any medical procedures (since the alteration of a part of the body is a significant decision). Along with these reasons, the concept of contraception itself is commonly taken lightly in many developing countries. Even after various campaigns to promote methods of contraception for reasons such as population control, its awareness is not given much importance; and for some areas under the poverty line, it is usually absent. The social stigma surrounding vasectomy has a considerable influence on the opinions of Indian men and sometimes also their wives on considering undergoing the procedure. In a society where patriarchal traditions prevail, succumbing to the ideas of undergoing what some may see as "castration" seems masculine and looked down upon by others. For many, lack of education of the subject often leads to a fear of unknown consequences of the procedure. For example, some may believe that it results in fluctuating hormonal changes or even permanent changes to the physical appearance of a man. Certain uneducated persons may also have a negative perception that the removal of a reproductive organ which makes them barren will label them as no longer an 'ideal' female or male. 
With these varying opinions about the idea of male sterilizationparticularly the procedure of vasectomy a study was conducted on males aged between twenty to forty-five years having at least one child, from different areas in Ahmedabad, Gujarat, India about the awareness of vasectomy and to assess their attitudes towards vasectomy. Review of literature Vasectomy is a surgical procedure for permanent contraception in males. The procedure is regarded as permanent because its reversal is costly and often does not restore the male sperm count or sperm motility to pre-vasectomy levels. During the procedure, the male vas deferens is severed and then tied or sealed in a manner to prevent the sperm from entering into the seminal stream that is ejaculation and thereby prevent fertilization. Men with vasectomies have a very small chance (nearly zero) of successfully impregnating a woman but a vasectomy has no effect on rates of sexually transmitted infections (STIs) for married and monogamous men. After vasectomy, the testes remain in the scrotum where Leydig cells continue to produce testosterone and other main hormones that continue to be secreted in the bloodstream. Thus, vasectomy has no effect on the masculinity of a person. After a short recovery at the hospital, the patient is sent home to rest. Since the procedure is minimally invasive the vasectomy patients can resume their sexual behavior within a week with little or no discomfort. When the vasectomy is complete with recovery, sperms cannot exit the body through the penis. Sperms are still produced by the testicles however they are soon broken down and absorbed by the body. Membranes in the epididymis absorb much of the fluid content and much solid content is broken down by the responding macrophages and then reabsorbed via the bloodstream. Hospitalization is not normally required as the procedure is not complicated, incisions are small however some may find it necessary for varying reasons. It's a very effective method for contraception though it does not exclude the chances of STDs. It is in fact of lower cost and less invasive than tubal ligation. However, current possible short-term complications include infection, bruising and bleeding into the scrotum resulting in the collection of blood known as hematoma. The primary long-term (yet rare) complications of vasectomy are chronic pain conditions or syndromes that can affect any of the scrotum, pelvis or lower abdominal regions collectively known as Post-Vasectomy Pain Syndrome. Because the procedure is considered a permanent method of contraception and is not easily reversed, men are usually counseled/ advised to consider how the long-term outcome of vasectomy might affect them both emotionally and physically. The procedure is not often encouraged for young, single men as their chances for biological parenthood are thereby more or less permanently reduced to almost zero. Government schemes in India have made vasectomy manageable for men and have brought many benefits to the patient as well as compensation for any deaths or failures of the procedure. The government criteria for male sterilization is having two children and being married. However, after one child, one is still eligible for the process. Under present rules, all the beneficiaries of vasectomy operation are given Rs. 1100/-cash as motivation. In India, the vasectomy and vasectomy (sterilization) program is non-compulsory and the couples choose a method best suited to them. 
In the year 2013 to 2014, 4092806 sterilization operations have been performed in the country [2]. In case of death or failure of sterilization, the government provides compensation as per details given below. Under section I-IA, Rs.2,00,000 for: "Death following sterilization (inclusive of death during process of sterilization operation) in hospital or within 7 days from the date of discharge from the hospital" 2 Under Section I-IB, Rs.50,000 for: "Death following sterilization within 8 to 30 days from the date of discharge from the hospital" 2 Under Section I-IC, Rs.50,000 for: "Failure of Sterilization" 2 Under I-ID, actual not exceeding Rs 25,000"Cost of treatment in hospital and up to 60 days arising out of Complication following Sterilization operation (inclusive of complication during process of sterilization operation) from the date of discharge" 2Under II, Up to Rs. 2,00,000/-"Indemnity Insurance per Doctor/facility but not more than 4 cases in a year" [2]. Methods and materials Design of study-cross sectional study Implementation of program/methodology A questionnaire was prepared keeping in mind various characters necessary for assessment of knowledge and attitude of Married men between 20-45 years of age. The questionnaire was used to collect data from door to door community visit after taking the oral consent of the respective individual. The data obtained through the questionnaire was entered in an Excel spreadsheet. The EPI info software was used to analyze the data and find statistical correlation and significance using the Chi Square Test Results and discussion Out of the total men, majority of them being Hindus, fell in age group of 27 to 32 years and had received primary education. Most of the men had 2 children with their primary method of contraception being a condom. Assessing their knowledge, it was found that 47% of the men believed that vasectomy will make them lose their sexual abilities. Majority of Interpretation: Majority of the males who believed that the responsibility of decision regarding the methods of family planning solely rests on them also believed that the permanent sterilization should only be for females. Association between education and myth (p:0.0043). Interpretation Education is strongly associated with myth that a vasectomy will make a man impotent. As literacy increases proportion of men those who believe in this myth decreases. This association is statistically very signi�icant because p≤0.05 Association between knowledge of chances getting STDs in polygamous men and attitude towards post vasectomy promiscuousness (P-Value:0.0001). Interpretation: Majority of Males whose decision depended upon the recovery time taken after vasectomy also believed that vasectomy causes long lasting pain. Association between belief of pain and Attitude towards Vasectomy (p value=0.004). P -Value: 0.0001 2 = 52.96 Degree of freedom: 9 Interpretation: Majority of the males who believed that the responsibility of decision regarding the methods of family planning solely rests on them also believed that the permanent sterilization should only be for females. Association between education and myth (p:0.0043). Interpretation Education is strongly associated with myth that a vasectomy will make a man impotent. As literacy increases proportion of men those who believe in this myth decreases. 
This association is statistically very signi�icant because p≤0.05 Association between knowledge of chances getting STDs in polygamous men and attitude towards post vasectomy promiscuousness (P-Value:0.0001). Interpretation: Majority of males who disagreed on having considered getting a vasectomy also believed that it causes long lasting pain. them had no idea about the nearest centers performing vasectomy, the government schemes or about insurance available for vasectomy. Assessing their attitude, it was found that (Figures 1-8 and Table 1): • 61% men disagreed on having considered getting a vasectomy. • 54% men also had a wrong notion that it makes a man more promiscuous. • 54% of the men believed that the male should be the sole decision maker on the methods of family planning to be used. • 46% of the men also feared their acceptance in society as vasectomy is considered as a taboo among the Indians. • 43% surprisingly also believed that permanent contraception should only be for females. • 42% of men also believed that the time taken for recovery would be a hindrance to their work. • 39% believed that it was against their religion • On analyzing the data, following associations were established. • Most of the men who believed that the decision of methods of family planning solely rests on them, also believed that the permanent sterilization should be for women. • According to education, it was found that poorly educated people had more incorrect notions about vasectomy. • A strong association between the men who believed that it increases the chances of STDs having an extramarital affair also believed that vasectomy makes the men more promiscuous. • Men whose decision depended upon the recovery time taken after the procedure also believed that it caused long lasting pain. • Men who did not even consider having considered vasectomy done also believed it caused long lasting pain. Summary A study was conducted regarding the vasectomy awareness in 117 males in the age group 20 to 45 years (mean=33.95 years) having at least one child from different areas of Ahmedabad. Assessing their knowledge, it was found that men are poorly aware about vasectomy (mean score=5.42/11, 47.27%). It was found that they were unaware regarding insurance plans available for vasectomy (67%) as well as the nearest centers that perform vasectomy operation (65%). Majority of the males were also unaware about the government schemes (55%). Many, [falsely] believe that the operation is irreversible (54.7%) [3][4][5][6][7]. The assessment also shows that men falsely believe that vasectomy will make men more promiscuous (54%), it can make men lose their sexual abilities (47%), will increase their chances of STDs (41.9%), can create long lasting pain (36.8%), and that it is a similar process to castration (31%). 47% males feared their acceptance in society. A strong association was found between the myths of vasectomy operation and the education of the assessed males. Out of the total number of men assessed, 61% disagreed on having considered getting a vasectomy. Conclusion The men assessed during the project did not have the essential knowledge regarding the procedure of vasectomy. Many also did not possess the appropriate attitude towards undergoing it and many were unwilling to undergo the procedure due to various myths. Recommendation The problem of lack of awareness about vasectomy and the government schemes regarding the same can be eliminated. 
There should be more awareness regarding the "no scalpel" surgery in their content of public awareness to eliminate the myth regarding long lasting pain after surgery. The family planning counsellor should inform the couple regarding the benefits of vasectomy over tubectomy during the cafeteria approach. To build the right kind of attitude, to create awareness "Social Marketing" for vasectomy still needs to be addressed with more intense and specific IEC in the protocol.
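As a methodological footnote to the analysis described in the Methods section, the associations reported above (for example chi-square = 52.96 with 9 degrees of freedom and p = 0.0001) are standard chi-square tests of independence on contingency tables built from the questionnaire responses. A minimal sketch of how such a test can be reproduced outside EPI Info is given below; the table counts are hypothetical placeholders, since the underlying cross-tabulations are not reported here.

```python
# Chi-square test of independence on a questionnaire cross-tabulation.
# The counts below are hypothetical placeholders, not the study's data.
from scipy.stats import chi2_contingency

# Rows: education level; columns: agrees / disagrees with the myth
# "a vasectomy will make a man impotent".
observed = [
    [30, 10],   # no formal education
    [25, 18],   # primary education
    [12, 22],   # secondary education or higher
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p_value:.4f}")
# A p-value below 0.05 would indicate that belief in the myth is
# associated with education, as the study reports.
```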
2019-06-07T21:13:29.748Z
2018-01-01T00:00:00.000
{ "year": 2018, "sha1": "282f05c426be325039a692f280cf45d6c7b1e367", "oa_license": "CCBY", "oa_url": "https://www.oatext.com/pdf/BRCP-3-175.pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "0d13fc14ee6890fd6edea2612a5109e8dac129a5", "s2fieldsofstudy": [ "Medicine", "Sociology" ], "extfieldsofstudy": [ "Medicine" ] }
7906938
pes2o/s2orc
v3-fos-license
Multiple Gastric Metastases from Ovarian Carcinoma Diagnosed by Endoscopic Ultrasound with Fine Needle Aspiration Metastasis to the stomach from nongastric tumors is a rare event. We present a case of ovarian cancer metastasis to the gastric wall that presented as multiple subepithelial gastric lesions. A 55-year-old female with known stage III b serous ovarian cancer was admitted to the hospital with melena and anemia. A 1.5 to 2 cm subepithelial mass with superficial overlying erosion in the antrum was seen in Esophagogastroduodenoscopy (EGD). Initial endoscopic mucosal biopsies were normal. An Endoscopic Ultrasound (EUS) was performed, which revealed two subepithelial lesions with the typical appearance of a gastrointestinal stromal tumor. Fine needle aspiration (FNA) of both masses revealed papillary adenocarcinoma from an ovarian papillary serous adenocarcinoma. This is the first reported case of multiple gastric metastatic lesions from ovarian cancer diagnosed by EUS FNA. Introduction Metastasis to the stomach is uncommon. Ovarian tumors comprise 0.013% to 1.6% of all gastric metastatic tumors [1,2]. Gastrointestinal involvement from these tumors is often mucosal and associated with ulceration [3]. We present a case of ovarian cancer metastasis to the gastric wall, which presented as multiple subepithelial gastric lesions. This was diagnosed by endoscopic ultrasound with fine needle aspiration (EUS-FNA). Case Presentation The patient is a 55-year-old female with known stage III b serous ovarian cancer. She had undergone an abdominal hysterectomy and bilateral salpingo-oophorectomy with omentectomy, followed by 6 cycles of carbo/taxol chemotherapy with complete clinical response. She was free of disease for 2 years until her disease recurred and was treated with Carboplatin and Taxol. The carboplatin was eventually switched to Doxil. However, the repeat positron emission tomography (PET) scan at that time showed progression of her disease. Thus she underwent exploratory laparotomy with removal of a splenic mass. She was noted to have peritoneal carcinomatosis at that time and was then treated with Gemzar. The patient had stable disease after this treatment. Five years after the initial diagnosis, the patient was admitted to the hospital with anemia, hemoglobin of 7.0 gm/dl, fatigue, and melena. Computerized Tomography (CT) of the abdomen without IV contrast was obtained on admission, which revealed calcified, heterogeneous, mixed intermediate and high-density deposits worrisome for peritoneal carcinomatosis (Figure 1, arrows). No IV contrast was administered due to her poor kidney function. She was referred for an EGD, which showed a 7 mm erythematous lesion at the gastroesophageal junction and a 1.5 to 2 cm subepithelial mass ( Figure 2) with a superficial overlying erosion in the antrum, but no obvious source for any active bleeding. Initial endoscopic biopsies of the gastroesophageal junction lesion showed granulation tissue polyp with foveolar hyperplasia, and the antral biopsies were normal. Due to the presence of a subepithelial lesion in the antrum, the patient was referred for EUS. Two subepithelial lesions were discovered by EUS, one in the antrum measuring 3.4 × 3.7 cm ( Figure 3) and one in the body of the stomach 1.2 × 0.8 cm (Figure 4). The lesion in the body of the stomach was not appreciated during the EGD. The lesions were hypoechoic masses emanating from the muscularis propria and had the typical appearance of gastrointestinal stromal tumors. 
FNA was performed of both masses. Both sites revealed papillary adenocarcinoma from an ovarian papillary serous adenocarcinoma primary ( Figure 5). Immunostains for progesterone receptor, estrogen receptor and p 53 were focally positive and confirmatory. The patient was treated with Taxol and is undergoing surveillance imaging. Discussion The tumors most commonly reported to metastasize to the stomach include melanoma, breast, lung, and esophageal carcinoma [1,2,4,5]. Clinical manifestations of metastasis to stomach are variable and include epigastric pain, melena, anemia from occult gastrointestinal blood loss, nausea, and vomiting [1-3, 6, 7]. Ulcerated nodules, ulcerated submucosal masses, umbliciated nodules with central exudate, and necrotic ulcers with heaped up margins were reported to be the most common endoscopic findings by Kadakia et al. [8]. The gastric metastasis can be solitary (62.5%-65%) or multiple (35%-37.5%) and more commonly located in the middle or upper third of the stomach [2,9]. Ovarian tumor metastasis to the stomach is uncommon [1,2]. Ovarian carcinoma is usually confined to the peritoneal cavity at presentation and throughout its course in approximately 85% of patients [3]. It regularly metastasizes to peritoneal surfaces by exfoliating cells that implant throughout the peritoneum and the intraperitoneal route of dissemination is considered the most common [3,10,11]. Gastrointestinal involvement is usually limited to seromuscular layer of the small and large bowel and its mesentery [12]. However, it may also metastasize through the lymphatic channels and hematogenous route [11]. Based on the presence of peritoneal carcinomatosis, intraperitoneal route of dissemination of the ovarian carcinoma to gastric wall may be possible in our case. However, hematogenous spread cannot be ruled out in the presence of the wellcircumscribed lesions in the gastric wall without adjacent intraperitoeal mass. Gastrointestinal involvement is most often superficial, and transmural invasion is less common [9]. Even though it may present as a gastric metastasis at advanced stages there have been some reports in the literature describing gastric metastasis as an initial presentation of ovarian cancer [2,3]. Similar to our case, there have been other reported cases of metastatic ovarian cancer presenting as single subepithelial gastric lesions. The diagnoses in these cases were made by surgical exploration and endoscopic submucosal dissection [3,13]. There have been two cases reported in the literature where EUS-FNA was utilized for diagnosis of single gastric metastasis from ovarian carcinoma [14,15]. Alternatively, we present a case of multiple gastric metastatic lesions from ovarian carcinoma diagnosed by EUS FNA. The fine needle aspiration was imperative in the diagnosis of this patient as the lesions appeared by endoscopic ultrasound to be gastrointestinal stromal tumors given their location and ultrasound appearance. Conclusion Metastatic disease should be in the differential diagnosis of the patient presenting with subepithelial gastric lesions. Endoscopic ultrasound with fine needle aspiration is invaluable for making the correct diagnosis of gastric subepithelial lesions and should be considered in all cases if available.
2016-05-12T22:15:10.714Z
2012-07-01T00:00:00.000
{ "year": 2012, "sha1": "9ebc140ebeffbbfe20239e084e33a627be6b4976", "oa_license": "CCBY", "oa_url": "http://downloads.hindawi.com/journals/crigm/2012/610527.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c4990059e143e9e129d2af9a11ed275ccaca029a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
244591371
pes2o/s2orc
v3-fos-license
Democracy and Corruption : I examine the relationship between democracy and the perceived risk of corruption in a panel of 130 countries. My panel model controls for country fixed effects and enables the estimation of a within-country relationship between democracy and corruption. My main finding is that democracy significantly reduces the risk of corruption, but only in countries where ethnic fractionalization is low. In strongly fractionalized countries a transition from autocracy to democracy does not significantly reduce corruption. One explanation for these findings is that the corruption-reducing effect of greater accountability of politicians under democracy is undermined by the common pool problem; fractionalization increases the severity of the common pool problem. Introduction There exists a large empirical literature that has documented significant negative effects of corruption on the economy. Examples of empirical papers, that date back to the 1990s and 2000s, are Mauro (1995), Rose-Ackerman (1999), Fisman and Svensson (2007), Svensson (2004, 2005), and Olken (2006). Examples of more recent empirical papers include Dincer (2019), Gruendler and Potrafke (2019), and Keita and Laurila (2021). Theoretically, a compelling reason for why corruption reduces a country's welfare is that corruption leads to a misallocation of resources and entrepreneurial talent (Murphy et al. 1991(Murphy et al. , 1993Shleifer and Vishny 1993). Is there less corruption in democracy than autocracy? Consider the following principalagent problem: political leaders may allocate tax revenues to public spending, and they may use public office to appropriate resources for private gains. Free and fair elections and political competition are two important characteristics of democratic institutions that make political leaders responsive to the demand of citizens. In democracies, politicians are less corrupt because being corrupt significantly increases the probability of losing office. I will refer to this throughout the paper as the accountability effect of democracy. The experience of the 1990s has shown that not all episodes of democratization were associated with a significant reduction in the risk of corruption. For instance, in some countries-such as Russia after the end of the Soviet Union, or the Democratic Republic of Congo-there was, according to Political Risk Services data, no significant reduction in the perceived risk of corruption following democratization. I argue that whether there is less corruption in democracy than autocracy crucially depends on fractionalization. The reason why fractionalization matters for the relationship between corruption and democracy is that in countries where populations are strongly fractionalized the politicians who get voted into office differ in their policy platform. In democracies, politicians cater to the demands (i.e., preferences) of their constituency. In an autocracy, the ruler may also cater to a specific group of the population that supports him. However, in an autocracy it is less likely that there exist members of government who represent the interests of the other groups of the population; and even if such members of government do exist, it is unlikely in an autocracy that these members of government have any significant de facto power over the government budget. The main point is this: there is more heterogeneity of politicians in a democracy than in an autocracy. 
In a fractionalized country with democratic institutions, each politician has a strategic interest in over-extracting resources (i.e., so that in sum, considering all politicians, more resources are extracted than one single central planner would extract) for private gain, because doing so reduces the amount of resources left to the government budget from which public goods are financed. A politician who extracts resources for private gain does not take into account the negative externalities associated with resource extraction from a particular industry or group in the presence of positive demand complementarities. I will refer to this throughout the paper as the common pool problem. The severity of the common pool is increasing under fractionalization. More ethnic fractionalization, by definition, means that the population of a country is more heterogenous along ethnic lines. The heterogeneity of the population along ethnic lines implies greater heterogeneity of politicians, especially so under democracy. This is why fractionalization attenuates the corruption-reducing effect of democracy. India is a perfect example that illustrates this point. According to the Polity IV project, India has been a democracy for a long period of time: dating back as far as 1950 to 2018, the polity score that the Polity IV project assigns India has been, consistently, above 6 during the 1950-2018 period. This puts India in about the top one-quarter of countries in the world with regard to the polity score. India, however, ranks poorly in terms of corruption: the country is at about the bottom one-quarter of countries in the world according to data provided by Political Risk Services. Corruption in India is very high by international comparison. 1 An explanation for why corruption is so high in India, despite the country having democratic institutions, that is consistent with the argument developed in this is paper is provided by ethnic fractionalization: India is among the most ethnically fractionalized countries in the world. In the empirical part of the paper I provide estimates of the effects that democracy has on corruption in a panel of 130 countries. My econometric model controls for country fixed effects, which is important: estimates of an econometric model with fixed effects provide a within-country effect. It is the within-country effect that is relevant from a policy point of view; not the across-country effect. From a policy point of view, the question that one would like to have an answer to is: what happens to corruption in a country when moving from autocracy to democracy (or vice versa). This requires estimates of a within-country effect. Such a within-country effect is obtainable from a panel model that includes country fixed effects; but it is not obtainable from a panel model that does not control for fixed effects. My first main finding is that, on average, increases in countries' polity scores are associated with a significant reduction in the risk of corruption. This is consistent with the view that in a democracy there is less abuse of public office for private gains, because in a democracy there is greater accountability. My second main finding is that the effect of democracy on corruption is significantly attenuated by fractionalization: in countries with high ethnic fractionalization, democracy has no significant effect on corruption. This finding is highly relevant from a policy point of view. 
It implies that efforts to promote democracy in countries which are strongly fractionalized will not have much of an effect: Corruption will remain high in strongly fractionalized countries even if there are free and fair elections. The remainder is organized as follows. Section 2 provides a conceptual framework that clarifies why in a fractionalized country corruption is not much lower under democracy than autocracy. Sections 3 and 4 discuss the estimation strategy and data. Section 5 presents the main results. Section 6 presents robustness checks. Section 7 concludes. The Effect of Democracy on Corruption: Accountability vs. the Common Pool Problem One of the key features that distinguishes democracy from autocracy is that political leaders are elected by the people. Democratic elections ensure that the most preferred candidates hold office and hence political power inherently has a principal-agent problem attached. A common view in the literature is that political competition reduces political corruption: competition acts as a disciplining device on politicians who are tempted to abuse office for private purposes (see for instance Przeworski et al. 1999). Because in democracy politicians are faced with the threat of not being re-elected (or impeached) due to corrupt behavior, elections create political accountability that reduce the overall pay-offs to corruption. In autocracy, on the other hand, the likelihood of a dictator losing political power due to corrupt behavior (or policies that are generally disliked by the public) is much smaller since the costs of replacing the dictator are usually very high (e.g., Padro i Miquel 2007). From an accountability point of view, the incentives to not engage in corrupt behavior are therefore much stronger in democracy than they are in autocracy. There exists, however, a countervailing channel that not received as much attention in the literature: the common pool problem. In the political economy literature on debt stabilization, it is well understood that financing a reduction of public debt is associated with externalities that are not internalized by the politicians who hold office when there are multiple parties contesting for political power (see for instance Persson and Tabellini 2000). 2 In a dynamic setting there will be overspending by the party in charge because doing so reduces the possibility for other parties (i.e., the competitors) to implement their preferred policy platform due to intertemporal budget constraints. Likewise, in a static model, there is an incentive for a politician leader-who caters to regional preferences-to overspend if spending is financed from a common pool (i.e., from taxes collected in the entire country). This is because the political leader of a region only pays a fraction of the total expenditure. For corruption, a similar line of reasoning applies. If a politician who is in power today has the option of engaging in corrupt activity but is faced with the possibility of having to hand over political power in the next period to another politician-who is substantially different in his preferred policy platform-then there are strong incentives for the political leader holding office today to be excessively corrupt. This is because by being excessively corrupt he not only increases his current utility in terms of collecting bribes (or, say, by stealing directly from the budget), but also reduces the possibility for future politicians to implement their preferred policy platform. 
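One stylized way to make the over-extraction incentive concrete (an illustrative sketch, not part of the original argument): suppose a leader who extracts an amount x for private gain receives utility x, while the associated loss of resources available for public goods is C(x), with C'(x) > 0 and C''(x) > 0, and this loss is spread over n symmetric groups so that the leader's own constituency bears only C(x)/n. The leader then chooses x_n satisfying C'(x_n) = n, whereas a single ruler who internalizes the full cost chooses x_1 with C'(x_1) = 1. Because C' is increasing, x_n rises with n: the more fragmented the polity, the higher the equilibrium level of extraction. The same 1/n logic reappears in the static argument that follows.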
3 In a static setting a similar logic applies: a political leader of a region does not internalize externalities of his corrupt activities on economic activity in other regions. Since corruption is usually carried out in secrecy it is also unlikely that there exists a Coasian solution to the problem because claims cannot be settled in court (e.g., Shleifer and Vishny 1993). Hence, in a democracy, where leaders are elected in each period and where different parties may hold political power in different regions, there exists a common pool problem that undermines the accountability channel. In an autocracy, in contrast, the presence of a single ruler (dictator) does not create such a common pool problem since the dictator fully internalizes externalities. The key question therefore is which of these two forcesthe accountability or common pool problem-is likely to be more relevant. While this is difficult to answer per se, more ethnic diversity in the population, and hence influencing the preferred policy platform of different political leaders, will exacerbate the common pool problem due to the 1/n problem emphasized by Weingast et al. (1981). 4 One can interpret 1/n as the probability that a partisan political leader from group n will be re-elected. The incentives not to be excessively corrupt while holding political power diminish as the number of different groups, n, increases. Hence, the partisan politician from group n holding political power will be more excessively corrupt the larger the fractionalization of the country. A similar line of reasoning applies to the static common pool problem. Interpreting n as the number of different districts, if districts' expenditures are financed by a common pool, then the incentives of elected politicians to not engage in corrupt activities decrease as the number of districts increases. This is because each politician only has to bear 1/n of the costs-in terms of foregone resources from which to finance public goods provision-that are due to his corrupt behavior. Hence, as fractionalization of a country increases, the severity of the common pool problem increases. Estimation Methodology To explore empirically and hence quantify the link between democracy, ethnic fractionalization, and corruption I estimate the following econometric model: where α c are country fixed effects, β c *t are country-specific time trends, and γ t are year fixed effects. ε c,t is an error term that is clustered at the country level to allow for arbitrary serial correlation. Note that democracy enters with a one-year lag and hence the identifying assumption made is that future changes in corruption do not have systematic effects on current political institutions. Equation (1) will be estimated by least squares. To reduce concerns of endogeneity bias I will also present estimates of a dynamic version of Equation (1), which I estimate using system-GMM (Blundell and Bond 1998). Corruption. Country-year level corruption data were obtained from Political Risk Service (PRS). The PRS corruption data are available from 1984 onwards and cover a total of 139 countries. They yield a total of 2898 country-year observations, covering a much longer time period than any other comparable corruption dataset. According to PRS the corruption data capture the likelihood that government officials will demand special payments and the extent to which illegal payments are expected throughout government tiers. 
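The display of Equation (1) did not survive extraction. Based on the verbal description (country fixed effects, country-specific time trends, year fixed effects, a one-year lag of democracy, and, later in the paper, an interaction with ethnic fractionalization), a plausible reading of the specification is
$$ \text{Corruption}_{c,t} = \alpha_c + \beta_c \cdot t + \gamma_t + \delta\, \text{Polity2}_{c,t-1} + \theta\, \big(\text{Polity2}_{c,t-1} \times \text{EthnicFrac}_c\big) + \varepsilon_{c,t}, $$
with standard errors clustered at the country level, where EthnicFrac_c is the Alesina et al. (2003) index described in the data section below (one minus the Herfindahl index of ethnic group shares, $1 - \sum_j s_{cj}^2$, i.e., the probability that two randomly drawn individuals belong to different groups). This reconstruction is an inference from the surrounding text, not a quotation of the original equation. A minimal sketch of how such a specification can be estimated is given below; the file name and column names are hypothetical.

```python
# Sketch of a two-way fixed-effects regression with country-specific time
# trends, a lagged democracy measure, and an interaction with ethnic
# fractionalization. File name and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("corruption_panel.csv")  # columns: country, year,
                                          # prs_corruption, polity2, ethnic_frac
df = df.sort_values(["country", "year"])
df["polity2_lag"] = df.groupby("country")["polity2"].shift(1)
df["polity2_lag_x_frac"] = df["polity2_lag"] * df["ethnic_frac"]

est_df = df.dropna(subset=["prs_corruption", "polity2_lag", "ethnic_frac"])

# C(country) absorbs country fixed effects, C(year) absorbs year effects,
# and C(country):year adds country-specific linear time trends; the level
# of ethnic_frac is absorbed by the country fixed effects.
model = smf.ols(
    "prs_corruption ~ polity2_lag + polity2_lag_x_frac"
    " + C(country) + C(year) + C(country):year",
    data=est_df,
)
# Cluster the standard errors at the country level, as in the paper.
result = model.fit(cov_type="cluster", cov_kwds={"groups": est_df["country"]})
print(result.params[["polity2_lag", "polity2_lag_x_frac"]])
```

The dynamic version described in the text would add the lagged dependent variable and be estimated with a dedicated system-GMM routine rather than ordinary least squares; that step is not sketched here.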
PRS corruption scores range between 0 and 6, with higher values indicating less corruption. As a robustness check, estimates will also be presented based on the corruption scores provided by Kaufmann et al. (2008) and Transparency International. These alternative corruption scores are available from 1996 onwards only, and therefore cover a much shorter time-period than the PRS corruption score. A more detailed discussion of the above corruption measures can be found in Svensson (2005). Democracy. My main measure of democracy is the revised combined Polity score (Polity2) of the Polity IV database (Marshall et al. 2005). The measure ranges from −10 to +10, with higher values indicating more democratic institutions. The Polity IV database also provides data on so-called concept scores for political competition and the openness and competitiveness of executive recruitment. While political competition measures the extent to which alternative preferences for policy and leadership can be pursued in the political arena, openness and competitiveness of executive recruitment measures the extent to which the politically active population has an opportunity to attain the position of chief executive through a regularized process and the degree to which prevailing modes of advancement give subordinates equal opportunities to become super-ordinates. The political competition variable ranges from 1 to 10; the openness and competitiveness of executive recruitment variable ranges from 1 to 8. Higher values denote more political competition. 5 In my empirical analysis I will also consider the use of a democracy indicator variable following Persson and Tabellini (2003, 2006. The democracy indicator variable takes on a value of 1 if the Polity2 score is strictly positive and zero in all other cases. As a further robustness check I will also consider the use of the political rights score from Freedom House, which ranges from 1 to 7 with greater values denoting less political rights. 6 The Freedom House political rights variables are rescaled by −1 so that higher values denote stronger democratic institutions. Ethnic Fractionalization. I obtain data on ethnic fractionalization from Alesina et al. (2003), who constructed a comprehensive dataset of fractionalization for more than 190 countries. Ethnic fractionalization of a country is calculated as: where s ij is the share of ethnic group j in country i's total population. An important property of the fractionalization index is that it strictly increases along with the number of ethnic groups. This contrasts to polarization measures which capture how close the distribution of groups is from a bipolar distribution (see for instance Esteban andRay 1994, or Montalvo andReynal-Querol 2005). Intuitively, the fractionalization index measures the probability that two randomly selected individuals in a country will not belong to the same ethnic group. Other Control Variables. Other control variables included in the empirical analysis are real per capita GDP and the share of mineral exports in total exports which are taken from the World Bank (2009); data on the share of Muslims in the population, Socialists, and French legal origin are from Treisman (2007). For summary statistics on these variables, see Tables 1 and 2. Main Results Column (1) of Table 3 shows estimates of the effect that the polity2 score has on corruption, obtained from a pooled panel regression which does not control for timeinvariant country unobservables (country fixed effects). 
The regression controls for a set of cross-sectional variables such as ethnic fractionalization, indicators of Socialist membership or French legal origin, the share of Muslims in the population, the share of mineral exports to total exports, a per capita GDP. The main result is that the estimated coefficient on the Polity2 score is positive and significantly different from zero at the 5% level. Thus, a pooled panel regression suggests that more democratic countries have lower levels of corruption. Note: The method of estimation in columns (1)-(4) is least squares, column (5) system-GMM; t-values shown in parentheses are based on Huber-robust standard errors that are clustered at the country level. The dependent variable is the PRS corruption score, with higher values indicating less corruption. * Significantly different from zero at 90 percent confidence, ** 95 percent confidence, *** 99 percent confidence. Regarding the other variables in column (1) of Table 3, the pooled panel regression shows that countries with higher levels of ethnic fractionalization have on average higher levels of corruption. Countries which are Socialist and of French legal origin, and countries that have a larger share of mineral exports in GDP are more corrupt on average. Corruption is not systematically higher in countries with a larger share of Muslims in the population. I included these variables as controls in the model, following early empirical literature that dates back to the 2000s, e.g., Treisman (2007). In the estimates that follow, I will include country fixed effects as controls. Inclusion in the model of country fixed effects accounts for any country-specific, time-invariant variable. Hence, the control variables of column (1) in Table 3 are no longer included in the model; these variables are perfectly collinear with the country fixed effects. To examine whether the corruption-reducing effect of democracy is also present at the within-country level on average, I show in column (2) of Table 3 estimates of a panel model that controls for country fixed effects. The panel fixed effects regression yields a positive coefficient on the Polity2 score that is slightly smaller than the coefficient on the Polity2 score that is obtained from the pooled panel regression (see column (1) for comparison). The estimated coefficient on the Polity2 score in column (2) is significantly different from zero at the 90 percent level (p-value 0.067). Hence, panel fixed effects estimates show that a within-country increase in the polity2 score leads, on average, to a significant decrease in corruption. In column (3) a false experiment is carried out by including as an additional righthand-side variable the t + 1 Polity2 score. Including the Polity2 score in year t + 1 has little consequence on the estimated coefficient on the t − 1 Polity2 score. In column (3) the estimated coefficient on the t − 1 Polity2 score is positive and significantly different from zero at the 5% significance level. In column (3) the estimated coefficient on the t − 1 Polity2 score is around 0.023 and has a standard error of around 0.011. The estimated coefficient on the t + 1 Polity2 score is quantitatively small, around −0.005, and has a standard error of 0.008. One cannot reject at the conventional significance levels that the estimated coefficient on the t + 1 Poltiy2 score is equal to zero. Columns (4) and (5) of Table 3 report estimates of a dynamic panel model that includes the t − 1 corruption score as a right-hand-side control variable. 
The dynamic panel fixed effects regression shows that there is a significant proportion of persistence in corruption. The estimated AR (1) coefficient is about 0.7 and implies a half-life in the PRS corruption score of about 2 years. OLS, see column (4), yields an estimated coefficient on the t − 1 Polity2 score of 0.013 with a standard error of 0.006. Sys-GMM, see column (5), yields an estimated coefficient on the t − 1 Polity2 score of 0.024 with a standard error of 0.007. In dynamic panel models with fixed effects, least squares estimates are biased; the sys-GMM estimator, developed by Blundell and Bond (1998), is unbiased. Quantitatively, the estimated average effects that the polity2 score has on corruption are economically meaningful. The estimates in column (5) of Table 3 imply that a one standard deviation (7.2) increase in the t − 1 Polity2 score decreases the corruption score in year t by about 0.18 units; this is equivalent to about 0.1 standard deviations. The long-run effect is larger, amounting to around 0.5 standard deviations. In Table 4, how cross-country differences in ethnic fractionalization affect the relationship between democracy and corruption are examined. The regressions continue to control for country fixed effects, country-specific time trends, as well as year fixed effects (which are all jointly significant at the 1% level). The main result from estimating this interaction model is that: [i] there is a significant positive linear effect of democracy on corruption; and [ii] the interaction effect between ethnic fractionalization and democracy is significantly negative. Taking derivatives of Equation (1) with regard to Polity2 and using the estimates in column (1) yields: Table 4. Democracy, Ethnic Fractionalization, and Corruption (Heterogeneity). PRS Corruption (1) Hence, while there is a significant positive average effect of democracy on corruption, at higher levels of ethnic fractionalization this effect goes towards zero and turns statistically insignificant. Columns (2) and (3) of Table 4 show that this result continues to hold when a dynamic panel model is estimated, either by OLS or sys-GMM. Columns (4) and (5) of Table 4 show that the result is also robust to inclusion of a quadratic term of the polity2 variable as a control. Quantitatively, the estimates from the interaction model imply that the within-country effect of democracy on corruption is substantially different between countries with low fractionalization and countries with high fractionalization. Recall from the descriptive statistics in Table 1 that the ethnic fractionalization index ranges from 0.002 to 0.930. Consider now the estimates in column (5) of Table 4. At the sample minimum ethnic fractionalization, a one standard deviation increase in the t − 1 Polity2 score decreases corruption in year t by about 0.05 units; this is equivalent to about 0.25 standard deviations. The long-run effect is larger, amounting to about 0.8 standard deviation (0.25/(1-0.704)). In contrast, at the sample maximum ethnic fractionalization, a one standard deviation increase in the t − 1 Polity2 score decreases corruption in year t by only about 0.005 units, which is equivalent to about 0.02 standard deviations. The long-run effect is also small, amounting to about 0.07 standard deviations. Hence, in the most ethnically homogenous country democratic institutions are about ten times more effective in reducing corruption than in the ethnically mostly fractionalized country. 
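Two small reader-aid notes on the calculations above (added for convenience; all numbers are taken from the text). First, with an AR(1) coefficient $\rho$, the half-life of a shock is $\ln(1/2)/\ln(\rho)$; for $\rho \approx 0.7$ this gives $\ln(0.5)/\ln(0.7) \approx 1.9$, i.e., roughly the two years quoted for the PRS corruption score, and the long-run effect of a permanent change in a regressor is the short-run coefficient scaled by $1/(1-\rho)$, which is the calculation behind figures such as $0.25/(1-0.704) \approx 0.84$ (about 0.8 standard deviations). Second, the derivative referred to in "Taking derivatives of Equation (1) with regard to Polity2" (its display is missing from the extracted text) has the generic form
$$ \frac{\partial\, \text{Corruption}_{c,t}}{\partial\, \text{Polity2}_{c,t-1}} = \hat{\delta} + \hat{\theta}\, \text{EthnicFrac}_c , $$
in the notation of the specification sketched in the methodology section above. With $\hat{\delta} > 0$ and $\hat{\theta} < 0$, as reported, the marginal effect of democracy declines linearly in fractionalization and crosses zero at $\text{EthnicFrac}_c = -\hat{\delta}/\hat{\theta}$; the coefficient values themselves are not recoverable from the extracted text, so no numbers are attached here.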
Note to Table 4: The method of estimation in columns (1), (2), and (4) is least squares; columns (3) and (5) system-GMM. t-values shown in parentheses are based on Huber-robust standard errors that are clustered at the country level. The dependent variable is the PRS corruption score, with higher values indicating less corruption. * Significantly different from zero at 90 percent confidence, ** 95 percent confidence, *** 99 percent confidence. Figures 1 and 2 provide a graphical illustration of the nonlinear effect of democracy on corruption by plotting local polynomial estimates separately for countries with above and below median ethnic fractionalization. The nonparametric local polynomial estimates are computed using an Epanechnikov kernel, with bandwidth selection based on cross-validation criteria. Figure 1 shows that there is a strong upward-sloping relationship between the Polity2 score and the PRS corruption score in countries that are relatively ethnically homogeneous. In particular, the nonparametric estimates reported in Figure 1 show that in ethnically homogeneous countries democratic improvements are particularly effective in reducing corruption at very low Polity2 scores (e.g., in deep autocracies). On the other hand, Figure 2 shows that in ethnically heterogeneous countries the relationship between democracy and corruption is flat and not significantly different from zero at the conventional levels; this is true regardless of whether countries are deep autocracies or partial autocracies.
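The local polynomial fits behind Figures 1 and 2 can be reproduced with a short routine. The sketch below implements a local linear estimator with an Epanechnikov kernel, written from the verbal description above; the bandwidth here is fixed rather than chosen by cross-validation, and the data-loading step and column names are hypothetical.

```python
# Local linear (degree-1 local polynomial) regression with an
# Epanechnikov kernel, in the spirit of Figures 1 and 2.
# Data loading and column names are hypothetical placeholders.
import numpy as np
import pandas as pd

def epanechnikov(u):
    """Epanechnikov kernel: 0.75 * (1 - u^2) on |u| <= 1, else 0."""
    return np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u ** 2), 0.0)

def local_linear(x, y, grid, bandwidth):
    """Local linear fit of y on x, evaluated at each point of `grid`."""
    fitted = np.empty(len(grid))
    for i, x0 in enumerate(grid):
        w = epanechnikov((x - x0) / bandwidth)
        X = np.column_stack([np.ones_like(x), x - x0])
        Xw = X * w[:, None]
        # Weighted least squares: beta = (X'WX)^{-1} X'Wy; the intercept
        # is the local polynomial estimate of E[y | x = x0].
        beta = np.linalg.pinv(X.T @ Xw) @ (Xw.T @ y)
        fitted[i] = beta[0]
    return fitted

df = pd.read_csv("corruption_panel.csv")   # hypothetical file
# Illustrative split at the median of the fractionalization index.
low_frac = df[df["ethnic_frac"] <= df["ethnic_frac"].median()]
grid = np.linspace(-10, 10, 41)            # Polity2 ranges from -10 to +10
curve = local_linear(low_frac["polity2"].to_numpy(dtype=float),
                     low_frac["prs_corruption"].to_numpy(dtype=float),
                     grid, bandwidth=3.0)
```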
To provide some specific country examples that fit the results from the regressions, Figures 3-5 plot the time series of the Polity2 score and the PRS corruption score for three selected countries with low, intermediate, and high levels of ethnic fractionalization. Both the Polity2 score and the PRS corruption score have been normalized to range on the 0 to 1 interval. Higher values of the normalized Polity2 score denote stronger democratic institutions. Higher values of the normalized PRS corruption score denote less corruption. Figure 3 shows that for the three selected countries with low ethnic fractionalization, which are Bangladesh, Haiti, and the Philippines, increases (decreases) in the Polity2 score were followed by reductions (increases) in corruption. Figure 4 shows that for the three selected countries with intermediate ethnic fractionalization, which are Ghana, Mexico, and Thailand, there is no systematic change in corruption following changes in the Polity2 score. Figure 5 shows that for the three selected countries with high ethnic fractionalization, which are Kenya, Nigeria, and Uganda, increases (decreases) in the Polity2 score were followed by increases (reductions) in corruption.

Table 5 documents that the results of the previous section are robust to controlling for various interaction terms. Column (1) includes as an additional control variable an interaction term between the Polity2 score and an indicator variable for Socialist origin; column (2) includes as an additional control variable an interaction term between the Polity2 score and an indicator variable for French legal origin. Both the interaction between the Polity2 score and the Socialist origin indicator as well as the interaction between the Polity2 score and the French legal origin indicator are insignificant. The interaction between ethnic fractionalization and the Polity2 score remains statistically significant at the 1% level. Column (3) reports estimates that control for an interaction between the Polity2 score and the share of Muslims in the population. Column (4) reports estimates that control for an interaction between the Polity2 score and cross-country differences in per capita GDP. Column (5) reports estimates that control for an interaction between the Polity2 score and an indicator variable equal to unity for sub-Saharan African countries. The main result is that, in these robustness checks, the estimated coefficient on the interaction between the Polity2 score and ethnic fractionalization is negative and significantly different from zero at the 1% level.
Robustness Checks

To check on the robustness of the results to the specific democracy measure used, and to document the political competition channel discussed in Section 2, columns (1) and (2) of Table 6 report estimates in which ethnic fractionalization is interacted with the Polity IV political competition and competitiveness of executive recruitment scores. The main result is that, for the average country, political competition reduces corruption, and ethnic fractionalization significantly attenuates the corruption-reducing effect of political competition towards zero, so much so that in very ethnically fractionalized countries the effect of political competition on corruption is not significantly different from zero.

Column (3) of Table 6 shows results for a democracy indicator variable that takes on the value of 1 for strictly positive Polity2 scores (democracy) and zero otherwise (autocracy). Consistent with the previous results that were based on variations in the Polity2 score, the estimates in column (3) of Table 6 suggest that, on average, a transition from autocracy to democracy reduces corruption. Ethnic fractionalization significantly attenuates this effect towards zero. In strongly fractionalized countries, a transition from autocracy to democracy has no significant effect on corruption.

Column (4) of Table 6 shows results for the Freedom House political rights score. One can see from column (4) of Table 6 that the estimated coefficient on the political rights score is positive and the coefficient on the interaction between the political rights score and ethnic fractionalization is negative. (The original political rights scores from Freedom House were multiplied by −1 for the regressions, so that higher values denote stronger political rights.) Each of the estimated coefficients in column (4) of Table 6 is significantly different from zero at the 1 percent level. The interpretation of the estimates in column (4) of Table 6 is that stronger political rights are associated with a reduction in corruption, but only so in countries with low ethnic fractionalization. In strongly fractionalized countries, political rights have no significant effect on corruption.

Table 7 reports results for corruption variables from other, alternative datasets. Columns (1) and (2) of Table 7 report estimates where the dependent variable is the Control of Corruption variable from Kaufmann et al. (2008). Columns (3) and (4) of Table 7 report estimates where the dependent variable is the Corruption Perception Index from Transparency International. Because the time period that these alternative corruption variables cover (1996-2007) is considerably shorter than the time period covered by the PRS corruption indicator, columns (1) and (3) report baseline estimates that control for year fixed effects only, and as a further robustness check columns (2) and (4) report estimates that control also for country fixed effects.

Note to Table 7: The method of estimation is least squares; t-values shown in parentheses are based on Huber-robust standard errors that are clustered at the country level. The dependent variable in columns (1) and (2) is the Kaufmann et al. (2008) corruption indicator; in columns (3) and (4) the dependent variable is the Transparency International corruption indicator. Both corruption indicators have been rescaled to have a [0, 6] range, with higher values indicating less corruption. * Significantly different from zero at 90 percent confidence, ** 95 percent confidence, *** 99 percent confidence.
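The variable constructions described in these robustness checks (a democracy dummy from the Polity2 score, the sign-flipped Freedom House political rights score, alternative corruption indices rescaled to the [0, 6] range of the PRS score, and the interactions with ethnic fractionalization) can be illustrated as below. The data frame and column names are hypothetical stand-ins rather than the paper's dataset, and the rescaling shown uses a simple linear min-max transformation as one plausible reading of "rescaled to have a [0, 6] range".

```python
import pandas as pd

# Hypothetical panel fragment; all values are illustrative only.
df = pd.DataFrame({
    "polity2":             [-8, -2, 0, 5, 9],
    "fh_political_rights":  [7, 6, 5, 3, 1],            # Freedom House: 1 = most rights
    "alt_corruption":       [2.1, 2.5, 3.0, 4.2, 6.8],  # e.g., a 0-10 corruption index
    "ethnic_frac":          [0.10, 0.45, 0.45, 0.70, 0.90],
})

# Democracy indicator: 1 for strictly positive Polity2 scores, 0 otherwise.
df["democracy"] = (df["polity2"] > 0).astype(int)

# Political rights score multiplied by -1 so that higher = stronger rights.
df["political_rights"] = -df["fh_political_rights"]

# Rescale an alternative corruption index to a [0, 6] range (min-max version).
lo, hi = df["alt_corruption"].min(), df["alt_corruption"].max()
df["corruption_0_6"] = 6.0 * (df["alt_corruption"] - lo) / (hi - lo)

# Interaction terms used in the heterogeneity specifications.
df["polity2_x_frac"] = df["polity2"] * df["ethnic_frac"]
df["democracy_x_frac"] = df["democracy"] * df["ethnic_frac"]
print(df)
```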
When data on corruption are from Transparency International, the estimated coefficients on the Polity2 score and on the interaction term are not significantly different from zero at the conventional levels (see column (4)). The number of observations for the alternative datasets on corruption is less than half the number of observations that the PRS corruption data provides. It is hence understandable that, statistically, the results in Table 7 are somewhat weaker than the baseline estimates in Table 4.

Conclusions

Governments of western countries and international organizations have undertaken great efforts to promote democracy in the world. 7 One of the main arguments for promoting democracy in developing countries is that there is less corruption in democracy than in autocracy. The empirical results in this paper showed that, indeed, there is less corruption in democracy than in autocracy for the subset of countries with low or intermediate ethnic fractionalization. This subset of countries makes up about two-thirds of all countries in the world. For the remaining one-third of countries in the world, where ethnic fractionalization is high, there is no significant corruption-reducing effect of democracy.

Conflicts of Interest: The author declares no conflict of interest.

1 For an in-depth discussion of corruption and democracy in India, see Sridharan (2014).
2 See also Persson et al. (1997), who show that separation of powers will lead to public officials increasing the amount of resources diverted from the economy due to the common pool problem if public officials have conflicting interests and policies are implemented unilaterally. Relatedly, Besley and Coate (1998) show that representative democracy leads to inefficient public investment in a dynamic model where policy authority is delegated directly to citizens who are heterogeneous in productive abilities.
3 If corruption comes in the form of the politician directly stealing from the budget, then it is clear why excessive corruption reduces the possibility of future politicians implementing their preferred policy platform (the intertemporal budget constraint has to be satisfied). If, on the other hand, the politician simply abuses office by collecting bribes, then one would have to argue that the politician implements policies due to these bribes that obstruct future politicians' possibilities to implement their preferred policy when in power.
4 See also Alesina et al. (1999), who show that public good provision is significantly worsened by ethnic fractionalization. For an overview of the literature on ethnic fractionalization and economic policies and outcomes, see Alesina and La Ferrara (2005).
5 In using these concept variables I code all values corresponding to "system missing" (−66), "interregnum" (−77), and "transition" (−88) as missing, as it is unclear what score they should be assigned for the time-series analysis.
6 This variable has been used, for instance, in the democracy and growth literature by Barro (1999).
2021-10-17T15:08:51.362Z
2021-10-15T00:00:00.000
{ "year": 2021, "sha1": "26cf190130ffb18e170d459a9499618c6158c54b", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1911-8074/14/10/492/pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "8ec89a98136c491811b7e21f049c253eca076f0d", "s2fieldsofstudy": [ "Political Science" ], "extfieldsofstudy": [ "Economics" ] }
92122882
pes2o/s2orc
v3-fos-license
Phenological and physicochemical properties of Pereskia aculeata during cultivation in south Brazil

Pereskia aculeata, known as ora-pro-nobis in Brazil, is native to tropical dry forests. This Cactaceae plant possesses succulent and edible leaves, which contain high amounts of protein, minerals, vitamins and fiber. The nutritional properties of ora-pro-nobis and its ability to grow under limited water supply are known, but little information is available about the growth behavior and nutritional composition of this plant when cultivated under a temperate humid climate. Therefore, we evaluated the phenology of the plant, including observation of new leaves, flowering and fruiting, and related it to climate changes. We also analyzed some physicochemical characteristics (humidity, leaf area, height, protein, color, total phenolic content and antioxidant activity) of ora-pro-nobis cultivated in Pelotas, Rio Grande do Sul, Brazil. We observed that ora-pro-nobis developed normally, but with a quiescent state in the winter, without producing leaves. Flowering of the plant started in March and fructification started one month later. All physicochemical characteristics varied through the period of cultivation. Our findings support that cultivation of ora-pro-nobis for production of leaves is feasible under a temperate and humid climate.

Pereskia aculeata (ora-pro-nobis) is a plant member of the Cactaceae family and is found in tropical areas from the south of the United States to south Brazil. It is popularly known as ora-pro-nobis and in some Latin American countries is known as Barbados Gooseberry (Takeiti et al., 2009; Sharif et al., 2013). Ora-pro-nobis is a perennial shrub, very resistant to drought, with scrambling vine characteristics. Flowers are white and small, fruits are small yellow berries, and the plant has spines on the stems and large leaves (Brasil, 2010). Ora-pro-nobis has succulent and edible leaves, which can be used in many preparations, such as salads, stews, flours, breads, pies and pastas (Rocha et al., 2008). Recent studies (Silva et al., 2017) show that ora-pro-nobis is safe for consumption in terms of acute toxicity and cytotoxicity. Other than food, the plant can be used ornamentally or cultivated for honey production, since it is rich in pollen and nectar. Folk medicine practitioners have been known to use ora-pro-nobis as an anti-inflammatory, emollient, expectorant and antisyphilitic (Sartor et al., 2010). Ora-pro-nobis leaves are rich in protein (28.4 g 100 g⁻¹ of dry weight) when compared with other vegetable sources of protein, like black beans [8.8 g 100 g⁻¹ of cooked weight (dw)], garbanzo beans (8.9 g 100 g⁻¹ of dw) and lentils (9.0 g 100 g⁻¹ of dw) (Takeiti et al., 2009; USDA, 2014). Therefore, ora-pro-nobis could be a good alternative to many common food sources, especially for vegetarians, because it has high levels of minerals and proteins (Takeiti et al., 2009). Ora-pro-nobis is known to be native to tropical dry forests (Takeiti et al., 2009; Brasil, 2010). To our knowledge, the growth behavior and composition of this plant when cultivated under a temperate humid climate are unknown. Therefore, the goal of this study was to evaluate phenological and physicochemical characteristics of ora-pro-nobis cultivated in Pelotas, Rio Grande do Sul, Brazil, under a temperate and humid climate.
Plant material

Ora-pro-nobis was cultivated at the Brazilian Agricultural Research Corporation (Embrapa Clima Temperado), located in Pelotas, Brazil. The 21 plants utilized in this experiment were divided into 3 lots. Plants were spaced 100 cm x 80 cm, without irrigation and pest control. Harvest was done monthly, between December 2012 and June 2013. At the moment of harvest, the 21 plants were measured to determine height. Also, phenological aspects were determined by observing new leaves, flowering and fruiting. These processes were related with climate changes, such as precipitation, radiation and temperature (Marques & Oliveira, 2004).

Harvested leaves were cleaned, milled to a fine powder in a ball mill in liquid nitrogen and stored at -80°C for later analyses (antioxidant activity, total phenolic content and protein). Fresh leaves were also analyzed for humidity, color and area.

Physicochemical evaluations

To determine humidity, 10 leaves per lot were weighed and dried during 24 hours in a forced air oven at 105°C, according to AOAC (2012). Leaf area was determined in 10 leaves per lot, based on the use of an automated infrared imaging system, LI-COR-3100C (LI-COR Inc., Lincoln, Nebraska, USA). Plant height was assessed using a tape measure, according to Silva et al. (2012).

Color was determined using a colorimeter (Minolta®, Model CR 300). We measured lightness (L*), redness (a*) and yellowness (b*), in 3 parts of each leaf, and 10 leaves per lot, with a total of 30 leaves per month. The parameters of color were expressed as lightness, where L* = 0 is completely black and L* = 100 is completely white, and Hue angle (H*), calculated from H* = arctan(b*/a*), where H* = 0 is red, H* = 90 is yellow, H* = 180 is green and H* = 270 is blue, as described by Cogo et al. (2011).

Antioxidant activity was estimated using the free radical 2,2-diphenyl-1-picrylhydrazyl (DPPH) scavenging assay method, adapted from Kedare & Singh (2011). Extraction was performed using methanol at a 1:4 proportion and stored at 4°C for 24 h; after that, the extract was centrifuged (12,000 rpm) for 15 minutes. The absorbance of samples was measured at 517 nm and antioxidant activity was expressed as g of Trolox/kg of fresh leaves.

The total phenolic content of ora-pro-nobis was determined by the Folin-Ciocalteu method, adapted from Medina (2011). Extraction lasted 2 h, using methanol in the proportion 1:10 and stirring every 15 minutes. Results were expressed as g of gallic acid equivalent (GAE) per kg of fresh leaves.

Protein of fresh leaves was estimated using the micro-Kjeldahl method, according to AOAC (2012). The value 6.25 was used to convert nitrogen into protein.

Statistical analysis

All analyses were performed in triplicate. Data from all physicochemical evaluations were analyzed by descriptive statistics for every month and compiled into graphs using SigmaPlot 10.0 software. Pearson's correlation was used to compare antioxidant activity and total phenolic content, using the Statistical Analysis Software (SAS) for Windows V8, at 5% significance (p < 0.05).
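As a minimal sketch of the derived quantities described in these methods, the snippet below computes the CIELAB hue angle (using atan2 so the angle lands in the correct quadrant, which plain arctan(b/a) does not guarantee), DPPH scavenging as percent inhibition of the control absorbance, and crude protein from Kjeldahl nitrogen with the 6.25 factor. The conversion of DPPH inhibition to Trolox equivalents requires a calibration curve that is not reproduced here, and all numeric values below are illustrative.

```python
import math

def hue_angle(a_star: float, b_star: float) -> float:
    """CIELAB hue angle in degrees: 0 red, 90 yellow, 180 green, 270 blue."""
    h = math.degrees(math.atan2(b_star, a_star))
    return h + 360.0 if h < 0.0 else h

def dpph_inhibition(abs_control: float, abs_sample: float) -> float:
    """DPPH radical scavenging as percent inhibition of the control absorbance."""
    return 100.0 * (abs_control - abs_sample) / abs_control

def crude_protein(nitrogen: float, factor: float = 6.25) -> float:
    """Crude protein from Kjeldahl nitrogen (same unit as the nitrogen input)."""
    return nitrogen * factor

# Illustrative values: a yellowish-green leaf and a hypothetical DPPH reading.
print(f"hue: {hue_angle(-12.0, 30.0):.1f} degrees")          # ~112, yellow-green
print(f"inhibition: {dpph_inhibition(0.820, 0.375):.1f} %")
print(f"protein: {crude_protein(4.36):.1f} g/kg")
```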
RESULTS AND DISCUSSION It is possible to cultivate ora-pronobis in temperate and humid areas.However, it is important to consider that, during winter, when exposed to low temperatures and frost, the plant loses leaves and stays in a quiescent state.Quiescence is a common state in seeds, but it can occur also in the entire or in parts of the plant.Normally, quiescence is a preparation for winter and is a strategy to conserve energy and carbohydrate by restraining growth (Luo et al., 2011).We started to analyze the plants in June 2012, but because of the quiescent state, we only had enough leaves to analyze between December 2012 and June 2013. Phenology is the sequential developmental stages of the annual growth cycle and their timing.In our study we analyzed the occurrence of new leaves, flowering and fructification (Table 1). Observations were made at the end of spring, in December, when temperature was higher, frost subsided, and new leaves started to grow back.After three months, in March, we noticed that flowering had started in the first plants, and one month later, in April, the fruiting started.Again, when temperature decreased, in June, leaves started to fall, ending the cycle of orapro-nobis in south Brazil.Temperature was found to be an essential factor for growth and development of this species.It is known that temperature affects photosynthesis and development of shoots (Ushio et al., 2008). In temperate climates, warm temperatures often act as flowering triggers.Also, rain is an important timer for flowering in shrubs.There is a difference between fruiting in temperate and tropical areas.In temperate regions fruiting normally starts late in the summer or in the autumn, lasting for one and a half month in average.The fruit production is largely controlled by the accumulation of enough photosynthesis, which can only occur towards the end of the growing season (Fenner, 1998).In our study we observed that temperature was the most important factor for flowering.Also, fruiting started in autumn, lasting two months in average.A single-year study was sufficient to demonstrate the ability of ora-pro-nobis to grow under temperate and humid climate, but cannot provide complete information on phenological changes among the years.According to Fenner (1998), long-term (3-5 year) investigation is required to determine phenological modifications accurately. 
Color, expressed as lightness and Hue angle, is shown in Figure 1.Lightness presented the higher level in March (L= 51.32), meaning that this was the month when leaves had the lightest colors, coinciding with flowering start.The values of Hue angle showed that leaves presented colors between yellow and green during all months.In December and March, leaves were more yellowish and in January more greenish.These color changes of leaves are probably due to local climate conditions.In January, when leaves were more greenish, the solar radiation reached its maximum (548.5 cal/cm 2 /d) (Table 1).Therefore, it could be argued that solar radiation was a limiting factor for the plant development under temperate climate.Pereskia species have typically been described as drought deciduous, suggesting that Pereskia water relations are different from those of specialized core cacti and that Pereskia regulates water loss in the same way as a typical C3 woody plant (Edwards & Diaz, 2006).Therefore, in comparison to the stem color of specialized core cacti, the color of leaves from Pereskia species is expected to respond faster to changes in climate conditions (rain, temperature and solar radiation).Chlorophyll is the most common pigment found in plants and is responsible for their characteristic green color.The bright green color of vegetables is often associated to their freshness (Calvano et al., 2015). Average height of plants increased almost 4 folds during the seven months period of analyzes.The least average leaf area (14.82 cm 2 ) was observed in December, due to the quiescence state period months before.After that, leaves started to grow, reaching the largest average size (33.11cm 2 ) in February (Figure 2).Monitoring changes of leaf area is important for assessing growth and vigor of plants.Frost, storm, defoliation, drought, and management practice commonly cause reduction of leaf area, therefore decreasing the productivity of the plant (Breâda, 2003).Humidity of ora-pro-nobis leaves remained around 880 g kg -1 for 7 months, reaching the lowest average value (861.11g kg -1 ) in February.Protein content was higher in December 2012 (27.23 g kg -1 ) and June 2013 (27.22 g kg -1 ) and lower in February (21.35 g kg -1 ) (Figure 2).Maintenance respiration is known to be increased by higher temperatures (Modi, 2007).In this study, higher leaf protein turnover under higher temperature conditions may have been responsible for tendency of lower protein contents under higher temperature growth conditions.Antioxidant activity, measured with the DPPH scavenging assay, as well as total phenolic content, reached the highest level, 44.99 g of Trolox/kg of fresh plant and 2.66 g of GAE/kg of fresh plant , respectively, on April (Figures 2E and F).Antioxidant activity and total phenolic content had some correlation (r= 0.71; p<0.0001).Pinto et al. (2012), researching ora-pro-nobis leaves, found by thin layer chromatography, that phenol was the main antioxidant compound.There are many studies showing that total phenol compound has stronger positive correlation with antioxidant activity in vegetables (Aires et al., 2011;Bhandari & Kwak, 2015).Phenolic compounds are among the most important components on the quality of vegetables and fruits.They contribute to organoleptic characteristics like color and taste and promote beneficial effect to human health (Sancho et al., 2011;Zielinski et al., 2014). 
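The correlation reported above (r = 0.71 between antioxidant activity and total phenolic content) is a plain Pearson correlation; a minimal way to compute it is shown below. The two arrays are made-up stand-ins for the monthly means, not the study's data.

```python
from scipy.stats import pearsonr

antioxidant = [30.1, 35.4, 40.2, 44.99, 38.7, 33.0, 31.5]   # g Trolox/kg, illustrative
phenolics   = [1.90, 2.10, 2.40, 2.66, 2.30, 2.00, 1.95]    # g GAE/kg, illustrative

r, p = pearsonr(antioxidant, phenolics)
print(f"Pearson r = {r:.2f} (p = {p:.4f})")
```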
Ora-pro-nobis developed adequately, but with a quiescent state in the winter (without producing leaves).Flowering of the plant started in March and fructification one month later.The physicochemical characteristics (humidity, leaf area, protein, color, total phenolic content and antioxidant activity) varied throughout cultivation period.All our findings support that cultivation of ora-pro-nobis for production of leaves is feasible under temperate and humid climate in south Brazil.
2019-04-03T13:07:56.543Z
2018-09-01T00:00:00.000
{ "year": 2018, "sha1": "e80b77c65d436cbaf9cc46cc5c10a516dbf7382a", "oa_license": "CCBY", "oa_url": "http://www.scielo.br/pdf/hb/v36n3/1806-9991-hb-36-03-325.pdf", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "e80b77c65d436cbaf9cc46cc5c10a516dbf7382a", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Biology" ] }
54968738
pes2o/s2orc
v3-fos-license
PHOTOGRAMMETRIC ARCHAEOLOGICAL SURVEY WITH UAV This document describes a way to obtain various photogrammetric products from aerial photograph using a drone. The aim of the project was to develop a methodology to obtain information for the study of the architecture of pre-Columbian archaeological sites in Mexico combining the manoeuvrability and low cost of a drone with the accuracy of the results of the open source photogrammetric MicMac software. It presents the UAV and the camera used, explains how to manipulate it to carry out stereoscopic photographs, the flight and camera parameters chosen, the treatments performed to obtain orthophotos and 3D models with a centimetric resolution, and finally outlines the quality of the results.  INTRODUCTION The measurement system of the pre-Columbian builders is still an enigma to archaeologists today.A comprehensive study in this field should be based on accurate measurements of dimensions and alignments of each structure.Maps of Mexican archaeological sites available in the literature have an insufficient precision to perform such research; consequently, for the study of each archaeological site, a precise survey must be done. Several solutions are available for archaeological survey: a topographical survey can produce accurate measurements that can be processed with software to obtain maps and 3D models.Nevertheless, with a total station, it is impossible to register each stone and deformation of the ruins.Thus the representation is a synthesis and an interpretation of the information.A second solution would be using a laser scanner.Unfortunately, archaeological sites can be quite widespread and have many monuments, so the number of stations to be performed and the number of point cloud to be processed would demand too much post processing resources.A third solution is to use a drone to acquire aerial photographs of sites, and process them to create orthophotos and 3D point clouds.It is this third option that has been adopted here, and will be described below.It was applied to the survey of the archaeological site of Cempoala (Mexico).This project was made during a student internship and is part of a work about pre-Columbian architecture carried out by a research team from the Institute of Aesthetic Research (IIE) of the University of Mexico (UNAM). The objectives of this project were multiple.First, the development of a working method to achieve stereoscopic photographs with the Institute's drone.Then, to realize a photogrammetric aerial photograph of an archaeological site with the drone.Finally, to carry out the treatments required to obtain orthophotos and a 3D model of the site.A working method was developed, as well as a calculation method for the various parameters of the flight, and quality control was conducted to determine the precision of the results. 
UAV Drones can be considered dangerous because they do not have a transmitter signalling their position to other aircrafts, and in some countries, they may be used by amateurs who might not have received previous training.However, they have many advantages, such as their affordability and manoeuvrability.When the first drones appeared in the 70s, they were either wind sensitive or subject to significant vibration.Since 2000, drones have become more adapted to aerial photography, and the first studies on the quality of the results were performed (Eisenbeiß, 2013).Nowadays, drones commonly have automatic drivers and automatic image acquisition.A predefined point to rejoin if the connection is lost can even be registered in the memory of the UAV.Models with rotors are appreciated for their vertical takeoff and landing on a small areas, and their workability.As for aircraft models, they are preferred for their greater autonomy. Drones are now used in many fields: military, agriculture, tectonic, geology, atmospheric, archaeology, extreme sport...The drone in possession of the working group is a hexacopter Spreading Wings S800 (Figure 1), equipped with the on-board computer Wookong-M associated with IMU (Inertial Measurement Unit) and GNSS receiver.Both are developed by the Chinese company DJI.The drone is controlled by a remote Futuba 7C.The drone has a maximum horizontal speed of 25 m/s and a vertical speed limit of 5 m/s.The maximum recommended distance between the remote control and the drone is 500 m in a city and up to 1 km in open field.The drone is operated using pairs of batteries, which are recharged with devices able to deliver a 30 amps electric current.The group has three sets of batteries, each allowing about fifteen minutes of flight. The UAV can fly using the remote control in three different modes: -The GPS mode, the most advanced one, in which the drone uses information from the GPS and the IMU to better respond to the instructions of the driver-handled remote control.This mode also helps to maintain a stable position and attitude when the drone receives no movement command from the ground. -The ATTI mode, which does not use the position information from the GPS, but only those of the inertial unit.In this mode, the drone does not maintain its position, but only its attitude (orthogonal to the ground).Therefore, the drone presents inertia at the end of his movements, even though the remote control indicates a stationary position.In a windy environment, it would be carried adrift. -A fully manual mode, in which the movements of the drone are only governed by indications from the remote control.This method is difficult to use and not recommended because without GPS and IMU data, the drone preserves neither a stable position, nor a stable attitude. Ground Station software The drone is sold with the software Ground Station which provides a variety of aids for the manipulation of the drone.This software displays on Google Earth the trajectory and the position taken by the drone. The green line corresponds to the drone trajectory. Unfortunately, its extraction is not allowed.The red arrow represents the drone, with the tip symbolizing its front.The height of the drone over its takeoff point is indicated in blue. 
(Figure 2) A flight can be planned with the software.Indeed, a theoretical trajectory can be defined before the flight, and then, once on the field, the UAV can follow it using its GPS (Figure 3).Thanks to this autopilot, equidistant flight axis can be planned, and this ensures constant speed and altitude of the device.Consequently, during the flight, the remote control lets the control of the drone to the software, even if the driver can regain it at any time. At the time we realized this project (August 2013), this software integrated different functional possibilities depending on the purchase price.The least expensive version allowed only indicating a return point while the more expensive version was conceived for photogrammetric use and allowed to design a flight with 50 vertices.The new version of the software includes the 50 points, nevertheless, at that time we had to work with the two points version. Camera and its automatic shutter release The camera used is a Sony Nex-7.It has a 24 Megapixel matrix with 28 mm diagonal and it is equipped with a zoom lens ranging from 18 to 55 mm.The camera is attached to the drone with a MRT Crane 2 Camera Gimbal Axis 2 mount, which allows countering the inclination of the device due to its movement using the guidance provided by the inertial unit, and thus, to always maintain the camera orthogonal to the ground.The capture of the pictures is controlled by the gentLED-TRIGGER-triggerPLUS infrared trigger.It can, in theory, have a minimum rate of 2 seconds shooting, but it turned out that, in practice, it does not go below 2.3 seconds. Georeferencing accuracy We saw previously that flight planning associated with autopilot flight mode greatly simplifies the flight.However, it is interesting to measure the accuracy of the GNSS positioning before letting him the control. Two factors have an influence on the positioning of the drone: the georeferencing accuracy of Google Earth images, on which the trajectory of the drone is defined, and the accuracy of the GPS (helped by the IMU). Concerning the GPS-IMU couple, the manufacturer indicates an in-flight accuracy of 0.5 m vertically and 1 m horizontally. To have an idea of the accuracy of Google Earth images georeferencing, a comparison of coordinates given by the GPS of the drone and by Google Earth on the same points were made.After coordinate transformation in the same system, deviations exceeding GPS accuracy given by the manufacturer are calculated.This difference can be associated partially to the georeferencing of Google Earth images.The maximum obtained is 6.5 m and it does not exclude that in other places, the difference could be larger.However, these coordinates were recorded with a stationary GPS, eliminating the correction from the IMU.Indeed, the IMU measures angular accelerations and velocities when the UAV is moving.So, positioning is improved when the drone is moving, although differences between the real position of the drone and its location indicated on the Google Earth API persist: this is the case in figure 2, where the trajectory was obtained with the drone lying inside a car that was obviously not rolled over parking spaces. Thus, during flight planning, the distance between the axis of flight must be chosen taking into account the few meters imprecision, in order to prevent a possible gap of the real axis of flight and therefore of the footprint of the photographs. 
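The comparison just described (drone GNSS positions versus the same points picked in Google Earth, after transformation into a common coordinate system) boils down to computing horizontal offsets between two sets of planar coordinates. A minimal sketch is shown below; the coordinate values are invented for illustration and do not come from the survey.

```python
import numpy as np

# Hypothetical planar coordinates (metres) of the same check points as given by
# the drone GNSS and by Google Earth after transformation to a common system.
gnss = np.array([[702134.2, 2145880.5],
                 [702150.7, 2145910.1],
                 [702171.3, 2145942.8]])
google_earth = np.array([[702138.9, 2145884.1],
                         [702147.2, 2145915.6],
                         [702175.0, 2145946.5]])

offsets = np.linalg.norm(gnss - google_earth, axis=1)   # horizontal deviation per point
print(f"deviations: {np.round(offsets, 2)} m, max: {offsets.max():.2f} m")
```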
Parameters of the camera

Obtaining photogrammetric products with the MicMac software requires identical camera settings throughout the shooting. The optical parameters must be set prior to each flight, according to the brightness (CIPA, 1988). To have good quality pictures, the ISO must not exceed 800. After some tests to compare different settings for aperture and exposure time, we concluded that it was preferable to use an exposure time of 1/2000 s or faster, and to adapt the aperture to the brightness. These three parameters will thus remain fixed throughout the shooting. The camera is at least several dozen meters from the ground (ground distance), so there will be no problem with depth of field, even if the aperture is small. The drawback in the use of this camera is its focus ring, which is endless and without graduations, making it difficult to set the focus to infinity and to fix it.

Flight parameters

Flight parameters include: flying height above the highest point of the site, flight speed, focal length and shooting rate. They directly define the image resolution, the footprint of a photo and the overlap between photos (Ferrières (de), 2004). The stereoscopic base is the distance between two summits S_i, which is also the difference between the footprint and the overlap (Figure 4). The overlaps between images needed for the proper functioning of the MicMac software are 60% along the flight axis and 20% between bands. However, to guard both against a slower rate of the infrared trigger and against the vagueness of the Google Earth georeferencing, it is better to fix the parameters of the aerial survey to give 65% forward overlap and 60% side overlap between strips (Figure 5).

Figure 5. Representation of overlap from above.

The footprint of the photos must be calculated with a flying height starting at the highest point of the site in order for buildings not to get out of the aerial photograph, and to present enough overlap at the top (Figure 6). The choice of the flight parameters is then a compromise between flying height and focal length. These two parameters are conditioned by technological constraints such as the battery range of the UAV (which affects the duration of the flight, thus the speed of the drone and the rhythm of the photos), and the photographic card memory. It is also important to make sure that the ratio (stereoscopic baseline) / (flying height) is between 1/6 and 1/2 for a better intersection of homologous rays (straight lines linking a point on the ground with its corresponding pixels on each picture).

AERIAL PHOTOGRAPH ON SITE

Archaeological site of Cempoala

The archaeological site considered is situated at Cempoala, a village in the state of Veracruz (Figure 7). This is a pre-Columbian ceremonial centre which covers an area of 450 x 250 m. It was the religious centre of a vibrant city at the time of the conquest and had to be abandoned due to heavy epidemics that decimated the population. Outside the site itself, many pyramids dot the village and the surrounding fields. The topographical measurements of control points on the ground are carried out with the total station positioned at the top of the central pyramid of the central site. The total station is a Leica TCR703, allowing an angular accuracy of 1 mgon and 3 mm + 2 ppm on the distances, according to the manufacturer. Moreover, surveys of the edifices on the site are made using the same total station and a prism.
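The flight-planning relations referred to in the Flight parameters section above can be made concrete with a small calculator: ground sample distance from pixel pitch, flying height and focal length; image footprint; stereoscopic base from the forward overlap; flight-line spacing from the side overlap; the base-to-height ratio; and the exposure interval needed at a given speed. This is a generic sketch under assumed values (an approximate NEX-7 pixel pitch of 3.9 µm with the 4000-pixel image side assumed along track, and a 50 m flying height at 3 m/s); these are not the parameters actually flown at Cempoala.

```python
def flight_plan(pixel_pitch_um, width_px, height_px, focal_mm,
                height_m, forward_overlap, side_overlap, speed_ms):
    """Standard flight-planning relations for nadir photogrammetric strips."""
    # Ground sample distance: pixel size projected onto the ground.
    gsd_m = (pixel_pitch_um * 1e-6) * height_m / (focal_mm * 1e-3)
    # Ground footprint of one image (along-track x across-track).
    foot_along = gsd_m * height_px
    foot_across = gsd_m * width_px
    # Stereoscopic base: the along-track footprint not shared with the next image.
    base = foot_along * (1.0 - forward_overlap)
    # Spacing between adjacent flight lines, from the side overlap.
    line_spacing = foot_across * (1.0 - side_overlap)
    return {
        "gsd_cm": gsd_m * 100.0,
        "base_m": base,
        "line_spacing_m": line_spacing,
        "base_to_height": base / height_m,   # should fall between 1/6 and 1/2
        "interval_s": base / speed_ms,       # time between exposures at this speed
    }

# Assumed values for illustration (approx. NEX-7: 3.9 um pitch, 6000 x 4000 px).
print(flight_plan(3.9, 6000, 4000, focal_mm=18, height_m=50,
                  forward_overlap=0.65, side_overlap=0.60, speed_ms=3.0))
```

With these assumed values the computed ground sample distance is about 1.1 cm and the base-to-height ratio about 0.3, which is consistent with the resolution and the 1/6 to 1/2 range discussed in the text.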
 TREATMENTS To obtain orthophotos and 3D models, MicMac software, developed by Marc Pierrot-Deseilligny and the IGN (French National Institute of Geographic and Forestry Information), was used. We got 364 images that were processed to obtain the aerial photograph (Figure 9). Tie points MicMac software starts searching the tie points (equivalent points) between the images, before calculating the distortion parameters and thus the calibration of the camera.It turned out that the 3D point cloud representing the tie points presents a bending which is not in accordance with the reality (Figure 10).Thanks to measurements made with the tachymeter on the seven ground control points, the data have been scaled, and oriented roughly as the total station itself has been oriented.However, they are not georeferenced and thus remain in a local reference frame.This use of coordinates obtained with the tachymeter unfortunately has very little impact on the bending, as they are in the same plane.The addition of complementary reference points allowed correcting this defect. Orthophoto calculation After this, the commands for the creation of orthophotos and 3D models can be launched.MicMac computes the orthophoto for each individual image, and assembles them to create the orthophoto of the entire site. As the resolution of the final orthophoto is huge, MicMac segments the archaeological site into six pictures, which are assembling manually to give the entire site's orthophoto.(Figure 11).The final image presents a darker zone in its central part, which is due to the presence of a few clouds.As each photographic parameter is fixed before the beginning of the flights in order to avoid any optical movement that would imply the calculation of different calibration models, this effect cannot possibly be avoided.The different illuminations observed on the final orthophoto form geometric shapes due to the assembling of individual orthophoto performed by the software to create the overall orthophoto (Figure 12). 3D model The 3D model has a lot of holes mainly where there are trees and in large grassy surfaces (Figure 13).This latter defect could have been avoided with larger footprints of the photos.There are also information gaps for vertical surfaces (Figure 14).This can be due to a too low focal length.Indeed, if the lens angle is low, vertical walls are barely visible in the photographs, and Micmac will consequently struggle to correlate on the walls.Maybe an aerial photograph of the walls with the camera inclined at an angle of 10 ° should also be done, which is possible with the mount in possession of the working group. Quality control Two factors must be taken into account for centimetric quality control: resolution and accuracy. The resolution is the size of the ground surface represented by a pixel on the image.The precision corresponds to the positioning error of the pixel. To obtain these two values, the seven ground control points whose topographic coordinates are known will be used, as well as field points whose coordinates were recorded during the survey made with the total station.This second data set has the disadvantage of being less accurate because of the ambiguity due to the positioning of the prism which must be imagined on the pictures because at first it was not intended to be used for other purposes than to draw a plan.Nevertheless, this second set of data was not involved in MicMac calculations and conserves greater neutrality. 
There are seven ground control points, and six points from the tacheometer survey are selected. For each, Cartesian coordinates in the local reference system are calculated from data taken at the total station, and the corresponding image coordinates (in pixels) of the orthophoto are noted, as well as their coordinates in the 3D model. The distances are then calculated and compared.

The average size of the pixel of the orthophoto obtained with the seven ground control points is 1.1 cm. The same value is obtained with the points from the survey. The final orthophoto (3.55 GB) consists of 26868 x 47348 pixels for a ground footprint of 270 x 500 m, so the pixel size found previously is confirmed (10.0 x 10.5 mm). With regard to accuracy, the error on distances is 1.7 mm/m if the calculations are carried out with the ground control points and 5.1 mm/m if the calculations are carried out with the survey points. This larger error with the second data set can be due to the greater vagueness of the prism position, and the shorter distances between points.

The same procedure is applied to the 3D model. The 3D model was obtained with only the seven ground control points and not recalculated after the addition of reference points. The accuracy of the 3D model therefore remains highly flawed vertically. Target number 2, located on the top of the pyramid, presents incorrect distances of several dozens of centimeters with the other targets. The error per meter is 5.7 cm on average, taking into account the seven control points. If measurements on the second target are removed, this error drops to 1.2 cm/m.

CONCLUSIONS

Aerial photograph with UAV is a method appreciated for its relatively low cost, handiness, and the amount of information it captures. It provides, through photogrammetry software such as MicMac, satisfactory results with high resolution, but remains dependent on a good calibration to obtain reasonable accuracy (Remondino, 2011). Its use can be added to total station surveying; it is an ideal solution when the archaeological area is not heavily wooded and when the structures are not protected by a roof, which is the case in many Mexican sites. It provides a huge amount of information in a very short time, and post processing is mainly computer work which does not require as many man-hours of work as traditional solutions.

Figure 2. Ground Station software interface.
Figure 3. Ground Station's flight plan editor. The planned trajectory is displayed in blue. Yellow pins correspond to the vertices of the path. The red lines project the position of the UAV on the ground. The vertices' order numbers are written in blue with their height; the distances between two consecutive vertices of the path are in yellow. Furthermore, the distance and the duration of the whole flight are calculated and indicated by the Ground Station software.
Figure 4. Representation of aerial photo parameters. It was decided that the resolution of the photos should be close to one centimeter; it is obtained as a function of the pixel size of the camera, of the focal length and of the flying height.
Figure 6. Flight height and overlap above relief on the ground of a photograph.
Figure 8. Cempoala flight plan.

Topographic survey

Before the flight, seven targets were placed on the ground at the four corners of the site, in the middle of the two long sides and at the centre of the site. They served as control points, as they were photographed during the flights. The targets are all visible from the top of the central pyramid.

Figure 9. Shot summits.
Figure 10. Side view of the point cloud.
Figure 11. Orthophoto of the archaeological site of Cempoala.
Figure 13. 3D model of the archaeological site of Cempoala, from above.
Figure 14. Detail of the 3D model of the archaeological site of Cempoala.
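The quality-control procedure described above amounts to comparing distances measured on the ground with the same distances read off the orthophoto. A minimal sketch of that computation is shown below; the check-point distances are invented for illustration and are not the survey's measurements.

```python
import numpy as np

# Hypothetical check-point data: ground distances between control points from
# the total station (metres) and the same distances read off the orthophoto (pixels).
ground_m = np.array([112.40, 87.15, 63.80, 141.05])
ortho_px = np.array([10218.0, 7923.0, 5800.0, 12823.0])

pixel_size_m = ground_m / ortho_px            # metres represented by one pixel
print(f"mean pixel size: {pixel_size_m.mean() * 100:.2f} cm")

# Relative accuracy: distance error per metre once the mean scale is applied.
predicted_m = ortho_px * pixel_size_m.mean()
error_mm_per_m = 1000.0 * np.abs(predicted_m - ground_m) / ground_m
print(f"mean error: {error_mm_per_m.mean():.1f} mm/m")
```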
2018-12-14T19:36:34.948Z
2014-05-28T00:00:00.000
{ "year": 2014, "sha1": "dd4640d91fb902cbe977fa04afb7aaceab97dfcd", "oa_license": "CCBY", "oa_url": "https://www.isprs-ann-photogramm-remote-sens-spatial-inf-sci.net/II-5/251/2014/isprsannals-II-5-251-2014.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "dd4640d91fb902cbe977fa04afb7aaceab97dfcd", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Computer Science" ] }
256488710
pes2o/s2orc
v3-fos-license
Lattice Deformation of Tb0.29Dy0.71Fe1.95 Alloy during Magnetization In Tb-Dy-Fe alloy systems, Tb0.29Dy0.71Fe1.95 alloy shows giant magnetostrictive properties under low magnetic fields, thus having great potential for transducer and sensor applications. In this work, the lattice parameters of Tb0.29Dy0.71Fe1.95 compounds as a function of a magnetic field were investigated using in situ X-ray diffraction under an applied magnetic field. The results showed that the c-axis elongation of the rhombohedral unit cell was the dominant contributor to magnetostriction at a low magnetic field (0–500 Oe). As the magnetic field intensity increased from 500 Oe to 1500 Oe, although the magnetostrictive coefficient continued to increase, the lattice constant did not change, which indicated that the elongated c-axis of the rhombohedral unit cell rotated in the direction of the magnetic field. This rotation mainly contributed to the magnetostriction phenomenon at magnetic fields of above 500 Oe. The structural origin of the magnetostriction performance of these materials was attributed to the increase in rhombohedral lattice parameters and the rotation of the extension axis of the rhombohedral lattice. Introduction Tb-Dy-Fe alloy is known as giant magnetostrictive material because of its strong magnetostrictive properties, which can be used in transducers, sensors, actuators, and other devices [1,2].In Tb l−x Dy x Fe 2 alloy systems, the composition x = 0.67-0.73 is frequently used and is also the composition used as a giant magnetostrictive material.For a long time, it was thought that Tb l−x Dy x Fe 2 had a cubic Laves phase (C15) structure with a lattice parameter of 0.73 nm [3,4].Its cubic Laves phase compounds, ReFe 2 (Re = rare earth), are well known to exhibit giant magnetostriction at room temperature [5].In the compound ReFe 2 , the rare earth spins are taken to be parallel to one another and antiparallel to the iron spins, showing large magnetic anisotropy.Rare earth compounds with iron in the Laves (C15) phase are strongly magnetic well above room temperature [6].In the C15 crystal structure, each transition metal atom is surrounded by six other atoms as its nearest neighbors.It also had long been believed that Tb 0.3 Dy 0.7 Fe 2 alloy had a C15-type cubic Laves phase structure across each transition [7].With the improvement in device resolution, researchers have gained a new understanding of crystal structure.In recent years, synchrotron data have shown that the ferromagnetic transition in ReFe 2 compounds results in a low crystallographic symmetry conforming to the spontaneous magnetization direction [8,9].Ferromagnetic Tb l−x Dy x Fe 2 materials have been shown to consist of coexisting rhombohedral and tetragonal crystallographic structures at room temperature, as measured via high-resolution X-ray diffraction and AC magnetic susceptibility measurements [8][9][10].Tb 0.3 Dy 0.7 Fe 2 , a typical composition of the Terfenol-D giant magnetostrictive material (GMM), has been shown to consist of coexisting rhombohedral and tetragonal phases over a wide temperature range, and the local rhombohedral and tetragonal domains can easily respond to a low external magnetic field, thus facilitating easy magnetization rotation and high magnetostrictive properties [11][12][13].As the resolution of the synchrotron XRD instrument is unable to distinguish small tetragonal distortions from a cubic structure, the tetragonal structure is generally fitted and calculated as a cubic structure [10,14].The 
rhombohedral lattice constant of Tb 0.3 Dy 0.7 Fe 2 was determined by Yang et al. [8] using high-resolution synchrotron radiation XRD equipment as a = 7.336 Å, α = 89.91 • .Gong et al. [14] measured the lattice constants of cubic (tetragonal) and rhombohedral structures as a = 7.329 Å and a = 7.334 Å, respectively.After heat treatment, the lattice constant of the sample was deformed by about 1‰, but the magnetostrictive performance of the sample was significantly improved.It was found that although the difference of lattice constants between the two structures is small, the magnetostrictive properties of the two structures change greatly when the crystal structure parameters change slightly.The crystal structure seems to profoundly influence magnetostriction phenomena.Therefore, the structure of Tb-Dy-Fe alloy was used as a standard C15 structure for many years due to the insufficient resolution of the equipment.The relationship between the subtle changes in lattice parameters and magnetostrictive properties needs to be further studied. The magnetostriction effect is a physical phenomenon in which the shape and size of a material change when it is magnetized.The magnetostriction phenomena of Tb l−x Dy x Fe 2 have been fully studied and explained in terms of magnetic domains, the anisotropic energy of magnetic crystals, and domain structures [15][16][17][18].According to the theory of magnetic domain [19][20][21][22], when a Tb-Dy-Fe material is at a temperature lower than the Curie temperature, it spontaneously magnetizes, forming magnetic domains in various directions.During magnetization, magnetic domain rotation and domain wall displacement occur, resulting in magnetostriction.Nevertheless, magnet domain theory is only a phenomenological theory, which does not involve any crystal structure parameters, only describing macroscopic phenomena.The theory of magnetostriction still needs to be studied and improved.In order to obtain magnetostrictive materials with higher performance, we generally need to regulate the materials.At present, Tb-Dy-Fe alloy is generally regulated via heat treatment.Most of the heat treatment methods used involve improving the magnetostrictive properties of Tb-Dy-Fe materials with uniform composition, uniform phase structure, and specific magnetic domain orientation [23][24][25].However, there is no perfect theory guiding the regulation of the Tb-Dy-Fe crystal structure.The deformation of the crystal structure could be another factor that greatly influences magnetostrictive performance.For Tb l−x Dy x Fe 2 compounds, one of their prominent features is their localized 4f electrons and itinerant 3d electrons, and the 4f electrons of Tb and Dy make the main contribution to magnetostriction [17].When the sample is magnetized by an external magnetic field, the distribution of 4f electrons related to the crystal electric field also changes accordingly.Changes in size or orientation of the Tb and Dy magnetic moment are reflected in a change in the 4f charge distribution, which in turn forces the surrounding atoms to attain new equilibrium positions, minimizing the total energy [6].That is to say, this series of changes produces lattice deformation in the crystal structure, and the end result is the phenomenon of magnetostriction.As a consequence, large magnetostriction originates from magneticfield-induced large lattice deformation [26].However, due to the lack of measurement accuracy, the fact that the texture in a directionally solidified sample is too strong to obtain 
an accurate lattice constant, and the fact that a powder sample easily moves in a magnetic field, measurement of the lattice deformation of Tb-Dy-Fe under different magnetic field distributions remains a challenge.There have been few studies on crystallography during magnetization, and the lattice deformation resulting from magnetostriction at low fields is poorly understood.Therefore, it is necessary to conduct some further research on the lattice deformation of Tb-Dy-Fe alloy during magnetization. In the present study, the crystal structure and lattice deformation of polycrystalline compounds with the nominal composition Tb 0.29 Dy 0.71 Fe 1.95 during magnetization were investigated.We aimed to understand the deformation of the crystal structure during magnetostriction and to gain a deeper understanding of magnetostriction.We also hoped to provide theoretical guidance for improving magnetostrictive properties by regulating crystal structure. Material and Methods In order to obtain excellent magnetostrictive properties from Tb-Dy-Fe alloys, it is of crucial importance to fabricate oriented polycrystalline crystal via directional solidification.An alloy with the nominal composition Tb 0.29 Dy 0.71 Fe 1.95 was prepared from highly pure Fe (99.9 wt.%), Tb (99.99 wt.%), and Dy(99.99% wt.%) via the Bridgeman directional solidification process in an argon atmosphere.Then, the alloy was annealed at 1060 • C for 2 h in an argon atmosphere.This composition ratio of the alloy ensured that the main phase was all RFe 2 phase without any RFe 3 phase, as shown in reference [14].The directionally solidified alloys prepared via this common method have a strong texture, with <110> axial preferred orientation generally.Thus, the diffraction peaks of many other crystal planes are very low or do not appear in the XRD patterns.To characterize the general law of magnetostriction, an isotropic sample was prepared by grinding the directionally solidified samples into a 25~40 µm powder mixed with a small amount (about 5 wt.%) of epoxy resin, so that the powder would not move or freely rotate in an external magnetic field.The powder was ground by hand instead of using a high-energy ball mill.The particle size of the powder was screened using a standard sieve.It should be noted that these operations were all carried out in an argon atmosphere glove box.The powder was not exposed to the air, as much as possible, to avoid oxidation.The isotropic sample was cured in an argon atmosphere for more than 24 h; after the end of curing, the sample would not easily oxidize.X-ray powder diffraction (XRD) patterns were obtained with Cu-Kα radiation (with wavelengths λ-Kα 1 = 1.54059Å and λ-Kα 2 = 1.54431Å) on a Rigaku (Smart Lab 9Kw, Tokyo, Japan) X-ray diffractometer, and the step scan increment (2θ) was 0.004 degrees.The sample table of the X-ray diffractometer was improved by using nonmagnetic material, and an in situ magnetic field experiment was carried out.By using a fixed device, the sample remained in a fixed position.In the process of changing the number of NdFeB magnets, the position of the sample remained unchanged.The magnetic field intensity was controlled during the in situ magnetic field XRD process by increasing the number of NdFeB magnets, as shown in Figure 1.The magnetic field on the upper surface of the sample was monitored using a Hall probe.All the obtained patterns were analyzed using the Rietveld method and Fullprof software (https://www.ill.eu/sites/fullprof/php/downloads.html,accessed on 29 November 
2022). A measuring device for magnetostrictive materials using a strain gauge was employed to measure the magnetostriction coefficient of this powder-bonded sample.
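As a generic illustration of how a strain-gauge reading maps to a magnetostriction value (the study does not describe its measuring device at this level of detail, so the gauge factor and resistance figures below are assumptions): the strain equals the relative resistance change divided by the gauge factor.

```python
def strain_ppm(delta_r_ohm: float, r_ohm: float, gauge_factor: float = 2.0) -> float:
    """Strain (in ppm) from a strain-gauge resistance change: (dR/R) / GF."""
    return (delta_r_ohm / r_ohm) / gauge_factor * 1e6

# A 120-ohm gauge changing by 0.24 ohm with GF = 2 corresponds to ~1000 ppm,
# the order of magnitude typical of Tb-Dy-Fe magnetostriction. Values illustrative.
print(f"{strain_ppm(0.24, 120.0):.0f} ppm")
```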
Results and Discussion

Figure 1 shows a schematic diagram of the in situ magnetic field XRD experiment. A cylindrical sample with a thickness of 4.5 mm was fixed on the sample rack, and several NdFeB magnets with a thickness of 2 mm were applied below. Based on measurements of the magnetic field on the upper surface of the sample with a Hall sensor, magnetic fields of 250 Oe, 500 Oe, 850 Oe, 1200 Oe, 1500 Oe, 1800 Oe, and 2500 Oe were obtained. Magnetostriction mainly occurred in the axial direction of the sample, and the plane scanned by the X-rays was perpendicular to the direction of the magnetostriction. Figure 2a shows the XRD patterns of the sample at room temperature under different magnetic fields. Comparing the XRD pattern at zero magnetic field with the standard PDF cards #33-0680 and #65-5127, the positions, quantity, and the relative strength of the diffraction peaks were all similar to the standard sample. The patterns indicated that the sample was isotropic without texture, with a typical ReFe2 (Re = Tb, Dy) Laves phase. The sample was tested in different magnetic fields ranging from 0 to 2500 Oe. After the application of a magnetic field of 1500 Oe, there was still no obvious change in the relative peak strengths, indicating no obvious texture. However, a closer comparison of the 440 peaks revealed subtle changes in the position and intensity of the peaks, as shown in Figure 2b. This indicated a change in the lattice constants or crystal orientation. The peak pattern consisted of cubic 440 and rhombohedral 208, 220 peaks, which are the same as those reported in the literature, indicating the coexistence of a cubic structure and a rhombohedral structure in the crystal [27,28]. To obtain accurate lattice parameters, the Rietveld refinement [29][30][31] method was used to fit the full XRD patterns. The XRD patterns of eight different magnetic fields were refined in the same process. For example, the fitting of the full XRD pattern obtained under a 1500 Oe magnetic field is shown in Figure 2c. The tetragonal structure was fitted with the cubic Fd3m symmetry [6,7], as the distortion of the tetragonal structure was too small to be distinguished from the cubic structure using XRD [4,6]. The rhombohedral R3m(H) (No. 166) model and Fd3m (No. 227) model were adopted for the fitting, as in the literature [6,17]. The space group R3m (No.
166) characterizes the rhombohedral crystal structure, which can be equivalently described by the hexagonal crystal structure R3m(H). The hexagonal (rhombohedral) crystal structure of Tb-Dy-Fe is equal to a distortion of the Laves cubic structure along the [111] direction [17]. The [0001] direction (c-axis) of the hexagonal structure is parallel to the [111] direction of the cubic structure, while the [1010] direction (a-axis) of the hexagonal structure is parallel to the [110] direction of the cubic structure. In the R3m(H) crystal structure model, a = b ≠ c, α = β = 90° and γ = 120°; the Tb and Dy atomic position coordinates are both (0, 0, 0.125); Fe atoms exist in two positions, (0, 0, 0.5) and (0.5, 0, 0). In the Fd3m model, a = b = c, α = β = γ = 90°; the Tb and Dy atomic position coordinates are both (0, 0, 0); and the Fe atomic coordinate is (0.625, 0.625, 0.625). During refinement, we mainly refined the lattice parameters, scale factors, preferred orientation, asymmetry parameters, shape parameters, and global parameters such as the instrumental profile, background, and so on. Because Tb and Dy atoms in the Tb1−xDyxFe2 crystal are similar in size, because their characteristic peaks are difficult to distinguish accurately in X-ray diffraction, and considering the characteristics of the R3m(H) (No. 166) and Fd3m cell models, the site occupancy (Occ) and isotropic thermal parameter (B) were not used as the focus of refinement. The results of the refinement procedure with satisfactory fits, including lattice parameters, cell volume, and phase fraction of the Tb0.29Dy0.71Fe1.95 compound in the magnetization state, are presented in Table 1. The fit parameters of all full-pattern fittings are small (χ2 < 2) and within a reasonable range. The displacement errors of the instrument during the refinement were equal for all XRD patterns, so the final results accurately indicate the relative changes in the crystal structure parameters. In order to show the lattice parameters intuitively, we drew Figure 3 with the main parameters. Figure 3 shows the variation in the crystal structure parameters with the magnetic field obtained through refinement. The c-axis lattice parameter of the rhombohedral structure (R-c) is equivalent to the cubic structure lattice parameter expanded along the <111> direction, that is, the easy magnetization axis direction. Between 0 and 500 Oe, the most obvious change was in the c-axis lattice parameter of the rhombohedral structure, with an elongation of approximately 2.4 parts per thousand. In addition, the a-axis lattice parameter of the rhombohedral structure (R-a) decreased. The rate of change in the cell volume was calculated to be between 0.2 and 0.65 parts per thousand (shown in Table 1), which is an order of magnitude smaller than the rate of change in R-c. We noted that the lattice constant of the cubic structure (C-a) increased slightly at 250 Oe and then flattened out until the field exceeded 1500 Oe. The reason may be that a magnetic field of 0-250 Oe can overcome a low energy barrier and increase C-a slightly, whereas for C-a to continue to increase, a larger magnetic field was needed to overcome a higher energy barrier. The rate of change in the lattice parameter of the cubic structure was approximately 0.4 parts per thousand. Therefore, the R-c elongation of the rhombohedral crystal mainly contributed to the magnetostriction phenomenon under a low magnetic field.
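As a quick illustration of how refined lattice parameters translate into the strains quoted above (in parts per thousand), the following sketch computes the relative change of each parameter against its zero-field value. The numbers used here are placeholders chosen only to be of the right magnitude; they are not the refined values from Table 1.

```python
def strain_ppt(value, reference):
    """Relative change of a lattice parameter (or cell volume) in parts per thousand."""
    return (value - reference) / reference * 1000.0

# Placeholder refined parameters (angstroms) at 0 Oe and 500 Oe -- illustrative only.
params_0oe   = {"R-a": 5.170, "R-c": 12.640, "C-a": 7.300}
params_500oe = {"R-a": 5.168, "R-c": 12.670, "C-a": 7.301}

for name in params_0oe:
    delta = strain_ppt(params_500oe[name], params_0oe[name])
    print(f"{name}: {delta:+.2f} parts per thousand")
```

With these placeholder inputs, R-c elongates by roughly 2.4 parts per thousand while R-a shrinks slightly and C-a barely moves, mirroring the qualitative picture described in the text.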
The lattice parameters of both the rhombohedral and cubic structures did not significantly change between magnetic field intensities of approximately 500 Oe and 1500 Oe. However, the magnetostriction coefficient of the powder-bonded sample still increased with increasing magnetic field intensity, as shown by the blue curve in Figure 4. Therefore, we propose that the orientation of the rhombohedral crystal structure is arranged in different directions and can rotate under the action of a magnetic field. This view can be confirmed in Figure 2c, where the relative strength of the rhombohedral 208R peak and the (220R + 440C) peaks varies in a magnetic field, with a ratio of 0.73 at 500 Oe and 0.80 at 1500 Oe. Since the results of the Rietveld refinement show that the ratio of rhombohedral structure to cubic structure remains unchanged, that is, the relative strength of the 440C peaks does not change, the relative strength of the 208R and 220R peaks should change in the magnetic field, which means that the rhombohedral structure is oriented in the magnetic field. However, the crystal cell does not actually rotate, according to the principle of energy minimization [6,12]; rather, through the displacement of atoms to a nearby position by overcoming the lowest barrier, the direction of extension of the crystal lattice is rotated, as shown in Figure 5. The rhombohedral c-axis elongation in the sample may be along any direction in the initial phase. When this direction is inconsistent with the magnetic field H, the atoms overcome the barrier with increasing magnetic field and move towards a nearby position. In Figure 5, the atoms in positions A, G, B, and H move towards positions A′, G′, B′, and H′, respectively. After all the atoms (including atoms in the C, D, E, and F positions) have moved to new equilibrium positions, the elongation direction of the cell is rotated from the initial AG direction to the H′B′ direction, which is parallel to the magnetic field. The rhombohedral cell "rotation", in this way, mainly contributes to the magnetostriction phenomenon for 500-1500 Oe magnetic fields.
Similar to how a magnetic domain is deflected in the direction of the magnetic field [32], the lattice is also deflected in the direction of the magnetic field. We found that this process is reversible and repeatable through the process of magnetostrictive coefficient measurement. Therefore, we think that the increase in the magnetostriction coefficient in the magnetic field at 500 Oe-1500 Oe is due to the gradual rotation of the R-c direction of the rhombohedral lattice; the R-c whose initial direction is not in line with the magnetic field direction gradually shifts to the magnetic field direction. This rotation process may continue until the magnetic field exceeds 2500 Oe and approaches the saturation magnetic field. The linear magnetostriction for high magnetic fields is mainly caused by rotation, and the rate of magnetostriction gradually decreases with the increase in the magnetic field. Since the main change occurring after the magnetic field intensity exceeds 2000 Oe is the growth in the cubic lattice parameter C-a, and it is thought that 2000 Oe can overcome the barrier of continued expansion of the cubic lattice, volume magnetostriction [33] may begin at this point. We note that the proportions of the rhombohedral structure and cubic structure hardly change, so the change in lattice parameters is the main factor in this magnetization process.
Dynamic magnetostriction (d33) was calculated using dλ/dH, as shown by the red curve in Figure 4. The higher the value of dλ/dH, the lower the field needed to trigger large magnetostriction, and the more sensitive the sample deformation is to the magnetic field. In practical applications, Tb-Dy-Fe alloys with a higher dλ/dH can help realize the miniaturization of devices. The largest dλ/dH appears at about 700 Oe and reaches the maximum value of 0.3 ppm/Oe. In the range of 500 Oe-900 Oe, dλ/dH maintains a relatively large value. When the rhombohedral structure begins to rotate, the sample deformation is most sensitive to changes in the magnetic field. It can be deduced that the highest contribution efficiency of rhombohedral lattice deformation to magnetostriction occurs after the beginning of rotation. When the magnetic field exceeds 700 Oe, the dλ/dH value slowly decreases from the highest value. This is because rhombohedral structures with certain favorable angles to the direction of the magnetic field rotate preferentially, and rhombohedral structures with other angles rotate successively after the magnetic field continues to increase.
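A simple way to reproduce a d33 = dλ/dH curve from a measured magnetostriction trace is a numerical derivative. The sketch below does this with NumPy on made-up (H, λ) pairs, which are placeholders rather than the measured data behind Figure 4.

```python
import numpy as np

# Placeholder measurement: field H (Oe) and magnetostriction lambda (ppm) -- illustrative only.
H   = np.array([0, 250, 500, 700, 900, 1200, 1500, 1800, 2500], dtype=float)
lam = np.array([0,  30,  90, 150, 205,  280,  340,  385,  450], dtype=float)

d33 = np.gradient(lam, H)    # numerical d(lambda)/dH in ppm/Oe, handles non-uniform field steps
i_max = int(np.argmax(d33))  # field at which the sample deformation is most sensitive

print(f"max d33 = {d33[i_max]:.2f} ppm/Oe at H = {H[i_max]:.0f} Oe")
```

With these placeholder values the maximum sits near 700 Oe at roughly 0.3 ppm/Oe, i.e. in the regime where, as argued above, the rhombohedral cells have just begun to rotate.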
This successive rotation results in a departure from the linearity of the response of the lattice strain to the applied magnetic field and tends to increase the saturation magnetic field. If regulatory measures could place the R-c direction of all rhombohedral cells at the most favorable rotation position from the beginning, so that they rotate together after reaching 500 Oe, the saturation magnetic field would be greatly reduced and dλ/dH would increase. This topic clearly needs further study.

The magnetostrictive curve of Tb-Dy-Fe has complex nonlinearity, which seriously limits the accuracy of device control. The more linear the performance curve of a magnetostrictive material, the more accurate the microdevices made of it in application [34]. This requires dλ/dH to decline as slowly as possible after reaching its highest value. From the perspective of crystal structure, the more rhombohedral structures rotate and the longer the distance of rotation, the better the linearity of the magnetostrictive curve. In other words, via crystal structure regulation, the initial direction of R-c elongation of more rhombohedral structures should be perpendicular to the magnetic field direction, which improves the linearity of the magnetostrictive curve. Considering symmetry, the longest rotation path occurs when the initial direction is 90 degrees from the final direction. These results give us a deeper understanding of the crystal structure of Tb-Dy-Fe and of the magnetostriction principle in Tb-Dy-Fe materials from the perspective of crystal structure deformation.

Conclusions

In conclusion, we performed XRD studies on Tb0.29Dy0.71Fe1.95 compounds under different magnetic fields and employed the Rietveld method to refine the XRD patterns. Rhombohedral cells play an important role in linear magnetostriction. The elongation of the rhombohedral structure along the c-axis under a low magnetic field (0-500 Oe) was evidenced. The rhombohedral crystal structures were randomly oriented in the case without a magnetic field, and the application of a magnetic field yielded rhombohedral crystal structure rotation. A model of this crystal structure rotation was given. The c-axis direction of the R3m(H) symmetrical crystal structure was initially arranged in every direction, and rearrangement along the magnetic field direction mainly occurred after the magnetic field strength exceeded 500 Oe. Shortly after the rhombohedral structure began to rotate (under a 700 Oe magnetic field), the resulting strain was most sensitive to changes in the magnetic field, and dλ/dH reached its maximum value. Conversion between the rhombohedral and cubic structures was rare under the magnetic fields. Therefore, the main source of magnetostriction was not the transformation of the crystal structure but the change in the lattice parameters and the rotation of the extension axis of the rhombohedral lattice.

Figure 1. Schematic of the experimental setup for XRD. By stacking NdFeB magnets, magnetic fields of up to 2500 Oe were generated. The direction of the magnetic induction lines is perpendicular to the X-ray scan surface.
Figure 2. XRD patterns of the Tb0.29Dy0.71Fe1.95 sample under different magnetic fields. (a) XRD patterns at magnetic fields of 0 and 1500 Oe using a cubic structure index, compared with standard PDF cards. (b) The peak shape at 2θ = 73° formed by the superposition of rhombohedral and cubic structure peaks; the intensity of the small peak on the right is half of the (220R + 440C) peak, indicating the Kα2 diffraction peak, which had no influence on the analysis. (c) Plot of the Rietveld refinement of the XRD diffraction pattern recorded at 1500 Oe. The first and second rows of green Bragg peaks refer to the hexagonal and cubic types of Tb-Dy-Fe, respectively.

Table 1. Lattice parameters, cell volume, phase fraction, and satisfactory fits of XRD patterns of the Tb0.29Dy0.71Fe1.95 compound in the magnetization state. To facilitate the distinction between lattice constants of different structures, R-a, R-c, and C-a are defined.

Figure 3. Magnetic field dependence of the lattice parameters and rhombohedral structure proportions of Tb0.29Dy0.71Fe1.95. The illustration shows a diagram of the rhombohedral R3m(H) model (No. 166). R-c and R-a are the c-axis and a-axis lattice parameters of the rhombohedral structure, respectively. C-a is the lattice parameter of the cubic structure.

Figure 5. The extension direction of the lattice rotates towards the magnetic field direction. H represents the applied magnetic field. The letters A-G represent the location of the atom.
2023-02-02T16:26:15.664Z
2023-09-28T00:00:00.000
{ "year": 2023, "sha1": "d0bdecdf1be8ba14b1b3c5b5a95c14fc6b2493bb", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2072-666X/14/10/1861/pdf?version=1695895505", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "96a6e59c0b6f2969d477e3dc5c673e69e8b55279", "s2fieldsofstudy": [ "Materials Science", "Physics" ], "extfieldsofstudy": [ "Medicine" ] }
254125530
pes2o/s2orc
v3-fos-license
Automated grading system of retinal arterio-venous crossing patterns: A deep learning approach replicating ophthalmologist’s diagnostic process of arteriolosclerosis The morphological feature of retinal arterio-venous crossing patterns is a valuable source of cardiovascular risk stratification as it directly captures vascular health. Although Scheie’s classification, which was proposed in 1953, has been used to grade the severity of arteriolosclerosis as diagnostic criteria, it is not widely used in clinical settings as mastering this grading is challenging as it requires vast experience. In this paper, we propose a deep learning approach to replicate a diagnostic process of ophthalmologists while providing a checkpoint to secure explainability to understand the grading process. The proposed pipeline is three-fold to replicate a diagnostic process of ophthalmologists. First, we adopt segmentation and classification models to automatically obtain vessels in a retinal image with the corresponding artery/vein labels and find candidate arterio-venous crossing points. Second, we use a classification model to validate the true crossing point. At last, the grade of severity for the vessel crossings is classified. To better address the problem of label ambiguity and imbalanced label distribution, we propose a new model, named multi-diagnosis team network (MDTNet), in which the sub-models with different structures or different loss functions provide different decisions. MDTNet unifies these diverse theories to give the final decision with high accuracy. Our automated grading pipeline was able to validate crossing points with precision and recall of 96.3% and 96.3%, respectively. Among correctly detected crossing points, the kappa value for the agreement between the grading by a retina specialist and the estimated score was 0.85, with an accuracy of 0.92. The numerical results demonstrate that our method can achieve a good performance in both arterio-venous crossing validation and severity grading tasks following the diagnostic process of ophthalmologists. By the proposed models, we could build a pipeline reproducing ophthalmologists’ diagnostic process without requiring subjective feature extractions. The code is available (https://github.com/conscienceli/MDTNet). Introduction Retina provides a window to directly visualize vascular structure in vivo, and ophthalmologic examination has been regarded as an important routine for detecting not only eye diseases but also ocular manifestations of cardiovascular diseases or their accumulated risks [1]. Among these detectable retinal vascular signs, arteriolosclerosis is critical yet asymptomatic, of which diagnosis requires detailed retinal observation. It is not widely conducted in the modern medical practice as it depends on mostly subjective qualitative observations, and most importantly, it requires vast experiences. Assessment of arterio-venous crossing points in retinal images provides rich cues for screening arteriosclerosis and for evaluating accumulated cardiovascular risks. Typically, arterio-venous crossing points are classified into severity grades [2]. The assessment is based on some diagnostic criteria, for example, Scheie's classification [3], as shown in Fig 1(b)-1(e). The grades are described as follows: (i) none (no anomaly observed); (ii) mild (slight shrink in the caliber at venular edges); (iii) moderate (narrowed caliber at a single venular edge); and (iv) severe (narrowed caliber at both venular edges). 
However, human graders are subjective and usually with different levels of experience, and there has been a criticism of the low reproducibility of severity grading, which makes grading results from human graders unreliable for clinical practice, screening, and clinical trials [4]. Also, considering the ever-increasing demand for ophthalmologic examination, computeraided diagnosis (CAD) is extremely helpful for quick screening. Yet, retinal image analysis for CAD is a challenging task due to the high complexity of the vessel system and huge visual differences among retinal images. In fact, most researchers in this area have been focusing on preliminary tasks, such as vessel segmentation [5][6][7], artery/vein classification [8][9][10], etc. A few works address higher-level tasks [4,11], mostly on top of vessel segmentation, such as vessel width measurement, vesselto-vessel ratio calculation, etc. However, they usually struggle in actual diagnoses: Firstly, vessel segmentation in retinal images per se is a challenging task. The vessel maps in Fig 1(c)-1(e), which are produced by the state-of-the-art segmentation model [12], cannot capture such deformation. This may imply that deformation is too minor to be captured by segmentation models, although such kind of segmentation-based approach is a typical solution for automatic severity grading. Secondly, the existing methods detect arterio-venous crossing points by applying some morphological operators to vessel maps [13]. This approach may not be accurate enough to find crossing points that satisfy diagnostic requirements. For example, we can only use crossing points at which the artery is above the vein for diagnosis, and Fig 1(a) is not a diagnostic crossing point since the artery goes below the vein. Instead of fully relying on segmentation results, we propose a multi-stage approach, in which segmentation results are used only for finding crossing point candidates, and actual prediction of the severity grade is conducted for an image patch around each crossing point after validating if the crossing point is an actual and informative one. To the best of our knowledge, this is the first work proposing a fully-automatic methodology aiming at grading arteriolosclerosis through the joint detection and analysis of retinal crossings. Another issue in our severity grading task, which is very common in medical imaging, is the imbalanced label distribution. Most patients in our dataset have the slightest signs (none and mild) of arteriolosclerosis while only a few patients suffer from the severe grades of artery hardening. Also, the boundaries among different severity labels are not always obvious, making accurate diagnosis challenging. Inspired by the concept of the multidisciplinary team [14], which strives to make a comprehensive assessment of a patient, we propose a multi-diagnosis team network (MDTNet) in this paper to address the imbalanced label distribution and label ambiguity problems at the same time. MDTNet can combine the features from multiple classification models with different structures or different loss functions. Some of the underlying models in MDTNet use the class-balanced focal loss [15] to handle hard or rare samples, of which the original version requires hyperparameter tuning, while MDTNet can utilize the advantage of the focal loss without tuning its hyperparameters. Our main contribution is two-fold: (i) We propose a whole pipeline for an automatic method for severity grading of artery hardening. 
Our method can find and validate possible arterio-venous crossing points, for which the severity grade is predicted. (ii) We design a new model, MDTNet, which uses the focal loss to address the problems of data ambiguity and imbalance.

Fig 1. Typical examples of our prediction targets. Images in the first and second rows are raw retinal patches and automatically-generated vessel maps with manually-annotated artery/vein labels, respectively. Red represents arteries while blue represents veins. (a) is a false crossing (the vein runs above the artery), while (b)-(e) are for none, mild, moderate, and severe grades, respectively. Note that even the state-of-the-art segmentation techniques cannot capture caliber narrowing; therefore, the arterioloscleroses are not very obvious in the vessel maps. https://doi.org/10.1371/journal.pdig.0000174.g001

Ethics statement

This study was performed in accordance with the World Medical Association Declaration of Helsinki. Patients gave written informed consent to participate and the study protocol was approved by the institutional review board of the Osaka University Hospital.

We built a vessel crossing point dataset extracted from our retinal image database of the Ohasama study, a cohort to study cardiovascular disease risk, in which we could utilize 1,440 images of 5,184 × 3,456 pixels, captured by the CR-2 AF Digital Non-Mydriatic Retinal Camera (Canon, Tokyo) between 2013 and 2017 as JPEG files. This database includes the medical data of 684 people with an average age of 64.5 (standard deviation: 6.1). The ratio between female and male is 65.2% : 34.8%, and 47.6% of all participants have hypertension. Details of the study profile were published elsewhere [16]. To find crossing points in these images (Fig 2(a)-2(d)), we used a segmentation model [12] to get vessel maps. We then classified each pixel on the extracted vessels into artery/vein using [17]. We combine the vessel segmentation and classification results to find crossing points because the classification results, which are more useful for crossing point detection, tend to have more errors, while the segmented vessel maps are more accurate. Therefore, we refine the classification results based on the vessel maps. A classic approach then finds crossing points in these refined artery/vein maps. Specifically, we find the artery pixels neighbouring vein pixels and check whether each is a crossing point or not using the skeletonized vessel map. The points marked in yellow in Fig 2 are detected crossing point candidates. Note that for cup zones, as indicated by a pink circle and dot in Fig 2, we exclude candidates because the vessel system in this area has high complexity and thus segmentation and classification are not reliable. Image patches are of size 150 × 150, centered at the crossing point candidates. Consequently, we detected 4,240 crossing points and extracted the corresponding image patches centered at these crossing points. Each image patch was carefully reviewed by a highly experienced ophthalmologist. Due to errors in vessel segmentation and artery/vein classification, the detected crossing points may not be actual or informative. Therefore, the specialist first annotated each image patch with a label on its validity, i.e., whether the image patch contains an actual and informative crossing point (true) or not (false). The numbers of true and false crossing points are 2,507 and 1,733, respectively.
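As a rough illustration of the candidate-detection and patch-extraction steps described above (this is not the authors' released code; the function names and the exact neighbourhood test are assumptions), the sketch below skeletonizes a binary vessel map, collects artery pixels on the skeleton that touch vein pixels as crossing-point candidates, and crops 150 × 150 patches around them.

```python
import numpy as np
from skimage.morphology import skeletonize

def crossing_candidates(artery_mask, vein_mask):
    """Return (row, col) skeleton pixels where the artery label touches the vein label."""
    skeleton = skeletonize(artery_mask | vein_mask)
    candidates = []
    rows, cols = np.nonzero(artery_mask & skeleton)
    for r, c in zip(rows, cols):
        neighbourhood = vein_mask[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
        if neighbourhood.any():  # artery pixel adjacent to a vein pixel
            candidates.append((r, c))
    return candidates

def crop_patch(image, center, size=150):
    """Crop a size x size patch centered on a candidate point (no padding at image borders)."""
    r, c = center
    half = size // 2
    return image[max(r - half, 0):r + half, max(c - half, 0):c + half]
```

In practice further checks (e.g., excluding the optic cup zone and verifying that the artery actually runs over the vein) would follow, as described in the text.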
For each true crossing point, the specialist gave its severity label in C = {none, mild, moderate, severe}. The numbers of image patches with the respective labels are 1,177, 816, 457, and 57. In both tasks, the datasets were divided into training, validation, and test sets following a ratio of 8:1:1. As an examinee may have multiple retinal images, it is important to strictly put all of their images into the same subset to prevent training data contamination.

Severity grading pipeline

Our method forms a pipeline with three main modules, i.e., preprocessing, patch validation, and severity grade prediction. The whole pipeline is shown in Fig 2.

Preprocessing

Steps (a)-(d) in the figure are preprocessing, in which the same processes as our dataset construction are applied to get image patches of 150 × 150 pixels with crossing point candidates.

Crossing point validation

Both crossing point validation and severity grading are classification problems, whereas validation is easier because the label distribution is more balanced and the differences between real and false crossing points are more obvious. We find that commonly used classification models, such as [18][19][20], work well for our validation task (refer to the Experiments and Results Section).

Severity grade prediction

The severity grade prediction task is much more challenging: Firstly, the label distribution is highly biased. For example, samples with the none label account for 68% of the total samples, while ones with the severe label only take up 3%. Secondly, the difference among samples with different labels may not be clear enough. Even medical doctors may make diverse decisions on a single image patch. For such classification tasks with ambiguous or imbalanced classes, the focal loss [15] has been used, which makes a model more aware of hard samples than easy ones. The focal loss introduces a hyperparameter γ, on which a model's performance depends significantly. Tuning this hyperparameter is extremely important yet computationally expensive [21]. A greater γ may make the model focus too much on hard samples, spoiling the accuracy on other samples, while a smaller γ may decrease its ability to classify hard samples. We propose a multi-diagnosis team network (MDTNet) to address the aforementioned problems in severity grade prediction. As shown in Fig 3, MDTNet consists of three modules, i.e., a base module, a focal module, and a fusion module. The base and focal modules have multiple sub-models, and all of them take the same image patch as input. The difference between the sub-models in the base and focal modules is the losses: ones in the base module adopt the cross entropy (CE) loss while ones in the focal module use the focal loss. These sub-models are trained independently with their respective losses. The fusion module concatenates all features (i.e., the outputs of the second-to-last layers of the sub-models) into a single vector, which is then fed into two fully-connected layers to make the final prediction. The focal loss is originally designed for object detection [15], defined as FL(y, t) = −Σ_l t_l (1 − y_l)^γ log(y_l), where t is the one-hot representation of the label and y is the softmax output from a model (t_l and y_l are the l-th entries of t and y); γ is a hyperparameter to weight hard examples. The focal loss reduces to the CE loss when γ = 0, and a larger γ weights hard examples more. One possible criticism of the focal loss is its sensitivity to γ. We therefore propose to ensemble sub-models with different γ's.
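A minimal PyTorch sketch of the class-weighted focal loss described above follows. It is not the authors' released implementation; the weighting α_l = ln N_l / ln N and the reduction to cross entropy at γ = 0 follow the text, while everything else (shapes, reduction, the dummy batch) is an assumption for illustration.

```python
import torch
import torch.nn.functional as F

def class_weights(counts):
    """alpha_l = ln(N_l) / ln(N), where N is the total number of samples."""
    counts = torch.as_tensor(counts, dtype=torch.float32)
    return torch.log(counts) / torch.log(counts.sum())

def weighted_focal_loss(logits, target, gamma=2.0, alpha=None):
    """Multi-class focal loss; reduces to (weighted) cross entropy when gamma = 0."""
    log_prob = F.log_softmax(logits, dim=1)               # log y_l
    prob = log_prob.exp()
    one_hot = F.one_hot(target, logits.size(1)).float()   # t_l
    loss = -one_hot * (1.0 - prob) ** gamma * log_prob
    if alpha is not None:
        loss = loss * alpha.to(loss.device)               # per-class weight alpha_l
    return loss.sum(dim=1).mean()

# Example with the label counts reported for the severity task (none, mild, moderate, severe).
alpha = class_weights([1177, 816, 457, 57])
logits = torch.randn(8, 4)            # a dummy batch of 8 patches
target = torch.randint(0, 4, (8,))
print(weighted_focal_loss(logits, target, gamma=2.0, alpha=alpha))
```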
The hypothesis behind ensembling different γ's is that different γ's may rely on different cues for prediction, and aggregating the respective features may help in improving the final decision. This is embodied in the focal module. The same idea can also be applied to different network architectures, embodied in the base module. These sub-models thus provide diagnostic features that may complement each other. To cope with the imbalanced class distribution, we adopt class weighting [22,23]. We multiply each term (i.e., each l) in the CE/focal loss by the weight α_l = ln N_l / ln N, where N and N_l are the numbers of all samples and of samples with the label corresponding to the l-th entry of t, respectively. We pre-train the sub-models using their own classifiers and losses, and then freeze their weights to train the additional two fully-connected layers for the final decision.

Data augmentation

We adopt extensive data augmentation. During the training process, the input images have a 50% chance of receiving each operator in Fig 4. Among them, (b)-(h) are used for shape modification, changing the locations and the shapes of the attention areas of the deep learning models; (i)-(k) provide variety in imaging quality by blurring or adding random noise; (l) represents sensor characteristics of color (hue and saturation).

Implementation

For sub-models in the base module, we used ResNet [18], Inception [20], and DenseNet [19]. In the focal module, DenseNet models with γ = 1, 2, or 3 were used. All these models are pre-trained on the ImageNet dataset [24]. The fully-connected layers in the fusion module are followed by the ReLU nonlinearity. For optimization, Adam [25] was adopted with a learning rate of 0.0001. Models are trained on the training set, and the weights with the highest performance on the validation set are selected as the best models, which are then evaluated on the test set.

Performance of base models

We first evaluated the performance of the base module's sub-models for the crossing point validation and severity grade prediction tasks. For comparison, we also give the results of models without pre-training (w/o PT) and without data augmentation (w/o DA), as well as models using only the green channel (GC Only). The crossing point validation performances are shown in the left part of Table 1. We use two metrics, precision and recall, together with the running time to show the timing performance. We can see that pre-training and data augmentation can improve the overall performance of the crossing point validation. The Inception model with PT and DA achieved the best recall and the second-best precision. Note that PT and DA will not change the running time of the model because they do not modify the network structure. The right part of Table 1 gives the results of the base models on the severity grade prediction task, and Table 2 presents the performance of MDTNet and models using the focal loss. In addition to the classification accuracy, we also adopt Cohen's kappa, which can measure the agreement between the ground-truth labels and predictions. We can see that, compared with the focal loss models, DenseNet can achieve higher overall accuracy with the CE loss. However, the combination of different models, different losses, as well as different γ values can boost the performance. MDTNet achieved the highest performance in this experiment when n = 3. To better analyze the severity grade prediction performance, we present the confusion matrices in Fig 5.
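The fusion step can be sketched as follows in PyTorch. This is a simplified illustration of the architecture as described, not the authors' code; the sub-model wrappers (assumed to return penultimate-layer features), the hidden dimension, and the feature sizes are assumptions.

```python
import torch
import torch.nn as nn

class MDTNetFusion(nn.Module):
    """Concatenate penultimate-layer features of frozen sub-models and classify with two FC layers."""

    def __init__(self, sub_models, feature_dims, num_classes=4, hidden_dim=256):
        super().__init__()
        self.sub_models = nn.ModuleList(sub_models)
        for m in self.sub_models:              # sub-models are pre-trained and frozen
            for p in m.parameters():
                p.requires_grad = False
        self.fc1 = nn.Linear(sum(feature_dims), hidden_dim)
        self.fc2 = nn.Linear(hidden_dim, num_classes)

    def forward(self, patch):
        with torch.no_grad():
            features = [m(patch) for m in self.sub_models]  # each returns its penultimate features
        fused = torch.cat(features, dim=1)
        return self.fc2(torch.relu(self.fc1(fused)))
```

Only the two fully-connected layers are trained in this stage, which matches the pre-train-then-freeze procedure described above.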
From the confusion matrices in Fig 5, it can be seen that, as the number of underlying sub-models increases, MDTNet gains classification ability. The ground-truth labels are respectively mild and moderate, and both were correctly predicted. We can see the artery runs over the vein, deforming the vein. Differently from the example in (a) and (b), the model looks at the crossing points and looks for possible shape deformations and their extent.

Conclusion

The paper presents a method to automatically classify arteriolosclerosis severity from retinal images following ophthalmologists' diagnostic process. To improve the accuracy for ambiguous and imbalanced samples, we design the multi-diagnosis team network (MDTNet), which can jointly consider diagnostic cues from multiple sub-models, without tuning the hyperparameter for the focal loss. Experimental results show the superiority of our method, achieving over 91% accuracy. Most importantly, the whole process can be checked to see how the grading was determined, as it is designed to be a step-by-step approach replicating ophthalmologists' diagnostic process. Therefore, the proposed method can serve as a supporting tool for experienced ophthalmologists to efficiently grade the images in a consistently reproducible manner. A quality checklist [27] for the proposed deep learning method is shown in Table 3.

Data curation: Liangzhi Li, Bowen Wang.

Table 3. Quality checklist (item, with the relevant pages in parentheses):
The clinical problem in which the model will be employed is clearly detailed in the paper. (2-3)
The research question is clearly stated. (3)
The characteristics of the cohorts (training and test sets) are detailed in the text. (3-4)
The cohorts (training and test sets) are shown to be representative of real-world clinical settings. (3-4)
The state-of-the-art solution used as a baseline for comparison has been identified and detailed. (7-8)
Data and optimization:
The origin of the data is described and the original format is detailed in the paper. (3-4)
Transformations of the data before it is applied to the proposed model are described. (6-7)
The independence between training and test sets has been proven in the paper. (4)
Details on the models that were evaluated and the code developed to select the best model are provided. (7)
Is the input data type structured or unstructured? ☑ Structured
The primary metric selected to evaluate algorithm performance, including the justification for selection, has been clearly stated. (7-8)
The primary metric selected to evaluate the clinical utility of the model, including the justification for selection, has been clearly stated. (7-8)
The performance comparison between baseline and proposed model is presented with the appropriate statistical significance. (7-9)
Model examination (Part 5):
Examination technique. (7-9)
A discussion of the relevance of the examination results with respect to model/algorithm performance is presented. (7-9)
A discussion of the feasibility and significance of model interpretability at the case level if examination methods are uninterpretable is presented.

Investigation: Liangzhi Li.
2020-11-10T02:00:59.039Z
2020-11-07T00:00:00.000
{ "year": 2023, "sha1": "eaf90345808632e730c67478cdd8aaf0ab2cf4f0", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/digitalhealth/article/file?id=10.1371/journal.pdig.0000174&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "472ba2caaf716bd10a48960a9f43144b5b7070b9", "s2fieldsofstudy": [ "Medicine", "Computer Science" ], "extfieldsofstudy": [ "Medicine", "Engineering", "Computer Science" ] }
204126019
pes2o/s2orc
v3-fos-license
Whole-plant optimality predicts changes in leaf nitrogen under variable CO2 and nutrient availability

Vegetation nutrient limitation is essential for understanding ecosystem responses to global change. In particular, leaf nitrogen (N) is known to be plastic under changed nutrient limitation. However, models can often not capture these observed changes, leading to erroneous predictions of whole-ecosystem stocks and fluxes. We hypothesise that an optimality approach can improve representation of leaf N content compared to existing empirical approaches. Unlike previous optimality-based approaches, which adjust foliar N concentrations based on canopy carbon export, we use a maximisation criterion based on whole-plant growth and allow for a lagged response of foliar N to this maximisation criterion to account for the limited plasticity of this plant trait. We test these model variants at a range of Free-Air CO2 Enrichment (FACE) and N fertilisation experimental sites. We show a model solely based on canopy carbon export fails to reproduce observed patterns and predicts decreasing leaf N content with increased N availability. However, an optimal model which maximises total plant growth can correctly reproduce the observed patterns. The optimality model we present here is a whole-plant approach which reproduces biologically realistic changes in leaf N and can thereby improve ecosystem-level predictions under transient conditions.

• We test these model variants at a range of Free-Air CO2 Enrichment (FACE) and N fertilisation experimental sites.
• We show a model solely based on canopy carbon export fails to reproduce observed patterns and predicts decreasing leaf N content with increased N availability. However, an optimal model which maximises total plant growth can correctly reproduce the observed patterns.

In this study, we test the hypothesis that changes in leaf N concentrations can be explained by two main drivers: (1) the limitation to growth by N availability caused by an increase in leaf N content and, (2) the increase in carbon export, and thus growth, through an increase in leaf N content. The first driver is the one used by existing models to describe variations in leaf N, while the second is commonly used in optimality approaches. To explore these two drivers we use four different model setups for representing leaf N: fixed leaf N content, empirical (which includes only the N availability criterion), optimal C export (which includes only the maximum C export criterion) and optimal growth (which includes both).

QUINCY represents fully coupled C, N and P as well as water and energy cycles (Fig. 1). The model employs a multi-layer canopy scheme, which includes a representation of photosynthesis and canopy conductance. The structural N fraction is expressed as a linear function of leaf N content. The N content of leaves and fine roots at each half-hourly timestep ∆t is updated given a direction variable, D_N (unitless), and a parameter representing the maximum rate of change, δ_N (day−1). The N_leaf values here refer to canopy average values of leaf N content. To conserve the mass balance, new tissue is added with a leaf N equal to that calculated above, while old tissue changes its N content towards the target C:N by recycling N through the labile pool at a timescale of 10 days. All values of leaf N content below refer to this target average leaf N value.
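The update equation itself is not reproduced in this excerpt; the sketch below therefore shows only one plausible form of a lagged, direction-based update consistent with the description (a direction variable D_N bounded in [−1, 1] and a maximum relative rate of change δ_N per day), and it should be read as an assumption rather than the actual QUINCY formulation.

```python
def update_leaf_n(n_leaf, d_n, delta_n=0.02, dt_days=0.5 / 24):
    """Move canopy-average leaf N towards its target by at most delta_n (per day) per half-hourly step.

    n_leaf   : current target leaf N content
    d_n      : direction variable in [-1, 1] (sign and fraction of the maximum allowed change)
    delta_n  : maximum relative rate of change per day (illustrative value, not a calibrated parameter)
    dt_days  : timestep length in days (half-hourly = 0.5 / 24)
    """
    d_n = max(-1.0, min(1.0, d_n))
    return n_leaf * (1.0 + d_n * delta_n * dt_days)

# Example: nudging leaf N upwards for one half-hourly step.
print(update_leaf_n(n_leaf=2.0, d_n=0.8))
```

The key property this sketch tries to capture is the lag: the trait moves towards, rather than jumps to, whatever value the optimality criterion currently favours.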
197 The C:N ratio of fine roots is represented as being directly proportional 198 to that of leaves (Zaehle & Friend, 2010 can take values between -1 and 1 but the same notation has been kept for 227 consistency. the optimal leaf N for which net carbon export is maximised through a 236 numerical approach, as follows. 237 We calculate the direction in which the leaf N content needs to be mod- We then calculate the net assimilation A n,δ given the new N leaf,δ as: where f resp,air and f resp,soil are the maintenance respiration rate per unit 243 N given the temperature of the air and soil respectively. A g,δ is the gross 244 canopy assimilation given the new leaf N, calculated for each canopy layer. The A n values are calculated given the average meteorological condi-251 tions over a given time period, τ N (30 days). Note that this approach means 252 that plants do not reach absolute optimality but there is a rate of change 253 in the optimal direction, with the parameter δ N denoting the maximum 254 amount by which the leaf N content can change in a timestep (Table 1). 255 We then calculate the direction variable D N so that the new actual leaf 256 N increases for a positive return in A n and decreases for a negative return: The optimal direction resulting from this criterion will vary with en- In addition to the optimal C export criterion, here we introduce an ad-264 ditional condition for calculating the D N direction variable, based on the 265 relationship between the potential N-limited growth, N growth , and the po-266 tential C-limited growth, C growth . The C-limited growth can be calculated 267 as: where τ growth is the timescale of plant growth, in this case equal to one 269 year, and C labile is the C content of the labile pool, i.e. the C available for 270 immediate growth. Note that the NPP value here is not the same as A n in 271 previous equations as it also takes into account growth respiration and stor-272 age fluxes into the reserve pool. The N-limited growth term is calculated given the available N and the stoichiometry of new tissue, χ CN growth : Here, f N up refers to the N uptake by the roots, k N resorb is a parameter Table S1. Both C and N availability is calculated at each timestep and In addition to these terms, the N-limited growth N growth , includes an ad-284 ditional flux, the nutrients reabsorbed before leaf shedding. All variables 285 are averaged over a period of τ N , as above. 286 Given the availability of C and N, the direction variable D N is then calculated as: For the purpose of the optimal C export and optimal growth model variants, We use a numerical approach to solve the optimality problem for two 301 reasons. The first is that, while using a TBM with an explicitly layered 302 canopy and complex representation of photosynthesis produces more real-303 istic predictions, it also means that the problem is non-linear and has no 304 analytic solution. The second reason is that one of the central concepts of 305 our approach is that we are not solving for the leaf N values that gives the 306 actual maximum C export or growth at any point in time but rather assume 307 that plants tend towards equilibrium, given physiological and biochemical 308 constraints to their rate of change. 350 We extract all biomass and leaf N response data from the respective exper-351 iment papers where available. See Table S2 for details for each site. 
For Both the optimal C export and optimal growth introduce two new, 410 PFT-independent parameters, in addition to those present in the standard 411 QUINCY model, δ N and τ N . In comparison, the empirical model requires 412 two PFT-specific parameters for leaf CN ratio bounds, two empirical pa-413 rameters that drive the shape of the curve and the two parameters it has 414 in common with the optimal variants (Table 1). 415 To test the model stability to variations in parameters, we perform a 416 parameter sensitivity analysis, detailed in Section S1 of the Supplementary 417 material 1. tion values compared to photosynthesis, thereby shifting the NPP response 437 curve ( Fig. S3 (a) and (b)) and increasing the optimal leaf N concentration. 438 As there is a slight increase in LAI predicted by all model variants with 439 an increase in soil N (Fig. 3(c)), there is a resulting slight increase in leaf 440 N predicted by the optimal C export variant with increased N availability. therefore a lower overall growth (Fig. 3(b)), demonstrating that a canopy 451 C export only optimal approach does not produce physiologically realis-452 tic predictions. The optimal growth variant results in the highest NPP 453 for most of the soil N range, as expected from the optimal criteria that 454 maximises growth. At high soil N however, the optimal C export variant predicts a slightly higher NPP, as its higher N demand caused by the high 456 leaf N content, can be met by the available soil N. 457 In terms of predictions under elevated CO 2 (Fig. 3 (d) -(f)), both the 458 empirical and optimal growth versions show a decrease in leaf N, strongest 459 at low N availability, while the optimal C export shows only a very small 460 change. The optimal C export model also shows an overall less pronounced 461 response to elevated CO 2 than the empirical and optimal growth versions. observations also show a decrease (Fig. 4(c)), although not so pronounced 479 and increasing slightly at the end of the experiment. While both model 480 variants overestimate the magnitude of the change, the optimal growth 481 does so to a lesser degree (-24.1% empirical and -14.3% optimal growth, 482 Fig. 5(c)). In the case of the ORNL site, the optimal growth variant gives 483 the prediction closest to observations (observed -13.6 %, empirical -6.1 %, 484 optimal growth -15.8 %, Fig. 5(d)). 485 All model variants predict similar NPP at ambient CO 2 , generally un- 486 derestimating observed values at both sites ( Fig. 4(e) and (f)). The optimal 487 C export variant predicts an even lower NPP at ORNL, caused by the pre-488 dicted high leaf N value, which leads to a higher growth demand for N and 489 therefore a lower resulting growth. optimal growth 17.4 %). All models predict a lower NPP response than 500 expected for the entire duration of the ORNL experiment (observed 12.0 501 %, empirical 4.5 %, optimal growth 6.3 % at the end of the experiment, 502 Fig. 5(d)). 503 All model variants predict a lower response in total canopy C at both the model predicts a low change in LAI and therefore a corresponding low 524 response in leaf N (Fig. 7(c) and (d)). This is expected from the model 525 assumptions (Fig. 2) and discussed for the at-equilibrium simulations. The Fig. 7(c)). However, they show a 535 better fit for the high N addition (observed 29.9 %, empirical 30.9 %, opti-536 mal growth 26.0 %, Fig. 7(d)). All variants underestimate the magnitude 537 of observed CWI for the control plot ( Fig. 6(d)). 
538 To test the generality of the findings at the Harvard Forest site, we run 539 our model for a selection of forest N fertilisation sites (Fig. 8). The magni-540 tude of the predicted growth response for both the empirical and optimal 541 growth model variants is linked to the average ambient temperature of the 542 site, with a stronger response at colder sites ( Fig. 8(a) and (b)). This is 543 because the soil N availability, and therefore plant N limitation status is 544 strongly dependent on temperature. The temperature dependency of plant 545 response to N addition is present in reality, however this relation between 546 observations and temperature is less evident than in the case of the model. responses for the colder sites ( Fig. 8(a)). On the other hand, the optimal 552 growth variant has a tendency to underestimate the biomass response, es-553 pecially for the higher observed responses (Fig. 8(a)). It is worth noting 605 It is important to note that the model has not been calibrated to any of 606 the sites used in this study. In fact, one of the advantages of the optimality 607 approach is that the property considered optimal, in our case leaf N content, 608 becomes an emergent property of the model. This means that optimal 609 models are more general and portable across sites and ecosystems. has shown that models that include an empirical variation in leaf N content 646 tend to overestimate the decrease in leaf N, something which this study also 647 shows, specifically at the Duke site (Fig. 4(c)). The optimal growth model 648 also predicts a too strong response in leaf N, both at the Duke FACE and 649 the Harvard N addition sites but to a lesser extent. From the theoretical, 650 at-equilibrium results shown in Fig. 3 we can see that the differences in 651 response to elevated CO 2 between the empirical and the optimal growth 652 vary with soil N availability so that it is possible that the mismatch be- as this depends strongly on LAI (Fig. 3), which will vary strongly with 724 changes in SLA. 725 Our results highlight a number of key discrepancies in model predic-
Identification of DIR1-Dependant Cellular Responses in Guard Cell Systemic Acquired Resistance After localized invasion by bacterial pathogens, systemic acquired resistance (SAR) is induced in uninfected plant tissues, resulting in enhanced defense against a broad range of pathogens. Although SAR requires mobilization of signaling molecules via the plant vasculature, the specific molecular mechanisms remain elusive. The lipid transfer protein defective in induced resistance 1 (DIR1) was identified in Arabidopsis thaliana by screening for mutants that were defective in SAR. Here, we demonstrate that stomatal response to pathogens is altered in systemic leaves by SAR, and this guard cell SAR defense requires DIR1. Using a multi-omics approach, we have determined potential SAR signaling mechanisms specific for guard cells in systemic leaves by profiling metabolite, lipid, and protein differences between guard cells in the wild type and dir1-1 mutant during SAR. We identified two long-chain 18 C and 22 C fatty acids and two 16 C wax esters as putative SAR-related molecules dependent on DIR1. Proteins and metabolites related to amino acid biosynthesis and response to stimulus were also changed in guard cells of dir1-1 compared to the wild type. Identification of guard cell-specific SAR-related molecules may lead to new avenues of genetic modification/molecular breeding for disease-resistant plants. INTRODUCTION Since the dawn of agriculture, epidemics of plant pathogens have caused devastating impacts to food production. The plant bacterial pathogen Pseudomonas syringae (including more than sixty known host-specific pathovars) infects broad-ranging and agriculturally relevant plants (Saint-Vincent et al., 2020). Although it was first isolated from lilac (Syringa vulgaris) in 1899, strains of P. syringae are found in many important crops, including beans, peas, tomatoes, and rice (Saint-Vincent et al., 2020). P. syringae pv tomato (Pst) is a pervasive phytopathogenic bacterium that causes damage to a wide range of host crop species. It has been a useful model pathogen for studying host immune response since the sequencing and annotation of the 6,397,126-bp genome and two plasmids which was funded by the NSF Plant Genome Research Program (Hirano and Upper, 2000). Pst infects leaves for chemical nutrients such as carbohydrates, amino acids, organic acids, and ions that are leaked to the leaf apoplast during phloem loading/unloading (Hirano and Upper, 2000). Pst causes bacterial brown spot disease in fruit and leaves, damaging crop plants. However, more devastating than brown spot is the unique ability of Pst to nucleate supercooled water to form ice. In the species of P. syringe exhibiting the ice nucleation phenotype, ice-nucleation proteins on the outer membranes of bacterial membranes form aggregates that arrange water into arrays and promote phase change from liquid to solid. The frost-sensitive plants are injured when ice forms in leaf tissues at subzero temperature (Hirano and Upper, 2000). Pst has been used extensively to study pathogen infection in numerous host plants including tomato and Arabidopsis. The latter is a reference dicot species with a short life cycle, fully sequenced genome, and rich genetic resources, providing an ideal system to understand how plants may be modified to improve their defense and productivity. 
Systemic acquired resistance (SAR) is a long-distance plant immune response that improves the immunity of distant tissues after local exposure to a pathogen (Chester, 1933;Ross, 1966;Shah and Zeier, 2013;David et al., 2019). Mobile SAR signaling molecules reach distal tissues from pathogen-infected tissues via the plant vasculature and are perceived by cells in the systemic tissues to initiate the global SAR response (Conrath, 2006;Chanda et al., 2011). Perception of mobilized SAR signals in systemic tissue activates cellular defense responses leading to a "primed" condition in systemic, noninfected tissues. Priming enables the plant to maintain a vigilant or alarmed status by which they react faster and more effectively to pathogen attack (Conrath, 2011;Misra and Chaturvedi, 2015). Stomatal pores on leaf surfaces formed by pairs of guard cells are common entry sites for pathogenic bacteria. The specialized guard cells control the opening and closure of stomatal pores in response to environmental conditions (Melotto et al., 2008). When stomatal guard cells recognize Pst via pattern recognition receptors, stomata close within 1-2 h and reopen after 3 h. Reopening is due to a phytotoxin produced by some strains of P. syringae called coronatine (COR), which structurally mimics the active form of the plant hormone jasmonic acid-isoleucine (Melotto et al., 2008). As a primary entry site for bacteria into the plant tissue, the stomata are at the frontline in plant immune defense (Zhu et al., 2012). Our previous research showed that systemic leaves of Pst-primed wild-type (WT) Arabidopsis have smaller stomata apertures in distant leaves than mock-primed plants, and Pst does not widen stomata aperture in Pst-primed leaves, as it does in mock-primed plants (David et al., 2020). Reduced stomatal aperture of Pst-primed plants associated with reduced bacterial entry into the leaf apoplastic space and reduced bacterial proliferation (David et al., 2020). Using a 3-in-1 extraction method to obtain proteins, metabolites, and lipids from the same guard cell samples, we conducted multi-omics to identify SAR-related components in guard cells of WT Arabidopsis and a T-DNA insertional mutant of defective in induced resistance 1 (DIR1). DIR1 encodes a putative apoplastic lipid transfer protein involved in SAR. Arabidopsis plants with mutations in DIR1 exhibit WT-level local resistance to avirulent and virulent Pst, but pathogenesisrelated gene expression is abolished in uninoculated distant leaves, and mutants fail to develop SAR (Maldonado et al., 2002). Champigny et al. (2013) examined the presence of DIR1 in petiole exudates from SAR-induced Arabidopsis leaves that were injected with Pst. The exudates from the Pstinjected leaves showed the presence of DIR1 beginning at 30 h post-infection (hpi) and peaking at ∼45 hpi (Champigny et al., 2013). Interestingly, the small 7-kD DIR1 protein was also detected in dimeric form in the petiole exudates (Champigny et al., 2013). DIR1 is conserved in other land plants including tobacco and cucumber, and several identified SAR signals are dependent on DIR1 for long-distance movement, e.g., dehydroabietinal (DA) (Chaturvedi et al., 2012), azelaic acid (AzA) (Jung et al., 2009) and glycerol-3-phosphate (G3P) (Nandi et al., 2004;Adam et al., 2018). Although most of the LTPs have basic pIs, DIR1 has an acidic pI of 4.25. Martinière et al. 
(2018) found that the apoplastic environment has a more acidic pH than the cellular environment, ranging between 4.0 and 6.3, so perhaps the acidic pI of DIR1 relates to its function in a more acidic environment where it may be neutral, similar to abscisic acid which is transported in the apoplast during stress response (Cornish and Zeevaart, 1985). However, to date there is no evidence that DIR1 is transported in the apoplast. DIR1 is comprised of 77 amino acids, but despite having cystine residues characteristic with lipid transfer proteins (LTPs), it has a low sequence identity with the previously characterized LTP1 and LTP2 in Arabidopsis (Lascombe et al., 2006). Lascombe et al. (2008) used x-ray crystallography to compare the structures of DIR1 to LTP1 by examining their interactions with and without various lipid substrates, including lysophosphatidylcholines (LPCs) with various fatty acid chain lengths (LPC C14, LPC C16, and LPC C18). The results showed that DIR1 showed a greater affinity for LPCs with fatty acid chain lengths with >14 carbon atoms than LTP1. For the LPC with C18 fatty acid tails, the nonpolar C18 end was completely buried within the barrel structure of the DIR1 protein. DIR1 is unique among the LTPs due to its large internal cavity, capable of carrying two lipid molecules, and a proline-rich PxxPxxP motif (including Proline 24 to Proline 30). The Proline-rich regions of DIR1 may be involved in protein-protein interactions, as these regions are located at the surface of the protein and are fully accessible to the aqueous environment (Lascombe et al., 2008). These regions are putative candidates for docking of a protein signaling partner, or to other cell components. These features may lend themselves well to its role at a SAR-induced LTP because DIR1 is hypothesized to form a complex with azelaic acid-induced 1 (AZI1), localize to the endoplasmic reticulum and plasmodesmata Yu et al., 2013), and function as a carrier for neutral fatty acids in the apoplast. Many "box-like" LTPs, like DIR1, have a "lid"-like structure that encloses the lipid ligands inside the hydrophobic cavity during transport in the aqueous environment and have structural motifs that undergo conformational shifts to allow for lipid loading and unloading (Wong et al., 2019). In this study, a multi-omics approach was employed to identify SAR signaling mechanisms in stomatal guard cells. The results show potential involvement of DIR1 in amino acid biosynthesis and carbon metabolism in guard cells during SAR. Importantly, four lipid components with long-chain fatty acids were identified as putative DIR1-related SAR signals in guard cells. Understanding molecular changes in guard cells during SAR response not only had led to new insights into the basic function of guard cells in the plant immune response but also may facilitate biotechnology and marker-based breeding for enhanced crop defense. Plant Growth and Bacterial Culture A. thaliana WS (CS915) and dir1-1 (CS6389) seeds were obtained from the Arabidopsis Biological Research Center (Columbus, OH, USA), and plants were grown as described in David et al. (2020). Briefly, seeds were vernalized at 4°C for 2 days before planting in soil and grown in controlled environmental chambers in a short-day (8-h light/16-h dark) environment with temperatures at 22°C under light and 20°C in the dark, a lighting set at 140 μmol m −2 s −1 , and a relative humidity of 60%. 
Two-week-old seedlings were transferred into individual pots and grown until the mature rosette stage (stage 3.9) was observed at 5 weeks of age. Pseudomonas syringae pv. tomato DC3000 (Pst), the model pathogen for Arabidopsis SAR induction, was used for all the experiments and was cultured on agar media plates made using autoclaved King's B media; the antibiotics rifampicin (25 mg/l) and kanamycin (50 mg/l) were added once the solution had cooled. After overnight incubation at 28°C, Pst colonies were grown in King's B media without agar in solution overnight, pelleted by centrifugation at 6,000 g for 10 min, and used for treatment of Arabidopsis plants.

Stomata Aperture Measurements
Inoculations and stomata aperture measurements were performed as described in David et al. (2020). Briefly, one fully expanded rosette leaf was given a primary inoculation via a needleless syringe with Pst DC3000 (OD600 0.02) suspended in 10 mM MgCl2. This plant is referred to as Pst-primed. At the same time, another plant was similarly inoculated with 10 mM MgCl2 only, and it is referred to as mock-primed (Supplementary Figure S1). This mock- and Pst-priming experiment was repeated three times. At 3 days post-inoculation, one mature rosette leaf opposite to the injected leaf was detached from each plant and floated either in 10 mM MgCl2 or in Pst DC3000 (OD600 0.2, in 10 mM MgCl2) in small petri dishes for 0, 1, or 3 h of secondary treatment in the growth chamber under the light conditions. A total of 150 stomata were measured for each treatment by collecting measurements of 50 stomata from three leaves taken from three individual plants for each treatment, and then the entire experiment was replicated three times. The leaves were collected and peeled using clear tape; the peel from the abaxial side of the leaf was then placed on a microscope slide and imaged using a DM6000B light microscope (Leica, Buffalo Grove, IL, USA) at 0, 1, and 3 h post-secondary treatment. The stomatal apertures were analyzed using ImageJ software (National Institutes of Health, Bethesda, MD, USA; http://imagej.nih.gov/ij/). Two-way ANOVA and unpaired Student's t-test were conducted. P-values less than 0.05 were considered statistically significant. The data were plotted as mean with 95% confidence interval. Statistically significantly different groups were marked by different letters.

Pst DC3000 Entry and Growth Assays
Pst entry assays were used to measure how much bacteria entered the apoplast after 3 h of Pst exposure. The 5-week-old plants were Pst-primed and mock-primed as described in the previous section. Three days after the primary inoculation, one uninfected leaf opposite to the infected one was detached and floated in Pst (OD600 0.2) solution for 3 h, then washed by vortexing in sterile water containing 0.02% Silwet (Su et al., 2017) and dried with sterile Kim wipes. In the aseptic environment of a laminar flow hood, an autoclaved hole puncher was used to obtain one 0.5-cm disk from each leaf. Leaf disks were ground with a sterilized plastic tip in 100 µl sterile H2O, followed by a 1:1,000 serial dilution in sterile H2O for plating. A volume of 100 µl from each dilution was plated on agar media containing rifampicin (25 mg/l) and kanamycin (50 mg/l). Colonies were counted after 2 days of incubation at 28°C. The entire experiment was replicated three times using one leaf from three individual plants each time, for a total of nine biological replicates from three independent experiments.
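As a rough illustration of how the plate counts from the entry assay just described translate into bacterial loads and replicate summaries, here is a minimal sketch; the scaling by dilution factor and homogenate volume, and all numbers, are assumptions for illustration, not the authors' script:

```python
import numpy as np
from scipy import stats

def cfu_per_disk(colonies, dilution_factor=1_000, plated_ul=100, homogenate_ul=100):
    """Colonies counted on a plate spread with `plated_ul` of the 1:1,000 dilution,
    scaled back to the whole leaf-disk homogenate (volumes assumed from the protocol)."""
    cfu_per_ul_homogenate = colonies * dilution_factor / plated_ul
    return cfu_per_ul_homogenate * homogenate_ul

# Made-up plate counts for nine biological replicates of one treatment.
counts = np.array([42, 37, 51, 45, 39, 48, 44, 40, 36])
cfu = np.array([cfu_per_disk(c) for c in counts])

# Mean with 95% confidence interval, as used for plotting in the text.
mean, sem = cfu.mean(), stats.sem(cfu)
lo, hi = stats.t.interval(0.95, df=len(cfu) - 1, loc=mean, scale=sem)
print(f"mean cfu per disk = {mean:.2e} (95% CI {lo:.2e} to {hi:.2e})")

# Unpaired t-test against a second treatment (values equally made up).
other = np.array([12, 9, 15, 11, 10, 13, 14, 8, 12]) * 1_000
print(stats.ttest_ind(cfu, other))
```

A two-way ANOVA over priming and genotype, as described for both the aperture and the bacterial-count data, could be run analogously with statsmodels' anova_lm on a long-format table.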
The bacterial counts of nine replicates were used to calculate the mean and 95% confidence interval. The statistical analysis was done using two-way ANOVA and unpaired t-test. The Pst growth experiment determines how much bacteria grow in the apoplast after 3 days. As described above for the Pst entry experiment, nine independent replicates of 5-week-old Pst-primed and mock-primed plants were prepared, and 3 days after primary inoculation of one rosette leaf, all other uninfected rosette leaves were sprayed with Pst DC3000 (OD600 0.2) and placed under a dome for 24 h to maintain humidity. After 24 h, the dome was removed, and the infected plants stayed in the growth chamber for an additional 48 h. One opposite leaf of each plant was then detached and washed in 0.02% Silwet, and one disk was taken from the leaf to make a 1:1,000 serial dilution and plated on media. Colonies were counted to determine how much bacteria were able to grow in the apoplast. The experiment was repeated three times with three sets of nine plants, and bacterial counts were used to calculate the mean and 95% confidence interval. Statistical analysis was done using two-way ANOVA and unpaired t-test. Statistically significantly different groups were marked by different letters.

Isolation of Enriched Guard Cells for Multi-Omics Experiments
Enriched guard cell samples were prepared as described in David et al. (2021). Briefly, for each sample 144 mature leaves were collected. After removing the midvein with a scalpel, the leaves were blended for 1 min in a high-speed blender with 250 ml of deionized water and ice. The sample was then filtered through a 200-µm mesh filter. This process was repeated 3 times to obtain intact stomatal guard cells, which were collected immediately into 15-ml Falcon tubes, snap frozen in liquid nitrogen, and stored at -80°C. Guard cell viability and purity were verified by staining with fluorescein diacetate and neutral red dye, which showed that guard cells remained intact and viable. Purity of the guard cell preparation has been verified by transcript abundances of six guard cell marker proteins and chlorophyll contents (Zhu et al., 2016).

3-In-1 Extraction of Proteins, Metabolites and Lipids From Guard Cell Samples
We adapted a protocol to simultaneously extract metabolites, lipids, and proteins from a single whole leaf or guard cell sample. Briefly, a chloroform and methanol solution is added to samples that are in an aqueous isopropanol solution. This process induces the formation of two solvent layers: an upper aqueous phase containing hydrophilic metabolites and a lower organic phase containing lipids and other hydrophobic metabolites. The proteins are at the interphase. Components were normalized using internal standards that were added during the first step of extraction. Internal standards included, for proteins, 60 fmol digested bovine serum albumin (BSA) peptides per 1 µg sample protein; for metabolites, 10 µl 0.1 nmol/μl lidocaine and camphorsulfonic acid; and for lipids, 10 µl 0.2 μg/μl deuterium-labeled 15:0-18:1(d7) phosphatidylethanolamine (PE) and 15:0-18:1(d7) diacylglycerol (DG). The lipid extracts were dried under nitrogen gas to prevent oxidation and stored at −80°C. The lipid extract was later dissolved in 1 ml isopropanol for LC-MS/MS analysis. Aqueous metabolites were lyophilized and stored at −80°C, and pellets were later solubilized in 100 µl 0.1% formic acid for LC-MS/MS analysis.
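The spiked internal standards just described make peak areas comparable across samples. A minimal sketch of one common way to apply them (scaling each analyte's area to the internal-standard area); the exact normalization formula is not specified in the text, and the table and values below are assumptions for illustration:

```python
import pandas as pd

# Made-up peak areas for two lipid species plus the spiked PE(15:0-18:1(d7)) internal standard.
peaks = pd.DataFrame(
    {"sample_1": {"FA 18:0": 5.2e6, "WE 16:0/18:1": 1.1e6, "IS_PE_d7": 2.0e6},
     "sample_2": {"FA 18:0": 4.1e6, "WE 16:0/18:1": 0.9e6, "IS_PE_d7": 1.6e6}}
)

is_area = peaks.loc["IS_PE_d7"]                       # internal-standard area per sample
normalized = peaks.drop(index="IS_PE_d7") / is_area   # analyte area relative to internal standard

print(normalized)
```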
Protein was precipitated in cold 80% acetone at−20°C overnight, followed by removal of acetone using glass pipettes, and then protein samples were dried in a speedvac. Protein Digestion and LC-MS/MS Four biological replicates of mock-primed and Pst-primed guard cell samples from WT and dir1-1 genotypes were prepared for proteomic experiments. Protein samples were resuspended in 50 mM ammonium bicarbonate, reduced using 10 mM dithiothreitol (DTT) at 22°C for 1 h, and alkylated with 55 mM chloroacetamide in the darkness for 1 h. Trypsin (Promega, Fitchburg, WI, USA) was added for digestion (enzyme: sample 1 : 100, w/w) at 37°C for 16 h. The digested peptides were desalted using a micro ZipTip minireverse phase (Millipore, Bedford, MA, USA) and then lyophilized to dryness. The peptides were resuspended in 0.1% formic acid for mass spectrometric analysis. The bottom-up proteomics data acquisition was performed on an EASY-nLC 1200 ultraperformance liquid chromatography connected to an Orbitrap Exploris 480 with a FAIMS Pro instrument (Thermo Scientific, San Jose, CA, USA). The peptide samples were loaded in 5-µl injections to an IonOpticks Aurora 0.075 × 250 mm, 1.6-µm 120-Å analytical column, and the column temperature was set to 50°C with a sonation oven. The flow rate was set at 400 nl/min with solvent A (0.1% formic acid in water) and solvent B (0.1% formic acid and 80% acetonitrile) as the mobile phases. Separation was conducted using the following gradient: 3-19% B in 108 min; 19-29% B in 42 min; 29-41% B in 30 min. The full MS1 scan (m/z 350-1,200) was performed on the Orbitrap Exploris with a resolution of 120,000. The FAIMS voltages were on with a FAIMS CV (V) set at -50. The RF lens (%) was set to 40, and a custom automatic gain control (AGC) target was set with a normalized AGC target (%) set at 300. Monoisotopic precursor selection (MIPS) was enforced to filter for peptides with relaxed restrictions when too few precursors are found. Peptides bearing two to six positive charges were selected with an intensity threshold of 5e3. A custom dynamic exclusion mode was used with a 60-s exclusion duration, and isotopes were excluded. Datadependent MS/MS was carried out with a three FAIMS CV loop (-50, -65, -80). The MS/MS Orbitrap resolving power was set to 60,000 with a 2-m/z quadrupole isolation. The top speed for data-dependent acquisition within a cycle was set to 118 m of maximum injection time. The MS/MS mass tolerance was set to 10 ppm. Fragmentation of the selected peptides by higher-energy collision dissociation (HCD) was done at 30% of normalized collision energy and a 2-m/z isolation window. The MS2 spectra were detected by defining first the mass scan range as 120 m/z and the maximum injection time as 118 m. Metabolite and Lipid LC-MS/MS The untargeted metabolomic approach used the high-resolution Orbitrap Fusion Tribrid mass spectrometer (Thermo Fisher Scientific, Waltham, MA, USA) with Vanquish ™ UHPLC liquid chromatography and is described in detail in David et al. (2020). An Accucore C18 (100 × 2.1 mm, particle size 2.6 µm) column was used for metabolites with solvent A (0.1% formic acid in water) and solution B (0.1% formic acid in acetonitrile). The column chamber temperature was to 55°C. The pump flow rate was set to 0.45 ml/min. The LC gradient was set to 0 min: 1% of solvent B (i.e., 99% of solvent A), 5 min: 1% of B, 6 min: 40% of B, 7.5 min: 98% of B, 8.5 min: 98% of B, 9 min: 0.1% of B, 10 min stop run. 
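The metabolite gradient program above can be read as a set of (time, %B) breakpoints between which the pump ramps linearly. A small illustrative sketch of that reading (not instrument-control code; the hold after 9 min to the end of the run is an assumption):

```python
import numpy as np

# (time in min, % solvent B) breakpoints from the gradient described in the text.
gradient = [(0.0, 1.0), (5.0, 1.0), (6.0, 40.0), (7.5, 98.0),
            (8.5, 98.0), (9.0, 0.1), (10.0, 0.1)]  # final point: assumed hold until stop

times = np.array([t for t, _ in gradient])
pct_b = np.array([b for _, b in gradient])

def percent_b(t_min):
    """Mobile-phase %B at time t_min, assuming linear ramps between program steps."""
    return float(np.interp(t_min, times, pct_b))

for t in (2.0, 5.5, 7.0, 8.75):
    print(f"{t:>5.2f} min -> {percent_b(t):5.1f}% B")
```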
To enhance identification, an Acquire X MSn data acquisition strategy was used which employs replicate injections for exhaustive sample interrogation and increases the number of identified compounds in the sample with distinguishable fragmentation spectra. Electrospray ionization (ESI) was used in both positive and negative modes with a spray voltage for positive ions (V) 3,500 and a spray voltage for negative ions (V) 2,500. Sheath gas was set to 50, auxiliary gas was set at 1, and sweep gas was set to 1. The ion transfer tube temperature was set at 325°C, and the vaporizer temperature was set at 350°C. Full MS1 used the Orbitrap mass analyzer (Thermo Fisher Scientific, Waltham, MA, USA) with a resolution of 120,000, scan range (m/z) of 55-550, MIT of 50, AGC target of 2e5, one microscan, and RF lens set to 50%. For untargeted lipidomics, a Vanquish HPLC-Q Exactive Plus system was used with an Acclaim C30 column (2.1 mm × 150 mm, 3 µm). Solution A for lipids consisted of 0.1% formic acid, 10 mM ammonium formate, and 60% acetonitrile. Solution B for lipids consisted of 0.1% formic acid, 10 mM ammonium formate, and 90:10 acetonitrile: isopropyl alcohol. The column chamber temperature was set to 40°C. The pump flow rate was set to 0.40 ml/min. The LC gradient was set to 0 min: 32% of solvent B (i.e., 68% of solvent A), 1. USA) with the search engine SEQUEST algorithm to process raw MS files. Spectra were searched using the TAIR10 protein database with the following parameters: 10 ppm mass tolerance for MS1 and 0.02 da as mass tolerance for MS2, two maximum missed tryptic cleavage sites, a fixed modification of carbamidomethylation (+57.021) on cysteine residues, dynamic modifications of oxidation of methionine (+15.996), and phosphorylation (+79.966) on tyrosine, serine, and threonine. Search results were filtered at 1% false discovery rate (FDR), and the peptide confidence level was set for at least two unique peptides per protein for protein identification. Relative protein abundance in Pst-primed and control dir1-1 and WS guard cell samples was measured using label-free quantification in Proteome Discoverer ™ 2.4 (Thermo Scientific, Bremen, Germany). Proteins identified and quantified in all four out of four sample replicates were used. Peptides in mock-primed and Pst-primed samples were quantified as area under the chromatogram peak. Peak areas were normalized by total protein amount. The average intensity of four Pst-primed dir1-1 vs. four Pst-primed WS samples was compared as a ratio, and two criteria were used to identify significantly altered proteins: 1) increase or decrease of 2-fold (Pst-primed dir1-1/Pst-primed WS) and 2) p-value from an unpaired Student's t-test less than 0.05. For untargeted metabolomics, Compound Discover ™ 3.0 Software (Thermo Scientific, Bremen, Germany) was used for data analyses. Raw files from four replicates of dir1-1 Pst-primed and four replicates of WS Pst-primed guard cells were used as input. Spectra were processed by aligning retention times. Detected compounds were grouped and gaps filled using the gap filling node in Compound Discover that fills in missing peaks or peaks below the detection threshold for subsequent statistical analysis. The peak area was refined from normalized areas while marking background compounds. 
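The two-criterion filter described above for the label-free proteomics comparison (a mean Pst-primed dir1-1 over Pst-primed WS ratio of at least 2-fold in either direction, plus an unpaired Student's t-test p-value below 0.05) can be sketched as follows; the table layout and values are assumptions for illustration, not the Proteome Discoverer workflow itself:

```python
import numpy as np
import pandas as pd
from scipy import stats

# Made-up normalized abundances: rows = proteins, four replicate columns per genotype.
rng = np.random.default_rng(1)
data = pd.DataFrame(
    rng.lognormal(mean=10, sigma=0.3, size=(5, 8)),
    columns=[f"dir1_{i}" for i in range(1, 5)] + [f"WS_{i}" for i in range(1, 5)],
    index=[f"protein_{i}" for i in range(1, 6)],
)

dir1 = data[[c for c in data.columns if c.startswith("dir1")]]
ws = data[[c for c in data.columns if c.startswith("WS")]]

ratio = dir1.mean(axis=1) / ws.mean(axis=1)            # Pst-primed dir1-1 / Pst-primed WS
pvals = stats.ttest_ind(dir1, ws, axis=1).pvalue       # unpaired Student's t-test per protein

summary = pd.DataFrame({"ratio": ratio, "p_value": pvals}, index=data.index)
hits = summary[((summary.ratio >= 2) | (summary.ratio <= 0.5)) & (summary.p_value < 0.05)]
print(hits)                                            # proteins passing both criteria
```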
Compound identification included predicting compositions, searching the mzCloud spectra database, and assigning compound annotations by searching ChemSpider; pathway mapping to KEGG pathways and to Metabolika pathways was included for functional analysis of the metabolites. The metabolites were scored by applying mzLogic, and the best score was kept. Peak areas were normalized by the positive and negative mode internal This mass list was used for compound identification along with predicted compositions, searching the mzCloud spectra database, and assigning compound annotations by searching ChemSpider. Peak areas were normalized by median-based normalization. For both metabolomics and lipidomics, the average areas of four dir1-1 Pst-primed vs. four WS Pst-primed metabolite samples were compared as a ratio and two criteria were used to determine significantly altered metabolites or lipids: 1) increase or decrease of 2-fold (dir1-1 Pst-primed/WS Pst-primed) and 2) p-value from an unpaired Student's t-test less than 0.05. Accession Numbers and Data Repository Information The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found as follows: all protein MS raw data and search results have been deposited to the ProteomeXchange Consortium via the PRIDE partner repository with the data set identifier PXD024991. All metabolite and lipid MS raw data and search results have been deposited to the MetaboLights data repository with the data set identifier MTBLS2614. RESULTS AND DISCUSSION The altered stomatal priming response in dir1-1 is associated with increased bacterial colonization. We have previously characterized that a smaller stomatal aperture in the distant leaves of Pst-primed WT Arabidopsis improves immunity by allowing fewer bacteria to enter apoplastic spaces (David et al., 2020). The experimental design is illustrated in Supplementary Figure S1. In this study, we examined the role of DIR1 in the priming of guard cells during SAR using the dir1-1 T-DNA insertion mutant (Maldonado et al., 2002) and its WT ecotype WS. As previously reported for the Arabidopsis Columbia ecotype (Melotto et al., 2008;David et al., 2020), the basal immune response of the mock-primed WT WS stomata closed after 1 h of exposure to Pst and then reopened after 3 h. In contrast, Pst-primed WT WS leaves (distant, non-injected leaves of Pst-primed plants) did not exhibit such stomatal immune responses and maintained a small stomatal aperture during the entire period of Pst exposure, similar to that previously observed in the WT Columbia (David et al., 2020) ( Figure 1A). There was no significant difference in the stomatal aperture from the Pstprimed leaves taken at 0, 1, and 3 h after Pst exposure ( Figure 1B). However, guard cells of distant leaves of dir1-1 mutant plants showed an altered response to priming and remained more open at 0 and 3 h compared to WT. It can be noted that due to the perception of pathogen-associated molecular patterns (PAMPs), the 1-h mock-primed and Pstprimed WT and dir1-1 apertures are similar. Specifically, the average stomatal aperture of Pst-primed dir1-1 leaves was 1.99 vs. 1.67 µm in WT at 0 h. At 3 h it was 2.80 vs. 1.87 µm for dir1-1 and WT, respectively. Interestingly, mock-primed dir1-1 also showed a larger stomatal aperture at 3 h after exposure to Pst when compared to mock-primed WT with an average of 3.60 and 2.69 µm, respectively ( Figure 1B). 
In the dir1-1 mutant, we found that both the control (mock-primed) and Pst-primed dir1-1 stomatal apertures differed from WT stomata with the same Pst treatments (Figure 1). In control plants (mock-primed), we found that the initial (0 h) and PAMP responses (1 h) of the dir1-1 stomata to Pst exposure were not statistically different from those of WT stomata. However, at 3 h after exposure to Pst, the dir1-1 mutant displayed a wider stomatal phenotype, indicating that COR secreted from Pst had a greater effect on the dir1-1 stomata than on the WT (Figures 1A,B). The effect of priming on the stomatal aperture of dir1-1 was also different from that of WT. Intriguingly, the dir1-1 Pst-primed stomatal apertures in distant leaves at 0 h were significantly narrower than those of the control (mock-primed) dir1-1 stomata, but less narrow than the WT Pst-primed stomata. The mock-primed WT and dir1-1 stomatal apertures had no significant difference at 0 h (2.29- and 2.20-µm averages, respectively). After priming, the WT stomatal aperture decreased to 1.63 µm, but the dir1-1 stomatal aperture was reduced to only 1.99 µm, making the Pst-primed dir1-1 stomatal apertures significantly different from both the control (mock-primed) dir1-1 stomata and the Pst-primed WT stomata. The 1-h response to PAMPs from Pst was similar regardless of genotype (WT vs. dir1-1) or priming, showing the specific response of stomatal closure after PAMP perception. However, at 3 h post Pst treatment, the dir1-1 Pst-primed stomatal phenotype is different from both the dir1-1 control and the WT Pst-primed stomata. Like the stomatal phenotype seen at 0 h, the dir1-1 Pst-primed stomata had a narrower aperture (2.8 µm) than the dir1-1 mock-primed stomata (3.6 µm) but were less narrow than the WT Pst-primed stomata (1.87 µm). This suggests that although the dir1-1 mutant appears to be less resistant to the COR from Pst than the WT, it does have improved resistance to COR with priming (Figures 1A,B).

Pst entry and Pst growth are not significantly different in the mock-primed dir1-1 vs. WT plants, consistent with previous evidence that the dir1-1 mutant is defective in SAR response, but not in basal pathogen response (Maldonado et al., 2002). Importantly, the altered stomatal phenotype of dir1-1 is directly associated with Pst entry into the apoplastic spaces of the leaves and reduced stomatal immunity (Figure 1C). There was no significant difference in the number of Pst that were able to enter the apoplast of mock-primed WT, mock-primed dir1-1, or Pst-primed dir1-1 leaves. Only Pst-primed WT stomata were able to reduce Pst entry after 3 h exposure to the bacterial pathogen (Figure 1C). Although the overall immune response of the dir1-1 mutant is reduced, dir1-1 plants are still able to mount a SAR response, as indicated by the overall decreased Pst growth after 3 days of exposure in the dir1-1 Pst-primed leaves compared to the WT leaves (Figure 1D).

FIGURE 1 | Pathogen entry and growth differences in mock-primed and Pst-primed dir1-1 mutant and wild-type (WT) Arabidopsis leaves. (A) Images showing representative stomatal apertures in a distant leaf of mock-primed and Pst-primed dir1-1 and WT Arabidopsis after 0, 1, and 3 h exposure to Pst DC3000 (please refer to Supplementary Figure S1 for the experimental design). (B) Quantitative measurements of 150 stomata from three independently replicated experiments. Two-way ANOVA and unpaired Student's t-test were conducted. P-values less than 0.05 were considered statistically significant. The data were plotted as mean with 95% confidence interval for convenient visualization of statistical significance. Statistically significantly different groups were marked by different letters. (C) Pst DC3000 entry results obtained from nine biological replicates of Pst-primed and mock-primed dir1-1 and WT plants from three independently replicated experiments. The data are presented as mean with 95% confidence interval. (D) Pst DC3000 growth at 3 days post Pst exposure. The results were obtained from nine biological replicates of Pst-primed and mock-primed plants. The data are presented as mean with 95% confidence interval. cfu, colony-forming unit.

dir1-1 is deficient in both local and systemic guard cell immune responses. Although SAR has largely been studied at the leaf or whole-plant level, we have recently shown evidence that SAR affects guard cell response to the bacterial pathogen Pst (David et al., 2020). DIR1 is required for movement of several chemically diverse SAR signals including DA, G3P, AzA, and possibly MeSA (Park et al., 2009; Adam et al., 2018). As we have recently reported stomatal movement and guard cell molecular changes underlying stomatal SAR responses (David et al., 2020), here we first characterized the stomatal movement phenotype of the dir1-1 mutant versus WT in response to Pst. Results from our work and previous studies (Melotto et al., 2008; Pang et al., 2020) clearly showed that stomatal guard cells from different ecotypes of Arabidopsis (WS and Columbia) exhibited similar basal immune responses. After priming for 3 days, stomata in distant leaves from the WT WS had an initial narrow aperture compared to control (mock-primed) stomata, and they maintained this narrow aperture during PAMP perception at 1 h and also at 3 h after Pst treatment (Figure 1). This result is also similar to the WT Columbia plants (David et al., 2020). At 3 h after exposure to Pst, the dir1-1 mutant displayed a larger stomatal aperture, suggesting that COR secreted from Pst had a greater effect on the dir1-1 guard cells than on the WT. The effect of priming on the dir1-1 stomata was also different from that on the WT stomata. The dir1-1 Pst-primed stomatal apertures at 0 h were narrower than those of the mock-primed dir1-1, but less narrow than the WT Pst-primed stomata. At 3 h post Pst treatment, the dir1-1 Pst-primed stomatal aperture is smaller than the mock-primed one, but less narrow than the WT Pst-primed. The altered stomatal aperture of dir1-1 directly associates with Pst entry into the apoplastic space (Figure 1). Clearly, although the dir1-1 mutant appears to be less resistant to the COR (from Pst) than the WT, it does have improved resistance after priming. This result is consistent with previous literature, which showed a partial SAR-competent phenotype of dir1-1 (Champigny et al., 2013). Although the partially SAR-competent dir1-1 was able to reduce Pst growth, it did not decrease the entry of Pst via the stomatal pores. Therefore, dir1-1 is deficient in both local and systemic guard cell immunity.

Differentially abundant proteins in the Pst-primed dir1-1 and WT guard cells are observed. Proteomic analysis of WT versus dir1-1 Pst-primed guard cell samples taken from distal leaves 3 days after Pst treatment identified 2,229 proteins, each with more than one unique peptide (1% FDR).
Of the identified proteins, 155 showed differential abundances in the Pst-primed WT guard cells compared to the dir1-1 guard cells, with 25 increased in abundance and 130 decreased in abundance, by >2-fold and with a p-value <0.05 (Figure 2A). Of the differentially abundant proteins in dir1-1 Pst-primed versus (vs.) WT Pst-primed, only seven were differentially abundant in dir1-1 mock-primed vs. WT mock-primed, indicating that most changes in protein abundance were due to the SAR response, rather than to genotype differences. Of the 155 differential proteins, 76 were mapped to the Arabidopsis KEGG pathway. Again, only three of the 76 were differentially abundant in dir1-1 mock-primed vs. WT mock-primed. They were phosphoribosylformylglycinamidine cyclo-ligase (mapped to purine metabolism and biosynthesis of secondary metabolites), a vacuolar-sorting protein (in the endocytosis pathway), and a 40S ribosomal protein (in the ribosome pathway). Interestingly, about 88% of the identified proteins in this study could be found in previously published guard cell transcriptomics and proteomics papers (Supplementary Table S1), highlighting that highly enriched guard cell samples were used in this study.

Carbon metabolism-related proteins included fructose-bisphosphate aldolase 3 (FBA3), an enzyme involved in the reversible cleavage of fructose-1,6-bisphosphate into dihydroxyacetone phosphate (DHAP) and glyceraldehyde-3-phosphate (GA3P), and two triosephosphate isomerases (TIM and TPI) that catalyze the reversible isomerization between DHAP and GA3P. These three enzymes exhibited 2-fold decreases in the dir1-1 Pst-primed guard cells compared to WT. Because of the overlap of the carbon metabolism and amino acid biosynthetic KEGG pathways, some differentially abundant proteins were involved in both biological processes, including a pyruvate kinase family protein (PKPα) and an enolase (LOS2). Both were decreased more than 2-fold in dir1-1 Pst-primed guard cells compared to WT Pst-primed (Figure 3).

The second largest group of differential proteins is related to amino acid metabolism and other pathways, with 32 differential proteins between dir1-1 vs. WT Pst-primed guard cells. Some of these proteins are also identified in the KEGG biosynthesis of secondary metabolites. For example, maternal effect embryo arrest 32 (MEE32) is a putative dehydroquinate dehydratase and putative shikimate dehydrogenase. It is found in multiple KEGG pathways including biosynthesis of amino acids, metabolic pathways, phenylalanine, tyrosine, and tryptophan biosynthesis, and biosynthesis of secondary metabolites. Another example is aconitase 2 (ACO2), which is also found in multiple KEGG pathways, e.g., biosynthesis of secondary metabolites, carbon metabolism, 2-oxocarboxylic acid metabolism, glyoxylate and dicarboxylate metabolism, biosynthesis of amino acids, citrate cycle (TCA cycle), and metabolic pathways. Amino acid biosynthesis-related proteins included aberrant growth and death 2 (AGD2), which encodes a diaminopimelate aminotransferase involved in disease resistance against Pst and in lysine biosynthesis via diaminopimelate; methionine synthase 2 (MS2), cysteine synthase C1 (CYSC1), and cystathionine beta-lyase (CBL), which are all involved in cysteine and methionine biosynthesis; and an acetylornithine deacetylase involved in arginine biosynthesis.
All mentioned amino acid biosynthesisrelated proteins were decreased more than 2-fold in dir1-1 Pstprimed guard cells compared to WT Pst-primed (Figure 3). Differentially abundant proteins involved in redox pathways included glutathione synthetase 2 (GSH2) and glutathione S-transferase TAU 20 (GSTU20) related to redox signaling (Mallikarjun et al., 2012). A pathway enrichment analysis was conducted for the differentially abundant proteins using AGRIGO Singular Enrichment Analysis (SEA) (Du et al., 2010) (Supplementary Figures S2 and S3). A graphical representation of GO hieratical groups with all statistically significant terms classified levels of enrichment with corresponding colors. The functional enrichment was found in three general groups including response to stimulus, amino acid metabolic processes, and carbohydrate metabolic processes (Supplementary Figure S2). AGRIGO singular enrichment analysis for cellular components revealed enrichment in intracellular organelles including intracellular membranebounded organelles, plastids, and chloroplast stroma (Supplementary Figure S3). Differential metabolites in the Pst-primed dir1-1 and WT guard cells were observed. A total of 728 metabolites were identified, and 55 metabolites showed significant changes after the priming treatment in the dir1-1 versus WT guard cells, with 16 increased and 39 decreased in abundance, by > 2-fold and a p-value <0.05 ( Figure 2B). Of these differential metabolites, 34 were mapped to KEGG pathways. When grouping by biological function, the largest group of differentially abundant metabolites found in KEGG pathways was related to biosynthesis of secondary metabolites 19) ( Figure 2B). Several differential metabolites are involved in amino acid biosynthesis and hormone metabolism. For example, SA was decreased by more than 40-fold in the dir1-1 Pst-primed guard cells compared to WT samples ( Figure 4). However, it should be noted that in dir1-1 mock-primed versus WT mock-primed, SA abundance is also decreased by more than 40-fold. Metabolites involved in lysine biosynthesis were decreased more than 2-fold in the dir1-1 Pst-primed guard cells compared to WT. They included gly-leu, niacin, acetyl-leucine, and desaminotyrosine. Metabolites involved in arginine biosynthesis were also changed. For example, pyroglutamic acid decreased more than 2-fold, and aminolevulinic acid increased more than 4-fold in the dir1-1 Pstprimed guard cells compared to WT guard cells. Malic acid, which is related to carbon metabolism, was increased 1.8-fold in dir1-1 versus WT Pst-primed guard cells but was decreased by nearly 2-fold in dir1-1 vs. WT mock-primed. When malic acid in the guard cell is pumped out to the apoplast, water moves out reducing turgor pressure in the guard cells and closing the stomata (Santelia and Lawson, 2016). Proteomic and metabolomics results indicate that DIR1 affects guard cell carbon metabolism and amino acid biosynthesis during SAR. The majority of the differential proteins and metabolites were in the carbon metabolism, amino acid biosynthesis, and secondary metabolite biosynthesis pathways (Figures 2, 3). Most of the molecules were lower in the Pst-primed dir1-1 guard cells than the Pst-primed WT guard cells. These results indicate that DIR1-dependent SAR is necessary for regulation of amino acid biosynthesis and secondary metabolites in guard cells. 
It also indicates that guard cells attenuate their carbon metabolic pathways to divert resources to amino acid biosynthesis in response to priming in WT, and that this process is at least partially dependent on DIR1 in guard cells. In addition, the differential proteins enriched for plastid and chloroplast components again support alterations in carbon metabolic pathways induced by SAR. Similarly, distant leaves of A. thaliana after infection by P. syringae have shown alterations in primary metabolism, including nitrogen metabolism and amino acid content (Schwachtje et al., 2018). We propose that reorganization of primary metabolism and amino acid biosynthesis during SAR is partially dependent on DIR1. With data from both proteomics and metabolomics, the changes of at least 15 proteins were correlated with the metabolite changes in the same KEGG pathways (Supplementary Table S2). One metabolite, aminolevulinic acid, increased in Pst-primed dir1-1 compared to Pst-primed WT, but the proteins in the same KEGG pathway decreased. The rest of the proteins and metabolites showed the same trend of changes, suggesting translational-level regulation (Supplementary Table S2).

FIGURE 3 | Overview of the role of DIR1 in carbon metabolism, amino acid biosynthesis, and hormone biosynthesis in guard cells during systemic defense response. Loss of DIR1 results in altered abundance of proteins, metabolites, and lipids involved in carbon metabolism, amino acid biosynthesis, and biosynthesis of plant hormones and secondary metabolites. Proteins that were decreased in dir1-1 guard cells in the carbon metabolism pathway included FBA3, TIM, TPI, LOS2, PKPα, and ACO2. Proteins that were decreased in dir1-1 guard cells in the amino acid biosynthesis pathways included AGD2, CYSC1, CBL, MS2, MTO1, AT4G17830, and AQI, and decreased metabolites in these pathways included gly-leu, niacin, acetyl-leucine, desaminotyrosine, and pyroglutamic acid. One increased metabolite in dir1-1 guard cells in the arginine biosynthesis pathway was aminolevulinic acid. Proteins that were decreased in dir1-1 guard cells in the biosynthesis of hormones and secondary metabolites pathways included MEE32 and MOD1, and decreased metabolites and lipids in these pathways included salicylic acid, stearic acid (FA 18:0), behenic acid (FA 22:0), cetyl oleate (WE 16:0/18:1), and ethyl myristate (WE 16:0). One protein, EFE, and one lipid, FAO2 18:1, were increased in these pathways in the dir1-1 Pst-primed guard cells versus WT Pst-primed guard cells. Please refer to Supplementary Table S3.

One interesting aspect of our results is that we did not identify changes in pathogenesis-related (PR) proteins in the dir1-1 Pst-primed guard cells. Similarly, the abundance of AzA was not significantly different in the Pst-primed dir1-1 versus WT guard cells. On the other hand, the key regulatory SAR metabolite SA showed a 40-fold decrease in the Pst-primed dir1-1 vs. WT guard cells. Previously, we reported that Pst-primed guard cells in uninoculated leaves of Arabidopsis narrowed stomatal apertures and reduced entry of Pst into the leaves, and had increased SA in Pst-primed guard cells compared to mock-primed guard cells (David et al., 2020). The lower SA in the Pst-primed dir1-1 guard cells associates well with our previous findings and suggests that DIR1 is required to transmit the long-distance SAR signal to the guard cells in uninfected leaves and increase SA in the Pst-primed guard cells.
Recently, translocation of SA from primary infected tissue to distal uninfected leaves was shown to likely occur via the apoplastic space between the cell wall and the plasma membrane (Lim et al., 2016). Unlike the SAR-induced signals G3P and AzA, for which evidence exits are preferentially transported via symplastic transport and through plasmodesmata, pathogen infection resulted in increased SA accumulation in the apoplastic compartment, and SARinduced accumulation was unaffected by defects in symplastic transport via plasmodesmata (Lim et al., 2016). Mature guard cells have callose depositions that block plasmodesmata (Lee and Lu, 2011). Thus, SAR signals that can be transported via the apoplast, rather than the symplast, would logically be able to affect the guard cells during SAR, much like ABA in the apoplast can also affect guard cells (Wittek et al., 2014). Alternatively, SA could be de novo synthesized in the Pst-primed guard cells. This SA biosynthesis is also affected by DIR1 mutation. How DIR1 regulates SA biosynthesis is not known. Differential lipids in the Pst-primed dir1-1 and WT guard cells were observed. A total of 1,197 lipids were identified, and 88 lipids showed significant changes in guard cells after the priming of the dir1-1 vs. WT guard cells (with 37 increased and 49 decreased by >2fold). Of the differential lipids, 15 were mapped to KEGG pathways and their biological functions largely fell into two categories: biosynthesis of fatty acids and biosynthesis of secondary metabolites. Notably these lipids included FAO2 18:1, isoleukotoxin diol (DiHOME) involved in linoleic acid metabolism (a precursor for jasmonic acid). It was increased 2.1fold in the dir1-1 vs. WT Pst-primed guard cells. We also found two long-chain fatty acids (FA) including stearic acid (FA 18:0) and behenic acid (FA 22:0) and two wax esters (WE) including cetyl oleate (WE 16:0/18:1) and ethyl myristate (WE 16:0). They were all decreased more than 2-fold in the Pst-primed dir1-1 vs. WT guard cells (Figures 3, 4). Ethyl myristate is a long-chain fatty acid ethyl ester resulting from the condensation of the carboxy group of myristic acid with the hydroxy group of ethanol. Palmityl oleate is a wax ester obtained by the condensation of hexadecan-1-ol with oleic acid. Interestingly, both stearic acid and behenic acid were not significantly changed in the dir1-1 mock-primed vs. WT mock, indicating that this change in FA amount is due to priming, further supporting that they may be the long-chain lipid signals potentially transported by DIR1. As to the two wax esters (cetyl oleate and ethyl myristate), they were already more than 2-fold reduced in dir1-1 mock-primed vs. WT mock-primed, indicating genotypic difference rather than priming effect. Previously, we found that fatty acids were increased in the Pst-primed WT guard cells (David et al., 2020). Here we compared the levels of lipids found in Pst-primed WT guard cells to those in the dir1-1 mutant. Our goal was to identify lipids in guard cells that are dependent on DIR1 during priming. DIR1 has been characterized as an LTP, and the core of its structure forms a left-handed super helical arrangement of four α-helices building the hydrophobic central cavity. Lascombe et al. 
(2008) demonstrated that DIR1 showed a greater affinity for LPCs with fatty acid chain lengths with >14 carbon atoms and that nonpolar C18 fatty acid tails were completely buried within the barrel structure of the DIR1 protein, presumably allowing non-polar fatty acids to be transported in polar cellular environments. Here, lipidomic results revealed four longchain fatty acids associated with DIR1. The two long-chain fatty acids (stearic acid (FA 18:0) and behenic acid (FA 22:0)) and two wax esters (cetyl oleate (WE 16:0/18:1) and ethyl myristate (WE 16:0)) were all decreased > 2-fold in the dir1-1 guard cells compared to WT guard cells (Figures 3, 4). As both stearic acid and behenic acid were not significantly changed in dir1-1 mock-primed vs. WT mock-primed, this change in FA levels is likely due to priming, further supporting that they may be the long-chain lipid signals transported by DIR1. Further analysis is required to determine the relationship between DIR1 and these long-chain fatty acids. It is reasonable to propose that DIR1 may transfer stearic and behenic acid to guard cells during SAR. Previously, we identified an increase in palmitic acid and its derivative 9-(palmitoyloxy) octadecanoic acid in Pst-primed WT guard cells and proposed that fatty acids could allow for the development of lipid rafts or other alterations of membrane structure in guard cells, modulating stomatal immune responses (David et al., 2020). Plant wax esters are neutral lipids with long-chain (C16 and C18) or very-long-chain (C20 and longer) carbon structures and are mostly found in cuticles where they provide a hydrophobic coating to shoot surfaces (Li et al., 2008). Recently, the cuticle has been indicated to regulate transport of SA from pathogeninfected to uninfected parts of the plant via the apoplast during SAR (Lim et al., 2020). Lim et al. (2020) found that cuticle-defective mutants with increased transpiration and larger stomatal apertures reduced the apoplastic transport of SA and caused defective SAR response. It is interesting to note that our results demonstrate that WT stomata maintain narrow stomata apertures after priming, potentially to reduce transpiration and increase water potential, and possibly routing SA to the apoplast. The dir1-1 mutant, on the other hand, had larger stomatal apertures, perhaps resulting in defect in SA movement in the apoplast. It is not known whether the mutant has defect in cuticle structure due to the decreases of wax esters (cetyl oleate and ethyl myristate). However, since the decreased cetyl oleate and ethyl myristate in dir1-1 guard cells after priming were already >2-fold reduced in dir1-1 mockprimed vs. WT mock-primed, this was a genotypic difference, rather than a result of priming. If, as reported by Lim et al. (2020), defects in the cuticle reduce transport of SA, the reduced wax esters in dir1-1 vs. WT could explain the reduce SA in dir1-1 guard cells (both mock-primed and Pst-primed) and contribute to the SAR defect of the dir1-1 mutant. One cuticle-defective mutant was a knockout of MOD1, an enoyl-[acyl-carrier-protein] reductase which transports a growing FA chain between enzyme domains of FA synthase during FA biosynthesis (Nguyen et al., 2014;He and Ding, 2020). The mod1 mutant is defective in the key FA biosynthetic enzyme enoyl-ACP reductase and has reduced levels of multiple FA species and total lipids (Lim et al., 2020). 
Interestingly, we also found that MOD1 was >2-fold lower in dir1-1 guard cells versus WT guard cells after priming (Figure 3). This result supports our previous finding that FA synthesis plays a key role in SAR priming in guard cells (David et al., 2020). However, how DIR1 affects MOD1 and FA biosynthesis awaits further investigation.

Potential DIR1-interacting proteins are shown. Using the Interaction Viewer at the Bio-Analytic Resource for Plant Biology (BAR) (bar.utoronto.ca/eplant), localizations of DIR1 and proteins that interact with DIR1 (AT5G48485) were determined (Figure 5A). Cellular localizations of DIR1 included peroxisomes, Golgi apparatus, endoplasmic reticulum, and plasma membrane. Protein-protein interactions that have been experimentally determined, indicated by the straight green lines, occur between DIR1 and both a ubiquitin-like protein (AT1G68185) and chitin elicitor receptor kinase 1 (CERK1, AT3G21630). Based on the Araport 11 annotation, CERK1 is a LysM receptor-like kinase, has a typical RD signaling domain in its catalytic loop, and possesses autophosphorylation activity. GO biological functions of CERK1 include perception and transduction of the chitin oligosaccharide elicitor in the innate immune response to fungal pathogens. CERK1 is located in the plasma membrane and cytoplasm and phosphorylates LIK1, an LRR-RLK that is involved in innate immunity (Junková et al., 2021; Rebaque et al., 2021). However, neither the ubiquitin-like protein nor CERK1 was identified in our proteomics results (Supplementary Table S3). The GeneMANIA tool was used to predict other gene products associated with DIR1. Predicted, co-expression, and genetic interaction networks found associated gene products (Figure 5B).

In addition to DIR1, our proteomics identified several LTPs from guard cell samples, including LTP1, LTP5, LTP6, plastocyanin (PETE1), and LTPG6 (AT1G55260). LTPG6 is a glycosylphosphatidylinositol-anchored LTP involved in defense response to fungus. LTP1 (AT2G38540) is a non-specific LTP that binds calmodulin in a Ca2+-independent manner. LTP1 is specifically expressed in the L1 epidermal layer and is localized to the cell wall (Fahlberg et al., 2019). LTP1 RNAi lines are specifically defective in systemic, but not local, resistance to Pst, providing evidence that LTP1 may also play a role in SAR (Carella et al., 2017). LTP1, LTP5 (AT3G51600), and LTP6 (AT3G08770) are predicted to encode pathogenesis-related (PR) proteins and are members of the PR-14 protein family (Sels et al., 2008). The mRNA of LTP1 is cell-to-cell mobile (Bogdanov et al., 2016). PETE1 is one of two Arabidopsis plastocyanins (PETE1 and PETE2). Its mRNA expression is one-tenth of the level of PETE2. Although PETE2 is involved in copper homeostasis, PETE1 is not responsive to increased copper levels, but it may participate in electron transport during copper-limiting conditions (Weigel et al., 2003; Abdel-Ghany, 2009). DIR1 was not present in our dir1-1 knockout mutant samples, and LTP6 was significantly decreased in the dir1-1 versus WT after priming. LTP1 was increased in dir1-1 versus WT.

FIGURE 5 | Identification of potential interacting proteins with DIR1. (A) The protein interaction image was generated using the Interaction Viewer at bar.utoronto.ca/eplant. Border color indicates protein location. Green lines indicate protein and DNA interactions that have been experimentally determined. (B) The GeneMANIA tool from bar.utoronto.ca/eplant was used to predict other genes/gene products associated with DIR1 (AT5G48485). Predicted, co-expression, and genetic interaction networks found associated genes/gene products. Proteins identified in guard cell samples are circled. Circle colors indicate increased (red), decreased (green), or unchanged (blue) proteins in dir1-1 versus WT Pst-primed guard cells.

CONCLUSION
Guard cells that control stomatal aperture respond to various abiotic and biotic signals and have membrane-bound pattern recognition receptors that perceive bacterial pathogens. One neglected area of SAR research has been the role that stomatal guard cells play in SAR. This work investigates the role of the SAR-related LTP DIR1 in guard cell-specific SAR. After priming, and also after exposure to the bacterial pathogen Pst, the stomata of WT remain at a narrow aperture. In contrast, the dir1-1 mutant showed defects in stomatal closure. Based on the multi-omics data, proteins and metabolites related to amino acid biosynthesis, secondary metabolism, and response to stimulus were altered in guard cells of dir1-1 compared to WT. For example, several proteins in the methionine biosynthesis pathway and a protein related to ethylene biosynthesis were decreased in the dir1-1 Pst-primed guard cells compared to WT. It is known that ethylene is biosynthesized via methionine and that ethylene plays a role in SA-regulated stomatal closure by mediating ROS and nitric oxide (Wang et al., 2020). A putative shikimate dehydrogenase was also decreased in the dir1-1 guard cells after priming. As SA is a product of the shikimate pathway and was also lower in dir1-1 guard cells, the slowdown in this pathway could explain the defects in stomatal closure and defense observed in the dir1-1 mutant during priming. Our lipidomics results highlight a role for fatty acid signaling and cuticle wax esters in the Pst-primed guard cells, i.e., two long-chain (18C and 22C) fatty acids as putative mobile lipid signals and two 16C wax esters dependent on DIR1. These results are also associated with a decrease in MOD1 in the dir1-1 guard cells. As mod1 mutants have been shown to have cuticle defects and reduced transport of SA to distal tissue during SAR, this relates to the decreased SA in the dir1-1 guard cells. Multi-omics has shown utility in discovering DIR1-dependent molecular networks in stomatal immunity. The improved knowledge may facilitate efforts in biotechnology and marker-based breeding for enhanced plant disease resistance.

DATA AVAILABILITY STATEMENT
The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found as follows: all protein MS raw data and search results have been deposited to the ProteomeXchange Consortium via the PRIDE partner repository with the data set identifier PXD024991. All metabolite and lipid MS raw data and search results have been deposited to the MetaboLights data repository with the data set identifier MTBLS2614.

AUTHOR CONTRIBUTIONS
SC and LD conceived and designed the research; LD, JK, JN, and CD carried out all experimental work; LD conducted the data analysis, and LD and SC prepared the manuscript; SC finalized the manuscript with input from all the co-authors.

FUNDING
This material is based upon work supported by the National Science Foundation under Grant No. 1920420. This work is also supported by United States Department of Agriculture grant no.
2020-67013-32700/project accession no. 1024092 from the USDA National Institute of Food and Agriculture.
2021-12-17T14:21:21.503Z
2021-12-17T00:00:00.000
{ "year": 2021, "sha1": "9fc99eca6395f05447841fddddfdcdcc9155690a", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Frontier", "pdf_hash": "9fc99eca6395f05447841fddddfdcdcc9155690a", "s2fieldsofstudy": [ "Biology", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
244832736
pes2o/s2orc
v3-fos-license
Reducing Drought Stress in Plants by Encapsulating Plant Growth-Promoting Bacteria with Polysaccharides Drought is a major abiotic stress imposed by climate change that affects crop production and soil microbial functions. Plants respond to water deficits at the morphological, biochemical, and physiological levels, and invoke different adaptation mechanisms to tolerate drought stress. Plant growth-promoting bacteria (PGPB) can help to alleviate drought stress in plants through various strategies, including phytohormone production, the solubilization of mineral nutrients, and the production of 1-aminocyclopropane-1-carboxylate deaminase and osmolytes. However, PGPB populations and functions are influenced by adverse soil factors, such as drought. Therefore, maintaining the viability and stability of PGPB applied to arid soils requires that the PGPB have to be protected by suitable coatings. The encapsulation of PGPB is one of the newest and most efficient techniques for protecting beneficial bacteria against unfavorable soil conditions. Coatings made from polysaccharides, such as sodium alginate, chitosan, starch, cellulose, and their derivatives, can absorb and retain substantial amounts of water in the interstitial sites of their structures, thereby promoting bacterial survival and better plant growth. Introduction Drought is a major consequence of global climate change and causes decreases in microbial functions that are essential for ecosystem sustainability and crop production. Jansson and Hofmockel [1] explored the impacts of climate change on soil microorganisms and potential ways that microbes can help to mitigate the negative consequences of climate change. Drought reduces soil organic carbon decomposition, lowers microbial biomass, and causes less CO 2 production [2]. Drought has long-lasting impacts on the soil microbiota because it shifts vegetation to more drought-tolerant plant species and subsequently selects for root-associated microorganisms [3,4]. Santos-Medellin et al. [5] reported that long-term drought stress resulted in a sustained enhancement in growth-promoting Actinobacteria in the rice endosphere microbiome. Grassland studies have revealed a greater sensitivity to drought among soil bacteria than among fungi [6,7]. However, soil microorganisms have developed some strategies, such as osmoregulation, dormancy, reactivation, biosynthesis of extracellular enzymes, and biofilm production, that promote their survival under drought stress. Some bacteria, including Actinobacteria and Bacilli, conserve activity and become dormant under drought stress conditions to survive in drought-impacted soil [8,9]. Xerophytic plants are an essential source of drought-tolerant microorganisms. For example, 22 Bacillus spp. strains were isolated from the rhizosphere of guinea grass. These drought-tolerant rhizobacteria alleviated drought stress in guinea grass by the induction of proline accumulation and glutathione reductase activity [10]. Raheem et al. [11] have also isolated bacterial strains of Bacillus, Enterobacter, Moraxella, and Pseudomonas from Acacia, a xerophytic plant. Their studies revealed the ability of these bacterial strains to improve yields of wheat under drought stress. Plants exposed to drought stress conditions utilize three survival strategies: escape, avoidance, and tolerance. The ability of the plant to complete its life cycle before the onset of drought is termed drought escape. 
The escape mechanisms involve rapid plant development, the shortening of the life cycle, and selfpollination. The ability of the plant to maintain high tissue-water content, despite a reduced water content in the soil, is termed drought avoidance. Increasing water uptake from the established root system and reductions in stomatal transpiration are examples of droughtavoidance mechanisms. The ability of the plant to endure low tissue water content through adaptive traits is termed drought tolerance. Osmotic adjustment, antioxidant defense mechanisms, and increased root:shoot ratios are various mechanisms that plants utilize to tolerate the adverse effects of drought stress [12][13][14]. Association with beneficial soil bacteria is another strategy that enhances drought tolerance in plants [15]. Therefore, the direct application of plant growth-promoting bacteria (PGPB) into the soil can enhance soil properties and increase mineral fertilizer efficiency and plant nutrient acquisition. Drought is a concern that adversely affects crop yield, but it also affects the survival of beneficial microbes. Agriculturally beneficial soil microorganisms have, therefore, been encapsulated inside polymer coatings for protection against adverse environmental conditions [16,17] to improve their effectiveness in promoting plant growth under drought stress. Achieving a suitable formulation by encapsulation is a novel technology for bacterial agents, resulting in the gradual release of encapsulated bacteria into the soil, increasing the survival of bacterial agents, and thus improving their activity to reduce drought stress in plants. This subject could be a new horizon for future research. In this review article, we discuss the importance of the encapsulation of PGPB for promoting tolerance to drought stress in plants, and we summarize the current status of this research area. Plant Responses to Drought, from Morphological to Physiological Levels Plants perceive water deficit conditions in their roots, and molecular signals move from the roots to shoots [18]. These signals, which can include hydraulic signals, electric currents, calcium waves, reactive oxygen species (ROS), phytohormone movements, and hormone-like peptides, mediate drought stress responses in plants [19,20]. For example, an accumulation of abscisic acid (ABA) occurs in the vascular tissues of leaves in response to drought [21]. ABA promotes plant resistance to drought stress by regulating stomatal closure and inducing stress-responsive gene expression [22]. Similarly, cell elongation is inhibited under severe water deficiency [23], and drought stress reduces photoassimilation and the production of the metabolites required for cell division [24,25]. At the morphological level, lateral root growth is reduced under drought stress, whereas the primary root is not affected [26]. Another adaptive plant strategy is the generation of small roots with root hairs to provide a greater absorptive surface and thereby increase the uptake of available water. Hormonal cross-talk mediated by auxin, cytokinin, gibberellin, and ABA modulates root-system architecture under water stress [27]. The induction of enzymes related to root morphology has been reported under mild drought stress [28]. Plants also improve their tolerance to water-stress conditions by the formation of specialized tissues, such as a rhizodermis characterized by a thickened outer cell wall, a suberized exodermis, and reduced numbers of cortical layers [26,29]. Henry et al. 
[30] showed a decrease in the suberization and compaction of the sclerenchyma layer cells in rice plants exposed to drought stress. Drought stress influences plants throughout the whole life cycle. The severity, duration, and timing of drought stress, and the interactions between different stresses and other factors, determine the severity of the damage experienced by drought-stressed plants [31]. At the physiological level, drought reduces plant growth and development and hampers flower production and grain filling [25]. Photosynthetic rates are reduced under drought-stress conditions mainly because of stomatal closure and metabolic impairment [32]. Chlorophyll content is strongly influenced by drought stress, with changes in activities of Rubisco and other enzymes associated with photosynthesis, resulting in oxidative damage under water deficit and the loss of photosynthetic pigment content [33,34]. Water stress also influences the acquisition of nutrients by the root and their transport to shoots. Generally, drought stress induces an increase in nitrogen, a decline in phosphorus, and no definitive effects on potassium levels [35]. Nevertheless, differences are evident in the various reports of changes in nutrient uptake under water deficit. For example, potassium uptake is decreased under water stress, as reported by Hu and Schmidhalter [36], whereas the accumulation of manganese, copper, molybdenum, zinc, calcium, potassium, and phosphorus is increased in soybean under drought stress [37]. Similar to other abiotic and biotic stresses, drought stress leads to the generation of ROS and to subsequent oxidative damage in plants [38]. Plants produce antioxidant enzymes and non-enzymatic components to protect themselves against oxidative stress. Of these, superoxide dismutase, catalase, peroxidase, ascorbate peroxidase, and glutathione reductase are the most important antioxidant enzymes, while the key non-enzymatic compounds include cysteine, ascorbic acid, carotenes, and reduced glutathione [39]. A higher antioxidant capacity was reported in drought-tolerant tomato genotypes by Shamim et al. [40]. In addition to the enhanced production of antioxidants and enzymes, plants produce osmolytes and hormones at the biochemical level to improve their tolerance against drought stress. The accumulation of osmolytes, such as glycine betaine, mannitol, trehalose, and proline, is necessary for osmoprotection and osmotic adjustment under water-deficit conditions [41,42]. Proline accumulation diminishes lipid peroxidation and ROS levels to allow the maintenance of membrane integrity [43]. The application of these compatible solutes exogenously is also effective for enhancing drought tolerance in plants [44]. Plants growing under water stress can be induced to synthesize compatible solutes by the application of selenium [45]. This mineral enhances plant growth and protective enzymatic activity levels, while reducing oxidative stress damage, increasing oxidative stress under light stress, enhancing antioxidant production to prevent senescence, and regulating the water balance of the plants for tolerance of drought stress [46]. Several studies have also demonstrated that the exogenous application of silicon can improve drought tolerance in plants [39,47,48]. For example, water-stressed wheat plants fertilized with silicon showed higher relative water contents and increased shoot dry matter, compared to unfertilized control plants under water stress [49]. 
Application of the phytohormone auxin also improves plant drought tolerance by regulating root development, the functioning of ABA-related genes, and ROS metabolism [50]. ABA increases drought tolerance in plants by stimulating stomatal movement, altering root architecture, regulating photosynthesis, and promoting the expression of ABA-induced genes encoding drought-related proteins [51]. Jasmonic acid is another hormone that can improve drought tolerance in plants [52]. PGPB Mitigate the Adverse Effects of Drought on Plants The growth improvement conferred by root-colonizing plant growth-promoting rhizobacteria (PGPR) or bacteria (PGPB) has been studied in many research scenarios [53][54][55]. PGPB play an essential role in the defense of plants against biotic pests, and the role of these microorganisms against abiotic stresses is undeniable. Water scarcity is one of the threatening environmental issues arising from climate change, and drought can reduce water availability and water quality, thereby imposing negative economic impacts, both directly and indirectly, on agriculture. Water scarcity is a severe problem and is one of the main reasons for low crop yields worldwide. Production of drought-resistant cultivars with high yields and with adaptations to different geographical areas requires long-term breeding programs and genetic engineering. Therefore, the use of beneficial bacteria with known positive roles in increasing yield and stimulating plant growth makes sense in the face of biotic and abiotic stress factors. PGPB are viewed as a safe and ecologically complementary solution to the food security problem, along with traditional crop-breeding and genetic engineering. PGPB are associated with the rhizosphere and can improve crop productivity and plant tolerance against stresses through nitrogen fixation [56]. The mechanisms associated with induced systemic tolerance and improved crop tolerance to drought include antioxidant defenses, osmotic adjustment by accumulation of compatible solutes, production of 1-aminocyclopropane-1-carboxylate (ACC) deaminase and exopolysaccharides (EPS), phytohormone production (e.g., indole-3-acetic acid (IAA), ABA, gibberellic acid, and cytokinins), and defense strategies, such as the expression of pathogenesis-related genes [15,[57][58][59][60][61]. The mechanism of plant drought tolerance induced by PGPR has been described in a recent review [62]. Bacterial strains isolated from foxtail millet in a semi-arid agroecosystem were capable of alleviating drought stress in millet by producing ACC deaminase and EPS [15]. Ghosh et al. [63] reported that drought-tolerant bacteria, such as Pseudomonas aeruginosa, Bacillus endophyticus, and B. tequilensis, improved drought tolerance in Arabidopsis seedlings by the secretion of phytohormones and EPS. Metabolomics analyses of Sorghum bicolor inoculated with rhizobacterial isolates revealed the development of systemic tolerance in plants against drought [64]. A role for EPS-producing bacterial strains in the mitigation of drought stress in wheat was demonstrated by Ilyas et al. [65], who revealed that Azospirillum brasilense and B. subtilis produced appreciable amounts of EPS and osmolytes that improved plant drought tolerance. The combination of these bacterial strains resulted in the production of higher amounts of EPS and proline (an osmolyte), and changed the levels of stress-induced phytohormones.
For example, the concentration of ABA increased, whereas the concentration of other phytohormones decreased following the co-inoculation of these bacterial strains. However, seed germination, the seedling vigor index, the promptness index, and plant growth increased in response to these strains in plants under osmotic stress [65]. Medicago truncatula inoculated with Sinorhizobium sp. responded to drought stress by upregulation of translation of the jasmonic acid signaling pathway and downregulation of ethylene biosynthesis, resulting in an enhanced tolerance to drought [66]. Potato plants treated with B. subtilis HAS31 had higher contents of chlorophyll, soluble proteins, and total soluble sugars, and higher activities of catalase, peroxidase, and superoxide dismutase enzymes under drought stress, when compared to untreated drought-stressed control plants [67]. Table 1 summarizes some other studies on the effects of PGPB on several crops and their ability to reduce drought stress and induce systemic tolerance. Organisms, host crops, and reported effects listed in Table 1 include:
Pseudomonas aeruginosa, mung bean (Vigna radiata): production of ROS; increased root length, shoot length, dry weight, and relative water content; and upregulation of three drought-stress genes (dehydration-responsive element-binding protein, catalase, and dehydrin) [71].
Burkholderia phytofirmans, wheat: improved photosynthetic rate, water-use efficiency, chlorophyll content, and nitrogen, phosphorus, potassium, and protein levels in the grains [72].
Improved ability to take up nutrients and increased shoot length [74].
Azospirillum sp., wheat (Triticum aestivum): production of the plant hormone IAA, increased root growth, formation of lateral roots, and uptake of water and nutrients [75].
Pseudomonas putida, soybean (Glycine max): increased plant growth and production of gibberellins [76].
Pseudomonas fluorescens.
Encapsulation of PGPBs Encapsulation tends to stabilize cells, protect against exposure to abiotic and biotic stresses, and potentially enhance bacterial cell viability and stability during the production and storage of agriculturally important strains. It also confers additional protection during rehydration [83,84]. The encapsulation of microorganisms is one of the newest and most efficient techniques to protect bacterial cells and allow for better survival in the soil after inoculation [85]. Encapsulated bacteria can be released slowly into the soil, thereby providing long-term beneficial effects on plant growth under adverse conditions [83]. The encapsulation of PGPB has been used in agriculture to obtain a structure that promotes the protection, release, and functionalization of microorganisms, stabilizes the cells, protects against exposure to abiotic and biotic stresses, and potentially enhances PGPB viability and stability during the production, storage, and handling of their agriculturally utilized forms [84,98]. Table 2 shows the traditional carriers used for microbial inoculants. These carriers have several disadvantages, but the most important is their short-term effects. For example, formulations of B. subtilis, P. corrugata, and A. brasilense in peat or liquids have shown severe reductions in the bacterial populations [83,99], and this short-term effect has prevented any long-term impact on plant stress. Therefore, encapsulation absolutely requires the presence of a substance that is compatible with nature and that can protect bacteria from the adverse effects of stress.
Carriers, their advantages and disadvantages, and references listed in Table 2:
Peats: complex organic material; high variability, decrease in cell concentration, and adverse effects on the quality of the final product [93,100].
Liquid inoculants: direct contact between seeds and microorganisms and increased survival of bacteria on roots; decrease in bacterial survival rates [83,101].
Clays (as granules, suspensions, and powder): storage for dried inoculants (large surface area, pore size distribution, and total porosity), increased survival of rhizobia in the soil, and inaccessibility to predators [83,102,103].
Protection for PGPB must be non-toxic, preservative-free, capable of degradation in soil by microbial action, and resistant to destructive environmental factors present in the soil. Encapsulating materials must be able to maintain cell viability for different periods in the soil, preserve cell viability for three years of shelf storage, allow the progressive release of the encapsulated bacteria into the soil, be stable when stored at room temperature for extended periods, increase the number of encapsulated bacteria inoculated into the soil, and control the release of bacteria. These properties would facilitate their application by the farmer, generate an adhesive effect on seeds, and create an adequate microenvironment to preserve microbial viability and biological activity during long periods [16,83,99,101,[104][105][106]. Encapsulation of beneficial PGPB has been proposed as a suitable solution to deal with drought and salinity stresses by increasing the efficiency of PGPB and reducing costs [100,107]. Schoebitz et al. [85] reported that the formulation of the polymer mixtures used as vehicles is an essential parameter for the encapsulation of PGPB and for obtaining successful microbial inoculants [83]. Enhancement of Drought Tolerance by Encapsulation of PGPBs Drought stress is the primary reason for crop damage and losses, and many efforts are aimed at reducing or minimizing the effect of droughts. One promising strategy is to use nitrogen-fixing bacteria to decrease plant water use, as well as the negative environmental impact of chemical fertilizers [56]. A method is needed that can encapsulate the PGPB with a coating that will increase the efficacy and quality of the bioinoculants, while reducing the costs of application and the environmental impact [108]. Bacteria produce polysaccharides, proteins, and other biopolymers to form a protective biofilm that encourages community growth [109]. The encapsulation of bacteria within a matrix that mimics their natural environment is therefore an important strategy for protecting crops against abiotic stress. This matrix-focused strategy has already shown promise, as polymer-coated fertilizers are now confirmed to improve nutrient use efficiency [110] and to promote tolerance to salinity and drought stress. Different studies have shown that PGPB populations are drastically reduced when inoculated directly into the soil under adverse (drought, salinity, and metal toxicity) conditions due to loss of their biological activity and effectiveness [111,112]. Therefore, using a protective method that traps bacteria inside a coating but that still maintains their beneficial effects under adverse conditions is a significant challenge. Many studies on encapsulation have investigated drought stress, which indicates the usefulness of this method for dehydration problems.
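The case for gradual release, as opposed to direct inoculation of free cells, can be illustrated with a deliberately simplified survival model. The die-off and release rates below are assumed values chosen only for illustration and are not taken from the studies cited in this review.

```python
import numpy as np

# Toy comparison (illustrative only): free cells applied directly to dry soil
# die off exponentially, whereas encapsulated cells are released gradually and
# only start dying once they have left the protective bead.
days = np.arange(0, 113)      # e.g. a 112-day trial window
die_off = 0.08                # assumed per-day die-off rate of free cells
release = 0.03                # assumed per-day release rate from beads
n0 = 1e9                      # assumed initial inoculum (CFU)

# Direct soil inoculation: simple exponential die-off.
direct = n0 * np.exp(-die_off * days)

# Encapsulated inoculum: cells still inside beads stay protected, while
# released cells decay from their day of release onwards.
in_beads = n0 * np.exp(-release * days)
released_per_day = n0 * release * np.exp(-release * days)
survivors = np.array([
    np.sum(released_per_day[: t + 1] * np.exp(-die_off * (t - days[: t + 1])))
    for t in days
])

print(f"Day 112, direct inoculation:    {direct[-1]:.2e} CFU")
print(f"Day 112, encapsulated inoculum: {(in_beads[-1] + survivors[-1]):.2e} CFU")
```

In this sketch the protected fraction dominates late-time survival, which is the qualitative behavior the encapsulation studies above aim for; the actual kinetics depend on the polymer, soil, and strain.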
The encapsulation of PGPB in microcapsules is a crucial method for improving cell protection and for recovering and protecting plants from abiotic stresses such as drought. Figure 1 shows the goals underlying the inoculation of plants with PGPB, while Figure 2 schematically shows the mechanism of action of polymer-PGPB soil inoculants for protection of plants under drought stress [15,62,65,101,113,114]. Polysaccharides for Encapsulation of PGPBs Polysaccharides are extensively used as natural capsule materials for cell encapsulation [115]. Figure 3 shows the advantages of polysaccharides over polymers [115,116] and polymeric inoculants for formulation and encapsulation [101]. The hydrogels made of polysaccharides, such as ALG, chitosan, starch, cellulose, and their derivatives, can absorb and retain an immense amount of water in the interstitial sites of their structures. The resulting polymeric hydrogels have properties of biocompatibility, biodegradability, and natural abundance, and can be widely used in medical, agricultural, and industrial applications [117]. Polymeric hydrogels have been extensively employed in agricultural systems in the past decades for the enhancement of soil density, structure, texture, water retention, and filtration rates [118].
These hydrogels also have features that favor the carrying and release of agrochemicals [119] that can improve plant resistance to drought [117,120]. Sodium Alginate Sodium alginate (ALG) is a natural anionic polysaccharide obtained from brown algae and some bacteria. It consists of alternating units of α-L-guluronic acid and β-D-mannuronic acid linked by α-1,4-glycosidic bonds. ALG is widely used as a gelling agent in many biotechnological and medical processes and in agriculture. Stable hydrogels can be obtained under mild conditions by adding divalent metal cations (Ca2+, Sr2+, and Ba2+) to an aqueous solution of ALG. Different biologically active compounds can be trapped inside the ALG gel and then released by ALG gel degradation [121][122][123]. ALG is the most commonly used material for the encapsulation of biological control agents (PGPB) and has been extensively used to encapsulate microbial inoculants due to its simplicity of handling, viscosity, and gel-enhancing properties. Generally, ALG is safe, has a high oxygen-blocking capability when dry that does not disrupt bacterial bioactivity, has no effect on the survival of bacteria even after several days of encapsulation, and is an ecologically friendly hydrophilic material. The encapsulation of bacteria in ALG beads improves cell protection and provides a prolonged release and gradual colonization of roots [56]. Successful ALG encapsulations have been reported for bacteria associated with wheat. In important crops like wheat, the factor that most limits productivity is water availability. Drought affects the yield of wheat depending on its intensity and the phenological stage of the plant [124,125]. For example, nitrogen-fixing bacteria of the Azotobacter genus were isolated from the rhizosphere and used as an encapsulated inoculum to evaluate wheat growth under drought stress [56]. The isolated bacteria were screened for their nitrogenase activity and EPS production, and they were encapsulated using a sterile sodium alginate solution. The characteristics of bead formation (encapsulation), Azotobacter morphology, and wheat plant growth were then evaluated. A. chroococcum was encapsulated in the inoculant and improved the grain yield and harvest index of the wheat under drought stress [56]. Azotobacter, through the colonization of the plant rhizosphere and EPS production, also alleviated the adverse effects of drought stress on wheat [56,81]. The ALG-encapsulated bacteria enhanced the activity of oxidative enzymes and improved the plant growth, physiological characteristics, and water utilization efficiency under drought stress [56]. The ability of B. subtilis B26 to reduce drought stress in Brachypodium grass involves an interaction with epigenetic variation (DNA methylation), the upregulation of different drought-response marker genes, and an increase in total soluble sugars and starch. Treatment of the drought-sensitive forage grass Timothy (Phleum pratense L.) with polymer-encapsulated B. subtilis increased plant biomass, photosynthesis, and stomatal conductance under both optimum and drought conditions. The contents of sucrose, fructans, and key amino acids (asparagine, glutamic acid, and glutamine) were also increased. A pea protein isolate-calcium alginate (PPI-ALG) matrix has been evaluated as a carrier for B.
subtilis B26 cells for agricultural use, and the PPI-ALG microcapsules proved to be an excellent inoculation material for the release and protection of the inoculum population of bacteria in soil over a long period (112 days). The B. subtilis B26 cell integrity was preserved, the survival of bacterial cells was prolonged under different storage temperatures, and the release of bacterial cells from the microcapsules was detected inside the plant root and leaf tissues. The mechanism by which B. subtilis B26 improves plant growth under drought stress apparently involves the modification of osmolyte accumulation in the roots and shoots [126]. Another study investigated two strains of B. subtilis (XT13 and XT14), selected for their potential for mitigation of drought stress in guinea grass (Megathyrsus maximus) and maize (Zea mays) plants, and evaluated their effect on the stress response of guinea grass under drought. The bacterial strains were mixed with ALG to produce the formulated ALG microbeads [10] and incorporated into the soil. The dry weight of shoots and roots, the total biomass production, protein content, digestibility percentage, neutral detergent-soluble fiber percentage, ascorbate peroxidase, and proline content were all measured after 105 days. The plants under drought stress showed an increase in proline concentration and ascorbate peroxidase activity, but the co-inoculation of Bacillus sp. XT13 + XT14 formulated in ALG microbeads significantly enhanced the crude protein content, digestibility, and nutritional quality, while also increasing the yield of guinea grass under drought conditions [112,127,128]. The encapsulation of PGPB in microbeads positively influenced drought-stress adaptation and tolerance in guinea grass [112]. The induction of biofilm formation in Paenibacillus lentimorbus by ALG and calcium chloride (CaCl 2 ) and its effects on drought stress were investigated in chickpea by Khan et al. [129]. The development of a biofilm is a protective strategy used by bacteria for survival in adverse conditions [130]. P. lentimorbus strain B-30488, with the ability to form biofilms, was isolated from cow milk under stress conditions, and this bacterium improved plant growth under non-stress and stress conditions [131]. The B-30488 strain was treated with 1% ALG and 1 mM CaCl 2 solution, and plant seeds were submerged in the bacterial suspension until it covered the entire surface of all the seeds. The chickpea plants were harvested 120 days after sowing. During the growing period, the plants were exposed to drought conditions, with no irrigation other than one light rain event (1 mm). Several traits, such as harvest index, grain yield, and drought tolerance efficiency, were measured. RNA was extracted from the bacterial treated and untreated plants exposed to drought stress, and semi-quantitative RT-PCR was performed. The chickpea plants inoculated with B-30488+ALG+CaCl 2 under drought stress conditions showed an increase in shoot and root length, total chlorophyll content, and total plant biomass. The RT-PCR data analysis revealed the enhancement of dehydrin 1, lipid transfer protein, and prolyl-4-hydroxylase expression in B-30488r+ALG+CaCl 2 treatment, compared to control plants. The ALG (1%) and CaCl 2 (1 mM) also enhanced chemotaxis and biofilm formation of strain B-30488 under in vitro conditions. 
The B-30488 strain encapsulated in ALG and CaCl 2 improved plant health and biomass yield, confirming it as a beneficial agent for drought stress amelioration in plants growing in arid areas [129]. Both ALG and CaCl 2 are non-toxic to plants and to the environment and are useful for plant nutrition and health [132]. Chitosan Chitosan is a cationic polysaccharide produced by the deacetylation of chitin, another abundant natural biopolymer. Chitosan consists of randomly distributed β-(1→4)-linked D-glucosamine and N-acetyl-D-glucosamine residues [133]. Chitosan has been evaluated as a potential bioinoculant carrier and can be helpful for both nutrient and mineral sequestration [134,135]. Chitosan can promote the activity of microorganisms such as PGPB, and it can induce plant responses to biotic and abiotic stresses [136][137][138]. Chitosan has bio-adhesion and cellular transfection properties [133] and can interact with PGPB. Its properties can be enhanced by combining it with other materials, making it an essential polymer for medical, agricultural, and industrial applications [139,140]. A complex of chitosan-Methylobacterium oryzae enhanced tomato plant growth under greenhouse conditions [141]. Chitosan nanoparticles in barley plants and pearl millet (applied by soil and foliar routes and as an emulsion) reduced the harmful effects of drought stress and increased plant growth and yield [142,143]. Plants treated with these nanoparticles showed significant increases in antioxidant defense system activity, production of phenolic compounds and osmoregulators, and crop yield [139]. Therefore, the beneficial microorganisms in these hydrogels can also be used to activate the plant's own defense, enzymatic, and physiological systems to protect the plant from drought. Other Polysaccharides Starch combined with silicon dioxide and Pseudomonas putida has been used as a seed coat cover in cowpea (Vigna unguiculata) seeds. The seed coating containing Pseudomonas increased the final plant root weight, total biomass, and seed yield. Water-use efficiency (WUE) under drought stress was increased in plants grown from seeds inoculated with P. putida. The complex of silicon dioxide and starch with P. putida caused the accumulation of potassium in cowpea shoots [144]. This element is an essential nutrient for plants and plays a vital role in ameliorating drought stress and retaining cell membrane stability [144,145]. Carboxymethyl cellulose and starch form a superabsorbent material that, because of its biodegradability and stability, has been used as a hydrogel to hold irrigation water. Plants treated with these compounds continued to grow even after the cessation of irrigation [146]. Superabsorbent hydrogels have been used to manage water in the plant rhizosphere [147]. An acrylic-cellulosic superabsorbent composite containing the PGPB Pseudomonas (strains N33 and M25) was tested in Eucalyptus grandis for water-retention and protection from drought stress. The superabsorbent material served as a carrier to inoculate beneficial bacteria in the soil surrounding the eucalyptus seedlings in greenhouse conditions. This polymeric composition preserved the viability of PGPB in the soil for a long time (3 months). PGPB can stimulate plants to deploy an early response to water deficits and close stomata under drought conditions. The combination of superabsorbent material and beneficial bacteria represents an environmentally friendly system for invoking resistance to abiotic stress in plants [148]. 
Conclusions Drought is one of the main abiotic factors that can severely affect the yield and quality of crops. Decreasing total yearly rainfall and increased concentration of salts in the soil are being exacerbated by climate change, making drought and salinity two critical environmental and interdependent factors with negative impacts on crop production. The production of resistant cultivars is one important strategy that can reduce crop damage caused by drought. However, the production of resistant and adaptable cultivars for different geographical areas requires long-term breeding programs. In the rhizosphere, biological interactions occur between microorganisms and plant roots. PGPR or PGPB, such as Pseudomonas, Bacillus, and Azotobacter, increase the ability of plants to absorb water and nutrients and improve root growth, and play an essential role in the nutrient cycling of nitrogen, phosphorus, and potassium. These bacteria help to maintain the ecological balance of the soil and increase plant resistance to drought by affecting root morphology, plant physiological and biochemical activities, and plant growth. Different studies have shown that PGPB populations are drastically reduced when inoculated to the soil under adverse conditions, including drought, salinity, and metal toxicity, and their biological activity and effectiveness are therefore reduced. The use of environmentally adaptive compounds, such as polysaccharide polymers, as encapsulation coatings for bacterial inocula can stabilize the bacterial cells, minimize the pressure imposed by exposure to abiotic and biotic stresses, and enhance the potential viability and stability of the bacteria during commercial production and storage as agricultural formulations. The encapsulation of PGPB is one of the newest and most-efficient techniques for protecting the cells and improving the survival of the bacteria in the soil after inoculation. PGPB can slowly penetrate from the capsules and colonize root surfaces to improve physiological and biochemical activities and the molecular signals responsible for inducing long-term resistance to drought in plants (i.e., induced systemic tolerance). Natural polysaccharides, such as ALG, chitosan, starch, cellulose, and their derivatives, can absorb and retain immense amounts of water in the interstitial sites of their structures, which aids in bacterial survival and effectiveness. The interactions between the four critical factors of polymers, PGPB, rhizospheres, and plant roots can create drought resistance or tolerance in plants growing in arid or low rainfall areas.
2021-12-03T16:21:49.343Z
2021-11-30T00:00:00.000
{ "year": 2021, "sha1": "796ba51711941ec9911f64f2657dbb159fb5163b", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1422-0067/22/23/12979/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ebced176a2461591267a5bf045fc48db88a65f9d", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Medicine" ] }
54545204
pes2o/s2orc
v3-fos-license
Photon Beam Transport and Scientific Instruments at the European XFEL European XFEL is a free-electron laser (FEL) user facility providing soft and hard X-ray FEL radiation to initially six scientific instruments. Starting user operation in fall 2017 European XFEL will provide new research opportunities to users from science domains as diverse as physics, chemistry, geoand planetary sciences, materials sciences or biology. The unique feature of European XFEL is the provision of high average brilliance in the soft and hard X-ray regime, combined with the pulse properties of FEL radiation of extreme peak intensities, femtosecond pulse duration and high degree of coherence. The high average brilliance is achieved through acceleration of up to 27,000 electron bunches per second by the super-conducting electron accelerator. Enabling the usage of this high average brilliance in user experiments is one of the major instrumentation drivers for European XFEL. The radiation generated by three FEL sources is distributed via long beam transport systems to the experiment hall where the scientific instruments are located side-by-side. The X-ray beam transport systems have been optimized to maintain the unique features of the FEL radiation which will be monitored using build-in photon diagnostics. The six scientific instruments are optimized for specific applications using soft or hard X-ray techniques and include integrated lasers, dedicated sample environment, large area high frame rate detector(s) and computing systems capable of processing large quantities of data. Introduction During the last decades, the development of X-ray light sources based on low emittance electron accelerators has enabled spectacular increases in the average and peak brilliances.Brilliance corresponds to the number of photons per phase space element of the emitted X-rays and is the parameter best describing the performance of these sources.Electron accelerators optimized for free-electron lasers (FEL) use low emittance injectors to create electron bunches, linear accelerators and electron beam optics to minimize the emittance growth during acceleration and transport, and bunch compression to generate ultrashort bunches.The resulting low emittance and high peak current of the electron bunches are the key performance parameters for these facilities.In undulator sections much longer than for synchrotron radiation sources, the electron bunches are transported with high precision and collimation to enable the self-amplified spontaneous emission (SASE) process leading to FEL gain and occurring in a single-pass of the electron bunch [1,2].In the SASE process, the electron bunch typically undergoes a degradation of its properties and cannot be reused for another FEL source.It is instead dumped at the end of the beam transport.FELs therefore, in general, are single-user machines making their operation costly and the access to them much more limited than storage ring sources where electron bunches are circulated and are reused by several insertion devices.Since X-ray FEL radiation is generated by an intrinsically coherent SASE process it provides huge pulse energies of 10 mJ and possibly beyond, and pulse durations as short as single femtoseconds.These properties correspond to peak brilliances eight to nine orders of magnitude higher than obtained by storage ring sources.Exploiting this brilliance in FEL experiments allows embarking on yet impossible X-ray experiments that will lead to interesting and valuable scientific and 
technological applications. Two technologies have been pursued to construct electron accelerators for FEL applications: warm, normal conducting machines and cold, super-conducting accelerators. The latter allow acceleration of electron bunches at a much higher repetition rate, thereby boosting the average brilliance and enabling the distribution of electron bunches to many FEL sources. The first short-wavelength FEL user facility starting user operation was FLASH at Deutsches-Elektronen-Synchrotron (DESY) in Hamburg (Germany), which uses super-conducting accelerator technology [3] and provides FEL radiation in the XUV and soft X-ray spectral region up to the water window [4]. In the following years, several normal conducting accelerator-based FELs started operation at SLAC National Accelerator Laboratory [5], ELETTRA [6], SPring-8 [7], and the Pohang Accelerator Laboratory (PAL) [8]. The SwissFEL facility at the Paul-Scherrer-Institute (PSI) [9] is nearing completion. The European XFEL [10,11] employs the same super-conducting accelerator technology [12] used for FLASH and is currently under commissioning for first user experiments in 2017. European XFEL enables the acceleration of up to 27,000 electron bunches per second with an electron energy of up to 17.5 GeV (compare Table 1), which are distributed to several FEL sources. Starting from 2019 European XFEL will operate a regular user program with initially six instruments continuously receiving X-ray beams. At SLAC currently a 4 GeV super-conducting accelerator is under construction for LCLS-II, based on the same technology used for FLASH and European XFEL plus enabling continuous wave (cw) acceleration [13]. (Table 1 notes: 1, bunches are generated and distributed in 10 Hz bursts of 2700 bunches each; 2, using nominal operation parameters for energy, bunch charge and repetition rate.) The article is organized as follows: Following a description of the overall European XFEL facility, we shall first describe the photon beam transport and photon diagnostics systems, before introducing the individual science instruments. Finally, an outlook to future developments of the facility is provided. Overview European XFEL X-ray FEL radiation is characterized by its ultrashort pulse duration, high pulse energies and a high degree of coherence. Scientific applications of soft and hard X-ray FEL radiation make use of these properties, in particular in the investigation of ultrafast processes in atoms, ions, simple and very complex molecules, clusters or condensed matter. The high pulse energies allow the collection of meaningful data sets from single pulses, thereby enabling the study of non-reversible processes. Coherence properties are exploited in imaging techniques that aim to obtain atomic spatial resolution for weakly scattering systems, in part combined with a corresponding temporal resolution [16]. Finally, the very high X-ray pulse energies, which combined with ultrashort pulse durations correspond to very high peak powers of up to several tens of GW, promise to enable access to new information of excited solids through non-linear X-ray scattering [17].
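As a rough order-of-magnitude check of the peak-power statement above, peak power is approximately pulse energy divided by pulse duration. The energy and duration pairs in the sketch below are illustrative example values, not facility specifications.

```python
# Peak power estimate: P_peak ~ pulse energy / pulse duration.
# Example values are illustrative only, not facility specifications.
examples = [
    (1e-3, 25e-15),    # 1 mJ delivered in 25 fs
    (10e-3, 100e-15),  # 10 mJ delivered in 100 fs
]
for energy_j, duration_s in examples:
    peak_gw = energy_j / duration_s / 1e9
    print(f"{energy_j * 1e3:.0f} mJ in {duration_s * 1e15:.0f} fs -> ~{peak_gw:.0f} GW")
```

Even a 1 mJ pulse compressed to tens of femtoseconds reaches tens of GW, consistent with the range quoted in the text.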
There are several classes of these experiments requiring not only a high peak brilliance, but also high average brilliance. Here European XFEL has a clear advantage compared to other X-ray FEL facilities. Examples comprise studies of ultra-dilute systems, very small cross-section processes, non-linear X-ray processes, or particle-particle/particle-X-ray coincidence spectroscopy experiments. Furthermore, it is possible to use subsequent X-ray pulses to probe equilibrium dynamics at frequencies up to 4.5 MHz, and beyond [18]. To enable the use of the highest repetition rates in a pulse-resolved (non-integrating) manner has been the biggest instrumentation challenge for European XFEL. Such a mode of operation requires that diagnostic and X-ray detection systems operate at repetition rates of up to 4.5 MHz. In addition, optical lasers used to excite samples in a well-controlled manner and sample injection systems need to accommodate these high event rates. The development of this non-standard high repetition rate instrumentation is one key expertise of European XFEL and its partners. Layout of the European XFEL Facility The European XFEL facility consists principally of three sections. The first section includes the superconducting low emittance 17.5 GeV electron accelerator and the distribution of electron bunches to two beam lines comprising the FEL undulator sources. The electron beam transport is designed to accommodate up to five FEL sources. Each FEL source has a dedicated photon beam transport section to transport, steer, focus, and diagnose the X-ray FEL beams prior to their entry to the experiment hall. Mirrors in the photon beam transports will direct the X-ray FEL beam to one of the scientific instruments located at the respective FEL source. The third section is the experiment hall in which the scientific instruments are located and where the experiment program is run. In its first installment, only three FEL sources are constructed, each leading to two scientific instruments. Figure 1 provides an overview of the European XFEL facility. Completion of the facility with five FEL sources and up to fifteen scientific instruments is expected to take place in the years following the start of user operation. The layout of European XFEL is governed by a few basic conditions. First, the goal to reach FEL radiation exceeding 20 keV with high pulse energies and outstanding coherence properties has been driving the definition of the maximum electron energy to be 17.5 GeV. Using an acceleration gradient of 23.6 MeV/m, this alone results in a length of nearly 1000 m for electron acceleration. A second condition was to expand the highly collimated FEL beam to a size of order 1 mm, which, for hard X-ray divergences of order 1 µrad, requires another approximately 1000 m of free-space transport. This is accompanied by another requirement of achieving lateral separations on the order of 17 m for the beam lines of the different FEL sources when arriving at the experiment hall.
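Both layout constraints follow from simple arithmetic; the sketch below only reuses the order-of-magnitude numbers quoted above.

```python
# Layout constraints from the numbers quoted above (order-of-magnitude only).
final_energy_mev = 17.5e3      # 17.5 GeV final electron energy
gradient_mev_per_m = 23.6      # design acceleration gradient
active_length_m = final_energy_mev / gradient_mev_per_m
print(f"Active accelerating length: ~{active_length_m:.0f} m")
# ~740 m of active structure; with injector, bunch compressors and
# diagnostics the acceleration section approaches the quoted ~1000 m.

divergence_rad = 1e-6          # ~1 microrad hard X-ray divergence
transport_m = 1000.0           # free-space photon transport length
beam_size_mm = divergence_rad * transport_m * 1e3
print(f"Beam size growth after {transport_m:.0f} m: ~{beam_size_mm:.0f} mm")
```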
The facility is located in the western part of the Metropolitan area of Hamburg, reaching from the DESY campus in Hamburg Bahrenfeld to the town of Schenefeld, Schleswig-Holstein.Located in a partly inhabited area, the facility had to be built in underground tunnels, fully immersed in the ground water in this location.Access to tunnels is enabled by shaft buildings at the start and end of each tunnel, and access to the experiment hall is provided from the office and laboratory building build on top of this hall.the experiment hall.Mirrors in the photon beam transports will direct the X-ray FEL beam to one of the scientific instruments located at the respective FEL source.The third section is the experiment hall in which the scientific instruments are located and where the experiment program is run.In its first installment, only three FEL sources are constructed each leading to two scientific instruments.Figure 1 provides an overview of the European XFEL facility.Completion of the facility with five FEL sources and up to fifteen scientific instruments is expected to take place in the years following start of user operation. The Super-Conducting Electron Accelerator The European XFEL accelerator has the task of providing electron bunches for the FEL process.It consists of a photo-injector, the main linac and the different beam line sections.In the 43 m long photo-injector section [19,20], electron bunches are generated by means of the photoelectric effect from a CsTe cathode.The photocathode is located inside a normal-conducting cavity to immediately accelerate the electrons to 6 MeV before injection into the first super-conducting accelerator module.This module is directly followed by a super-conducting 3.9 GHz acceleration module needed to linearize the longitudinal phase of accelerated electrons.At 130 MeV the electrons enter a diagnostic section enabling the measurement of the phase space properties of individual electron bunches in the bunch train and even of slices of these bunches.At the end of the injector an electron beam dump allows standalone operation of the injector over its full parameter range, such that commissioning and further development can be performed independently of the operation of the main linac.Electron extraction from the cathode is achieved by frequency-quadrupled 257 nm laser pulses supplied by a dedicated Nd:YLF photo-injector laser.The laser is synchronized to the accelerator radio-frequency and delivers a time pattern corresponding to the burst mode repetition rate of electron bunch delivery.The photo-injector performance determines the smallest obtainable emittance of the entire accelerator and its design has been optimized in this regard.Space charge driven emittance growth is the most important limiting effect and is minimized by relatively long laser pulses (up to 20 ps) and very high acceleration gradients of up to 50-60 MV/m at the cathode.Furthermore, the spatial and temporal profile of the laser pulse ideally has a top-hat-like shape when hitting the cathode.The operation with electron bunch charges from 0.02 to 1.0 nC at different emittances and enabling different bunch durations is foreseen.Initial commissioning of the injector was concluded in summer 2016 [21]. 
The main linac accelerates the electrons to a final energy of up to 17.5 GeV by means of 96 accelerator modules operated at 2.2 K, built by an international collaboration for European XFEL based on the TESLA design [14]. Each module is 12 m long, weighs eight tons and comprises eight nine-cell Nb cavities. A total of 768 couplers provide the radio-frequency (RF) fields generated by 24 RF stations. The accelerator is operated in a 10 Hz pulsed RF mode (see Figure 2) for a maximum beam power of 500 kW, exceeding by far the power that can be reached by any normal conducting machine. The design gradient is 23.6 MV/m and the final installation considers the actual performance of each accelerator module by tuning the RF distribution system. The injector and the three linac sections L1, L2 and L3 are separated by three electron bunch compressors BC0, BC1 and BC2. These are used to compress the electron bunches in steps from their initial approximately 20 ps duration to as short as a few fs, hence reaching the design peak current of 5 kA, depending on the bunch charge. The electron energy at the end of L2 is 2.4 GeV and will be kept constant during operation in order to optimize the performance of the accelerator. Dedicated diagnostics sections for the measurement of integrated and slice bunch parameters are located after compressors BC1 and BC3. Cool-down of the linac was started end of 2016 and commissioning with electron beam commenced in 2017 [22,23]. The last section comprises a total of approximately 3 km of electron beam transport systems and starts with a collimation section removing halo and electrons at non-matching energies, i.e.
dark current, from the beam.Downstream of this section electron bunches are distributed to either one of the two beam lines with the FEL undulators, denoted the North and South electron branches, or to a dump beam line.Distribution between the two branch lines is performed by a precise flat top kicker magnet with fast falling edge operating at 10 Hz to switch once during each bunch train.Likewise a dedicated portion of the 600 µs bunch train is first kicked to the South branch line.After a switching time of approximately 20 µs electrons continue without kicking into the North branch line.An additional fast kicker can operate at up to 4.5 MHz and is used to deflect single bunches to the dump beam line.It is used to, e.g., generate the time window needed for switching between south and north branch and furthermore enables a free choice of the bunch pattern delivered to the two FEL beam lines while the accelerator is operated at constant loading.At the end of each electron beam line a solid state dump is capable of absorbing up to 300 kW of beam power.In case the accelerator is operated at full beam power, electrons will be distributed over more than one dump thereby limiting the absorbed power.In the beam transport section of the accelerator many important electron diagnostics systems are located.They measure beam position with µm accuracy, arrival time of bunches relative to a precise laser synchronization system with down to a few fs accuracy, and electron energy in dispersive sections to a level better than 10 −4 .The long pulse trains allow using an initial portion of the train for intra-train feedback scheme, thereby enabling a higher stability and performance of the electron beam delivery in the remaining portion of the train delivered to the two branch lines. 
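The time structure quoted here and in the accelerator overview can be cross-checked with a few lines of arithmetic; the sketch below simply recomputes the derived quantities from the nominal values (10 Hz bursts, 2700 bunches per burst, 4.5 MHz intra-train rate).

```python
# Nominal time structure of the electron bunch delivery (values from the text).
train_rate_hz = 10            # RF bursts (bunch trains) per second
bunches_per_train = 2700      # nominal bunches per train
intra_train_rate_hz = 4.5e6   # bunch repetition rate within a train

bunch_spacing_ns = 1e9 / intra_train_rate_hz
train_length_us = bunches_per_train / intra_train_rate_hz * 1e6
bunches_per_second = train_rate_hz * bunches_per_train

print(f"Bunch spacing within a train: ~{bunch_spacing_ns:.0f} ns")  # ~222 ns
print(f"Train length: ~{train_length_us:.0f} us")                    # ~600 us
print(f"Average delivery rate: {bunches_per_second} bunches per second")
```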
The FEL Undulator Sources The initially three FEL undulator sources denoted SASE1, SASE2 and SASE3 will provide FEL radiation ranging from the carbon K-edge to very hard X-rays for user experiments. SASE1 and SASE2 serve the hard X-ray regime from approximately 3 to 25 keV in the first harmonic. SASE3 produces soft X-rays from approximately 250 eV to 3 keV. These ranges are achieved by a combination of electron energy set points and gap tuning (see Table 2). The lengths of the FEL undulators have been determined after simulation of the saturation length for the highest photon energy at the largest considered electron beam emittance. All FELs are therefore much longer than the saturation length in the middle of the tuning range, around 10 keV (SASE1 and SASE2) or up to 1 keV (SASE3), which allows for special modes of operation, e.g., the implementation of self-seeding. In addition, each FEL has additional space before and after the device for optional extensions, e.g., for laser-driven beam manipulation or so-called afterburners.
The FEL Undulator Sources The initially three FEL undulator sources denoted SASE1, SASE2 and SASE3 will provide FEL radiation ranging from the carbon K-edge to very hard X-rays for user experiments.SASE1 and SASE2 serve the hard X-ray regime from approximately 3 to 25 keV in the first harmonic.SASE3 produces soft X-rays from approximately 250 eV to 3 keV.These ranges are achieved by a combination of electron energy set points and gap tuning (see Table 2).The lengths of the FEL undulators have been determined after simulation of the saturation length for the highest photon energy at the largest considered electron beam emittance.All FELs are therefore much longer than the saturation length in the middle of the tuning range, around 10 keV (SASE1 and SASE2) or up to 1 keV (SASE3), which allows for special modes of operation, e.g., the implementation of self-seeding.In addition, each FEL has additional space before and after the device for optional extensions, e.g., for laser-driven beam manipulation or so-called afterburners.All FELs are segmented into 5 m long planar undulators and 1.1 m long intersections.Undulators are equipped with permanent hybrid NdFeB magnet technology for a minimum magnetic gap of 10 mm allowing the use of aluminum vacuum chambers with an inner opening of 8.8 mm for the electron beam.Out of vacuum magnetic structures were chosen to minimize radiation damage of the magnets, but also to reduce resistive wall wake fields.The intersections carry a quadrupole for electron beam focusing, a phase shifter for matching the radiation field and the micro-bunched electron beam, an electron beam position monitor, and vacuum devices.All parts for the total 91 undulator segments have been produced and assembled by industry.Magnetic tuning was performed by the European XFEL undulator group using the pole-height tuning technique [25]. The FEL source properties depend on a large number of parameters, not only of the FEL undulators, but also the electron beam properties, e.g., peak current, emittance, or energy spread.Table 3 shows simulation results for the saturation point for a selection of photon energies and electron beam parameters.The full set of properties can be found in refs.[26,27].In practice, FELs are often operated well beyond saturation thereby boosting the emitted pulse energies but also sacrificing some of the other properties.Table 3. FEL radiation properties at saturation for selected photon energies and electron parameters optimized for specific bunch charge working points [26]. 
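As a rough cross-check of how electron energy set points and gap (i.e., K) tuning map onto the quoted photon energy ranges, the sketch below evaluates the standard on-axis resonance relation of a planar undulator, E_ph = hc / [ (lambda_u / 2 gamma^2)(1 + K^2/2) ]. The undulator periods and K values used here are illustrative placeholders, not the tabulated SASE1-3 parameters of Table 2.

# Minimal sketch of the planar-undulator resonance relation (illustrative parameters only).
H_EV_S = 4.135667696e-15   # Planck constant [eV s]
C = 2.99792458e8           # speed of light [m/s]
M_E_GEV = 0.000510999      # electron rest mass [GeV]

def photon_energy_kev(electron_energy_gev, undulator_period_m, k_value):
    """First-harmonic on-axis photon energy of a planar undulator."""
    gamma = electron_energy_gev / M_E_GEV
    wavelength_m = undulator_period_m / (2 * gamma**2) * (1 + k_value**2 / 2)
    return H_EV_S * C / wavelength_m / 1e3  # [keV]

# Illustrative period/K combinations (not the actual SASE1-3 values):
for label, period_m, k in [("hard X-ray example", 0.040, 2.0),
                           ("soft X-ray example", 0.068, 4.0)]:
    for e_gev in (8.5, 14.0, 17.5):
        print(f"{label}: E_e = {e_gev:5.1f} GeV, K = {k:.1f} "
              f"-> E_ph ~ {photon_energy_kev(e_gev, period_m, k):6.2f} keV")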
The Experiment Hall and Ancillary Instrumentation The experiment hall has a size of 50 m along the beam direction and 90 m across, accommodating five beam line areas. The tunnels housing the X-ray beam transports enter with a separation of approximately 17 m. For each of the five beam line areas, installation of up to three scientific instruments is considered, with an X-ray beam separation of 1.4 m at the entrance to the hall. Each beam line area includes dedicated enclosures for the X-ray optics, the X-ray experiment, experiment control, the pump-probe laser system, instrument laser hutches, and in some cases also preparatory labs. Control and data acquisition electronics are generally placed in separate rack rooms located on top of the beam lines for fire protection purposes. Most of the air-conditioning systems are also located here; they stabilize dedicated temperature zones to ±0.1 °C, while special care has been taken to avoid vibrations. The hall is connected via stairs and elevators to the laboratory and office floors in the building above. On the ground floor a total of ~2500 m² is available for laboratories, comprising rooms for sample preparation and characterization as well as chemistry and biochemistry laboratories. Furthermore, cleanrooms for optics, detector and vacuum part assembly and testing, and several laser labs for research and development are found here.

Large Area Detectors for European XFEL Already in 2006 European XFEL launched a significant program for the development of large area detectors for FEL experiments, since it was clear that the requirements on detectors for FEL experiments in general, and for European XFEL specifically, could not be fulfilled by existing devices [28]. General requirements include an integrating operation mode, enabling the detection of several X-ray photons per pixel and per pulse; very low noise, enabling the detection of single X-ray photons; and a high dynamic range, enabling 10⁴ or more X-ray photons to be counted in a single pixel. Specific requirements for European XFEL include the need for frame rates of up to 4.5 MHz, in order to be compatible with X-ray pulse delivery within the pulse train structure (compare Figure 2), high throughput, to collect as many images as possible per unit time, and radiation hardness. More recently, the possibility of vetoing specific events was added as a requirement. Three large projects were selected and are pursued together with external partners. Laboratory infrastructure, in particular for detector calibration and characterization, has been designed and is operated by the European XFEL detector group [29]. In addition, a few smaller projects consisted in modifying existing cameras, mostly designed for 10 Hz operation, and in upgrading the Gotthard one-dimensional strip detector [30] to 4.5 MHz repetition rate. In the following, the three large area detector projects are described briefly.
The Adaptive Gain Integrated Pixel Detector (AGIPD) is developed by a consortium led by DESY [31].The main features of this detector are 200 × 200 µm 2 pixels, dynamic gain switching with 3 stages, a dynamic range of ~10 4 at 12 keV, single photon detection (6σ) above 7 keV, in-vacuum operation, the capability of storing up to 352 images within the 600 µs pulse train, and to read out these data in-between pulse trains.As sensor material 500 µm thick Si is used.Two 1 Mpixel AGIPD devices constitute the primary 2D detectors at the SPB/SFX and MID instruments.A 4 Mpixel device is under development as part of the SFX User Consortium, as is a 1 Mpixel device with GaAs sensor for the HIBEF User Consortium. The Large Pixel Detector (LPD) is developed by a consortium led by STFC [32].The main features of this detector are 500 × 500 µm 2 pixels, three amplifiers with different gain per pixel, a dynamic range of up to ~10 5 at 12 keV, single photon detection (3σ) above ~12 keV, the capability of storing up to 512 images within the 600 µs pulse train, and to read out these data in-between pulse trains.As sensor material, 500 µm thick Si is used.A 1 Mpixel LPD device will be employed at the FXE instrument, primarily for liquid scattering experiments. The DepFET Sensor with Signal Compression (DSSC) detector is developed by a consortium initially led by the Max-Planck-Society (MPG) [33].The main features of this detector are hexagonal 236 µm diameter pixels, a non-linear gain, a dynamic range of ~6 × 10 3 at 1 keV, single photon detection (5σ) above 0.7 keV, in-vacuum operation, the capability of storing up to 800 images within the 600 µs pulse train, and to transfer up to 640 frames in-between pulse trains.The highest frame rate of the DSSC detector therefore is 6.4 kHz.As sensor material 300 µm thick Si is used.One 1 Mpixel DSSC device is considered as the primary 2D detector for the SCS and SQS instruments.In a first phase, a simplified detector with 1 Mpixel Si drift diodes with reduced performance will be available for experiments. 
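The adaptive-gain idea common to these detectors can be illustrated with a toy model: each pixel is digitized in the highest gain stage that does not saturate, and the stage actually used is recorded so that the signal can be converted back to photon units offline. The stage boundaries and conversion factors below are invented for illustration and are not the AGIPD, LPD or DSSC calibration values.

# Toy model of per-pixel adaptive gain switching (illustrative numbers only).
GAIN_STAGES = [            # (name, max photons handled, effective ADU per photon)
    ("high",      60,   50.0),
    ("medium",  1000,    4.0),
    ("low",    11000,    0.4),
]

def digitize(n_photons: float):
    """Return (stage, raw ADU) for a given deposited signal in photon units."""
    for name, max_photons, adu_per_photon in GAIN_STAGES:
        if n_photons <= max_photons:
            return name, n_photons * adu_per_photon
    name, _, adu_per_photon = GAIN_STAGES[-1]
    return name, n_photons * adu_per_photon          # saturated in lowest gain

def reconstruct(stage: str, raw_adu: float) -> float:
    """Convert a raw reading back to photons using the recorded gain stage."""
    adu_per_photon = {n: a for n, m, a in GAIN_STAGES}[stage]
    return raw_adu / adu_per_photon

for signal in (1, 50, 500, 8000):
    stage, adu = digitize(signal)
    print(f"{signal:6d} photons -> stage {stage:6s}, {adu:8.1f} ADU, "
          f"reconstructed {reconstruct(stage, adu):8.1f} photons")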
Optical Lasers for European XFEL The usage of synchronized optical laser pulses is foreseen at all scientific instruments and opens various possibilities for time-resolved pump-probe studies and laser-controlled manipulation of electronic relaxation and excitation processes.A dedicated development has been initiated to meet the requirements of delivering 800 nm radiation, 10-100 fs pulse duration, 0.1-4.5 MHz selectable pulse delivery and 0.1-1 mJ pulse energy.After the successful completion of a first design phase [34] the implementation of three pump-probe burst-mode optical (PP) lasers systems was started, each serving one beam line area.The final amplification of the laser pulses employs the Non-collinear Optical Parametric Amplifier (NOPA) scheme.Three NOPA stages allow providing highest pulse energies at reduced repetition rate, 3.25 mJ for 0.1 MHz, and reduced pulse energies at highest repetition rate, 0.08 mJ at 4.5 MHz.A Pockels cell and polarizer before the NOPA amplifiers enable picking of arbitrary pump pulse sequences from the amplified burst at frequencies up to 4.5 MHz.An additional output delivers 1030 nm pulses with energies up to 40 mJ at 0.1 MHz and duration of 400 ps or 800 fs (compressed).Full performance of the system has been demonstrated recently [35].In order to synchronize the delivery of optical laser and X-ray pulses, RF and optical lasers a laser-based synchronization system [36] is employed with the goal of reaching an accuracy better than 20 fs rms [37]. The PP lasers are placed in dedicated laser rooms at each beam line.Laser beams are transported to dedicated instrument laser hutches (ILH) adjacent to the X-ray experiment areas.The separation from the X-ray hutch provides the possibility to work on these systems without disturbing the X-ray program.In the ILHs, e.g., delay stages, frequency conversion optics, and laser diagnostics are placed.Laser pulses are transported in a time stretched mode and final compression occurs close to the experiment location.Particular care needs to be taken with respect to the dispersion management of the optical laser pulses in order to achieve the shortest pulse duration. 
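A quick arithmetic check shows why the two quoted operating points are natural for a burst-mode amplifier: the average power delivered while the burst lasts is similar at both repetition rates, and the overall average power stays modest because of the duty cycle of the burst structure. The sketch below uses only the pulse energies and repetition rates quoted above and assumes, for illustration, that the optical burst mirrors the 600 µs / 10 Hz X-ray train structure.

# Intra-burst and overall average power of the pump-probe laser (figures from the text).
BURST_LENGTH_S = 600e-6
BURST_RATE_HZ = 10
operating_points = [                 # (intra-burst repetition rate [Hz], pulse energy [J])
    (0.1e6, 3.25e-3),                # 0.1 MHz, 3.25 mJ
    (4.5e6, 0.08e-3),                # 4.5 MHz, 0.08 mJ
]

duty_cycle = BURST_LENGTH_S * BURST_RATE_HZ          # fraction of time the burst is "on"
for rate, energy in operating_points:
    intra_burst_power = rate * energy                # average power while the burst lasts
    pulses_per_second = rate * BURST_LENGTH_S * BURST_RATE_HZ
    print(f"{rate/1e6:3.1f} MHz, {energy*1e3:5.2f} mJ: "
          f"intra-burst power ~ {intra_burst_power:5.0f} W, "
          f"overall average ~ {intra_burst_power * duty_cycle:5.1f} W, "
          f"{pulses_per_second:6.0f} pulses/s")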
The User Program The European XFEL is conceived as a user facility with the main emphasis of providing excellent conditions for FEL research with soft and hard X-rays.To reach this goal an accelerator operation of approximately 5600 h annually is foreseen to provide 4000 h of user operation, 800 h for accelerator and another 800 h for X-ray systems maintenance, research and development.Operation will be continuous for several weeks with interruptions during two workdays, mainly for setup changes, maintenance, and tuning for the next experiments.With initially two scientific instruments per FEL source each of them schedules ~2000 h per year for users.In regular operation this should allow for >200 user experiments annually, thereby significantly increasing the European and worldwide accessibility of FEL experiments.User experiments will be selected by peer-review using scientific excellence as criterion given that technical feasibility and safety requirements are fulfilled.User groups will be supported by the instrument staff and the scientific support groups in preparing and executing the experiment and analyzing the data.Due to the complexity of FEL experiments, often requiring expertise in X-rays, optical lasers, sample delivery, detectors and data analysis, it is the goal to provide these systems to the users, thereby facilitating the use of European XFEL and lowering the entry level to FEL experiments.During experiments scientific staff of European XFEL will continuously support user groups and ensure that the various sub-systems are functional. European XFEL Governance and Organization The operation of the overall European XFEL facility is entrusted to the European XFEL GmbH, based on an intergovernmental agreement between the participating countries Denmark, France, Germany, Hungary, Italy, Poland, Russia, Slovakia, Spain, Sweden, Switzerland, and the United Kingdom.The largest contributors are Germany with 58% and Russia with 27% of the total construction cost.Each participating country determines a legal entity to hold their shares and to represent the country in the Council, the superior governance board of European XFEL.The construction costs of the facility can be sub-divided into the three major areas: civil construction, accelerator complex, and X-ray systems with shares of roughly 30:50:20.About 50% of the construction cost was provided through in-kind contributions by the participating countries.The annual costs for operation are shared amongst the participating countries initially according to the participation in the construction period.Starting in 2023, 50% of the annual operation costs will be distributed according to real usage of the facility by research groups from the participating countries, calculated using a three-year average. 
The construction, commissioning and operation of the superconducting accelerator and its ancillary systems depend on the expertise residing at DESY, a major accelerator and photon science laboratory located in Hamburg, Germany.During the construction phase DESY led the international Accelerator Consortium that designed, built, and commissioned the accelerator.European XFEL staff has been responsible for the X-ray systems and ancillary instrumentation, including the undulators.For the operation phase European XFEL and DESY have concluded an agreement according to which DESY provides the personnel and expertise to operate and further develop the accelerator.European XFEL takes the responsibility for the X-ray systems and the user program of the facility. X-ray Photon Beam Transports The X-ray optical systems that transport X-ray photons from the undulators to the experiment hall are located in long underground tunnels.From the source point of FEL radiation located within the last segments of the FEL undulators to the scientific instruments in the experiment hall these beam transport paths are up to 1 km long [38,39].The key optical elements of each transport system are three mirrors: Mirrors "1" and "2" create a horizontal offset of the X-ray FEL beam.This offset prevents unwanted background radiation, consisting of Bremsstrahlung and high-energy spontaneous radiation also produced in the long FEL undulator, to be transported into the experiment areas.The spontaneous radiation has a critical energy of typically 200 keV and is not reflected by the offset mirrors, but is rather absorbed by the first mirror or transmitted and then stopped by a massive tungsten beam stop.Only the desired X-ray FEL photons in the energy ranges of 3-25 keV (SASE1 and SASE2) and 0.25-3 keV (SASE3) can pass the offset mirror chicane.Mirror "3" (distribution mirror) can be optionally inserted to reflect the photons to the HED, FXE, or SCS instruments, while the undeflected beam passes to the MID, SPB, and SQS instruments, respectively (compare Figure 3).In the case of the FXE, MID and HED instruments, multi-bounce crystal monochromators (optional) are integrated in the beam transport.For SASE3 a grating monochromator has been integrated that can be used by the SCS and SQS instruments at this FEL source.The beam transport layout includes for each of the systems the possibility of integrating a third beamline to a third instrument.Such an extension is currently in preparation at SASE3. 
The distance from the source point to the first mirror is between ~245 and 290 m, which is enough to expand the beam to a size filling the about 1 m long mirrors under grazing incidence angle and thereby reducing damage and heat load effects.In order to handle the demanding power densities, a surface coating with boron carbide is applied on the single crystalline silicon mirrors.A liquid indium gallium eutectic film, which is in contact with water-cooled copper blades, is used to remove excess heat from all mirrors of the beam transport system.Because the X-ray FEL radiation is close to the diffraction limit, its divergence is roughly proportional to the photon wavelength.To utilize the full length of the offset mirrors for all photon energies, their reflection angle can be varied from 1.1-3.6 mrad for the hard X-ray beam transports and from 6-20 mrad for SASE3.The reflection angles define the energy cut-off and, thereby, the transport of higher harmonic radiation, too.The distribution mirror operates at a fixed angle, which is defined by the distance from the mirror to the experiment hall and the lateral distance (1.4 m) between the shutters of instruments operating at the same SASE beamline.To avoid over-illumination of the distribution mirror, the second offset mirror is bendable and can slightly focus the beam towards the distribution mirror.Alternatively, Be Compound Refracting Lenses (CRL) positioned upstream of the offset mirrors of the SASE1 and SASE2 beam transports can be used to collimate the beam or to produce a similar confocal beam situation with an intermediate focus behind the distribution mirror.The most crucial requirement to the X-ray mirrors is preservation of the almost perfect wave front created by the lasing process [40].Source properties and source-to-mirror distances at the European XFEL lead to requirements of about 2 nm peak-to-valley shape error for all mirrors of the beam transport systems, corresponding to roughly 50 nrad rms in slope error.The mirrors were manufactured in Japan and Germany by deterministic polishing techniques, where material is iteratively removed on atomic length scales according to a very precise metrology map of the mirror before each polishing step. 
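The reason the offset-mirror angle has to follow the photon energy can be made tangible with a small sketch: for a nearly diffraction-limited beam the divergence scales roughly with the wavelength, so the beam size at the mirror, and hence the grazing-incidence footprint (beam size divided by the sine of the grazing angle), changes with photon energy. The divergence values below are rough illustrative figures, not measured European XFEL beam parameters; only the ~245-290 m source-to-mirror distance and the quoted angle ranges are taken from the text.

# Grazing-incidence footprint on an offset mirror (illustrative divergences).
import math

SOURCE_TO_MIRROR_M = 270.0          # within the ~245-290 m range quoted above
MIRROR_LENGTH_M = 1.0               # approximate optical length of the offset mirrors

def footprint_m(divergence_urad_fwhm: float, grazing_angle_mrad: float) -> float:
    """Beam footprint along a grazing-incidence mirror for a given divergence."""
    beam_size_m = SOURCE_TO_MIRROR_M * divergence_urad_fwhm * 1e-6   # FWHM at the mirror
    return beam_size_m / math.sin(grazing_angle_mrad * 1e-3)

# Illustrative divergences (FWHM), chosen only to show the trend with photon energy.
cases = [("~12 keV (hard)", 2.0, 1.1),
         ("~5 keV (hard)",  4.0, 2.5),
         ("~1 keV (soft)", 15.0, 9.0)]
for label, div_urad, angle_mrad in cases:
    fp = footprint_m(div_urad, angle_mrad)
    print(f"{label}: divergence {div_urad:4.1f} urad, angle {angle_mrad:3.1f} mrad "
          f"-> footprint ~ {fp:4.2f} m (mirror length {MIRROR_LENGTH_M} m)")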
One important constraint of X-ray beam optics at an X-ray FEL is the so-called single-pulse damage. Because a large number of photons (corresponding to pulse energies of the order of mJ per pulse) arrive within 10-100 fs, thermal transport does not remove any heat during the pulse, even for excellent heat conductors like copper or diamond. For focused beam conditions (typically smaller than 50 µm diameter), most materials will vaporize on an ultrafast time scale due to the absorption of energy from a single X-ray FEL pulse within the X-ray penetration depth. More resistant materials are the ones with low atomic number, where the absorption per atom is lower. Most components directly exposed to X-ray FEL radiation, or at least their beam-facing surfaces, are therefore made of boron carbide or diamond, for example slits, beam stops, shutters, collimators and attenuator plates. Exceptions are the cryogenically-cooled silicon monochromators in the hard X-ray beam transports SASE1 and SASE2, but also beam position screens and a few other components. For these components, it is required to carefully monitor the impinging beam size to avoid single-pulse damage effects.

Another big challenge is the total power of a train of X-ray pulses, which could reach values as high as several kW depending on pulse energy and number of pulses within the train of 600 µs duration. An automatic protection system will trigger a reduction of the maximal number of pulses if equipment is inserted that does not withstand full pulse trains. In addition, long X-ray pulse trains, when missteered, could easily damage the stainless steel pipes of the vacuum system. To prevent this, photon beam loss monitors have been implemented at strategic places along the beam transport. Up to four diamond plates can be adjusted around the beam trajectory. In case of unwanted beam motions (e.g., due to mechanical drifts of the mirror mounts) the X-ray beam would hit a diamond and produce optical light fluorescence. This light is captured by a photomultiplier and triggers, via the machine protection system, a rapid interruption of the beam (within the same pulse train).
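To get a feel for the single-pulse damage argument, the sketch below compares the incident fluence of a focused pulse with a hypothetical damage threshold. The ~1 mJ pulse energy and the 50 µm spot scale come from the text; the threshold values are placeholders for illustration only, since real single-shot thresholds depend strongly on material and photon energy.

# Rough single-pulse fluence estimate versus hypothetical damage thresholds.
import math

def fluence_j_per_cm2(pulse_energy_j: float, spot_diameter_um: float) -> float:
    """Average fluence of a pulse over a circular spot."""
    radius_cm = spot_diameter_um * 1e-4 / 2.0
    return pulse_energy_j / (math.pi * radius_cm**2)

PULSE_ENERGY_J = 1e-3                       # ~1 mJ, as quoted in the text
# Hypothetical single-shot damage thresholds, for illustration only:
THRESHOLDS = {"low-Z (e.g., B4C, diamond)": 1.0, "high-Z metal": 0.05}  # J/cm^2

for spot_um in (1000.0, 50.0, 5.0):
    f = fluence_j_per_cm2(PULSE_ENERGY_J, spot_um)
    verdicts = ", ".join(f"{name}: {'above' if f > thr else 'below'} threshold"
                         for name, thr in THRESHOLDS.items())
    print(f"spot {spot_um:7.1f} um -> fluence {f:10.3f} J/cm^2  ({verdicts})")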
Photon Diagnostics Systems X-ray photon diagnostics is required for monitoring the photon pulse parameters generated by the European XFEL [41][42][43][44].The diagnostics systems provide essential information to the machine for setup, operation and optimization of the accelerator, undulator and X-ray optics, especially during commissioning.Diagnostics is also mandatory for normalization and interpretation of the experimental data.Several beam properties will be measured by so-called online methods, that is, for each photon pulse and with minimal distortion of the pulse.Examples are the pulse energy and beam position, but also spectral content and information about temporal properties can be collected through these systems.Pulse-to-pulse capability is challenging because of the 4.5 MHz repetition rate, but it is particularly important to be able to normalize data for fluctuations of photon pulse parameters due to the SASE process or due to electron or X-ray beam instabilities.In addition, for setup and specific measurements several invasive photon diagnostic systems are installed which stop the X-ray pulses, or at least severely modify the pulse properties. In this section, we only describe photon diagnostic devices that will be employed in the photon transport sections inside the tunnels.Further systems are integrated in the scientific instruments in the experiment hall.These include, in particular, the temporal diagnostic systems [45,46] employed to monitor the X-ray pulse arrival, pulse duration, and, ideally, the temporal shape as shown previously [47][48][49][50]. Online Photon Diagnostic Systems These systems can be separated into residual gas systems, naturally interfering only minimally with the X-ray beam, and systems using very thin solid films or crystals, thereby only absorbing a minor fraction of the FEL pulse.This latter method is only applicable to hard X-ray radiation as otherwise the absorption is too strong. For residual gas diagnostic systems photoionization of rare gases (Xe, Ne, Ar or Kr) or nitrogen is applied making these devices indestructible and highly transparent [51].This non-invasive diagnostic method is best suited for high peak energies and high average flux since there is no issue with damage or heating due to the absorbed X-ray pulse energy.At European XFEL these systems are employed in the beam transports to measure pulse energy, beam position and polarization of the X-ray pulse.Residual gas monitors can operate continuously up to very high pulse repetition rates, limited by the flight time of ions and electrons used for the measurement of pulse properties, and work even for hard X-rays if a sufficient sensitivity is able to compensate for the reduced cross-sections.As of today no reports about distortion of coherence and wavefront properties due to residual gas monitors have been reported, however for highest repetition rates and elevated gas pressures depletion may occur [52]. Online solid-state systems employ either thin foils to scatter a fraction of the X-ray beam, using the detection of this scattered fraction to measure the pulse energy and, in a special configuration, beam position [53], or thin curved crystals to disperse the incident spectrum on a position sensitive detector [54].In both cases, only a small fraction of the X-ray beam is absorbed or scattered, however, these systems face limitations when it comes to very high pulse energies.In particular, heat transport limitations of thin films restrict their high repetition rate applications. 
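Since pulse-resolved normalization is the primary downstream use of these online monitors (in particular the X-ray gas monitors described in the next subsection), the following minimal sketch illustrates the bookkeeping on synthetic numbers: each detector reading is divided by the pulse energy recorded for the very same pulse, removing the SASE intensity jitter. This is not the actual European XFEL data-analysis code or data format.

# Pulse-by-pulse normalization of a detector signal by a pulse-energy monitor (synthetic data).
import random

random.seed(0)

n_pulses = 8
xgm_pulse_energy_uj = [random.uniform(500, 1500) for _ in range(n_pulses)]   # SASE jitter
true_sample_response = 0.02                                                  # arbitrary units per uJ
detector_signal = [e * true_sample_response * random.gauss(1.0, 0.01)
                   for e in xgm_pulse_energy_uj]

# Divide each reading by the simultaneously recorded pulse energy.
normalized = [s / e for s, e in zip(detector_signal, xgm_pulse_energy_uj)]

for i, (e, s, n) in enumerate(zip(xgm_pulse_energy_uj, detector_signal, normalized)):
    print(f"pulse {i}: monitor {e:7.1f} uJ, raw signal {s:6.2f}, normalized {n:.4f}")
print(f"spread of raw signal      : {max(detector_signal) / min(detector_signal):.2f}x")
print(f"spread of normalized data : {max(normalized) / min(normalized):.2f}x")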
X-ray Gas Monitors The X-ray gas monitors (XGM) are pulse energy (photon number and flux) and position monitors that resolve individual photon pulses at MHz rates (temporal resolution better than 100 ns) [51].Due to a gain of up to 10 6 , individual X-ray pulses with femtosecond durations containing 10 7 up to 10 15 photons can be measured with better than 10% absolute accuracy, and with better than 1% relative (pulse-to-pulse) accuracy for pulses with more than 10 10 photons.The beam position is monitored in both transverse directions with an accuracy on the order of ±10 µm within a range of ±1 mm.There is an XGM installed in the direct beam of each FEL, upstream of the double mirror systems, monitoring the source properties.Three more XGMs are placed closely upstream of the scientific instruments SPB/SFX, SCS, and HED, to monitor the pulse properties actually delivered to the experiments after passing several X-ray optics elements in the tunnels. Photoelectron Spectrometer The photo-electron spectrometer (PES) measures the spectrum and polarization of the photon pulse based on an angular resolved time-of-flight measurement of photo-electrons [55,56].This device is integrated initially only in the SASE3 beam transport, because for soft X-rays one cannot employ crystal-based schemes to measure the spectrum, and instead the energy distribution of XFEL-generated photo-electrons can be used to deduce the center and width of the photon energy spectrum.In addition, it is planned to employ variable polarization schemes at the SASE3 FEL source, hence requiring measuring and monitoring of the polarization state.The PES has a spectral resolving power of ∆E/E ≤ 10 −4 and the polarization direction and degree can be measured with an accuracy of 1%. The HIREX Spectrometer The HIgh REsolution hard X-ray single-pulse diagnostic spectrometer (HiREX) spectrometer is an online device, based on a diamond diffraction grating used in transmission to split off a small fraction (0.1%) of the photon beam, a bent crystal as a dispersive element, and a MHz-repetition rate strip detector.The grating and crystal chambers are separated by 10 m distance.Gratings with pitches of 150 nm and 200 nm were installed.While beam transmission depends on the photon energy, typically 95% transmission is achieved.Five percent is then spread into all diffraction orders.The first order diffracted beam from the grating is sent to a bent crystal for energy dispersion under Bragg condition [54].The 10 µm thick bent silicon Si crystals have (110) or (111) orientations and are mounted with fixed bending radii of 75 mm, 100 mm or 150 mm.Two detectors are available for data acquisition: an optical camera for full transverse 2D imaging at low repetition rate, and a modified Gotthard-II 1D strip detector for fast data acquisition at 4.5 MHz. Invasive Photon Diagnostics Systems The invasive diagnostics is either used for initial commissioning with spontaneous radiation, for FEL commissioning, or for setup purposes prior to or during measurements. 
MCP Based Detector When all undulator segments are inserted to establish the SASE condition, this detector measures intensities from the initial signs of lasing up to saturation [57]. Two horizontal manipulators insert either 15 mm diameter MCP discs for integral intensity monitoring with 1% relative accuracy over a large pulse energy range (1 nJ-10 mJ), a photodiode (Hamamatsu, 10 × 10 mm², 300 µm thick), or a larger MCP-intensified phosphor screen providing an intensified beam image with 30 µm resolution via an optical camera setup.

Undulator Commissioning Spectrometer This spectrometer analyses spontaneous radiation from one or a few undulator segments to measure their individual undulator parameter K [58,59]. These measurements are necessary for an independent measurement and setting of all undulator segments with ∆K/K < 10⁻⁴ and for further adjustment of the individual phase shifters in-between undulators. The filter chambers of the systems at SASE1, SASE2 and SASE3 contain five filter foils of Al, Mo, Cu, Ni and Al with a diameter of 30 mm and varying thicknesses, used for attenuation and also for spectroscopy by scanning across their K-edges. The monochromator itself, called K-mono, contains two Si channel-cut crystals which can be used in two- or four-bounce geometry. The Bragg angle range is 7° to 55°, covering an energy range from 2.5 keV to 16 keV with Si(111) (7.5 to 48 keV with Si(333)). The resolution is ∆E/E = 2 × 10⁻⁴ for Si(111) (10⁻⁵ for Si(333)). The crystals are retracted in the horizontal direction from the beam. Detection is realized by a photodiode or the highly sensitive SR imager (see below).

Imagers There are almost 30 imaging units distributed over the photon tunnels which serve different purposes and therefore have different resolutions, fields of view, and geometries [60]. All of them contain one or more scintillators, mostly Ce:YAG, sometimes additionally polycrystalline diamond, and all but one type have stationary optics with sCMOS GigE cameras and fixed-focus lenses.
• Transmissive imagers (1 per FEL) are closest to the source and have the thinnest scintillators to allow transmitting the beam for recording another image of the same photon pulse at a downstream imager. By this method beam pointing and beam offset data can be obtained simultaneously.
• The SR imagers (1 per FEL) are optimized for highest photon sensitivity to detect spontaneous radiation from single undulator segments when applied in conjunction with the K-mono in undulator commissioning. Their optical resolution is 25 µm (FWHM) and their field of view (FOV) is 26.6 × 15 mm², using YAG:Ce and ceramic Gd₂O₂S:Pr scintillators.
• The FEL imagers (1 per FEL) are optimized for detailed spatial characterization of the FEL beam to measure the transverse intensity profile with beam position, size and shape. Their optical resolution is 28 µm (FWHM) and their FOV is 16 × 22 mm². These imagers have redundancy scintillators of several different materials.
• Pop-in monitors (15 in total) are the basic imagers for beam finding and alignment. These monitors are placed downstream of major optical elements like mirrors and monochromators. Their horizontal FOV is large enough to cover the variable beam offset without scintillator or optics movements. Various geometries are employed. Most devices put the scintillator at 45° to the XFEL beam, but some have the scintillator at normal incidence and an additional optical mirror. Optical resolutions range from 35 to 83 µm (FWHM) and FOVs from 22.7 × 40 up to 150 × 30 mm².
• Exit slit imagers are installed on the two exit slits of the SASE3 monochromator for beam alignment, but more importantly to deliver single-pulse soft X-ray spectra with a resolution of ∆E/E ≥ 10 −5 . Scientific Scope and X-ray Techniques The Single Particles, Biomolecules and Serial Crystallography (SPB/SFX) scientific instrument's [61] primary goal is to enable three-dimensional imaging, or three-dimensional structure determination, of micrometer-scale and smaller objects.A particular focus is placed on biological objects-including viruses, biomolecules, and protein crystals-though the instrument will also be capable of investigating non-biological samples using similar techniques.This structure determination is not limited to static structures-three-dimensional time-resolved structures are within scope too.One of the main driving factors for such studies is to ultimately enable rational drug design through understanding the structure, and hence the function, of arbitrary biomolecules.Studies in structural biology with X-rays have a long history and have exploited ever-brighter X-ray sources as they have been developed [62]. X-ray FELs, as the most recent phase in X-ray source development, offer yet additional benefits to structural biology with X-rays.In particular, they offer the possibility to investigate radiation damage sensitive samples (such as proteins with important metal centers), samples that scatter only weakly (small crystals or non-crystalline specimens), time-resolved processes that are irreversible, as well as other cases that inherently require many incident X-ray photons in a single pulse [63,64].Unprecedented possibilities are opening to observe weakly scattering samples, such as small crystals of proteins or perhaps even non-crystalline bio-samples such as viruses, which are largely unable to be seen at synchrotron or lab based X-ray sources.Nevertheless, techniques that are relatively simple at conventional sources, such as tomography, are not viable at an X-ray FEL where samples are typically destroyed by the act of illumination in a single projection.This reality means that many frames of data from different projections of a crystal or (reproducible) particle must be combined to form a complete three-dimensional diffraction volume that can be interpreted later as structure [65,66].These methods require as many as tens of thousands or hundreds of thousands of "good" hits for a single structure [67,68]-and many more should one wish to look at a series of structures resolved in time for example. Requirements These experiments require X-ray instrumentation in a traditional forward scattering geometry to collect diffraction at angles up to those commensurate with atomic resolution.Crystallography requires photon energies up to about 16 keV-beyond the Selenium edge-to aid in anomalous diffraction measurements for structure determination.On the low energy side, single particle imaging, which deals with typically very low diffraction signals, requires as low a photon energy that permits the desired resolution for the system under study.Furthermore, mitigation of radiation damage requires optimization of the beam power, that is, to carefully trade highest pulse energy versus shortest pulse duration. 
X-ray FEL serial crystallography and imaging experiments are primarily performed in a mode that is destructive to the sample. The goal is to illuminate the specimen with as many X-ray photons per pulse as possible, to maximize the scattered signal from each particle. To do so, one must have an optical system that is highly transmissive, as well as focusing to a spot size that is comparable to the size of the sample(s) under investigation. The "small" crystals used in serial crystallography at XFELs tend to be around 1 µm in diameter, with some variation larger or smaller. Relevant biological single particles range from biomolecules some tens of nanometers across to large viruses up to 500 nm in diameter. To accommodate this wide range of sample sizes, the SPB/SFX instrument plans to deliver two different focal spots: a 1 µm-scale focus and a 100 nm-scale focus. Coherent diffraction imaging of individual particles further requires precise knowledge of the wavefront of the incident X-ray pulse, leading to stringent requirements on the selection of the X-ray optical components and their performance.

Of particular importance for the experiments to be performed at SPB/SFX will be the performance of the large area detectors. The two primary requirements are a very high dynamic range and single photon sensitivity. An ideal 2D detector for serial crystallography should have a dynamic range much higher than the four [31] or five [32] orders of magnitude presently achievable, as the intensities of individual Bragg peaks can vary enormously and these intensities must be determined very accurately for successful phasing and structure determination (though a detector with a dynamic range of 10⁴ or smaller can nevertheless be used successfully). Single photon sensitivity is important for detecting weaker scattering, such as from single non-crystalline particles or weak Bragg peaks. The detectors should be compatible with detection at the 4.5 MHz intra-train repetition rate to ensure collection of as many images as possible within a meaningful time frame during which particles can be injected. This requirement leads to the further need for stringent data reduction techniques to avoid a data deluge. Finally, the detectors' mechanical design must be compatible with the instrument. This means a pixel size that is not too large (≤200 µm), a number of pixels commensurate with the number of resolution elements desired in any given structure (i.e., ≥1 Mpixel for ~200 linear resolution elements), and an operation that is ideally compatible with the sample environment (for the upstream interaction region at SPB/SFX this means in-vacuum operation). The detector is required to operate in vacuum and be placed as close as 129 mm from the upstream interaction region, resulting in a better than 2 Å geometrical resolution limit for 9 keV photon energy. It can also be placed downstream as far as 6 m from the interaction region, allowing for appropriate sampling of diffraction data from samples as large as almost 1 µm at the lowest energies.
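The quoted detector geometry can be checked with the usual relation between detector distance, scattering angle and resolution, d = λ / (2 sin θ), where 2θ is the scattering angle subtended by the detector edge. The sketch below reproduces the order of magnitude of the "better than 2 Å at 129 mm and 9 keV" statement; the assumed detector half-width is an approximation based on a 1 Mpixel device with 200 µm pixels and is not an official specification (the detector corners reach a somewhat smaller d-spacing than the edge value printed here).

# Edge resolution limit of a flat detector in forward-scattering geometry (approximate geometry).
import math

HC_KEV_ANGSTROM = 12.398          # h*c in keV*Angstrom

def resolution_limit_angstrom(photon_energy_kev: float,
                              detector_distance_mm: float,
                              detector_half_width_mm: float) -> float:
    """Smallest resolvable d-spacing at the detector edge, d = lambda / (2 sin theta)."""
    wavelength = HC_KEV_ANGSTROM / photon_energy_kev
    two_theta = math.atan(detector_half_width_mm / detector_distance_mm)
    return wavelength / (2.0 * math.sin(two_theta / 2.0))

# Assumed half-width of a ~1 Mpixel detector with 200 um pixels (illustrative).
HALF_WIDTH_MM = 0.5 * 1024 * 0.2

for distance_mm in (129.0, 500.0, 6000.0):
    d = resolution_limit_angstrom(9.0, distance_mm, HALF_WIDTH_MM)
    print(f"9 keV, detector at {distance_mm:7.1f} mm -> edge resolution ~ {d:6.2f} A")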
SPB/SFX Instrumentation and Capabilities The SPB/SFX instrument is a 3 to 16 keV, forward scattering instrument [61] with a 1 µm-scale and a 100 nm-scale focus in the upstream interaction region [69,70], and optics to refocus the upstream focal point to a second interaction region further downstream (about 12 m) in the experiment hutch. This refocused beam allows for a second, in-series experiment to be performed simultaneously with a measurement in the upstream interaction region (see Figure 4). The Serial Femtosecond Crystallography (SFX) User Consortium provides the vast majority of the instrumentation for the downstream interaction region including, but not limited to, a 4 Mpixel detector (AGIPD) and an alternative detector (Jungfrau), the refocusing CRLs, sample delivery technologies (largely liquid jet delivery in various forms and fixed target systems) as well as various diagnostics and sundry apparatus. The 1 µm-scale and 100 nm-scale focal spots for the upstream interaction region of the SPB/SFX instrument are to be produced by mirror optics due to their high transmission and potential for making very neat and well confined focal spots. The mirrors are all designed with 950 mm clear aperture, with working angles of 4 mrad and 3.5 mrad, respectively.

The 100 nm-scale design is a traditional Kirkpatrick-Baez (KB) design. The 1 µm-scale mirrors are four-bounce, with a plane horizontal mirror followed by a focusing ellipse in the horizontal, and then a focusing vertical mirror together with a plane vertical mirror. This four-bounce design mitigates vibrational issues and provides a large displacement from the direct beam over the long (~24 m) mirror-to-interaction-region distance. For early user operation in 2017, the mirrors will not yet be installed. Instead, Beryllium compound refractive lenses (CRLs) will be used to produce an approximately 2.5 µm spot in the interaction region. After mirror installation, the CRL unit will be moved to the refocusing position and new lenses installed to refocus the upstream spot to ~3-5 µm in the second interaction region downstream.

In addition to focusing elements, a variety of beam conditioning apparatus (slits, apertures and attenuators) will be installed: slits to aperture the beam upstream of the optics (the so-called power slits), slits to clean up tails and streaks from the optics immediately downstream of them (the so-called cleanup slits), and apertures near the focal plane that further clean up the beam (termed apertures; these will likely be sacrificial). This beam conditioning is essential for single particle imaging, where a very neat, clean and well-understood beam is necessary for the successful observation and interpretation of the weak diffraction data collected.
The primary 2D detector at SPB/SFX is a 1 Mpixel AGIPD detector [31].It will be mounted in a vacuum chamber directly attached to the upstream sample chamber.Using a longitudinal translation, it can be placed as close as 129 mm from the interaction region, resulting in a better than 0.2 nm geometrical resolution limit for 9 keV photon energy.It can also be placed downstream as far as 6 m from the interaction region, allowing for appropriate sampling of diffraction data from samples as large as almost 1 µm at the lowest energies.The detector mechanics consists of four panels mounted on x-y-translations to adjust the central hole for letting the X-ray beam pass. The destructive nature of these experiments and the high repetition rate of the European XFEL necessitate rapid delivery (and replenishment) of sample at the interaction region.Furthermore, for biological systems the samples must be appropriately hydrated and handled to ensure an intact and representative sample is brought to the XFEL beam.Three primary sample delivery mechanisms exist for the delivery and replenishment of samples: liquid jet injectors, aerosol injectors and fixed target stages, all of which will be deployed at the SPB/SFX instrument. Scientific Scope and X-ray Techniques The Femtosecond X-ray Experiment (FXE) scientific instrument has a primary scientific focus in the field of photo-induced chemical dynamics in liquid environments [71].The interplay between nuclear, electronic, and spin degrees of freedom during the course of an ongoing reaction will be monitored using a suite of X-ray techniques, thereby offering new observables in the femtosecond time domain to deliver this information.The FXE instrument will permit structural studies on the 25 fs time scale and below, with ultrafast X-ray Absorption Near Edge Structures (XANES), Extended X-ray Absorption Fine Structure (EXAFS), Resonant Inelastic X-ray Scattering (RIXS), non-resonant X-ray Emission Spectroscopy (XES), and Wide Angle X-ray Scattering (WAXS) from liquids being key techniques to unravel new details about the very first steps in these reacting systems.One fundamental goal is to eventually record a complete molecular movie, observing not only the structural rearrangements occurring in the system but also of the underlying electronic structure changes.Together with ultrafast optical spectroscopy techniques it will become possible to understand the ensuing photo-physical behavior. One particular interesting area of research concerns catalytic activity and solar energy conversion schemes, which occur in several transition metal compounds.Such compounds are key ingredients in certain proteins, and are often at the very beginning of light-driven biological functions [72].They are also studied in chemistry due to their rich magnetic switching behavior [73], their charge-transfer properties in light-harvesting applications [74], or for their ability to form highly reactive intermediate species, which enhance further reaction steps, e.g., towards more efficient catalytic behavior [75].These compounds are believed to exhibit correlated electron dynamics in a regime in which the Born-Oppenheimer rule is not valid.The direct observation of elementary steps towards, e.g., spin transition dynamics has so far been impossible which is expected to change due to the possibility of studying new observables in X-ray FEL experiments [76,77]. 
Requirements The different X-ray techniques offered at FXE have quite different requirements on the FEL source. XES merely requires the photon energy to be well above the absorption edge of the selected element. The same condition applies to WAXS, while the bandwidth of SASE radiation is perfectly suited for diffuse scattering measurements [77]. Therefore both techniques can be applied simultaneously. More demanding X-ray beam properties are required for the XANES, EXAFS and RIXS techniques, which need a smaller bandwidth of the incident beam (typically ∆E/E ~10⁻⁴) and scanning of the photon energy over a certain range at the selected absorption edge. This scanning requires tuning of the undulator gap together with a primary monochromator.

All X-ray techniques need to be used in concert with an incident laser beam, whose femtosecond pulses are synchronized to the X-ray source, and with sufficient intensity to trigger the desired reactions. The experiments also require appropriate handling of the probed samples. For (bio)chemical systems in liquid solutions, the sample should be removed after each pump-probe event, to permit the next measurement to be recorded on a fresh sample which has not been exposed to either the optical laser or the X-ray beam before. To preserve the femtosecond time resolution, the time spread through the sample due to the group velocity mismatch between optical and X-ray pulses needs to be minimized. In general, this requires a jet thickness below 10-20 µm (see the sketch below).
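A minimal estimate of that time smearing: over a jet of thickness d the optical pump (group index n_opt) and the X-ray probe (refractive index essentially 1) walk off by roughly Δt ≈ d·(n_opt − 1)/c. The sketch below uses a water-like group index as a placeholder for the solvent; the 10-20 µm thickness values are those quoted above.

# Pump-probe walk-off through a liquid jet (group index of water at ~800 nm is an assumption).
C_M_PER_S = 2.99792458e8

def walkoff_fs(jet_thickness_um: float, optical_group_index: float) -> float:
    """Pump-probe walk-off through the sample, assuming n ~ 1 for the X-rays."""
    d_m = jet_thickness_um * 1e-6
    return d_m * (optical_group_index - 1.0) / C_M_PER_S * 1e15

N_GROUP_WATER_800NM = 1.34     # approximate group index of water around 800 nm (assumption)

for thickness_um in (5, 10, 20, 50, 100):
    print(f"jet thickness {thickness_um:4d} um -> walk-off ~ "
          f"{walkoff_fs(thickness_um, N_GROUP_WATER_800NM):5.1f} fs")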
FXE Instrumentation and Capabilities X-ray FEL radiation from the SASE1 FEL is collimated by means of Be compound refractive lenses (CRL) 900 m upstream from the experiment hall, in the XTD2 tunnel, in order to maintain a beam size in the 1-2 mm (FWHM) diameter range (for all X-ray energies in the 5-20 keV range) when the beam enters the primary four-crystal monochromator, the diamond beam-splitter grating and eventually the FXE experiment hutch (compare Figure 5).

The primary Si four-crystal monochromator (∆E/E = 10⁻⁵-10⁻⁴) maintains the same beam axis for the X-ray beam onto the sample as the pink beam (i.e., without monochromator). In this way we ensure that the laser beam always strikes the X-ray illuminated volume, and that the X-rays always take the same path through the entire optics branch, including the long stretch downstream to the beam stop. This arrangement eliminates the need to geometrically adjust the beam(s) for varying conditions, as demanded by the specific experiment. Only the timing changes considerably between pink and monochromatic pulses entering the sample; in particular, the monochromatic beam has different arrival times (with respect to the exciting optical laser pulses), which can be tabulated for each energy.

At the end of the tunnel section a diamond grating generates side maxima (diffraction orders) of the main X-ray beam, which are used for X-ray beam diagnostics inside the experiment hutch (similar to what has been described in Ref. [78]). The side beams can be used to measure the incident spectrum of the X-ray beam via a curved crystal spectrometer, and the actual arrival time of the X-ray pulse with respect to the optical laser pulse. Together with beam shaping slits and an intensity and position monitor, the conditions of the beam entering the sample are thus well characterized.
With a second stack of Be CRL lenses the X-ray spot size on sample can be freely tailored to values in the 2-200 um range (FWHM).The X-rays enter the sample area via a diamond window separating the ultrahigh vacuum optics branch from the sample environment under ambient conditions (He atmosphere at room temperature).A liquid flat sheet jet with adjustable thickness in the 2-200 um range provides a defined surface for optical excitation and X-ray probing.Two secondary spectrometers are available for XES experiments, and each can also be rotated around the sample from forward to nearly backward scattering angles: a Johann spectrometer with up to 5 spherically bent crystals collects single emission wavelengths with a resolution of ∆E/E = 10 −4 .This spectrometer has a large solid angle and spectra are obtained by scanning both the crystal rotation with the collecting detector on a Rowland circle.Alternatively, a 16 element von Hamos type spectrometer collects the entire XES spectra at a resolution of ∆E/E ~10 −3 without moving elements, thus enabling single-pulse experiments. The forward WAXS scattering pattern is collected using the LPD detector [32] having moveable quadrants for a central hole for the X-ray beam with adjustable size in the 1-10 mm range.A post-diagnostics bench can then record the beam properties (spectrum, intensity, and timing) of the transmitted beam, before it finally strikes the copper beam stop. Scientific Scope and X-ray Techniques The SQS (Small Quantum Systems) scientific instrument is dedicated to investigations of fundamental processes of light-matter interaction in the soft X-ray wavelength regime.In particular, studies of non-linear phenomena, such as multiple ionization and multi-photon processes, time-resolved experiments following dynamical processes on the femtosecond timescale, and investigations using coherent scattering techniques are targeted [79].Principal research targets are isolated species in the gas phase, such as atoms, molecules, ions, clusters, nanoparticles and large bio-molecules.The use of soft X-ray photons enables controlled excitations of specific electronic subshells in atomic and site-or element specific excitation in molecular targets.One of the main goals of the SQS instrument is the complete characterization of the ionization and fragmentation process, at least for smaller systems, by analyzing all products created in the interaction of the target with the FEL pulses. 
Experiments at SQS typically use X-ray pulses of highest intensity to drive the probed system to highly excited states or to initiate non-linear X-ray processes. The additional use of synchronized optical laser pulses allows controlled manipulation of the electronic states and nuclear motion. Probing of the X-ray FEL interaction with the sample system will be performed either by direct coherent X-ray scattering, to obtain structural information, or by spectroscopic techniques. A focus is put on a variety of particle spectroscopy techniques, such as energy- and angle-resolved electron and ion spectroscopy allowing the determination of kinetic energies and momenta of the charged particles, with additional options for XUV and soft X-ray spectroscopy. In particular, the very open and flexible arrangement of the spectrometers will enable the application of various coincidence techniques, such as electron-electron, electron-ion and photon-electron/ion coincidences, which all require and therefore take full advantage of the high repetition rate available at the European XFEL.

Requirements Located at the SASE3 FEL, the photon energy of the radiation will range from about 250 eV up to 3000 eV, i.e., covering the energy range of ionization thresholds for numerous relevant atoms, such as the K-edges of carbon, nitrogen and oxygen as well as of phosphorus and sulfur, the L-edges of the 3d transition and rare earth metals, and the K- and L-edges of various ions. Pulse durations as short as 2 fs, available in the 0.02 nC low-charge mode, enable, in combination with the synchronized optical laser, time-resolved studies in the few-femtosecond time domain. Furthermore, pulse energies of up to 10 mJ are produced in the 1 nC high-charge mode. This high pulse energy corresponds to 2 × 10¹⁴ photons per pulse and is the main requirement for the study of non-linear processes, since intensities of more than 10¹⁸ W/cm² can be reached by focusing, e.g., the 10 mJ/100 fs FEL beam to a diameter of about 1 µm (see the estimate below).

Ultra-high vacuum conditions in the experimental area are required for coincidence techniques in order to minimize signals caused by ionization of residual gas. For this reason most of the experiments will operate at background pressures of about 10⁻¹⁰ mbar or less, and supersonic molecular jets as well as specially designed quantum-state-, size-, and isomer-selected beams of polar molecules and clusters (COMO, for "Controlled Molecules") will be used for sample delivery. These vacuum conditions will be relaxed for experiments on larger targets requiring the use of the large DSSC imaging detector and of dedicated cluster or nanoparticle beam devices.

SQS Instrumentation and Capabilities The optical layout of the beam transport system enables experiments using the direct beam from the variable-gap SASE3 undulator or the reduced-bandwidth radiation (∆E/E ≤ 10⁻⁴) from the soft X-ray monochromator. A Kirkpatrick-Baez adaptive mirror system assures a tight focusing of the beam down to spot sizes as small as about 1 µm (see Figure 6). The bendable, highly polished mirrors allow adjustments of the focal spot size and displacement of the focus to three different interaction regions separated by 39 and 200 cm, respectively. The FEL radiation properties, such as pulse energy, pulse duration, arrival time, spectral distribution and focal spot size, are monitored with the help of several diagnostic devices installed downstream and upstream of the interaction regions inside the dedicated and enclosed experiment area.
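The intensity figure quoted in the Requirements above follows directly from pulse energy, pulse duration and focal spot area; the sketch below reproduces it, assuming for simplicity a top-hat temporal and spatial profile (real peak values differ by profile-dependent factors).

# Peak intensity and photons-per-pulse estimate, using the figures quoted in the text.
import math

def intensity_w_per_cm2(pulse_energy_j, pulse_duration_s, spot_diameter_m):
    """Average intensity over a top-hat pulse and circular focal spot."""
    peak_power_w = pulse_energy_j / pulse_duration_s
    spot_area_cm2 = math.pi * (spot_diameter_m * 100.0 / 2.0) ** 2
    return peak_power_w / spot_area_cm2

def photons_per_pulse(pulse_energy_j, photon_energy_ev):
    return pulse_energy_j / (photon_energy_ev * 1.602176634e-19)

# 10 mJ, 100 fs, ~1 um focus, soft X-ray photon energies (from the text):
print(f"intensity ~ {intensity_w_per_cm2(10e-3, 100e-15, 1e-6):.1e} W/cm^2")
for e_ev in (250, 1000, 3000):
    print(f"{e_ev:5d} eV: {photons_per_pulse(10e-3, e_ev):.1e} photons per 10 mJ pulse")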
The general concept of the instrument is based on a two-chamber system thus separating applications on "Atomic-like Quantum Systems" (AQS), such as free atoms, atomic ions, and small molecules, and on "Nano-size Quantum Systems" (NQS), such as clusters, nanoparticles and large biomolecules, all typically larger objects [79]. The AQS chamber will be equipped with a set of spectrometers enabling the analysis of electrons, ions and photons with high energy resolution and the determination of the angular distribution of the particles. Six electron time-of-flight (TOF) analyzers can be used for angle-resolved high kinetic energy resolution experiments at distinct angles in the dipole and in the non-dipole planes. A velocity-map-imaging (VMI) spectrometer provides the full information about the angular distribution of the emitted electrons and ions, and is designed for electrons up to about 1000 eV kinetic energy. The single pulse analysis of very dilute samples or of processes characterized by extremely low cross sections is possible by means of a magnetic bottle electron spectrometer, which collects electrons over the full solid angle. Finally, a
specially designed 1D-imaging XUV spectrometer is dedicated to the analysis of fluorescence emission at high spectral resolution.The use of Wolter optics and a 2D-imaging detector is enabling a spatial resolution of about 10 µm along the beam propagation direction and thereby a temporal resolution of about 30 fs in crossed beam experiments. The NQS chamber will have as particular feature the option to use the DSSC detector [33] in forward diffraction geometry.Due to the high scattering cross sections in the soft X-ray regime, single pulse imaging of larger molecules and particles becomes possible and will be applied to structural analysis at reduced spatial resolution.The DSSC detector is also used in combination with various particle spectrometers (TOF, VMI) to characterize size and shape of clusters and nanoparticles in parallel to the determination of kinetic energies, fragmentation patterns and emission angles of ions and electrons produced in the interaction volume. In addition, a third, specially designed ultra-high vacuum chamber will host a reaction microscope (SQS-REMI) for the complete characterization of molecular fragmentation processes by the application of electron-ion coincidence techniques [80], taking full advantage of the high repetition rate (until 27,000 pulses per second) available at the European XFEL.Using large area position sensitive delay-line detectors and a well-defined arrangement of magnetic and electric fields to extract and guide the electrons and ions, the kinetic energies of all fragments as well as their relative emission angles can be determined in a single molecule ionization event. Specially designed in-and out-coupling units for the optical laser are available to provide the optical radiation to all three interaction points in collinear geometry.In general, great emphasis is placed on a flexible design and arrangement of the experimental chambers and the various spectrometers, which will enable users to make optimal use of all the specific characteristics of the European XFEL, in particular of its uniquely high repetition rate.Furthermore, several extensions of the FEL beam parameters (e.g., variable polarization or two-color operation), beam delivery capabilities (beam splitter and delay device) and instrument layout are already decided or under investigation. Scientific Scope and X-ray Techniques The Spectroscopy and Coherent Scattering (SCS) scientific instrument is located at the SASE3 FEL source and aims at time-resolved experiments to unravel the electronic, spin and structural properties of materials in their fundamental space-time dimensions.Scientific objectives include, but are not limited to the understanding and control of complex materials [81][82][83], the investigation of ultrafast magnetization processes on the nanoscale [84,85], the real-time observation of chemical reactions at surfaces and in liquids [86,87], and the exploration of nonlinear X-ray spectroscopic techniques that are cornerstones at optical wavelengths [17,88]. 
The SCS instrument operates in the soft to tender X-ray regime (250 eV-3000 eV) covering a wide range of core level resonances: K-edges of most 2p and 3p elements (starting from carbon), L 2,3 -edges of 3d and 4d elements (transition metals) and M 4,5 edges of 4f elements (lanthanides).Time-resolved resonant spectroscopy offers element-, site-, orbital-, and spin-selective probing of complex material dynamics that is either directly related to or indirectly coupled to the valence electrons.Physical properties such as oxidation state, magnetism, local symmetries and ordering as well as elementary excitations can be investigated using X-ray Absorption Spectroscopy (XAS) [86,89], X-ray Resonant Diffraction (XRD) [81][82][83] and Resonant Inelastic X-ray Scattering (RIXS) [87].A particular aim of the SCS instrument is to combine these powerful spectroscopic techniques with X-ray diffraction and microscopy methods, which provide nanometer spatial-and femtosecond time resolutions.Such experiments open up a route to follow the dynamics in complex systems on their relevant length and time scales.The SCS instrument further implements Coherent Diffraction Imaging (CDI) techniques, i.e., X-ray holography [90,91].A time series of reconstructed CDI images can elucidate excited state dynamics in real space. Requirements The monochromatic-beam operations described in Ref. [92] are key to the success of the SCS instrument.The SASE3 soft X-ray monochromator is equipped with two gratings and a flat mirror that allows for monochromatic beam operation at high (∆E/E = 2.5 × 10 −5 ) and medium energy resolutions (∆E/E = 1 × 10 −4 ) as well as non-monochromatized beam operations without changing the beam transport to the sample.A tunable grating illumination concept is therefore implemented to provide a minimum spectral bandwidth-time duration product [92].In this way, RIXS experiments with high energy-resolution and lower time resolution as well as ultrafast dynamics studies at reduced energy resolutions (e.g., femtosecond surface chemistry and magnetism) can be performed at the same experiment station of SCS.New developments for the FEL source, such as full polarization control and undulator gap scanning techniques, will be implemented at SCS.These are nowadays standard capabilities at synchrotron facilities for X-ray spectroscopy investigations. The majority of the experiments will require X-rays to impinge on fixed solid targets that cannot easily be replenished between X-ray pulses, in contrast to liquid jet or particle injection schemes.This sets particular constraints on the optical and X-ray pulse energies in pump-probe experiments when sample damage or degradation by the radiation and heating have to be mitigated to a level that sufficient data acquisition is possible between the sample exchanges.Heat dissipation schemes have to be developed in order to reach the ultimate 4.5 MHz burst mode operation, where the sample relaxation time and the heat dissipation in the probed area must be shorter than 220 ns. 
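The trade-off between monochromator bandwidth and the shortest usable pulse duration described above can be illustrated with the Fourier limit for Gaussian pulses (ΔE·Δt ≈ 1.8 eV·fs at FWHM). The 700 eV photon energy in the sketch below is an arbitrary illustrative value within the SCS range, not an instrument specification.

```python
FOURIER_LIMIT_EV_FS = 1.825  # Gaussian FWHM time-bandwidth product (0.441 * h)

def transform_limited_duration_fs(photon_energy_ev, relative_bandwidth):
    """Shortest Gaussian pulse (fs) compatible with a given dE/E."""
    return FOURIER_LIMIT_EV_FS / (photon_energy_ev * relative_bandwidth)

PHOTON_ENERGY_EV = 700.0  # illustrative soft X-ray photon energy
for de_over_e in (1e-4, 2.5e-5):  # medium- and high-resolution grating settings
    dt = transform_limited_duration_fs(PHOTON_ENERGY_EV, de_over_e)
    print(f"dE/E = {de_over_e:.1e} -> pulse duration >= {dt:.0f} fs")

# Time between pulses at the 4.5 MHz intra-train repetition rate
print(f"intra-train pulse spacing: {1.0 / 4.5e6 * 1e9:.0f} ns")
```

The estimate shows why high-resolution RIXS necessarily comes with reduced time resolution (of order 100 fs), while the medium-resolution grating still admits few-tens-of-femtosecond pulses, and it reproduces the roughly 220 ns spacing between pulses in the burst.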
While time-resolved spectroscopy experiments on fixed targets require high average photon flux, CDI and nonlinear X-ray-matter-interaction experiments need the highest pulse energies. In single shot imaging experiments the number of incoming photons determines the attainable resolution and therefore, depending on the damage threshold, requires the experiment to be carried out in a "diffraction before destruction" mode [93]. In this case, a new sample has to be repositioned in the beam between the X-ray bursts at 10 Hz repetition rate and a pulse selection mode is necessary. SCS Instrumentation and Capabilities One of the major goals of the technical design was to implement a diverse platform for spectroscopy and coherent scattering techniques that is realized in a modular instrumentation of experiment stations and detectors. The mirror benders of the SCS Kirkpatrick-Baez refocusing optics deliver the beam to two X-ray interaction regions separated by 2 m (see Figure 7). This allows not only a small beam focus of 1-2 µm for CDI experiments but also a variable beam diameter of up to 500 µm. In this way, time-resolved spectroscopic studies can be carried out making the best use of the high-average photon flux without further beam attenuation for avoiding sample damage. The SCS instrument comprises two distinct experimental setups, the Forward-scattering Fixed-Target (FFT) chamber and the XRD chamber. Both chambers have the same mechanics on their base that locks to three fixation points on the floor, one set per interaction region. This allows for faster exchange of experiment stations and reproducible repositioning of the chambers. The FFT chamber is optimized for forward-scattering geometries such as XAS in transmission, small-angle X-ray scattering (SAXS) and CDI experiments. Besides optical and THz beam delivery the sample environment encompasses static magnetic fields up to 0.5 T and a fast sample scanner that fits 50 × 50 mm 2 sample arrays. The diffraction signal from the samples is collected downstream on the primary area detector, DSSC. The detector is mounted on a girder with a 5 m long translation stage. The closest sample-detector distance is 350 mm, corresponding to spatial frequencies near the wavelength limit of a few nm at soft X-ray energies. Objects of up to 3-5 µm in diameter can be reconstructed using CDI at a sample-detector distance of 5 m and photon energies below 1.5 keV. The missing low-q data passes through a hole in the center of the detector and is recorded downstream as an integrated sample transmission signal. Since the monochromatic beam intensity jitter is large, data collected from low intensity pulses can be vetoed using the DSSC detector [33]. In this way, the signal to noise level of the data can be improved and data acquisition time is optimized.
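The quoted 3-5 µm object size for CDI follows from the usual oversampling condition: the far-field speckle must be sampled by at least two detector pixels. The sketch below assumes a pixel pitch of roughly 200 µm purely for illustration (the actual DSSC pixel geometry is hexagonal), so the result should be read as an order-of-magnitude estimate.

```python
def max_object_size_um(photon_energy_kev, distance_m, pixel_um, pixels_per_fringe=2.0):
    """Largest object whose far-field speckle is still sampled by
    `pixels_per_fringe` pixels: D = lambda * L / (pixels_per_fringe * pixel)."""
    wavelength_m = 12.398 / photon_energy_kev * 1e-10  # E [keV] -> lambda [m]
    return wavelength_m * distance_m / (pixels_per_fringe * pixel_um * 1e-6) * 1e6

# 1.5 keV photons, 5 m sample-detector distance, ~200 um pixel pitch (assumed)
for n in (2.0, 3.0, 4.0):
    d = max_object_size_um(1.5, 5.0, 200.0, n)
    print(f"{n:.0f} pixels per fringe -> max object ~ {d:.0f} um")
```

With the stricter sampling of three to four pixels per fringe that practical reconstructions usually require, the estimate lands in the 3-5 µm range stated above.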
The XRD chamber enables a range of time-resolved spectroscopy and scattering methods for which a variable scattering angle is needed.The most relevant techniques are time-resolved XRD and RIXS as well as nonlinear X-ray studies (stimulated emission and scattering).The XRD setup is equipped with a diffractometer where a diode array can be rotated by nearly ±180 • in the horizontal scattering plane.The sample motion system provides six degrees of freedom and enables temperature-dependent studies between room temperature and cryogenic temperatures (liquid He cryostat).A detector flange with 90 • continuous rotation can interface with large detectors and spectrometers.The Heisenberg-RIXS (hRIXS) User Consortium is contributing a high-resolution spectrometer (∆E/E = 0.25 − 1 × 10 −4 ) that facilitates state-of-the-art RIXS experiments with unprecedented time-resolution at the SCS instrument.The 5 m long hRIXS spectrometer can rotate around the sample position, hovering 50 µm above a high-planarity floor (250 µm peak-to-valley over 37 m 2 ) on air pads. Both experiment chambers are designed for solid targets and operate in the 10 −9 mbar pressure regime depending on the detector vacuum.The chambers are equipped with a sample transfer system for exchanging samples under vacuum conditions.Optical laser delivery can be either collinear to the X-ray beam or arranged in off-axis geometry.THz generation and focusing takes place close to the interaction region and temporal diagnostics at the sample interaction point is realized.The hRIXS User Consortium will contribute an additional experiment station that provides a chemical sample environment including liquid jet systems of different geometries and couples to the hRIXS spectrometer. Scientific Scope and X-ray Techniques The Materials Imaging and Dynamics (MID) instrument of the European XFEL facility, located at the SASE2 beamline, will provide unique capabilities in ultrafast imaging and dynamics of materials, with particular focus on the application of coherent X-ray scattering and diffraction techniques.Coherent diffractive imaging (CDI) [94,95] and X-ray photon correlation spectroscopy (XPCS) [18,[96][97][98] experiments are at the heart of the activities planned.In addition, high resolution time-resolved scattering [99,100], nano-beam scattering/imaging [101,102] and novel correlation techniques [103] are foreseen at MID taking advantage of the unique time structure and high peak intensity of the European XFEL beam.The instrument can operate in small-angle (SAXS) and wide-angle (WAXS) X-ray scattering configurations with a movable large area detector.A large field-of-view configuration where the detector covers a maximum of reciprocal space is also possible.The instrument is optimized for windowless operation over a wide range of photon energies, 5-25 keV, and possibly higher in the future depending on the development of novel lasing schemes using the SASE2 FEL. 
Requirements The MID design has been guided by several goals.Firstly, the aim is to preserve the high average and peak brilliance provided by the source and make use of as many photons as possible in the experiments.At the same time, optimum conditions for beam tailoring must be ensured concerning focusing, energy selection, and spectral purity in a setup providing high beam stability (position, intensity) and fast and efficient data collection with the highest possible resolution.A versatile setup was required to enable the breadth of experiments that will take place at MID.The experimental setup is hence windowless (optional), multi-purpose and also contains beam diagnostics tools, both for the X-ray beam and the optical pump laser.MID strives to provide the best possible conditions for materials science experiments using hard X-ray FEL radiation, for instance in the studies of nanostructured materials, phase transitions and metastable states, liquid dynamics, and low-temperature physics and magnetism. MID Instrumentation and Capabilities The MID instrument is mainly installed in two safety hutches, an optics hutch (OH) and an experiment hutch (EH), but several essential components are also placed inside the SASE2 photon beam transport tunnel (see Figure 8).The OH contains a Si(220) monochromator to reduce the bandwidth of the SASE radiation to ∆E/E ~6.1 × 10 −5 if required.An additional Si( 111) mono (∆E/E ~1.4 × 10 −4 ) installed in the SASE2 tunnel can be used to pre-monochromatize or separately.Alternatively, the SASE beam can be applied directly (∆E/E ~1 × 10 −3 ) or in self-seeded mode [104,105] (∆E/E ~1 × 10 −5 ) once self-seeding becomes available at SASE2.Together with undulator tapering this will allow achieving more than 10 12 ph/pulse and a record high spectral peak brightness of more than 10 14 ph/s/meV at 9 keV [106].The OH contains beam attenuators and slits for further beam tailoring as well as an imager system to provide in-situ visualization of the beam size, shape, and intensity.A split-and-delay line (SDL) [107,108] will also be installed in the OH and will give the possibility of modifying the time-structure of the beam.Normally, the European XFEL delivers ultrashort (~1-100 fs) pulses of photons every 220 ns (4.5 MHz), but with the SDL under construction for MID it is possible to reduce this spacing to any value from ~10 fs to 800 ps [109].This enables particular experiments requiring such an X-ray pulse pattern, e.g., speckle visibility techniques [110,111] for ultrafast dynamics or X-ray pump X-ray probe, possibly in combination with an optical fs laser pump [112].The latter gives the additional option of performing X-ray probe-Optical pump-X-ray probe measurements where the two pulses from the SDL are not only delayed in time but also are hitting the sample at different angles of incidence.This provides a unique possibility to distinguish the two X-ray diffraction patterns hitting the detector and a spatial encoding of ultrafast dynamics can hence be obtained to yield a time-resolution much better than the 4.5 MHz detector speed [112].A focal spot down to 50 × 50 nm 2 or smaller is enabled by the nano-focusing system [112].Assuming full transmission of the FEL pulses this could enable peak intensities of beyond 10 21 W/cm 2 [39,113,114] allowing to explore non-linear X-ray interactions with matter, e.g., two-or multi-photon processes in scattering and absorption [115][116][117].The PP laser beam is delivered to EH via a transfer pipe to a laser table next to the 
MPC allowing additional tailoring of the beam before it is directed towards the sample position.Temporal and spatial overlaps of the optical laser beam and the X-rays can be controlled through imaging and timing diagnostics [49] and tuned by adjusting optical components in the laser beam path located in the ILH.The radiation scattered from the sample is measured using the AGIPD detector.In SAXS configuration the distance from sample to detector can be varied from ~200 to 8000 mm.This provides an angular detection resolution between 1 mrad and 25 µrad and a field-of-view between 1 rad and 25 mrad.With the direct beam in the center of the detector at 10 keV, it translates into a q-resolution and q-range of 5.0 × 10 −3 and 2.3 Å −1 for 200 mm, and 1.3 × 10 −4 and 6.3 × 10 −2 Å −1 for 8000 mm, respectively.Special configurations with even shorter sample-detector distance and exploitation of the full energy range of the instrument (5-25 keV) allow tuning these values.In the SAXS case, a hole in the center of the detector (adjustable by movable quadrants) permits unhindered passage of the direct beam, i.e., without destroying the sensor.The exit port of AGIPD is connected to a diagnostics end-station where intensity, size and spectrum of the transmitted beam can be quantified with high resolution.In particular, a semi-transparent bent diamond spectral analyzer has been developed allowing to quantify the SASE spectrum down to a resolution of ~0.1 eV [118].This spectrometer will operate in parallel with AGIPD and the spectral information together with scattering data enable a better data analysis as well as easy tuning of the self-seeded mode.A similar transparent diamond spectrometer can be inserted upstream of the MPC to measure the spectrum before interaction with the sample [118].In this manner absorption spectroscopy [119] can be combined with, e.g., pump-probe and coherent scattering techniques providing unique new possibilities of investigating interactions of ultra-bright fs X-ray pulses with matter. 
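The q-resolution and q-range quoted above follow directly from the scattering geometry. The short sketch below uses the 200 µm AGIPD pixel pitch and an assumed usable detector half-width of about 100 mm (consistent with the quoted fields of view, but an assumption here) to reproduce the numbers at 10 keV.

```python
import math

def q_range_and_resolution(photon_energy_kev, distance_mm,
                           pixel_mm=0.2, half_width_mm=100.0):
    """SAXS q-resolution and maximum q for a flat detector centered on the beam.
    q = (4*pi/lambda) * sin(theta/2); the resolution is one pixel at the center."""
    wavelength_a = 12.398 / photon_energy_kev
    k = 4.0 * math.pi / wavelength_a
    dq = 0.5 * k * (pixel_mm / distance_mm)        # small-angle, one-pixel step
    theta_max = math.atan(half_width_mm / distance_mm)
    q_max = k * math.sin(0.5 * theta_max)
    return dq, q_max

# 10 keV photons, sample-detector distances of 200 mm and 8000 mm
for d in (200.0, 8000.0):
    dq, qmax = q_range_and_resolution(10.0, d)
    print(f"L = {d:5.0f} mm: dq ~ {dq:.1e} 1/A, q_max ~ {qmax:.1e} 1/A")
```

Under these assumptions the sketch returns about 5 × 10⁻³ and 2.3 Å⁻¹ for 200 mm and about 1.3 × 10⁻⁴ and 6.3 × 10⁻² Å⁻¹ for 8000 mm, matching the values given in the text.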
In the EH, the beam first passes through a double mirror system (if inserted) that allows reflecting the X-ray beam downwards for grazing incidence liquid surface scattering. Another mirror reflecting upwards provides the aforementioned option of different incidence angles for the two split beams from the SDL. Downstream of the mirror system the ultra-high vacuum (~10 −9 mbar) section of the instrument terminates and it is necessary to operate at a lower vacuum level or even at ambient conditions due to the presence of outgassing substances, sample environments, and electronics. This transition is ensured either by insertion of a beam transparent diamond window, or by use of the differential pumping section positioned immediately downstream of the mirror. A large multi-purpose sample chamber (MPC) hosts local optics for nano-focusing, a hexapod sample manipulation stage, as well as different sample environments, e.g., providing low temperatures via He cryo-cooling, pulsed high magnetic fields, fast sample scanning, sample injection by liquid jets, aerosol injection, etc. To ensure a maximum of stability the stages carrying the nano-focusing setup and the sample hexapod are decoupled from the vacuum pipes and chamber walls and connected directly, via vacuum feedthroughs, to a several ton heavy granite block below the MPC. The MID instrument also features the option of measuring in a horizontal WAXS geometry (scattering angle up to ~55°) with the sample-detector distance varying between 2000 and 8000 mm. This will enable high resolution detection at large q (beyond 10 Å −1 ) investigating (coherent) diffraction originating from, e.g., structural, charge, or magnetic ordering in combination with the pulsed magnetic field or the fs pump laser to access ultrafast dynamics processes.
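Combining the photons-per-pulse and nano-focus figures given earlier in this subsection, a rough estimate of the attainable peak intensity can be made. The 25 fs pulse duration used below is an assumed value within the typical SASE pulse-length range, not a measured number, and flat profiles are assumed.

```python
def nanofocus_peak_intensity_w_cm2(photons, photon_energy_kev, duration_fs, spot_nm):
    """Peak intensity for `photons` per pulse in a square spot of side `spot_nm`,
    assuming flat temporal and spatial profiles (rough estimate)."""
    pulse_energy_j = photons * photon_energy_kev * 1e3 * 1.602176634e-19
    area_cm2 = (spot_nm * 1e-7) ** 2
    return pulse_energy_j / (duration_fs * 1e-15) / area_cm2

# 1e12 photons at 9 keV (~1.4 mJ), assumed 25 fs, focused to 50 x 50 nm^2
print(f"{nanofocus_peak_intensity_w_cm2(1e12, 9.0, 25.0, 50.0):.1e} W/cm^2")
```

Under these assumptions the estimate comes out at a few 10^21 W/cm², consistent with the "beyond 10^21 W/cm²" figure quoted for the nano-focusing setup.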
Scientific Scope and X-ray Techniques The High-Energy Density (HED) instrument aims at the investigation of matter at extreme states of temperature, pressure, density, and/or electromagnetic fields using hard X-ray FEL radiation. For this goal the HED instrument will provide a unique combination of the drivers to create extreme states in the laboratory and hard X-ray laser pulses [120]. HED offers a wide range of time-resolved X-ray techniques reaching from diffraction and imaging to different spectroscopy techniques for measuring various geometric and electronic structural properties. Research areas at HED include the investigation of properties of matter in solar and extra-solar planets, where high pressures of several 100 GPa at moderate temperatures (<10,000 K) are expected, and of properties of matter in the presence of both strong electric and magnetic fields. High-temperature superconductivity will be studied using pulsed magnetic fields generated in coils with field strengths up to 60 T.
Extreme electromagnetic fields also occur during and after the interaction of short-pulse high-intensity lasers with solids and liquids, forming a dense plasma and accelerating electrons to up to several MeV kinetic energy.These induce very intense, transient magnetic fields, which could shed light on properties of matter at temperatures of several kT (~11,000 K). Requirements The use of a large variety of X-ray techniques creates a broad band of requirements to FEL operation and properties.Most important are the need for a small bandwidth, typically smaller than 10 −4 in order to perform inelastic scattering experiments with sufficient resolution and throughput.Furthermore, as many experiments study or use low cross-section processes, high pulse energies are very important.This becomes particularly relevant for experiments at the highest photon energies above 20 keV.For experiments using high energy drivers to create extreme states and operating at reduced repetition rates of 1 Hz, or even far below, it is conceivable to switch beam to other stations at this FEL.Such an operation mode, however, requires that the experiments use the same, or at least very similar, X-ray properties. HED Instrumentation and Capabilities The HED instrument is installed at the SASE2 beamline and features an optics hutch (OH) and an experiment hutch (EH).In addition, an X-ray monochromator, focusing devices, a split and delay line optics and a pulse picker are placed inside the preceding tunnel section (see Figure 9).The four-bounce Si- (111) monochromator can reduce the SASE bandwidth to ∆E/E ~10 −4 at 5-25 keV, while a high-resolution Si-(533) monochromator will allow for a 5 × 10 −6 bandwidth at 7.5 keV.Focusing of 5-25 keV X-rays to foci of 1-200 µm at the sample position is established by several sets of Be compound refractive lenses (CRLs), located in the tunnel section (2×) and in OH.A fourth lens set close to the sample position will allow for sub-micron foci.A multilayer-based split-and-delay line has been designed, was constructed by the University of Münster (Germany) and is currently installed at HED [121].This device allows splitting the X-ray pulse into two with a tunable intensity ratio and to separate them with a maximum delay of 2 ps (at 20 keV) and 23 ps (at 5 keV).A pulse picker will allow selecting X-ray pulses for 10 Hz, 1 Hz or pulse-on-demand operation, thereby synchronizing X-ray and optical laser delivery to the sample.The 9 × 11 m 2 experiment hutch is enclosed by a heavy concrete wall of thicknesses between 0.5 and 1.0 m to establish radiation shielding for high energetic electrons generated by the relativistic laser-matter interaction processes when focusing the multi-100 TW laser on the sample.In EH two interaction areas IA1 and IA2 have been defined.In IA1, a large vacuum interaction chamber (IC1) with inner dimensions 2.6 × 1.7 × 1.5 m 3 (LWH) accommodates several configurations for diffraction, imaging or low/high resolution spectroscopy and inelastic X-ray scattering.The IC1 vacuum of ~10 −4 mbar is separated from the X-ray optics by differential pumping stage or, above 10 keV, by a diamond window.At IA2 various setups can be interchanged.A second interaction chamber (IC2) with 1 m diameter is dedicated to dynamic diamond anvil cell (DAC) experiments and high-precision dynamic laser compression experiments in a standardized configuration.Alternatively, a goniometer with a pulsed magnetic coil and a cryogenic sample environment shall be placed here.While in IA1 all X-ray and 
laser beams are available, IA2 has access to the X-ray FEL and the nanosecond laser beams only. X-ray detectors inside IC1 need to be vacuum-compatible with compact dimensions, low weight, modular assembly, and >10 Hz repetition rate. HED plans to have several detectors installed. Two EPIX100 modules [122] offer a 35 × 38 mm 2 chip with 50 µm pixel pitch and 10 2 dynamic range at 8 keV. These detectors will be coupled, e.g., to crystal spectrometers. Three EPIX10k modules [123] have identical chip size, 100 µm pixel pitch, but offer 10 4 dynamic range by gain switching. With the same dynamic range, four Jungfrau modules offer 40 × 80 mm 2 chips each with 75 µm pixel pitch and 10 4 dynamic range at 12 keV [124]. The latter two gain-switching detectors are ideally suited to record dedicated parts of an X-ray diffraction pattern. For both IA1 and IA2, a detector bench at the end of EH will offer a possibility to place large area detectors, e.g., for imaging or SAXS type experiments. This bench allows adjusting the distance from IA1 and IA2 to the detector. On this bench, the HIBEF consortium plans to integrate an AGIPD 1M detector [31], a Perkin-Elmer 4343CT flat-panel large-area detector, and high-resolution CCD cameras for X-ray phase contrast imaging and ptychography applications. Power slits in OH can tailor the wings of the beam monitored by a beam-imaging unit. Using diamond gratings in first order a fraction of the incident beam can be steered to a single-pulse spectrometer using a bent Si crystal to monitor the incident X-ray spectrum. X-ray beam position and intensity are monitored by two intensity-position monitors, using backscattering from thin foils. Alternatively, real-time intensity monitoring is possible with a scintillator-coupled fast-frame CCD which picks up the other 1st order diffraction from the diamond grating. The quality of the photon beam can be further improved by cleanup slits for both high and low photon energies, located close to the interaction chamber in EH.
Several drivers to generate extreme states of matter will be available at HED, e.g., two high energy optical lasers, diamond anvil cells, and pulsed magnetic fields, contributed and operated by the international HIBEF user consortium.The all-diode pumped high energy (HE) nanosecond DiPOLE-100X laser is developed by STFC CLF (UK) [125].It delivers up to 80 J at 515 nm wavelength with pulse durations of 2-15 ns with a maximum repetition rate of 10 Hz.This laser will be primarily used for shock compression experiments and its pulses can be temporally shaped to enable isentropic ramp compression techniques.The multi-100 TW Ti:Sapphire (HI) laser system, currently under construction by Amplitude (France), will deliver 4-10 J of 800 nm light in ultrashort pulses of less than 25 fs at a repetition rate of 10 Hz.The pulses of this laser can be focused to a few µm 2 spot by means of an off-axis parabola, reaching on-target intensities of the order of 10 20 W/cm 2 .This laser will primarily be used for relativistic laser-matter interaction experiments.In addition, the standard PP laser of the European XFEL will be available.All three lasers have to be precisely timed with respect to the X-ray pulses and are synchronized to the master oscillator.The timing jitter between the PP laser and the incident X-rays is monitored by photon-arrival diagnostics with a precision on the order of a few femtoseconds.Timing between the HI laser and the X-rays is realized indirectly using the characterized PP laser in an optical-optical balanced cross-correlator.Timing between the HE laser and the X-rays is less demanding and achieved via fast photo diodes that detect both X-rays and optical light with a resolution of few 10 ps.Matter in magnetic fields of up to 60 T can be studied in a solenoid coil.The timescale of the field build-up of 0.6 ms is perfectly adapted to the length of a 4.5 MHz pulse train of the facility. 
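The statement that the 0.6 ms field build-up is matched to the pulse-train length can be checked against the standard bunch pattern (222 ns spacing, up to 2700 bunches per train) quoted in the Figure 2 caption below; the sketch is a back-of-the-envelope comparison only.

```python
BUNCH_SPACING_S = 222e-9      # standard intra-train spacing (4.5 MHz)
BUNCHES_PER_TRAIN = 2700      # maximum number of bunches in a 10 Hz train
COIL_RISE_S = 0.6e-3          # field build-up time of the pulsed solenoid

train_length_ms = BUNCH_SPACING_S * BUNCHES_PER_TRAIN * 1e3
print(f"pulse-train length: {train_length_ms:.2f} ms")        # ~0.60 ms
print(f"coil field rise:    {COIL_RISE_S * 1e3:.2f} ms")
print(f"X-ray pulses within one field rise: {int(COIL_RISE_S / BUNCH_SPACING_S)}")
```

The full train indeed spans about 0.6 ms, so the entire burst of up to 2700 X-ray pulses fits within a single rise of the pulsed field.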
Future Developments Being a brand new facility and observing much progress in the field of FEL sources, FEL instrumentation, and novel types of scientific experiments, a rich variety of further developments is expected to be implemented during the coming years. Developments going beyond the baseline scope of European XFEL have already started using external funding. Most notable is the construction and implementation of self-seeding for the hard X-ray FEL sources [104,105]. This is on-going for the SASE2 FEL and under preparation for SASE1. The FEL radiation performance of SASE3 would benefit enormously from the provision of variable polarization that can be switched between linear and circular with full flexibility [56]. The installation of a SASE3 afterburner is therefore under preparation. This afterburner consists of several 2 m long APPLE-type undulators, which will be added to the main SASE3 undulator. Furthermore, the installation of a chicane in the SASE3 undulator will enable operation at two widely separated photon energies for time-resolved X-ray-X-ray pump-probe investigations. Another development concerns the construction of additional scientific instruments. Using funds from user consortia, the additional end-station at SPB/SFX for serial femtosecond crystallography and a third beam transport system, vacuum port and experiment hutch at SASE3 are pursued. A completely different area is that of further developing the PP laser towards providing much longer wavelengths. Pumping solids in the THz regime has many scientific applications and is vigorously requested by part of the user community. The feasibility and possible implementation of laser- and accelerator-based techniques to produce intense, ultrashort duration, and monochromatic THz pulses is currently studied.

Naturally, the completion of the remaining two yet unoccupied FEL sources and the construction of further scientific instruments are expected to become major activities of European XFEL once the regular operation of the facility is achieved successfully. In a more distant future, a modification of the superconducting accelerator to include a cw mode of operation is very interesting for scientific applications, as is also indicated by the LCLS-II project [13]. Such an upgrade first requires developing an additional low emittance injector operating in cw mode. Since the electron energy will be significantly smaller than with the present pulsed RF system, this upgrade also requires a modified concept for the FEL sources. One possibility would be to direct the electrons to a second switchyard with novel FEL undulators specifically designed for the smaller electron energies and providing at the same time space for a second experiment hall hosting additional scientific instruments.

Figure 1. Overall layout of the European XFEL facility. The electron accelerator leads into two electron beamlines with up to five FEL sources. Each of these has dedicated X-ray transport sections leading towards the experiment hall, where the up to fifteen scientific instruments can be installed. For abbreviations see text.

Figure 2. Time pattern of the electron bunch train in the linac. The RF field is pulsed with 10 Hz and has a flat top region of ~1.2 ms clearly exceeding the duration of the electron bunch train of 600 µs. The bunch train can be separated into portions with different function. The header H is typically used for fast intra-bunch feedback. The next portion S is dedicated for the South branch with SASE2. Following a short gap to switch the flat top kicker magnet the last portion N will be sent to the North branch with SASE1 and SASE3. The smallest separation of electron bunches is 222 ns in standard operation, corresponding to 4.514 MHz and up to 2700 bunches per train. Operation at bunch separations of 886 ns (1.128 MHz) and 10 µs (0.1 MHz) is possible, too.

Figure 3. Optical layout of the three photon beam transports with some of the most important elements in the sequence towards the instruments located in the experiment hall. The most southern beam line is: SASE2 (a); then SASE1 (b); and SASE3 (c).

Figure 4. Overview schematic of the SPB/SFX scientific instrument at the SASE1 FEL beamline. The sketch indicates major instrumentation items installed in the photon beam transport tunnel, optics and experiment hutch. Not shown is the PP-laser instrumentation.

Figure 5. Overview schematic of the FXE scientific instrument at the SASE1 FEL beamline. The sketch indicates major instrumentation items installed in the photon beam transport tunnel and experiment hutch. Not shown is the PP-laser instrumentation.

Figure 6. Schematic outline of the SQS scientific instrument at the SASE3 FEL. Shown are major instrumentation items installed in the photon beam transport tunnel and experiment hutch comprising the beam transport, focusing and diagnostic devices as well as the three interchangeable experimental vacuum chambers AQS, NQS and SQS-REMI.

Figure 7. Schematic outline of the SCS scientific instrument at the SASE3 FEL. Shown are major instrumentation items installed in the photon beam transport tunnel and experiment hutch comprising the beam transport, focusing and diagnostic devices as well as the Heisenberg RIXS spectrometer contributed by the hRIXS user consortium.

Figure 8. Schematic outline of the MID scientific instrument at the SASE2 FEL. Shown are major instrumentation items installed in the photon beam transport tunnel, optics hutch and experiment hutch comprising the beam transport, focusing and diagnostic devices.

Figure 9. Schematic outline of the HED scientific instrument at the SASE2 FEL beamline. Shown are major instrumentation items installed in the photon beam transport tunnel, optics hutch and experiment hutch comprising the beam transport, focusing and diagnostic devices as well as the large optical lasers, second interaction area instrumentation and detectors contributed by the HiBEF User Consortium.

Table 1. Comparison of accelerator parameters of hard X-ray FEL facilities.
Effect of Method of Secondary Fermentation and Type of Base Wine on Physico-Chemical and Sensory Qualities of Sparkling Plum Wine Plum base wines prepared with potassium metabisulphite or sodium benzoate were converted into sparkling wine, either by `Methode Champenoise' or tank method with artificially carbonated wine serving as a control. In both the secondary fermentation methods ethanol and low temperature acclimatized yeast; Saccharomyces cerevisiae UCD-595 with optimized sugar (1.5%) and di-ammonium hydrogen phosphate (0.2%) were used. Both methods of sparkling wine production and the type of base wine affected the physico-chemical and sensory characteristics of the sparkling wine produced. In the secondary fermented wines, most of the physico-chemical characteristics were altered compared to that of artificially carbonated wines except volatile acidity, methanol, propanol and ethanol. Furthermore, these wines contained lower proteins, minerals and amyl alcohol than the base wine. In general, the sparkling wines produced by either of the secondary fermentation method had lower sugar, more alcohol, higher macro elements but lower Fe and Cu contents than the artificially carbonated wines. An overview of the changes occurring in the sparkling wine in comparison to artificially carbonated wine revealed that most of the changes took place due to secondary fermentation. The bottle fermented wine recorded the highest pressure, low TSS and sugars. The secondary bottle fermented wine was the best in most of the sensory qualities but needed proper acid-sugar blend of the base wine before conducting secondary fermentation. Sparkling wine made from base wine with sodium benzoate was preferred to that prepared with potassium metabisulphite. The studies showed the potential of plum fruits for production of sparkling wine. INTRODUCTION Plum (Prunus salicina L.) is cultivated all over the world including India (Bhutani and Joshi 1995). Its fruit is considered a potential substrate for the preparation of alcoholic beverags including wine and vermouth because of its attractive colour and good fermentability (Amerine et al. 1980, Vyas and Joshi 1982, Joshi et al. 1991, Joshi 1997).Since the fruit has a very short postharvest life, a large portion of the produce goes waste during peak season.The processing industry, which at present is at growing stage, is mainly confined to the canning and drying of select varieties (Woodruf and Luh 1986).Production of alcoholic beverages from plum could increase the utilization of fruit. Sparkling wine seems to have a good potential to be a profitable outlet.Champagne is the most popular sparkling wine throughout the world (Lee andBaldwin 1988, Montemiglio 1992) but the technology of champagne making is available only in a few countries especially France.Besides, there are many steps in its production technology with a very limited published information (Amerine et al. 
1980). Even out of the scattered information available, a major part is patented. Most of the sparkling wines are produced from grapes. Recently, sparkling wines from orange and elderflower have also been developed (Rose 1992, Dirker 1992). But no information is available on the suitability of plum fruits for the preparation of sparkling wine, its composition and the quality of wine made by different methods. We have earlier reported the composition and quality of plum base wine using two different preservatives (Joshi and Sharma 1995). The factors important in secondary fermentation of plum base wine, such as yeast acclimatization to ethanol and low temperature fermentability, and the amount of sugar and nitrogen source required, have earlier been reported (Sharma and Joshi 1997). In this paper, we report the results of our study on physico-chemical and sensory qualities of sparkling wines prepared by bottle and tank fermentation methods along with artificially carbonated wines, using two types of base wines - potassium metabisulphite (KMS) and sodium benzoate (NaB) treated. MATERIALS AND METHODS In the preparation of sparkling plum wine two methods, bottle and tank fermentation, were used with artificially carbonated wine as a control. The procedure for yeast acclimatization and amelioration with sugar and nitrogen source used were the same as reported earlier (Sharma and Joshi 1997). Bottle fermentation: KMS and NaB treated base wines were ameliorated with 1.5% sugar and 0.2% di-ammonium hydrogen phosphate (DAHP) and inoculated with acclimatized Saccharomyces cerevisiae var. ellipsoideus UCD-595 (0.3%, v/v inoculum). After bottling, corking and labelling, the bottles were incubated at 14±2 °C for three months, keeping them in a slanting position. Afterwards these were placed upside down with the neck downward to get the yeast onto the cork, as practiced for champagne making (Amerine et al. 1980). These bottles were opened after chilling to remove the sediments and were used for evaluation of various quality parameters after another three months of aging. Tank fermentation: An autoclave was suitably modified for this purpose. Fermentation was carried out after ameliorating both the base wines, as described for bottle fermentation. After incubation for a period of two weeks at 14±2 °C, the tank was chilled and the wine was bottled and corked immediately, followed by evaluation after three months of aging in bottles. Carbonated wine: Before artificial carbonation of plum wine, the total soluble solids (TSS) of both the KMS and NaB treated wines was raised to 10 °B by addition of sugar syrup and these were then filtered and chilled. Initial optimization was carried out by carbonating both the wines in a carbonating machine to give a pressure of 30 psi. The machine had all the contact parts of stainless steel. Physico-chemical analysis: The wines were evaluated for physico-chemical characteristics and sensory quality. TSS, acidity, pH, sugar, crude proteins, preservatives, total anthocyanins and pressure of the wine were measured by the standard procedures (Amerine et al. 1980, Rangamma 1986). The ethanol was measured as per the method of Caputi et al. (1968). Methanol and other alcohols in the wines were estimated by the GC method as described earlier (Joshi and Sharma 1995). For total phenols, esters, volatile acidity and aldehyde estimation, the prescribed methods were followed (Amerine et al. 1980). Mineral contents (macro and micro) were analysed by the method reported earlier (Bhutani et al. 1989).
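The adjustment of the base wine to 10 °B before carbonation described above is, in essence, a simple mass-balance (chaptalization) calculation. The sketch below is illustrative only: the 7 °B starting value is an assumed figure, the addition is treated as dry sucrose rather than syrup, and °Brix is taken as percent soluble solids by weight.

```python
def sugar_to_add_g(wine_g, tss_initial_brix, tss_target_brix):
    """Grams of sucrose needed to raise the TSS of `wine_g` grams of wine
    from `tss_initial_brix` to `tss_target_brix` (simple mass balance)."""
    if tss_target_brix <= tss_initial_brix:
        return 0.0
    return wine_g * (tss_target_brix - tss_initial_brix) / (100.0 - tss_target_brix)

# Illustration: 1 kg of base wine assumed to stand at 7 degrees Brix
print(f"sugar to add: {sugar_to_add_g(1000.0, 7.0, 10.0):.1f} g")  # ~33 g
```

If syrup of known strength is used instead of dry sugar, the same balance applies with the syrup's own solids content entering both the numerator and the denominator.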
Sensory evaluation: The sensory analysis of wines was conducted by a panel comprising 10 members. Coded samples of the chilled products were presented to the members in separate booths for evaluation.A recommended proforma except with an additional attribute of extent of carbonation was used for evaluation of wines (Amerine et al. 1980).The overall quality of the wines was determined by obtaining a sum of scores (out of maximum score of 20) from all the parameters.The judges served as replications.The data obtained from physico-chemical parameters and sensory qualities were analysed by CRD and RBD, respectively (Cockrane and Cox 1963). RESULTS AND DISCUSSION Physico-chemical characteristics: The data presented in Table 1 show that the tank fermented wine retained more sugar as reflected by the total and reducing sugars and TSS than bottle fermented wine.Among the two base wines, sodium benzoate treated wine had higher of these contents.The higher sugar content of tank fermented wine was the consequence of its slower fermentation than bottle fermentation.Overall, reduction in total sugars of the wines obtained by secondary fermentation (bottle or tank) is due to the consumption of sugar by yeast in production of CO 2 and ethanol (Markides 1986).NaB treated wine had signifi-cantly higher acidity than the KMS wine but different methods were indistinguishable in this respect (Table 1).The consumers reportedly prefer sparkling wines of high acidity and acidification of wine with citric acid for carbonation is normally practiced (Amerine et al. 1980).In comparison to the reported values of acidity for champagne, it was higher in plum wine, traceable to the original acidity of the fruit.The pH of tank fermented wine was significantly higher than that of bottle fermented or artificial carbonated wines, which is in consistence with the pH values of these wines (Table 1).Under the given conditions, the wine obtained by either method had similar colour as revealed by the red and yellow colour units. Bottle fermented wine accumulated higher pressure (5.25 psi) than tank fermented wine.Although, the same cuvee was prepared for both the methods, yet it gave different results in two types of fermentations.A low pressure in tank fermented wine in comparison to the bottle fermented wine was due to multiple factors.The size of fermenter (bottle and tank) and the rate of fermentation affect the pressure.The bigger size of fermenter in tank fermented wine coupled with relatively less fermentation, indicated by higher total sugars (Table 1) are responsible for the low pressure in tank fermented wine.Since the use of fermentation tank in the bottling procedure was known to reduce the CO 2 pressure (Janke and Rohr 1960), this appears to be the another reason for low pressure. The highest pressure (60.0 psi) obtained in KMS treated bottle fermentation was slightly lower than the commercially prepared champagne (Amerine et al. 1980).The type of base wine and method of secondary fermentation did not affect the residual preservative level.The level was below the permissible limit. 
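Because the carbonation and bottle pressures above are reported in psi, a small unit conversion helps put them alongside the roughly 5-6 atm commonly cited for commercial champagne (a general benchmark, not a value from this study).

```python
PSI_TO_BAR = 0.0689476
PSI_TO_ATM = 1.0 / 14.696

for label, psi in (("artificial carbonation", 30.0),
                   ("bottle-fermented KMS wine (highest)", 60.0)):
    print(f"{label}: {psi:.0f} psi = {psi * PSI_TO_BAR:.2f} bar"
          f" = {psi * PSI_TO_ATM:.2f} atm")
```

On this scale the 60 psi reached in the KMS bottle fermentation corresponds to about 4 bar, below the typical champagne benchmark, in line with the comparison drawn above.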
Results also revealed that both secondary fermented wines had more ethanol than the artificially carbonated wine, and that the KMS treated wines yielded more ethanol than the NaB treated wines (Table 1). The increase in ethanol content of the secondary fermented wines is understandable, because the base wine is re-fermented after the addition of sugar and a nitrogenous source, in contrast to the artificially carbonated wine. The methanol content of the sparkling wine was not affected significantly either by the different methods or by the type of base wine. The lack of substrate (pectin) for methanol production, pectin being removed from the base wines by clarification after primary fermentation, probably contributed to this effect. Secondary fermentation led to significantly higher amounts of propanol and amyl alcohol than artificial carbonation (Table 1). Tank fermented wines contained more amyl alcohol than bottle fermented or artificially carbonated wines, and the KMS treated wine contained a higher amount of propanol than the sodium benzoate treated wine. These alcohols are by-products of alcoholic fermentation (Amerine et al. 1980); because of this, the secondary fermented wines contained higher amounts of both propyl and amyl alcohol than the artificially carbonated wine. For this parameter, the values for the NaB treated wine were the more favourable of the two sparkling wines. Secondary fermentation, either in bottle or in tank, significantly reduced the aldehyde content and enhanced the total esters, total phenols, total anthocyanin and crude protein contents of the wines compared with the artificially carbonated wines. The wines produced from the KMS treated base contained more aldehyde, ethanol and esters, but had lower phenols, anthocyanins and crude proteins than those obtained with sodium benzoate. Since a low aldehyde content is preferred in the preparation of sparkling wine, either method of secondary fermentation using NaB had an edge over artificial carbonation. Higher amounts of esters are desirable, as they enhance the flavour of the wine. Phenols play an important role in the sensory characteristics of wine, and both the type and the quantity of phenolics are significant in imparting an astringent taste to the wine (Amerine et al. 1980). Anthocyanins are important pigments (Leroy et al. 1990) contributing to the colour appeal of wines made from fruits like plum. Neither the method nor the type of base wine influenced the volatile acidity (Table 1), and its value, lower than the prescribed limit (0.4%, as acetic acid), indicates the soundness of fermentation. The higher crude protein in the secondary fermented wines is the result of autolysis of yeast during secondary fermentation, as observed earlier in grape wine (Leroy et al. 1990). All of these compositional parameters are related to fermentation, and the changes can thus be correlated with secondary fermentation. Since the KMS treated wine had higher fermentability than the sodium benzoate treated wine, these changes were more pronounced in the KMS wine. Of the two methods of secondary fermentation, bottle fermentation gave most of the desirable characteristics, owing to the better fermentation conditions in the bottle than in the tank. The tank fermentation method is known to provide more aerobic conditions than the bottle, giving rise to loss of alcohol, production of more higher alcohols, loss of pressure, etc. Bottle fermentation thus appears to be the most suitable method in this respect.
The secondary fermented wines retained higher contents of the major elements (Na, K, Ca, Mg) and of the trace elements, except for Cu and Fe, than the artificially carbonated wines. The mineral composition of the sparkling wine obtained by either method was, however, more or less similar. The type of base wine had a significant impact on the mineral content of the sparkling wine. Compared with the KMS treatment, the sodium benzoate treated wine contained higher amounts of Na, K and Mg and lower amounts of Ca, Fe and Zn. The differences in the contents of the other microelements, Cu and Mn, were non-significant. The differences in element levels may arise for more than one reason: the extent of fermentation, the degree of autolysis and the aging period can all influence the solubility and precipitation of the elements. The higher Fe and Cu contents in the artificially carbonated wine appear to be the result of contamination of the wine with these metals during carbonation. Low Fe, Cu, Zn and Mn contents are desirable for stability and for the prevention of a metallic taste in the wines (Amerine et al. 1980).

Sensory evaluation: The wines produced by the different methods were judged to be significantly different in sensory quality (Table 2). The bottle fermented wine was adjudged superior in most of the characteristics except sweetness and body. The sparkling wine prepared from the sodium benzoate treated base wine was superior mainly because of its aroma and bouquet, astringency and overall impression. The total scores of the wines prepared by both methods indicate that they were commercially acceptable. Since the bottle fermented wine did not score well with respect to acid-sugar balance, there is a need for improvement in this attribute. Besides, the stability of CO2 in this wine was lower. Foam stability is a very important characteristic of sparkling wine; it improves during aging, which is responsible for the release of certain compounds, including proteins, lipids and polysaccharides, that contribute to foam stability.

The better quality of the bottle fermented wine could be attributed to yeast autolysis, which plays an important role in producing aroma and flavour (Jordan and Napper 1986). The panelists indicated that, compared with the KMS treated wine, the NaB treated wine had desirable astringency and a smooth taste, as was the case with its base wine discussed earlier (Joshi and Sharma 1995).

From this study, it is concluded that the bottle fermented wine made from the sodium benzoate treated base wine had a desirable level of CO2, low aldehyde, higher esters, more crude proteins, better colour and higher sensory quality than the wine obtained by the tank fermentation process or by artificial carbonation. Compared with the artificially carbonated wines, those produced by secondary fermentation had many superior characteristics. Since maturation was carried out for only three months, a longer period might have produced wine of still better quality. Nevertheless, plum fruit has the potential to produce sparkling wine by the méthode champenoise.

Table 1: Comparison of methods and types of wines for important characteristics of sparkling plum wine.
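The randomized-block treatment of the sensory scores described in Materials and Methods (judges as blocks, the six wine treatments as factors, overall score out of a maximum of 20) can be illustrated with a minimal sketch; all scores in the example are hypothetical and only the layout follows the paper.

```python
# Minimal randomized-block (RBD) ANOVA on hypothetical sensory scores.
import numpy as np

# rows = judges (blocks), columns = wine treatments
# (bottle/KMS, bottle/NaB, tank/KMS, tank/NaB, carbonated/KMS, carbonated/NaB)
scores = np.array([
    [15.5, 16.0, 14.5, 15.0, 13.5, 14.0],
    [16.0, 16.5, 15.0, 15.5, 14.0, 14.5],
    [15.0, 15.5, 14.0, 14.5, 13.0, 13.5],
    [15.5, 17.0, 14.5, 15.0, 13.5, 14.5],
])
b, t = scores.shape                     # number of blocks, number of treatments
grand = scores.mean()

ss_total = ((scores - grand) ** 2).sum()
ss_treat = b * ((scores.mean(axis=0) - grand) ** 2).sum()
ss_block = t * ((scores.mean(axis=1) - grand) ** 2).sum()
ss_error = ss_total - ss_treat - ss_block

ms_treat = ss_treat / (t - 1)
ms_error = ss_error / ((t - 1) * (b - 1))
f_treat = ms_treat / ms_error
print(f"F (treatments) = {f_treat:.2f} on {t - 1} and {(t - 1) * (b - 1)} df")
```

A significant F-ratio for treatments would indicate that the judges distinguished the wines beyond what judge-to-judge variation alone explains, which is how the differences reported in Table 2 would be established.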
2019-01-01T16:50:26.975Z
1999-01-01T00:00:00.000
{ "year": 1999, "sha1": "59eac6489093751b65eeca48b875c2c709357079", "oa_license": "CCBYNC", "oa_url": "https://www.scielo.br/j/babt/a/Ct64BW8CnrjLQsDVv5jZXMt/?format=pdf&lang=en", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "59eac6489093751b65eeca48b875c2c709357079", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Chemistry" ] }
222501258
pes2o/s2orc
v3-fos-license
DISASSEMBLABLE VACUUM CHAMBER AS AN INNOVATIVE TEST STAND DESIGNED FOR RESEARCH ON IMPROVING THE OPERATIONAL PARAMETERS OF POWER SWITCHING APPARATUS

The Polish power industry is characterized by outdated elements and is in poor technical condition. This applies mainly to overhead lines operating at medium and high voltage (MV and HV) levels. Moreover, the Energy Regulatory Office (ERO) requires the Distribution System Operators to supply electricity with specified parameters, ensuring uninterrupted electricity supply to end users. Failure to meet these conditions results in specific financial penalties. In connection with the above, there is a strong need to upgrade the existing electricity grids using modern equipment. The article presents an innovative, original test stand based on a so-called disassemblable vacuum chamber, which allows research to be conducted on improving the performance of modern switching equipment used in Smart Grid networks. The article also presents the results of electric strength tests of the inter-contact gap, carried out to confirm the correct operation of the described test stand.

Introduction

Currently, the most frequently used switching and electric-arc-extinguishing media in medium-voltage switchgear (disconnectors, switches) are sulphur hexafluoride (SF6) and vacuum (Fig. 1, Fig. 2). The problems with the use of SF6 gas result mainly from its properties, which are harmful to the climate. It is an odourless, colourless (under normal conditions), non-poisonous and non-flammable gas. However, it has been classified as a fluorinated greenhouse gas (F-gas) which, by remaining in the atmosphere for a long period of time, contributes to the temperature rise. Moreover, it is the strongest greenhouse gas known to date. Its global warming potential (GWP) is 22,200, which means that 22,200 kg of carbon dioxide has the same effect as 1 kg of sulphur hexafluoride. The full decomposition of this gas in the atmosphere takes 3,200 years. The use of sulphur hexafluoride has not gone unnoticed by various organisations and institutions. The Montreal Protocol, signed in 1987 [8], provides for the prevention of the ozone hole and also introduces the necessity to reduce the use of the substances which deplete the ozone layer. The Kyoto Protocol, in force since 2005, also provides for the reduction of F-gas emissions [5]. The current plan assumes a reduction of gas emissions by 80-95% by 2050. The European Union has also formulated appropriate action plans related to environmental protection. The regulation on fluorinated greenhouse gases provides for a reduction of gas emissions in the European Union countries by about 73% by 2030 and by about 75% by 2050 compared to 1990 levels.

Fig. 2. Medium-voltage switchgear using a vacuum as an insulating medium: KTR recloser manufactured by Tavrida Electric and EKTOS disconnector manufactured by EKTO and Lublin University of Technology.

An excellent alternative to the abovementioned SF6 gas is vacuum technology. Currently, there is no other environmentally friendly extinguishing medium in use in switchgear. Vacuum extinguishing interrupters are characterized by high switching durability, operation in any working position, quick recovery of electrical strength and no explosion hazard [11]. The demand for new, innovative devices that meet the growing requirements for reliable electricity distribution calls for new solutions.
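As a simple illustration of the GWP figure quoted above, an SF6 loss can be converted into a CO2-equivalent mass; the leak mass in this sketch is hypothetical and only the conversion factor comes from the text.

```python
# Rough CO2-equivalent of an SF6 loss, using the GWP cited in the text.
GWP_SF6 = 22_200          # kg CO2-eq per kg SF6
leak_kg = 0.5             # hypothetical annual leakage from one switchgear bay
co2_eq_tonnes = leak_kg * GWP_SF6 / 1000.0
print(f"{leak_kg} kg of SF6 is equivalent to about {co2_eq_tonnes:.1f} t of CO2")
```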
The need to study prototypes of new, innovative devices led to the test stand described in this article, based on a so-called disassemblable vacuum chamber.

Electrical breakdown in vacuum

By definition, a vacuum in the literal sense is a space completely devoid of matter. In technical terms, it is a state of highly rarefied gas. At gas pressures for which the mean free paths of electrons and molecules are greater than the contact gap, the development of electron avalanches, which initiate an electrical breakdown, is impossible. The mean free paths of electrons (Le) and molecules (Lcz) in a gas can be calculated from standard kinetic-theory relationships [3,7], in which both quantities are proportional to the absolute gas temperature T and inversely proportional to the gas pressure p. Taking air as a gas consisting mainly of nitrogen and oxygen, it has been calculated that, for a mean free path of 10 mm, the gas pressure should be 3.8 Pa for electrons and 0.67 Pa for molecules. In such a situation, the development of an electron avalanche is impossible because each electron reaches the anode unhindered. Therefore, the initiation of an electrical breakdown in a vacuum environment is possible only through physical phenomena other than electron avalanches. In fact, the initiation of an electrical breakdown in a vacuum is caused by the interaction of several physical phenomena. The basic condition for the development of a discharge in a vacuum is the presence of electric charge carriers and of molecules or vapours which, after ionisation, will cause the current to increase to an appropriate value. There are many hypotheses concerning the initiation and development of a breakdown in a vacuum environment, which can be divided into the following groups [9]:

• Hypotheses based on the exchange of charged particles. In these hypotheses, it is assumed that a random electron located in the inter-contact space, accelerated by the electric field, knocks positive ions and photons out of the anode, and these in turn knock subsequent electrons out of the cathode [6,7,13]. In this case, cumulative ionization is assumed to occur, and thus an electrical breakdown. The criterion for the development of the discharge in these hypotheses takes the form A·B + E·F ≥ 1, where A is the number of positive ions ejected from the anode by one electron, B is the number of electrons released from the cathode by one positive ion, E is the number of negative ions produced by one positive ion and F is the number of positive ions produced by one negative ion.

• Hypotheses in which the electrical breakdown is initiated via field emission of electrons. The hypotheses in this group assume the occurrence of micro-protrusions on the cathode surface and thus a local increase in the electric field strength [1,4,10,14]. This makes field emission of electrons possible, and hence the subsequent phenomena that may lead to an electric discharge. The hypotheses in this group are based on two mechanisms that decide the electrical breakdown: the anode mechanism, consisting of heating of the anode, and the cathode mechanism, consisting of heating of the cathode.

• Hypotheses in which the electrical breakdown is initiated by micro-particles. The hypotheses in this group assume that charged microparticles break away from the electrodes and then move under the action of electrostatic forces. As a result of this movement, their kinetic energy increases.
If the kinetic energy of a microparticle is high enough, then after a collision with the opposite electrode its material evaporates owing to the rise in temperature, or further lumps of material are emitted. These phenomena may lead to the development of an electrical discharge. The criterion for initiating an electrical breakdown according to this hypothesis was formulated by Cranberg as the relation Ep·U = Ck [2], where Ep is the average value of the macroscopic electric field at the point where the microparticle breaks off, U is the voltage at the terminals of the system, equal to the breakdown voltage, and Ck is a constant proportional to the critical energy density in the impact area of the microparticle, depending on the material and surface condition of the electrode.

• Hypothesis of a desorption mechanism of breakdown initiation. This hypothesis states that, at pressures between 10⁻⁵ and 10⁻² Pa, a layer of impurities and adsorbed gases is present on the walls of the vacuum interrupter and on the electrode surfaces. After a voltage of an appropriate value is applied to the poles of the chamber, desorption of neutral particles and ions occurs, caused by the action of the electric field and an increase in the surface temperature. With an appropriate number of gas molecules in the inter-electrode space, the discharge develops, and the desorption process is then repeated until the final electrical breakdown between the electrodes takes place [12].

Test stand

The basic element of the test stand is a discharge chamber equipped with a contact system in which the contacts are made of tungsten infiltrated with copper at a ratio of 70% W to 30% Cu. The innovation of the stand lies in the way it is constructed, namely in allowing free access to the interior of the chamber. Thanks to this, it is possible to exchange the contact pads, which opens up many possibilities for future research, including determining the influence of the contact material on the electrical strength of the contact gap and on the burning of the electric arc in the chamber. Additionally, the discharge chamber has been equipped with glass sight windows, thanks to which it will be possible to analyse precisely the processes taking place inside the chamber using, among other instruments, an ultrafast camera or an optical fiber spectrophotometer. The stand is also equipped with a contact-distance adjustment system, consisting of an integrated mobile contact unit with an actuator and a displacement sensor. The contact distance can be set by means of buttons located on the operator's panel or by means of a remote control, to ensure safe operation. The value of the set contact distance is displayed on an LCD display attached to the stand's structure, which has been designed and built as a mobile unit and ensures failure-free operation. A view of the test stand is shown in Fig. 4. In order to obtain a vacuum of a specific pressure inside the disassemblable chamber, a vacuum system is used, consisting of a set of vacuum pumps (a turbomolecular pump and a fore-vacuum pump) with a capacity of 90 l/s, a dedicated vacuum gauge and a manual valve enabling a specific pressure value to be maintained. The test stand also includes a dedicated platform on which the set of vacuum pumps can be placed in a stable way.
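The pumping system described above must hold the chamber in the pressure range where the mean-free-path condition from the breakdown section is met. The following back-of-envelope sketch uses only the figures quoted earlier (a 10 mm path at 3.8 Pa for electrons and 0.67 Pa for molecules) together with the L ∝ T/p scaling at fixed temperature; it is an illustration, not part of the original study.

```python
# Pressure needed so that the mean free path exceeds a given contact gap,
# assuming room temperature and the L*p products implied by the quoted values.
LP_ELECTRON = 10.0 * 3.8    # mm*Pa, from "10 mm at 3.8 Pa" for electrons
LP_MOLECULE = 10.0 * 0.67   # mm*Pa, from "10 mm at 0.67 Pa" for molecules

def max_pressure_for_gap(gap_mm):
    """Pressure below which the mean free path exceeds the contact gap."""
    return LP_ELECTRON / gap_mm, LP_MOLECULE / gap_mm

for gap in (2, 5, 10, 20):  # mm, spanning and exceeding the gaps used on the stand
    p_e, p_m = max_pressure_for_gap(gap)
    print(f"gap {gap:>2} mm: p < {p_e:.2f} Pa (electrons), p < {p_m:.3f} Pa (molecules)")
```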
The power supply kit for this test stand consists of a high-voltage transformer with a maximum output voltage of 50 kV and a nominal power rating of 2.5 kVA, in a paper-oil insulated resin housing, equipped with a two-section primary winding that allows the transformation ratio to be switched between two secondary voltage ranges. The transformer works with a dedicated control panel with a rated power of 6 kVA. Precise voltage measurement is enabled by a capacitive divider, designed and manufactured using low-loss polypropylene capacitors. The internal equipment of the control panel consists mainly of a motor-driven brush voltage regulator and a SIMATIC S7-1200 PLC with a dedicated measuring module, which is responsible for the control functions. Convenient and safe operation is ensured by an HMI operator panel and a set of indicator lamps and buttons. The power supply for the test stand was provided through a YHAKXS 1x120/50 power cable terminated with an angled connector head. A view of the test set is shown in Fig. 5, while the electrical diagram of the complete test system is shown in Fig. 6.

Results of electrical strength tests

Tests of the electrical strength of the inter-contact gap in the discharge chamber were performed for contact distances in the range of 2-5 mm and for pressures in the range of 4.0×10⁻⁴ to 4.4×10² Pa. Figure 7 shows the electrical strength of the gap, Ud, as a function of pressure p. It can be seen that, for pressures below 3×10⁻¹ Pa, the breakdown voltage in the discharge chamber maintained a constant value of approximately 17, 23, 31 and 33 kV for contact distances of 2, 3, 4 and 5 mm, respectively. The described relation defines a safe zone for the vacuum chamber, which guarantees failure-free operation of switching equipment using vacuum extinguishing chambers. When the pressure was increased above 3×10⁻¹ Pa, there was a rapid drop in the breakdown voltage, whose values became the same for each contact distance. From a pressure of 5×10¹ Pa, the minimum breakdown voltage of approximately 0.3 kV was reached. Figure 8 shows the breakdown voltage as a function of the contact distance for selected pressures. Interpreting these characteristics, it can be seen that at lower pressures inside the tested discharge chamber the breakdown voltage is influenced mainly by the contact distance. As the chamber is gradually aerated, the characteristics flatten, so that from pressures equal to 4.5×10⁻¹ Pa they are completely horizontal. This shows that, at these pressures, the breakdown voltage is not affected by the inter-contact distance.

Conclusions

The innovative test stand based on the so-called disassemblable vacuum chamber is an original stand designed and built for the purpose of conducting research on improving the operational parameters of electrical switching equipment. Its use creates an opportunity to examine, among other things, the influence of the composition of various gas mixtures in the discharge chamber, and of the electrode material and construction, on the discharge processes taking place in the inter-electrode space. Thanks to the glass sight windows, it is possible to observe the arc processes taking place in the chamber in detail. An ultra-fast camera and a fiber-optic spectrophotometer will be used for this research.
The electric strength tests of the inter-contact gap confirm the correct operation of the described test stand. The analysis of the obtained characteristics is consistent with the previously known laws and relationships governing the properties of vacuum insulation systems.
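Purely as an illustration of how the reported safe-zone data might be used (this is not part of the original study), the measured withstand voltages of roughly 17, 23, 31 and 33 kV for 2, 3, 4 and 5 mm gaps below about 3×10⁻¹ Pa can be wrapped in a small helper that flags whether a hypothetical operating point lies inside that zone; the safety margin below is an assumption.

```python
# Safe-zone check based on the values reported for the disassemblable chamber.
SAFE_PRESSURE_PA = 3e-1
U_D_KV = {2: 17.0, 3: 23.0, 4: 31.0, 5: 33.0}   # gap in mm -> withstand voltage in kV

def in_safe_zone(gap_mm, pressure_pa, working_kv, margin=1.5):
    """True if the working voltage (with margin) stays below the measured
    withstand voltage and the pressure stays in the flat, low-pressure region."""
    if gap_mm not in U_D_KV or pressure_pa >= SAFE_PRESSURE_PA:
        return False
    return working_kv * margin <= U_D_KV[gap_mm]

print(in_safe_zone(4, 1e-3, 17.5))   # True: 17.5 kV * 1.5 < 31 kV at 4 mm
print(in_safe_zone(4, 1.0, 17.5))    # False: pressure above the safe zone
```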
2020-10-15T20:37:00.688Z
2020-09-30T00:00:00.000
{ "year": 2020, "sha1": "a2cd3dd3fb3c2f1ed2791c8cce489a43f9d4ba00", "oa_license": "CCBYSA", "oa_url": "https://ph.pollub.pl/index.php/iapgos/article/download/1922/2000", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "a2cd3dd3fb3c2f1ed2791c8cce489a43f9d4ba00", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Materials Science" ] }
235486836
pes2o/s2orc
v3-fos-license
The New Is Old: Novel Germination Strategy Evolved From Standing Genetic Variation in Weedy Rice Feralization of crop plants has aroused an increasing interest in recent years, not only for the reduced yield and quality of crop production caused by feral plants but also for the rapid evolution of novel traits that facilitate the evolution and persistence of weedy forms. Weedy rice (Oryza sativa f. spontanea) is a conspecific weed of cultivated rice, with separate and independent origins. The weedy rice distributed in eastern and northeastern China did not diverge from their cultivated ancestors by reverting to the pre-domestication trait of seed dormancy during feralization. Instead, they developed a temperature-sensing mechanism to control the timing of seed germination. Subsequent divergence in the minimum critical temperature for germination has been detected between northeastern and eastern populations. An integrative analysis was conducted using combinations of phenotypic, genomic and transcriptomic data to investigate the genetic mechanism underlying local adaptation and feralization. A dozen genes were identified, which showed extreme allele frequency differences between eastern and northeastern populations, and high correlations between allele-specific gene expression and feral phenotypes. Trancing the origin of potential adaptive alleles based on genomic sequences revealed the presence of most selected alleles in wild and cultivated rice genomes, indicating that weedy rice drew upon pre-existing, “conditionally neutral” alleles to respond to the feral selection regimes. The cryptic phenotype was exposed by activating formerly silent alleles to facilitate the transition from cultivation to wild existence, promoting the evolution and persistence of weedy forms. INTRODUCTION Feralization of crop plants has aroused an increasing interest in recent years, not only for the reduced yield and quality of crop production caused by feral plants but also for the rapid evolution of novel traits that facilitate the persistence of feral populations in natural or semi-natural habitats (Gressel, 2005;Vigueira et al., 2013). Feral plants often partially or totally lose traits associated with domestication and re-acquire wild-like traits, such as freely-dispersing seed and strong seed dormancy. This process is, however, not a mere reversal of domestication (Gering et al., 2019). The wild-like traits may reemerge through different genetic mechanisms in the polyphyletic feral plants that have independently evolved from different populations or varieties of the same cultivated progenitor in ecologically similar yet geographically distant environments (Wu et al., 2021). The feral plants and their cultivated relatives thus provide appealing systems for studying convergent or parallel evolution. Understanding the genetic basis underlying the key traits associated with weed success will not only gain insight into the molecular mechanisms underpinning feralization, but also provide clues to devising more effective weed control strategies and breeding better crops. Weedy rice (Oryza sativa f. spontanea), also referred as red rice, is a conspecific weed of cultivated rice (O. sativa), which has caused significant reduction in rice grain yield and quality worldwide (Delouche et al., 2007). Weedy rice can originate either directly from cultivated rice via de-domestication (endoferality) or by hybridization between cultivated rice and its wild relatives (exoferality) (Ellstrand et al., 2010). 
Independent and recurrent origins of weedy rice have been reported in many rice-planting regions around the world, though some weed-adaptive traits, such as seed shattering, strong seed dormancy and pericarp pigmentation, are shared by most independently evolved weedy rice strains (Cao et al., 2006; Huang et al., 2017; Vigueira et al., 2019; Hoyos et al., 2020). Evolutionary genomic studies of weedy rice have revealed a large number of gene loci associated with weed adaptation. However, little overlap occurred between the weed-adaptive loci and those related to domestication, or between the loci that have contributed to rice feralization in different regions (Qiu et al., 2017, 2020). The re-occurrence of wild-type traits in weedy rice is thus not based on simple back mutations, at domestication-related loci, from the allele associated with the domesticated trait to the wild-type allele of the wild progenitor, and the shared weedy traits of distinct weedy rice strains most likely arise through different genetic mechanisms (Qi et al., 2015; Qiu et al., 2020). The weedy rice strains distributed in eastern and northeastern China are considered to have an endoferal origin, since they exist outside the range of the wild progenitor of rice. Genomic sequencing and genetic diversity analyses have shown that the weedy rice populations from northeastern China are strongly associated with japonica rice cultivars and those from eastern China with indica rice cultivars, suggesting that independent feralization events occurred with distinct cultivated progenitors in different regions (Cao et al., 2006; Qiu et al., 2017). In addition to genetic divergence, morphological variations in hull color, awn length and color, grain length/width ratio, etc., have also been reported within and among populations. Nevertheless, a notable feature shared by the weedy rice from eastern and northeastern China is that their seeds do not have primary dormancy. Instead, these plants have developed a temperature-sensing mechanism to control the timing of seed germination during feralization (Xia et al., 2011). This trait is present neither in cultivated nor in wild rice; thus, its evolution is essentially not a case of de-domestication. Meanwhile, divergence in the minimum critical temperature for germination has been detected between the populations from the northeastern region and those from the eastern region, suggesting rapid adaptive evolution of germination behavior in response to local climatic conditions (Xia et al., 2011). It has been well recognized that, during the domestication process, crops usually experience genetic bottlenecks (Doebley et al., 2006; Burke et al., 2007). The small initial population size and the intense selection pressure for agronomic traits can lead to a dramatic reduction in genetic diversity (Eyre-Walker et al., 1998; Allaby et al., 2017). It is now known that, similar to domestication, severe genetic bottlenecks also occur during the de-domestication process, as shown in weedy rice (Qiu et al., 2017). The decrease in genetic diversity subsequent to bottleneck events, and the heterozygote deficiency associated with the predominantly selfing nature of weedy rice, can extensively reduce its fitness and evolutionary potential. Yet, rather than suffering from the detrimental effects of low genetic variation, weedy rice has rapidly evolved a novel germination strategy in the past few decades to adapt to temperature changes with latitude.
It seems paradoxical: How does weedy rice overcome the hazards of genetic homogeneity that might have been caused by the bottleneck event, selfing and genetically homogeneous cultivated progenitors, to persist and evolve rapidly at different geographical regions under contrasting feral selection regimes? To understand the genetic basis of adaptive divergence in weedy rice germination, whole-genome resequencing of weedy rice from different populations was first conducted in this study to detect the region-specific SNPs. Then, the SNP data was integrated with the gene expression profiles of weedy rice at different germination stages to identify genes that were differentially expressed and possessed distinct alleles between the samples from different regions. We hypothesize that the differentially expressed genes harboring region-specific SNPs are more likely to be associated with adaptive diversification in germination. The allele frequencies of potential beneficial SNPs were further validated by targeted SNP genotyping. Our results suggested that the feral selection regimes might have driven the fixation of pre-existing, "conditionally neutral" alleles in weedy rice. Weedy Rice Sampling A total of 20 weedy rice populations located in Heilongjiang (HLJ), Jilin (JL), Liaoning (LN), and Jiangsu (JS) provinces, respectively, were investigated (Figure 1). The seeds of 30 weedy rice individuals and 3-5 co-existing cultivated rice individuals were collected from each population, and put in separate paper bags. The bags were stored in sorting boxes with silica gel at room temperature. Serial fixed-point investigation and sampling were conducted in 2013, 2015, and 2017, respectively. The seeds collected in 2013 and 2015 were used respectively in the common garden experiments. Seed Germination Experiment To validate the divergence in germination behavior under different temperature regimes, the seeds of two weedy rice populations from each province were used for germination test Table 1). The seeds of each population were divided into nine groups, with 50 seeds in each group. Seeds of each group were sown in a petri dish with 10 mL pre-cooled distilled water and two filter papers. The dishes were then placed in dark growth chambers at 9, 12, and 15 • C, respectively, with three dishes for each population under each condition. Seeds were considered germinated when radicle protrusion was visible. The number of germinated seeds were counted daily for a period of 30 days. Germination tests were performed three times at 6, 12, and 24 months after collection, respectively, to assess the potential effects of after-ripening period on seed germination. To evaluate the stability of the adaptive divergence in germination behavior, the seeds collected from common garden experiments were also subjected to germination tests by using the method mentioned above. Whole-Genome Resequencing (WGS) and SNP Calling Based on phenotypic traits and germination characters of the weedy rice seeds collected, individuals from six populations (HLJ-2, JL-1, JL-2, LN-1, JS-1, JS-2) were used for whole genome resequencing (Supplementary Data 1). Twenty individuals were chosen randomly from each of the populations. Genomic DNAs were isolated from the fresh leaf tissue of seedlings using DNAsecure Plant Kit (TIANGEN, Beijing, China). DNA quality was checked by 1.2% agarose gel and NanoDrop 2000c spectrophotometer (Thermo Scientific). 
Paired-end libraries were constructed and sequenced on an Illumina HiSeq2000 system according to the manufacturer's instructions by Novogene (Beijing, China).

Identification of SNPs Potentially Involved in Local Adaptation

Genetic differentiation between two populations was quantified as the absolute allele frequency difference (AFD) (Berner, 2019) across genome-wide SNPs detected in weedy rice. The 32 million SNP dataset of 3,024 cultivated rice accessions, based on whole-genome resequencing, was downloaded from the Rice SNP-Seek Database (http://snp-seek.irri.org/) (Alexandrov et al., 2015). A SNP subset of 121 japonica and 330 indica rice varieties originating from China, at shared SNP loci, was extracted from the total dataset for comparison. To reduce the bias of allele frequency estimation due to missing data, only SNP sites having sequencing reads from at least five individuals in each population were retained for AFD calculation. AFDs between the weedy rice from northeastern China and the japonica rice varieties were calculated first. Based on the distribution of AFD values, the SNPs from the top 5% (AFD = 0.490) of the genome-wide AFD distribution in each population comparison were considered significantly divergent SNPs (Laurentino et al., 2020). Then, AFDs between the weedy rice populations from northeastern China and those from eastern China were calculated. To ensure comparability, the same threshold value was used to filter out SNPs showing no significant divergence between the weedy rice from the two regions. Finally, the SNPs potentially underlying adaptation were predicted by identifying the alleles with elevated frequency in the weedy rice populations from northeastern China compared with those from eastern China and with japonica rice.

Gene Expression Analysis

The seeds collected from populations HLJ-2, LN-1, and JS-2 were germinated at 6 months after collection in a dark incubator at 12°C. Seeds were collected at 4 days after imbibition, on the day just before radicle emergence (about 9 days after imbibition), and at the time of radicle protrusion, respectively. Because the seeds from JS-2 did not germinate at all until 30 days after sowing, we placed the dish in a dark growth chamber at 28°C to collect the germinated seeds. Total RNAs were extracted using the method described by Li and Trick (2005) with some modifications. DNase treatment was performed with RNase-free DNase (TIANGEN, Beijing, China). Total RNAs were purified and concentrated with the RNeasy MinElute Cleanup Kit (QIAGEN, Hilden, Germany). RNA quality was checked with a NanoDrop 2000c spectrophotometer and by gel electrophoresis. RNA integrity was further verified with an Agilent BioAnalyzer 2100 (Agilent Technologies). Nine libraries were constructed using the Ion Total RNA-Seq kit v2 (Life Technologies) according to the manufacturer's instructions and sequenced on an Ion Proton platform (Life Technologies) at BGI (Shenzhen, China). After removal of adapter sequences, reads shorter than 30 bases were discarded. The average quality of the 20 bases at the 3′ end was calculated repeatedly, and these bases were trimmed until the average quality exceeded 9. The resulting clean reads were mapped against the japonica rice reference genome (Nipponbare, IRGSP-1.0) with TMAP v3.4.1 software (Life Technologies), allowing two mismatches. Gene expression levels were calculated by the RPKM method. Differentially expressed genes (DEGs) between samples were identified with the DEGseq package (Wang et al., 2010) in R.
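Before the differential-expression criteria are described, the RPKM normalization used above can be illustrated with a minimal sketch; the read count, gene length and library size in the example are hypothetical.

```python
# Minimal sketch of RPKM (Reads Per Kilobase of transcript per Million mapped reads).
def rpkm(gene_reads, gene_length_bp, total_mapped_reads):
    """Normalize a raw read count by gene length (kb) and library size (millions)."""
    return gene_reads * 1e9 / (gene_length_bp * total_mapped_reads)

# e.g. 500 reads on a 2.4 kb gene in a library of ~21.2 million uniquely mapped reads
print(round(rpkm(500, 2400, 21_200_000), 2))   # -> 9.83
```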
The false discovery rate (FDR) control method (Benjamini and Yekutieli, 2001) was adopted to correct p-values for multiple hypothesis testing. A gene was considered to be differentially expressed between two samples if it had a |log2Ratio| ≥ 1 and an FDR < 0.001. GO enrichment analyses for DEGs and summaries of the enriched GO terms were conducted with the topGO package in R Bioconductor and the REVIGO web server (Supek et al., 2011), respectively.

Validation of SNP Allele Frequency and the Expression Pattern of Candidate Genes

Seeds from populations HLJ-2, LN-1, JS-1, and JS-2 were used for targeted SNP genotyping to validate the SNP allele frequencies determined by WGS. Two other populations, HLJ-1 and LN-2, which were not included in the WGS, were also selected for SNP genotyping to evaluate the spread of potential adaptive alleles in the northeastern populations. Thirty weedy rice individuals and one co-existing cultivated rice individual were used for each population. Genomic DNAs were extracted using the same method as described above. Genotyping was performed on the Sequenom MassArray platform at BGI (Shenzhen, China). The information on the 10 potentially adaptive SNPs genotyped by MassArray is listed in the Supplementary Material. Genome sequence data of wild rice (Xia et al., 2019) were also used for tracing the origin of potential beneficial SNPs. The expression dynamics of five genes possessing distinct region-specific dominant alleles were validated by qRT-PCR. Seeds were germinated in a dark chamber at 9°C and collected at 0.5, 1, 2, 4, 6, 8, 10, 12, 14, 16, 18, and 20 d after sowing, respectively. Total RNAs were extracted using the same method as described above. First-strand cDNA was synthesized with PrimeScript RT Master Mix Perfect Real Time (TaKaRa, Dalian, China). Gene-specific primer sequences used for qRT-PCR are listed in Supplementary Table 3. qRT-PCR was performed on the Roche LightCycler 96 system using SYBR Premix Ex Taq II (Tli RNaseH Plus) (TaKaRa, Dalian, China). UBQ5 was selected as the reference gene based on the RNA-Seq profiles. Each qRT-PCR was performed on three biological replicates with three technical replicates each. Relative expression levels were calculated using the 2^−ΔΔCT method. Seeds of the weedy rice from Jiangsu at 12 d after imbibition were used as the calibrator for relative gene expression.

Variations in Seed Germination Under Different Temperatures

Significant differences in seed germination were observed not only between the seeds of populations from different regions, but also between seeds from the same population with different storage times. In the first seed germination experiment (Figure 2A), only a few of the seeds collected from the LN-1 population germinated at 9°C, taking more than 20 days. The seeds from northeastern China showed nearly 100% germination at 12 and 15°C, while those from eastern China did not germinate at 12°C and showed a lower germination ratio, ranging from 10.0 to 30.0%, at 15°C. Compared with the results of the first experiment, all populations from northeastern China showed very high germination (>90%) at 9°C in the second germination experiment (Figure 2B). The seeds from the eastern China populations still showed no germination at 9°C, but showed moderate germination ratios ranging from 56.0 to 64.0% at 12°C and almost complete germination at 15°C in the second experiment. Similar patterns of germination were found in the third seed germination experiment (Figure 2C).
Along with the variations in germination ratio, changes in germination time were also observed between populations at different temperatures. In general, seeds germinated more slowly at lower temperatures, and the seeds from northeastern populations germinated faster than those from eastern populations. After 6 months of storage, the germination time tended to decrease in all populations at all temperatures in the second germination experiment, possibly suggesting a small degree of after-ripening in the weedy rice seeds, even though they can germinate immediately after harvest under favorable conditions. There was no significant difference between the results of the second and third germination experiments. The stability of the divergence in germination behavior between northeastern and eastern weedy rice populations was verified by the results of germination tests of the seeds collected from the common garden experiments (Supplementary Figure 1).

FIGURE 2 | Germination ratio (left) and germination time (right) of weedy rice seeds from 8 populations at 9, 12, and 15°C, respectively. Vertical bars indicate the standard error of the mean. Germination tests were conducted at 6 (A), 12 (B), and 24 (C) months after collection, respectively.

Region-Specific SNPs Potentially Underlying Adaptation

A total of 4,489,300 high-quality SNPs were identified in all weedy rice samples by WGS. Among them, 4,082,695 SNPs were retained after filtering and used for calculating the AFD values among different populations. A relatively low level of differentiation was found between the northeastern weedy rice populations and the japonica rice varieties (mean AFD = 0.0701). However, numerous SNP sites stood out from this background level of differentiation, with separate alleles reaching fixation in the weedy rice populations and the japonica rice varieties, respectively. The SNPs from the top 5% of the AFD distribution (AFD > 0.490) were considered outliers and potential targets of adaptation (Supplementary Figure 2A). Greater differentiation was found between the weedy rice populations from the northeastern region and those from the eastern region, with a mean AFD value of 0.462 across all genome-wide SNP sites (Supplementary Figure 2B). The same AFD threshold of 0.490 was used to identify high-differentiation SNPs between the weedy rice populations from northeastern and eastern China. After checking the direction of allele frequency change, a total of 66,327 SNPs were found to show significantly higher allele frequencies in the weedy rice populations from northeastern China than in those from eastern China and in japonica rice varieties. Of these, 4,680 were annotated as non-synonymous variants or located in the UTR regions of 2,173 genes.

(Figure legend fragment: The red, blue, yellow, green, and orange dots represent the alleles located in Os03g0122600, Os10g0136150, Os10g0155800, Os11g0191300, and Os08g0227200, respectively; their expression patterns were validated by qRT-PCR.)

Region-Specific DEGs During Germination

A total of 235,937,605 single-end clean reads were obtained from the nine libraries. Around 21.2 million reads per library were uniquely mapped to the Nipponbare reference genome (IRGSP-1.0). In total, expression of 29,204 genes was detected in at least one sample, and the average number of expressed genes was 20,682 per sample (Supplementary Table 4). Compared with the JS samples, genes that were commonly up- or down-regulated in both the HLJ and LN samples at the same germination stage were identified as region-specific DEGs.
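The AFD outlier screen reported above (per-SNP absolute allele frequency difference, a top-5% cutoff, and a check that the allele is the one elevated in the northeastern populations) can be sketched as follows; the allele frequencies in the example are simulated, not the real data.

```python
# Minimal sketch of the AFD outlier screen described in the text.
import numpy as np

rng = np.random.default_rng(1)
n_snps = 500_000
freq_ne = rng.beta(0.4, 0.4, n_snps)    # hypothetical frequencies, northeastern weedy rice
freq_e = rng.beta(0.4, 0.4, n_snps)     # hypothetical frequencies, eastern weedy rice

afd = np.abs(freq_ne - freq_e)           # absolute allele frequency difference per SNP
cutoff = np.quantile(afd, 0.95)          # top 5% of the genome-wide AFD distribution
outliers = afd > cutoff
# keep only SNPs whose allele frequency is elevated in the northeastern group
candidates = outliers & (freq_ne > freq_e)
print(f"cutoff = {cutoff:.3f}; outliers = {outliers.sum()}; "
      f"northeastern-elevated candidates = {candidates.sum()}")
```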
A total of 6,459, 4,697, and 7,512 DEGs were identified at the three germination stages, respectively. Most genes were up- or down-regulated at only a single stage (Supplementary Figure 3). There were 35 and 44 genes that were constantly up- or down-regulated, respectively, in northeastern weedy rice across all three germination stages. GO enrichment analyses of the DEGs revealed functional distinctions between the genes differentially expressed at the various stages of germination (p-value < 0.05, Supplementary Data 2, Supplementary Figures 4-9).

DEGs Harboring Potential Beneficial SNPs and Genotype Frequencies

Integrated analysis of the WGS and RNA-Seq data revealed 314 genes that were up-regulated in the northeastern weedy rice populations before seed germination, each containing at least one highly differentiated SNP in an exon region. A total of 768 SNPs were located in these 314 up-regulated genes. The predominant alleles at these SNP loci in the northeastern weedy rice populations were distinct from those of the eastern populations and the japonica rice varieties, but most of them were present in cultivated japonica rice at medium or low frequencies (Figure 3 and Figure 4, left). Some of the northeastern weedy rice predominant alleles could even be traced back to wild rice at various frequencies, suggesting that the potential adaptive alleles may already exist in wild rice or were produced during cultivated rice domestication. A few of the SNP alleles predominant in the northeastern weedy rice were also present in the eastern weedy rice populations and in indica rice, mostly at lower frequencies. The allele frequencies and genotype distributions of the SNPs contained in the 10 most significantly differentially expressed genes were validated by targeted SNP genotyping. The results showed no significant differences from the allele frequencies estimated by WGS (paired t-test, p-value > 0.05). Moreover, the results demonstrated that the northeastern predominant alleles identified by WGS were also predominant in the northeastern weedy rice populations that had not been subjected to WGS (Figure 4, left).

Dynamic Expression of Candidate Adaptive Genes

The expression patterns of the 10 genes subjected to the targeted SNP genotyping assay were validated by qRT-PCR. Five of them showed consistent up-regulation at the early stage of germination in the samples from the northeastern region, which was clearly distinct from the pattern in the eastern weedy rice and the co-existing cultivars (Figure 4, right). The expression peaks of the weedy rice from LN occurred a little earlier than those of the HLJ weedy rice, which was consistent with the slight differences in germination time between LN and HLJ weedy rice at 9°C. At later germination stages, the expression levels of all genes fluctuated and remained low, either slightly higher or temporarily lower than those of the JS weedy rice and the co-occurring cultivated rice. Of these genes, Os03g0122600 encodes a putative MADS-box-containing transcription factor, homologous to Arabidopsis SOC1. Os10g0155800, Os11g0191300, and Os08g0227200 encode a leucine-rich repeat receptor-like protein kinase, a DNA topoisomerase 2-binding protein and a BTB/POZ and MATH domain-containing protein, respectively, while Os10g0136150 encodes an uncharacterized protein (Figures 4A-E). The SNP of Os03g0122600 was located in the 3′-untranslated region. In contrast, the other genes each contained a non-synonymous SNP in the coding region.
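The relative expression levels behind Figure 4 were obtained with the 2^−ΔΔCT method, with UBQ5 as the reference gene and the JS sample at 12 d after imbibition as the calibrator. A minimal sketch of that calculation follows; the Ct values used are hypothetical.

```python
# Sketch of the 2^-ddCt relative-expression calculation described in the methods.
def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    d_ct_sample = ct_target - ct_ref            # normalize the sample to UBQ5
    d_ct_calibrator = ct_target_cal - ct_ref_cal  # normalize the calibrator to UBQ5
    dd_ct = d_ct_sample - d_ct_calibrator
    return 2 ** (-dd_ct)

# e.g. a candidate gene in an HLJ sample early in imbibition vs the JS calibrator
print(relative_expression(ct_target=24.1, ct_ref=20.3,
                          ct_target_cal=27.0, ct_ref_cal=20.5))   # ~6.5-fold
```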
DISCUSSION Weedy rice derived from cultivated progenitors developed an array of weed-adaptive traits to enhance their survival and persistence in the semi-natural habitats in agricultural landscapes. Convergent evolution has been described for some weedy traits, such as seed shattering and pericarp pigmentation, in populations with separate origins. However, unlike the weedy rice originated through exoferality in other regions of China, the weedy rice plants distributed in eastern and northeastern China did not diverge from their cultivated ancestors by reverting to the pre-domestication trait of seed dormancy during feralization. Instead, they developed a strategy to keep seeds from random germination under unfavorable conditions by sensing the appropriate ambient temperature cues. Subsequent population divergences have also been detected in the critical temperature for germination that were correlated with local habitat temperature at latitudinal gradient. The weedy rice seeds from higher latitudes could germinate at a lower temperature, whereas the cultivated rice seeds from corresponding latitudes and the weedy rice from JS did not show such a pattern. The region-specific differentiation in critical temperature ensured the feral plants to germinate at right time at different regions, which not only allowed the coordination of seed germination and plant establishment with the environment but also helped weedy rice plants to escape weed management tactics and outcompete the co-existing cultivated rice. Integrated analysis of SNP allele frequency and expression data revealed genes potentially underpinning local adaptation. These genes possessed region-specific "weedy" alleles with high frequencies in northeastern populations, and were significantly up-regulated prior to germination at low temperature. Among them, OsMADS50 (Os03g0122600) encoded a MADS-boxcontaining transcription factor, homologous to Arabidopsis SOC1. Previous studies have shown its function as a member of the molecular regulatory network of flowering by photoperiod and temperature, regulating long day (LD)-dependent flowering in rice (Ryu et al., 2009;Song and Luan, 2012). This function was similar to that of Arabidopsis SOC1, which mediated the crosstalk between cold response and flowering (Seo et al., 2009). Shao et al. (2019) showed that OsMADS50 was also involved in the regulation of crown root development. However, up to now little is known about its functions in seed germination. As key regulators of plant development, MADS-box transcription factors seemed to be involved in almost every development process of plants, including seed development and germination (Smaczniak et al., 2012). It has been revealed that Arabidopsis AGL21 acted as environmental surveillance of seed germination by regulating ABI5 expression (Yu et al., 2017), while ANR1 acted as a negative modulator of seed germination by activating ABI3 expression (Lin et al., 2020). The FLOWERING LOCUS C (FLC, also known as AGL25), a key regulator of flowering, has also been reported to be involved in temperature-dependent seed germination by influencing the ABA catabolic and GA biosynthetic pathways (Chiang et al., 2009). The results of this study suggested that OsMADS50 might play a vital role in integrating environmental information to regulate seed germination, and underpin the rapid local adaptation of weedy rice in germination behavior, although the mechanisms of function and regulation remained to be characterized. 
The potential role of OsMADS50 as a thermosensor is supported by other studies. Papaefthimiou et al. (2012) showed that the SOC1-like homolog in barley, HvSOC1-like1, was induced in two winter barley cultivars after vernalization treatment, and they predicted that the presence of HvSOC1-like1 transcripts at the later stages of seed development might imply a role in dormancy breaking and germination. Voogd et al. (2015) also showed a role of kiwifruit SOC1-like genes in dormancy break, while Trainin et al. (2013) demonstrated that ParSOC1 was linked with chilling requirements for dormancy break in apricot. Given that the expression of ParSOC1 was sensitive to the environment with respect to the daily cycle in apricot, Trainin et al. (2013) also suggested that ParSOC1 might be part of a pathway that modulates circadian rhythms in response to environmental cues and function as a circadian clock gene involved in dormancy break. The functions of the other genes harboring potentially adaptive "weedy" alleles have not yet been well characterized, but all four genes subjected to genotyping and validation by qRT-PCR were predicted to be involved in responses to abiotic stress based on GO annotations. Os08g0227200 encodes a MATH-BTB domain-containing protein that could act as a substrate-specific adaptor binding the E3 ubiquitin ligase component Cullin-3 to target proteins for ubiquitination (Bauer et al., 2019). Juranić and Dresselhaus (2014) reported an expanded and highly divergent group of MATH-BTB proteins in different grass species. Functional analyses of the core-clade plant MATH-BTB proteins revealed their interaction with transcription factors involved in plant stress tolerance (Weber and Hellmann, 2009; Lechner et al., 2011). Os10g0155800 encodes a leucine-rich repeat receptor-like protein kinase, predicted to be involved in stress-related signal transduction pathways (Hossain et al., 2016). Os11g0191300 is a homolog of Arabidopsis MEI1, which has been reported to participate in DNA repair events (Mathilde et al., 2003). Os10g0136150 encodes an uncharacterized F-box domain-containing protein. Although the specific roles of these genes remain to be characterized experimentally, the extreme allele frequency differences they exhibited between eastern and northeastern populations, as well as the unusual correlation between allele frequencies, allele-specific gene expression and feral phenotypes, suggest a polygenic basis of adaptation in seed germination behavior and indicate that weedy rice underwent rapid evolutionary change at these loci. Tracing the origin of the potential adaptive alleles based on genomic sequences revealed the presence of most selected alleles in both wild and cultivated rice genomes, with a few alleles (such as the "weedy" allele of OsMADS50) being present in cultivated but not in wild rice genomes, indicating that weedy rice drew upon the cryptic genetic variation accumulated within wild and cultivated rice genomes to respond to environmental and anthropogenic pressures. The results also suggest a crucial role of standing genetic variation in facilitating the rapid adaptation of weedy rice to feral environments. It is widely recognized that standing genetic variation can lead to faster adaptation than new mutations when the environment changes, because adaptive alleles present as standing variation are immediately available at higher frequencies, which may result in shorter fixation times (Barrett and Schluter, 2008).
The potential adaptive alleles identified in this study were silent or cryptic in wild and cultivated rice genomes, showing no obvious phenotypic effects and retained at comparatively low frequencies, because wild rice modulates seed germination through dormancy and cultivated rice seeds are usually germinated under human-controlled conditions. These hidden alleles were released in weedy rice in response to the feral selection regimes, promoting rapid evolution and population divergence in germination behavior. These alleles can thus be considered "conditionally neutral", in that they were favored in one environment but neutral in others (Anderson et al., 2013). The conditional neutrality of the potential adaptive alleles may be reflected not only in the environment-dependent nature of their activation but also in the fact that their overt phenotypic expression depends on the developmental context (Mee and Yeaman, 2019). Given that multiple potentially adaptive genes were detected with extreme levels of allele frequency divergence among populations, and that the predominant genotype combinations found in northeastern weedy rice populations were absent from the JS weedy rice and from cultivated rice, we postulate that the rapid evolution of the novel germination strategy in weedy rice was probably achieved via an increased covariance of a complementary set of loci with different allelic fitness effects rather than via an allele frequency shift at a single locus (Le Corre and Kremer, 2012). Crop domestication is a complex evolutionary process driven by strong artificial selection. Most domesticated traits are deleterious in natural environments but meet human demands. During domestication, plants mostly experienced severe genetic bottlenecks, leading to a dramatic reduction in genetic diversity. Thus, it was usually assumed that domesticated organisms were incapable of rapid evolution, owing to their genetic homogeneity and poor adaptive potential in environments outside domesticated settings. The feralization of crop species seems to be changing our understanding of the capacity of domesticated plants to evolve in the face of changing environments. Despite undergoing two successive bottleneck events, associated respectively with domestication and de-domestication, weedy rice has rapidly evolved a novel strategy to precisely regulate seed germination via thermal sensing in response to regional differences in environmental temperature. This result indicates that feralization is not just a process of atavism and loss of domestication-related traits, but also includes the appearance of new traits that may facilitate ferality. Other studies have likewise shown that the re-acquisition of wild-like traits during feralization mostly does not occur through changes at the loci related to domestication and that crop ferality is underpinned by novel and independent genetic mechanisms (Thurber et al., 2013; Qi et al., 2015). Together, these findings support the view that domestication is not a dead end (Gering et al., 2019). The altered selection regimes imposed on feral populations can release cryptic beneficial alleles pre-existing in the cultivated progenitor populations, leading to shifts in allele frequency from values observed as standing neutral variation in the ancestral population to the very high frequencies of the predominant alleles in feral populations, and generating previously unobserved phenotypes.
Genetic drift seemed to have a limited effect on the dynamics of allelic frequency in this study, since that the potential weed-adaptive alleles approached fixation in northeastern weedy rice populations but not in eastern populations. Weedy rice distributed in different areas were subjected to selection pressures not only from local environmental stressors (such as regional ambient temperatures), but also from the varied influence of regional agricultural practices for rice planting, for them coexisting in close proximity with cultivated rice in crop fields instead of returning to truly wild habitats. Dual selection pressures placed on northeastern and eastern weedy rice populations have driven rapid evolution and population divergence in germination behavior. CONCLUSION Feral species have attracted the attention of researchers since Darwin (Mabry et al., 2021). Rapid accumulation of complete genome sequences in the last decades have aroused great concern about the evolutionary processes and mechanisms underlying feralization (Wu et al., 2021). Routes leading to the transition of a population from domestic to feral are diverse (Qiu et al., 2020). While many studies have shown phenotypic changes in feral plants similar to their wild ancestors, it should be noted that the evolution of ferality is not necessarily based on returning to ancestral states. The altered selection regimes associated with the transition from cultivation to wild existence can expose the cryptic phenotypes by activating formerly silent alleles, facilitating swift responses to the feral selection regimes and promoting the evolution and persistence of weedy forms. DATA AVAILABILITY STATEMENT The original contributions presented in the study are publicly available. This data can be found here: NCBI Sequence Read Archive (SRA) (http://www.ncbi.nlm.nih.gov/sra) under the BioProject ID PRJNA723638. AUTHOR CONTRIBUTIONS JY, BL, and LL designed the research. CZ, YF, MW, JJ, and GL sampled materials. CZ performed laboratory work, analyzed sequence data, and drafted the manuscript. YW, WZ, and ZS advised on design and discussed the results. All co-authors read and edited the manuscript. All authors contributed to the article and approved the submitted version.
New Bismuth Sodium Titanate Based Ceramics and Their Applications

Ferroelectric materials are widely investigated due to their excellent properties and versatile applications. At present, the dominant materials are lead-containing materials, such as Pb(Zr,Ti)O3 solid solutions. However, the use of lead gives rise to environmental concerns, which is the driving force for the development of alternative lead-free ferroelectric materials. (Bi0.5Na0.5)TiO3-based ceramics are considered to be one of the most promising lead-free materials to replace lead-containing ferroelectric ceramics due to their excellent ferroelectric properties, relaxation characteristics, and high Curie point. After decades of effort, great progress has been made in the phase structure characterization and property improvement of BNT-based ceramics. However, most studies of the BNT system focus on its piezoelectric properties and its application in piezoelectric sensors and strain actuators; little attention is paid to its ferroelectric properties and related applications. In this chapter, new BNT-based ceramics obtained via composition modification are presented, with special focus on the ferroelectric properties, phase transition behaviors under external fields, and related applications such as energy storage, pulsed power supply, and pyroelectric detection.

Introduction
(Bi0.5Na0.5)TiO3 (BNT) was first reported by Smolenskii et al. in 1960 [1]. BNT ceramic is a kind of ABO3-type ferroelectric in which the A-site is occupied by Na+ and Bi3+ complex ions. The A-site ions of BNT ceramics are located at the eight corner positions of the unit cell, and the B-site ions are at the body center of the octahedral structure [2]. Well-sintered BNT ceramics have been obtained by the hot-pressing sintering method with d33 of 94-98 pC/N [3]. BNT ceramics exhibit a high Curie temperature (~320°C) and a high polarization of 38 μC/cm2, and are considered to be one of the most promising environment-friendly ceramic systems to replace lead-based ceramics [4]. Pure BNT ceramics, however, exhibit problems such as high conductivity and a large coercive field, which cause difficulties in the poling process [4] and seriously hinder practical application. Studies show that the comprehensive properties of the BNT system can be significantly improved by doping or by incorporating other components.

New BNT-based ceramics for energy storage applications
BNT-based materials possess a superior potential for energy storage due to their high saturation polarization, which originates from hybridization between the Bi 6p and O 2p orbitals. However, pure BNT materials at room temperature have a ferroelectric perovskite structure with the polar R3c space group, usually exhibiting a saturated polarization loop with high remnant polarization, which is very unfavorable for good energy storage performance [19]. Fortunately, BNT materials can show antiferroelectric-like behavior at around 200-320°C, which opens a door to the energy storage application of BNT-based materials; 200°C is identified as the depolarization temperature (Td) of the BNT materials, which corresponds to a peak in the temperature-dependent dielectric loss curve. The structure in this temperature range is still under debate. Zvirgzds et al. [20] proposed a rhombohedral (R3c)-tetragonal (non-polar P4bm) phase transition over a broad temperature range (255-400°C). Moreover, Schmitt et al.
[21] suggested that the phase transformation from the non-polar P4bm phase to the polar R3c phase under an applied electric field accounted for the antiferroelectric-like characteristic, but this could not reasonably explain the large temperature hysteresis of different physical properties associated with the phase transition between 200 and 320°C.

BNT-based piezoelectric ceramics: d33 (pC/N), Ref.
(Bi1/2Na1/2)TiO3: 94-98 [3]
(Bi1/2K1/2)TiO3: 69 [10]
(1-x)(Bi1/2Na1/2)TiO3-x(Bi1/2K1/2)TiO3: 140-192 [11]

Dorcet et al. [22] revealed a modulated phase at 200-300°C through in-situ transmission electron microscopy (TEM) characterization; it was formed of Pnma orthorhombic sheets that are locally analogous to an antiferroelectric phase, and these sheets are twin boundaries between the R3c ferroelectric domains. The phase structure evolution disclosed by Zvirgzds et al. [20] matches well with the macroscopic physical properties of BNT materials during the heating process. In 1974, Sakata et al. reported antiferroelectric-like behavior in 0.85BNT-0.15SrTiO3 ceramics [23]. Later, Zhang et al. introduced (K,Na)NbO3 (KNN) into BNT-BaTiO3 (BT) ceramics to lower the phase transition temperature and achieved antiferroelectric-like behavior in BNT-BT-KNN ceramics with slanted polarization hysteresis loops at room temperature [24]. In 2011, Gao et al. [25] first investigated the energy storage properties of the BNT-BT-KNN system, taking 0.89BNT-0.06BT-0.05KNN ceramics as the object. Figure 1(a) shows the temperature-dependent dielectric properties of the 0.89BNT-0.06BT-0.05KNN ceramics; these ceramics showed a much lower Td compared with pure BNT materials, indicating antiferroelectric-like behavior at a lower temperature. Figure 1(b, c) show the temperature dependence of the polarization hysteresis loops of the 0.89BNT-0.06BT-0.05KNN ceramics under different electric fields. At 20°C, the polarization hysteresis loop was more ferroelectric-like, with a coercive field Ec = 0.9 kV/mm and remnant polarization Pr = 6.2 μC/cm2 under 6 kV/mm. At 110°C, the polarization hysteresis loop was more antiferroelectric-like, with pronounced shrinkage in both Ec and Pr compared with those at 20°C. The energy density as a function of temperature for the 0.89BNT-0.06BT-0.05KNN ceramics is displayed in Figure 1(d) [25]. An energy density of around 0.59 J/cm3 under 5.6 kV/mm at 10 Hz was obtained in 0.89BNT-0.06BT-0.05KNN ceramics from 100°C to 150°C, indicating high temperature stability in the antiferroelectric-like region. Although the obtained energy density was very small and only existed above 100°C, this work is still meaningful because it paved the way for further studies of energy storage in BNT-based materials. Since then, research on the energy storage properties of BNT-based ceramics has been extensively reported. Ren et al. [26] reported that the introduction of KNN decreases the Td of BNT-BiAlO3 (BA) ceramics and that the KNN content exerts a significant influence on the polarization hysteresis loops of BNT-BA-KNN materials, as shown in Figure 2b. For 0.93(0.96BNT-0.04BA)-0.07KNN ceramics, the Td was below room temperature, as depicted in Figure 2a, and these ceramics showed more antiferroelectric-like behavior. Ren et al.
[26] also investigated the energy storage properties of 0.93(0.96BNT-0.04BA)-0.07KNN ceramics; an energy storage density of 0.65 J/cm3 was obtained under 8 kV/mm at room temperature, and these ceramics exhibited good stability of energy density as a function of temperature and frequency at 7 kV/mm, as can be seen in Figure 2c, d. Due to the high energy loss of the antiferroelectric-like BNT-based materials, BNT-based relaxor ferroelectrics have attracted more and more attention for energy storage and usually show superior energy storage performance. In fact, by modifying the composition and temperature in BNT-based systems, a normal or square P-E loop can transform into a slim P-E loop due to the occurrence of an ergodic relaxor phase, which is beneficial to the energy storage properties. Wu et al. [27] focused on the energy storage characteristics of BNT-based relaxor ferroelectric ceramics and introduced Sr0.85Bi0.1□0.05TiO3 (□ represents an A-site vacancy) and NaNbO3 into the BNT matrix, as illustrated in Figure 3. The introduced A-site vacancies and the Sr2+ and Nb5+ ions replaced the A- and B-site ions, respectively, which led to stress mismatch and charge imbalance. These effects acted together to form a local random field, which broke the long-range ordered dipole structure in the matrix and formed weakly coupled polar nanodomains. Under the applied electric field, the modified ceramics exhibited small hysteresis and small remnant polarization, achieving a high energy storage density (3.08 J/cm3) and a high energy storage efficiency (81.4%). To evaluate the practicability of the modified ceramics, energy storage performance tests over a wide range of temperature and frequency found that the variation of energy storage performance from room temperature to 100°C and from 1 Hz to 100 Hz was less than 10%. The modified ceramics are thus excellent candidate materials for dielectric energy storage capacitors.

New BNT-based ceramics for pulse power supply application
Ferroelectric materials have an important application in pulse power supplies due to their shock-compression-induced depolarization behavior [28]. At present, the main material systems studied are PZT52/48 piezoelectric ceramics [28], PZT95/5 ceramics [28,29] and PIN-PMN-PT single crystals [30]. However, due to the toxicity of lead, it is urgent to develop lead-free materials for ferroelectric pulse power supplies. Bi0.5Na0.5TiO3 (BNT) has been explored as an alternative lead-free candidate for pulse power supplies, in view of its high Pr, high breakdown strength Eb, low bulk density, and relatively high Curie temperature (Tc). Gao et al. [31] reported that BNT can be fully depolarized by shock compression and generate a giant power output (3.04 × 10^8 W/kg). This power output is mainly attributed to a two-step polar-nonpolar phase transition from rhombohedral to orthorhombic under shock pressure. Figure 4 shows that BNT is in a polar rhombohedral phase (space group R3c) at low pressure and transforms via a first-order phase transition to a nonpolar phase (space group Pnma), which is orthorhombic and centrosymmetric. The electrical output of BNT from depolarization under shock compression can be attributed to the ferroelectric-to-paraelectric (R3c-Pnma) phase transition. The energy output under shock compression in BNT is larger than that reported in other ferroelectric materials, mainly due to the first-order R-O phase transition under high dynamic pressure.
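As a brief aside recapping the energy-storage section above before continuing with the shock-compression results: the recoverable energy density quoted for such ceramics is conventionally obtained by integrating E dP along the discharge branch of the measured P-E loop, and the efficiency is that value divided by the total energy stored on charging. The short Python sketch below illustrates the calculation on a synthetic slim loop; the loop shape and all numerical values are illustrative assumptions, not data from the works cited in this chapter.

```python
import numpy as np

# Synthetic slim P-E loop (placeholder values, not measured data).
E = np.linspace(0.0, 10e6, 400)        # field from 0 to 10 kV/mm, in V/m
P_max, P_r = 0.40, 0.05                # C/m^2 (40 and 5 uC/cm^2), assumed

# Charging branch: P rises from 0 to P_max as E rises to E_max.
P_charge = P_max * np.tanh(E / 4e6) / np.tanh(E[-1] / 4e6)
# Discharging branch: P falls back to P_r at E = 0, lying above the charging branch.
P_discharge = P_r + (P_max - P_r) * np.tanh(E / 2.5e6) / np.tanh(E[-1] / 2.5e6)

W_total = np.trapz(E, P_charge)        # energy stored on charging, J/m^3
W_rec   = np.trapz(E, P_discharge)     # recoverable energy on discharge, J/m^3
eta     = W_rec / W_total              # energy storage efficiency

print(f"W_rec = {W_rec / 1e6:.2f} J/cm^3, efficiency = {eta:.0%}")
```

For this placeholder loop the recoverable density comes out on the order of 0.6 J/cm3 with an efficiency near 60%, i.e., the same order of magnitude as the early BNT-BT-KNN results quoted above; slimmer loops (smaller hysteresis and remnant polarization) push both numbers up, which is why relaxor compositions are favored.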
This phase transition proceeds in two steps, which correspond to unit-cell shrinkage and O2− ion chain rearrangement, respectively, as shown in Figure 5. These results extend the potential application of pressure-induced depolarization effects and guide the application and development of BNT ferroelectric materials. Liu et al. [32] reported pressure-driven depolarization behavior in 0.97[(1-x)Bi0.5Na0.5TiO3-xBiAlO3]-0.03K0.5Na0.5NbO3 (BNT-xBA-0.03KNN) ceramics. In particular, with increasing hydrostatic pressure from 0 MPa to 495 MPa, the polarization of BNT-0.04 decreases from 30.7 μC/cm2 to 8.2 μC/cm2, a decrease of about 73%. The observed depolarization effect is associated with a pressure-induced transition from the polar ferroelectric phase to a nonpolar relaxor phase. The results reveal BNT-xBA-0.03KNN ceramics as promising lead-free candidates for energy conversion applications based on the pressure-driven depolarization effect. (Figure caption, regions A and B: when the pressure is below 1.9 GPa (region A), the enthalpy of the R3c phase increases sharply due to the volume decrease, as shown in (b); when the pressure is above 1.9 GPa (region B), the enthalpy of the R3c phase increases gently, mainly because the O2− ions displace along the red arrows in (c).) The pressure-driven depolarization thus reflects the FE-ER phase transition. This is quite similar to the case of Nb-doped PZT95/5, in which pressure can drive the larger-volume FE phase to transform into the smaller-volume AFE phase. Peng et al. [33,34] reported the depolarization behavior of lead-free ternary 0.99[0.98(Bi0.5Na0.5)(Ti0.995Mn0.005)O3-0.02BiAlO3]-0.01NaNbO3 (BNT-BA-0.01NN) ferroelectric ceramics under shock wave compression. In particular, approximately complete depolarization under shock compression was observed in the poled BNT-BA-0.01NN ceramics, releasing a high discharge density J of 38 μC/cm2. The released J was 96% of the thermally induced discharge density (~40 μC/cm2). This discharge density J was 18% higher than that of PZT95/5 ceramics [29]. The shock-induced depolarization mechanism can be attributed to the ferroelectric-ergodic relaxor phase transition. These results reveal BNT-based ceramics as promising candidates for pulsed power applications. Figure 8 shows that the BNT-based ceramics were almost completely depolarized, similar to PZT95/5 ceramics [29] and PIN-PMN-PT crystals [30], which indicates a similar depolarization mechanism, that is, a stress-induced phase transition. Although the released J in BNT-based ceramics is 26% lower than that obtained in PIN-PMN-PT crystals, the simple preparation methods together with environmental friendliness will benefit their applications in the future. Figure 9 unveils the possible shock-induced depolarization mechanism of the BNT-BA-0.01NN ceramics. The pinched P-E loops gradually emerge and the sharp current peak splits into four peaks, indicating a pressure-induced FE-ER phase transition. It is suggested that applying compressive pressure favors the formation of the ER phase because of its smaller volume.

New BNT-based ceramics for pyroelectric detection application
Various lead-free pyroelectric material systems have been explored [41-44]. Among them, BNT-based ceramics have been regarded as one of the most promising alternative lead-free ceramics due to their high pyroelectric coefficient (p), high remnant polarization Pr (around 38 μC/cm2), high Curie temperature Tc (around 320°C), low cost, and simple synthesis process. In recent decades, the pyroelectric properties of BNT-based materials, including the pyroelectric coefficient and detection rate, have been greatly improved.
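In practice, the pyroelectric coefficient p introduced above is commonly extracted from the pyroelectric current measured while a poled sample is heated at a constant rate, p = Ip / (A · dT/dt) (a quasi-static, Byer-Roundy-type relation). The snippet below is only a minimal illustration of that relation with made-up numbers; it does not reproduce the measurement procedures of the works cited in this section.

```python
# Minimal sketch of p = I_p / (A * dT/dt); every value below is an illustrative placeholder.
electrode_area_cm2 = 0.50            # electroded area A of the poled ceramic
heating_rate_K_s   = 2.0 / 60.0      # constant heating rate, here 2 K/min
pyro_current_A     = 1.0e-9          # pyroelectric current read at the temperature of interest

p = pyro_current_A / (electrode_area_cm2 * heating_rate_K_s)   # C/(cm^2 K)
print(f"p = {p:.2e} C/(cm^2 K)")     # -> 6.00e-08 for these placeholder numbers
```

Repeating the same reading at successive temperatures yields the p(T) curves of the kind discussed for the BNT-based compositions below, with the sharp rise of p near Td marking depolarization.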
The pyroelectric coefficient of BNT-based lead-free pyroelectric materials has become comparable to that of commercial PZT [45][46][47]. However, the enhanced pyroelectric properties are usually achieved at the cost of a degraded depolarization temperature (<150°C) and thermal stability, which are the hurdles to application. BNT-based pyroelectric ceramics with a low Td will depolarize partially or completely during heat treatment (typically >100°C), causing degradation of the pyroelectric performance. Therefore, from the viewpoint of practical application, it is urgent for BNT-based materials to optimize their depolarization temperature, thermal stability, and pyroelectric performance, and thereby promote their application in infrared detection [48,49].

BNT-BNN pyroelectric ceramics
(1-x)(Bi0.5Na0.5)TiO3-xBa(Ni0.5Nb0.5)O3 lead-free pyroelectric ceramics (abbreviated as (1-x)BNT-xBNN) were synthesized by a conventional solid-state reaction method [50], and the thermal stability and depolarization temperature were enhanced while excellent pyroelectric performance was maintained. BNN is a compound with a mixed valence state at the B-site, which can form a solid solution with BNT and allows a wide range of composition adjustment. The (1-x)BNT-xBNN system combines the advantages of B-site acceptor substitution and donor substitution. The effect of the BNN content on the phase structure, electrical properties, and thermal stability was systematically studied. After the solid-state reaction with BNN, (1-x)BNT-xBNN exhibits enhanced pyroelectric performance with a high depolarization temperature. In addition, it can be exposed to temperatures up to ~145°C with negligible deterioration of the pyroelectric properties, showing excellent thermal stability. The temperature-dependent properties of poled (1-x)BNT-xBNN ceramics are displayed in Figure 10a-d. With increasing BNN content, the Curie temperature Tc, indicated by the maximum of the dielectric constant, decreases, while the dielectric constant and dielectric loss first decrease and then increase. The minimum values of the dielectric constant and dielectric loss occur when the BNN content is 2%, which further improves the pyroelectric detection figure of merit. The depolarization temperature Td can be identified from the first anomalous point of the temperature-dependent dielectric properties, and the composition with 2% BNN has the highest depolarization temperature. As shown in Figure 10e, the room-temperature p value rises from 3.01 × 10^-8 C/cm2K for pure BNT to 5.94 × 10^-8 C/cm2K for 0.96BNT-0.04BNN with increasing BNN addition, which compares favorably with many other lead-free ceramics. The p value of the (1-x)BNT-xBNN ceramics also increases with increasing temperature, indicating that the (1-x)BNT-xBNN samples are sensitive to the ambient temperature. Besides, the 0.98BNT-0.02BNN ceramics have the best thermal stability and can withstand heat treatment at 145°C without depolarization (Figure 10f), which is attributed to domain switching and the phase transition.

BNT-BT pyroelectric ceramics
BNT-BT possesses a rich phase structure, which can be easily adjusted by varying the BT content.
Because of the low rhombohedral-tetragonal transition barrier, the morphotropic phase boundary (MPB) of BNT-BT, located where the BT content is approximately 6%, exhibits the best pyroelectric properties and has received much attention. However, it is not advisable to blindly pursue a high pyroelectric coefficient. The improvement of pyroelectric performance often comes at the cost of a low depolarization temperature, which is not helpful for practical applications. It is found that samples with high BT content are in the tetragonal phase, which brings a higher Td than that of the rhombohedral compositions, but there had been no relevant report on the pyroelectric performance of high-BT-content compositions. Based on the above ideas, the tetragonal-phase 0.8BNT-0.2BT lead-free pyroelectric material with high BT content was successfully prepared, and its microstructure, dielectric properties, pyroelectric properties, and thermal stability were studied [51]. Owing to its high Td, this composition can endure a high-temperature environment (180°C) for half an hour with the room-temperature value of p remaining at ~90% of its initial value, demonstrating that the 0.8BNT-0.2BT samples show excellent thermal stability. Moreover, the Td of the samples is up to ~209°C, which is far higher than that of previously reported BNT-based pyroelectric materials and is also comparable to commercial PZT materials. The pyroelectric properties of the 0.8BNT-0.2BT pyroelectric ceramics between 25 and 70°C were investigated. With increasing temperature, the pyroelectric performance shows an increasing trend, indicating that the material has good pyroelectric performance over a wide temperature range. Meanwhile, because the 0.8BNT-0.2BT sample has a low dielectric constant and dielectric loss, it shows a larger detection figure of merit (Figure 11a). In order to study the depolarization temperature of the material, the temperature-dependent dielectric spectrum of the sample is shown in Figure 11b. When the temperature rises to about 209°C, the dielectric constant of the sample suddenly increases and a dielectric loss peak appears, indicating that this temperature is the depolarization temperature Td. (Figure 11c caption: pyroelectric coefficient at room temperature after annealing at Ta; the inset shows the temperature-dependent pyroelectric coefficient on heating after annealing at Ta [51].) Notably, the depolarization temperature of reported BNT-based pyroelectric materials is generally lower than 180°C. The discovered material with a high Td (209°C) and a high pyroelectric coefficient lays the foundation for the further development of lead-free pyroelectric materials. Moreover, it can be observed from Figure 11c that the room-temperature pyroelectric coefficient of 0.8BNT-0.2BT retains about 90% of its original value after treatment at 180°C, indicating that the material has good temperature stability and can withstand high-temperature treatment up to 180°C without loss of pyroelectric performance.

BNT-BA-NN pyroelectric ceramics
A new ternary system of 0.98BNT-0.02BA-xNN ceramics was obtained by forming a solid solution of NaNbO3 (NN) in the BNT-BA system with Mn substitution modification [52]. The NN solid solution significantly affects the microstructure, phase transitions, and pyroelectric properties of the 0.98BNT-0.02BA-xNN ceramics. It was found that NN addition tends to reduce the rhombohedral phase while favoring the formation of the tetragonal phase. The compositions exhibit excellent pyroelectric performance.
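Before turning to the detailed results, note that the detection figures of merit quoted for such pyroelectric ceramics are conventionally defined as Fi = p/cv (current responsivity), Fv = p/(cv·ε0·εr) (voltage responsivity), and Fd = p/(cv·sqrt(ε0·εr·tanδ)) (detectivity), where cv is the volume specific heat. The sketch below simply evaluates these standard expressions; the input values, in particular cv, are assumptions chosen for illustration and are not the values reported for the ceramics discussed in this chapter.

```python
import math

# Standard pyroelectric figures of merit evaluated with illustrative numbers.
# All inputs below are placeholder assumptions, not data from refs [50-52].
p     = 8.0e-4     # pyroelectric coefficient, C/(m^2 K)  (= 8.0e-8 C/(cm^2 K))
c_v   = 2.8e6      # volume specific heat, J/(m^3 K), assumed typical for a BNT ceramic
eps_r = 500.0      # relative permittivity, assumed
tan_d = 0.02       # dielectric loss tangent, assumed
eps0  = 8.854e-12  # vacuum permittivity, F/m

F_i = p / c_v                                      # current responsivity FOM, m/V
F_v = p / (c_v * eps0 * eps_r)                     # voltage responsivity FOM, m^2/C
F_d = p / (c_v * math.sqrt(eps0 * eps_r * tan_d))  # detectivity FOM, Pa^-1/2

print(f"F_i = {F_i:.2e} m/V")
print(f"F_v = {F_v:.2e} m^2/C")
print(f"F_d = {F_d:.2e} Pa^-1/2")
```

With these placeholder inputs the three FOMs land in the same ranges (10^-10 m/V, 10^-2 m2/C, 10^-5 Pa^-1/2) as the values quoted below for the x = 0.02 composition, which is why a low dielectric constant and loss are as important to detector performance as a large p.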
All compositions exhibit excellent ferroelectric properties at room temperature, and the Pr values are all higher than 35 μC/cm2; the Pr of the x = 0.03 composition is the largest, reaching 45 μC/cm2. Furthermore, the influence of the NN solid solution on the relaxation characteristics and phase transitions of the BNT-BA-based ceramics was analyzed by measuring the temperature-dependent dielectric properties shown in Figure 12a. Figure 12b shows the temperature dependence of the pyroelectric coefficient of poled 0.98BNT-0.02BA-xNN. The FE-ER phase transition occurs at Td, corresponding to the sudden drop in the polarization Pr. The largest peak appears for the composition x = 0.03, reaching 441.0 × 10^-8 C/cm2K, which is much larger than that of other reported BNT-based ceramics. As the NN content increases, Td continuously decreases. Notably, the Td of the x = 0.02 composition is still as high as 155°C. It can be observed from Figure 12c that the introduction of NN significantly improves the room-temperature pyroelectric coefficient. With increasing NN content, the p at room temperature (25°C) first increases and then decreases, and the maximum value is obtained at x = 0.03 (p = 8.45 × 10^-8 C/cm2K), an improvement of about 54% compared to the matrix (x = 0, p = 3.87 × 10^-8 C/cm2K). Moreover, the optimal figures of merit (FOMs) at room temperature were obtained at x = 0.02, with Fi = 2.66 × 10^-10 m/V, Fv = 8.07 × 10^-2 m2/C, and Fd = 4.22 × 10^-5 Pa^-1/2 (Figure 12d-f). Furthermore, the compositions with x ≤ 0.02 possess relatively high depolarization temperatures (≥155°C). These results unveil the potential of 0.98BNT-0.02BA-xNN ceramics for infrared detector applications.

Conclusion
Due to their strong ferroelectric properties, BNT-based ceramics exhibit great potential in the fields of energy storage, pulsed power supplies, and pyroelectric applications. In this chapter, new bismuth sodium titanate ceramics synthesized via composition modification were characterized, and their ferroelectric properties, phase transition behaviors under external fields, and related applications were presented. In detail, BNT-BT-KNN, BNT-BA-KNN, and BNT-SBT-NN ceramics for energy storage applications; BNT, BNT-BA-KNN, and BNT-BA-NN ceramics for pulsed power supplies; and BNT-BNN, BNT-BT, and BNT-BA-NN ceramics for pyroelectric detection applications were presented.
Spatial analysis for land suitability of Arabica coffee (Coffea arabica L.) in Bogor District

Arabica coffee requires land suitability criteria to support its growth and productivity. If an area has infertile soil conditions and a climate that does not accord with the criteria for growing Arabica coffee, several alternative solutions are needed to determine the suitability of land in that area. Insufficient knowledge about suitable land can contribute to inefficient land use. Information on land suitability for Arabica coffee in Bogor district is not yet available. This study aimed to analyse the spatial distribution of potential land for developing Arabica coffee commodities in Bogor. Hence, a spatial analysis of land suitability was carried out by utilizing the capabilities of a Geographic Information System, using an overlay based on the Digital Elevation Model (DEM), agroclimatic variables, soil physical properties (effective depth, soil texture), soil chemical properties (pH, base saturation, cation exchange capacity/CEC), and land use information. Land suitability for Arabica coffee was classified into suitable, marginally suitable, and not suitable, with areas of 398.68 ha, 32,209.4 ha, and 266,617 ha, respectively. The potential land for Arabica coffee comprised 126 ha in the S2 suitability class, 18,681.00 ha in the S3 suitability class, and 280,418.58 ha of non-potential land.

Introduction
Coffea arabica L. and Coffea canephora Pierre ex A. Froehner are the two major coffee species with important economic value in the world. Arabica accounts for around 60% of the world's coffee production. Compared with Robusta coffee, Arabica coffee grows at higher altitudes with lower yields. Arabica also has lower resistance to weather shocks, pests, and diseases than Robusta. Even though there is considerable variation, Arabica coffee is twice as expensive as Robusta. Arabica coffee quality is measured through an exercise called cupping and is the most critical determinant of Arabica coffee prices in high-value markets [1]. Arabica beans tend to have a sweeter, softer taste, with flavors of sugar, fruit, chocolate, and berries. Arabica coffee contains more lipids and sugars. It also has higher (sometimes wine-like) acidity than Robusta. As one of the coffee producers, Indonesia is known as one of the world's five largest exporters of coffee. The average national coffee consumption was 370,000 tons, with a growth rate of 8.22% per year. This shows that Indonesia has good prospects for the development of coffee products [2]. However, farmers face several problems, particularly smallholder farmers, who account for most coffee plantations. Bogor district is one of the Arabica coffee-producing areas in West Java Province, Indonesia. The area of coffee plantations continued to increase, reaching 6,407.70 ha in 2019. Currently, the area of Arabica coffee reaches 584 ha, producing 155,993 kg of coffee beans. Areas of Arabica coffee are located in the Sukamakmur, Cisarua, Megamendung, Pamijahan and Babakan Madang sub-districts [3]. Arabica coffee requires land suitability criteria to support its growth and productivity. Insufficient knowledge about suitable land can contribute to inefficient land use [4]. The expansion of coffee plantation land is one of the main components in the Government's revitalization of plantations and development of Bogor coffee commodities. Based on the data, land use for coffee plantations is still very limited.
Hence, to optimize the potential of Arabica coffee, analysis is required to determine the level of regional suitability by utilizing a geographic information system. This helps in analyzing and defining the suitability of land and the spatial distribution of potential land for the development of coffee commodities in Bogor district. This study aimed to analyze the spatial distribution of potential land for developing Arabica coffee commodities in Bogor.

Materials and Method
The research was conducted from July to November 2020 in Bogor, West Java. Bogor is located at 6°18'-6°47' S and 106°01'-107°103' E (Figure 1). The district mainly lies on highlands, hills, and mountains, with constituent rocks dominated by volcanic eruption products consisting of andesite, tuff, and basalt. The soil types are dominated by volcanic material, including latosols, alluvials, regosols, podsolics, and andosols. This research was conducted as a spatial analysis of the land suitability of Arabica coffee in Bogor. It was based on the Digital Elevation Model (DEM), agroclimatic variables, soil maps, physical and chemical properties of the soil, and land use, by utilizing GIS. GIS techniques can be a powerful tool for assessing the suitability of agricultural land. The analysis was carried out using the analytical tools available in the GIS. The input data were processed using interpolation and classification based on the land suitability criteria of each parameter. The data or parameters were combined into a single data unit through an overlay process using union tools. These data were used as the basis for determining land suitability through a comparison method (matching) with the criteria for growing Arabica coffee (Table 1) [5]. Overlays are maps of various themes that can be overlapped to produce new mapping units with new information [6]. Merging the data was done to facilitate the subsequent analysis process. The technological approach of utilizing GIS makes it easy to input, merge, and analyze spatial data. The technological approach defines GIS as a set of tools for the input, storage and retrieval, manipulation, analysis, and output of spatial data. The flow chart of the land suitability analysis is shown in Figure 2.

Results and discussions
Rainfall maps were obtained from the average rainfall data recorded at the climatology station of each sub-district in Bogor district. The rainfall and dry-month data in the study area were based on the agroclimatic variables required for growing Arabica. The data were interpolated to produce a spatial distribution of rainfall, as given in Table 2. The highest average rainfall in Bogor District appeared in December-May, ranging from 232-517 mm/month, so this period was considered the rainy season, with the peak of rainfall in April. Meanwhile, the lowest average rainfall occurred in June-September, reaching 32.6-99.5 mm/month, with the lowest rainfall occurring in July. The dry months affect the process of primordial flower formation, initiation of flowering, pollination, and fertilization [7]. During the flowering process, coffee requires a dry month for the success of the pollination process so that it can produce fruit. If rain falls during the pollination period, the pollen usually clumps and the flowers are damaged, so pollination does not occur or fails to produce fruit.
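The classification and matching workflow described in the Methods (reclassify each interpolated parameter layer into the S1-N classes against the Table 1 criteria, then overlay the layers) can be sketched in a few lines of raster algebra. The Python fragment below illustrates the idea with a most-limiting-factor rule, under which each cell is only as suitable as its worst parameter; the thresholds, class codes, and tiny example arrays are illustrative assumptions, not the actual criteria or data used in this study.

```python
import numpy as np

# Suitability classes encoded as 1 = S1, 2 = S2, 3 = S3, 4 = N.
# The threshold values below are illustrative placeholders, not the Table 1 criteria.
def classify_rainfall(mm_per_year):
    return np.select(
        [mm_per_year >= 2000, mm_per_year >= 1600, mm_per_year >= 1200],
        [1, 2, 3],
        default=4,
    )

def classify_altitude(m):
    return np.select(
        [(m >= 1000) & (m <= 1500), (m >= 800) & (m < 1000), (m >= 600) & (m < 800)],
        [1, 2, 3],
        default=4,
    )

# Tiny synthetic "rasters" standing in for the interpolated parameter layers.
rainfall = np.array([[2100, 1700], [1300, 900]])   # mm/year
altitude = np.array([[1200,  900], [ 650, 200]])   # m above sea level

layers = np.stack([classify_rainfall(rainfall), classify_altitude(altitude)])

# Most-limiting-factor overlay: a cell is only as suitable as its worst parameter.
suitability = layers.max(axis=0)
print(suitability)   # -> [[1 2], [3 4]], i.e. S1, S2 / S3, N
```

In practice each layer would be a full raster (for example read with a library such as rasterio or derived directly in the GIS), and the remaining parameters (dry months, soil texture, effective depth, pH, base saturation, CEC) would be classified and stacked in the same way before the overlay.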
Based on the rainfall parameter, the S1 class in Bogor covered an area of 8,466.18 ha, while S2 covered 93,355.56 ha, S3 covered 95,822.85 ha, and the N class covered 97,281.20 ha. Based on the number of dry months, the area classified as S1 was 97,760.18 ha, S2 was 111,131.20 ha, S3 was 76,008.10 ha, and N was 12,165.77 ha. The spatial distribution of land suitability based on rainfall and dry months in Bogor District is indicated in Figure 3. The next growing criterion was altitude, for which data suitable for Arabica coffee cultivation had to be compiled for all areas of Bogor. This required elevation data in the form of contour data, obtained from the Indonesian Earth Map as a line-type shapefile. The contour data were further processed by selecting the altitude lines that follow the requirements for growing Arabica coffee. The data were then grouped by land suitability class. The spatial distribution of land suitability for Arabica coffee based on altitude in Bogor Regency is shown in Figure 4a. Altitude, a known surrogate variable for temperature, is the main driver of epidemics of coffee leaf rust (CLR) disease. Incidence and severity were highest in lowland fields, where poorly managed plantations of local varieties grown in the open sun were also more dominant, rather than in the highlands. CLR intensity decreased with increasing altitude in the highlands, where well-managed and improved varieties are grown under shade [8]. Based on altitude, the land areas suitable for Arabica coffee growth were classified as very suitable (S1) for 23,124.50 ha, quite suitable (S2) for 17,180.35 ha, marginal (S3) for 23,216.14 ha, and not suitable (N) for 233,932.20 ha. The parameters of the observed physical properties of the soil included effective depth, soil texture, and drainage speed. Soil texture is determined by the size of the soil particles, represented by the percentage of sand, silt, and clay in the soil [9]. Texture is the relative ratio between the fractions of sand, silt, and clay, i.e., soil particles with an effective diameter of 2 mm or less. This parameter is one of the most essential soil characteristics that affect soil moisture, drainage, infiltration, and nutrient and water retention capacity [10]. The results showed that the distribution of land suitability for coffee based on the physical properties of the soil was in the S1-S3 classes, ranging from very suitable to marginally suitable. No land was unsuitable based on the soil physical property and drainage speed parameters. The spatial distribution of land suitability based on the physical properties of the soil is shown in Figure 5. Based on the soil chemical property parameters, Bogor was in general very suitable (S1) and quite suitable (S2) for coffee growth. The soil pH value can be used as an indicator of soil chemical fertility because it reflects the availability of nutrients in the soil. Base saturation indicates the proportion of basic cations among all cations (acidic and basic) contained in the soil adsorption complex. The soil is fertile if the base saturation is >80%, soil fertility is moderate if the base saturation is between 50-80%, and the soil is infertile if the base saturation is <50%. In soil with 80% base saturation, basic cations are released and exchanged more quickly than in soil with 50% base saturation [11].
Cation exchange capacity (CEC) is one of the chemical properties of soil closely related to the availability of nutrients for plants. It is an indicator of soil fertility and nutrient retention capacity. CEC is the capacity of clay to adsorb and exchange cations. CEC is influenced by clay content, clay type, and organic matter content. Soil CEC describes soil cations such as Ca, Mg, and Na that can be exchanged and absorbed by plant roots [9]. The spatial distribution of land suitability based on chemical properties can be seen in Figure 6 (a: pH, b: base saturation, c: cation exchange capacity). The area of each land suitability class for Arabica coffee based on each parameter (agroclimatic variables, soil physical properties, and soil chemical properties) is presented in Table 3. Table 4 shows the suitability classes for Arabica coffee in Bogor: moderately suitable (S2) of around 398.68 ha, marginal (S3) of 32,209.42 ha, and not suitable (N) of 266,617.31 ha. Based on Table 3, rainfall and altitude are the growth-limiting factors for the S1 (very suitable) class in Bogor district. According to the regulation of the Minister of Public Works and Public Housing concerning guidelines for the preparation of provincial spatial plans, No. 15/PRT/M/2009, protected areas consist of protected forest areas, water catchments, river/lake/reservoir borders, local wisdom areas, nature reserves/cultural reserves, disaster-prone areas, geological protection areas, and animal protection areas. Some parameters are not only protected, but ideal distances or boundaries from them must also be respected in land use. These parameters consist of river/lake bodies, roads, and railroads. The lake/situ borderline is set around the lake/situ at least 50 (fifty) meters from the edge of the highest water level that has ever occurred. Table 3 gives the land suitability area (ha) classified as very suitable (S1), quite suitable (S2), marginal (S3), and not suitable (N) for each parameter of Arabica coffee in Bogor. The protected forest areas are located in the southern part of Bogor district, covering 41,176.303 ha. The bodies of rivers and lakes/situ are almost evenly distributed and concentrated in the central and northern parts; they occupy a land area of 16,077.782 ha. The road was the second parameter, after the railway line, with a small distribution; roads cover up to 659.247 ha of land. The types of roads considered as barriers were toll roads, arterial roads, and collector roads, which are primarily in the northern part. The railroad tracks are found only in the northern and central parts, running from north to south, covering 102.213 ha. A distinctive parameter is built-up land or settlements, which are almost evenly distributed and concentrated in the north. This is because Bogor district is directly adjacent to DKI Jakarta in the north. Residential land has an area of 47,786.507 ha. The distribution of limiting parameters is given in Figure 7.
Figure 7 shows the socio-economic parameters that limit the determination of land suitability for Arabica coffee in Bogor, consisting of a) protected forests, b) rivers or lakes, c) roads, d) railroads, and e) settlements or built-up land. The resulting land suitability class was the S2 class, for which the main limiting factor was the rainfall parameter. Land suitability classes limited by drainage speed and altitude lay in the range of 1,500-1,750 meters above sea level. The potential area is 126 ha, or 0.04% of the total area, for the S2 class and 18,681 ha, or 6.24%, for the S3 class. For the N class, the heaviest limiting factor was the altitude parameter; the lowland areas are not recommended for growing Arabica coffee. The land remaining in the N class covers 280,418.58 ha, or 93.71% of the total area. Potential lands for Arabica coffee development are located in the Sukajaya, Babakan Madang, Caringin, Ciawi, Cigombong, Cigudeg, Cijeruk, Cisarua, Jonggol, Klapanunggal, Megamendung, Nanggung, Sukamakmur, Sukaraja and Tanjungsari sub-districts. The spatial distribution of potential land for Arabica coffee in Bogor can be seen in Figure 8.

Conclusion
Based on the spatial analysis of Arabica coffee land suitability, which refers to the Digital Elevation Model (DEM), agroclimatic variables, soil maps, soil physical/chemical properties, and land use information, the land suitability classes for Arabica coffee are suitable (S2), marginally suitable (S3), and not suitable (N). The size of each area was 398.68 ha for the S2 suitability class, 32,209.4 ha for the S3 suitability class, and 266,617 ha for N (not suitable). After overlaying the land suitability map with protected areas, built-up land, lakes, roads, and railways, the potential land for Arabica coffee was 126 ha for the S2 suitability class, 18,681.00 ha for the S3 suitability class, and 280,418.58 ha for non-potential land.
Cultural Elements Found in Laura Ingalls Wilder's Little House in the Big Woods for Supporting English Teaching

The research aims at finding the cultural elements in Laura Ingalls Wilder's Little House in the Big Woods. This is a descriptive qualitative research. The data source of this research is Laura Ingalls Wilder's Little House in the Big Woods. The data of this research are words, phrases, clauses, sentences, and conversations with cultural elements found in the data source. The technique for collecting data is documentation. The data are classified based on the elements of culture. The technique of data analysis consists of data reduction, data display, and data verification. Theory triangulation is used for data validation. The results of the study show that the cultural elements found in Laura Ingalls Wilder's Little House in the Big Woods are (1) those dealing with the geographical situation, such as animals, plants, and seasons; (2) those relating to tradition, such as eating tradition (meal time, food and beverage, and ways of cooking), clothing tradition (types of clothes), medical tradition, hunting tradition, needlework, and the sugar snow tradition; (3) those dealing with religion and folklore; and (4) those relating to social values, such as speaking manners, eating manners, and manners for meeting people.

INTRODUCTION
In the 1980s the Indonesian television audience was quite familiar with a TV series produced by the National Broadcasting Company called Little House on the Prairie. The TV series was broadcast on TVRI, the only national television channel in Indonesia at that time. The TV series tells about the Ingalls family (Charles Ingalls, Caroline Ingalls, and their three daughters named Mary, Laura, and Carrie) living in a farm area near Walnut Grove, Minnesota, around the 1870s-1880s. The TV series was adapted from the Little House series of books written by Laura Ingalls Wilder. These books are very popular, so they were adapted into a TV series that ran for nine seasons from 1974 up to 1983. These books have also been translated into several languages such as Indonesian (Rumah Kecil di Padang Rumput), Japanese (Tsubasa Bunko), and Spanish (La casa de la pradera). Even in 2005 Disney made a miniseries version. Culture is closely related to language since language is part of culture. The National Center for Cultural Competence (NCCC) defines culture as an integrated pattern of human behavior that includes thoughts, communications, languages, practices, beliefs, values, customs, courtesies, rituals, manners of interacting and roles, relationships and expected behaviors of a racial, ethnic, religious or social group, and the ability to transmit the above to succeeding generations. From the definition it can be noticed that language is one of the elements building culture. As mentioned previously, culture can be communicated and transmitted across generations, and when talking about communication, language plays a very important part. So it can be concluded that language is a means of communication that can be used to transfer and spread culture among the society. Besides, communication can be comprehended fully only if it is compatible with culture. It means it is difficult to comprehend the goal of communication without sharing the same cultural knowledge or context. For example, when an Indonesian traveler asks the tour leader where they will perform prayers in the middle of their travels in Indonesia, it is probably easy for him to understand when the tour leader says that they will pray at the next gas station.
In Indonesia, gas stations are equipped with a building named a Mushola where travelers can perform their prayers. The tour leader's answer can be hard to understand for travelers from other countries who have no comparable concept of a praying place at a gas station. Language is not only a means of communication but also a reflection of the society and its world view. Sapir in Seelye (1993:6) states that the world view of a speech community is reflected in the linguistic patterns they use. Language is an indication of how speakers of that language view the world; and, inversely, how they view the world depends on the language system they have. Tense, or time, plays an important role in the English language system. Tense can influence the meaning of sentences in the English language system, which is why most English speakers pay great attention to time, such as being punctual and not wasting time. Meanwhile, culture can determine how the speakers of a certain language use it. Culture can be the reason for choosing the language forms used by the speakers. For example, young Javanese should use Krama Inggil (the highest register of Javanese) if they talk to their elders. The reason for choosing this kind of language is to be respectful to the elders, as demanded by Javanese culture. Another example is found in vocabulary. Eskimo people have many vocabulary items dealing with the word snow in their language. The same phenomenon happens in Indonesia, which is known as an agricultural country. Indonesian has many terms for the word rice, such as padi, gabah, beras, and nasi. On the other hand, English only uses the word rice for all the terms that Indonesian has, as mentioned previously. The relation between language and culture can also be captured in translation. "Her face is as white as snow" is preferably translated into "Wajahnya seputih kapas" in Indonesian. Indonesia has no snow in its seasons, so to make a good translation the word snow is domesticated into kapas (cotton), which looks like snow in color. Proverbs are another proof that culture influences language use. Indonesian has a proverb saying "Nasi sudah menjadi bubur", which has the closest meaning to the English proverb "It's no use crying over spilled milk". Those proverbs mean that it is useless to feel sorry about something that has already happened. Indonesian chooses the word nasi while English prefers the word milk. The reason for choosing those words is culture. Nasi is the main staple food in Indonesia, so it is considered an important thing. On the other hand, English uses the word milk because milk is so close to its society. Milk is one of the foods and beverages commonly served in the community.
b. Culture and Teaching Language
Language learners must be familiar with the four language skills that they have to master when they study a foreign language: listening, speaking, reading, and writing. There is still a fifth skill that language learners should know, i.e., the cultural skill. Studying a language, especially a foreign language, is not only studying its skills and elements but also its culture. A language learner may be able to make a sentence correctly, but the sentence may not be acceptable in the speakers' culture. For instance, an Indonesian learner will translate the sentence "Jariku terpotong" into "My finger is cut". Grammatically the sentence is correctly constructed, but that construction is not commonly used in English, which prefers saying "I cut my finger".
Introducing culture, such as its norms, traditions, and customs, is necessary in studying a foreign language to overcome misunderstanding between native speakers and learners. An example for this case is asking a foreigner a question such as "Where are you going?" The question is grammatically correct, so it is very simple for them to answer it. They, however, do not really like to answer it, because asking where they are going is not common and is considered an intrusion on their privacy, which can make them uncomfortable. Misunderstanding in the communication process can be avoided by knowing a bit about the culture of the language. Grammatical errors in communication can be tolerated, but cultural errors cannot be. Culture needs to be learned in language learning. Douglas (2007:189-190) mentions that the acquisition of a second language, except for specialized, instrumental acquisition (as may be the case, say, in acquiring a reading knowledge of a language for examining scientific text), is also the acquisition of a second culture. This emphasizes that studying a language is also studying its culture. Actually, cultural elements have already been introduced in some methods of foreign language learning, such as English. Introducing culture can reduce culture shock in the communication process. The cultural skill will be useful when the language learners actually live in the countries where the foreign language is spoken. Introducing culture can also give understanding about stereotypes. A stereotype is an opinion about individuals based on the group of society to which the individual belongs. An example of a stereotype is that all Americans are generally considered to be friendly, generous, and tolerant but also arrogant, impatient, and domineering. Understanding other cultures can actually increase learners' awareness of their own culture. Knowing other cultures also teaches learners how to respect other cultures and makes them proud of their own culture.
c. Cultural Materials in Language Class
Introducing culture can be done in simple ways, such as introducing the geography, traditions, customs, and life style of the native speakers. Those, however, should be integrated into language learning such as listening, speaking, reading, and writing. Introducing culture in the language class can reduce boredom and increase students' interest because language is learned from another perspective. There are some materials that can be used to teach language with cultural elements. The following are some materials dealing with culture that can be applied in the language class.
Food and beverage
Every nation has its own food and beverages. This can introduce students to new vocabulary dealing with names of food and beverages. Food and beverage topics can be used to teach procedure texts.
Holiday and Festival
There are many holidays and festivals found around the world such as Bank Day, Muharam Day, New Year's Day, Ngaben, Thanksgiving, and Halloween. Knowing the holidays and festivals of other countries can increase students' knowledge of the culture. Language learners can learn what people usually do on those holidays, how they celebrate them, etc. These materials can be used to teach descriptive texts, such as describing certain holidays or festivals. Besides, they can be used to teach recount texts, such as telling or writing about each language learner's last New Year celebration or the experience of watching a Ngaben celebration in Bali, Indonesia.
Clothes
Clothes match the culture, the geographical situation, as well as the seasons. Knowing kinds of clothes enables language learners to increase their vocabulary dealing with kinds of clothes and fabrics. Besides, language learners can also learn about the traditional clothes of certain countries, such as Kebaya from Indonesia and Kimono from Japan. Material on clothes can be developed into descriptive text material, such as describing traditional clothes.
Music
Music can cover kinds of music, traditional music instruments, or traditional songs. These materials will help students learn new vocabulary. Songs can be very useful to develop students' listening ability and pronunciation.
Currency
Learning about currency will increase students' knowledge of currency, such as its nominal value and history.
Traditional stories
Every country has traditional stories such as Timun Mas (Indonesia), Mulan (China), and Robin Hood (England). These stories can be used as material for reading recount and narrative texts. They can also be used to explore writing skills. Furthermore, they can increase students' vocabulary.
Religion
Religion, such as the celebration of certain holidays, can be used to introduce culture in the language class. Descriptions of certain celebrations such as the Eid festivals are good sources for teaching reading using descriptive and news report texts. This material can also add to students' vocabulary dealing with religious activities.
Family
Family is a good source to introduce culture, such as the concept of who the family is. The book tells about Laura's family: her parents (Pa and Ma, the address terms used to call parents instead of calling them father-mother or daddy-mommy), her elder sister (Mary), and her younger sister (Carrie). The book also tells about some traditions, such as sugar snow, which deals much with the process of making maple syrup. There are stories about how to slaughter a pig and how to cook it into many kinds of meals and snacks, how to make cheese, etc. The book also explores the recipes of traditional cakes, pies, and bread.
METHODOLOGY
This is a descriptive qualitative research. This research describes the phenomena dealing with culture found in LHitBW written by Laura Ingalls Wilder. The data are all cultural elements found in Laura Ingalls Wilder's LHitBW. The technique of data collection is documentation, in which the data are collected from written objects such as books, magazines, documents, notes, and other documentation. After that, the data were classified based on the elements of culture. The data analysis consists of data reduction, for removing data that have no relation to cultural elements, data display, and data verification. Theory triangulation is used for data validation.
DISCUSSION
The Cultural Elements Found in Laura Ingalls Wilder's LHitBW
Geographical Situation
The setting of LHitBW is a little town called Pepin, Wisconsin, in the United States of America. The surrounding area is a big woods region which also contains prairie. The cultural elements belonging to the geographical situation are plants, animals, and seasons.
a. The Plants
The plants found in this book are trees, flowers, and vegetables. The researchers found oak trees, maple trees, cherry trees, pine trees, hickory trees, and walnut trees. Those kinds of trees are commonly grown and found in the Big Woods area. The flowers appearing in this book are buttercups, violets, and tiny starry grass flowers. The vegetables found in the book are potatoes, carrots, beets, turnips, cabbages, pumpkins, and squashes.
Besides, the plants found include corn and wheat, which were the staple foods there at that time.
b. The Animals
The researchers classified the animals found in this research into wild animals, rodents, prey, and livestock. The wild animals found are wolves, bears, panthers, a black cat, and foxes. The researchers found rodents such as muskrats, minks, otters, and squirrels. The prey animals found are rabbits, deer, and even bears. Cows and chickens are the common livestock there. Besides, there are horses that are commonly used to pull the wagon.
c. Seasons
The area is in America, which has four seasons, namely winter, spring, summer, and fall (autumn). The seasons have a big influence on the culture, such as the clothes and the ways people adapt.
1) Winter is the coldest season of the year. Winter usually falls in December, January, and February. Snow always falls at the beginning of winter time. Night time is longer than day time in winter. To reduce the cold in winter, the people usually keep their fire burning in the house. The people prefer staying at home to doing activities outside. People wear thick layers of clothes to keep their bodies warm when doing activities outside. Before the winter comes, people prepare their winter needs. The people usually store their vegetables, fruit, fish, and meat for winter in the attic. Sugar snow marks the coming of winter. Sugar snow is the harvest time of sugar from the maple trees. The maple trees are tapped and the sap is made into sugar.
2) Spring comes after winter, in March, April, and May. The snow lessens during this season and finally disappears, so the people start to cultivate their fields. The plants grow and flowers bloom in spring. The weather is warm and sunny. The day is longer than the night. The animals that hibernated in winter wake up in spring. There is no more hunting in spring; spring is not appropriate for hunting since the animals that have just woken up from hibernation are not fat enough to be hunted. The livestock, such as cows, are taken to the woods to eat grass.
3) The next season is summer. It is the hottest season of the year. It usually takes place from June up to August. There are many activities done in this season; the most common is visiting family and relatives. Day time is longer and night time is shorter in summer. There is no more hunting in summer because the people already have enough food from cultivating the fields since spring.
4) The last season is fall, or autumn. This season usually starts in September and lasts up to December. The days are getting shorter and the weather is getting cooler because it is close to winter time. The leaves change their color and finally fall.
Tradition
The traditions found in LHitBW deal with (a) meal tradition, such as food and beverage, meal time, and how to cook meals; (b) clothing tradition; (c) traditional medical treatment; (d) hunting; (e) needlework; and (f) sugar snow.
1) Meal Time
Meal times consist of breakfast, dinner, and supper. The term lunch is not found in this book. Dinner is used in place of the term lunch. So far most Indonesian learners know that lunch and dinner are different. Dinner is a meal eaten in the evening, especially a formal one. Lunch is a meal taken in the middle of the day, but in British English this can be called dinner.
2) Food and Beverage
The food found in LHitBW includes vegetables, meat, bread, cake, and milk.
a) Meat
Meat is the main food in the story, especially in winter time. Meat is obtained from hunting. To preserve the meat, the people use very traditional ways such as salting and smoking.
The people have venison (deer) and pork (pig). Pork is the favorite meat in the story. Pork can be made into many kinds of food, such as: (1) ham, which is pork preserved through salting, smoking, and wet curing; (2) shoulder, a piece of meat that includes the upper part of the animal's front leg; (3) side meat, slabs of meat taken specifically from the sides of the pig; (4) spare ribs, a cut of meat from the rib section, especially of pork or beef, with some meat adhering to the bones; and (5) belly, a boneless cut of fatty meat from the belly of the pig. Other parts of the pig are also cooked, such as the liver, tongue, and head. The head is made into headcheese. The smallest pieces of meat are made into sausage, which is seasoned with salt and spices. The pig also produces lard, which is used as a cooking fat and is sometimes used in place of butter. The pork rind is fried into a snack called crackling.
b) Fish
The people also eat fish. They preserve fish by salting it and storing it in barrels.
c) Bread and cake
Bread and cake are also among the foods served at their meals. The breads found are salt-rising bread and rye 'n' Injun bread. Salt-rising bread is a thick white bread made without yeast, from wheat, water or milk, and corn. One typical way of serving this bread is as toast. Rye 'n' Injun bread is made from corn and rye; Injun is an old colloquial word for Indian, the natives of America. Corn was the first native staple food adopted by the European immigrants when they came to the New World (America). Rye 'n' Injun is usually served with baked beans. The cake found in this book is Johnny cake, which is also known as jonny cake, Shawnee cake, Johnny bread, corn bread, jonikin, or mushbread. This bread is made from corn mixed with salt and warm water or milk and sugar. It is a native food of North America, served with maple sugar as a topping and baked in the fireplace. Johnny cake is easy to make and to prepare for a journey, so many people also call it journey cake.
d) Pie
The next food is pie. A pie is a baked dish made of a pastry dough casing filled with meat, fruit, or vegetables, and it is a kind of dessert. The pies found in this book are pumpkin pie, vinegar pie, and dried apple pie. Pumpkin pie is a sweet dessert made from pumpkin, milk, and eggs baked on a pie plate; people usually serve it in autumn and early winter for Thanksgiving and Christmas. Vinegar pie is a pie filled with water, vinegar, and butter and then sweetened with brown sugar; it is a traditional pie with a traditional recipe. Dried apple pie is a sweet pie made from dried apples baked in a pastry.
e) Pancake
A pancake is a flat, thin, round cake made from flour, eggs, milk, and butter, cooked on a hot surface such as a frying pan. Pancakes are considered a breakfast food in America and are served with jam, fruit, syrup, chocolate, or meat as toppings. A pancake is somewhat similar to Srabi in Indonesia.
f) Sandwich
A sandwich is food consisting of one or more fillings such as vegetables (lettuce, tomato), meat, egg, and cheese placed between slices of bread. The sandwich is popular in America but has now spread around the world. It is usually served at breakfast; people, however, often bring sandwiches to the workplace, to school, or on picnics as a lunch menu.
3) How to cook There are many procedures of cooking some food, such as how to smoke meat, how to cook pork, how to color the butter, how to make pudding, how to make cheese, how to make candy, how to make pumpkin pie. a) How to smoke meat The people on the story preserve the meat they have by smoking and salting. Salting is done by spreading salt over the meat then it is wrapped in paper and is stored in the attic. Smoking is done in a simple way. They use the hollow log applied with chain of nails inside the hollow to hang the meat. In the top of the log is covered with roof and in the bottom of it is put a small door. The meat that has been salted is hung in the nails inside the hollow log. Just inside the little door in the hollow log there is a fire of tiny of bark and moss to smoke the meat. To keep the fire on some of the chips is put on it. When the meat has been smoked enough, then the fire is put off and all the strips and pieces of meat out of the hollow tree. The meat is wrapped each neatly in the paper and hung them in the attic. b) How to cook pork. Pork is the most favorite meat. Having cut, some of the pork is preserved by smoking and salting. Others is cooked into some food such as Headcheese The head is scraped and cleaned carefully and then boiled till all the meat fell of the bones. The melt meat is chopped fine and seasoned with pepper, salt, and spices. Then it is mixed with the pot-liquor and set in a pan to cool. When it is cool it will be cut into slices and that is headcheese. Sausage The little pieces of meat lean and fat that has been cut off the large piece is chopped and chopped well. Then it is seasoned with salt, pepper, and dried sage leaves from the garden. After that, the meat is tossed and turned until it was well mixed and molded into balls. Then the balls are put in a pan out in the shed to freeze them. How to color the butter The people color the butter especially in the winter because butter looks pale white in the winter. They use carrot that has been grated for coloring the butter. How to make candy Candy is the most favorite snack for children especially in Christmas. Most of the family make candy for their children. They make candy from molasses and sugar. First they boil the sugar and molasses on the pan so they become thick syrup. After that the syrup will be poured in little stream into another pan filled with clean snow. How to make pumpkin pie Pumpkin pie is sweat dessert made from orange-colored pumpkin. The way to make it is simple. First is cut the big orange-colored pumpkin into halves so it is easy to clean the seeds out of the centre and cut the pumpkin into long slices. After that pare the rind and cut the slices into cubes. The next is put them into the big iron pot filled with some water on the stove and boil slowly until the pumpkin becomes thick, dark and good smelling. b. Clothing Tradition Season influence much on the clothes worn by the people on the story, for instance on winter they prefer thick clothes to keep them warm. Even they wear layers of clothes such as coat, robe with muffler and shawls. The clothes they usually wear are: 1) Head cover such as veils, sunbonnets, and cape hood for women and girls, hat and cap for boys and men. Head cover is used to protect them from sun heat but sometime it is for fashion too. 2) Coat and robes 3) Petticoat, dress and gown for women and girls. 4) Shawls and mufflers for covering shoulders and neck. 5) Mitten and gloves for keeping hands warm. 6) Shoes and stocking for footwear. c. 
Traditional Medical Treatment The tradition for medical treatment found here deals with how to cure the bee sting (Yellow Jacket). The first step to help someone bitten by Yellow Jacket is covering him or her with mud on the part of body bitten. To reduce the fever that comes along after the biting, give some traditional herbs. d. Hunting Tradition Hunting has become a tradition of the people in the story especially approaching the winter time. They hunt for meat and fur. They bring the fur to the town for trading. They hunt for rabbits, deer, and even bears. They hunt with guns and snare. There are two activities dealing with hunting tradition found in the book such as making bullets and deer-lick. 1) Making Bullets They make their own bullets by melting the bits of lead in the big spoon with the coal. After that the melted lead is poured into the little hole in the bullet-mold. When the bullets are done then dropped them out of the mold. 2) Deer-lick Another thing deals with hunting is deer-lick. It is a place in the forest where the deer come to get salt. The people sprinkle the salt over the ground and wait for the deer coming to the area and licking the salty place in the ground so that they can hunt the deer easily. e. Needlework Needlework is a very popular skill in women and girl activity. Many women and even young girls have good ability in needlework such as sewing, knitting and quilting. Wives prefer sewing clothes for themselves and their family. There are two kinds of needlework found in LHitBW, such as; 1) Nine-patch work Nine-patch quilt is closely related to patchwork quilt. It is a kind of needlework that deals with combining two or more layers fabric. The first layer or the top layer consists of repeat patterns built up with different fabric shapes. The second layer or the bottom consisting of another fabric becomes the backing. Both layers are quilted by hands or machine using running stitch. This needlework is popular in American especially in Midwest (Illinois, Indiana, Iowa, Kansas, Michigan, Missouri, North Dakota, South Dakota, and Wisconsin). This skill is popular among women and girls activity just like found in this book. Mary, a seven year old girl character in this book, is able to sew on her nine-patch work creation. 2) Knitting Another kind of needlework that is popular among the women and girls is knitting. Knitting is a method by which yarn is manipulated to create a textile or fabric. The knitting is also recognized in this book. One of the characters naming Laura who is younger than Mary is also able to knit on the tiny mittens. f. Sugar Snow Tradition Sap of a special type of Maple tree can be made into sugar. It is called Maple Sugar. The best time for making this sugar is in the end of winter. Winter and snow can hold the leafing of the tree and make a longer run of sap. When there is a long run of sap, it means people can make maple sugar to last all the year. This is known as sugar snow. Even though, sugar snow is usually on winter but its preparation has been made since springtime. There are some utensils to prepare for sapping the maple tree such as wooden buckets and little troughs from cedar and white ash for those kinds of woods will not give bad taste to maple syrup. To sap the maple tree, there are some steps. First is to bore a hole in each maple tree and hammer the round end of little trough into the hole. Second is to set a cedar bucket on the ground under the flat end. Third is to gather the sap into barrel then ready to make into syrup. 
There are some procedures to make maple syrup. The first step is put the sap inside the iron kettle. Then setting a big bonfire under the kettle and boiling the sap. The fire must be hot enough to keep the sap boiling. Every few minutes the sap must be skimmed with a big long handled wooden ladle. When the sap has boiled down enough, fill the bucket with the syrup. After that, boiling the sap until it grains and cooling it in a saucer. Then ladles the thick syrup into the milk pans that are standing ready and it will turn to cakes of hard brown maple sugar. Belief Belief in this research finding is classified into religion and folklore. a. Religion The main religion in this book is Christian. They are very loyal Christians. It can be seen from their daily life. One of the significant religion lives in this story is about Sunday. For most Christians at that time, Sunday is observed as a day of worshiping and rest. They hold it as the Lord's Day and the day of Christ's resurrection. Sunday is started from Sunday morning up to Sunday evening. Previously Sunday is started from Saturday night up to Sunday evening. The people are not allowed to do all kinds of housework. All they have to do are worshipping such as going to church, reading bible at home, praying to God. b. Folklore Folklore can be described as traditional art, literature, knowledge, and practices that are passed on in large part through oral communication. Folklore expresses the ideas and values of a particular group. There is some folklore found in LHitBW such as: 1) Jack Frost Jack Frost is an imagined old man with pail and brush who painting leaves on autumn. Jack Frost is believed to be the person who turns the green leaves into red, yellow, orange, brown and finally fell down in autumn. Besides, Jack Frost is believed to be the one who makes the weather becomes so frozen on winter by touching his nose and foot steps on the window. 2) Santa Claus Santa Claus is known as Saint Nicholas or often is popular as Santa. He is a mythology figure dealing with Christmas so that's why he is well-known as Father of Christmas. Santa Claus is usually described as a portly, joyous, white bearded man with red coat and trousers and black leather belt and boots. He carries a bag full of gifts for children celebrating Christmas. He usually comes at the mid night of Christmas Eve through the chimney. It is told that Mary, Laura, and their cousins celebrating Christmas in their house, hung their stockings by the fireplace. 3) The moon is from green cheese. It is mentioned in the story that some people still said that the moon is from green cheese. This saying is not true because the moon is not really from green cheese. The green cheese means the unripe cheese. The saying is a metaphor which compares two unlike objects i.e. the moon and the cheese. The comparison makes sense actually. The saying compares the appearance of the moon which is round and rough because of some holes on its surface. That appearance looks like a surface of cheese. Social Value The social value found in LHitBW deals with some speaking manner, eating manner, and meeting manner. a. Speaking manner There are some manners dealing with speaking found in LHitBW such as no interrupting while someone is speaking, polite language usage for the Elders and thanking to other's giving. b. Eating manner No speaking while eating and no playing with food are manners dealing with eating manner found in LHitBW. c. 
Meeting manner
When meeting people for the first time, it is customary to greet them with "How do you do?"

CONCLUSION
LHitBW is one of the Little House series written by Laura Ingalls Wilder; it is the first book of the series, published in 1932. This book can be used to introduce culture to English learners, as it contains many interesting cultural elements that can increase learners' knowledge about the culture of the target language, in this case English. The cultural elements found are, first, those related to the geographical situation, covering animals, plants, and seasons. Second are those dealing with tradition, such as the eating tradition (meal times, food and beverage, and ways of cooking), the clothing tradition (types of clothes), the medical tradition, the hunting tradition, needlework, and the sugar snow tradition. Third are those relating to belief, such as religion and folklore. Fourth are those dealing with social values, such as speaking manners, eating manners, and manners for meeting people.
2019-06-19T13:21:18.811Z
2015-10-25T00:00:00.000
{ "year": 2015, "sha1": "7762de8d1ec1976ecf67a4fe760cabec34c50828", "oa_license": "CCBYNC", "oa_url": "http://arbitrer.fib.unand.ac.id/index.php/arbitrer/article/download/64/52", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "90b4a0466fdf48d8ae43fa4269ea171677acf4fd", "s2fieldsofstudy": [ "Linguistics" ], "extfieldsofstudy": [ "Art" ] }
265358116
pes2o/s2orc
v3-fos-license
Fiscal Policy Analysis Bekasi District 2021 Fiscal Year This study aims to analyze the fiscal policy implemented by the Bekasi Regency local government. The goal is to improve the financial structure by increasing regional financial capacity, which ultimately creates a positive fiscal gap as an indicator of policy success. This study used a qualitative literature review approach, where data was gathered through observations, literature study, and the examination of various secondary sources. The collected data was then analyzed using a descriptive model, which involves describing the data as it is without any alterations or modifications. The findings revealed that the Bekasi Regency Government has put in place various measures to achieve a positive fiscal gap. These include Extensification, which involves maximizing Regional Original Revenue (PAD), and Intensification, where the Tax Object Sale Value (NJOP) is adjusted to optimize the revenue from Fees for the Acquisition of Rights on Land/Building (BPHTB). INTRODUCTION Regional financial policy today has become a very important issue, especially in the era of regional autonomy, through law no.23 of 2014 concerning Regional Government implicitly emphasizes that regions are required to be able to create conditions that are conducive to creating a climate for development and public services in the regions that are adapted to the needs and conditions of their communities. Furthermore, regions are required to be able to innovate in order to increase the fiscal gap in the regions by utilizing the authority they have.The direction of this innovation is to strive to provide public services and facilities.through various development programs that can take advantage of developments in technology and science, giving rise to various innovative applications and/or adaptive programs at the regional level that make it easier for people to access public services, in addition to contributing to the region in terms of fiscal revenues from these public services. 
The philosophy of fiscal policy is based on Keynes' theory which was born as a reaction to the great depression that hit the American economy in the 1930s.Keynes criticized the opinion of classical economists who stated that the economy will always reach full employment so that every additional government spending will cause a decrease in private spending (crowding out) by the same amount or in other words every additional government spending will not change aggregate income.Keynes argued that a free market system would not be able to make adjustments towards full employment conditions.To achieve this condition, government intervention is needed in the form of various policies, one of which is fiscal and monetary policy.(Setiawan, 2018) The aim of this research is an effort to analyze how the implementation of fiscal policy in Bekasi Regency stimulates or increases regional financial capacity, regional financial management and regional financial supervision.This policy is expected to increase the contribution of Original Regional Income to the APBD.Based on Law Number 23 of 2014 concerning Regional Government article 285, regional income sources consist of Original Regional Income, Transfer Income, and other Legitimate Regional Income.Regional revenue management aims to optimize regional income sources in order to increase regional fiscal capacity in order to maximize regional government administration in providing services and welfare to the community.Regional income budget policies include how regions are able to manage Original Regional Income, Transfer Income and other Legitimate Regional Income to provide regional expenditure funds.And none other than that fiscal policy is a stimulus rather than increasing Original Regional Income (PAD) which has a big influence on the development of a region.This means that the availability of funds in the region is influenced by the extent of the region's ability to create sources of revenue in the region. Public policy James Anderson (1979) as quoted by Riant Nugroho (Nugroho. D, 2004) defines public policy as a relatively stable, purposive course of action followed by an actor or set of actors in dealing with a problem or matter of concern.This means that in a relatively stable period of time, a public policy is a deliberate action and is followed by an actor or group of actors to overcome problems or issues that are of great concern to these actors and must be addressed immediately, for example the regional financial condition which is experiencing a deficit and is felt to be very serious.If policy actors are concerned, action must be taken, one of which is by creating a policy to overcome the deficit problem. Thomas R. Dye (1981) provides a basic understanding of public policy as what the government does or does not do.This understanding was then developed and updated by scientists working in public policy as a refinement because if this meaning is applied, the scope of this study becomes very broad, apart from the study which only focuses on the State as the subject of study, (Subianto, 2020) Public policy itself consists of several process stages, as stated by Nugroho (Nugroho. D, 2004), namely: 1. Policy formulation 2. Policy implementation 3. Policy evaluation and additions 4. 
Policy revision, to create policies that are more adaptive to, or appropriate for, the conditions indicated by the evaluation results. In this regard, regional financial policy is an effort made by regional governments to overcome development and public service problems in the region through planning and budget allocation. Development and public services in the regions are the authority of the region, and budget allocations are needed to make them happen; the regional government therefore needs to make policies so that the fiscal gap in the region can be covered with the various sources of revenue that fall under regional authority.

Fiscal policy
Fiscal policy is an economic policy intended to improve economic conditions by changing government revenues and expenditures (Rahayu, 2014). Fiscal policy is a series of activities related to the effectiveness of the income and expenditure budget, which seeks to maximize financial capacity by accumulating sources of original revenue. In the regional context, fiscal policy means seeking regional income from the original regional revenue sector in the form of directed regional taxes and levies. This is very important for creating fiscal decentralization and regional financial independence, because one indicator of the success of regional government administration is the creation of regional financial independence. In this way, regions have more freedom in allocating budgets for development and public services and do not always depend on funding transfers from the central government. Policies are therefore needed that can increase the effectiveness of regional tax and levy collection, namely intensification and extensification. Intensification is increasing genuine regional revenue from all legitimate sources of regional revenue, which can be done through pick-up patterns, persuasion, or coercion, while extensification is exploring the potential of existing regions to become legitimate sources of revenue; this requires an innovative and creative attitude from the regional government (Ferizaldi, 2016). Measuring the performance of regional fiscal policy can use five variables, namely general allocation funds, routine expenditures, expenditures for transportation, taxes, and retributions (Sebayang, 2005). Furthermore, according to Suparmoko (2016), the objectives of decentralization policy are: 1) realizing justice between regional capabilities and rights; 2) increasing Original Regional Income (PAD) and reducing subsidies from the central government; and 3) encouraging regional development in accordance with the aspirations of each region. In this regard, fiscal decentralization can be realized by creating local policies that utilize existing authority according to its availability.
METHODS The research conducted in this study adopts a library research or literature study methodology, relying on diverse literature sources to gather research data.A qualitative approach is employed as the data generated manifests in the form of descriptive narratives or textual information.Library research, also known as literary research, entails an investigative process centered in the library or literature.In this particular study, the research process involves the exploration of studies that share similarities or relevance, as highlighted by Purwanto (2016).Library or literature studies are characterized by certain attributes: researchers engage directly with data, primarily sourced from existing literature rather than firsthand field observations; the information derived from the library serves as a secondary source rather than originating as primary data; and library data is not constrained by spatial or temporal limitations (Z, 2008). Moreover, this study employs two distinct data collection techniques: primary data and secondary data.The primary data source involves observational data, meticulously recorded and subsequently analyzed using literary studies.On the other hand, secondary data refers to information readily available, sourced from Regional Government Reports, Publications, and mass media.The data analysis and interpretation encompass systematic organization and retrieval of research findings, incorporating observations and other relevant elements.This process contributes to an enriched understanding of the research focus, allowing the researcher to compile, refine, condense, and present the findings effectively, as expounded by Tohirin (2012). RESULTS AND DISCUSSION Fiscal policy is basically a policy that is decided and managed by the Ministry of Finance and generally aims to manage and maintain the welfare of the money circulation sectors because the main point or most prominent thing about fiscal policy is taxes.In connection with research regarding Fiscal Policy Analysis in Bekasi Regency FY 2021, it is of course related to the form and type of fiscal policy taken and implemented by the Bekasi Regency Government itself so that it can creating a fiscal level as seen from its Original Regional Income (PAD).One form of fiscal policy taken by the Bekasi Regency Government in optimizing its Original Regional Income (PAD) is as follows: Viewed from an extension perspective, in optimizing Original Regional Income (PAD) the Bekasi Regency regional government is encouraging tax extension through digitalization of PAD which was prepared by the Regional Digitalization Acceleration and Expansion Team (TP2DD).This involves Bappeda which is also collaborating with the West Java Regional Police Traffic Directorate and the Metro Police which is expected to increase motor vehicle tax revenue, as well as supervision.The PAD target in 2022 is set at IDR 22.8 trillion, up from 2021, namely IDR 21 trillion.Then IDR 1.5 trillion is targeted to start from regional taxes, regional levies, wealth and others with an intensification and extensification process ( Pemerintah Kabupaten Bekasi, 2021). 
Apart from that, through a special policy of using local products through the "BEBELI'' policy where the Bekasi Regency Government implements a special policy for all State Civil Apparatus (ASN) to require the purchase of goods produced by local Bekasi Regency Micro, Small and Medium Enterprises (MSMEs) via the e-catalog application or Bekasi brave shop Dare to Buy (BEBELI).Bekasi Berani Beli (BEBELI) itself is an electronic shopping application to support the use of domestic micro, small and medium enterprise products.The launch of BEBELI is also a follow-up to the direction of Indonesian President Joko Widodo as stated in Presidential Instruction Number 2 of 2022 regarding accelerating the increase in the use of domestic products and products of micro businesses, small businesses and cooperatives in order to make the proud national movement made in Indonesia a success in the implementation of government procurement of goods and services.BEBELI has grouped sellers based on their respective sub-district regional clusters to facilitate product delivery.A total of 250-300 small business actors assisted by the Bekasi Regency Cooperatives and MSMEs Service have also stated that they are ready to join this online shop application. The implementation of purchasing MSME products will later be regulated in a Regent's Regulation (Perbub) as one of the regulations in reducing regional inflation rates.This is a new innovation policy for the Bekasi Government.In the future, not only government agencies, government agencies, the private sector, even all levels of society will also be required to make the BEBELI policy a success.This is one of the right steps in controlling the rate of regional inflation, and is a business opportunity in the economy of the people in Bekasi Regency and will grow the economy and enthusiasm of MSMEs because it involves ASN as well as other sectors (Dedy S, 2023). Next, look at the PolicyIntensification, where the Bekasi Regency Government (Pemkab) targets Regional Original Income (PAD) of IDR 2.7 trillion in 2023 and has a strategy to achieve the PAD target, mainly through intensification mechanisms and exploring existing potential, namely by optimizing Acquisition Fee revenues Land/Building Rights (BPHTB) by adjusting the Sales Value of Tax Objects (NJOP) to achieve the PAD target.This is very urgent considering the inconsistency of the applicable tariffs (Hariani, 2023). Furthermore, the Bekasi Regency Government will also maximize restaurant tax revenues from catering businesses.Currently there are still hundreds of companies that do not have a Regional Taxpayer Identification Number (NPWPD), so the potential for restaurant tax revenue from catering businesses is not yet maximized.Bapenda (Regional Revenue Agency) of Bekasi Regency which was specifically assigned to gather catering entrepreneurs. Then, optimize groundwater tax revenues by coordinating with the West Java Provincial Government.Because the determination of permits for the tax sector is the authority of the province, even though the payment scheme in each city/district is by invoking the provincial DPMPTSP (One Stop Investment and Integrated Services Service), including Samsat which also concerns parking tax and several matters relating to provincial authority.and center to synchronize. 
Simultaneously, the Bekasi Regency Government will explore the potential for advertising taxes.Because the acceptance of this sector is inversely proportional to the existence of advertising which is increasingly mushrooming in Bekasi Regency.Because many tax objects do not pay due to the fact that the permit period has expired, even though the activities are still ongoing, they are reluctant to extend the permit.The Bekasi Regency Government is also committed to continuing to maintain the investment climate and competitiveness by making licensing easier, including improving tax administration, improving regulations, and maintaining good regional conduciveness (Taufik D, 2022). Next throughDiversification, In order to increase the PAD of Bekasi Regency, the Regency Government itself has made a diversification policy for Oil and Gas Regional Owned Enterprises (BUMD), where the development of the Regional Owned Enterprises (BUMD) business is important so that it continues considering that the decreasing gas reserves at the Tambun refinery automatically have an impact decreased gas allocation from Pertamina.The Bina Bangun Wibawa Mukti Limited Liability Company (BBWM) itself makes a sizable contribution to Bekasi Regency's Original Regional Income (PAD).This input of Regional Original Income (PAD) is quite a good achievement.In conditions of continued decline in gas supply, the Regency Government Bekasi Also planning waste management innovations could be an additional option.For example, waste management at Burangkeng TPA where there is an additional five hectares of land and this could be a new business opportunity for Bina Bangun Wibawa Mukti (BBWM) (Bekasikab.Go.Id, 2022). Types of Bekasi Regency Fiscal Policy: The first is Functional Policy.This policy was taken to improve the quality of the economy at a macro level, with impacts that will only be visible in the long term.Bekasi Regency itself, in this case, issued a start-up development policy in Bekasi Regency, especially by Diskominfosantik in collaboration with the Lyrid Team, which is a company operating in the IT sector.This policy was issued for the economic recovery of the community, especially those who have MSMEs.The community will be empowered by participating in training called Bootcamp Startup Accelerator activities which are a collaboration between the Bekasi Regency Government and Correctio Jababeka.This activity was also carried out through Fablab Correctio Jababeka in collaboration with Diskominfosantik Bekasi Regency and Lyrid, a software company based in the United States. Second, namely deliberate/planned policy.This policy was taken by the Bekasi Regency Regional Government to deal with the Covid-19 pandemic problem yesterday by forming a team for the intensification and extensification of regional taxes and regional levies (PDRD) in order to optimize regional revenue potential to the maximum amidst the limited revenue potential due to Covid-19.Apart from that, the team does not only focus on PAD but also carries out the task of optimizing the receipt of revenue sharing funds for tax objects or subjects located in Bekasi Regency.The team's tasks are not only carried out by the revenue-generating regional work units (SKPD), but are also assisted by other SKPDs. 
Then there is expansionary fiscal policy, a policy taken by the government when the economy is weakening, carried out by increasing the spending budget and reducing or eliminating taxes for certain sectors. The function of expansionary fiscal policy is to increase purchasing power so that companies can continue to produce without laying off workers. One of the things the Bekasi Regency Government is doing is increasing people's purchasing power by encouraging three main sectors, namely the large industrial sector, small and medium industries (IKM/UKM), and the agricultural sector. For large industries, government policies such as relaxation, incentives, and infrastructure will be monitored so that they can run. For small and medium industries and SMEs, capital assistance and marketing facilitation will be provided. For the agricultural sector, apart from the assistance that has been provided so far, such as seeds, fertilizer, and other subsidies, sales of agricultural products through exports will also be facilitated. The policy implemented is a constructive breakthrough and is suitable for filling regional autonomy, especially regional fiscal decentralization, since the main objective of regional autonomy is to improve public services and advance the regional economy. Basically, the three main missions of implementing regional autonomy and fiscal decentralization are improving the quality and quantity of public services and community welfare, creating efficiency and effectiveness in regional resource management, and empowering and creating space for the community to participate in development (Mardiasmo, 2021). Furthermore, looking at several details of the Original Regional Income (PAD) of Bekasi Regency, the Original Regional Income for Fiscal Year 2021 was budgeted at IDR 6,021,823,091,630.00 with a realization of IDR 6,015,706,801,836.00 or 99.90%, a target shortfall of Rp 6,116,289,794.00. Next is Regional Tax Revenue, which includes 10 types of taxes, namely Hotel Tax, Restaurant Tax, Entertainment Tax, Advertisement Tax, Street Lighting Tax, Parking Tax, Ground Water Tax, Swallow's Nest Tax, Tax on Acquisition of Land and Building Rights, and Property Tax. This income was budgeted at Rp 2,065,328,229,205.00 with a realization of Rp 2,008,212,803,072.60 or 97.23%, a target shortfall of Rp 57,115,426,132.40. Regional levy results include 20 types of levies; the budgeted income from regional levies was IDR 167,329,690,000.00 with a realization of Rp 154,235,916,565.48 or 92.17%, a target shortfall of Rp 13,093,773,434.52. Regional levies are grouped into general service levies, business service levies, and certain licensing levies. Original Regional Income originating from Separated Regional Wealth Management Results is the share of profit from capital participation in regionally owned companies owned by Bekasi Regency, which consist of the Regional Drinking Water Company (PDAM) Tirta Bhagasasi and PT Bina Bangun Wibawa Mukti, as well as income from the profit share on capital participation in PT Bank Jabar. This income was budgeted at Rp 20,176,437,653.00 with a realization of Rp 18,729,447,485.00 or 92.83%, a target shortfall of Rp 1,446,990,168.00. In the 2021 Fiscal Year, Other PAD revenues were obtained from 9 types of income, namely current account services, interest income, regional compensation claims, income from fines for late work implementation, income from tax fines, income from retribution fines, income from refunds, income from regional public service agencies, and income from excess returns. This income was budgeted at Rp 299,673,834,154.00 with a realization of IDR 362,821,578,640.92 or 121.07%, an excess over the target of IDR 63,147,744,486.92 (Badan Pusat Statistik Kabupaten Bekasi, 2022).
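For readers who wish to reproduce the realization percentages and shortfalls quoted above, the underlying arithmetic is simple. The following is only an illustrative sketch (in Python, using the 2021 PAD totals quoted in the text) and is not part of the original report:

```python
# Minimal sketch of the budget-realization arithmetic used above.
# Figures are the 2021 Bekasi Regency PAD totals quoted in the text (in IDR).
budget = 6_021_823_091_630.00
realization = 6_015_706_801_836.00

realization_rate = realization / budget * 100        # percentage of the target achieved
shortfall = budget - realization                      # unmet portion of the target

print(f"Realization rate: {realization_rate:.2f}%")   # ~99.90%
print(f"Target shortfall: Rp {shortfall:,.2f}")       # ~Rp 6,116,289,794.00
```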
If we look at the targets and budget realization mentioned above, they can still be said to be realistic: the differences between targets and realization are still reasonable, and several items exceeded their targets. Because realization is related to regional economic conditions, fiscal decentralization will always move in parallel with regional economic conditions. In the context of regional autonomy, one of the factors that influences regional economic growth is fiscal decentralization. Most economists believe that fiscal decentralization can encourage economic growth, improve equality, and improve the quality of public services and community welfare, while others hold the opposite view (Saputra & Mahmudi, 2012).

CONCLUSION
The conclusion that can be drawn from this research is that the fiscal policy implemented by the region, especially Bekasi Regency, is directed at building a better financial structure through increasing regional financial capacity, regional financial management, and regional financial supervision. This policy is expected to increase the contribution of Original Regional Income. Judging from the form and type of fiscal policy taken by the Regency Government to optimize its original regional income, Bekasi Regency can be said to still exercise very little fiscal decentralization. The effort that must be made in the future by the Bekasi Regency Government is to continue to innovate in making more creative fiscal policies according to regional conditions and characteristics.
2023-11-23T16:25:21.894Z
2023-11-19T00:00:00.000
{ "year": 2023, "sha1": "de9a447edee94b2249fe86660724ed3b2399a6c1", "oa_license": "CCBYSA", "oa_url": "https://rcsdevelopment.org/index.php/aplikatif/article/download/256/129", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "d3a8cf0b88bcdff91672860c3de4f93240509f32", "s2fieldsofstudy": [ "Economics", "Political Science" ], "extfieldsofstudy": [] }
62823090
pes2o/s2orc
v3-fos-license
Catalytic Combustion of Low Concentration Methane over Catalysts Prepared from Co/Mg-Mn Layered Double Hydroxides

A series of Co/Mg-Mn mixed oxides were synthesized through thermal decomposition of layered double hydroxide (LDH) precursors. The resulting catalysts were then used for the catalytic combustion of methane. The experimental results revealed that the Co 4.5 Mg 1.5 Mn 2 LDO catalyst possessed the best performance, with T 90 = 485 °C. After the catalysts were analyzed via XRD, BET-BJH, SEM, H 2 -TPR, and XPS techniques, it was observed that the addition of cobalt significantly improved the redox ability of the catalysts, while a certain amount of magnesium was essential to guarantee the catalytic activity. The presence of Mg helped enhance the oxygen mobility and, meanwhile, improved the dispersion of the Co and Mn oxides, preventing surface area loss after calcination.

Introduction
Low concentration methane, such as coal mine methane, is usually emitted directly into the atmosphere or burned immediately due to its low calorific value, resulting in either a greenhouse effect or a huge waste of resources. Moreover, NO x can also be produced during thermal combustion. Therefore, it is necessary to develop a cost-effective technology for capturing and utilizing low concentration methane. So far, a number of studies have focused on the catalytic oxidation of methane, especially on noble metal catalysts [1][2][3][4]. Nevertheless, noble metal catalysts normally show some shortcomings for real application, such as volatility, high sintering rates, poisoning in the presence of disturbing compounds, and high price [5]. As such, mixed metal oxide catalysts have recently been attracting rising attention for methane catalytic combustion [6,7], and some reports showed that the activity of mixed metal oxide catalysts is not necessarily worse than that of noble metal catalysts for the catalytic oxidation of VOCs [8][9][10]. Layered double hydroxides (LDHs) are known as a series of anionic layered compounds whose chemical composition can be represented as [M II 1−x M III x (OH) 2 ] x+ (A n− ) x/n ·mH 2 O, where M II and M III are bivalent and trivalent metal cations, A n− is an n-valent anion, and the values of x usually range from 0.20 to 0.33. After thermal decomposition, LDHs could be potential materials for total oxidation catalysts due to their large surface area, high metal dispersion, small crystallite size, and stability against sintering [11]. Up to now, much effort has been dedicated to developing LDH-derived catalysts for the degradation of volatile organic compounds (VOCs) and for methane catalytic combustion [5,12]. In light of the above points, we synthesized a series of LDH-related catalysts, in which the transition metals Co and Mn were introduced as bivalent and trivalent metal cations, respectively, for methane catalytic combustion. The catalysts were then characterized by various methods to show the relationships between their chemophysical properties and their performances.
Experimental Section
2.1. Catalyst Preparation. The Co/Mg-Mn LDH precursors were prepared by a coprecipitation method at a constant pH value (10 ± 0.5). A mixed salt solution (100 mL) and a 2 M NaOH solution (100 mL) were added dropwise into 100 mL of 0.45 M Na 2 CO 3 solution simultaneously under vigorous mechanical stirring at 60 °C in a water bath for 1 h. The mixed salt solution had a total cation concentration of 2 M from Mg(NO 3 ) 2 ·6H 2 O, Co(NO 3 ) 2 ·6H 2 O, and Mn(NO 3 ) 2 with (Mg + Co)/Mn = 3.0. The addition of the alkaline solution and the pH value were controlled by a pH meter. The precipitates formed were aged in the mother liquor overnight at 60 °C under vigorous stirring and then filtered and thoroughly washed with distilled water to pH = 7.0. After that, the resulting solids (the LDH precursors, denoted as Co x Mg 6−x Mn 2 LDH) were dried at 100 °C for 12 h and then calcined at 600 °C for 4 h to derive the Co/Mg-Mn mixed oxide catalysts (denoted as Co x Mg 6−x Mn 2 LDO). Finally, the oxide catalysts were crushed and sieved to 40-60 mesh for the activity tests and characterizations.
Catalytic Activity Testing. The catalytic activity tests for methane were performed in a fixed bed within a steel tube reactor operated at atmospheric pressure. 0.5 g of catalyst diluted with an equal volume of silica sand was used during the test, with a gas mixture of CH 4 : O 2 : N 2 = 1.6 : 16 : 144 passing through the reactor at a total flow rate of around 160 mL/min. The gas flow rate was controlled by a mass flow meter. The internal diameter of the reactor is 8.20 mm and its length is 450 mm. Catalytic activity tests were carried out at a gas hourly space velocity (GHSV) of 25,000 h −1 , at temperatures ranging from 350 to 600 °C. The reactants and products were analyzed online by an Agilent GC equipped with a flame ionization detector (FID). Before each test, the catalysts were preheated at 500 °C for 30 min and then cooled to 350 °C to start the test. The methane conversion rate was calculated based on the integrated GC peak areas.
Catalyst Characterization. X-ray diffraction (XRD) patterns were recorded on a Rigaku D/Max RA diffractometer to identify the crystal structure of the samples, using monochromatic Cu Kα radiation at 2θ ranging from 5 to 80° with a scanning rate of 4°/min. The textural properties of the derived oxides were analyzed by N 2 adsorption/desorption at 77 K in a JW-BK132F static adsorption analyzer. Before each measurement, the catalyst was pretreated at 100 °C for 3 h in vacuum. The specific surface area was calculated with the BET equation, and the pore volume and pore size distribution were obtained by the BJH method. The micromorphology was examined by scanning electron microscopy (SEM: Ultra 55, Carl Zeiss AG, USA). A custom-made TCD setup was employed to run the temperature-programmed reduction (TPR) tests using 50 mg of catalyst. Prior to each test, a pretreatment was performed in 5% O 2 /He at 500 °C for 2 h. The sample was then cooled down to 100 °C and kept at this temperature for 30 min. After that, the TPR tests were carried out at a heating rate of 10 °C/min with 30 mL/min of 6% H 2 balanced with He flowing through. X-ray photoelectron spectroscopy with Al Kα X-ray radiation (hν = 1486.6 eV) operated at 150 W (Thermo ESCALAB 250Xi, USA) was used to investigate the surface atomic states of the samples. All binding energies obtained were corrected using the C 1s level at 284.8 eV as an internal standard.
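As a rough illustration of the conversion calculation mentioned above, the sketch below assumes that the methane conversion is obtained from the integrated CH 4 peak areas of the FID trace measured with the reactor bypassed (inlet) and with the reactor online (outlet); the peak areas used are hypothetical:

```python
# Minimal sketch of the CH4 conversion calculation from integrated GC peak areas,
# assuming the FID response is linear in CH4 and that an inlet (bypass) trace and an
# outlet trace are available for the same conditions.
def methane_conversion(area_inlet: float, area_outlet: float) -> float:
    """Return CH4 conversion in percent from integrated GC peak areas."""
    return (area_inlet - area_outlet) / area_inlet * 100.0

# Hypothetical peak areas for one temperature point
print(f"{methane_conversion(12500.0, 1250.0):.1f}% CH4 converted")  # -> 90.0%
```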
Results and Discussions
XRD patterns of all the Co x Mg 6−x Mn 2 LDH precursors and the calcined mixed oxides are shown in Figures 1 and 2, respectively. From Figure 1, a well-crystallized LDH phase (2θ = 11.5°, 23°, 34°) could be detected in all the samples except Co 6 Mn 2 LDH. Meanwhile, reflections corresponding to MnCO 3 (PDF No. 44-1472) also appeared in the former three samples [13]. However, the intensity for the Co 6 Mn 2 LDH sample was particularly low, and its crystalline phases could not be identified with the phase analysis software Jade5. A similar finding was reported by Kovanda et al. [14] in their study of Co- and Mn-containing layered double hydroxides. After calcination at 600 °C for 4 h, spinel-type mixed oxides were formed in all the samples, including Co 6 Mn 2 LDO. Such samples probably consist of one of, or a mixture of, MnCo 2 O 4 (PDF number: 23-1237), CoMn 2 O 4 (PDF number: 01-1126), Mg 6 MnO 8 (PDF number: 19-0766), Co 3 O 4 (PDF number: 43-1003), and MgCoMnO 4 (PDF number: 39-1157), considering that their peaks are almost superimposed. At low Co loading (Co 1.5 Mg 4.5 Mn 2 LDO), MnCo 2 O 4 could be the main phase, and with increasing Co content the peaks gradually shift to a higher angle range. For the sample Co 6 Mn 2 LDO, the phases obtained were mainly the Co 3 O 4 phase. Compared with the other three samples, it could also be found that the signal of the Mg-free sample (Co 6 Mn 2 LDO) became sharper and more intense, indicating that its primary particle size was probably larger than that of the others. For instance, the primary particle size calculated by the Scherrer equation increased from 8.1 nm for Co 4.5 Mg 1.5 Mn 2 LDO to 12.9 nm for Co 6 Mn 2 LDO.
From the activity tests (which will be discussed with Figure 6), we found that the activity of the Co 6 Mn 2 LDO sample was lower than that of the Mg-containing sample Co 4.5 Mg 1.5 Mn 2 LDO. To further investigate the reason, we looked for a clue in the SEM images of their Co x Mg 6−x Mn 2 LDH precursors. The results are shown in Figure 3. From Figures 3(a) and 3(c), the Co 4.5 Mg 1.5 Mn 2 LDH sample showed a flake-like morphology, and the size of the flakes was approximately 400-450 nm. Without the addition of magnesium, the sheet structure also appeared in the precursor, but the size and thickness grew larger. At the same time, many sintered particles could be observed in the image. This was probably attributable to aggregation of the primary particles, since the Mg-free sample should have a smaller primary particle size according to the XRD results in Figure 1, and aggregation of the particles would lead to easy sintering of the calcined oxides. This is consistent with the results of the LDO XRD (Figure 2).
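The primary particle sizes quoted above follow from the Scherrer equation, D = Kλ/(β cos θ). A minimal sketch of that calculation, assuming Cu Kα radiation (λ = 0.15406 nm), a shape factor K = 0.9, and the full width at half maximum β expressed in degrees 2θ (the example reflection position and width are hypothetical):

```python
import math

def scherrer_size(fwhm_2theta_deg: float, two_theta_deg: float,
                  wavelength_nm: float = 0.15406, k: float = 0.9) -> float:
    """Crystallite size (nm) from the Scherrer equation D = K*lambda/(beta*cos(theta))."""
    beta = math.radians(fwhm_2theta_deg)          # peak FWHM converted to radians
    theta = math.radians(two_theta_deg / 2.0)     # Bragg angle (half of 2theta)
    return k * wavelength_nm / (beta * math.cos(theta))

# Hypothetical spinel reflection near 2theta ~ 36.5 deg with 1.0 deg FWHM
print(f"{scherrer_size(1.0, 36.5):.1f} nm")       # ~8 nm, same order as the values above
```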
The specific areas of the Co Mg 6− Mn 2 LDH precursors and the derived LDO samples are presented in Table 1.It could be observed that the surface area of LDH increased from 49.09 m 2 /g to 130.88 m 2 /g, which was in accordance with the result of LDH-XRD (Figure 1) that the peaks of the samples were getting broader with the addition of Co, indicating the decreasing of the grain size.However, the Mg-free LDO sample got the greatest loss in surface area (from 130.88 to about 45 m 2 /g), together with the decreases of the pore diameter and volume.This fitted well with the result of LDO-XRD (see Figure 2) that the grain size would grow bigger without Mg being involved.According to the literatures [15][16][17], the alkaline earth could isolate transition metals effectively by strong interaction to reduce the agglomeration of the particles, thus reducing the loss of the specific area after calcination. H 2 -TPR (see Figure 4) showed the differences in reducibility of the prepared LDO samples caused by the increasing content of Co.Based on the literatures [18,19], the broad reduction peaks from 250 to 500 ∘ C were the results of overlapping peaks related to the reductions of Co/Mn oxides, which consists of the reduction process of Co III → Co II → Co 0 in Co 3 O 4 phase and Mn IV species reduction.With the increase of Co content, the peaks first shifted to higher temperature and then the peaks shifted obviously to the lower temperature and a shoulder peak appeared at 284 ∘ C, which could be assigned to the reduction of Co III → Co II [18], while, for the catalyst Co 6 Mn 2 LDO, the peaks shifted to higher temperature again and an additional peak at about 680 ∘ C became more evident.This high temperature peak might be attributed to the reduction of Co-Mn mixed oxide due to the sintering of catalyst in absence of Mg.The possible reason is that the existence of Mg could suppress the sintering and improve the dispersion of the active substance. Oxygen normally played a very important role in the decomposition of methane according to the catalytic mechanism [20].As such, the oxygen states on the catalysts herein were investigated by XPS characterization and the binding energies of O1s were given in Figure 5. Generally, there are three different types of oxygen in the catalysts with binding energy of O1s electrons, 529.0-530.0eV, 531.3-531.9eV, and 532.7-533.5 eV, which could be ascribed to lattice oxygen (O a ), surface adsorbed oxygen (O b ), and oxygen bonded in OH group (O c ), respectively [21][22][23].In case of our catalysts (see Figure 5), oxygen with binding energy of 529.5-530.3eV was the most intensive form and could be assigned to the lattice oxygen [22].And oxygen species with binding energy of 531.0-531.5 eV can be attributed to the surface adsorbed oxygen (O b ) which could be considered the most active oxygen [24].The percentage of O b in total surface oxygen is listed in Table 2, from which we could clearly notice the tendency that O b decreased with the introduction of Co, especially for the Mg-free one.In other words, the magnesium was necessary for keeping a certain amount of surface active oxygen possibly due to its well electronic donation property as a typical alkali earth metal.At last, oxygen with binding energy of 532.1-532.9eV was originated from carbonated species or molecular absorbed water [25]. 
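The O b percentages reported in Table 2 amount to the fitted O b peak area divided by the total fitted O1s area. A minimal sketch of that bookkeeping, assuming the three deconvoluted component areas (O a , O b , O c ) are already available from the XPS peak fitting (the areas below are hypothetical):

```python
# Minimal sketch of the surface-oxygen accounting behind Table 2, assuming the O1s
# spectrum has been deconvoluted into lattice oxygen (Oa), surface adsorbed oxygen (Ob)
# and OH/carbonate/adsorbed-water oxygen (Oc) with known fitted peak areas.
def ob_fraction(area_oa: float, area_ob: float, area_oc: float) -> float:
    """Return Ob as a percentage of the total fitted O1s area."""
    total = area_oa + area_ob + area_oc
    return area_ob / total * 100.0

# Hypothetical fitted areas (arbitrary units) for one catalyst
print(f"Ob = {ob_fraction(7200.0, 2100.0, 700.0):.1f}% of surface oxygen")  # ~21.0%
```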
The catalytic activities of the Co Mg 6− Mn 2 LDO samples derived from LDH precursors in methane total oxidation tested within the temperature range of 330-600 ∘ C are shown in Figure 6, where the methane conversion is plotted as a function of the reaction temperature.As shown in Figure 6, it could be clearly observed that the 90 (defined as 90% methane conversion) went down with the increase of Co content at first and then increased for the Co 6 Mn 2 LDO sample.The Co 4.5 Mg 1.5 Mn 2 LDO catalyst showed the best performance with the 90 = 485 ∘ C.This result was in agreement with our previous H 2 -TPR characterization where Co 4.5 Mg 1.5 Mn 2 LDO exhibits the best low-temperature reducibility.As we know, active oxygen was a critical factor for the catalytic oxidation reaction.The various valences of transition metal Co could supply abundant active oxygen in the catalysts, so the activity improved with the increase of Co at first, but when Mg is totally substituted by Co, the activity decreases because Mg could act as an electronic promoter in the catalysts to guarantee the oxygen mobility which was of great importance for the catalytic oxidation reactions.The enhanced oxygen mobility after Ca-doping was reported in the literatures, which might indirectly support our assumption [24,26]. Conclusions A series of Co/Mg-Mn mixed oxide based catalysts were prepared via the calcination of layered double hydroxides with (Co+Mg)/Mn ratio at 3 and Co/Mg ratio from 1.5/4.5 to 6/0, respectively.And then these catalysts were characterized by various methods and tested for their activities for methane catalytic combustion.The typical LDH lamellar structures were identified by X-ray diffraction except for the last Co 6 Mn 2 LDH sample.But spinel structures were formed in all samples after calcination.The catalytic activities of the prepared catalysts expressed as the methane conversion decreased in the following sequence: Co 4.5 Mg 1.5 Mn 2 LDO > Co 6 Mn 2 LDO > Co 3 Mg 3 Mn 2 LDO > Co 1.5 Mg 4.5 Mn 2 LDO.Though the addition of Co would increase the redox ability of the catalyst, a certain amount of magnesium was necessary for high activity.Magnesium oxide would be beneficial to the dispersion of the cobalt and manganese mixed oxides.Furthermore, the existence of Mg could reduce the surface area loss after calcination and increase the surface active oxygen content. Table 1 : Porous properties of the Co Mg 6− Mn 2 LDH (a) and LDO (b). Table 2 : The amount of the surface elements of Co Mg 6− Mn 2 LDO.
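Because conversion is only measured at discrete temperatures, a T 90 value such as 485 °C is normally read off the light-off curve by interpolating between the two measured points that bracket 90% conversion. A minimal sketch of that step, using linear interpolation and purely hypothetical light-off data:

```python
def t90(temps_c, conversions_pct, target=90.0):
    """Linearly interpolate the temperature at which conversion reaches `target` percent."""
    for (t1, c1), (t2, c2) in zip(zip(temps_c, conversions_pct),
                                  zip(temps_c[1:], conversions_pct[1:])):
        if c1 <= target <= c2:
            return t1 + (target - c1) * (t2 - t1) / (c2 - c1)
    raise ValueError("target conversion not reached in the measured range")

# Hypothetical light-off data for one catalyst
temps = [350, 400, 450, 500, 550]
conv  = [5.0, 22.0, 61.0, 93.0, 99.0]
print(f"T90 ~ {t90(temps, conv):.0f} C")
```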
2018-12-24T17:21:07.842Z
2014-07-01T00:00:00.000
{ "year": 2014, "sha1": "f93175c4e9e3860502768acef7fe3c795e6ab421", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/jchem/2014/751756.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "f93175c4e9e3860502768acef7fe3c795e6ab421", "s2fieldsofstudy": [ "Chemistry", "Materials Science" ], "extfieldsofstudy": [ "Chemistry" ] }
8388985
pes2o/s2orc
v3-fos-license
Global copy number analyses by next generation sequencing provide insight into pig genome variation Background Copy number variations (CNVs) confer significant effects on genetic innovation and phenotypic variation. Previous CNV studies in swine seldom focused on in-depth characterization of global CNVs. Results Using whole-genome assembly comparison (WGAC) and whole-genome shotgun sequence detection (WSSD) approaches by next generation sequencing (NGS), we probed formation signatures of both segmental duplications (SDs) and individualized CNVs in an integrated fashion, building the finest resolution CNV and SD maps of pigs so far. We obtained copy number estimates of all protein-coding genes with copy number variation carried by individuals, and further confirmed two genes with high copy numbers in Meishan pigs through an enlarged population. We determined genome-wide CNV hotspots, which were significantly enriched in SD regions, suggesting evolution of CNV hotspots may be affected by ancestral SDs. Through systematically enrichment analyses based on simulations and bioinformatics analyses, we revealed CNV-related genes undergo a different selective constraint from those CNV-unrelated regions, and CNVs may be associated with or affect pig health and production performance under recent selection. Conclusions Our studies lay out one way for characterization of CNVs in the pig genome, provide insight into the pig genome variation and prompt CNV mechanisms studies when using pigs as biomedical models for human diseases. Electronic supplementary material The online version of this article (doi:10.1186/1471-2164-15-593) contains supplementary material, which is available to authorized users. Background Copy number variations (CNVs) distribute ubiquitously in the human genome [1,2] and belong to the spectrum of genetic variation ranging from 50 base pairs to larger structural events [3]. As an important form of genetic variation complementary to single-nucleotide polymorphisms (SNPs), CNVs have attracted extensive attentions and unprecedented successes have been achieved in detection of CNVs as well as segmental duplications (SDs) in the human genome [4][5][6][7]. Multiple studies indicated that CNVs have been associated with a variety of human diseases [8][9][10][11][12]. Together with SNPs, CNVs are becoming recognized as an important source of genetic variance [13] and may account for some of the missing heritability for complex traits [14]. Benefitting from the achievements of pioneering CNV studies in humans, substantial progress has been made in the discovery and characterization of CNVs in livestock genomes. In the past few years, a significant amount of research on genome-wide CNV identification was conducted in various domestic animal species, including cattle [15,16], dog [17][18][19], sheep [20], goat [21], chicken [22], turkey [23] and pig [24,25]. A suite of genes with copy number alteration were exploited contributing to variation of either Mendelian phenotypes [26][27][28] or complex production traits [29]. Based on these findings, it was expected that CNV studies could advance the studies of genetic diversity, evolution, functional genomics as well as genome assisted prediction. 
However, a potential issue with majority of previous CNV studies in livestock species displayed as a lack of power and accuracy for CNV identification due to the technical limitations of two most frequently used detection platforms, i.e., SNP chips and array comparative genome hybridization (aCGH) [3,6,15,30]. This obviously highlights the need to pursue more powerful and sensitive tools for construction of high resolution CNV map. To achieve this goal, Bickhart et al. [15] performed CNV detections in individual cattle genomes using the next-generation sequencing (NGS) technique combined with mrFAST/ mrsFAST and whole-genome shotgun sequence detection (WSSD) analytical methods [5,6,31] based on the findings of SD detection [32]. Their work demonstrated that the NGS has superiority over SNP chip and aCGH in CNV deteciton in livestock genomes. Besides the platforms employed in CNV detection, the other crucial factor determining the abundance of detected CNV is the experimental population investigated. Findings from several studies [17,24,33] indicated that a considerable proportion of CNVs likely segregate among distinct breeds, such that a sufficiently high-resolution CNV map would require the survey of multiple breeds/ populations [34]. In the past few years, much effort has been taken to detect CNVs in pig genome using three main genome-wide CNV identification technologies, i.e., aCGH [35][36][37], SNP genotyping array [24,25,[38][39][40] and genome re-sequencing based on the next generation sequencing [41][42][43]. However, compared to humans and other model organisms, relatively few studies have investigated CNVs in pigs and little is known about how CNVs contribute to normal phenotypic variation and to disease susceptibility in this species. Since CNVs play a vital role in genomic studies, and pigs act as one of the most economically important livestock worldwide as well as popular model for various human diseases [44], it is an imperative need to develop a comprehensive, more accurate and higher resolution porcine CNV map and in-depth characterize CNVs across pig genomes for follow-up CNV functional investigation. To achieve the aforementioned goal, we performed the current study to systematically exploit features of SDs and CNVs present in the pig genome using high throughput NGS data of diverse pig breeds in the framework of the pig draft genome sequence (Sscrofa10.2) [45]. We designed the studies considering the following two aspects: (1) CNVs mostly occurred with different probabilities among different populations; and (2) A number of Chinese local breeds conferred much larger variability and higher average heterozygosity than European breeds [46]. Beyond the definition of CNVs, some CNVs may be fixed in the population and (if they are in state of gain) can also be detected across the genome as SDs [47] which are generally defined as >1 kb stretches of duplicated DNA with 90% or higher sequence identity [48]. It was also believed that an SD-rich region would generate more CNVs than other regions [48], showing a close association with CNVs near or around it. Considering the potential link between SDs and CNVs across the genome, we employed the NGS data of genomes of experimental individuals as well as the reference genome of Duroc 2-14 to construct individualized SD and CNV maps and in-depth characterize global CNVs via the commonly used analytical approaches, i.e., whole-genome assembly comparison (WGAC) and wholegenome shotgun sequence detection (WSSD) [6,7,49]. 
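For orientation, the WGAC step referred to above boils down to retaining whole-genome self-alignments that pass the usual segmental-duplication cutoffs (>1 kb aligned length, ≥90% sequence identity). The short Python sketch below shows such a filter; the record field names are hypothetical and this illustrates only the criterion, not the WGAC pipeline itself.

```python
def wgac_filter(alignments, min_len=1_000, min_identity=0.90):
    """Keep pairwise self-alignments that qualify as segmental duplications.

    `alignments` is an iterable of dicts with (hypothetical) keys
    'aligned_length' (bp) and 'identity' (fraction of identical bases);
    trivial alignments of a region to itself are assumed to be excluded upstream.
    """
    return [a for a in alignments
            if a["aligned_length"] > min_len and a["identity"] >= min_identity]

# Toy example: only the first record passes the >1 kb, >=90% identity cut.
records = [{"aligned_length": 5_200, "identity": 0.93},
           {"aligned_length": 800, "identity": 0.99},
           {"aligned_length": 3_000, "identity": 0.85}]
print(len(wgac_filter(records)))  # -> 1
```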
To pursue a reliable CNV map, in the present study, we employed individual genomes from multiple populations, including all six types of Chinese indigenous breeds, one Asian wild sow, as well as three commercial breeds. Additionally, we have improved the original read depth (RD) method in WSSD analyses through adjusting the bias in CNV calling due to fragmented sequences in the process of hard masking of reference genome. This enhanced the detection power, lowered the false positive findings and increased copy number estimation accuracy, especially for NGS data with long sequencing reads. Our work is of importance to researchers working with swine genomics and would lay a solid foundation for future CNV functional researches in the pig genome. Sequencing data set statistics Based on Illumina HiSeq 2000, we obtained NGS data of 13 pig individuals, which were selected to cover a broad representation of pig diversity of both modern commercial pigs and Chinese domestic and wild pigs. The sequencing data set statistics have also been summarized in Table 1. The depth of coverage for each animal varied from 10.4× to 17.4×, which is sufficient for genome-wide CNV detection using RD method according to the previous studies [5,6,15]. SD map construction for the reference genome Using WGAC, we initially detected a total of 902,068 pairwise alignments with an aligned length of >1 kb and identity of >90%, which showed an excess of SD contents compared to previous results in other species [32,49,50]. After removal of high-copy repeats, the filtered detections consisted of 28,509 pairwise alignments, of which 10,128 (35.5%) involved unplaced scaffolds (presented in Additional file 1: Table S1). Furthermore, 77.9% (22,214 of 28,509) of these alignments had an identity of >99% that may contain numerous artificial duplications due to local assembly errors [49]. The remaining alignments (6,295 of 28,509) had identities varying from 90% to 99%. The distribution profile of the identities for these 6,295 alignments was presented in Additional file 2: Figure S1, which showed an approximately uniform distribution within the interval of 0.90-0.98 while exhibiting a sharp increase in alignment frequency within the interval of 0.98-0.99. We further merged all of 28,509 alignments into 43,071 non-overlapping sequence intervals. The total length of these intervals reached 542.6 Mb, amounting to 19.3% of the reference genome, which indicated an excessive content of duplicated bases. Specially, 8,620 of 43,071 intervals were mapped to unplaced scaffolds, accounting for 121.0 Mb (57.1% of all the unplaced scaffolds). Among the 3,882 unplaced scaffolds >1 kb in size, 2,396 (61.7%) contained SD and 1,478 (38.1%) had >70% of duplicated bases (Additional file 2: Figure S2). The high content of SD in unplaced scaffolds was considered to be related to the difficulty in placing the scaffolds into the assembly [49]. In WSSD analyses, a total of 1,714 unique intervals (67.3 Mb) were predicted as listed in Additional file 1: Table S2. Similar to the strategy of Bailey et al. [7], we further filtered the WGAC alignments of ≥94% identity with SD calls by WSSD to remove artifactual duplications. After filtering, the final WGAC dataset consisted of 5,534 pairwise alignments (Additional file 1: Table S3), out of which 131 were mapped to unplaced scaffolds, and five were mapped to pig mitochondrion. 
Of the 20 chromosomes (1-18, X and Y), 4,529 of 5,398 (83.9%) pairwise alignments were intrachromosomal and most pairwise alignments were within the distance of 1 Mb between each other ( Figure 1). The profile of the SD map with WGAC is presented in Figure 2 and the features of SDs across different chromosomes are also detailed in Table 2, which is similar to the duplication pattern of mouse [51], dog (22) and cattle [7,18,32,51] while quite different from the interspersed segmental duplication pattern that predominates in human [7,18,32,51]. Previous studies (8,47) suggested that abundant interspersed segmental duplications may be specific for human and great apes genomes and play a vital role during the evolution of their gene families. The final pig SD database was constructed through integrating low-identity WGAC (<94%), filtered highidentity WGAC (≥94%) and the WSSD estimates. Overlapping segments by either WGAC or WSSD were simply merged into one single SD, the endpoints of which are outermost bases of the overlapping segments. Excluding unplaced scaffolds and mitochondrion, the pig SD database contained 2,860 intervals which totaled 73.5 Mb in size and 2.8% of all the chromosomes (1-18, X, Y) (Additional file 1: Table S4). The proportion of duplicated bases varied from 1.2% to 6.9% across different chromosomes as showed in Additional file 2: Figure S3. Compared to previous studies on other species [7,18,32,51], the estimates of pig SD are relatively conservative. One possible reason may be due to exclusion of the unplaced scaffolds in our WSSD analysis. Individualized CNV discovery Using our improved strategy, a total number of 13,517 segmental duplication/deletion calls were predicted from all the 13 individuals after artifact removal. The number of CNV events varied across different pig individuals, ranging from 870 (Yorkshire) to 1,311 (Duroc) with an average of 1,040 per individual (see Table 3). The overall profile of these identified segmental duplications/ deletions across the genome for each individual is illustrated in Additional file 2: Figure S4, as well as detailed in Additional file 3: Table S5. Accordingly, all detected CNV segments were further merged into 3,131 unique CNVRs across all experimental animal genomes following the criteria that the union of overlapping CNVs across individuals are considered as a CNVR [4]. Concerning copy number status, the numbers of gain, loss and both events (loss and gain within the same region) were 1702 (54.36%), 1366 (43.63%) and 63 (2.01%), respectively. Gain events were more common than loss events in CNVRs, and had slightly larger sizes than losses on average (36.15 kb vs. 23.99 kb). The CNVRs totaled 102.8 Mb in length with an average of 32.8 kb, amounting to 4.0% of the 20 chromosomes based on the porcine genome (Sscrofa 10.2). The distribution and the status of these identified CNVRs are plotted in Additional file 2: Figure S4, and a full list of CNVRs and corresponding features are provided in Additional file 3: Table S6. We further summarized the numbers and the lengths of CNVRs on different chromosomes in Additional file 3: Table S7, which illustrated nonuniform patterns across the genome. This is consistent with previous reports on heterogeneous distributions of CNVs in human and other species [4,15]. Figure 3 demonstrates the spectrum of sizes of all detected CNVRs across the genome. 
It shows that most CNVRs fell into the interval between 10 kb and 20 kb, and the frequency of CNVRs tends to decrease as the length increases. It is notable that in our RD analyses, CNVs were called using the criterion that at least 6 of 7 sequential long sliding windows showed RD values significantly deviating from the RD average; thus, CNVs >10 kb in length were kept in the final dataset. This indicates that our RD analyses are prone to detection of large structural variation events, and a significant amount of variation <10 kb in length would be precluded from the final findings. This filtering process is a routine strategy in recent similar studies [5,6,15] to ensure highly confident positive findings in RD detection. We investigated further to see if potential population/breed-specific CNVs exist. Specifically, of the 3,131 total CNVRs, 1,679 (53.6%) were identified in only a single breed/population, confirming that segregating CNVs exist across various breeds. Additionally, out of the 3,131 CNVRs, 612 (19.5%) were called only in the three modern commercial breeds, while 1,513 CNVRs (48.3%) were detected only in the nine Chinese indigenous pigs and the wild sow. These potential population/breed-specific CNVRs can be considered good candidates for determining breed-specific characteristics, although it is necessary to confirm the phenotypic effects of these CNVs using more experimental samples. On the other hand, we scanned all CNVRs and found only nine of them (4 duplications, 5 deletions) ubiquitously existing in the same state among the 13 animals. Apart from these nine potentially fixed SDs/deletions, the states of the other SDs/deletions are variable across all 13 individuals. This clearly demonstrates that CNVs are widely present in genomes across different populations/breeds. We also compared the lengths as well as the numbers of SDs/deletions identified between each pair of individuals; the results are given in Additional file 3: Tables S8 and S9. Quality assessment of CNVs by using aCGH data and qPCR Using two complementary methods, aCGH and qPCR, we performed experimental validation to confirm individual copy number variants. One custom-designed 2.1 M aCGH (Roche-NimbleGen) based on the Sscrofa10.2 porcine assembly was used to assess the CNVs called by RD. In the aCGH hybridizations, the individual D4 (Duroc) was used as the reference, while the other 12 individuals served as the test samples. We employed a method initially proposed by Alkan et al. [6] to assess the RD-called CNVs with aCGH data using the individual D4 (Duroc) as the reference sample. Overall, the Pearson's correlation coefficient between the two variables, defined as the log2(copy number ratio) value and the mean of probe log2 ratios, varied from 50.0% (C3) to 80.9% (R2) across the test animals, with an average of 62.5% (Additional file 4: Table S10). The degree of consistency of the quality assessment herein is similar to that reported in human and cattle [6,15]. Additionally, we found that the correlation coefficient for the CNV validation is highly dependent on the copy number difference of CNV intervals between the reference sample and the test sample, i.e., the smaller the copy number difference, the lower the correlation coefficient. The trend of this dependence is also clearly exemplified in Figure 4.
This may be because the aCGH data is not sensitive to detect small copy number difference between test sample and reference sample due to the impact of noise signals, especially in highly duplicated regions. In the qPCR confirmation, based on the copy numbers of every individual predicted by RD and qPCR method, we systematically assessed performance of the RD-called CNVs through three evaluation criteria in the process of validation, including the overall agreement rate of RD with qPCR results, the prediction power of RD and the positive prediction rate of RD. All the primers used and qPCR results are listed in Additional file 4: Table S11 and S12. Overall, the agreement rate, detection power and the positive prediction rate for the RD validation are 74.9%, 71.2% and 95.1%, respectively. The result demonstrated that qPCR experiments agreed well with the prediction by RD method. The discrepancies between the qPCR and results identified by RD method may be caused by potential SNPs and small indels, which influence the hybridization of the qPCR primers in some individuals, resulting in unstable quantification values or lowering primer efficiency. Additionally, we performed qPCR validation for the CNV findings based on the original detection strategy within the same regions for comparing with those based on our improved strategy. The qPCR validation results showed that the corresponding agreement rate, detection power and the positive prediction value were 68.7%, 63.1% and 94.6%, respectively. The comparison between the two different CNV calling strategies clearly showed the credible evidences on the advantage of the improved strategy proposed herein over the original. Comparison with previous studies We also compared CNVRs in this study with previous pig CNV studies [24,25,35,36,39,41,42]. After merging the results of recent reports, a total of 849 out of 3,131 CNVRs (27.75%) with the length of 33.02 Mb in our study overlapped with those previously reported (see Table 4). This indicates about one-third of CNVRs identified in our study was validated by previous studies, and most are firstly detected herein. Besides different algorithms for CNV calling, a difference between these NGS data-based CNV studies and the current study lies in that merely the current study employed SD information of the reference genome in the process of CNV detections, such that the short-read artifacts were removed from the detections in current study. Additionally, compared with the study by Rubin et al. [42], the different point is that the current study is based on individualized sequencing while that of Rubin et al. is based on sequencing of pooled samples. As a consequence the current study has a better power to detect CNVs with rare frequency, while the study of Rubin et al. is prone to find common CNVs. Association of CNVRs with SD and other genomic features It has been reported that CNVs may be facilitated by ancestral SDs through the occurrence of non-allelic homologous recombination (NAHR) [52], showing enrichment around ancestral SDs. To further confirm if the similar CNV formation mechanism occurs in the swine genome, we picked out SDs with <95% identity (Additional file 1: Table S3) that was postulated as the ancestral SDs that happened at earliest~5 million years ago when Sus scrofa just emerged in South East Asia [45] according to the traditional sequence divergence rate of 2% per million years [53]. 
These putative ancestral SDs were then merged into non-overlapping regions that were used in the enrichment analysis. Simulation results clearly demonstrated strong statistical evidence (13.9-fold enrichment; P < 0.001) according to the empirical distribution, indicating that the CNVRs are significantly associated with ancestral SD regions of the reference genome. Furthermore, we also tested the correlation between CNV hotspots and ancestral SDs. Accordingly, we picked out 659 regions as CNV hotspots from the 3,131 putative CNV regions (CNVRs) using the criteria that at least two of the three commercial pigs and at least two of the ten Chinese pigs should be detected as having a duplication/deletion within the CNVR (Additional file 4: Table S13). The simulation tests showed that 1,313 ancestral SDs overlapped with CNV hotspots, compared with only 41 expected at random (32.0-fold; P < 0.001). The 32.0-fold SD enrichment for CNV hotspots was much larger than the 13.9-fold enrichment for all CNVRs, implying a particular effect of ancestral SDs on the evolution of CNV hotspots [52]. In addition, we explored whether CNV breakpoints were enriched in GC-rich regions, which are likely to show a high rate of homologous recombination [54]. Based on the criteria of Berglund et al. [55], the breakpoints were defined as 2-kb segments covering the CNVR boundaries. Accordingly, we found a significantly higher GC content at these locations (44.0%; P < 1.0E-6) than in the genomic background (41.6%). As reported by Berglund et al. [55], a GC-peak can be declared when a 500-bp sliding window centered in a 10-kb background window shows a 1.5-fold increase in GC content; using this definition, we searched for GC-peaks across the pig genome. After performing a randomization test, we found a 1.7-fold enrichment of GC-peaks at CNV breakpoints (P < 1.0E-6). Together with previous reports in dogs [55], the findings herein further confirm the strong association between CNVs and GC-peaks. However, the proportion of breakpoints within 1 kb of a GC-peak reached only 3.1% in the present study, which is mainly due to the sparse distribution of GC-peaks across the pig genome (4.6 per Mb on average). This hints at differences in CNV formation mechanisms among distinct species, and GC-peaks may be just one of several potential formation mechanisms of pig CNVs. We further overlapped the identified CNVRs with reported pig QTLs (Table S14) and human disease gene orthologs (Additional file 5: Table S15), providing evidence that CNVs may be associated with or affect animal health and production traits under recent selection. Since some QTLs have overly large confidence intervals, we focused on the 3,789 QTLs with confidence intervals of less than 5 Mb. Out of these 3,789 QTLs, 1,077 (28.4%) overlapped with the CNVRs identified in this study; these QTLs are involved in a wide range of traits, such as growth, meat quality, reproduction, immune capacity and disease resistance. For the human disease gene orthologs, we found that 102 CNVRs identified in this study overlapped 210 genes associated with human diseases, such as stiff skin syndrome, leukemia, polycythemia vera, autism, and complement factor H deficiency. This demonstrates that, in accordance with previous studies, CNVs play an important role in phenotypic variation and are often related to disease susceptibility [9,56].
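The fold enrichments and empirical P-values quoted in this passage come from randomization tests in which the identified CNVRs are re-placed at random genomic positions and the overlap with the feature set (e.g. ancestral SDs) is recounted over many replicates, as described in the Methods. The Python sketch below is a schematic single-chromosome version of such a test, with overlaps among the randomly placed regions ignored for brevity; it illustrates the logic rather than reproducing the authors' pipeline.

```python
import random

def overlap_count(regions_a, regions_b):
    """Count intervals in regions_b that overlap any interval in regions_a."""
    return sum(any(s < e2 and s2 < e for s, e in regions_a) for s2, e2 in regions_b)

def enrichment_test(cnvrs, features, genome_length, n_perm=10_000, seed=1):
    """Fold enrichment and empirical P for CNVR/feature overlap via random placement of CNVRs."""
    rng = random.Random(seed)
    observed = overlap_count(cnvrs, features)
    null = []
    for _ in range(n_perm):
        shuffled = []
        for start, end in cnvrs:
            length = end - start
            s = rng.randrange(genome_length - length)
            shuffled.append((s, s + length))   # overlaps between shuffled regions ignored here
        null.append(overlap_count(shuffled, features))
    mean_null = sum(null) / n_perm
    p_value = sum(n >= observed for n in null) / n_perm
    return observed / max(mean_null, 1e-9), p_value
```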
Out of the 23,641 porcine genes locating in the 20 chromosomes, a total of 3,644 porcine genes (Additional file 6: Table S16) were completely or partially overlapped with CNVRs, including 2,773 protein-coding genes, 821 pseudo genes, 3 tRNA genes, 17 miscRNA genes and 30 genes with other types. It is notable that these genes are distributed merely in 1,820 CNVRs (58.1%) of all identified CNVRs, i.e., the remaining 41.9% CNVRs do not contain any annotated genes. The distribution of genes among CNVRs from the present studies is similar with those in other studies [4,15,25]. To test if the genes are enriched in these CNVRs, an empirical distribution of genes among CNVRs were constructed through 10,000 simulations. Consequentially, we found that the genes trended to enrich within the CNVRs (1.8-fold enrichment; P < 0.001), especially for the proteincoding genes (1.6-fold enrichment; P < 0.001), reflecting that porcine CNVs occurred in gene-rich regions in the genome. Genomic effects of CNVs In order to provide insight into the functional enrichment of the CNVs, Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway analyses were performed for the genes in CNVRs with the DAVID bioinformatics resources. The GO and pathway analyses revealed that there were 12 significant terms (Additional file 6: Table S18) and 8 significant pathways after Benjamini correction. Our results are consistent with previous studies in other mammals that CNVRs are particularly enriched in genes related to immunity, sensory perception of the environment (e.g. smell, sight, taste), response to external stimuli and neurodevelopmental processes [57]. Copy number variable genes in the CNVRs According to the copy windows, we estimated the CNs for all genes in the CNVRs identified by RD. In total, there were 2,223 genes assigned copy numbers (Additional file 6: Table S16). The results showed that some of genes with high copy numbers belong to some multiple-member gene families, such as olfactory receptor (OR), protein FAM22G, UDP-glucuronosyltransferase, ATP-binding cassette subfamily G, butyrophilin subfamily 1 member A1, leukocyte immunoglobulin-like receptor subfamily, melanoma-associated antigen, tumor necrosis factor receptor superfamily member, and cytochrome P450. This is consistent with previous studies that high copy number genes often belong to multiple-member gene families [5,15]. Excepting the above mentioned copy number variable gene families and those uncharacterized genes, there were 123 protein-coding genes with copy number range more than 2.0 among the individuals investigated (Additional file 6: Table S19). Further probing the potential functions of these 123 copy number variable genes, we found a suite of genes related to the immune response, meat quality, sexual and reproduction ability, nutrients metabolism and coat color, which representing a valuable resource for future studies on the relation between CNV genes and phenotype variation. In particular, the KIT gene is the most obvious copy number variable gene with functional significance, which has been confirmed that gene duplication and a splice mutation leading the skipping of exon 17 is responsible for the dominant white phenotype [58,59]. 
In our studies, we estimated the copy number of KIT and obtained values of 4.50 and 3.81 in the solid white breeds Yorkshire and Landrace, respectively, while about two copies (ranging from 1.71 to 1.97) were found in all other pigs with colored phenotypes (see Additional file 2: Figure S5 for the read depth of all samples within the region). This is consistent with the causative relation between KIT duplication and dominant white coat color identified before [58,59]. In particular, no CNVs were found in the KIT gene of the Rongchang pig (copy number = 1.94), which is a Chinese indigenous breed characterized by its solid white coat color on the body and some black patches around the eyes and ears. The result confirms the previous finding that the white coat color in Chinese pigs is not caused by the dominant white allele of KIT [60]. Among these 123 copy number variable genes, some show breed- or population-specific copy number changes. For instance, kynurenine/alpha-aminoadipate aminotransferase (AADAT) and zinc finger protein 622 (ZNF622) have extremely high copy numbers in the re-sequenced Meishan individuals (above 5.0 and 9.0 for AADAT and ZNF622, respectively) compared to the other individuals. To further explore their copy number distributions at the population level across multiple breeds, and to mine potential functions contributing to the formation of particular breed features, we determined the absolute copy numbers of these two genes via qPCR. A total of 174 unrelated individuals from six pig breeds (Meishan, Tibetan, Daweizi, Yorkshire, Landrace and Duroc) were employed in the confirmation study. The primers used and the average copy number estimates for these two genes in each breed are presented in Figure 5 and Additional file 6: Table S20. The validation outcomes showed a tendency consistent with the RD analyses, i.e., both AADAT and ZNF622 have average copy numbers above 8.0 in the Meishan breed, approximately 2- to 4-fold higher than those in the other five breeds. In rodents, the activity of the AADAT gene is associated with the transamination of alpha-aminoadipic acid, the final step in the major pathway (the saccharopine pathway) for the catabolism of L-lysine (AADAT NCBI reference). ZNF622 pertains to the zinc finger gene family and has been shown to be involved in embryonic development [61]. Concerning the potential functions of AADAT and ZNF622, we can speculate that their extraordinarily high copy numbers likely contribute to typical Meishan features such as high fertility, roughage resistance and lower growth rate. Figure 5 Box plot of gene copy number quantification for AADAT (a) and ZNF622 (b). The gene copy number was measured by qPCR assays across six pig breeds, including Meishan pig, Daweizi pig, Tibetan pig, Duroc pig, Landrace pig and Yorkshire pig. Boxes indicate the interquartile range between the first and third quartiles, and the bold line indicates the median. Whiskers represent the minimum and maximum within 1.5 times the interquartile range from the first and third quartiles. Outliers outside the whiskers are shown as circles. Discussion In the current study, we developed an SD map of the reference genome with 2,860 intervals and systematically performed the first genome-wide analysis of recent SDs using the newest build of the porcine genome (Sscrofa 10.2) by both WGAC and WSSD methods.
The construction of SD map herein presented essential SD features of pig genome, like inter-/intra-chromosomal patterns of SDs and the identity of pairwise alignments, etc., aiding understanding of genome innovation, genomic rearrangements, and occurrences of CNV hotspots within species [4,18,51,62]. It has been reported [52,63] that SDs may contribute to the formation of some CNVs through the occurrence of NAHR mechanisms. Certain ancestral SDs that were transmitted to their descendants may facilitate separate NAHR in them, leading to the genesis and maintenance of CNVs. The impact of SD on the CNVs has also been reflected by our findings that there are significant association between the ancestral SDs and CNVRs and CNV hotspots. From the practical perspective, the reference genome SD database generated in our study also provides a very useful calibration for filtering short-read artifacts, which is necessary for duplication/deletion detection in WSSD analyses of individual NGS data. Besides the SD map of the pig reference genome, we also constructed a CNV picture involving 3,131 unique regions using WSSD through re-sequencing 13 highly representative individuals from ten distinct breeds or populations. To our knowledge, this is the highest resolution CNV map so far in the pig genome. The abundance of CNV outcomes in our study further confirmed our initial expectation that individuals from multiple breeds, especially Chinese indigenous breeds, can greatly contribute to the CNV identification. The alteration of copy numbers of these genes within CNVRs may be responsible for the genetic diversity among diverse breeds with distinctive natures, especially for those entailed in various Chinese indigenous breeds. Additionally, we further confirmed the previous findings that the duplication of KIT gene is responsible for the dominant white phenotypic breeds like Landrace and Yorkshire, while with the exception of Chinese indigenous solid white breeds like Rongchang pig surveyed. In our study, besides those multiple-member gene families and uncharacterized genes, a total number of 123 copy number variable genes have been mined within CNVRs across 13 individuals with different genetic backgrounds from ten distinct breeds, which merit functional validation in depth in follow-up studies. Especially, the two genes, AADAT and ZNF622, entail obviously high copy numbers merely in Meishan pigs, which can be considered as promising candidate functional genes in CNV-related association studies in the future. In CNV detection, we adopted the read depth specific analytical tool mrsFAST to map sequence reads to the reference genome. Compared with other read depth methods considering merely one mapping location per read, mrsFAST can map sequence reads to all possible locations for a sequence read, demonstrating advantages of detection power in searching for SD regions. Highlights in our analyses involve three aspects: Firstly, we proposed an enhanced strategy to determine three different types of sliding windows to adjust the bias in CNV calling due to fragmented sequences in the process of hard masking of the reference genome, especially for NGS data with long sequence reads. We defined sliding windows based on unique hits where short-reads can be forward aligned with the reference sequence rather than non-masked bases employed in the original mrCaNa-VaR. This could largely conquer the inaccuracy of read depth calculation for each type of sliding windows arising from hard masking of the reference genome. 
Accordingly, we could use more reliable read depth statistics to infer duplication/deletion and estimate copy number, leading to better sensitivity and specificity of duplication/deletion detection as well as increased accuracy of copy number estimation. The performance gain of the enhanced strategy over the original has been verified by qPCR as well as through simulation analyses. Secondly, we probed formation signatures of both SDs of the pig reference genome and individualized CNVs in an integrated fashion. Based on the identified CNVs and SDs, we systemically explored associations of CNVRs with various genome features, building a comprehensive profile of genome-wide CNVs in swine. Finally, we exploited CNVs across the pig genome among ten distinct breed populations and dug out corresponding genes within these specific regions, which may be considered as the most important copy number variable genes responsible for genetic diversity and specific breed features. Furthermore, we predicted absolute copy number of completely all genes within CNVRs across the genome and sifted out 123 protein-coding genes. Most of these specific CNVs and CNV-related genes are firstly reported by our studies. The WGAC and WSSD methods employed in this study have demonstrated obvious advantages. However, some limitations still exist in detecting SDs and CNVs. Specifically, WGAC can identify whole-genome SDs with the length of >1 kb and determine accurate SD breakpoints, but it does depend on the whole genome assembly of the individual investigated. It is also difficult for WGAC to dissect high-identity SDs, which should be further filtered by WSSD. The WSSD method has inevitable weakness in determining breakpoint due to its nature of relying on pre-defined sliding windows. Considering the sliding length (generally set as 1 kb), the WSSD method can merely identify a rough position of CNV breakpoint. The inaccuracy of CNV breakpoint determination limited our view about the CNV formation. In this study we specially focused on recurrent CNVs instead of non-recurrent ones. Recurrent CNVs show recurrent breakpoints in SDs, arising by meiotic unequal or non-allelic homologous recombination [64]. In contrast, non-recurrent CNVs have unique breakpoints that are not dependent on SDs, possibly arising by nonhomologous end-joining (NHEJ), microhomology-mediated end-joining (MMEJ), fork stalling and template switching (FoSTeS), or microhomology-mediated break-induced replication (MMBIR) [64]. Our study showed a significant association between CNVs and ancestral SDs in pig genome, giving evidence on the abundance of recurrent CNVs in our results. Though it is possible to distinguish recurrent and non-recurrent CNVs based on their differences in breakpoint distribution (common versus variable) and association with SDs (dependent versus independent) [64], the ambiguity of CNV breakpoints due to the shortness of the WSSD method made it unfeasible to achieve this goal. Conclusion In the present study, we proposed an enhanced strategy to determine three different types of sliding windows to adjust the bias in CNV calling due to fragmented sequences in the process of hard masking of the reference genome, and then exploited both segmental duplications (SDs) and individualized CNVs across the pig genome among ten distinct breed populations and dug out corresponding genes within these specific regions. 
Our studies lay out one way for characterization of CNVs in the pig genome, provide insight into the pig genome variation and prompt CNV mechanisms studies when using pigs as biomedical models for human diseases. Ethics statement The whole procedure for collection of the ear tissue samples of all animals was carried out in strict accordance with the protocol approved by the Institutional Animal Care and Use Committee (IACUC) of China Agricultural University. Selection of pig breeds and experimental animals In this study, a total number of 13 pig samples originated from ten distinct populations were chosen for sequencing. These samples comprised one Asian wild pig, three modern commercial pigs (1-Landrace, 1-Duroc and 1-Yorkshire), and nine pigs selected from six Chinese indigenous breeds (2-Tibetan pig, 2-Diannan small-ear pig, 2-Meishan pig, 1-Min pig, 1-Daweizi pig, and 1-Rongchang pig). Duroc, Yorkshire and Landrace are considered as the representatives of modern commercial breeds, while the six Chinese indigenous breeds, each belonging to a specific population type, are considered as the representatives of Chinese indigenous population. The illustration of the features of six Chinese indigenous breeds were detailed elsewhere [65]. Furthermore, to explore the phylogeny relationships among them, the 13 individuals were genotyped by Porcine SNP60 BeadChip (Illumina). SNPs with 100% call rate (n = 55,438) from these 13 samples were used to construct the Neighborjoining tree using MEGA version 5.0 [66]. As shown in Additional file 2: Figure S6, the experimental samples can well represent diverse populations of the commercial breeds and Chinese indigenous breeds. Re-sequencing and data acquisition Genomic DNA of 13 individuals was extracted from the ear tissue using Qiagen DNeasy Tissue kit (Qiagen, Germany). All DNA samples were analyzed by spectrophotometry and agarose gel electrophoresis and sequenced using the Illumina HiSeq 2000 technology. All paired-end reads reached the length of 100 bp, with an average insert size of 460-490 bp and the standard deviation of 11-14 bp estimated for all samples. The reads which contain more than 50% low quality bases (quality value ≤5) or more than 10% N bases were removed. The Q20 bases rate of reads of each individual is above 90%. Developing an enhanced strategy in WSSD analyses In RD approach for WSSD analyses, hard masking of genome sequences is a routine process for generating more accurate read depth statistics of long window, short window and copy window. However, hard masking may produce biases in both duplication and deletion detection, especially for long sequence reads (e.g., ≥100 bp). We define here this kind of bias as the fragmentation effect, which received seldom attention preciously since it does not matter due to the length of reads is merely 36 bp in most of earlier studies [5,6,15]. To reduce potential fragmentation effects, we modified mrCaNaVaR to optimize the way in defining the three windows, i.e., long window, short window and copy window. Specifically, the sizes of windows are based on the number of unique hits where short-reads can be forward aligned with the reference sequence rather than the accumulative counts of nonmasked characters employed in the original mrCaNaVaR. Accordingly, the biases in duplication/deletion detection and CN estimation due to fragmentation effects can be largely corrected. A more intuitive illustration on the so-called fragmentation effect and our improved strategy were also given in Figure 6. 
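To make the modified window definition above concrete, the toy Python function below closes a copy window after a fixed number of uniquely mappable read start positions, rather than after a fixed number of non-masked bases as in the original definition. It is an illustrative sketch of the idea, not the modified mrCaNaVaR code.

```python
def copy_windows(mappable, window_hits=1000):
    """Partition a genome track into copy windows.

    `mappable` is a boolean sequence: mappable[i] is True if a read can be
    forward-aligned starting at position i of the (hard-masked) reference.
    In the enhanced strategy a window closes once it has accumulated
    `window_hits` uniquely mappable start positions, so heavily masked
    stretches do not dilute the read-depth statistics of a window.
    """
    windows, start, hits = [], 0, 0
    for pos, ok in enumerate(mappable):
        hits += int(ok)
        if hits == window_hits:
            windows.append((start, pos + 1))
            start, hits = pos + 1, 0
    return windows
```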
More details on our enhanced strategy are given in the supplementary method, Section 1 of Additional file 7. To further validate the performance of the enhanced strategy, extensive simulation analyses were conducted to systematically compare the detection power and the accuracy of copy number estimates between the original and the enhanced strategy (for details, see Additional file 7, Section 2). Figure 6 Illustration of the modified method of window definition. As shown at the top of the graph, on a 4 kb genome sequence, black regions represent A/T/C/G characters and grey regions denote N characters. Due to hard masking, 50 bp N blocks are uniformly distributed on the first 2 kb of sequence, resulting in no 100 bp reads being mapped there. According to the copy window definition of the original method, in which every 1,000 bp of non-masked characters are defined as one copy window, the whole 4 kb of masked genome sequence is divided into three copy windows and the first 2 kb of sequence is defined as one copy window. The three copy windows have read counts of 0, 4 and 5, respectively; thus the hard-masked sequence of the first 2 kb may be falsely considered a deletion. In contrast, the modified method we propose herein defines every 1,000 unique locations where short reads can be mapped as one copy window, so the masked genome sequence is accordingly divided into two copy windows with read counts of 4 and 5, respectively, avoiding a false prediction of deletion for the hard-masked region. Construction of SD map for the reference genome We performed both WGAC and WSSD analyses to map SDs based on the Sus scrofa 10.2 genome assembly (Sscrofa 10.2). These two analytical algorithms were initially applied to the human genome [7,49] and provide comprehensive and complementary SD findings with different levels of sequence identity and resolution. The specific process of porcine SD map development by the WGAC and WSSD approaches is detailed in the supplementary method, Section 1 of Additional file 7. After finishing both WGAC and WSSD analyses for the reference genome, to further remove artifactual duplications, we filtered the WGAC alignments of ≥94% identity with the WSSD dataset following the criteria proposed by [7]. We finally developed a pig SD database based on the union of low-identity WGAC (<94%), filtered high-identity WGAC (≥94%) and the WSSD estimates. Detection of duplication/deletion for re-sequenced individuals Based on the SD findings in the pig reference genome, we employed the RD method to detect SDs/deletions for the re-sequenced samples by running mrsFAST and our improved mrCaNaVaR program. The specific steps for SD/deletion calling are given in the supplementary method, Section 1 of Additional file 7. Validation of pig CNVs using aCGH and qPCR We employed aCGH with a custom-designed 2.1 M oligonucleotide array (Roche-NimbleGen) based on the Sscrofa10.2 porcine assembly for CNV validation. The array contained 2,167,769 oligonucleotide probes (50-75 mers), with an average interval of 889 bp between probes, covering 18 autosomes and two sex chromosomes. Details of the aCGH analyses are presented in the supplementary method, Section 1 of Additional file 7. Besides aCGH, qPCR was used to validate CNVRs identified from the NGS data in this study. The control region is located within the glucagon gene (GCG), which is highly conserved between species and has been proved to have a single copy in animals [67].
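The paper does not spell out the qPCR quantification formula, so the following Python sketch shows one common way to turn Ct values into an absolute copy number against a single-copy reference such as GCG: a comparative-Ct (2^-ddCt) calculation against a calibrator sample of known copy number. The function, its arguments and the assumption of roughly 100% amplification efficiency are ours, not the authors' protocol.

```python
def copy_number_from_ct(ct_target, ct_ref, ct_target_cal, ct_ref_cal, calibrator_copies=2):
    """Estimate the target-gene copy number by the comparative-Ct (2^-ddCt) method.

    ct_target, ct_ref         : Ct of the target gene and of the single-copy
                                reference (e.g. GCG) in the test sample
    ct_target_cal, ct_ref_cal : the same two Ct values in a calibrator sample
                                known to carry `calibrator_copies` of the target
    Assumes ~100% amplification efficiency for both assays.
    """
    ddct = (ct_target - ct_ref) - (ct_target_cal - ct_ref_cal)
    return calibrator_copies * 2.0 ** (-ddct)

# Toy example: the target amplifies two cycles earlier (relative to GCG) than in
# the two-copy calibrator, i.e. roughly a four-fold excess -> ~8 copies.
print(copy_number_from_ct(22.0, 24.0, 24.0, 24.0))  # -> 8.0
```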
The specific process of qPCR analyses and the criteria for quantifying the performance of RD-based CNV calling are detailed in the supplementary method, Section 1 of Additional file 7. Gene content and functional analyses Pig CNVRs were annotated using NCBI gene information (ftp://ftp.ncbi.nih.gov/genomes/Sus_scrofa/mapview/ seq_gene.md.gz; ftp://ftp.ncbi.nlm.nih.gov/gene/DATA/ GENE_INFO/Mammalia/Sus_scrofa.gene_info.gz). Those genes overlapping with CNVRs completely or partially were considered as copy number variable and picked out for further analyses. Copy number of each variable gene was estimated as the median of copy numbers corresponding to copy windows within the region of the gene. To provide insight into the functional enrichment of copy number variable genes, annotation analyses were performed with the DAVID (http://david.abcc.ncifcrf.gov/) for Gene Ontology (GO) terms and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway analyses. Since only a limited number of genes in the pig genome have been annotated, we firstly converted the pig EntrezGene IDs to orthologous human RefSeq genes by BioMart (http://www.biomart.org/) ahead of GO and pathway analyses. Statistical significance was assessed using a modified Fisher's exact test while considering multiple testing correction based on Benjamini's method. Pig CNV distribution and association with SDs and other genomic features We performed simulations to probe if the identified CNVs are associated with SD regions and other genomic features, such as protein-coding genes (ftp://ftp.ncbi.nih. gov/genomes/Sus_scrofa/mapview/seq_gene.md.gz). Specifically, for SD region association analyses, we randomly assigned each of identified CNVRs a putative position with no overlap with each other in the genome. The number of SDs overlapping with CNVRs was calculated in each simulation, and finally we created empirical distribution of the hits via 10,000 independent replications. Thus the significance of pig CNV enrichment/depletion in SD regions could be determined by the thresholds based on the empirical distribution. Similarly the association analyses were further conducted for other genomic features investigated, i.e., genes and protein-coding genes. Data access The complete SNP array data and aCGH data have been submitted to the Gene Expression Omnibus (http://www. ncbi.nlm.nih.gov/geo/) and released under the accession number GSE46733 and GSE46847, respectively.
2017-03-31T17:30:03.436Z
2014-07-14T00:00:00.000
{ "year": 2014, "sha1": "31dcb43debd270d3cc49beb1e1324202df111cd0", "oa_license": "CCBY", "oa_url": "https://bmcgenomics.biomedcentral.com/track/pdf/10.1186/1471-2164-15-593", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "31dcb43debd270d3cc49beb1e1324202df111cd0", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
227247662
pes2o/s2orc
v3-fos-license
The effects of drift and winds on the propagation of Galactic cosmic rays We study the effects of drift motions and the advection by a Galactic wind on the propagation of cosmic rays in the Galaxy. We employ a simplified magnetic field model, based on (and similar to) the Jansson-Farrar model for the Galactic magnetic field. Diffusion is allowed to be anisotropic. The relevant equations are solved numerically, using a set of stochastic differential equations. Inclusion of drift and a Galactic wind significantly shortens the residence time of cosmic rays, even for moderate wind speeds INTRODUCTION Cosmic rays (CRs) propagate in the Galaxy and through the surrounding halo around the Galactic disk by a combination of diffusion, drift through the ambient magnetic field and advection by a large-scale wind e.g. Strong et al. (2007). These processes are usually studied using by solving a diffusion-advection equation. In addition CRs can gain (through re-acceleration) or lose (through expansion losses in a wind) energy during propagation. During propagation CR composition is changed due to spallation on ISM nuclei or by radioactive decay of unstable nuclei. The charged CR nuclei (and CR electrons and positrons) are collisionally coupled to a possible Galactic wind, causing them to be advected by the bulk flow, see for instance Skilling (1975). This coupling is due to frequent scattering of the CRs as a result of wave-particle interactions with low-frequency MHD waves. The intensity of these waves is determined by the CR density gradient, which causes the excitation of Alfvén waves, see Wentzel (1974) or Skilling (1975). The mechanisms driving a Galactic winds include the deposition of mechanical energy into the ISM by core collapse supernovae, see for instance Martin (1999), and the effects of radiation-or CR pressure e.g. Hopkins et al. (2012). The effect of such a large-scale wind is now routinely included in numerical simulations of CR propagation. In the diffuse Galactic disk there is a rough equipartition of the CR energy density and the energy density of the Galactic magnetic field, see for example Beck & Krause (2005). This implies that CRs can play a significant role in the dynamics of the ISM. That last point will not be addressed in this paper. CR drift motions with respect of the large-scale magnetic field are usually a combination of gradient and curvature drifts. These are indispensable ingredients in the study the CR propagation in the Galaxy and (on a much smaller scale) in the Solar Wind. For example: Jokipii et al. (1977) has presented a model of CR propagation in the solar wind that includes drift. Those authors studied the effects of gradient drifts on CR transport, with the magnetic field taken to be an Archimedean spiral. In this paper we take into account the effects of cross-field drift in the curved magnetic field, and the effects of CR advection away from the disk by the Galactic winds on the propagation of CRs in the Galaxy. A number of analytical models for the Galactic magnetic field have been published in recent years, see for instance: Sun et al. (2008), Jaffe et al. (2010), and (Jansson & Farrar 2012a,b). The (Jansson & Farrar 2012a,b) model, hereafter JF12, does include a detailed model for the vertical field. In this paper we use a simplified GMF model (see below), based on JF12 model. 
This model preserves most of the features of the JF12 model: the field in the plane of the disk (horizontal field) is essentially unchanged, but, close to the disk mid-plane, the vertical field is taken to be perpendicular to the Galactic disk. The reason for this approach is mainly that the simplified model, unlike the original JF12 model, allows a (relatively) simple analytical calculation of CR drifts, which can then be used to check the numerical results. These analytical results, summarized in the Appendix, are used to calculate the drift speed in the advective step. Other than the inclusion of CR drift and advection by a Galactic wind, the numerical methods used in this paper are identical to those used in the two previous papers (AL-Zetoun & Achterberg (2018) and AL-Zetoun & Achterberg (2020)). As a result, the performance and efficiency of the code is comparable to what was found before. The rest of the paper is organized as follows: In Section 2.1, we describe the large-scale Galactic magnetic field model. We discuss our propagation model and the relevant input, such as the diffusion tensor, the path length and grammage distribution, advection by a Galactic wind, and the drift velocity, in Sections 2.2, 2.3, and 2.4, respectively. In Section 3 we discuss the spatial distribution of CRs in the Galaxy when we include the drift motion and the advection by a Galactic wind. Finally, Section 4 contains the conclusions. In the Appendix we give the details of the CR drift in the modified Jansson-Farrar field. The Galactic magnetic field model We briefly discuss our modification of the GMF model of Jansson & Farrar (2012a) and Jansson & Farrar (2012b). The JF12 model has three distinct components: a spiral disk field, a poloidal X-shaped field, and a toroidal halo field. Our simplifications involve the disk component as well as the X-field component, as explained immediately below. (i) For a distance |z| ≤ h = 0.4 kpc from the disk mid-plane we take the field to be the sum of two terms: the first term is the spiral field in the disk plane, while the second term is the vertical X-field. The spiral pitch angle is p = 11.5 degrees and the value of B_0^D is different in the 8 spiral sections of the field. The disk field scales with Galacto-centric radius r as B^D ∝ r^−1; the radius r_0 can be chosen arbitrarily, and in our simulations we use the value r_0 = 5 kpc, see Jansson & Farrar (2012a) and AL-Zetoun & Achterberg (2018) for details. We take the X-field to be purely vertical with the same properties as the vertical component of the JF12 field: B_0^X = 4.6 µG, H_X = 2.9 kpc, and tan i(r_p) = (r_X/r_p) tan i_0 for r_p < r_X = 4.8 kpc and tan i(r_p) = tan i_0 for r_p ≥ r_X. Here r_p = r/(1 + h/(r_X tan i_0)) for r_p < r_X and r_p = r − h/tan i_0 for r_p ≥ r_X, with tan i_0 = 1.15 (i_0 = 49 degrees). (ii) For |z| > h we assume that the disk field vanishes abruptly. In the JF12 model this transition is more gradual. The X-field remains in the form given in Jansson & Farrar (2012a). We neglect the relatively weak halo field. The equations for CR propagation Recently, AL-Zetoun & Achterberg (2018) presented the results of a fully three-dimensional simulation of CR propagation, based on the Itô formulation of the Fokker-Planck equation in terms of a set of stochastic differential equations. The results allowed for anisotropic diffusion but neglected the effects of CR drift and the Galactic wind.
In this paper we include these effects. In finite-difference form the Itô formulation advances the position x of a simulated CR as the sum of a regular advective step and a diffusive stochastic (random) step. In a time span ∆t one has x(t + ∆t) = x(t) + (V_w + V_dr) ∆t + ∆x_diff. The proper definitions of the wind speed V_w and the drift speed V_dr are given directly below. The diffusive step ∆x_diff involves Gaussian random steps with rms size √(2D_∥ ∆t) in the direction along the magnetic field, and random steps with rms size √(2D_⊥ ∆t) in the two directions in the plane perpendicular to the magnetic field. For more details about these aspects of the model, see AL-Zetoun & Achterberg (2018). To achieve this, the random variables ξ_1, ξ_2 and ξ_3 are independently drawn from a Gaussian distribution with zero mean and unit dispersion. In our simulations we use a constant value for the ratio D_⊥/D_∥ ≡ ε. The scaling with CR rigidity R = pc/q is D_∥ ∝ R^δ. Values quoted for D_∥ are for protons with an energy of 1 GeV. Path length and the grammage distribution The path length distribution (PLD) is an important quantity that can be determined from measurements of the CR composition at Earth. It determines the number of spallation reactions that a typical primary CR undergoes, which can be measured using the ratio of fluxes of secondary to primary nuclei, such as the Boron-to-Carbon ratio. In our calculation the path length increases by v ∆t over a time span ∆t, with v the instantaneous CR velocity. The grammage increases as ∆X = ρ(r_cr) v ∆t, where ρ(r) is the density of the diffuse gas at CR position r, v (≃ c) is the velocity of the CR, and r_cr is the instantaneous position of the CR inside the Galaxy. The radial scale length R_c in the density distribution equals R_c = 7 kpc. The vertical density scale height is H_d(r) = H_0 exp(r/R_h), with R_h ≃ 9.8 kpc and H_0 ≃ 0.063 kpc. Model for the Galactic wind Several theoretical papers, e.g. Breitschwerdt et al. (1991), Zirakashvili et al. (1996), and Pakmor et al. (2016), conclude that CRs can play an important role in launching Galactic winds. For instance, Breitschwerdt et al. (1991) and Breitschwerdt et al. (1993) showed that Galactic winds are accelerated by the pressure of the CRs, as well as by gas and MHD wave pressure. As a result the wind velocity can reach several hundred km/s. Everett et al. (2008) show that the initial velocity, close to the disk, is about 200 km/s and increases to 600 km/s. When CRs couple to the plasma via scattering by MHD waves, a Galactic wind develops and CRs are picked up at a height |z| ∼ D/V_w by the wind with velocity V_w. They are then transported out of the Galaxy (i.e., CRs will generally not return). Since our simulations propagate test particles in a prescribed magnetic field and/or flow, we cannot simulate the self-consistent launch of a CR-driven wind. Instead we use a simple analytical model. The velocity of a steady and axisymmetric Galactic wind in the MHD approximation must take the form given by, e.g., Weber & Davis (1967). In that expression B_p is the poloidal magnetic field, B_p = (B_r, 0, B_z). In our model we will neglect the motion in the azimuthal (φ-)direction, since our model (including the CR source distribution) is axially symmetric, retaining only the wind component V_p along the poloidal field. We do not employ a full model for the Galactic wind.
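A minimal Python sketch of the finite-difference Itô step described above is given below. The `field_basis` helper (returning an orthonormal triad aligned with the local ordered field) and the unit conventions are our own assumptions; the snippet is an illustration of the scheme, not the code used in the paper.

```python
import numpy as np

def ito_step(x, dt, V_wind, V_drift, D_par, D_perp, field_basis, rng):
    """Advance a simulated CR position by one Ito step.

    x                 : 3-vector position [cm]
    V_wind, V_drift   : callables x -> 3-vector advection velocities [cm/s]
    D_par, D_perp     : parallel / perpendicular diffusion coefficients [cm^2/s]
    field_basis       : callable x -> (b_hat, e1, e2), an orthonormal triad
                        with b_hat along the local ordered magnetic field
    rng               : numpy random Generator
    """
    b_hat, e1, e2 = field_basis(x)
    # Regular (advective) step: wind plus drift.
    x_new = x + (V_wind(x) + V_drift(x)) * dt
    # Diffusive step: Gaussian increments with rms sqrt(2 D dt) along b_hat
    # and in the two directions perpendicular to the field.
    xi = rng.standard_normal(3)
    x_new += np.sqrt(2.0 * D_par * dt) * xi[0] * b_hat
    x_new += np.sqrt(2.0 * D_perp * dt) * (xi[1] * e1 + xi[2] * e2)
    return x_new
```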
Rather than a full wind model, we assume that the poloidal wind speed varies with height z above the disk as V_p(z) = V_0 |z|/H_w, a reasonable approximation sufficiently close to the disk for a wind accelerating away from the Galactic disk. We use H_w = 20 kpc in these simulations, and vary V_0 between 0 and 600 km/s. The importance of CR advection by this wind is determined by a dimensionless parameter Ξ_w, in which D_zz is the zz-component of the CR diffusion tensor. Advection away from the disk becomes the dominant transport mechanism for CRs when Ξ_w ≫ 1. Of course, in this model (with V_p ∝ |z|) it is essential that diffusion first transports the CRs some distance away from the disk mid-plane. As an illustration: if the CR is 'picked up' by the wind at some height h_* ≪ H_w from the mid-plane, the ratio of the diffusion time t_diff = H_w²/(2D_zz) to a height H_w and the advection time t_w = (H_w/V_0) ln(H_w/h_*) to the same height is t_diff/t_w = H_w V_0/(2D_zz ln(H_w/h_*)). In practice h_* will roughly equal the thickness of the stellar disk of the Galaxy, h_* ≃ 0.2 − 0.4 kpc. Effective drift speed in the Itô formulation The precise treatment of drift and diffusion needs some discussion. Without scattering, the drift velocity of a charge q with momentum p and velocity v in a static magnetic field B(x) is a combination of gradient-B drift and curvature drift, which equals the expression given in Appendix A when averaged over an isotropic distribution of momenta, so that ⟨p_⊥ v_⊥⟩/2 = ⟨p_∥ v_∥⟩ = pv/3, where the brackets denote the average over momentum direction and the subscript ⊥ (∥) refers to the component perpendicular (parallel) to the magnetic field. This is the (slow) drift of the guiding center, the average position of the charge when one averages over the rapid gyration around the magnetic field. These drifts are fully discussed in the classic paper of Northrop (1961). The well-known E × B drift is included in the wind velocity since the MHD condition applies, so that E = −(V_w × B)/c. The full diffusion tensor, in a simple collisional model with collision frequency ν_s, takes the form given in component notation by, e.g., Miyamoto (1980), Ch. 7.3. Here b_i is the i-th component of the unit vector b̂ of the ordered magnetic field, and ϵ_ijk is the totally anti-symmetric symbol in three dimensions. The three fundamental diffusion coefficients appearing in this expression are D_∥, D_⊥ and D_a. Here Ω = qB/γmc is the gyration frequency of the charge, with γ = 1/√(1 − v²/c²) its Lorentz factor. When this diffusion tensor is used in the diffusion equation, the term involving D_a leads to an advection term (and not to a diffusion term, because of the anti-symmetry of this term in the indices i and j), with an effective guiding center drift velocity V_gc. In our simple model we assume that D_⊥/D_∥ ≡ ε is a constant, where (12) then yields ε = ν_s²/(ν_s² + Ω²). Then, in order to be consistent, the guiding center drift velocity must be defined accordingly. It reduces to the standard (collisionless) form when ε ≪ 1 (ν_s ≪ Ω) and vanishes for ε = 1, the case of isotropic diffusion. This is physically correct. The diffusive random step ∆x_diff in (3) only involves D_∥ and D_⊥, the two diffusion coefficients that determine the symmetric part of the diffusion tensor, which can be written in dyadic notation as D_symm ≡ D_∥ b̂b̂ + D_⊥ (I − b̂b̂). If there are gradients in the field direction or in the coefficients D_∥ and D_⊥, one must, in the Itô formulation (3) of the equations, include a gradient drift velocity equal to V_gr = ∇ · D_symm.
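The gradient drift V_gr = ∇ · D_symm that enters the Itô formulation can also be evaluated numerically when no analytical expression is at hand. The sketch below does this by central differences for a position-dependent field direction with constant D_∥ and D_⊥, as assumed in the text; the step size, units and the `b_field` helper are illustrative choices of ours, not values from the paper.

```python
import numpy as np

def d_symm(x, b_field, d_par, d_perp):
    """Symmetric diffusion tensor D_par*bb + D_perp*(I - bb) at position x."""
    b = b_field(x)
    b = b / np.linalg.norm(b)
    bb = np.outer(b, b)
    return d_par * bb + d_perp * (np.eye(3) - bb)

def grad_drift(x, b_field, d_par, d_perp, h=1e-3):
    """Gradient drift V_gr = div(D_symm) by central differences.

    Component i is sum_j dD_ij/dx_j; with constant d_par and d_perp the
    gradients come only from the changing field direction b(x).
    """
    v = np.zeros(3)
    for j in range(3):
        dx = np.zeros(3); dx[j] = h
        d_plus = d_symm(x + dx, b_field, d_par, d_perp)
        d_minus = d_symm(x - dx, b_field, d_par, d_perp)
        v += (d_plus[:, j] - d_minus[:, j]) / (2.0 * h)
    return v

# Toy usage: for a uniform vertical field the direction does not vary, so V_gr ~ 0.
print(grad_drift(np.array([1.0, 0.0, 0.5]),
                 lambda x: np.array([0.0, 0.0, 1.0]), 3e28, 3e26))
```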
The total drift velocity V_dr = V_gr + V_gc becomes, writing the diffusion tensor as the sum of the symmetric and the anti-symmetric part D ≡ D_symm + D_a, V_dr = ∇·D. This is the 'standard form' found in the mathematical literature on the Itô formulation. In Appendix A we give explicit analytical expressions for the drift velocity in our adopted magnetic field.

RESULTS OF THE SIMULATIONS

We present results from our simulations for two different values of the ratio ε = D_⊥/D_∥: ε = 0.01 (strongly anisotropic diffusion) and ε = 0.5 (mildly anisotropic diffusion). The diffusion coefficients D_⊥ and D_∥ are kept constant for a given CR energy. Figure 1 shows the position of CR protons, projected onto the Galactic plane, at the moment they reach the upper (lower) boundary of the CR halo, located at z = +H_cr (z = −H_cr) with H_cr = 4 kpc, or when they reach the outer radius of the Galaxy, taken to be r_max = 20 kpc. In these simulations there is no Galactic wind. All CRs were injected at (X, Y) = (7 kpc, 0). For strongly anisotropic diffusion (D_⊥/D_∥ = 0.01, the left two panels) the CRs mostly follow the spiral field. In the mildly anisotropic case (D_⊥/D_∥ = 0.5, the right two panels) CRs spread out almost isotropically from the injection site. In the two top panels the drift motion is neglected; in the bottom two panels the drift motion is taken into account. Without drift, CR protons spread over a larger region of the disk before escaping. The drift motion leads to a faster escape of CRs and, as a result, compresses the distribution of the CRs. It also leads to a bulk inward drift to smaller radii. As the effective drift velocity is proportional to 1 − ε, the effect of drift is smaller for the case ε = 0.5.

Figure 2 (left column) shows the distribution of the CRs of Figure 1 over the accumulated grammage, calculated at the moment of escape from the Galaxy. In the red histogram the drift motion is neglected, while in the blue histogram the drift motion is taken into account. The right column of Figure 2 shows the grammage distribution of these CRs observed around the Solar System, without (in red) and with (in blue) drift. Without drift, the accumulated grammage is larger, as CRs spend more time in the CR halo. This allows them to spread out over a larger range in Galactic radius before they escape. This agrees with the spatial distribution (projected onto the Galactic disk) shown in Figure 1. In conclusion: given D_∥ and ε, the drift significantly decreases the residence time in the CR halo.

CR advection by a Galactic wind

Figures 3 through 5 show the effect of CR advection by a Galactic wind. The parallel diffusion coefficient in these simulations is kept constant at D_∥ = 3 × 10²⁸ cm² s⁻¹. The wind velocity is taken to increase linearly with height |z| away from the disk mid-plane, see prescription (7). The resulting CR transport is diffusive close to the Galactic disk. It becomes convective further out, i.e. there is a strongly diminished chance that CRs return to the mid-plane of the Galactic disk. We therefore extend the CR halo to a height H_cr = H_w = 20 kpc in these simulations. The left column is for D_⊥/D_∥ = 0.01 and the right column for D_⊥/D_∥ = 0.5. The CRs were injected at (X, Y) = (7 kpc, 0). We use three different wind velocities (see Eqn. 7): V_0 = 600 km/s (upper plot), 200 km/s (middle plot), and 0 km/s (lower plot). In all cases one sees outward CR transport, along the X-field lines perpendicular to the disk.
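The transition from the diffusive to the convective regime can be made quantitative with the parameter Ξ_w introduced above. The short sketch below evaluates Ξ_w and the diffusion-to-advection time ratio for the wind speeds used in these simulations; note that the numerical prefactor follows the reconstruction of Eqn. (8) given earlier and should be treated as approximate.

```python
import numpy as np

KPC_CM = 3.0857e21  # cm per kpc

def xi_w(V0_kms, Hw_kpc, Dzz_cm2_s):
    """Wind-importance parameter, Xi_w = V0 * Hw / (2 Dzz)
    (prefactor follows the reconstruction of Eqn. (8) above)."""
    return (V0_kms * 1.0e5) * (Hw_kpc * KPC_CM) / (2.0 * Dzz_cm2_s)

def time_ratio(V0_kms, Hw_kpc, Dzz_cm2_s, h_star_kpc=0.3):
    """Ratio of diffusion time Hw^2 / (2 Dzz) to advection time (Hw/V0) ln(Hw/h*)."""
    return xi_w(V0_kms, Hw_kpc, Dzz_cm2_s) / np.log(Hw_kpc / h_star_kpc)

if __name__ == "__main__":
    Dzz = 3.0e28   # cm^2/s, the value used in the wind simulations
    Hw = 20.0      # kpc
    for V0 in (0.0, 200.0, 600.0):
        if V0 == 0.0:
            print("V0 =   0 km/s: pure diffusion, Xi_w = 0")
            continue
        print(f"V0 = {V0:5.0f} km/s: Xi_w = {xi_w(V0, Hw, Dzz):6.1f}, "
              f"t_diff / t_adv = {time_ratio(V0, Hw, Dzz):6.1f}")
```

For V_0 = 200 and 600 km/s this gives Ξ_w well above unity, consistent with the advection-dominated behaviour described below.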
Comparing the top two plots with the wind-less bottom plot, it is evident that, with a wind, CRs escape sooner and, as a consequence, fan out less in both the radial and azimuthal directions. In the right column the effect of the wind is much less evident. Figure 4, left column, shows the normalized distribution in the age (residence time) of the CR protons at the moment they escape the Galaxy. The right column shows the normalized distribution over the age of the CR protons that are observed in the local volume of 1 kpc radius around the Solar System. In Figure 5 we show the average CR age at the moment of escape as a function of wind velocity (first row). In the second row we show the average CR age as a function of Ξ_w as defined in Eqn. (8). Now CRs are injected over the entire Galactic disk. The CRs are given a weight ∝ N_snr(r_inj), with r_inj the injection radius and N_snr(r) the Galactic surface density of supernova remnants, taken to be the sources of these CRs. We employ the SNR surface density given by Case & Bhattacharya (1996), where R_⊙ = 8.5 kpc is the position of the Sun, α = 1.1, and R_snr = 8.0 kpc. We employ five values for the characteristic wind speed V_0, between 0 km/s and 600 km/s. In these simulations we take D_∥ = 3 × 10²⁸ cm² s⁻¹, and we choose D_⊥/D_∥ = 0.01 and D_⊥/D_∥ = 0.5. As is clearly seen, the average age of CRs decreases as the wind velocity increases. Even though the escape boundary is now at H_cr = 20 kpc, the typical residence time (shown in Figure 5) is still around 1 Myr, comparable to what we find in the simulations without a wind where we put H_cr = 4 kpc. In the pure diffusion case one would expect an increase of the residence time (∝ H_cr²) by a factor ∼ 16. This shows that advection by the wind rapidly becomes important. The same behavior is seen if one plots the average CR age as a function of Ξ_w. In conclusion: given D_∥, increasing the wind velocity leads to a reduction of the CR residence time in the Galaxy.

Finally, in Figure 6 we show the B/C ratio as a function of kinetic energy per nucleon. We used the weighted-slab technique, using the path length distributions (PLDs) as described in AL-Zetoun & Achterberg (2020), for D_⊥/D_∥ = 0.01. In this figure, the red curve is without drift motion, the blue curve takes drift motion into account, while the black curve takes the wind velocity into account. The experimental data from AMS-02 (Aguilar et al. 2016), PAMELA (Adriani et al. 2014), CREAM (Ahn et al. 2008), and HEAO3 (Engelmann et al. 1990) are shown for comparison. It is possible to get a satisfactory agreement between our results and the observational data. The parallel diffusion coefficient is assumed to scale with CR rigidity as D_∥ = D_0 (R/1 GV)^δ with δ = 0.33. For the calculation without drift (red curve) we use D_0 = 3 × 10²⁸ cm² s⁻¹, for the calculation with drift (blue curve) we chose D_0 = 6.1 × 10²⁸ cm² s⁻¹, and for the calculation with a wind we chose D_0 = 2.7 × 10²⁹ cm² s⁻¹, in order to match the observed B/C ratio at 1 GeV/nucleon.

CONCLUSIONS

In this paper we have investigated, by means of numerical simulations, the effect of drift motion, as well as the effect of advection of CRs away from the disk by a Galactic wind, on the propagation of CRs in the Galaxy. We modified the magnetic field model of Jansson and Farrar, while retaining essential features of this model, such as the spiral structure close to the mid-plane of the Galactic disk.
The main results are as follows:

• We show that the drift motion alone affects the transport of CRs in the Galaxy, by compressing the CR distribution and by shifting them inward to smaller Galacto-centric radii;

• We show how a Galactic wind affects the transport of CRs in the Galaxy, by advecting them away from their sources. This significantly reduces (for given D_∥ and ε = D_⊥/D_∥) the residence time in the Galaxy and the accumulated grammage, as expected from simple arguments. This implies that, given the observed grammage derived from observations of (for instance) the B/C ratio, the diffusion coefficient D_∥ must increase for larger values of the Galactic wind speed in order to reproduce the observations. This implies that the sources contributing to the CR flux at Earth must be closer compared to the case without a Galactic wind.

• Away from the disk, the flaring vertical X-field leads to a more rapid (mostly advective) transport of CRs to larger Galacto-centric radii when a wind is present.

• As is the case without drift and wind, the accumulated grammage and the residence time depend strongly on the diffusion ratio D_⊥/D_∥, as already found in AL-Zetoun & Achterberg (2018) for the case without drift or a wind.

DATA AVAILABILITY

The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.

Figure 1. The distribution of CR protons projected onto the Galactic plane at the moment of escape. The upper two plots show the results when the drift motion is not considered; in the lower two plots drift motion is taken into account. The right panels are for D_⊥/D_∥ = 0.01, the left panels are for D_⊥/D_∥ = 0.5. In these simulations we take D_∥ = 3 × 10²⁸ cm² s⁻¹. The CRs were injected at (X, Y) = (7 kpc, 0).

Figure 6. The Boron-to-Carbon abundance ratio as a function of kinetic energy per nucleon. The gray triangles, yellow crosses, orange points, and red stars are the results of measurements by AMS-02 (Aguilar et al. 2016), PAMELA (Adriani et al. 2014), CREAM (Ahn et al. 2008), and HEAO3 (Engelmann et al. 1990), respectively. The curves illustrate our results for the case D_⊥/D_∥ = 0.01: with the drift motion considered in the calculation (blue curve), with the drift motion not considered in the calculation (red curve), and with the wind velocity considered in the calculation (black curve). We use D_0 = 6.1 × 10²⁸ cm² s⁻¹ for the drift motion (blue curve), D_0 = 3 × 10²⁸ cm² s⁻¹ when the drift motion is neglected (red curve), and D_0 = 2.7 × 10²⁹ cm² s⁻¹ for the Galactic wind (black curve), to produce the best fit to the observational data.

APPENDIX A: GUIDING CENTER AND GRADIENT DRIFT VELOCITIES

We briefly give the analytical results for the drift speeds as they apply in the simplified Jansson-Farrar field employed in this paper.

A1 Guiding center drift without scattering

The motion of charged particles with charge q in a non-uniform magnetic field B(x) and a sufficiently weak electric field E(x) (with |E| ≪ |B|) can be described as a combination of rapid gyration, fast motion along the magnetic field with velocity v_∥, and a (slow) drift motion of the guiding center (the center of the gyro-orbit). The fast motion along the field is subject to scattering and is taken into account by the parallel diffusion term with diffusion coefficient D_∥. Here we concentrate on the slow drift.
If we denote the position of the guiding center by R, the drift velocity (to leading order) without scattering equals dR/dt = c(E × B)/B² + (pvc/3q) ∇×(B/B²). We assume that there are no other (non-electromagnetic) forces acting on the charge and neglect the polarization drift, which is allowed for slow variations in the electric field. We also take the CR momentum distribution to be isotropic in momentum space. The first term is the well-known E×B drift. The second term is a combination of the drift due to the gradient of the magnetic field strength, the curvature drift, and the parallel drift. As such it is the average over solid angle in momentum space of the corresponding single-particle drifts (see Northrop (1961) for details), with p_⊥ and p_∥ (v_⊥ and v_∥) respectively the components of momentum (velocity) perpendicular to and along the magnetic field. Then, on average, ⟨p_∥v_∥⟩ = ⟨p_⊥v_⊥⟩/2 = pv/3 and one finds the second term in Eqn. (A1). The E×B drift is included automatically if one allows for a bulk flow (wind) with velocity |V_w| ≪ c and uses the ideal MHD condition, E = −(V_w × B)/c. In that case one has to interpret the particle momentum and velocity as those in the local rest frame of the bulk flow, and add the wind speed V_w to the (average) CR velocity. This is what we do here. We neglect the small drift that results from the fact that this rest frame is, generally speaking, not an inertial frame.

A2 Guiding center drift with scattering

As argued in the main paper, the guiding center drift involves a reduced effective drift velocity when scattering is important, V_gc = (1 − ε)(pvc/3q) ∇×(B/B²), with ε = D_⊥/D_∥. This velocity can be rewritten in terms of a dimensionless vector Δ_gc whose components are listed in Table A1. If we define a typical gyroradius by r_g = pc/qB, the factor in front of Δ_gc determines the typical guiding center speed.

A3 Gradient drift

For constant D_∥ and D_⊥ there is a gradient drift due to changes in the direction of the magnetic field. The associated velocity is V_gr = ∇·D_symm. It can be written in terms of a dimensionless vector Θ, whose components are also listed in Table A1. Here we used D_∥ = λ_s v/3 with λ_s = v/ν_s the parallel scattering length, employed b̂ = B/B, and ∇·B = 0. The factor in front of Θ in relation (A8) gives the typical magnitude of the gradient drift speed. Comparing this with the guiding center drift speed (A6) one finds that |V_gr|/|V_gc| ∼ λ_s/r_g. The two speeds have a similar magnitude when the parallel scattering mean free path becomes comparable with the gyroradius, the case of Bohm diffusion, where CR diffusion is almost isotropic. Strongly anisotropic diffusion occurs when λ_s ≫ r_g, in which case |V_gr| ≫ |V_gc|.

A4 Velocities in the simplified JF field

Table A1 below gives the parameters needed to calculate the guiding center drift and the gradient drift. It lists the components of Δ_gc ≡ (Δ_r, Δ_φ, Δ_z) and of Θ ≡ (Θ_r, Θ_φ, Θ_z). The table lists the results for z ≥ 0; for z < 0 both Δ_z and Θ_z have the opposite sign. In these expressions we use, for |z| ≤ h, the parameters B_D = B_D0 (r/r_0) and B_X(r) = (B_X0 sin i) exp(−r_p/H_X), and B = √(B_D² + B_X²). The value of B_D0 at r_0 = 5 kpc is listed in Table 1 of Jansson & Farrar (2012a) for all the spiral sections of the disk field. Also: B_X0 = 4.6 µG. The inclination angle i of the X-field and the radius r_p have been defined above. Again for |z| ≤ h we define the auxiliary quantity that appears in Table A1. The length parameters appearing in Table A1 are: h = 0.4 kpc, H_X = 2.9 kpc and r_X = 4.8 kpc, taken from Jansson & Farrar (2012a) and Jansson & Farrar (2012b).
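As a rough numerical illustration of the scalings in A2 and A3, the snippet below evaluates the gyroradius of a GeV-rigidity proton, a parallel scattering length derived from the quoted diffusion coefficient, and the resulting ratio |V_gr|/|V_gc| ∼ λ_s/r_g. The field strength, rigidity, and the use of λ_s = 3D_∥/v with v ≈ c are illustrative assumptions, and D_∥ and the gyroradius are treated here as independent phenomenological inputs.

```python
KPC_CM = 3.0857e21   # cm per kpc
C_CMS = 2.998e10     # speed of light in cm/s

def gyroradius_cm(rigidity_GV, B_muG):
    """r_g = pc/(qB): roughly 3.3e12 cm per GV of rigidity in a 1 microgauss field."""
    return 3.3e12 * rigidity_GV / B_muG

if __name__ == "__main__":
    r_g = gyroradius_cm(1.0, 3.0)          # ~1 GV proton in an assumed 3 muG field
    lam_s = 3.0 * 3.0e28 / C_CMS           # lambda_s = 3 D_par / v for D_par = 3e28 cm^2/s, v ~ c
    print(f"r_g      = {r_g:.2e} cm  ({r_g / KPC_CM:.1e} kpc)")
    print(f"lambda_s = {lam_s:.2e} cm  ({lam_s / KPC_CM:.1e} kpc)")
    print(f"|V_gr| / |V_gc| ~ lambda_s / r_g ~ {lam_s / r_g:.1e}")
```

With these illustrative numbers λ_s greatly exceeds r_g, so the gradient drift dominates the guiding center drift, as stated in A3.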
2020-12-03T02:47:47.183Z
2020-12-02T00:00:00.000
{ "year": 2020, "sha1": "6526a6e4d6b3d7ffd1fbada20b002e24664c14a1", "oa_license": null, "oa_url": "http://arxiv.org/pdf/2012.01038", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "6526a6e4d6b3d7ffd1fbada20b002e24664c14a1", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
264182730
pes2o/s2orc
v3-fos-license
Exploring the Effect of Adding an Interactive Lecture to a Standardized Patient Curriculum on the Attitudes of Third-Year Medical Students About Patients With Obesity: A Quasi-Experimental Study OBJECTIVES Anti-obesity bias is pervasive among medical professionals, students, and trainees. Stigmatization of patients leads to suboptimal care and clinical outcomes. Educational strategies in medical training are needed to reverse these attitudes. The aim of this study was to evaluate the effect of an innovative didactic intervention and a standardized patient (SP) exercise on attitudes towards patients with obesity among medical students. METHODS In 2016, a quasi-experimental study design was used at a US medical school. The class was divided into 2 groups according to a pre-determined protocol based on their clinical schedule, one assessed after exposure to a SP group and the other after exposure to the SP and an interactive lecture (IL + SP group) with real patients. The Attitudes about Treating Patients with Obesity and The Perceived Causes of Obesity questionnaires measured changes in several domains. A generalized estimating equations model was used to estimate the effect of the interventions both within and between groups. RESULTS Both groups showed improvements in negative and positive attitudes, although the reduction in scores for the negative attitude domain did not reach statistical significance in the IL + SP group (for the SP group, P = .01 and  < .001, respectively; for the IL + SP group, P = .15 and .01, respectively). For perceived causes of obesity, there were no statistically significant changes for pre–post survey measures within each group, except for the physiologic causes domain in the SP group (P = .03). The addition of an IL to a SP curriculum did not result in any changes for any domain in between-group analyses. CONCLUSIONS Although adding a novel intervention utilizing real patients to a SP curriculum failed to show an additional educational benefit, our study showed that it is possible to influence attitudes of medical students regarding patients with obesity. Introduction Over 40% of the American adult population has obesity. 1 Globally, it is estimated that 600 million adults are obese. 2rom a public health perspective, many believe that reversal of these trends will occur only with social and political regulation of the food industry and physical environment. 3Until that occurs however, obesity rates will continue to climb, and society will bear the burden of its heavy economic impact.The health profession will continue devoting considerable resources to the management of its comorbidities, and individuals will suffer relentless emotional and physical ramifications.Not only has obesity been characterized as a disease by the World Health Organization, the Canadian Medical Association, 4 and the American Medical Association, 5 but it is also considered a chronic, relapsing, metabolic condition. 6This concept is not without controversy however, and although they continue to evolve, insurance policies for evidence-based obesity treatments remain inconsistent in the United States. 5Future medical professionals therefore have a moral and ethical responsibility to possess at least basic competency in the management of patients who suffer the consequences of excess adiposity. Unfortunately, contemporary evidence suggests that clinicians lack awareness regarding the complexity of obesity and have little, if any, formal education or training in its treatment. 
7 survey of Canadian final-year medical students revealed a low level of knowledge and competence for managing patients with obesity. 8Excess weight has been shown to be a commonly and strongly stigmatized characteristic. 9Primary care physicians, medical residents and students, nurses, and other providers hold negative opinions, both explicit and implicit, towards patients with obesity. 10,11Bias against individuals with excess weight has been shown to be as pervasive among healthcare professionals as it is among the general public. 11ecently, international experts and scientific societies have formally called for efforts to end stigma and weight bias in academic institutions and professional organizations. 9Notably, weight bias in the healthcare system is associated with inferior health outcomes, 12 avoidance of care, 13 and less cancer screening. 14Breaking this cycle of stereotyping would therefore seem like a logical strategy in medical school and other professional curricula.A handful of studies have investigated interventions to combat discrimination in medical settings.Reports have consistently found that teaching students about the multifactorial etiology of obesity, particularly the contribution of genetic and environmental factors, was crucial to reducing measured weight bias. 15,16Various methods and strategies have been studied.Some specific evidence-based principles have been recommended for undergraduate medical education, such as brevity (<3 h) of interventions, the use of video-clips, supporting techniques to promote behavioral change, in-person contact, and teaching the pathophysiology of obesity. 17To date however, there is no standardized curriculum for educating the nation's medical students around the care and management of patients with obesity.It remains unclear which types of educational approaches or methodologies are most effective and specifically which characteristics of the exposure contribute to reducing medical students' negative feelings towards patients with excess weight. The objective of this study was to assess the effect of an innovative interactive lecture (IL) and a standardized patient (SP) exercise on attitudes of third-year medical students towards patients with obesity. Methods The current study is a prospective education intervention using a quasi-experimental design with 2 questionnaires to assess preand post-measures.Our hypothesis was that the addition of an IL to an established SP exercise would result in further benefit for changing attitudes towards patients with obesity. Sample In March 2016, at the University of California San Diego (UCSD), 122 medical students participating in a required primary care education module on the topic of obesity were eligible to participate.The study was approved by the Institutional Review Board (IRB) at UCSD.In accordance with IRB endorsement, students were advised prior to the sessions that participation was voluntary, that completion of the surveys represented consent to participate and that anonymity was maintained.Declining to participate was the only exclusion criterion.Surveys were delivered and collected by members of the research team.Questionnaires were de-identified and labeled with pre-determined codes. 
Intervention Third-year medical students at UCSD participate in a primary care core clerkship which includes monthly 4-h sessions (including time for orientations and breaks) on different topics commonly encountered in the primary care setting.One of those sessions is dedicated to obesity.Although the didactic structure can vary between modules, all sessions include small group meetings with 6 to 8 students and 2 primary care faculty members.Some of the topics utilize SPs, portrayed by professional actors, and trained by education faculty members, in the specific condition encountered.The obesity module is composed of a large class lecture and an SP session in small group format facilitated by the faculty.The 1-h lecture was created and delivered by a faculty member certified by the American Board of Obesity Medicine (ABOM) and included content addressing the biology and pathophysiology of obesity and evidence-based therapy recommendations, including lifestyle and behavior modification, pharmacotherapy, and bariatric surgery.Importantly, an innovative strategy during the presentation was the presence of 4 real patients treated with the interventions discussed in the lecture: one with behavior therapy and pharmacotherapy, one with behavior therapy and a low-calorie diet using meal replacements, and 2 with bariatric surgery.Students were also required to read on these topics before the session.The intent of the IL was not only to provide the patients' experiences relating to their respective therapies but also to foster an interactive environment whereby students could gain a better appreciation for the struggles, discrimination, and stigma individuals with obesity face in the healthcare system.The patients were unrehearsed.By combining the patients' experiences with biological underpinnings of obesity, the aim was to change anti-obesity attitudes.The patients were prompted to explicitly address this towards the end of the lecture.Because the use of SPs had been the standard education activity for several years, the introduction of the IL was considered the novel intervention to evaluate.The effectiveness of the SP experience was not empirically examined prior to this study. 
The SP encounter was created by an ABOM-certified faculty member and education professionals associated with medical teaching.The duration of the SP activity was 1-h and 45 min.The actors, all of whom had obesity, were trained to present with weight-related comorbidities commonly encountered in the primary care setting.Students volunteered 2 Journal of Medical Education and Curricular Development to interview the SP across 3 different simulated visits, each lasting 10 to 15 min.As standard practice throughout the course, they take turns playing the role of the primary care physician, interspersed with feedback and discussion from the faculty facilitators and other students.Although time allowed for only 3 or 4 volunteers, all students participate in role playing at least once throughout the year across the various disease modules.They were instructed to use motivational interviewing (MI) techniques to elicit behavior changes that promote weight reduction.All students had exposure to MI skills in previous curricular activities.The SP initially exhibits resistance to recommendations for weight loss interventions but is trained to respond to effective MI techniques.The actors were also trained to provide feedback to the students on communication skills, MI competence, and effectiveness in fostering confidence for behavior change.By design, the interaction reflects real-world experience and can be frustrating to some students. The class is normally divided into 2 groups, receiving the same education content on different days.Students are assigned to a specific day based on their clinical preceptorship schedules.Three days separate the sessions.The predetermined class assignment allowed for a quasi-experimental study design, as it was not practical to randomly assign students to specific class cohorts.The sequence of didactic activities was not thought to detract from their educational value.The first cohort received the SP exercise followed by the interactive patient lecture, with the pre-post assessment tools administered before and after the SP intervention only (SP group), considered the control group.The second cohort was provided the IL before the SP experience, with the assessment tools administered before the lecture and after the SP interaction.This group was considered the intervention group (IL + SP group).In the IL + SP group, the lecture was delivered before the SP activity for logistical convenience of distributing and collecting the questionnaires.For educational requirements, both groups received the same duration and content for each activity.It should therefore be noted that the IL + SP group had a longer total time of exposure between the pre-post surveys.Figure 1 shows the study design flowchart. Surveys and outcome measures To evaluate attitudes and biases towards obesity, students were administered 2 surveys: Attitudes about Treating Patients with Obesity (ATPO) 10,18 and Perceived Causes of Obesity (PCO) 11,19 before the beginning of the session (pretest), followed by a second set of the same surveys (posttest) at the corresponding time points.Both instruments were previously developed and validated to assess weight bias among healthcare trainees. 
10,19he 23-item ATPO and the 14-item PCO questionnaires, each using a 5-point Likert scale (1 = strongly disagree, 5 = strongly agree and 1 = not at all important, 5 = extremely important, respectively), were used to assess changes.Four items in the ATPO survey were discarded as they were not relevant to the responses being analyzed.The remaining 19 items in the ATPO are converted into 2 domains: negative attitudes (AT.N) about obesity and positive attitudes (AT.P) about obesity (Table 1).The items in the PCO questionnaires are converted into 3 domains: physiological causes (PC.P), behavioral causes (PC.B), and environmental causes (PC.E) (Table 2). 11,19A higher score indicates a greater attribution of the domain for causing obesity.(The actual questionnaires administered are available in the Supplemental material S1).These sub-scales have been used previously with acceptable scale reliability, 18 with slight modification for our study to better fit respective categories.We used comparison of responses to the ATPO to assess our primary outcome of interest, a change in attitudes among healthcare professionals treating individuals with obesity.The subjects of our study had experienced at least 9 months on inpatient wards and outpatient clinics interacting with patients.Assessing perceptions about the causes of obesity using the PCO measure was our secondary outcome. Demographic information (age, sex, race/ethnicity, and body mass index [BMI]) was collected to determine the degree, if any, of covariance. Statistical analysis Each domain of both surveys was analyzed using descriptive statistics including the mean and standard deviation (SD).Counts and percentages were used for categorical data.A Table 1.Survey measuring attitudes of students about treating patients with obesity 10,18 . Attitudes about treating patients with obesity 1.I often feel frustrated with patients who have obesity a 2. Patients with obesity can be difficult to deal with a 3.I feel that it is important to treat patients with obesity with compassion and respect b 4. I dislike treating patients with obesity a 5.I feel confident that I provide quality care to patients with obesity b 6.I feel professionally prepared to effectively treat patients with obesity b 7. I feel that patients with obesity are often non-compliant with treatment recommendations a 8.I feel that patients with obesity lack motivation to make lifestyle changes a 9. Treating patients with obesity is professionally rewarding b 10.Patients with obesity tend to be lazy a 11.Treating a patient with obesity is more frustrating than treating a patient without obesity a 12.I feel more irritated when I am treating a patient with obesity than a patient without obesity a 13.I feel disgust when treating a patient with obesity a 4 Journal of Medical Education and Curricular Development generalized estimating equations (GEE) model was used to assess the effect of the didactic sessions on attitudes using a time-based model (pre-time and post-time). 
20For those domains with a significant time by cohort interaction detected, we further examined differences between the 2 groups of medical students to test the effect of the IL intervention.To investigate whether any differences identified were the result of demographic variables, we included age, sex, race, and BMI as covariates in the model to assess whether the significance changed for time by cohort interaction.For each domain of items, α was set at 0.05.Effect sizes were estimated by calculating partial eta squared (η 2 ).Details are provided in Supplemental material S2.For missing data, the GEE analysis allows for the use of respondent data with at least one observation at pre-or post-intervention. To assess both the effect of the obesity module on survey measures in both groups and to assess the effect of the IL intervention on the treatment group, we used the following GEE model: where Time post is an indicator with a value of 1 for posttreatment and 0 otherwise, and Treatment 2 is the indicator with a value of 1 for the treatment (IL + SP) group and 0 for the control (SP) group.Pretreatment is the referent for Time and the control (SP) group is the referent for Treatment.This model controls for baseline differences between the groups. Hypothesis 1: We test the treatment effect for each group by: H 0 : β 1 = 0 (no treatment effect for the SP group) H 0 : β 1 + β 3 = 0 (no treatment effect for the IL + SP group) Hypothesis 2: We test whether the interactive lecture has an effect on the pre-post changes by The data were analyzed using "geepack" version 1.3-1 with R version 3.6.3(R Core Team). 21 Sample characteristics Considering the overall class size, 122 students were eligible; 73 in the SP group and 49 in the IL + SP group.Four chose not to participate, 1 student and 3 students in each group, respectively.The numerical imbalance between the 2 groups reflects the The items are divided into 3 domains: behavioral causes (PC.B), physiologic causes (PC.P), and environmental causes (PC.E).Students responded using a 5-point Likert scale ranging from ( 5) extremely important to (1) not at all important. Grunvald et al. distribution of availability based on their clinical preceptorship assignments.Overall, the demographic characteristics between the 2 groups were well balanced, with a mean age of 26.8 years, 44.1% female (n = 52), and normal BMI (Table 3).More than 40% (n = 52) of the students were Asian, approximately one-third (n = 35) were Caucasian, and the remainder were a composite of African American, Hispanic/Latino, or not specified.Subjects with missing data are tabulated in Table 4. 
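The analysis in the paper was fit in R with geepack; a roughly equivalent sketch in Python using statsmodels is shown below. The column names, the Gaussian family, and the exchangeable working correlation are assumptions for illustration (the paper does not state a working correlation structure), and the data frame is expected in long format with one row per student per time point.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def fit_gee(df: pd.DataFrame):
    """Fit the pre/post x cohort GEE model for one domain score.

    Assumed (hypothetical) columns:
      'score'   : domain score (e.g. AT.N) for one questionnaire domain
      'time'    : 0 = pre-test, 1 = post-test
      'group'   : 0 = SP (control), 1 = IL + SP (intervention)
      'student' : subject identifier, clustering the repeated measures
    """
    df = df.copy()
    df["time_x_group"] = df["time"] * df["group"]   # interaction term (beta3)

    model = smf.gee(
        "score ~ time + group + time_x_group",
        groups="student",
        data=df,
        family=sm.families.Gaussian(),
        cov_struct=sm.cov_struct.Exchangeable(),
    )
    res = model.fit()
    print(res.summary())

    # Hypothesis 1: treatment effect within each cohort.
    print(res.t_test("time = 0"))                   # SP group: H0 beta1 = 0
    print(res.t_test("time + time_x_group = 0"))    # IL + SP group: H0 beta1 + beta3 = 0
    # Hypothesis 2: does the interactive lecture change the pre-post difference?
    print(res.t_test("time_x_group = 0"))           # H0 beta3 = 0
    return res
```

Covariates (age, sex, race, BMI) can be added to the formula string in the same way to check whether the time-by-cohort interaction remains unchanged.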
Attitudes about treating patients with obesity Table 4 shows the results of the primary outcome measure scores.The obesity module resulted in a reduction of AT.N scores for both cohorts.Using our GEE model, the SP exercise was associated with a statistically significant effect of lowering negative attitudes (P = .01,partial η 2 = 0.009).However, the change in the IL + SP group did not result in statistical significance (P = .15).Both the SP and IL + SP interventions resulted in a statistically significant improvement in positive attitudes (P < .001,partial η 2 = 0.084 and P = .01,partial η 2 = 0.037, respectively).Figure 2A shows the effect of the interventions on the direction of change for negative and positive attitudes.Using our model, addition of the IL intervention did not result in any significant difference for changes in negative or positive attitudes between the 2 groups (P = .84and P = .47,respectively), shown in Table S3 in the Supplemental material.6 Journal of Medical Education and Curricular Development Perceived causes of obesity There was a reduction in the mean score for the physiologic causes domain in the SP group and an increase in the IL + SP group (Table 4), but only the former reached statistical significance (P = .03,partial η 2 = 0.005 and P = .67,respectively).Both groups demonstrated an increase in mean scores for the behavioral causes domain for the PCO questionnaire, but neither reached statistical significance.There was no significant change in the environmental causes domain for either group.When comparing the 2 groups for effect of adding the IL to SP intervention, there was no significant change for any of the domains (PC.B, P = .87;PC.E, P = .41;PC.P, P = .13;Table S3).Figure 2B shows the effect of the interventions on the direction of change for the PCO domains.Demographic covariates (age, sex, ethnicity, BMI) had no significant effect on changes within any of the domains for either questionnaire (data not shown). Discussion Our study sought to evaluate the effect of an IL format, added to a SP exercise, on changes in anti-obesity attitudes among third-year medical students.Although our investigation revealed that the use of SPs improved attitudes (an educational activity not previously tested in our course), our GEE model did not support our stated hypothesis.Both groups demonstrated statistically significant increases in positive attitudes and the SP group, but not the IL + SP group, a decrease in negative attitudes.Effect sizes were small for all estimates.For our secondary outcome measures regarding perceived causes of obesity, only the SP group demonstrated a statistically change (with a small effect size) in the domain of physiologic causes, but not in the other domains. 
To our knowledge, this is the first report describing an educational activity combining traditional content delivery with real patient experiences in an interactive format.The intervention of the IL did not, however, result in a significant difference in change for any of the domains in the between-group analyses.There are several possible explanations.First, it is possible that the SP component of the module was sufficient for this purpose using the measures in the present study.Second, it is also possible that the faculty facilitators in the SP small group sessions exerted a powerful influence on the items measured as they were trained by the obesity module leaders.Third, the students may have been "primed" by previous education.During the first 2 years of medical school, they were exposed to lectures on metabolism, human weight regulation, and genetics of obesity.Additionally, one of the required readings for the educational session in the present study informed students on the same concepts. 22Fourth, this sample of learners had a mean BMI in the normal category, which may have impacted the outcome measures.Finally, the unpredictable nature of unrehearsed real patients in medical education has been identified as a limitation. 23Our patients were unrehearsed, and it is possible that the self-perceptions of their own experiences were discordant with the intended goal of the lecture, namely the importance of uncontrollable contributors to obesity (neuro-enteroendocrine regulation of weight) and evidence-based therapies, potentially causing confusion and ambivalence among students. Others have shown that educational interventions can promote favorable attitudes towards patients with overweight and obesity. 16,24,25There are few reports in the literature specifically assessing the effectiveness of SPs for changing antiobesity attitudes, and the results are mixed.One study evaluated the correlation between the acceptance of negative attributes of individuals with obesity among learners and patient-centered behaviors using a simulated clinical scenario, but it was not an intervention study. 26Using SPs after an IL on counseling and behavior change in another study, students' attitudes on the utility of counseling did not improve, but in this project the focus was on nutrition counseling, without specific attention to weight reduction. 27One study similar to our use of SPs demonstrated a reduction of anti-obesity stereotyping and increased empathy, but there was no control group. 24ne issue of concern with all analyses using questionnaires is whether the correct tool is utilized to answer the main study question.The ATPO measure has been shown to have adequate scale reliability among cohorts similar to the ones used in the present study. 10,12Likewise, the PCO measure has been utilized previously with good reliability. 
11,19In our analysis, however, there was no effect of the educational activity on the learners' perceptions, with the exception of one domain in one cohort.The psychometric properties of this questionnaire in previous studies were based on populations that were very different than ours.The subscale categorization in our analysis was slightly modified, which needs to be recognized as a possible limitation.Moreover, the items in this tool were somewhat vague and may have reduced the validity with respect to the content delivered in the lecture.Knowledge regarding biological and genetic underpinnings of human weight regulation has advanced since development of this instrument and perhaps a more updated questionnaire would improve its psychometric properties. Although both groups demonstrated desirable changes in attitudes towards patients with obesity, effect sizes were small.It is not clear whether a different educational strategy or content would have resulted in a stronger quantitative change.It is also possible that these students had favorable baseline attitudes towards treating patients with obesity, with differential changes unlikely to be altered by only one brief intervention.In fact, previous work has also suggested that among medical professionals in training, there may be less weight bias in comparison to their instructors and more experienced peers in practice. 10Others have documented a high degree of stigmatizing attitudes among primary care Grunvald et al. physicians. 11It is unclear whether there is a true discordance between learners and their contemporary practicing counterparts and if so, whether this is a generational difference or a result of the refractory nature of weight management in real clinical settings. The discordance in the significance of change for the negative attitudes domain in the ATPO may be from lack of statistical power given the relatively small sample size, but we cannot rule out a negative impact of the IL on the SP activity due to temporal proximity of the 2 interventions, or vice versa.Although the IL did not demonstrate a significant change in the between-group analysis, our data suggest that, in contrast to the SP exercise alone, it may have exerted an undesirable effect on negative attitudes.We did not assess a group of students before and after the lecture without an SP activity, making it difficult to reach any conclusions regarding the effect of the lecture alone.Evaluating more longitudinal educational interventions throughout the course of the medical school curriculum and temporally separating divergent strategies may help isolate their effects on learners.Future studies should assess the value of using SPs compared to interventions that may require less cost and resources. Although the best educational intervention for reducing anti-obesity stigma and bias among students and trainees in healthcare professions remains to be identified, adequate attention for curriculum development remains very challenging. 28ecause obesity medicine is rarely covered on licensing and certification examinations, education and training programs have little incentive to prioritize obesity topics in already crowded curricula. 29,30Obesity education and training have been shown to improve confidence and competence for treating patients with excess weight. 31Recently, competencies have been developed for training programs, hopefully standardizing the development of obesity medicine education in medical schools. 
32ur study has limitations that should be recognized.First, our sample size was relatively small and limited to 1 year at 1 institution.A post-hoc power analysis was not performed due to its inherent limitations on analytical validity and reliability. 33uture studies should aim to span a longer time period and multiple medical schools to enhance validity, reliability, and generalizability.Second, the obesity module was of short duration and the longitudinal assessment measured only immediate effects.It is unclear whether the changes seen in attitudes are durable or whether they extinguish with time.Third, it is possible that the use of other measures would yield different results.Disparate findings have been documented in other studies. 16ourth, a quasi-experimental study design is not without significant limitations.Although this approach may have better internal and external validity than retrospective observational studies, we cannot rule out other unrecognized confounders contributing to our findings.For example, we did not confirm the presence or absence of contamination between the 2 groups.Lastly, it should be noted that many items in the ATPO survey addressed attitudes of practitioners treating people with obesity.Although students in the present study possessed at least 9 months of clinical clerkships interacting with patients, their relative lack of clinical experience may have impacted the external validity of this particular instrument. Conclusions Anti-obesity bias and stigma are major obstacles to the provision of high quality and effective clinical care.Medical schools should therefore develop education interventions to reduce negative attitudes towards patients with obesity.Our study adds to other work showing the positive impact of using SPs to this end.We can conclude that our educational exercise, using a trained SP, with or without the provision of content and context using a patient IL, resulted in the desired outcome of changing anti-obesity attitudes among medical students.Although a novel approach of combining a traditional lecture with an interactive patient panel did not add value using the measures chosen for this analysis, much more research is needed to find educational interventions that effectively and efficiently reduce weight bias among our future physicians. Figure 1 . Figure 1.Study design flowchart: A flow diagram of the educational sequence in both groups and time points of the pre-and post-surveys.A readiness assessment test (not a component of this study) and the pre-survey were administered during the hour prior to the start of the educational activity.There was a 15 min break between the 2 activities.The total duration of the session was 4 h. 
14. I feel indifferent to the obesity when I am treating a patient with obesity a
15. It is difficult to feel empathy for a patient with obesity a
16. Treating a patient with obesity is more emotionally draining than treating a patient without obesity a
17. Treating a patient with obesity is more stressful than treating a patient without obesity a
18. Treating a patient with obesity repulses me a
19. I would rather treat a patient without obesity than a patient with obesity a
20. Other practitioners who treat eating disorders often have negative stereotypes towards patients with obesity
21. I have heard/witnessed other professionals in my field make negative comments about patients with obesity
22. My colleagues tend to have negative attitudes toward patients with obesity
23. Practitioners feel uncomfortable when caring for patients with obesity

The items are divided into 2 domains: negative attitudes (AT.N) and positive attitudes (AT.P). Students responded using a 5-point Likert scale ranging from (5) strongly agree to (1) strongly disagree. Items 20-23 were excluded as they were not relevant to the outcomes being analyzed. The survey has been modified from the originally published version to reflect less stigmatizing language.

Students participating in a primary care clerkship obesity education session at a US medical school were divided to assess the effect of standardized patient and interactive lecture interventions. The SP group underwent pre- and post-assessments before and after the SP exercise only. The IL + SP group underwent pre- and post-assessments before and after the IL intervention and SP exercise. Abbreviations: AT.N and AT.P, negative and positive attitude domains, respectively, for the ATPO questionnaire; IL, interactive lecture; NA, not available; PC.B, PC.E, and PC.P, behavioral, environmental, and physiologic causes of obesity domains, respectively, for the PCO questionnaire; SP, standardized patient. Scores are reported as mean values (SD).

Figure 2. Changes in measures of anti-obesity attitudes and perceived causes of obesity among third-year medical students. Pre- and post-test results for the medical student class presented as composite scores, IL + SP group, and SP group. Panel 2A shows the results of the ATPO items and panel 2B shows the results for the PCO items. Asterisks mark changes in scores with P < .05. The results are reported as the mean scores with corresponding 95% confidence intervals.

Table 3. Characteristics of third-year medical students participating in a primary care clerkship obesity education session. Students were divided into 2 cohorts to assess the effect of standardized patient and interactive lecture interventions. Abbreviations: NA, not available.

Table 4. Changes in mean scores of survey measures among 2 cohorts of third-year medical students.
2023-10-18T15:07:53.403Z
2023-01-01T00:00:00.000
{ "year": 2023, "sha1": "7e73d893b3aad419ddc2d1a868ff9eee2415d75b", "oa_license": "CCBYNC", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "e6df2c57bdf13aa2797f1b085b9b88df9b0f4f98", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
10986970
pes2o/s2orc
v3-fos-license
Routine infant immunization with the 7- and 13-valent pneumococcal conjugate vaccines: current perspective on reduced-dosage regimens The 7 and 13-valent pneumococcal conjugate vaccines are mostly used in routine infant immunizations to prevent the development of pneumococcal disease. Currently, the dosing schedule approved and recommended for PCV7 and PCV13 in infants is 3 primary doses followed by a booster dose in the second year of life. However, a number of countries use a 2-dose only primary series with a booster dose in the second year of life. This review is aimed at providing the reader with a broad perspective on the currently available evidence which supports the clinical use of such reduced dosing schedules for the PCV7 and PCV13 vaccines. Recent evidence has been able to promulgate the immunogenicity and in some cases the effectiveness of the reduced dosing schedule for these vaccines. These findings may reduce costs as well as minimize supply and administration problems relating to the provision of the pneumococcal conjugated vaccines (PCVs). However, some caution is warranted since some inferior data have emerged with regards to the antibody immune response to certain pneumococcal serotypes following the implementation of such reduced dosing regimens. In addition, it is proposed that prospective surveillance be undertaken in all countries which have adopted the reduced-dosage immunization programs. This review may go some way in educating healthcare practitioners and healthcare policy decision makers at large. Introduction Streptococcus pneumoniae (S. pneumoniae) has been implicated as an important cause of otitis media, sinusitis, pneumonia, and invasive pneumococcal diseases (IPD) such as meningitis, bacteremia, and bacteremic pneumonia [1]. Streptococcus pneumoniae is a major cause of morbidity and mortality worldwide, in young children, individuals with chronic cardiopulmonary disease, the elderly, and immunocompromised individuals of all ages [2][3][4][5]. As a result, the prevention of pneumococcal disease is an important public health care goal. The 7 and 13-valent pneumococcal conjugate vaccines are used commonly in routine infant immunizations to prevent the development of pneumococcal disease. In the USA and in Europe, the 7-valent pneumococcal conjugate vaccine (PCV7) is licensed for use among infants. In Feb-ruary 2010, the Advisory Committee on Immunization Practices (ACIP) issued recommendations for the usage of a newly licensed 13-valent pneumococcal conjugate vaccine (PCV13) [6]. PCV13 contains the seven serotypes in PCV7 (4, 6B, 9V, 14, 18C, 19F, 23F) and six additional serotypes (1, 3, 5, 6A, 7F, 19A). PCV13 was licensed by European Medicines Agency (EMA) in the year 2009. This vaccine is approved for use in children aged between 6 weeks and 59 months and is considered by many health care practitioners as a successor to PCV7. There is much evidence to show that the development of conjugate vaccines and their adoption within routine childhood immunization programs has presented a major step forward in preventing invasive pneumococcal disease. Whilst this is an important advancement, there is still much discussion about the exact immunization schedules which should be followed to prevent the occurrence of invasive pneumococcal disease. In some countries, infants below 6 months of age receive their first primary dose of PCV7 or PCV13 followed by two additional primary doses of the same vaccine at intervals of approximately 2 months. 
These primary doses are then followed by a fourth booster dose in the child's second year of life [2]. In such countries, the ACIP adopted the manufacturers' recommended schedule since the pre-licensure development program for PCV7 did not include studies to assess the immunogenicity of a 2+1 reduced dosage schedule [6]. However, in a number of other countries, a reduced 2-dose schedule (2+1) for this vaccines has been adopted [7][8][9]. Recent evidence has been able to promulgate the effectiveness of the 2+1 reduced dosing schedule for PCV7. Similarly, current relevant reports indicate that a reduced 2+1 dosing schedule for PCV13 may also be effective. However, the number of reports indicating the latter is scarcer. This review is aimed at providing the reader with a perspective on the currently available evidence which supports the clinical use of reduced 2+1 dosing schedules for PCV7 as well as PCV13. To this end, an extensive systematic literature review was undertaken pertaining to the immunogenicity and effectiveness of the reduced-dosage regimens of PCV7 and PCV13. Information was collated from: expert-opinion articles located within EMBASE, PubMed and The Cochrane Library; additional information obtained from article reference lists; and from the Internet. This review is aimed at informing the reader about whether pneumococcal immunization schedules may be simplified and yet still ensure effective immunity. This issue has remained of interest to some medical practitioners since the routine childhood vaccination schedule in many countries is becoming increasingly crowded. The number of vaccine injections that infants must receive as well as the associated costs has slowed the immunization rate in many geographical areas [7]. Such knowledge may also be used in preventing a potential shortage in the supply of the pneumococcal conjugate vaccine. The latter has been shown to greatly influence the type of dosing regimen which is used [10][11][12][13]. This review may also go some way in defining vaccination policies which are used in areas of the world which have not yet introduced the vaccine routinely. Furthermore, it may also educate medical personnel who may be confused by seemingly conflicting advice relating to vaccine schedules. In addition, this article may also shed some light on the processes which have been used for evaluating the efficacy and safety of the pneumococcal vaccines. 7-valent pneumococcal conjugate vaccine (PCV7) Immunogenicity of the reduced-dose schedule in children A number of reports have been published which investigated the immunogenicity following a 2+1 reduced dosage regimen and compared the results with a 3+1 dosage schedule for PCV7. The studies are summarized in Table I. Rennels et al. assessed the IgG antibody titer response when healthy 2-month-old infants were immunized with the PCV7 vaccine [16]. Employing an open-label, self-controlled study with a cohort study group of 212 subjects, the investigators showed that serotype-specific IgG GMC values for serotypes 6B, 9B and 23F were lower for the abbreviated PCV7 schedule compared to after three primary doses. Ekström et al. employed an open-label, self-controlled study design (n = 56) to assess serotypespecific IgG GMC values after two and three primary doses of PCV7, respectively [17]. They reported that antibody responses following two doses were generally lower than after three doses for serotypes 6B and 23F. 
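The immunogenicity comparisons above rest on two standard quantities: the serotype-specific geometric mean concentration (GMC) and the proportion of infants reaching the WHO reference threshold of 0.35 μg/mL. A small illustrative calculation is sketched below; the titre values are hypothetical and are not drawn from any of the studies cited here.

```python
import numpy as np

def gmc_and_response_rate(igg_ug_ml, threshold=0.35):
    """Geometric mean concentration of serotype-specific IgG, its approximate
    95% CI (log-scale normal approximation), and the proportion >= threshold."""
    x = np.asarray(igg_ug_ml, dtype=float)
    log_mean = np.mean(np.log(x))
    gmc = np.exp(log_mean)
    se = np.std(np.log(x), ddof=1) / np.sqrt(len(x))
    ci = (np.exp(log_mean - 1.96 * se), np.exp(log_mean + 1.96 * se))
    return gmc, ci, np.mean(x >= threshold)

# Hypothetical post-primary titres (ug/mL) for one serotype in a 2-dose and a 3-dose arm.
two_dose   = [0.21, 0.55, 1.30, 0.40, 2.10, 0.18, 0.95, 0.60]
three_dose = [0.80, 1.60, 2.40, 0.55, 3.10, 0.70, 1.20, 0.90]
for label, arm in (("2 + 1", two_dose), ("3 + 1", three_dose)):
    gmc, ci, frac = gmc_and_response_rate(arm)
    print(f"{label}: GMC = {gmc:.2f} ug/mL (95% CI {ci[0]:.2f}-{ci[1]:.2f}), "
          f"proportion >= 0.35 ug/mL = {frac:.0%}")
```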
The immunogenicity and safety of PCV7 used in a 2+1 immunization schedule (3, 5, 11 months) were also studied in pre-term infants (PT). Esposito et al. conducted a study in which 92 infants were enrolled in an open-label, uncontrolled clinical study [18]. An antibody titer concentration ≥ 1.0 μg/ml was usually reached after the third dose. Importantly, by comparing previously published results obtained with the 4-dose schedule, the authors were able to show that the reduced 3-dose schedule was comparable in terms of immunogenicity [16,[19][20][21][22]. In 2005 Käyhty et al. published the results of a study which aimed at assessing the immunogenicity of PCV7 [23]. The primary vaccination con-sisted of 2 doses (administered at 3 and 5 months of age) and a third booster dose given at 12 months of age. Käyhty et al. concluded that PCV7 was immunogenic when given in the abbreviated schedule. Importantly, the results suggested that the pneumococcal antibody concentrations following primary as well as booster doses were comparable to the results which were obtained with the 4-dose schedule. Russell et al. investigated several pneumococcal vaccination strategies for resource-poor countries using a randomized controlled clinical study [24][25][26]. The cohort group consisted of healthy Fijian infants and the investigators showed that two primary doses of PCV7 achieved GMC levels which were lower for serotypes 6B, 14, and 23F compared to the 3-dose primary schedule. However, the investigators reported that this difference was small. In 2010, the same authors showed that the immune response towards all serotypes following a 2 or 3 primary series was not statistically different but that again the immune responses were lower for serotypes 6B, 14, and 23F following the abbreviated immunization schedule. In an open-label, uncontrolled study, Rodenburg et al. evaluated the immunogenicity of PCV7 following a 2+1 or 3+1 dosing schedule [27]. The authors reported that for serotypes 6B and 19F, significantly lower antibody levels were reported for the reduced dosage regimen compared to the 3+1 dosing schedule. By undertaking various open-label uncontrolled clinical studies, Goldblatt et al. investigated the immunogenicity of 2+1 PCV7 dosing schedules [28]. The authors showed that the immune response for serotypes 6B and 23F was lower for the abbreviated PCV7 schedule compared to subjects who had received 3 primary doses. Givon-Lavi et al. used randomized controlled trials to design and compare the immune response in healthy infants following a two-dose and a threedose primary series [7]. The proportion of subjects enrolled in the study who received post-primary serotype specific IgG antibody concentrations ≥ 0.35 μg/ml was significantly greater after three primary doses for serotypes 6B, 14, 18C and 23F compared to two primary doses. Post-booster analysis further revealed that serotype-specific IgG GMC values were significantly greater for serotypes 6B, 18C and 23F in the 3+1 group compared to those in the 2+1 group. For all the studies mentioned above, the immune response following primary doses for the 2+1 or 3+1 PCV7 dosing schedules were generally similar for serotypes 4, 9V, 14, 18C and 19F. However, nearly all of the literature indicated that post-dose two response rates, as well as IgG GMC values, for serotypes 6B and 23F were significantly lower compared with post-dose three responses. 
Furthermore, an analysis of antibody values following the booster dose generally indicates that the abbreviated dosing schedule of PCV7 achieves comparable antibody levels to those achieved with a 3+1 dosing schedule. Effectiveness of the reduced-dose schedule (2+1) in children There is compelling evidence which confirms the clinical efficacy of the reduced dosage schedule in the USA, Canada and Europe. This section of the review presents a summary of the reports which document the effectiveness of the reduced-dosage regimen (2+1). The studies are summarized in Table II. In Canada, a 7-valent pneumococcal conjugate vaccine was licensed for use in 2001. A case-control study on vaccine effectiveness (VE) against IPD was conducted. The investigators reported that in Canada the effectiveness of the 2+1 dosing schedule for PCV7 was 100%, which is similar to the 3+1 dosing schedule for the same vaccine [29,30]. The effectiveness of the reduced-dosage 2+1 regimen for PCV7 was estimated in a retrospective matched case-control study [31]. The investigators determined that the abbreviated 2+1 PCV7 dosage regimen resulted in 98% effective immunity against vaccine serotype-related invasive pneumococcal disease. The authors concluded that this result was comparable to that obtained with a four-dose PCV7 vaccination series. In Denmark, the Danish Childhood Vaccination Registry used surveillance and vaccine uptake data to estimate the effectiveness of the reduced-dosage schedule for PCV7 [32]. The authors reported that the administration of PCV7 was followed by a marked decline in the incidence of IPD in both vaccinated and non-vaccinated individuals. The results were comparable to those previously obtained with the 3+1 dosing regimen for PCV7. A government-sponsored reduced-dose PCV7 vaccination program was also introduced in Italy [33,34]. Statistically significant declines were seen in all-cause pneumonia, pneumococcal pneumonia, and otitis media between the cohorts. The observed significant reduction in pneumococcal disease was non-inferior to that observed with the 3+1 dosing schedule [35][36][37]. In 2006, a compulsory, free-of-charge, reduceddose PCV7 vaccination program was introduced in Poland. Patrzałek et al. investigated the influence of pneumococcal vaccination on the radiologically confirmed pneumonia admission rates in the hospital [8]. The results compare well with the 3+1 PCV7 dosing schedule [35][36][37]. The PCV7 vaccine was introduced into the routine childhood immunization program of the UK in 2006. The vaccine is given as a 3-dose schedule at 2, 4 and 13 months of age. Since the vaccine was introduced, there has been a marked reduction in the rate of cumulative increase of IPD cases caused by the 7 serotypes in PCV7 [38]. Whilst the early estimate of VE was 84%, the paper highlighted that protection against serotype 6B was reduced and serotype replacement was also evident. The investigators also indicated that VE of the reduceddosage schedule for PCV7 is expected to increase once the impact of the booster dose is taken into consideration. The Belgian national immunization program began in 2007 with a 2, 4, 12 vaccination schedule [39]. Surveillance reports compiled by the Belgium Public Institute of Health indicate that the incidence of vaccine serotype-related IPD in children < 2 years of age decreased by 86% [16]. In Norway, a national immunization program for PCV7 was implemented in July 2006. The immunization program follows a 3-dose schedule given at 3, 5 and 12 months of age. 
Cases of IPD were surveyed and Vestrheim et al. published the results of this surveillance and reported a decrease in IPD in all age groups under the age of 5 years [40]. Furthermore, the incidence of IPD and vaccine serotype IPD was reported to have declined significantly in almost all age groups. Notably, the effectiveness of the vaccination program in children aged < 2 years was 74% [40][41][42]. The studies mentioned above indicate that the 2+1 dosing schedule for PCV7 is effective in immunizing against invasive and non-invasive pneumococcal disease. This is especially the case in countries which are characterized by good primary series uptake, compliance, and implementation of a catchup program in the older infant or children population groups. Undoubtedly, long-term prospective surveillance programs need to be maintained in order to determine the long-term beneficial effects of the reduced-dose PCV7 schedule. 13-valent pneumococcal conjugate vaccine (PCV13) Immunogenicity data from the PCV13 clinical development program For the 7 common serotypes in PCV13 and PCV7 (4, 6B, 9V, 14, 18C, 19F, 23F), it is already known that PCV13 is comparable to PCV7 when administered in a 3+1 dosing schedule [43]. Since PCV7 has been documented to be effective when given in a 2+1 dosing schedule, a few studies have compared the immunogenicity data when PCV13 and PCV7 are given in accordance with this dosing regimen [8,9,12,19]. To the best of our knowledge, only two non-inferiority clinical trials have been undertaken in order to assess the immunogenicity responses following a 2+1 dosing schedule of the PCV13 and PCV7 vaccines [44]. For the 2 primary doses of PCV13, slightly lower polysaccharide-binding antibody titer concentrations were obtained for serotypes 6B and 23F, whilst the immune response for the remaining 5 serotypes was comparable between the two vaccines. For the 7 common serotypes in PCV13 and PCV7, similar immune responses were obtained following completion of the 2 primary doses and the booster dose. Thus, it is tentatively expected that, for the 7 common serotypes, the clinical efficacy of PCV7 and PCV13 will be similar. For the 6 additional serotypes, the percentage of infants achieving a clinically effective antibody threshold concentration ≥ 0.35 μg/ml after the second dose ranged from 79.2% to 98.5%. Post-booster antibody GMC levels in a 2+1 schedule were comparable to those achieved with a 3+1 schedule. These results indicate that PCV13 can be given safely and effectively in a reduced-dosage schedule. This strategy could provide protection against a broader spectrum of pneumococcal serotypes as well as improving herd immunity. Conclusions Some countries have adopted the 3+1 dosing schedule for PCV7. However, as shown above, clinical data are now available which demonstrate that PCV7 can be safely and effectively administered in a reduced 2+1 dosing regimen. With regards to PCV13, the manufacturer's recommended dosing schedule is also 3+1. The immunogenicity data which have been obtained for PCV13 following two primary doses as well as after a third booster dose tentatively indicate that this vaccine may be administered safely and effectively in a reduced-dosage schedule. We propose that administering the PCV in a reduced-dosage regimen is advantageous to the alternative 3+1 dosing regimen and that indeed this should be adopted by the countries who have implemented the latter regime. 
However, some caution is warranted, since data indicating inferior antibody responses to certain pneumococcal serotypes have emerged. The reduced-dosage regimen may go some way towards reducing costs as well as minimizing supply and administration problems relating to the provision of the pneumococcal conjugate vaccines. Nonetheless, caution is also needed because a reduced dosing regimen could result in compliance failures having a larger than expected effect on population immunity; such concerns relating to compliance need to be investigated thoroughly in order to reach firm conclusions. Furthermore, prospective continued surveillance of the occurrence of invasive pneumococcal disease should take place in all countries which have adopted PCV immunization programs, in order to fully clarify the clinical influence of the PCV reduced-dosage regimen on pneumococcal-induced morbidity and mortality. This would allow for the detection of any increase in the rate of vaccine failure with either regimen and may in turn inform medical practitioners and healthcare policy decision makers.

employed by Pfizer Poland. The authors thank Proper Medical Writing (infrared group s.c.) for technical and language assistance in the preparation of this paper. Proper Medical Writing was sponsored by Pfizer Poland.

References
Initial performance of pineapple and utilization of rock phosphate applied in combination with organic compounds to leaf axils

Rock phosphates have low solubility in water and are more soluble in acidic media. The use of organic compounds together with these phosphorus sources, applied to the basal leaf axil of pineapple plants, could enhance the solubilization of this phosphate source and increase the availability of phosphorus to the crop; assessing this was the objective of the present study. A greenhouse experiment was conducted in which Araxá rock phosphate (10 g) was applied, combined or not with solutions containing increasing concentrations of humic acid (0 to 40 mmol L-1 of carbon), in the presence or absence of citric acid (0.005 mmol L-1), to the basal leaf axils of pineapple plants. Growth and nutritional characteristics of the aerial part of the plants were subsequently measured. The results showed that the growth indices of the aerial part, as well as the contents of N, P, K, Ca and Mg, increased in a curvilinear fashion as a function of the carbon concentrations supplied in the form of humic acids, with the maximum values observed at a concentration of 9.3 mmol L-1 of carbon combined with 0.005 mmol L-1 citric acid.

INTRODUCTION

Pineapple is a tropical fruit very much appreciated throughout the world. Thailand, Brazil, the Philippines, China and India are the main producing countries, with 2.70, 2.48, 1.83, 1.40, and 1.23 million tonnes, respectively (FAO, 2010). Natural phosphates provide phosphorus at a lower cost per unit of P; therefore, studies on the use of rock phosphates in pineapple fertilization can make these P sources economically attractive to the production process (Teixeira et al., 2002). Notably, the advanced weathering of soils in tropical regions tends to favor the specific adsorption and/or precipitation of phosphates, making this nutrient often limiting for plant development (Leal & Velloso, 1973; Novais & Smyth, 1999). Because the soil sink for P tends to be much higher than the plant sink, it is recommended that, whatever the source of soluble phosphorus, its use should be managed in a localized manner (Novais & Smyth, 1999). In pineapple, fertilizers can be applied to the soil and to basal leaf axils (Teixeira et al., 2002). Although rock phosphates have low solubility in water, their solubility increases with acidity. The use of rock phosphates together with humic and citric acids in the basal leaf axils of pineapple, with some leaking into the soil, can enhance P solubility and availability to the crop. Treating natural rock phosphates with organic acids of low molecular weight is a common practice to increase phosphorus availability to plants (Kpomblekou & Tabatabai, 2003; Busato et al., 2005).
Humic acids have been designated as supramolecular aggregates (Piccolo, 2001), forming clusters of heterogeneous organic compounds of low molecular weight, containing predominantly hydrophilic (fulvic acids) or hydrophilic/hydrophobic (humic acids) domains.In natural systems, there is a mixture of these domains.Such aggregates are maintained in solution by hydrogen bonds and hydrophobic interactions which, alone, are weak, but, together, can provide structure to these substances and thus result in a just apparent high molecular mass.When, operationally, ionization is promoted using alkali extractants, both groups are dissolved, whereas acidification provides precipitation of the so-called humic acids only, which are less polar than fulvic acids.Additionally, acidification of humic acids solution with citric acid may influence the structure and conformation of the supramolecular arrangement of these humic substances, with a relative disintegration of these clusters, increasing their reactivity (Piccolo et al. 1996a, b;Piccolo, 2001;Simpson, 2001).Sposito (2008) emphasizes that humic substances, designated as supramolecules by Piccolo (2001), would have the properties of biomolecules from which they are derived: fragments that form an integral or labile part of the molecular architecture and thus control their conformation, chemical reactivity and bioactivity.In this context, synergistic effects of humic substances on plant growth have been demonstrated and, in a general way, the hypothesis of a hormonal auxinic effect has been proposed.(Nardi et al., 2002;Canellas et al., 2002Canellas et al., , 2008a, b;, b;Zandonadi, 2006;and Zandonadi et al., 2007).Baldotto et al. (2009) reported a biostimulating effect from foliar application of humic acids, isolated from vermicompost and filter cake, on pineapple in vitro plantlets during the acclimatization period, resulting in increases in N, P, K, Ca, and Mg contents and growth of aerial parts and root system.Canellas et al. (2008b) studied the exudation profile of plants treated with humic substances and observed that the presence of citric acid, as well as malic, tartaric, and oxalic acids, increased in exudates of corn roots.Chromatographic analysis of these solutions revealed the presence of low-molecular-weight substances only, indicating the separation of the humic acids aggregate that was initially applied and corroborating previous results obtained by Piccolo et al. (1996a, b), Piccolo (2001) and Simpson (2001).These findings confirm the biomolecules' provider property conceptualized by Sposito (2008) for humic acids, and also as biostimulants, which was as well observed by Nardi et al. (2002), Canellas et al. (2002Canellas et al. ( , 2008a, b) , b) Zandonadi (2006) and Zandonadi et al. (2007) and Canellas et al. (2008a).Canellas et al. (2008b) found that adding citric acid and humic acids to the growth of maize seedlings resulted in higher biostimulation than the control containing only humic substances, in which the concentration of 0.005 mmol L -1 gave the largest increases.These increases were attributed to the relation between the conformation and structure of humic acids, which was included in the conceptual model of Piccolo (2001).Busato et al. 
(2005) found that rock phosphate solubilization in solutions containing humic acids increases with decrease in pH (from 7 to 5).Giro (2008 a, b) reported that the addition of citric acid resulted in increase in Araxá phosphate solubilization with solutions of humic acids, since it is expected a higher exposure of dissociable acid groups (Piccolo, 2001;Sposito, 2008).The results also show that the effect of adding citric acid to the solution of humic acids varies quadratically with the dose, in which 0.005 mmol L -1 gave the maximum response.This concentration of citric acid, in the absence of humic acids, did not provide differences in solubilization of rock phosphates compared with the control treated with distilled water.Thus, the combined application of humic acids and citric acid with natural phosphate rocks to the axils of pineapple can promote greater reactivity of humic acids and increased P solubilization.Simultaneously, the combination of these substances can promote physiological stimulation, with auxinic effect, by the release of biomolecules preserved in the supramolecular arrangement of humic acids. Accordingly, the present study aimed to evaluate the initial performance of pineapple in response to fertilization with Araxá rock phosphate combined with solutions containing increasing concentrations of humic acids (0 to 40 mmol L -1 of C) in the presence or absence of citric acid (0.005 mmol L -1 ) applied to basal leaf axils of pineapple cv.Pérola. MATERIAL AND METHODS A greenhouse experiment was conducted at the Norte Fluminense Darcy Ribeiro University, UENF, Campos dos Goytacazes, RJ, Brazil.The treatments were arranged in a 3 +3 +1 +1 Baconian matrix, including: Araxá rock phosphate combined with three concentrations of humic acids, with addition or not of citric acid, Araxá rock phosphate only, and control (Table 1). The experiment was conducted in a randomized block design with six replicates.Fertilization, irrigation and other factors were controlled and kept constant in all treatments, according to recommendations by Novais et al. (1991). Of the eight treatments, seven received 10 g of rock phosphate (Araxá phosphate), with total P concentration equivalent to 240 g kg -1 of P 2 O 5 and one was used as control.The plants were treated with solutions containing concentrations of 10, 20 and 40 mmol L -1 of C in the form of vermicompost-derived humic acids (HA) (Baldotto et al., 2009), with approximately 5 g kg -1 of C, combined or not with citric acid (CA) (C 6 H 8 O 7 .H 2 O) at a concentration of 0.005 mmol L -1 (Canellas et al., 2008b).The vermicompost-derived humic acid was previously isolated and characterized by Baldotto et al. (2007) and citric acid was a pure analytical reagent.Slips of pineapple (Ananas comosus (L) Merril) cultivar Pérola, were previously immersed in solutions containing organic acids of each treatment for 24 hours (Table 1).Immediately after planting, rock phosphate was applied to the axils, and 100 mL of the same solutions of organic acids were applied to the basal leaf axils. At 45 days after planting, the following variables were measured: plant height (PH); length of the "D" leaf, which is inserted at an angle of 45 degrees to the stem, between the ground level and an imaginary axis through the center of the plant (LD); width of the middle one-third of "D" leaf (WD); rosette diameter (RD) and diameter at base (BD), using a caliper; and leaf number (LN).Leaf area (LA) was estimated by image analysis using a 3100 LI-COR meter. 
Plants were cut close to ground level and the aerial plant part was weighed to obtain fresh matter (FM) and then dried in a forced-air oven at 60 °C to a constant weight to determine dry matter (DM). DM was subjected to sulfuric digestion combined with hydrogen peroxide to determine total N, P, K, Ca and Mg. N was determined by the Nessler method. P content was obtained by molecular absorption spectrophotometry (colorimetry) after reaction with ascorbic acid (vitamin C) and ammonium molybdate, at 725 nm; K was determined by flame photometry; and Ca and Mg were measured by atomic absorption spectrophotometry. All determinations were performed according to the usual methods for the pineapple crop of the Mineral Nutrition Sector at the Plant Science Laboratory, UENF (Ramos, 2006). Nutrient contents were calculated by multiplying the dry weight of the aerial part by the respective nutrient concentration. Data were examined by analysis of variance, and the effects of qualitative factors were decomposed into mean contrasts, according to Alvarez V. & Alvarez (2006). Table 2 shows the coefficients of the contrasts studied. Quantitative factors were studied by regression analysis. Regression equations were fitted relating the mean values of each variable to the humic acid concentrations, combined with phosphate, with or without citric acid. The F test was applied to the decomposed factors at the 10, 5 and 1% probability levels (Steel & Torrie, 1960). Models from the regression analysis were selected when determination coefficients were above 0.60 (R2 > 0.60). The regression equations for pineapple dry matter were used to determine the concentrations of maximum physical efficiency.

Growth characteristics

Overall, there were effects of treatments on growth and mass accumulation of pineapple compared with the control (Tables 3 and 4), except for leaf number (LN). For plant height (PH), length and width of the "D" leaf (LD and WD), rosette and base diameters (RD and BD), and fresh matter, dry matter and leaf area (FM, DM and LA) of the aerial parts, the increases were 16, 9, 12, 17, 17, 32, 29 and 32%, respectively (Table 4). The decomposition of treatment effects showed that there was no significant DM increase for the combination of phosphate and humic acids compared with the control (control versus NF + HA), but with the addition of citric acid the effect was significant and resulted in a 40% DM increase over the control (control versus NF + HA + CA) and approximately 16% over the combination of phosphate and humic acids (NF + HA versus NF + HA + CA) (Table 4). For leaf area, the effect of the combination of phosphate and humic acids was positive, at 25% over the control (control versus NF + HA), increasing to 42% with the addition of citric acid (control versus NF + HA + CA). The use of phosphate with humic acids was not superior to phosphate alone (NF versus NF + HA), but in the presence of citric acid (NF versus NF + HA + CA) the effect was 19% higher. The comparison of rock phosphate combined with humic acids with or without citric acid (NF + HA versus NF + HA + CA) showed that the latter increased pineapple leaf area by 14% (Table 4). In general, the growth characteristics of pineapple showed curvilinear increases, following square-root or quadratic models, as a function of the increasing concentrations of humic acids combined with rock phosphate.
The maximum DM accumulation in the aerial parts was 9.6 g per plant for the combination of phosphate and humic acids (NF + HA) and 13.2 g per plant for the same combination plus citric acid (NF + HA + CA) (Table 6). The results also show that the greatest DM accumulation with citric acid occurred at a much lower concentration of humic acids: 9.3 mmol L-1 of C in the form of humic acids, relative to 18.8 mmol L-1 of C for the treatments without citric acid. Table 7 shows the variables of the pineapple aerial parts at the concentration of humic acids of maximum physical efficiency (peak of dry matter). These values were estimated by assigning to the independent variable (x) of the regression equations in Table 5 the concentrations given in Table 6. These results express the other pineapple growth characteristics for the two main treatment groups, i.e., phosphate combined with humic acids, with and without citric acid, when the dry matter was highest for these treatments. Hence, they allow the assessment of the other plant characteristics in response to both treatments under the condition of highest reserve accumulation by the pineapple plants (Table 7).

Nutritional composition

N, P, K, Ca and Mg contents in the pineapple aerial parts varied significantly in response to the application of rock phosphate combined with humic acids, with or without citric acid (Tables 8 and 9). Table 9 shows increases in the absorption and accumulation of all nutrients for the treatments compared with the control (control versus factorial). For P, the focus of this study, for example, the mean of the treatments was 3.09 mg per plant, 43% greater than the control. The effect of the combination of phosphate with humic acids and citric acid compared with the control (control versus NF + HA + CA) was higher than that obtained with natural phosphate alone (control versus NF) or with rock phosphate and humic acids (control versus NF + HA), reaching an increase of 4.14 mg P per plant, corresponding to a 58% P increase over the control. Absorption and accumulation of the other nutrients in response to the treatments compared with the control (control versus factorial) was also significant, with increases of 22, 20, 29 and 51% for N, K, Ca and Mg, respectively. Again, on average, citric acid significantly increased nutrient absorption, with increases of 28, 28, 36 and 61% for N, K, Ca and Mg, respectively, compared with the control (control versus NF + HA + CA). Similar to what was found for the production of DM of the aerial parts, the response curves for P contents at different concentrations of humic acids, combined or not with citric acid, resulted in different concentrations at the maximum point of the regression functions (Table 10). The maximum P accumulation points in response to the concentration of humic acids combined with rock phosphate were 20.8 and 9.1 mmol L-1 of C in the form of humic acids, without and with citric acid, respectively. These values are very close to the 18.8 and 9.3 mmol L-1 of C of maximum production of DM of the aerial parts in response to the same treatments (Table 11). The maximum P contents were 8.39 and 12.37 mg per plant at these concentrations of humic acids, without and with citric acid, respectively. Thus, citric acid promoted a 47% increase in P content under the condition of maximum efficiency.
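The concentrations of maximum physical efficiency quoted above (9.3 and 18.8 mmol L-1 of C for dry matter, 9.1 and 20.8 mmol L-1 of C for P content) are obtained by setting the first derivative of the fitted regression equation to zero, as noted in the footnotes to Tables 6 and 10. A minimal sketch of that step for a quadratic model is given below; the dry-matter values used are hypothetical stand-ins, since the fitted coefficients of Table 5 are not reproduced here.

```python
import numpy as np

# Hypothetical dry-matter means (g/plant) at the humic acid concentrations used (mmol L-1 of C)
x = np.array([0.0, 10.0, 20.0, 40.0])
y = np.array([8.1, 12.9, 13.1, 9.0])   # illustrative values only

# Fit a quadratic model y = c0 + c1*x + c2*x**2, as done for the growth variables
c2, c1, c0 = np.polyfit(x, y, deg=2)

# Maximum physical efficiency: dy/dx = c1 + 2*c2*x = 0  =>  x* = -c1 / (2*c2)
x_mpe = -c1 / (2.0 * c2)
y_max = c0 + c1 * x_mpe + c2 * x_mpe**2

# Coefficient of determination, used as the model-selection criterion (R^2 > 0.60)
y_hat = np.polyval([c2, c1, c0], x)
r2 = 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - np.mean(y)) ** 2)

print(f"x at maximum physical efficiency: {x_mpe:.1f} mmol L-1 of C")
print(f"estimated maximum dry matter: {y_max:.1f} g/plant (R^2 = {r2:.2f})")
```

The same vertex calculation applies to the P-content regressions of Table 10, which is why the maximum points for dry matter and for P accumulation fall at similar humic acid concentrations.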
DISCUSSION

The best growth performance and nutritional status of the pineapple plants occurred in the following order of treatments: control < rock phosphate ~ rock phosphate with humic acids < rock phosphate combined with humic acids and citric acid. The isolated use of rock phosphate resulted in better initial performance than the control, but the amount of phosphate accumulated with the application of rock phosphate alone was inferior to that obtained when it was combined with organic acids. Thus the increase in P contents, alongside the higher growth and higher mass of the pineapple plants found for the combination of phosphate with humic acids, especially in the presence of citric acid, reflects greater absorption and accumulation of this nutrient. Such utilization can be attributed to the increased solubility of P when it is applied together with organic acids. It is assumed that this increased solubilization may be caused by: i) the proton supply by the organic acids; ii) the complexation of Ca2+ by organic ligands; and iii) the physiological stimulation of the plants, i.e., the bioactivity of humic acids, which may have increased the efficiency of plant P transporters.

The following reaction represents the acid dissolution of fluorapatite:

Ca10(PO4)6F2(s) + 12 H+(aq) = 10 Ca2+(aq) + 6 H2PO4-(aq) + 2 F-(aq)   (Equation 1)

Equation 1 shows the solubilization of P from rock phosphate on the basis of items i and ii above. In the equation, proton consumption and Ca2+ solubilization can be related to the acidity of the functional groups of the humic acids and, simultaneously, by cation exchange, to the complexation at the surface of the Ca2+ solubilized by the negatively charged ligand derived from the dissociation. Equation 2 exemplifies this exchange/complexation reaction:

SH2(s/aq) + Ca2+(aq) = S2Ca(s/aq) + 2 H+(aq)   (Equation 2)

where S represents two moles of charge of the organic acids' functional groups. Overall, in this study, carboxyl functional groups tend to predominate as sources of protons, both in the organic acids added and in those that may have been exuded by the plants. The pK values for these functional groups are consistent with the expected dissociation under the experimental conditions of this study. Observing Equations 1 and 2, we can see that protons are reactants and Ca2+ is a product; that is, the removal of Ca2+ by complexation with the ligand and the proton ionization are simultaneous and, according to Sposito (2008), occur rapidly. Thus, these facts are in accordance with the argument in iii, since, when stimulated, the plants tend to absorb more P and Ca and hence, by removing products, increase the solubilization of the remaining rock phosphate. Cation absorption stimulates the extrusion of H+, increasing the acidity on the reactant side of Equation 1. It is expected that, by the same mechanism of stimulation attributed to humic acids, namely the activation of cation uptake and the extrusion of protons (increasing acidity), the treatments resulted in additional increases in P solubility. Additionally, the efficiency of P transporters could have increased, resulting in increased uptake by the plants (Canellas et al., 2002; Canellas & Façanha, 2004; Canellas et al., 2006; Canellas et al., 2008; Zandonadi et al., 2007).
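Equation 1 also fixes the stoichiometry of the dissolution: each mole of fluorapatite consumes 12 mol of protons and releases 6 mol of phosphate. The sketch below works out this mass balance for a given amount of apatite; the assumed apatite fraction of the applied Araxá phosphate is a hypothetical figure chosen only to illustrate the calculation, not a measured property of the material.

```python
# Molar masses (g/mol)
M_FLUORAPATITE = 1008.6   # Ca10(PO4)6F2, approximate
M_P = 30.97

def dissolution_demand(apatite_g):
    """Proton demand and P release for full dissolution according to Equation 1."""
    n_apatite = apatite_g / M_FLUORAPATITE   # mol of Ca10(PO4)6F2
    n_protons = 12 * n_apatite               # mol of H+ consumed
    n_phosphate = 6 * n_apatite              # mol of H2PO4- released
    return n_protons, n_phosphate, n_phosphate * M_P

# Hypothetical example: assume 2 g of the 10 g Araxa phosphate dose is apatite
h_plus, p_mol, p_mass = dissolution_demand(2.0)
print(f"H+ required: {h_plus * 1000:.1f} mmol")
print(f"P released:  {p_mol * 1000:.1f} mmol ({p_mass * 1000:.0f} mg of P)")
```

Seen this way, the small additions of citric acid act less as a bulk proton source than as a trigger that exposes acidic groups of the humic acids and keeps Ca2+ complexed, shifting the equilibrium of Equation 1 toward dissolution.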
According to the above discussed, the addition of low concentrations of organic acids tend to diversify and increase the activity of humic acids' components in solution.This therefore leads to an enhanced acid, complexing and biostimulating strength of humic acids and hence explain the positive effect of low concentrations of citric acid (0.005 mmol L -1 ) combined with humic acids on P solubility, resulting in increased accumulation of P and dry matter in plants.Canellas et al. (2008b) reported that the presence of citric acid, as well as malic, tartaric and oxalic acids, occurred and/or increased in the exudates of maize roots in response to concentrations of humic acids. Finally, the combination organic acids and natural phosphate resulted in increased P utilization, better nutritional composition and growth of plants, and consequently superior initial performance of pineapple.This behavior favors plant establishment (a critical physiological stage in this crop) and confer greater fitness in the subsequent growth conditions by increasing reserves and possibilities of water and nutrient uptake and light capture. (1)Generated by the first derivative of the regression equation for dry matter. ( 1 ) Characteristic: LN = leaf number; PH = plant height; LD = length of D leaf; WD = width of D leaf; RD = rosette diameter; BD = base diameter; and FM, DM and LA = fresh matter, dry matter and leaf area of pineapple aerial parts, respectively. Table 2 . Coefficients for treatment contrasts. Table 3 . Growth characteristics of the aerial part of pineapple cv.Pérola in response to application of rock phosphate combined or not with humic acids, with or without citric acid Table 4 . Mean contrasts, relative increments, mean square error (MSE) and coefficient of variation (CV) for growth characteristics of the aerial part of pineapple cv.Pérola in response to application of rock phosphate combined or not with humic acids, with or without citric acid Table 7 . Values of growth characteristics of the aerial part of pineapple cv.Pérola for the treatments rock phosphate combined with humic acids with citric acid (NF + HA) or without citric acid (NF + HA + CA), based on the concentration of maximum physical efficiency (MPE) estimated for the point of highest dry matter, or means, when models showed correlation coefficients below 0.60 Table 6 . Points of maximum dry matter (DM) of the aerial part of pineapple cv.Pérola as a function of concentrations of humic acids (HA) combined with rock phosphate (NF), with or without citric acid (CA) Generated by the first derivative of the regression equation. Table 8 . Nutrient contents in the aerial part of pineapple cv.Pérola in response to application of rock phosphate combined or not with humic acids, with or without citric acid Table 9 . Mean contrasts, relative increments, mean square error (MSE) and coefficient of variation (CV) for nutrient contents of the aerial part of pineapple cv.Pérola in response to application of rock phosphate combined or not with humic acids, with or without citric acid Table 10 . Regression equations for nutrient content (mg/plant) of the aerial part of pineapple cv.Pérola as a function of concentrations of humic acids (HA) combined with rock phosphate (NF), with or without citric acid (CA) Table 11 . 
Values of the nutritional composition of the aerial part of pineapple cv. Pérola for the treatments rock phosphate combined with humic acids, without citric acid (NF + HA) or with citric acid (NF + HA + CA), based on the concentration of maximum physical efficiency (MPE)
The resolvent algebra of non-relativistic Bose fields: sectors, morphisms, fields and dynamics It was recently shown [2] that the resolvent algebra of a non-relativistic Bose field determines a gauge invariant (particle number preserving) kinematical algebra of observables which is stable under the automorphic action of a large family of interacting dynamics involving pair potentials. In the present article, this observable algebra is extended to a field algebra by adding to it isometries, which transform as tensors under gauge transformations and induce particle number changing morphisms of the observables. Different morphisms are linked by intertwiners in the observable algebra. It is shown that such intertwiners also induce time translations of the morphisms. As a consequence, the field algebra is stable under the automorphic action of the interacting dynamics as well. These results establish a concrete C*-algebraic framework for interacting non-relativistic Bose systems in infinite space. It provides an adequate basis for studies of long range phenomena, such as phase transitions, stability properties of equilibrium states, condensates, and the breakdown of symmetries. Introduction In a recent article [2] we have established the stability of the gauge invariant (particle number preserving) subalgebra of observables of the resolvent algebra of a non-relativistic Bose field under the automorphic action of dynamics involving pair potentials. It is the aim of the present note to extend this result to a larger field algebra of operators, changing the particle number. Our approach is based on ideas of Doplicher, Haag and Roberts in a general analysis of superselection sectors in relativistic quantum field theory [3]. The resolvent algebra of non-relativistic Bose fields is faithfully represented on Fock space, where the subspaces with fixed particle number are superselection sectors for the subalgebra of observables. We will study here particle number changing isometries, which are contained in a slight extension of the resolvent algebra. Their adjoint action describes non-unital morphisms of the observable algebra. These morphisms are transportable, i.e. they are related by intertwiners (partial isometries) contained in the algebra of observables. Our main result consists of the proof that they are also transportable with regard to the action of space and time translations involving pair interactions, i.e. there exist intertwiners in the algebra of observables between the space and time translated morphisms. The stability of the corresponding field algebra, i.e. the C*-algebra generated by the isometries and the observables, under the automorphic action of these translations then follows. Preliminaries Adopting the notation and definitions in [2], we consider the family of isometries on Fock space F , which are given by the formula where a * (f ), a(f ) are creation, respectively annihilation operators and N f = a * (f )a(f ). They arise from operators a(f ) * (1 + N f ) −κ , κ > 1/2, contained in the resolvent algebra of Bose fields, and satisfy where E f is the projection onto the orthogonal complement of the kernel of a(f ) in F . Note that this space coincides with |f ⊗ s F ⊂ F , where |f ∈ F 1 is the single particle vector corresponding to f ∈ D(R s ) and we identify |f ⊗ s Ω . = |f . The isometries induce morphisms ρ f of the algebra A, given by . They define representations of A on the subspace E f F ⊂ F . 
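The relations X_f* X_f = 1 and X_f X_f* = E_f can be checked numerically for a single mode. The sketch below assumes the explicit form X_f = a*(f)(1 + N_f)^{-1/2}, which is consistent with the operators a(f)*(1 + N_f)^{-κ} and with the stated isometry relations, but it is an assumption insofar as the displayed formula from the source is not reproduced here. The check is performed on a truncated Fock space, so X* X equals the identity only away from the truncation level, while X X* coincides exactly with the projection onto the orthogonal complement of the vacuum.

```python
import numpy as np

dim = 12  # truncation of the single-mode Fock space {|0>, ..., |dim-1>}

# Annihilation operator a and number operator N = a* a on the truncated space
a = np.diag(np.sqrt(np.arange(1, dim)), k=1)
N = a.conj().T @ a

# Assumed isometry: X = a* (1 + N)^(-1/2), consistent with X* X = 1 and X X* = E_f
X = a.conj().T @ np.diag(1.0 / np.sqrt(1.0 + np.diag(N)))

XstarX = X.conj().T @ X
XXstar = X @ X.conj().T

# Away from the truncation edge, X* X acts as the identity ...
print(np.allclose(XstarX[:-1, :-1], np.eye(dim - 1)))   # True
# ... and X X* is the projection onto states with n >= 1 (complement of the vacuum)
E = np.diag([0.0] + [1.0] * (dim - 1))
print(np.allclose(XXstar, E))                            # True
```

In this single-mode picture, X shifts |n> to |n+1>, which is the finite-dimensional analogue of the particle number raising role the isometries play in the sector analysis below.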
The morphisms act nontrivially only on operators which are localized in regions containing the support of f , i.e. they are localized in this support region. It is not difficult to see that these morphisms define covariant representations of A with regard to space and time translations. The spatial translations are determined by the momentum operator on F , where ∂ denotes the gradient. The time translations are fixed by the Hamiltonians H = dx ∂a * (x) ∂a(x) + dx dy a * (x)a * (y) V (x − y) a(x)a(y) , where we restrict attention to pair potentials V ∈ C 0 (R s ). As was shown in [2], the adjoint action of the unitary operators e ixP , x ∈ R s , and e itH , t ∈ R, leaves the algebra A invariant, describing spatial, respectively time translations of the observables. It is apparent that the unitary operators X f e ixP X * f and X f e itH X * f on E f F describe these actions in the representation ρ f of A. It is less trivial to show that the corresponding field algebra R, i.e. the C*-algebra generated by A and the pair X f , X * f , is stable under under the action of the spacetime translations. For the spatial translations, this is a consequence of the fact, established in [2], that the products of isometries X f X * g are elements of A for any normalized pair f, g ∈ D(R s ). Since the space of test functions D(R s ) is stable under spatial translations f → f x , it follows that x ∈ R s . It follows that R is stable under spatial translations. We proceed in a similar manner in case of the time translations and consider the isometries Here we used again the relations X * f X f = 1 and X f X * f = E f . The non-trivial step in our argument consists of the demonstration that It then follows as in case of the spatial translations that the field algebra R, generated by A and X f , X * f , is stable under the adjoint action of the unitary operators e itH , t ∈ R. The proof of (2.2) is given in several steps, where we make use of the particle picture by restricting the above (gauge invariant) operator Γ t (f ) to the subspaces F n ⊂ F , n ∈ N. Next, let O 1 be a single particle operator on F 1 with (distributional) kernel x, y → x|O 1 |y . Its canonical lift to F n , n ∈ N, obtained by forming symmetrized tensor products with the unit operator and amplifying it with the appropriate weight factor n, is given by The field theoretic operator on the right hand side of this equality will be called second quantization of O 1 . Similarly, if O 2 is a two-particle operator acting to F 2 with kernel The operator on the right hand side will be called second quantization of O 2 . Recalling that the Hamiltonians of interest here have the form the first integral is the second quantization of the single particle operator P 2 and the second integral the second quantization of the two-particle operator V ∈ C 0 (R s ). Note that the kernel of proper pair potentials has the singular form which reduces the second quantization of V to a double integral. We will have occasion to discuss also less singular versions of potentials. Given n ∈ N, the restrictions of H to F n can be presented as where the second line represents the familiar version of the operators. The first line will be useful, however, in the subsequent decompositions of these operators. We will also make use of the second quantization N f of the one-particle operator E f,1 , the projection onto the ray of |f . The restrictions of this number operator to F n are . 
We also note that the projections Hence, decomposing the tensor product into a sum of tensor products of E f,1 and unit operators, it follows from [2,Lem. 3.3] that E f,n ∈ A ↾ F n . We recall that the algebra of observables A is isomorphic to the (bounded) inverse limit of an inverse system of approximately finite dimensional algebras, K . = {K n , κ n } n∈N 0 , satisfying the coherence condition κ n (K n ) = K n−1 for any K = {K n } n∈N 0 ∈ K. The algebras K n are formed by sums of n-fold symmetric tensor products of compact operatars and unit operators. The elements of the algebra A are all bounded operators A on F with the defining property K(A) Note that in order to show that some operator X belongs to A one has to show that (a) X n . = X ↾ F n ∈ K n , (b) κ n (X n ) = X n−1 , n ∈ N 0 , and (c) X is bounded. In view of this fact, we will deal here primarily with the inverse system K. Analysis Turning to the analysis, we need to control the difference (H − X f HX * f ) ↾ E f F of the generators of the dynamics, which enters in the series expansion of the operators Γ f (t). Note that by restricting this operator to a subspace F n , one obtains H n for the first Hamiltonian, and H n−1 for the second Hamiltonian, sandwiched between isometries. So we must compare operators on different subspaces of F . In our first technical lemma we relate operators, defined on F n−1 , with their lifts to the space E f F n = |f ⊗ s F n−1 ⊂ F n , n ∈ N, induced by the field operators. The normalized function f ∈ D(R s ) will be kept fixed throughout the subsequent discussion. Lemma 3.1. Let n ∈ N and let O n−1 be an operator whose domain D n−1 ⊂ F n−1 is stable under the action of the spectral projections of N f,n−1 . Then, for any Φ n−1 ∈ D n−1 , Noticing that the spectral decomposition of N f,n−1 is a finite linear combination of its spectral projections, one sees that the vector X * f (|f ⊗ s Φ n−1 ) is also an element of D n−1 and proving the first statement. We consider now the restrictions H n−1 of the Hamiltonians H of interest to F n−1 , n ∈ N. For these restrictions the spaces are domains of essential selfadjointness. It is also evident that these spaces are stable under the action of the spectral projections of N f,n−1 . So the first part of the preceding lemma applies to X f H n−1 X * f ↾ (|f ⊗ D n−1 ), which induces on D n−1 the action We compare now the operator H ↾ D n−1 with (1 + the latter operator is also defined on D n−1 . HereǍ f.n−1 =Ǎ f ↾ F n−1 ∈ K n−1 , whereǍ f is the difference between the second quantizations of one-and two-particle operators of finite rank and the corresponding transformed operators, obtained by the similarity transformation whereB f is the difference between the second quantization of a modified (localized) pair potential and its similarity transformed version. The localized pair potentialV f,2 is defined on F 2 by The restriction of the resulting operatorB f to F n−1 is given by . Remark: Since the operatorV f,2 is not an element of K 2 , it has to be treated separately. It will be crucial in the subsequent analysis thatV f,2 is effectively localized by the factor (E f,1 ⊗ s 1), next to V . Proof. Making use of the tensor notation, we have . We decompose the operator P 2 , defined on F 1 , into This decomposition is meaningful since |f lies in the domain of P 2 . The first operator on the right hand side of this equality maps the orthogonal complement of the ray of |f into itself and the three remaining operators are of rank one. 
Similarly, we decompose the pair potential V on F 2 into The first operator on the right hand side of this equality maps the orthogonal complement of |f ⊗ s F 1 ⊂ F 2 into itself. The second up to the fourth terms are operators of finite rank due to appearance of the factor (E f,1 ⊗ E f,1 ). The two terms in the last line form the operatorV f,2 , given in the the lemma. Tensoring these operators with unit operators 1 and multiplying them with factors of n according to their occurrence, we proceed to Since the operators commute with N f,n−1 , they do not contribute to ∆ n−1 . The remaining terms in ∆ n−1 consist of two types. The first one is, for any n ∈ N, a sum of fixed one-and two-particle operators of finite rank which are tensored with unit operators and amplified by factors of n. Since the operators (1 + N f,n−1 ) ±1/2 appearing in the similarity transformation are elements of K n−1 , it follows that the termsǍ f,n−1 are contained in K n−1 ; moreover, they are the restrictions of some global operatorǍ f , as described in the statement. In the second type of terms contributing to ∆ n−1 there enters the second quantization of the localized pair potentialV f,2 . The resulting operatorsB f,n−1 are the bounded restrictions of some unbounded operatorB f , which describes the difference between the localized interaction operator and its similarity transformed version, n ∈ N. Next, we compare the operators H ↾ |f ⊗ s D n−1 and |f ⊗ s H ↾ D n−1 . Here f,n = f ↾ F n ∈ K n , where f is the second quantization of one-and two-particle operators of finite rank, multiplied from the right by the operator Proof. It suffices to establish the statement for vectors of the special form where f 1 , . . . , f n−1 ∈ D(R s ) are members of some orthonormal basis in L 2 (R s ) which includes f . Making use of the fact that the Hamiltonians are symmetrized sums of one-and two-particle operators, one obtains where the symbol i ∨ indicates omission of the single particle component |f i . We must determine the operator on F n which maps the vector |f ⊗ s |f 1 ⊗ s · · ⊗ s |f n−1 to the vector on the right hand side of the preceding equality. Recalling that f, f 1 , . . . , f n−1 are members of some orthonormal basis, we have where n f is the number of factors |f appearing in the vector. This equality holds for arbitrary vectors Φ n−1 if one replaces the number n f by the operator N f,n . Furthermore, since the vector is an element of the space |f ⊗ s F n−1 , it does not change if one multiplies it by the projection E f,n . This gives, n ∈ N, which leads to the definiton Since P 2 E f,1 has finite rank and N −1 f,n E f,n ∈ K n , we conclude that n ∈ K n . Moreover, it is the restriction of an operator f to F n+1 which is the second quantized, localized single particle kinetic energy, multiplied by N −1 f E f . In a similar manner, n ∈ N, where the localized pair potentialV f,2 . = V (E f,1 ⊗ s 1) on F 2 appears in the second line. The resulting bounded operatorsB f,n+1 are the restrictions of some unbounded operatorB f on F , describing the second quantization of the localized interaction potential, multiplied by We have accumulated now the information needed for the description of the structure of the operator where A f,n ∈ K n are the restrictions of the unbounded operator to F n . The operatorsǍ f , f were defined in Lemmas 3.2 and 3.3, respectively. In a similar manner, B f,n = B f ↾ F n are the bounded restrictions of the unbounded operator where the operatorsB f ,B f were also defined in these two lemmas. 
Proof. Recalling that The first term on the right hand side of this equality coincides according to Lemma 3.3 with ( f,n +B f,n ) (|f ⊗ s Φ n−1 ), where f,n ∈ K n . In the second term we made use of the first part of Lemma 3.1 according to which As has been shown in Lemma 3.2, the second term in the above equality can be presented in the form |f ⊗ s (Ǎ f,n−1 +B f,n−1 )Φ n−1 , and Lemma 3. (Note that the creation and annihilation operators in this equality can be mollified by spectral projections of the number operator N f without affecting their action on F n , cf. also the discussion below.) Summing up the resulting contributions, the statement follows. We turn now to the analysis of the operator function t → Γ f (t), defined above. It is differentiable in t in the sense of sesquilinear forms between vectors in the domains of H, respectively X f HX * f . The derivatives are given by where the second equality holds since X f HX * f commutes with E f . We restrict this equality to F n . By Proposition 3. . = e itHn (A f,n + B f,n )e −itHn , we can solve the above equation by the series where the series converges absolutely in norm since the operators C f,n are bounded. Note that the range of these operators does not lie in E f,n F n . We want to show that Γ f (t) ↾ F n ∈ K n , n ∈ N 0 . As we shall see, it is sufficient to prove that the functions t → t 0 ds C f,n (s) have values in K n and are norm continuous, t ∈ R. For the summand A f,n ∈ K n of C f,n this property follows from the fact that the time evolution acts pointwise norm continuously on K n . The argument for the second summand B f,n is more involved since these operators are not contained in K n . We begin with a technical lemma about integrals of functions having values in operators, respectively linear maps. where λ ∞ denotes the supremum of the norm of s → λ(s) on any bounded subset of R, containing the integration interval. This bound implies that the expression on the first line tends to 0 in the limit m → ∞. Since, by assumption, The statement about the continuity properties of this function is a consequence of the trivial estimate where B 1 ∞ denotes the supremum of the norm of s → B 1 (s) on any bounded subset of R, containing the integration interval. Since the map λ(s 0 ) is normal on B(H 1 ), the first term on the right hand side of this equality vanishes in the s.o. topology in the limit s → s 0 . The second term vanishes in this limit as well, since λ(s) → λ(s 0 ) in the norm topology of B(H 2 ), uniformly on bounded subsets of B(H 1 ). Thus s → λ(s)(B 1 (s)) is continuous in the s.o. topology. As in the preceding step, we partition the integration interval, giving the estimate Because of the continuity properties of s → λ(s), the expression on the first line tends to 0 in the limit m → ∞. Since, by assumption, lt/m (l−1)t/m) ds B 1 (s) ∈ B 1 and λ(lt/m) maps the C*-algebra B 1 into B 2 , 1 ≤ l ≤ m, one obtains again t → t 0 ds λ(s)(B 1 (s)) ∈ B 2 , t ∈ R. The continuity of this function follows from the preceding argument. This lemma will be applied to different types of functions and has therefore been formulated in general terms. As a first application, we consider maps β g,n : B(F n−1 ) → B(F n ), g ∈ L 2 (R s ), given by β g,n ( · ) . = a * (g) · a(g) ↾ F n , n ∈ N . Since a(g) n ≤ n g 2 , hence β g 1 ,n − β g 2 ,n n ≤ n 2 g 1 + g 2 2 g 1 − g 2 2 , these maps are bounded and depend norm continuously on the underlying functions g 1 , g 2 ∈ L 2 (R s ). 
We will make use of the fact that β g,n maps the algebra K n−1 ⊂ B(F n−1 ) into K n ⊂ B(F n ), n ∈ N. In order to see this, note that one can replace for given n ∈ N the operator a(g) ↾ F n by G n a(g) ↾ F n , where G n is the (finite) sum of the spectral projections of N g,n = g −2 2 a * (g)a(g) ↾ F n . The operator G n a(g) is an element of the resolvent algebra R, and the preceding statements are also true for its adjoint a * (g) G n . Now, given any K n−1 ∈ K n−1 , there is some operator A ∈ A such that A ↾ F n−1 = K n−1 . The gauge invariant operator a * (g) G n A G n a(g) is an element of A, and its restriction to F n coincides with some operator in K n , cf. [2,Lem. 3.3]. This proves that β g,n (K n−1 ) ⊂ K n . It also follows from these arguments that the maps β g,n are normal. In the subsequent corollary we deal with integrals of gauge invariant operator functions, involving the non-interacting time evolution, induced by the Hamiltonian H 0 . We make use of the notation s → B 0 (s) . = e isH 0 Be −isH 0 and put B 0 n (s) Note that these functions are strong operator continuous, so their integrals are defined in this topology. In order to avoid constant repetitions of this fact, we make the following standing declaration. Statement: All integrals appearing in the subsequent analysis are defined in the strong operator topology, unless otherwise stated. Corollary 3.6. Let n ∈ N, let g ∈ L 2 (R s ), and let B n−1 ∈ B(F n−1 ) be such that the function t → t 0 ds B n−1 0 (s) has values in K n−1 . Then t → t 0 ds β g,n (B n−1 ) 0 (s) is norm continuous and has values in K n , t ∈ R. Proof. Consider the function s → β g,n (B n−1 ) 0 (s). Since we are dealing with the noninteracting time evolution, we have β g,n (B n−1 ) 0 (s) = β g(s),n (B n−1 0 (s)), where g(s) ∈ L 2 (R s ) denotes the time translated wave function g, which depends continuously on s ∈ R. Thus s → β g(s),n is norm continuous, normal, and its restriction to K n−1 has values in K n , as was shown above. The function s → B n−1 0 (s) is s.o. continuous and since by assumption t 0 ds B n−1 0 (s) ∈ K n−1 , t ∈ R, the statement follows from Lemma 3.5(ii). In the next lemma we analyze the localized pair potentials which appear as factors in the operatorsB f andB f , defined in Lemmas 3.2 and 3.3. Lemma 3.7. LetV f,2 andV f,2 be the localized pair potentials defined in Lemmas 3.2 and 3.3, respectively. Putting V f,2 for either one of these potentials, one has (i) the function t → t 0 ds V f,2 0 (s) on F 2 is norm continuous and has values in the compact operators; (ii) for any n ∈ N, n ≥ 2, the function t → t 0 ds V f,n 0 (s) on F n is norm continuous and Proof. We give the proof for the potentialV f,2 = V (E f,1 ⊗ s 1). SinceV f,2 also contains the localizing factor (E f,1 ⊗ s 1), the corresponding argument is similar and therefore omitted. (i) First, we consider potentials V having compact support. Choosing some smooth characteristic function x → χ(x) which is equal to 1 for x ∈ supp f ∪ (supp f + supp V ) and has compact support, we can proceed toV f, . is compactly supported on the two-particle configuration space R s × R s . The function s → V 0 χ (s) is continuous in the s.o. topology, t → t 0 ds V 0 χ (s) is norm continuous, and it has values in the compact operators on F 2 ; these facts have been established in previous work, cf. for example [1]. 
Furthermore, the function, having values in linear maps on B(F 2 ), given by is uniformly continuous (recall that E f,1 is a one-dimensional projection), normal, and it maps compact operators on F 2 into compact operators. Lemma 3.5(ii) therefore implies is norm continuous and has values in the compact operators on F 2 for the restricted class of potentials. Now, and this upper bound implies that the last integral in the preceding equality is norm continuous on F 2 with regard to V ∈ C 0 (R s ). So the preceding result extends to all potentials in C 0 (R s ). (ii) By the very definition of the spaces K n , any compact operator C on F 2 gives rise to elements C ⊗ s 1 ⊗ · · · ⊗ 1 n−2 ∈ K n , n ∈ N. So the second statement follows from the preceding step. As has been mentioned, the same arguments apply to the localized pair potentiaľ V f,2 , completing the proof. In the next step we show that the statement of the preceding lemma also holds for the interacting dynamics. In fact, we will prove a more general result, involving also the maps β g,n , defined above. We recall the short hand notation B 0 (s) . = e isH 0 Be −isH 0 and, omitting the superscript 0, we will use an analogous notation for the interacting dynamics, B(s) . = e isH Be −isH , s ∈ R. We also put B n (s) (ii) t → t 0 ds β g,n (B n−1 )(s) is norm continuous and has values in K n . Proof. Let Θ n (s) . = e isH 0n e −isHn = e isH 0 e −isH ↾ F n and put θ n (s) . = Ad Θ n (s), s ∈ R. The function s → θ n (s) of linear maps on B(F n ) is norm continuous. This is a consequence of its standard series expansion in terms of multiple integrals, cf. [2,Eq. 4.2]. It leads to the estimate, 0 ≤ s 1 ≤ s 2 , where V n is the interaction operator on F n . It was shown in [2,Lem. 4.3] that θ n (s) maps K n onto itself. (This statement was establish in that reference for a larger algebra; but making use of the fact that θ n (s) commutes with the permutations of particle numbers, it holds for the symmetric subalgebra K n , as well.) It is also clear that the maps θ n (s), induced by unitary operators, are normal. Now since these maps are automorphisms, both, of B(F n ) and of K n , all preceding statements hold also for the inverse maps, θ −1 n (s) given by the adjoint action of Θ n (s) −1 = e isHn e −isH 0n , s ∈ R. Hence the maps s → θ −1 n (s) comply with all conditions given in Lemma 3.5(ii). Turning first to statement (i), the function s → B 0 n (s) satisfies the remaining conditions in Lemma 3.5(ii) by assumption. Thus is norm continuous and has values in K n . As to statement (ii), it follows from the assumptions and Corollary 3.6 that s → β g,n (B n−1 ) 0 (s) satisfies the remaining conditions in Lemma 3.5(ii). So is also norm continuous and has values in K n , completing the proof of the statement. We apply now these results to the operator functions t → t 0 ds C f,n (s) which appear in the series expansion (3.1) of Γ f (t) ↾ E f,n F n . It was shown in Proposition 3.4 that C f,n = A f,n + B f,n , where A f,n ∈ K n . So, as a consequence of [2,Prop. 4.4] and the fact that the dynamics commutes with permutations, the function t → t 0 ds A f,n (s), defined by the interacting dynamics, is norm continuous and has values in K n , t ∈ R. Turning to the operators B f,n , we have to cope with the problem that the underlying localized pair potentials are not contained in K n . 
According to Proposition 3.4 the operators B f,n are given by whereV f,n−1 andV f,n were defined in Lemmas 3.2 and 3.3, respectively, and we made use of the maps β f,n ( · ) = a * (f ) · a(f ) ↾ F n , introduced above. The operatorV f,n−1 and its similarity transformed counterpart in the first term on the right hand side of this equality combine into a finite sum j K ′ n−1,jV f,n−1 K ′′ n−1,j , where K ′ n−1,j , K ′′ n−1,j ∈ K n−1 . In order to see that the, by the interacting dynamics time translated operators integrate to elements of K n−1 , we consider the functions with values in linear maps on B(F n−1 ), given by Recalling that the action of the dynamics on K n−1 is pointwise norm continuous as well as the results of Lemma 3.7(ii), it is apparent that the function of maps s → µ n−1 (s) and the operator function s →V f,n−1 0 (s) comply with the conditions given in Lemma 3.5(ii). Hence t → t 0 ds (µ n−1 (V f,n−1 )) 0 (s) is norm continuous and has values in K n−1 , where we have put µ n−1 . = µ n−1 (0). It then follows from Lemma 3.8(ii) that the function, defined by the interacting dynamics, is norm continuous and has values in K n . In a similar manner one deals with the second termV f,n N −1 f,n E f,n contributing to B f,n . The operator N −1 f,n E f,n is an element of K n , on which the dynamics acts pointwise norm continuously, and the function t → t 0 dsV f,n (s) is norm continuous and has values in K n as a consequence of Lemmas 3.7(ii) and 3.8(i). By the preceding arguments, it follows from Lemma 3.5(ii) that also t → t 0 ds (V f,n N −1 f,n E f,n )(s) is norm continuous and has values in K n . So, to summarize, we conclude that the integral t → t 0 ds C f,n (s), defined with regard to the interacting dynamics, is norm continuous and has values in K n . This information enters in the following result concerning the operators Γ f,n (t) . Proposition 3.9. Let n ∈ N 0 , then t → Γ f,n (t) ∈ K n , and this function is norm continuous, t ∈ R. Proof. We make use of the series expansion (3.1). Since C f,n is bounded, it follows from the argument given in Lemma 3.8, that t → Γ f,n (t) is norm continuous, t ∈ R. For the proof that it has values in K n , it suffices to show that the multiple integrals in the absolutely convergent series expansion (3.1), are norm continuous elements of K n . This is accomplished by induction. For the first term, corresponding to k = 1, these properties were established in the preceding analysis. By the induction hypothesis, t → D k,n (t) shares these properties. For the induction step from k to k + 1, we note that t → D k+1,n (t) = t 0 ds C f,n (s) D k,n (s). According to the induction hypothesis, s → D k,n (s) is norm continuous and has values in K n . Moreover, the linear function (left multiplication) s → λ n (s)( · ) . = C f,n (s) · on B(F n ) is normal, pointwise continuous in the s.o. topology, bounded, and t 0 ds λ n (s)( · ) maps K n into itself, as was shown in the initial step, t ∈ R. Hence, according to Lemma 3.5(i), the function t → D k+1,n (t) has the desired properties, completing the proof. Having seen that the restrictions of the operators Γ f (t) to F n determine operators in K n , t ∈ R, we must show now that these operators form coherent sequences. There the inverse maps κ n : K n → K n−1 enter, n ∈ N 0 . We recall some important properties of these maps, established in [2]. Given any (C m ⊗ s 1 ⊗ s · · · ⊗ s 1 n−m ) ∈ K n , where C m is a compact operator on F m , one has The maps κ n are *-homomorphisms, mapping K n onto K n−1 . 
In particular, they are norm continuous, κ n (K n ) n−1 ≤ K n n , K n ∈ K n . A sequence {K n ∈ K n } n∈N 0 is said to be coherent if κ n (K n ) = K n−1 , n ∈ N 0 . Such coherent sequences are the elements of the (bounded) inverse limit K of the inverse system {K n , κ n } n∈N 0 . In order to establish the desired result, we make use again of the series expansion (3.2). The essential step in our argument consists of proving the relation for any norm continuous function s → D n (s) with values in K n , n ∈ N 0 . Since the functions s → C n (s) are not contained in that algebra, this requires some work. We begin with the following simple result. Proof. The second quantizations of one-and two-particle operators and their restrictions to F n were discussed in the beginning of this note. So let O be the second quantization of a compact one particle operator. Then O n = n (O 1 ⊗ s 1 ⊗ s · · · ⊗ s 1 n−1 ) ∈ K n and Similarly, if O is the second quantization of a compact two-particle operator, one obtains completing the proof. The non-interacting time evolution does not mix tensor factors and hence maps twoparticle operators into themselves. Adopting as before the notation V f,2 for either one of the localized pair potentialsV f,2 andV f,2 , it follows from Lemma 3.7(i) and the preceding lemma that the second quantizations of the compact operators t 0 ds V f,2 0 (s) satisfy We need, however, stronger results for integrals involving the interacting dynamics, where the localized potentials are sandwiched between operators in K n and acted upon by the maps β g,n , defined above. The relation between the maps β f,n and the maps κ n is established in the subsequent lemma. There we rely on results in [2,Lem. 3.4], which were established by making use of the quasilocal structure of the algebra A. We can determine now the action of κ n on integrals involving the localized pair potentials and acted upon by the interacting dynamics. Lemma 3.12. Let V f,2 be either one of the localized pair potentialsV f,2 andV f,2 , defined in Lemmas 3.2 and 3.3, respectively, let n ∈ N, and let K ′ n , K ′′ n ∈ K n . Then Moreover, if K ′ n−1 , K ′′ n−1 ∈ K n−1 , one has Proof. The argument is identical for the potentialsV f,2 andV f,2 , so we do not need to distinguish between them. In a first step we establish the two statements for the noninteracting time evolution. Turning to the first statement, we approximate as in preceding arguments the first integral t 0 ds (K ′ n V f,n K ′′ n ) 0 (s) ∈ K n by the, in the limit of large m, norm convergent sum Applying to this sum the (norm continuous) homomorphisms κ n , we obtain where we used the relation κ n • α (0) n−1 (s) • κ n , s ∈ R; it follows from the fact that the non-interacting dynamics does not mix tensor factors. Going back to the limit of large m, the sum converges in norm to the second integral t 0 ds (κ n (K ′ n ) V f,n−1 κ n (K ′′ n )) 0 (s) ∈ K n−1 , proving the first relation in the absence of interaction. Turning to the second relation, we proceed as in Corollary 3.6 and put The function s → β f (s),n acts norm continuously on B(F n−1 ), is normal, and it maps K n−1 into K n ; the function s → (K ′ n−1 V f,n−1 K ′′ n−1 ) 0 (s) is s.o. continuous and its integral has values in K n−1 . 
Hence, according to Lemma 3.5(ii), we can approximate the first integral in the second relation of the statement by the for large m norm convergent sum Applying to this relation the homomorphism κ n , we obtain according to Lemma 3.11 and the preceding step ds (κ n−1 (K ′ n−1 ) V f,n−2 κ n−1 (K ′′ n−1 )) 0 (s) . Proceeding again to the limit of large m, this gives the second integral in the second relation of the statement, thereby completing its proof in the absence of interaction. In order to extend these results to the interacting dynamics, we make use of the maps θ n (s), introduced in the proof of Lemma 3.8. We recall that they were defined by the adjoint action of e isH 0n e −isHn , s ∈ R. Since s → θ n (s) and its inverse are norm continuous and map K n onto K n , we can apply Lemma 3.5(ii) and approximate the integral Applying to these sums the map κ n and making use of the results in the preceding step as well as the relation κ n • θ n (s) = θ n−1 (s) • κ n , established in [2,Lem. 4.5], we obtain This expression converges in the limit of large m in norm to the integral establishing the first relation in the presence of interaction. The argument for the second relation is identical, completing the proof. These results put us into the position to determine the action of the homomorphisms κ n on the operators Γ f,n (t). Here we rely again on the expansion (3.1). Let us recall the information which we have about the operators C n , entering into this expansion. According to Proposition 3.4 and its preceding Lemmas 3.2 and 3.3 they have the structure, n ∈ N, Here the symbols O 1,2 n−1 , O 1,2 n denote the restrictions to F n , respectively F n−1 , of the second quantizations of compact one-and two-particle operators, defined in the abovementioned lemmas. The operatorsV f,n−1 andV f,n are the restrictions of the second quantizations of localized pair potentialsV f,2 andV f,2 , which were also specified in these lemmas. Lemma 3.13. For n ∈ N, let C n be the operators given in (3.3), and let s → D n (s) ∈ K n be norm continuous. Then where the integrals have values in K n , respectively K n−1 . Proof. We begin by proving the statement for the constant function s → D n (s) . = 1 ↾ F n and consider first the contributions to s → C n (s) containing the operators O 1,2 m as a factor, m = n, n − 1. These contributions depend norm continuously on s ∈ R since they are elements of K m and are sandwiched between operators from these spaces. We also recall that β f,n maps K n−1 into K n . Hence one can interchange in these terms the action of κ n with the integration. Making use of these relations and Lemma 3.11, it follows that the statement of the lemma holds for all contributions to C f,n , containing the operators O 1/2 m , m = n, n − 1. For the contributions containing the localized potentialsV f,n−1 ,V f,n , one must integrate the corresponding operators first, since otherwise the action of κ n is not defined. For the integrated operators, the statement follows directly from the results established in Lemma 3.12 and the relations obtained in the preceding step. This completes the proof of the statement for the constant function s → D n (s). Let us turn now to the statement for arbitrary norm continuous functions s → D n (s) with values in K n . There we proceed as in the proof of Proposition 3.9 and consider again the linear function (left multiplication) s → λ n (s)( · ) . = C f,n (s) · on B(F n ). It is normal, pointwise continuous in the s.o. 
topology, bounded, and ∫_0^t ds λ n (s)( · ) maps K n into itself, t ∈ R. Hence, according to Lemma 3.5(i), we can approximate the integral on the left hand side of the stated equality by the norm convergent sum where we made use of the result obtained in the preceding step. Proceeding in this expression to the limit of large m, we arrive at the integral on the right hand side of the stated equality, completing the proof of the lemma.

The preceding lemma is a key ingredient in the proof of the following proposition, which is the main result of this note. Proof. It is apparent that the operators Γ f,n (t) are uniformly bounded in n. The statement that Γ f,n (t) ∈ K n was established in Proposition 3.9, n ∈ N 0 . So it remains to verify the coherence condition. There we make use again of the expansion (3.1). We need to show that the multiple integrals t → D k,n (t) involving the operators C n , cf. equation (3.2), are mapped by κ n into the corresponding integrals with the operators C n−1 . The statement then follows from the norm convergence of the series. For the proof we make use of the inductive argument given in the proof of Proposition 3.9. We have shown in the preceding lemma that κ n (D 1,n (t)) = κ n ( ∫_0^t ds C n (s) ) = ∫_0^t ds C n−1 (s) = D 1,n−1 (t) , n ∈ N 0 . Assuming that the analogous relation holds for the k-fold integrals, involving C n , we represent the (k + 1)-fold integral in the form t → D k+1,n (t) = ∫_0^t ds C n (s) D k,n (s), where s → D k,n (s) ∈ K n is norm continuous. Thus it follows from Lemma 3.13 that κ n (D k+1,n (t)) = κ n ( ∫_0^t ds C n (s) D k,n (s) ) = ∫_0^t ds C n−1 (s) κ n (D k,n (s)) = ∫_0^t ds C n−1 (s) D k,n−1 (s) , where in the last equality we made use of the induction hypothesis. This establishes the coherence condition and thereby completes the proof.

It follows from these results that the field algebra R, i.e. the C*-algebra generated by A and any given pair of isometries X f , X * f for some normalized f ∈ D(R s ), is stable under the adjoint action of the unitary operators e itH , t ∈ R, for all Hamiltonians H with pair potentials V ∈ C 0 (R s ). In analogy to previous results for the observables, one can also establish continuity properties of the corresponding action on R with regard to a locally convex topology induced by a countable family of seminorms.
Prediction of normalized signal strength on DNA sequencing microarrays by n-grams within a neural network model We have shown previously that a feed-forward, back-propagation neural network model based on composite n-grams can predict the normalized signal strengths of a microarray-based DNA sequencing experiment. The microarray comprises a 4×N set of 25-base single-stranded DNA molecules ('oligos'), specific for each of the four possible bases (A, C, G, or T, for Adenine, Cytosine, Guanine and Thymine respectively) at each of N positions in the experimental DNA. The strength of binding between reference oligos and experimental DNA varies according to base complementarity, and the strongest signal in any quartet should 'call the base' at that position. Variation in the base composition of, and/or order within, oligos can affect the accuracy and/or confidence of base calls. To evaluate the effect of order, we present oligos as n-gram neural input vectors of degree 3 and measure their performance. Microarray signal intensity data were divided into training, validation and testing sets. Regression values obtained were >99.80% overall, with very low mean square errors that translate to high best validation performance values. Pattern recognition results showed high percentage confusion matrix values along the diagonal, and receiver operating characteristic curves were clustered in the upper left corner, both indices of good predictive performance. Higher-order n-grams are expected to produce even better predictions. Background: DNA sequences are strings of hundreds to millions of four nitrogenous bases (Adenine, Cytosine, Guanine and Thymine) represented by the letters A, C, G, and T respectively. Representation of these strings as numerical values enables the application of powerful digital signal processing techniques. Desirable properties of DNA numerical representations, and some examples, are given in [3,15]. The n-gram method was first introduced by C. E. Shannon in 1948 [9]. Neural network learning methods provide a robust approach to approximation of real-valued, discrete-valued and vector-valued target functions [12], such as numerical DNA data. The study of artificial neural networks has been inspired by the observation that biological learning systems are built of very complex webs of interconnected neurons [10,11,12], which communicate through a large set of interconnections assigned variable strengths (weights) in which the learned information is stored [13]. Each neuron computes a weighted sum y of its input signals. The activation function for the neurons is the sigmoid function, defined in [12] as σ(y) = 1/(1 + e^(−y)), where y is the weighted sum of the inputs. The output of the sigmoid function ranges from 0 to 1, increasing monotonically with its input, and the weights of the interconnections between the different neurons are adjusted during the training process to achieve a desired input/output mapping. The effect of sequence order is examined by replacement of mono-, di- and tri-nucleotide strings with their respective n-gram equivalents. The n-gram ratios for mono-, di- and tri-nucleotides are shown in Table 1, Table 2 and Table 3 respectively. The results with 1-grams and 2-grams and their composition have been discussed in [15]. We advance the results obtained previously by examining the influence of 3-grams on the overall performance of our predictions based on the data evaluation functions. We also examine the effect of different numbers of neurons in the hidden layer on optimal prediction performance.
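Since the core of the method is this pipeline (composite n-gram encoding of oligos followed by a feed-forward network predicting four normalized intensities), the following is a minimal, illustrative Python sketch of that pipeline. It is not the authors' Matlab workflow: the synthetic oligo/intensity data, the scikit-learn MLPRegressor, the single 40-neuron hidden layer and all other parameter choices are assumptions made purely for illustration.

```python
# Illustrative sketch only: composite n-gram ratio features for 25-base oligos
# feeding a small feed-forward regressor with four outputs (A, C, G, T intensities).
# The random data, network size and library choice (scikit-learn) are assumptions,
# not the authors' Matlab setup.
from itertools import product
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error, r2_score

BASES = "ACGT"

def ngram_ratios(seq, n):
    """Return the frequency (ratio) of every possible n-gram of DNA bases in `seq`."""
    grams = ["".join(p) for p in product(BASES, repeat=n)]
    windows = [seq[i:i + n] for i in range(len(seq) - n + 1)]
    total = len(windows)
    return [windows.count(g) / total for g in grams]

def composite_features(seq, orders=(1, 2, 3)):
    """Concatenate the 1-, 2- and 3-gram ratios into one input vector."""
    feats = []
    for n in orders:
        feats.extend(ngram_ratios(seq, n))
    return feats

# Synthetic stand-in for the oligo/intensity table (594 rows of 25-base oligos).
rng = np.random.default_rng(0)
oligos = ["".join(rng.choice(list(BASES), size=25)) for _ in range(594)]
X = np.array([composite_features(s) for s in oligos])

# Placeholder targets: four 'normalized intensities' per oligo, synthesized as a
# noisy linear function of the features so the regression has something to learn.
W = rng.random((X.shape[1], 4))
y = X @ W + 0.01 * rng.standard_normal((len(oligos), 4))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(40,), max_iter=2000, random_state=0)
net.fit(X_tr, y_tr)

pred = net.predict(X_te)
print("MSE:", mean_squared_error(y_te, pred))   # analogue of the 'performance' value
print("R^2:", r2_score(y_te, pred))             # rough analogue of the regression value R
```

In this sketch the concatenated 1-, 2- and 3-gram ratios give 4 + 16 + 64 = 84 features per oligo; the exact feature layout and target normalization used in the original study may differ.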
The output node layer has 4 nodes, reflecting our choice of sequence signals to predict. The schematics of the DNA neural network architecture are shown in Figure 1. The DNA sequence data are first converted by a sequence encoding schema into neural network input vectors (n-gram ratios). The neural network then predicts the normalized intensities according to the sequence information embedded in the neural interconnections after network training. Data evaluation functions: In [15], we explained the concept of performance and regression values. We also examined the results obtained using 1-grams, 2-grams and their composition. We now check the consistency of the results with the inclusion of 3-grams, using two other Matlab neural network data evaluation functions. Performance and regression values are also considered with this inclusion. Confusion Matrix: This is a 2-dimensional matrix with a row and column for each class, for the training, validation, testing and all datasets. Each matrix element shows the number of test examples for which the actual class is the row and the predicted class is the column. Good results correspond to large numbers down the main diagonal. The diagonal (green) cells in each table show the number of cases that were correctly classified. The off-diagonal (red) cells show the misclassified cases. The blue cell in the bottom right shows the total percentage of correctly classified cases (in green text) and the total percentage of misclassified cases (in red text). Figure 2 shows a confusion matrix with four tables, each displaying the network response for the training, validation, testing and all datasets. Methodology: We adopt the same methods as in [15]. The dataset is from the Cambridge Reference Sequence with accession number NC_012920 and is made of 15,453 rows and 6 columns, where 3 of the columns are the n-grams for n = 1, 2, 3 and the other 4 columns represent the normalized intensities for Adenine, Cytosine, Guanine and Thymine. We extract every 26th line of the dataset, which reduces the dataset to 594 rows (lines). 3-grams are used independently to predict the normalized intensities for the four nucleotides ACGT, and the results obtained are compared with those obtained in [15]. We also use the 1-3-gram, 2-3-gram and 1-2-3-gram compositions to repeat the analysis and compare with the earlier results. The algorithmic steps for our data manipulation are as follows: [1] Compute n-gram profiles of the DNA dataset using the Python programming language. [2] Calculate the nucleotide, dinucleotide and trinucleotide frequencies of these profiles. Results: The neural network regression value R determines how robust the prediction is. A higher R value and a smaller MSE (performance) imply a good prediction. We compare the performances of the networks with the 1-3-gram, 2-3-gram and 1-2-3-gram compositions and different numbers of neurons in the hidden layer, using the Matlab regression toolkit. These results are compared with those obtained in [15]. Again, the number of neurons in the hidden layer has been varied between 20 and 40 with a step size of 5, as a matter of choice and hopefully to find the optimal network architecture. Table 7 gives a summary of the regression and performance values extracted from 1-2-gram and 1-3-gram with variable numbers of neurons in the hidden layer. Table 8 gives a summary of the regression and performance values extracted from 1-2-gram and 2-3-gram with variable numbers of neurons in the hidden layer.
Table 9 gives a summary of the regression and performance values extracted from 1-2-gram and 1-2-3-gram with variable numbers of neurons in the hidden layer. Tables 1, 2 and 3 show the percentages (ratios) of nucleotides, dinucleotides and trinucleotides, respectively, from the Affymetrix [1] dataset. Using the pattern recognition toolkit to investigate the behavior of our predictions in terms of confusion matrices (CM) and receiver operating characteristic (ROC) curves, the results with 1-3-gram are shown in Table 10. The results with 2-3-gram and 1-2-3-gram (not shown) are not as good as those obtained using 1-3-gram. This is not necessarily a trivial result, as the predictive function must accommodate all targets in the 4 × 594 sets. Using the regression toolkit, we observed that the values in Tables 4, 5 and 6 were generally better than the results obtained in [15], where the 1-2-gram composition of the n-grams was used. This is in part due to the increment in the n-grams from 2 (two) to 3 (three). A look at Table 7, Table 8 and Table 9 shows a reduction in the validation error and an increase in the regression value when we compare the respective n-gram compositions. The average best validation performance (Bvp) obtained in [15] for 1-2-gram was 0.002793, which translates to 99.72% accuracy, with an average regression value of 0.98803. The Bvp decreased to 0.002499 and the regression value increased to 0.99112 when we used the 1-3-gram composition. Again, a comparison of 1-2-gram and 2-3-gram showed a decrease in the best validation performance to 0.002253 and an increase in the regression value to 0.99180 for the 2-3-gram. In the case of 1-2-3-gram, the best validation performance value again decreased, to 0.002056, a reduction of 26.4% compared with the value obtained with 1-2-gram. The regression value also increased to 0.99191 from the 0.98803 obtained with 1-2-gram. This is again due to the increment in the n-gram number. The use of the pattern recognition toolkit to investigate the behaviour of the confusion matrices and receiver operating characteristic (ROC) curves showed an overall confusion matrix value of 99.8% using 40 neurons in the hidden layer, as shown in Table 10, with the points of the ROC curves lying in the upper left corner. These are good indications that the results are close to those expected. Conclusion: We can predict the signal intensities, via their normalized values, from Affymetrix data using an artificial neural network based on an n-gram model. It appears that the higher the n-gram value, in an appropriate composition, the better the predictive accuracy of the model. The usage of higher n-gram values and their different compositions is considered in this paper. Efforts could be made to increase the number of n-grams to see whether better results can be obtained, which we expect to be the case. An effort could also be made to find the optimal number of neurons in the hidden layer that gives maximal regression values and lower mean square errors. An increase in the regression value to, say, 0.999 would be indicative of a much better prediction, with attendant low mean square errors, which is the measure of performance. As we increase the n-grams, we can also check which composition of the n-grams gives better results. Larger confusion matrix values along the diagonal and ROC curve points in the upper left corner can also be achieved for better classification.
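To make the pattern-recognition evaluation used above concrete, the following is a small illustrative Python sketch of how confusion matrices and one-vs-rest ROC curves can be computed for base calls derived from predicted intensities. The synthetic data, the argmax base-calling rule and the scikit-learn functions are assumptions for illustration only; the original analysis used the Matlab pattern recognition toolkit.

```python
# Illustrative sketch only: confusion-matrix and ROC evaluation of base calls,
# mirroring the pattern-recognition analysis described above. Inputs are assumed
# to be arrays of true and predicted normalized intensities (rows = positions,
# columns = A, C, G, T); the example data below are synthetic placeholders.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_curve, auc

BASES = "ACGT"

def evaluate_calls(y_true, y_pred):
    true_cls = y_true.argmax(axis=1)          # base with the strongest true signal
    pred_cls = y_pred.argmax(axis=1)          # base with the strongest predicted signal
    cm = confusion_matrix(true_cls, pred_cls, labels=[0, 1, 2, 3])
    acc = np.trace(cm) / cm.sum()             # fraction on the main diagonal
    aucs = {}
    for k, base in enumerate(BASES):          # one-vs-rest ROC per base
        fpr, tpr, _ = roc_curve((true_cls == k).astype(int), y_pred[:, k])
        aucs[base] = auc(fpr, tpr)
    return cm, acc, aucs

# Placeholder data standing in for the 4 x 594 target/prediction sets.
rng = np.random.default_rng(1)
y_true = rng.random((594, 4))
y_pred = y_true + 0.1 * rng.standard_normal((594, 4))   # reasonably accurate predictions

cm, acc, aucs = evaluate_calls(y_true, y_pred)
print(cm)                                  # large diagonal entries indicate good base calling
print(f"overall accuracy: {acc:.3f}")
print(aucs)                                # AUC near 1 places the ROC curve in the upper left corner
```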
Dissection of the regulatory mechanism of a heat-shock responsive promoter in Haloarchaea: a new paradigm for general transcription factor directed archaeal gene regulation Multiple general transcription factors (GTFs), TBP and TFB, are present in many haloarchaea, and are deemed to accomplish global gene regulation. However, details and the role of GTF-directed transcriptional regulation in stress response are still not clear. Here, we report a comprehensive investigation of the regulatory mechanism of a heat-induced gene (hsp5) from Halobacterium salinarum. We demonstrated by mutation analysis that the sequences 5′ and 3′ to the core elements (TATA box and BRE) of the hsp5 promoter (Phsp5) did not significantly affect the basal and heat-induced gene expression, as long as the transcription initiation site was not altered. Moreover, the BRE and TATA box of Phsp5 were sufficient to render a nonheat-responsive promoter heat-inducible, in both Haloferax volcanii and Halobacterium sp. NRC-1. DNA–protein interactions revealed that two heat-inducible GTFs, TFB2 from H. volcanii and TFBb from Halobacterium sp. NRC-1, could specifically bind to Phsp5 likely in a temperature-dependent manner. Taken together, the heat-responsiveness of Phsp5 was mainly ascribed to the core promoter elements that were efficiently recognized by specific heat-induced GTFs at elevated temperature, thus providing a new paradigm for GTF-directed gene regulation in the domain of Archaea. INTRODUCTION Archaea are prokaryotic microorganisms similar to bacteria in many aspects of morphology and metabolism, but are more closely related to eukarya in the genetic information processing system (1,2). The archaeal basal transcription machinery is fundamentally related to the core components of the eukaryotic RNA polymerase (RNAP) II apparatus, possessing a multi-subunit RNAP and two general transcription factors (GTFs). These GTFs, termed TBP and TFB, are homologues of the eukaryal TATA-box binding protein and transcription factor IIB (TFIIB), respectively (3,4). In the process of transcription initiation, TBP first recognizes and binds to the TATA box, resulting in bending of DNA at the promoter region. Then TFB binds to the TBP-DNA complex, making sequence-specific contact with the BRE (TFB recognition element) upstream of the TATA box. This contact directs RNAP to the promoter, thus specifically initiating transcription at an initiator sequence located about 25 bp downstream of the TATA box (5). Intriguingly, although the archaeal transcription apparatus is eukaryotic-like, many putative transcription regulators encoded by archaea are homologous to those in bacteria (6). Several instances of negative control of archaeal transcription by such regulators have been described. The metal-dependent repressor 1 (MDR1) from Archaeoglobus fulgidus (7) and LrpA from Pyrococcus furiosus (8), were found to bind to the operator sequences overlapping the transcription start sites, whereas the Lrs14 from Sulfolobus solfataricus (9,10) and TrmB from Thermococcus litoralis (11) bind to the sites overlapping the BRE/TATA elements. Thus, these regulators could inhibit transcription initiation through occlusion of RNAP or TBP-TFB recruitment. On the other hand, there are fewer known mechanisms of positive control of archaeal transcription. GvpE, resembling eukaryal basic leucine-zipper protein, has been identified as an activator in the gas vesicle synthesis in haloarchaea (12,13), but the exact mechanism has yet to be elucidated. 
One of the best characterized archaeal transcriptional activators is Ptr2 from Methanococcus jannaschii. It could bind to the sequences upstream of the core promoter elements of ferredoxin A (fdxA) and rubredoxin 2 (rb2) genes, and activate transcription through direct recruitment of TBP to these promoters (14). It should be mentioned that multiple TBPs and TFBs are present in several archaea, including Halobacterium sp. NRC-1 (15)(16)(17)(18). This raises another possibility that particular TBP-TFB combinations may recognize different promoters and therefore regulate different genes (19). Recently, microarray-based studies have provided evidence that certain GTFs (TBPs/ TFBs) interact with specific groups of promoters and are likely involved in global gene regulation (20), and TBPd and TFBa co-regulate, either directly or indirectly, a subset of genes that account for over 10% of the Halobacterium sp. NRC-1 genome (21). Heat-shock response is a widespread physiological phenomenon in all three domains of life and an attractive process for investigation of gene expression regulation. Current genome projects have identified numerous heatshock proteins in archaea, such as HSP70 (DnaK), HSP60 (GroEL), HSP40 (DnaJ), GrpE and many small heatshock proteins (sHSP) (18,22,23), but no homologues of eukaryotic-type heat-shock transcription factors (HSF) or heat-shock response elements (HSE) have been identified. To date, only a few studies on heat-shock response have been reported in the domain of Archaea. Among the thermophilic archaea, it has been proposed that the Phr from P. furiosus (24,25) and HSR1 from A. fulgidus (26) might specifically bind to the promoters of some heatshock genes under optimal growth temperature, and release from them in response to heat shock. Intriguingly, one of the two TFB-related genes in P. furiosus is transcriptionally heat-inducible, implying it may be involved in heat-shock regulation (27). For extremely halophilic archaea, Daniels and co-workers have studied a heatresponsive promoter of the chaperonin-containing Tcp-1 gene (cct1) in H. volcanii, and revealed that the 5 0 -CGAA-3 0 element upstream of the cct1 TATA box and other two sites downstream of the TATA box are necessary for both basal and heat-shock transcription (28,29). Halobacterium volcanii possesses multiple genes encoding TBP and TFB proteins, among which the tfb2 gene was transcriptionally induced during heat shock at 608C (30), suggesting that TFB-modulated heat-shock response might exist in haloarchaea. Noteworthily, knockout of tbpD and/or tfbA genes in Halobacterium sp. NRC-1 downregulates many genes including two heat-shock genes, hsp1 and cctA (21). In this study, we report a comprehensive investigation of transcriptional control of the hsp5 gene that encodes a sHSP in Halobacterium. Using in-depth genetic and biochemical approaches, we demonstrated, for the first time, that alternative GTFs, rather than bacterial-type regulators, specifically modulated the heat-shock inducibility of the hsp5 promoter in both H. volcanii and Halobacterium cells. Therefore, our results establish a new paradigm of GTF-modulated transcriptional regulation in the domain of Archaea. Strains, plasmids and primers Escherichia coli JM109 was used as a host for the cloning experiments and E. coli BL21 (DE3) (Novagen, Madison, WI, USA) for over expression of recombinant proteins. All E. coli strains were grown in Luria-Bertani (LB) medium at 378C (31). 
When needed, ampicillin and kanamycin were added to a concentration of 100 and 50 mg/ml, respectively. Unless otherwise noted, H. salinarum CGMCC 1.1959, Halobacterium sp. NRC-1 and H. volcanii DS70 (32) were cultivated at 378C in CM medium (per liter, 7.5 g Bacto casamino acids, 10 g yeast extract, 3.0 g trisodium citrate, 200 g NaCl, 20 g MgSO 4 Á7H 2 O, 2.0 g KCl, 50 mg FeSO 4 Á4H 2 O and 0.36 mg MnCl 2 Á4H 2 O, pH 7.2). When required, mevinolin was added to a concentration of 5 or 10 mg/ml for H. volcanii or Halobacterium sp. NRC-1, respectively. The plasmid pNP22 (33) was used as the source for bgaH gene, while the H. volcanii-E. coli shuttle vector pWL102 (34) was used for constructing the bgaH reporter module. The primers used in this study are listed in Table 1. Cloning the hsp5 gene from H. salinarum CGMCC 1.1959 Using the sequence information of the hsp5 gene (VNG_6201G, in GenBank AE004438) of Halobacterium sp. NRC-1 (18), primers hspF82 and hsp5R were designed to amplify the corresponding gene of H. salinarum CGMCC 1.1959 and its promoter region. The hspF82 primer located 101 bp upstream of the hsp5 start codon, while hsp5R was complementary to an 18 bp DNA region in the 3 0 terminus of the hsp5 open reading frame (ORF). The resulting PCR product was ligated into the vector pUCm-T (Sangon, China) and sequenced. Constructs used for transformation of haloarchaea and reporter gene analysis For analysis of P hsp5 activity in vivo, we used a plasmidbased transcriptional reporter system as described previously (33,35). The P hsp5 region was amplified by PCR using primers hspF82 and hspRNdeI, with CGMCC 1.1959 genomic DNA as template. The primer hspRNdeI was complementary to a DNA region including the first three codons of the hsp5 gene. This PCR product was purified and cleaved with NdeI and ligated to the NdeI/ NcoI-digested bgaH fragment derived from plasmid pNP22. The resulting NdeI-fused fragment was used as a template to amplify the P hsp5 -bgaH fragment using primer hspF82 and bgaHRNcoI, which is complementary to the 3 0 -terminal sequence of bgaH. Then, the PCR product was cloned into the pWL102 at the BamHI and NcoI sites. The resulting plasmid, named pL82, was used for constructing the 5 0 flanking deletion mutants of P hsp5 , named pL52, pL42, pL37 and pL32, using forward primers hspF52, hspF42, hspF37 and hspF32, respectively. The 3 0 flanking deletion mutant (Mdel) was constructed in a similar way to pL82, except that the forward primer hspF37 and reverse primer hspR10 were used to acquire the 3 0 flanking deleted-P hsp5 fragment. In order to generate site-specific mutants, specific forward primers carrying the desired mutated nucleotides (FM1-FM12, Table 1), and the reverse primer bgaHRNcoI were used to amplify the P hsp5 -bgaH fusion fragments from pL37. The resulting PCR products were inserted into pWL102 to generate the desired constructs. Similarly, the P bop -bgaH fusion was generated by PCR amplification using the primers bopF and bgaHRNcoI with the plasmid pNP22 as the template. To generate P bop -P hsp5 chimeras (bBhsp, bThsp and bBThsp), primers containing the BRE or/and TATA box sequence of P bop (Table 1) were used with plasmid pL37 as the PCR template. The P hsp5 -P bop chimeras hBbop, hTbop and hBThsp were acquired by PCR amplification using primers containing the BRE or/and TATA box sequence of P hsp5 (Table 1), and using plasmid pNP22 as template. The PCR products of the promoter chimera-bgaH fusion were cloned into pWL102 at BamHI and NcoI sites. 
The fidelity of PCR-amplified products in these recombinant plasmids was confirmed by DNA sequencing. H. volcanii DS70 and/or Halobacterium sp. NRC-1 cells were transformed with plasmid DNA isolated from E. coli JM109 as described by Cline et al. (36). Isolation of RNA from cells under heat shock Cells of Halobacterium sp. NRC-1, H. salinarum CGMCC 1.1959 or H. volcanii were grown at 378C until midlogarithmic growth phase, and then shifted to elevated temperatures (45,48,55 or 588C) for heat shock for 15 min. The heat-shocked cells (5 ml) were immediately collected for RNA extraction using TRIzol reagent (Gibco BRL, Gaithersburg, MD, USA) according to the manufacturer's instructions, with cells remaining at 378C as the controls. Northern blot and primer extension analyses Activities of all the promoters in this study were measured by northern blot analysis. For monitoring the gene expression of hsp5, bgaH and bop, the hsp5 probe (228 bp), bgaH probe (340 bp) and bop probe (341 bp) were amplified with primer pairs hsppF/hsppR, bgaHpF/ bgaHpR and boppF/boppR (Table 1), respectively. The 7S RNA was monitored as an internal control by a specific probe (110 bp) amplified with the primers 7SF and 7SR (Table 1). All the PCR products used for probes were labeled with [a-32 P]-dCTP and subjected to northern blot analysis as described previously (37). The northern hybridization signal of the hsp5 or reporter gene (bgaH) was quantified using Quantity One software (Bio-Rad, Hercules, CA, USA) by scanning the exposed X-ray films, and normalized against the signal of the internal control (7S RNA). Heat-shock induction folds were determined by taking the ratio of the normalized heat-shock to nonshock hsp5 or bgaH signals. Quantification of these transcript levels and heat-shock induction folds were based on the results of two or more independent experiments for each promoter. To determine the transcriptional start sites of the P hsp5 -controlled hsp5 in CGMCC 1.1959 and bgaH reporter gene in H. volcanii, the primer hsp5seq hybridizing to 20 nt within the hsp5 gene and the primer bgaHseq complementary to a 20 bp DNA region within the bgaH gene were used. These primers were labeled at the 5 0 -end with [g-32 P]-ATP and were used for both DNA sequencing and primer extension as previously described (37). Overexpression and purification of TFBs The tfbB and tfbG genes were cloned from Halobacterium sp. NRC-1 by PCR with primer pairs tfbBF/tfbBR and tfbGF/tfbGR, respectively, and the tfb2 gene was amplified from H. volcanii with primers tfb2F and tfb2R ( Table 1). All the PCR fragments were sequenced and cloned into the expression vector pET28a at the BamHI/ HindIII sites. The recombinant plasmids were then introduced into E. coli BL21 (DE3). The E. coli recombinants were cultured until mid-logarithmic phase and then induced with 1 mM IPTG for an additional 4 h. All the histidine-tagged proteins were purified by a Ni-NTA agarose column (Novagen) according to the manufacturer's instructions. The eluted solution containing TFB was identified by SDS-PAGE and subsequently pooled and dialyzed against buffer A [50 mM Tris-HCl (pH 8.0), 1 mM EDTA, 10% glycerol, 0.1 mM ZnCl 2 , 50mM MgCl 2 , 2M KCl, and 0.2 g/l of PMSF] and subsequently concentrated by ultrafiltration using an Amicon Ultra-15 centrifugal filter device with 10 kDa molecular-weight cutoff (Millipore, Bedford, MA, USA). The concentrations of purified proteins were determined by using the BCA TM protein assay kit (Pierce, Rockford, IL, USA). 
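For clarity, the fold-induction arithmetic described earlier in this section for the Northern blot signals (normalization of each band to its 7S RNA loading control, then the ratio of normalized heat-shock to normalized non-shock signal) can be written as a small sketch. The numeric values below are hypothetical placeholders, not measured intensities.

```python
# Minimal sketch of the Northern blot quantification described above:
# each signal is normalized to its 7S RNA internal control, and the induction
# fold is the ratio of the normalized heat-shock to the normalized non-shock signal.
def induction_fold(hs_signal, hs_7s, ns_signal, ns_7s):
    return (hs_signal / hs_7s) / (ns_signal / ns_7s)

# Example with hypothetical densitometry values (arbitrary units).
fold = induction_fold(hs_signal=2400, hs_7s=800, ns_signal=300, ns_7s=1000)
print(f"heat-shock induction: {fold:.1f}-fold")   # -> 10.0-fold in this example
```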
Determination of the interaction between P hsp5 and TFBs by EMSA In order to test the specificity of DNA binding of TFB2, TFBb and TFBg, the following DNA fragments were prepared for EMSA: FW (À37 to +10 region of the wildtype P hsp5 ), FM (BRE and TATA box of P hsp5 in FW were replaced by corresponding parts of P bop ) and FD (BRE and TATA box of P hsp5 in FW were deleted) were generated by PCR, using [g-32 P]-ATP-labeled primer pairs hspF37/hspR10, bBThspF/hspR10 and hspF24/ hspR10 (Table 1), respectively. These [g-32 P]-ATP-labeled PCR products were purified with UNIQ-10 Column (Sangon, China). Interaction between P hsp5 and TFBs was performed as described by Ken and Hackett (38) with minor modifications. Briefly, TFBs (0-4 mM) were incubated with 20 fmol 32 P-labeled DNA in a 20 ml reaction mixture containing 0.5 M NaCl, 25 mM EDTA and 3 mg poly (dI/dC) at 37 or 508C for 30 min. The resulting complexes were run on a 5% polyacrylamide gel (acrylamide/bisacrylamide weight ratio of 60 : 1) in 100 mM sodium phosphate buffer (pH 6.0, preheated to 37 or 508C). The gels were electrophoresed at 3 V/cm for 5-6 h. Cloning and transcriptional analysis of hsp5 gene in H. salinarum It has been reported that the hsp5 is one of the most highly upregulated genes under heat shock in Halobacterium sp. NRC-1 (39,40). To determine whether the corresponding gene was also present and heat shock-inducible in some other Halobacterium strains, we have cloned the hsp5 gene and its promoter region from the genome of H. salinarum CGMCC 1.1959. Interestingly, pairwise sequence comparisons showed that hsp5 of CGMCC 1.1959 exhibited 100% identity with that of Halobacterium sp. NRC-1. Moreover, when the CGMCC 1.1959 cells were grown to the mid-logarithmic phase at 378C and then shifted to elevated temperatures (45,48,55 or 588C) for 15 min, northern blotting clearly revealed that the hsp5 transcripts increased upon temperature rising (up to $12-fold at 588C), exhibiting a typical pattern of heat-shock response ( Figure 1A). The transcription initiation site of hsp5 was then demonstrated by primer extension. Under both normal growth temperature and heat-shock conditions, the hsp5 transcripts were initiated from the same residue (G) located 19 bp upstream of the ATG start codon ( Figures 1B and 2A). Further analysis of the hsp5 promoter (P hsp5 ) sequence identified a typical TATA box (À31 TTTTTTA À25) located 25 bp upstream of the transcription initiation site, and a putative BRE (À37 AGAAAA À32) immediately upstream of the TATA box ( Figure 2A). Interestingly, just 2 bp upstream of these putative core promoter elements, there was the stop codon (À41 TGA À39) of the upstream gene. To ascertain if any regulatory elements exist adjacent the BRE/TATA box, we defined the DNA sequence from À82 to +19 as the full-length promoter region ( Figure 2A) for investigation of expression regulation. In vivo analysis of P hsp5 -controlled transcription under heat shock In order to establish a well-defined in vivo system to dissect the regulation mechanism of P hsp5 , the b-galactosidase gene (bgaH) from Haloferax lucentense (previously Haloferax alicantei) (33,41) was placed immediately downstream of P hsp5 region, and cloned into the shuttle vector pWL102. The resulting vector, named pL82 (with full-length P hsp5 , Figure 2A), was introduced into H. volcanii DS70, a widely used strain that lacks detectable bgaH transcripts as well as b-galactosidase activity (32,41). 
Since the BgaH enzyme was likely unstable at high temperatures (data not shown), and the bgaH expression could only be detected on mRNA levels in the case of weak promoters (33), hence the activities of P hsp5 and its derived promoters were evaluated by a direct assay of the bgaH transcripts with northern blot analysis. First, a time course induction of P hsp5 -controlled bgaH transcription was investigated under both 48 and 588C. The peak for transcription induction occurred at 588C for 15 min ( Figure 2B), resembling that of hsp5 in the wild-type strain ( Figure 1A). Therefore, we selected the treatment of 588C for 15 min as a standard heat-shock stress in all the following experiments. To confirm whether the transcription initiation site was altered by fusing the reporter gene to P hsp5 , primer extension analysis was performed on cellular RNA extracted from the H. volcanii transformants harboring pL82 under both 378C and 588C. As shown in Figure 2C, the transcription initiation site from the P hsp5 -bgaH fusion was exactly the same as the native hsp5 gene for both basal and heat-shock transcription, demonstrating that the transcription start site controlled by P hsp5 was not affected by either the reporter gene or the alternative host strain. Mapping the 5' boundary of the P hsp5 by deletion analysis To determine the minimal region of the promoter P hsp5 for both basal and heat-inducible function, we created a set of promoters with different 5 0 -deletions (from À82 to À32) and the same 3 0 terminus (+28 within the hsp5 coding region) by PCR amplification (Figures 2A and 3A). The full-length and shortened promoters fused with bgaH gene were cloned into plasmid pWL102. These constructs, named pL82, pL52, pL42, pL37 and pL32 ( Figure 3A), respectively, were introduced into H. volcanii. The relative activity of each promoter was tested under both normal growth temperature (378C) and heat shock (588C), by measuring the levels of bgaH transcripts. It was revealed that the full-length promoter in pL82, and 5 0 flankingshortened mutants in pL52, pL42 and pL37 exhibited similar transcription activities, with about 8 to 11-fold upregulation under heat shock (Figure 3), resembling the native promoter in H. salinarum CGMCC 1.1959. However, when the 5 0 -end of P hsp5 was shortened to À32 bp where the putative BRE (À37 AGAAAA À32) was deleted, both the basal and heat-induced transcription activities became hardly detectable ( Figure 3B). Moreover, the putative TATA box (À31 TTTTTTA À25) was also extremely important. Substituting three of the six nucleotide 'T' with 'G' made the promoter completely inactive (data not shown). These results demonstrated that the 5 0 terminus of the functional P hsp5 extends to the position À37, which was exactly the 5 0 boundary of the core promoter elements, the BRE and TATA box. Mutational analysis of the sequence downstream of the TATA box in P hsp5 Since the sequence upstream of the BRE and TATA box was not involved in the transcriptional regulation of the P hsp5 -controlled genes, we then analyzed to determine whether the downstream sequence accounted for the heatshock response. PCR-based scanning mutagenesis was performed to alter the targeted nucleotides downstream of the TATA box. The resulting mutants (M1 to M12, and Mdel), based on pL37, were introduced into H. volcanii. The transcription efficiency of each mutated promoter was determined by northern blot analysis, and was compared with that of the intact functional promoter in pL37. 
It was shown that both basal transcription and heat induction (12 AE 4 fold) were not significantly changed for these mutated promoters, except for mutant M10 that completely lost transcriptional activity ( Figure 4). Further analysis of the mutations within M10 revealed that the transcription initiation point was altered; thereby, the transcription initiation was inhibited. When the transcription initiation residue (G) was restored in the mutant NM10, it acquired the similar basal and heat-inducible transcription activities as the native promoter ( Figure 4). Noteworthily, there was a large inverted repeat (IR) sequence (-5 TGGCT-N4-TCA-N3-TGA-N2-AGCCA +20), which overlapped the transcription start site (Figure 4), and was likely to be a regulatory element for heat-shock response. However, point mutations of the 5 0 -half (M9 to M12) or even deletion of the 3 0 -half (Mdel) of this IR did not significantly affect the transcription activity of P hsp5 under either normal growth temperature or heat-shock conditions, implying that this region was not involved in heat-shock regulation in H. volcanii. These results suggested that there were likely no heat-shock response elements within the region between the core promoter elements (BRE and TATA box) and the translational start codon. BRE and TATA box are responsible for both basal and heat-induced transcription of P hsp5 The earlier results suggested that only the core promoter elements, the BRE and TATA box, were the likely candidates for regulation of the detectable basal as well as the strong heat-inducible transcriptional activity of P hsp5 . To confirm this, we constructed a set of chimeric promoters by recombining the BRE, TATA box and downstream sequences between the P hsp5 and a nonheatinducible promoter of the bacterio-opsin gene (P bop ) (42). These promoters were then ligated with bgaH ORF and inserted into pWL102. The bop promoter (in the construct bopW) consisted of the TATA box and six upstream nucleotides (we assigned it as the putative BRE in this article) as well as the sequences between the TATA box and the bop ATG start codon, while the wild-type P hsp5 (in construct hspW) used the same sequence as that in pL37. The chimeric promoters bBhsp, bThsp or bBThsp were constructed by substitution of the BRE, TATA box or both elements of the P hsp5 with the counterparts of P bop . Similarly, the chimeric promoters hBbop, hTbop or hBTbop were derived from P bop , by substitution with the BRE, TATA box or both elements of the P hsp5 ( Figure 5A). Each of these constructs was introduced into H. volcanii for transcriptional analysis. Significantly, while P hsp5 was heat-inducible (hspW, Figure 5B) and P bop was not (bopW, Figure 5C) as expected, it was clearly shown that when the BRE/TATA elements of P bop were replaced by the counterparts of P hsp5 , it rendered the nonheat-inducible promoter P bop completely heat-inducible in the resulting chimeric promoter (hBTbop, Figure 5C). On the contrary, if the BRE/TATA elements of the P hsp5 were substituted by those of P bop , the resulting chimeric promoter bBThsp lost heat-inducible activity, and the transcript level of the reporter gene became too low to be detectable by northern blotting ( Figure 5B). These results reinforced the conclusion that only the BRE and TATA elements of P hsp5 accounted for the heat-inducible feature of this promoter in H. volcanii. 
Interestingly, it is likely that both the BRE and TATA box of the P hsp5 are heat responsive elements, since retaining either the TATA box or BRE in the chimeric promoters derived from P hsp5 (bBhsp and bThsp, Figure 5B), or substitution with either the BRE or TATA box of P hsp5 in the P bop -derived chimeras (hBbop and hTbop, Figure 5C), the resulting chimeric promoters acquired higher transcriptional activities ($2-to 5-fold) at elevated temperature than at normal growth temperature. Thus, both the BRE and TATA box of P hsp5 are important for heat-shock response, while their combination provided the most significant contribution to transcriptional activation under heat shock (hspW and hBTbop, Figure 5B and C). Transcriptional analysis of the chimeric promoter hBTbop in Halobacterium sp. NRC-1 Considering that the promoter P hsp5 was acquired from Halobacterium and our above investigations were mainly performed in Haloferax, we then further asked whether the conclusion made sense in another model haloarchaeon Halobacterium sp. NRC-1, which is phylogenetically closely related to H. salinarum CGMCC 1.1959. First, we analyzed the mRNA levels of bop in Halobacterium sp. NRC-1, and confirmed that the bop promoter was not heat-inducible ( Figure 6A). Then, the construct hBTbop was introduced into Halobacterium sp. NRC-1 and the transcript levels of the reporter gene (bgaH) under both nonshock and heat-shock conditions were determined. Our results confirmed again that the BRE and TATA box were indeed the determinants of the heat-inducible activity of P hsp5 . The chimeric promoter in hBTbop, with the BRE/TATA elements from P hsp5 and downstream sequence from P bop , acquired strong heat-inducible transcriptional activity ( Figure 6B). Transcriptional profiling of GTF genes of Halobacterium sp. NRC-1 in response to heat shock Previous investigations have demonstrated that the gene encoding the transcription factor TFB2 in Haloferax is upregulated under heat shock (30). In order to determine which GTF genes in Halobacterium sp. NRC-1 were responsive to heat-shock stress (if any), we have analyzed our microarray database (39, and unpublished data). As shown in Table 2, the expression level of the six tbp genes (tbpA-F) were scarcely altered after heat shock, ranging from about À1.05 to 1.25-fold. However, among the seven tfb genes (tfbA$G), tfbB and tfbG were significantly upregulated, with fold changes of about 1.68 and 2.41, respectively. Therefore, it was of interest to determine whether these heat-induced GTFs, TFB2 from Haloferax and TFBb or TFBg from Halobacterium, were involved in transcriptional regulation of P hsp5 by recognition of the promoter elements. TFBb and TFB2 specifically bind to P hsp5 at elevated temperature To test whether the P hsp5 was recognized by the heat-induced general transcription factors, TFB2 from H. volcanii, and TFBb and TFBg from Halobacterium sp. NRC-1, they were overproduced and purified in E. coli, and were subjected to electrophoretic mobility shift assay (EMSA) to determine their interactions with the P hsp5 DNA and its mutants (Figure 7). Interestingly, TFB2 could efficiently bind to the wild-type P hsp5 (FW), with even higher binding efficiency at 508C than at 378C, as more DNA-protein complex and less proportion of free FW DNA appeared at 508C when same concentration of TFB2 was included in the reaction ( Figure 7B). 
This binding appears to be specific, since interaction between TFB2 and the BRE/TATA-deleted fragment (FD) was not detectable in the same EMSA. A relatively weak interaction between TFB2 and FM (BRE/TATA of P bop ) was detectable; however, it only occurred at 508C when high concentrations of TFB2 (e.g. 4 mM) were available ( Figure 7B). These results may help explain the heatinducibility of P hsp5 in Haloferax, as the TFB2 was upregulated under heat shock (30), and could efficiently bind to P hsp5 at high temperature. Significantly, when TFBb and TFBg were incubated with the P hsp5 DNA (FW) and P hsp5 -derived mutants (FM and FD), only TFBb but not TFBg could specifically bind to the P hsp5 DNA at the high temperature (508C), and no detectable interactions were observed for either of the TFBs at the lower temperature (378C) (Figure 7C and D). Moreover, TFBb and TFBg could not interact with the P hsp5 -derived mutants (FM and FD) in EMSA under the same conditions, suggesting that the interaction of TFBb and P hsp5 is specific and likely temperature-dependent. These results indicated that TFBb, but not TFBg, might regulate the hsp5 gene expression at elevated temperature in Halobacterium. Taken together, our results have established a new paradigm for archaeal gene regulation in response to environmental changes. Under heat shock, a few heatinducible GTFs, such as TFB2 in Haloferax or TFBb in Halobacterium, together with the corresponding TBPs, yet to be identified, could immediately modulate a group of downstream target genes, including the small heat-shock gene hsp5, to cope with the environmental stress. DISCUSSION Multiple GTFs are present in haloarchaea and have been speculated to regulate differential gene expression for years (19), and systems approach has provided supports that the GTFs in Halobacterium sp. NRC-1 likely accomplish large-scale regulation of transcription (20,21). However, detailed studies of the role of GTFdirected transcriptional regulation of specific genes in response to environmental signals in archaea are limited. In this article, we demonstrated that the BRE and TATA box of the P hsp5 play a critical role in both basal and heat-induced gene expression, which was confirmed by both genetic and biochemical approaches. Therefore, our work has established a new paradigm for TFB-TBP modulated gene regulation in the domain Archaea. The hsp5 gene and its homologs, encoding sHSPs, are present in numerous haloarchaeal genomes including Halobacterium sp. NRC-1, Haloarcula marismortui and Haloquadratum walsbyi (15,17,18). These proteins belong to the Hsp20/a-crystallin family (43), and act as molecular chaperones to protect cellular proteins against irreversible aggregation during stress conditions (44). The hsp5 gene is upregulated under heat shock in both Halobacterium sp. NRC-1 (39,40) and H. salinarum CGMCC 1.1959 ( Figure 1A), and the hsp5 promoter also exhibited similar heat-inducibility in H. volcanii ( Figure 2B). Deletion analysis demonstrates that the 5 0 boundary of the functional promoter of hsp5 is exactly at the position of the putative BRE and TATA box ( Figure 3). Therefore, there is no upstream activation sequence (UAS) adjacent the BRE/TATA box in the defined full-length promoter P hsp5 . It is noteworthy that there is an IR overlapping the transcription initiation site in P hsp5 (Figure 4). This IR resembles the heat-shock regulatory elements usually presented in many bacterial (45)(46)(47) and some archaeal heat-shock genes (26). 
For instance, a conserved palindromic motif, CTAAC-N5-GTTAG, located downstream of the BRE/TATA elements of the promoter P hsr1 and P hsp20-2 in A. fulgidus, is involved in heat-shock regulation by binding of the heat-shock repressor HSR1 (26). However, the IR in P hsp5 was not found to be involved in the P hsp5 -controlled heat-shock response in H. volcanii, since mutagenesis of the sequences downstream of the TATA box including this IR did not significantly change the promoter activity, as long as the transcription initiation site was not altered (Figure 4). Moreover, replacement with the BRE/TATA box of P hsp5 , rendered the nonheat-inducible promoter (P bop ) heat-inducible, in both H. volcanii and Halobacterium sp. NRC-1 (Figures 5 and 6). Therefore, there is also no heat-shock response element downstream of the core promoter elements, and the BRE and TATA box of P hsp5 are likely the only elements accounting for both basal of heat-inducible transcription in these haloarchaea. These results are slightly different from the earlier observations for the P cct1 in H. volcanii, where the heat-responsiveness of P cct1 is mapped to the TATA box and surrounding sequences, including the putative BRE and two downstream sites (29). Nevertheless, it is most likely that the sequences surrounding the TATA box in P cct1 are also the contact sites of TFB or TBP; hence both P cct1 and P hsp5 might use the same mechanism of GTFs directed strategy in response to heat shock. This novel strategy of gene expression regulation for P hsp5 was further supported by direct biochemical evidence that P hsp5 was recognized by specific heat-inducible GTFs, TFB2 from Haloferax, and TFBb from Halobacterium (Figure 7). Our EMSA results indicated that both TFB2 and TFBb were able to recognize the corresponding core promoter without the assistance of TBPs, at least in vitro when high concentration of TFBs was supplied ( Figure 7B and C). It was observed that the binding efficiency of TFB2 was likely higher than that of TFBb. Since H. volcanii has a lower salt optimum than Halobacterium strains and both proteins were over expressed in E. coli, this different affinity is likely due to the presence of more properly folded molecules of TFB2, compared to TFBb, in the purified samples. The high molecular weight DNAprotein complexes appeared around the loading wells ( Figure 7B) are likely the aggregation of sufficient TFB2/P hsp5 complexes, which might occur when the complexes were transferred from the EMSA binding buffer (high salt concentration) to the electrophoresis buffer (low salt concentration). However, the formation of these DNA-protein complexes is obviously due to the specific interaction of TFB2 and the P hsp5 DNA but not nonspecific DNA-protein co-aggregation, as such a complex was never generated between TFB2 and the P hsp5 mutants in the same EMSA experiments (FM and FD, Figure 7B). Interestingly, although TFBg is also upregulated in Halobacterium sp. NRC-1 under heat shock (Table 2), amino acid sequence analysis revealed that TFBb shared more homology with TFB2 than TFBg (TFB2/TFBb, 71%; TFB2/TFBg, 62%). Moreover, microarray data has shown that under low temperature the tfbG gene is also upregulated, whereas the expression of hsp5 is highly inhibited (39). All these results indicated that TFBb, but not TFBg, selectively modulates the transcription of hsp5 and probably other heat-shock genes. A recent study on Halobacterium sp. 
NRC-1 has demonstrated that most of the TFBs, including TFBb, could interact with a single TBP (TBPe) (20), and most TBPs are not significantly upregulated under heat shock (Table 2). Meanwhile, multiple TFBs but only one TBP are found in the genomes of some other haloarchaea, e.g. H. marismortui (17) and Natronomonas pharaonis (16). Thus, it is likely the expanded family of TFBs plays a much more important role in heat-shock response in these investigated haloarchaea. However, the heat adaptability of TBP in interactions with the TATA box of the heatshock promoter should not be underestimated. It was observed that the TATA box of P hsp5 itself could slightly increase gene expression under heat shock ( Figure 5), implying that the corresponding TBP interacts more efficiently with the P hsp5 at elevated temperature. This temperature-dependent interaction manner of GTFs with heat-shock promoters was also observed in other archaea, e.g. the TBP and TFB of Methanosarcina mazeii were suggested to interact more strongly with stress-gene promoters during heat shock (48). Therefore, it is evident that both TFB and TBP contribute significantly to the upregulation of hsp5 under heat shock. It is noteworthy that specific transcriptional repressor modulated heat-shock response has also been reported recently in some thermophilic archaea, such as P. furiosus (24,25) and A. fulgidus (26); however, these kinds of heatshock regulators are still not identified in the extremely halophilic archaea. Interestingly, while many haloarchaea encode multiple TBPs and TFBs (19,20,30), some other archaea only harbor one or two TBPs and TFBs. So it is reasonable that haloarchaea have developed an additional sophisticated strategy of gene transcriptional regulation by selection of alternative TFBs and TBPs, as we have revealed in the hsp5 regulation. This regulatory strategy is conceptually similar to the alternative sigma factors directed transcriptional activation of several heat-shock genes in bacteria (49), and is reminiscent of the HSFs stimulated transcription in eukaryotes (50). Notably, haloarchaea flourish in extremely hypersaline environments and are confronted with many environmental stresses, including frequent changes of temperature. Transcriptional regulation of the important genes including those for sHSPs by GTFs, but not other secondary regulators, would help haloarchaeal cells respond quickly to the environmental challenges, and thereby adapt more efficiently to the harsh environments.
Development of TikTok-based Chemo-entrepreneurship e-Worksheet to Fostering Students’ Entrepreneurial Spirit : This research aims to develop an e-worksheet and analyze its quality based on the assessment of material experts, media experts, and chemistry teachers, as well as knowing students’ responses to TikTok -based chemo-entrepreneurship e-worksheet products. The method used a 4D model consisting of define, design, development, and disseminate stages, but was limited to the developing stage. Data collection techniques in this research were interviews, validation sheets Introduction Chemistry is a science that develops based on observations and experiments to form a scientific attitude, gaining experience in applying a scientific approach, and studying chemical concepts to solve problems in the surrounding environment (Dewi, 2021).Therefore, chemistry learning can directly align with various objects or phenomena around human life (Wibowo, 2018).One of the chemical materials studied by students and closely related to everyday life is the Colligative Properties of Solutions. The topic of colligative properties of solutions is closely associated with daily life activities, for example, making syrup, refining petroleum, refining sugar, and making ice cream.In making traditional ice cream, there is a process of adding salt, which is one application of the colligative properties of the solution, namely decreasing the solution's freezing point.The teacher needs to connect the material on Colligative Properties of Solutions and its application in life through entrepreneurship education.One approach that links material with natural objects around human life is the Chemo-entrepreneurship (CEP) approach. The CEP approach is a contextual chemistry learning approach, namely a chemistry approach that connects the material studied with natural objects (Wibowo, 2018).The CEP approach is aimed at motivating students to have an entrepreneurial spirit.The CEP approach can also train students to process materials into products we often encounter in real life, have economic value, and foster students' entrepreneurial interests (Milaningsih, 2023).Through the CEP approach, students are expected to be more creative in producing products that have economic value because, in reality, not all students continue their higher education after graduating from school (Ruliyanti, 2020). Providing entrepreneurship education at the high school level is essential, considering there is still a high level of open unemployment among educated people, including high school graduates (Pupasari, 2020).Entrepreneurship education is a tool that can be used to reduce unemployment and poverty and can also be used as a means to create a financially independent society so that it is able to achieve prosperity for individuals and the surrounding environment towards a prosperous society (Alstra, 2023).It is hoped that entrepreneurship education will reduce the high unemployment rate, especially among the educated (Nleonu EC, 2020).Schools need to provide students with real-world experience in business as part of entrepreneurship education to gain the necessary knowledge, attitudes, and abilities (Hidayat, 2021).Information and communication technology in the digital era has experienced significant growth (Marti'ah, 2022).This condition is an opportunity to create entrepreneurship, especially in the education sector, which demands a creative, competitive, and innovative generation Yogi, 2021). 
Sustainable education in the 21 st Century has four pillars of the learning process: "learning to know, learning to do, learning to be, and learning to live together" (Marryono Jamun, 2019).To realize these four pillars of education, teachers, as learning agents, need to study and apply technological developments in the learning process.Based on this, education should use technology to support learning, access information, and support learning activities and assignments (Novita, 2023).Apart from utilizing technology for learning, teachers need to prepare teaching materials for teaching related to the material to help students learn and deepen the material.One of the teaching materials that can be used is Student Worksheets. Student worksheets are a learning tool containing material summaries, practice questions, and instructions for implementing learning tasks that students must complete to build basic abilities according to achievement indicators of the learning outcomes (Mairani, 2022).However, these worksheets are considered less effective in supporting the learning process in the current era of increasingly rapid technology.Student worksheets still have shortcomings, including incomplete presentation of incomplete material, the cover appearance does not stimulate students' learning motivation, and the questions provided are less varied (Rosa, 2020).To optimize Student Worksheets both in terms of appearance and quality of learning, a transformation is needed to increase innovation and student creativity by replacing the function of short student worksheets with electronic versions or eworksheets (Khotami, 2023). E-worksheet is a student work guide to help students understand learning material in electronic form, which is implemented using desktop computers, notebooks, smartphones, and tabs (Firma Kholifahtus, 2021).E-worksheet can display videos, images, text, and questions that can be assessed automatically (Utami, 2022).Using e-worksheets for learning makes student activities more fun, makes learning interactive, and provides opportunities for students to continue to try and motivate themselves while learning (Indahsari, 2020).Apart from that, research by Milaningsih (2023) noted that e-worksheet chemo-entrepreneurship is feasible and effective for cultivating students' entrepreneurial spirit.The existence of eworksheets also increases teachers' creativity so that e-worksheets are more interactive and fun and attract students' interest in learning (Costadena, 2022). Students in the 4.0 era mostly use the internet to complete assignments, one of which is the use of social media.Pardianti (2022) stated that social media can be used as a learning medium.TikTok is a Chinese social network and music video platform launched in September 2016 (Apriyani, 2022).Syarifuddin (2022) research states that TikTok, together with the right use and methods, can be used as an engaging, interactive, and innovative learning media.Research conducted by Putri (2021) states that with the TikTok application, students can quickly create a learning process that attracts their attention. 
This research aims to produce innovative new teaching materials in the form of chemo-entrepreneurship e-worksheets that are feasible and effective in fostering students' entrepreneurial spirit, and to determine the response of students and teachers after conducting chemistry lessons using the developed e-worksheet. The e-worksheet is intended to help teachers deliver the materials, improve student learning outcomes, and foster students' entrepreneurial spirit.

Research Method
The Research and Development (R & D) method was applied in this study. R & D is a research method used to create a particular product and to test the practicality and effectiveness of that product. The product developed in this research was learning media in the form of an e-worksheet containing chemo-entrepreneurship linked to the social media platform TikTok. A 4-D research model was used, consisting of four stages, namely Define, Design, Develop, and Disseminate (Sugiyono, 2019), but the work was limited to the development stage.

The define stage specifies the requirements needed to develop the product. The activities carried out were needs analysis, student analysis, task analysis, material concept analysis, and formulation of learning objectives. Data for the define stage were obtained through interviews with and observation of chemistry teachers and high school students. The design stage is the stage of designing the media to be developed; it was implemented by selecting media, selecting formats, collecting references, making instruments, and making initial designs. The development stage is the stage for testing and improving the product: products are assessed by experts so that quality products are produced.

Data collection techniques in this research were interviews, validation sheets, and student response questionnaires. The e-worksheet being developed was validated by experts (material and media) to determine its feasibility. Product quality was validated and assessed using a Likert-scale questionnaire, while student responses were obtained using a Guttman-scale questionnaire. For the data analysis, the assessments given by the media expert, the material expert, and the reviewers on a Likert scale with the answer choices Excellent (E), Good (G), Fair (F), Poor (P), and Bad (B) were converted into scores of 5, 4, 3, 2, and 1, respectively (see Table 1). Next, the average value of each assessment aspect and of the overall assessment was calculated as X̄ = ΣX / n, where X̄ is the average score, ΣX is the total score, and n is the number of experts. The average score obtained was then converted into a qualitative category according to the classification in Table 1.

Table 1. Conversion of average scores into qualitative categories
1. x > X̄i + 1.80 SBi : Excellent
2. X̄i + 0.60 SBi < x ≤ X̄i + 1.80 SBi : Good
3. X̄i − 0.60 SBi < x ≤ X̄i + 0.60 SBi : Fair
4. X̄i − 1.80 SBi < x ≤ X̄i − 0.60 SBi : Poor
5. x ≤ X̄i − 1.80 SBi : Bad
Here X̄i denotes the ideal mean and SBi the ideal standard deviation of the scale.

Data from the students were converted into quantitative scores using the Guttman scale, and the percentage of product ideality was then calculated for each aspect as ideal percentage (%) = (total score obtained / maximum possible score) × 100 (a short computational sketch of this scoring procedure is given after the student-response results below).

Results and Discussion
In this study, a 4-D research model was used, consisting of the stages Define, Design, Develop, and Disseminate (Sugiyono, 2019). The development process begins with the Define stage, which includes five steps: needs analysis, student analysis, task analysis, material concept analysis, and formulation of learning objectives. This stage was carried
out to identify and determine learning needs and collect information related to the Chemo-Entrepreneurship-oriented e-worksheet.From the results of the needs analysis carried out by distributing questionnaires to students, it was found that most students said that the colligative properties of solutions were only explained using the lecture method, so students needed help understanding the concept of chemo-entrepreneurship in the material.Using the lecture method causes students to be less active in the learning process, so students could be more optimal in developing their potential (Wulandari, 2022).Meanwhile, the interview with the teacher concluded that the colligative properties of solutions were carried out using textbooks, student worksheets, and discussions.It needs analysis showed that a teaching material product was required to foster students' entrepreneurial spirit and help them understand the concept of colligative properties of solutions. The second stage was the Design Stage, resulting in the e-worksheet being designed with the Canva application and converted into PDF.The Canva application is an online design program that provides various tools such as presentations, brochures, posters, resumes, pamphlets, banners, and so on provided in the Canva application (Junaedi, 2021).The Canva application has advantages, including having a variety of attractive designs and increasing the creativity of teachers and students in designing learning media.In addition, the Canva application provides various features and templates, saving time in making learning media (Admelia et al., 2022).In this e-worksheet, learning videos were created via the TikTok application and then inserted into the e-worksheet as a link that can be watched by clicking to ensure the learning variation.The software used in this research includes Canva, TikTok, Kinemaster, Flip PDF Professional, and Google Form applications. The e-worksheet design began with preparing and analyzing material on the colligative properties of solutions.Material analysis aims to determine the material's characteristics and depth to be presented.The material on colligative properties that will be presented includes 1) Understanding the colligative properties of solutions, 2) Types of colligative properties of solutions, 3) Understanding freezing point depression of solutions, 4) Figure 1. Process of creating e-worksheet via the Canva application The components in the e-worksheet included a foreword, table of contents, instructions for use, introduction, concept map, summary of material on colligative properties of solutions, product planning design, preparation of project schedules, implementation and monitoring related to making rotary ice, evaluation of questions, and references.The next process was making a video using the Kinemaster application and making evaluation questions using Google Forms. 
The video editing process was carried out with the help of the Kinemaster application.The Kinemaster application allows users to edit videos easily and quickly.Many features are available in the Kinemaster application, such as effects, filters, music, and much more.Videos that had been edited were then downloaded in high quality to obtain more apparent and exciting results.Videos edited using the Kinemaster application were then uploaded to the TikTok application.After the video had been successfully uploaded to the TikTok application, the next step was to insert the TikTok video link into Canva to access the video on the e-worksheet.The content of the video is an explanation of the material on the colligative properties of solutions starting with apperception (Astiani et al., 2018).Apperception in the video is related to daily life events, so it can reduce boredom from studying in line with (Kamila, 2022), who stated that apperception activities are beneficial in providing an initial overview when delivering material and can increase students' understanding and motivation in learning.Some of the apperception scenes presented in the video include the process of dissolving salt in each solution and then the role of adding salt to ice cubes in making rotating ice.Salt added to ice cubes helps lower the freezing point of ice cubes or solutions.Salt has hydrophilic properties or can bind with water molecules so that salt can make ice (Agung et al., 2022).The learning video in the TikTok application can be seen in Figure 3. The video contains the definition of colligative properties of solutions, various colligative properties of solutions, and questions for practice and discussions.The video can be viewed on the e-worksheet by clicking the button provided to make it easier for students to Figure 2. 
TikTok Video The third stage was Development, which was carried out by developing the e-worksheet product as a link created using the Flip PDF Professional application, as seen in Figure 4.After downloading the e-worksheet, it was uploaded to the Flip PDF Professional application.After successfully uploading to the Flip PDF Professional application, the file that had become a flipbook was published online so that a link appeared.Files were published online using high quality for a more attractive appearance and more explicit images.The link was then copied, and after that, it can be distributed online.The e-worksheet was distributed via a link so that e-worksheet can be easily accessed.After the product was finalized, the next step was a review by one material expert, one media expert, four reviewers (high school chemistry teachers), and students (see Table 2).The review results of these experts were then used as a consideration to determine the product's suitability.Validation by material experts was carried out twice.In the first stage of material expert validation, a score of 80% was obtained, stating the "good" category so that the e-worksheet that was being developed needed to be improved and revalidated.In the material validation process, material experts suggested enhancing aspects of the material content, namely by explaining the application of the colligative properties of solutions in everyday life apart from the process of making traditional ice cream, even though what was presented in the eworksheet was only the process of making traditional ice cream.After revision, the second stage of material expert validation was continued, where a score of 90% was obtained, which stated the "excellent" category.Therefore, based on experts' suggestions and comments, the media developed is feasible and can be tested for small-group research. The media expert's assessment obtained a result of 97% in the "excellent" category.These results state that e-worksheets include outstanding aspects of presentation, graphics, language, and TikTok videos.Media experts also provided advice regarding errors in writing and choosing font size.The choice of font size was intended to make the e-worksheet easy and exciting to read (Sari et al., 2023).Media experts also provided input regarding using the formula for lowering the freezing point of solutions found on page 10 of the e-worksheet.Assessment by the reviewers (four chemistry teachers) obtained an ideal percentage of 96% in the excellent category.Reviewers also provide suggestions and input, such as selecting animations that can be adapted to the current material; in the question evaluation, there should be questions related to making ice cream, completeness of the material, use of the freezing point depression formula, and suitability of the concept map to the material presented.Based on the assessment results by material experts, media experts, and reviewers, data was obtained that the Chemo-Entrepreneurship e-worksheet developed was in the excellent category. The developed Chemo-Entrepreneurship e-worksheet was tested to 30 students from three schools.They had delivered their comments and responses through a distributed questionnaire.Based on the student questionnaire, the ideal percentage was 96% in the "excellent" category.The results of student responses stated that the e-worksheet Chemoentrepreneurship that was developed made learning more enjoyable and fostered students' entrepreneurial spirit through the Chemo-entrepreneurship approach. 
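As referenced in the Research Method, the conversion from raw questionnaire scores to the categories and ideal percentages reported above can be sketched with a short script. This is a minimal illustration: the category boundaries implement the five-level conversion of Table 1, the ideal-mean and ideal-standard-deviation formulas for a 1-5 scale are assumptions of this sketch, and the reviewer scores are invented.

```python
def average_score(scores):
    """Mean of the expert scores for one aspect (X-bar = sum(X) / n)."""
    return sum(scores) / len(scores)

def category(x, s_min=1, s_max=5):
    """Map an average score onto the five-level qualitative scale of Table 1."""
    xi = (s_max + s_min) / 2   # ideal mean of the scale (assumption of this sketch)
    sbi = (s_max - s_min) / 6  # ideal standard deviation of the scale (assumption of this sketch)
    if x > xi + 1.80 * sbi:
        return "Excellent"
    if x > xi + 0.60 * sbi:
        return "Good"
    if x > xi - 0.60 * sbi:
        return "Fair"
    if x > xi - 1.80 * sbi:
        return "Poor"
    return "Bad"

def ideal_percentage(total_score, max_score):
    """Ideal percentage = obtained score / maximum possible score * 100."""
    return 100.0 * total_score / max_score

# Invented example: four reviewers rating one aspect on the 1-5 Likert scale.
reviewer_scores = [5, 5, 4, 5]
avg = average_score(reviewer_scores)
print(avg, category(avg), f"{ideal_percentage(sum(reviewer_scores), 4 * 5):.0f}%")
```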
The Chemo-Entrepreneurship approach in the e-worksheet can be seen in the primary material presented, the work steps, and the product planning tasks. The e-worksheet provides a brief description of the primary material together with additional information relating to the products produced. The experimental work steps explain the process of turning raw material into a functional product with economic value. The experiments are not only related to chemistry but also have entrepreneurial characteristics built into them. The CEP approach makes chemistry learning exciting and allows students to optimize their potential to produce products (Wibowo & Ariyatun, 2018). The e-worksheet is equipped with product planning tasks to foster an entrepreneurial spirit in students.

Based on the research conducted, the developed Chemo-Entrepreneurship e-worksheet received a very good assessment from teachers and is highly feasible for use as teaching material to foster students' entrepreneurial spirit. The Chemo-Entrepreneurship e-worksheet makes learning more enjoyable and helps students understand the learning material.

Conclusion
The conclusions obtained from the results of this study are that the developed Chemo-Entrepreneurship e-worksheet obtained an ideal percentage of 90% from the material expert (excellent category), 97% from the media expert (excellent category), 96% from the reviewers (excellent category), and 96% from the student response test (excellent category). Thus, the developed Chemo-Entrepreneurship e-worksheet is of excellent quality and is suitable as an alternative learning medium for the colligative properties of solutions in classroom learning to foster students' entrepreneurial spirit.

Recommendation
Some recommendations for future researchers are as follows: (1) This research, using the R&D method, only reached the development stage; it is hoped that further research can reach the dissemination stage to complete it. (2) Complement the developed chemo-entrepreneurship e-worksheet with practical videos other than the making of ice cream so that students can understand the material more easily.
Figure 3. Process of changing the e-worksheet in the form of a PDF file into a link
Dwell Time Distributions of the Molecular Motor Myosin V The dwell times between two successive steps of the two-headed molecular motor myosin V are governed by non-exponential distributions. These distributions have been determined experimentally for various control parameters such as nucleotide concentrations and external load force. First, we use a simplified network representation to determine the dwell time distributions of myosin V, with the associated dynamics described by a Markov process on networks with absorbing boundaries. Our approach provides a direct relation between the motor’s chemical kinetics and its stepping properties. In the absence of an external load, the theoretical distributions quantitatively agree with experimental findings for various nucleotide concentrations. Second, using a more complex branched network, which includes ADP release from the leading head, we are able to elucidate the motor’s gating effect. This effect is caused by an asymmetry in the chemical properties of the leading and the trailing head of the motor molecule. In the case of an external load acting on the motor, the corresponding dwell time distributions reveal details about the motor’s backsteps. Introduction The molecular motor myosin V is a dimeric protein with two identical motor domains or 'heads', each of which has a nucleotide binding pocket for the hydrolysis of ATP. The motor transduces the free energy released from ATP hydrolysis into discrete mechanical steps along actin filaments [1,2]. The properties of these steps have been characterized by changing the ATP concentration and an external load in various single-molecule and chemokinetic experiments [2][3][4][5][6][7][8][9][10] including living cells [11]. The details of the motor's mechanical steps have been investigated using sophisticated single-molecule techniques [12][13][14][15]. Moreover, the motor's motion has directly been visualized through AFM imaging [16]. In a single forward step, the motor unbinds its trailing head from the filament, moves this head forward by 72 nm, and rebinds it to the filament in front of the other head. Since the latter head stays at a fixed filament position, the motor's center-of-mass is displaced by 36 nm during such a step. This directed motion requires the coordination of the ATP hydrolysis by the two motor heads. It is generally believed that this coordination involves the following motor states and transitions. For most of its time, the motor molecule dwells at a fixed filament position with ADP bound to both heads. The sequence that leads to a forward step starts with ADP release from the motor's trailing head followed by ATP binding to this head, while the motor's leading head remains in its ADP state. The different ADP release rates for the trailing and leading head, that we will describe as gating, are thus essential for the coordination of the two heads. However, it has not been possible, so far, to directly measure these two rates for doubleheaded myosin V. Experiments on single-headed myosin V indicate that resisting and assisting load forces lead to rates that can differ up to 100-fold [8,9,17,18]. In a double-headed molecule, intramolecular strain leads to opposite forces on the two heads of the motor. Therefore, the experiments on singleheaded myosin V imply that the different release rates will depend on force as well. In the present study, we will refer to the difference of ADP release rates as gating. 
Note that the latter term is used with a slightly different meaning by different authors. This paper is closely related to our previous work, where we have discussed a network description for myosin V that captures the motor's stepping properties as a function of external control parameters such as nucleotide concentration and external load force [19]. The step velocity of myosin V depends on the concentration of ATP, and decreases with decreasing [ATP]. For a load force that opposes the motor's stepping direction, the load decreases the motor's velocity, until this velocity vanishes at the stall force F s~1 :5{3 pN [3,4,6,7]. For resisting loads that exceed the stall force, the motor exhibits a ratcheting behaviour, i. e., it steps backwards without being much affected by the ATP. For assisting loads, the step velocity of the motor is independent of the load force. In our previous study [19], the motor's motion was described by a chemomechanical network that includes both chemical reactions, provided by the binding and release of nucleotides, as well as two mechanical stepping transitions, both of which have the same step size ('~36 nm). The stepping properties of the motor for both assisting, sub-and superstall forces, were described by three different motor cycles, a chemomechanical, an enzymatic, and a mechanical step cycle. Furthermore, the gating effect was incorporated by differing ADP release rates of the molecule's leading and its trailing head. In this paper, we will determine the ratio of the two ADP release rates by analyzing the dwell time distributions as measured for the double-headed motor. We deduce this ratio, termed gating parameter, through comparison of the three cycles discussed in [19] with the experimental dwell time distributions that are available for myosin V [4,7,10]. In this way, our work is embedded into the framework of branched chemokinetic networks that have been addressed predominantly in the context of kinesin [20,21]. Our aim is to directly relate the experimentally determined chemokinetic parameters of myosin V such as the binding and release of nucleotides to the dwell time distributions as measured in single-molecule experiments with double-headed myosin V. In single-molecule experiments that involve double-headed myosin V, the motor's steps are monitored through the motion of a bead attached to the stalk of the motor. The evaluation of a stepping trajectory, as shown in Fig. 1, leads to a distribution of its dwell times during which the motor sojourns between two steps. These dwell times provide information about the molecule's chemomechanical mechanism and have been computed for kinesin [22] and for complex networks of myosin V [23]. For kinesin, it is difficult to measure the dwell time distribution because of the motor's fast kinetics [24]. For myosin V, however, the slower motion allows to experimentally resolve the overall shape of its dwell time distribution. The network representation for myosin V as introduced in Ref. [19] and used here is based on the experimentally observed separation of time scales between mechanical and chemical transitions [13,25]. A similar time scale separation has been observed for conventional kinesin [24] and used to construct chemomechanical networks with several motor cycles [21,26,27]. This paper is organized as follows. We give a brief overview about network representations for molecular motors and the formalism for the calculation of the dwell time distributions using Markovian dynamics. 
We first describe the motor's kinetics by a single chemomechanical cycle that is dominant for external loads F below the stall force F s , as follows from our previous study [19], in which we used a more complex three-cycle network. We then calculate the motor's dwell time distributions for various nucleotide concentrations, in very good agreement with experimental data. The use of a network based on few parameters allows to quantify the influence of nucleotide binding and release rates onto the dwell time distributions. Second, taking additional pathways from the more complex network into account, we quantitatively determine the motor's gating effect. To address force dependent dwell time distributions, we discuss the range of external loads, for which the uni-cycle network applies. Our approach enables us to determine separate distributions for forward and backward steps, and we gain information about the motor's backward steps through comparison with experimental data. Finally, we summarize and discuss our results. Network Representations In general, the stepping properties of molecular motors can be described through network representations with discrete chemomechanical states of the motor supplemented by Markovian dynamics. As explained in previous studies [19,26,28], the network for a double-headed motor contains, in general, many chemical states that differ in the occupation of the two heads by nucleotides. It is interesting to note that each of these chemical states represents a branching point of the networks. As shown in Ref. [19], the behavior of myosin V as observed in single-molecule experiments can be described by reduced networks with four chemical states as in Fig. 2(a) or with six chemical states in Fig. 2(b). In these networks, the motor moves along a discrete, one-dimensional coordinate x towards the barbed end of the actin filament. The binding sites along the filament are separated by the motor's step size '. At each site x, the motor can undergo several chemical transitions that lead either to the hydrolysis or the synthesis of one ATP molecule. These transitions connect the motor's states that are defined by the chemical composition of its two heads. Each head can contain bound ATP (T) or ADP (D), and it can be empty (E), such that a combination of these states of the motor's leading and trailing head determine, together with its position, its chemomechanical state. To determine the dwell time between two steps of the motor, we consider a network with all chemical states at a given lattice site x with the states 399 and 49 at neighbouring sites x99 and x9. A chemical transition DijT from state i to state j involves the binding or release of ATP, ADP, or P, while the mechanical transitions Dij'T and Dij''T correspond to forward and backward steps of size '. Throughout this work, we use networks with absorbing boundaries, i. e., network representations that include all motor states at lattice site x and are truncated at the neigbouring sites x' and x''. The step velocity of the motor can be obtained when the network cycle F is periodically repeated along the filament, see Fig. 2(a). The network that captures the stepping properties for the experimentally accessible range of load forces as discussed in [19] is shown in Fig. 2(b). It is an extended version of the network in Fig. 2(a), with two additional cycles E and M. These two cycles become dominant for the motor's motion in a range of forces that exceed the stall force of the motor. 
The network contains two stepping transitions, D34'T in F and D55'T in the cycle M. From this network, the motor's step velocity can be deduced by using multiple copies of the network, see [19]. Here, we focus on the unicycle network shown in Fig. 2(a), that, as a sub-network of the three-cycle network, describes the motor's motion for a restricted range of external loads. This approach allows us to analytically determine the dwell time distributions. Moreover, a direct connection between the chemical binding and release rates of the motor and the dwell time distributions for various nucleotide concentrations can be established. To extract additional information in particular about the gating effect, we return to the more complex network in Fig. 2(b). The chemomechanical network in Fig. 2(a) describes the stepping behaviour of myosin V in accordance with experimental studies [4,9] for forces that do not exceed the stall force F s^2 pN of the motor. We will elucidate the dependence on external load in detail further below. The motor starts from the DD state with ADP bound to both heads, releases ADP from its trailing head to attain state ED, binds ATP to its trailing head (TD), and performs a forward step, during which both heads interchange their position (DT). Hydrolysis at the leading head leads to state DD, and, in this way, a trajectory connecting two chemically equivalent states completes the chemomechanical cycle In vitro experiments are typically performed for relatively low concentrations of ADP and P, which implies that both ATP synthesis and the reverse cycle F { are strongly suppressed. In addition, the rate of ATP dissociation from the trailing head of the myosin V motor, which is part of the reverse cycle F { , is very small, as discussed in detail in [19]. Backward steps are still possible, however, even for the relatively simple network displayed in Fig. 2(a) since a backward step may occur immediately after a forward step corresponding to the sequence TD ? DT ? TD. The latter sequence of transitions is very unlikely in the absence of load but becomes more probable with increasing load force [19]. Gating and Mechanical Details of the Myosin V Step Before we discuss the actual network dynamics, let us briefly review some molecular details of myosin V, which will be important in order to relate our theoretical results to experimental observations. So far, we have emphasized the uni-cycle network F . In this network, the release of ADP takes place at the molecule's trailing head. If the DD state were 'symmetric' with respect to ADP release, the probability to release ADP from the leading head would be equal to the one from the trailing head. It is, however, generally agreed that the rates of ADP release are different for the leading and the trailing head [8,9]. Thus, in order to describe the gating effect in an explicit manner, we need to consider the three-cycle network displayed in Fig. 2 The different ADP release rates from the heads of myosin V are thought to arise from internal strains that the heads experience when they are simultaneously bound to the filament [9]. In this case, one head is subject to a positive internal force and the other to a negative one. Experiments with single-headed myosin V constructs have shown that the ADP release rate depends on the direction of the external load imposed onto the molecule [18]. 
When both heads are bound to the filament, the motor experiences an internal strain arising from the elastic properties of its lever arms, that corresponds to a force acting on both heads in opposite directions. To what extent this strain is distributed in the double-headed motor, is, however, not a priori clear. The step of myosin V consists of a large, directed swing of its lever, called power stroke, and a diffusional search of the free head for the next target. The elastic energy provided for the power stroke is induced by the hydrolytic reaction taking place at the myosin head. How this elastic energy is distributed in the different chemical states of the motor heads, however, remains unclear. In single-headed molecules, the kinetic properties of the power stroke have been characterized in detail [15,29]. The stroke is induced through a conformational change, that affects the position of the motor head on the filament, and rotates the lever arm. This conformational change is assumed to affect the ADP-bound state of the motor [15]. In a double-headed molecule, the elastic energy required for the stroke leads to a strained position of myosin V, as deduced from AFM images, which reveal a bending of the molecule's leading lever [16], thereby changing the internal force acting onto the motor heads. In this way, both the gating and the power stroke have an effect onto the steps of the motor. The substeps of myosin V have been monitored in various experiments [6,7,13,25], with different substep numbers and step sizes. Keeping these observations in mind, we will combine the putative substeps of myosin V into a single step, and discuss the limitations of our approach along with the dependence of the dwell times on an external load. Another property that is not fully understood is the gating effect. In an experimental study that involves mutants of kinesin-1, the possible causes of the gating effect are elucidated through singlemolecule techniques [30]. There, the authors conclude that the gating in kinesin-1 arises through both intramolecular strain and steric effects. Due to the step size of 36 nm of myosin V, which is large compared to the 8 nm step size of kinesin, steric effects for gating are likely to play a minor role for myosin V. For myosin V, both Refs. [5] and [9] conclude that the intramolecular strain leads to an increase in the ADP release rate at the molecule's rear head and to a decrease of the ADP release rate at the front head. Comparison with the data in [18] that test single heads as a function of force support the conclusion that ADP release from the front head is strongly reduced, while the release at the rear head is only moderately enhanced. We characterize the gating effect by using an ADP release rate that is measured in chemokinetic experiments [3], and impose the asymmetry through reduction of the ADP release rate at the leading head. Even though we use a specific network here, our approach is applicable to any network description for myosin V that allows for ADP release from both heads of the molecule, such as the one proposed in [28]. However, the parametrization will, in general, differ for different networks, especially with respect to the force dependence of transition rates. Let us now turn to the formalism that allows to compute the dwell time distributions for myosin V. Markov Chains The probability distribution for the time between two successive steps of the motor is governed by a random walk that has one or more absorbing boundaries. 
This approach corresponds to a first passage problem on a specific network [31], and is closely related to the methods used in [22,23]. The process starts at a fixed site i at t = 0 and is stopped when an absorbing state j is reached. For a Markov chain that consists of two states, an initial state 0 and an absorbing state 1 connected through the transition rate v ≡ v_01, the probability distribution is a single exponential, which applies to myosin V for superstall resisting forces. In the trajectories observed in single-molecule experiments, the dwell times between two successive steps correspond to random walks whose dynamics are determined by the underlying chemomechanical network. These random walks start directly after a mechanical step and are terminated as soon as another step is taken through a mechanical transition. A Markov chain that corresponds to a closed network thus consists of a piece of that network that contains all chemical transitions at a lattice site x and is terminated at the two neighbouring sites x' and x'', see Fig. 2(a). The latter two states are absorbing states of the network, while the remaining states are transient. For a given Markov chain X(t) with t ≥ 0 and N states, let the first n states be transient and the remaining N − n states be absorbing. We denote the conditional probability for the process to dwell in state j at time t, given that it started in state i at time t = 0, by P_ij(t). The corresponding master equation reads
dP_ij(t)/dt = Σ_k [ P_ik(t) v_kj − P_ij(t) v_jk ],
where v_ij is the transition or jump rate from state i to state j. These rates have the general form
v_ij = v_ij,0 W_ij(F),
where W_ij(F) accounts for the force dependence [26]. For better readability, we omit the prime that indicates the spatial coordinate in both the transition rates v_ij and the functions W_ij(F). For the chemical rates, we have v_ij,0 = κ_ij [X] for the binding of a nucleotide species X, as appropriate for dilute solutions. For the step rates in F, v_34 and v_43, we have
v_34 = k_34 exp(−h ℓ F / k_B T) for a forward step and v_43 = k_43 exp((1 − h) ℓ F / k_B T) for a backward step,
with parameter 0 ≤ h ≤ 1, in accordance with the balance conditions from nonequilibrium thermodynamics [27]. In principle, all chemical transition rates may depend on force, but this force dependence is difficult to estimate. The force dependence of the binding or release of a specific nucleotide in a complex macromolecule such as a motor head cannot be accounted for by basic approaches such as reaction rate theory. Our minimal approach is thus to neglect the putative force dependence of the chemical rates, unless such a dependence is needed to describe the experimental data. In agreement with experimental studies [9,18], we thus concluded in [19] that solely the binding rates v_56 and v_52 decrease with resisting loads; see section S.2 of Text S1 for details of the parametrization. A force dependence of these two rates is sufficient to describe the stepping behaviour of myosin V for all three regimes of external load. Thus, we take all chemical rates to be independent of force, both for the cycle F within the three-cycle network in Fig. 2(b) and for the single-cycle network in Fig. 2(a). The steady state solution of the master equation is given by P^st_ik = 0 for any transient state k, because the walk will eventually always end up in an absorbing state. For an absorbing state l, P^st_il is equal to the probability for being absorbed in l given that the walk started in i, see [22].
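To make the absorbing-boundary calculation concrete, the following sketch builds the transient part of the generator for the single cycle F of Fig. 2(a), following the cycle DD → ED → TD → DT described above, and evaluates the dwell time density as a first-passage problem. Only the ADP release rate and the ATP binding constant quoted in the text are used; the remaining rate values are placeholders rather than the entries of Table 1, so the resulting numbers are illustrative only.

```python
import numpy as np
from scipy.linalg import expm

# Transient states of cycle F at site x: 1 = DD, 2 = ED, 3 = TD, 4 = DT.
# v12 (ADP release) and kappa_23 (ATP binding) are taken from the text;
# v34, v41, v43 are illustrative placeholders, not the values of Table 1.
ATP_uM = 2000.0
v12 = 12.0                 # ADP release from the trailing head (1/s)
v23 = 0.9 * ATP_uM         # ATP binding, kappa_23 = 0.9 (uM s)^-1
v34 = 400.0                # forward step 3 -> 4' (placeholder, 1/s)
v41 = 50.0                 # hydrolysis / P release 4 -> 1 (placeholder, 1/s)
v43 = 1.0                  # backward step 4 -> 3'' at F = 0 (placeholder, 1/s)

# Sub-generator over the transient states; the transitions into the absorbing
# states 4' and 3'' enter only through the (negative) diagonal entries.
Q = np.array([
    [-v12,  v12,  0.0,   0.0],
    [ 0.0, -v23,  v23,   0.0],
    [ 0.0,  0.0, -v34,   0.0],            # 3 -> 4' leaves the transient set
    [ v41,  0.0,  0.0, -(v41 + v43)],     # 4 -> 3'' leaves the transient set
])

p0 = np.array([0.0, 0.0, 0.0, 1.0])       # start in state 4 = DT, just after a forward step
r_fwd = np.array([0.0, 0.0, v34, 0.0])    # rates into the forward-absorbing state 4'
r_bwd = np.array([0.0, 0.0, 0.0, v43])    # rates into the backward-absorbing state 3''

def densities(t):
    """Dwell time densities for absorption via a forward or a backward step."""
    occupation = p0 @ expm(Q * t)
    return occupation @ r_fwd, occupation @ r_bwd

times = np.linspace(0.0, 1.0, 2001)
rho_f, rho_b = np.array([densities(t) for t in times]).T
dt = times[1] - times[0]
print("P(forward step) ~", rho_f.sum() * dt)                      # splitting probability
print("mean dwell time ~", ((rho_f + rho_b) * times).sum() * dt)  # in seconds
```

With the actual rates of Table 1, this construction corresponds to the forward and backward densities r_f(t) and r_b(t) discussed in the following sections; starting the walk in state 3 instead of state 4 gives the distributions after a backward step.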
The dynamics of the process prior to absorption is identical to the dynamics of an unrestricted Markov process. This means as long as the process does not end up in an absorbing state, its behaviour is identical to that of a closed network: Being in a given state, the process is not influenced by the absorbing states, until the process is terminated. Before reaching an absorbing state, the random walk proceeds with an exponentially distributed waiting time in every transient state i, with an average dwell time t i~1 = X j v ij . The process starts in a state i, sojourns in each state according to the probability until it is eventually absorbed in state k. The dwell time of the process is given by the shortest time it takes to arrive in any absorbing state k given that the walk started in i, see section S. 1 of the Text S1. To describe trajectories from single-molecule experiments, we are interested in all walks that, as mentioned before, start in a state directly after a mechanical transition. This transition can consist of a forward or a backward step, which implies two possible initial states. In addition, as another mechanical step either in the forward or backward direction terminates the process, we also have two possible absorbing states. In order to distinguish between the subsets of dwell times that arise from forward and backward steps, the conditional probability density distribution r iDk (t) is required. This distribution governs the subset of walks that start in i and are absorbed in k, and thus refers to the absorption into a specific state k. The conditional probability density distribution r iDk (t) is defined as It is given by the time-dependent derivative of the probability, _ P P ik (t), rescaled with the steady state probability for absorption, P st ik . To determine r iDk (t) via Eq. (8), we explicitly solve the master equation, to obtain the time-dependent transition probabilities P ij (t), and thus _ P P ik (t). The corresponding steady state solution follows by integration, Prior to discussing the explicit form of the dwell time distributions, let us note that in case of a network that does not contain any absorbing states, the corresponding master equation can be rewritten in terms of flux differences or excess fluxes DJ ij from state i to state j, with the excess fluxes DJ ij (t):P i (t)v ij {P j (t)v ji and transition rates v ij . The step velocity of the motor is related to the flux through the mechanical transitions of the network in the steady state with d=dtP i (t)~0. For the network cycle F as in Fig. 2(a), the velocity of the motor is then given by i.e., by the excess flux through the transition S34'T. Conditional Dwell Time Distributions Let us calculate the distributions that refer to transitions connecting two subsequent forward or backward steps, D34'T and D43''T, or a backward following a forward step and vice versa, D33''T and D44'T. Hence, the four distributions have the initial states 3 and 4, and the absorbing states 399 and 49, respectively. In the chemomechanical cycle F , the rates for ATP dissociation and P binding in the case of [P]^0 are very small, v 23^v14^0 , in accordance with the experimental conditions in [4,7]. For simplification, we set these rates equal to zero in our calculations, such that the pathway D33''T vanishes in the network in Fig. 2(a). For the network in Fig. 2(b) that consists of three cycles, we use the values for the transition rates v 23 and v 14 as determined in Ref. 
[19], while the remaining rates within F are identical for both networks. The steady state probabilities read. With the use of Eq. 8, the dwell time distributions for these four conditional steps can be explicitly calculated and compared to experimental data. As shown in [22], the distributions that refer to the probabilities of taking a forward and a backward step, r f (t) and r b (t), read. q~v The distribution for all events is given by The distributions r f (t) and r b (t) are multi-exponential functions with decay rates l i , with the tail of the distributions governed by the smallest eigenvalue The eigenvalues for the network F read For small [ADP], the first two eigenvalues reduce to l 1^v23 and l 2^v12 . Dependence on Nucleotide Concentrations In order to compare our results with the experimentally determined distributions reported in [4,7], we have rescaled the experimental data such that the area covered by the histogram is normalized. Throughout this article, experimental data are shown as green bars, while the total dwell time distributions as obtained from the network shown in Fig. 2(a), appear as solid blue lines. In the experiments in [4,7], low concentrations of ADP and P were used so that we can put [P] equal to zero as discussed in the previous section. For comparison, we take [P] = 0 in the threecycle network as well, our results, however, are not altered for [P] = 0.1 mM, the concentration used in [19]. As ADP binding has more impact on the motor's motion [2,32], we use, if not indicated differently, a small concentration of [ADP] = 0.1 mM in our calculations for both the single-cycle and the three-cycle network. We have also performed calculations for zero ADP concentration and have checked that the precise value of [ADP] does not alter the distributions in any significant manner. Fig. 3 shows the total distribution of dwell times, r(t), for F~0 and different nucleotide concentrations using the transition rates shown in Table 1 and the experimental data from [4]. The transition rates for the three-cycle network are given in Ref. [19]. For F~0, [ATP] = 2 mM, and small [ADP], see Fig. 3(a), our results (blue lines) are in good agreement with the data. We have l min~v 12~1 2 s {1 , and the tail of the distribution reflects the rate of ADP release. With addition of 400 mM [ADP] (Fig. 3(b)), the distribution broadens significantly, which reflects the inhibiting effect of ADP on the motor's motion, a fact experimentally well established [2,6,32]. For limiting [ATP] (Fig. 3(c, d)), the step velocity is, in the absence of ADP, governed by the rate of ATP binding. The network in Fig. 2(a) contains no transition where ADP is released from the leading head. The gating effect has to be taken into account for networks that involve the transitions DD ? DE or ED ? EE. The latter transitions constitute leaks from the simple network in Fig. 2(a). In order to address the gating effect, we also considered a more complex network that allows for ADP release from the leading head, as shown in Fig. 2(b). It contains an additional forward and backward stepping transition D55'T and D55''T that is active in the regime of superstall resisting forces, see section S. 2 of Text S1. Networks that include ADP release for both the leading and the trailing head, may be supplemented by the simplifying assumption that these rates do not differ. Indeed, the dwell time distributions that are obtained from the network shown in Fig. 
2(b) do agree with the experimental data for high concentrations of ATP without any gating. Let us describe the gating effect by the ratio f between the ADP release rates from the molecule's leading and trailing head, i. e., In the case of limiting [ATP], neglecting the gating effect by assuming equal rates of ADP release for both heads, i. e, f~1, leads to discrepancies between the experimental data and simulated dwell times (green circles in Fig. 3(c, d)) for the network in Fig. 2(b). These discrepancies for low [ATP] can be understood because the ATP binding transition D23T, which is rate-limiting for the motor's kinetics, competes with the transition for ADP release from the leading head, D25T. The motion is not affected as long as competing transitions in the network have a small probability compared to ATP binding. In the absence of a gating effect, the rate for ADP release is^10-fold higher compared to the ATP binding rate at 1 mM [ATP], which leads to less mechanical steps through the stepping transitions D34'T and D43''T. This would result in longer dwell times and hence in a broader distribution than the Fig. 2(a) for F~0, as determined experimentally in [3] (*) and [4] (**), from simulations [35] one observed experimentally. The width of this distribution is primarily determined by the gating parameter f and decreases with decreasing f. For the chemomechanical network in Fig. 2(b) which includes the transition ED ? EE, the red circles in Fig. 3(c, d) show the simulated dwell times that are obtained for gating parameter f~v 12 =v 25~1 0, which is in the range of f where we find best agreement of the simulated dwell times with the experimental data [5,9]. To determine the optimal gating parameter f, we compared the experimental dwell time distributions and the ones obtained from the three-cycle network for different values of f. Fig. 4(a) shows the root mean square deviation RMSD between experimental dwell times and simulated ones as a function of the gating parameter f for limiting concentrations of ATP, [ATP]~10mM and [ATP]~2mM. The root mean square deviation has been calculated between the simulated dwell times and the experimental ones as. where x i are the dwell times for experiment and simulation, respectively, and n exp is the number of bins for the experimental data (green bars in Fig. 3(c, d)). Note that we have adjusted the bin size of our simulations to the experimental bin size n exp for comparison. For both concentrations, the RMSD decreases with increasing f and saturates for values f * > 7. To maintain the forward stepping of myosin V, ATP binding by the trailing head and ADP release from the leading head compete for small concentrations of ATP. For f * > 7, the ATP binding rate is sufficiently large compared to the ADP release from the leading head, and the RMSD saturates. Because all other parameters used for the description of the myosin V velocity are derived from experimental data (in the limit of F~0), the value of f~7 should be regarded as a lower bound for the gating parameter f. The agreement between the calculated dwell time distributions and the experimental data can be further improved by treating the ratek k 23 of ATP binding in the chemomechanical cycle F as a fit parameter. Fig. 4(b) shows the dwell time distribution for [ATP]~2mM for an ATP binding rate ofk k 23~1 :6(mMs) {1 , which provides the best fit for fixed gating parameter f~7. The inset in Fig. 
4(b) shows the RMSD as a function of a variable ATP binding rate κ_23, which exhibits a minimum at κ_23 = 1.6 (μM s)^-1. Let us note that with varying f, the location of the minimum is shifted only marginally, within a range below 0.1 (μM s)^-1. As can be inferred from Fig. 4(b), the agreement between the calculated dwell time distributions and the experimental data can be improved by treating the rate of ATP binding as a fit parameter. For the ATP binding rate κ_23, we had used the value κ_23 = 0.9 (μM s)^-1 as reported for actin-bound myosin V in chemokinetic experiments [3]. The values for ATP binding estimated from single-molecule experiments cover a range of κ_23 = (1.0–2.7) (μM s)^-1 [4,33]. For f = 7, the best fit of our simulations to the data leads to an ATP binding rate of κ_23 = 1.6 (μM s)^-1, which lies well within this range.
Figure 3. Comparison of distributions calculated using the uni-cycle network in Fig. 2(a) (blue solid lines) with experimental data (green bars) from [4]. Insets: concentrations that apply to both the experimental data and the theoretical curves are shown in the gray panels, while parameters specific to the theoretical results are given in the framed panels. In (a) and (c-d), the experimental concentration of ADP is believed to be negligible. For saturating [ATP] (a, b), the dwell time distributions for the uni-cycle network (blue line) agree with those for the network shown in Fig. 2(b) for all gating parameters f (data not shown). The symbols show simulated data for the network in Fig. 2(b) without gating (green circles) and with gating, i.e., a 10-fold decelerated ADP release from the motor's leading head (red circles).

Dependence on External Load
Before addressing the dwell time distributions of myosin V subject to an external load, let us discuss the motor's step velocity as a function of external load. The corresponding force-velocity relation is needed to clarify (i) the range of external loads where the description via the network formed by the cycle F is valid and (ii) the set of experimental data that can be evaluated within our theoretical framework. Using the transition rates as given in Table 1, we calculate the velocity v = v([ATP], [ADP], F) of the motor via Eq. 11. The velocity depends on the external load force F and on the concentrations [ATP] and [ADP] through the transition rates v_ij. In the following, we distinguish three different regimes of external load F: (I) assisting and small resisting forces, with F ≤ 1 pN; (II) forces close to the stall force, with 1 pN < F ≤ 2.5 pN; and (III) large resisting forces, with F > 2.5 pN. Fig. 5 shows the motor velocity as calculated from the single-cycle network in Fig. 2(a) with periodic boundary conditions for different concentrations of ATP, with [ADP] = [P] = 0, together with the corresponding experimental data reported by various groups [1,6,7,10,34]. The experimental values for the stall force cover a range of 1.6–2.5 pN [1,6,7,10,34], while F_s ≈ 2 pN for the network studied in Ref. [19]. Note that the three-cycle network description of the myosin V motor in Ref. [19] appropriately captures the ratchet mechanism of myosin V observed in [10] for large resisting forces (regime (III)), while the single-cycle description based on the network cycle F is not valid in this load regime. For saturating [ATP] = 1 mM, comparison with experimental data shows good agreement for small resisting forces in regime (I).
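The force-velocity calculation just described can be sketched in a few lines: build the generator of the closed (periodic) cycle F, solve for its steady state, and evaluate the net flux through the mechanical transition (Eq. 11). In this sketch, the load enters only through the two step rates with the force factor h = 0.65 and step size ℓ = 36 nm quoted in the text; all other numerical rate values are placeholders rather than the entries of Table 1, so the resulting velocities are illustrative only.

```python
import numpy as np

KT = 4.1      # k_B T in pN nm at room temperature
STEP = 36.0   # step size l in nm
H = 0.65      # load distribution factor h

def velocity(F_pN, ATP_uM, v12=12.0, kappa23=0.9, v41=50.0, k34=3.0e4, k43=1.0e-5):
    """Steady-state velocity (nm/s) of the single cycle F under load F (Eq. 11); k34, k43, v41 are placeholders."""
    v23 = kappa23 * ATP_uM
    v34 = k34 * np.exp(-H * STEP * F_pN / KT)          # forward step, slowed by resisting load
    v43 = k43 * np.exp((1.0 - H) * STEP * F_pN / KT)   # backward step, enhanced by resisting load
    # Generator of the closed network (row convention dP/dt = P W), states 1..4.
    W = np.array([
        [-v12,  v12,  0.0,  0.0],
        [ 0.0, -v23,  v23,  0.0],
        [ 0.0,  0.0, -v34,  v34],
        [ v41,  0.0,  v43, -(v41 + v43)],
    ])
    # Solve P W = 0 together with the normalization sum(P) = 1.
    A = np.vstack([W.T, np.ones(4)])
    b = np.array([0.0, 0.0, 0.0, 0.0, 1.0])
    P, *_ = np.linalg.lstsq(A, b, rcond=None)
    return STEP * (P[2] * v34 - P[3] * v43)            # excess flux through the stepping transition

for F in (0.0, 1.0, 2.0):
    print(F, "pN ->", round(velocity(F, ATP_uM=1000.0), 1), "nm/s")
```

With the placeholder rates replaced by the entries of Table 1, the same steady-state construction yields the force-velocity curves compared with the experimental data in Fig. 5.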
In regime (II) around the stall force F s , however, there is a discrepancy between the theoretical results and the experimental data. Since our parametrization is solely based on the parameter h to account for the force dependence of the step rates v 34 and v 34 , the molecular details of the step are not included in our description. Hence, the motor's dwell time distributions cannot be compared to experimental data for forces that are close to the stall force. Let us point out that, in addition, the variation in experimental data restricts the possibility to compare all the dwell time distributions that have been measured as a function of external load. The velocities in Ref. [7], shown as solid blue diamonds in Fig. 5, correspond to the experimental dwell time distributions given therein. The distributions are available for forces F~{5 pN, F~0:7 pN, F~1 pN, and F~1:5 pN from Ref. [7], and for F~0 pN from Ref. [4]. Moreover, the dwell time distributions for F~5 pN can be found in [7] (used here) and [10]. A comparison of the experimental dwell time distribution with the theoretical one is meaningful if the corresponding velocity correctly reproduces the data, which is the case for F~{5 pN and F~1 pN. The distribution for F~5 pN will be discussed further below. Since experimental and theoretical results for the velocity do not agree for both F~0:7 and F~1:5 pN, the theoretical and experimental dwell time distributions cannot agree either. The disagreement for F~0:7 pN reflects the variation in the experimental data and the latter value, F~1:5 pN, lies in the force regime (II) where the data are not reproduced correctly by our network. Ref. [7] provides a more exact fit to the velocity data for forces that do not exceed F~1:5 pN by including more force-dependent parameters. It does, however, not lead to the correct prediction of the stall force F s . This illustrates the difficulties to correctly describe the complete set of measured dwell time distributions. The distributions for forces that do not exceed the stall force are shown in Fig. 6(a-c), for F~{5 pN, F~0 pN and F~1 pN. In the presence of an external load, the distributions change through the force-dependent forward and backward stepping rates v 34 and v 43 with the force factor h~0:65. For superstall forces depicted in Fig. 6(d), the motor steps backwards in a forced manner that can be described by the network in Fig. 2(b), which contains the state EE. For F~{5 pN, we find good agreement between our theoretical results and the experimental data, as shown in Fig. 6(a). This confirms that for assisting forces, the stepping behaviour is virtually unaltered compared to F~0, see Fig. 6(b), as observed in [7]. All of the theoretical curves arising from the network in Fig. 2 (a) show a steep decay of rapid events for short times ƒ 0.01 s. The width of this decay signal increases with increasing resisting load. For a force of F~1 pN as in Fig. 6(c), the width of this peak exceeds the experimental resolution of 0:01 s [7] (blue line). These events of short times are related to the distribution of backward steps, r b (t), and reflect the increase of the backward stepping rate v 43~k43 exp ((1{h)'F =k B T) with increasing load. The experimental data in Fig. 6(c) however, agree with the distribution of forward steps, r f (t) (dashed brown line). The number of backward stepping events observed in [7] might have been insufficient to determine the distributions of r b (t). 
These fast events have also been observed in simulations for single-headed myosin V constructs [23]. Thus, experimental studies that address these short dwell times would be desirable to gain more insight into the mechanical properties of the motor, such as the reversal of its power stroke [15]. In load regime (II), close to the stall force, the parametrization with a single force-dependent parameter θ is not sufficient to explain the motor's dwell time distributions. We have rescaled the theoretical distribution for F = 1.5 pN in such a way that its maximum agrees with the experimental data in Ref. [7]. The rescaled distribution and the experimental one disagree, which might be due to additional transitions ω_ij, e.g., those that capture sub-steps induced by the motor's power stroke, as discussed in the context of the gating effect. Since the step velocity of the motor decreases more slowly in the experiments than in our theory, one might speculate that the molecule's mechanical properties stabilize its chemical activity in the presence of an external force. In part, this stabilization effect has been accounted for by a force threshold in the parametrization of the chemical rates, see section S.2 of Text S1 and [19], which provides a correct reproduction of the ratcheting behaviour of myosin V but does not affect the mechanical step of the motor itself. Because of the shortcomings of our model for forces around the stall force, corresponding to force regime (II), it seems plausible that effects in the mechanical step rates arising from the motor's power stroke also have to be taken into account to describe the slow decrease in velocity with increasing load force. For high resisting forces, F = 5 pN, the motor steps in a forced manner, with a single stepping rate as determined in Ref. [19] based on the data in [10]. For superstall forces, the mechanical cycle M governs the molecule's motion in force regime (III), such that the motor steps solely through the transition 5 → 5''. In this case, the dwell time distribution reduces to an exponential function ρ_EE(t) with rate ω_EE = 9/s for F = 5 pN, as shown in Fig. 6(d), in agreement with the data.

Discussion

In this paper, we have focused on a network description of myosin V that consists of only four chemomechanical states, and calculated the dwell time distributions for this molecular motor. Our approach provides a direct relation between nucleotide binding and release rates, which are accessible via chemokinetic experiments, and the dwell time distributions as observed in single-molecule measurements of myosin V. The dwell time distributions obtained from our network description agree with the experimental data for a wide range of nucleotide concentrations, Fig. 3, and substall load forces, Fig. 6(a–c). In the case of small ADP concentrations, the tails of the distributions are governed by the ADP release rate for saturating [ATP], and by the ATP binding rate for small concentrations of ATP.

Figure 6. Experimental data from [4,7]. The blue lines show ρ_f(t) + ρ_b(t) obtained using the single-cycle network F for F ≤ F_s. In (c), the distribution of forward steps, ρ_f(t) (brown, dashed line), agrees with the data, which do not exhibit rapid events as in ρ_f(t) + ρ_b(t). (d) Forced backward stepping for F ≥ F_s leads to a single exponential decay (dashed violet line) that arises through the mechanical transition 5 → 5'' in the network in Fig. 2(b). doi:10.1371/journal.pone.0055366.g006
Comparison with a more complex network (Fig. 2(b)) allows us to elucidate the gating effect, see Fig. 3(c,d). In networks that include ADP release from the leading head, ATP binding competes with ADP release from this head. A significant impact on the motor's step velocity arises once the two transition rates have comparable strength; the motor velocity is reduced by ADP release. The gating effect leads to a regulation of this inhibition through a suppressed rate of ADP release from the motor's leading head with respect to its trailing head. Through comparison with the experimental data, we quantify the gating effect by an ADP release rate that differs 10-fold between the motor's leading and trailing heads. In the case of an external load force acting on the motor, we have determined the range of forces that can be described by the four-state network through an analysis of the force-velocity relation of myosin V, Fig. 5. In addition, we distinguish between dwell time distributions that arise from backward and from forward steps of the motor. For intermediate resisting forces, the experimental data agree with the distribution of dwell times associated with forward steps. A peak of short events can be related to backward steps, see Fig. 6(c). Our analysis strongly reinforces the hypothesis that the motor's motion is governed, for forces up to the stall force, by a simple chemomechanical cycle, coordinated by the force-dependent release of ADP, rather than by a complex branched network. A further step is to relate the information about fast events to the motor's power stroke.

Supporting Information

Figure S1. Repeated version of the network shown in Fig. 2(b) in the main text, with the three network cycles F, E, and M. The stepping transitions in the cycle F are dominant for forces below the stall force, while steps through the mechanical cycle M occur for superstall resisting forces, as discussed in [19]. The shape of the distribution resembles, for 1.4 and 1.6 pN, the shape of the distributions for forces below these values. The distribution broadens as the step velocity of the motor approaches zero at the stall force F_s ≈ 2 pN, where the sharp peak of short events vanishes and turns into a single exponential distribution, whose slope rises with increasing load force, as seen for F = 2.2 and F = 2.4 pN. Note that the simulation is based on ≈ 10⁶ events. (TIF)

Text S1. S.1. Absorbing boundary formalism. S.2. Three-cycle network. (DOC)
Periodicity and Spectral Composition of Light in the Regulation of Hypocotyl Elongation of Sunflower Seedlings This study presents the hypocotyl elongation of sunflower seedlings germinated under different light conditions. Elongation was rhythmic under diurnal (LD) photoperiods but uniform (arrhythmic) under free-running conditions of white light (LL) or darkness (DD). On the sixth day after the onset of germination, seedlings were entrained in all diurnal photoperiods. Their hypocotyl elongation was dual, showing different kinetics in daytime and nighttime periods. The daytime elongation peak was around midday and 1–2 h after dusk in the nighttime. Plantlets compensated for the differences in the daytime and nighttime durations and exhibited similar overall elongation rates, centered around the uniform elongation in LL conditions. Thus, plants from diurnal photoperiods and LL could be grouped together as white-light treatments that suppressed hypocotyl elongation. Hypocotyl elongation was significantly higher under DD than under white-light photoperiods. In continuous monochromatic blue, yellow, green, or red light, hypocotyl elongation was also uniform and very high. The treatments with monochromatic light and DD had similar overall elongation rates; thus, they could be grouped together. Compared with white light, monochromatic light promoted hypocotyl elongation. Suppression of hypocotyl elongation and rhythmicity reappeared in some combination with two or more monochromatic light colors. The presence of red light was obligatory for this suppression. Plantlets entrained in diurnal photoperiods readily slipped from rhythmic into uniform elongation if they encountered any kind of free-running conditions. These transitions occurred whenever the anticipated duration of daytime or nighttime was extended more than expected, or when plantlets were exposed to constant monochromatic light. This study revealed significant differences in the development of sunflower plantlets illuminated with monochromatic or white light. Introduction Hypocotyl elongation is an early developmental process of seedlings that serves to bring the embryo plumule above the soil surface, enabling further autotrophic growth in full daylight. This process has been studied extensively in dicotyledonous plants. Hypocotyl elongation is influenced by numerous factors that can be studied under controlled laboratory conditions. Among the environmental factors, light is the most important [1], as it affects hypocotyl elongation through its intensity, spectral composition, and periodicity [2][3][4]. On the other hand, seedlings are well equipped to receive light signals using a variety of different receptor pigments [5], and they respond by selecting a developmental program suited for the current light conditions. Light provides plants with complex information about conditions characterizing their environment. Plants can detect light direction, intensity, duration, and quality, as well as receive precious information about the passage of time. Time tracking resulting in entrainment allows plants to anticipate upcoming light transitions and to predict duration of the current and future alternating light and dark periods. Plants can also predict the change of seasons and find the optimal time for their flowering and fruit bearing. The daily alternate periods of light and darkness induce diurnal rhythms that are visible in all major physiological processes and responses of plants [6,7]. 
They allow metabolic processes in plants to be phased and occur at a specific time of the day [8]. Microarray studies on the Arabidopsis genome have shown daily rhythmicity to appear in the expression of some 6-16% of all genes [9], resulting in coordinated timing of daily events [10]. Timing in plants is maintained by the circadian clock (oscillator), an endogenous oscillator present in every cell of the plant body. The circadian clock is not a discrete cellular structure, but the consequence of interconnected transcriptional and translational feedback loops in the expression of genes that form the core of circadian clock [11]. Functionally, the circadian clock consists of three main components: input, core genes, and output [12]. Input refers to receptors that can perceive light signals [13]. Light receptors comprise mainly the phytochromes and chryptochromes [14,15], with participation of other pigments such as zeitlupe [16]. Output genes transmit the functional states of the clock oscillator to metabolic or developmental processes. Hypocotyl elongation is a known output of the circadian clock, and it is often used in experimental studies [17] to monitor clock function in real time. Pigments involved in light inhibition include phytochromes and chryptochromes [18][19][20][21], although phototropins [22,23] and other pigments may also contribute. Signal transduction pathways starting from individual pigments and their crosstalk have also been extensively studied [24,25], especially with the use genome-wide surveys (microarrays) detecting large numbers of signaling components [26]. Some components such as SPA1 even showed a promoting effect on hypocotyl elongation and counteracted the growth inhibition mediated by phyA and phyB [27]. However, the major factor determining whether plantlets will follow photomorphogenesis or skotomorphogenesis is the activity of the COP9 signalosome (CSN), a protein complex supposed to function as the main light switch in plants [28], involved in the regulation of a number of developmental processes. Triggered by the presence of light, it acts as a negative regulator of photomorphogenesis [29], preventing plants from starting deetiolation in darkness. The presence of light also affects the metabolism and activity of most phytohormone groups [30], as in the case of gibberellins, which also represses photomorphogenesis in darkness. Phytochromes and cryptochromes are known to be the key pigments in the light entrainment of circadian rhythms [13,14]. Entrainment enables plants to anticipate timing of imminent light transitions and duration of daytime and nighttime periods of the current photoperiod. Sunflower was a popular model system in early developmental and tropism studies of dicotyledonous plants [31][32][33]. Studies of hypocotyl elongation [34][35][36][37] failed to detect rhythmicity of hypocotyl elongation in plants grown under diurnal photoperiods. This was probably caused by the use of a small number of daily measurements. Digital imaging and related techniques simplified and gave a new impetus to studies of plant growth and development [38]. The use of digital imaging techniques allowed accurate studies of sunflower seedling circumnutation and showed that hypocotyl elongation has a distinct diurnal rhythmicity [39]. 
The action spectra determined for inhibition of hypocotyl elongation in Raphanus sativus [40] and Sinapis alba [41] showed that the blue, red, and far-red light components were the most inhibitory in dark-grown plants, as shown previously for Cucumis sativus [42]. The involvement of two different light receptors in the control of seedling growth has been a long-standing concept, although the relative importance and contribution of the receptors appear to vary between different species [43]. Controversial effects of blue light on hypocotyl elongation have been reported in several plant species, although blue light clearly exerts an inhibitory effect in most of them, as in Arabidopsis. In potato shoot cultures, it was found that removal of blue light from white light using yellow filters significantly promotes shoot elongation [44]. However, the situation is different in sunflower, where blue light has been reported to stimulate hypocotyl elongation [33,45]. On the other hand, in dark-grown sunflowers, blue light was shown to induce a strong but transient suppression of hypocotyl elongation [46]. Similar observations of transient suppression of hypocotyl elongation in Arabidopsis have also been reported [22], indicating that phototropin is responsible for this rapid, initial growth inhibition. In Arabidopsis, all wavelengths except green [47] were found to inhibit hypocotyl elongation starting at low fluence values [25]. We investigated rhythmic hypocotyl elongation in diurnal photoperiods with artificial white light provided as daytime lasting 8, 10, 12, 14, or 16 h, comparing it with the uniform elongation characteristic of free-running LL and DD conditions. The study was extended to cover the effects of monochromatic blue, green, yellow, and red light applied in various free-running, diurnal, or other light combinations. Illumination of plants with a single monochromatic light supported only uniform elongation, abolishing rhythmicity to the same extent as the extended duration of daytime or nighttime periods in plants entrained to diurnal photoperiods. In darkness, hypocotyl elongation was variable, depending on the spectral composition of the light to which plants were exposed prior to the start of darkness. Our data indicate that red light has a suppressive effect on hypocotyl elongation, but only when it is applied in combination with other monochromatic light colors. White light, being a mixture of many different light colors including red, is highly suppressive to hypocotyl elongation, and this suppressive effect extends through the period of darkness that follows the daytime in diurnal photoperiods. Therefore, hypocotyl elongation in young sunflower seedlings results from the balance and interaction of light components that can support or suppress hypocotyl elongation, as suggested by Parks et al. (2001) [27]. We also briefly discuss how the spectral composition of light, as well as its periodicity, affects the establishment and maintenance of light entrainment.

Plant Material and Germination

Seeds of cv. Kondi (Syngenta, ChemChina, Basel, Switzerland) were washed in water for 1 h (imbibition) and then placed in PVC boxes under layers of moist paper towels at 24–25 °C for germination. Germination occurred in dim light or darkness. Some 24–36 h after the start of imbibition, the seeds were inspected for germination success. Those whose radicles had reached 3–5 mm in length were sown in 50 mL PVC centrifuge tubes filled to the rim with peat-based potting mixture (Figure 1a).
The plantlets in tubes were well watered during sowing, and no additional watering was required until the end of treatments. Tubes with the sown plantlets arranged in Styrofoam trays were placed in growth chambers with a suitable photoperiod. The temperature was maintained at 24–25 °C. Free-running conditions included constant white light (LL), constant darkness (DD), and constant monochromatic blue light (BB at 470 nm), green light (GG at 560 nm), yellow light (YY at 600 nm), or red light (RR at 660 nm). Monochromatic light was also tested in diurnal 14/10 h LD photoperiods in which monochromatic light was applied as daytime, in 4 h long T-cycles alternating with darkness, or in other combinations. Monochromatic light was produced by LED light sources such as Philips GU10 spot lamps, V-Tac LED strips, and high-irradiance LED arrays from unknown manufacturers. Emission spectra of the various LED light sources were measured using an Ocean NIR UV 2000 spectrophotometer. White LED panels had a nearly continuous emission of visible light rich in the blue, yellow, and orange portions of the spectrum (Figure 1c). Irradiance was measured using a Li-Cor 250A light meter with a quantum sensor (Figure 1d). The time of imbibition was considered as the time of dawn for diurnal photoperiods, or of subjective dawn for free-running conditions. For imaging during the nighttime darkness, plants were briefly (seconds) illuminated at 2 h intervals with green or yellow LED light at an irradiance of 0.45 µmol·m⁻²·s⁻¹ or less. A longer duration of irradiance could affect the hypocotyl elongation pattern.

Treatments and Imaging

Batches of 15–20 tubes containing germinating plantlets were arranged in Styrofoam trays in front of the camera. For each treatment, the batches were replicated three times or more. The curves in the figures are averages of plants in a representative treatment batch. Plantlets were oriented so that their cotyledonary axis was transverse to the imaging camera. Images were captured using the time-lapse function of Nikon P520 and P510 cameras at 10 min intervals. The arrangement of tubes with plantlets enabled good frontal visibility of the entire hypocotyl, with the cotyledon petiole junction as the external marker of hypocotyl length (Figure 1b). The suture between the cotyledons was visible as soon as the protective hulls slipped from the expanding cotyledons. The thick, black-colored hulls of cv. Kondi achenes seemed to be impenetrable to light, preventing early light priming. The start of imaging was tied to the developmental stage of the plantlets, requiring them to be at the advanced stage of hook straightening. In most treatments employing monochromatic light, the start of imaging was significantly delayed. The time available for imaging also varied between treatments depending on their elongation rates. Those with lower elongation rates, as in diurnal photoperiods, could be followed much longer than those in treatments with monochromatic light that induced fast hypocotyl elongation.

Hypocotyl Length Measurements

The length of the hypocotyl was measured with Jstore software using the digital images, measuring the distance between the tube rim and the cotyledon petiole junction position, as presented in Figure 1b. The hypocotyl length registered in this way was the relative hypocotyl length, as the actual hypocotyl length could only be estimated. The relative hypocotyl length presented in the figures is the cumulative increase in hypocotyl length.
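The elongation rates reported below are obtained from such time-lapse length series in essentially the following way. The sketch is only an illustration of the calculation: the synthetic "length" series, the rhythm parameters, and the numerical values are placeholders, not measurements from the study.

```python
import numpy as np

# Minimal sketch of turning relative hypocotyl lengths from 10 min time-lapse images
# into elongation rates over 2 h increments. The synthetic length series below, with a
# daytime and an early-night bump, is a placeholder standing in for values read off the
# images with Jstore; it is not data from the experiments.

dt_min = 10                                   # imaging interval (minutes)
t = np.arange(0, 48 * 60, dt_min) / 60.0      # time in hours over two days

# Placeholder instantaneous rate (mm/h): baseline growth plus two daily bumps.
rate = (0.3 + 0.25 * np.exp(-((t % 24) - 12) ** 2 / 8)
            + 0.35 * np.exp(-((t % 24) - 16) ** 2 / 2))
length = np.cumsum(rate) * (dt_min / 60.0)    # cumulative relative length (mm)

# Elongation rate in 2 h increments: length difference over 2 h divided by 2 h.
window = int(120 / dt_min)
rate_2h = (length[window:] - length[:-window]) / 2.0
t_2h = t[window:]

for hour in range(0, 48, 6):                  # coarse summary every 6 h
    idx = int(np.argmin(np.abs(t_2h - hour)))
    print(f"t = {t_2h[idx]:5.1f} h  elongation rate = {rate_2h[idx]:.2f} mm/h")
```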
The hypocotyl elongation rate was calculated and provided in 2 h increments, while the overall elongation rate refers to the elongation rate of the longer periods of time or to the entire treatment duration. All key findings obtained with cv. Kondi were confirmed in treatments with the earlybearing sunflower hybrid NS H7749, which shows rapid development of plantlets. Rhythmic Hypocotyl Elongation in Diurnal White Light/Dark Photoperiods In the five diurnal photoperiods that were studied, hypocotyl elongation showed a prominent daily rhythmicity, resulting in characteristic stairway-shaped elongation patterns visible in curves obtained by plotting the cumulative increase in hypocotyl lengths ( Figure 2a). The overall elongation rates were similar in all five photoperiods, although the photoperiods with short and moderately short days showed a slightly faster initial hypocotyl elongation than the photoperiods with long and moderately long days. They could be grouped together with the photoperiod of neutral day 12/12 h LD, which occupied the central position. The overall elongation rate of the rhythmic elongation in the neutral day photoperiod was similar to the uniform elongation of plants grown under LL conditions ( Figure 2b). Therefore, all photoperiods that contained white light could be placed together in the same elongation rate group, regardless of their rhythmicity. The rhythmicity of the diurnal photoperiods was best observed and analyzed in graphs in which changes of hypocotyl elongation rates were plotted over time (Figure 2c,d). The graphs showed two daily elongation peaks separated by periods of decreased (arrested) elongation. There was a daytime peak around midday or slightly later and then another peak in the nighttime, 1-2 h after dusk. Two daily periods of low elongation (daily arrests) with minimal elongation values were located at the beginning and at the end of the daytime period. The positions of the two daily peaks and the minimum at the end of daytime were fixed, while the position of the second minimum was variable depending on the length of nighttime. The dual nature of the hypocotyl elongation kinetics which differed between daytime and nighttime periods was prominent in the long and moderately long day photoperiods. In the short and moderately short photoperiods, the dual nature of hypocotyl elongation was less pronounced. The time between the two elongation peaks became shorter as the daytime duration decreased, as seen for the arrest at the end of day (Figure 2d). Thus, in photoperiods with short and moderately short days, hypocotyl elongation was dominated by the long-lasting nighttime with asymmetric position of its elongation peak. Interestingly, the nighttime elongation minimum was located 8 h after dusk in both the short-day and the long-day photoperiods, suggesting that it may be a pivotal, nighttime recovery point. A similar nighttime recovery point was previously described for the phototropic bending ability of sunflower seedlings [44]. Uniform (Arrhythmic) Elongation in Free-Running White Light (LL), Darkness (DD), or Monochromatic Light Under the free-running LL and DD conditions, hypocotyl elongation was uniform (arrhythmic), but it gradually increased over time (Figure 3a). Elongation at DD was significantly faster than at LL. Plants grown in DD were etiolated, with long hypocotyls and undeveloped cotyledons, still partially enclosed by seedling husks. LL plants were dark green, husk-free, well developed, and vigorous in appearance. 
Plants from LL and those from diurnal LD photoperiods had similar overall elongation rates; thus, they could be grouped together (Figure 3b). Etiolated plantlets germinating under the DD conditions emerged above the soil with typical hooks that gradually straightened. However, cotyledon opening and subsequent cotyledon expansion in plants from DD were delayed. Vigorous hypocotyl elongation was uniform, but it accelerated somewhat over time (Figure 3a). Etiolated DD plants placed back in LL or diurnal photoperiods continued their very fast elongation during the first 8 h, showing that the fast elongation induced by DD could not be quickly suppressed by white light. In free-running conditions with monochromatic LED light, hypocotyl elongation was also uniform, as seen in LL and DD (Figure 3a). Hypocotyl elongation in blue (BB) light was nearly as high as in DD, albeit with a somewhat delayed start. Green (GG), yellow (YY), and red (RR) light produced mutually similar, parallel curves, almost identical to the elongation in the RGB white-light mixture. Elongation of this group was delayed and slower than in the DD and BB elongation group, but still much faster than in the group containing LL and the diurnal LD photoperiods (Figure 3b). Thus, elongation under blue monochromatic light (1.9 mm/h) was nearly five times faster than the overall elongation rate in the diurnal moderately long day photoperiod (0.4 mm/h; Figure 3c). Elongation rates of plants during the sixth day after the onset of germination, growing under different monochromatic light colors and the diurnal moderately long day photoperiod, are presented in Figure 3c. The differences in elongation rates between DD and monochromatic lights were less striking than the large delays observed in their start of elongation. In contrast to monochromatic lights, elongation in the LL treatment was not delayed, and it began at the same time as in DD conditions. RGB LED strips with all three colors turned on produced a bluish-green mixture of white light, providing equally high hypocotyl elongation as in RR, GG, and YY (Figure 3b). However, the elongation rates of plants growing under the RGB triple white-light mixture and those of plants under white light (LL) were significantly different. Supplementing the RGB white-light mixture with additional red light suppressed this excessive hypocotyl elongation, even if dark periods were interpolated (Figure 3d). Elongation of the hypocotyl under monochromatic light in free-running conditions was not dependent on irradiance levels (data not presented). Elongation responses were, therefore, saturated at very low light intensities. Thus, even a 10-fold change in irradiance levels resulted in little or no change in the elongation rate of plants grown under blue or red light. Only in the case of blue light, at the beginning of irradiation, was there a transient decrease in hypocotyl elongation lasting up to 2 h, which was visible in all experimental treatments. Blue light was the only component of visible light that could trigger phototropic (PT) bending in sunflower seedlings, strongly suppressing circumnutational movements. Employing the conditions elaborated previously [48], we showed that PT bending was also saturated at very low blue light irradiance, as in the case of hypocotyl elongation. Vigorous PT bending was observed under continuous blue light at irradiation as low as 0.5 µmol·m⁻²·s⁻¹ (data not presented).
Irradiation with Two Different Colors of Light

In treatments in which monochromatic light alternated with darkness, or in those where two different monochromatic lights alternated, hypocotyl elongation was uniform, with rates characteristic of the light color employed at the time (Figure 4a,b). Changes in the elongation rate at the end of one period and the beginning of the next were abrupt. In each of the periods, non-suppressed hypocotyl elongation was an independent event that followed its own rules.

Figure 4. (b) Four-hour-long T-cycles in which red or blue light alternated with darkness; elongation in the dark periods was not suppressed by the previous blue or red light illumination. Blue periods following darkness had low initial elongation rates that later improved. (c) Hypocotyl elongation in treatments with suppressive red + blue dual illumination followed by true darkness or blue light. Suppressive red + blue light continued to suppress elongation in the following period of true darkness but not in the blue light. Therefore, darkness remained suppressive after the red + blue doublet, whereas blue light promoted hypocotyl elongation after red + blue. (d) Hypocotyl elongation in plants illuminated for 14 h with three distinct RGB doublets followed by 10 h periods of darkness. High hypocotyl elongation was promoted only in the blue + green doublet, which lacked the early-night peak found in white-light diurnal photoperiods.

Simultaneous illumination of seedlings with two different monochromatic light colors (light doublets) during a 14 h long daytime followed by a 10 h long dark period resulted in gross changes in the hypocotyl elongation patterns. Uniform hypocotyl elongation was abolished and replaced by complex responses, including some resembling the rhythmic elongation in diurnal photoperiods provided with white light (Figure 4c). In the blue-green color doublet, hypocotyl elongation was very high, with a strong peak in the 14 h long "daytime" period. The peak in the nighttime was either absent or postponed. In the other two doublets (red-green and red-blue), hypocotyl elongation was much lower, but there were two noticeable daily peaks. One peak was in the middle of the "daytime" and the other in the nighttime, just as in the white-light diurnal photoperiods. The absence or delay of the nighttime peak in the blue-green doublet suggests that the nighttime peak seen in the other doublets was caused by the presence of red light in their "daytime". Although red light was provided at rather low irradiance (Figure 1b), blue and green light could not cancel the suppressive effect of red light when they were present together in doublet combinations. Similarly, green light could not prevent hypocotyl elongation when present in a doublet together with blue light. The treatment with red-green light was the doublet that best restored the diurnal rhythmicity. We also exposed the plants to alternating periods of single and dual light irradiance. This was achieved by providing 14 h long periods of red light interpolated into a background of a free-running blue light treatment. In this way, a diurnal photoperiod (RB+B) was created with a 14 h "daytime" provided by the red-blue doublet, followed by a 10 h blue-light "nighttime" (Figure 4d). Blue light was actually free-running, being present at the same irradiance level in both light periods. During the red-blue "daytime" (120-128 h), hypocotyl elongation was suppressed in the same way as during the daytime of white-light diurnal LD photoperiods.
In the blue nighttime that followed for the next 10 h, plantlets slipped into very fast uniform type elongation characteristic for BB treatments. The bluelight nighttime strongly influenced the next red-blue daytime period, which started at 144 h and also appeared uniform. A control treatment (RB+D) which also had a 14 h long red-blue doublet for daytime but followed by a true dark nighttime period, showed a completely different response as elongation was suppressed in true darkness and not stimulated as in the blue nighttime of RB+B. The control RB+D treatment was similar to another treatment (RGB+D) that used an RGB white-light mixture for daytime followed by a 10 h long dark nighttime. The RB+D and RGB+D treatments differed only by the presence of green light in the RGB mixture, which resulted in no significant differences in their elongation patterns. Dark treatments resulted in rapid elongation of seedlings that were not previously illuminated. When a dark period followed illumination with red light, then elongation was suppressed. Blue light can overcome suppression induced by red light, providing fast elongation during the blue nighttime period. It remains to be tested how long this red light-induced suppression can last. This experiment showed that red light (alone or in presence of other light colors) was the factor suppressing hypocotyl elongation. It was also responsible for the establishment of diurnal rhythmicity as hypocotyl expression is suppressed at nighttime with true darkness. Blue light applied at nighttime instead of true darkness overcame the red light-induced suppression, enabling plantlets to slip into the uniform pattern of hypocotyl elongation. Blue light only superficially resembled darkness, supporting a high rate of hypocotyl elongation. Plantlets in blue light were elongated but not etiolated, and they suppressed circumnutations that otherwise appeared in true darkness. Entrainment in LD Photoperiods and Maintenance of Rhythmicity The pattern of hypocotyl elongation of all treatments whether rhythmic or uniform was evident at dawn of the sixth day, 120 h after the onset of germination. The time available to plantlets from diurnal photoperiods to establish light entrainment was very short. The first signs of successful germination, visible as local elevation of the soil surface, were observed about 72-96 h after the onset of imbibition. By the fifth day, 96 to 120 h after the onset of germination, the plumule was at least partly exposed to direct light at the soil surface on the fifth day. Establishment of light entrainment was, therefore, limited to a short period of time, barely exceeding a day. The presence of entrainment in LD diurnal photoperiods was recognized by elongation rates that changed significantly throughout the day, with peaks and periods of arrested elongation. Successful entrainment enables plantlets to position the peak of daytime hypocotyl elongation at midday and to anticipate the onset of the transition from light to dark at dusk. In other words, plants anticipate the position of daily elongation maximum before the day is over. Thus, entrainment appeared to do nothing more than repeat the situation established the previous day. In the nighttime, the elongation peak was always located 2 h after dusk and the beginning of the dark period. It is questionable whether the plants anticipated the end of night period or simply adjusted to the start of new day when it came. Hence, what happens if dawn simply fails to appear? 
To solve this dilemma, we designed treatments in which the last day or last night of plantlets well entrained to their 14/10 h LD diurnal photoperiod was unexpectedly extended, mimicking a de novo start of a free-running LL or DD condition (Figure 5a,b). At the expected end of nighttime, the lights were not turned on and there was no dawn. Shortly after the onset of extended darkness, hypocotyl elongation accelerated considerably and became uniform.

Figure 5. (b) Prolonged daytime duration. At the expected end of the daytime period, the light was not turned off and there was no dusk. In the extended daytime, elongation became uniform and the expected nighttime peak 1–2 h after dusk was absent. After the first 10 h of the prolonged daytime, which correspond to the subjective night, hypocotyl elongation showed a small, transient increase.

The extended daytime and nighttime durations both appeared as a de novo start of free-running LL or DD conditions. In both cases, the circadian clock did not cut in as expected for a functional circadian regulation, and plants failed to manifest the elongation pattern characteristic of the anticipated light period. After just 1–2 h spent in a regime of extended daytime or nighttime, plants switched from diurnal rhythmic to uniform, free-running elongation patterns: LL for extended daytime and DD for extended nighttime. Apparently, the circadian clock malfunctioned, as plants entered a uniform elongation pattern in which circadian clock functionality seemed to be absent, invalid, masked, or simply overridden. In the case of extended nighttime, the highly accelerated elongation rapidly brought the hypocotyl length to values that prevented further measurements. In the case of extended daytime treatments, the outcome of the daytime extension could be followed much longer. During the first 10 h of growth in the extended daytime, corresponding to the subjective nighttime, hypocotyl elongation was uniform, and the peak corresponding to the expected nighttime maximum was absent, appearing as a failure of circadian regulation. However, in the continued duration of the extended day, corresponding to the subjective day, some circadian regulation reappeared, such that, in the following subjective daytime period, weak hypocotyl elongation could still be observed according to the diurnal elongation pattern. This means that circadian regulation was present and functional, but strongly suppressed or overridden. Therefore, an answer emerges indicating that, in sunflower seedlings, circadian regulation may be suppressed in some situations and overridden by other regulatory mechanisms. In our concurrent study dealing with sunflower phytohormones and their circadian rhythmicity [49], we will show correlations among light duration, light transitions, and phytohormone production, demonstrating that circadian regulation goes on in LL conditions, with uniformity of responses due to the synchronization of genes associated with the core of the circadian clock.

Discussion

We studied the development of sunflower seedlings under various white-light sources such as fluorescent lamps, LED panels, and LED strips. At an irradiance of 70 µmol·m⁻²·s⁻¹, white light suppressed hypocotyl elongation irrespective of the light source, as seen under natural conditions. In diurnal photoperiods, this suppression persisted throughout the dark period (nighttime period) of entrained plantlets.
The dark periods (nighttime) were of utmost importance in maintaining diurnal rhythmicity and sustained suppression of hypocotyl elongation. Suppression and Promotion of Hypocotyl Elongation White light suppressed hypocotyl elongation irrespective of its periodicity, showing the same suppressive effect both in diurnal and in LL conditions. Under diurnal photoperiods, hypocotyl elongation actually depended on the duration of nighttime (Figure 3b), similar to Arabidopsis, in which hypocotyl length was found to increase with increased nighttime duration of photoperiods [50] (Figure 1a). Sunflower plantlets apparently compensated for the large difference in daytime duration of diurnal photoperiods. The light of monochromatic LEDs could not suppress hypocotyl elongation; after numerous attempts, we had to give up and admit that monochromatic light in sunflower has a promotive effect on hypocotyl elongation. In a situation in which light exerted both suppressive and stimulatory effects, we had to carefully dissect the effects of individual wavelengths and their combinations. The first task was to establish a growth rate limit that separated suppressive from stimulative light effects. In photoperiods with white light, elongation rates were lower than 0.6 mm/h and considered as suppressive, since, in monochromatic light, elongation rates were 0.8 mm/h or higher, and they were considered as stimulative (Figure 3b). The stimulatory effect of monochromatic light was not related to irradiance, as it was recorded both in the lower (1-5 µmol·m −2 ·s −1 ) and higher (40-50 µmol·m −2 ·s −1 ) irradiance levels of blue and red light. Even large changes in the irradiance during the same treatment were hardly detected in the graphs of hypocotyl elongation. Obviously, the stimulative effect of monochromatic light on hypocotyl elongation had a rather low saturation value. Therefore, monochromatic safe lights with long exposure time cannot be recommended for use in manipulations with plants during the nighttime. Data from our experiments confront the classic concept of white-light diurnal photoperiods in which suppression of hypocotyl elongation by white light is extended during the nighttime. Monochromatic light is not suppressive per se, and dark periods that follow illumination with monochromatic light are also not suppressive, as can be seen in the 4 h T-cycles in Figure 4b, or when longer dark periods alternated with monochromatic light, as in case of blue and yellow in Figure 4a. Thus, a modified diurnal photoperiod, in which monochromatic light alternates with true darkness, cannot suppress hypocotyl elongation nor can it provide entrainment. Dual illumination with two different monochromatic lights applied together restored elongation suppression in doublet combinations containing red light. Treatments in which blue and green light were applied together strongly stimulated hypocotyl elongation and promoted only a "daytime" elongation peak, as shown in Figure 4d. The data confirm the wellknown concept that multiple pigment systems are involved and operational in the reception and transduction of light signals in plants. This was suggested from the earliest studies and confirmed later as the transduction pathways for light signals became better characterized [51,52]. However, there seems to be no point in comparing Arabidopsis and sunflower when their light requirements and light responses are so different. 
The components are the same, homologous, or compatible, but their arrangement and functioning in Arabidopsis differ significantly from those in sunflower [53]. In the case of sunflower, background knowledge of the pigments involved in photomorphogenesis is still sparse, so we are forced to discuss light effects in terms of their absorption wavelengths rather than the actual pigments transducing the light signals. This is a task for future studies. Interestingly, the white-light mixture produced by RGB 5050 LED strips stimulated hypocotyl elongation, providing elongation rates as high as those under monochromatic red or green light (Figure 3b). However, red light in this mix was fairly low (Figure 1d), and providing additional red light to plants at 20 µmol·m⁻²·s⁻¹ together with the RGB mixture efficiently suppressed hypocotyl elongation.

Antagonistic Effects of Monochromatic Light

We confirmed that blue light has both a suppressive and a promotive effect on hypocotyl elongation. Suppression is visible as the initial effect, expressed only at the start of blue-light illumination, gradually changing into significant promotion and resulting in overall elongation rates equivalent to those of dark-grown etiolated plants. Therefore, the reports showing a suppressive [46] and a promotive [45] effect of blue light are both valid. Red light was promotive when applied alone, but suppressive when combined with other wavelengths, as in white light. Darkness that followed monochromatic red light was promotive, as seen in the 4 h long red/dark cycles (Figure 4b). However, when darkness followed treatments with red light paired with another light color, or with light from white LED plates, hypocotyl elongation was suppressed, indicating that the suppression is a prolonged state which needs time to expire. This expiry time seems to be stored through entrainment. The use of mixed single and dual monochromatic light illumination gave some interesting responses, as shown in Figure 4c. Suppression of hypocotyl elongation induced by the combination of red and blue illumination continued in the dark, but not in the "nighttime" with blue light. The blue-nighttime treatments could also be interpreted as free-running blue light (BB) with superimposed periods of red light occurring only during the "daytime", which caused temporary suppression of hypocotyl elongation. Because the blue light was continuous, the initial blue-light-induced decline in hypocotyl elongation occurred only once, at the beginning of illumination. We did not examine far-red light, as it is mostly absent from light produced by white-light LED panels, especially of the cool white type (Figure 1b). However, it is known that far-red light applied alone stimulates elongation of sunflower shoots and hypocotyls [33,54]. Far-red light applied together (in a doublet) with red light leads to responses that depend on the mutual ratio of the two red colors. Thus, a higher proportion of red light (high red/far-red ratio) had an inhibitory effect, while a higher proportion of far-red light (low red/far-red ratio) stimulated elongation of sunflower internodes and hypocotyls [53]. Interactions of red and far-red light are known to be involved in plant shade avoidance responses [55].
Entrainment and Circadian Regulation

The slippage of the elongation pattern from rhythmic to uniform (arrhythmic), when the expected duration of nighttime or daytime periods was exceeded, was initially considered to be a malfunction of circadian regulation and its associated entrainment. However, we then observed the same response when plants were exposed to monochromatic light. With this new finding, the relation between the rhythmic and uniform elongation patterns became more complex. These new findings show that the changes in elongation pattern can be triggered rapidly at the very beginning of the signal transduction chains, suggesting that different regulatory mechanisms are involved and not just circadian regulation. Our results suggest that entrainment depends on the spectral composition of the light provided to plants. At present, the presence of red light appears to be a mandatory requirement for light entrainment, at least in cv. Kondi, but such a conclusion requires more evidence from future studies spanning different sunflower genotypes.

Conclusions

There are two different situations in which light entrainment in sunflower is abolished and hypocotyl elongation changes from rhythmic to a uniform pattern during diurnal photoperiods. The first case is associated with missed anticipation timing and appears to be distinct from the later discovered failure of entrainment, which is triggered simply by illumination with monochromatic light. While the first case may be related to failure of the circadian clock to regulate metabolic pathways, the second case appears to be related to specific light receptors and interactions between their signal transduction pathways. To understand the data that we collected, it was necessary to adopt a new attitude in which light is not considered a purely inhibitory factor of hypocotyl elongation. Instead, light should be considered a factor with both suppressive and promotive effects. Our findings, therefore, support the view put forward by Parks et al. (2001b), stating that both stimulative and suppressive light factors are present and balanced in the regulation of hypocotyl elongation [27]. Conversely, elongation in darkness can be either suppressed or enhanced, depending on the light conditions provided to the plants before the onset of darkness. The various treatments with promotive and suppressive light effects on hypocotyl elongation are summarized in Table 1.

Table 1. Overview of the light quality (waveband composition) used in the treatments, their irradiance levels, and their effects on hypocotyl elongation. (Columns: Light Treatment, Irradiance, Effect on Hypocotyl Elongation; the first treatment listed is true darkness from the start of germination.)

The data that we presented here are far from final. They cover the main findings, indicating the direction in which studies should be continued. Elucidating the effects of monochromatic light combinations in regulating hypocotyl elongation and the establishment of light entrainment needs to be extended to cover diverse sunflower genotypes.
Integrated navigation system and experiment of a low-cost and low-accuracy SINS/GPS

When SINS (strap-down inertial navigation system) is combined with GPS, the observability of the course angle is weak. Although the course angle error is improved to some extent through Kalman filtering, the course angle still assumes a divergent trend. This trend is aggravated further when using a low-cost and low-accuracy SINS. In order to restrain this trend, a method that uses AHRS to substitute for the SINS course angle information is put forward, based on the hardware components of the low-cost and low-accuracy SINS, which include an AHRS (attitude and heading reference system) and an IMU (inertial measurement unit). Real static and dynamic experiments show that the method can effectively restrain the divergent trend of the navigation system's course angle, and the positioning accuracy is high.

Introduction

The advantages and disadvantages of GPS and SINS are strongly complementary, and since the 1980s, the GPS/SINS integrated navigation system has had broad application in aircraft, missiles, and ships [1]. Research indicates that after SINS is combined with GPS, quantities such as the position and velocity errors of the integrated navigation system and the pitch and roll errors all converge. Because the observability of the heading angle information in the integrated system's equation of state is weak (it is coupled with the north component of the Earth's rotation angular rate under static conditions), the heading angle error, although improved to some extent after Kalman filtering correction, still assumes a divergent trend. The degree of this trend is related to the drift of the gyroscope along the azimuth axis [2]. Obviously, adopting a high-accuracy SINS can maintain the high accuracy of the integrated system's output angle information over some time range. However, high-accuracy SINS has limitations such as production technology and cost, thus placing restrictions on the extensive use of the GPS/SINS integrated system [3]. Therefore, there has been a shift to SINS of low cost and low accuracy. The bias of the azimuth-axis gyroscope of a low-cost and low-accuracy SINS is large, so the course angle information drifts severely. In order to eliminate this, we put forward an integration scheme for a low-cost and low-accuracy SINS and GPS, with AHRS included in the SINS, in which the course angle information can be adjusted online.

Design of integrated navigation system

In this integrated navigation system, the autonomous navigation system adopts the low-cost and low-accuracy SINS, which includes AHRS and IMU; GPS is the supporting equipment of the integrated system. The integrated navigation system adopts GPS as the outer observed quantity, obtains estimates of the output navigation parameter errors through Kalman filtering, and adopts a closed-loop policy to correct the output errors of the inertial navigation system. The scheme is shown in Fig. 1.

Fig. 1. Scheme of the integrated navigation system.

An attitude and heading reference system is an independent system for course and motion, and it mainly provides the course of the carrier and its motion information [4]. For example, the system used in this paper is a low-cost, low-accuracy inertial navigation system whose hardware includes one set of attitude and heading system, and its course error is corrected using the deviation angle and the magnetic compass indication.
At the same time, it adopts a unique algorithm that estimates, under the dynamic model, the conversion angle from the main coordinate system to the tangent coordinate system through a 6-state Kalman filter of suitable gain; therefore, the precision of the course angle is high. In Fig. 1, "Heading correction" means that the computed course of the SINS is substituted by the course angle information of the AHRS.

Establishment of integrated navigation system model

There are many error sources in a low-cost SINS. In the error model, in addition to the gyroscope drift and the accelerometer error, the gyroscope error caused by temperature and the calibration factor errors of the gyroscope and accelerometer should be included. The error model can be written as in [4][5][6], where the attitude error vector, velocity error vector, and position error vector are the navigation error states; d and b are the gyroscope and accelerometer error vectors, respectively; T_d is the gyroscope error caused by temperature; k_g and k_a are the calibration factor errors of the gyroscope and accelerometer; and F_D, G_D, and W_D are the transfer matrix, noise matrix, and system noise of the system, respectively [3]. In the integrated navigation system, because the systematic error is constantly estimated and corrected, we can adopt a simplified model to reduce the computational load in real applications, without considering the height channel. In the simplified model, the random errors of the gyroscope and accelerometer are treated from the mathematical point of view: the gyroscope error is considered as the sum of colored random noise and white noise, so the three gyroscope axes need to be expanded by three additional random error states, while the accelerometer error is considered as random white noise only. The simplified error model then takes the form

δẊ = F δX + G W,   (2)

where δX is the 10 × 1 state vector, F is the 10 × 10 transfer matrix, G is the 10 × 9 noise matrix, and W is the 9 × 1 white noise vector. In Eq. (2), the state vector of the system collects the navigation parameter error states together with the three gyroscope colored-noise error states; the white noise vector of the system is

W = [ω_gx, ω_gy, ω_gz, ω_rx, ω_ry, ω_rz, ω_ax, ω_ay, ω_az]^T.

The meaning of each parameter in Eqs. (2)–(6) can be found in Reference [1].

Kalman filter model of closed-loop correction

The control term U(k−1) should be added to the Kalman filter in order to correct the SINS, which yields the dynamic equation of the system. When the quadratic performance index is considered, the corresponding control law can be derived as in [1]. In summary, under complete closed-loop correction and control, the recursive formulas for the covariance and gain matrices of the Kalman filter are unchanged, but the predicted state estimate is zero, and the real-time state estimate takes a simple form.

Experiment results and conclusions

In order to verify the scheme described above, a static experiment lasting about 1 h and a dynamic experiment lasting 3 h 20 min were carried out in a certain district. The precision of the gyroscope used in the experiment was 3°/s, and the accelerometer precision was 5 × 10⁻³ g; the output course angle of the AHRS replaced the output course angle of the SINS in real time. The position accuracy of GPS was 20 m and the velocity accuracy was 0.5 m/s. The sampling period of the gyroscope and accelerometer data was 0.01 s, the data sample period of GPS was 5 s, and the integration period was also 5 s. The navigation parameter error curves of the static experiment are shown in Fig. 2, and the standard deviation of each navigation parameter is shown in Table 1.
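To make the closed-loop correction described above concrete, the following is a minimal sketch of one filter cycle. The state and noise dimensions follow Eq. (2), but the matrices F, G, H, Q, R and the innovation vector are placeholder values assumed only for illustration; they are not the matrices of the actual system.

```python
import numpy as np

# Sketch of one cycle of the closed-loop (feedback) error-state Kalman filter: the
# covariance/gain recursions keep their standard form, while the predicted error state
# is zero because the estimated errors are fed back into the SINS at every step.
n_x, n_w, n_z = 10, 9, 6                     # state, process-noise, measurement dims (n_z assumed)

F = np.eye(n_x)                               # discrete-time transfer matrix (placeholder)
G = np.zeros((n_x, n_w)); G[:n_w, :] = np.eye(n_w)        # noise input matrix (placeholder)
H = np.zeros((n_z, n_x)); H[:, 3:3 + n_z] = np.eye(n_z)   # measurement matrix (placeholder)
Q = 1e-4 * np.eye(n_w)                        # process noise covariance (placeholder)
R = np.diag([20.0**2] * 3 + [0.5**2] * 3)     # GPS position/velocity noise from quoted accuracies

def closed_loop_step(P, z):
    """One predict/update cycle; returns the error estimate to feed back and the new P."""
    P_pred = F @ P @ F.T + G @ Q @ G.T        # covariance prediction (unchanged recursion)
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)   # gain (unchanged recursion)
    x_hat = K @ z                             # predicted state is 0 under full feedback
    P_new = (np.eye(n_x) - K @ H) @ P_pred
    return x_hat, P_new

# Example: process one GPS-minus-SINS innovation (position + velocity differences).
P = np.eye(n_x)
z = np.array([5.0, -3.0, 2.0, 0.1, -0.05, 0.02])   # synthetic innovation vector
x_hat, P = closed_loop_step(P, z)
print("estimated error states to feed back:", np.round(x_hat, 4))
```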
The navigation parameter error curves of the dynamic experiment are shown in Fig. 3. From the static and dynamic experiment curves, it can be seen that when a low-cost, low-accuracy SINS is integrated with GPS without the online correction from AHRS, the course angle exhibits a divergent trend because of the low precision of the gyroscope. At the same time, quantities such as the position and velocity errors of the integrated navigation system and the pitch and roll attitude errors all converge. The dynamic experiment curves (Fig. 3(e)) show that the trajectory obtained from GPS positioning coincides with the trajectory of the integrated system, which can also be seen in Fig. 3(f). Therefore, the position output of the integrated navigation system stays consistent with the position output of the GPS, and the integrated navigation system can reflect the dynamic course accurately and in a timely manner.
A unifying perspective on linear continuum equations prevalent in physics. Part V: resolvents; bounds on their spectrum; and their Stieltjes integral representations when the operator is not selfadjoint

We consider resolvents of operators taking the form ${\bf A}=\Gamma_1{\bf B}\Gamma_1$ where $\Gamma_1({\bf k})$ is a projection that acts locally in Fourier space and ${\bf B}({\bf x})$ is an operator that acts locally in real space. Such resolvents arise naturally when one wants to solve any of the large class of linear physical equations surveyed in Parts I, II, III, and IV that can be reformulated as problems in the extended abstract theory of composites. We review how $Q^*$-convex operators can be used to bound the spectrum of ${\bf A}$. Then, based on the Cherkaev-Gibiansky transformation and subsequent developments, that we reformulate, we obtain for non-Hermitian ${\bf B}$ a Stieltjes type integral representation for the resolvent $(z_0{\bf I}-{\bf A})^{-1}$. The representation holds in the half plane $\Re(e^{i\vartheta}z_0)>c$, where $\vartheta$ and $c$ are such that $c{\bf I}-[e^{i\vartheta}{\bf B}+e^{-i\vartheta}{\bf B}^\dagger]$ is positive definite (and coercive).

Introduction

In Parts I, II, III, and IV [32][33][34][35] we established that an avalanche of equations in science can be rewritten in the form

J(x) = L(x)E(x) − s(x),   Γ_1 E = E,   Γ_1 J = 0,   (1.1)

as encountered in the extended abstract theory of composites, where Γ_1 is a projection operator that acts locally in Fourier space, L(x) is an operator that acts locally in real space, and s(x) is the source term. Here in Part V we are concerned with resolvents of the form

[z_0 I − A]^{−1}   (1.2)

(that, as we will see in the next section, arise naturally in the solution of (1.1)), where the operator A takes the form A = Γ_1 B Γ_1, in which Γ_1 is a projection operator in Fourier space, while B acts locally in real space and typically has an inverse, and one that is easily computed. Thus if Γ_1 or B act on a field F to produce a field G then we have, respectively, that G(x) = B(x)F(x) or G(k) = Γ_1(k)F(k), in which G(k) and F(k) are the Fourier components of G and F. As in the previous parts we define the inner product of two fields P_1(x) and P_2(x) to be

(P_1, P_2) = ∫ (P_1(x), P_2(x))_T dx,   (1.3)

where (·, ·)_T is a suitable inner product on the space T such that the projection Γ_1 is selfadjoint with respect to this inner product, and thus the space E onto which Γ_1 projects is orthogonal to the space J onto which Γ_2 = I − Γ_1 projects. We define the norm of a field P to be |P| = (P, P)^{1/2}, and given any operator O we define its norm to be

‖O‖ = sup_{|P|=1} |OP|.   (1.4)

When we have periodic fields in periodic media the integral in (1.3) should be taken over the unit cell Ω of periodicity. If the fields depend on time t then we should set x_4 = t and take the integral over R^4, with the integral over the spatial variables restricted to Ω if the material and fields are spatially periodic.
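As a concrete toy illustration of these definitions, the action of A = Γ_1 B Γ_1 can be computed on a periodic grid by switching between Fourier space (for Γ_1) and real space (for B). In the sketch below, the choice Γ_1(k) = k kᵀ/|k|² (the projection onto gradient-like fields familiar from the scalar conductivity problem), the coefficient field B(x), the grid, and the use of a plain truncated Neumann series for the resolvent action are all assumptions made only to have a runnable example; none of these specifics come from the paper.

```python
import numpy as np

# Toy example: A = Gamma_1 B Gamma_1 on a 2D periodic grid, and the resolvent action
# (z0*I - A)^{-1} s evaluated by a truncated Neumann series (illustration only).
n = 64
kx = np.fft.fftfreq(n); ky = np.fft.fftfreq(n)
KX, KY = np.meshgrid(kx, ky, indexing="ij")
K2 = KX**2 + KY**2
K2[0, 0] = 1.0                       # avoid 0/0; Gamma_1 is taken to be zero at k = 0

def gamma1(field):
    """Apply Gamma_1(k) = k k^T/|k|^2 to a 2-component field, locally in Fourier space."""
    fx, fy = np.fft.fft2(field[0]), np.fft.fft2(field[1])
    dot = (KX * fx + KY * fy) / K2
    gx, gy = KX * dot, KY * dot
    gx[0, 0] = gy[0, 0] = 0.0
    return np.stack([np.fft.ifft2(gx).real, np.fft.ifft2(gy).real])

x = np.linspace(0, 1, n, endpoint=False)
chi = (np.add.outer(np.sin(2 * np.pi * x), np.cos(2 * np.pi * x)) > 0).astype(float)
B = chi                              # B(x) = chi(x)*I acts by pointwise multiplication

def A(field):
    return gamma1(B * gamma1(field))

# Resolvent action via the series sum_m A^m s / z0^(m+1); here ||A|| <= 1 so z0 = 2 converges.
z0 = 2.0
s = gamma1(np.stack([np.random.rand(n, n), np.random.rand(n, n)]))
u, term = np.zeros_like(s), s / z0
for _ in range(60):
    u, term = u + term, A(term) / z0
print("residual |(z0*I - A)u - s| =", np.linalg.norm(z0 * u - A(u) - s))
```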
The goal of this paper is four fold: • To highlight the connection between such resolvents and the solution of problems in the extended theory of composites; • To review how Q * -convex operators can be used to bound the spectrum of A = Γ 1 BΓ 1 , and to review some methods for constructing Q * -convex operators [31]; • To establish a remarkable connection, founded on the work of Cherkaev and Gibiansky [13] and elaborated upon in [27,30,37,38], between the resolvent with A = Γ 1 BΓ 1 where A is not Hermitian, and the inverse H 0 of an associated operator having Hermitian real and imaginary parts, and with H 0 being real and positive definite when z 0 is real and greater than a constant c such that c − [B + B † ] is positive definite. Furthermore the Hermitian part of H 0 is positive definite in the half plane Re z 0 > c. • On the basis of this connection to obtain Stieltjes type integral representations for the resolvent in the case where B is non-selfadjoint but there exists an angle ϑ such that c − [e iϑ B + e −iϑ B † ] is positive definite (and coercive) for some constant c. The integral representation holds in the half plane Re(e iϑ z 0 ) > c. The work presented is largely based on the articles [13,27,31,37,38], but develops some of the ideas further. There is also the related resolvent where in the final expression on the first line the inverse is to be taken on the subspace onto which Γ 1 projectsthus R is the resolvent of A within this subspace, i.e. on this subspace (1.6) The equivalences in (1.5) are easily checked by expanding each expression in a powers of A or Γ 1 B. One reason for the importance of knowing the resolvent as a function of z is that it allows computation of any operator valued analytic function f (A) of the matrix A according to the formula where γ is a closed contour in the complex plane that encloses the spectrum of A. The first equation in (1.1) is called the constitutive law with s(x) being the source term. As remarked previously, if the null space of L is nonzero then one may one can often shift L(x) by a multiple c of a "null-T operator" T nl (x) (acting locally in real space or spacetime, and discussed further in Section 3), defined to have the property that that then has an associated quadratic form (possibly zero) that is a "null-Lagrangian". Clearly the equations (1.1) still hold, with E(x) unchanged and J(x) replaced by J(x) + cT nl E(x) if we replace L(x) with L(x) + cT nl (x). In other cases L may contain ∞ (or ∞'s) on its diagonal. If one can remove any degeneracy of L(x), we can consider the dual problem with Γ 2 = I − Γ 1 , and then, if desired, try to shift L −1 (x) by a multiple of a "null-T operator" T nl (x) satisfying Γ 2 T nl Γ 2 = 0 to remove its degeneracy. Our results, in particular, apply to the family of problems associated with analyzing the response of two phase composite materials, where B(x) itself depends on z 0 and takes the form (1.10) where the χ i (x) are the characteristic functions satisfying χ 1 (x)+χ 2 (x) = 1, while L 1 and L 2 are the tensors of the two phases, representing their material properties, and the "reference parameter" z 0 can be freely chosen. In the particular case when L 1 = z 1 I and L 2 = z 2 I we have (1.12) where now, for example, z 1 and z 2 may represent the conductivities of the two phases and z 0 a reference conductivity. 
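As a minimal numerical illustration of the contour-integral formula for f(A) quoted above, the following sketch (an editorial addition, not part of the original text) takes a small real symmetric matrix, chooses f = exp, and discretizes a circular contour enclosing the spectrum with the trapezoidal rule, which converges rapidly for periodic integrands.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                       # symmetric, so the spectrum is real

# Reference value of exp(A) from the eigendecomposition.
w, V = np.linalg.eigh(A)
expA_ref = (V * np.exp(w)) @ V.T

# Circular contour z(t) = c + R*exp(i t) chosen to enclose all eigenvalues.
c = w.mean()
R = np.abs(w - c).max() + 1.0
m = 400                                  # quadrature nodes on the circle
I = np.eye(n)
expA_contour = np.zeros((n, n), dtype=complex)
for tk in 2 * np.pi * np.arange(m) / m:
    z = c + R * np.exp(1j * tk)
    dz = 1j * R * np.exp(1j * tk) * (2 * np.pi / m)
    expA_contour += np.exp(z) * np.linalg.inv(z * I - A) * dz
expA_contour /= 2j * np.pi

# Agreement to roughly machine precision.
print(np.max(np.abs(expA_contour - expA_ref)))
```

The same loop, with exp replaced by any function analytic inside the contour, reproduces the corresponding f(A).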
With the choice z 0 = z 2 the expression (1.2) reduces to which is now again a problem directly of the form (1.2) with B and z 0 now being identified as (1.14) 2 Recasting the resolvent problem as a problem in the extended abstract theory of composites As Γ 1 is a selfadjoint projection in Fourier space, so too is Γ 2 = I − Γ 1 . We let E denote the space of fields onto which Γ 1 projects, and J denote the orthogonal space of fields onto which Γ 2 projects. Associated with these operators is a problem in the extended abstract theory of composites: given s ∈ E, find E ∈ E and J ∈ J that solve The abstract theory of composites is reviewed, for example, in Chapter 12 and forward in [28], and in Chapters 1 and 2 in [39]. The extended abstract theory of composites is developed in Chapter 7 of [39] and further in [31]. Applying Γ 1 to both sides of (2.1) we obtain Next let us introduce a constant Hermitian (typically positive definite) reference tensor L 0 , the associated "polarization field" and a matrix Γ defined by E = ΓP if and only if E ∈ E, P − L 0 E ∈ J. (2.5) Equivalently, Γ can be defined by where the inverse is to be taken on the subspace E. The operator L 0 Γ is the projection onto the subspace L 0 E that annihilates J. These are not orthogonal subspaces unless one modifies the norm to (P 1 , P 2 ) L0 = (L 0 P 1 , P 2 ) [42] but this brings with it the problem that if L(x) is Hermitian in the original norm, then it will not be in the new norm unless L(x) commutes with L 0 . Instead, if L 0 is positive definite then L 1/2 0 ΓL 1/2 0 is the projection onto L 1/2 0 E that annihilates the orthogonal subspace L −1/2 0 J. We require that L 0 be chosen so that the rank of Γ 1 (k)L 0 Γ 1 (k) does not change as k varies. While the operator Γ is not a projection with respect to the standard norm it satisfies (2.7) From (2.4) it follows that ΓP = −E, and so Comparing this with (2.3) gives While it appears like the right hand side of (2.10) depends on L 0 , and not just on L and Γ 1 the identity and the preceding derivation shows it does not. In particular, a general choice of L 0 gives the same result as the choice L 0 = z 0 I, for which Γ = Γ 1 /z 0 . This establishes the identity where R is the resolvent (1.5) and B 0 = z 0 I − L = B + z 0 I − L 0 . Conversely, if we are interested in computing the resolvent in (1.2) or (1.5), then we can recast it as a problem in the theory of composites with L = L 0 − B, where we are free to choose L 0 . The solution (2.9) is well known in the theory of composites: see, for example Chapter 14 of [28], [56], and references therein. For the special case of a two phase medium where B(x) takes the form (1.10) we may take L 0 = L 2 giving B(x) = χ(x)(L 2 − L 1 ) and correspondingly (2.12) Having established this connection with the resolvent we can now apply all the theory developed in extended abstract theory of composites to resolvents of the required form, and conversely. In the theory of composites it is clear that (2.1) can be written in the equivalent form So direct analogy with (2.9) gives 16) and in particular with M 0 = I/z 0 this implies It follows from this identity that if B(x) = χ(x)I where χ(x) is a characteristic function, then So if λ is in the spectrum of Γ 1 BΓ 1 then 1 − λ will be in the spectrum of Γ 2 BΓ 2 . This can also be established by representing χI in a basis where Γ 1 is block diagonal with I in the first block and 0 in all other entries. There is another connection with the theory of composites. 
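The pairing λ ↔ 1 − λ between the spectra of Γ₁BΓ₁ and Γ₂BΓ₂ when B = χI, noted just above, is easy to verify numerically. In the sketch below (an editorial illustration: the random projection, the 0/1 mask standing in for χ, and the restriction to eigenvalues strictly between 0 and 1, which sets aside the trivial values 0 and 1, are all choices made here) the two spectra are computed on the subspaces E and J.

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 12, 5

# Orthonormal bases of a random r-dimensional subspace E and its complement J.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
U1, U2 = Q[:, :r], Q[:, r:]

# B = chi * I with chi a 0/1 "characteristic function" (a diagonal mask here).
chi = rng.integers(0, 2, size=n).astype(float)
B = np.diag(chi)

# Gamma_1 B Gamma_1 restricted to E, and Gamma_2 B Gamma_2 restricted to J.
lam_E = np.linalg.eigvalsh(U1.T @ B @ U1)
lam_J = np.linalg.eigvalsh(U2.T @ B @ U2)

def nontrivial(vals, tol=1e-8):
    """Keep eigenvalues strictly inside (0, 1)."""
    return np.sort(vals[(vals > tol) & (vals < 1 - tol)])

# The nontrivial parts of the two spectra pair up as lambda <-> 1 - lambda.
print(nontrivial(lam_E))
print(np.sort(1 - nontrivial(lam_J)))
```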
Suppose that s ∈ E is given and define the three projection operators 19) and the three subspaces U, E, and J onto which they project. Now the solution to E to (2.1) can be written as Hence (2.1) can be recast as a standard problem in the abstract theory of composites: given s ∈ U find z * , which is known in the theory of composites as an effective parameter, such that 20) and the solution (2.9) implies Now we treat B as fixed so that L depends on z 0 . Consider the function z * (z 0 ). When z 0 has positive (negative) imaginary part, and B is a matrix, the resolvent [z 0 I − Γ 1 B] −1 is negative (positive) definite. We conclude that the imaginary part of z * has the same sign as that of z 0 . When B is a finite dimensional matrix the poles and zeros of z * lie on the real axis, are simple, and the poles alternate with the zeros along the real axis. Assuming the source s excites all modes (i.e is not orthogonal to any eigenfunction of Γ 1 BΓ 1 ) the zeros of z * (z 0 ) will reveal the spectrum Owing to the invariance of the form of (2.21) when we make the replacements we obtain the alternative identity: Treating B as fixed we see that the spectrum of [z 0 − Γ 2 BΓ 2 ] −1 is revealed by the spectrum of z * (z 0 ). Bounding the spectrum of A and Q * -convexity The spectrum of A consists of those values of z 0 for which the inverse of z 0 I − A does not exist. Let us assume that B and hence A are self adjoint. Then the spectrum is on the real axis and we let [α − , α + ] denote the smallest interval on the real axis that contains all the spectrum (α − could be −∞ and α + could be +∞). Here we interested in finding outer bounds on the spectrum: constants a − and a + such that [α − , α + ] ⊂ [a − , a + ]; and outer bounds on the spectrum: constants Inner bounds on the spectrum allow one to see how tight are outer bounds, and vice-versa. The most well known inner bounds on the spectrum are those obtained by the Rayleigh-Ritz method: one may look for the extreme lower value c − RR and extreme upper value c + RR of (As, s)/|s| 2 ∈ [α − , α + ] as s varies in some finite dimensional subspace S. Let s = s − RR and s = s + RR be the corresponding fields, normalized with |s − RR | = |s + RR | = 1, that achieve these extreme values. The Rayleigh-Ritz method can be improved by using the power method as |((A − cI) n s, s)|/|s| 2 provides a lower bound on the maximum of |α − − c| n and |α + − c| n . This gives the bounds Outer bounds on the spectrum, that we come to now, can be used to verify that these conditions hold. Outer bounds on the spectrum can be obtained using the powerful methods introduced in [31] to bound resolvents using Q * -convex operators. The methods build on a large body of literature associated with quasiconvex functions and the associated notion of weak lower semicontinuity. This has a long history, with many applications, reviewed for example in [9,14]. The approach can have the advantage that in one fell swoop it gives bounds that are universally valid for the spectrum of all resolvents in a given class. For example, if L(x) is piecewise constant taking N values, that then can be labeled as N different phases, then the bounds on the spectrum can be independent of the geometry, i.e., on the way these phases are distributed. As yet, due to their novelty, the methods described have not been exploited, even within the theory of composites. 
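The Rayleigh–Ritz and power-method inner bounds described above take only a few lines to reproduce. The sketch below (an editorial illustration with an arbitrary random Hermitian matrix and an arbitrary trial subspace) checks that the Rayleigh–Ritz extremes lie inside the true spectral interval [α−, α+], and that the m-th root of |((A − cI)^m s, s)| is a lower bound on the largest distance of the spectrum from c.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 200, 6
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                                   # Hermitian test operator

alpha = np.linalg.eigvalsh(A)
alpha_minus, alpha_plus = alpha[0], alpha[-1]       # true spectral interval

# Rayleigh-Ritz on a random k-dimensional trial subspace: inner bounds.
S, _ = np.linalg.qr(rng.standard_normal((n, k)))
c_rr = np.linalg.eigvalsh(S.T @ A @ S)
print(alpha_minus, "<=", c_rr[0], "   ", c_rr[-1], "<=", alpha_plus)

# Power-method refinement: |((A - cI)^m s, s)|^(1/m) lower-bounds
# max(|alpha_- - c|, |alpha_+ - c|).
c = c_rr.mean()
s = S[:, 0]                                         # a unit trial vector
m = 20
v = s.copy()
for _ in range(m):
    v = A @ v - c * v                               # v = (A - cI)^m s
lower = abs(v @ s) ** (1.0 / m)
print(lower, "<=", max(abs(alpha_minus - c), abs(alpha_plus - c)))
```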
That they will prove to be a strong tool is guaranteed, based on the successful application of quasiconvex functions for obtaining sharp bounds on effective moduli based on the translation method, or method of compensated compactness, as summarized in the books [2,12,28,52,54]. I prefer the name translation method as it applies more broadly (see, for example, [29]) than within the compensated compactness framework of sequences of spatially oscillating fields with progressively finer and finer oscillations (as occurs in periodic homogenization when one has a sequence of periodic materials with smaller and smaller unit cell sizes) -in particular the concept of Q * -convexity, described below, loses its significance in the compensated compactness setting. In practice, if one is not considering optimization problems or energy minimization problems, then one rarely has a sequence of materials, but rather just one inhomogeneous material and one wants to say something about the response of it. For geometries of well separated spheres Bruno [11] obtained some bounds on the spectrum for the conductivity problem. Let us first consider the case when B and hence A are Hermitian. Then, to bound the spectrum of A we look for a Hermitian tensor field T(x), and constant a − such that: Following [29,31] we call T(x) a Q * -convex operator and the associated quadratic form a Q * -convex function. Q *convexity generalizes, for quadratic forms, the notion of quasiconvex functions, which have a long history, reviewed for example in [9,14]. Besides their importance for obtaining bounds on the effective properties of composites as outlined in the books [2,12,28,39,52,55], they have been a powerful tool for the proof of existence of solutions to the nonlinear Cauchy elasticity equations [5,6] and for furthering our understanding of shape memory material alloys [7,8]. For quadratic functions, quasiconvexity [25,40,41], and the closely related A-quasiconvexity [16] are associated with Q * -convexity when T(x) is independent of x and Γ 1 (k) is a homogeneous function of k. The non-negativity of the operator Γ 1 TΓ 1 ≥ 0 is only linked to the highest derivatives in the functional one is minimizing. This is because these notions of quasiconvexity arose in the context of understanding what could go wrong when seeking minimizers of nonconvex functions in the calculus of variations. Without going much into the details, weak lower semicontinuity is what one needs to show existence of smooth solutions to equations, such as the nonlinear elasticity equations where one needs to show that minimizers of the integral of W (∇u(x)), over a body represented by a region Ω in the undeformed state, exist [5,6]. Here W is the elastic energy and u(x) is the position of a particle in the body having coordinates x ∈ Ω in the undeformed state. Rather than there being a minimizer there may be a sequence of highly oscillatory functions u = u i (x), i = 1, 2, 3 . . . , producing ever lower energies that cannot be achieved with smooth functions u(x). If this happens one says that the integral as a function of u is not weakly lower semicontinuous. Quasiconvexity safeguards against this by ensuring that the finest scale oscillations have an energy penalization. When one has quadratic functions W (∇u, ∇∇u, . . . , ∇ m u) it is typically only the dependence of W on the highest derivatives ∇ m u (which dominate when u is highly oscillatory) that is important to determining weak lower semicontinuity and hence the existence of a minimizer. 
One needs to show that W (∇u, ∇u, . . . , ∇ m u) is quasiconvex with respect to ∇ m u when one replaces all the other arguments of W by fixed constants [25]. If W is a quadratic function and a smooth minimizer exists, it satisfies the m-th order gradient elasticity equations discussed in Section 3 of Part IV [35]: hence the connection to Γ 1 (k), with ∇ m u being associated with the terms of order k 2m in Γ 1 (k). For bounding the spectrum of A we need Q * -convexity rather than quasiconvexity if Γ 1 (k) is not a homogeneous function in k. For the Schrödinger equation some Q * -convex operators have been identified (see Sections 13.6 and 13.7 of [39]), but not yet applied to bounding spectrums. Combining the equations in (3.2) gives Then clearly a − is a lower bound on the spectrum of A = Γ 1 BΓ 1 in the space E. Alternatively, for the same or another T satisfying Γ 1 TΓ 1 ≥ 0, one can look for a constant a + such that and we obtain thus implying that a + is an upper bound on the spectrum of A. From the identity (2.17) we see that when B is a finite dimensional matrix, bounds on the spectrum of also allow us to bound the values of z 0 for which R − L −1 has a null space. Note that z 2 0 B approaches −B in the limit as z 0 → ∞. To bound the spectrum of A we seek a T(x) and constant a − such that and then a − is a lower bound on the spectrum of A (where this spectrum itself depends on z 0 ). Bounds analogous to (3.5) can clearly also be obtained. When B(x) = χ(x)I for some characteristic function then, as observed following (2.17), Γ 2 B(x)Γ 2 will have exactly the same spectrum as I − A, even though Γ 1 and Γ 2 project onto different spaces. One of the most important classes of T(x) = T nl (x), not necessarily Hermitian, are those having the property that Γ 1 T nl Γ 1 = 0. The associated quadratic form is then what is known as a null-Lagrangian, so we call them null-T operators. However, it should be remembered that the non-Hermitian part of T nl (x) gets lost when considering the quadratic form -the quadratic form could even be zero if T nl (x) is anti-Hermitian. A simple example is for electrical conductivity with Γ 1 (k) = k ⊗ k/k 2 where one may take T nl (x) to be any antisymmetric matrix valued field with ∇ · T nl = 0. If Γ 1 E = E, then automatically In other words, we are free to subtract T nl (or any multiple of it) from L(x) without disturbing the solution E(x), and with J(x) being replaced by J − T nl E. We can shift L(x) in this way as we please. These T nl (x) allow one to establish equivalence classes between problems taking the form (1.1). Of course, if one is finding the spectral bounds according to the prescription just outlined, then the non-Hermitian part of T nl (x) is irrelevant, and we may as well assume that T nl (x) is Hermitian. Then one can recover it from the associated null-Lagrangian. In this case we call T nl (x) a null-Lagrangian. To generate suitable null-T operators in three dimensions, for equations where Γ 1 (k) has some diagonal block entries of the form k ⊗ k/k 2 , one uses the result that if U(x) is a antisymmetric 3 × 3 matrix valued field with ∇ · U = 0, and e(x) is a three component curl free field, then j = Ue satisfies ∇ · j = 0. This is easily seen if we write e = ∇φ, giving ∇ · j = ∇ · U(x)∇φ = (∇ · U) · ∇φ + Tr(U∇∇φ) = 0, (3.9) in which Tr(U∇∇φ) vanishes because U is antisymmetric while ∇∇φ is symmetric. Equivalently, one may write U = η(u) where the antisymmetric matrix η(u) has the property that η(u)e = u ⊗ v. 
Then ∇ · U is zero if ∇ × u = 0, and (3.9) reflects the fact that the cross product of two curl free fields is divergence free. Here φ could be any potential, or linear combination of potential components, in the equations that are being studied. The same holds true in two dimensions, but then the condition that ∇ · U = 0 forces the antisymmetric 2 × 2 matrix U to be constant, and thus proportional to the matrix for a 90 • rotation. Additionally in two dimensions, R ⊥ acting on a divergence free field j produces a curl free field R ⊥ j. Through these observations one can generate a multitude of null-T operators associated with a given operator Γ 1 . As an application of the power of null Lagrangians in proving uniqueness of solutions, one may consider the elasticity equations, when the elasticity tensor C(x) is bounded and coercive, i.e., on the space of symmetric matrices the inequality β + I ≥ C(x) ≥ β + I (3.11) holds for some constants β + > β − > 0. This implies that a unique solution for the strain ε = [∇u + (∇u)] exists, but that does that uniquely determine u? Korn's inequality shows that it does, but a simpler approach [22] is to introduce the null-Lagrangian associated with a fourth order tensor T, whose action on a matrix P is given by Adding T, with > 0, to C(x) gives an equivalent problem that breaks the degeneracy: for small enough , C(x)+ T is bounded and coercive on the space of all matrix valued fields, not just the symmetric matrix valued fields (see, for example, Section 6.4 of [28]). So ∇u, and hence u, is uniquely determined. In the context of minimizing sequences of fields, Bhattacharya [10] has used T to bound the fluctuations in the antisymmetric part of u in terms of the fluctuations of the symmetric part of ∇u. For a fixed Γ 1 the associated set S of Hermitian Q * -convex operators is a convex set, since if Γ 1 T 1 Γ 1 ≥ 0 and Γ 1 T 2 Γ 1 ≥ 0 then clearly Γ 1 [(T 1 + T 2 )/2]Γ 1 ≥ 0. The set is invariant with respect to additions or subtractions of any null-Lagrangians. If we focus, for simplicity, on Q * -convex operators that do not depend on x, then it makes sense to look for the extreme points of S, modulo additions or subtractions of null-Lagrangians. These extremal T = T e have the property that they lose their Q * -convexity whenever any Q * -convex T that is not a null-Lagrangian is subtracted from it, as portrayed in Figure 1. Since the inequalities (3.13) we see that the best bounds on the spectrum will be generated by the extremal T. The characterization of the extremal T is a challenging problem. Interestingly, there is a connection with extremal polynomials: see [17] and references therein. Extremals Null−Lagrangian Direction (3.14) If for some tensor field T(x) and constant a − one has As the first and last operators are both block diagonal we see that this new a − is also a lower bound on the spectrum of A = Γ 1 BΓ 1 in the space E. Of course if T is block diagonal then we gain nothing by this procedure, but the point is to take operators T where there are off diagonal blocks that couple everything together. Generally it is very difficult to find T(x) such that Γ 1 TΓ 1 ≥ 0 (or T(x) such that G 1 TG 1 ≥ 0 which is essentially the same problem so we will not treat it separately). However there appear to be at least three routes. 
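Before turning to those routes, the null-Lagrangian fact just used, that the cross product of two curl-free fields is divergence free, can be checked symbolically. The sketch below is an editorial addition; the scalar potentials ψ and φ are arbitrary placeholders.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
psi = sp.Function('psi')(x, y, z)        # arbitrary scalar potential for u
phi = sp.Function('phi')(x, y, z)        # arbitrary scalar potential for e

def grad(f):
    return sp.Matrix([sp.diff(f, v) for v in (x, y, z)])

u, e = grad(psi), grad(phi)              # two curl-free fields
j = u.cross(e)                           # j = u x e, i.e. j = eta(u) e with eta(u) antisymmetric

div_j = sum(sp.diff(j[i], v) for i, v in enumerate((x, y, z)))
print(sp.simplify(div_j))                # prints 0: div(grad(psi) x grad(phi)) = 0
```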
One approach, following the ideas of Tartar and Murat [43][44][45][49][50][51], is to look for T(x) that are constant and recognize Γ 1 TΓ 1 ≥ 0 is an inequality in Fourier space with Γ 1 TΓ 1 ≥ 0 acting locally in Fourier space. Thus the inequality holds if and only if Γ 1 (k)TΓ 1 (k) ≥ 0 for all k. [39]. One general class of T(x) (or T) has met with a lot of success as it generates sharp bounds on the effective moduli of composites corresponding to many obtained using the successful Hashin-Shtrikman variational principles [18,19], and sometimes improves on them. This form of T [27] was motivated by optimal bounds [3,21] derived using these variational principles, and is given by: where by choosing we ensure that T is Q * -convex [27]. Typically the "reference medium" L 0 is Hermitian and positive definite, but it suffices for it to be Q * -convex. If T is rotationally invariant then it will suffice to check this inequality for one value of k. There is some freedom in what one means by rotationally invariant. For example for three dimensional conductivity with = 3 one may treat T(x) as a fourth order tensor with three potentials that mix under rotations, or as an array of 9 second order tensors, with three potentials that do not mix under rotations, and T(x) may be rotationally invariant in one sense but not the other. In the first instance there are 3 real parameters that specify a rotationally invariant Hermitian fourth order tensor (each associated with projections onto the three rotationally invariant subspaces of matrices proportional to I, tracefree symmetric matrices and antisymmetric matrices), but three real and three complex numbers in a Hermitian matrix T(x) containing 9 blocks each proportional to the identity matrix. In applications more success in producing tight bounds on the effective properties of composites have been obtained by looking for T(x) that are fourth order tensors for conductivity [4,45,51] and eighth order tensors for elasticity [26,27]. Two other routes can produce a T(x) that depend upon x. This can be advantageous since B(x) depends on x. Given Γ 1 (k) one approach is to take an associated T(x), and then to make a coordinate transformation in the underlying equations to x = x (x) and obtain a Γ 1 (k) and T (x ) in the new coordinates, that depends on x even if T(x) was independent of x. The second approach is to make a substitution. For example, following [46] and Section V(C) of [31], if some of the fields in E derive from the derivatives of any order of a scalar, vector, or tensor potential u then for any given Z(x) and z(x) one can try substituting in a Q * -convex quadratic form, involving u and its derivatives, to get a Q * -convex quadratic form, and associated Q * -convex operator T, involving u and its derivatives. Here u might include all potentials on the right hand side of the constitutive law. Note that even if the original Q * -convex quadratic form, only involves say ∇ u then the new Q * -convex quadratic form will involve both ∇u and u. In this case Γ(k) will transform to a Γ(k) that will not necessarily be a homogeneous function of k even if Γ 1 (k) is a homogeneous function of k. Correspondingly, T will transform to an associated T that generally will be a function of x even if T was not. We will not discuss these last two routes, but instead we refer the interested reader to Section V in [31]. We can also use Q * -convex operators to bound the spectrum of the function 1/z * (z 0 ) defined in Section 2. 
From (2.21) we see this spectrum is contained in the spectrum of the operator Γ 1 BΓ 1 , so outer bounds on this spectrum immediately apply to the spectrum of 1/z * (z 0 ). Similarly, the spectrum of z * as a function of z 0 = 1/z 0 is contained in the spectrum of Γ 2 BΓ 2 and outer bounds on this spectrum immediately apply to the spectrum of z * as a function of z 0 . The results in this Section are easily extended to non-Hermitian operators. For example, we can replace (3.2) with e iϑ A ≥ a − Γ 1 if e iϑ B(x) ≥ T(x) + a − I for all x and Γ 1 TΓ 1 ≥ 0, (3.21) where the inequalities holds in the sense of quadratic forms, i.e. they bound the Hermitian part of e iϑ A given bounds on the Hermitian part of e iϑ B. Similarly with z 0 = 0 so that B(x) is the inverse of B(x), the obvious extension of (3.21) implies bounds on the Hermitian part of e iϑ Γ 2 B −1 Γ 2 given bounds on the Hermitian part of e iϑ B −1 . 4 A remarkable identity between the resolvent of a non-Hermitian operator A = Γ 1 BΓ 1 and the inverse of an associated Hermitian operator Let us express L as L = L 1 + L 2 and consider the equation which we rewrite in the two equivalent forms The first is easily seen to hold by substituting (4.1) in it, and the second follows by substituting the first equation back in (4.1). We write these and the differential constraints as These manipulations are similar to the manipulations of Cherkaev and Gibiansky [13] and the subsequent manipulations in [27] that led to (4.4), (5.2), and (9.2) in Part I [32], and which were generalized in [38] to include source terms. Now there is a close relation between J 0 and E 0 and they need not be real. Clearly, we are back at a problem in the extended abstract theory of composites. It so happens that the constitutive law (4.1) implies this very close relation between J 0 and E 0 . This equation has the solution implying Hence we arrive at the remarkable identity: that holds for any real or complex B and any real or complex z 0 (not to be confused with z 0 ) with B 0 = z 0 I − L 0 giving H 0 as in (4.5), where L 0 is defined by (4.3). In particular, if we take L 1 as the Hermitian part of L and L 2 as the anti-Hermitian part, then L 0 is a Hermitian operator. So we have an identity between the resolvent of a non-Hermitian operator and the inverse of an associated Hermitian operator. Furthermore, and what is more significant, L 0 will be positive definite if and only if L 1 is positive definite. Of course these results apply to matrices as well, not just operators. The simplest example is when Γ 1 = I and B = (z 0 − z)I, where z = z 1 + iz 2 is complex. Then Hence the right hand of (4.6) evaluates to in agreement with (4.6). A novel Stieltjes function integral representation for the resolvent of a non-Hermitian operator Here we obtain Stieltjes type integral representations for the resolvent in the case where B is non-selfadjoint but there exists an angle ϑ such that c − [e iϑ B + e −iϑ B † ] is positive definite (and coercive) for some constant c. The integral representation holds in the half plane Re(e iϑ z 0 ) > c. We just treat the case where ϑ = 0 as the extension to the case where ϑ = 0 is clear. It is obviously best to keep z 0 complex rather than splitting it into its real and imaginary parts. To do this we take where Z (z 0 ) and Z (z 0 ) are the real and imaginary parts of Z(z 0 ), we see that Z (z 0 ) and Z (z 0 ) are Hermitian, and Z (z 0 ) is positive definite if Re(z 0 )I > 1 2 (B + B † ). 
This is an extension of the result that the inverse of a matrix A = A h + A a with positive definite Hermitian part A h and anti-Hermitian part A a has a positive definite Hermitian part. To establish this we write and then diagonalize the Hermitian matrix iA to calculate the inverse. The Hermitian part of L 0 (z 0 ) is and this is clearly positive definite if (5.3) holds. Furthermore, the real and imaginary parts of L 0 (z 0 ) are each Hermitian by themselves. Let c be a real value of z 0 such that (5.3) holds (and c − 1 2 (B + B † ) is coercive), and define w 0 = z 0 − c. Then we have So H 0 (w 0 ) is Hermitian when w 0 is real and positive, and the Hermitian part of H 0 (w 0 ) is positive definite for all w 0 in the right hand plane. Additionally, as w 0 → ∞, we have These properties are reminiscent of the complex conductivity tensor σ as a function of −iω where ω is the frequency. The associated permittivity ε = iσ/ω is then a Stieltjes function of −ω 2 : see, for example, [36]. Analogously, with −iω replaced with w 0 and setting v = w 2 0 we have that H 0 /w 0 is a a operator valued Stieltjes function of v. Equivalently, H 0 (w 0 )/w 0 has the representation formula: where µ(λ) is a positive semidefinite Hermitian valued measure. This measure is given by the Stieltjes inversion formula: for all λ 2 > λ 1 ≥ 0, where Im denotes the imaginary part In summary, by substituting (5.8) back in (4.6) we obtain an integral representation for R. We have established the following: Theorem 1 Given an operator B and a real constant c such that c − (B + B † ) is coercive, let L 1 (z 0 ), L 2 , and L 0 (z 0 ) be as given by (5.1) Furthermore, H 0 (w 0 ) has the integral representation (5.8) in terms of 12) and the positive semidefinite operator valued measure µ(λ) given by (5.9). We emphasize that the integral representation only holds for z 0 in the half plane Re(z 0 ) > c. There are other families of non-selfadjoint operators for which the resolvents have integral representations. The simplest is for bounded operators where the real and imaginary parts are each selfadjoint with a real part that is coercive (as for the just mentioned complex conductivity tensor σ as a function of iω). For dissipative operators, which (modulo multiplication by a complex number) have a positive semi-definite imaginary part, one can embed the Hilbert space on which A acts in a larger Hilbert space and find a Hermitian operator H such that < f (H)P, Q >=< f (A)P, Q > for all P and Q in the Hilbert space where A acts, where < , > denotes the norm in this Hilbert space [48]. The spectral theory for H then allows one to compute f (H) for analytic functions f , and gives a Nevanlinna-Herglotz representation integral representation for the resolvent associated with H. This result was anticipated by Livšic in his construction of a characteristic function of a dissipative operator. These and further mathematical developments in the area have been summarized by Kuzheel' [23] and Pavlov [47]. From the physics perspective, an excellent treatment of embedding dissipative problems in a larger Hilbert space in which energy is conserved, and also allowing for dispersion (frequency dependent moduli) has been given by Figotin and Schenker [15] (see also [53]). As they point out, one can think of the additional fields as corresponding to a system with an infinite number of "hidden variables" that may also be called a heat bath. 
While one can easily go from a conservative system with an infinite number of hidden variables to a dissipative system, they show the reverse is true too. By contrast, our analysis does not correspond to introducing an infinite number of "hidden variables" and applies simply when one has a finite dimensional Hilbert space (n-dimensional vector space) in which A and B are non-Hermitian n×n matrices. In that case, H 0 /w 0 is a 2n×2n matrix valued Stieltjes function of v = w 2 0 = (z 0 −c) 2 . The measure entering the spectral representation will not be discrete, in contrast to the usual spectral representations of Hermitian matrices. This measure is then a positive semidefinite Hermitian 2n × 2n matrix valued continuous measure. The "heat bath" approach corresponds to embedding in a problem with Hermitian (n + m) × (n + m) matrices, as m (which can be thought of as the number of oscillators) approaches infinity. A beautiful physical demonstration of the energy absorbing properties of a system of undamped oscillators (pendulums) is in [1]. The approach we take here also has some similarities with Livšic's compression of resolvents: see [20,24] and references therein.
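As a closing numerical aside (an editorial addition), the elementary fact used in Section 5, that the inverse of a matrix whose Hermitian part is positive definite again has a positive definite Hermitian part, can be checked directly on a randomly generated complex matrix:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 6

def hermitian_part(M):
    return (M + M.conj().T) / 2

X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A_h = X.conj().T @ X + np.eye(n)            # Hermitian and positive definite
Y = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A_a = (Y - Y.conj().T) / 2                  # anti-Hermitian part
A = A_h + A_a

# Smallest eigenvalue of the Hermitian part of A^{-1} is strictly positive.
print(np.linalg.eigvalsh(hermitian_part(np.linalg.inv(A))).min())
```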
2020-06-08T01:00:29.187Z
2020-06-04T00:00:00.000
{ "year": 2020, "sha1": "4f946a746809e1b895a05aad0c942241d9db3d5b", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "4f946a746809e1b895a05aad0c942241d9db3d5b", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Mathematics" ] }
254209735
pes2o/s2orc
v3-fos-license
An Improved Synthesis Phase Unwrapping Method Based on Three-Frequency Heterodyne An improved three-frequency heterodyne synthesis phase unwrapping method is proposed to improve the measurement accuracy through phase difference and phase sum operations. This method can reduce the effect of noise and increase the equivalent phase frequency. According to the distribution found in the phase difference calculation process, the Otsu segmentation is introduced to judge the phase threshold. The equivalent frequency obtained from the phase sum is more than those of all projected fringe patterns. In addition, the appropriate period combinations are also studied. The simulations and related experiments demonstrate the feasibility of the proposed method and the ability to improve the accuracy of the measurement results further. Introduction Fringe projection profilometry (FPP) is an optical three-dimensional (3D) profile measurement technique [1][2][3][4], which is a non-contact active measurement method to obtain the 3D profile of an object by projecting and collecting fringe patterns. With the development of digital projectors and sensor devices, this method achieves high accuracy at a low cost. Therefore, this method has many applications in reverse engineering, mechanical assembly, biomedicine, heritage conservation, and other fields [5][6][7]. The core content in structured light 3D measurement is obtaining the continuous phase of the object [8,9]. The phase acquisition is divided into Fourier transform profilometry (FTP) [10] and phase-shifting profilometry (PSP) [11]. For FTP, the fundamental frequency component is filtered out in the frequency domain after the collected image is transformed by the Fourier transform. Then, the Fourier inverse transform is used to obtain the phase. This method only needs one fringe image [12,13] and has good performance in high-speed measurement, but the measurement accuracy of this method is low. However, according to the principle of PSP, the phase is obtained by projecting multiple groups of phase-shifting fringe patterns to the object and performing the point-to-point operations on the collected fringes. This method effectively avoids the influence between adjacent points and has high measurement accuracy [14]. Since these methods extract the phase by tangent calculation, the phase values range from −π to π. Therefore, a phase unwrapping process is required to obtain a continuous phase. Among the phase unwrapping methods, they can be broadly classified into two categories, spatial phase unwrapping (SPU) [15] and temporal phase unwrapping (TPU) [16]. The SPU [17][18][19] algorithms unwrap the phase by the phase value of adjacent pixels. However, one problem with this SPU algorithm is that the phase errors can spread to other locations. In contrast, the TPU algorithm does not have this problem [20][21][22][23]. The basic idea of this algorithm is to make the frequency of fringes change with time, and the fringe Principle The FPP measurement system comprises a projector, a camera, a processing unit (computer), and a working plane. In a 3D measurement process, a computer generates pre-designed sinusoidal fringe patterns, which are then projected onto the object's surface by a projector. The camera captures the deformed fringes modulated by the object, then uses the phase extraction formula to obtain the wrapped phase. The schematic diagram is shown in Figure 1. 
Wrapped Phase Extraction

In N-step PSP, the projected fringe patterns I_n^p (n = 1, . . . , N; N ≥ 3) can be denoted as I_n^p(x_p, y_p) = a_p + b_p cos[2πf x_p − 2πn/N], (1) where (x_p, y_p) is the pixel coordinate of the projector, a_p is the average intensity, b_p is the amplitude, f is the frequency of the projected fringe (f = 1/T, where T is the period of the projected fringe), n represents the phase-shifting index, and N is the phase-shifting step number [14]. The intensity of the deformed fringe patterns captured by the camera can be expressed as I_n(x, y) = A(x, y) + B(x, y) cos[ϕ(x, y) − 2πn/N], (2) where (x, y) is the pixel coordinate of the camera, A(x, y) is the average intensity relating to the pattern brightness and background illumination, B(x, y) is the intensity modulation relating to the pattern contrast and surface reflectivity, and ϕ(x, y) is the corresponding wrapped phase, which can be extracted by [14] ϕ(x, y) = tan⁻¹[Σ_{n=1}^{N} I_n(x, y) sin(2πn/N) / Σ_{n=1}^{N} I_n(x, y) cos(2πn/N)]. (3) It can be noted that there are three unknown quantities, A(x, y), B(x, y), and ϕ(x, y), in Equation (2), so at least three fringe patterns are needed to calculate the wrapped phase ϕ(x, y). Due to the different projected fringe frequencies used in the TFH method, the wrapped phase of each frequency needs to be obtained for further unwrapping.

Principle of Phase Unwrapping

The phase values calculated by Equation (3) range from −π to π and will have phase jumps of 2π. To obtain a continuous phase, the wrapped phase plus an integer multiple of 2π is needed, which is given by Φ(x, y) = ϕ(x, y) + 2πk(x, y), where ϕ(x, y) is the wrapped phase, Φ(x, y) is the unwrapped phase, and k(x, y) is the fringe order. The main task of phase unwrapping is to calculate the correct fringe orders k(x, y). In the traditional TFH phase unwrapping algorithm, three groups of fringes with frequencies of f1, f2, and f3 (f1 > f2 > f3) are projected onto the target object. Then, the wrapped phases ϕ1, ϕ2, and ϕ3 are obtained by Equation (3). The synthetic phases ϕ12d, ϕ23d, and ϕ123d are obtained by the pairwise phase difference operations of Equation (5), where the subscript d refers to the phase difference operation. The equivalent frequencies of the synthetic phases satisfy f12d = f1 − f2, f23d = f2 − f3, and f123d = f12d − f23d. The obtained synthetic phase is equivalent to the phase extracted from fringes of the corresponding equivalent frequency, so a phase with a larger equivalent period is obtained by the phase difference operation. If the fringe periods are chosen appropriately, the synthetic phase ϕ123 with a period covering the whole measurement field can be obtained after two phase difference operations, and Φ(x, y) = ϕ123d(x, y) is then the continuous phase that helps calculate the fringe orders. The fringe order k of ϕh and the continuous phase Φh can be obtained by Equation (6) as k(x, y) = round{[(fh/fl)Φl(x, y) − ϕh(x, y)]/2π}, Φh(x, y) = ϕh(x, y) + 2πk(x, y), (6) where round[·] is the symbol indicating the nearest integer to the value, fl and fh are the frequencies of the low-frequency and high-frequency fringes, ϕh is the high-frequency wrapped phase, and Φl and Φh are the low-frequency and high-frequency continuous phases [29]. By repeatedly applying the process of recovering the high-frequency phase with the low-frequency phase, the final continuous phase with the highest frequency is obtained. For the traditional TFH phase unwrapping algorithm, it is easy to misjudge the phase when the phase values are close, because of the noise effect in the phase difference calculation process, which will lead to a phase jump with a value of 2π. In addition, the reduction of the equivalent frequency will also affect the measurement accuracy.
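The wrapped-phase extraction of Equation (3) and the reference-based unwrapping of Equation (6) condense into a short numerical sketch. The example below is an editorial illustration: the one-dimensional fringe, the particular periods, and the use of the exact low-frequency phase as the continuous reference Φl are hypothetical choices, not the paper's settings.

```python
import numpy as np

def wrapped_phase(frames):
    """N-step PSP: frames[n] = A + B*cos(phi - 2*pi*n/N); returns phi wrapped to (-pi, pi]."""
    N = len(frames)
    n = np.arange(N).reshape(-1, 1)
    s = np.sum(frames * np.sin(2 * np.pi * n / N), axis=0)
    c = np.sum(frames * np.cos(2 * np.pi * n / N), axis=0)
    return np.arctan2(s, c)

def unwrap_with_reference(phi_h, Phi_l, f_h, f_l):
    """Fringe order from a continuous low-frequency reference, as in Equation (6)."""
    k = np.round(((f_h / f_l) * Phi_l - phi_h) / (2 * np.pi))
    return phi_h + 2 * np.pi * k

# One-dimensional demo with hypothetical frequencies.
x = np.arange(512)
N, f_h, f_l = 4, 1 / 25.0, 1 / 512.0
true_h, true_l = 2 * np.pi * f_h * x, 2 * np.pi * f_l * x
frames_h = np.array([100 + 50 * np.cos(true_h - 2 * np.pi * n / N) for n in range(N)])

phi_h = wrapped_phase(frames_h)
Phi_h = unwrap_with_reference(phi_h, true_l, f_h, f_l)
print(np.max(np.abs(Phi_h - true_h)))    # close to machine precision
```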
The Proposed Algorithm

Figure 2 shows the schematic diagram of the proposed algorithm. Both phase difference and phase sum operations are introduced to obtain synthetic phases, which together lead to the final continuous phase. Firstly, three groups of fringes with different periods are projected, and the periods meet T1 < T2 < T3. Similar to the traditional method, by calculating the phase difference with Equation (5), the phase differences ϕ12d and ϕ23d can be obtained. After this first phase difference calculation, the phase with the larger equivalent period is achieved. However, during the second phase difference calculation for ϕ123d, the positions with very small phase differences are sensitive to noise. The possible noise will make the phase fluctuate, which easily causes misjudgment and eventually leads to phase jumps. To investigate how to obtain robust phase difference results in this process, the lateral change of the two synthetic phases was observed after the first phase difference calculation, and it was found that the phase difference absolute value |ϕ12d − ϕ23d| is mainly divided into two parts, (1) and (2), as shown in Figure 3a.
In addition, the statistics of It can be observed from Figure 3a that the distribution of phase difference in (1) and (2) is discontinuous, and the value in the (1) region is smaller than that in the (2) region. There is a steep jump in the change of phase difference value. In addition, the statistics of phase differences are shown in Figure 3b. It also can be found that the difference is mainly divided into two parts, (3) and (4). Part (3) corresponds to the region (1), which has smaller values, and part (4) corresponds to the region (2), which has larger values. It can be clearly noticed that there is a gap between part (3) and part (4) from Figure 3b. Based on the above findings, the threshold can be set to carry out the correct phase difference operation in the second phase difference calculation process. This algorithm uses the Otsu thresholding segmentation algorithm to calculate the intermediate thresholds of part (3) and part (4). The Otsu method is an adaptive threshold segmentation algorithm, and it can quickly calculate the interclass data threshold by the maximum variance between two data classes after segmentation. Therefore, the threshold T h can be obtained by where OTSU(·) is the Otsu operation. By comparing the phase difference absolute value and the threshold value, the improved second phase difference calculation can be expressed as where ϕ id is the wrapped phase difference, f id is the equivalent frequency, and i = 12, 23, 123. The phase difference operation can obtain the phases of larger equivalent periods. When selecting an appropriate combination of periods, the period of the synthetic phase ϕ 123d will cover the whole measurement field, while Φ 123d = ϕ 123d is a continuous phase, which could help calculate the fringe orders. The phase sum operation in the proposed method is also introduced to obtain the synthetic phases with higher equivalent frequency. The phases ϕ 12s and ϕ 23s after the first phase sum calculation are obtained by where subscript s refers to the phase sum operation. After the first phase sum calculation, all the phase values ranging from 0 to 2π are obtained. If these results are used directly for the second phase sum calculation, the phase sum in higher frequencies cannot be obtained correctly. Therefore, the first phase sum results need to be shifted between −π and π by where ϕ is is the phase sum after the shift. Then, the ϕ 123s is the result of the second phase sum calculation, which is expressed as where f is is the equivalent frequency of phase sum and i = 12, 23, 123. The phase sum operation can obtain the phases of smaller equivalent periods. Afterward, the phase sum ϕ 123s is restored to a continuous phase using phase difference ϕ 123d . The phase calculation process is shown in Figure 4. where is f is the equivalent frequency of phase sum and 12, 23,123 i  . The phase sum operation can obtain the phases of smaller equivalent periods. Afterward, the phase sum 123s  is restored to a continuous phase using phase difference 123d  . The phase calculation process is shown in Figure 4. where i  and j  are the lower frequency and higher frequency continuous phase, j  is the higher frequency wrapped phase ( 123 ,12 ,1,12 is the fringe order. The sequence of the whole phase recovery is as follows: The phase is unwrapped from low to high equivalent frequencies step by step. 
As shown in Figure 4c, ϕ123d is a continuous phase (Φ123d = ϕ123d), while ϕ123s is a wrapped phase. The fringe order ks of ϕ123s and the final continuous phase Φ123s can be obtained in the same way as Equation (6), where Φi and Φj are the lower-frequency and higher-frequency continuous phases, ϕj is the higher-frequency wrapped phase (i = 123d, 12d, 1, 12s; j = 12d, 1, 12s, 123s), and k is the fringe order. The sequence of the whole phase recovery is as follows: the phase is unwrapped from low to high equivalent frequencies step by step.

Mathematical Derivation and Analysis

The sensitivity gain G (between Φ123d and Φ123s) of the three-frequency method can be calculated according to the two-frequency method [34][35][36]. Working out the expression for G, since T1, T2, and T3 are all greater than zero, G is always greater than one. The phase sum Φ123s is G = T123d/T123s times more sensitive than Φ123d. Next, the SNRs of the phase difference and the phase sum are discussed. In practice, the phases Φ1n(x, y), Φ2n(x, y), and Φ3n(x, y) are corrupted by additive white Gaussian noise n1(x, y), n2(x, y), and n3(x, y), where n1, n2, and n3 (omitting the coordinate notation (x, y) henceforth) are uncorrelated samples with a variance of σ², and h(x, y) (h(x, y) = Φ(x, y)T/2π) is the height contained in the phase. The noisy phase difference Φ123dn and phase sum Φ123sn are formed from these phases, and their SNRs are evaluated over (x, y) ∈ Ω, the two-dimensional region where the fringe data are well defined. Since n1, n2, and n3 are generated by the same Gaussian zero-mean stationary stochastic process, the average energies of n1 − n2, n1 + n2, n3 − n2, and n3 + n2 are equal [34][35][36][37]. Evaluating the SNR gain between Φ123sn and Φ123dn then shows that Φ123sn(x, y) has G² times higher SNR than Φ123dn(x, y). A phase sum operation can thus obtain a higher G and SNR. In addition, the noise-induced phase error is also Gaussian distributed and is assumed to have a variance of σΦ² according to the study of Ref. [29]. After the phase unwrapping is performed, the phase is extended from 2π to 2πft (ft is the total number of fringes in the fringe pattern). If the already expanded continuous phase is scaled down to the range [−π, π), the variance of the phase error is equivalently reduced by a factor of ft². Equation (19) shows that three factors can be used to reduce the impact of noise, but the most convenient way to reduce the phase error for an existing system is to increase ft. Compared with the DFH method, the TFH method can select fringes with higher frequency to calculate the phases because it performs the phase difference twice, and it can measure a larger field. This paper introduces the phase sum operation, with its higher sensitivity gain and SNR, into the TFH method. Combining the TFH method with the phase sum operation can obtain better measurement results, as supported by the above mathematical derivation and theoretical analysis.

Phase Calculation from Fringe Patterns with Noise Added

To verify the whole process and the anti-noise performance, random noise with an SNR of 29.8 dB is added to the simulated fringes. The simulated image size is 512 × 512 pixels, and the computer generates three groups of sinusoidal fringe patterns (T1 = 21, T2 = 23, T3 = 25).
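Using the periods just introduced for the simulation (T1 = 21, T2 = 23, T3 = 25), the equivalent periods and the sensitivity gain G = T123d/T123s discussed above can be checked with a few lines of arithmetic. This is an editorial sketch; the frequency-combination rules written in the comments are the standard heterodyne relations assumed here.

```python
T1, T2, T3 = 21.0, 23.0, 25.0
f1, f2, f3 = 1 / T1, 1 / T2, 1 / T3

f12d, f23d = f1 - f2, f2 - f3          # first phase differences
f123d = f12d - f23d                    # second phase difference
f12s, f23s = f1 + f2, f2 + f3          # first phase sums
f123s = f12s + f23s                    # second phase sum

T123d, T123s = 1 / abs(f123d), 1 / f123s
G = T123d / T123s
print(f"T_123d = {T123d:.1f} px  (large enough to cover the 512 px field)")
print(f"T_123s = {T123s:.2f} px, sensitivity gain G = {G:.1f}, SNR gain ~ G^2 = {G*G:.0f}")
```

For this combination T123d is roughly 1500 pixels and G is on the order of a few hundred, so the phase sum is far more sensitive than the second difference, at the cost of being heavily wrapped.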
The phase is calculated by the four-step PSP algorithm. The phase calculation process with noise is shown in Figure 5. In the first phase difference calculation, the statistics of the phase difference absolute value |ϕ12d − ϕ23d| with noise are calculated and shown in Figure 6. It can be observed that the phase difference values after adding noise still follow the distribution found above and can be divided into two parts. The threshold is Th = 3.1 using the Otsu method, and it is used for the second phase difference calculation. The comparison of the improved phase difference calculation results with the traditional method is shown in Figure 7. Figure 7a shows the results of the traditional phase difference calculation: at positions where the phase values are close, the calculated continuous phase has multiple jumps of 2π due to the effect of noise. Figure 7b demonstrates the result of the improved method proposed in this work, which effectively eliminates the phase jumps. To further study the improvement compared with other methods, one of the improved DFH methods [39], the traditional TFH method, and the proposed method are compared. To ensure that each method can finally obtain a continuous phase, the fringe periods of the improved DFH method are set as T1 = 31 and T2 = 32, and the fringe periods of the traditional TFH method and the proposed method are set as T1 = 21, T2 = 23, and T3 = 25. All methods are employed to measure a virtual plane, and the same level of random noise is added to the fringe patterns. The plane phase of each method is calculated and shown in Figure 8. Comparing these three methods shows that the improved DFH method and the traditional TFH method have some steep phase jumps, and the improved DFH method has more small phase fluctuations in the linear region. In general, the proposed method can effectively improve the accuracy of the measurement and reduce the phase jumps caused by noise.
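The bimodal behaviour of |ϕ12d − ϕ23d| and the Otsu threshold exploited above can be reproduced on synthetic data. The sketch below is an editorial illustration: the hand-rolled Otsu routine stands in for the OTSU(·) operation, the fringes are one-dimensional, and the wrap-into-[0, 2π) convention for the first differences and the small phase-noise level are assumptions; for this period combination the threshold should land in the gap between the two clusters, in the same range as the Th = 3.1 reported above.

```python
import numpy as np

def wrap(p):
    """Wrap phase values into (-pi, pi]."""
    return np.angle(np.exp(1j * p))

def otsu_threshold(values, nbins=256):
    """Plain Otsu thresholding of a 1-D sample (a stand-in for the OTSU(.) operation)."""
    hist, edges = np.histogram(values, bins=nbins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    w = hist / hist.sum()
    best_t, best_sep = centers[0], -1.0
    for i in range(1, nbins):
        w0, w1 = w[:i].sum(), w[i:].sum()
        if w0 == 0.0 or w1 == 0.0:
            continue
        m0 = (w[:i] * centers[:i]).sum() / w0
        m1 = (w[i:] * centers[i:]).sum() / w1
        sep = w0 * w1 * (m0 - m1) ** 2          # between-class variance
        if sep > best_sep:
            best_sep, best_t = sep, centers[i]
    return best_t

rng = np.random.default_rng(0)
x = np.arange(512)
T1, T2, T3 = 21, 23, 25                         # periods used in the simulation
phi = [wrap(2 * np.pi * x / T + rng.normal(0, 0.05, x.size)) for T in (T1, T2, T3)]

# First heterodyne differences, wrapped into [0, 2*pi).
phi12d = np.mod(phi[0] - phi[1], 2 * np.pi)
phi23d = np.mod(phi[1] - phi[2], 2 * np.pi)

# |phi12d - phi23d| splits into a "small" and a "large" cluster; Otsu separates them.
print("Otsu threshold:", otsu_threshold(np.abs(phi12d - phi23d)))
```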
Effect of Fringe Period Selection on Phase Calculation

Then, the selection of the fringe period is discussed and the effect of various combinations on the measurement accuracy is studied. It can be predicted that not all combinations are feasible: the combination must satisfy Equation (20), where m is the image resolution in the fringe coding direction. Since there are three periods that need to be determined step by step, we first fix T3 in the range of 20 to 80, then fix T2 in the range of 1 to T3 − 1, and finally fix T1 in the range of 1 to T2 − 1. Random noise is also added to the simulation, and the root mean square error (RMSE) of the different combinations is calculated after determining each period. When the maximum period T3 = 40, 50, 60, 70, we select the T1 with the smallest error after T2 is fixed to draw the error bar graph of Φ123s, which is labeled as (T2, T1). To avoid the random fluctuation caused by a single calculation as much as possible, the mean value and peak-valley value are obtained by repeating each combination 20 times. The image size is 1920 pixels × 1920 pixels. The error bar graph for the various period combinations is shown in Figure 9.

According to Figure 9, when the value of T3 is larger, there are more stable period combinations that meet Equation (20). When the values of T1, T2, and T3 are relatively close, the error value and fluctuation are smaller, while the error is larger where the period differences are large, and the fluctuation of the calculation results is also larger. Such period combinations are sensitive to noise in the measurement process and should therefore be avoided.

To investigate the reasons for the large errors and fluctuations generated by some combinations, the three combinations with the periods T1 = 15, T2 = 23, T3 = 50; T1 = 44, T2 = 47, T3 = 50; and T1 = 46, T2 = 48, T3 = 50 are evaluated without adding noise, to reflect the impact of the different combinations alone. The errors of the Φ123d and Φ123s corresponding to the three combinations are shown in Figure 10. According to Figure 10, the combinations with larger RMSE in Figure 9 have a wider range of fluctuations and errors. This leads to larger phase errors and reduces the measurement accuracy during the phase calculation. Therefore, selecting an appropriate fringe period combination is necessary before conducting the experiments.

To obtain the optimal period combination, the points with lower mean and peak-valley values are selected, as shown in Figure 11. It can be seen that the optimal combinations of periods show a linear trend. Therefore, they can be fitted by a linear function, Equation (21). Using the fitting function, the values of k1, k2, k3, and k4 in Equation (21) can be obtained. Before the fringe projection, the maximum period needs to be determined first for the measurement, and then the periods of the other two groups are obtained from the fitted function. The accuracy of the measurement results can be further improved through this process.
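The brute-force period sweep described above can be organized as in the following skeleton. This is an editorial sketch: Equation (20) is not reproduced here, so the feasibility test used below, namely that the second-difference equivalent period must cover the m-pixel field, is only an assumed reading of it, and the per-combination noisy RMSE evaluation is left as a comment.

```python
m = 1920                                    # image resolution in the fringe coding direction

def equivalent_period_diff(Ta, Tb):
    """Equivalent period of the heterodyne (difference) of two fringe periods."""
    d = abs(Tb - Ta)
    return float('inf') if d == 0 else Ta * Tb / d

def feasible(T1, T2, T3):
    """Assumed reading of Equation (20): T_123d must cover the m-pixel field."""
    T12d = equivalent_period_diff(T1, T2)
    T23d = equivalent_period_diff(T2, T3)
    return equivalent_period_diff(T12d, T23d) >= m

candidates = [(T1, T2, T3)
              for T3 in range(20, 81)
              for T2 in range(1, T3)
              for T1 in range(1, T2)
              if feasible(T1, T2, T3)]
print(len(candidates), "feasible combinations, e.g.", candidates[:3])

# For each candidate, the study above adds noise, repeats the phase retrieval
# 20 times, and records the mean and peak-valley RMSE of Phi_123s to rank (T2, T1).
```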
Experiment

A fringe projection measurement system is set up for the experiment, as shown in Figure 12. The system includes a CCD camera, a digital projector, a high-precision motorized linear translation stage, a checkerboard, and a computer. The CCD camera is the digital camera IMAVISION MER-231-41GM-P of the Mercury series from Daheng Imaging, with a resolution of 1920 × 1200. The motorized linear translation stage is a GCD-203300M from Daheng Optics, with an accuracy of 0.001 mm. The projector is an Epson CH-TW5600, with a resolution of 1920 × 1080. The size of the checkerboard square is 15 mm.

Under the premise of satisfying the period relationship of Equation (20), the maximum period is selected to be T3 = 60, so that the continuous phases can be obtained for fair and reliable comparison when other methods are calculated, which will be discussed later. According to the period optimization method, the other two groups of fringe periods are T2 = 58 and T1 = 56. Firstly, the smooth-surface continuous objects are measured, as shown in Figure 13.

Figure 13a shows the deformed fringe patterns, and Figure 13b illustrates the wrapped phases calculated by PSP. Figure 13c describes the phase difference maps. After two phase difference operations, the continuous phase covering the whole measurement field is obtained. Figure 13d shows the phase sum maps, and the equivalent frequency after two phase sum operations is much higher than that of all projected fringes. Figure 13e demonstrates the histogram graph of |ϕ12d − ϕ23d|, and it still conforms to the found distribution. The threshold obtained by the Otsu segmentation algorithm is 2.51.
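The Otsu threshold used to split the histogram of |ϕ12d − ϕ23d| can be computed as sketched below. This is a generic reimplementation for illustration (the paper's own implementation is not shown in the extracted text); it assumes the absolute phase-difference map is available as a NumPy array, and skimage.filters.threshold_otsu would give an equivalent result.

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Return the threshold maximizing the between-class variance (Otsu's method)
    for a 1-D array of non-negative values such as |phi_12d - phi_23d|."""
    hist, edges = np.histogram(values.ravel(), bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    weights = hist.astype(float) / hist.sum()

    best_t, best_var = centers[0], -1.0
    for i in range(1, bins):
        w0, w1 = weights[:i].sum(), weights[i:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (weights[:i] * centers[:i]).sum() / w0
        mu1 = (weights[i:] * centers[i:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, centers[i]
    return best_t

# Example with a synthetic bimodal map standing in for |phi_12d - phi_23d|
rng = np.random.default_rng(0)
diff_map = np.concatenate([rng.normal(0.5, 0.3, 50_000), rng.normal(5.0, 0.4, 50_000)])
print(otsu_threshold(np.abs(diff_map)))   # lies between the two modes
```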
Figure 13f illustrates the final continuous phase Φ123s, obtained by the introduced phase recovery sequence.

The calibration is performed using a translation stage and a checkerboard to present the results in the world coordinate system. The translation stage moves with a known height multiple times to calculate the phase-to-height parameter [40]. The calibrated height volume is 100 mm. Twenty-five checkerboard images are captured and calculated for camera internal, external, and distortion parameters by Zhang's camera calibration technique [41]. After calibration, the subsequent experimental results are converted to the world coordinate system.

To make the error comparison fairer and more reliable, a 24-step phase-shifting plus multi-frequency method [28] is used to obtain the ground truth in the experiment. The periods of the projected fringe are determined by Ti = m/2^(i−1) (i = 1, 2, 3, . . .), where the projector's resolution is m × n, and m and n are the horizontal and vertical resolutions of the projector, respectively. Then, the comparison of the proposed method, the traditional TFH method, and the improved DFH method is performed. To ensure that these three methods can obtain continuous phases, and make their period values close to proceed with a fairer comparison, the periods of DFH are selected as T2 = 68 and T1 = 66, while the combination of the proposed method and the traditional TFH method is selected as T3 = 60, T2 = 58, and T1 = 56. The target of this combination with a smaller period value is to avoid f_id > m (i = 12, 23) after the first phase difference calculation, where m is the horizontal resolution of the projector (m = 1920 in this research). Additionally, the periods of the multi-frequency method are determined by Ti = 1920/2^(i−1) (i = 1, 2, 3, 4, 5, 6), and the continuous phase is solved with T6 = 60 as the ground truth. In this case, each method can satisfy its own period relationship.
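A small sketch of the ground-truth pipeline under the stated rule Ti = m/2^(i−1) is given below. The period generator follows the formula in the text; the coarse-to-fine unwrapping function is the classic temporal scheme commonly paired with such a period sequence and is shown here only for illustration, not as the authors' exact implementation.

```python
import numpy as np

def multifrequency_periods(m=1920, levels=6):
    """Ti = m / 2**(i-1): 1920, 960, 480, 240, 120, 60 for m = 1920, levels = 6."""
    return [m // 2 ** (i - 1) for i in range(1, levels + 1)]

def hierarchical_unwrap(wrapped, periods):
    """Coarse-to-fine temporal unwrapping used to build the ground truth.
    `wrapped[i]` is the wrapped phase map of period `periods[i]`; the coarsest
    fringe (period = coding width) is continuous by construction."""
    cont = wrapped[0]
    for prev_t, t, w in zip(periods[:-1], periods[1:], wrapped[1:]):
        order = np.round((cont * prev_t / t - w) / (2 * np.pi))
        cont = w + 2 * np.pi * order
    return cont   # continuous phase of the finest period (T6 = 60 here)

periods = multifrequency_periods()
assert periods[-1] == 60   # matches the finest ground-truth period in the text
print(periods)
```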
A ceramic standard gauge block with a height of 20 mm was measured. A linear area was selected on its upper surface to compare these three methods, as shown in Figure 14. As shown in Figure 14, the result of the proposed method is closer to the actual height, and the RMSE is smaller. A part of a gourd and a statue were measured to compare the three methods further. The comparison between these methods was performed, as shown in Figure 15.

Analyzing the results in Figure 15, the proposed method has fewer jump points than the other two methods. To show more clearly the differences between several methods, their Euclidean distance maps were calculated from the ground truth, as shown in Figure 16, where ED = √((x − x′)² + (y − y′)² + (z − z′)²) and (x, y, z) and (x′, y′, z′) are the coordinates of two space points, respectively. Figure 16 shows that the overall RMSE of the proposed method is minimal, and there are fewer jump points. The above comparison proves that the proposed method can further improve measurement accuracy and reduce jump errors.

The optimization method of the fringe period is verified. According to the above conclusion, the optimal fringe periods are T1 = 56, T2 = 58, T3 = 60, and the results are compared with several other periods' results, as shown in Figure 17. The optimal combination has much fewer jumps and is almost the same as the ground truth. The error of other combinations is more prominent, especially combination 2. To more clearly observe the difference between the results of several combinations and the ground truth, the ED maps were calculated as shown in Figure 18.

As shown in Figure 18, the ED map and RMSE of the optimal combination are the smallest, while the error of combination 2 is the largest, and the error level of combinations 3, 4, and 5 is between them. Figure 18 shows that the optimal combination has the least phase jumps and a higher measurement accuracy. This experiment proves that the optimal period selection criterion can obtain better measurement results.
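The Euclidean-distance maps and the overall RMSE values reported for Figures 16 and 18 can be reproduced with a few lines of NumPy; the array names below are placeholders for the reconstructed and ground-truth point clouds, and the synthetic data only demonstrate the calculation.

```python
import numpy as np

def euclidean_distance_map(xyz, xyz_ref):
    """Per-pixel Euclidean distance between a reconstructed point cloud and the
    ground truth; both arrays have shape (H, W, 3) holding (x, y, z) in mm."""
    return np.sqrt(np.sum((xyz - xyz_ref) ** 2, axis=-1))

def overall_rmse(xyz, xyz_ref, mask=None):
    """RMSE of the ED map, optionally restricted to valid (measured) pixels."""
    ed = euclidean_distance_map(xyz, xyz_ref)
    if mask is not None:
        ed = ed[mask]
    return np.sqrt(np.mean(ed ** 2))

# Example with random placeholder data standing in for a real measurement
rng = np.random.default_rng(1)
truth = rng.uniform(0, 100, (1200, 1920, 3))
measured = truth + rng.normal(0, 0.05, truth.shape)
print(overall_rmse(measured, truth))   # about 0.087 mm for this noise level
```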
The above experiments can show that the proposed method is feasible and has a high anti-noise ability. By comparing the results obtained by several methods, it can be seen that the proposed method has better measurement accuracy, also with the optimal fringe period combination. Conclusions An improved TFH synthesis phase unwrapping algorithm is proposed in this research. The phase sum operation is introduced into the TFH algorithm, and the phase sum and phase difference operations are improved. In the process of phase difference calculation, according to the found distribution of phase difference value, the Otsu method is used to calculate the threshold to help phase judgment and reduce the effect of noise. In the process of phase sum operation, the phase with a much higher frequency than those of projected fringes can be realized. The continuous phase difference is used to assist in phase sum unwrapping. The improved phase sum and phase difference operation jointly achieve the effect of accuracy improvement and error reduction. The fringe period selection method is deduced based on the distribution of error values for different period combinations, and better results can be obtained using the optimal combination. Simulation and experimental results show the feasibility and anti-noise ability of the proposed method, which can improve the accuracy of measurement results to a certain extent.
2022-12-04T16:53:54.302Z
2022-12-01T00:00:00.000
{ "year": 2022, "sha1": "c69213c1555db61f3ead9addf76a2a90e193a3de", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1424-8220/22/23/9388/pdf?version=1669901397", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "4528629e129e19362313e50dcf91c42c8119d7d1", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Computer Science", "Medicine" ] }
252496645
pes2o/s2orc
v3-fos-license
Clinical characteristics and outcomes of a patient population with atypical hemolytic uremic syndrome and malignant hypertension: analysis from the Global aHUS registry Introduction Atypical hemolytic uremic syndrome (aHUS) is a rare form of thrombotic microangiopathy (TMA) often caused by alternative complement dysregulation. Patients with aHUS can present with malignant hypertension (MHT), which may also cause TMA. Methods This analysis of the Global aHUS Registry (NCT01522183) assessed demographics and clinical characteristics in eculizumab-treated and not-treated patients with aHUS, with (n = 71) and without (n = 1026) malignant hypertension, to further elucidate the potential relationship between aHUS and malignant hypertension. Results While demographics were similar, patients with aHUS + malignant hypertension had an increased need for renal replacement therapy, including kidney transplantation (47% vs 32%), and more pathogenic variants/anti-complement factor H antibodies (56% vs 37%) than those without malignant hypertension. Not-treated patients with malignant hypertension had the highest incidence of variants/antibodies (65%) and a greater need for kidney transplantation than treated patients with malignant hypertension (65% vs none). In a multivariate analysis, the risk of end-stage kidney disease or death was similar between not-treated patients irrespective of malignant hypertension and was significantly reduced in treated vs not-treated patients with aHUS + malignant hypertension (adjusted HR (95% CI), 0.11 [0.01–0.87], P = 0.036). Conclusions These results confirm the high severity and poor prognosis of untreated aHUS and suggest that eculizumab is effective in patients with aHUS ± malignant hypertension. Furthermore, these data highlight the importance of accurate, timely diagnosis and treatment in these populations and support consideration of aHUS in patients with malignant hypertension and TMA. Trial registration details Atypical Hemolytic-Uremic Syndrome (aHUS) Registry. Registry number: NCT01522183 (first listed 31st January, 2012; start date 30th April, 2012). Graphical abstract Supplementary Information The online version contains supplementary material available at 10.1007/s40620-022-01465-z. Introduction Atypical hemolytic uremic syndrome (aHUS) is a rare form of thrombotic microangiopathy (TMA) typically caused by alternative complement pathway dysregulation, that is often classified as a complement-mediated TMA (CM-TMA) [1][2][3][4]. aHUS is characterized by thrombocytopenia, microangiopathic hemolytic anemia, and acute kidney injury and can also present as progressive kidney damage, or as extrarenal manifestations resulting in damage to other organs [5][6][7]. Another condition that can result in TMA is malignant hypertension (MHT), a severe form of arterial hypertension traditionally diagnosed by high blood pressure (diastolic pressure > 120 mmHg) with papilledema/hypertensive retinopathy [8][9][10][11][12]. More recent experience has emphasized the role of multi-organ involvement/damage in the diagnosis and prognosis of MHT, and MHT with multi-organ involvement has also been referred to as hypertensive emergency [10,13]. The kidneys are frequently affected in patients with MHT, and patients often present with elevated serum creatinine, proteinuria, hemolysis, low platelet count, and kidney failure, all of which are also key markers of TMA [10,14]. 
Further, complement dysregulation has also been implicated in patients with hypertension-associated TMA, with one study finding that 87.5% of patient serum samples induced formation of abnormal C5b-9 on microvascular endothelial cells in vitro. This has previously been proposed as a highly specific assessment of complement dysregulation/ activation in patients with aHUS [15]. Previous studies have suggested that aHUS and MHT are common comorbid conditions, although their precise relationship has often been unclear [16][17][18][19]. Recent evidence suggests that while MHT is highly prevalent in patients with aHUS, among all cases of MHT, aHUS remains a marginal cause. There is also evidence of direct associations between MHT and development of TMA [8,9,13,20]. The interplay/overlap between these conditions means that establishing causality is often extremely difficult. Despite the difficulties associated with differentiating between MHT and aHUS, establishing a clear and correct diagnosis is extremely important as the underlying mechanisms and treatment choices differ significantly. The current standard of care in patients diagnosed with aHUS is complement C5 inhibitor therapy, while patients presenting with MHT will typically be treated with blood pressure lowering medications [21,22]. Due to the substantially different pathophysiological mechanisms underlying these conditions, delays in diagnosis and sub-optimal treatment regimens can have considerable, negative effects on patient outcomes. Finally, it is presently unknown whether the complement C5 inhibitor eculizumab is effective in treating patients with aHUS and MHT. Using data from the Global aHUS Registry, the largest registry of real-world data relating to patients with aHUS, this analysis characterized pediatric and adult patients with aHUS, both with and without MHT, who were either treated or not treated with eculizumab. This study explored the baseline characteristics of these patient groups and assessed the risk of reaching the composite endpoint of end-stage kidney disease (ESKD) or death. Clinical characteristics and outcomes are also presented by adult and pediatric designation. Methods This retrospective analysis utilized data from the Global aHUS Registry (NCT01522183), an observational, noninterventional, multicenter registry that retrospectively and prospectively collects demographic information, natural history data, and treatment outcomes of patients with aHUS. The registry methodology and initial patient characteristics have previously been reported [23]. This analysis included patients enrolled into the registry from April 2012 until 26 October, 2020 [23]. Patients were included if they were enrolled in the registry and were followed up for ≥ 90 days after initial aHUS presentation or diagnosis date. aHUS was diagnosed locally, with no central registry definition of aHUS used. Patients in the MHT cohort were also required to have a recorded diagnosis of MHT, as defined by the local registry investigator/treating physician, and applied criteria usually included diastolic blood pressure > 120 mmHg, alongside papilledema, retinopathy and/or exudates. No definition of severe hypertension was available within the registry. Patients were excluded from this analysis if they withdrew consent from the registry or discontinued eculizumab due to a revised diagnosis of any condition other than aHUS. To assess the effects of eculizumab on outcomes, patients were defined as either treated or not-treated. 
Patients not treated with eculizumab included any patients who were never treated with eculizumab, or who received eculizumab after reaching ESKD (defined as kidney transplantation or chronic maintenance dialysis), or who received eculizumab up to and including one month prior to kidney transplantation. No minimum duration of eculizumab administration was required for inclusion in the treated group. Patient disposition for this analysis is presented in Fig. 1.

The following variables were extracted for analysis: age at aHUS diagnosis, sex, time to eculizumab initiation, family history of aHUS, timing of MHT diagnosis (related to the time of initial aHUS presentation), new extra-renal manifestations of aHUS not present at initial diagnosis (number and organ system), pathogenic genetic variant status and presence of autoantibodies to complement factor H, triggering conditions other than MHT, kidney transplant status, and baseline serum creatinine, platelet counts and lactate dehydrogenase levels. Baseline was defined as the closest value to aHUS onset in either direction. The primary outcome of interest was the composite endpoint of time to ESKD or death. The variables and primary endpoint were stratified by treatment status, MHT status, and age group (pediatric [< 18 years] vs adult).

Statistical analysis

Continuous data were summarized as median (min, max), while categorical data were summarized as number of patients (%). Laboratory parameters were presented using both number of patients with available data (%) and median (min, max) for values. No formal statistical comparisons were performed on baseline characteristics data. Kaplan-Meier survival plots were generated for the composite endpoint, and hazard ratios (HRs) were calculated using Cox regression analysis. Both unadjusted and adjusted HRs and 95% confidence intervals are reported. HRs for the comparison of treated vs not-treated patients with aHUS and MHT were adjusted for plasma exchange/plasma infusion at the time of initial TMA, dialysis at the time of initial TMA, and the presence of any pathogenic genetic variants or anti-CFH antibodies. HRs for the comparison of not-treated patients with aHUS with vs without MHT were adjusted for age at initial onset of aHUS, sex, and the presence of any pathogenic genetic variants or anti-CFH antibodies. For assessment of the composite endpoint, propensity matching by age at initial onset of aHUS, sex, and presence of pathogenic genetic variants was performed. Additionally, only those patients with recorded genetic testing results had their genetic data included in the analysis. Any missing data were excluded from this analysis.
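A hedged sketch of the survival analysis described above is given below, using the Python lifelines package rather than whatever software the registry analysis actually used; the data frame, file name, and column names are hypothetical stand-ins for the registry export, and the covariates mirror the adjustment set listed for the treated vs not-treated comparison.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

# Hypothetical analysis data set: one row per patient, with follow-up time,
# an event flag (1 = ESKD or death), and the covariates named in the text.
df = pd.read_csv("ahus_mht_cohort.csv")   # placeholder file name

# Kaplan-Meier curves for treated vs not-treated patients with aHUS + MHT
kmf = KaplanMeierFitter()
for label, grp in df[df["mht"] == 1].groupby("eculizumab_treated"):
    kmf.fit(grp["time_to_eskd_or_death"], grp["event"], label=f"treated={label}")
    print(label, kmf.median_survival_time_)

# Adjusted Cox model for treated vs not-treated (aHUS + MHT), adjusting for
# PE/PI at initial TMA, dialysis at initial TMA, and variant/anti-CFH status
cph = CoxPHFitter()
cph.fit(
    df[df["mht"] == 1][[
        "time_to_eskd_or_death", "event", "eculizumab_treated",
        "pe_pi_at_initial_tma", "dialysis_at_initial_tma", "variant_or_anti_cfh",
    ]],
    duration_col="time_to_eskd_or_death",
    event_col="event",
)
cph.print_summary()   # adjusted HR and 95% CI for eculizumab_treated
```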
Patient disposition

Patient disposition is presented in Fig. 1. At the time of this analysis, a total of 1903 patients were enrolled in the Global aHUS Registry. Following application of the inclusion and exclusion criteria, 1797 of the 1903 patients were eligible for this study. A further 695 patients were excluded due to unknown MHT status and five due to unknown eculizumab treatment status (1 with MHT, 4 without MHT). This analysis therefore included 1097 patients; 71 presenting with both aHUS and MHT (20 treated and 51 not treated with eculizumab) and 1026 presenting with aHUS without MHT (429 treated and 597 not treated with eculizumab). Overall, 20 (28%) patients with aHUS and MHT were treated with eculizumab, compared to 429 (42%) patients without MHT. Of the 72 patients with aHUS and MHT, 23 (32%) had a recorded onset of aHUS prior to 2011, while of the 1030 patients with aHUS without MHT, 323 (31%) had a recorded onset of aHUS prior to 2011. Eculizumab was granted marketing authorization in 2011.

Patient demographics

Key patient demographics are presented in Table 1 and patient demographics stratified by age group are presented in Supplementary Table S1. Age at aHUS diagnosis, sex, and family history of aHUS were all similar between patients with aHUS both with and without MHT, irrespective of treatment status. Patients with aHUS and MHT had a slight numerical increase in the percentage of new extrarenal manifestations of aHUS across all organ systems. Genetic screening for at least one pathogenic complement variant was conducted in 61 (86%) patients with aHUS and MHT, and in 742 (72%) patients with aHUS without MHT. Of these, 34 (48%) with aHUS and MHT had their results recorded in the registry, compared to 300 (29%) patients without MHT. Testing for anti-CFH antibodies was performed in 11 (16%) patients with aHUS and MHT and in 91 (9%) patients with aHUS without MHT. Among patients whose genetic screening results were entered in the registry database, those with aHUS and MHT had a higher proportion of pathogenic genetic variants or anti-CFH antibodies compared to aHUS patients without MHT (40 [56%] vs 382 [37%]). Further, patients with aHUS and MHT who were not treated with eculizumab were found to have a much higher proportion of pathogenic genetic variants or anti-CFH antibodies (33 [65%]) than those with aHUS and MHT who were treated with eculizumab (7 [35%]).

Aside from patients with aHUS and MHT treated with eculizumab, who reported no triggering conditions other than MHT, similar, small proportions of patients reported triggering conditions other than MHT in all other patient cohorts (Table 1).

Time to ESKD or death

Kaplan-Meier plots and HRs for the combined endpoint of ESKD or death are presented in Fig. 2, and full HR analyses are available in Supplementary Table S2. Kaplan-Meier plots for the combined endpoint of ESKD or death in patients with aHUS and MHT, stratified by age group (adult or pediatric), are presented in Fig. 3.
Figure 3a presents a comparison of adult and pediatric patients with aHUS and MHT who were treated with eculizumab, while Fig. 3b presents a comparison of adult and pediatric patients with aHUS and MHT who were not treated with eculizumab. Adult patients were at greater risk of ESKD or death than pediatric patients, and not-treated patients had worse outcomes than treated patients in both age groups. Discussion This study presents data from the largest comparison of patients with aHUS with and without comorbid MHT to date. In the study population, MHT was reported as occurring at the same time as aHUS symptoms in ~ 2/3 of patients presenting with comorbid aHUS and MHT, irrespective of treatment status, and more patients with aHUS and MHT possessed pathogenic genetic variants or anti-CFH antibodies than patients with aHUS alone (40 [56%] vs 382 [37%]). Further, a much higher proportion of non-treated patients with aHUS and MHT had pathogenic genetic variants or anti-CFH antibodies (33 [65%]) compared to their treated counterparts (7 [35%]). Considering these data, and that these patients were diagnosed with aHUS, it is perhaps surprising that 51 (72%) patients with aHUS and MHT were not treated with eculizumab. However, this may partially be explained by 23 (32%) patients with aHUS and MHT and 323 (31%) patients with aHUS without MHT having a recorded onset of aHUS prior to eculizumab obtaining marketing authorization in 2011. Other possible explanations include 20 (61%) of the 33 not-treated patients with aHUS and MHT who required a kidney transplant reaching ESKD (a criterion for designating patients as not-treated in this study) prior to eculizumab availability, and some patients may also have been treated with eculizumab post-ESKD (another criterion for not-treated designation in this study). Furthermore, while eculizumab treatment status itself is not directly related to the prevalence of pathogenic genetic variants or anti-CFH antibodies, the results suggest that many of the patients listed as not-treated may either have reached ESKD before eculizumab became available or were not initially identified as patients with aHUS prior to ESKD. Indeed, diagnosis of aHUS may often occur late in the disease course, following TMA recurrence, a requirement for long-term dialysis, or kidney transplantation [13]. It is important to note, however, that this study only reports genetic analyses in patients who were screened and had a result reported in the registry; some patients were recorded as having been screened but no subsequent results were reported. When the combined outcome of time to ESKD or death was assessed, both uni-and multi-variable analyses showed that significantly fewer patients with aHUS and MHT who were treated with eculizumab reached the composite endpoint, compared to not-treated patients. Further, the multivariable analyses also highlighted that patients who presented with pathogenic genetic variants and/or anti-CFH antibodies, and patients who were adults at the time of aHUS onset, were generally at a higher risk of ESKD or death. However, many other clinical features were similar between these patient groups. As anticipated, patients from both age groups who were not-treated had worse outcomes than their treated counterparts. 
These results, combined with higher proportions of pathogenic genetic variants and kidney transplants in patients with aHUS and MHT (particularly those not treated with eculizumab), reiterate the importance of establishing an early and accurate diagnosis, as treating the correct patients with C5 inhibitors has been shown to substantially reduce morbidity and mortality [24][25][26][27]. However, in this analysis, fewer patients with aHUS and MHT were treated with eculizumab than patients without MHT, despite the potentially counter-intuitive increased incidence of complement gene variants in this patient population. This raises the possibility that clinicians may be continuing to regard TMA as secondary to MHT and proceed with MHT-specific treatment regimens, without considering this as a potential presentation/manifestation of aHUS/CM-TMA [9,13,16,28]. One patient who presented with aHUS and MHT and was treated with eculizumab progressed to ESKD.

In their review, Fakhouri and Frémeaux-Bacchi stated that while aHUS remains, globally, a rare cause of MHT, MHT frequently complicates the aHUS disease course, adding that genetic screening may not be suitable for diagnosis of aHUS as not all patients carry complement gene variants [13]. However, they commented that TMA rarely complicates the course of MHT (5-15% of cases), with the low prevalence limiting assessments of complement gene variants in patients with comorbid severe hypertension and TMA [13]. Our data are therefore important as all patients in the current analysis were diagnosed with both MHT and aHUS/TMA and were seen to have a greater prevalence of pathogenic genetic variants or anti-CFH antibodies. These results suggest that clinicians should explicitly consider genetic screening in this specific patient population. Also, while our results agree with Fakhouri and Frémeaux-Bacchi that patients with aHUS can often present with MHT [13], they further suggest that a differential diagnosis of aHUS/CM-TMA should be considered in patients presenting with both MHT and TMA. This is particularly important as, in our analysis, patients with aHUS responded well to eculizumab in the presence of MHT, making an early and correct diagnosis integral to improving patient outcomes [24][25][26][27][29]. This agrees with the paper by Karoui et al., who found that the 5-year renal survival rate was substantially lower in patients with aHUS with identified complement variants and/or hypertensive emergency than their counterparts without these complicating factors [30].

There are several potential limitations to this study, mainly in relation to the nature of registry-derived data, as previously described [23], leading to missing/incomplete data, particularly around the recording of dates, genetic screening results, blood pressure measurements, and concomitant medication. Specifically relating to blood pressure, these data were not necessarily recorded at the time of MHT and had large variances, making conclusions difficult. Furthermore, the Global aHUS Registry only collects data on patients with a local clinical diagnosis of aHUS (not a centrally defined diagnosis), which may potentially limit the generalizability of these findings to proven CM-TMA populations. While the lack of a central definition of MHT may be a potential limitation of this study, the general clinical characteristics of MHT used for diagnosis are easily assessable and well defined.
This analysis of patients with aHUS and MHT using data from the Global aHUS Registry shows a higher prevalence of pathogenic complement variants or anti-CFH antibodies, alongside a high proportion of kidney transplantation, in patients with aHUS and MHT (particularly in not-treated patients) indicating a potential lack of early/ correct diagnosis and high severity of disease in these patients when left untreated. Indeed, patients who were positive for pathogenic variants or anti-CFH antibodies were at greater risk of ESKD or death than patients without them. However, in not-treated patients with aHUS, the concurrent presence of MHT did not appear to significantly impact the risk of reaching ESKD or death, compared to not-treated patients without MHT. Moreover, MHT did not appear to affect the effectiveness of eculizumab, or baseline demographics and characteristics, compared to patients without MHT, although no formal statistical assessment of this comparison was conducted. This study also demonstrates that while clinical characteristics in patients with aHUS and MHT are similar in both pediatric and adult patients, with comparable demographics and baseline clinical measures, patients who were adults at the time of aHUS onset were at greater risk of ESKD or death than patients who were below 18 years of age at the time of aHUS onset. Lastly, the significant difference in the composite endpoint of ESKD or death between patients who were treated with complement C5 inhibition and those who were not-treated highlights the importance of an early and accurate diagnosis in these patients, to allow for the correct use of these therapeutics. Alongside a reiteration of the importance of complement C5 inhibitor therapy in patients with aHUS, the results of this study provide evidence that, in patients presenting with MHT and comorbid TMA, complement genetic screening and consideration of a differential diagnosis of aHUS are warranted to allow for prompt and correct treatment decisions. (Austria), Michal Malina (Czech Republic), Leena Martola (Finland), Annick Massart (Belgium), Eric Rondeau (France), and Lisa Sartz (Sweden). The authors would like to acknowledge Alexander T. Hardy, PhD, of Bioscript, Macclesfield UK for providing medical writing support with funding from Alexion Pharmaceuticals, Inc. and Radha Narayan, PhD, Alexion, AstraZeneca Rare Disease for critical review of the manuscript. Author contribution Authors contributed to the conception and/or design of the study, participated in the acquisition, analysis and/or interpretation of data, and in the writing, review and/or revision of the manuscript. All authors read and approved the final manuscript. Funding Open Access funding provided by Alexion, AstraZeneca Rare Disease, Boston, MA. This analysis was funded by Alexion, Astra-Zeneca Rare Disease, Boston, MA. Alexion, AstraZeneca Rare Disease, Boston, MA. was responsible for the collection, management, and analysis of information contained in the Global aHUS Registry. Alexion, AstraZeneca Rare Disease, Boston, MA contributed to data interpretation, preparation, review, and approval of the manuscript for submission. All authors had full access to all the data in the study and had final responsibility for the decision to submit for publication. 
Data statement

Alexion will consider requests for disclosure of clinical study participant-level data provided that participant privacy is assured through methods like data de-identification, pseudonymization, or anonymization (as required by applicable law), and if such disclosure was included in the relevant study informed consent form or similar documentation. Qualified academic investigators may request participant-level clinical data and supporting documents (statistical analysis plan and protocol) pertaining to Alexion-sponsored studies. Further details regarding data availability and instructions for requesting information are available in the Alexion Clinical Trials Disclosure and Transparency Policy at https://alexion.com/our-research/researchand-development. Link to Data Request Form: https://alexion.com/contact-alexion/medical-information.

Ethical statement

This was a multicenter study comprising many different sites of enrollment. Federal, provincial, and local regulations and International Conference on Harmonization guidelines, if relevant, required that approval was obtained from an Ethics Committee (EC)/IRB prior to participation of patients in research studies. Where required and prior to the study onset, the EC/IRB must have approved the protocol, informed consent, advertisements to be used for patient recruitment, and any other written information regarding this study to be provided to the patient or the patient's parents/legal guardian. The sites maintained and made available for review by the sponsor or its designee documentation of all EC/IRB approvals and of the EC/IRB compliance with International Conference on Harmonization Guidance E6: Good Clinical Practice, if relevant. All EC/IRB approvals were signed by the EC/IRB chairman or designee and identified the EC/IRB name and address, the clinical protocol by title and/or protocol number, and the date approval and/or favorable opinion was granted. The investigator conducted all aspects of this study in accordance with all national, provincial, and local laws of the pertinent regulatory authorities.

Conflict of interest

Informed consent

A written informed consent was obtained from each patient prior to participation in the study. The sponsor or its designee could provide an informed consent template to the sites, if required. If the site made any institution-specific modifications, the sponsor or its designee could review the consent prior to IRB/EC submission. The investigator or the sponsor would then submit the approved, revised consent to the appropriate IRB/EC for review and approval prior to the start of the study. If the consent form was revised during the course of the study, all active participating patients to whom the revision may have had an impact must have signed the revised form. Before recruitment and enrollment, each patient was given a full explanation of the study and was allowed time to read the approved informed consent form. Once the investigator was assured that the individual understood the implications of participating in the study, the patient was asked to give consent to participate in the study by signing the informed consent form. The investigator provided a copy of the signed informed consent to the patient. The original form was maintained in the study files at the site.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
2022-09-25T06:18:04.408Z
2022-09-24T00:00:00.000
{ "year": 2022, "sha1": "082865ef478d21cebce810485bc6c216c3cd6708", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s40620-022-01465-z.pdf", "oa_status": "HYBRID", "pdf_src": "Springer", "pdf_hash": "94686957d95107a055d543c694a4e4d9bb86d151", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
267277440
pes2o/s2orc
v3-fos-license
Genome-Wide Association Studies of Embryogenic Callus Induction Rate in Peanut (Arachis hypogaea L.) The capability of embryogenic callus induction is a prerequisite for in vitro plant regeneration. However, embryogenic callus induction is strongly genotype-dependent, thus hindering the development of in vitro plant genetic engineering technology. In this study, to examine the genetic variation in embryogenic callus induction rate (CIR) in peanut (Arachis hypogaea L.) at the seventh, eighth, and ninth subcultures (T7, T8, and T9, respectively), we performed genome-wide association studies (GWAS) for CIR in a population of 353 peanut accessions. The coefficient of variation of CIR among the genotypes was high in the T7, T8, and T9 subcultures (33.06%, 34.18%, and 35.54%, respectively), and the average CIR ranged from 1.58 to 1.66. A total of 53 significant single-nucleotide polymorphisms (SNPs) were detected (based on the threshold value −log10(p) = 4.5). Among these SNPs, SNPB03-83801701 showed high phenotypic variance and neared a gene that encodes a peroxisomal ABC transporter 1. SNPA05-94095749, representing a nonsynonymous mutation, was located in the Arahy.MIX90M locus (encoding an auxin response factor 19 protein) at T8, which was associated with callus formation. These results provide guidance for future elucidation of the regulatory mechanism of embryogenic callus induction in peanut. Introduction Cultivated peanut (Arachis hypogaea L.) is among the most economically important oil, food, and feed crops worldwide [1].The seeds are rich in protein and oleic acid, which can regulate human physiological functions and promote growth and development.The seeds have a high nutritional value and are readily absorbed and utilized [2].In recent years, transgenic technology [3,4] has been increasingly widely applied in crop genetics and breeding, which has not only reduced the impact of diseases and pests on peanut yield and quality, but also overcome the problem of resistance in traditional breeding.Tissue culture is a basic technology essential for transgenic breeding and verification of gene function.However, in addition to the strong genotype dependence of the in vitro culture of peanut, the plant regeneration rate is low, which greatly limits biotechnology-based breeding.Therefore, there is an urgent need to elucidate the genetic mechanism controlling embryogenic callus induction in peanut to enhance the efficiency of genetic improvement and breeding. Tissue culture response refers to the process by which any organ, tissue, or cell of a plant develops into a complete plant in a specific environment.In 1958, Steward et al. 
[5] proposed that plants produced disorganized cell clusters in response to adversity stress, which had the ability to develop into complete plants.Embryogenic callus induction is a crucial step in tissue culture and the initial process of somatic cells regaining totipotency, which is caused by changes in endogenous plant hormone concentrations [6,7].ARF7 and ARF19 regulate the callus formation of Arabidopsis thaliana by activating the promoter and expression of LBD16, LBD17, LBD18, and LBD29 [8] in the absence of exogenous hormones.Inhibiting the expression of these genes inhibits callus formation.Interestingly, root explants of the Arabidopsis aux1 mutant do not form callus on the standard medium, but callus formation is initiated with an increase in the auxin concentration in the medium.Moreover, the induction and differentiation of embryogenic callus show similar histological characteristics to those of the root meristem [9,10].WIND1 can promote callus formation and shoot regeneration by upregulating the expression of ESR1 [11].ESR1 is a differentially expressed gene, which is encoded by an APETALA 2/Ethylene Response Factor (AP2/ERF) transcription factor in Arabidopsis, and high abundance of the WIND1 protein can induce cell dedifferentiation [12,13]. The formation of plant embryogenic callus is controlled by heredity, which can be qualitative [14] or quantitative [15][16][17].Therefore, the elucidation of the mechanism of callus formation is of considerable importance for the optimization of plant in vitro culture systems.The GWAS approach can help determine the number of loci controlling the genetic variation of complex traits and predict candidate genes [18].Quantitative trait loci (QTLs) controlling callus differentiation have been identified in many crops, such as rice [19], Populus euphratica [20], soybean [21], maize [22], and wheat [23].In recent decades, with the development of diverse molecular markers, more accurate high-throughput sequencing technology has been applied to peanut genetics and breeding [24,25].In the present study, we compared the genotype and callus induction rate (CIR) of peanut varieties, constructing a high-density SNP-array genetic map based on the whole-genome resequencing of 353 peanut accessions, with the aim of identifying germplasm resources with a high CIR to expand the efficiency of the genetic transformation of peanut.The results are an important foundation for improvement in the genetic transformation and germplasm conservation of peanut. Plant Material A panel of 353 peanut accessions from 26 countries was used in this study, which represented five botanical varieties and two irregular types.The irregular morphologies were formed by the hybridization of different botanical varieties, which were classified as irregular types [26].These accessions comprise 44 irregular fastigiata, 100 irregular hypogaea, 26 subsp.fastigiata var.fastigiata, 2 subsp.fastigiata var.peruviana, 84 subsp.fastigiata var.vulgaris, 12 subsp.hypogaea var.hirsuta, and 85 subsp.hypogaea var.hypogaea accessions (Table S1).The CIR of each accession was observed and recorded; however, only data for 335 accessions were obtained and analyzed, as some accessions were excluded due to contamination. 
Embryogenic Callus Induction

Mature pods of the 353 accessions were artificially shelled, and then clean and mature seeds without plaque were selected. The seeds were disinfected with 75% ethanol for 30 s, followed by 1% (w/v) NaClO solution for 8 min, and finally rinsed five times with sterile water. Next, the testa was peeled off with sterile tweezers after the seeds were removed from the water; then, the embryonic axes were excised and inoculated on a SEM medium composed of MS salts, B5 vitamins, 0.088 M sucrose, 12.4 µM picloram, and 0.8% agar (the pH was adjusted to 5.8) [27]. The embryos were cultured on the medium and were subcultured at 4-week intervals. Each subculture of each accession comprised five replicates, and the experiment was repeated three times.

Phenotypic Identification and Statistics

After the sixth subculture, the physiological status of the callus began to stabilize. The number of calli was recorded in each of the seventh, eighth, and ninth subcultures (T7, T8, and T9, respectively). The CIR was calculated from the final callus number (N) and the number of inoculations (N0), following [28]. The CIR value was used for GWAS analysis (Table S1). All phenotypic data were analyzed using GraphPad Prism 7 and IBM SPSS Statistics 25, including correlation analysis, analysis of variance (ANOVA), and frequency distribution analysis. All phenotypic evaluations were conducted by the same person to maintain consistency between tissue-culture experiments and phenotypic evaluations, and to minimize artificial errors caused by different operational behaviors.

Genome-Wide Association Study

DNA extraction, library preparation, and resequencing were performed as described in detail in a previous study [26]. Whole-genome resequencing of the 353 peanut accessions was conducted using the Illumina HiSeq platform (San Diego, CA, USA). A total of 864,179 single-nucleotide polymorphisms (SNPs) and 71,052 insertions/deletions (InDels) were obtained after quality control. The mixed linear model (MLM) GWAS method [29] was implemented in the R package GAPIT (v 3.0) based on the high-density markers. In this study, the significance of marker-trait associations was determined using a threshold of −log10(p) = 4.5. Manhattan and Q-Q plots were generated using the 'qqman' R package [30]. We used the 200 kb genomic regions upstream and downstream of the SNPs that were significantly associated with the peanut CIR as the candidate genomic regions, and we used the heterotetraploid cultivated peanut genome for functional annotation. Based on previous reports, we predicted candidate genes associated with embryogenic callus induction in peanut.
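The extracted text omits the CIR formula itself; the sketch below assumes the standard definition CIR = N/N0 (final callus number divided by the number of inoculated explants), which is consistent with the reported means of 1.58 to 1.66, and computes the summary statistics described above with pandas and SciPy. The file and column names are hypothetical placeholders, not the study's data files.

```python
import pandas as pd
from scipy import stats

# Hypothetical phenotype table: one row per accession and subculture, with the
# number of inoculated explants and the final callus count.
records = pd.read_csv("peanut_cir_phenotypes.csv")   # placeholder file name

# Assumed definition: CIR = N / N0 (callus count over inoculation count)
records["CIR"] = records["callus_count"] / records["inoculation_count"]

wide = records.pivot_table(index="accession", columns="subculture", values="CIR")

for t in ("T7", "T8", "T9"):
    cir = wide[t].dropna()
    print(
        t,
        f"mean={cir.mean():.2f}",
        f"CV%={100 * cir.std() / cir.mean():.2f}",
        f"skew={stats.skew(cir):.2f}",
        f"kurtosis={stats.kurtosis(cir, fisher=False):.2f}",  # normal value is 3
    )

# Pairwise correlation of CIR between subcultures (cf. reported r = 0.56-0.61)
print(wide[["T7", "T8", "T9"]].corr())
```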
Callus Formation

On the germination medium, 324 of the 335 peanut accessions formed callus, and the growth status of the callus was recorded at T7, T8, and T9 (Figure 1). The CIR phenotype values of the 335 accessions were normally distributed at T7, T8, and T9, with skewness ranging from −0.17 to 0.36 and kurtosis ranging from 2.76 to 4.15 (Figure 2, Table S2). The average CIR ranged from 1.58 to 1.66, and the coefficient of variation was 33.06%, 34.18%, and 35.54% at T7, T8, and T9, respectively (Table S2). A significance analysis showed that the genotypic variation in CIR was significant, and a moderately high correlation coefficient was observed between embryogenic callus induction in the three subcultures (0.56 to 0.61) (Figure 2).

Given that only two accessions of subsp. fastigiata var. peruviana were included in the study panel, the phenotypic variation of the remaining four varieties and two irregular types was analyzed. At T7, the CIR varied among the six types, but the differences were not significant. The CIR values of the six types were ranked in the following order: var. hirsuta > irregular hypogaea > irregular fastigiata > var. fastigiata > var. vulgaris > var. hypogaea (Figure 3a). However, at T8 and T9, significant differences in CIR were observed among the six types, among which the highest CIR was recorded for the irregular hypogaea type, and the lowest CIR was that of var. vulgaris (Figure 3b,c). Interestingly, during the three subcultures, SNPs on chr13 formed one small cluster.

Analysis of Candidate Genes for SNP Loci Associated with CIR in Peanut

Genomic regions within 200 kb upstream and downstream of the significant SNPs were selected as candidate regions. Based on the SNPs significantly associated with CIR and functional annotations in these regions from the peanut reference genome, 600 annotated genes were obtained (Table S3). Thirty-six genes were potentially associated with embryogenic callus induction (Table 2). At T7, 17 genes were located near the nine significant SNPs, of which the most highly significant SNP was on chr13 at position 83801701, corresponding with the gene Arahy.LC8K5G that encodes a peroxisomal ABC transporter 1. At T8, five significant SNPs associated with CIR were detected, of which three SNPs were located on chr13. These SNP regions contained genes that affect embryogenic callus induction, and which are directly involved in callus formation or related developmental processes; they comprised a peroxisomal ABC transporter 1, Pentatricopeptide repeat (PPR) superfamily protein, SAUR-like auxin-responsive protein, MYB transcription factor, and Homeodomain-like transcriptional regulator. The candidate region centered on the SNPA05-94095749 locus contained six candidate genes on chr05, of which four encoded an auxin response factor. At T9, the locus SNPB03-87089142 had the highest phenotypic variance (Table 1).
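A hedged sketch of the downstream filtering described above: selecting markers that pass the −log10(p) = 4.5 threshold and collecting annotated genes within 200 kb on either side of each significant SNP. The GWAS itself was run with GAPIT in R; the pandas code below only illustrates the post-processing step, and the file and column names are placeholders.

```python
import numpy as np
import pandas as pd

THRESHOLD = 4.5            # -log10(p) cut-off used in the study
WINDOW = 200_000           # 200 kb upstream and downstream of each significant SNP

gwas = pd.read_csv("gapit_mlm_results.csv")        # placeholder: SNP, chrom, pos, p
genes = pd.read_csv("peanut_gene_annotation.csv")  # placeholder: gene, chrom, start, end

gwas["neg_log10_p"] = -np.log10(gwas["p"])
significant = gwas[gwas["neg_log10_p"] >= THRESHOLD]

candidates = []
for snp in significant.itertuples():
    hit = genes[
        (genes["chrom"] == snp.chrom)
        & (genes["end"] >= snp.pos - WINDOW)
        & (genes["start"] <= snp.pos + WINDOW)
    ].assign(snp=snp.SNP)
    candidates.append(hit)

candidate_genes = pd.concat(candidates, ignore_index=True)
print(len(significant), "significant SNPs;",
      candidate_genes["gene"].nunique(), "candidate genes in the windows")
```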
Discussion This study is the first to use GWAS to explore the genetic basis of embryogenic callus induction in peanut.Here, we presented CIR data from 335 peanut accessions in three subcultures, identified the candidate genomic regions, and predicted genes associated with callus formation.The results contribute to an improved understanding of the mechanism of embryogenic callus induction and provide insights useful for further studies of tissue culture in peanut. Induction of Callus from 353 Peanut Genotypes Many studies have successfully regenerated plants through somatic and organogenic pathways, accelerating research progress in tissue culture and providing a convenient method for the preservation of rare or precious wild-type germplasm and the rapid propagation of superior germplasm in vitro.The source and genotype of explants are the main factors that regulate somatic embryo formation [31,32].Callus culture is an essential requirement for crop genetic transformation, but most studies of peanut callus culture to date have focused on comparisons among a small number of varieties, thus knowledge of the genetic mechanism of peanut callus induction is incomplete.The CIR is the most intuitive trait that characterizes the strength of callus induction ability [33].To explore the genetic basis of variation in callus differentiation, we studied the CIR of 353 peanut accessions representing five botanical varieties and two irregular types, and identified candidate genomic regions and associated genes that control callus differentiation.The present study used the largest number of peanut genotypes of any published study to date to evaluate single-callus formation. The present data showed that the CIR in the three subcultures was continuous and normally distributed (Figure 2), indicating that CIR was controlled by multiple loci in the study population.In addition, differences in CIR were observed among different botanical varieties and irregular types, for which the difference was not significant at T7, but was significant at T8 and T9 (Figure 3).This variation may be caused by the unstable physiological state of the calli in the early period of embryogenic callus induction in peanut.In addition, the CIR phenotype values in the three subcultures were strongly correlated (Figure 2).These results reflected that the CIR phenotype may be partially controlled by the same genetic factors.In a previous report, the CIR of three peanut market types was ranked as Spanish type > Valencia type > Virginia type [34].In the present study, at T8 and T9, the CIR of the irregular hypogaea type was the highest, and that of var.vulgaris was the lowest, which is inconsistent with previous findings.This difference may reflect the different numbers and genotypic representations of materials in the study.Nevertheless, the results confirm that the population structure plays an important role in regulating callus formation. GWAS Analysis To date, many studies have been conducted on genes associated with embryogenic callus induction in plants [35][36][37].In the present study, several significant SNPs merit discussion in detail (Table 2).The peak SNPB03-83801701 was detected in each of the three subcultures and was located 53 kb from Arahy.LC8K5G, which encodes a peroxisomal ABC transporter 1. 
Peroxisome transporters are crucial for the synthesis of certain bioactive molecules, such as docosahexaenoic acid in mammals and jasmonic acid [38].This protein may regulate plant growth and development by mediating the synthesis of specific bioactive molecules [39]. In the present study, a gene with nonsynonymous mutations was detected in proximity to SNPA05-94095749, namely, Arahy.MIX90M, which encodes ARF19 [40], a transcriptional activator of auxin early-response genes.In Arabidopsis, ARF19 regulates lateral root formation by activating LBD/ASL genes, reflecting that auxin-induced callus formation has similar characteristics to the root formation metabolic pathway [41].Moreover, it was observed that the candidate genes Arahy.SI4FKG, Arahy.HDV1A7, Arahy.F6IJX6, Arahy.X8R2JI, and Arahy.K8JH1A encode ARF19, an auxin canalization protein, and ARF19-like isoform X1.Auxin signaling activates the expression of downstream genes and regulates plant tissue meristem and embryonic development by mediating these auxin transcription factors and auxin channel protein genes [42].Furthermore, we located several genes on chr13 that were significantly associated with embryogenic callus induction, most of which were associated with a PPR superfamily protein, Homeodomain-like transcriptional regulator, and Saur-like auxin responsive protein. The peak SNPB03-17372387 was located 119 kb from Arahy.2NE4Q7, which encodes a SAUR-like auxin-responsive protein family member.SAUR genes are the largest early auxin-responsive gene family in plants [43].Active auxin can rapidly induce the expression of SAUR genes [44].A SAUR gene was first identified in soybean hypocotyls [45], which had been confirmed to play a role in different processes in plant growth, development, and stress response [46][47][48].Therefore, it was speculated that candidate genes may affect plant tissue differentiation by regulating cell growth and development.The SNP marker SNPA05-94095749 is located in Arahy.KFS3KW, which encodes a B3 domain-containing VRN1-like transcription factor.VRN1, a critical protein that responds to long-term cold treatment to promote flowering, was detected in the vernalization response of Arabidopsis and is crucial for development in Arabidopsis [49].Overexpression of VRN1 leads to early flowering and phenotypic abnormalities [50]. Recently, marker-trait associations have been analyzed for embryogenic callus induction in a number of plants; for example, SNPB03-17372387 and SNPB07-124709181 are derived from genes that encode an MYB transcription factor.Ge et al. [51] located zmMYB138, which is a nucleus-localized member of the MYB transcription factor family of maize, and determined that zmMYB138 could promote the formation of maize embryogenic callus through gibberellin signal transduction.The MYB75 transcription factor in Arabidopsis regulates anthocyanin biosynthesis in Nicotiana callus [52].Significantly, we found that one SNPB07-124709181 was strongly associated with CIR, and this was consistently across genotypes with low or NO callus induction (Figure S1).The linkage analysis suggested that the accessions with CAT/CAT have a high CIR, while C/C have a low or zero CIR at this locus at T7, T8, and T9.Due to the small difference in CIR between the two genotypes of the material, this SNP was not identified at T8 and T9, while there was a significant difference in CIR between the two genotypes at the level of 0.05. 
The present study was focused on the prediction of candidate genes associated with embryogenic callus induction. Further functional analysis and verification are needed. At present, although many QTLs for embryogenic callus induction traits have been identified in other crops, there are few reports on genes associated with peanut embryogenic callus induction, and the genetic mechanism of peanut embryogenic callus induction remains unclear. The SNPs and candidate genes identified in the current study lay the foundation for further cloning of the functional genes that regulate peanut embryogenic callus induction and for analysis of its genetic mechanism. Conclusions In this study, we conducted embryogenic callus induction of 353 peanut accessions and observed and scored their phenotypes. The results showed that callus induction at the three subcultures might be partially controlled by the same genetic factors. In addition, we identified the genomic regions associated with callus formation and located candidate genes associated with callus formation based on functional annotations. Figure 2. Correlation between callus induction rate in peanut at the T7, T8, and T9 subcultures. The correlation coefficients are presented above the diagonal, and the distributions of the number of calli data at T7, T8, and T9 are presented on and below the diagonal. *** p < 0.001. Figure 3. Analysis of variance of differences in callus induction rate (CIR) among six peanut botanical varieties and irregular types at the T7, T8, and T9 subcultures. Boxplots show the variation in the CIR in each peanut type at T7 (a), T8 (b), and T9 (c). Different lowercase letters above the boxes indicate a significant difference within a subculture (p ≤ 0.05; Tukey's HSD test). Table 1. Single-nucleotide polymorphisms (SNPs) significantly associated with peanut callus induction rate identified at the T7, T8, and T9 subcultures through genome-wide association study. Table 2. Single-nucleotide polymorphisms (SNPs) and candidate genes significantly associated with callus induction rate in peanut at the T7, T8, and T9 subcultures.
2024-01-28T16:21:05.342Z
2024-01-26T00:00:00.000
{ "year": 2024, "sha1": "26ff1c0fbff19945885ae362b3bbeedf1c4b3d95", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2073-4425/15/2/160/pdf?version=1706251423", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7a136809ae4eda1198708691089ca36091bcac20", "s2fieldsofstudy": [ "Agricultural and Food Sciences", "Biology" ], "extfieldsofstudy": [] }
247080052
pes2o/s2orc
v3-fos-license
Energy efficiency evaluation of high-pressure DTH hammers The design of a domestically produced DTH hammer for drilling holes in rocks of medium and high strength is presented. Using the mathematical modeling method, the serviceability of the hammer at an energy source pressure of 2.4 MPa is proved. The results of DTH hammer prototype testing at lab and full scale are presented. Introduction Mineral exploration and mining involves drilling in rocks having different physical and mechanical properties, including strong and very strong rocks. Drilling technologies are subject to certain requirements, namely, high capacity, low cost, and high endurance of drilling equipment. Holes should be straight and have high-quality walls [1]. Present-day drilling is commonly carried out using rotary percussion drilling equipment with down-the-hole or offset rock-breaking tools powered by air or fluids [2]. It is expedient to perform rotary percussion drilling using a DTH air hammer. The shock source is located directly at the bottomhole, which ensures efficient fracture and straight-line drilling in rocks of high and medium strength (σ_com = 30–200 MPa). In this case, the power source is also employed as a cleaning agent to remove cuttings from the bottomhole. One of the ways of improving this drilling technology is increasing the pressure of the energy source [3]. Modern compressor plants generate pressures of up to 3.2 MPa. DTH air hammers capable of operating at such compressed air pressures are only manufactured abroad, in particular in Sweden, the USA, China, etc. Given the absence of domestically manufactured equipment of this kind, Russian drillers depend entirely on imports, which poses a threat to national energy and resource security [4]. To address this problem, the Institute of Mining SB RAS designed a DTH air hammer drill for making holes with a diameter of 130 mm (Figure 1) in accordance with the high present-day requirements [5]. DTH air hammer test results This DTH air hammer design offers several advantages. The slide valve-free system of air distribution allows the use of a wide effective pressure range of compressed air, from 0.3 to 3.2 MPa. The absence of additional air-distribution means enables the energy source to act on the maximum area of the piston during acceleration, which elevates the impact energy of a machine of the same size. The back valve prevents sludging of the machine when drilling in water-cut rocks. The body of the machine has no radial perforations and can have walls of any thickness subject to the hardness and abrasivity of the rocks, which ensures a long service life (and, as a consequence, reduces drilling cost). Spent air is exhausted toward the bottomhole, which contributes to the removal of cuttings from the bottomhole zone and prevents overgrinding of chips. Furthermore, the layout of the exhaust openings allows the use of foamy agents to ensure the required velocity of the rising current in the annular space when drilling holes of larger diameters. The analysis of the work cycle of the DTH air hammer in the main line pressure range 0.6-2.4 MPa, as well as the refinement of the machine design, was implemented using a component-based graph model. We constructed an analytical model of DTH air hammer model P130 (Figure 1b). It comprises the basic parameters of the machine: volumes of the working chambers, areas of passage, opening and shutoff points of the channels, and the mass and areas of the piston on the forward- and backward-stroke sides.
This model allows varying the design factors of air distribution and controlling the energy outputs: impact energy and frequency, impact capacity, pre-blow velocity, specific air flow rate, etc. The applicability of this approach is described in [6,7]. It enables experimentation without a full-scale specimen of the machine. The computer modeling made it possible to obtain the predicted (calculated) pressure diagrams (Figures 2a-2c). It can be seen that the increase in pressure does not compromise the stability of the machine: the work cycle remains full and efficient, and the unit blow energy and frequency grow. In order to reach the desired energy and flow rate of the machine, the DTH air hammer design was amended. After the amendment based on the mathematical modeling, a research prototype of the machine was manufactured and tested at the lab scale (Figure 2e). Using the program and procedure from [8], the pressure diagrams (Figure 2d) of air in the idle and power stroke chambers at the standard energy source pressure of 0.6 MPa were processed. The comparison of the predicted and actual diagrams (Figures 2a and 2d) of air pressure in the working chambers at the same main line pressure shows that the deviations of such parameters as the unit blow energy and frequency, and the air flow rate, are not higher than 7%. This proves that the model describes the operation of a real machine sufficiently accurately and completely. The full-scale tests of DTH air hammer model P130 were carried out in partnership with Sibir Mining and Engineering at the Borok quarry in the Novosibirsk Region. The tests included blasthole drilling in granite having a uniaxial compression strength of 120-140 MPa using an Atlas Copco Rock L8 drill at an energy source working pressure of 2.1 MPa. The drilling penetration rate in the tests averaged 0.6-0.8 m/min. This value is comparable with the efficiency of the air drill hammers produced by the world's top manufacturers (Atlas Copco, Numa). Conclusions The new design of the DTH air hammer for drilling in medium and high strength rocks is advantageous owing to its wide range of compressed air pressure (0.6-2.4 MPa), the back valve that prevents the machine from sludging when drilling in water-cut rocks, the absence of perforations in the body walls, and the exhaust of air toward the bottomhole. Mathematical and simulation modeling has proved the efficiency of the new design in operation at higher pressures. The energy efficiency of the machine at different energy source pressures has been evaluated. The full-scale tests show the high capacity of the machine (drilling penetration rate), which is comparable with the capacity of air drill hammers of foreign manufacture.
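As a rough, hedged illustration of the kind of lumped-parameter estimate such a model produces, the Python sketch below computes single-blow energy from an assumed piston mass and pre-blow velocity and checks the relative deviation between a modelled and a measured value against the 7% figure quoted above; all numbers are hypothetical and are not the P130 test data.

```python
# Hypothetical sketch: single-blow energy from piston mass and pre-blow velocity,
# and the relative deviation between modelled and measured parameters.
# All numbers are illustrative placeholders, not the P130 test data.

def blow_energy(piston_mass_kg: float, pre_blow_velocity_m_s: float) -> float:
    """Kinetic energy of the piston at impact, E = m * v^2 / 2, in joules."""
    return 0.5 * piston_mass_kg * pre_blow_velocity_m_s ** 2

def relative_deviation_percent(modelled: float, measured: float) -> float:
    """Relative deviation of the modelled value from the measured one, in percent."""
    return abs(modelled - measured) / measured * 100.0

energy_model = blow_energy(piston_mass_kg=30.0, pre_blow_velocity_m_s=9.0)  # ~1215 J
energy_measured = 1160.0  # hypothetical value processed from a pressure diagram, J

dev = relative_deviation_percent(energy_model, energy_measured)
print(f"Modelled blow energy: {energy_model:.0f} J, deviation from test: {dev:.1f}%")
print("Within the 7% band reported for the model" if dev <= 7.0 else "Outside the 7% band")
```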
2022-02-24T20:07:11.876Z
2022-02-01T00:00:00.000
{ "year": 2022, "sha1": "4a30f9baa00da3d651b20aa26f4265676cafa763", "oa_license": "CCBY", "oa_url": "https://iopscience.iop.org/article/10.1088/1755-1315/991/1/012030/pdf", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "4a30f9baa00da3d651b20aa26f4265676cafa763", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Physics" ] }
245432781
pes2o/s2orc
v3-fos-license
Single-Cell RNA Sequencing Reveals Microevolution of the Stickleback Immune System Abstract The risk and severity of pathogen infections in humans, livestock, or wild organisms depend on host immune function, which can vary between closely related host populations or even among individuals. This immune variation can entail between-population differences in immune gene coding sequences, copy number, or expression. In recent years, many studies have focused on population divergence in immunity using whole-tissue transcriptomics. But, whole-tissue transcriptomics cannot distinguish between evolved differences in gene regulation within cells, versus changes in cell composition within the focal tissue. Here, we leverage single-cell transcriptomic approaches to document signatures of microevolution of immune system structure in a natural system, the three-spined stickleback (Gasterosteus aculeatus). We sampled nine adult fish from three populations with variability in resistance to a cestode parasite, Schistocephalus solidus, to create the first comprehensive immune cell atlas for G. aculeatus. Eight broad immune cell types, corresponding to major vertebrate immune cells, were identified. We were also able to document significant variation in both abundance and expression profiles of the individual immune cell types among the three populations of fish. Furthermore, we demonstrate that identified cell type markers can be used to reinterpret traditional transcriptomic data: we reevaluate previously published whole-tissue transcriptome data from a quantitative genetic experimental infection study to gain better resolution relating infection outcomes to inferred cell type variation. Our combined study demonstrates the power of single-cell sequencing to not only document evolutionary phenomena (i.e., microevolution of immune cells) but also increase the power of traditional transcriptomic data sets. Introduction Pathogenic infection is a major ecological interaction that drives physiological and immune response in hosts, natural selection (Cagliani and Sironi 2013;Gignoux-Wolfsohn et al. 2021), and population dynamics (Frick et al. 2010;Hochachka et al. 2021). Immense natural inter-and intraspecific variation exists in organismal response to pathogens (Lazzaro et al. 2004;Fuess et al. 2017;Grab et al. 2019), contributing significantly to disparate infection outcomes (Ellison et al. 2014;Fuess et al. 2017;Grab et al. 2019). While the consequences of variability in immunity are well documented, the underlying mechanisms which produce this variability are poorly understood. Historically, inter-and intraspecific variation in pathogenic response has been most often studied in the context of single components of the immune system (cells, genes, etc.) (Lazzaro et al. 2004;Schröder and Schumann 2005;Shinkai et al. 2012;Schenekar and Weiss 2017;Pérez-Espona et al. 2019). For example, the MHC II allele repertoire is significantly correlated to amphibian susceptibility to fungal pathogens; MHC heterozygosity across and within populations significantly affects pathogen resistance (Savage and Zamudio 2011). However, recent studies have suggested that intraspecific immune variation extends beyond single components to the broad cellular structure of immune systems. Studies have documented lineagespecific loss of immune cell types, as well as evolution of novel cell types in some species (Hilton et al. 2019;Guslund et al. 2020). 
This suggests that broad-scale variation in immune cell function and/or relative abundance might contribute to variation in immune responses. Still, the majority of data to this effect come at the among-species level or even larger macroevolutionary scales; less is known about the extent to which immune cell identity and function evolve at short time scales within species. Understanding the extent of immune cell microevolution is a necessary first step in deciphering how microevolution of immune cell types may contribute to divergence in immune response and pathogen resistance at a population level. The immunological mechanisms underlying variable pathogen response and resistance remain particularity enigmatic in natural, nonmodel systems where most conclusions regarding differentiation in immunity are drawn from transcriptomic data generated from whole tissue samples (Dheilly et al. 2014;Sudhagar et al. 2018). While a powerful tool, traditional RNA sequencing (RNA-seq) studies condense any cell type heterogeneity within a sample to one data point. Thus, it is difficult to distinguish whether changes observed reflect regulatory changes in gene expression, versus shifting cell type abundance within the broader tissue. This problem is especially acute for immunological studies given the mobility of, and rapid mitotic diversification of, certain cell types and when considering nonmodel species for which genetic markers of prominent cell types are lacking. Here, we leverage recent advances in single-cell RNA sequencing (scRNA-seq) technologies to test whether significant variation in immune cell abundance and/or function exists at the population level, potentially contributing to differentiation of immune responses. We focus our efforts on the emerging natural immunological model system, the three-spined stickleback (Gasterosteus aculeatus). This small fish is a tractable natural system for considering questions related to evolutionary and ecological immunology, largely due to their unique natural history. During the Pleistocene deglaciation, ancestrally anadromous populations of stickleback became trapped in newly created freshwater lakes (McKinnon and Rundle 2002). Thousands of independent lake populations have since been evolving in response to novel biotic and abiotic stimuli associated with freshwater environments for thousands of generations. This transition to freshwater exposed stickleback to many new parasites, including freshwater-exclusive, cestode parasite, Schistocephalus solidus (Simmonds and Barber 2016). Populations have subsequently evolved different immune traits to resist or tolerate this parasite (Weber et al. 2017a). Immense variation exists between independent lake populations in susceptibility to S. solidus (Weber et al. 2017b(Weber et al. , 2022. Consequently, the G. aculeatus-S. solidus system provides a great opportunity for addressing diverse questions related to evolutionary and ecological immunity. Despite this opportunity, the understanding of the broader structure of the stickleback immune system (i.e., immune cell types and functions) is limited. We conducted scRNA-seq analysis to advance our understanding of immune cell repertoires and function in this important natural model system. Additionally, we leveraged the unique natural history of this species to assess questions regarding the response of immune systems to selective pressure (i.e., a novel parasite). 
By comparing immune cell repertoires among ecologically divergent but closely related populations of fish, we are able to demonstrate that selection can create rapid evolutionary change in not only relative immune cell abundance but also function (i.e., gene expression) of these immune cell types. These findings add further evidence that variation in broad immune system structure contributes to functional diversity of immunity and divergence in immune responses on a microevolutionary scale. Results and Discussion The Stickleback Head Kidney is Comprised of Eight Cell Types To create a description of the immune cell repertoire of the three-spined stickleback, G. aculeatus, we conducted scRNA-seq and associated analysis of nine laboratoryraised adult fish. Individuals were lab-raised descendants bred from wild-caught ancestors from three different populations on Vancouver Island with variable resistance to S. solidus (3 fish per population). These populations include one anadromous population from Sayward Estuary, which are highly susceptible to S. solidus which they rarely encounter in nature (Weber et al. 2017a). In Gosling Lake, fish are frequently infected and tolerate rapid tapeworm growth (Weber et al. 2022). In the nearby Roberts Lake, the parasite is extremely rare, apparently because the fish are able to mount a strong fibrosis immune response that suppresses tapeworm growth and can even lead to parasite elimination. The three populations have been diverging in isolation for ∼12,000 years (since Pleistocene deglaciation) and exhibit weak but significant differences in allele frequencies throughout the genome, most strongly at loci under divergent selection (Weber et al. 2022). Prior work on these populations, including experimental infection of pure genotypes, F1 hybrids, and a recombinant F2 mapping population (backcrosses and intercrosses) confirmed that there are heritable differences in resistance to S. solidus (Weber et al. 2022). Flow cytometry and whole-tissue transcriptomics of the pronephros (a.k.a. "head kidney," an important hematopoietic organ in fish) (Kum and Sekkin 2011), identified genetic associations between infection outcomes and cell type composition (lymphocyte/leukocyte ratios, coarsely defined) and gene expression (Fuess et al. 2021b;Weber et al. 2022). Here, we revisit this result by first generating single-cell RNA-seq data from the three focal populations. Importantly, the fish sampled for the scRNA-seq were not infected with this cestode parasite but instead represent constitutive population-level variability. Resulting libraries ranged in size from 8,119 to 19,578 cells with mean reads per cell ranging from 15,580 to 55,204 and median genes per cell ranging from 307 to 707. Mapping rates of genes to the newest version of the stickleback genome (Peichel et al. 2017(Peichel et al. , 2020Nath et al. 2021) with improved annotations ranging from 51% to 68%, with most samples mapping at >55%. The use of improved annotations reduces but does not eliminate the impacts of 3′-UTR bias on our data analysis, an important consideration for future studies (Healey et al. 2022). Following filtering (see Methods for details), our final data set consisting of samples ranges between 1,780 and 9,160 cells per library. First, we describe these scRNA-seq results which provide the first immune cell atlas for G. aculeatus, then we use this new resource to re-examine the prior experimental infection data. 
A first pass analysis of the resulting data revealed 24 unique clusters of cells. However, further analysis of these clusters and their marker genes revealed that many of these original 24 clusters likely corresponded to subtypes (or different activation states) of major vertebrate immune cell types. Based on key marker genes (supplementary table S1, Supplementary Material online) each of these original clusters was assigned a putative cell type. Clusters assigned to the same cell type were then condensed, resulting in eight new clusters ( fig. 1; supplementary figs. S1-S2, table S1, File 1, Supplementary Material online). These eight new clusters were representative of most major immune cell types (Carmona and Gfeller 2018): hematopoietic cells (HCs), neutrophils, antigen-presenting cells (APCs), B-cells, erythrocytes (RBCs), platelets, fibroblasts, and natural killer cells (NKCs) (supplementary figs. S3-S10, File 2, Supplementary Material online). Most of the original 24 unique clusters were easily grouped into one of these eight major groups based on comparison to existing data regarding vertebrate and teleost immune cell expression. For example, highly abundant neutrophils bear strong similarity to previously described teleost neutrophils, including high expression of zebrafish neutrophil marker nephrosin (npsn) Also present in low abundance were a number of important immune cell types: platelets, fibroblasts, and NKCs; all of which were easily identifiable based on high expression of characteristic genes (supplementary figs. S8-S10, Supplementary Material online). The cluster was comprised of two distinct original clusters which were both characterized by high expression of NKC markers. Unfortunately, these two subgroups were not easily distinguished due to low representation. However, there were subtle differences in gene expression between the two groups which could be determined: one of these subgroups displayed constitutive expression of the human innate lymphoid cell (ILC) marker gene, rorc (Hoorweg et al. 2012), as well as high expression of runx3, which modulates development of ILCs (Ebihara et al. 2015), providing some support that this subgroup was comprised of putative fish ILCs. Conspicuously absent were putative T-cells. This can likely be explained due to the nature of the pronephros, which is believed to operate similarly to mammalian bone marrow (Tomonaga et al. 1973;Hitzfeld et al. 2005;Kum and Sekki 2011). Consequently, T-cells are likely only transiently found in this organ, perhaps primarily early in life. Alternatively, T-cells may have been less robust to the cell isolation procedure. Stickleback RBCs Express a Variety of Immune Genes In teleosts, unlike mammals, RBCs are nucleated and genetically active (Witeska 2013). A large, heterogenous group of cells with high expression of hemoglobin-associated genes was identified as putative RBCs. Interestingly, these cells also had high expression of a number of immune genes characteristic of both neutrophils and B-cells ( fig. 2). Previous findings have indicated that teleost RBCs have diverse roles in the regulation of host immunity (Pereiro et al. 2017;Shen et al. 2018). For example, it is well documented that teleost RBCs contribute to antiviral immunity (Nombela et al. 2017;Pereiro et al. 2017;Puente-Marin et al. 2019). Preliminary evidence suggests they also can phagocytose and kill bacterial pathogens (Qin, et al. 2019) and even yeast (Passantino et al. 2002). 
However, our results suggest further refinement of these functions. Clustering analysis shows two distinct subgroups of RBCs, dividing based on similarity to either myeloid-(neutrophil) or lymphoid-(B-cells) type cells ( fig. 2). Thus, while previous studies have both characterized myeloid-type functions (Pereiro et al. 2017;Puente-Marin et al. 2019;Qin et al. 2019) and document interactions with lymphoid cells (Jeong et al. 2016), this is the first evidence for diversification of teleost RBCs into distinct subgroups, each serving a particular immunological role. Further study is needed to improve the understanding of the distinct roles of these two subtypes and their broad roles in fish immunity. Two Groups of B-Cells Are Identifiable: Resting and Plasma B-Cells A large group of cells uniquely expressing cd79a, swap70a, and a number of putative immunoglobulin genes was identified as putative B-cells ( fig. 1). This group was comprised of three subclusters (original clusters 11, 12, and 13; supplementary fig. S2, Supplementary Material online), two of which (cluster 12 and cluster 13) were readily distinguished by expression patterns (supplementary fig. S11, Supplementary Material online). The smaller of the two subclusters (cluster 13) had considerably higher expression of immunoglobulin genes as well as X-box binding protein 1 (xbp1) and associated proteins, key markers of plasma cells in mammals (Shaffer et al. 2004). Thus, we concluded that these two groups likely comprised of resting B-cells (cluster 12) and activated/plasma B-cells (cluster 13), respectively. Previous work has documented the diversification of fish B-cells into antibody-secreting cells upon immune stimulation (Jenberie et al. 2018). Furthermore, studies have indicated that antibody-secreting cells (including plasma cells and plasmablasts) constitute a stable subpopulation of cells in the head kidney of other fish species. Interestingly though, low levels of resting B-cells in the head kidney have been documented in salmonids of stock origin, which is contrary to our results (Ma et al. 2013). High levels of resting B-cells are characteristic of tissues involved in inducible responses to immune challenge, typically the blood and spleen in teleost fish (Ma et al. 2013). However, it is possible that some fish lineages may have evolved more plasticity in head kidney function as part of an inducible immune response. Further characterization of B-cell subpopulation in other tissue types from G. aculeatus will provide insight regarding the lineagespecific roles of various lymphoid tissues in immunity. Isolated Populations of Stickleback Vary Significantly in Cell Type Abundance The nine fish sampled for our scRNA-seq analysis were representative of three isolated and genetically divergent populations. Laboratory experimental infection studies have confirmed that these three populations, Roberts Lake, Gosling Lake, and Sayward (anadromous) differ considerably in their immune responses to a common freshwater parasite, S. solidus (Weber et al. 2017b(Weber et al. , 2022. The marine population is evolutionarily naïve to the parasite, which does not survive brackish water, and consequently is readily infected and permits rapid cestode growth. Both Gosling and Roberts Lakes are more resistant to laboratory infection than their marine ancestors, but the most resistant Roberts Lake population significantly suppresses cestode growth and is more likely to encapsulate and kill the cestode in a fibrotic granuloma (Weber et al. 
2017b, 2022). Here, we tested whether the three populations exhibit differences in immune cell relative abundance, or differences in within-cell-type expression. We found significant between-population variation in abundance in every cell type except fibroblasts (supplementary table S2, Supplementary Material online; fig. 3). Roberts Lake fish (ROB), which are most resistant to the parasite, had considerably more neutrophils and platelets, but significantly fewer NKCs, RBCs, and B-cells than the other two populations. Sayward fish (parasite naïve) had the highest abundance of APCs, B-cells, and RBCs. It is important to note that sampled fish did vary in age, which may have had some effect on observed variation in immune cell repertoires. Furthermore, the context of this study does not allow for any firm conclusions linking these single-cell differences to functional effects on, or adaptation to, specific parasite species. While it is likely the differences reflect adaptive divergence due to natural selection (given the populations' recent divergence), the three populations differ with respect to multiple environmental factors and multiple parasite species (Bolnick et al. 2020), so attributing evolution to a specific parasite requires future confirmation. Such confirmation might entail experimental coevolution of the host and parasite, or comparative methods spanning numerous populations to establish a reliable correlation between a given gene change and a particular parasite. Still, several of the observed differences in immune cell repertoires across populations could have significant implications for host defense. For example, ROB had higher abundance of neutrophils and platelets, both of which play important roles in parasite defenses. [Figure caption (panels a-c): a) Heatmap of log-normalized expression of annotated B-cell and neutrophil marker genes that were significantly differentially expressed between the two RBC subgroups (mitochondrial and ribosomal genes excluded); heatmap generated using the pheatmap package in R. b) Violin plot of log-normalized expression of significantly differentially expressed neutrophil marker genes among the two subgroups of cells. c) Violin plot of log-normalized expression of significantly differentially expressed B-cell marker genes among the two subgroups of cells.] Neutrophils and other granulocyte cells such as eosinophils are important components of the initial innate immune response to helminths and other parasites (Chen et al. 2014; El-Naccache et al. 2020). Platelets, specifically thrombocyte-derived compounds, are important mediators of fibrotic responses (Antoniades et al. 1990; Abdollahi et al. 2005), and fibrosis is a major part of Roberts Lake sticklebacks' response to S. solidus infection (Weber et al. 2022). Consequently, enhanced abundance of both neutrophils and platelets in ROB may allow for quick induction of resistance phenotypes (i.e., fibrosis) (Hund et al. 2022) and other immune responses which result in the efficient elimination of the parasite. It should be noted that the lack of variation in fibroblast abundance among populations is not unexpected; the fish used here are uninfected and so do not differ in fibrosis levels. Also, while platelets normally originate in hematopoietic tissues, like the head kidneys (Chang et al.
2007), fibroblasts are usually stimulated at sites of damage (Wynn 2008), which is in the peritoneal (body) cavity for the S. solidus parasite. Combined, the differences in relative abundance of immune cell types observed among our three populations of fish are likely to be mechanistically linked to observed variation in parasite resistance, an inference we revisit below by reanalyzing prior experimental data. Expression of Each Cell Type Varies among Populations In contrast to the significant variation in relative abundance of immune cell types between the three sampled populations, we found modest signatures of among-population variation in expression profiles within cell types (supplementary File 3, 3). Despite having significantly fewer total B-cells (consistent with their lower proportion of lymphocytes in prior flow cytometry data) (Weber et al. 2017a), ROB had B-cells which exhibited higher average expression of immunoglobulin-type genes per cell. This may be a compensatory method as B-cell production of immunoglobulin is an essential component of response to helminth infection (Ma et al. 2013). Higher expression of immunoglobulin genes by Roberts Lake B-cells is likely the result of a significantly higher relative abundance of putative plasma B-cells in ROB (compared with resting B-cells). ROB, when compared with Gosling Lake and Sayward fish, had higher proportions of plasma cells. This was true when considering both the ratio of plasma cells to all head kidney cells, and plasma cells to B-cells specifically (χ 2 test; Padj < 0.001). Again, this difference may be connected to variability in parasite resilience. Helminth-protective T H 2-type immune responses induce expansion of plasma cells producing IgE ). Thus, a higher constitutive abundance of plasma-type B-cells in ROB may contribute to enhanced resistance to S. solidus parasites. Finally, patterns of expression of neutrophil-associated markers also varied significantly across populations. Both HCs and RBCs in Roberts Lake had significantly higher expression of neutrophil marker genes ( fig. 3). This is likely the result of enhanced overall investment in neutrophil-like cells in ROB, which could support a quick initial response to invading parasites (Chen et al. 2014;El-Naccache et al. 2020). Perhaps most interestingly, we observed population-specific, preferential expression of what is presumably duplicated copies of the important zebrafish neutrophil marker gene, npsn. We identified two highly similar genes annotated as npsn, both of which were significant markers of neutrophils. However, one gene was preferentially expressed by ROB, while the other was expressed higher in Gosling and Sayward neutrophils ( fig. 3). Sequence comparison of these two gene copies revealed that while highly similar to zebrafish npsn, there are several species-specific and copy-specific amino acid substitutions in the sequences, suggesting potential neofunctionalization (supplementary fig. S12, Supplementary Material online). Neofunctionalization of one copy of this gene could be the result of any number of environmental differences between the populations but is of particular interest here due to the apparent speed at which shifts in preferential expression of these two isoforms have evolved. 
Insights from scRNA-seq Analyses Improve Interpretation of Past Traditional RNA-seq Studies The scRNA-seq data allowed us to confidently identify a suite of genes which are markers of each of these putative eight cell types (supplementary File 2, Supplementary Material online). Using these new candidate marker genes, we can reevaluate findings of past RNA-seq studies to understand the relative contributions of changes in gene expression versus changes in cell abundance. Specifically, we leveraged these markers to reinterpret results from two previous studies for which we had both traditional RNA-seq expression data and flow cytometry data coarsely estimating granulocyte to lymphocyte relative abundance using forward and side-scatter gating (Lohman et al. 2017;Fuess et al. 2021b). The first, and larger, of the two studies investigated variation in constitutive and induced immune response to experimental parasite infections in laboratory-reared F2 fish (Fuess et al. 2021b). Within this data set, granulocyte and lymphocyte frequencies are, respectively, correlated to expression of both putative granulocyte markers (npsn B, transcript 1; Pearson correlation, P < 0.001, r = 0.3904) and lymphocyte markers (cd79a; Pearson correlation, P < 0.001, r = 0.4569). The second, smaller, study conducted a similar experimental parasite infection of laboratory-reared F1 fish (Lohman et al. 2017). Within this study, these correlations are less significant for lymphocytes (Pearson correlation, P = 0.016, r = 0.25), and both nonsignificant and trending in the opposite direction for granulocytes (Pearson correlation, P = −0.17, r = 0.12; fig. 4). These inconsistencies are likely due to the nature of our correlative data. Flow cytometry grouped cells into two large bins: granulocytes and lymphocytes. Thus, finding two markers that accurately correlate to these broad groups across experiments is difficult, particularly for diverse granulocytes. Examination of a broader group of potential markers revealed strong correlations between several additional markers and both lymphocyte granulocyte abundances for our larger data set, and strong associations between additional lymphocyte markers and lymphocyte abundance for our smaller data set (supplementary figs. S13-14, Supplementary Material online. These findings suggest that variation in expression of cell markers identified here may be reflective of changes in abundance of immune cell types. We believe that further validation using more comprehensive paired transcriptomic and flow cytometry data will demonstrate that this data provide a powerful new resource that will increase the interpretive power of traditional RNA-seq analyses, particularly when combined with developing methods for deconvolution of bulk-tissue data (Cobos et al. 2020;Jin and Liu 2021). Assuming that changes in expression of these markers is at least in part due to changes in their respective cell type, we can now glean more insight regarding the cellular changes in response to infection of G. aculeatus by S. solidus by reexamining previous data sets. Consequently, we applied the markers generated here to reinterpret results from the two studies of response experimental parasite infection in laboratory-reared F1 and F2 fish (Lohman et al. 2017;Fuess et al. 2021b). In each case, we conducted χ 2 tests to detect overrepresentation of cell markers (generally or specific cell type) among significantly differentially expressed genes. 
In the case of groups where significant overrepresentation was detected, we conducted a proportion test to detect statistically significant skew in the directionality of differential expression. In the smaller study of the response of laboratory-reared F1 fish, we observed few significant patterns of biological interest (Lohman et al. 2017; supplementary table S3, Supplementary Material online). However, in our larger data set (F2 fish), we noticed significant overrepresentation of APC and B-cell marker genes among the genes differentially expressed as a result of infection or between populations, respectively (Fuess et al. 2021b) (supplementary table S3, Supplementary Material online). Importantly, this result is in F2 hybrid fish where recombination during two generations of breeding has randomized most between-population differences, and environmental effects are removed by rearing two generations in the lab. Thus, associations between our interpolated cell type results and infection outcomes represent evidence for genetic covariance between the cell type (expression) and resistance traits. Markers of APCs were not only significantly overrepresented but also exclusively increased in response to infection (fig. 4). Alternatively activated macrophages are known to play key roles in response to helminth infection, including mediating inflammatory responses (Kreider et al. 2007; Coakley and Harris 2020). B-cell markers were generally expressed at higher levels in susceptible back-crossed fish compared with resistant back-crosses, consistent with the analysis of scRNA data presented here. Finally, we also considered results from correlative analyses of associations between gene expression in F2 fish and gut microbiome composition (Fuess et al. 2021a). Here, we observed significant overrepresentation of markers of neutrophil, B-cell, and fibroblast cells among lists of genes significantly correlated to the abundance of specific microbial taxa in the gut (supplementary table S3, Supplementary Material online). Neutrophils demonstrated the most consistent patterns of association with microbial taxa abundance, with some microbial taxa demonstrating strongly significant positive or negative associations with many neutrophil markers (fig. 4). Neutrophils and gut microbiota are believed to be functionally linked, with gut microbiota regulating components of neutrophil activity and vice versa (Lajqi et al. 2020). Our findings suggest that specific microbiota have systemic effects on the proliferation (or lack thereof) of neutrophils in hematopoietic organs. In sum, the markers discovered here provide new power to interpret traditional RNA-seq data and begin to disentangle the relative contributions of changes in gene expression versus changes in cell type abundance. These results point to the value of small-sample scRNA-seq in guiding reinterpretation of new or existing large-sample bulk-tissue transcriptomic data and hence the potential future value of emerging methods for bulk RNA-seq deconvolution (Cobos et al. 2020; Jin and Liu 2021). Conclusions Here, we present a robust analysis of population-specific variation in immune system structure (relative cell type abundance and function) and the potential connection of this variation with observed variation in parasite resistance.
Using single-cell RNA-seq analyses, we demonstrate that independent populations, with known differences in parasite resistance, vary significantly in both abundance and expression patterns of immune cell types. Our reanalysis of prior bulk tissue data then allows us to infer cell-type correlations with the results of experimental infections (cestode infection success and growth). This is, to our knowledge, the first evidence that rapid evolution of immune cell repertoires among populations both occurs and potentially contributes to variation in immune response and infection outcome. Our results add to the growing body of evidence that suggests that the immune system may be much more malleable than once thought. Furthermore, these findings provide compelling rationale for further studies investigating the adaptability of immune system structure within and between species, focusing on the evolutionary causes of such adaptability. Also notably, our findings present the first description of prominent immune cell types in an important ecological and evolutionary model species. This provides new cell marker resources that can be used to streamline further immunological studies and provide new insight into traditional RNA-seq studies. In sum, our work not only adds strong evidence suggesting that microevolution of immune cell repertoires contributes to variation in immune response but also provides a robust new tool for researchers utilizing the stickleback system as a model of evolutionary and ecological immunology. [Figure 4 caption: Evaluation of applicability of identified markers to past traditional RNA-seq data sets. a-d) Pearson correlations between expression of identified lymphocyte (cd79a) or granulocyte (npsn.b) markers and normalized lymphocyte or granulocyte frequency (detected by flow cytometry) in our two previous transcriptomic study sets (a, b) (Fuess et al. 2021b) and (c, d) (Lohman et al. 2017). For all correlation plots, the regression line is shown and shading indicates 95% confidence intervals. e) Patterns of differences in gene expression of identified APC markers in uninfected versus infected fish; all data shown correspond to genes which were significantly differentially expressed in a previous traditional RNA-seq study (Fuess et al. 2021b). f) Heatmap of significant correlations (tau) between gene expression of identified neutrophil markers and abundance of specific microbial taxa. Nonsignificant correlations are displayed in grey. Data taken from a previous correlative analysis of traditional RNA-seq data (Fuess et al. 2021a).] Sample Collection & Processing Single-cell libraries were generated from head kidneys of laboratory-reared F1 stickleback from three populations on Vancouver Island in British Columbia (Sayward Estuary, Roberts Lake, and Gosling Lake). Reproductively mature fish were collected at each location using minnow traps. Gravid females were stripped of their eggs, which were then fertilized using sperm obtained from macerated testes of males from the same lake. Fish were collected with permission from the Ministry of Forests, Lands, and Natural Resource Operations of British Columbia (Scientific Fish Collection permit NA12-77018 and NA12-84188). The resulting eggs (F1 generation) were shipped back to Austin, Texas, hatched, and reared to maturity in controlled laboratory conditions. At ∼2-3 years of age, fish were transferred to aquarium facilities at the University of Connecticut. At the time of sampling, fish ranged from 3 (Sayward and Gosling) to 4 (Roberts) years of age.
Sampled fish were a random selection of F1-generation fish of unknown relatedness. All sampled fish were male. We generated single-cell suspensions from the pronephros (head kidney) of three fish from each population (Sayward, Roberts, and Gosling). Fish were humanely euthanized one at a time, and their head kidneys were immediately extracted. Dissected head kidneys were placed in 2 mL of R-90 media (90% RPMI 1640 with L-glutamine, without Phenol red; Gibco) in a sterile 24-well plate on ice. Tissue was then physically dissociated using a sterile pipette tip. The resulting slurry was then strained through a 40 μm nylon filter. An additional 2 mL R-90 was added to the resulting suspension. Cells were then spun at 440 g for 10 min at 4 °C. The supernatant was removed, and cells were resuspended in 2 mL R-90. Cells were spun one more time, and the resulting supernatant was replaced with 1 mL R-90. Cell suspensions were then transported on ice to the Jackson Lab facility in Hartford, Connecticut, where samples were prepared for sequencing and sequenced within 6 h of initial sample collection. Single-Cell Library Preparation and Sequencing Cells were washed and suspended in PBS containing 0.04% BSA and immediately processed as follows. Cell viability was assessed on a Countess II automated cell counter (ThermoFisher), and an estimated 12,000 cells were loaded onto one lane of a 10× Genomics Chromium Controller. Single-cell capture, barcoding, and single-indexed library preparation were performed using the 10× Genomics 3′ Gene Expression platform version 3 chemistry and according to the manufacturer's protocol (#CG00052) (Zheng et al. 2017). cDNA and libraries were checked for quality on Agilent 4200 Tapestation, quantified by KAPA qPCR, and sequenced on an Illumina sequencer which targeted 6,000 barcoded cells with an average sequencing depth of 50,000 read pairs per cell. Three initial libraries (one per population) were sequenced on individual lanes of a HiSeq 4000 flow cell; all other libraries were sequenced on a NovaSeq 6000 S2 flow cell, each pooled at 16.67% of the flow cell lane. Illumina base call files for all libraries were converted to FASTQs using bcl2fastq v2.20.0.422 (Illumina), and FASTQ files were aligned to reference genome constructed from the v5 G. aculeatus assembly and annotation files available at https://stickleback.genetics.uga.edu/ (Nath, et al. 2021). Briefly, annotations from Ensembl (release 95) were combined with repeat, Y chromosome, and revised annotations from Nath et al. using AGAT (0.4.0) (Dainat et al. 2021), and a STAR-compatible reference genome was generated by Cell Ranger (v3.1.0, 10× Genomics) using these annotations and the v5 assembly from Nath et al. The Cell Ranger count (v3.1.0) pipeline was used to construct the cell-by-gene counts matrix for each library, subsequently analyzed using Scanpy 1.3.7 (Wolf et al. 2018) and the Loupe Cell Browser (10× Genomics). Each counts matrix was individually subjected to quality control filtering, such that cells with more than 35,000 UMIs, fewer than 400 genes, more than 30% mtRNA content, and more than 1,000 hemoglobin transcripts were discarded from downstream analysis. The nine filtered counts GBE matrices were concatenated, normalized by per-cell library size, and log transformed. The expression profiles of each cell at the 4,000 most highly variable genes (as measured by dispersion) (Satija et al. 2015;Zheng et al. 
2017) were used for principal component (PC) analysis and subsequently batch corrected using Harmony (Korsunsky et al. 2019). The batch-corrected PCs were utilized for neighborhood graph generation (using 25 nearest neighbors) and dimensionality reduction with UMAP (McInnes et al. 2020). Clustering was performed on this neighborhood graph using the Leiden community detection algorithm (Traag et al. 2019). Subclustering was performed on a per-cluster ad hoc basis to separate visually distinct subpopulations of cells. This UMAP embedding and clustering metadata were then imported into the Loupe Cell Browser (generated using Cell Ranger aggr [v3.1.0]) for interactive analysis. Cluster Identification Once data (UMAP embedding and clustering metadata) were loaded into the Loupe Cell Browser, we then generated lists of marker genes for each of the identified clusters using the "Globally Distinguishing" feature. Marker genes were classified as those genes upregulated in each cluster (compared with all other cells) with an adjusted P < 0.10. Next, we assigned tentative identities to each of these initial clusters by comparison of marker genes to available literature regarding markers of immune cells in teleost fish and other vertebrates. During this initial identification process, we identified multiple groups of cells with homology to the same major immune cell type (e.g., three clusters demonstrated patterns of expression indicative of neutrophils). Consequently, we condensed the initial 24 identified clusters into eight major groups based on homology to known vertebrate immune cell types for downstream analyses. Supplementary table S1, Supplementary Material online, lists all original clusters, their top statistical marker genes, the genes used to assign the cluster to a major immune cell type, and the final immune cell type assignment. After this process, we also examined differential expression between original clusters within these major groups using the "Locally Distinguishing" feature in the Loupe Cell Browser. Cluster identification and subcluster distinctions were confirmed by visual analysis of expression of major immune cell type markers in the Loupe Cell Browser. Violin plots and heatmaps displaying patterns of expression across major group and subclusters within groups were generated in R using read count matrices and cluster identity information (exported from the Loupe Cell Browser). Relevant code can be found at https://github.com/lfuess/scRNAseq. Comparative Analyses Across Populations When comparing across populations, we assessed two hypotheses: 1) relative abundance of immune cell types is variable across populations and 2) expression patterns within each identified immune cell type are variable across populations. First, to identify differences in relative abundance of each of our eight major immune cell types, we performed independent, binomial general linear models (GLM) for each cell type. The binomial GLM uses the numbers of observed instances of a given cell type, out of a known number of observed cells per fish, to calculate the proportion of that cell type (with appropriate binomial error) and test whether this proportion varies between factor groups (e.g., populations). Tukey's post hoc tests were used for pair-wise comparisons if significant differences were identified between populations (the code can be found at https://github.com/lfuess/scRNAseq). 
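As an illustration only, the following Python sketch shows the shape of a per-cell-type binomial GLM of this kind using statsmodels; the authors' actual analysis was carried out with the R code linked above, and the per-fish counts here are hypothetical placeholders. If the population term is significant, pairwise post hoc comparisons (Tukey-style, as described above) would follow.

```python
# Minimal sketch (not the authors' code) of a binomial GLM testing whether the
# proportion of one cell type differs among populations. Counts are invented.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy import stats

df = pd.DataFrame({
    "population":  ["ROB", "ROB", "ROB", "GOS", "GOS", "GOS", "SAY", "SAY", "SAY"],
    "cell_type_n": [2100, 1850, 2300, 900, 1100, 950, 800, 700, 1020],   # e.g. neutrophils
    "total_cells": [5200, 4800, 6100, 5000, 5600, 4700, 4300, 3900, 5100],
})
df["other_n"] = df["total_cells"] - df["cell_type_n"]

# Two-column (successes, failures) response with population as the predictor
endog = df[["cell_type_n", "other_n"]].to_numpy()
exog = sm.add_constant(pd.get_dummies(df["population"], drop_first=True).astype(float))
fit = sm.GLM(endog, exog, family=sm.families.Binomial()).fit()
print(fit.summary())

# Deviance-based test of the population effect against an intercept-only model
null = sm.GLM(endog, np.ones((len(df), 1)), family=sm.families.Binomial()).fit()
lr = null.deviance - fit.deviance
print("LR statistic:", lr, "p =", stats.chi2.sf(lr, df=int(fit.df_model)))
```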
Second, to identify differences in gene expression patterns within each of our identified immune cell types, we again used the "Locally Distinguishing" feature in the Loupe Cell Browser. Cells within each major group were subdivided by population, and then, all possible pairwise comparisons of gene expression were conducted. When conducting pairwise comparisons, the Loupe Browser specifically normalizes for differences in cell abundance between groups using size factors to ensure that differences in average cell expression and abundance are not conflated. Genes with adjusted P < 0.10 were identified as significantly differentially expressed. Relevant violin plots and heatmaps were generated in R using read count matrices and cluster identity information (exported from the Loupe Cell Browser). The relevant code can be found at https://github.com/lfuess/scRNAseq. Sequence Alignment In order to examine sequence divergence in the two identified copies of neutrophil marker gene, npsn, we conducted a multiple sequence alignment of both npsn transcripts from stickleback and the zebrafish npsn transcript sequence using the R package msa (Bodenhofer et al. 2015). Comparison to Past Analyses We leveraged past transcriptomic analysis of the stickleback head kidney to assess whether whole tissue-measured expression of putative markers identified here could be used as a reliable metric of relative cell type abundance. We specifically analyzed two past transcriptomic data sets: 1) an analysis of laboratory-reared F1 fish from Roberts and Gosling Lakes experimentally exposed to parasites (Lohman et al. 2017) and 2) an analysis of laboratory-reared F2 and backcrossed fish, the offspring of fish from experiment 1, experimentally exposed to parasites (Fuess et al. 2021b). For both of these data sets, we had access to transcriptomic data detailing whole tissue expression of our putative cell markers, and flow cytometry data coarsely estimating granulocyte to lymphocyte relative abundance using forward and side-scatter gating. For each data set, we examined the correlation between normalized gene expression of putative markers and square root transformed frequency data for granulocytes or lymphocytes as appropriate. Once we established that whole-tissue expression of putative cell markers was at least partially indicative of relative abundance of immune cell types, we then leveraged our newly identified cell markers to reinterpret three past transcriptomic studies of stickleback immunity: the two previously mentioned transcriptomic studies of F1 & F2/ backcross fish to immune challenge (Lohman et al. 2017;Fuess et al. 2021b) and an additional study examining correlations between head kidney gene expression and gut microbiome composition (Fuess et al. 2021a). Specifically, we used chi-squared tests to identify significant overrepresentation of markers of any given cell type within lists of genes significantly differentially expressed as a result of traits of interest, or genes significantly correlated to microbial diversity/taxa of interest. χ 2 tests were used to test for overrepresentation of each immune cell type within each list of genes independently. Supplementary Material Supplementary data are available at Genome Biology and Evolution online (http://www.gbe.oxfordjournals.org/).
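To make the "Comparison to Past Analyses" procedure above more concrete, here is a minimal Python sketch of the two steps it describes: correlating whole-tissue expression of a putative marker with square-root-transformed flow cytometry frequencies, and a chi-squared test for overrepresentation of a cell type's markers among differentially expressed genes. All values and gene counts are hypothetical, not those of the cited studies.

```python
# Hypothetical sketch of the two reanalysis steps described above.
import numpy as np
from scipy import stats

# 1) Correlation between whole-tissue marker expression and flow cytometry
#    frequency (square-root transformed, as in the text).
marker_expr = np.array([5.1, 6.3, 4.8, 7.2, 6.9, 5.5, 8.1, 4.2])       # normalized expression
granulocyte_freq = np.array([0.31, 0.42, 0.28, 0.55, 0.49, 0.36, 0.61, 0.25])
r, p = stats.pearsonr(marker_expr, np.sqrt(granulocyte_freq))
print(f"Pearson r = {r:.2f}, p = {p:.3f}")

# 2) Chi-squared test for overrepresentation of one cell type's markers among
#    significantly differentially expressed (DE) genes.
n_markers, n_markers_de = 120, 25      # markers of the focal cell type (tested / DE)
n_other, n_other_de = 14000, 900       # all other tested genes (tested / DE)
table = np.array([[n_markers_de, n_markers - n_markers_de],
                  [n_other_de, n_other - n_other_de]])
chi2_stat, p_chi, dof, _ = stats.chi2_contingency(table)
print(f"chi2 = {chi2_stat:.1f}, dof = {dof}, p = {p_chi:.2e}")
```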
2021-12-24T16:07:42.515Z
2021-12-21T00:00:00.000
{ "year": 2023, "sha1": "b5d3e03641132dbfb2df2ab86ba7c0b67248b6cc", "oa_license": "CCBY", "oa_url": "https://academic.oup.com/gbe/advance-article-pdf/doi/10.1093/gbe/evad053/49837532/evad053.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "4e0a91e1289434cca9df85bb55a017af1651d477", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology" ] }
208019149
pes2o/s2orc
v3-fos-license
In Vitro Activity of Cefiderocol Against a Broad Range of Clinically Important Gram-negative Bacteria Abstract Carbapenem-resistant gram-negative bacteria including Enterobacteriaceae as well as nonfermenters, such as Pseudomonas aeruginosa and Acinetobacter baumannii, have emerged as significant global clinical threats. Although new agents have recently been approved, none are active across the entire range of resistance mechanisms presented by carbapenem-resistant gram-negative bacteria. Cefiderocol, a novel siderophore cephalosporin, has been shown in large surveillance programs and independent in vitro studies to be highly active against all key gram-negative causative pathogens isolated from patients with hospital-acquired or ventilator-associated pneumonia, bloodstream infections, or complicated urinary tract infections. The improved structure, the novel mode of entry into bacteria, and its stability against carbapenemases enables cefiderocol to exhibit high potency against isolates that produce carbapenemases of all classes or are resistant due to porin channel mutations and/or efflux pump overexpression. Resistance to cefiderocol is uncommon and appears to be multifactorial. embedded into the outer bacterial membrane [12]. The natural bacterial iron transporters are upregulated under iron-depleted conditions that occur during acute infections. Thus, iron concentration needs to be taken into account when determining the in vitro activity of such antibiotics. The iron concentration in standard culture media (eg, cationadjusted Mueller-Hinton broth [CAMHB]) is neither controlled nor limited, and it can vary depending on the manufacturer [16]. To test the in vitro activity of siderophore antibiotic conjugates, iron-depleted media are required to provide reproducible minimum inhibitory concentrations (MICs) that predict in vivo efficacy [17,18]. The Clinical and Laboratory Standards Institute (CLSI) has approved the use of iron-depleted CAMHB to determine cefiderocol MICs. The growth medium is prepared by removing all cations from the Mueller-Hinton broth through incubation with a cation-binding resin, followed by replenishment of Mg 2+ , Ca 2+ , and Zn + [19]. Based on the preclinical in vivo efficacy and pharmacokinetic/ pharmacodynamic (PK/PD) analyses using this MIC testing methodology, provisional susceptible, intermediate, and resistant cefiderocol breakpoints of 4, 8, and 16 μg/mL, respectively, have been approved by CLSI for Enterobacteriaceae, P. aeruginosa, A. baumannii, and S. maltophilia [19]. This was the first case of breakpoints being approved by CLSI prior to approval of a new drug based on in vitro activity and preclinical in vivo PK/PD data. ACTIVITY AGAINST CLINICAL ISOLATES IN MULTINATIONAL STUDIES The in vitro activity of cefiderocol has been investigated in small independent and large-scale multinational surveillance studies. As part of the preclinical development of cefiderocol, large multinational surveillance studies (ie, SIDERO-WT studies) were initiated in North America and Europe [20][21][22]. In parallel, carbapenem-resistant isolates collected in Europe, North America, South America, and the Asia-Pacific region are being tested in the SIDERO-CR program [23]. In addition, several independent studies to determine cefiderocol activity have included collections of difficult-to-treat carbapenem-resistant pathogens gathered from various countries. 
The activity of cefiderocol in these studies was compared with that of the recently approved BL-BLI combinations, such as ceftolozanetazobactam and ceftazidime-avibactam. In vitro activity of cefiderocol has been demonstrated in a multinational randomized phase 2 clinical study that enrolled patients with complicated urinary tract infections, and a small proportion with acute uncomplicated pyelonephritis [43]. The majority of causative pathogens were Enterobacteriaceae spp, although a small proportion of patients were infected with P. aeruginosa. All species had cefiderocol MIC 90 values of ≤4 μg/mL, and only a small number of K. pneumoniae had a cefiderocol MIC of 8 μg/mL, suggesting a very high susceptibility rate among clinically relevant pathogens of urinary tract infections [43]. ACTIVITY AGAINST CARBAPENEMASE PRODUCERS The recently approved BL-BLI combination drugs, such as ceftazidime-avibactam, meropenem-vaborbactam, and imipenem-cilastatin-relebactam, have been shown to be active against only KPC and/or OXA-48 producers, and not against other carbapenemase-producing organisms, suggesting that rapid diagnosis of specific carbapenemase enzymes using molecular methods will be important in guiding antibiotic selection. Investigation of the carbapenemase production profile of the isolates from the SIDERO-CR-2014/2016 study showed some variation in the carbapenemase enzymes between regions ( Figure 2A) and countries ( Figure 2B) [38,44]. These new results from the SIDERO-CR study [44] showed that cefiderocol had potent activity against each carbapenemaseproducing organism, irrespective of the bacterial species and the carbapenemase molecular types (class A such as KPC and Guiana extended-spectrum β-lactamase [GES]; class B such as VIM, NDM, and IMP; and class D such as OXA-23, -24/40, -48, and -58) with MIC 90 values of 0.5-8 μg/mL (Figure 3). One feature was that cefiderocol was active against class B MBL producers such as NDM, VIM, and IMP; this finding is in contrast to the profile of the recently approved BL-BLI combination drugs [22]. The activity of cefiderocol against a broad range of pathogens harboring carbapenemases could be due to the unique mode of action of cefiderocol, which is described in detail by Sato and Yamawaki [45]. ACTIVITY AGAINST COLLECTIONS OF DIFFICULT-TO-TREAT PATHOGENS Among multiple independent studies, Rolston et al reported that cefiderocol showed activity against 478 gram-negative clinical isolates from the MD Anderson Cancer Center [46]. In this study, composed mostly of blood culture isolates, 97% had a cefiderocol MIC of ≤4 μg/mL. Cefiderocol was shown to be active against less common pathogens such as Achromobacter spp. (MIC 90 of 0.125 µg/mL), as well as MDR P. aeruginosa, Acinetobacter spp, and S. maltophilia, with MIC 90 values of 1, 4, and 0.25 μg/mL, respectively [46]. The authors also found that cefiderocol inhibited the growth at ≤8 µg/mL against clinical isolates of Pantoea spp, Sphingomonas paucimobilis, Rhizobium radiobacter, and Elizabethkingia meningoseptica. Separately, Robertson et al reported that cefiderocol was active against 185 clinical isolates of a biothreat pathogen, Burkholderia pseudomallei, which was isolated in Northern Australia, with an MIC 90 of 0.125 μg/mL [47]. These results show that cefiderocol has in vitro activity against a wide variety of gram-negative bacteria including carbapenemresistant Enterobacteriaceae and nonfermenters. 
The potent activity of cefiderocol against carbapenemresistant gram-negative bacteria including various carbapenemase producers was also demonstrated separately by external investigators who conducted multiple independent studies using a collection of difficult-to-treat carbapenemresistant pathogens. Cefiderocol was shown to be effective against a collection of carbapenem-resistant gram-negative pathogens from Greek hospitals. Activity against A. baumannii (n = 107), P. aeruginosa (n = 82), K. pneumoniae (n = 244), and Enterobacter cloacae (n = 14) was demonstrated by MIC 90 values of 0.5, 0.5, 1, and 1 μg/mL, respectively [48]. In a study that investigated a worldwide collection of isolates from hospitalized patients, cefiderocol was shown to be active with an [49]. A recent study conducted by Public Health England investigated the activity of cefiderocol against 210 carbapenemresistant nonfermenting clinical isolates from the United Kingdom with diverse carbapenemase production profiles [51]. Against 111 carbapenem-resistant P. aeruginosa clinical isolates from the United Kingdom, cefiderocol inhibited 86.5% of all isolates at ≤4 μg/mL, except for those with NDM (72.7%) and Pseudomonas extended resistant (PER) (73.3%) β-lactamases. Against 99 carbapenem-resistant A. baumannii clinical isolates, cefiderocol inhibited 88.9% of all isolates at ≤4 μg/mL, except for those with NDM (80.0%) [51]. Cefiderocol was also shown to be effective against wellcharacterized carbapenem-resistant Enterobacteriaceae, including the isolates with the mutation in KPC genes involving the Ω-loop insertion, which has been reported to occur in patients during treatment with ceftazidime-avibactam. In this collection of isolates, cefiderocol showed an MIC 90 of 4 μg/ mL and an 8% resistance rate, whereas ceftazidime-avibactam showed an MIC 90 of >8 μg/mL and a 14% resistance rate [52]. In summary, cefiderocol has been shown to have potent antimicrobial activity against a wide variety of carbapenemaseproducing carbapenem-resistant bacterial species that were collected globally or from selected countries. The acquisition of resistance to cefiderocol mediated by specific carbapenemase production has not been observed, although the susceptibility rate was lower for NDM producers than for other carbapenemase-producing organisms. Further investigations have revealed that cefiderocol resistance could be reverted in the presence of BLIs, suggesting that the production of β-lactamases might have been responsible for the elevated cefiderocol MICs in almost all cases. In MBL-producing organisms, including NDM-producing Enterobacteriaceae, the elevated cefiderocol MIC was not decreased in the presence of metallo-β-lactamase inhibitors, although the MIC was decreased when both metallo-and serine-β-lactamases were inhibited. These results suggest that not only NDM production but also the simultaneous production of NDM and some serine-β-lactamases could lead to cefiderocol resistance. Against all non-NDM/VIM producers, mainly PERproducing A. baumannii, cefiderocol resistance was suppressed by the addition of avibactam. This suggests that combination with avibactam could be effective against cefiderocol-resistant isolates without MBL production. However, cefiderocol showed an MIC 90 of 1 μg/mL against other PER-producing bacteria, suggesting that PER expression alone might not be the cause of cefiderocol resistance. 
From these results, cefiderocol resistance in clinical isolates was mainly due to the production of β-lactamases, and PER and NDM could be important factors responsible for cefiderocol resistance, although NDM or PER production alone might not be sufficient to cause cefiderocol resistance [27,53]. CONCLUSIONS Cefiderocol shows potent activity against both carbapenemsusceptible and nonsusceptible/resistant gram-negative bacteria, almost irrespective of the type of carbapenemases, although some NDM producers showed an elevated MIC. The enhanced activity of cefiderocol against gram-negative bacteria could be due to the combination of rapid penetration via iron transport channels and high stability to both serineand metallo-carbapenemases. The ongoing SIDERO-WT and SIDERO-CR surveillance studies will continue to monitor resistance rates to cefiderocol globally.
2019-09-17T02:48:06.954Z
2019-11-13T00:00:00.000
{ "year": 2019, "sha1": "b0eb39e894852805ed8b52cde6119d8797aa6beb", "oa_license": "CCBYNCND", "oa_url": "https://academic.oup.com/cid/article-pdf/69/Supplement_7/S544/30678473/ciz827.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "81c63efed8a338bd5003bd16c259363030a96306", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
237634974
pes2o/s2orc
v3-fos-license
Virial clouds evolution from the last scattering up to the formation of first stars The asymmetry in the cosmic microwave background (CMB) towards several nearby galaxies detected by Planck data is probably due to the rotation of "cold gas" clouds present in the galactic halos. In 1995 it had been proposed that galactic halos are populated by pure molecular hydrogen clouds which are in equilibrium with the CMB. More recently, it was shown that this equilibrium could be stable. Nevertheless, the cloud chemical composition is still a matter to be studied. To investigate this issue we need to trace the evolution of these virial clouds from the time of their formation to the present, and to confront the model with the observational data. The present paper is a short summary of a paper [1]. Here we only concentrate on the evolution of these clouds from the last scattering surface (LSS) up to the formation of the first generation of stars (population-III stars). Introduction The study of the nature of galactic halos and their dynamics is a task that is difficult to address. Here we present a summary of a more detailed analysis addressing this issue [1]. In 1995 it was proposed that a fraction, f, of the missing baryons is present in the galactic halos in the form of pure molecular hydrogen (H 2 ) clouds which are in equilibrium with the CMB [8]. The difficulty lay in observing such "chameleons" merged with the background. One of the suggestions was to look for a Doppler shift effect due to the rotation of galaxies, assuming that the rotation of these clouds is synchronized with the rotation of the galactic halos, so that they should be Doppler shifted: clouds rotating towards us should give a blue-shifted contribution, while those rotating away from us would give a red-shifted contribution. In 2011 WMAP data was analyzed for M31. The analysis revealed a temperature asymmetry in the CMB which was almost frequency independent [10], which was a strong indication of the Doppler shift effect due to the galactic halo rotation. This opens up a window to observe these clouds and to study the baryonic content of the galactic halos [11]. Soon after WMAP, in 2014 Planck data towards M31 was analyzed and the asymmetry was seen at a more precise level [12]. A temperature asymmetry was also detected towards several nearby spiral galaxies [13][14][15][16]. There was more than one item of evidence of the predicted Doppler shift effect, but observing that there is a Doppler shift due to the halo rotation does not reveal the true nature of the effect, i.e. whether it is partially or fully due to the molecular clouds in the halos, or whether there is anything else that could give a masking or mimicking effect in the asymmetry. Another unanswered question is the chemical composition of these clouds: are they pure H 2 clouds, or is there some contamination of dust or heavier molecules in them? It is quite obvious that galactic halos contain a significant fraction of dust that should contaminate these clouds [18], so one needs to model the clouds. As these clouds should survive on account of the virial theorem, they were called "virial clouds". These clouds were modeled and it was seen that at the current CMB temperature the central density of pure H 2 clouds was ≈ 1.60 × 10 −18 kg m −3 , while the mass and radius were ≈ 1.93 × 10 −4 M ⊙ and ≈ 0.032 pc, respectively [19]. The change in their physical parameters with the contamination of heavier molecules and dust was also estimated.
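As a rough consistency check on the numbers just quoted, the mean density implied by the quoted mass and radius can be compared with the quoted central density. The sketch below only does this arithmetic, using standard values for the solar mass and the parsec; it is meant solely to illustrate that the clouds are strongly centrally concentrated.

```python
import math

M_SUN = 1.989e30      # kg
PARSEC = 3.086e16     # m

# Values quoted in the text for a pure H2 virial cloud at the present CMB temperature.
mass = 1.93e-4 * M_SUN        # kg
radius = 0.032 * PARSEC       # m
central_density = 1.60e-18    # kg m^-3

volume = (4.0 / 3.0) * math.pi * radius**3
mean_density = mass / volume

print(f"mean density    ~ {mean_density:.2e} kg m^-3")
print(f"central density ~ {central_density:.2e} kg m^-3")
print(f"mean / central  ~ {mean_density / central_density:.2f}")
```

On these inputs the mean density comes out near 10^−19 kg m^−3, well below the quoted central density, as expected for a profile that falls from a central maximum to zero at the cloud boundary.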
It was seen that as the contamination of dust and heavier molecules was increased in the clouds, they became denser, and their mass and radius decreased [19][20][21]. An objection was raised on the stability of these clouds, as it was believed that molecular clouds cannot be stable at this very low CMB temperature, since there would be no mode that could be excited by the photons and the cloud might collapse to form stars or other planetary objects [22]. However, it was demonstrated that this equilibrium does arise on account of the translational mode, despite its extremely small probability, because of the size of the virial clouds and the time scales available for thermal equilibrium to be reached, so that the time required for thermalization is much less than that required for collapse [19]. Modeling the virial clouds, estimating the change in their physical parameters with the contamination of heavier molecules and dust, and observing the CMB temperature asymmetry with Planck data still do not answer the question of the nature of virial clouds and the exact cause of the observed asymmetry in the CMB. One has to run the clock back and trace the evolution of virial clouds from the time they were formed at the LSS to the present. This task needs to be done in two phases: (i) from the LSS up to the formation of population-III stars; and (ii) from the formation and explosion of population-III stars to the present. Here we discuss the first part of the evolution, as various qualitative changes took place during the formation of population-III stars [23]. Hence there will be significant changes during the second step, which will be studied more clearly and in more detail later. First epoch of virial clouds evolution Virial clouds would have formed at z = 1100, the last scattering time (LSS), and they evolved in their chemical composition and physical parameters, but in order to maintain their stability they survived the collapse and stayed in quasi-static equilibrium with the CMB since then. It is quite obvious that when they formed they should have had the primordial chemical composition, i.e. ∼ 75% atomic hydrogen and ∼ 25% helium. In addition to H and He there were other atoms and molecules like deuterium, helium-3, lithium and molecular hydrogen, which could have contributed to the virial clouds, but their fraction was negligible as compared to H and He. Hence these molecules and atoms could not have any significant effect on the virial cloud physical parameters. As a result the ratio of atomic hydrogen and helium would remain the same and these would be the main components forming the virial clouds during this period, but there should be a fast change in their chemical composition after the formation and explosion of population-III stars. Hydrogen-Helium virial clouds Since virial clouds must be considered to be in thermal equilibrium because they are embedded in the heat bath of the CMB, we need to use the canonical distribution function for a fixed temperature and use the cooling of the heat bath to provide a quasi-equilibrium. Moreover, these clouds should start to form in the potential well of cold dark matter (CDM). As the clouds are thermalized, the potential well will not cause them to collapse to form population-III stars [24], but will modify the physical parameters of the virial clouds. To obtain general expressions we consider a virial cloud composed of an arbitrary mixture of H and He, with mass fractions α and β.
Then, we use the primordial cosmological fractions of H and He for the final computation. The total mass of the cloud is, obviously, M cl (r) = αM H (r) + βM He (r), with the condition α + β = 1. The density distribution for the two fluids, ρ cl (r), is given by eq. (1) of [19], and the corresponding differential equation for r dρ cl (r)/dr is eq. (2) of [19] (the full expressions are given there); the constant appearing in them is τ = (8/3 √ 3)[(Gρ cH ρ cHe ) 3/2 /(k B T ) 9/2 ][m H m He ] 5/2 , where ρ cH is the central density of the H cloud, ρ cHe is the central density of the He cloud, m H is the mass of a single hydrogen atom, and m He is the mass of a single helium atom. We use eq. (2) to estimate the central density of the clouds. We estimated the central density, Jeans mass and radius of the two-fluid virial clouds with the primordial fractions of H and He, i.e. α = 0.75 and β = 0.25. In order to solve eq. (1) numerically, we assumed a guess value of ρ c at a fixed temperature and estimated where the density becomes exactly zero at the boundary. We then compared the radius at which the density vanishes for that central density with the Jeans radius available to us, and adjusted the central density so that the density becomes zero exactly at the Jeans radius. In this way we get a self-consistent solution of the differential equation subject to the given boundary conditions. Next, we decrease the temperature and repeat the process. It was seen that with the decrease in temperature and redshift, the density of the virial clouds increased: at z = 1100 the density of these clouds was ≈ 8.53 × 10 −20 kg m −3 and at z = 50 it was ≈ 6.05 × 10 −19 kg m −3 . On the other hand the mass decreased with the decrease in temperature, and so did the radius of the clouds. The clouds were large, ≈ 50 pc in radius, with a mass of ≈ 3.05 × 10 5 M ⊙ at z = 1100; by z = 50 the clouds were ≈ 3.54 pc in radius with a mass of ≈ 7.54 × 10 3 M ⊙ . Results and discussion The first stage of the virial cloud evolution turns out to be quite simple. We have seen that virial clouds became denser with time, and they lost mass and shrank in size. The physics of the second stage of evolution will not be that simple, since there will be cooling due to the formation of molecular hydrogen, turbulence and angular momentum effects in the virial cloud after the formation and explosion of population-III stars. We need to consider all these things while tracing the second epoch of evolution, and this will be done in a later paper. One could expect molecular hydrogen to be formed in the first epoch, but during the era studied in the present summarized paper any H 2 molecules formed in the clouds will be unstable and dissociate due to the radiation pressure at that time, since H 2 needs dust particles to remain stable. The molecular cooling of such clouds played a vital role after recombination, and their cooling rate has been analyzed from T ∼ 120−3 K [25]. Hence, we do not need to consider the effect of cooling from molecular hydrogen until population-III stars exploded. It might also be expected that electronic transitions in hydrogen and helium atoms could be excited by the CMB photons. One can check that most of the CMB photons at 3000 K do not have enough energy to excite the electrons of a hydrogen atom from its ground state, but the higher energy tail does have sufficient energy to do so. The percentage of such photons is 0.016, which is quite low. Hence, there will be a negligible effect of this mode, but these effects will be more significant in the later stage of evolution.
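The self-consistent solution described above is essentially a shooting problem: guess a central density, integrate the density profile outward, and adjust the guess until the density vanishes exactly at the Jeans radius for that temperature. The sketch below illustrates the procedure with SciPy. Note that the gradient law used here is only a qualitative placeholder, since the full two-fluid equation is given in [19] and not reproduced in this summary, and the constants, units and numerical ranges are purely illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

RHO_REF = 1.0e-19    # reference density in kg m^-3 (illustrative scaling only)
K_PROFILE = 1.0e-3   # constant of the toy profile law (illustrative)

def drho_dr(r, y, rho_c):
    # Toy gradient law standing in for eq. (2) of [19]: the profile decreases
    # monotonically and reaches zero at a finite radius that shrinks as rho_c grows.
    rho = max(y[0], 0.0)
    return [-K_PROFILE * rho_c * r * (rho / RHO_REF) ** 0.5]

def edge_radius(rho_c, r_max=500.0):
    """Integrate outward from the centre; return the radius (pc) where the density vanishes."""
    def nearly_zero(r, y, rho_c):
        return y[0] - 1e-6 * rho_c          # treat a tiny fraction of rho_c as 'zero'
    nearly_zero.terminal = True
    nearly_zero.direction = -1
    sol = solve_ivp(drho_dr, (1e-6, r_max), [rho_c], args=(rho_c,),
                    events=nearly_zero, max_step=1.0)
    return sol.t_events[0][0] if sol.t_events[0].size else np.inf

def central_density_for(r_jeans, lo=1e-22, hi=1e-16, steps=80):
    """Bisection on the central density until the profile vanishes at the Jeans radius."""
    mid = (lo * hi) ** 0.5
    for _ in range(steps):
        mid = (lo * hi) ** 0.5              # geometric mean: rho_c spans many decades
        r_edge = edge_radius(mid)
        if abs(r_edge - r_jeans) < 1e-3 * r_jeans:
            break
        if r_edge > r_jeans:                # cloud too extended -> raise the central density
            lo = mid
        else:
            hi = mid
    return mid

print(central_density_for(50.0))            # rho_c giving a 50 pc cloud in this toy model
```

In the actual calculation the temperature is then lowered step by step and the adjustment repeated, which is what produces the run of central densities, masses and radii with redshift quoted above.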
2021-09-27T01:15:55.766Z
2021-09-24T00:00:00.000
{ "year": 2023, "sha1": "ab4279b39b437b148ed1cbb137122ae2c4687357", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "ab4279b39b437b148ed1cbb137122ae2c4687357", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
235655951
pes2o/s2orc
v3-fos-license
Novel Functional Genes Involved in Transdifferentiation of Canine ADMSCs Into Insulin-Producing Cells, as Determined by Absolute Quantitative Transcriptome Sequencing Analysis The transdifferentiation of adipose-derived mesenchymal stem cells (ADMSCs) into insulin-producing cells (IPCs) is a potential resource for the treatment of diabetes. However, the changes of genes and metabolic pathways on the transdifferentiation of ADMSCs into IPCs are largely unknown. In this study, the transdifferentiation of canine ADMSCs into IPCs was completed using five types of procedures. Absolute Quantitative Transcriptome Sequencing Analysis was performed at different stages of the optimal procedure. A total of 60,151 transcripts were obtained. Differentially expressed genes (DEGs) were divided into five groups: IPC1 vs. ADSC (1169 upregulated genes and 1377 downregulated genes), IPC2 vs. IPC1 (1323 upregulated genes and 803 downregulated genes), IPC3 vs. IPC2 (722 upregulated genes and 680 downregulated genes), IPC4 vs. IPC3 (539 upregulated genes and 1561 downregulated genes), and Beta_cell vs. IPC4 (2816 upregulated genes and 4571 downregulated genes). The gene ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) enrichment analysis of DEGs revealed that many genes and signaling pathways that are essential for transdifferentiation. Hnf1B, Dll1, Pbx1, Rfx3, and Foxa1 were screened out, and the functions of five genes were verified further by overexpression and silence. Foxa1, Pbx1, and Rfx3 exhibited significant effects, can be used as specific key regulatory factors in the transdifferentiation of ADMSCs into IPCs. This study provides a foundation for future work to understand the mechanisms of the transdifferentiation of ADMSCs into IPCs and acquire IPCs with high maturity. INTRODUCTION Diabetes encompasses a group of lifelong metabolic diseases, and common drug therapies are not able to cure it. Longterm use of drugs and the continuous injection of insulin greatly reduce the patient's quality of life, and strengthening of treatment increases risk of hypoglycemic coma and can even be life-threatening (Cattin, 2016;Harreiter and Roden, 2019;Petersmann et al., 2019). Thus, a safe and effective treatment for diabetes and its complications is urgently needed. At first, islet transplantation was considered as an excellent approach for curing diabetes, but its clinical application was greatly limited by lack of islet donor sources, low islet survival rate in vitro and immune rejection after transplantation (Bruni et al., 2014;Gamble et al., 2018;Rickels and Robertson, 2019). Therefore, the search for insulin-producing cells (IPCs) from other sources to replace islet transplantation has become an active area of research. Adipose-derived mesenchymal stem cells (ADMSCs) are abundant in sources, are easily isolated and cultivated, exhibit pluripotent differentiation potential and show low immunogenicity after transplantation, serving as ideal seed cells for the treatment of diabetes and its complications (Kim et al., 2010;Zhang et al., 2016;Takemura et al., 2019;Wada and Ikemoto, 2019;Tokuda et al., 2020). Numerous small molecule compounds, growth factors, activators, and inhibitors can transdifferentiate ADMSCs into IPCs and improve IPCs survival and ability to release insulin in vitro (Dayer et al., 2017;Anjum et al., 2018;Ikemoto et al., 2018;Shahjalal et al., 2018;Pavathuparambil Abdul Manaph et al., 2019;Ghoneim et al., 2020). 
The overexpression of Pdx1, Neurog3, MafA, and Pax4 can improve transdifferentiation efficiency and insulin secretion (Limbert et al., 2011;Xu et al., 2017;Zhu et al., 2017;Dayer et al., 2019). However, various transdifferentiation methods have not been systematically compared with one another, leaving various methods in a chaotic state, and opinions on transdifferentiation efficiency vary. Moreover, the changes of genes and metabolic pathways on the transdifferentiation of ADMSCs into IPCs is largely unknown. Thus, ADMSCs transdifferentiate into functional mature beta cells need more research. In this study, canine ADMSCs were transdifferentiated into IPCs by five types of procedures, the optimal procedure was determined through comparison. Absolute Quantitative Transcriptome Sequencing Analysis was performed at different stages of the optimal procedure to study the changes in genes and metabolic pathways during the transdifferentiation process for the first time. The datasets obtained provided important reference value for the study of the transdifferentiation of ADMSCs into IPCs, islet development and canine genes pool. Five functional genes were screened out. The functions of these genes were verified by overexpression and silencing. This study provides a foundation for future work to understand the mechanisms of the transdifferentiation of ADMSCs into IPCs and acquire IPCs with high maturity. Animal All the dogs (Beagle, Female, 2-5 years old) were purchased from Northwest Agriculture and Forestry University Animal Laboratories (Xian, China). All of the dogs were reared, obtained, and housed in accordance with our institute's laboratory animal requirements, the dogs were kept in cages in a feeding room without purification equipment at a temperature of 18-25 • C, humidity of 40-60%, airflow value of 0.13-0.18 m/s, ventilation rate of 10-20 times per hour, light normal, noise below 60 dB, and all procedures and the study design were conducted in accordance with the Guide for the Care and Use of Laboratory Animals (Ministry of Science and Technology of China, 2006) and were approved by the Animal Ethical and Welfare Committee of Northwest Agriculture and Forest University (Approval No: 2020002). Isolation and Culture of Canine ADMSCs Canine inguinal adipose tissue was obtained by aseptic surgery. The adipose tissue was minced using a sterile scissors and placed in a 50-mL sterile tube with triple volume of 0.1% type I collagenase (Sigma, Ronkonkoma, NY, United States) solution, the tube was transferred to shaker at 180 r/min, 37 • C for 60 min (Raposio et al., 2017). α-MEM Medium [MEM Alpha Modification Medium (Gibco, Waltham, MA, United States) supplemented with 10% fetal bovine serum (Zeta Life, Menlo Park, CA, United States), 100 U/mL penicillin (Sigma, Ronkonkoma, NY, United States), 0.1 mg/mL streptomycin (Sigma, Ronkonkoma, NY, United States) and 0.5 µg/mL Mycoplasma Removal Agent (MP Biomedicals, Irvine, CA, United States)] was used to stop digestion, the tube was centrifuged at 1000 r/min for 5 min, and the upper suspension and floating fat were discarded (Zuk et al., 2001;Vieira et al., 2010). Cells were resuspended in the α-MEM Medium, the suspension was filtered by 200-mesh sieves and centrifuged at 1000 r/min for 5 min, and the supernatant was discarded. Cells were resuspended in α-MEM Medium, transferred to a 60mm cell culture dish (ThermoFisher Scientific, Waltham, MA, United States), and cultured in a carbon dioxide incubator at 37 • C, 5% CO2 (Palumbo et al., 2015). 
When the cells grew to 90%, they were digested with trypsin (Gibco, Waltham, MA, United States) and passaged at a ratio of 1:3. Identification of Canine ADMSCs The fourth-generation canine ADMSCs were inoculated into 96well plates at 5 × 10 2 cells per well for a total of 44 wells. Contents were taken from four wells every day, and the MTT Cell Proliferation Assay Kit (ThermoFisher Scientific, Waltham, MA, United States) was used to determine the proliferation of cells with continuous determination for 11 days. Transdifferentiation of ADMSCs Into IPCs Based on published studies, appropriate modifications to transdifferentiation procedures were developed. Finally, we used five types of procedures (Figure 1 and Supplementary Material 1) to transdifferentiate canine ADMSCs into IPCs for screening. In the initial study, the procedure 1 (Sun et al., 2007;Mohamed et al., 2016), the procedure 2 (Zhang et al., 2010), and the procedure 3 (Wang et al., 2020) can transifferentiated mesenchymal stem cells into IPCs, the procedure 4 (Pagliuca et al., 2014), and the procedure 5 (Rezania et al., 2012(Rezania et al., , 2014 can transdifferentiated induced pluripotent stem cells into IPCs. The canine ADMSCs were plated in six-well plates, and the cells were transdifferentiated when the cell density reached 75%. After transdifferentiation, cells morphology was observed, the numbers of islet-like cells were counted, and their diameters were measured. The cells were stained with dithizone using the Dithizone dyeing solution (PB9012, Coolaber, China). RT-qPCR The expression of islet β-cell related genes and differentially expressed genes (DEGs) was detected by RT-qPCR in cells (Nolan et al., 2006). Canine ADMSCs were used as the control group, and GADPH was used as the reference gene. The TaKaRa MiniBEST Universal RNA Extraction Kit (TaKaRa, Japan) was used to extract RNA from cells. The PrimeScript TM RT Master Mix (Perfect Real Time) (TaKaRa, Japan) was used to prepare cDNA. Reactions were conducted according to the Maxima SYBR Green/ROX qPCR Master Mix (ThermoFisher Scientific, Waltham, MA, United States) manual, and RT-qPCR was performed using the Step One Plus Real-Time PCR System (Applied Biosystems, Bedford, MA, United States). Three biological replicates and three technical replicates were used to determine the Ct values. The expression levels of the tested genes were determined from the Ct values, as calculated by 2 − Ct (Livak and Schmittgen, 2001). Glucose-Stimulated Insulin Secretion Cells were washed with PBS 3 times, 5 mM glucose was added, the cells were incubated for 30 min, and the supernatant was collected. Next, cells were washed three times in PBS and incubated in 25 mM glucose for 30 min, and supernatants were collected. Finally, cells were washed three times in PBS and incubated in 5 mM glucose and 30 mM KCl for 30 min, and the supernatant was collected. Cell masses were dispersed into single cells, and cells were counted. Supernatant samples containing secreted insulin were processed using the Human/Canine/Porcine Insulin Quantikine ELISA Kit (R&D Systems, Minneapolis, MN, United States). The glucose stimulation index (SI), which measures the sensitivity of IPCs to glucose, was obtained by dividing the amount of insulin secreted at the high glucose (25 mM) level by the amount of insulin secreted at the low glucose (5 mM) level. Absolute Quantitative Transcriptome Sequencing Analysis Among the five procedures, the optimal procedure was selected. 
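As a minimal illustration of the two quantitative read-outs defined in the preceding methods, relative expression by the 2^−ΔΔCt method of Livak and Schmittgen and the glucose stimulation index (insulin secreted at 25 mM glucose divided by insulin secreted at 5 mM glucose), the sketch below shows the arithmetic. The Ct values are invented placeholders, and the insulin values are merely of the magnitude later reported for procedure 5; neither are measurements from this study.

```python
def relative_expression(ct_target_sample, ct_ref_sample, ct_target_control, ct_ref_control):
    """2^-ddCt relative expression (Livak and Schmittgen, 2001).
    'ref' is the reference gene (GADPH in this study); 'control' is the
    untransdifferentiated ADMSC sample."""
    d_ct_sample = ct_target_sample - ct_ref_sample
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

def stimulation_index(insulin_high_glucose, insulin_low_glucose):
    """Glucose stimulation index: insulin at 25 mM divided by insulin at 5 mM glucose."""
    return insulin_high_glucose / insulin_low_glucose

# Placeholder numbers, for illustration only.
print(relative_expression(ct_target_sample=24.1, ct_ref_sample=18.0,
                          ct_target_control=30.3, ct_ref_control=18.2))          # ~64-fold
print(stimulation_index(insulin_high_glucose=125.2, insulin_low_glucose=53.1))   # ~2.36
```

With insulin values of the magnitude later reported for procedure 5 (roughly 125 IU per 10^5 cells at 25 mM versus roughly 53 IU at 5 mM), this arithmetic reproduces a stimulation index of about 2.4.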
Absolute Quantitative Transcriptome Sequencing Analysis (Kivioja et al., 2011) was performed on the cells obtained by the optimal transdifferentiation procedure. Cells were collected at 5 days (IPC1), 11 days (IPC2), 19 days (IPC3), 25 days (IPC4) of transdifferentiation. Canine ADMSCs were used as a negative control, and canine pancreatic islets were used as a positive control. Two biological replicates were completed for each sample. Total RNA was extracted using TRIzol R reagent (Invitrogen, Waltham, MA, United States) following the manufacturer's protocol. mRNA was enriched, and cDNA was synthesized and connected to adapters using unique molecular identifiers (UMIs). Illumina Sequencing Platform HiSeq 2500 was used for sequencing, and the sequencing length was PE150 (Sequencing was conducted by LC Sciences, LLC.). Removal of duplicates and data error correction were performed based on genome location and UMI tagging (FastQC 0.10.1, RSeQC 2.3.9, UMI_tools 0.5.4). High quality clean reads were generated from the assembly library by filtering. HISAT2 2.0.4 (Daehwan et al., 2015) was performed to align high-quality reads with the Canis lupus familiaris genome, 4 and the alignment rate was calculated. StringTie 1.3.4d (Pertea et al., 2015) was used for transcript splicing and merging. Gene expression was calculated by the fragments per kilobase per million (FPKM) method (Roberts, 2011). And edgeR (Robinson et al., 2010) was used for transcript quantification. The DEGs were selected with log2 (fold change) ≥ 2 or log2 (fold change) = −2 and the p-value = 0.05 by R package-edgeR (Robinson et al., 2010). If too many genes meet this p-value condition, use the q-value (the fold discovery rate p-value correction) = 0.05 for further screening (Flenniken and Andino, 2013). In many cases, it's more rigorously corrected and there are fewer differential genes. The R pheatmap toolkit 5 was used for hierarchical clustering analysis. We used GOseq (Young et al., 2010) for gene ontology (GO) enrichment analysis, 6 Kyoto Encyclopedia of Genes, and Genomes (KEGG) 7 for KEGG enrichment analysis in the DEGs. Adenovirus-Mediated Gene Overexpression The cDNA sequences for genes were synthesized by Wuhan Gene Create Biological Engineering Co., Ltd. (Wuhan, China). The cDNA for each gene was PCR-amplified and inserted into pAdTrack-CMV previously digested with BglII/HindIII using the In-Fusion HD cloning kit (Takara, Japan) (Sleight et al., 2010). The resulting plasmid was linearized with PmeI, and then homologous recombination was performed with pAdeasy-1 in E. coli BJ5183. The recombinant plasmid with PacI linearization was used to transfect AAV-293 cells to produce recombinant adenovirus particles using the Advanced DNA RNA transfection reagent (Zeta Life, Menlo Park, CA, United States). RT-qPCR (Nolan et al., 2006) and Western-blot (following the Elabscience Western blot detection kit operation manual) were used to detect the expression of target gene. The canine ADMSCs were transfected with the recombinant adenovirus particles. Immunofluorescent Staining Cells were fixed with 4% paraformaldehyde for 10 min and washed with PBS three times. Next, the cells were permeabilized for 15 min with 0.1% Triton X-100 (Sigma, Ronkonkoma, NY, United States) and washed with PBS three times. Cells were blocked with goat serum for 30 min and incubated with primary antibodies (1:600; Abcam, United Kingdom) at 4 • C overnight. 
After washing with PBS three times, cells were incubated with secondary antibodies (1:600; Abcam, United Kingdom) for 1 h at 37 • C in the dark, and again washed with PBS three times. Nucleus counterstaining was performed with 10 µg/mL Hoechst 33342 (Sigma, Ronkonkoma, NY, United States). Fluorescence images were obtained with an inverted fluorescence microscope (Sunny Optical Technology Company Limited, ICX41, China). siRNA-Mediated Gene Silencing Fluorescein-labeled siRNA (Small interfering RNA) was synthesized and purified by Gene Pharma Co., Ltd. (Shanghai, China). Each gene was designed with three siRNAs. In the process of transdifferentiation, the canine ADMSCs were transfected with siRNAs using the Advanced DNA RNA transfection reagent (Zeta Life, Menlo Park, CA, United States). RT-qPCR (Nolan et al., 2006) was used to detect the silencing efficiency of siRNAs and the one with the highest silencing efficiency was selected. The gene was silenced every 7 days throughout the transdifferentiation to keep it silent. Statistical Analysis Assays were repeated three times. One-way analysis of variance (ANOVA) was used for the statistical comparisons among groups. The tests were performed using IBM SPSS Statistics 25 software (SPSS Inc., Chicago, IL, United States). Canine ADMSC Isolation, Culture, and Identification The fourth-generation isolated cells showed long spindle type and adherent growth (Supplementary Figure 2A) and proliferated rapidly in the 3-7 day after adherent growth, exhibiting logarithmic growth (Supplementary Figure 2B). The cells were tested by flow cytometry, and the expression levels of CD13, CD29, CD44, CD73, CD90, and CD105 were positive. Expression levels for CD31, CD45, and CD235a were negative (Supplementary Figure 2C). In osteogenic differentiation, the cells grew in clusters; Alizarin Red staining showed redstained calcified nodules. In chondrogenic differentiation, the cells gathered and grew, and blue staining was observed after Alcian Blue staining. In adipogenic differentiation, large areas of fat droplets were observed by Oil Red O staining (Supplementary Figure 2D). These results prove that the isolated cells were canine ADMSCs. Transdifferentiation of ADMSCs Into IPCs The canine ADMSCs were transdifferentiated into IPCs using five types of procedures. In procedure 1, the cells did not form into clusters, and no obvious islet-like cells were found. In procedures 2, 3, 4, and 5, the cells agglomerated into a spherical shape, with obvious islet-like cells; the resulting islet-like cells were scarlet with dithizone staining (Figure 2A). There were also differences in the number of cell cluster from procedures 2, 3, 4, and 5, among which the number of procedure 5 cells was the greatest, reaching 123 ± 10.72/10 6 canine ADMCS, with a cell cluster of 99.75 ± 8.26 µm in diameter and an average of 110.50 ± 18.81 cells per cell cluster ( Figure 2B). For procedure 1, after stimulation with low glucose (5 mM), high glucose (25 mM), 5 mM glucose, and 30 mM KCl, the secretion of insulin was significantly lower than values for procedures 2, 3, 4, and 5. The secretion of insulin was the highest in procedure 5, with 53.13 ± 3.39 IU/10 5 cells in low glucose, 125.23 ± 4.35 IU/10 5 cells in high glucose and 127.02 ± 4.66 IU/10 5 cells in 5 mM glucose and 30 mM KCl; the second highest level of insulin secretion was observed for procedure 4, but all these values lower than for the mature islet cells group ( Figure 2C). 
The glucose SI after procedure 1 was significantly lower than values for procedures 2, 3, 4, and 5; the SI of procedure 5 was 2.36 ± 0.11, showing the most favorable response to glucose stimulation (Figure 2D). RT-qPCR was performed on the cells (the primers are provided in Supplementary Table 1 at Supplementary Material 3). As shown in Figure 3, the expression level of each gene (canine ADMSCs were used as the control group, GADPH was used as the reference gene, and the relative expression of the genes was calculated by 2^−ΔΔCt) was elevated following procedure 5 and significantly different from the values for procedures 1, 2, 3, and 4. According to the above results, procedure 5 exhibited the highest transdifferentiation efficiency.
FIGURE 2 | The transdifferentiation of ADMSCs into IPCs using five types of procedures. (A) Cellular morphology and dithizone staining. (B) The number of cell clusters in procedures 2, 3, 4, and 5 was significantly greater than in procedure 1 (####p < 0.0001). There were also differences in the number of cell clusters among procedures 2, 3, 4, and 5 (****p < 0.0001; **p < 0.01) (n = 4). (C) After stimulation with low glucose (5 mM), high glucose (25 mM), and 5 mM glucose with 30 mM KCl, the secretion of insulin in procedure 1 was significantly lower than that of procedures 2 and 3 (ˆˆˆˆp < 0.0001). The secretion of insulin in procedures 4 and 5 was higher than that of procedures 2 and 3 (####p < 0.0001), and procedure 5 was higher than procedure 4 (*p < 0.05), but all were lower than the mature islet cells group (****p < 0.0001) (n = 4). (D) The glucose Stimulation Index (SI) of procedure 1 was significantly lower than that of procedures 2, 3, 4, 5, and mature islet cells (####p < 0.0001); procedure 5 was the highest of the five procedures (****p < 0.0001; ***p < 0.001; *p < 0.05) (n = 4).
Quality Control of Sequencing Data and Genes Expression Level Analysis Absolute Quantitative Transcriptome Sequencing Analysis was performed for the four stages of procedure 5 (divided into IPC1, IPC2, IPC3, and IPC4). Canine ADMSCs were used as a negative control (labeled ADSC) and mature canine islet cells as a positive control (labeled Beta_cell). After sequencing, the raw reads were saved in FASTQ format. Supplementary Table 1 in Supplementary Material 4 lists the data quality throughout the data analysis process. After data processing, highly reliable data were aligned to the canine reference genome to obtain comprehensive transcript information (Supplementary Table 2 in Supplementary Material 4). The regional distribution aligned with the reference genome is shown in Supplementary Material 5.
FIGURE 3 | RT-qPCR analysis. The expression level of each gene in procedure 5 was elevated and was significantly higher than in procedures 1, 2, 3, and 4; there was no difference among procedures 1, 2, and 3. The expression of Pdx1, MafA, Nkx6.1, and Ins in procedure 4 was significantly higher than in procedures 1, 2, and 3 (****p < 0.0001; ***p < 0.001; **p < 0.01; *p < 0.05). The expression level of all genes in procedures 1, 2, 3, 4, and 5 was significantly lower than that of mature islet cells (####p < 0.0001).
The result of the gene expression level analysis (transcriptome expression profile and gene expression profiling) is shown in Supplementary Material 6; a total of 60,151 transcripts were obtained. The problem of oncological transformation of stem cells is acute in the development of molecular stem cell technologies.
In this study, tumor markers, such as Cd133, A2b5, Ssea-1 were not expressed, and Myc had low expression in ADSC, IPC1, IPC2, IPC3, IPC4, and Beta_cell. GO Functional Enrichment Analysis and KEGG Pathways Enrichment Analysis Supplementary Material 8 and Figure 5A show the GO enrichment analysis for all DEGs. The fold discovery rate p-value correction (FDR, q-value) is also provided in the results. There were 47 GO Terms enriched by over 100 DEGs (p-value = 0.05 and q-value = 0.05), 26 GO Terms enriched by over 200 DEGs (p-value = 0.05 and q-value = 0.05). Supplementary Material 9A shows the GO enrichment analysis of each group. According to the size of p-values and q-values, the 20 most significant GO terms were selected as dot plots (Figure 5B), Supplementary Material 9B shows the 20 most significant GO terms for each group. In this study, scatterplots were used to visually demonstrate KEGG pathway enrichment results of DEGs. The 20 KEGG pathways with the most significant expression were selected according to p-values and q-values ( Figure 5C). Supplementary Material 11 shows the 20 most significant KEGG pathways for each group. There were 23 KEGG Pathways enriched by over 100 DEGs (p-value = 0.05 and q-value = 0.05), 3 KEGG Pathways enriched by over 200 DEGs (p-value = 0.05 and q-value = 0.05) (Supplementary Material 12). RT-qPCR Verification of Absolute Quantitative Transcriptome Sequencing In this study, 12 DEGs were selected from the above DEGs, and the Absolute Quantitative Transcriptome Sequencing Analysis results were verified by RT-qPCR (primers listed in Supplementary Table 2 at Supplementary Material 3). In Absolute Quantitative Transcriptome Sequencing, we used the PRKM results (Supplementary Material 6). In RT-qPCR, we used the relative expression of the genes (Canine ADMSCs were used as the control group, and GADPH was used as the reference gene, calculated by 2 − Ct ) (Supplementary Material 16). The results of RT-qPCR were consistent with the gene expression trend in the Absolute Quantitative Transcriptome Sequencing, and the Absolute Quantitative Transcriptome Sequencing results were correct. Functional Verification of the Foxa1, Hnf1b, Dll1, Pbx1, and Rfx3 The canine ADMSCs were infected with the adenovirus particles according to the multiplicity of infection MOI = 100. Two days after canine ADMSCs were infected, positive cells were screened according to green fluorescence. The cells were cultured for 2 days and transdifferentiated with procedure 5 (grouped into FOXA1 + Procedure 5, HNF1B + Procedure 5, DLL1 + Procedure 5, PBX1 + Procedure 5, RFX3 + Procedure 5) with normal cell passage during transdifferentiation. After 25 days of transdifferentiation, the cells were observed. The cells of five groups grew in clusters, showing the appearance of islets. The green fluorescence carried by adenoviruses had disappeared, the cells were stained with dithizone and were able to be dyed red (Figure 6A), indicating that the cells could express insulin. The cells of five groups under the stimulus of glucose were able to secrete insulin, and insulin secretion was higher than procedure 5 ( Figure 6B). The overexpression of Foxa1, Hnf1b, Dll1, Pbx1, Rfx3 were further able to improve the effect of transdifferentiation, insulin secretion. Among these, the overexpression of Foxa1, Pbx1, and Rfx3 exhibited the most significant effects. The glucose SI exceeded 2.5 for five groups, higher than for procedure 5, indicating that cells were able to respond to glucose stimulation ( Figure 6C). 
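Returning briefly to the adenovirus infection step described above, an MOI of 100 fixes the ratio of infectious viral particles to cells, so the volume of viral stock to add follows directly from the cell number and the stock titer. The sketch below shows this calculation with a hypothetical cell count and titer; neither value is a figure reported in this study.

```python
def virus_volume_ul(target_moi, cell_count, titer_ifu_per_ml):
    """Volume of viral stock (µl) needed to infect `cell_count` cells at `target_moi`.
    MOI = infectious units added / cells, so units needed = MOI * cells."""
    infectious_units_needed = target_moi * cell_count
    volume_ml = infectious_units_needed / titer_ifu_per_ml
    return volume_ml * 1000.0

# Hypothetical example: 2 x 10^5 ADMSCs per well, stock titer 1 x 10^9 IFU/ml, MOI = 100.
print(virus_volume_ul(target_moi=100, cell_count=2e5, titer_ifu_per_ml=1e9))  # 20.0 µl
```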
Immunofluorescence staining of insulin and c-peptide showed that the cells were insulin and c-peptide positive (Figure 6D). The expression of islet-specific transcription factors was detected by RT-qPCR (primers listed in Supplementary Table 1 in Supplementary Material 3); the overexpression of Foxa1, Hnf1b, Dll1, Pbx1, and Rfx3 was able to significantly stimulate high expression of the Pdx1, MafA, Nkx6.1, Nkx2.2, and Ins genes, with higher levels of expression than for procedure 5, indicating that the overexpression of these five genes could further stimulate the expression of islet cascade regulatory genes and improve transdifferentiation efficiency (canine ADMSCs were used as the control group, GADPH was used as the reference gene, and the relative expression of the genes was calculated by 2^−ΔΔCt) (Figure 7). The results described above indicate that the overexpression of Foxa1, Hnf1b, Dll1, Pbx1, and Rfx3 was able to increase transdifferentiation efficiency and improve the maturity of IPCs; Foxa1, Pbx1, and Rfx3 exerted the most significant effects. siRNA-Mediated Gene Silencing Each gene was designed with three siRNAs (Supplementary Table 5 in Supplementary Material 3), and the one with the highest silencing efficiency was selected. The canine ADMSCs were transfected with the siRNAs every 7 days. After transfection, cells were induced with procedure 5 (grouped into: FOXA1 SiRNA + Procedure 5, HNF1B SiRNA + Procedure 5, DLL1 SiRNA + Procedure 5, PBX1 SiRNA + Procedure 5, RFX3 SiRNA + Procedure 5). After Foxa1, Hnf1b, Dll1, Pbx1, and Rfx3 were separately silenced, the cells showed aggregation, but the numbers of clusters were significantly lower than those of the non-silenced groups (Figure 8A). When the cells were subjected to the glucose stimulation test, it was found that the ability of the cells to secrete insulin decreased compared with the non-silenced groups; secretion declined significantly with FOXA1 siRNA, PBX1 siRNA, and RFX3 siRNA, indicating that the transdifferentiation efficiency of the cells was reduced after gene silencing (Figure 8B). Decreases in the glucose SI indicated reduced sensitivity to glucose with FOXA1 siRNA, PBX1 siRNA, and RFX3 siRNA (Figure 8C).
FIGURE 6 | Functional verification of the Foxa1, Hnf1b, Dll1, Pbx1, and Rfx3. (A) The cells agglomerated into a spherical shape, with obvious islet-like cells. The cells were stained with dithizone and could be dyed red. (B) After the stimulation of low glucose (5 mM), high glucose (25 mM), and 5 mM glucose with 30 mM KCl, the secretion of insulin in procedure 5 was significantly lower than that of the gene overexpression groups (****p < 0.0001). The secretion of insulin in FOXA1 + Procedure 5, PBX1 + Procedure 5, and RFX3 + Procedure 5 was highest (**p < 0.01), but all were lower than the mature islet cells group (####p < 0.0001) (n = 4). (C) The glucose stimulation index of the gene overexpression groups was greater than 2.5 and higher than procedure 5 (####p < 0.0001) (n = 4). (D) Immunofluorescence staining of insulin and c-peptide showed that the cells were insulin and c-peptide positive. Green was insulin, red was c-peptide, and Hoechst 33342 made the nucleus blue.
The expression of islet-specific transcription factors was detected by RT-qPCR (canine ADMSCs were used as the control group, GADPH was used as the reference gene, and the relative expression of the genes was calculated by 2^−ΔΔCt) (primers listed in Supplementary Table 1 in Supplementary Material 2), and it was found that the expression of the Pdx1, MafA, Nkx6.1, Nkx2.2, Pax4, Pcsk1, and Ins genes was reduced when the Foxa1, Hnf1b, Dll1, Pbx1, and Rfx3 genes were separately silenced, compared with the non-silenced groups. FOXA1 siRNA, PBX1 siRNA, and RFX3 siRNA decreased expression most significantly (Figure 8D). These results indicate that when Foxa1, Hnf1b, Dll1, Pbx1, and Rfx3 were silenced, transdifferentiation efficiency and IPC maturity were depressed, with the transdifferentiation efficiency decreasing most significantly after the silencing of Foxa1, Pbx1, and Rfx3. Transdifferentiation of Canine ADMSCs Into IPCs Retinoic acid and fibroblast growth factor are essential for pancreatic development, and at present most procedures include agonists for these signaling pathways (Bhushan et al., 2001; Molotkov et al., 2005). However, bone morphogenetic protein (BMP) signaling has been shown to promote a liver fate rather than pancreas development (Wandzioch and Zaret, 2009). Accordingly, several procedures involve BMP inhibitors. However, it has also been suggested that BMP inhibitors should be eliminated because they have been shown to promote premature endocrine differentiation and damage PDX1/NKX6.1 positive cells (Russ et al., 2015). Studies have shown that histone deacetylase inhibitors could significantly improve the morphological grading and insulin secretion of islet cells (Ikemoto et al., 2018). There has also been no consensus on whether other pathway regulators, such as epidermal growth factor (EGF) or protein kinase C (PKC) agonists, should be included in procedures (Rezania et al., 2014; Nostro et al., 2015; Russ et al., 2015). In this study, we systematically compared five procedures, each of which used different regulators, including agonists and inhibitors of various signaling pathways, transdifferentiation steps, and transdifferentiation times. In procedure 1, cells were unable to respond to glucose stimulation, and no islet-like cells appeared. In procedures 2 and 3, insulin secretion and glucose SI were significantly higher than for procedure 1. Procedures 1, 2, and 3 need to be modified to further improve insulin secretion and cell maturity. Insulin secretion and glucose SI were highest in procedures 4 and 5, with procedure 5 higher than procedure 4. In inducing procedures 2, 3, 4, and 5, insulin secretion increased with increasing formation of islet-like cells; this suggests that cell cluster formation contributes to the maturation of the cells.
FIGURE 7 | RT-qPCR analysis. The overexpression of Foxa1, Hnf1b, Dll1, Pbx1, and Rfx3 can significantly stimulate high expression of islet cascade regulatory genes, with higher expression levels than procedure 5. After Foxa1, Pbx1, and Rfx3 were overexpressed, the expression of Pdx1, Ins, and Nkx2.2 was significantly higher than that of the unexpressed group and the Hnf1b and Dll1 overexpressed groups. There was no significant difference in the expression of Pax4 and Pcsk1 (####p < 0.0001; ****p < 0.0001; ***p < 0.001; **p < 0.01; *p < 0.05).
With respect to cell cluster diameter and the number of cells that a cell cluster contains, a bigger cell cluster is not more favorable.
The results in this study showed that the transdifferentiation effect improved as the cell number and diameter of the cell cluster decreased, showing greater insulin secretion, but the optimal cell cluster diameter and number; i.e., the threshold values, are unknown and require further study. In the original study, procedure 1, 2, and 3 can transdifferentiate mesenchymal stem cells into IPCs, and procedure 4, and 5 can transdifferentiate pluripotent stem cells into IPCs. However, in this study, the transdifferentiation efficiency of procedures 4 and 5 was higher than that of procedures 1, 2, and 3, indicating that further modification of procedures 1, 2, and 3 was required, and that the transdifferentiation procedure suitable for induced pluripotent stem cells was also suitable for mesenchymal stem cells, and better efficiency could be obtained. Although procedure 5 achieved a good induction efficiency, it still lagged far behind the insulin secretion capacity of mature islet β-cells, which was caused by the limitations of in vitro culture conditions. No matter how perfect the in vitro conditions were, they could not be completely the same as the complex in vivo development environment. In addition, the transdifferentiation procedures in this study was carried out in two-dimensional mode, which had a certain impact on the efficiency of transdifferentiation. In 2019, Mohammad Foad Abazari et al., found that the expression levels of Ins, Glut2, and Pdx1 genes in cells induced by three-dimensional culture were significantly higher than those in cells cultured by twodimensional culture (Abazari et al., 2020). In the following studies, we will conduct transdifferentiation in three-dimensional mode to explore the changes in genes and metabolic pathways during the transdifferentiation of ADMSCs to IPCs in threedimensional mode. In this study, the optimal transdifferentiation procedure (procedure 5) was determined through comparison with various detection methods, and this laid a good foundation for quantitative Absolute Quantitative Transcriptome Sequencing to generate better genetic datasets and identify novel functional genes. Nkx6.1, Ngn3, and other transcription factors that are known to play an important role. In 2019, a study evaluated three transdifferentiation procedures, sequenced the resulting pancreatic progenitor cells with mRNA and ATAC, and compared them with a human embryonic pancreatic population. This study defined a common transcriptional and epigenetic signature of PPs, including several genes not previously involved in pancreatic development (Wesolowska-Andersen et al., 2020). In 2020, Wang et al., completed the transdifferentiation of BMSCs into IPCs process and achieved the transcriptome profiling of five samples with two biological duplicates. A total of 11,530 DEGs were revealed in the profiling data. In KEGG enrichment analysis, DEGs are mainly concentrated in tight junction, protein digestion and absorption, pancreatic secretion, focal adhesion, ECM-receptor interaction, Rap1 signaling pathway, and cell cycle, etc. In GO enrichment analysis, DEGs are mainly concentrated in the categories of nucleus, extracellular region, intracellular membrane-bound organelle, the regulation of transcription, regulation of RNA biosynthetic process, carbohydrate metabolic process, single-organism carbohydrate metabolic process and small GTPase-mediated signal transduction, et al. 
Sstr2, Rps6ka6, and Vip they pick up may regulate decisive genes during the development of transdifferentiation of insulin producing cells (Wang et al., 2021). In this study, Absolute Quantitative Transcriptome Sequencing was used to detect the changes of genes and metabolic pathways during the transdifferentiation of canine ADMSCs into IPCs in vitro for the first time. In this sequencing, we obtained a large genetic database, which provided a certain reference for the study of ADMSCs transdifferentiating into IPCs, islet development, and canine gene pool. A total of 15,561 DEGs were revealed in the profiling data, 4031 more DEGs were found than the study by Wang et al. The Sstr2,Rps6ka6, and Vip selected by Wang et al., showed no specificity in this study and were not selected. In GO and KEGG enrichment analysis, the signal pathways and functions enriched by DEGs were also different compared with those studied by Wang et al. Only a few signal pathways were the same. The transcriptome data of the two studies are partly the same, but also partly different. The main reasons are as follows: The first is the comparison between BMSCs and ADMSCs, the second is the application of Absolute Quantitative Transcriptome Sequencing technology, in Absolute Quantitative Transcriptome Sequencing, UMI technology is used to tag each sequence segment to eliminate interference with the quantitative accuracy of the transcriptome by PCR amplification to the maximum extent and to obtain more accurate quantitative analysis results (Kivioja et al., 2011;Islam et al., 2014). And the third is the difference of transdifferentiation procedures. The transdifferentiation procedures (the procedure 3 in this study) used in the study of Wang et al., were found in this study to be of low transdifferentiation efficiency, but this study compared the transdifferentiation procedures. The optimal procedure (procedure 5) was selected for Absolute Quantitative Transcriptome Sequencing. The transcriptome data in this study were more accurate. In procedure 5, after the first stage, we obtained 2546 DEGs; in the second stage, we obtained 2126 DEGs; in the third and fourth stages, we obtained 1402 and 2100 DEGs, respectively; and after transdifferentiation, relative to mature islets cells, we obtained 7387 DEGs. These results suggest that at the end of each transdifferentiation phase, the cells underwent corresponding changes and transdifferentiated toward IPCs, but a large gap with mature islet cells remained. Through GO functional enrichment analysis, we obtained 126 DEGs, and through KEGG pathway enrichment analysis, we obtained 266 DEGs and 18 pathways, all of which are related to islet development and insulin secretion. These data can be further mined and validated. Subsequently, through further bioinformatics analysis such as with protein interaction networks, we identified 18 genes as novel functional genes, which are of great significance for subsequent research. Novel Functional Genes Can Improve the Transdifferentiation Efficiency In this study, we obtained 18 novel functional genes, and through verification, we found that 5 novel functional genes may be the key regulators of ADMSCs transdifferentiation into IPCs. Studies have shown that Foxa1 Hnf1b, Dll1, Pbx1, and Rfx3 plays an important role in the regulatory network that controls the generation of pancreatic endocrine cell lines in model animals (Kim et al., 2002;Ait-Lounis et al., 2010;Gao et al., 2010;De Vas et al., 2015;Rubey et al., 2020). 
However, the roles of these genes in the transdifferentiation of ADMSCs into IPCs in vitro and whether these genes are key regulatory factors remain unknown. Therefore, the functioning of these five novel genes was verified by gene overexpression and silencing. In overexpression experiments, the results showed that these five genes played an important role in the transdifferentiation of canine ADMSCs into IPCs. They can further improve insulin secretion and glucose SI, significantly improve the transdifferentiation efficiency and IPC maturity, with the most significant effects with Foxa1, Pbx1, and Rfx3. At the same time, the expression of islet development cascade regulation genes was significantly increased, but the specific direct or indirect effects need more research to verify. The solution of these problems can clarify the mechanism of these five genes improving transdifferentiation efficiency. After silencing these five genes, respectively, the transdifferentiation efficiency decreased to varying degrees, with silencing of Foxa1, Pbx1, and Rfx3 genes showing significant decrease. This finding may be observed because Dll1 and Hnf1b, when silenced, stimulate cascade regulatory gene expression in other ways; however, after Foxa1, Pbx1, and Rfx3 were silenced, no other compensation appeared. This finding also proves the importance of Foxa1, Pbx1, and Rfx3 in the transdifferentiation of ADMSCs into IPCs. The above results prove that the five novel functional genes screened by us are of great significance in transdifferentiating ADMSCs into IPCs in vitro; Foxa1, Pbx1, and Rfx3 are especially essential in the transdifferentiation of ADMSCs into IPCs and can be used as specifically key regulatory factors. In this study, after the overexpression/silencing of Foxa1, Pbx1, and Rfx3, we continued to conduct in vitro induction to explore the role of these three genes in the transdifferentiation of ADMSCs to IPCs. We will continue to explore the effects of Foxa1, Pbx1, and Rfx3 on the function of mature islet β cells in future studies. CONCLUSION In this study, canine ADMSCs were transdifferentiated into IPCs using five types of procedures, and the optimal procedure was determined. Many genes and signaling pathways were identified may play an important role in transdifferentiation of ADMSCs into IPCs in Absolute Quantitative Transcriptome Sequencing Analysis. Hnf1B, Dll1, Pbx1, Rfx3, and Foxa1 were found to play important roles in ADMSCs transdifferentiating into IPCs. Foxa1, Pbx1, Rfx3 exerted the most significant effects and can be used as specific key regulatory factors in the transdifferentiating of ADMSCs into IPCs. This study establishes a foundation for the further acquisition of IPCs with high maturity. DATA AVAILABILITY STATEMENT The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found in the article/ Supplementary Material 18. ETHICS STATEMENT The animal study was reviewed and approved by the Animal Ethical and Welfare Committee of Northwest Agriculture and Forest University (Approval No: 2020002). AUTHOR CONTRIBUTIONS PD: methodology, data curation, writing-original draft, writingreview, and editing. JL: methodology, data curation, and writing-original draft. YC: data curation and writing-original draft. LZ: data curation and writing-original draft. XZ: data curation. JW: data curation. GQ: data curation. 
YZ: funding acquisition, project administration, supervision, writing-review, and editing. All authors contributed to the article and approved the submitted version.
2021-06-28T13:16:54.324Z
2021-06-28T00:00:00.000
{ "year": 2021, "sha1": "b63ed99b3483671f45c217b563c05da7b95d08df", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fcell.2021.685494/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b63ed99b3483671f45c217b563c05da7b95d08df", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
12550390
pes2o/s2orc
v3-fos-license
Comparative Genomics of Serratia spp.: Two Paths towards Endosymbiotic Life Symbiosis is a widespread phenomenon in nature, in which insects show a great number of these associations. Buchnera aphidicola, the obligate endosymbiont of aphids, coexists in some species with another intracellular bacterium, Serratia symbiotica. Of particular interest is the case of the cedar aphid Cinara cedri, where B. aphidicola BCc and S. symbiotica SCc need each other to fulfil their symbiotic role with the insect. Moreover, various features seem to indicate that S. symbiotica SCc is closer to an obligate endosymbiont than to other facultative S. symbiotica, such as the one described for the aphid Acirthosyphon pisum (S. symbiotica SAp). This work is based on the comparative genomics of five strains of Serratia, three free-living and two endosymbiotic ones (one facultative and one obligate) which should allow us to dissect the genome reduction taking place in the adaptive process to an intracellular life-style. Using a pan-genome approach, we have identified shared and strain-specific genes from both endosymbiotic strains and gained insight into the different genetic reduction both S. symbiotica have undergone. We have identified both retained and reduced functional categories in S. symbiotica compared to the Free-Living Serratia (FLS) that seem to be related with its endosymbiotic role in their specific host-symbiont systems. By means of a phylogenomic reconstruction we have solved the position of both endosymbionts with confidence, established the probable insect-pathogen origin of the symbiotic clade as well as the high amino-acid substitution rate in S. symbiotica SCc. Finally, we were able to quantify the minimal number of rearrangements suffered in the endosymbiotic lineages and reconstruct a minimal rearrangement phylogeny. All these findings provide important evidence for the existence of at least two distinctive S. symbiotica lineages that are characterized by different rearrangements, gene content, genome size and branch lengths. Introduction Symbiosis is a widespread phenomenon among all branches of life. Especially, insects show a tight relationship with a variety of these organisms [1] mostly having a metabolic foundation, as bacteria provide the insects with the nutrients lacking in their diet. This is the case for many aphids that maintain a close association with the ancient obligate bacterium B. aphidicola. The association is mutualistic as none of the partners can subsist without the other one. The aphid gives B. aphidicola a stable environment and in return this gives the aphid the nutrients lacking from its diet, the plant's phloem. At present the genomes of B. aphidicola from seven aphid species have been sequenced [2,3,4,5,6,7,8,9], with the smallest genome found in the aphid C. cedri (B. aphidicola BCc), with a genome size of 416 Kb coding solely 357 protein-coding genes. This contrasts with other less genomically reduced Buchnera, like the one from the aphid A. pisum (B. aphidicola BAp). The symbiotic role of B. aphidicola BCc has even been questioned since, contrary to other Buchnera, it was found unable to fulfil some of its symbiotic functions [5]. In addition to B. aphidicola, some aphids harbour other endosymbiotic bacteria called secondary or facultative endosymbionts, such as Hamiltonella defensa [10], Regiella insecticola [11] and S. symbiotica [12,13], whose genomes have recently been sequenced. 
Although primarily transmitted vertically, these facultative bacteria undergo occasional horizontal transfer [14,15,16,17]. These three bacteria have been shown to benefit the host, providing defense against fungal pathogens, parasitoid wasps or even increasing survival after environmental heat stress (revised in [17]). However, as they are facultative, they do not seem to be essential to the insect's survival. An interesting genomic feature from these young associations, contrary to more ancient ones, is the massive presence of mobile genetic elements in their genomes [18,19,20,21], which would cause their genomes to undergo a number of rearrangements as compared to their freeliving relatives. Species of the genus Serratia have been found in numerous places such as water, soil, plants, humans and invertebrates like many insects [22]. The presence of Serratia in insects digestive tract has been speculated to be of plant origin, since the hemolymph cannot prevent the multiplication of potential pathogens [23]. On the other hand, S. symbiotica is one of the most common facultative symbionts in many aphids. In A. pisum, it has been found to confer defense against environmental heat stress [24,25,26,27]. In a study into the evolution of S. symbiotica endosymbionts, both phylogenetic and morphological evidence was found of the possible existence of at least two different S. symbiotica clades named A and B [28]. Clade A shows characteristics resembling a facultative symbiont, whereas clade B resembles more to an obligate-like endosymbiont [28,29]. The genome sequencing of S. symbiotica from A. pisum (S. symbiotica SAp) [12] and C. cedri (S. symbiotica SCc) [13], belonging to clade A and B respectively, has revealed very different genomic features (see Table 1). S. symbiotica SAp possesses a genome size of around half of that of FLS, over two thousand fewer protein coding genes and an impressive extent of pseudogenes (550), giving some indication of a relatively recent inactivation of many genes. On the other hand, S. symbiotica SCc is immediately striking in that it presents a genome size around 1 Mb smaller than that of S. symbiotica SAp, a reduced set of protein coding genes, a very low coding density and GC content and surprisingly depleted of mobile genetic elements [13] identified in other recently derived endosymbiotic relationships [10,11,19], including S. symbiotica SAp [12]. The pan-genome approach for studying evolutionary relationships at a certain taxonomical level has been proved a very powerful tool to study diverse aspects of genomic, functional and structural characteristics of groups of genomes [30,31,32,33,34]. The term ''pan-genome'' has been used to refer to the collection of the core genome (genes shared by all strains and probably encoding fundamental functions of the biology and phenotype of the species) and an accessory genome (constituted from the genes present in some but not all strains) [35], this latter one including genes that are essential for a certain environmental adaptation [33] and linked to capsular serotype, virulence, adaptation and antibiotic resistance probably giving some indication to the organisms lifestyle [36]. In the present study, due to the findings in both S. symbiotica genomes sequenced so far, we wanted to study the diverse processes that occurred once these organisms adapted to an intracellular environment. These include their genetic reduction, rearrangements, and also how the current functional state of their respective B. 
aphidicola partner explains the current functionality of each S. symbiotica. It is worth mentioning that we have a unique opportunity with the genus Serratia provided by the availability of complete genomic data from three different snapshots distributed throughout the transition from free-living (Serratia proteamaculans 568, Serratia marcescens Db11 [37] and Serratia odorifera 4R613 [38]) passing through facultative endosymbiosis (S. symbiotica SAp) to obligate endosymbiosis (S. symbiotica SCc). In order to gain insight into the level of genome reduction undergone by the two S. symbiotica strains, we first defined a pangenome for the genus Serratia using the annotated CDSs for the five strains mentioned above. We then went on to explore specific subspaces of the pan-genome, such as some of the genes retained outside the core genome and strain-specific genes of each S. symbiotica. We found a massive level of genomic reduction in S. symbiotica SCc, even when compared to S. symbiotica SAp in which a great number of accessory genes are still retained in its genome. The differential genetic reduction suffered by these endosymbionts also became evident, finding a number of CDSs shared with other Serratia but not between them. We then went on to analyze and compare the functional profiles for each Serratia strain used to reconstruct the pan-genome along with the pan-genome itself and the core-genome using the Cluster of Orthologous Groups (COG) functional categories [39]. We were interested in observing the functional clustering and shifting of both endosymbiotic strains compared to the FLS. We also compared each of the S. symbiotica functional profiles to that of the average of FLS in order to detect profile modification of individual categories to be able better understand the functional constraints under which each S. symbiotica genome has evolved and the divergence of these endosymbiotic strains. We observed that the functional profile of S. symbiotica SCc clustered very close to that of the core-genome, supporting a very advanced stage of genetic reduction. In addition, we wanted to analyze the process of genome rearrangements and genetic evolution that these endosymbionts have undergone. To do so, we first defined a set of single-copy shared genes which were taken as a base to study the different arrangements of these among the different Serratia genomes and to perform a phylogenetic reconstruction of the Serratia spp. In contrast to the perfect conservation of the single-copy shared genes order and orientation in the FLS, we found a great level of reordering even between the two endosymbiotic Serratia strains. Also, we quantified the minimal rearrangements needed to get to an ancestral gene order through a minimal number of rearrangements tree. Finally, the phylogenetic analysis confidently resolved the relationships among the different Serratia strains used in this study, allowing us to propose a probable origin for the endosymbiotic lineages. Pan-genome's General Features To gain insight into the process of reductive evolution undergone by the adaptation from a free-living state to an endosymbiotic-lifestyle, we reconstructed a pan-genome for the genus Serratia. It is worth mentioning that, at present and to our knowledge, it is the only bacterial genus for which full genome sequences for both endosymbiotic and free-living species are available. General features for each strain, together with the ones from the two Buchnera strains sharing their host with Serratia in the same aphid (B. 
aphidicola BAp and BCc), are summarized in Table 1. The CDSs of the two available endosymbiotic strains (the facultative S. symbiotica SAp and the obligate S. symbiotica SCc) and three free-living ones (S. marcescens, S. proteamaculans and S. odorifera) were recovered from their respective sources. After a clustering of the organisms' protein sequences we ended up with 4, 469 orthologous clusters of proteins, leaving 2, 293 unclustered proteins, corresponding to the strain-specific genes. To visualize the clusters' location within the pan-genome subspaces, an Euler diagram was computed ( Figure 1). The first remarkable feature is finding very few clusters (607) from the pan-genome in the core (8.98%). While, if we also take into account the genes shared by the three FLS plus the core (3, 452), the percentage is greatly increased (51.05%) due to the presence of the endosymbiotic genomes, mainly the genome from S. symbiotica SCc, which displays an extensive genomic reduction due to its adaptation to an intra-cellular lifestyle [13]. Regarding the strain-specific genes, almost half of them (47.45%) are hypothetical proteins and 7.07% are putative ones. This is not surprising since it has been described that most of the strain specific genes in a pan-genome are hypothetical genes, genes which may be product of overannotation given their generally reduced sizes, or ORFan genes [40]. It is worth mentioning that all the coding genes present in the annotated CDSs of S. symbiotica SCc are present one gene per cluster, showing no evidence of genetic redundancy and supporting its extreme reductive process compared to the other S. symbiotica. This is important since taking into account that the levels of duplication of the other Serratia are higher (S. marcescens 3.6% duplicated genes, S. odorifera 3.2%, S. proteamaculans 6.6% and S. symbiotica SAp 3.7%). In addition, almost all of the coding genes from S. symbiotica SCc (607 out of 672) clustered into the core. Not surprising for obligate endosymbionts since the reductive process tends to reduce both redundancy and genetic repertoire, conserving the genes that allow the bacteria to sustain themselves and fulfil their role in the symbiotic association. S. symbiotica Strain-specific Genes Amazingly, there is only one strain-specific gene for S. symbiotica SCc. A 67 amino-acid hypothetical protein, which on a BLAST search against nr was found to vaguely resemble (less than 56% covered and 62% identity) another hypothetical protein in S. marcescens (genbank locus tag HMPREF0758). This displays the massive genetic decay that S. symbiotica SCc has suffered, basically losing all its strain-specific genes, contrary to S. symbiotica SAp, which still retains many of them (516 gene clusters), reminding us of characteristics of free-living Serratia. Mostly, the genes present in these clusters code for phage proteins (26), transposases (29) or are annotated as hypothetical proteins (304), while the rest are annotated mostly as putative proteins (157, related to conjugative systems, pili, fimbria, transporters and some others). Due to the accessory nature of these groups, they might eventually be degraded in the genome reduction process if this endosymbiont continues to accommodate itself in the system. S. symbiotica Genes Outside the Core Two genes are shared by both S. symbiotica, epsI and rfaI, coding for a glycosyl transferase and a lipopolysaccharide 1,3-galactosyltransferase respectively. 
The two are involved in cell envelope biogenesis (outer membrane), which could explain the reason why these do not cluster with other members coming from FLS. These type of proteins have been found to show weak signals of incongruence, due to being genes involved in diversifying selection, coding for antigenic proteins exposed at the cell surface [41]. Interesting are the clusters shared by both endosymbiotic bacteria (S. symbiotica SAp and SCc) and FLS regarding fimbrial genes. With S. proteamaculans, they shared the genes fimA and pagO, coding for the filament protein FimA involved in fimbrinbrial formation and a putative membrane protein respectively, and with S. odorifera and S. proteamaculans the gene etfD which codes for a protein associated to fimbrin. In both S. symbiotica, these fimbrial genes have been retained although at least in S. symbiotica SCc there is a loss of the capacity to form fimbrins. This probably means that this intersection is disappearing due to the deterioration of this pathway in the intracellular adaptation process, although it is also possible that it plays a role in the pathogen-host cross-talk or in infection. Other interesting genes are the two shared with S. marcescens and S. proteamaculans (hha coding for a haemolysin expression-modulating protein and feoB coding for a part of the iron transport system which makes an important contribution to its supply to the cell under anaerobic conditions), and one (yidD) shared with both S. odorifera and S. proteamaculans, which product clusters with a hemolysin from S. proteamaculans. Some hemolysins have been shown to allow bacteria to evade the immune system by escaping from phagosomes [42], and they are reported to serve as a way of obtaining nutrients from host cells. For example, in other organisms they have been involved in the iron uptake by pathogenic bacteria from their eukaryotic hosts [43]. Functional Relatedness and Divergence in S. symbiotica To inquire into the functional roles of the selected Serratia strains, we assigned COG categories to each organism's CDSs. Through a Kruskal-Wallis test on the absolute COG frequencies per organism we found significant differences in the core/pangenome/Serratia COG profiles (x 2 = 72.84, df = 6, p-value = 1.07e-13). Through the same test using only the FLS, we found that they did not showed any significant difference (x 2 = 0.11, df = 2, pvalue = 0.95). This indicates that the general functional composition of the FLS is highly conserved; being able to assume that any significant deviation from this profile in the endosymbiotic Serratia would be due to their adaptation to a new lifestyle. Then, to identify retained and reduced functional categories against FLS, as a way to asses functional divergence from FLS for each endosymbiotic Serratia, we mapped the COG profile differences from the FLS COG profile in a heatmap for the core, pan-genome and the individual genomes of Serratia ( Figure 2A). As shown, the COG profile heatmap revealed a tight clustering of S. symbiotica SCc with the core-genome, expected from the fact of its gene content being too close to that of the core and giving support to its distinction from its facultative relative, S. symbiotica SAp, which remained as a separate group, probably exemplifying the functional profile of a facultative-symbiont lineage of Serratia. Since S. 
symbiotica SCc shows the most extreme COG profile modification against FLS, we decided to take its functional profile to compare against the FLS average and afterwards check for the state of the same functional category in S. symbiotica SAp (Table 2; Figure 2A). We divided the results in the following categories (see Methods): (i) Extremely retained. Category J, meaning a great part of its gene repertoire is dedicated to basic functions for its cellular life maintenance as previously shown [44]. Also, most the universally conserved COGs fall into this category [45]. It is important to note that there is also an increase in this category in S. symbiotica SAp compared to FLS. (ii) Highly retained. Category O, as in the previous case this is not surprising, since it is normal that the genomic reductive process affects many of the genes involved in DNA repair, by which some proteins involved in post translational modification are common and which avoid missfolding or accumulation of defective peptides. This group includes the genes groES and groEL, that code for chaperone GroEL, which might mitigate the damage of reduced protein stability by maintaining a high cytoplasmic level of it in B. aphidicola [46,47]. Category F, since S. symbiotica SCc, contrary to B. aphidicola BCc, still preserves the capacity to synthesize pyrimidines, and in the case of purines it could be recycling the aphids nucleosides to produce nitrogenous bases, complementary to the case of S. symbiotica SAp [12,13]. Category H, showing the specialization of S. symbiotica SCc as a cofactor supplier [13]. Category M, explained by the noted ability of S. symbiotica SCc to still synthesize its own cell membrane, in contrast to the obligate endosymbiont B. aphidicola BCc which has lost many of the genes necessary for this function [5,48]. On the other hand, S. symbiotica SAp still resembles the FLS in this category more closely. Category L, where previously shown that in spite of having a reduced number of genes compared to FLS, it still maintains those necessary for its genome replication [13]. Also the repair system (based on E. coli) by base excision is conserved, while the repair by recombination system is almost complete (recB, recC, recD, sbcB, priB) but missing the recA gene, as happens with the obligate endosymbiont B. aphidicola [13]. On the other hand, S. symbiotica SAp reveals more genes in this category compared to S. symbiotica SCc, as expected due to its less degraded genome. (iii) Moderately retained. Category U, mainly showing protein translocation and export related genes. This comes as no surprise, since in both cases, they had to adapt to import and export a variety of components due to the gene loss undergone in the adaptation process to intracellular life, which would require the conservation of many genes in this category. Category D, consisting among other things of the fts genes (ftsL, ftsW, ftsA, ftsZ, ftsK), cell-wall topological and structural coding genes (mrdB, mreB) and other cell cycle related proteins coding genes (gidA, minC,minE, minD, ygbQ). Category I, since S. symbiotica SCc, being not as advanced in genetic degradation as its partner B. aphidicola BCc, still preserves higher number of genes in this category, as also happens with S. symbiotica SAp which presents a greater repertoire, despite the relatively lower number of genes in the present category compared with FLS. (iv) Moderately reduced (which interestingly do not vary much between endosymbiotic Serratia). 
Category V, which in spite of showing almost no relative change in comparison to the FLS, there is a drastic decrease in absolute gene number against these. In S. symbiotica SCc these genes comprise only two genes involved in lipid transport (msbA and lolD), two multidrug efflux system genes (mdtK and emrA) and two predicted transporter subunits (yadH and yadG). Categories Q and C, comprising genes mainly involved in cellular life maintenance. Category P, where many transporters have been lost in S. symbiotica lineages, retaining only a limited repertoire. (v) Highly reduced. Category T, in which a massive loss of transcription regulators and sensor proteins has occurred. Category N, in which we see a vast reduction in absolute gene number from FLS. In this category the losses are mainly from flagellar proteins, fimbrial, pili and chemotaxis related proteins along with some outer membrane proteins. The reduction in both T and N categories can be explained by the stable environment in which the bacterial cell now resides, making many of the sensory systems and the motility mechanisms dispensable. Category R and S, displaying the different state of the genetic degradation of mainly strainspecific genes, since it has been noted that these are rich in proteins of unknown function [33,40], which would explain why S. symbiotica SAp shows a pattern that is more similar to that of FLS. Category G, which in spite of the losses, is still able to import sugars from the aphid host (fructose), while S. symbiotica SAp can still use more (glucose, manose, etc.) [12]. Category E, where we find a common reduction in both S. symbiotica from FLS. This feature displays both endosymbionts reliance on Buchnera to supply many essential amino-acids partially or entirely [12,13]. (vi) Extremely reduced. Here we find category K, where both S. symbiotica strains have lost a massive amount of transcriptional regulators. This also displays the loss of transcriptional regulation and responsiveness of reduced genomes of endosymbionts [49]. Functional Convergence of C. cedri Bacterial Endosymbiotic Consortia and a Less Genomically Reduced Buchnera Besides observing the specific categories in which both S. symbiotica find themselves altered compared to FLS, it is of importance to determine the evolution and fate of the two different associations each bacterium has established with Buchnera. It has been proposed that in C. cedri the bacterial consortium is involved in a co-obligate endosymbiosis, both members being required for the survival of the three partners in the system [13], while in A. pisum Buchnera alone is able to sustain the nutritional requirements of the aphid, without the need an extra member. Then, we decided to analyze whether the bacterial consortium in C. cedri could, at least generally, functionally resemble B.aphidicola BAp. To do so, we added the functional profiles from the corresponding B. aphidicola to those of its corresponding S. symbiotica partner (partner defined as the bacteria that share the same host) and performed a two-way clustering of the relative number of genes in each COG category using a heatmap ( Figure 2B). First, the sum of Serratia and Buchnera in C. cedri clusters closer to the functional profile of B. aphidicola BAp than to any other Serratia, and also brings the functional profile of B. aphidicola BCc closer to other less genomically reduced Buchnera (See Figure S1). This data provides evidence that genetic decay in S. 
symbiotica SCc is adapted to compensate for the losses in B. aphidicola BCc, and in conjunction functionally resemble a less genomically reduced Buchnera. And second, the sum of Serratia and Buchnera in A. pisum still clusters apart from the rest of Serratia, proof of the facultative state of S. symbiotica in the aphid A. pisum [12], failing to show a marked functional complementation with Buchnera. So, in the event of a facultative endosymbiont establishing a consortium with an already present obligate endosymbiont, like in the case of S. symbiotica SCc, the new consortium would be compelled to maintain general functionality of the previously present and well-established bacterium. Single-copy Core Genome Phylogeny It was of great importance to determine the phylogenetic position of both S. symbiotica among the Serratia in a maximum likelihood (ML) phylogenetic tree. In a past phylogenetic reconstruction by Burke et al. [12], they were unable to resolve with complete confidence the position of S. symbiotica SAp within the Serratia using various c-proteobacteria. To approach this problem we chose the 580 single-copy genes of the Serratia spp. core genome that were shared with Y. pestis (used as an outgroup) and reconstructed a concatenated protein sequence phylogeny (Figure 3: left). A striking feature is the evident acceleration in the branch leading to S. symbiotica SCc in contrast to what is seen in the other Serratia, including S. symbiotica SAp. However, S. symbiotica SAp clusters with S. symbiotica SCc forming a symbiotic clade. It is worth mentioning that both S. symbiotica cluster with S. marcescens, which is the only one isolated from an insect (D. melanogaster) [37] from the strains used in this study, insinuating that the symbiotic lineage may have come from an insect-pathogen rather than a plant-pathogen Serratia. Rearrangements Across Serratia Another interesting feature in the genomic evolution of endosymbionts is the invasion by mobile genetic elements. These elements can cause a high degree of rearrangements in the bacterial genomes undergoing adaptation to intracellular life [12,19]. These elements are especially present in recent associations but lacking in ancient ones. For example, in the ancient and obligate endosymbionts B. aphidicola and Blochmannia, an extreme genome stasis has been described [3,50] having a parallel evolution with its hosts. This contrasting what is seen in more recent associations like in the case of SOPE (Sitophilus oryzae primary endosymbiont) [19], facultative endosymbionts like Sodallis glossinidius [20], Hamiltonella defensa [10] and Regiella insecticola [11] or the recently sequenced genome of REIS (the Rickettsia endosymbiont of Ixodes scapularis) [21]. To study the rearrangements undergone by both S. symbiotica we decided to analyze the rearrangements of the single-copy core genes (Figure 3: middle). We can clearly see that among FLS, the synteny of the single-copy core is perfectly conserved among the strains, with the 597 single-copy genes being in the same order and orientation, except in the case of S. marcescens where the replication origin seems to be misplaced as checked by originX [51] (data not shown). This lets us assume these genes are present in the same order among Serratia, and thus we can assume that any reordering witnessed in S. symbiotica strains could be due to the invasion and/or mobilization of mobile genetic elements that occurred during the endosymbiotic genomic reduction [19]. In the case of both S. 
symbiotica, the level at which they have undergone genetic rearrangements becomes evident, even showing great rearrangements between the two. This means that the divergence of these two endosymbionts must have been prior to the loss of S. symbiotica SCc's capacity to rearrange its genome. We then calculated a minimal rearrangement phylogeny for the selected Serratia genomes (Figure 3: right). This method allows us to calculate a tree with the minimal number of rearrangements required to obtain an ancestral gene order. Strikingly, the minimal rearrangement distance from S. symbiotica SAp to FLS (129) is greater than that of S. symbiotica SCc to FLS (108), and the distance between them to a common ancestor (155) is the greatest. The rearrangements undergone in the recent endosymbiotic lineages might have happened in a random fashion due to the high numbers of mobile genetic elements and could also be facilitated by the relaxation in pressure of gene order in certain genes because of the degradation of the transcription regulation. This means that in different events of infection by Serratia endosymbionts or in early divergences, we might have very different gene orders. Conclusions The study of this type of endosymbiotic organisms is shedding light on the differences between a free-living and obligate endosymbiotic state. Knowledge is also provided on adaptations to a nutrient-rich and stable environment, in which the bacteria cells undergo drastic changes in their genomes. In the present study, we have found multiple evidences supporting the existence of two very distinct S. symbiotica lineages. One of which has obligate endosymbiotic characteristics (S. symbiotica SCc), such as accelerated evolution, with the COG profile being similar to that of the core genome, with a lack of mobile elements, no genetic redundancy and loss of almost all strain-specific genes. And a second one (S. symbiotica SAp), presenting all the traits of a facultative endosymbiont, with its functional profile being ''intermediate'' between that of S. symbiotica SCc and FLS, with the presence of mobile genetic elements and preserving still a great amount of strain-specific genes. In the case of the Serratia genus, commonly found in a variety of insects, it would not be surprising to find more endosymbiotic strains in different stages of lifestyle transition from FLS to more ancient and well-established obligate endosymbionts, as also proposed for Wolbachia [52] and Ricketsia [53,54]. We were also able to determine the phylogenetic relationship among the different Serratia and place the symbiotic lineage closer to an insectisolated strain indicating its probable insect-pathogen origin. Even though we have gained insight into how the genetic rearrangements are happening, in order to have a better understanding of this process as well as the genetic decay and the transition from a free-living bacteria to an endosymbiotic one, more S. symbiotica must be analyzed to determine the basis and reason for the associations these bacteria have with aphids and its obligate endosymbiont B. aphidicola. Construction of Serratia spp. Pan-genome Serratia genomes were recovered from their respective databases (See File S2: Table S1), gene annotation from prediction of S. marcescens Db11 was done using BASys [55]. The protein sequences were fed into OrthoMCL [56] with an inflation value of 1.5, a 70% match cut-off, and e value cut-off of 1e-5. 
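As a concrete illustration of the pan-genome partition used here, the sketch below shows how the core, FLS-shared and strain-specific subspaces can be derived from OrthoMCL-style cluster membership. This is a minimal, hypothetical example rather than the pipeline actually used (which relied on OrthoMCL and custom scripts): the strain abbreviations, cluster identifiers and toy membership/singleton counts are placeholders, but the percentage arithmetic is consistent with the core (8.98%) and FLS-plus-core (51.05%) fractions of the pan-genome quoted in the Results.

```python
# Illustrative sketch only: cluster IDs, memberships and singleton counts are toy values.
STRAINS = {"S_marcescens", "S_proteamaculans", "S_odorifera", "SsSAp", "SsSCc"}
FLS = {"S_marcescens", "S_proteamaculans", "S_odorifera"}  # free-living Serratia

# cluster_id -> set of strains contributing at least one protein to that cluster
clusters = {
    "clu_0001": {"S_marcescens", "S_proteamaculans", "S_odorifera", "SsSAp", "SsSCc"},  # core
    "clu_0002": {"S_marcescens", "S_proteamaculans", "S_odorifera", "SsSAp"},
    "clu_0003": {"S_marcescens", "S_proteamaculans", "S_odorifera"},                    # FLS only
}
# proteins left unclustered, i.e. strain-specific genes (toy counts)
singletons = {"S_marcescens": 700, "S_proteamaculans": 800, "S_odorifera": 276,
              "SsSAp": 516, "SsSCc": 1}

core = [c for c, members in clusters.items() if members == STRAINS]
fls_and_core = [c for c, members in clusters.items() if FLS <= members]  # shared by all three FLS

pan_genome_size = len(clusters) + sum(singletons.values())  # clusters plus strain-specific genes
print(f"core: {len(core)} clusters ({100 * len(core) / pan_genome_size:.2f}% of pan-genome)")
print(f"shared by all FLS (incl. core): {len(fls_and_core)} "
      f"({100 * len(fls_and_core) / pan_genome_size:.2f}%)")
```

With the published figures (4,469 clusters, 2,293 unclustered genes, 607 core clusters and 3,452 clusters containing all three FLS), the same two ratios give the 8.98% and 51.05% reported above.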
A total of 17, 086 coding genes were clustered into 4, 469 families of orthologous genes, leaving 2, 293 as single family genes. Clustering was checked for consistency using COG categories [45] to assess the homogeneity of COG assignment for all the genes in a given family, screening of clusters to check for inflation value cluster fragmentation effect, and gene number per family to make sure not many gene rich families arose. Visual display of the pangenome subspaces was done using the R custom modified drawVennDiagram function of package gplots [57]. COG Profiles COGs categories were assigned using a series of Perl scripts to find non-overlapping hits against the COG database using Blastp with an e-value cut-off of 1e-03 [58]. The COG profile displays and clustering were made using the heatmap2 function from the R package gplots. This heatmap would represent a two way clustering having the most similar columns closer together and the most similar rows in the same fashion, showing the dissimilarity distances of columns with the top dendogram and the dissimilarity distance from the COG categories in the left one. Row reordering was chosen for the function for visual and categorization purposes. For assessing S. symbiotica divergence from FLS, absolute COG category frequencies were divided by the strains total number of COG assigned CDSs (see File S1: Table S2) and then subtracted the mean relative frequency from the FLS in the same COG category. Kruskal-Wallis tests were carried out on the absolute frequency tables of COG profiles using R. The categorization of retained/reduced COG categories in comparison to FLS (using the relative values table, as described above) was done in the following way: Extremely retained, more than 5% difference above zero; highly retained, more than 2 and less than 5% difference above zero; moderately retained, more than 0 and less than 2% difference above zero; moderately reduced, less than 0 and more than 2% difference below zero; highly reduced, less than 2 and more than 5% difference below zero; extremely reduced, less than 5% difference below zero. Important is to remark that even a lower than 1% difference is important since FLSs differences range between 0.0006% and 0.4253% with a mean of 0.1218% (visually displayed in Figure 2A by showing cells of the FLS in close-to-white tones). Phylogenetic Analysis The 580 single-copy shared genes identified between Serratia spp. and Yersinia pestis CO92 were extracted and translated to amino-acid sequences using transeq from the EMBOSS suite [59] and aligned using the L-INS-i algorithm from MAFFT v6.717b [60] (See file S2). Gblocks [61] was used to refine the alignment. ML tree was calculated with 1000 bootstrap replicates using RAxML v7.2.6 [62]. Visual display of both trees was done using FigTree v1.3.1 and edited in Inkscape. Genome Rearrangements In all, 597 single-copy genes (the ''single-copy core'') were selected to study the rearrangement history of Serratia genus. Scaffold or contig order for unfinished genomes was determined with MUMers promer v3.22 [63] using as reference the genome of S. proteamaculans 568. Custom Perl scripts were developed to create input files for genome rearrangements plotting using genoPlotR v0.7 [64]. Minimal number of rearrangements phylogeny was calculated using MGR v2.03 [65] with the circular genomes option and without using any heuristics. Figure S1 Heatmap of COG profile clustering for selected B. aphidicola and B. aphidicola BCc plus S. symbiotica SCc functional profile. 
Heatmap displaying the clustering of various B. aphidicola genomes along with the sum of the functional profiles for B. aphidicola BCc and its symbiotic partner S. symbiotica SCc, showing a closer clustering of these joint genomes to that of less genomically reduced B. aphidicola genomes. BAp: B. aphidicola from A. pisum; BBp: B. aphidicola from B. pistaciae; BCc: B. aphidicola from C. cedri; BSg: B. aphidicola from S. graminum; SCc: S. symbiotica from C. cedri.
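To make the COG-profile comparison behind Figure 2 and Figure S1 concrete, the following sketch encodes the retained/reduced binning rules from the Methods. It is illustrative only: the COG counts are invented toy values and the strain labels are placeholders, and the reduced bands are taken as the mirror image of the retained ones, which is how we read the thresholds described above.

```python
# Illustrative sketch: toy COG-category counts, thresholds as described in the Methods.
def relative_profile(counts):
    """Convert absolute COG-category counts into relative frequencies (percent)."""
    total = sum(counts.values())
    return {cat: 100.0 * n / total for cat, n in counts.items()}

def categorise(diff):
    """Bin the difference (strain minus FLS mean) in percentage points."""
    if diff > 5:
        return "extremely retained"
    if diff > 2:
        return "highly retained"
    if diff > 0:
        return "moderately retained"
    if diff > -2:
        return "moderately reduced"
    if diff > -5:
        return "highly reduced"
    return "extremely reduced"

# Toy absolute counts for three COG categories:
# J (translation), K (transcription), E (amino-acid transport and metabolism)
fls_profiles = [relative_profile(p) for p in (
    {"J": 170, "K": 340, "E": 350},   # free-living strain 1
    {"J": 165, "K": 330, "E": 340},   # free-living strain 2
    {"J": 160, "K": 320, "E": 330},   # free-living strain 3
)]
endosymbiont = relative_profile({"J": 120, "K": 10, "E": 60})  # an SCc-like toy profile

for cat in ("J", "K", "E"):
    fls_mean = sum(p[cat] for p in fls_profiles) / len(fls_profiles)
    diff = endosymbiont[cat] - fls_mean
    print(f"COG {cat}: {diff:+.1f} pp vs FLS -> {categorise(diff)}")
```

Because only three categories are included here, the toy differences are exaggerated; with the full set of COG categories the FLS-to-FLS differences stay below roughly 0.43%, as noted in the Methods.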
2017-04-02T07:33:10.426Z
2012-10-15T00:00:00.000
{ "year": 2012, "sha1": "a887d0ef8a54eb04f6fdc15d117632a2fce90525", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0047274&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "336c9517d3bc99923f9f5569b90e9e092072dc26", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
232111714
pes2o/s2orc
v3-fos-license
How Do Decision Makers and Service Providers Experience Participatory Approaches to Developing and Implementing Physical Activity Interventions with Older Adults? A Thematic Analysis Abstract: Background: Physical activity has numerous health and well-being benefits for older adults, but many older adults are inactive. Interventions designed to increase physical activity in older adults have typically only produced small effects and have not achieved long-term changes. There is increasing interest in participatory approaches to promoting physical activity, such as co-production, co-design and place-based approaches, but they have typically involved researchers as participants. This study aimed to understand the experiences of decision-makers and service developers with the introduction of such participatory approaches when developing new physical activity programmes outside of a research setting. 
Methods: Semi-structured, qualitative interviews were conducted with 20 individuals who were involved in commissioning or developing the Greater Manchester Active Ageing Programme. This programme involved funding eight local authorities within Greater Manchester, England, to produce physical activity projects for older adults, involving participatory approaches. An inductive thematic analysis was conducted, structured using the Framework approach. Results: Interviewees identified important benefits of the participatory approaches. The increased involvement of older adults led to older adults contributing valuable ideas, becoming involved in and taking ownership of projects. Interviewees identified the need to move away from traditional emphases on increasing physical activity to improve health, towards focussing on social and fun elements. The accessibility of the session location and information was considered important. Challenges were also identified. In particular, it was recognised that the new approaches require significant time investment to do well, as trusting relationships with older adults and partner organisations need to be developed. Ensuring the sustainability of projects in the context of short-term funding cycles was a concern. Conclusions: Incorporating participatory approaches was perceived to yield important benefits. Interviewees highlighted that to ensure success, sufficient time needs to be provided to develop good working relationships with older adults and partner organisations. They also emphasised that sufficient funding to ensure adequate staffing and the sustainability of projects is required to allow benefits to be gained. Importantly, the implementation of these approaches appears feasible across a range of local authorities. Introduction Physical activity confers various benefits to older people, including improved wellbeing, a reduced illness risk, and an increased life-expectancy [1]. However, older adults are the least physically active age group and activity declines with advancing age [2]. In England, in 2016, 44% of adults aged ≥ 65 years engaged in 150 min of moderate intensity physical activity a week, compared with 67% of adults aged 19-64 years [3]. A number of interventions have been designed to promote physical activity in older adults. These can be effective in increasing activity up to one year later [4], but the activity increases are generally small, and typically smaller than those produced by interventions with younger age groups [5]. Furthermore, these increases are not apparent beyond one year [4]. One possible explanation for this lack of maintenance is that many older adults take part in physical activity programmes to increase social contact and to take part in fun activities, rather than through a desire to increase their physical fitness [6]. However, many interventions do not aim to meet these older adults' need for social contact and enjoyment [6]. Additionally, qualitative studies of older adults who were not taking part in interventions to increase their physical activity have revealed indifference or even hostility to the idea of increasing physical activity for its own sake [7]. In sum, individually delivered interventions to promote physical activity in older adults produce small effects that are often not maintained, and may be of limited interest to many older adults. There is increasing interest in participatory approaches to promoting physical activity [8]. 
These centre around promoting the participation of older adults in the development of physical activity interventions, which are valued in the locations they are implemented, in order to increase physical activity. Such approaches typically aim to embed long-term, sustainable physical activity programmes within the neighbourhoods where older adults live, rather than being 'interventions' delivered for a fixed period of time and then withdrawn. The present research considers various participatory approaches to involving older adults, including co-production, co-design, place-based working, and an asset-based approach. There are a variety of definitions for these approaches that derive from different disciplinary backgrounds [9], leading to frequent areas of disagreement [10,11]. In the present research, terms are used as follows: Co-production is an umbrella term for activities that aim to fully involve end-users in the development of interventions, by viewing the experiential knowledge of these end-users as core to the success of their development [10]. A related concept-co-design-emphasises involvement in identifying the problem and how to go about addressing it, rather than involvement in the development or delivery of interventions [12]. A place-based approach considers both local needs and local assets [13], drawing on older people's extensive knowledge of the communities and environments in which they live. This relates to taking an asset-based approach, where the experiences and skills of older adults are recognised and valued, as are the resources available in a local area [14]. Despite the growing interest in using these new approaches, there have been few evaluations of their success, especially in relation to groups such as older adults [15]. In particular, there is limited evidence concerning the effectiveness and impact of participatory approaches with older adults [15]. There has been some evaluation of such programmes, aimed at older adults and adults with health conditions, delivered in partnerships by the local government, local NHS organisations and voluntary sector organisations. These suggest increased levels of physical activity in those who persist with the programmes, as well as improved mental wellbeing, with some projects continuing beyond the funding period [16,17]. Furthermore, many existing co-production exercises have involved researchers as a key participant group alongside service providers and service users [18]. There is limited knowledge about the effectiveness of participatory approaches in the absence of support from researchers or the types of problems which teams may encounter [10]. An examination of these participatory approaches in practice is timely, given the ongoing and lively debates about the challenges and potential negative consequences of approaches such as co-production [19], which require more effort and resources than more traditional 'top-down' (e.g., theory-driven) interventions [10,11]. The present research considers the acceptability of an approach involving co-production and related methods to the commissioners of physical activity programmes and those who are responsible for designing such programmes. We evaluated the Greater Manchester Active Ageing (GM-AA) initiative-an innovative programme enacted across eight local authorities (Metropolitan Borough Councils (MBCs)) in the Greater Manchester area in England. 
The programme received £1 million from Sport England, over a two year period, with an explicit emphasis on trying 'new ways' of encouraging physical activity provision for older adults through increased participation of older adults. Each MBC had freedom to design their own programmes in response to local needs and capacities, but with an explicit criterion for funding to be used for one or more participatory approaches. The present research aimed to understand the experiences of service providers and decision-makers with these participatory approaches to developing interventions, and to comprehend barriers and facilitators to designing and implementing physical activity opportunities. Design and Setting Semi-structured, qualitative interviews were conducted with MBC leads and stakeholders with key decision-making roles in the GM-AA Programme. Greater Manchester is a conurbation with a population of approximately 2.8 million people [20]. It is an area that includes areas of high deprivation on multiple indicators; eight of 10 Greater Manchester MBCs are ranked within the most deprived 100 of 317 local authorities in England, with six being within the 50 most deprived (by the average index of multiple deprivation score [21]). The GM-AA Programme started on 1 April 2018. Interviews took place approximately one year into this two-year project, when planning for local projects had been taking place (and some localities had commenced the implementation of projects), but before it was apparent how successful these participatory approaches were at increasing physical activity. Participants Interview participants belonged to two groups. The first included representatives from MBCs who had decision-making roles in specifying the approach that each GM-AA project took, and/or were involved with securing GM-AA investment in their locality ('MBC Lead' participants). These individuals were therefore involved in deciding on the approach the locality projects took, but not necessarily involved in the on-the-ground delivery of new sessions. The second group included individuals from GM-wide stakeholder organisations who had contributed to the overall project through involvement in the initial bid to Sport England or through roles in commissioning and supporting the MBC applications ('Stakeholder Organisation' participants). Participants were purposively sampled to ensure that all participating MBCs were represented by at least one person, and there was representation from a range of stakeholder organisations (including Greater Sport, Sport England, and Greater Manchester Ageing Hub). Data Collection Semi-structured interviews took place between 18 December 2018 and 18 May 2019. Interviews were conducted face-to-face or by telephone and were structured using a topic guide (see Supplementary S1). The topics discussed included experiences of developing GM-AA projects, the effects of contextual factors on implementation, and what constitutes the successful provision of physical activity projects to older adults. The topic guides were used flexibly, with some topics covered in more depth with some individuals, according to their role in the process of commissioning and designing projects. Interviews were audio-recorded and transcribed verbatim. Analysis Inductive thematic analysis was conducted with the aim of understanding the experiences of development and implementation from the perspective of study participants [22]. The Framework approach was used to structure the analysis [23]. 
Framework provides a transparent structure to the analysis process, which is particularly useful when multiple researchers are working with the dataset, and easily permits analysts to review the steps that other researchers have taken. The first and third authors familiarised themselves with the data by reading and re-reading interview transcripts and developed 'codes'-labels that reflected important issues in the dataset. Codes were organised into a working thematic framework, including a list of categories and sub-categories. Two other authors (second and last authors) read transcript samples and reviewed and discussed the working thematic framework. The framework was applied to the full dataset ('indexing'): This indicated where text within interviews fitted within the categories/ subcategories of the working thematic framework. Matrices were developed: Charts in which category contents were mapped by participants were produced so that researchers could compare category content across participants, as well as participant perspectives across categories (see Supplementary S2 for an illustrative extract from a matrix). These matrices were interrogated to identify important and related issues in the dataset, and to generate insights into the issues considered. Through this process, initial categories of the working framework were further developed and refined to produce the final themes reported here. Results Twenty individuals were interviewed: 13 MBC Lead (MBC) and seven Stakeholder Organisation (SO) interviewees aged 20-59 years, with half aged 40-49 years. All selfidentified as white; 16 were female and four were male. Most (19) interviews were conducted face-to-face and one was conducted by phone. Interviews ranged from 34 to 113 min (mean: 56 min). Three main themes which address the aims of this paper were identified (see Table 1): Experiences of participatory approaches; understanding of the acceptability of physical activity programmes by older adults; and resources and sustainability. The following report gives more space to findings that are novel, and less to findings that previous research has covered in detail. MBC participants' understanding of co-design and co-production varied widely. The description of 'co-design' by one MBC lead was more about gathering opinions, rather than having true involvement in developing physical activities: "We get them all in, give them like a tea or a coffee and some biscuits and get them chatting in a social element, and we do come round with a short questionnaire basically asking what activity they'd like to do on what day, what time, and just a rough idea of the barriers to the physical activity" (P6, MBC). Other MBCs seemed to seek more in-depth input from older adults. For example, in the following quote, the ideas are coming from those in the community working with the providers in this case, rather than the providers developing the programme for the community without that local knowledge and input: "People have got strengths, they've got assets and they've got some fantastic ideas, when we've sort of looked at numbers in the past and said, "Well, how would you get more inactive older people to come along?" They've come up with suggestions. They've really sort of taken ownership of the sessions. So, yeah, it's happened very sort of organically" (P15, MBC). Challenges that arose when attempting co-design approaches to programme development were discussed. 
Inexperience with co-design approaches meant that initial strategies were not always optimally effective. One MBC initially invited older adults to steering group meetings with an operational focus; this was not found to be an effective way to draw on older adults' experience. An alternative approach, of organising separate meetings in a community setting, seemed more suitable for this MBC, enabling older adults to contribute to the decision-making process. There was a distinct sense that co-design approaches had important benefits for projects, as indicated in the quote from P15 above. In this locality, older adults were involved in every step, helping design the programme of activities, and with representatives attending steering group meetings. Experiences of Place-Based Working Participants generally seemed to feel that a place-based approach meant looking at how projects could be embedded in the community. For example, one neighbourhood lacked leisure facilities and the MBC lead saw the GM-AA programme as an opportunity to develop ideas around finding alternative available resources: "It doesn't have a specific leisure facility so therefore it gave us an added advantage of kind of testing other community place-based models, you know, what assets do we have in that community or that neighbourhood" (P7, MBC). There was a perception that traditional leisure facilities, such as gyms, might be offputting for older adults, and considering alternative venues could therefore be helpful for increasing participation: "It's a lot about the facilities and where older adults would like to go. So if the provision's in a hi-tech leisure centre the chance of getting older adults to want to engage in that, it's sort of not understanding the provision of what the activity should be but where that activity should be based is massive" (P5, SO). One participant described a community centre within a park, which was a setting that was seen to have important benefits: "I think one of the positives has been that it's not a traditional kind of leisure centre setting. It's very much, you know, green space park and then there's an indoor space for people to go and meet and have a cup of tea. Participants also considered the needs of different localities. In particular, as more deprived areas lacked resources and engagement opportunities, there was a particular concern about ensuring that older adults in those areas be included: "We are very aware that different localities have different levels of assets, community assets, community resources. That affects older people's ability to engage in programmes like this. So if you are in an area that's poorer or perhaps people are having to do paid work or have health problems or have carers' responsibilities, not able to get out and about so much. So I know that some of the localities were trying to yeah targeting most deprived areas" (P1, SO). Partnership and Collaborative Working Most participants seemed to consider older adults as assets, voicing an ethos of 'doing with' rather than 'doing to'. Involving older adults in physical activity provision, and utilising their skills and connections, were seen as important ways to enhance projects and to maximise the programme reach: "Training older people to be trainers themselves is sustaining that model that we are looking to have, older people being assets, doing things for themselves, being in a good position to reach people within their own communities" (P1, SO). 
However, it was acknowledged that it takes time to build relationships with partners, and that effective collaboration may depend on pre-existing relationships with organisations: "I think the most successful projects are, will be where you've got existing relationships, a lot of strong relationships at local level and trusted relationships. It's very difficult to just to go in cold to an area and start things from scratch and to build up relationships, and confidence and trust" (P1, SO). Miscommunication around expectations, capacity issues, and competition for limited financial resources were potential challenges for collaborating organisations, and could hinder optimal delivery: " . . . it's quite a barrier [ . . . ], in that none of us have got much money. So people, if you're not careful, are chasing money. And I think if it becomes about the money, then you've got a problem, because it should be about the programme and the older people locally" (P13, MBC). Despite the challenges, it was clear that partnership working could be beneficial in facilitating processes such as co-design and place-based working. Social Element Both MBC and stakeholder interviewees identified the social element of activities as highly important and key for participation: "You know, we're selling activity but people are buying friendship" (P9, MBC). One interviewee noted how older adults would meet to socialise before or after the physical activity session: "So people were turning up early to have a brew, as well as staying at the end to have a brew" (P15, MBC). Shifting the Norms around Physical Activity Many interviewees saw changing how physical activity provision is thought of and spoken about to be central. It was felt that the way physical activity opportunities are traditionally described could have negative connotations, and that focusing on fun and pleasurable aspects of sessions would be beneficial: "People have negative perceptions of physical activity. And when we have the conversations we focus more on the, not the health messages or the physical activity messages but utilising the fun and the connections and getting out in the fresh air and the relaxation and things like mindfulness and things" (P11, MBC). It was recognised that systemic change is required to change long-standing ways of thinking and talking about physical activity, but that such change can be challenging when resources are limited: "And then you're trying to change a system which has got embedded ways of working, which is financially under strain or stress and has a view of what older people are and do, you know. And you don't have to wander around very often, very far, to look at the leaflets, the imagery, so on and so forth, that's commonplace in leisure provision, to see that older people, you know, they're not kind of part of the package at all, you know" (P18, SO). Accessibility Accessibility was seen as key for delivering successful programmes. A local venue, minimising travel, was considered important due to financial implications and psychological factors around travel, such as lacking confidence: "It's very much around doorstep delivery as well because obviously transport and travel is an issue for lots of people including some older people, so it's around making sure they're in the right place, not just in terms of the usual inequalities but in terms of access generally" (P3, MBC). 
A consideration of the physical environment of activity sessions to ensure that older adults felt at ease and the importance of social support in enabling engagement was discussed: "Because I think from the focus groups, a couple of the people said that actually they were quite fearful of walking in parks on their own, because they felt that people looked at them as though they were a bit strange and things like that, so I think it gives that real sort of, that bond, if you like, and makes people feel safer" (P19, MBC). Access to marketing and promotional materials was raised by MBC leads. They proposed alternative marketing methods based on existing/developing relationships, or traditional approaches to publicity: " . . . it's getting that message out to them because the barrier is that a lot of them aren't on social media, they don't know how to access the information, so being in the area and on the ground and being that face of contact and going to where the older people are is a must" (P6, MBC). However, one participant found that social media could be effective, and could engage younger individuals who can share information with older relatives. Encouragement from friends and family was seen to be important in facilitating engagement: "More often than not the wives really encourage the men, [ . . . ] maybe it is tapping into the more active spouse, or that kind of thing, to get people through the door and like harness that friendship, that that need to make friends" (P9, MBC). Staffing and Timescales Both stakeholder and MBC lead interviewees discussed the impact of staff turnover during the development or implementation of GM-AA projects. New staff joining projects partway through struggled to develop and deliver programs, feeling that they lacked background understanding: "I think with the change in management, one member of staff leaving and one coming off maternity leave, I think it has affected it a lot, because I think we were flowing really well" (P16, MBC). Related to the staff capacity, time was considered a valuable resource, and tight timescales were found to be challenging. A central aspect of the GM-AA Programme was the expectation that MBCs would work with older adults and communities when developing projects. However, the timescales of the programme seemed to make such activities difficult: "If you're going to do it really true to the spirit of co-production and people and communities, it takes a really long time. And I think we had about three months from start to finish as I remember it. Well, that's not time to engage with new people and communities and understand their lives and get them to help shape the plan" (P17, SO). Effective engagement with communities took significant time as relationships and trust needed to be built. Where relationships with older adult communities do not already exist, connections need to be developed, and this may be challenging where timescales are tight and staff capacity is low, or impacted by turnover: "Essentially it takes time and effort to hear older people's voices and often setting up systems takes time and investment. And a lot of the local authorities do not have that older people's network or forum-some do but a lot don't. [ . . . ] So it's more difficult for somebody to go: right, here's some money you can apply for, you need to involve older people. How do you get older people, you know? 
However if it's already set up and you've already got older people telling you what it is they want, then that engagement is already there" (P1, SO). Sustainability How projects might be sustained beyond the programme funding period was a concern for many interviewees, and there was a recognition that sustainability could be most challenging in areas of higher deprivation: "Unfortunately, we're still in a world where we're on two or three year funding cycles and all of that, we all know that genuine long-term behaviour change takes time and it takes more time in places with less social capital and less, you know, to work with at the beginning. So, there will no doubt be a difference between the places, the more affluent places and the least affluent places in terms of actually impacting, and until we move to a world where we're investing long-term, and we're not on this project by project basis, yeah, it's not ideal" (P17, SO). Some locations were already aiming to ensure the sustainability of activities by charging a small fee to participants, although this raised the concern that such an approach could make it difficult for older adults with limited financial resources to attend: "The cost associated helps to pay for the instructor long term and the venue hire, without that cost it's just not sustainable. So if in deprived areas people couldn't afford to do that it would affect the sustainability" (P6, MBC). The sense that older adults might be viewed as assets was supported by one interviewee, who perceived local older adults to be potentially more valuable than staff in facilitating project sustainability: "If you've got somebody from a similar age [ . . . ] grown up in the same area, who knows the language, who knows some of the social networks, who knows some of the families who live in the place and what their concerns are [ . . . ] you're going to have more impact and those people stay in the community, you know. They don't then go and get another job two years later" (P18, SO). This stakeholder felt that older adults delivering physical activity sessions themselves was a powerful model because participants might feel that they can relate to the deliverer, upskilling older adults from the community may mean that the skills and delivery are more likely to stay in the community than if they are delivered by externally commissioned staff, and the individual might be less likely to leave a project on cessation of funding. One MBC was already following a model of training individuals from the community to deliver activity sessions: "Yeah, say for example if we want chair-based activities and we recognise there aren't many available chair-based deliverers [ . . . ], then what we start to do more and more now is upskill an individual from within the community or even a participant that wants to be involved so we've got that legacy left for the programme" (P7, MBC). These findings would suggest that utilising key aspects of participatory approaches in the GM-AA Programme may not only support the design and delivery of acceptable projects with which older adults will engage, but also help projects to be sustainable. Discussion Interviewees' perspectives of the new approaches were generally positive: The approach was not only seen as useful, but also valuable, with partnerships formed and benefits experienced that could inform subsequent working. 
Older adults were viewed as assets within co-design and place-based approaches, with value seen in them contributing to and taking ownership of projects. Benefits were also seen in increased working with partner organisations. Interviewees felt that the language used when talking about and promoting physical activity needed to change in an effort to highlight fun and social aspects. Challenges to carrying out co-design and co-production were also identified. Developing working relationships with partner organisations was recognised as requiring a significant period of time to do well, particularly where there were no pre-existing partner relationships or there was inexperience in using these approaches. High staff turnover and tight timelines further exacerbated these issues. Sustainability beyond the period of funding was a key concern for interviewees, particularly for areas of high deprivation. Viewing older adults as assets enabled interviewees to see them as part of the solution to the continuation of projects once funding ceases. It was apparent from descriptions of activities engaged in that not all of the localities carried out true co-production and co-design with older adults, but sometimes referred to consultation work. The Social Care Institute for Excellence (SCIE) [24] identified a number of barriers to co-production, including a lack of knowledge and understanding of what is involved, as well as time pressures and shortages of funding. In line with this, interviewees in the current study mentioned a lack of time as a key reason for why true co-production might not have occurred. Some interviewees mentioned that involving older adults in steering group meetings did not work well. One study looking at the experiences of health professionals and peer leaders in working together found that tensions arose when peer leaders felt they were not given status or a strong voice in their role, indicating the need for a 'culture of mutual respect' [25]. Interviewees discussing a place-based approach to working saw this as an opportunity to determine what physical assets are already in the community, and how ideas could be developed around these assets. It was felt that community venues (rather than leisure facilities) could be more appealing to inactive older adults, and accessibility was seen as important. A review of studies examining interventions to promote physical activity in older adults found that the environment in which physical activity sessions are provided is important to older adults, with participation at least in part depending on the availability and proximity of environments perceived as attractive, safe, and low-cost [26]. The concept of older adults being assets came across strongly during discussions of co-production and place-based working. An example of a successful model utilising the skills of older adults in supporting engagement with physical activity is the Someone Like Me programme, which involves older adult peer mentors supporting other older adults in physical activity [27]. Support for peer volunteers increasing physical activity levels in older adults has been found [28,29]. This study took an in-depth look at how individuals with development and decisionmaking roles experienced and perceived developing projects using participatory approaches. 
However, there are other important perspectives to be taken on board: It is also important to understand the experiences of the older adults who take part in the projects developed and the perceptions of the individuals who deliver sessions. This study evaluated the experiences within a single programme in the UK. Other regions could have different organisational structures in place, and cultural differences could impact the experiences of such participatory approaches, so it will be important to evaluate similar programmes developed in other locations. However, the present study involved representatives of eight MBCs in a conurbation with a combined population of 2.8 million people. Furthermore, the MBCs included a number of the most deprived locations in England, with corresponding pressures on financial and time resources. Given this potentially challenging environment for developing new approaches, the positive experiences reported here should be possible to replicate in less deprived areas of the UK and internationally. It is also important to note that the participatory approaches covered by the present report did not include the substantial involvement of researchers as participants: This is unusual in reports on the co-production of interventions [10,18]. The main finding of the present study is that it shows the feasibility of using novel participatory approaches by people who have a limited experience of these, and without a good deal of support from researchers or experts in these approaches. Furthermore, the participatory approaches to physical activity examined in this paper seemed to yield important benefits in the development of projects that are suitable for the project location and acceptable to older adult participants. The major implication of this research is that, even in a difficult financial environment with a deprived population, the experiences across multiple MBCs and organisations were generally positive, with all partners seeing value in these participatory approaches compared to traditional approaches to increasing physical activity. This new way of working also appeared to bring about a new way of thinking and speaking about older adults and physical activity, with participants talking about an ethos of 'doing with' rather than 'doing to'. Valuing older adults' views and involving them in processes brought about ideas and feedback that ensured projects were acceptable and appealing, with an emphasis on social benefits. However, where organisations are expected to implement these new approaches, it is important that they are provided with sufficient periods of time and appropriate staffing to build trusting relationships with older adults and partner organisations. They also need to receive appropriate training and support. Ensuring the sustainability of programmes is a key concern. An important benefit of the co-design approach was that older adults were able to consider sustainability and contribute recommendations for achieving this. Taking a place-based approach seemed to help identify assets that were already in place, independently of funding. However, there is a need to take a long-term approach to investing in physical activity provision for older adults to ensure that the creative and engaging projects developed in a programme such as this are able to be sustained and prosper beyond a short-term funding cycle. 
Conclusions In sum, incorporating participatory approaches, such as co-design, co-production and a place-based approach were seen to yield important benefits by individuals involved in designing and making decisions around physical activity provision for older adults. Sufficient funding to ensure adequate staffing, support for staff, and sustainability of projects beyond a short-term funding cycle is required.
2021-03-05T05:29:23.723Z
2021-02-01T00:00:00.000
{ "year": 2021, "sha1": "68146eb29089638a77c5642d6fced09f5d413074", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1660-4601/18/4/2172/pdf?version=1614255482", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "0e0b52a98c6d4e09e5edd8399bf9ee56218dcb48", "s2fieldsofstudy": [ "Medicine", "Sociology" ], "extfieldsofstudy": [ "Medicine" ] }
258179825
pes2o/s2orc
v3-fos-license
Evidence for Misalignment Between Debris Disks and Their Host Stars We place lower limits on the obliquities between debris disks and their host stars for 31 systems by comparing their disk and stellar inclinations. While previous studies did not find evidence for misalignment, we identify 6 systems with minimum obliquities falling between ~30{\deg}-60{\deg}, indicating that debris disks can be significantly misaligned with their stars. These high-obliquity systems span a wide range of stellar parameters with spectral types K through A. Previous works have argued that stars with masses below 1.2 $M_\odot$ (spectral types of ~F6) have magnetic fields strong enough to realign their rotation axes with the surrounding disk via magnetic warping; given that we observe high obliquities for relatively low-mass stars, magnetic warping alone is likely not responsible for the observed misalignment. Yet, chaotic accretion is expected to result in misalignments of ~20{\deg} at most and cannot explain the larger obliquities found in this work. While it remains unclear how primordial misalignment might occur and what role it plays in determining the spin-orbit alignment of planets, future work expanding this sample is critical towards understanding the mechanisms that shape these high-obliquity systems. INTRODUCTION The Sun's equatorial plane is well-aligned with the ecliptic, having an obliquity of 7.155 ± 0.002 • (Beck & Giles 2005). Most of the major solar system bodies move in nearly the same plane, suggesting that the planets formed from a protoplanetary disk that was rotating in the same direction as the early Sun. It has commonly been thought that other planetary systems form similarly and that exoplanet orbital axes should be closely aligned with their stars' spin axes. However, observational techniques such as the Rossiter-McLaughlin effect (Queloz et al. 2000;Shporer & Brown 2011;Triaud 2018), Doppler shadows (Albrecht et al. 2007;Zhou et al. 2016), and gravity darkened transits (Barnes 2009;Ahlers et al. 2020) have measured large spin-orbit angles for many extra-solar systems (Albrecht et al. 2022). Possible mechanisms responsible for spin-orbit misalignment generally fall into three categories: primordial misalignment, post-formation misalignment, and changes in the stellar spin axis that are independent of planet formation. The first, primordial misalignment, suggests that a protoplanetary disk is misaligned with its star's rotation axis and that planets with large spin-orbit angles form in situ. Processes that could misalign the disk include chaotic accretion (where the late arrival of material from the molecular cloud warps or tilts the disk; Bate et al. 2010;Thies et al. 2011;Fielding et al. 2015;Bate 2018;Takaishi et al. 2020), magnetic warping (when the Lorentz force between a young star and ionized inner disk magnifies any initial misalignments; Lai et al. 2011;, and secular processes involving an inclined stellar or planetary companion (Borderies et al. 1984;Lubow & Ogilvie 2000;Batygin 2012;Matsakos & Königl 2017). Post-formation misalignment implies that after formation, gravitational interactions alter a planet's orbit. This could occur via planet-planet scattering (Malmberg et al. 2011;Beaugé & Nesvorný 2012) or secular processes like Kozai-Lidov cycles (Naoz 2016) or disk-driven resonance (Petrovich et al. 2020). 
Both primordial and post-formation misalignment could also occur via stellar clustering, which appears to have a strong influence on the architecture of planetary systems (Tristan & Isella 2019;Winter et al. 2020;Rodet & Lai 2022) and may be commonplace (Yep & White 2022). Thirdly, it has been proposed that stars with convective cores and radiative envelopes can reorient themselves without an external torque due to internal gravity waves generated at the radiative-convective boundary (Rogers et al. 2012(Rogers et al. , 2013. Hot Jupiters-massive planets on very short orbits-frequently appear misaligned with hot, rapidly-rotating stars that generally fall above the Kraft break (∼6200 K; Kraft 1967) while low-mass planets appear misaligned with both cool and hot stars (Winn et al. 2010;Schlaufman 2010;Albrecht et al. 2022). It has been suggested that hot Jupiters first enter high-obliquity orbits regardless of their host stars' properties; however, tidal interactions between the massive, close-orbiting planets and the relatively thick convective envelopes found in stars below the Kraft break realign the stellar spin axes with the hot Jupiters' orbits. The mechanisms responsible for spin-orbit misalignment may help reveal how these exotic planets form. While formation in situ through core accretion may be possible (Batygin et al. 2016), it would be challenging for enough material to accumulate and develop into a planet that close to a star. Instead, if a massive planet formed far from its star, it could move to a short orbit via disk-driven migration or high-eccentricity tidal migration (Dawson & Johnson 2018). If the hot Jupiter were primordially misaligned, this would indicate disk-driven migration, whereas post-formation misalignment could result from high-eccentricity tidal migration. Constraints on which mechanisms actually contribute to spin-orbit misalignment can be placed using the observed distribution of obliquities and trends across system parameters. Additional constraints on primordial misalignment can be placed using observations of circumstellar disks and their stars. Watson et al. (2011) first compared stellar inclinations to disk inclinations for 8 systems with spatially resolved debris disks, while Greaves et al. (2014) later did the same for 10 systems imaged by the Herschel satellite. Neither found evidence for misalignment, but both had limited samples and predate many spatially resolved images of disks taken by the Atacama Large Millimeter/submillimeter Array (ALMA), Hubble Space Telescope (HST), and Gemini Planet Imager (GPI) that can robustly measure disk inclinations. Davies (2019) compared inclinations for resolved disks (mostly protoplanetary) in the ρ Ophiuchus and Upper Scorpius star forming regions, finding that a third of systems are potentially misaligned. Davies (2019) used these contrasting results to raise the additional question of whether or not debris disks preserve the preceding protoplanetary disks' geometry and if star-disk-planet interactions or the formation of a debris disk can change the star-disk obliquity. In this work, we study the star-disk alignment for an expanded sample of 31 resolved debris disks. In Section 2, we outline our methods, including the sample selection and measurements made. We then discuss our results in Section 3. Finally, in Section 4, we conclude our findings. METHODS We assembled a list of spatially resolved debris disks from the literature, excluding circumbinary and circumtriple disks to simplify our analysis. 
We then identified systems with published stellar inclinations (i s ) or the data necessary to measure the inclination available, leaving a sample of 31 targets that can be found in Table 1. Effective temperatures (T eff ), masses (M ), and radii (R) were taken from the TESS Input Catalog (TIC; Stassun et al. 2018;Paegert et al. 2021) for most stars in our sample. Given the majority resolved debris disks are located around nearby, bright stars, the TIC adopted most of these parameters from large spectroscopic catalogs, avoiding the challenges of color-temperature relationships discussed in Stassun et al. (2018), particularly for the coolest stars (T eff < 3800 K). Parallaxes are known for all objects in our sample, providing precise measurements of radius in the TIC. 4 targets (AU Mic, Vega, β Leonis, and β Pictoris) that were either missing measurements or were reported without uncertainties were supplemented using values found elsewhere in the literature. The number of confirmed planets in each system was additionally determined by searching the NASA Exoplanet Archive Confirmed Planets Table (NASA Exoplanet Archive 2019). 8 out of the 31 systems have at least one confirmed planet. To determine the projected rotational velocities (v sin i) of our sample, we adopted published values for each target. If multiple values were found, we adopted the measurement made using the highest-resolution spectrograph. Only 1 object (β Leonis) had a v sin i reported without uncertainties; we assume 10% error bars on this measurement, typical for the uncertainties in our sample. 2 objects (GJ 581 and HD 23484) had upper limits on their projected rotational velocities and were treated as such in our analysis. We note that spectral line broadening from rotation is degenerate with turbulence in the stellar atmosphere and the v sin i measurements in this work use a variety of modeling frameworks to account for macroturbulence, possibly introducing unknown systematics to our analysis. Archival rotation periods were gathered for 26 objects in our sample. We also directly measured the rotation period for stars that displayed quasiperiodic variations in the Pre-Search Data Conditioned Simple Aperture Photometry (PDCSAP) light curves produced by the TESS Science Processing Operations Center (SPOC), which have been corrected for instrumental systematics Smith et al. 2012;Stumpe et al. 2014). Each photometric time series was modeled using a Gaussian process (GP), which are commonly used to represent rotational modulation induced by active regions rotating in and out of view (Haywood et al. 2014;Rajpaul et al. 2015). We used the rotation kernel implemented in celerite2 that combines two dampened simple harmonic oscillators with periods of P and P/2 to capture the stochastic variability in a star's rotation signal (Foreman-Mackey et al. 2017;Foreman-Mackey 2018). Using TESS data, we measured the rotation period of 18 stars, 5 of which had no previously published measurements; all of these measurements agree with either our archival values or the rotation period relationship in Noyes et al. (1984). For each of these 18 targets, we used the rotation periods and uncertainties determined using TESS data as they are well-constrained and measured under a standard framework. These periods, along with the projected rotational velocities and the corresponding uncertainties, can be found in Table 1 while the TESS light curves, GP models, and rotation period posteriors are shown in Appendix A. 
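For readers who want to reproduce this step, a minimal sketch of the rotation-kernel fit described above is given below in Python. It assumes detrended arrays t, y, and yerr standing in for a TESS PDCSAP light curve (here synthetic), uses the celerite2 RotationTerm (two damped simple harmonic oscillators at P and P/2), and finds a maximum-likelihood period with scipy rather than sampling the full posterior as done for the targets in Table 1.

```python
# Sketch only (not the authors' pipeline): fit the celerite2 rotation kernel
# described above to a detrended light curve by maximum likelihood.
# t, y, yerr stand in for a TESS PDCSAP time series (days, relative flux).
import numpy as np
from scipy.optimize import minimize
import celerite2
from celerite2 import terms

rng = np.random.default_rng(42)
t = np.sort(rng.uniform(0.0, 27.0, 1000))            # roughly one TESS sector, days
y = 1e-2 * np.sin(2.0 * np.pi * t / 4.8) + 1e-3 * rng.standard_normal(t.size)
yerr = np.full_like(t, 1e-3)

def neg_log_like(params):
    log_sigma, log_period, log_q0, log_dq, f = params
    if not 0.0 <= f < 1.0:                            # keep the P/2 fraction valid
        return np.inf
    kernel = terms.RotationTerm(
        sigma=np.exp(log_sigma),    # amplitude of the variability
        period=np.exp(log_period),  # primary rotation period (days)
        Q0=np.exp(log_q0),          # quality factor of the secondary mode
        dQ=np.exp(log_dq),          # extra quality factor of the primary mode
        f=f,                        # fractional amplitude of the P/2 mode
    )
    gp = celerite2.GaussianProcess(kernel, mean=np.mean(y))
    gp.compute(t, yerr=yerr, quiet=True)
    return -gp.log_likelihood(y)

x0 = [np.log(np.std(y)), np.log(5.0), np.log(1.0), np.log(1.0), 0.5]
soln = minimize(neg_log_like, x0, method="Nelder-Mead")
print("rotation period ~ %.2f d" % np.exp(soln.x[1]))
```

In practice the same likelihood would be placed inside an MCMC sampler to obtain the rotation period posteriors referred to in Appendix A; the optimizer above only returns a point estimate.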
We then determined the stellar inclination for each target with a known radius, rotation period, and v sin i using the projected rotational velocity method, where the inclination is given by sin i_s = (v sin i) P / (2πR), with v = 2πR/P the equatorial rotation velocity. As discussed by Masuda & Winn (2020), v sin i and v are not independent from each other, complicating the statistical inference of i. A simple technique accounting for this is to use a Markov chain Monte Carlo (MCMC) process with a uniform prior on cos i and measurement-informed priors on R, P, and (2πR/P)√(1 − cos²i) (Albrecht et al. 2022). This approach is also advantageous because it easily accounts for uncertainties in our measurements of R, P, and v sin i. Given that measurements of v sin i are typically made from a star's spectral absorption lines and require that broadening from rotation be distinguished from other sources, including turbulence in the stellar atmosphere or instrumental resolution, the projected rotational velocity method is often subject to systematic uncertainties. Therefore, we adopted stellar inclinations previously determined using more accurate methods such as interferometry (Vega and β Leonis), asteroseismology (β Pictoris), and starspot tracking (ε Eridani) whenever possible. Interferometry and asteroseismology also allow us to expand our sample to early-type stars with weak, often undetectable rotational modulation. We conducted a literature search for disk inclinations (i_d), selecting values that were the most well-constrained, typically corresponding to images with the highest spatial resolution. Most of these images were taken using ALMA, HST, and GPI, although the uncertainties on the inclinations vary widely as the spatial resolution is highly dependent on instrument configuration and distance. Inclinations can also be determined more precisely for edge-on disks than face-on disks. To better understand whether the star and disk might be misaligned, we calculated the difference between the disk and stellar inclinations (Δi = i_d − i_s), the absolute value of which gives the minimum star-disk misalignment; because we are unable to determine the position angle of the stellar rotation axis or the direction of the disk and stellar angular momenta, we are unable to calculate the full obliquity. For systems with stellar inclinations determined using the projected rotational velocity technique, we assumed the MCMC posterior distribution; for the systems with archival measurements, a sample of stellar inclinations was drawn from Gaussian distributions. Similarly, we drew a sample of disk inclinations using either uniform or Gaussian distributions when appropriate. We then took the differences between our samples of disk and stellar inclinations and adopted the median value along with lower and upper uncertainties representative of the 68% credibility interval. These differences, along with the stellar and disk inclinations, are given in Table 2.
Comparing Disk and Stellar Inclinations
25 systems appear to be closely aligned, with disk and stellar inclinations consistent with being within 10° of each other (although large uncertainties mean that some of these systems could still be misaligned). There are several exceptions; most notably, HD 10647, HD 138813, HD 191089, HD 30447, ε Eridani, and τ Ceti all have misalignments ranging roughly between 30° and 60°. If stars and their disks are well-aligned, we would expect to see a monotonic, increasing relationship between disk inclination and stellar inclination in Figure 1.
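As an illustration of the projected rotational velocity inference described in the methods above, the following sketch draws cos i uniformly, propagates Gaussian measurements of R and P, and weights each draw by a Gaussian v sin i likelihood; this simple importance sampling stands in for the MCMC actually used, and the numerical inputs are placeholders rather than values from Table 1.

```python
# Illustrative sketch of the stellar-inclination inference outlined above:
# sample cos(i) uniformly, propagate Gaussian R and P, and weight each draw by
# how well (2*pi*R/P)*sqrt(1 - cos^2 i) matches the measured vsini.
import numpy as np

R_SUN_KM, DAY_S = 695_700.0, 86_400.0
R, R_err = 0.90, 0.03          # stellar radius [R_sun]   (placeholder values)
P, P_err = 11.2, 0.3           # rotation period [d]      (placeholder values)
vsini, vsini_err = 3.0, 0.4    # projected velocity [km/s] (placeholder values)

rng = np.random.default_rng(0)
n = 200_000
cosi = rng.uniform(0.0, 1.0, n)                    # isotropic-orientation prior
R_s = rng.normal(R, R_err, n) * R_SUN_KM
P_s = rng.normal(P, P_err, n) * DAY_S
v_model = (2.0 * np.pi * R_s / P_s) * np.sqrt(1.0 - cosi**2)   # km/s

# Importance weights from the Gaussian vsini likelihood.
w = np.exp(-0.5 * ((v_model - vsini) / vsini_err) ** 2)
inc = np.degrees(np.arccos(cosi))
order = np.argsort(inc)
cdf = np.cumsum(w[order]) / np.sum(w)
lo, med, hi = np.interp([0.16, 0.5, 0.84], cdf, inc[order])
print(f"i_s = {med:.1f} +{hi - med:.1f} / -{med - lo:.1f} deg")
```

The same weighted samples can be differenced against draws of the disk inclination to build the Δi distributions and 68% credibility intervals quoted in Table 2.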
We test how well-aligned systems tend to be by calculating the Spearman rank-order correlation coefficient (r_S) for our data set. Using the median values for our inclinations, we find a coefficient of 0.62 with a p-value of 0.0002; however, this does not reflect the broad uncertainties on some of the inclination measurements. For each disk and stellar inclination, we drew a random sample and calculated a new coefficient and the corresponding p-value 10^4 times. The 68% credibility interval for r_S was 0.54 ± 0.08, with p-values of 0.0008 (+0.0036/−0.00069). These values for r_S are notably lower than the coefficient of 0.82 found by Watson et al. (2011) and indicate that while there is a positive correlation between disk and stellar inclinations, they are not always well-aligned. It is important to keep in mind that disk and stellar inclinations can only put a lower limit on misalignment and that a full analysis requires knowledge of both the disk and stellar position angle on the sky plane. Further, inclinations do not indicate the directions in which the star is rotating and the disk material is orbiting; if they are moving in opposite directions, the misalignment between the disk and star would be much greater than calculated. Given that systems such as K2-290 (a strong candidate for primordial misalignment) have co-planar planets in retrograde orbits, this may be a significant bias (Hjorth et al. 2021). While Watson et al. (2011) and Greaves et al. (2014) did not find signs of star-disk misalignment in their sample of debris disks, Davies (2019) observed misalignment of protoplanetary disks at a rate slightly higher than seen in our analysis (∼33%); however, we note that they observed much smaller misalignments, typically less than 30°. This indicates that the star-disk misalignment may not decrease as the disk transitions, as suggested by Davies (2019), and raises the question of whether misalignment increases as the system evolves. It is possible that mechanisms such as stellar flybys can incline debris disks (Moore et al. 2020) while processes such as accretion onto the star are unlikely to realign the system. Figure 2 shows the mass, radius, v sin i, and rotation period of each star in our sample versus the effective temperature. Figure 3 shows the difference between the disk and stellar inclinations as a function of system parameters. In these plots, we see most of the star-disk systems are well-aligned aside from the 6 mentioned above. The misaligned systems are not clustered around any specific T_eff or mass, suggesting that misalignment may occur regardless of stellar type, although there are not enough stars to draw definitive conclusions. We also do not observe misalignment occurring more frequently with the presence of known planets; yet, many substellar objects in these debris disk systems may easily be undetectable. Finally, the 6 significantly misaligned systems span a wide range of ages (∼7 Myr to 5.8 Gyr; Mamajek & Hillenbrand 2008; Bell et al. 2015; Pecaut & Mamajek 2016; Shkolnik et al. 2017; Nielsen et al. 2019); this is not surprising given that primordial misalignment is expected to occur during the protoplanetary disk stage, well before debris disks form.
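A minimal sketch of the resampled Spearman test described above follows; the inclination arrays and uncertainties are placeholders rather than the Table 2 measurements, and scipy.stats.spearmanr supplies the coefficient and p-value for each resampling.

```python
# Sketch of the uncertainty-aware Spearman test described above: resample the
# disk and stellar inclinations from Gaussians and recompute r_S many times.
# i_disk, i_star, and their errors are placeholders for the Table 2 values.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
i_disk = np.array([85.0, 34.0, 77.0, 26.0, 70.0, 45.0])
i_disk_err = np.array([2.0, 3.0, 5.0, 4.0, 1.0, 6.0])
i_star = np.array([80.0, 70.0, 75.0, 60.0, 30.0, 50.0])
i_star_err = np.array([5.0, 8.0, 6.0, 10.0, 7.0, 9.0])

r_samples, p_samples = [], []
for _ in range(10_000):
    d = rng.normal(i_disk, i_disk_err)
    s = rng.normal(i_star, i_star_err)
    r, p = spearmanr(d, s)
    r_samples.append(r)
    p_samples.append(p)

lo, med, hi = np.percentile(r_samples, [16, 50, 84])
print(f"r_S = {med:.2f} (68% interval {lo:.2f} to {hi:.2f})")
print(f"median p-value = {np.median(p_samples):.4f}")
```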
Implications for Primordial Misalignment If magnetic warping were responsible for spin-orbit misalignment, Spalding & Batygin (2015) argue that misalignment should occur more frequently around stars with masses greater than 1.2 M ; this is because lower-mass, young stars would be able to realign their stellar spin axes with the surrounding disks due to their stronger magnetic fields. As seen in Figure 3, 2 of the significantly misaligned systems ( Eridani and τ Ceti) have stellar masses below 1.2 M while 2 (HD 10647 and HD 191089) have masses very close to this limit, suggesting that magnetic warping alone is not a viable mechanism for disk misalignment. If chaotic accretion were at play, subsequent accretion of disk material onto the star is expected to reduce the misalignment to values lower than 20 • by the time planets begin to form (Takaishi et al. 2020). Not only does this fail to describe the distribution of obliquities observed for exoplanets, but it does not match the ∼30 • − 60 • misalignments shown in Figure 3. We do see systems with small potential misalignments near or below 20 • , including HD 107146, HD 129590, HD 145560, HD 202917, HD 206893, HD 35650, HD 377, and β Leonis, but the large uncertainties on our obliquity measurements make it difficult to determine whether low-obliquity systems are truly misaligned. Additionally, without knowing the position angles of each star, we cannot definitively comment on whether star-disk misalignment commonly falls near 20 • . While the significantly misaligned disks could have been torqued out of alignment by an inclined stellar or planetary companion, this mechanism is unable to explain the observed distribution of spin-orbit obliquities (Zanazzi & Lai 2018;Albrecht et al. 2022). Ultimately, it is unclear what mechanisms can misalign disks around their stars; further, because we do not know of many planets in these systems, we are unable to determine whether the same mechanisms could be responsible for spin-orbit misalignment. As discussed in Section 3.1, mechanisms such as stellar flybys may incline debris disks in addition to planetary orbits, and the obliquities measured in this work may not reflect a system's primordial architecture. CONCLUSIONS We investigate the alignment of resolved debris disks with their stars, placing a lower limit on their obliquities by comparing stellar and disk inclinations. With recent resolved images of disks taken by ALMA, HST, and GPI, along with rotation periods measured using TESS, we were able to include 31 systems in our analysis, more than 3 times as large as the samples included in previous studies of debris disks. While there formerly was little evidence for misalignment between debris disks and their stars, we find 6 systems with disk and stellar inclinations separated by ∼30 • − 60 • . This indicates that these evolved disks are frequently misaligned with their stars; although, systems are more often well-aligned than not. Given that we observe such large minimum obliquities, some mechanism other than chaotic accretion needs to be at play. We also see misaligned systems with stellar masses below or near 1.2 M , suggesting that magnetic warping alone cannot be responsible for misalignment. Because resonant processes that could torque the disk out of alignment fail to explain the distribution of spin-orbit obliquities, it remains unclear what role primordial misalignment could play in shaping planetary systems. 
Further, it is unknown whether these disk obliquities truly reflect the structure of the preceding protoplanetary disk. Future work needs to expand the number of debris disk hosts with inclination measurements, helping constrain the characteristics of misaligned systems. Few stars in our sample have known planetary companions and no confirmed hot Jupiter systems are currently known to contain circumstellar debris; searching for dust in confirmed planetary systems could help better understand whether the mechanisms that misalign disks with their stars are also responsible for spin-orbit misalignment. Existing methods to measure stellar position angle cannot be applied to the vast majority of debris disk hosts (Le Bouquin et al. 2009;Lesage & Wiedemann 2014), meaning the full obliquity between a disk and its star cannot be measured. As mentioned by Watson et al. (2011), a full Bayesian analysis accounting for this limitation could place more useful upper limits on the misalignment, similar to the framework presented in Fabrycky & Winn (2009) for spin-orbit angles. Regardless, the lower limits on misalignment presented in this work help better understand the geometry of debris disks, and future observations will improve our understanding of the mechanisms that shape and misalign these systems. We thank the referee for thorough and insightful feedback that greatly improved the quality of this paper. We also thank Ruth Angus and Megan Bedell for a helpful discussion on how to measure stellar rotation periods and Ann-Marie Madigan and Carolyn Crow for thoughtful conversation about our results. M.A.M. acknowledges support for this work from the National Aeronautics and Space Administration (NASA) under award number 19-ICAR19 2-0041. This research has made use of the NASA Exoplanet Archive, which is operated by the California Institute of Technology, under contract with the National Aeronautics and Space Administration under the Exoplanet Exploration Program. This work made use of the SIMBAD database (operated at CDS, Strasbourg, France), NASA's Astrophysics Data System Bibliographic Services. The TIC data presented in this paper were obtained from the Mikulski Archive for Space Telescopes (MAST) at the Space Telescope Science Institute and can be accessed via 10.17909/fwdt-2x66. This research has made use of the VizieR catalog access tool, CDS, Strasbourg, France (DOI: 10.26093/cds/vizier). The original description of the VizieR service was published in A&AS 143, 23. This work has made use of data from the European Space Agency (ESA) mission Gaia (https://www.cosmos.esa.int/gaia), processed by the Gaia Data Processing and Analysis Consortium (DPAC, https://www.cosmos.esa.int/web/gaia/dpac/consortium). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the Gaia Multilateral Agreement. b Planets follow edge-on orbits around AU Mic and are well-aligned with star and disk (Plavchan et al. 2020;Addison et al. 2021;Martioli et al. 2021). c Planets around GJ 581 detected via radial velocities and inclinations are unknown (Trifonov et al. 2018). d Planet around HD 10647 detected via radial velocities and inclination is unknown (Marmier et al. 2013). e The two planets around HD 206893 have a mutual inclination of 15 • at most and could be misaligned from the disk by ∼20 • (Hinkley et al. 2022). f Planets around β Pictoris detected via direct imaging and appear well-aligned with the disk (Feng et al. 2022). 
g Orbital inclination of the planet around ε Eridani has large uncertainties; it may be misaligned with the disk (Llop-Sayson et al. 2021; Benedict 2022). h Planets around τ Ceti detected via radial velocities and inclinations are unknown (Feng et al. 2017).
A description of the light curve modeling approach is given in Section 2, while the derived rotation periods and 1σ uncertainties are found in Table 1. We additionally calculate Lomb-Scargle periodograms for each light curve and the false alarm probability (FAP) associated with the rotation signal (Zechmeister & Kürster 2009). Several stars display double dipping, where two opposing star spots create a false signal at P/2 (Basri & Nguyen 2018); in several instances, including HD 377 and HD 92945, we find that phase dispersion minimization periodograms (Stellingwerf 1978) better capture the true rotation period and use them in place of Lomb-Scargle periodograms. Figures 4 through 21 show a periodogram, phase-folded light curve, light curve from a single TESS sector, and the rotation period posterior for each object showing quasiperiodic variations in TESS data.
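The periodogram cross-check mentioned above can be sketched with astropy's Lomb-Scargle implementation as below; the light curve is synthetic and stands in for a detrended TESS sector, and the false-alarm probability uses astropy's built-in estimate rather than any project-specific calibration.

```python
# Sketch of the periodogram check described above: a Lomb-Scargle periodogram
# and the false alarm probability of its highest peak, using astropy. The t,
# y, yerr arrays are placeholders for a detrended TESS light curve.
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0.0, 27.0, 1200))                        # days
y = 5e-3 * np.sin(2.0 * np.pi * t / 6.3) + 1e-3 * rng.standard_normal(t.size)
yerr = np.full_like(t, 1e-3)

ls = LombScargle(t, y, yerr)
frequency, power = ls.autopower(minimum_frequency=1.0 / 27.0,
                                maximum_frequency=1.0 / 0.5)
best = np.argmax(power)
fap = ls.false_alarm_probability(power[best])

print(f"best period = {1.0 / frequency[best]:.2f} d")
print(f"peak FAP    = {fap:.2e}")
# Note: a pair of opposing star spots can push the strongest peak to P/2
# ("double dipping"); checking the peak at twice that period, or a phase
# dispersion minimization periodogram, helps guard against this.
```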
2023-04-18T01:16:30.898Z
2023-04-15T00:00:00.000
{ "year": 2023, "sha1": "84260024db84b1fb4b99c7bc29d8664d63754036", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "84260024db84b1fb4b99c7bc29d8664d63754036", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
27749239
pes2o/s2orc
v3-fos-license
Map location of lactate dehydrogenase-elevating virus (LDV) capsid protein (Vp1) gene
Lactate dehydrogenase-elevating virus (LDV) is currently classified within the Togaviridae family. In an effort to obtain further information on the characteristics of this virus, we have begun to sequence the viral RNA genome and to map the virion structural protein genes. A sequence of 1064 nucleotides, which represents the 3′ terminal end of the genome, was obtained from LDV cDNA clones. A 3′ noncoding region of 80 nucleotides followed by two complete open reading frames (ORFs) were found within this sequence. The two ORFs were in different reading frames and overlapped each other by 11 nucleotides. One ORF encoded a protein of 170 amino acids and the other ORF, located adjacent to the 3′ noncoding region of the viral genome, encoded a 114 amino acid protein. Thirty-three N-terminal residues were sequenced directly from purified LDV capsid protein, Vp1, and this amino acid sequence mapped to the ORF adjacent to the 3′ noncoding region. The presence of overlapping ORFs and the 3′ terminal map position of Vp1 indicate that LDV differs significantly from the prototype alpha togaviruses. The Togaviridae family consists of small, spherical, enveloped viruses with icosahedral nucleocapsids and single-stranded RNA genomes of positive polarity. Recent sequence analyses indicate that many of the viruses initially classified in this family differ from the prototype alpha togaviruses in their genome organizations and replication strategies (1-4). Lactate dehydrogenase-elevating virus (LDV) is currently classified as a togavirus (1). The LDV particle has a diameter of 50-55 nm and the diameter of the nucleocapsid has been estimated to be 30-35 nm (5, 6). The genome of LDV is a single-stranded RNA molecule of positive polarity (5) which contains a poly(A) tract at its 3' terminus (7, 8). The estimated molecular weight of the genome is 5 × 10^6 Da (5, 9). LDV particles are composed of at least three structural proteins: the capsid protein, Vp1, with a molecular weight of 15,000 Da; a nonglycosylated envelope protein, Vp2, with a molecular weight of 18,000 Da; and an envelope glycoprotein, Vp3, which exhibits a heterogeneous migration pattern on SDS-PAGE with an estimated molecular weight range between 24,000 and 44,000 Da (5, 10). It is not known whether the Vp3 region on the gel contains more than one protein or various differentially glycosylated forms of the same protein. Although in vivo replication of LDV is highly efficient, it is difficult to produce virus in tissue culture; no cell line has yet been found which can efficiently support LDV replication. LDV replicates in primary murine cell cultures containing macrophages (11), but only a small subpopulation (6-20%) of cells in these macrophage cultures is permissive for LDV replication (12). Since such a small proportion of cells are infected in the macrophage cultures, it has not been possible to detect intracellular viral components in cell culture extracts. Because of the technical difficulties inherent in studying LDV replication in tissue culture, neither the gene order nor the replication strategy of this virus has yet been delineated. In order to obtain the information needed to definitively classify LDV, we have begun to map the structural proteins of the neurotropic isolate of LDV (LDV-C; 13) on the viral genome.
LDV-C (approximately 2 × 10" ID50) was purified from blood plasma taken from 50 CD-1 mice (Charles River Breeding Laboratories, Boston, MA) as previously described (8). The viral structural proteins were separated by SDS-PAGE and transferred electrophoretically to a polyvinylidene difluoride (PVDF) membrane (Millipore Corp., Bedford, MA) using a modified Towbin Tris-glycine buffer (12.5 mM Tris, 96 mM glycine, pH 8.3) containing no methanol (14). After staining with Coomassie blue, the Vp1 protein band, which migrated with an apparent molecular weight of 14,000 Da (data not shown), was excised from the PVDF membrane and analyzed in an Applied Biosystems Model 475A protein sequencer. Automated protein sequence analysis was performed in the gas phase mode with on-line PTH analysis using a Model 120A analyzer as previously described (15). An N-terminal sequence of 33 amino acids was obtained for Vp1. No homologous sequence was found in a search of the National Biomedical Research Foundation Protein Sequence data base. We previously reported an LDV-C cDNA clone, dt4, which was synthesized by oligo-deoxythymidine priming of the viral RNA and represents the 3' terminus of the LDV-C genomic RNA (8). The genomic RNA of LDV-C was recloned as previously described (8) using calf thymus (ct) pentameric DNA for priming. Both strands of the double-stranded cDNA clones were sequenced by the dideoxy chain termination method as previously described (8) until complementary overlapping sequences were obtained within each clone. As shown in Fig. 1, four ct-primed clones (b24, b63, b104, and ~44) were found to contain long poly(A) tracts at one end. These four clones completely overlap the unique sequence of the dt4 clone. The longest poly(A) tract found among these clones was 52 nucleotides in length, which is very close to the length of the 3' terminal poly(A) tract (approximately 50 nucleotides) previously estimated directly from the LDV genomic RNA (7). Two additional clones, b90 and a16, further extended the 5' end of this sequence (Fig. 1). The sequence obtained from the seven DNA clones extends 1064 nucleotides beyond the 3' terminal poly(A) tract of the LDV-C genome. This sequence, which has been converted to the viral RNA sequence, is shown in Fig. 2. Because of the high mutation rate characteristic of RNA virus replication, multiple clones were sequenced in order to obtain the majority nucleotide for any given position. With the exception of the reported change in the dt4 clone (8) at position 976 of this sequence, identical nucleotide sequences were found in all clones at all positions represented by three or more clones. Although the regions between nucleotides 89 and 207, 292 and 421, and 565 and 657 were each sequenced from only two overlapping clones, the sequences obtained were identical except at position 311. At position 311, clone b63 contained uridine, whereas clone b90 contained cytosine (Fig. 2). The regions between nucleotides 1 through 88, 208 through 291, and 422 through 564 have not yet been confirmed in overlapping clones. When this 1064 nucleotide sequence was translated using the Sequence Analysis Package of the Wisconsin Genetics Computer Group (16), two complete ORFs were found in different reading frames (Fig. 2). No ORF of significant size was found in the third reading frame.
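The ORF search described above was performed with the GCG package; purely as an illustration of the same operation, the following Python sketch scans the three reading frames of a single strand for AUG-to-stop ORFs. The toy sequence is made up and is not the 1064-nucleotide LDV-C sequence.

```python
# Generic single-strand ORF scan of the kind described above (illustration
# only; the authors used the Wisconsin GCG package). The sequence is a toy.
STOPS = {"UAA", "UAG", "UGA"}

def find_orfs(rna, min_aa=50):
    """Return (frame, start_nt, stop_nt, aa_length) for AUG-to-stop ORFs."""
    orfs = []
    for frame in range(3):
        pos = frame
        while pos + 3 <= len(rna):
            if rna[pos:pos + 3] == "AUG":
                end = pos + 3
                while end + 3 <= len(rna) and rna[end:end + 3] not in STOPS:
                    end += 3
                if end + 3 <= len(rna):               # a stop codon was found
                    aa_len = (end - pos) // 3         # includes Met, excludes stop
                    if aa_len >= min_aa:
                        orfs.append((frame, pos + 1, end + 3, aa_len))
                    pos = end                         # resume after this ORF
            pos += 3
    return orfs

# Toy sequence with one ORF in frame 0 and a second, frame-shifted ORF.
rna = "AUG" + "AAA" * 5 + "UAA" + "G" + "AUG" + "CCC" * 4 + "UAG"
for frame, start, stop, aa_len in find_orfs(rna, min_aa=5):
    print(f"frame {frame}: nt {start}-{stop}, {aa_len} aa")
```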
One ORF begins with a start codon (AUG) at nucleotide 135 and ends with an ochre termination codon (UAA) at nucleotide 648. This ORF encodes a protein of 170 amino acids (denoted as VpX, Fig. 2). The nucleotide ambiguity at position 311 does not change the encoded amino acid. Although the identity of the encoded protein is not yet known, the amino acid sequence of this protein does not contain potential N-linked glycosylation sites and is not sufficient in length to be the gene encoding the envelope glycoprotein Vp3. We have not yet obtained sufficient data to determine whether this ORF encodes the Vp2 protein. The 5' end of the second ORF overlaps the 3' end of the ORF described above by 11 nucleotides and is in a different reading frame (Fig. 2). The sequence of the overlap region between these two ORFs was confirmed in four clones. The second ORF begins with a start codon at position 637 and ends with a single termination codon (UAG) at nucleotide 982. The 3' noncoding region of the LDV genomic RNA is 80 nucleotides in length (Fig. 2). The N-terminal amino acid sequence obtained directly from the LDV-C Vp1 protein was found to map to the 5' terminus of the second ORF (Fig. 2). This ORF encodes a 114 amino acid protein, which would have a molecular weight of approximately 12,200 Da. The estimated molecular weight of Vp1 is 15,000 Da (5, 10). The amino acid sequence of Vp1 indicates that, like other RNA virus capsid proteins, the LDV capsid protein is a basic protein: 16% of the residues in this sequence are basic amino acids (lysine or arginine) at pH 7, while only 3% of the residues are acidic (aspartic acid or glutamic acid). Consistent with its amino acid composition, Vp1 migrated to the upper pH range (pH ≥ 8; data not shown) when electrophoresed on a two-dimensional gel (17). Another partial ORF is located at the 5' end of the nucleotide sequence. This ORF is at least 144 nucleotides in length and is in a different frame from the adjacent ORF, but in the same frame as the capsid protein ORF. The 3' terminus of this ORF overlaps the 5' end of the adjacent ORF by 10 nucleotides (Fig. 2). Because the sequence of the 5' end of this ORF is incomplete, we do not yet know the length or the identity of the protein encoded by this ORF. There is one potential N-linked glycosylation site near the 5' end of this partial ORF. Although morphologically similar to the togaviruses and flaviviruses, the presence of multiple ORFs in the LDV genome and the 3' terminal location of the capsid protein gene suggest that LDV is neither a togavirus nor a flavivirus. The genomes of the viruses within these two families contain one or two long ORFs, which encode polyproteins, and the capsid protein genes of these viruses are located at the 5' ends of their respective ORFs. The genome structure of LDV resembles that of equine arteritis virus (EAV), which is the only member of the genus arteriviruses within the Togaviridae family (1). Although this virus is still classified as a togavirus, it differs significantly from the alpha and rubi togaviruses: (a) the EAV proteins are encoded by multiple ORFs; (b) the capsid protein gene maps to the 3' terminus of the coding region on the genome (4); and (c) the EAV proteins are translated from six subgenomic mRNAs (19). The presence of overlapping reading frames, each beginning with a start codon, suggests that LDV proteins are also translated from subgenomic mRNAs.
However, due to the difficulty of detecting intracellular viral components in LDV-infected tissue culture extracts, no LDV subgenomic mRNAs have yet been observed. EAV and LDV are also morphologically similar in both virion size and nucleocapsid structure. Although the LDV structural proteins are similar in size to those of EAV (20), no serologic cross-reactivity has been found between these two viruses (21). The LDV genome structure is also similar to that of the coronaviruses (18). However, LDV has an icosahedral nucleocapsid, whereas the coronaviruses have helical nucleocapsids. Nonetheless, the properties of the LDV genome suggest that LDV belongs to a recently proposed virus superfamily (4) consisting of the coronaviruses, the toroviruses, and the arteriviruses. Further characterization of LDV is necessary to facilitate the classification of this virus and to determine the degree of similarity it shares with EAV and the coronaviruses. ACKNOWLEDGMENTS This work was supported by Public Health Service Grant NS 19013 from NINCDS. We thank Michelle Gonda and Janice Disposto for technical assistance. We also thank Kaye Speicher, Kevin Beam, and Clement Purcell in the Protein Microchemistry Facility at the Wistar Institute for protein sequence analysis and technical assistance.
2018-04-03T01:01:44.202Z
1990-08-01T00:00:00.000
{ "year": 1990, "sha1": "b9808e7321847df1bb8d4b36ffe6227ffe44c32f", "oa_license": null, "oa_url": "https://doi.org/10.1016/0042-6822(90)90546-4", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "75e66da0fccbae4e094e2ff72e3fff10dc9c3be9", "s2fieldsofstudy": [], "extfieldsofstudy": [ "Biology", "Medicine" ] }
264050516
pes2o/s2orc
v3-fos-license
Formulation optimization and characterization of functional Kemesha This study aimed to enhance Kemesha by incorporating a blend of composite flours, including germinated haricot bean, ultrasonicated fine-milled pumpkin, CMC (Carboxymethyl cellulose), and common wheat flour. Additionally, a D-optimal design was employed to optimize the formulation and achieve the desired outcome. Protein, fibre, total carotenoid content, and firmness were responses for optimizing Kemesha formulation. The numerical optimization and model validation results indicated that it is feasible to use a flour composition of 63.00 g common wheat flour, 19.01 g germinated haricot bean flour, 14.51 g ultrasonicated fine-milled pumpkin flour, and 3.48 g carboxymethyl cellulose (CMC) per 100 g of flour to prepare Kemesha with desirability of 0.596. The proximate composition analysis results showed that the optimized Kemesha had higher levels of fibre, ash, and protein compared to the control Kemesha, whereas the carbohydrate content was significantly lower. The studies on color estimation revealed that the yellow color of the product was slightly increased during the optimization of Kemesha (15.09–31.09), while the brightness index was reduced from 89.38 to 74.44. Compared to the control kemesha, the optimized Kemesha had a total phenolic, flavonoid, and carotenoid content of 7.47, 3.67, and 149.20 times greater. The cooking loss (4.95%) and water absorption (220.68%) of optimized Kemesha were improved compared to control Kemesha. The composite significantly improved the sensory qualities of both raw and cooked Kemesha, including surface smoothness, resistance to break, appearance, texture, color, and overall acceptance. Introduction Kemesha is traditionally produced and consumed in different parts of the Arsi zone, Ethiopia.It is prepared from common wheat flour and water through traditional processing steps of mixing, sheeting, rolling, cutting, and sun-drying.All age groups of people consume it.Kemesha was utilized as dry food for transit, for household consumption, during festive events like festivals and wedding ceremonies, and as a source of revenue for a small minority in the Arsi zone.Presently, the processing and consumption of Kemesha are village-based, and it has been underutilized.Due to its gluten content, wheat flour is the preferred raw material for making Kemesha, as it is well-suited for dough development and prevents disintegration during the cooking process [1].Kemesha is currently underutilized due to laborious, inadequate hygienic practices during and after processing, long drying time, low nutritional value, and unattractive presentation.Products made from wheat are typically rich in carbohydrates but poor in fibre, protein, minerals, vitamins, and phenolic compounds, which frequently causes nutrient imbalances in consumers [2].Nowadays, consumers worldwide have shown increasing interest in reducing disease risks by consuming health-promoting dietary ingredients [3] along with fulfilling their basic nutrition requirements [4].This is why foods today are expected to do more than just satisfy hunger and deliver essential nutrients; they are also expected to prevent diseases linked to poor nutrition and improve customers' physical and emotional well-being [5,6].In this context, functional foods present a remarkable opportunity to enhance the quality of products.In addition to dietary fibre, organic micronutrients like carotenoids, polyphenolics, tocopherols, vitamins C, minerals, organic acids, and 
others are primarily responsible for the health benefits of plant-based diets [7].Specifically, phenolics and carotenoids have the potential to provide health advantages by scavenging reactive oxygen species and safeguarding against degenerative diseases such as cancer and cardiovascular diseases [8].Over the past few decades, there has been an escalating consumer preference for wheat-based products such as pasta and noodles, considering them added value by using both animal and plant products [9].Likewise, to enhance the nutritional value of Kemesha, it is necessary to incorporate ingredients that are high in protein, fiber, and bioactive compounds, such as haricot bean [10,11] and pumpkin flour [12]. Investigations on nutritional, functional, and phytochemical properties of four improved varieties of haricot beans (Phaseolus vulgaris) and pretreated pumpkin have underlined the importance of these crops on the human diet for their high protein, fibre, bioactive component and carbohydrates content, which makes this food a good source of nutrients [11][12][13].As a matter of fact, several clinical studies show that eating enough fruits and vegetables has positive benefits on the body, acting as a preventative measure for conditions including cataracts, constipation, asthma, cancer, and respiratory (asthma and bronchitis) diseases [14].To achieve this goal, it is essential to engage in the development and investigation of novel fruit-based Kemesha products that possess desirable nutritional, functional, and sensory attributes.The proportion of alternative flours (germinated haricot bean, ultrasonicated fine-milled pumpkin, and carboxymethyl cellulose (CMC)) that can substitute common wheat flour in the Kemesha recipe should strike a balance between achieving nutritional enhancement and maintaining satisfactory sensory characteristics.In order to reduce the price of Kemesha and make it more accessible to low-income people, it is also necessary to partially substitute wheat with less expensive food crops like pumpkin and haricot bean.Including germinated legumes, especially beans, into cereal-based products could be a good option for increasing the nutritional intake of people [13].They have a significant role in human nutrition, especially in the diets of low-income populations in developing nations, since they are affordable protein sources [15].Haricot beans contain about 23.11-27.96%protein which is about two-fold higher than wheat and is also reported to be a good source of bioactive components [11]. 
It is well recognized that reducing the amount of gluten in a product made from wheat by adding more haricot and pumpkin flour does not improve Kemesha's sensory or cooking qualities.As a result, carboxymethyl cellulose, a hydrocolloid, must be added to successfully substitute the gluten in Kemesha.The literature also noted that a substance suitable to produce a cohesive structure could overcome the absence of gluten [14].Generally, using structuring agents may yield acceptable Kemesha with good texture and minimum cooking loss [16].Despite the previous research attempts to promote the partial substitution of common bean flour in pasta and noodles, there remains a significant gap in our knowledge when it comes to the use of germinated haricot bean flour and ultrasonicated fine-milled pumpkin flour in Kemesha processing.The present investigation was undertaken to optimize functional Kemesha of high nutritive value comprising germinated haricot bean, ultrasonicated fine-milled pumpkin flour, carboxymethyl cellulose (CMC), and common wheat flour by using a D-optimal mixture design.Furthermore, the physical, chemical, and acceptability properties of the optimized Kemesha were evaluated to assess its overall quality.This comprehensive evaluation allowed the researchers to gain insights into the characteristics of the optimized Kemesha and its potential for further development. Material collection and preparation Common wheat flour, pumpkin, and carboxymethyl cellulose were procured from the local market in Addis Ababa, Ethiopia.Haricot bean seeds (SAB 632 variety) were brought from Awash Melkassa Agricultural Research Center.Germinated haricot bean flour was prepared as per the method used by Wodajo and Emire [11], but ultrasonicated fine-milled pumpkin flour was prepared after pumpkin slices (15 × 15 × 4) mm 3 were pretreated for 20 min in an ultrasonic bath (Model-EU-28, Akin Electronic, Turkey) followed by microwave (Model-CE107BT, Samsung, Thailand) blanching for 6 min at 300W and then dried at 60 • C and 1.2 m/s airflow by a fluidized bed drier for 121 min [17].The sliced dried pumpkin was milled coarsely by a hammer mill (Model BH24 1DY, Armfield, England), and then the flours were screened through 500 μm sieves to separate granulates.The resulting coarser flours were micronized using ball-milling (Planetary type ball mill, PM 100; Restch, Germany) at 300 r min − 1 for 15 min three times with an interval of 30 min to avoid flour overheating.Using carboxymethyl cellulose as a process control agent, the stainless steel container was filled to around two thirds of its capacity with the pumpkin flour and five times the weight in stainless steel balls (Φ = 10 mm).The milled flour was also split into distinct particles size fractions (250-150, 150-100, 100-75, and <75 μm particle size) using a set of screen sieved with the vibratory sieve shaker for 5 min.The milled flour obtained was stored at 4 • C in brown zipped bags until further analysis.All chemicals were analytical grade. Experimental design This study was conducted to find an appropriate ratio of four components: common wheat flour, germinated haricot bean flour, ultrasonicated fine-milled pumpkin flour, and carboxy methyl cellulose to prepare functional kemesha with optimum nutritional D.W. Bekele and S. 
Admassu Emire content and acceptability attributes.A total of twenty treatment combinations were generated using a D-optimal mixture design that was used to find the appropriate ratio.The percentage of the lower and upper range of the ingredients includes 61%-80 % for common wheat flour (CWF), 10%-30 % for germinated haricot bean flour (GHBF), 5%-20 % for ultrasonicated fine-milled pumpkin flour (UFPF), and 2-4% for carboxymethyl cellulose (CMC).Table 1 displays the composition of each blend calculated from the experimental design.The amount of components was selected based on similar available literature as well as by preliminary tests.Effects of wheat flour, germinated haricot bean flour, ultrasonicated fine-milled pumpkin flour, and carboxy methyl cellulose on the protein (Y 1 ), fibre (Y 2 ), total carotenoid content (Y 3 ), and firmness (Y 4 ) of the kemesha were investigated, and the optimum mixture was selected.The statistical parameters used in evaluating and selecting the best-fitted model were coefficient of determination (R 2 ), adjusted coefficient of determination (adjusted-R 2 ), coefficient of variation (C.V), standard deviation, predicted coefficient of determination (predicted R 2 ), predicted residual sum of squares (PRESS), regression data (P value and F value) and lack-of-fit.The analysis of variance (ANOVA) was used to determine the significant difference between linear, quadratic, and interaction terms of independent factors.A contour plot was created to visualize the concept more clearly by putting a single factor constant at the central point while changing the other three variables within the experimental range.Also, a three-dimensional response surface graph for the model's desirability was generated by Design-Expert Software Version 13.0 for a better explanation.The optimal Kemesha preparation was achieved by combining set goals of all quality parameters into an overall desirability function.To confirm the model's validity, the experiment was conducted at optimum values of processing variables, and obtained responses were then compared with predicted values of the responses. After selecting the optimal Kemesha (OK) based on protein (Y 1 ), fibre (Y 2 ), total carotenoid content (Y 3 ), and firmness (Y 4 ), its physicochemical properties, nutritional value, phytochemical activity, cooking, textural and sensory attributes were compared with the control sample. where Y = the predicted variable, X 1,2,3,4 = the proportion of the four flours in the mixture, β′s = the coefficient of the linear and quadratic terms of the model. For verification of the model, the difference between the predicted and actual values or the relative standard error (RSE) can be calculated by using the following equation (Equation ( 2)): Preparation of functional Kemesha The composite flours were added together in proportion given by design (Table 1), with small amounts of each flour added gradually while mixing slowly to prevent aggregation.The mixed flour samples were packed and sealed in brown bags.The process outlined in Fig. 
1 was used to produce Kemesha, which entailed blending 100 g of different flours with 35 mL of water.Carboxymethyl cellulose (CMC) was dispersed in cold water and added to the recipe in the amount given by the design in Table 1 as a flour blend replacement.The mixtures were thoroughly worked to form a consistent dough.The formed dough was allowed to rest for 15 min in a closed plastic bag, then passing small portions (50 g) of kneaded flattened sheets of dough were through the pasta machine (Imperia Tipo Lusso SP150, Torino, Italy) at decreasing thicknesses (numbers 2, 3, and 4, respectively).The dough was folded into thirds and sent through again.It was then folded in half, run through, and cut into small manageable lengths.The thin, flattened sheets were passed through the fettuccine cutter to form Kemesha strands, which were 1.5 cm in length and 1.6 mm in width.The slit and cut strands were put in cleaned aluminum trays and then oven dried at 50 • C for 2:10 h to safe moisture content (<12 %).The dried Kemesha was stored in brown bags at room temperature until further use. Physical characteristics Ten (10) strips of Kemesha were taken for thickness and length measurements with a digital vernier caliper (TA, M5 0-300 mm, China) of 0.01 mm precision, and the average was reported.A water activity meter (HD-3A, NanBei, China) was used to gauge Kemesha's water activity at room temperature.Before estimating the water activity, Kemesha samples were comminuted and homogenized.After letting the produced slurry stand for 10 min while being constantly stirred, the pH of the comminuted Kemesha was Fig. 1.Flowchart of kemesha processing methods. D.W. Bekele and S. Admassu Emire measured by blending 10 g in a beaker containing 25 mL of distilled water with a pH meter (BANTE Multiparameter, China) in accordance with AOAC [18]. Color measurements on Kemesha samples were carried out according to Cappa et al. 
[19] using a Minolta colorimeter (3NH Technology Co., LTD, China).The dried Kemesha sample (120 g) was milled (BH24 1DY, Armfield, England) and sieved through a 500 μm sieve.The flour was then put into plastic Petri dishes, where the top was manually leveled to the brim of the dish, and a plastic film was then snugly placed on top.The black and white tile was used for instrument calibration before color measurement.Color coordinates L*, a*, and b* were measured at seven points on the surface.Results were expressed in the CIELAB space as L* (lightness; 0 = black, 100 = white), a* (+a = redness, -a = greenness) and b* (+b = yellowness, -b = blueness) values.Results were also expressed as color differential (ΔE) between the control (Kemesha with common wheat flour only) and the optimized kemesha, calculated by using the following equation (Equation ( 3))according to Jayasena and Nasar-Abbas [20]: The Chroma and Hue angle was determined using the following equations (Equations ( 4) and ( 5)) to demonstrate the relationship between a* and b* [21]: Proximate composition of the control and optimized Kemesha According to established procedures of AOAC [18], the approximate composition of the flour samples was ascertained.The samples were oven-dried (Model 10-D1391/AD, SCA) at 105 • C for 18 h to achieve constant weight to estimate the moisture content (MC).The percent crude protein (% CP) was determined using an automatic Kjeldahl analyzer (K1160, Hanon, China), and the acquired percent nitrogen (N) was multiplied by 6.25 to determine the percentage crude protein (% CP).The Soxhlet extractor method was used to calculate the fat content.After burning the samples at 550 • C for 4 h in a muffle furnace (MKF-07, Natek, Turkey), the mass difference was calculated to determine the percentage of ash (%).Dilute acid and alkali hydrolysis calculated crude fiber percentage (% CF) (BXB-06Guangzhou, China).Carbohydrate was calculated by difference. Phytochemical properties of the control and optimized Kemesha First, the methanol extracts were prepared from milled Kemesha flour, according to Erbiai et al. [22].A temperature shaker incubator (ZHWY-103B) was used to extract 10 g of flour with 100 mL of methanol over the course of 24 h at 25 • C and 150 rpm.The cleared mixture was then passed through the Whatman No. 1 paper.The deposit was extracted with two additional 100 mL portions of methanol described above.The methanolic extracts were vaporized using a Rota evaporator (R-300, Buchi, Switzerland) at 40 • C until dry, then redissolved in methanol at a 50 mg/mL concentration and kept at 4 • C for later use.Then the total phenolic content was determined in triplicate using the Folin-Ciocalteu method at a wavelength of 735 nm with gallic acid used as a standard.At the same time, the total flavonoid concentration was determined using the colorimetric method with quercetin used as standard at a wavelength of 510 nm, as described by Minuye et al. [23].By employing ascorbic acid as a standard without extract or control, antioxidant activities were assessed using the DPPH techniques [24].For the quantitative analysis of antioxidant activities, a calibration curve was obtained by injection of known concentrations of ascorbic acid standards (y = 474.36x+16.73,R 2 = 0.91).The amount of total carotenoid was calculated using the de Carvalho et al. [25] approach and expressed as g per g of dry matter.Expect DPPH; all analysis was carried out in triplicate. 
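The color equations referred to above as Equations (3)-(5) did not survive extraction. The expressions below are the standard CIELAB definitions that the surrounding description appears to correspond to; they are stated here as an assumption rather than as a transcription of the original equations:

\Delta E = \sqrt{(L^{*}_{1} - L^{*}_{2})^{2} + (a^{*}_{1} - a^{*}_{2})^{2} + (b^{*}_{1} - b^{*}_{2})^{2}}

C^{*} = \sqrt{(a^{*})^{2} + (b^{*})^{2}}

h^{\circ} = \tan^{-1}\!\left(\frac{b^{*}}{a^{*}}\right)

where the subscripts 1 and 2 denote the control and the optimized sample. As a consistency check, inserting the a* and b* values reported later in the text into the last two expressions reproduces the reported chroma values (about 31.3 and 15.1) and hue angles (about 83.4° and 88.7°), and the first expression gives a ΔE of roughly 22, in line with the reported 22.31.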
Evaluation of the Kemesha cooking qualities Water absorption, cooking loss, and volume increase of Kemesha were measured according to the AACC methods [26] by using the following equations (Equation (6),7 and 8), respectively.After cooking 10 g of fresh Kemesha for the appropriate amount of time in 100 mL of distilled water, cooling for 1 min with cold water, and removing the water for 30 s, the water absorption rate was determined.After determining the water absorption rate, the cooking loss was calculated following a 24-h drying period at 105 • C with the leftover water.The volume rise rate was determined by adding 10 g of fresh Kemesha and 10 g of cooked Kemesha, respectively, to a 500 mL measuring cylinder filled with 200 mL of distilled water.All the analyses were conducted in triplicate.The respective formulae used in the calculations are as follows: Kemesha texture profile analysis According to Larrosa et al. [27], a 36 mm diameter flat-ended cylindrical probe (P/36) was used in two compression cycle tests to measure the texture of cooked Kemesha.Kemesha (10 g) was cooked in 100 g of water using an induction oven (RBE-22H, Rinnai, Incheon, Korea) to optimum cooking time (6 min for control and 5.3 min for optimized Kemesha).After cooling in a sieve for 30 s, the cooked Kemesha was left in there for two to 3 min to drain off the remaining water.Kemesha of 1.6 mm thickness and 1.5 cm length were prepared for texture profile analysis using a texture analyzer (TA-XTplus, Stable Micro Systems Ltd., Godalming, UK).The test conditions were as follows: 1 mm/s pre-test speed, 1 mm/s test speed, 5 mm/s post-test speed, 80 % strain, and 20 g trigger force.From the force-time curve, the parameters calculated were hardness, adhesiveness, springiness, cohesiveness, and chewiness [28].Measurements were replicated 10 times for each treatment. Sensory evaluation of the control and optimized Kemesha Panelists familiar with the Kemesha among the students and employees of the University of Wolkite, Ethiopia, voluntarily participated in evaluating both dried and cooked Kemesha samples using nine-point hedonic scales with 1-dislike extremely to 9 -like extremely (appendix).For cooked Kemesha, a 100 g sample was boiled (98-100 • C) in 500 mL of unsalted water while being watched until the Kemesha's core vanished after being squeezed between two transparent glass slides for 6 min for control and 5.3 min for optimized Kemesha.The extra cooking and cooling water were then drained from the sample.The samples were then stored for not more than 30 min in tightly covered plastic food containers before testing.Ten panelists were given samples of Kemesha to judge the texture, color, odor, appearance, and general acceptability of cooked kemesha as well as the smoothness, resistance to breaking, odor, appearance, and overall acceptability of raw kemesha. Results and discussions Then, prior to optimization preliminary investigations were carried out to identify the suitable variables for the response and determine the ranges of these variables in the Kemesha formulation.The preferred Kemesha from the acceptability test was used as a control.Carboxymethyl cellulose (CMC) is selected as the structuring agent.According to Liu et al. [29] and Hu et al., [30] adding CMC greatly improved the texture and cooking quality of the noodles by increasing their firmness, reducing their stickiness, improving their chewiness, and increasing their elasticity. 
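The cooking-quality formulae announced in the methods above (Equations (6)-(8)) are also missing from the extracted text. The short sketch below implements one standard reading of the AACC-style definitions described in the prose; both the exact formula forms and the example weights are assumptions for illustration, not the study's own equations or data.

```python
# One plausible reading of the cooking-quality indices described above (Equations 6-8).
# The formulas and the example numbers are assumptions for illustration, not the study's own.
def water_absorption(raw_g, cooked_g):
    """Water absorption (%) as weight gained on cooking, relative to the raw weight."""
    return 100.0 * (cooked_g - raw_g) / raw_g

def cooking_loss(raw_g, dried_residue_g):
    """Cooking loss (%) as the dried solids recovered from the cooking water, relative to raw weight."""
    return 100.0 * dried_residue_g / raw_g

def volume_increase(raw_ml, cooked_ml):
    """Volume increase (%) from the water displaced by 10 g raw vs. 10 g cooked Kemesha."""
    return 100.0 * (cooked_ml - raw_ml) / raw_ml

if __name__ == "__main__":
    print(round(water_absorption(10.0, 32.1), 1))   # hypothetical weights in grams
    print(round(cooking_loss(10.0, 0.5), 1))
    print(round(volume_increase(8.0, 26.0), 1))      # hypothetical displaced volumes in mL
```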
Nutritional and phytochemical composition of raw materials Table 2 shows the chemical composition of the raw materials of kemesha.According to the investigation, while germinated haricot bean (SAB 632 variety) flour had the highest protein content (26.740.82),ultrasonicated fine-milled pumpkin flour had the highest levels of bioactive components and fibre, which improve the functional qualities of a food product.Carboxymethyl cellulose (CMC) was very effective on texture due to its network-forming capacity [31]. Fitting for the best model Experimental results for the response variables of kemesha preparation are shown in Table 1.The best model was selected based on a low standard deviation, a low predicted sum of squares, and a high R-squared [32].While the total amount of carotenes could be explained by a linear model, protein, fibre, and sample firmness could all be explained by quadratic models.The ANOVA showed that lack of fit was insignificant for all the D-optimal mixture designs at a 95% confidence level.The lack of fit test measures how well a model captures experimental domain data during times when such data were not included in the regression [33].The CV indicates the relative dispersion of the experimental points from the model's prediction.According to Gull, Prasad and Kumar [34], the model was considered adequate when the multiple coefficients of correlation (R 2 ) were more than 93 %, and the lack of fit test was non-significant.The (R 2 ) values for the responses, i.e., protein, fibre, total carotenoid, and firmness, were 0.97, 0.99, 0.99, and 0.99.A high proportion of variability (R 2 > 0.97) in the response models was obtained (Table 3).Adding a variable to the model always increase R 2 , regardless of whether the additional variable is statistically significant or not, so a large value of R 2 does not always imply that the regression model is a good one.Thus, it is preferred to use an adj-R 2 to evaluate the model adequacy, and it should be over 90% [33].Table 3 shows that R 2 and adj-R 2 values for the models did not differ dramatically, indicating that non-significant terms were not included in the model.The models' sufficiency precision values were greater than 4, and it may be inferred that they can be used to track the design space [35].Thus all four responses were considered adequate to describe the effect of variables on the quality of Kemesha.Fig. 2(a-l) indicates the difference in fits (DFFITS), Leverages, and Cook's distance for firmness, fiber, protein, and total carotenoid contents.As can be seen, all of the leverage values are lower than 0.50, so there are no outliers or unanticipated errors in the model.Also, the cook's distance and DFFITS plots confirmed the model's reliability because the values are within the specified range [32].The estimated regression coefficients of the proposed models for each response are given in Table 4.The coefficient estimate shows the severity of one factor when all other variables are held constant by estimating the expected change in response per unit change in factor value [15]. Effect of variables on the protein content of Kemesha According to Fig. 
3 (a), the greatest effect on protein content was related to geminated haricot bean flour (GHBF).A quadratic model effectively explained how the protein content and blend proportions relate to one another (R 2 = 0.97 and adjust-R 2 = 0.96).The linear blends significantly affected the protein content.In contrast to the wheat-pumpkin flour blends, the binary (wheat-haricot bean flour) mix was synergistic and positively affected the volume for a maximal response of protein content (Fig. 3(a)).As demonstrated in Table 1, the protein content ranged from 9.54 % to 13.64 %.The maximum protein content was in the formulation consisted of 66 % common wheat flour (CWF), 30 % geminated haricot bean flour (GHBF), 5 % ultrasonicated fine-milled pumpkin flour (UFPF), and 2 % Carboxymethyl cellulose (CMC) (run 5, protein content: 13.64).According to the findings, the mixture of these ingredients improved the protein content of Kemesha by 53.25% compared to the control kemesha.Protein content increased significantly when the proportion of GHBF flour increased; however, it fell slightly when the fraction of UFPF flour increased.Previously similar reports were done on pasta and noodle protein enhancement using common bean flour [36,37].Moreover, Shogren, Hareland, and Wu [38] revealed that the protein content of pasta increased by 54 % when soy was added to it (at a level of 50 %) compared to the control sample.The greater protein level of the sprouted haricot bean flour used in creating composite flour may be responsible for the rise in protein content in Kemesha. Effect of variables on the fibre content of Kemesha The fibre content of Kemesha ranged from 2.65 % to 5.88 %, as shown in Table 1.In comparison to the control, Kemesha from a blend of 61 % CWF, 16 % GHBF, 20 % UFPF, and 3 % Carboxymethyl cellulose (CMC) (run 12) showed a considerably (p < 0.05) higher fibre content.These combinations increased the amount of fiber by 7.35 fold compared to the control sample.According to the fibre content analysis, ultrasonicated fine-milled pumpkin flour (UFPF) presented an influential effect on the Kemesha fibre content, with a drop in the percentage of wheat flour, the fiber content also rose.The relationship between the blend proportions and the fibre content was adequately described by a quadratic model Table 3 with R 2 = 0.99 and aduj-R 2 = 0.99.The fibre content is significantly (P > 0.05) affected by the linear mixes (Table 4) (Fig. 3(b)).With the exception of the binary (wheat-haricot bean flour) blend, which had a minimal effect on fiber content, the other blends had a favorable impact on fiber content Fig. 3(b).The fiber content of Kemesha showed an increasing trend with a parallel increase in the proportion of ultrasonicated fine-milled pumpkin flour due to its high fiber content (14.22 ± 0.30) as compared to geminated haricot bean flour (GHBF) (6.2 ± 0.40) and common wheat flour (1.83 ± 0.10).Fig. 3 (b) shows that the amount of fibre significantly increased when the ratio of pumpkin and haricot bean flour was increased but reduced when the ratio of wheat flour was increased.Also, a similar trend of rising fibre content together with rising legume content for composite flour has been documented [39].According to MA et al. 
[40], adding pumpkin flour to wheat flour increases the fibre content of biscuits.According to Gull, Prasad and Kumar [34], incorporating high-fibre material enhances pasta's nutritional and functional quality.Since kemesha is regarded as a traditional cuisine made primarily from common wheat flour, this is crucial to improving the fibre content from underutilized pumpkin and haricot bean for producers. Effect of variables on the carotenoid content of Kemesha The carotenoid content of developed kemesha products ranged from 5.12 to 29.21 μg/g.The statistical analysis suggested a linear 3).The highest carotenoid content was observed in the combination of 61 % common wheat flour (CWF), 16 % geminated haricot bean flour (GHBF), 20 % UFPF, and 3 % of CMC (Table 1), and the lowest was found in control (0.14 μg/g) (Table 6).The total carotenoid content was positively impacted by UFPF flour, followed by GHBF flour, but negatively impacted by common wheat flour, as shown in Fig. 2 (c), where the linear blend coefficients significantly (p < 0.05) affected the score.As previously mentioned, this is owing to the pumpkin's high carotenoid concentration [12], and its integration into kemesha at various degrees considerably boosted the carotene content.With the economically effective utilization of underutilized greens, enriching low-carotenoid content foods with high-carotenoid foods like pumpkin may help fight blindness issues [41].Also, adding more pumpkin flour improved the food's functional qualities regarding its phytochemical content [42].In a related study, MA et al. [40] found that adding pumpkin flour to biscuits raises their carotenoid concentrations.But as depicted in Fig. 3 (d), adding carboxymethyl cellulose to the kemesha did not significantly affect total carotenoid content. Effect of variables on the firmness of Kemesha Firmness is among the most crucial qualities of Kemesha.Table 3 shows that the lack-of-fit is insignificant, but the model is significant.This means that the possibility of an error occurring is low.Table 1 displays that the firmness of the Kemesha varied between 882.85 g (run 2) and 1260.46 g (run 9).The hardness of run 9 was 1.39 times higher than the control sample.As shown in Fig. 3 (e and f), firmness decreases as the proportion of both ultrasonicated fine-milled pumpkin flour (UFPF) and geminated haricot bean flour (GHBF) increases, but with an increase of carboxymethyl cellulose (CMC) percentage in the blends, the Kemesha gets harder and harder.Regression coefficient Table 4 showed that the firmness of Kemesha samples was significantly affected (p ≤ 0.05) by the CMC at a quadratic level.Generally, firmness is reduced by replacing common wheat flour with pumpkin and haricot bean flour by keeping CMC constant (Fig. 3 (e)).The general trend observed is a progressive reduction in Kemesha firmness with increasing fiber concentration.The disruption of the protein starch matrix within the Kemesha microstructure by fibre, as in pasta, may be responsible for the drop in hardness [43].It could be associated with a weakening gluten network [42] and as well as poor availability of water to develop the gluten network [44].According to Gatta et al. 
[45], foreign proteins that prevent the development of gluten-starch complexes may lessen the stiffness.In addition, the firmness response of the Kemesha reached the maximum value when the proportion of CMC increased.The formation of complexes may cause this due to the interaction of hydrophilic groups on starch, CMC, fat, and protein, thereby improving the structure of Kemesha [46,47].Similar experiments showed that adding xanthan gum and locust bean gum at 2.5-10 % significantly increased the stiffness of pasta [20].As stated by Widelska et al. [48], hydrocolloids' binding effect of water-soluble starch improved the texture of gluten-free pasta.CMC can increase the viscosity of the Kemesha dough, which can affect the texture of the final product.Higher viscosity can lead to a firmer and more elastic texture, which is desirable in Kemesha products.According to Kamali Rousta et al. [32], firmness depends on the level, kind, and interaction of the flours incorporated with the product. Optimized level of ingredients To produce functional Kemesha, Design-Expert Software (version 13.0) was used to determine the ideal level of variable as well as the extrapolative value of responses in accordance with the predetermined goals with maximum desirability function.A good quality functional Kemesha should have a high level of fibre, total carotenoid content (TCC), protein content, and firmness, so the criteria target for responses is maximum.Optimization was done by maximizing the amount of protein, fiber, total carotenoid content, and firmness.The numerical response analysis found that optimum values were 63.00 g of CWF, 19.01 g of geminated haricot bean flour (GHBF), 14.51 g of ultrasonicated fine-milled pumpkin flour (UFPF), and 3.48 g of carboxymethyl cellulose (CMC) with 0.596 desirabilities (Fig. 4).Desirability demonstrates the effectiveness of the optimization objective function, displaying the program's capacity to satisfy user wishes per the standards established for the finished output to reach a satisfactory compromise [46].The numerical optimization finds a point that maximizes the desirability function.Protein, fibre, total carotenoid, and firmness had Table 4 The Estimated regression coefficients of the proposed models for each response.Note: Common wheat flour (A), Germinated haricot bean flour (B), ultrasonicated fine-milled pumpkin flour (C), Carboxy methyl cellulose (D), total carotenoid content (TCC).a, Significant at 0.0001 levels, b, Significant at 0.01 levels, c, Significant at 0.05 levels, d, Not Significant at 0.05 levels. D.W. Bekele and S. Admassu Emire predicted values of 11.57g/100 g, 4.70 g/100 g, 20.79 g/100 g, and 1110.043g, respectively, under the optimal circumstances.The amount of protein and fibre in the optimal sample was 1.30 and 3.76 times higher than in the control sample.The validation findings showed good agreement between experimental and predicted response values and no statistically significant difference between them, proving the model's applicability (Table 3).Also, if the relative standard error (RSE) (Equation ( 2)) or the difference between the predicted and actual values derived from the optimal conditions is less than 2 % (Table 5), it demonstrates the validity of the suggested model based on the D-optimal design [49,50].Therefore, the finalized equation (Equation ( 1)) for each variable generated by Design-Expert 13.0 software is acceptable for use in the Kemesha formulation. 
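As an illustration of how an overall desirability near the quoted 0.596 can arise from the four responses, the sketch below implements a simple Derringer-style "larger is better" desirability and combines the individual scores by a geometric mean. The response ranges and the predicted optimum values are taken from the text, but the transform, weights, and any importance settings used by Design-Expert are assumptions, so the agreement should be read as illustrative rather than as a reproduction of the software's calculation.

```python
# Derringer-style desirability: "larger is better" transforms combined by a geometric mean.
# Ranges and optimum values come from the text above; the transform and weights are assumptions.
def d_larger_is_better(y, low, high):
    """0 at or below the worst observed value, 1 at or above the best, linear in between."""
    if y <= low:
        return 0.0
    if y >= high:
        return 1.0
    return (y - low) / (high - low)

def overall_desirability(ds):
    """Geometric mean of the individual desirabilities."""
    product = 1.0
    for d in ds:
        product *= d
    return product ** (1.0 / len(ds))

if __name__ == "__main__":
    # (predicted optimum, observed minimum, observed maximum) for protein, fibre, carotenoid, firmness
    responses = [(11.57, 9.54, 13.64), (4.70, 2.65, 5.88),
                 (20.79, 5.12, 29.21), (1110.04, 882.85, 1260.46)]
    ds = [d_larger_is_better(y, lo, hi) for y, lo, hi in responses]
    print(round(overall_desirability(ds), 3))  # prints ~0.59, near the reported 0.596
```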
Chemical properties of control and optimized Kemesha Table 6 indicates that the moisture content of the optimized kemesha sample (10.03 ± 0.24) was found to be slightly but not significantly higher (P < 0.05) than that of the control kemesha sample (9.22 ± 0.85).The higher moisture content in optimized kemesha may be due to the higher water-holding capacity of fibres in pumpkin and haricot beans during dough formation.Understanding how a product's water content influences its shelf life is critical since abundant water could encourage the development of harmful microbes [51].In similar studies, the increase in moisture content with the addition of gums in noodles was also reported by Shere, Devkatte and Pawar [52].Table 6 depicted that while protein content increased (8.90 ± 0.62 to 11.64 ± 0.12) but fat content slightly decreased (2.13 ± 0.14 to 1.75 ± 0.18) with the incorporation of germinated haricot bean and ultrasonicated fine-milled pumpkin flour in Kemesha. This might be due to a lower fat percentage in haricot bean and pumpkin flour and a higher percentage of protein in germinated haricot bean flour (GHBF) (Table 2).This low-fat level is also appropriate for customers who demand a low-fat diet.The crude fibre content of haricot bean and pumpkin flour-supplemented Kemesha was much higher than those prepared from common wheat flour only.As shown in Table 6, there was a prominent increment in crude fiber by 3.85 times as compared to the control Kemesha.The addition of haricot bean flour and ultrasonicated fine-milled pumpkin flour, which have higher crude fibre contents than regular wheat flour, is what caused the increase in crude fibre levels.There was a significant difference in carbohydrate content of optimized Kemesha and control.In general, carbohydrate content decreased progressively with adding haricot bean and pumpkin flour to kemesha.The considerable reduction may be due to supplementing other nutrients by haricot bean and pumpkin flour.Ash content was also found to be increased significantly from 2.41 ± 0.27 to 2.95 ± 0.09 in optimized kemesha.Another research showed that adding dried pumpkin powder to noodles raised their ash level [53]. Table 6 Proximate and phytochemical composition of control and optimized kemesha. Phytochemical composition of control and optimized Kemesha The rise in degenerative diseases, bad lifestyles, inactivity, and excessive consumption of foods high in fat and sugar are among the current social debates.The development of healthy food products has expanded in response to escalating consumer demand.The communities recognize that the value of food intake should be nutritional and provide more advantages to overall health.One of the most important aspects of functional food research is examining the properties of naturally occurring active components (such antioxidants like polyphenols) in extracts derived from particular food sources.High levels of active ingredients improve food's ability to promote health and improve consumers' quality of life [54][55][56].In addition to their basic nutrients, pumpkin and haricot bean flour also include phytochemicals that may have positive health effects [11]. 
Tables 1 and 6 present the phytochemical compositions of flour and Kemesha samples, respectively. Common wheat flour's total phenolic and flavonoid contents were significantly lower (p < 0.05) than those of germinated haricot bean and ultrasonicated fine-milled pumpkin flour. The modified Kemesha had considerably greater levels of total phenolics (1.12 mg GAE/g), total flavonoids (0.77 ± 0.21 mg CE/g), and total carotenoids (20.89 ± 1.49 μg/g) than control Kemesha. Fig. 5 illustrates how optimized Kemesha has a higher scavenging ability than unimproved Kemesha. The increment in total phenolic and flavonoid content of optimized Kemesha samples may be due to the higher phenolic content of ultrasonicated fine-milled pumpkin flour (Table 2). In general, the values of total phenolic content found in the present work were lower than those reported by Gallegos-Infante et al. [57] for pasta made with semolina and common bean flour, respectively. According to a related study, bean flour boosts the amount of phenolic acids and the antioxidant power of pasta dough. Faba seeds are rich in pro-health phytochemicals such as phenolic compounds, which increase the pro-health qualities of functional foods, according to Karkouch et al. [58]. According to research by Fernando-Panchon et al. [59] and Luo et al. [60], field beans' beneficial effects on health are directly related to their high antioxidant content. Given this, common wheat-germinated haricot bean-ultrasonicated fine-milled pumpkin Kemesha with the addition of carboxymethyl cellulose (CMC) can be an important source of natural bioactive compounds. Phenolic compound-rich foods have been shown to possess antioxidant properties [54]. The addition of haricot bean and pumpkin flour, which are well known to be effective sources of antioxidant components, may have contributed to the enhanced antioxidant activity of the optimized Kemesha (Fig. 5). Due to the contribution of antioxidant activity from both flours, as shown in Fig. 5, the antioxidant activity of the optimized Kemesha samples increased significantly. According to the findings, mixing haricot bean and pumpkin flours into Kemesha would be a practical strategy to market this product rich in phenols. In the similar study by Alberto et al. [54], spaghetti manufactured with common bean flour gave a higher DPPH value than the control spaghetti.
Physical properties of control and optimized Kemesha
Color is a crucial component when evaluating the aesthetic appeal and market worth of food goods. Color values were measured for both raw optimized and control Kemesha samples. Control Kemesha displayed the highest lightness L* value (89.39 ± 2.24), while optimized Kemesha revealed the lowest (74.44 ± 2.50). This decrease in lightness may be due to the color contribution of the other components, haricot bean and pumpkin flour. That means both flour samples were darker and greener because of the natural pigment color of the flour. Also, as stated by Han et al.
[61]and Gull, Prasad, and Kumar [16], the decline in whiteness may be attributed to an increase in fibre content.Slightly higher values of L* were obtained for a product prepared from common wheat flour comparable to a product made from durum wheat semolina [62].Noodles made from semolina flour had a higher yellow hue but were darker compared to the Control Kemesha, as evidenced by the respective color parameter values of 68.9 ± 1.5, 1.6 ± 0.4, and 20.8 ± 1.1 for L*, a*, and b* [63].Optimized Kemesha showed the highest a* value (3.62 ± 0.35) compared to control (common wheat flour) Kemesha (0.35 ± 0.20).This could be due to the red color contribution of the haricot bean flour [11] and pumpkin flour [53].As depicted in Table 7, optimized Kemesha showed a higher b* value or yellowness (31.09 ± 1.84) as compared to control Kemesha (15.09 ± 2.24).This may be due to the carotenoids present in pumpkin flour [12,40].Incorporating natural pigment not only promotes the sensory features of Fig. 5. Free radical scavenging of methanolic extract of Kemesha samples and controls (ascorbic acid). D.W. Bekele and S. Admassu Emire food but also functionally enhances the nutritional quality of food [64].The chroma index and hue angle for the optimized Kemesha (31.30 ± 1.84 and 83.35 ± 0.70) were significantly different (p < 0.05) than Kemesha prepared only from common wheat flour only (15.09 ± 2.24 and 88.65 ± 0.76).Results obtained for color change indicated that colors are not close to one another, with a relative difference of 22.31 ± 2.63.This difference was associated with using common bean and pumpkin flour, which produced a darker color.The outcomes were better than those found in Gallegos-Infante et al. [57] study on pasta made with common bean flour.Also, it was discovered by Setady et al. [65] that the addition of various additives to pasta noodles improved the color shift. The Water activity and pH of optimized and control Kemesha were not significantly (p > 0.05) different from each other.Water activity, however, changed insignificantly from 0.46 ± 0.01 and 0.49 ± 0.02 with optimized Kemesha (higher moisture content); these data indicate that CMC binds water within the system.The pH also increased insignificantly from 5.88 ± 0.06 to 5.91 ± 0.07.This might be attributed to when dry beans are germinated, they tend to have a higher pH compared to conventional wheat flour.This is because the enzymes activated during germination break down complex carbohydrates into simpler sugars, which can create a more alkaline environment.As a result, this could potentially lead to an increase in the pH of the Kemesha if germinated dry beans are used as an ingredient [66]. 
Texture and cooking properties of control and optimized Kemesha The test consists of compressing bite-size pieces of food two times in a motion that simulates the jaw's action and extracting several textural parameters from the resulting force-time curve.Consumer acceptance of cooked Kemesha is greatly influenced by its firmness and stickiness.The textural properties of control and optimized Kemesha are presented in Table 9.9.The control Kemesha displayed the maximum cohesiveness (shows the strength of the internal link) and adhesiveness (24.94 ± 5.22 g*s), whereas the optimized Kemesha exhibited the highest firmness (1110.05± 59.93), springiness (0.54 ± 0.07), and chewiness (181.80 ± 42.88), in that order.Because it takes more effort to chew before swallowing, Kemesha, with a higher degree of firmness, also tends to have higher chewiness values.The optimized Kemesha adhesiveness was found to be insignificantly lower than control Kemeshas.According to Oduro-Obeng, Fu and Beta [67], adhesiveness is related to the number of starch granules that exudate from the pasta matrix into the cooking water and coat the product's surface.Similar research on pasta by Widelska et al. [48] with the addition of xanthan gum led to the formation of a continuous protein matrix and a stiff protein network that avoids excessive material leaking during cooking and lowers pasta adhesiveness.Also, Padalino et al. [68] reported a similar finding on gluten-free spaghetti; as hydrocolloids were added, adhesiveness was lowered.Optimized Kemesha samples showed a significant increase in springiness compared to the control.Springiness indicates the ability of the Kemesha to return to its original shape after deformation.This could also be improved by using carboxymethyl cellulose (CMC) because the interactions among their polymer chains (hydrophobic interactions, hydrophilic interactions, as well as H-bonding) could provide elasticity or flexibility in the Kemesha [69]. In Table 8, cooking loss, water absorption, and volume gain are some of the qualities of the kemesha cooking process that are shown.It is preferable for Kemesha to have little leached solid in cooked water, indicating Kemesha with a compact texture.During cooking, the solid leached is widely used to indicate the overall cooking performance; the low amounts of residue indicate high-quality cooked Kemesha.Cooking loss is undesirable, and according to Ugarčić-Hardi et al. 
[70], it should not exceed 10 % of the dry weight.A significant decrease (p < 0.05) in the cooking loss was reported on the optimized Kemesha, containing 3.48 % carboxy methyl cellulose, as compared to the control Kemesha.While the cooking loss of optimized Kemesha was 4.95 % that of the control Kemesha was 7.25 %.The degree of Kemesha hydration can be measured as the water absorption capacity index.The optimized Kemesha's capacity to absorb water was higher than the control Kemesha's, at 220.68 % versus 180.62 %, respectively.Comparing the functional Kemesha to the control, the volume rise increased significantly (p < 0.05), from 216.50 to 250.55 %.According to Cristina, Paes and Pereira [71] an ideal volume increase is found between 200 and 300 %, so the pasta is considered of good quality.With regard to this classification, all the samples presented good quality (Table 8).Kemesha's ideal cooking time was reduced from 6 min for the control to 5.3 min.Most probably, due to the dilution of gluten, the starch-protein network will be weakened and it facilitates water diffusion through the food matrix, reducing the time the water needs to reach the food center during the cooking process [72]. According to Gull, Prasad and Kumar [16], during cooking, soluble starch and other soluble components, including nonstarch polysaccharides, leach out into the water, and as a result, the cooked water becomes thick.According to Larrosa et al. [27], a high loss difference in the characteristic aromatic flavor of haricot bean and pumpkin flour to wheat flour could have been the reason for the odor of Kemesha samples.Moreover, the structuring agents did not alter the odor of the samples, which was pleasant [1].In general, the structuring agent has more affinity to starch and forms a stable polymeric network, which is important for the entrapment of carbohydrates and good Kemesha quality.The sensory properties of the spaghetti samples were found to be improved by Chillo et al. [72] and (L.Padalino et al. [1], who discovered that the presence of carboxymethyl cellulose (CMC) slows down the diffusion of amylose molecules from the internal part to the spaghetti surface.The spaghetti samples displayed good elasticity and firmness and low adhesiveness.Similarly, Yadav et al. [77] also observed an increase in overall acceptability with CMC for non-wheat pasta based on pearl millet flour containing barley and whey protein concentrate.In line with this study, Bharath Kumar and Prabhasankar [63] stated that noodles prepared with different cereal flours, vegetables, and pulses showed increased acceptability among consumers, with improved quality characteristics. 
Conclusions The chemical and physical properties of Kemesha formulations were effectively improved by including germinated haricot bean, ultrasonicated fine-milled pumpkin flour, and carboxymethyl cellulose (CMC), which improved Kemesha as a functional food.Doptimal mixture design was used to optimize the formulation of Kemesha with better nutritional, cooking, and sensory characteristics than the control sample.The optimal formulation contained 63.00 g of common wheat flour, 19.01 g of germinated haricot bean flour, 14.51 g of ultrasonicated fine-milled pumpkin flour, and 3.48 g of carboxymethyl cellulose per 100 g of flour.The amount of fibre and protein in the optimal sample was 3.85 and 1.31 times higher than the control Kemesha, respectively.Total phenolic, carotenoid, and antioxidant properties of optimized functional Kemesha were significantly higher than control Kemesha and may offer the inherent health benefits of pumpkin and germinated haricot bean, especially phytochemicals, to the consumer.This could substantially impact the increasing consumption of kemesha from underused crops like haricot beans and pumpkin flour, which are both functionally and nutritionally acceptable.Carboxymethyl cellulose (CMC) improved sensory and cooking quality aspects such as cooked loss, water absorption, and volume increase.It also raised hardness and lowered adhesiveness significantly (P ≤ 0.05).The sensory evaluation results showed that color changes increased the overall liking of the optimized treatment compared to the control sample.The findings of this study could also give the food industry vital knowledge on the development of new functional foods and the possible use of underutilized pumpkin and haricot bean crops in food formulations.In general, this form of Kemesha will provide vital nourishment and health benefits, despite its intake not being widespread in the country. Table 1 Experimental design showing the doses of each formulated blend and response each runs. Table 2 Chemical composition of raw materials. bAll values are mean ± standard deviation.This means sharing the same letters in raw are not significantly different from each other (student-t-test, p < 0.05; CWF, common wheat flour; GHBF, germinated SAB 632 haricot bean flour, UFPF, Ultrasonicated pumpkin flour milled to <75 μm.D.W. Bekele and S. Admassu Emire Table 3 ANOVA showing the linear, quadratic, and lack of fit of the response variables. P < 0.05 is significant, P > 0.05 is not significant; PRESS, predicted residual sum of squares; C.V, coefficient of variation.D.W. Bekele and S. Admassu Emire Table 5 Actual and predicted values of protein, fibre, TCC, and firmness of optimal formulation. Table 7 Physical properties control and optimized Kemesha.
2023-10-14T15:58:06.918Z
2023-10-01T00:00:00.000
{ "year": 2023, "sha1": "b902002083ff5a3245ea787f998715ddf2a56f61", "oa_license": "CCBYNCND", "oa_url": "http://www.cell.com/article/S2405844023080374/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "93a8ee7b190bb07bc71b83899a2f1e4bed12a6e8", "s2fieldsofstudy": [ "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Medicine" ] }
39508472
pes2o/s2orc
v3-fos-license
Basic amino acids in a distinct subset of signal peptides promote interaction with the signal recognition particle. Previous studies have demonstrated that signal peptides bind to the signal recognition particle (SRP) primarily via hydrophobic interactions with the 54-kDa protein subunit. The crystal structure of the conserved SRP ribonucleoprotein core, however, raised the surprising possibility that electrostatic interactions between basic amino acids in signal peptides and the phosphate backbone of SRP RNA may also play a role in signal sequence recognition. To test this possibility we examined the degree to which basic amino acids in a signal peptide influence the targeting of two Escherichia coli proteins, maltose binding protein and OmpA. Whereas both proteins are normally targeted to the inner membrane by SecB, we found that replacement of their native signal peptides with another moderately hydrophobic but unusually basic signal peptide (ΔEspP) rerouted them into the SRP pathway. Reduction in either the net positive charge or the hydrophobicity of the ΔEspP signal peptide decreased the effectiveness of SRP recognition. A high degree of hydrophobicity, however, compensated for the loss of basic residues and restored SRP binding. Taken together, the data suggest that the formation of salt bridges between SRP RNA and basic amino acids facilitates the binding of a distinct subset of signal peptides whose hydrophobicity falls slightly below a threshold level. The signal recognition particle (SRP) is a ribonucleoprotein complex that targets proteins to the eukaryotic endoplasmic reticulum (ER) as well as the bacterial inner membrane (IM). Although a core domain of SRP is highly conserved throughout evolution, both the size of the particle and its substrate specificity vary considerably (for review, see Ref. 1). Mammalian SRP is a relatively large particle comprised of six polypeptides and a 300-nucleotide RNA.
In the first phase of the targeting reaction, the SRP 54-kDa subunit (SRP54) binds to both Nterminal signal sequences and transmembrane segments (TMSs) of integral membrane proteins (which often lack cleaved signal peptides) as they emerge during translation (2)(3)(4). Subsequently the ribosome-nascent chain complex migrates to the ER, where an interaction between SRP54 and a membrane-bound receptor catalyzes release of the nascent chain and its insertion into a protein translocation channel (5)(6)(7). At the other extreme, Escherichia coli SRP consists of only a homolog of SRP54 (Ffh) and an ϳ100 nucleotide RNA (4.5 S RNA) that is closely related to helix VIII of mammalian SRP RNA. Despite the difference in size, bacterial and mammalian SRPs share many biochemical properties (8,9). The substrate specificity of E. coli SRP, however, is more restricted in that it targets primarily inner membrane proteins (IMPs) to the IM (10 -12). Most periplasmic and outer membrane proteins, which contain cleaved signal peptides, are targeted to the membrane by molecular chaperones such as SecB (13). Unlike SRP, chaperones do not recognize signal sequences. Instead, they bind to the mature region of presecretory proteins late during translation or post-translationally to maintain translocation competence and to ensure that signal peptides are accessible to gate open translocation channels (14). Biochemical studies showed 20 years ago that SRP recognizes the 7-13-amino acid hydrophobic core ("H region") that is a universal feature of signal peptides (15). More recently, crystallographic analysis of mammalian SRP54 and its bacterial homologs revealed the presence of a large hydrophobic groove in the "M domain" that likely represents the signal peptide binding pocket (16,17). Mammalian SRP appears to interact with signal peptides that vary widely in hydrophobicity. In bacteria and the yeast Saccharomyces cerevisiae, which also has multiple targeting pathways; however, SRP discriminates between different targeting signals that vary only slightly in hydrophobicity. In those organisms presecretory proteins that contain moderately hydrophobic signal peptides are bypassed by SRP and targeted by molecular chaperones by default. In E. coli, maltose binding protein (MBP) and OmpA are normally targeted to the IM by SecB, but increasing the net hydrophobicity of their signal peptides reroutes both proteins into the SRP pathway (18). Furthermore, the biogenesis of M13 procoat protein, a small IMP whose insertion normally does not require any targeting factor, becomes SRP-dependent when it contains an unusually hydrophobic signal peptide (19). Likewise, yeast SRP binds preferentially to signal peptides that have a high hydrophobicity index (20). The data suggest that different SRP54 homologs are calibrated to bind to a different range of targeting signals. Indeed the observation that the putative binding pockets of evolutionarily distant M domains differ considerably in size and shape (16,17) might account at least in part for the variation in substrate specificity. The recent solution of the crystal structure of the E. coli Ffh M domain bound to a fragment of 4.5 S RNA raised the unexpected possibility that SRP RNA may also play a role in substrate recognition (21). The x-ray data show that a portion of the phosphate backbone of 4.5 S RNA lies adjacent to the hydrophobic groove in the Ffh M domain and appears to create an extended signal peptide binding pocket. 
The structure suggests that electrostatic interactions between the phosphates * The costs of publication of this article were defrayed in part by the payment of page charges. This article must therefore be hereby marked "advertisement" in accordance with 18 U.S.C. Section 1734 solely to indicate this fact. ‡ To whom correspondence should be addressed: National Institutes of Health, Bldg. 5, Rm. 201, Bethesda, MD 20892-0538. Tel.: 301-402-4770; Fax: 301-496-9878; E-mail: harris_bernstein@nih.gov. 1 The abbreviations used are: SRP, signal recognition particle; ER, endoplasmic reticulum; HA, influenza hemagglutinin epitope HA.11; IM, inner membrane; IMP, IM protein; IPTG, isopropyl-␤-D-thiogalactopyranoside; MBP, maltose-binding protein; TF, trigger factor; TMS, transmembrane segment. and basic amino acids that often reside at the N terminus ("N region") of signal peptides and that flank TMSs might contribute to substrate recognition. Curiously, biochemical studies have not provided any evidence that SRP interacts with basic amino acids. Mutation of basic amino acids in model signal peptides does not significantly affect recognition by mammalian SRP in cell-free assays (22,23). By contrast, alteration of either the charge of the N region or the distance between basic amino acids and the H region can profoundly affect signal peptide cleavage, interaction with components of the translocation machinery, and translocation into ER vesicles (22)(23)(24)(25)(26). In E. coli, basic amino acids that flank TMSs influence IMP topology but are not required for membrane integration (27). A screen for mutations in the MBP signal sequence that improve export in a secBϪ strain (and that probably reroute MBP into the SRP pathway) yielded changes that increase its hydrophobicity but not its net positive charge (28). None of these results, however, rules out the possibility that interactions between SRP RNA and basic amino acids play a minor role in substrate recognition or that SRP RNA interacts with only a subset of signal peptides. Indeed electrostatic interactions might be expected to make a relatively small contribution to SRP recognition since signal peptides are predominantly hydrophobic and because only about a third of the putative extended binding surface is contributed by the RNA. In this study we reexamined the role of basic amino acids in the N region of signal peptides in targeting pathway selection. Our experimental strategy was based on the observation that E. coli presecretory proteins can be rerouted into the SRP pathway by increasing the hydrophobicity of their signal sequences (18). We reasoned that if electrostatic interactions between SRP RNA and basic amino acids in signal peptides promote substrate recognition, then the presence of a highly charged signal peptide might likewise alter the targeting of presecretory proteins. Consistent with our hypothesis, we found that replacing the native signal peptides of MBP and OmpA with a moderately hydrophobic, but atypically basic signal peptide derived from the signal peptide of the E. coli autotransporter EspP directed both proteins into the SRP pathway. As expected, SRP recognition required the presence of multiple basic amino acids. Several lines of evidence indicated, however, that basic residues only promote the binding of SRP to a distinct subset of signal peptides that barely escape detection on the basis of hydrophobicity alone. 
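The contribution of N-region charge discussed above can be made concrete with a small bookkeeping calculation. The sketch below simply counts the basic residues (Lys, Arg, and optionally a partial contribution from His) in an N-region sequence; the example sequences are hypothetical placeholders, not the actual EspP or MBP signal peptides, which are shown only in Fig. 1 of the paper.

```python
# Illustrative sketch: net positive charge contributed by basic residues
# in a signal-peptide N region. Sequences below are hypothetical stand-ins.

BASIC = {"K", "R"}      # lysine and arginine: full positive charge
PARTIAL = {"H"}         # histidine: possibly partially charged at physiological pH

def n_region_charge(n_region: str, count_his: bool = False) -> float:
    """Return the net positive charge of an N-region sequence."""
    charge = sum(1.0 for aa in n_region if aa in BASIC)
    if count_his:
        charge += sum(0.5 for aa in n_region if aa in PARTIAL)
    return charge

for name, seq in [("highly basic N region (hypothetical)", "MKKHKRI"),
                  ("typical N region (hypothetical)", "MKIKT")]:
    print(f"{name}: +{n_region_charge(seq, count_his=True):.1f}")
```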
Plasmid Construction-Plasmids pHL36, which contains an HAtagged version of ompA under the control of a trc promoter, and pJH28 and pJH29, which contain malE under the control of a tac promoter, have been described (18,29). To construct plasmid pJH46, the signal peptide of EspP was first amplified by PCR using the oligonucleotides 5Ј-GTTTCCCTTAAAAATGGAGCTCATATGAA-3Ј (EspP1) and 5Ј-GATGTAGAAATTTGAAATATCCATATGTGACGC and E. coli strain EDL933 genomic DNA (ATCC) as a template. The amplified DNA was then cloned into the NdeI site of pJH29. To make plasmid pJH47, the downstream NdeI site was abolished by site-directed mutagenesis using the oligonucleotide 5Ј-CTTTTGCGTCACAGATGAAAATCGA-AGAAGG-3Ј and its complement, and a new NdeI site was introduced in the middle of the EspP signal peptide using the oligonucleotide 5Ј-CA-TCAAGAGCAACTCATATGAAAAAACACAAACGCATACTTGC-3Ј and its complement. A plasmid that lacks the N terminus of the EspP signal peptide (pJH48) was then generated by resealing NdeI-digested pJH47. To construct plasmid pJH50, the EspP signal peptide was amplified with the oligonucleotides EspP1 and 5Ј-TTGAAATATCCATCTCGGC-CGCAAAAGAATATGAGG-3Ј, and the amplified DNA was cloned into the NdeI and EagI sites of pHL36. Subsequently a new NdeI site was introduced into the middle of the signal peptide, and the plasmid was resealed after NdeI digestion as described above to create pJH51. All of the mutant versions of the MBP and ⌬EspP signal peptides were constructed by introducing point mutations into pJH28 and pJH48. Site-directed mutagenesis was performed using the QuikChange mutagenesis kit (Stratagene). DNA encoding the first 94 amino acids of MBP and ⌬EspP-MBP was amplified using the oligonucleotides 5Ј-GTCCGTTTAGGTGTTTTCACGAGGAATTCACCA-3Ј and either 5Ј-TTGAGCGGATCCACCCATGCGGTCGTGTGCCCAGAA-3Ј or 5Ј-TTGAGCGGATCCACCCATGCGGTCGTGTGCCCAGAACATAATG-3Ј and either pJH28 or pJH48 as templates. The amplified DNA was then cloned into the EcoRI and BamHI sites of pGEM-4Z (Promega) to generate pJH56 and pJH57. To equalize the signal in in vitro translations, two amino acids near the C terminus of the ⌬EspP-MBP 94-mer were changed to methionine during the PCR amplification. Plasmid pJH58 was constructed by transferring an NheI-Hind III fragment of pJH42 (29) containing the tig gene into pBAD33. Protein Export Assays-For most experiments, cells were grown in M9 containing 0.2% glucose. Overnight cultures were washed and diluted into fresh medium at an optical density (OD 550 ) of 0.025. For analysis of OmpA export in SKP1101/SKP1102 and in cells that overproduced TF, M9 supplemented with 0.2% glycerol and all the L-amino acids except methionine and cysteine was used. For Ffh depletion studies, cells were grown overnight in M9 containing 0.2% fructose and 0.2% arabinose, washed in medium lacking arabinose, and then added at OD 550 ϭ 0.005 to medium containing fructose and either arabinose or glucose (0.2%). In general, synthesis of plasmid-borne presecretory proteins was induced by the addition of 50 M isopropyl-␤-D-thiogalactopyranoside (IPTG) at OD 550 ϭ 0.2. For trigger factor (TF) overproduction studies, cultures were divided in half at OD 550 ϭ 0.2, arabinose (0.2%) was added to one portion, and incubation was continued for 30 min before IPTG addition. To analyze protein export at low temperature, cultures were shifted to 22°C at OD 550 ϭ 0.2 and incubated for 40 min before IPTG addition. 
In experiments involving SKP1101/ SKP1102, cultures were grown at 30°C to OD 550 ϭ 0.1 and then incubated at 42°C for 1.5 h before IPTG was added. In all experiments aliquots were removed from each culture 20 -30 min after IPTG addition. Cells were then pulse-labeled with 30 Ci/ml Tran 35 S-label (Amersham Biosciences) for 30 s and incubated for various chase times. After the chase period proteins were precipitated immediately by the addition of cold 10% trichloroacetic acid. Immunoprecipitations were performed essentially as described (29), and proteins were resolved by SDS-PAGE on 8 -16% minigels (Novex). In Vitro Translation and Cross-linking-An E. coli translation extract was prepared by first rapidly chilling exponentially growing MRE600. Cells were washed and resuspended in 50 mM triethanolamine-acetic acid (pH 8.0), 50 mM KCl, 15 mM magnesium acetate, 1 mM dithiothreitol, 0.5 mM phenylmethylsulfonyl fluoride and passed twice through a French pressure cell at 8000 p.s.i. in a 1:1 (w/v) suspension. The cell lysate was centrifuged at 30,000 ϫ g for 30 min, and the resulting supernatant was then incubated at 37°C for 1 h before freezing. Truncated mRNAs were synthesized by incubating BamHI-digested pJH56 and pJH57 (100 ng/l) and 15 units of SP6 polymerase (Promega) in 40 mM Tris-HCl (pH 7.5), 6 mM MgCl 2 , 2 mM spermidine, 10 mM dithiothreitol, 0.5 mM rNTPs for 1 h at 40°C. In vitro translation reactions (50 l) programmed with these mRNAs were performed essentially as described (31), except that they were incubated at 25°C for 20 min. Reactions were then placed on ice for 5 min and diluted with an equal volume of buffer A (35 mM triethanolamine in acetic acid (pH 8.0), 60 mM potassium acetate, 11 mM magnesium acetate, 1 mM dithiothreitol). Ribosome-nascent chain complexes were collected by centrifugation at 60,000 rpm for 30 min in a TLA100 rotor at 4°C. The pellets were washed, resuspended in 50 l of buffer A, and divided in half. E. coli SRP (50 nM) that had been purified as described (32) was added to one portion. Cross-linking reactions were then performed with 2 mM disuccinimidyl suberate as described (33), and proteins were precipitated with cold acetone. Half of each sample was subjected to SDS-PAGE on 14% minigels. Ffh-containing polypeptides were isolated from the other half by immunoprecipitation and resolved by SDS-PAGE on 8 -16% minigels. RESULTS The Highly Basic ⌬EspP Signal Peptide Reroutes E. coli Presecretory Proteins into the SRP Pathway-In considering the hypothesis that basic amino acids in signal peptides play a role in targeting pathway selection, we reasoned that naturally occurring presecretory proteins that contain signal peptides with atypically charged N regions might be SRP substrates. Strikingly, the signal peptides of the serine protease autotransporters of E. coli and Shigella ("SPATEs") are both unusually long and unusually basic. These signal peptides contain a ϳ25amino acid segment that resembles typical signal peptides as well as a ϳ25-amino acid N-terminal extension of unknown function. Previous results indicate that one member of the SPATE family, Hbp, is targeted to the IM by SRP (34). To determine whether the basic residues found in SPATE signal peptides promote SRP recognition, we first replaced the native signal peptides of MBP and OmpA, two proteins that are normally targeted by SecB, with either the complete EspP signal peptide or a truncated version that lacks the N-terminal extension (⌬Esp). 
We then examined the effect of changing the signal peptide on the targeting of each protein. The EspP signal peptide was chosen as a model because its N region contains four closely spaced basic residues and a histidine, which might also be slightly charged (Fig. 1). Although the EspP signal peptide did not alter the targeting pathway of either MBP or OmpA (data not shown), we found that the ⌬EspP signal peptide eliminated the SecB requirement for export. Initially MC4100 (secBϩ) and HDB55 (secBϪ) were transformed with plasmids encoding MBP or OmpA or a derivative containing the ⌬EspP signal peptide ⌬EspP-MBP or ⌬EspP-OmpA under the control of the trc promoter. The plasmid-borne versions of OmpA were HA-tagged to distinguish them from endogenous OmpA. The synthesis of plasmid-borne proteins was induced by the addition of IPTG, and export was examined in pulse-chase labeling experiments. Radiolabeled proteins were immunoprecipitated, and export was assessed by comparing the relative amounts of precursor and mature forms of MBP or OmpA at each time point. Consistent with previous results, the wild-type proteins were exported much less efficiently in the secBϪ strain than in MC4100 ( Fig. 2A, lanes 1-6). By contrast, ⌬EspP-MBP or ⌬EspP-OmpA was exported equally well in both strains ( Fig. 2A, lanes 7-12). These results imply that the presence of the highly basic signal peptide reroutes the proteins from the SecB pathway to another targeting pathway or abolishes the need for a targeting factor altogether. Further investigation indicated that the ⌬EspP signal peptide directs presecretory proteins into the SRP targeting pathway. To test the effect of depleting SRP on the export of proteins containing the ⌬EspP signal peptide, isogenic secBϩ and secBϪ strains in which ffh is under the control of the araBAD promoter (HDB51 and HDB52, respectively) were transformed with a plasmid encoding MBP or ⌬EspP-MBP and grown in medium supplemented with arabinose. Ffh was then depleted from half of the cells by switching the carbon source to glucose, and protein export was assayed as described above. Ffh depletion did not measurably affect the export of ⌬EspP-MBP in HDB51 but caused a significant export defect in the secBϪ strain (Fig. 2B, lane 4). The results suggest that ⌬EspP-MBP is targeted by SRP in wild-type E. coli but can also be targeted effectively by molecular chaperones when the SRP pathway is impaired. Indeed given that the ⌬EspP signal peptide is only moderately hydrophobic, this interpretation of the data is consistent with other results showing that SRP dependence correlates with an unusual degree of signal peptide hydrophobicity (see below and Ref. 18). We next obtained direct evidence that SRP can interact with the ⌬EspP signal peptide in chemical cross-linking experiments. Cell-free translation reactions were programmed with mRNAs that encode the first 94 amino acids of MBP or ⌬EspP-MBP, radioactive nascent chains were synthesized, and the homobifunctional cross-linker disuccinimidyl suberate was added to isolated ribosome-nascent chain complexes in the presence or absence of 50 nM E. coli SRP. When ⌬EspP-MBP (but not wild-type MBP) nascent chains were synthesized, a prominent radiolabeled band of ϳ55 kDa (the combined molecular mass of Ffh and the nascent chain) was observed in the presence of SRP (Fig. 3A, lanes 1-4). Immunoprecipitation with an anti-Ffh antiserum confirmed that the band corresponded to a cross-linked complex of Ffh and the nascent chain (Fig. 3B, lane 4). 
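Throughout these pulse-chase experiments the export readout is the fraction of labeled protein found in the mature form at each chase time. A minimal helper for that calculation is sketched below; the band intensities are invented numbers used only to illustrate the arithmetic, not measured values.

```python
# Minimal sketch: percent export from precursor (p) and mature (m) band
# intensities in a pulse-chase experiment. Values are illustrative only.

def percent_exported(precursor: float, mature: float) -> float:
    """Fraction of the labeled protein processed to the mature form."""
    total = precursor + mature
    return 100.0 * mature / total if total > 0 else 0.0

# Hypothetical densitometry values at increasing chase times (arbitrary units).
time_points = [(0.5, 900.0, 100.0), (2.0, 400.0, 600.0), (5.0, 50.0, 950.0)]
for minutes, p, m in time_points:
    print(f"{minutes:>4} min chase: {percent_exported(p, m):5.1f}% exported")
```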
Ffh was cross-linked to the ⌬EspP signal peptide considerably less efficiently than to the highly hydrophobic MBP*1 signal peptide (data not shown), but the reason for this discrepancy is unclear. The results of a different set of experiments strongly suggested that SRP also targets ⌬EspP-OmpA to the IM. Presumably because SecB targets wild-type OmpA post-translationally, a variable amount of pro-OmpA was reproducibly observed in pulse-labeled MC4100 and related secBϩ strains (Figs. 2, A and C, lane 1). This effect was particularly pronounced when cells were grown at 22°C (Fig. 2C, lane 1, top panel). Interestingly, the precursor form of ⌬EspP-OmpA was not observed in pulse-labeled MC4100 (Fig. 2A, lane 7; Fig. 2C, lane 3, top panel). When a strain harboring an ffh Ts mutation (SKP1101) and an isogenic ffhϩ strain (SKP1102) were shifted to 42°C, however, the ⌬EspP-OmpA precursor was observed in the mutant strain (Fig. 2C, lane 3, bottom panel). These results suggest that ⌬EspP-OmpA is targeted rapidly to the IM by the co-translational SRP pathway in wild-type cells but routed by default into a slower post-translational pathway when SRP function is impaired. We obtained further evidence that SRP targets ⌬EspP-OmpA to the IM in experiments in which we overproduced TF, a chaperone that binds promiscuously to nascent polypeptides early in biosynthesis. Previous work showed that TF overproduction strongly retards the export of OmpA, ␤-lactamase, and alkaline phosphatase (a protein that does not require a chaperone for export) but does not affect the biogenesis of proteins targeted by SRP (29). This effect can be explained by the observation that the binding of SRP and TF to nascent polypeptides is mutually exclusive (35). We transformed HDB37 (MC4100 araϩ) with plasmids expressing the TF gene under the control of the araBAD promoter and either OmpA or ⌬EspP-OmpA. As expected, the addition of arabinose greatly delayed the export of OmpA (Fig. 2D, lanes 1-4). TF overproduction, however, only very slightly affected the export of ⌬EspP-OmpA (Fig. 2D, lanes 5-8). Taken together with the results described above these data provide strong evidence that the presence of the ⌬EspP signal peptide routes presecretory proteins into the SRP pathway. SRP Recognizes the ⌬EspP Signal Peptide on the Basis of Both Charge and Hydrophobicity-We next wished to deter-mine whether the basic amino acids in the N region of the ⌬EspP signal peptide are required for SRP binding. To this end we mutagenized the basic residues in various combinations to glutamine. Mutants that contained glutamine in place of the first two lysines and the histidine (⌬EspP(Ϫ3)), the lysine and arginine adjacent to the H region (EspP(Ϫ2)), and all five of the charged and partially charged residues (⌬EspP(Ϫ5)) were produced (see Fig. 1). MC4100 and HDB55 were transformed with plasmids that encode the modified versions of ⌬EspP-MBP, and export was assessed as described above. Decreasing the net charge of the N region of the signal peptide did not affect export in MC4100 but led to progressively severe export defects in the secBϪ strain (Fig. 4, top three panels). Interestingly, mutation of either the first two or the last two charged amino acids in the ⌬EspP signal peptide partially restored the SecB requirement. These results imply that complete rerouting of MBP into the SRP pathway requires the presence of basic amino acids at multiple positions within the N region of the ⌬EspP signal peptide. 
In considering the features of a signal peptide that promote SRP binding, we were struck by the fact that the H regions of the ⌬EspP and MBP signal peptides are curiously similar in sequence (see Fig. 1). Five amino acids in the respective H regions are identical, and two others are closely related. Although both H regions contain seven large and two small hydrophobic amino acids, a calculation based on a standard hydropathy scale (36) indicates that the ⌬EspP H region has a higher average hydrophobicity. We conjectured that this rela-tively small difference between the two signal peptides might help to explain their differential ability to interact with SRP. To test this possibility we attached versions of the ⌬EspP signal peptide that contain single point mutations (F12A and L15T) to MBP. These mutations were chosen because they introduced the less hydrophobic amino acids found at specific positions in the MBP signal peptide into the ⌬EspP signal peptide. The single amino acid substitutions had no effect on MBP biogenesis in MC4100 but created export defects in 4. Interaction of SRP with the ⌬EspP signal peptide requires a minimum level of charge and hydrophobicity. MC4100 and HDB55 were transformed with a plasmid that produces the indicated variant of ⌬ EspP-MBP. After IPTG was added, protein export was analyzed by pulse-chase labeling and immunoprecipitation with an anti-MBP serum. The length of the chase is shown. p, precursor; m, mature. HDB55 that were at least as severe as those produced by the ⌬EspP(Ϫ5) mutant (Fig. 4, bottom two panels). Indeed the export of MBP containing the ⌬EspP(L15T) signal peptide showed essentially the same degree of SecBϪ dependence as wild-type MBP (compare Figs. 4 and 2A). These results strongly suggest that the H region of the ⌬EspP signal peptide barely surpasses a threshold level of hydrophobicity that is essential for SRP recognition. Moreover, by showing that targeting pathway selection is far more sensitive to small changes in the H region than to neutralization of the entire N region, the data suggest that signal peptide hydrophobicity is the primary parameter that governs SRP binding. We obtained additional evidence that SRP recognition requires a minimum level of signal peptide hydrophobicity in experiments in which we increased the net positive charge of the N region of the wild-type MBP signal peptide. Our mutagenesis strategy involved changing 2 or 3 amino acids to arginine or lysine. The most highly charged signal peptide variant (MBP(ϩ3)) contains a stretch of five consecutive basic amino acids that is nearly identical in sequence and location to the basic motif found in the ⌬EspP signal peptide (see Fig. 1). Pulse-chase experiments conducted in MC4100 and HDB55 cells showed that attachment of a signal peptide containing two extra positive charges (MBP(ϩ2)) to MBP had no effect on the rate of export or the SecB requirement (Fig. 5, top two panels). The export of MBP containing the MBP(ϩ3) signal peptide was then analyzed at 22°C as well as 37°C since electrostatic interactions are likely to be stronger at low temperature. Remarkably, attachment of the MBP(ϩ3) signal peptide appeared to actually increase dependence on SecB at both temperatures and slow export at 22°C (Fig. 5, last three panels). These data confirm that a high net positive charge of a signal peptide does not promote SRP binding if the hydrophobicity falls even slightly below a sharply defined threshold. 
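The hydrophobicity comparison invoked above can be reproduced in outline with a standard hydropathy scale. The snippet below averages Kyte-Doolittle values over an H-region sequence; the scale actually cited in the paper (Ref. 36) and the true ΔEspP and MBP H regions (given only in Fig. 1) may differ from the placeholders used here.

```python
# Sketch: mean hydropathy of a signal-peptide H region on the Kyte-Doolittle
# scale. The example sequences are hypothetical, not the real H regions.

KYTE_DOOLITTLE = {
    "A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5, "Q": -3.5,
    "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5, "L": 3.8, "K": -3.9,
    "M": 1.9, "F": 2.8, "P": -1.6, "S": -0.8, "T": -0.7, "W": -0.9,
    "Y": -1.3, "V": 4.2,
}

def mean_hydropathy(h_region: str) -> float:
    """Average hydropathy over the residues of an H region."""
    return sum(KYTE_DOOLITTLE[aa] for aa in h_region) / len(h_region)

for name, seq in [("H region A (hypothetical)", "ALLAVLLCLF"),
                  ("H region B (hypothetical)", "ALLAVTLGLF")]:
    print(f"{name}: {mean_hydropathy(seq):+.2f}")
```

A single substitution such as L to T in such a window lowers the mean noticeably, which is consistent with the sharp threshold behavior described above.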
A High Degree of Signal Peptide Hydrophobicity Is Sufficient to Promote SRP Recognition-Given that the targeting of ⌬EspP-MBP appeared to be more sensitive to changes in hydrophobicity than net positive charge, we hypothesized that basic amino acids might be superfluous for SRP recognition provided that a signal peptide is sufficiently hydrophobic. To test this idea, we first systematically increased the hydrophobicity of the ⌬EspP(Ϫ5) signal peptide by mutating C11 and G14 and increasing the length of the H region and examined the export of MBPs containing the mutant signal peptides. Although some of the mutations slightly delayed MBP export in MC4100 (Fig. 6A, lanes 1-3), it was clear that increases in the overall hydrophobicity and length of the H region progressively reduced the SecBϪ dependence of export (Fig. 6A, lanes 4 -6). Elevating the hydrophobicity of the signal peptide concomitantly increased the severity of export defects in HDB51 after Ffh depletion (Fig. 6B, lanes 3 and 4). This enhanced SRP dependence likely reflects protein aggregation in the absence of a co-translational targeting mechanism. A signal peptide that contained leucines in place of Cys-11 and Gly-14 (⌬EspP*2(Ϫ5)) appeared to confer partial dependence on both the SecB and SRP pathways and therefore probably interacts with SRP only marginally. The data demonstrate that a high degree of signal peptide hydrophobicity is sufficient to route a presecretory protein into the SRP pathway. To corroborate this conclusion, we subsequently reexamined the export of MBP*1, an MBP derivative containing three amino acid substitutions that increase the hydrophobicity of the signal peptide (Fig. 1). Previous studies showed that MBP*1 is targeted to the IM by SRP (18). Because the MBP*1 signal peptide is nearly as hydrophobic as the most hydrophobic ⌬EspP signal peptide derivatives described above, we surmised that the three basic amino acids in the N region might be dispensable for SRP recognition. Consistent with this prediction, we found that like MBP*1, MBP*1(Ϫ3) was exported efficiently from secBϪ cells (Fig. 6A, bottom panel). Moreover, both proteins showed similar export defects in cells that lack Ffh (Fig. 6B, bottom panel). Taken together the results provide additional evidence that SRP recognizes signal peptides primarily on the basis of hydrophobicity. DISCUSSION In this report we describe evidence that basic amino acids in the N region of signal peptides can play a significant role in promoting signal peptide recognition by SRP. Initially we found that the unusually basic ⌬EspP signal peptide suppresses the SecB requirement in the export of MBP and OmpA under physiological conditions and that this effect was dependent on the presence of multiple basic amino acids in the N region. Taken together, several observations strongly suggest that the elimination of the SecB requirement was due to a rerouting of the proteins into the SRP pathway. First, the export of ⌬EspP-MBP was inhibited by Ffh depletion in secBϪ cells. The simplest interpretation of this result is that the protein can be targeted effectively by both SRP and chaperonebased pathways and that export defects are detected only when multiple pathways are impaired. Because SRP acts at a very early stage of protein biosynthesis, this explanation implies that it provides the primary targeting pathway for ⌬EspP-MBP. Second, cross-linking experiments showed directly that SRP can interact with the ⌬EspP signal peptide. 
Third, the presence of the ⌬EspP signal peptide accelerated OmpA export except when the SRP pathway was impaired. Fourth, the ⌬EspP signal peptide prevented the delay in OmpA export that is associated with TF overproduction. Based on previous studies (29,35), the most likely explanation of this result is that interaction with SRP prevents the binding of TF to the mature region of ⌬EspP-OmpA. Finally, several experiments showed that the presence of a highly basic N region is necessary but not sufficient to explain the strong effect that the ⌬EspP signal peptide exerts on targeting pathway selection. The data strongly suggest that the basic amino acids in the ⌬EspP signal peptide contribute to eliminating the SecB requirement by promoting a specific macromolecular interaction rather than by affecting the folding of presecretory proteins. Although we found that signal peptide charge can influence targeting pathway selection in E. coli, our results clearly show that signal peptide hydrophobicity is the primary criterion for SRP recognition. We found that SRP recognizes signal peptides that are devoid of basic amino acids provided that they are atypically hydrophobic. Furthermore, single point mutations that slightly change the hydrophobicity of the H region profoundly affect SRP recognition (see also Ref. 18), whereas mutations that alter the charge of the N region have much smaller effects. Indeed one of our most intriguing observations is that a threshold level of signal peptide hydrophobicity is absolutely essential for SRP recognition. Taken together with the finding that a 20-fold overproduction of SRP does not alter the targeting of MBP (18), this observation suggests that SRP has dra-matically different affinities for signal peptides that vary only slightly in hydrophobicity. Given that SRP binds to a diverse range of substrates, such an exquisite degree of specificity seems surprising. The ability of E. coli SRP to interact with signal peptides may be very limited, however, because it is probably designed to interact primarily with the extended stretches of hydrophobic residues found in the TMSs of IMPs. In this regard it is interesting to note that SRP recognizes the MBP*1 signal peptide but not ⌬EspP*1(Ϫ5) signal peptide. The H domain of the former peptide is longer but has a lower average hydrophobicity. Indeed it makes sense that the number of hydrophobic amino acids in a targeting signal would be an important factor in SRP recognition since few TMSs have an average hydrophobicity equivalent to that of signal peptides such as MBP*1. Our results also imply that basic residues promote the binding of SRP to only a subset of signal peptides whose hydrophobicity falls slightly below a critical level. The contribution of signal peptide charge to SRP recognition may not have been detected in previous studies precisely because it was not significant for the recognition of the small number of model signal peptides that were examined. Our data predict that basic residues promote the recognition of only relatively few naturally occurring signal peptides in E. coli because the hydrophobicity threshold for SRP interaction is set extremely high. If the threshold is set closer to the hydrophobicity of an average signal peptide in other species due to differences in the structure of SRP54/Ffh or the interaction of SRP with the translation machinery, however, then the composition of the N region may be relevant for the binding of a much greater number of substrates. 
In light of the crystallographic analysis of the SRP ribonucleoprotein core (21), it is very likely that basic residues in signal peptides promote SRP binding by forming electrostatic interactions with the phosphate backbone of SRP RNA. It is doubtful that basic residues form salt bridges with SRP54/Ffh because the protein does not have any significant negatively charged surfaces (16,37). We cannot completely exclude the possibility, however, that basic amino acids in signal peptides facilitate SRP binding by an indirect mechanism, perhaps by affecting the length or ␣-helical structure of the H region. Given that arginine-and lysine-to-glutamine substitutions perturb the interaction of the ⌬EspP signal peptide with SRP significantly but presumably alter its biophysical properties only very minimally (38), this scenario seems unlikely. A simple model that emerges from our data is that electrostatic interactions involving SRP RNA help to stabilize the binding of signal peptides that bind to SRP54/Ffh with only moderate affinity. Factors such as the size and hydrophobicity of the signal peptide binding pocket in the M domain and the relative position of the M domain with respect to the phosphate backbone of SRP RNA may influence the range of substrates that are effectively engaged via these stabilizing interactions. Indeed a two-part binding surface that has the capacity to form two distinct types of chemical bonds with potential ligands may have evolved to fine tune the limits of SRP recognition to meet the needs of different organisms.
2018-04-03T04:34:11.450Z
2003-11-14T00:00:00.000
{ "year": 2003, "sha1": "89e9ed56eb1cb37ea44c8e5d41803807c930d7cc", "oa_license": "CCBY", "oa_url": "http://www.jbc.org/content/278/46/46155.full.pdf", "oa_status": "HYBRID", "pdf_src": "Adhoc", "pdf_hash": "9036105ce28d46d66fbd20d02c4ede6a8ee4d067", "s2fieldsofstudy": [ "Biology", "Chemistry" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
56360551
pes2o/s2orc
v3-fos-license
Investigation of Neutrino-Nucleon Interaction through Intermediate Vector Boson ( IVB ) This work deals with the interaction of neutrino with the nucleon considering data taken from different experiments. It is assumed that the interaction of neutrino with nucleons go through the intermediate vector boson (IVB) which may be the W or Z with effective mass of the order of 80 GeV. The neutrino wave function is obtained via perturbation technique to calculate the weak leptonic current. On the other hand, the quark current is estimated using the measured experimental data of deep inelastic scattering of neutrino-nucleon interaction. Eventually the total interaction transition matrix is calculated as a function of momentum transfer square, q and qualitatively compared with the available experimental data. Besides, a comparative study is also done to explore the influence of the target composition during the neutrino weak interactions. In this context an investigation of neutrino-proton and neutrino-neutron interactions are carried out to calculate the deep inelastic cross section in both cases. Introduction The problem of weak interactions through the charged and neutral currents is dealt by many different approaches.A classical picture of lepton neutral current by James L. Carr [1] considered that, when charged current weak interactions are excluded, the neutral current weak interaction is formally similar to ordinary electromagnetism with a massive photon.In this spirit, the Maxwell equations for the fields of the Z-boson are derived from the standard model.For neutral current events, electrons (or neutrinos) remain as electrons (or neutrinos). In the charged current case, an initial electron state emerges as a final neutrino state or vice-versa.For this reason it is difficult to consider such a picture for the charged current interaction.A non-relativistic weak-field Hamiltonian for the electron is developed which allows computing the interaction energy of an electron in the presence of a classical Z-boson field.The Maxwell equations for the Z-boson are then developed.In the absence of sources, the Maxwell equations, [2] are identical to those of ordinary electromagnetism but with a massive photon.The Maxwell equation source terms are derived from the interaction energies for both electron and neu-trino sources.The Maxwell equations derived in this case can be used to describe the Z-boson field generated by macroscopic or atomic-scale.They may also be used to visualize the Z-boson fields surrounding classical pointlike electrons and neutrinos.The classical point particle solutions provide an interesting visualization of the parity violation in the standard model in terms of a vortex-like magnetic field structure oriented with the electron's spin. In calculating the cross section of neutrino nucleon interactions, we consider the three independent helicity states (-1,+1,0) for the mediating bosons W ± .In the weak interactions there is no conservation of parity which compels helicity -1 and +1 states to occur with equal probability as a coherent superposition as in electromagnetic case.Thus for e-nucleon interactions we need 2-structure functions (F 1 and F 2 ) to describe the inelastic cross section while 3-structure functions (F 1 , F 2 and F 3 ) are needed for neutrino nucleon interactions. Background An alternative method was developed by T. 
Siiskonen et al., [3], where the phenomenological structure of the weak hadronic current between the proton and neutron states is well determined by its properties under the Lorentz transformation.Additional constraints come from the requirement of time reversal symmetry as well as from the invariance under the G-parity transformation (combined charge conjugation and isospin rotation).The resulting interaction Hamiltonian consists of vector (V), axial vector (A), induced weak magnetism (M), and induced pseudoscalar (P) terms together with the associated form factors C  ,  = V, A, M, or P.These form factors are called as coupling constants at zero momentum transfer.The present experimental knowledge does not exclude the presence of the scalar and tensor interactions.However, their contribution is expected to be small due to weak coupling [4].The values of vector, axial vector, and weak magnetism couplings are well established by beta-decay experiments as well as by the conserved vector current hypothesis (CVC), introduced already in the late 50's [5].The magnitude of the pseudoscalar coupling is more uncertain, although the partially-conserved axial current hypothesis (PCAC) [6] provides an estimate along with muon capture experiments in hydrogen [7,8].In nuclear beta decay, with an energy release up to some 20 MeV, only the vector (Fermi) and the axial vector (Gamow-Teller) terms are usually important.The induced pseudoscalar and weak magnetism parts are essentially inactive, since their contributions are proportional to q /M, where q is the energy release and M is the nucleon mass (in units where ħ = c = 1). T. Siiskonen et al., [9,10] constructed effective operators for the weak hadronic current between proton and neutron states.These operators take into account the core polarization effects, which are expected to be the largest correction to the bare matrix element [11]. As mentioned earlier, Fermi conceived of -decay as a process analogous to that of an electromagnetic transition, the electron-neutrino (e-) pair are playing the role of the emitted photon.The amplitude was assumed to involve, for the nucleons, the hadronic weak current matrix element <p|J  |n >, in analogy to the electromagnetic transition currents.A simple Lorentz invariant amplitude is then obtained if e pair also appears as a 4-vector combination <e|J  |O>, which is the leptonic weak current matrix element.The complete matrix element being, M = <p|J  |n> <e|J  |O>.At very low energy release, one might expect that, to a good approximation, all momentum dependence in the matrix element could be ignored, reducing it to a constant G = 1.14 × 10 -5 GeV -2 , in the natural units (ħ = c = 1).The first statement of universality of weak interactions was that all processes have the same coupling constant G. 
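For reference, the matrix element described in words above is conventionally parameterized as
\[
\langle p \,|\, J_\mu \,|\, n \rangle \;=\; \bar{u}_p(p')\Big[\, C_V\,\gamma_\mu \;+\; C_M\,\frac{i\,\sigma_{\mu\nu}\,q^{\nu}}{2M} \;+\; C_A\,\gamma_\mu\gamma_5 \;+\; C_P\,\frac{q_\mu}{M}\,\gamma_5 \,\Big]\, u_n(p), \qquad q = p' - p ,
\]
where the normalization of the induced weak-magnetism and pseudoscalar terms varies between conventions; the expression above is the generic textbook form rather than a quotation of the specific convention adopted in Refs. [3,9,10].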
Fermi's vector-vector theory was motivated by the analogy of the vector currents of QED.The analogy was however, imperfect.The photon emitted in a radioactive transition is the quantum of the electromagnetic field, but it is hard to see how the corresponding e  pair can be the weak field quantum, since the effective mass of the pair varies from process to another.It is therefore natural to postulate the existence of a weak analogue of the photon-the intermediate vector boson (IVB) and to suppose that weak interactions are mediated by the exchange of IVB's as the electromagnetic ones are by photon exchange.This was the first step toward an eventual unification at the weak and electromagnetic fields.In the presence of currents, the wave equation for the photon has the form: The propagator associated to the process is just the inverse of the differential operator in Equation (1).Applying this to the free particle, we get . As for a massive spin-1 particle, in a general gauge, the Maxwell equations read We make the natural replacement For plane-wave solution, And the propagator is expected in this case to correspond to the inverse operator which may be written in the form of A g B q q     . The values of the constants A and B are found from the matrix identity This leads to the propagator form Then the total transition matrix element has the form: Furthermore, a series of celebrated experiments [12][13][14] have shown that neutrinos have the following properties: 1) They are massless or nearly so in the standard model viewpoint. 3) They have spin 1/2 but only the negative helicity state (left-handed) participates in weak interactions. 4) The weak interactions don't conserve P, the parity, not do they respect invariance under the charge conjugation.Instead of pure vector currents, we have now the vector V, and axial vector, A, pieces.The leptonic weak current for each lepton and its neutrino has the form, where g, is the coupling constant for the W(Z) boson that exchanges in weak processes.The total interaction matrix element contains both the leptonic current and the hadronic or the quark current, written as: where q refers to the quark type u, d, s,…. Problem Statement A model for weak interaction of neutrino with nucleons is proposed.In this model we assume that the neutrino interacts with nucleons through the IVB which may be the W or Z with effective mass about 80 GeV.The Feynman diagram as in Figure 1 represents the interaction. The scattering amplitude is then calculated according to Equation (6).The implementation of this equation reveals two main problems.The first of them is latent in the calculation of neutrino wave function to calculate the weak leptonic current.The second one comes in calculating the quark hadronic current, which consequently needs the specification of the wave function of the quarks forming the nucleon. Results and Discusion As mentioned earlier Equation ( 5), the weak leptonic current is calculated as: where ( ) & ( ')     , are the neutrino wave functions before and after scattering at the first vertex of Figure 1. 
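The chain of propagator equations summarized in the preceding paragraphs can be written out explicitly. Since the displayed formulas are not reproduced in the extracted text, the following is a reconstruction of the standard massive spin-1 (IVB) result rather than a verbatim copy of Equations (1)-(6). For the photon in the Lorenz gauge, \( \Box A^\mu = J^\mu \), so the momentum-space propagator is \( -g^{\mu\nu}/q^2 \). For a boson of mass \( M_W \) the momentum-space wave operator becomes \( (-q^2 + M_W^2)\,g^{\mu\nu} + q^\mu q^\nu \); writing its inverse in the form \( A\,g^{\mu\nu} + B\,q^\mu q^\nu \) and imposing
\[
\big[(-q^2 + M_W^2)\,g_{\mu\lambda} + q_\mu q_\lambda\big]\,\big(A\,g^{\lambda\nu} + B\,q^\lambda q^\nu\big) \;=\; \delta_\mu^{\;\nu}
\]
fixes \( A = 1/(M_W^2 - q^2) \) and \( B = -A/M_W^2 \), so that
\[
D^{\mu\nu}(q) \;=\; \frac{-g^{\mu\nu} + q^\mu q^\nu / M_W^2}{q^2 - M_W^2},
\qquad
\mathcal{M} \;\propto\; J^{\text{lepton}}_{\mu}\, D^{\mu\nu}(q)\, J^{\text{quark}}_{\nu}.
\]
In the same notation the leptonic V-A current reads \( J^{\mu}_{\text{lepton}} = \tfrac{g}{2\sqrt{2}}\,\bar{u}_\nu\,\gamma^\mu (1-\gamma_5)\,u_\ell \), and for \( |q^2| \ll M_W^2 \) the propagator collapses to a contact term, reproducing the effective four-fermion coupling \( G/\sqrt{2} = g^2/(8 M_W^2) \).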
As a good approximation, it is possible to consider the neutrino's wave function as a plane wave with the form: The 4-component matrix u describes the spin 1/2 particle: Since the neutrino is massless and moves initially in the z-direction so, On the other hand, we used the perturbation technique to find the scattered wave function of the neutrino as: Where ,  and k'are the azimuthal, polar angles and the momentum of the scattered neutrino, f is the scattering amplitude and r is the distance from the scattering center.Since the scattering is due to weak field, then it is sufficient to consider only one term in the perturbation series. Then the first component of the leptonic current J x is given by,   The integrals in Equation ( 12) are due to the averaging of the current allover the available space inside the nucleon of radius R. Similarly J y , J z are found to be: The weak leptonic current density is a complex function of the momentum transfer q 2 , the imaginary part of which measures the absorption rate.The current components J x and J y are equal in the absolute values, due to the assumption of azimuthal symmetry of the problem.Fig- ure 2 displays the current components J x and J z , while the total leptonic current is displayed in Figure 3. Appreciable values of the current are obtained near small q 2 .To proceed further, we shall determine the wave functions for the u and d quarks, forming the nucleon by empirical method.In other words, we shall use the values of the structure functions F 2 (x) and xF 3 (x) that extracted from the deep inelastic scattering of neutrino with nucleon.Making the approximation of setting the Cabibbo angle to zero, we obtain the correspondence where 2 p F  and 3 p F  are the structure functions for -p scattering.Using the hadronic isospin invariance we get, where 2 n F  and 3 n F  are the structure functions for -n scattering.Hence it is easy to define the quark and the anti-quark wave functions as: The structure function F 2 and xF 3 are functions only in the scaling variable x and approximately independent of the 4-momentum square q 2 .The data of the experiments carried out in CERN-WA-025 [15] and FNAL-616 [16] are used to put the functions F 2 and xF 3 in parametric forms in the variable x, as shown by figures Figure 4 and Figure 5 for -n and -p reactions.The structure function F 2 is formulated as: show that the structure function F 2 is more predominant in -n than the -p all over the range of x.Their values are relatively close near the deep inelastic scattering (x  0) and divert toward the elastic end (x1).On the other hand the third structure function xF 3 shows a bell shape in all cases with peak value near (x  0.7).The -n structure function overpass that of -p with relatively constant ratio of 2 that divert to more than 4 near the elastic end.Accordingly we conclude that target constitution plays important role in the interaction cross section.In other words since neutrons are enriched with d quarks so a model that relies a point like interaction is much supporting collision of -d more than  collisions with u quarks.In this context we are able to extract the quark distribution functions u(x) and d(x) according to Equation (18). Figure 7 shows that the quark currents have minimum value in the range 0.4 < x < 0.8 for both u and d quarks, as well as they are very close to each other. 
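The extraction of the quark densities from the measured structure functions (the paper's Equation (18), which is not reproduced in the text) can be illustrated with the standard leading-order parton-model relations F₂ = 2x(q + q̄) and xF₃ = 2x(q − q̄), with q = d, q̄ = ū for ν-p and q = u, q̄ = d̄ for ν-n when the Cabibbo angle is set to zero and the strange sea is neglected. The sketch below inverts these relations; the parametric forms of F₂ and xF₃ are placeholders, since the fitted parameterizations are only shown graphically.

```python
# Sketch: leading-order extraction of quark densities from neutrino structure
# functions, assuming F2 = 2x(q + qbar) and xF3 = 2x(q - qbar).
# The parametric F2 / xF3 below are illustrative placeholders, not the fits.

def densities(F2: float, xF3: float, x: float):
    """Return (q(x), qbar(x)) at Bjorken x."""
    q = (F2 + xF3) / (4.0 * x)
    qbar = (F2 - xF3) / (4.0 * x)
    return q, qbar

F2_nup = lambda x: 3.0 * x * (1.0 - x) ** 3      # hypothetical parameterization
xF3_nup = lambda x: 2.4 * x * (1.0 - x) ** 3     # hypothetical parameterization

for x in (0.1, 0.3, 0.5, 0.7):
    d, ubar = densities(F2_nup(x), xF3_nup(x), x)
    print(f"x = {x:.1f}:  d(x) = {d:6.3f}   ubar(x) = {ubar:6.3f}")
```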
Further, according to Equation (6), the relation between the matrix element squared M² and the momentum transfer squared q² is displayed in Figure 8, which reveals that the matrix element is almost independent of q² in the range 0.05 < x < 0.5. The general feature of the results seems comparable to those produced by the CTEQ collaboration [17] and the MRS collaboration [18] at adjacent energy values.

Conclusions
In summary, neutrino-nucleon interaction was investigated through the intermediate vector boson (IVB). The neutrino wave function was derived with a perturbation technique, so the weak leptonic current can be obtained in terms of q². Also, the quark wave functions were determined by an empirical method based upon experimental data, and the weak hadronic current can be estimated as a function of x. The differential deep inelastic cross section of neutrino-nucleon interaction is described in terms of three structure functions representing the three helicity states H = 1, -1 and 0. The appreciable increase of the ν-n cross section compared to ν-p supports the point-particle interaction model and indicates that the cross section likely depends on the flavor of the nucleon constituent quarks. The down quark (d) structure function exceeds that of the up quark (u) at all values of x. The quark distribution functions are also studied using ν [19], e and µ [20] inelastic scattering. The analyses are done in the leading order (LO) and next-to-leading order (NLO) of the running coupling constant. Although the NLO corrections show a slight modification at small x, they are not significant at larger x. In both cases the results of the analysis are very close to those obtained by the IVB model. It is also found that the determination of the quark distribution functions is independent of the type of projectile, whether it is ν or e. Moreover, the total interaction matrix element calculated by IVB and NLO is found to be almost independent of q² in the range 0.05 < x < 0.5. The prediction of this analysis shows globally fair agreement with experimental data in the neutrino energy range Eν = 150-250 GeV.

The study of the Lorentz covariance of the Dirac equation defines the vector current as ū γ^µ u, where u is a 4-component wave function and γ₅ is a 4 × 4 matrix defined in terms of the Dirac γ matrices as γ₅ = iγ⁰γ¹γ²γ³. The right- and left-handed helicity projection operators are P_R = (1 + γ₅)/2 and P_L = (1 − γ₅)/2, so that, for massless spin-1/2 neutrinos, the combination (1 − γ₅)u selects the left-handed state that takes part in the weak interaction.

The quark functions u, d, ū and d̄ are calculated using Equation (18) and presented in Figure 6. It is clear that the quark wave functions have similar behavior, with appreciable values only in the range x < 0.4. Also, they decrease gradually with x and diminish at x = 1. The quark current is then calculated in terms of the quark wave functions u(u) and u(d).

Figure 2. The lepton current components Jx and Jz as a function of q².
Figure 3. The total lepton current as a function of q².
Figure 4. The relation between F₂ and x for ν-p, ν-n and their relative values.
Figure 5. The relation between xF₃ and x for ν-p, ν-n and their relative values.
Figure 6. The wave functions of the quarks and antiquarks u, d, ū and d̄ as estimated by the empirical method.
Figure 7. The weak quark current for u and d quarks as a function of x.
Figure 8. The matrix element squared as a function of q² at different x values.
2018-12-18T00:43:22.487Z
2010-10-29T00:00:00.000
{ "year": 2010, "sha1": "d700219af51e650d9fa97395f492e3e05d93089a", "oa_license": "CCBY", "oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=3317", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "d700219af51e650d9fa97395f492e3e05d93089a", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
240416610
pes2o/s2orc
v3-fos-license
Automatic Noise Analysis on Still Life Chart
In this paper, we tackle the issue of estimating the noise level of a camera, on its processed still images and as perceived by the user. Commonly, the characterization of the noise level of a camera is done using objective metrics determined on charts containing uniform patches at a given condition. These methods can lead to inadequate characterizations of the noise of a camera because cameras often incorporate denoising algorithms that are more efficient on uniform areas than on areas containing details. Therefore, in this paper, we propose a method to estimate the perceived noise level on natural areas of a still-life chart. Our method is based on a deep convolutional network trained with ground truth quality scores provided by expert annotators. Our experimental evaluation shows that our approach strongly matches human evaluations.

Introduction
Camera quality has been considerably improved in the last years to meet the ever-growing standards of the consumers. Image quality can be characterized through multiple attributes such as exposure, color, texture, and noise. In this work, we are focused on assessing the capability of a camera to control its level of noise. In addition, we aim to provide this assessment as a metric that correlates with human judgment. To assess the quality of a camera, a common way is to capture for each camera the same chart in a controlled environment. A chart is designed to be reproducible and therefore allows a fair comparison of different cameras thanks to its consistent visual content. Since noise in an image is a random granulation, it is not exactly reproducible from one image to another, but only statistically, so generally we aim at estimating its second central moment (i.e. its variance) to describe this random process. This quantity is easier to estimate over uniform areas, which is why noise is commonly measured on charts with uniform patches. One of the common metrics for assessing noise level is the signal-to-noise ratio (SNR). On a uniform area, this metric is the ratio between µ_Image, the average of the image values, and σ_Image, the standard deviation of the image values:

SNR = 20 × log10(µ_Image / σ_Image)

However, the SNR only reflects the total amount of noise for a given signal level; it does not describe how the human observer actually perceives the noise. To tackle this issue, the visual noise metric has been proposed. This metric intends to measure noise as perceived by end-users. For example, noise that cannot be seen by the eye at a given viewing condition will not be included in the noise measurement. The Visual Noise measurement is standardized by IEEE CPIQ P1858 (Camera Phone Image Quality) 2016 [1]; this standard is an adaptation of the ISO 15739 [2] proposal. To compute this metric, the test target used must be compliant with the ISO 14524 [3] opto-electronic conversion function (OECF) test chart. This test chart is represented in Figure 1. However, nowadays, cameras integrate one or more denoising steps in their processing pipeline. The purpose of these steps is to reduce the noise in the image and restore the original signal. But it is challenging to reduce the noise in the high-frequency components while preserving the high-frequency content (such as edges and textured areas). On the contrary, the low-frequency components have regular values, so noise is easily suppressed by averaging the pixels within a neighborhood. Thus, it is common to observe a different noise level between textured and uniform areas cf.
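As a toy illustration of this effect, applying a plain box average to a synthetic flat patch and to a synthetic finely textured patch leaves very different residual errors. The filter and numbers below are illustrative stand-ins, not the denoiser of any real camera or any part of the measurement protocol.

```python
# Illustrative sketch (synthetic data): a naive neighborhood average removes
# noise well on a flat patch, but on a finely textured patch it also destroys
# the detail, so the residual error with respect to the clean signal differs.
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(0)
noise = rng.normal(0.0, 8.0, (256, 256))

flat = np.full((256, 256), 128.0)
x = np.arange(256.0)
textured = 128.0 + 20.0 * np.tile(np.sin(x), (256, 1))   # fine vertical detail

for name, clean in (("uniform", flat), ("textured", textured)):
    denoised = uniform_filter(clean + noise, size=5)      # naive 5x5 box average
    residual = denoised - clean                           # leftover noise + lost detail
    print(f"{name:8s} patch: residual std = {residual.std():.2f}")
```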
Thus, it is common to observe a different noise level between textured and uniform areas cf. As common measurements are not suitable for assessing noise in other than uniform areas, they cannot lead to adequate noise characterization of cameras with the behavior detailed above. To tackle this issue, we define two areas of interest well-suited for noise assessment in a still life chart (cf. Figure 3). We then propose a learning-based method using these specific areas of interest. The problem of assessing the perceived level of noise in these areas of interest can be formulated as a regression problem, so in order to solve this problem we suggest using a deep convolutional network. We train the network using annotations provided by image quality expert annotators, this annotation process allow obtaining a set of scores that will match with the perceptual user experience. We show that this learning-based approach strongly correlates with the perceptual ground truth and better predicts the perceived level of noise on natural scenes than standard approaches. https://doi.org /10.2352/10. /issn.2694/10. -118X.2021 ©2021 Society for Imaging Science and Technology Related Work In this section, we will review the existing works done on quality assessment of noise. Visual Noise Signal-to-noise ratio is often used as a metric to assess noise. However, SNR only reflects the total amount of noise for a given level of signal, it does not describe how the human observer actually perceives it. The level of noise can be critical for the image quality, as it can affect multiple of its aspects, from object visibility to face detection. That is why the study of noise and in particular that of visual noise remains mandatory for the image quality assessment (IQA). Visual noise has been introduced to propose a metric that correlates more with human perception. The visual noise metric takes into account the spectral frequency content of luminance and chrominance noise by applying a contrast sensitivity function (CSF), a metric that integrated the noise power spectrum with properties of the human visual system. The computation of the visual noise described by CPIQ P1858 standard [1], based on the formulation made by ISO 15739 [2], requires the following steps: • Conversion of the source image in a color opponent space AC 1 C 2 • Filtering of the luminance and chrominance channels by respective CSFs • Filtering of the channels by the display or print MTFs • Application of a high pass filter to remove nonuniformities due to lens shading • Conversion to CIELab color space and computation of variances of luminance and chrominance channels The CSF used for the spatial filtering is defined as: where parameters are defined in Table 1. The visual noise metric is then obtained by applying the log 10 base to the weighted sum of the L * , a * , b * variances and L * a * covariance. The previous formula weights the color noise for the b * channel with a negative value, hence noise in the b * channel leads to the decrease of the visual noise metric. Besides, a negative value on [4]. Moreover, the presence of the negative weights combined with the covariance, expressed by L * a * , can lead to negative values and the inability to estimate the visual noise metric for a given image. Learning Based Methods In opposition to the visual noise metric described in the previous section, learning-based methods require annotated datasets. 
TID2008 [5] and its extension TID2013 [6]) are image quality datasets that give a Mean Opinion Score (MOS) for each distorted image. These distortions are artificially introduced and correspond mostly to compression or transmission scenarios. As these distortions are artificially introduced, they do not fully cover the ones introduced by real cameras. The LIVE in the wild [7] database contains 1162 authentically distorted images captured from many diverse mobile devices. Each image was viewed and rated online on a continuous quality scale by an average of 175 unique subjects with the goal of providing one MOS per each image, and not one score for each image quality attribute, such as the noise which is our interest study. Similarly, the KonIQ10k [8] dataset consists of samples from a larger public media database with unknown distortions. This dataset provides a ground truth for several image quality attributes, but does not consider the noise quality as one of them. More recently, Yu et al. [9] collected a dataset of 12 853 natural photos from Flickr and annotated them according to image quality defects: exposition, white balance, color saturation, noise, haze, undesired blur, composition. They aimed to solve a multi-task learning problem and trained a multi-column deep convolutional neural network to simultaneously predict the severity of all the defects. While their approach showed promising results, we are tackling a different issue, that of noise estimation in specific areas only. To the best of our knowledge, the most related work has been proposed by Tworski [10] et al.. They adopt a regression formulation and train a network to estimate the camera capacity to preserve texture using a common perceptual chart. In the next section, we will detail our deep regression framework for noise quality estimation and the method used to collect the datasets relevant to our noise assessment problem. Method In this section, we detail the proposed method for perceptual noise estimation on natural images. This task is a regression problem, in which we want to estimate for an image X of dimensions Height × Width × 3 its corresponding noise quality score Y , a scalar. To perform this, we use a learning-based method, meaning that we use the ground-truth noise quality of the given image provided by expert annotators (cf. subsection Datasets ). Inspired by previous works [11,10], we chose to rely on the very versatile ResNet-50 architecture. This network has already shown some excellent results in other related IQA tasks [11]. ResNet -short for Residual networks, is the neural network that won the imageNet [12] contest in 2015. The main addition of the ResNet architecture is to partially solve the vanishing gradient problem on extremely deep neural networks. We have images with fixed size of 1000 × 1000 × 3, ResNet50 can take an input of any dimensions but using large inputs usually leads to large memory consumption so often it is not an available option, e.g. a common input size for ResNet50 is 224 × 224 × 3. As resizing the images to a lower resolution will affect the level of noise, we decide to take fixed size image crops input. During our investigation we observed better results when training the ResNet50 with a 448 × 448 × 3 input size, so we decide to take crops of this size. We used the convolutional layers and average global pooling layer of the ResNet-50 model trained on ImageNet database and replaced the fully connected layer to fit our regression problem with a unique output. 
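A minimal PyTorch sketch of this model is given below. The single-output head, the squashing of the score to [0, 1] with a sigmoid, the Huber loss and the crop-averaged inference follow the protocol described in this section; the weight-loading API version, optimizer and other details left unspecified in the text are assumptions of the sketch.

```python
# Minimal sketch (PyTorch): ImageNet-pretrained ResNet-50 whose final fully
# connected layer is replaced by a single-output regression head.
import torch
import torch.nn as nn
from torchvision import models

class NoiseRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        # torchvision >= 0.13 API; older versions use pretrained=True instead.
        net = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
        net.fc = nn.Linear(2048, 1)     # 2048 weights + 1 bias = 2049 new parameters
        self.net = net

    def forward(self, x):               # x: (N, 3, 448, 448) chart crops
        return torch.sigmoid(self.net(x)).squeeze(1)   # scores in (0, 1)

model = NoiseRegressor()
loss_fn = nn.HuberLoss()                # robust to crops that are poor outliers

@torch.no_grad()
def predict(model, crops):              # crops: (10, 3, 448, 448) random crops
    """Average the per-crop predictions, as done at test time."""
    model.eval()
    return model(crops).mean().item()
```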
The new head is thus a layer with 2048 inputs, requiring the training of 2049 additional parameters, and a single output to which we apply the sigmoid function σ(x) = 1/(1 + exp(−x)) to obtain a continuous output ranging from 0 to 1. At each epoch a crop is randomly selected, allowing the model to learn to estimate the perceived noise on variable zones and thus making the estimation more robust to field-of-view variations. As some crops may not be relevant for the evaluation, we choose to use the Huber loss during training, as this loss is less sensitive to outliers than the squared error loss. At test time, we extract ten random crops and average their predictions to get the estimated noise score (a short sketch of this training and inference procedure is given below, after the dataset description).

Datasets Lighting conditions While having photos from different cameras is important for constructing our database, so are the lighting conditions, which heavily affect the level of noise. Our database therefore contains multiple lighting conditions for each device and chart:

• 5 Lux Tungsten
• 20 Lux Tungsten
• 100 Lux Tungsten
• 300 Lux TL84
• 1000 Lux D65

Charts and devices As there is no well-established reference dataset for our problem, we collected annotated data using two different charts.

• Still-Life: First, we use the chart in Fig. 5. This dataset is referred to as Still-Life. This chart is specifically designed by DXOMARK to evaluate multiple IQA attributes and contains diversified content such as uniform zones, fine details, portraits, vivid colors for color rendering, as well as resolution lines and a low-quality Dead Leaves version. We extract 2 areas of interest represented in Figure 3, which we denote Feather and Woman. Images are acquired using 293 different smartphones and cameras from different brands commonly available in the consumer market. Thus this database consists of 1465 crops for each area of interest. In Fig. 4, we provide an example region captured with two different cameras in different lighting conditions. The left image corresponds to a low-quality device in low-light conditions, while the other is obtained with a higher-quality one. It illustrates the nature of the distortions that appear in this dataset when using different lighting intensities.
• Dead Leaves: Second, we employ the Dead Leaves chart proposed in [13]. This chart depicts gray-scale circles with random radii and locations. This chart is compliant with ISO 14524 [3] and so allows the visual noise to be computed on it. In all our experiments, we refer to this dataset as Dead Leaves. We use the same five lighting conditions and devices as for the Still-Life chart. Consequently, this database is made up of 1465 Dead Leaves crops.

(Figure 4: (a) high-noise image, (b) low-noise image.)

Annotations In order to obtain more faithful results, we need to provide a reliable ground-truth annotation for each pair of device and lighting condition in our database. These annotations should correspond to a precise way of encapsulating the perceived visual noise quality. To obtain such quality annotations for each of our pairs, we first established the ground-truth references by asking 20 human experts to rank the images according to the level of perceived noise. We then averaged the rankings after excluding the images rated at the highest and lowest positions within the obtained stack. In order to obtain continuous scores, we performed a linear rescaling of the ranks within the interval [0, 1], where the best possible rank corresponds to a score of 1, and the worst to a score of 0. This reference set constitutes our noise quality ruler.
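A minimal sketch of the crop-based training and ten-crop inference described at the start of this section is given below, assuming a model such as the one in the previous snippet; SmoothL1Loss is used here as a stand-in for the Huber loss.

```python
import torch
import torch.nn as nn

CROP = 448

def random_crop(img):
    """Random 448x448 crop from a (3, 1000, 1000) image tensor."""
    _, h, w = img.shape
    top = torch.randint(0, h - CROP + 1, (1,)).item()
    left = torch.randint(0, w - CROP + 1, (1,)).item()
    return img[:, top:top + CROP, left:left + CROP]

criterion = nn.SmoothL1Loss()   # Huber-style loss, robust to outlier crops

def train_step(model, optimizer, image, target_score):
    """One training step on a single randomly selected crop."""
    model.train()
    crop = random_crop(image).unsqueeze(0)          # (1, 3, 448, 448)
    pred = model(crop)                              # predicted score in [0, 1]
    loss = criterion(pred, target_score.view(1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

@torch.no_grad()
def predict(model, image, n_crops=10):
    """Average the predictions over ten random crops (test-time protocol)."""
    model.eval()
    crops = torch.stack([random_crop(image) for _ in range(n_crops)])
    return model(crops).mean().item()
```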
For each image to be annotated, we ask an expert to correctly rank it by evaluating it with respect to the quality ruler (cf. Figure 6). Specific conditions were prepared to make the comparison as reliable as possible, we use a 24" full hd monitor with a pixel pitch of 0.27 millimeters, while the distance between the analyst and the screen is fixed to 40 centimeters. Note that the images used for annotation were provided with no down-sampling. However, for low resolution images, bicubic resizing is applied to match their size to the highest in the image stack. Each position among the set of references is assigned a score between 0 and 1. In the Still-Life chart, we have considered two different regions of interest to study as seen in Figure 3. In the case of the Dead Leaves charts, since the charts are unnatural images, human perceptual annotation is quite complex due to multiple reasons, but it is mainly the presence of different types of patches at different intensities that make the annotation task quite hard for the annotator. Therefore, we chose to transfer the annotations obtained on the Still-Life chart to the Dead Leaves one, rather than re-annotating the images. This assumes that our annotations are device based: the quality of a given image depends mostly on the device itself. The Still-Life chart contains diverse scenes similar to what real images would contain. Evaluating devices according to their performance on this card allows us to obtain a subjective device evaluation in a setting more similar to real-life scenarios. Metrics A straightforward way to assess our results could consist in computing the correlation between the predictions and the annotation. However, the underlying assumption that the predictions of each method correlate linearly with our annotations is not always correct and might bias our evaluation. Thus we decided to use two distinct metrics based on the correlation of the rank-order. First, the Spearman Rank-Order Correlation Coefficient (SROCC) defined as the linear correlation coefficient of the ranks of predictions and annotations. We also note the Kendall Rank-Order Correlation Coefficient (KROCC) defined by the difference between concordant and discordant pairs divided by the number of possible pairs. This second metric allows us to check the similarity of the ranking. For both metrics a value of 1 means that the observation of the predictions and annotations are identical. For all visual charts, the dataset is split into training and test sets as follows. First, among the devices we use in our experiments several are produced by the same brand. So, to avoid bias between training and test, we impose no brand-overlap between training and test sets. To do that, we create 6 distinct manufacturer families chosen at random to balance the set of images from each family of devices. Then for each family, we proceed for a training on the rest of our set excluding it, and using that said family of devices as a test set. Thus, for each performed training and test there are approximately 1221 images in the training database and 244 images in the test database. Comparison to state of the art In this section, we compare the performance of our approach to existing methods. We compare the measurements performed on the DeadLeaves chart and predictions on the Still-Life chart on the whole database (293 devices). 
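The two rank-order metrics used for this evaluation can be computed directly with SciPy; the toy arrays below are for illustration only.

```python
from scipy.stats import spearmanr, kendalltau

def rank_metrics(predictions, annotations):
    """Rank-order agreement between predicted and annotated noise scores."""
    srocc, _ = spearmanr(predictions, annotations)   # Spearman rank correlation
    krocc, _ = kendalltau(predictions, annotations)  # Kendall rank correlation
    return srocc, krocc

# Example: a perfectly preserved ordering gives SROCC = KROCC = 1.0
print(rank_metrics([0.1, 0.4, 0.7, 0.9], [0.2, 0.5, 0.6, 0.8]))
```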
We chose to benchmark our method against three different formulations of the visual noise metric:

• The formula standardized by CPIQ [1] (VN_CPIQ)
• The formula under discussion for ISO 15739, most recently proposed in [4] (VN_ISO)
• The formula used by DXOMARK [14] (VN_DXOMARK)

As the visual noise metric provides one value per patch, we consider for each formula the value interpolated at CIE L* = 50. Besides this, the visual noise takes into account the sensitivity of the human eye to different spatial frequencies under various viewing conditions. Hence the measurement always depends on the size of the image (i.e., print or on-screen) and the viewing distance. The effect of the viewing conditions is to stretch the CSF along the frequency axis. To evaluate the ability of the visual noise measure to assess the noise level in our dataset, we use two different conditions:

• Viewing Condition Print: a commonly used viewing condition of a print of 120 centimeters height viewed at 100 centimeters
• Viewing Condition Display: a viewing condition like the one used during the annotation process, involving a display viewed at 40 centimeters with a pixel pitch of 0.27 millimeters

Moreover, our method on the Still-Life chart gives predictions on two areas of interest for each image: Woman and Feather. We therefore evaluate the predictions for Woman and Feather against the ground truth of their respective areas, as well as the average of the two predictions against the average of the annotations. Quantitative results are reported in Table 2 (performance on the devices database). First, we observe that our method strongly agrees with the provided annotations, and that it also outperforms the other benchmark methods. These results must also be put into perspective, as the predictions were made on the same chart as the annotations (i.e., the Still-Life chart), while the visual noise metrics were established on the Dead Leaves chart. The results of the visual noise metrics show that the concerns raised in the Introduction are valid: measuring the noise on uniformly gray patches is not sufficient to predict the perceived level of noise of the camera on a natural image.

Conclusion In this paper, we propose an efficient learning-based method to assess the perceived level of noise of a camera. Compared to traditional methods, our approach can be used on images with natural content. The experimental results show that our predictions strongly match the user experience. These promising results show the great potential of deep learning for image quality assessment. Future work will focus on improving the proposed method and will consist in building a system able to evaluate the noise more exhaustively, namely by characterizing its chromaticity as well as its frequency.
Genome-wide identification of transcription factors and transcription-factor binding sites in oleaginous microalgae Nannochloropsis Nannochloropsis spp. are a group of oleaginous microalgae that harbor an expanded array of lipid-synthesis related genes, yet how they are transcriptionally regulated remains unknown. Here a phylogenomic approach was employed to identify and functionally annotate the transcription factors (TFs) and TF binding-sites (TFBSs) in N. oceanica IMET1. Among 36 microalgae and higher plant genomes, a two-fold reduction in the number of TF families plus a seven-fold decrease of average family size in Nannochloropsis, Rhodophyta and Chlorophyta were observed. The degree of similarity in TF-family profiles is indicative of the phylogenetic relationship among the species, suggesting co-evolution of TF-family profiles and species. Furthermore, comparative analysis of six Nannochloropsis genomes revealed 68 "most-conserved" TFBS motifs, 11 of which were predicted to be related to lipid accumulation or photosynthesis. Mapping the IMET1 TFs and TFBS motifs to the reference plant TF-"TFBS motif" relationships in TRANSFAC enabled the prediction of 78 TF-"TFBS motif" interaction pairs, which consisted of 34 TFs (with 11 TFs potentially involved in the TAG biosynthesis pathway), 30 TFBS motifs and 2,368 regulatory connections between TFs and target genes. Our results form the basis of further experiments to validate and engineer the regulatory network of Nannochloropsis spp. for enhanced biofuel production. Galdieria sulphuraria 14,15 and an Eustigmatophyceae strain (Nannochloropsis oceanica CCMP1779 16 ). On the other hand, a global in silico prediction of cis-regulatory elements (CREs) was reported in C. reinhardtii 17 . However, it remained largely unclear how the genomic profile of TFs in microalgae is linked to or differs from that of higher plants, and how such relationships are implicated in the evolution of these unicellular organisms and their multicellular higher plant siblings (all modern higher plants were derived from green algae 1 ). Furthermore, as potential model organisms of oleaginous microalgae have started to emerge only recently 16,18,19 , few attempts have been made to model the links between TFs and their cognate TFBSs (i.e., the targeted genes) on a genome-wide scale in oleaginous microalgae. Nannochloropsis spp. are a group of microalgae in the Eustigmatophyceae class, and are widely distributed in the marine environment as well as in fresh and brackish waters 18,20 . These algae are of industrial interest due to their ability to grow rapidly, synthesize large amounts of TAG and high-value polyunsaturated fatty acids (e.g. eicosapentaenoic acid), and tolerate broad environmental and culture conditions 21,22 . As a result, these organisms have attracted particular attention and have emerged as a research model for microalgal oleaginousness 16,18-20 . We have recently adopted a phylogenomic approach to unravel the genome-wide diversity and divergence of the oleaginous loci in this microalgal genus 18,20 . A comparative analysis of six genomes of oleaginous Nannochloropsis spp. that includes two N. oceanica strains (IMET1 and CCMP531) and one strain from each of four other recognized species: N. salina (CCMP537), N. gaditana (CCMP526, which was previously reported 19 ), N. oculata (CCMP525) and N. granulata (CCMP529) revealed a core genome of ca. 2,700 genes and a large pan-genome of ~38,000 genes 18 .
Moreover, the six genomes share key oleaginous traits such as the enrichment of selected lipid biosynthesis genes 18 . This genus-wide set of oleaginous genomes thus provides an opportunity to identify the diversity and evolution of TF families as well as TFBSs in Nannochloropsis. Furthermore, we have generated large-scale, highly reproducible transcript profiles from N. oceanica strain IMET1 as a function of time (i.e., over the six time points of 3, 4, 6, 12, 24, 48 h) under both N-replete (N+) and N-depleted (N-) conditions via mRNA-Seq 23 . This time-series transcriptomic dataset in Nannochloropsis thus laid a foundation for unraveling the links between TFs and TFBSs via gene coexpression analysis. Here we present a genome-wide in silico map of TFs and TFBSs and a computationally predicted, preliminary regulatory network that links TFs and target genes in Nannochloropsis. First, the TF-encoding genes in the genomes of N. oceanica IMET1, N. oceanica CCMP1779 and N. gaditana CCMP526 were identified. A two-fold reduction in the number of TF families plus a seven-fold decrease of average family size in Nannochloropsis, Rhodophyta and Chlorophyta were apparent, as compared to those of the surveyed higher plants. The degree of similarity in TF-family profiles was found to be indicative of the phylogenetic relationship among the species, suggesting that the co-evolution of species and TF profiles occurred largely at the level of TF-family. Furthermore, an improved computational pipeline based on the MERCED algorithm 17 was developed for TFBS identification via comparative analysis of six sequenced Nannochloropsis genomes. This analysis revealed 68 "most-conserved" TFBSs, 11 of which were predicted to be related to lipid accumulation or photosynthesis. Comparison of the IMET1 TFs and TFBSs to the reference plant TF-TFBS motif pairs in TRANSFAC 24 enabled us to predict 78 interaction pairs between a TF and a TFBS motif, which consisted of 34 TFs (with 11 TFs potentially involved in the TAG biosynthesis pathway), 30 TFBSs and 950 target genes. These results form the basis of further experiments to validate and engineer the regulatory network of Nannochloropsis spp. for enhanced biofuel production. Results Genome-wide identification of TFs in Nannochloropsis spp. and comparative analysis of TF-family profiles among 36 plant species. A systematic identification of TFs in Nannochloropsis spp. The presence or absence of the defining features of TFs (e.g., their DNA-binding domains, auxiliary domains and forbidden domains 25 ) was typically employed as the major criterion for the identification of TFs and, moreover, for the classification of the TFs into individual TF families. As a result, several public databases of plant TFs have been established, such as PlantTFDB (v. 2.0, http://planttfdb.cbi.pku.edu.cn) 14 and PlnTFDB (v. 3.0, http://plntfdb.bio.uni-potsdam.de/v3.0/) 15 . These databases have cataloged the predicted TFs of over 50 species from the main lineages of the plant kingdom, including green algae, moss, fern, gymnosperm and angiosperm. We first performed a genome-wide identification of TFs in N. oceanica IMET1 18 , N. oceanica CCMP1779 16 and N. gaditana CCMP526 19 (Supplemental Table S1, Supplemental Dataset S1). Each predicted TF was then assigned into a specific TF-family based on its DNA-binding domain (based on the criteria of PlantTFDB). In the three strains, 26 TF families were found collectively, among which 19 are shared by all three strains.
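As a rough illustration of this domain-based classification, the sketch below assigns a protein to a TF family from its set of detected Pfam domains using required and forbidden domains. The rules shown are invented, simplified placeholders; the actual assignment follows the PlantTFDB family assignment rules (which, among other things, also count Myb repeats).

```python
# Hypothetical, simplified family-assignment rules in the spirit of PlantTFDB:
# a family requires certain domains and must not contain "forbidden" ones.
FAMILY_RULES = {
    "MYB":         {"required": {"Myb_DNA-binding"}, "forbidden": {"SWIRM"}},
    "MYB_related": {"required": {"Myb_DNA-binding"}, "forbidden": set()},
    "bZIP":        {"required": {"bZIP_1"},          "forbidden": set()},
}

def assign_family(domains):
    """Return the first family whose rule matches the protein's domain set."""
    domains = set(domains)
    for family, rule in FAMILY_RULES.items():
        if rule["required"] <= domains and not (rule["forbidden"] & domains):
            return family
    return None

print(assign_family(["Myb_DNA-binding"]))          # -> "MYB"
print(assign_family(["bZIP_1", "Other_domain"]))   # -> "bZIP"
```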
MYB, bZIP, MYB-related and NF-YC are the four largest TF families, together accounting for 48-56% of all TFs in each of the strains. The MYB group of TFs, found in the MYB family and the MYB-related family, is the largest in each of the three strains. N. oceanica IMET1 harbored 35 TF genes from this group, including 15 R2R3-MYB genes, 8 R1R2R3-MYB genes and 12 MYB-related genes (Supplemental Table S2). Numerous MYB genes have been characterized by genetic approaches and found to be involved in the control of plant-specific processes in higher plants, including (i) primary and secondary metabolism, (ii) cell fate and identity, (iii) developmental processes and (iv) responses to biotic and abiotic stresses 26 . It was also observed that the ratio (i.e., relative abundance of a group of TFs among all TFs in the genome) of MYB genes in higher plants is remarkably higher than that in fungi or animals 7 . Intriguingly, the ratios of the MYB group of TFs in Nannochloropsis spp., 28%, 30% and 36% in IMET1, CCMP1779 and CCMP526 respectively, are even higher than those of higher plants such as Arabidopsis thaliana (12%), Glycine max (13%) and Zea mays (11%), which is perhaps indicative of its significant and broad roles in transcriptional regulation in microalgae. In higher plants, most MYB genes encode proteins of the R2R3-MYB class (e.g., Arabidopsis thaliana harbors 131 R2R3-MYB genes yet only five R1R2R3-MYB genes 7 ), while R1R2R3-MYB proteins are the norm in animals. The plant-specific R2R3 organization is usually thought to have evolved from an R1R2R3-type ancestral gene by the loss of the first repeat 27 ; however, the evolution of 3R-MYB genes from R2R3-MYB genes by the gain of the sequences encoding the R1 repeat through an ancient intragenic duplication has also been proposed 28 . Intriguingly, the proportions of R1R2R3-MYB genes are significantly higher in the three Nannochloropsis strains (e.g. 15 R2R3-MYB genes and 8 R1R2R3-MYB genes in IMET1), which appears to support the hypothesis of "loss" instead of "gain" in higher plants. Comparison of TF-family profiles among 36 plant genomes. It has been proposed that alterations in the expression of TF-encoding genes serve as one major source of the diversity and changes that underlie higher plant evolution 6,7 ; however, the potential link between genome-wide TF-family profiles and plant evolution remains elusive. To probe such a putative link, we compared the genome-wide TF-family profiles among 36 plants, including three Nannochloropsis strains, four red algae strains, nine green algae strains and 20 higher plants in PlantTFDB. These 36 plants were classified into four phylogenetic lineages: Nannochloropsis (three strains in two species), green algae (Chlorophyta; nine species), red algae (Rhodophyta; four species) and higher plants (Bryophyta, Lycopodiophyta, Dicotyledon and Monocotyledon; 20 species in total). In total, 58 TF families were present in these 36 plants (Supplemental Dataset S2). However, only 17 TF families were present in all four lineages (Figure 1). Members of such "core plant TF-families" represent over 80% of all the TFs in Nannochloropsis and Rhodophyta, over 65% in Chlorophyta and about 50% in higher plants. These "core TFs" might play key roles in the regulation of gene expression in plants, as they appeared early in plant evolution.
On the other hand, 16 TFs families (ZF-HD, NAC, GRAS, MIKC, EIL, RAV, TCP, HRT-like, LBD etc.; Figure 1) were found only in higher plants. It is possible that these ''higher-plant specific TF-families'' have emerged independently or diverged from other TF-families along the evolution of plant genomes. Many fewer TF families were found in Nannochloropsis (26), Rhodophyta (26) and Chlorophyta (35) than in higher plants (ranging from 53 to 58; 57 on average). Moreover, no microalgae-specific TF-families were found. This likely reflects the unicellular life style and the aquatic environment of microalgae as compared to the much more intricate land environment of the multicellular land plants, although one cannot rule out the possibility of high false-negative rate of TF recognization in these microalgae (e.g., the HMM models in PlantTFDB were all derived from higher plants). On the other hand, within microalgae, most of the TF-families were shared by all the three lineages, except only a few TF families (which exist in one or two microalgal lineages) such as STAT, LFY and BBR-BPC (found in neither Rhodophyta nor Chlorophyta), and AP2, ERF, DBB, B3 and Whirly (absent in Rhodophyta) ( Figure 1). These TFs that are specific to each of the microalgal lineages might contribute to differentiation of these lineages. Principal component analysis (PCA) for these plant species based on their profiles of TF-families (with the ratios of 58 TF families as variables and the 36 plants as samples; Figure 2) revealed that: (i) The TF-family profiles of the four lineages (Nannochloropsis, green algae, red algae and higher plants) are all quite distinct from each other, whereas those of the species within each of the lineage are more similar. In fact, the three microalgae lineages (Nannochloropsis, green algae and red algae) can be separated with higher plants on PC1 level (accounting for 42.5% of cumulative variance), while Nannochloropsis, green algae and red algae can be distinguished from each other on PC2 (with PC1 and PC2 together accounting for 54.5% of cumulative variance). (ii) The TF-family profiles of the three Nannochloropsis strains are most similar to those of red algae, which were known to be phylogenetically close to Eustigmatophyceae, yet least similar to those of higher plants. (iii) At PC1, the top five TFfamilies that are able to distinguish the algal species from higher plants are MYB, MYB_related, bZIP, C2H2 and C3H. At PC2, the top five drivers that separate the three microalgal lineages from one another are the TF-families of bHLH, NAC, ERF, C3H and WRKY. Therefore these TF-families appear to play particularly prominent roles within microalgae and between microalgae and higher plants, respectively. The observations above suggested a potential link between TFfamily profiles and organismal evolution. To test this hypothesis, for these 36 plant species, an organismal tree based on hierarchical clustering of their TF-family profiles was compared with a phylogenetic tree that was constructed based on the multiple-alignment of their 18S sequences ( Figure 3A, 3B; Methods) [29][30][31][32] . The two trees were correlated, as the 36 plants were divided into four main branches (Nannochloropsis, green microalgae, red microalgae and higher plants) in both trees; moreover, between the two trees, the topology of their green microalgae branches are quite similar ( Figure 3A, 3B). 
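The profile comparison described above can be reproduced in outline as follows; the 36 × 58 matrix of TF-family ratios below is randomly generated for illustration, and the use of scikit-learn/SciPy is an assumption rather than the toolchain used in the study.

```python
import numpy as np
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, dendrogram

# Hypothetical 36 species x 58 TF families: each row holds the fraction of a
# species' TFs belonging to each family (rows sum to 1).
rng = np.random.default_rng(0)
ratios = rng.dirichlet(np.ones(58), size=36)

# PCA of the TF-family profiles (in the paper, PC1/PC2 separate the lineages).
pca = PCA(n_components=2)
coords = pca.fit_transform(ratios)                 # (36, 2) PC1/PC2 coordinates
print("Explained variance ratios:", pca.explained_variance_ratio_)

# Families with the largest absolute loadings drive each component.
top_pc1 = np.argsort(np.abs(pca.components_[0]))[::-1][:5]
print("Top PC1 family indices:", top_pc1)

# Hierarchical clustering with average linkage yields the "organismal tree"
# built from TF-family profiles, to be compared with the 18S phylogeny.
tree = linkage(ratios, method="average", metric="euclidean")
dendrogram(tree, no_plot=True)   # set no_plot=False to draw the tree
```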
Therefore, TF-family profiles can potentially be indicative of organismal phylogeneny, suggesting that the co-evolution of species and their TF-profiles occurred largely at the level of TF-family. TFs involved in lipid-related pathways. Recent genome sequencing and gene annotation studies of Nannochloropsis spp. have revealed genes involved in lipid production in these algae 16,18,19 , however no attempts have been carried out to investigate regulators of lipid accumulation process. Therefore, a computational strategy was devised here to identify the TFs involved in lipid-related pathways. Firstly, for the three Nannochloropsis strains, the orthologs of TFs that were experimentally shown to be related to lipid accumulation in higher plants (including WRINKLED1 (WRI1) and GmDofc 33,34 ) were identified. The WRI1 family of TFs, which is a member of a plantspecific family of TFs (AP2/EREBP) that share either one or two copies of AP2 DNA-binding domain, serves as an important regulator of oil accumulation in maturing Arabidopsis seeds [34][35][36] . Three, two and one genes were identified as putative WRI1 orthologs in N. oceanica IMET1 and N. oceanica CCMPP1779 and N. gaditana CCMP526 respectively (via PSI-blast with the E-value cutoff at 1E-5) (Supplemental Table S3). Secondly, putative lipid-synthesis-related TFs in IMET1 were identified via co-expression analysis between TFs and 118 lipid-synthesis-related genes based on our time-series transcriptome dataset that tracked the TAG accumulation process for 48 hours upon nitrogen depletion 23 (Supplemental Dataset S3, Methods). In the end, 27 putative lipid-related TF genes were identified, which were from 11 TF families that include NF-YC, bZIP, HB-other, HSF, C3H, E2F/DP, AP2, MYB_related, CPP, MYB and LFY (Supplemental Table S4). Three of these TF families (NF-YC, C3H and E2F/DP) were found enriched in lipid-related pathways (p-value ( 0.05). Together, our analyses uncovered 30 lipid-related transcriptional factors in N. oceanica IMET1, which were found in 11 TF-families (Table 1). Among them, MYB_related (five), NF-YC (five), AP2 (four) and C3H (four) are the dominating families. The functional role of these TFs, which are presented to be involved in lipid metabolism, requires experimental characterization. Genome-wide identification and functional analysis of TFBS in N. oceanica IMET1 via phylogenetic footprinting. An improved pipeline for whole-genome prediction of TFBS motifs via comparative genomics. TFBS motifs are short genomic DNA segments that play important roles in gene regulation by modulating gene activities through their interaction with TFs. A widely used computational strategy for TFBS motifs identification is to detect over-represented and conserved patterns that might be good candidates for being TFBSs from promoter regions of co-regulated or co-expressed genes of a single genome 37,38 . Such methods, however, are not suitable for non-model organisms such as Nannochloropsis spp., due to the paucity of transcriptomic or Chip-Seq resources. Alternative methods, such as phylogenetic footprinting, consider conserved patterns in the promoter regions of orthologous genes among several orthologous species as putative TFBS motifs, as functional sequences in promoter regions usually evolve slower than non-functional sequences due to the selective pressure 9,10,39,40 . Such methods have been proved efficient in detecting TFBS motifs with biological significance in several studies 9,10,17,41-43 . 
The major advantage of phylogenetic footprinting over the co-regulated genes approach is that it's possible to identify motifs on a genome-wide level based on orthologous sequences groups of considered genomes, while the latter requires a reliable method for identifying co-regulated genes 44 . The recent availability of seven Nannochloropsis strains in five species 16,18,19 therefore provides an opportunity for TFBS-identification via phylogenetic footprinting. To identify the TFBSs in Nannochloropsis spp., we devised an improved pipeline for TFBSs identification based on phylogenomic footprinting, which represented an improvement based on the algorithms of MERCED 17 , by extending the comparative genomics method of two species into multi-species and applying parallelization programming to improve the computing performance. The genome sequences of six strains (from five Nannochloropsis species) that included N. oceanica IMET1, N. oculata CCMP525, N. gaditana CCMP526, N. granulata CCMP529, N. oceanica CCMP531 and N. salina CCMP537 were used for TFBSs identification, employing the IMET1 genome as the ''reference genome'' and the other five as ''query genomes''. This pipeline consists of five steps: (i) Orthologous gene groups among these six Nannochloropsis strains were first identified, by applying PSI-BLAST 45 (E-value cutoff at 1E-5) to all protein sequences of IMET1 and those of each ''query genome''. (ii) A substitution matrix was constructed to model the neutral evolution rate of nucleotide substitution between IMET1 and each ''query genome'' 46 . (iii) Conserved k-mers in the promoter sequences of orthologous genes were obtained between IMET1 and each ''query genome''; (iv) Conserved k-mers were clustered using hierarchical clustering with average linkage 47 . (v) The clustered TFBSs patterns were converted into a series of Position Frequency Matrix (PFM), each of which characterized a TFBS motif (See Methods for details). The predicted TFBSs are consistent with experimentally verified TFBSs in databases. Analysis of the promoter regions of the six Nannochloropsis genomes thus revealed 68 TFBS motifs (8-mer) that were shared by all the strains (which were called ''most-conserved'' TFBS motifs), whereas 382 TFBS (8-mer) motifs shared by at least five strains (Supplemental Dataset S4). To test the specificity of our predicted TFBSs, our computationally determined TFBS motifs were then compared to the experimentally verified motifs in TRANSFAC (http://www.gene-regulation.com/index2) 24 and PLACE (http://www.dna.affrc.go.jp/PLACE/) 48 using STAMP 49 . TRANSFAC provides one of the most comprehensive collections of experimentally determined TFBSs and positional weight matrices, where 2173 TFBS motifs including 199 plant TFBSs were cataloged 24 . PLACE compiled 469 experimentally verified TFBS motifs in plants, which also serves as an important reference for the studies of plant TFBSs 48 . STAMP, which evaluated motif similarities, aligns input motifs against the chosen database (or alternatively against a userprovided dataset), and returns a lists of the highest-scoring matches 49 . Among our 68 predicted ''most-conserved'' TFBSs motifs, 46 (67%) were similar to TRANSFAC motifs (STAMP E-value cutoff as 1E-5). Among these 46 TFBSs motifs, 22 were related to experimentally verified motifs of plants, while the other 24 to motifs of vertebrata, insect, fungus and nematode (Supplemental Dataset S5). 
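Going back to step (v) of the pipeline above, turning a cluster of conserved 8-mers into a Position Frequency Matrix can be sketched as below; the k-mers shown are made up for illustration, and the conservation scoring against the substitution matrices (steps ii-iii) is omitted.

```python
import numpy as np

BASES = "ACGT"

def pfm_from_kmers(kmers):
    """Build a 4 x k Position Frequency Matrix from one cluster of k-mers."""
    k = len(kmers[0])
    pfm = np.zeros((4, k), dtype=int)
    for kmer in kmers:
        for pos, base in enumerate(kmer):
            pfm[BASES.index(base), pos] += 1
    return pfm

# Hypothetical cluster of conserved 8-mers (variants of one motif).
cluster = ["GGCACGTG", "GGCACGGG", "GGCACGTG", "GACACGTG"]
pfm = pfm_from_kmers(cluster)
print(pfm)                      # rows in A, C, G, T order; columns are positions
ppm = pfm / pfm.sum(axis=0)     # position probability matrix, if needed
```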
For example, for the predicted motif Nanno_M53, the consensus sequence of its reverse complement (GGCACGKG) is similar to TRANSFAC motif bHLH66_M01054 GCACGTGB (E-value 5 1.22E-09), which is a TFBS motif found in Arabidopsis that regulates root hair elongation 50 . On the other hand, 37 (54%) of the 68 ''most-conserved'' motifs were similar to the PLACE motifs. Taking predicted motif Nanno_M27 for example, the consensus sequence of its reverse complement (CCACGTMC) is similar to PLACE motif ABRE3HVA22 (E-value of 4.3E-11), which is the cis-acting element of an abscisic acid(ABA)inducible gene 51 . Collectively, these evidences suggested that our predicted TFBS motifs appeared to be largely consistent with experimentally verified TFBS motifs in higher plants. Functional enrichment analysis of predicted TFBSs. We next probed the functions of the predicted TFBS-motifs in IMET1 via two different approaches (Methods). In the ''TFBS-enrichment analysis'', those TFBSs enriched in promoters of specific functional gene clusters in IMET1, such as genes involved in the TAG (triacylglycerols) biosynthetic pathways and the photosynthesis pathways, were identified, which thus allowed pinpointing the TFBS-motifs specifically associated with these functions. Glycerolipids and specifically triacylglycerols (TAG) are the main target products in algae-based biofuels 16 . Totally 36 genes and 44 genes were found to be involved in ''TAG assembly pathway'' and ''Fatty acid biosynthesis pathway'' respectively in N. oceanica IMET1 (Supplemental Dataset S3). Nanno_M49 and Nanno_M51 were enriched in the genes cluster of ''TAG assembly pathway''. Nanno_M55 was enriched in the genes cluster of ''Fatty acid biosynthesis''. These TFBSs motifs were thus proposed to be implicated in lipid accumulation pathways in Nannochloropsis. Photosynthesis is also an essential pathway for biomass accumulation and biofuel production in Nannochloropsis. In total, 41 nuclear genes were found related to photosynthesis in IMET1, which encode components of photosynthetic linear electron transport chain, including photosystem (PS) I reaction center and extrinsic proteins, PS II reaction center and extrinsic proteins, chlorophyll binding proteins and photosynthetic electron transfer proteins (Supplemental Dataset S6). Four TFBS motifs were enriched, including Nanno_M4, Nanno_M5, Nanno_M19 and Nanno_M35, in these 41 genes, which thus may potentially be the binding sites of TFs that are related to photosynthesis. In the Gene Ontology (GO)-enrichment analysis, the enriched GO terms and GO-slims (Generic GO slims) 52 in the target genes of each predicted TFBS motif were identified to reveal the main function of the TFBS motif. For the 68 ''most conserved'' TFBSs motifs, totally 40 GO-slims on ''biological process'' level and 36 GO-slims on ''molecu-lar function'' level were found enriched (Supplemental Dataset S7). Several GO-slims on the ''biological process'' level were enriched in multiple TFBSs motifs, including cellular amino acid metabolic process (GO:0006520), chromosome organization (GO:0051276) and sulfur compound metabolic process (GO:0006790). Two TFBS motifs (Nanno_M5, Nanno_M35) were likely involved in photosynthesis pathways, as the GO-slim of ''photosynthesis'' (GO:0015979) is significantly enriched in the target genes of these motifs. Six TFBS motifs were found involved in lipid synthesis pathways, as the GOslim of ''lipid metabolic process'' (GO:0006629) is significantly enriched in the target genes of these motifs. 
In addition, ''carbohydrate metabolic process'' (GO:0005975) was enriched in target genes of three TFBSs motifs, while ''cellular nitrogen compound metabolic process'' (GO:0034641) in target genes of five TFBSs motifs (Supplemental Dataset S7). Thus these TFBSs motifs might also contribute to lipid accumulation process in Nannochloropsis spp. A preliminary TF-TFBS interacting network of Nannochloropsis oceanica IMET1. Construction of a preliminary TF-TFBS interacting network for N. oceanica IMET1. To probe the relationship between these predicted TFs and TFBSs, the 125 TFs and the 68 ''mostconserved'' TFBSs in IMET1 were compared to the reference plant TF-TFBS pairs in TRANSFAC (Methods). The analysis yield 78 TF-TFBS interaction pairs that involved 35 TFs and 14 TFBSs motifs (Supplemental Dataset S8). Then the genes whose promoter sequences contained one or more of these 14 TFBSs motifs were identified, which were considered as the targets of the corresponding TFs in the interaction pairs. As a result, 18992 regulatory connections for 35 TFs that target 2801 genes were predicted. To reduce the false positive rate, for each regulatory connection, we computed the Pearson product-moment correlation coefficient and its statistical significance (p-value) based on the time-course transcriptomic data of IMET1 under N-depletion conditions (Methods). Only those with a significant correlation (pvalue ( 0.05) are preserved. In the end, 2,386 regulatory connections between a TF and a gene in IMET were identified, among which 1,315 are positively correlated and 1,071 negatively correlated ( Figure 4, Dataset S9). We next compared our network with those connections between the TFs and their target genes in the Arabidopsis thaliana regulatory network database AtRegNet 53 (Methods). As the result, 76 connections in the IMET1 network were supported by AtRegNet, with 11 of them supported by the ''confirmed'' connections in AtRegNet (Table 2). There are on average 68 gene targets per TF in the IMET1 network. However the number of gene targets for each TF varies widely (ranging from 1 to 250). Several TFs regulate only a small number of genes such as s043.g1656 (one target) and s355.g10347 (five targets), while others control a large number of targets in the network such as s259.g7362(250 targets), s043.g2022 (230 targets) and s247.g6812 (221 targets). For example, the 250 targets of s259.g7362 encode proteins involved in biosynthetic process, cellular nitrogen compound metabolic process, transport, small molecule metabolic process, cellular amino acid metabolic process, etc., suggesting that this TF might regulate a wide range of functions in Nannochloropsis. Despite the lack of experimental evidence, it is possible that such TFs might be the ''master regulators'' in the IMET1 regulatory network. The IMET1 regulatory network revealed 11 TFs that are potentially involved in the transcriptional regulation of TAG biosynthesis pathways ( Figure 5). Among them, TFs of the bZIP family were dominant (five such genes). The binding site of s259.g7362 [bZIP] was present in the promoter regions of four genes including the Acyl-CoA-binding proteins(ACBP) and 3-Ketoacyl-ACP synthase (KAS) in fatty acid biosynthesis pathway, the Long chain acyl-CoA synthetases (LCFACS) converting free fatty acid to acyl-CoA, and the Lysophospholipid acyltransferase (LPAT) transferring acyl-CoA to Lysophosphatidic acid to form Phosphatidic acid. 
Moreover, all these four genes were potentially up-regulated by s259.g7362 [bZIP], in that the Pearson product-moment correlation coefficient between each of the genes and this TF is positive based on the time-course transcriptomic data. These observations suggested a potential coregulation mechanism where this TF simultaneously controls the transcript levels of multiple enzymes to produce TAG under nitrogen-depletion conditions ( Figure 5). Such co-regulation by a single TF was also found in additional sets of genes that are directly involved in TAG-synthesis. Examples included the co-regulation of Lysophospholipid acyltransferase (LPAT) and Enoyl-ACP reductase (ENR) by s009.g891 [ERF], and that of Phosphatidic acid phosphatase (PAP), Long chain acyl-CoA synthetases (LCFACS) and Type I 3-Ketoacyl-ACP synthase (FAS-1) by s295.g8604[AP2] ( Figure 5). On the other hand, a number of genes might be each regulated by multiple TFs, such as LPAT (connected to three TFs) and the DGAT-2A (controlled by four TFs). To allow readily access by the research community, the predicted TFs and TFBSs motifs in Nannochloropsis spp. and the preliminary regulatory network of N. oceanica IMET1 are displayed in a public website (http://www.singlecellcenter.org/en/NannoRegulationDatabase/ home.htm). Discussion Genome-wide identification of TFs and TFBSs is one first step for dissecting gene regulation networks in oleaginous microalgae, and serves as the foundation of directed genetic engineering to enhance the lipid-synthesis process. Here employing Nannchloropsis oceanica IMET1 as a model, we present one of the first genome-wide TFs and TFBSs maps in oleaginous microalgae. Furthermore, a preliminary global regulation network that links TFs to their target genes was constructed. Nannochloropsis spp. are promising feedstock for biofuel production. Genome sequencing and gene annotation studies in Nannochloropsis have revealed genes involved in lipid production in this microalgae 16,18,19 . However, it's crucial to identify TFs involved in lipid related pathways in this species, which serve as the master controls of gene regulation. From the 125 TFs identified in N. oceanica IMET1, we predicted 30 TFs which might be related to gene regulation of lipid synthesis processes, three of which are WRI1 orthologs, while others are detected based on gene expression correlation analysis of mRNA-Seq data. These TFs, albeit the presence of false positives, can serve as the primary focuses for experimental tests. The TF-family profiles of Nannochloropsis spp. revealed significant divergence of TF-family profiles among Nannochloropsis, Chlorophyta, Rhodophyta and higher plants. Within microalgae, the TF-family profiles of Nannochloropsis strains are most similar to those of red algae, and relatively distinct from those of green algae, which is consistent with their organismal phylogeny. TF-family profiles of green algae are more similar to those of higher plants, which is also consistent with organismal evolution in plants 1 . Specifically, several TF families such as SBP and WRKY, which are usually large in size and vital for the gene regulation in higher plants, exist in green algae while are absent in Nannochloropsis spp. and red algae. SBPs form a major family of plant-specific TFs related to flower development 54 , thus their specific emergence in green algae might underlie the development of the distinct reproduction modes in terrestrial plants. 
The WRKY family of TFs is one of the largest TF-families in plants and they are integral parts of signalling webs that modulate many plant processes 55 , thus its emergence in green algae might contribute to the formation of cell signaling in modern land plants. Moreover, a significant expansion of TF families from microalgae to terrestrial plants was observed, as evidenced by the 16 higher-plant specific TF families that were absent in the microalgae lineages. These higher-plant specific families are mainly involved in advanced regulation processes for multicellular organisms, such as mediation of auxin signaling by NAC to promote lateral root development in A. thaliana 56 and modulation of phyA-signaling homeostasis by FAR1 in higher plants 57 . Confirmation of the expansion and elucidation of its mechanism and functional implication will be facilitated by analysis of genomes of additional plants and algae and experimental characterization of TFs in model plant and algal species. Previous studies reveal that alterations in the expression of TF-encoding genes serve as a major source of the diversity and changes that underlie evolution 6 . Here, the correlation between TF-family profiles and phylogenetic relationship in plants, as well as the distinct TF-family profiles between microalgae and land plants, suggested that the TFfamily profiles also play a potential role in organismal evolution. In this study, identification of TFBS motifs was carried out via comparative genomics of the six Nannochloropsis strains. Here we built connections of predicted TFs and TFBS motifs by comparison with reference plant TF-''TFBS motif'' pairs in TRANSFAC, using correlation in gene expression to filter potential false positives. This network likely only represents a certain portion of the regulatory connections in this species, due to the limitated number of experimental-confirmed TF-''TFBS motif'' pairs in TRANSFAC. Regarding lipid accumulation processes in IMET1, 11 TFs were predicted to be involved in the transcriptional regulation of TAG biosynthesis pathway, and several genes in the pathway appeared to be regulated by multiple TFs. Efforts are currently ongoing to experimentally verify these predicted regulatory links. In summary, this preliminary global regulatory network in an oleaginous microalga should help to prioritize the interactions between TF and their target genes for in-depth interrogation of regulatory links of interest. Moreover, these in silico efforts can guide experimental approaches such as Chip-Seq 13 that promise to unveil the intricate regulatory interactions that underpin the robust TAG biosynthesis in this and related microalgae. TFs predication was carried out in IMET1, CCMP526 and CCMP1779, mainly because the genome assemblies and gene annotations for these three strains are of higher quality than the other strains. To improve the accuracy of TFs prediction, genomes of all available strains except CCMP1779 were employed for TFBSs identification. The ''promoter sequence'' of a gene was defined as the upstream 1 kb sequence relative to the translation start site of the gene. ''Translation start sites'' instead of ''transcription start sites'' was adopted, because ''translation start sites'' could be obtained more reliably than ''transcription start sites'' especially in the draft genomes. Promoter sequences of all these Nannochloropsis strains can be accessed via our website (http://www.singlecellcenter.org/en/NannoRegulationDatabase/home.htm). 
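The promoter definition just given (the 1 kb of sequence upstream of the translation start site) can be implemented roughly as follows; the coordinates, strand convention and example sequence are hypothetical simplifications, not the exact procedure used to build the published promoter set.

```python
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def promoter(contig_seq, cds_start, cds_end, strand, length=1000):
    """Return the `length`-bp sequence upstream of the translation start.

    Coordinates are 0-based; `cds_start`/`cds_end` delimit the coding region
    on the forward strand, and `strand` is "+" or "-".
    """
    if strand == "+":
        start = max(0, cds_start - length)
        return contig_seq[start:cds_start]
    # For "-" genes the translation start sits at cds_end; take the downstream
    # forward-strand sequence and reverse-complement it.
    end = min(len(contig_seq), cds_end + length)
    return contig_seq[cds_end:end].translate(COMPLEMENT)[::-1]

# Hypothetical example: a gene starting at position 1500 on the "+" strand.
seq = "A" * 1500 + "ATGGCC" + "G" * 200
print(len(promoter(seq, 1500, 1506, "+")))   # -> 1000
```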
Prediction of TFs based on the method in PlantTFDB. To identify the TFs in N. oceanica IMET1, N. oceanica CCMP1779 and N. gaditana CCMP526, HMMER 3.0 58 was employed to search for the characteristic domains of plant TFs in the proteins of each strain, following the methods of PlantTFDB 14 . Details are as follows: (i) a total of 64 HMM models were used to identify the TF-related domains, of which 53 models were collected from Pfam 24.0 59 and 11 models were built by the PlantTFDB authors; (ii) hmmsearch in the HMMER 3.0 package was employed to search for the TF-related domains in the proteins of each strain, with an e-value of 0.01 and the "domain-specific bit-score" from PlantTFDB as the threshold for domain identification; (iii) each TF candidate was assigned into a specific TF-family based on the "family assignment rules" described in PlantTFDB. Details about the "64 HMM models", "domain-specific bit-score" and "family assignment rules" used in TF identification can be found in 14 . Phylogenetic tree for 36 plant species. The 18S sequences of the 36 plant species were downloaded from the Silva database (http://www.arb-silva.de/; released on August 23, 2013) 60 . The phylogenetic tree was constructed as follows: (i) multiple sequence alignment was carried out by MUSCLE (-maxiters 100) 29 based on all 18S sequences; (ii) based on the alignment results, the NJ tree was constructed by MEGA5.2 31 with bootstrap test (100 replicates), setting Homo sapiens as the outgroup. Identification of lipid-related TFs via co-expression analysis based on the time-series transcriptome dataset. Our IMET1 genome annotation revealed 118 genes related to lipid-synthesis pathways (such as the TAG assembly pathway, the fatty acid desaturase pathway and the fatty acid biosynthesis pathway). The correlation coefficient between each of the 125 TF genes and each of the 118 lipid-synthesis-related genes was calculated based on the temporal dynamics of transcripts in the triplicate cultures over the six time-points (3, 4, 6, 12, 24, and 48 h) under the nitrogen-depletion condition 23 . A correlation was considered significant if the absolute value of the coefficient was over 0.8 and the p-value was not higher than 0.05 (Method below). A TF gene was considered lipid-synthesis related if its transcript level during the time-course was significantly correlated with those of at least 30% of the 118 lipid-synthesis related genes in IMET1. The Pearson correlation coefficient (r) and p-value were used in assessing statistical significance in the prediction of lipid-synthesis-related TFs and the filtration of predicted regulatory connections based on the time-series transcriptome dataset of N. oceanica IMET1. The Pearson correlation coefficient (r) of two genes (their transcript levels designated as x_i and y_i at time point i over the six time points sampled) was calculated as

r = \frac{\sum_{i=1}^{6} (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{6} (x_i - \bar{x})^2} \, \sqrt{\sum_{i=1}^{6} (y_i - \bar{y})^2}}

The Pearson correlation p-value of two genes was also calculated based on the Pearson correlation coefficient and a Student t distribution with four (4 = 6 − 2) degrees of freedom. The cor() and cor.test() functions in the R "stats" package were employed for the calculations above 61 . TFBS prediction based on comparative genomics of six Nannochloropsis strains. To identify the TFBSs in Nannochloropsis, we devised an improved pipeline based on phylogenetic footprinting.
This pipeline utilized the MERCED 17 algorithm as the computational core, and improved it by extending the comparative genomics method of two species into multi-species, and by applying parallelization programming to improve computing performance. The pipeline has four main steps: (i) IMET1 was chosen as ''reference genome'', while genomes of other five strains were considered as ''query genomes''. PSI-BLAST was performed (E-value cutoff of 1E-5) between proteins of IMET1 and those of each query genome to get the ''reciprocal best hit pairs''. Orthologous gene groups of all strains were defined as the intersection of all these five ''strain-pair'' orthologous gene (protein) sets. (ii) Substitution matrix was constructed between IMET1 genome and each query genome to describe the neutral evolution rate of nucleotides, based on the four-fold degenerate sites in orthologous proteins for strain pairs. MUSCLE (version 3.8) 29 was employed to align each of the orthologous proteins and obtain all the four-fold degenerate sites with the same amino acid in the alignment. Additional details on the substitution matrix calculation method were as previously published 46 . (iii) To identify the conserved k-mers (with the degree of conservation evaluated by statistical significance calculated from the corresponding nucleotide substitution matrices) in Nannochloropsis spp., we first identified conserved k-mers between IMET1 and one ''query genome'', by adopting the method used in MERCED 17 . Then the ''conserved k-mers'' shared by more Nannochloropsis strains than our pre-set cutoff were obtained. For example, the ''most-conserved TFBS'' were defined as the k-mers shared by all six strains. Details about ''conserved k-mers'' identification in the promoters of the strain pairs were described in MERCED 17 . The k-mers' length was chosen as 8 bp in our study, because the most dominant length of motifs in the TRANSFAC database is eight and several previous studies have successfully identified meaningful motifs in plants and other species using 8-mers 17,62,63 . (iv) Hierarchical clustering algorithm 47 was applied to cluster the ''conserved k-mers'' to obtain TFBS motifs, which can be degenerated in some sites, employing a position weight matrix to represent the TFBS motif for each cluster 64 . TFBS enrichment analysis for genes clusters of specific functions. TFBS enrichment analysis was performed on gene clusters related to ''photosynthesis'', ''TAG assembly pathway'' and ''Fatty acid biosynthesis pathway'' in N. oceanica IMET1. Genes related to photosynthesis were extracted from the genome based on blast-NR annotation results (E-value cutoff of 1E-05). Genes related to ''TAG assembly pathway'' and ''Fatty acid biosynthesis pathway'' were obtained from lipid pathway reconstruction of N. oceanica IMET1 based on lipid pathway in Chlamydomonas reinhardtii and Saccharomyces cerevisiae 23 . The statistical significance of each TFBS motif in the functional gene cluster was calculated as follow. For example, for ''TFBS Nanno_M0'' (Dataset S4) in ''photosynthesis genes cluster'', let ''N'' be the total number of genes in N. oceanica IMET1, ''n'' be the number of genes in ''photosynthesis genes cluster'', ''M'' be the total number of target genes for Nanno_M0, and ''m'' be the number of target genes for Nanno_M0 in ''photosynthesis genes cluster''. 
Then the p-value of Nanno_M0 in the "photosynthesis genes cluster" can be estimated based on the hypergeometric test:

p = \sum_{i=m}^{\min(n, M)} \frac{C(M, i)\, C(N-M,\, n-i)}{C(N, n)}

in which C(x, y) is the combinatorial number of ways of choosing y items out of x items. All the TFBSs with sufficient statistical significance (p-value ≤ 0.05) and a ratio (i.e., relative abundance of the targets in the total number of genes of the cluster) over 0.1 were selected as the enriched TFBSs. Gene ontology (GO) enrichment analysis for each predicted TFBS motif. Gene ontology (GO) enrichment analysis was carried out for each TFBS motif to investigate its possible functions. GO annotation for all the genes of IMET1 was carried out by InterProScan5 65 . GO terms were mapped to the GO slim (Generic GO slim) hierarchy proposed by the GO consortium based on the goslim_generic.obo (version 1.2) file (http://www.geneontology.org/GO.slims.shtml). The statistical significance of each GO-slim term in the target genes of one given TFBS motif was calculated as follows. For example, for the GO-slim term "lipid metabolic process (GO:0006629)" and motif "Nanno_M0", let "N" be the total number of genes with GO annotation, "n" be the number of genes among the targets of "Nanno_M0" with GO annotation, "M" be the total number of genes belonging to the GO-slim term "lipid metabolic process (GO:0006629)", and "m" be the number of genes among the targets of "Nanno_M0" belonging to the GO-slim term "lipid metabolic process (GO:0006629)". Then, the p-value of "lipid metabolic process (GO:0006629)" in the target genes of "Nanno_M0" can be estimated based on the same hypergeometric test:

p = \sum_{i=m}^{\min(n, M)} \frac{C(M, i)\, C(N-M,\, n-i)}{C(N, n)}

in which C(x, y) is again the number of ways of choosing y items out of x items. All GO-slim terms with sufficient statistical significance (p-value ≤ 0.05) and a ratio (i.e., relative abundance of the targets in the total number of genes of the cluster) over 0.1 were selected as the enriched terms. Identification of TF-TFBS interaction pairs for constructing an initial regulation network in N. oceanica IMET1. To establish the genome-wide regulatory connections between our predicted TFs and target genes, we developed a method via comparison with the TRANSFAC database 24 . First, our computationally determined TFBS motifs were mapped to the 199 experimentally verified plant TFBS motifs in TRANSFAC using STAMP (E-value cutoff 1E-5) 49 . For each of the TFBS motifs, the best-hit plant motif in TRANSFAC was defined as its "similar motif". Next, the TF proteins binding to this "similar motif" were extracted from the TRANSFAC database, and the orthologous proteins of these TF proteins were obtained from the collection of our predicted TF proteins (using PSI-BLAST; E-value cutoff at 1E-5) 45 . Based on this procedure, all possible interaction pairs between our predicted TFs and TFBS motifs could be obtained. The target genes of a predicted TF were thus defined as all those genes whose promoters contained one or more target TFBSs. Method to compare our network to the Arabidopsis thaliana regulatory network database AtRegNet. AtRegNet contains information on physical direct regulatory interactions between 8131 target genes, 64 TFs and three TF complexes, connected by 11355 edges, among which 769 interactions were classified as "confirmed connections" (http://arabidopsis.med.ohio-state.edu/moreNetwork.html). A bidirectional BLAST search was performed for the TFs and the target genes between the two networks, respectively (E-value cutoff at 1E-05).
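Both enrichment analyses above rely on the same hypergeometric tail probability, which corresponds to the survival function of SciPy's hypergeometric distribution; the counts in the example below are placeholders, not values from the study.

```python
from scipy.stats import hypergeom

def enrichment_pvalue(N, n, M, m):
    """P(X >= m) for the overlap between a gene cluster and a motif's targets.

    N: total genes, n: genes in the cluster, M: target genes of the motif,
    m: target genes of the motif that fall inside the cluster.
    """
    # hypergeom.sf(m - 1, N, M, n) sums the upper tail of the distribution,
    # i.e. the probability of seeing an overlap of m or more by chance.
    return hypergeom.sf(m - 1, N, M, n)

# Hypothetical counts: 10,000 genes in total, 41 photosynthesis genes,
# 300 motif targets, 8 of which fall in the photosynthesis cluster.
print(enrichment_pvalue(10000, 41, 300, 8))
```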
A connection in the IMET1 network was considered ''supported'' by the corresponding one in AtRegNet if both the two criteria were met: (i) the TF in an IMET1 connection was the ortholog of the TF in the corresponding connection in AtRegNet; (ii) the target gene in the IMET1 connection was also the ortholog of the target gene in the corresponding connection in AtRegNet.
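Once the ortholog mappings from the bidirectional BLAST are in hand, the support check described above reduces to simple lookups, as in the sketch below; the target gene identifier and the ortholog/edge tables are hypothetical.

```python
# Hypothetical ortholog maps from IMET1 genes to A. thaliana genes
# (e.g. derived from the bidirectional BLAST described above).
tf_orthologs = {"s259.g7362": "AT1G01010"}
target_orthologs = {"s001.g123": "AT3G20770"}

# AtRegNet edges as (TF, target) pairs of A. thaliana identifiers.
atregnet_edges = {("AT1G01010", "AT3G20770")}

def is_supported(tf, target):
    """True if the IMET1 TF->target edge maps onto an AtRegNet edge."""
    at_tf = tf_orthologs.get(tf)
    at_target = target_orthologs.get(target)
    return (at_tf, at_target) in atregnet_edges

print(is_supported("s259.g7362", "s001.g123"))   # -> True
```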
Conflict of Interest in Institutional Arrangement and Apparatus Placement Resource of Regional Expansion in Indonesia This study aims to comprehensively describe, analyze and assess the implementation of institutional arrangement and the placement of local apparatus resources in the expansion area, the conflicts of interest that arise in arranging regional institutions and placing local apparatus resources, and the settlement of those conflicts of interest in the expansion area of Pangandaran District. The study applies a case-study approach as a qualitative method, interviewing eight participants (see Appendix 1) from various occupations who were involved in the regional expansion. The gap addressed by this study is the existence of conflicts of interest in the institutional arrangement and the placement of apparatus resources in the newly expanded region. The study found a set of interrelated conflicts that hampered development in the expansion region. The resolutions were as follows: (a) establishment of a regular joint forum between the regent, the council presidium and community leaders to discuss regional plans and programs; (b) the regent accommodates the council presidium's proposals and places appropriate and qualified officials; (c) the region improves its performance and increases cooperation among local governments in order to avoid misunderstanding. Theoretically, this study is expected to strengthen conflict theory as applied to newly expanded districts. Practically, the government is expected to arrange and manage the institutions and apparatus resources of expansion regions by issuing the laws and policies required to do so. Introduction Basically, the policy of regional expansion was a new concept in Indonesian regional autonomy policy intended to deliver better services and improve the welfare of society. Demand for regional expansion has been high, both in Java and outside Java, driven by several reasons, among others: first, the motive of effectiveness and efficiency of governmental administration, given regions that are very wide and evenly populated but left behind in development; second, trends of homogeneity (ethnicity, language, religion, urban-rural character, income level, etc.); third, the fiscal incentives guaranteed by law (the general allocation fund (DAU), revenue sharing from natural resources, and the availability of regional own-source revenue (PAD)); and lastly, rent-seeking motives among the elites (Fitriani et al., 2005). As a consequence of this policy, the growth in the number of regencies/cities and provinces in Indonesia can be seen in the following table. This growth was driven not only by these interests, but also by the central government's inconsistency in applying the rules on expansion. Furthermore, Eko Prasojo noted that in discussions of regional expansion and the formation of new regions, subjective and primordial elements tend to be dominant, and political parties as well as local and national elites tend to exploit these conditions for the sake of gaining more votes in elections. The issues of regional expansion that have taken place in Indonesia generally show the following problems: 1.
1. The quality of regional apparatus resources fell far short of expectations in terms of education, experience, performance standards and understanding of regulations, and the placement of managers and employees did not match their competences. 2. The capacity of expanded regions to formulate a vision and mission and the main tasks of their institutions, and to run those institutions, was very limited. 3. Regional readiness to implement autonomy showed a tendency toward poor performance in most expanded regions, together with the emergence of corruption (The Ministry of Internal Affairs, 2017). Apart from these factors, a further problem of regional expansion was the emergence of conflicts between elite and public interests during the process of becoming a new autonomous region. The facts show that regional expansion has taken place in most Indonesian provinces and has rarely been free of conflict, as the following table illustrates (2017). The data show that the expansion process in several provinces and regencies/cities was accompanied by conflict of various kinds: conflict over asset seizure involving the parent regency (Nunukan Regency), population-migration conflict (Sengkawang City), conflict over the regional budget (APBD) (Lampung Province), conflict between supporters and opponents of expansion (Riau Islands, West Papua, Batu regency of North Sumatra, Tapanuli Province), ethnic, religious and racial conflict (Polewali Mamasa Regency), conflict over the seizure of mining wealth (Sumbawa Regency), and conflict over the seizure of tourism locations (Batu City). These conflicts show that disputes arose between supporters and opponents of expansion, and between new regions and their parent regencies, over territory, natural resources, tourism locations, population and migration, and race or religion (The Ministry of Internal Affairs, 2017). In terms of institutional management and the placement of apparatus resources, regional expansion referred to Law No. 32/2004 on Regional Government (later amended by Law No. 23/2014), Government Regulation (PP) No. 78 of 2007 on procedures for the formation, abolition and merger of regions, and Law No. 43 of 1999 on the civil service (later amended by Law No. 5/2014 on the State Civil Apparatus). These rules were the reference for implementing regional expansion and placing regional apparatus, emphasizing competence, qualifications, performance, transparency, objectivity, and freedom from political intervention and from corruption, collusion and nepotism (KKN). In practice, however, this model still triggered conflicts of interest. The facts show that regional expansion policy took place in almost all Indonesian provinces and was not free of conflicts of interest in institutional management and the placement of regional apparatus resources, as the following table shows. The table indicates that conflict occurred in several regencies/cities during the implementation of regional expansion.
For instance: the determination and placement of SKPD (regional work unit) positions (Bintuni Bay); interests surrounding officials' transfers and non-compliance with regulations in determining positions (Lubuk Linggau City); parent-regency apparatus unwilling to be moved to the expanded region (Tasikmalaya City); and issues of education, funding, family matters, conflicts over participants in education and training, involvement in the initial expansion, and positional interests (Sungai Penuh City, Banjar Regency, Batu Regency). In addition, conflicts of interest over institutional management, the placement and determination of the Regent, the Regional Secretary (SEKDA), SKPD determination, and the placement of regional apparatus arose within the expansion presidium itself (Pangandaran Regency). In this context, these issues show that institutional management and the placement of regional apparatus resources, even when they refer to existing regulations, have produced conflicts of interest in expanded regions, as well as revealing the regions' lack of readiness to manage an autonomous region. Conflict of interest in institutional management and in the placement of regional apparatus resources is therefore the crucial factor examined in this research. The discussion is expected to help resolve these problems and offer solutions relevant to public administration themes such as regional expansion policy, regional autonomy, bureaucracy, governmental performance, institutions, leadership, apparatus resources, conflict resolution, and the improvement of regional public services. This research differs from earlier work by scholars and academics in that it focuses specifically on conflicts of interest in institutional management and the placement of regional apparatus resources in an expanded region. The existing dynamics show that expansion is laden with the interests of officials, political parties, public figures and national actors. The objective of expansion was to bring government closer to the people and deliver optimal public services; however, the process of becoming an autonomous region has been inseparable from conflicts of interest. Hence, regional expansions that result in conflict should be brought back into line with the applicable regulations. This research focuses on institutional management and the placement of regional apparatus, and on conflicts of interest in the formulation of institutional main tasks, structure and organization, the determination of staffing for regional government work units (SKPD), the determination of the positions of Regent, Sekda and SKPD heads, the determination of SKPD structural positions, and the placement of regional apparatus employees. Conceptual Study The conceptual study begins with the concept of decentralization, a political phenomenon that involves both administration and government. Decentralization is a delegation of authority to a lower level, whether to one of the governmental hierarchies within the state or to comparable offices in a large organization. In general, decentralization is divided into two types: territorial decentralization and functional decentralization. Functional decentralization means the transfer of authority to a functional (technical) organization that deals directly with the public.
Rondinelli and Cheema (1983) state that decentralization is the delegation of specific functional responsibilities to organizations outside the governmental bureaucracy that are not directly controlled by central government. The problem of the power relationship between central and regional government in a unitary state is that power is consolidated at the national level, so the power of regional government depends heavily on the national government's willingness to decentralize. According to Daniel J. Elazar (1995), decentralization is not an independent system but part of a larger unified system. Miftah (2015) argues that decentralization serves the national interest by building a spirit of nationalism that is not narrowed by regional sectarianism. These views confirm that decentralization is a general regional demand; here it takes the form of expansion, an effort to grant authority to a region so that it becomes an autonomous region. Concept of Regional Expansion Regional expansion, according to Gabrielle Ferrazzi (2007), can be viewed as part of regional management, territorial reform or administrative reform, that is, the management of the size and hierarchy of regional government in order to achieve both political and administrative objectives. Regional management generally covers regional expansion, merger and abolition. Ferrazzi also stresses that the main strategy of optimal regional expansion is not merely to determine the ideal number of autonomous regions in a state, but to answer the question of what the function of regional autonomy actually is, and from there, what the objectives of regional expansion are, specifically in the context of territorial reform (Gabriele Ferrazzi, 2007). According to Kosworo (2001), regional expansion is an implementation of the principle of decentralization, specifically territorial decentralization: authority given by the government to a public body, akin to self-government, to manage the full range of interrelated interests of population groups in a specific region. The 1945 Constitution does not regulate regional formation or regional expansion specifically, but Article 18B paragraph (1) states that "the State recognizes and respects units of regional government that are special or distinct in nature, as regulated by law", and paragraph (2) of the same article states that "the State recognizes and respects customary law community units and their traditional rights, as long as these remain alive and in accordance with the development of society and the principles of the Unitary State of the Republic of Indonesia, as regulated by law". More specifically, Law Number 32 of 2004 (amended by Law 23/2014) regulates regional formation in Chapter II and Special Territories; by analogy, regional expansion falls within the scope of regional formation. Law Number 32 of 2004 stipulates that the formation of a region must be established by a specific law. This provision appears in Article 4 paragraph (1).
Paragraph (2) of the same article then provides that "the law on regional formation referred to in paragraph (1) covers, among other things, the name, territorial coverage, boundaries, capital, authority to administer governmental affairs, appointment of the acting regional head, membership of the Regional House of Representatives (DPRD), transfer of personnel, funding, equipment and documents, as well as regional apparatus". The legal basis of regional expansion appears in paragraph (3) of the same article, which states that "regional formation may take the form of the merger of several regions or parts of adjacent regions, or the expansion of one region into two or more regions", and in paragraph (4), which states that "the expansion of a region into two (2) or more regions, as referred to in paragraph (3), may be carried out only after a minimum period of governmental operation has been reached". Regional formation may only be carried out, however, if the administrative, technical and physical requirements have been fulfilled. For provinces, the administrative requirements comprise the approval of the DPRDs and regents/mayors of the regencies/cities that will fall within the new province, the approval of the DPRD and Governor of the parent province, and a recommendation from the Ministry of Internal Affairs. For regencies/cities, the administrative requirements comprise the approval of the regency/city DPRD and the regent/mayor concerned, the approval of the provincial DPRD and the governor, and a recommendation from the Ministry of Internal Affairs. The technical requirements for forming a new region cover the factors that underpin regional formation, namely: a) economic capability, b) regional potential, c) socio-cultural conditions, d) socio-political conditions, e) population, f) regional area, g) defence, h) security, and i) other factors that enable the implementation of regional autonomy. In addition, PP Number 78 of 2007 on the management of regional formation and merger defines regional expansion as a province or regency/city splitting into two or more regions. The expansion of a regency/city into several new regencies/cities is essentially an effort to improve the quality and intensity of services to the public. Expansion is also, in principle, an effort to improve public welfare by improving and accelerating services and by strengthening democratic political life, the regional economy, the management of regional potential, security and order, and harmonious relations between the centre and the regions. The context of regional expansion is synergized with decentralization and regional autonomy, which were demands of the reform era and of the regions themselves. Its implementation, however, has not been proportional to these expectations, which were the objectives of the expansion policy. According to Tryatmoko (2010), the problem of local governability is marked not only by the weakness of the results of regional expansion but also by the weakness of the public's capacity to support local political and economic development. Tryatmoko further highlights the effectiveness of government policy in governing, including public involvement in decision-making and control over the running of government. Regional expansion therefore requires careful attention and continuous, step-by-step monitoring and evaluation of its implementation.
Concepts of Institutional and Apparatus Management One important factor in the failure of expanded regions to become successful new autonomous regions is institutional management in carrying out service tasks and regional development. A concept that can be used to address the problems of expansion is institutional capacity. Institutional capacity is a strategic approach in development planning for realizing good governance, based on: (1) the capacity to implement policies and governmental functions, (2) accountability and transparency in decision-making, (3) participation in democratic processes, (4) concern for poverty and the equalization of welfare, and (5) commitment to market-oriented economic policy. The readiness of an expanded region to become an autonomous region can be assessed with reference to Keban (2000), who states that efforts to strengthen the capability of regencies and cities, both as institutions and as individual apparatus, cover: (1) the preparation of strategic plans and policy formulation; (2) organizational design; (3) the management approach; (4) morale and work ethic; and (5) accountability. Individual development, in turn, covers: (1) the ability to perform work that matches the requirements of the job; (2) the ability to face the future; (3) the fostering of work motives suited to the job's requirements; and (4) the development of personality at work. This concept is in line with Grindle (2007), who states that institutional capacity is an effort aimed at developing a variety of strategies to improve the efficiency, effectiveness and responsiveness of governmental performance: efficiency in the time and resources needed to achieve an objective; effectiveness as the feasibility of the efforts undertaken for the desired results; and responsiveness as the synchronization of needs and capacities in pursuing objectives. Grindle (1997) further states that institutional capacity has dimensions, foci and activity types, namely: (1) a human resources development dimension, focused on professional personnel and technical ability, with activities such as training, direct practice, working conditions and recruitment; (2) an organizational strengthening dimension, focused on management to improve the performance of roles and functions, with activity types such as incentive systems and personnel facilities; and (3) an institutional reform dimension, focused on institutions, systems and macro structures, with activity types such as political-economic rules, changes to policy and regulation, and constitutional reform. Concepts of Conflict of Interest According to Webster (1966), the term "conflict" in its original sense meant a "fight, war or struggle", that is, a physical confrontation between several parties. The meaning of the word has since developed to include "sharp disagreement or opposition over various interests, ideas and so forth", and the term now also covers the psychological dimension that lies behind physical confrontation, as well as the confrontation itself. In short, the term "conflict" has become so broad that it risks losing its status as a single concept.
In the context of interests, Surbakti (1992) defines an interest group as a number of people who share characteristics, attitudes, beliefs or objectives and who agree to organize themselves in order to achieve those objectives. Farazmand and Almond (2001), meanwhile, describe interest groups as organizations that attempt to influence governmental policy without, at the same time, seeking to hold public office. Conflicts can be classified by the number of people and groups who interact and then fall into dispute, something observable both in our immediate surroundings and in the wider environment. We also need to consider whether any given conflict is something positive or a disruption. Two aspects can be weighed here. First, a conflict is an indication that something is wrong, a problem that needs to be resolved. Second, a conflict can produce widening, destructive consequences. In general, conflict can occur anywhere there is interaction between people, whether individual to individual or group to group, in the course of doing something. As conflicts develop, those related to strategic positions often reveal an underlying interest, which may operate in the short, medium or long term. In any association there are two main groups: those who hold a dominant position of authority and those who must obey the holders of that authority. If members of a quasi-group develop a shared class awareness of a common interest, organizing activity in pursuit of that interest gives rise to an interest group. Although the members of a conflict-oriented interest group are drawn from the same quasi-group, not everyone in that quasi-group needs to join the interest group in order for it to pursue its class interest. An interest is an individual desire or aspiration consciously focused on something, grounded in various social, economic and cultural backgrounds, for instance an interest in animal protection, in child protection, or in the creation of health justice. The various interests that emerge can be seen in the existence of interest groups of many different sizes. A conflicting interest group gathers and transforms the scattered interests in the community into a unified effort to become part of public policy, delivering advantage to the group while also advancing the public interest. William Zartman (1997) argues that governmental management is one way of managing violent conflict in a country; it requires attention to welfare, or citizens' satisfaction with the government's services, to citizens' expression and participation in the public sphere, to the competition among them, and to the allocation of the resources a region holds for its development needs. A good government is therefore one that is able to manage these "regional conflict sources" by delivering services, welfare and satisfaction to the public, so that the needs for political expression, competition among the public, and a fair sharing of the benefits of resources can be met effectively.
Furthermore, in relation to effective government and regional conflict, Zartman proposes that ongoing conflict within a state's regions requires an effective governmental role, and that an effective government depends on a national consensus that is jointly agreed as a shared norm. That norm is then jointly recognized and supported by a legitimately ruling political regime and by the existing power structure that enacts it. In understanding conflicts of interest in institutional management and apparatus resources, several public administration theories offer different models, conceptual frameworks or paradigms. Martin Laffin (1997) proposes three models: the agency model, bureaucratic politics, and institutionalism. The agency model views the relationship between political institutions and the bureaucracy as a conflict of interest in which the bureaucracy is the party that controls information; as a consequence, the flow of information becomes asymmetric. This fact is a source of bureaucratic bargaining power in interactions with political institutions. Political institutions, on the other hand, hold authority over bureaucratic agents and their incentive patterns. The meeting point between these two sources of power is the phenomenon that the agency model takes as its main subject in understanding the interaction between political institutions and the bureaucracy. The second model, bureaucratic politics, views the relationship between political institutions and the bureaucracy as inter-individual bargaining in which behaviour is determined by the bureaucratic affiliations of the participants present in the interaction, while the effectiveness of political strategy is largely determined by control over resources and by the persuasive ability of each actor in the process. The third model, institutionalism, interprets the behavioural patterns of the actors involved in the interaction as arising from historical processes and specific institutions. The basic assumption of this model is that social reality is a social construction and that organizations play a vital role in the process of reconstructing that social reality. Concepts of Advocacy Coalition Framework The theory used in the analysis of conflict of interest in this study is that of Susan L. Carpenter (1988) in her book "Managing Public Disputes". Public disputes, or public conflicts, reflect healthy social dynamics. The phenomenon arises from various factors, all of which need to be noted contextually and carefully. Conflicts can be understood as productive when they produce institutional correction and positive outputs, or as destructive, yielding long-lasting damage, unless they are resolved proportionally and promptly. In general, public disputes indicate democratization, in which advocacy for rights, obligations and political roles is taking place. Authority, as materialized in a policy process, faces criticism and resistance; the understanding of public disputes therefore needs to be positioned within the framework of contestation over, and influence on, the related policy processes. Disputes over public issues emerge in various sizes and forms.
Generally they occur between the community and policymakers, among the members of organizations, and between the public and existing organizations. A number of conflicts can escalate quickly into confrontation that does serious harm to development. The analytical framework used in this study is related to the policy theory of Sabatier and Jenkins-Smith, in which the policy process is a competition among a number of actors who advocate, or struggle for, their beliefs about policy issues and their solutions. This competition takes place within a policy subsystem, defined as a number of actors who actively pay attention to an issue and continuously attempt to influence the related public policy. From this competition, which shifts policy among the groups involved, the ACF approach also formulates alternative paths of policy advocacy; the method used is consensus through a negotiation process in order to reach agreement. Institutional Management The purpose of the expansions that have occurred in Indonesia, constitutionally and as a public demand, is to deliver equal services and welfare in each expanded region. In these expansions, the important aspect of institutional management is the management of the new autonomous region and the role it must play so that it works in accordance with its vision, mission and institutional main tasks, that is, the working procedures and management of the expanded region in implementing good governance, so that the expanded region can run well, advance and grow strong. In the working mechanism of the expanded region, the implementation of regional government by regional officials cannot be separated from the working procedures under the applicable regulations. The working procedures in this context relate to the expansion process of Pangandaran Regency (see Appendix 2). The expansion process was, in practice, only formal, referring to Law Number 32 of 2004 and Government Regulation No 78 of 2007: a total survey score of 350, comprising a population indicator value of 95, an economic capability value of 85, an economic potential value of 90 and a financial capability value of 80, with a recommendation that Pangandaran Regency be expanded. These regulations have not provided a strong foundation for an expanded region to become an advanced regency, because they disregard institutional aspects and the regional apparatus. In its implementation, the expansion of Pangandaran Regency placed more emphasis on elite and public interests, because the regulations governing expansion were very weak and emphasized formal requirements only.
The effects of this weak and purely formal expansion regulation were that the government of Pangandaran Regency did not run well, and indeed fell far short of the principles of regional autonomy, among other things as follows: (1) in its early period, the government of Pangandaran Regency focused on personnel transfers and the staffing of the regional apparatus from the parent regency to Pangandaran Regency; (2) the organizational structure of the Pangandaran Regency Government had not yet fully optimized services to the public and still required maximal management of the SOPD; (3) there was no synergy in licensing services or other public services; (4) the preparation of the KUA-PPAS for 2014 was delayed because the SOPD was formed and positions were staffed only in July 2013, and the amount of budget to be received in 2014 was not yet known; (5) office equipment was limited in the Regional Secretariat, the regional offices and the regional technical institutions; (6) the preparation of programs, activities and activity budgets encountered difficulties and obstacles; (7) information systems were weak because information technology was not used to its full extent to establish networks; (8) the accountability system was not implemented, which increased the misuse of regional authority and caused inefficiency, leakage/corruption, collusion and nepotism that harmed the region; and (9) the Acting Regent tended to communicate too little with the public, especially the Pangandaran Presidium, which triggered conflicts of interest. Placement of Regional Apparatus As part of the effort to strengthen the expanded region of Pangandaran Regency as an autonomous region, the placement of the regional apparatus should be conducted in a way that improves regional performance. The effort covered several aspects: the educational level of the regional apparatus, recruitment, promotion, remuneration and work discipline. In implementation, the educational level in the Government of Pangandaran Regency was dominated by bachelor's degrees and college graduates at all levels, together with diploma and senior high school qualifications. The educational profile had therefore not yet met the standard expected of an autonomous region, and this strongly affects performance and the work ethic in the Government of Pangandaran Regency. Moreover, the data show that the development of the Pangandaran apparatus is very limited in both budget and activities, such as education and training, to strengthen the regional apparatus. In the recruitment process, the officials of Pangandaran Regency came from the parent regency (Ciamis Regency) or originated from the Pangandaran districts. Selection was conducted in a simple way, namely through an interview and a declared willingness to transfer to the new regency of Pangandaran. The recruitment process gave rise to dissatisfaction and problems among officials, the leadership and the parent regency. These problems were addressed by explanation and by approaching employees with the new hope of Pangandaran as a region that seeks to be advanced and prosperous. From the beginning of the expansion, promotion of echelons was conducted jointly into structural positions, and employees were placed appropriately and precisely on the basis of efforts to motivate them to gain satisfaction from their jobs.
The promotion of an employee to a particular position counts as a promotion if the position now held has a higher level, greater responsibility and more authority than the previous position; otherwise it is a demotion, if the position has a lower level, less responsibility and less authority than before. A further problem in promotion was that the parent regency (Ciamis Regency) sent officials who were regarded as less professional and whose appointment was not in accordance with the applicable regulations. The remuneration policy in Pangandaran Regency has not yet been realized, owing to limited budget availability and to the concentration on bureaucratic arrangements that were still imperfect. To realize good governance in Pangandaran Regency properly, discipline regulations must be enforced against violations by officials as well as ordinary employees. The further importance of work discipline for the apparatus of Pangandaran Regency lies in accuracy in carrying out tasks and services to the public. A policy of a new culture of work discipline in the expanded region is therefore expected to materialize in the Pangandaran Regency expansion. The determining factors of the policy on the placement of regional apparatus resources in Pangandaran Regency were education and the availability of an adequate budget. These limitations strongly influence the performance and quality of the regional apparatus and bureaucrats in working maximally and professionally. Pangandaran Regency, as a new autonomous region, can become more advanced than other regions if it is strengthened by human resources and a budget that improve the role of the bureaucracy. The limits of education and budget also influenced the promotion process, the recruitment of regional apparatus, remuneration and work discipline. Generally, the placement of regional apparatus resources in the expansion of Pangandaran Regency has not been optimal, owing to limited human resources and to the problems of local elite interests and bureaucratic officials' interests that emerged in the determination and placement process in each SKPD, in the recruitment process, and in transfers from the parent regency (Ciamis) to the new regency of Pangandaran. Conflict of Interest in Managing Institution In the dynamics of regional expansion as a new autonomous institution, the management of regional institutions can give rise to institutional conflict. The conflicts that emerged related to interests in the determination of regional institutions after the region became a new autonomous region, such as conflicts of interest in composing and determining the number of institutions, formulating institutional main tasks, managing the bureaucratic structure, and managing the SOPD, as occurred in Pangandaran Regency. These problems arose because readiness to become an autonomous or expanded region was lacking, both politically and technically. Political tolerance should be grounded in the regulations and in a commitment to regional progress, rather than in individual, group or short-term interests. A conflict of interest is a situation in which a personal interest differs from an organizational interest but often adheres to and merges with the organizational interest in which it is embedded, taking advantage of the policies and management running within it. The weakness of the public staffing system in Indonesia often begins with conflicts of interest, and this occurs in almost all human resources functions.
Recruitment, placement, promotion, transfer, development and evaluation are often biased because they are contaminated by conflicts of interest. The process of managing regional institutions in the expansion of Pangandaran Regency was not free of conflicts of interest. Conflicts of interest arose over strategic positions as well as over efforts to strengthen the institution, regional potential and human resources in Pangandaran Regency, with public figures who were dissatisfied, and these triggered conflicts in managing the expansion of Pangandaran Regency. Examples include institutional regulations issued by the government of Pangandaran Regency that were regarded as duplicative, such as Regent Regulation Number 2 of 2013 and Number 3 of 2014 on institutional main tasks, working procedures and the organization of the regional apparatus; problems with the Regent's performance; the determination of the SOPD; the reduction in the number of districts from 14 to 10; and the determination of the Pangandaran capital. In implementation, the actors in regional institutional management who triggered conflicts of interest were the dissatisfied Pangandaran expansion presidium, NGOs, public figures and bureaucratic elites. This was due to differences in perception, miscommunication, and the fact that the Regent's role in the management and implementation of the Regional Government was still regarded as insufficient. In addition, the determination of the personnel who occupy positions was laden with the interests of bureaucratic elites, political parties and regional elites. Conflict of Interest in the Placement of Apparatus Resources When the expanded region became an autonomous region, interests and conflicts in Pangandaran Regency arose in the determination of regional apparatus resources. The conflicts related to the determination and placement of strategic positions in the bureaucracy of the government of Pangandaran Regency, beginning with the process of forming the new region and its formal establishment as a new regency. Conflicts of interest over the placement of regional apparatus included the following: 1. In the proposal of candidates for the position of Regent, various interests and conflicts emerged, involving various organizations, groups and elites: political parties, the expansion presidium and the bureaucracy. The dispute arose because people asked the expansion presidium, as the vehicle of the struggle for the Pangandaran expansion, to participate in determining the officials and the conditions in Pangandaran after it became a new region. 2. The process of determining the Regional Secretary of Pangandaran Regency was not free of contested interests; the position of Regional Secretary remained vacant for almost nine (9) months because of competing interests between the Acting Regent and the Presidium over the placement. 3. The process of determining the heads of offices and regional institutions of Pangandaran Regency saw a difference of opinion between the Acting Regent and the expansion presidium: the presidium proposed nine offices, while the Acting Regent proposed seven offices and four boards, each working in accordance with its institutional main tasks and processes. 4.
In the placement of employees, Pangandaran Regency received a devolution of employees from the parent regency to serve as personnel in the new autonomous regional government. In the placement process, problems and conflicts of interest arose among the civil servants (PNS) transferred to Pangandaran Regency: civil servants from the parent regency (Ciamis) who were unwilling to be moved to the government of Pangandaran Regency, conflicts over districts of origin in occupying strategic positions, and the recruitment of employees from among officials' relatives. Conflict Resolution The resolutions of the conflicts of interest in institutional management and the placement of apparatus resources were as follows: a) the establishment of a regular joint forum among the Regent, the Presidium Council and community leaders to discuss regional plans and programs; b) the accommodation by the Regent of the Presidium Council's proposals and the appointment of officials who are appropriate, meet the requirements and reflect districts of origin; c) the improvement of regional performance and of cooperation with related parties, in order to avoid misunderstanding on the part of the regional government; d) the optimization of more aspirational regional institutional roles; and e) the establishment of a joint team of the Presidium Council, public figures and the Regional Government. Conclusion In the era of decentralization, regional autonomy and demands for regional expansion in Indonesia, expansion should be grounded in the institutional aspect, so that the expanded region works toward regional progress in line with the objectives of expansion, namely improving human resources, improving public services, and achieving welfare and equitable development in the region. The policy on the management of the regional apparatus should be adjusted to management at the institutional level, based on competence, work discipline, remuneration, promotion and recruitment grounded in assessment, testing and proper suitability. Proper management of regional apparatus resources is expected to answer the challenge of improving the performance of expanded regions; moreover, expanded regions should be strong, advanced and optimal in accordance with the objectives of expansion. Conflict of interest in the management of institutions and the placement of positions in regional expansion relates, in this case, to institutional efforts to improve and empower the regional apparatus of the expanded region. The governmental bureaucracy is an institution capable of playing a political role in resolving the conflicts that emerge among people and groups of people. The relationship between political institutions and the bureaucracy involves conflicts of interest in which the bureaucracy is the party that controls information; consequently, the flow of information becomes asymmetric, and this is a source of bureaucratic bargaining power when interacting with political institutions. If conflicts of interest in the management of regional apparatus resources are handled well, in accordance with the regulations and with a comprehensive approach, they can strengthen the institution, and the conflicts that arise in the management of regional apparatus resources in regional expansion can be resolved without spilling over into other regional problems.
2019-05-30T13:21:39.610Z
2017-12-08T00:00:00.000
{ "year": 2017, "sha1": "fb6ed7d2e34d3e515b1bf6fae31e71be43680a0e", "oa_license": "CCBYNC", "oa_url": "https://www.macrothink.org/journal/index.php/jpag/article/download/12260/pdf", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "2c19b523b2529e433acd217757bdcc3275aa9d56", "s2fieldsofstudy": [ "Political Science" ], "extfieldsofstudy": [ "Political Science" ] }
214683600
pes2o/s2orc
v3-fos-license
Policy congruence and advocacy strategies in the discourse networks of minimum unit pricing for alcohol and the soft drinks industry levy Background and Aim Public health policy development is subject to a range of stakeholders presenting their arguments to influence opinion on the best options for policy action. This paper compares stakeholders’ positions in the discourse networks of two pricing policy debates in the United Kingdom: minimum unit pricing for alcohol (MUP) and the soft drinks industry levy (SDIL). Design Discourse analysis was combined with network visualization to create representations of stakeholders’ positions across the two policy debates as they were represented in 11 national UK newspapers. Setting United Kingdom. Observations For the MUP debate 1924 statements by 152 people from 87 organizations were coded from 348 articles. For the SDIL debate 3883 statements by 214 people from 175 organizations were coded from 511 articles. Measurements Network analysis techniques were used to identify robust argumentative similarities and maximize the identification of network structures. Network measures of size, connectedness and cohesion were used to compare discourse networks. Findings The networks for both pricing debates involve a similar range of stakeholder types and form clusters representing policy discourse coalitions. The SDIL network is larger than the MUP network, particularly the proponents’ cluster, with more than three times as many stakeholders. Both networks have tight clusters of manufacturers, think-tanks and commercial analysts in the opponents’ coalition. Public health stakeholders appear in both networks, but no health charity or advocacy group is common to both. Conclusion A comparison of the discourse in the UK press during the policy development processes for minimum unit pricing for alcohol and the soft drinks industry levy suggests greater cross-sector collaboration among policy opponents than proponents. INTRODUCTION The global rise in non-communicable diseases (NCDs) can be understood as 'industrial epidemics' driven at least in part by powerful corporations and their allies promoting products that are also disease agents [1]. Decades of mounting evidence on the tobacco industry highlighted its detrimental effect on health and brought about the introduction of upstream policies targeting price, marketing and availability. More recently, UK public health policymakers have turned their attention to upstream policy interventions targeting alcohol and sugar. There is growing evidence that the alcohol industry and ultra-processed food and drink industry use similar strategies to the tobacco industry to undermine effective public health policies [2][3][4]. Public health policy development is subject to a range of stakeholders presenting their arguments in the news media on the best options for policy action [5][6][7]. In this respect, the news media can be seen as important in contributing to agenda-setting [8] and in shaping public and policy opinion on the acceptability of public health policies [9][10][11]. Two recent examples of controversial pricing policy options that prompted intense media debates throughout the United Kingdom were minimum unit pricing (MUP) for alcohol and the soft drinks industry levy (SDIL). Both policy options were considered by the UK Government. 
However, while the SDIL was implemented throughout the United Kingdom in 2018, the introduction of MUP in England was placed on hold indefinitely in 2013, despite being included in the UK Government's 2012 Alcohol Strategy [12]. Meanwhile, in June 2012, the Scottish Government passed the Alcohol (Minimum Pricing) Scotland Act 2012, paving the way for MUP in Scotland [13]. The MUP pricing policy targets the sale of cheap, high-strength alcohol to reduce alcohol consumption and related harms. After a failed legal challenge [14], in May 2018 a minimum price of 50p per unit was implemented in Scotland [15]. Arguments in support of MUP, appearing in the UK press, largely related to concerns about high levels of problem drinking; its effect on public health and public order; and a widespread belief that most of the alcohol that contributes to drunken behaviour is irresponsibly priced and sold [7,16]. Key opposing arguments in the debate positioned the policy as an illegal barrier to fair trade that would harm the economy and penalize responsible drinkers [7]. Public Health England's report, 'Sugar Reduction: The Evidence for Action', highlighted the high levels of sugar consumption and associated health harms [17]. The report recommended a broad range of measures, including the introduction of a tax on high sugar products. In the March 2016 budget, the Chancellor of the Exchequer, George Osborne, announced the Conservative Government's intention to introduce the SDIL [18]. They intended that the SDIL would encourage producers to re-formulate products with a reduced sugar content to avoid paying the levy [19]. Following a consultation period, the levy was introduced in April 2018 and set at 18p per litre on soft drinks with a total sugar content of 5 g or more per 100 millilitres, and 24p per litre for those with 8 g or more per 100 millilitres. The levy was to apply to all sugar-sweetened beverages except pure fruit juices (with no added sugar) and drinks with a high milk content. Key supportive arguments appearing in the UK press centred on the extent of the health harm caused by excess sugar consumption; that such a policy was a necessary government intervention as part of a package of measures; and that voluntary industry codes, such as the Public Health Responsibility Deal, had been ineffective [6]. Opposing arguments emphasized that industry was already taking voluntary action and playing an active role in health promotion, therefore further regulation was unnecessary;any form of taxation would be ineffective in tackling the complex problem of obesity; and such measures would cause economic harm to industry and the wider economy [6]. Successful implementation of 'controversial' health policies requires a high level of political commitment and support from advocacy stakeholders [20,21]. It has been argued that interest groups that present a united front may be more effective in having their preferred policy option adopted than if they work separately [22]. Indeed, Rasmussen and colleagues suggest that the likelihood of advocacy success increases when advocates publicly support each other's position [23]. Hawkins & McCambridge suggest that a factor in the failure to implement MUP in England was that health advocates were initially underprepared and did not present consistent arguments for the policy in the media [21]. 
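To make the levy bands described above concrete, the following short Python sketch maps a drink's sugar content to the per-litre levy. This is an illustration only, not code from the study; the function name and structure are ours, and the exemptions for pure fruit juices and high-milk-content drinks are not modelled.

```python
def sdil_levy_per_litre(sugar_g_per_100ml: float) -> float:
    """Return the soft drinks industry levy in pounds per litre.

    Bands follow the rates described above: 24p per litre for drinks with
    8 g or more total sugar per 100 ml, 18p per litre for 5 g or more,
    and no levy below 5 g.
    """
    if sugar_g_per_100ml >= 8.0:
        return 0.24
    if sugar_g_per_100ml >= 5.0:
        return 0.18
    return 0.0


if __name__ == "__main__":
    # Illustrative sugar contents only.
    for sugar in (4.5, 5.0, 7.9, 10.6):
        print(f"{sugar} g/100 ml -> £{sdil_levy_per_litre(sugar):.2f} per litre")
```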
Conversely, the complex corporate relationships that exist between unhealthy commodity industries may represent an opportunity for strategic cross-industry collaboration and result in more coherent alignment of media messaging when seeking to influence policy development [24][25][26]. Smith and colleagues highlight the need for research to 'better understand how processed food, soft drinks, and alcohol industries influence public, political, and policy debates', in order to understand how to mitigate against industry messaging and successfully advocate for public health policy via the media [27]. This study seeks to address calls for research to compare stakeholder influencing activities across industry sectors [24,25,27]. We use discourse network analysis (DNA), a research method that allows the analysis and visualization of actor-based debates using network analysis, to explore the complex web of arguments, or discourse coalitions [28], that form when stakeholders seek to publicly influence government policy [29,30]. Previous studies have used DNA to describe the appearance of discourse coalitions in support of, and opposition to, MUP and SDIL [6,7]. In the recent commentary on Fergie et al., Schmidt highlights that this methodology is 'likely to prove a particularly valuable tool for comparative research, allowing efficient, systematic, rigorous analysis to compare policy debates internationally and across multiple unhealthy products' [31]. Here we aim to build on our previous DNA studies and use this methodology to compare stakeholders' positions in the discourse networks across two pricing policy debates, MUP for alcohol and the SDIL, as represented in UK newspapers. The comparison of MUP and SDIL is an appropriate case study, as they are both examples of 'sin taxes' (pricing policies targeting products deemed harmful to society and individuals) [32,33]; intended to be UK-wide policies; and attracted a very public debate in the news media which, in turn, affected their chances of policy adoption. Specific research questions are: (i) what are the similarities and differences in the policy discourse networks' composition and structure; (ii) how does the composition of coalitions differ between the two debates and what might this tell us about policy beliefs and advocacy strategies; and (iii) how do the arguments that polarize the coalitions differ? METHODS Pre-existing discourse network analyses on MUP [7] and SDIL [6] were employed as test cases to examine how DNA could be used as a comparative methodology. While the policy context was somewhat different for the two debates, both controversial policies drew significant media attention with clear polarization in stakeholder views, thus providing a useful case study. Additionally, although MUP was only finally implemented in Scotland, it was originally proposed as a UK-wide policy and included in the UK Government's Alcohol Strategy [12]. We searched articles from 11 national UK newspapers, representing all political views and genres, in the months preceding and following key policy announcements: between May 2011 and November 2012 for the MUP debate; and between May 2015 and November 2016 for the SDIL debate. Stakeholder statements were identified and coded using the Discourse Network Analyzer (DNA) software [34], a qualitative content analysis software tool which combines category-based content analysis with network analysis [29,35]. 
Each coded statement consists of four variables: the person's name, their organizational affiliation, the argument to which the subject refers (further called 'concept') and a binary qualifier indicating the stakeholder's agreement or disagreement with the concept. Weighted one-mode networks of stakeholders were created for both debates and exported from DNA as stakeholder × stakeholder matrices, using the 'subtract' transformation with 'average activity normalization' [29]. These procedures create a network in which a tie connects any two stakeholder nodes if they agree (more than they disagree) with each other, regarding the concepts in the debate. The methods used to create the separate policy discourse networks are described in detail elsewhere [6,7]. To allow comparison between the pricing debates, common concepts were harmonized wherever possible. For example, 'the policy will reduce consumption of the commodity' was used in favour of 'MUP will reduce consumption of alcohol' and 'the SDIL will reduce consumption of sugar-sweetened beverages'. Concepts that were unique to only one debate were not harmonized; for example, 'industry plays an active role in public health promotion' was specific only to the SDIL debate. For the MUP debate, 1924 statements by 152 people from 87 organizations were coded in 348 articles. For the SDIL debate, 3883 statements by 214 people from 175 organizations were coded in 511 articles. A total of 63 concepts were identified. Twenty-nine concepts were common to both debates, 17 unique to MUP, and a further 17 unique to SDIL. See Supporting information for a full list of concepts (Supporting information, Data S1) and stakeholder organizations (Supporting information, Data S2) appearing in each debate. Networks were plotted in Visone (a software tool that allows the visualization and analysis of network structures in network data sets, such as those exported from the DNA software) [36]. Ties between actors represent common agreement or common disagreement with a specific concept or argument. A tie weight threshold equivalent to the 67th percentile was applied to the signed network to reduce ties to only relatively robust argumentative similarities and to maximize the identification of both network structures. The 67th percentile (equivalent tie weight thresholds 0.400 for MUP and 0.333 for SDIL) was selected to ensure that the networks could be directly compared. The Girvan-Newman edge-betweenness community detection algorithm (an algorithm to identify clusters, or discourse coalitions, in the network, i.e. groups of actors with a similar argumentative position) [37] was used to identify clusters of stakeholder subgroups with argumentative similarities within the discourse network. These clusters can be interpreted as discourse coalitions. The coalitions were then highlighted using blue hyperplanes, the different stakeholder types were visualized with common colours for both debates and the frequency of codes for stakeholders was represented by the size of the respective node. Network measures were used to compare the two networks and principal coalitions regarding: size-the total number of nodes (actors) in a network or cluster; density-a measure of connectedness of actors within a network cluster or the overall network, expressing the relative number of ties (i.e. the number of ties as a proportion of the theoretical maximum) [38]; and the E-I index-a measure of subgroup cohesion, i.e. 
how strongly aligned the actors are internally in any one cluster versus external alignment with other clusters [39]. The range for E-I index is -1 (all ties are internal to the coalition) to +1 (all ties are external to the coalition). We examined the relative use of concepts in each debate by comparing the frequency with which they were used and the degree of agreement and disagreement. The concepts that were the most polarizing in each network were identified by: first, extracting the 15 most frequently used concepts for MUP and SDIL separately; secondly, calculating the ratio of agreement to disagreement for each concept; and finally, ordering them by this ratio. As such, the five most polarizing concepts were those with the highest ratio in each debate. The primary research question and analysis plan were not pre-registered and thus the results should be considered exploratory. Overview Research question (i) What are the similarities and differences in the policy discourse networks' composition and structure? The composition of stakeholders in both networks was similar, reflecting the common interests of those participating in the debates. Both networks included politicians/political parties; government advisory bodies; health professionals/professional associations; health charities/advocacy groups; universities/academics; thinktanks/commercial researchers; retailers/retail associations; manufacturers/associated industries or associations; and international health organizations. The only stakeholder types that did not appear in both debates were European Union (EU) Member States/EU body and the police, which exclusively appeared in the MUP debate (Figs 1 and 2). Wine-producing EU Member States were particularly concerned about the legality of MUP, and the police highlighted MUP as a way of dealing with the violence resulting from 'problem drinkers', two issues that were not prominent in the SDIL debate. The detailed composition and characteristics of each network have been published elsewhere [6,7]. In this article, we focus on the comparison between the two networks and their respective coalitions. The structure of both networks formed two discourse coalitions representing proponents and opponents of the policies. However, at the chosen tie-weight cut-off, the MUP coalitions are more distinct. Fewer stakeholders (total nodes) are engaged in the debate, with almost twice as many apparent in the SDIL network; 3.3 times as many in the proponents' coalition and 1.7 times as many in the opponents' coalition (Table 1). This reflects the greater number of vocal stakeholders in the SDIL debate, particularly in the proponents' coalition. Additionally, the E-I index for proponents of SDIL is low compared with the other three coalitions (Table 1), indicating that members of this coalition were even more likely to agree with each other than with stakeholders outside the coalition, compared to the other coalitions. Highlighting the 10 most active stakeholder organizations in each debate reveals that in both cases the commodity manufacturers and associated industry stakeholders (brown nodes) play prominent roles in opponents' coalitions and are closely aligned with think-tanks and commercial researchers (teal nodes) (Figs 3 and 4).
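The network measures used above can be illustrated with a short sketch. The following Python code is our own illustration, not the code used in the study (which relied on the DNA software and Visone); the stakeholder names and tie weights are invented. It builds a toy agreement network with networkx, reports its size and density, recovers two coalitions with the Girvan-Newman edge-betweenness algorithm, and computes an E-I index for each coalition.

```python
import networkx as nx
from networkx.algorithms.community import girvan_newman

G = nx.Graph()
# Positive tie weights indicate net agreement between two stakeholders.
edges = [
    ("HealthCharityA", "RoyalCollege", 0.6),
    ("HealthCharityA", "University", 0.5),
    ("RoyalCollege", "University", 0.4),
    ("TradeAssoc", "ThinkTank", 0.7),
    ("TradeAssoc", "Manufacturer", 0.5),
    ("ThinkTank", "Manufacturer", 0.6),
    ("University", "ThinkTank", 0.2),  # weak cross-coalition tie
]
G.add_weighted_edges_from(edges)

# Size and density of the whole network.
print("size:", G.number_of_nodes(), "density:", round(nx.density(G), 3))

# Girvan-Newman edge-betweenness community detection: the first split is
# taken as the two discourse coalitions.
coalitions = tuple(sorted(c) for c in next(girvan_newman(G)))
print("coalitions:", coalitions)


def e_i_index(graph, members):
    """E-I index: (external ties - internal ties) / total ties for a coalition."""
    members = set(members)
    internal = external = 0
    for u, v in graph.edges():
        if u in members and v in members:
            internal += 1
        elif u in members or v in members:
            external += 1
    total = internal + external
    return (external - internal) / total if total else 0.0


for c in coalitions:
    print(c, "E-I index:", round(e_i_index(G, c), 2))
```

On this toy network the weak University-ThinkTank tie carries the highest edge betweenness, so removing it yields two internally cohesive clusters with negative E-I indices, mirroring the interpretation of coalition cohesion used above.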
Associations representing manufacturers of the products under scrutiny are dominant spokespeople in both debates, in particular the Scottish Whisky Association and the Wine and Spirit Trade Association for MUP and the British Soft Drinks Association for SDIL. However, the SDIL network also features a prominent manufacturer (Coca-Cola) and an association representing related industries (the UK Food and Drink Federation). The SDIL proponents' coalition features active stakeholders from a wider range of public health advocates [government advisory bodies (pink nodes), particularly Public Health England, together with health charities and advocacy groups (purple nodes)] than seen in the MUP network. Six of the most active stakeholders are from these groups compared with only one (Alcohol Concern) for MUP. Other active stakeholders in the MUP proponents' coalition are two professional associations (British Medical Association and the Royal College of Physicians) and one academic institution (University of Sheffield). While academic researchers are apparent in the SDIL network, they are not among the 10 most prominent stakeholders appearing in this debate. Political stakeholders (gold nodes) appear among the two coalitions in both networks. However, only the Conservative party is among the most active stakeholders in the SDIL network, compared with four political parties in the MUP network [the Conservatives, Scottish National Party (SNP), Scottish Government and Scottish Labour]. This reflects the origins of MUP as an SNP policy targeting what was framed as a Scottish issue of harmful drinking. In both networks, the Conservative party is towards the middle of the networks. However, in both cases this does not reflect a brokering role, but either a change in ideology over the course of the debate (for SDIL, the Conservative shift in position in the middle of the period studied) or splits within the party on the issue (for MUP, prominent politicians openly taking opposing positions over the course of the period studied). Despite similar patterns in the types of organizations making up the proponents' and opponents' coalitions across the two debates, only 30 organizational stakeholders are common to both ( Table 2). This suggests that the debates are relatively sparsely connected to each other through common stakeholders, despite their topical similarity. Apart from policymakers (political parties, government departments and advisory bodies), organizations from two other categories of stakeholders contribute to both the MUP and SDIL debates (Figs 5 and 6). Four think-tanks and commercial researchers (Adam Smith Institute, Institute of Economic Affairs, Institute for Fiscal Studies and the TaxPayer's Alliance) and six retailers or retail associations (Asda, Sainsbury's, Tesco, British Retail Consortium, Scottish Retail Consortium and Scottish Grocers Federation) appear in both debates. Think-tanks and commercial researchers (teal nodes) appear exclusively in the opponents' coalitions, while the retailers and retail associations (green nodes) are spread across both coalitions in both debates. In relation to MUP, few retailers are central to the proponents' coalition, unlike in the SDIL debate, where some retail stakeholders (e.g. Sainsbury's and the British Retail Consortium) are integrated into the proponents' coalition with strong belief ties to key policy proponents. 
It is noteworthy that, in contrast, there were no health charities or advocacy groups common to both debates, despite a range of these organizations being very active and central to the proponents' coalitions within each debate. Similarly, while universities and academic researchers appear in one or other debate, only the University of Birmingham is common to both. RQ (iii) How do the arguments that polarize the coalitions differ? Of the top five concepts that lead to the formation of coalitions in the two networks, two concepts are common to both (Table 3). 'Policy is supported by the evidence' is the most polarizing concept for both networks and 'policy will reduce consumption of the commodity' is the third and fourth most polarizing concept for MUP and SDIL, respectively. Three of the most polarizing concepts are unique to one or other of the debates: 'policy will penalize responsible consumers' for MUP; 'industry is taking voluntary actions' and 'industry plays an active role in public health promotion' for SDIL. Of note is the fact that the two most frequently cited arguments in the SDIL debate do not appear as significant polarizing concepts, i.e. 'policy needed to address commodity problem' and 'commodity consumption causes health harm'. These concepts relate to the framing of the problem in relation to population-level health harm and the need for a policy response. Conversely, the two most frequently cited arguments in the MUP debate result in network polarization, i.e. 'policy will reduce consumption of the commodity' and 'policy is illegal'. In contrast, these concepts relate to the framing of the solution and its probable effectiveness and legality. Thus, the most frequently cited arguments in the SDIL debate do not result in polarization of the network, suggesting a high degree of agreement about the extent of the problem resulting in more closely integrated coalitions. DISCUSSION There are calls for more nuanced analyses of stakeholder engagement in health policy development [24,27,31]. It has been suggested that research should compare stakeholders across multiple unhealthy products and related policies [31]. Using DNA methods, this study presents the first direct comparison of the discourse coalitions that were evident in the UK press during the policy development processes for MUP and the SDIL. Both networks show similarities in terms of structure, proponents' and opponents' coalitions and similar stakeholder types. However, important differences are revealed in terms of network size and complexity; the relative prominence, and lack thereof, of key stakeholders; subtle differences in the position of industry subsegments between networks; and the relative polarizing impact of frequently cited arguments. Proponents of the pricing policies in both debates included public health, health charities, advocacy groups and academics. While these stakeholders were present in both debates, few specific organizations were common to both, suggesting that such proponents tend to make media statements focusing on their area of policy interest. While it is clear that policy advocates are already working across sectors; for example, in the guise of the Cross Party Group on Improving Scotland's Health: 2021 and Beyond [40], and health alliances across the United Kingdom and internationally, this study suggests that they may not optimize their media messaging with regard to pricing policies. 
The World Health Organization (WHO) identifies such upstream policies as 'best buys' to tackle non-communicable diseases (NCDs) [41]. There may be potential space for further cross-sector public health advocacy in support of pricing policies, by elevating the debate and presenting arguments across policy debates in support of their counterparts. Advocates could thus increase their chances of achieving policy congruence, as suggested by Rasmussen and others [23,42]. In contrast, opponents of regulatory pricing policies were present in both policy debates, specifically those with a vested interest in the economic impact of both policies such as retailers, representatives of licensed premises and commercial researchers. This structural similarity suggests industry stakeholders hold comparable discourse positions, supporting the idea of a common industry 'playbook', facilitated by public spokespeople, as suggested by Petticrew et al. [43]. The same four free market think-tanks and commercial researchers appear embedded in both opponents' coalitions, closely tied to industry stakeholders, suggesting similar market justice rhetoric based on commercial ideology [44,45]. Comparing alcohol and tobacco strategies, Savell and colleagues suggest that there are commonalities, including both sectors providing skewed interpretations of evidence while also promoting voluntary codes, based on establishing themselves as acting responsibly in relation to health [4]. Our findings support this by suggesting that both sides focus on the availability and quality of evidence, and this is the most significant polarizing argument in both networks. There may be an opportunity for policy advocates and academics to focus their advocacy efforts in the media on stressing the importance of weight of evidence, strength of evidence, source of evidence and how it is best used. Polarizing concepts appearing in the SDIL debate but absent in the MUP debate are 'industry is taking voluntary action' and 'industry plays an active role in public health promotion'. This lends support to Nixon et al.'s findings that the food and drinks industry seeks to establish itself as an exceptional case that should not be subject to the same controls as producers of other health-harming products, and is a key part of its corporate social responsibility rhetoric [6,46]. However, Collin et al. highlight the linkages that exist across tobacco, alcohol and ultra-processed food companies, positing the idea of a single unhealthy commodity industry requiring a consistent regulatory approach [2]. A key difference between the two networks is the number and distribution of associated industry stakeholders such as retailers and restaurants, with a greater number in the SDIL network, including the active voice of the UK Food and Drink Federation. Six key retailers are common to both debates but appear in different positions. For example, the British Retail Consortium and Sainsbury's appear as proponents of SDIL and opponents of MUP, whereas Tesco occupies inverse positions. This, together with wider industry engagement in the SDIL debate, reinforces the need to clearly define industry subsegments and their policy positions, as suggested by Collin et al. [24]. Policy advocates may benefit from understanding the policy responses of multiple industry subsegments to effectively counter policy objections and leverage potential policy support.
One of the limitations of this study, which examines the debates as static networks, is that it does not allow analysis of subtle shifts over time. While the change in position of the Conservative Party in the SDIL debate was the only fundamental change in ideological position, there was an ongoing interplay of subtle shifts in emphasis and relative prominence of arguments over time in both debates. Future studies would benefit from comparing network development over time. Secondly, harmonizing the concepts for the two debates may have resulted in the loss of some nuanced arguments. However, the coders of the two debates worked together to ensure consistency and minimize this risk. Thirdly, the periods studied for each debate were 4 years apart: 2011-12 for MUP and 2015-16 for SDIL. The passage of time could have influenced stakeholders' strategies and the nature of their responses to proposed fiscal policy. However, we chose these time-periods deliberately to examine the debates at similar stages of policy development. Finally, while we recognize the importance of the digital world of echo chambers, tailored information and micro-targeting, which means that social media plays an increasing role in influencing the policy agenda [47], traditional newspapers remain an important barometer of the current political agenda. CONCLUSION In conclusion, this visualization of the discourse networks apparent in the debates on pricing policies spanning two unhealthy commodity industries may represent a manifestation of the underlying discursive strategies (manipulation or framing of a set of arguments by actors in order to achieve a certain goal) employed by policy stakeholders to influence policy makers and the public, via the news media. The network comparison is suggestive of greater cross-sector collaboration among policy opponents than proponents. Our analysis also suggests that, in seeking policy congruence, there may be a space for further cross-sector public health advocacy, by presenting arguments across policy debates in support of their counterparts. However, we recognize there are potential barriers to this model, not least resource constraints and the risk of mission creep for some public health advocates. Given the limited presence of academic institutions across the networks, and the importance of statements relating to evidence in polarizing both networks, we suggest that academics contribute more frequently on issues relating to evidence in policy debates. Finally, we suggest that DNA could usefully be applied to compare other policy debates over time and across countries, in attempting to tackle NCDs. Declaration of interests None. The funders had no role in the design of the study; in the collection, analyses or interpretation of data; in the writing of the manuscript; or in the decision to publish the results. Prominence indicates relative frequency of use in each debate (rank 1 = most frequently used); b italics = concept unique to one network. MUP = minimum unit pricing; SDIL = soft drinks industry levy.
Differentiating the pseudo determinant A class of derivatives is defined for the pseudo determinant $Det(A)$ of a Hermitian matrix $A$. This class is shown to be non-empty and to have a unique, canonical member $\mathbf{\nabla Det}(A)=Det(A)A^+$, where $A^+$ is the Moore-Penrose pseudo inverse. The classic identity for the gradient of the determinant is thus reproduced. Examples are provided, including the maximum likelihood problem for the rank-deficient covariance matrix of the degenerate multivariate Gaussian distribution. Introduction We derive the class of derivatives of the pseudo determinant with respect to Hermitian matrices, placing an emphasis on understanding the forms taken by this class and their relationship to established results in linear algebra. In particular, care must be taken to address the discontinuous nature of the pseudo derivative. The contributions in this paper are primarily of a linear algebraic nature but are well motivated in fields of application. The pseudo determinant arises in graph theory within Kirchoff's matrix tree theorem [1] and in statistics, in the definition of the degenerate Gaussian distribution. The degenerate Gaussian has been useful in image segmentation [2], communications [3], and as the asymptotic distribution for multinomial samples [4]. Despite these appearances, knowledge of how to differentiate the distribution's density function is conspicuously absent from the literature, and-since differentiation is often essential for maximization-the lack of this knowledge is a plausible barrier to the distribution's wider use. Specifically, to obtain the maximum likelihood (ML) estimator for the singular covariance matrix of the degenerate Gaussian, one must be able to calculate the derivative of the log likelihood and hence the pseudo determinant of the covariance. Although [5] firmly establishes the subject of ML estimation for multivariate Gaussians, the authors never directly address singular covariance estimation. This problem is explored in Section 3. In Section 2, the pseudo determinant is introduced, and its derivative with respect to Hermitian matrices is derived. The canonical derivative We begin by introducing the pseudo determinant both as a product of eigenvalues and as a limiting form. Definition 2.1. The pseudo determinant Det of a square matrix A is defined as the product of its non-zero eigenvalues. If a matrix has no non-zero eigenvalues, then we say Det(0) = 1. See [1] for an equivalent definition of the pseudo determinant in terms of the characteristic polynomial. In deriving its derivative, it will be useful to write the pseudo determinant as a limit. Proposition 2.2. If A is an n × n matrix of rank k, then Det(A) is the limit for det(·) the regular determinant. Whereas this result is known [6], we were unable to find its proof, so it is given here in the spirit of completeness. Proof. We use the identity Replacing X with k I n and letting A = U ΛU * = ZY Z * , we have Next, we define the Moore-Penrose pseudo inverse [7], an important object involved in the derivative of the pseudo determinant. Definition 2.3. The pseudo inverse A + of a matrix A is also defined in terms of a limit: 4) A + exists in general and is unique. It may also be defined as the matrix satisfying all the following criteria: Hermitian matrices, the pseudo inverse is obtained by inverting the matrix eigenvalues. As is the case for the pseudo inverse [7], the pseudo determinant is discontinuous. 
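For reference, the limiting constructions and the canonical derivative discussed above can be written out explicitly in standard notation, for an n × n Hermitian matrix A of rank k with identity I_n. This restates Proposition 2.2, Definition 2.3 and the identity announced in the introduction rather than adding anything new.

```latex
% Pseudo determinant as a limit (Proposition 2.2), for an n x n matrix A of rank k:
\[
  \operatorname{Det}(A) \;=\; \lim_{\alpha \to 0} \frac{\det\!\left(A + \alpha I_n\right)}{\alpha^{\,n-k}} .
\]
% Moore-Penrose pseudo inverse as a limit (Definition 2.3):
\[
  A^{+} \;=\; \lim_{\delta \to 0}\left(A^{*}A + \delta I_n\right)^{-1} A^{*} .
\]
% Canonical member of the class of derivatives, which reproduces the classical
% gradient-of-determinant identity when A is invertible:
\[
  \boldsymbol{\nabla}\operatorname{Det}(A) \;=\; \operatorname{Det}(A)\, A^{+},
  \qquad \text{cf. } \nabla \det(A) = \det(A)\, A^{-\top} \text{ for invertible } A .
\]
```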
For an example, consider the two matrices As one might gather from this example, the pseudo determinant is discontinuous between sets of matrices of differing ranks. This discontinuity will effect the way we define the derivative of the pseudo determinant. We now turn to deriving this derivative. For matrix A in the space of n × n matrices M n×n , the matrix derivative of a function h : M n×n → R is given by the matrix ∇h(A) satisfying for any matrix B ∈ M n×n , where ∇ B h(A) is the directional derivative. We use the directional derivative to define the derivative of the pseudo determinant, but, on account of the discontinuity of the pseudo determinant, we must restrict the directions B in which the directional derivative is defined. For this reason, we may define the derivative at a point only in certain directions and must modify the common definition of the directional derivative. n×n that share the same kernel as A, i.e. for which Ker(A) = Ker(B). Then the derivative ∇ Det(A) is given by any matrix satisfying Note that, according to this definition, ∇ Det(A) is not unique, since it can take on different values along the kernel of B. This non-uniqueness can also be seen using the following class equations for the class of derivatives ∇ Det(A) of the pseudo determinant at a matrix A. Definition 2.5. (Definition 2) A derivative of the pseudo determinant at a point We demonstrate that this is a natural definition using the facts that A(A 2 ) + = A + and (A 2 ) + A = A + for any Hermitian A and assuming one may interchange limits: Multiplying both sides by A 1/2 and rearranging gives the first class equation. The derivation of the second equation is symmetric. We illustrate the preceding definitions-and that they do not define unique derivatives-with a few examples. It is clear that Det(A) = a and A + is obtained by taking the reciprocal of the first element of A. The above result renders In practice, one may obtain the canonical element ∇Det(A) of class ∇ Det(A) directly from a corollary to the following Pythagorian theorem. where P indexes all k × k minors of A satisfying det(A P ) = 0. As a corollary, the canonical gradient ∇Det is directly obtainable. The gradient of the pseudo determinant may be found using Formula (2.21): The reader may check that as expected from Equation (2.15). The above examples suggest that ∇Det(A) should satisfy the class equations in general. To show this, we first cite a result. Theorem 2.14. (Berg 1986 [8]) The pseudo inverse of a Hermitian, rank k matrix A takes the following form: . that maps a point v ∈ R 2 onto the line through the origin containing the unit vector u = (a, b) T / (a 2 + b 2 ) while scaling by a 2 + b 2 . The reader may check that We thus obtain the intriguing result where the last form is meant to make clear that the result is the projection onto the subspace spanned by (a, b) T . The previous example touches on graph theory if we let (a, b) = ( √ c, − √ c). Example 2.19. Let L denote the Laplacian L = D − A of a weighted graph, where A is the weighted adjacency matrix having zeros down the diagonal and off-diagonal elements A ij equal to the value associated with the edge connecting nodes i and j. The matrix D is diagonal and has elements satisfying In the special case of a connected, two node graph with edge value c, the Laplacian is Noting that L is a projection-dilation matrix (see prior example), we get The last term is half the Laplacian associated to the simple, unweighted graph obtained by removing the weight c. 
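A quick numerical check of the two-node Laplacian example is straightforward with numpy. The edge weight c = 3 below is an assumed value for illustration: the non-zero eigenvalue of L is 2c, so Det(L) = 2c, and Det(L)·L⁺ should equal half of the unweighted Laplacian, independently of c, as stated above.

```python
import numpy as np

def pseudo_det(A, tol=1e-10):
    """Product of the non-zero eigenvalues of a Hermitian matrix (1 if there are none)."""
    eigvals = np.linalg.eigvalsh(A)
    nonzero = eigvals[np.abs(eigvals) > tol]
    return nonzero.prod() if nonzero.size else 1.0

c = 3.0                                    # assumed edge weight for the two-node graph
L = c * np.array([[1.0, -1.0],
                  [-1.0, 1.0]])            # weighted Laplacian L = D - A

grad = pseudo_det(L) * np.linalg.pinv(L)   # canonical derivative Det(L) * L^+
half_unweighted = 0.5 * np.array([[1.0, -1.0],
                                  [-1.0, 1.0]])

print(pseudo_det(L))                       # 2c = 6.0
print(np.allclose(grad, half_unweighted))  # True: connectivity is retained, scale is not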
Hence, ∇Det(L) takes graph connectivity into account but not scale. 2.1. The matrix differential. When obtaining matrix derivatives, it is often easiest to calculate the matrix differential dA and then relate back to the gradient using the formula [9] dh(A) = tr (dA) G ⇐⇒ ∇h(A) = G . where we are implicitly selecting for the canonical gradient ∇Det(A) in order to satisfy Ker(dA) = Ker(A). Equation (2.41) may also be derived directly using the spectral decomposition A = U ΛU * = k j=1 λ j u j u * j for rank k, Hermitian A. The differential of an eigenvalue of a Hermitian matrix A may be written in terms of the matrix differential itself [9]: dλ = tr uu * (dA) . Proof. The result is proven directly using Formula (2.42). The reader should note that Theorem 2.20 could also be used to derive the canonical gradient ∇Det(A) via Formula (2.40). An example from statistics We now derive the maximum likelihood estimator (MLE) for the singular covariance of the degenerate multivariate Gaussian distribution. Thus, this section may be considered an extension of the results found in [5]. The MLE may be incorporated into more advanced statistical algorithms such as expectation maximization for image segmentation [2]. The formulas derived in the following are also potentially useful in a Hamiltonian Monte Carlo algorithm for Bayesian inference over reduced-rank covariance matrices (cf. [10]). Let x 1 , . . . , x N follow a degenerate Gaussian distribution with mean µ and singular covariance Σ. The probability density function of such a random variable x i is given by Assuming that µ is known, the log-likelihood ℓ(Σ) of Σ is proportional to where R is the matrix of residuals. To obtain the MLEΣ, we obtain the gradient of ℓ(Σ) and set it to zero, just as in the case of a full-rank covariance matrix. To calculate the second term in the log-likelihood, we need the formula for the matrix differential of the pseudo inverse [7]: dΣ + = −Σ + (dΣ)Σ + + Σ + Σ + (dΣ)(I − ΣΣ + ) + (I − Σ + Σ)(dΣ)Σ + Σ + . Thus only with that key assumption are we able to reproduce the classical result for full rank Σ. If we are not willing to make this assumption, i.e. if we have prior belief that, or have set up our model in such a way that, the range of Σ is a predetermined subspace, then the above equation may be written ThenΣ is precisely the projection of the residual matrix R/N onto the range of Σ.
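To make the closing result concrete, the sketch below evaluates the degenerate-Gaussian log-density using the pseudo determinant and pseudo inverse, and forms the covariance estimate by projecting R/N onto a prescribed range. It is a sketch under explicit assumptions: the range of Σ is taken as known through an orthonormal basis U, the projector is applied symmetrically (P (R/N) P with P = U Uᵀ), and the density is the standard degenerate form with the rank k in place of the full dimension.

```python
import numpy as np

def pseudo_det(S, tol=1e-10):
    """Pseudo determinant and rank of a symmetric positive semi-definite matrix."""
    w = np.linalg.eigvalsh(S)
    w = w[w > tol]
    return w.prod(), w.size

def degenerate_gaussian_logpdf(x, mu, Sigma):
    """log f(x) = -k/2 log(2*pi) - 1/2 log Det(Sigma) - 1/2 (x-mu)' Sigma^+ (x-mu)."""
    pdet, k = pseudo_det(Sigma)
    d = x - mu
    quad = d @ np.linalg.pinv(Sigma) @ d
    return -0.5 * (k * np.log(2 * np.pi) + np.log(pdet) + quad)

def projected_covariance_mle(X, mu, U):
    """MLE sketch when the range of Sigma is fixed to span(U); U has orthonormal columns."""
    R = (X - mu).T @ (X - mu)          # residual matrix, N observations in the rows of X
    P = U @ U.T                        # orthogonal projector onto the prescribed range
    return P @ (R / X.shape[0]) @ P

# Illustrative use: 3-dimensional data living on a known 2-dimensional subspace.
rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.normal(size=(3, 2)))   # assumed range basis
Z = rng.normal(size=(500, 2))
mu = np.zeros(3)
X = Z @ U.T                                    # rank-deficient samples
Sigma_hat = projected_covariance_mle(X, mu, U)
print(np.round(Sigma_hat, 2))
print(degenerate_gaussian_logpdf(X[0], mu, Sigma_hat))
```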
Alcohol screening for older adults in an acute general hospital: FAST v. MAST-G assessments Aims and method Documented prevalence of alcohol misuse among older adult patients at Birmingham Heartlands Hospital is significantly lower than the national prevalence. We aimed to evaluate our alcohol misuse screening protocol for older adults to identify possible shortcomings. Hospital protocol is to screen all adults for alcohol misuse in the accident and emergency (A&E) department using the Fast Alcohol Screening Test (FAST). One hundred consecutive consenting in-patients aged 65-94 admitted via A&E subsequently undertook an additional alcohol screening test (Michigan Alcoholism Screening Test-Geriatric version; MAST-G). Results of the two tests were compared. Results FAST screening was completed for 71 patients and none were FAST-positive for alcohol misuse, yet using MAST-G, 18 patients scored positively for alcohol misuse. FAST screening failed to identify 8 patients with a documented history of alcohol misuse. Clinical implications Older adult alcohol misuse prevalence is significantly underreported using FAST. Screening older adults for alcohol problems requires a different approach to screening the general population. Declaration of interest None. We hypothesised that there is no difference in using FAST or MAST-G to identify older people with increasing alcohol intake in an acute hospital setting. We aimed to conduct further screening and medical history review to better understand the pattern of alcohol misuse in the older adult population in our hospital and compare the true prevalence of alcohol misuse to that identified by our current screening protocol. Method Participants Consecutive in-patients aged 65 or over admitted to the acute medical unit via the accident and emergency (A&E) department were identified. Patients were excluded from participation if they were medically unfit for interview, if they were acutely confused, or if communication in English was difficult. Eligible patients were invited to undertake the MAST-G alcoholism screening test. The test consists of 24 questions about alcohol use habits with yes/no responses. A score of five or more questions answered positively indicates alcohol misuse (sensitivity 94.9% and specificity 77.8% in older adults). 17 The first 100 eligible patients who gave verbal consent and who completed MAST-G were included in the service evaluation; 7 eligible patients did not give their consent to participate.
Procedure MAST-G was completed either by patients themselves or by a member of staff reading questions aloud and recording patient responses. Participating patients' A&E notes were retrospectively examined and their FAST score documented; the MAST-G scores and the A&E FAST scores were compared. History of alcohol misuse was identified from patients' records from previous hospital admissions. Analysis Results were analysed using SPSS (version 19 for Windows). Frequency data are reported as n (%). Non-parametric data were analysed using related-samples Wilcoxon signed rank test, Fisher's exact test and Spearman's rank correlation coefficient. Two-tailed P-values are given: a threshold of P < 0.05 was used to determine statistical significance. Population One hundred older adults completed a MAST-G questionnaire, answering all 24 questions. Their median age was 79 years (interquartile range 73-86, range 65-94, n = 100). The majority (61%) were female. The most commonly documented reasons for patients' admission to the acute medical unit were fall (n = 14), shortness of breath (n = 10) and chest pain (n = 9). FAST and MAST-G scores among different patient groups Of the 100 participants, 71 (71%) had a FAST score documented in A&E (Table 1). In none of these was the FAST score positive. In contrast, 18 participants (18%) subsequently scored positively for alcohol misuse using MAST-G. The difference in patients' scores between the two tests was statistically significant (P < 0.0001). Among the 18 patients with positive MAST-G scores, 12 had scored negatively for alcohol misuse on FAST, 4 patients had been unable to answer FAST in A&E and 2 patients were not asked FAST screening questions. Hazardous or harmful alcohol misuse was documented in the medical records of eight patients. Six of these patients had answered FAST questions in A&E (one patient was unable to answer, one patient not asked), but none scored positively on FAST. Seven out of the eight patients (87.5%) subsequently scored positively on MAST-G (Fig. 1). There was a significant association between those older adults who had a history of alcohol misuse and those who scored positively on MAST-G (P < 0.0001). Men were significantly more likely to score positively on MAST-G than women (12/29 men and 6/61 women scored positively; P = 0.015). No correlation was observed between MAST-G score and patient age (Spearman's rank correlation coefficient -0.164; P = 0.1). The question most frequently answered 'yes' by participants with a history of alcohol misuse was 'Does having a drink help you sleep?': 75% answered 'yes' compared with 22.8% of patients with no history of alcohol misuse (Table 2). Main findings The current hospital screening protocol (FAST) did not identify alcohol misuse in any of the 100 participating older adults, including 8 patients with a documented history of alcohol misuse. On further screening, 18% of the same older adults were identified as misusing alcohol using MAST-G; this proportion is more consistent with previously reported national figures. 1 This service evaluation highlights a difference between the number of older adult patients identified as misusing alcohol using our standard practice (FAST in A&E) and those identified after admission using an alternative screening test.
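The group comparisons reported above can be run with standard routines; for example, the sex difference in MAST-G positivity corresponds to the 2 × 2 table built from the reported 12/29 men and 6/61 women. The snippet below is illustrative rather than a re-analysis: the study used SPSS, so the SciPy calls are not claimed to reproduce the published P-values exactly, and the age and score vectors in the Spearman example are placeholders because individual-level data are not published.

```python
from scipy.stats import fisher_exact, spearmanr

# MAST-G positive vs negative, men vs women (counts taken from the Results)
table = [[12, 29 - 12],   # men:   positive, negative
         [6, 61 - 6]]     # women: positive, negative
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"Fisher's exact: OR = {odds_ratio:.2f}, P = {p_value:.3f}")

# Spearman correlation between MAST-G score and age (placeholder data, not the study's)
ages = [79, 83, 68, 91, 74, 88, 70, 85]
scores = [2, 6, 1, 3, 0, 5, 4, 1]
rho, p = spearmanr(ages, scores)
print(f"Spearman rho = {rho:.3f}, P = {p:.3f}")
```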
Targets for alcohol screening in adults The National Institute for Health and Care Excellence recommends that National Health Service (NHS) professionals should 'routinely carry out alcohol screening as an integral part of practice'. 18 Locally, as part of the Making Every Contact Count campaign, the Heart of England NHS Foundation Trust mandated that 50% of A&E attendees be screened for alcohol misuse and applied a financial penalty if this target was not met. This target was met in our subpopulation (71% were screened) but our results suggest that screening did not contribute to an increased awareness of, or service provision for, older adults admitted with alcohol misuse issues during this service evaluation. An aging population with increasing alcohol misuse Increasing alcohol misuse among older adults in the UK leads to health and social problems, as well as increasing the risk of accidents necessitating hospital admission and causing significant mortality. 2,3,19,20 Both acute and longterm complications of alcohol withdrawal are more severe in older adults. 21,22 The Royal College of Psychiatrists' recommendation to reduce the 'safe limit' for alcohol intake from 21 units per week for men and 14 units per week for women to 11 units per week for older adults reflects the increased risks of alcohol misuse in this population. 21 This trust has recognised the increased prevalence and risks of alcohol misuse among older adults, but our results compared with historical annual prevalence figures (documented as 1% in this trust) highlight an ongoing failure to recognise alcohol misuse in this population, despite the achievement of local screening targets. Older adults: a difficult population to screen Other researchers have found older adults a difficult population to screen for alcohol misuse, depression and delirium, 23-25 possibly due to stigma attributed to alcohol by adults in this population 26 or unwillingness to disclose information perceived to contribute little to an acute medical or surgical assessment. However, results from our evaluation show that most eligible older adults were willing to discuss alcohol misuse when invited to take part in the evaluation: only seven patients refused consent. Correlation with documented alcohol misuse indicates that those participants who undertook MAST-G were, on the whole, open about their drinking habits. It is therefore worth considering what other factors could have contributed to our findings. By investigating staff attitudes to older adult alcohol misuse, previous studies have found hospital staff to have less suspicion and more tolerance of alcohol misuse in older adults than the working-age population, 27 resulting in low levels of detection and low levels of referral for recognised problems. 28 In addition, it may in fact be harder for staff to detect alcohol misuse in older adults than it is in workingage adults 23 -notably, the pattern of alcohol misuse can be different in older adults. 29 MAST-G was developed to account for these differences, as FAST questions about meeting work responsibilities and concerns voiced by professionals or family members may be less relevant to older adults. MAST-G lifestyle-specific questions about daytime somnolence and social withdrawal aim to maximise sensitivity to typical presentations of alcohol misuse in older adults. 
Another reason for the difference between FAST results in A&E and MAST-G results in the acute medical unit could have been the different physical and temporal environments in which screening took place. FAST screening contributes to a battery of questions and investigations in A&E, where patients may feel stressed or time-pressured, and healthcare staff have a large number of clinical decisions to prioritise. Previous studies have suggested that the busy A&E environment could negatively affect clinicians' screening methods and documentation. 27 Indeed, response rates were lower for FAST in A&E compared with MAST-G in the acute medical unit: 13/100 (13%) of the study population refused to answer FAST (and 14 were unable to answer) compared with 7/107 (6.5%) eligible patients who refused MAST-G. It is possible that the slower paced questioning afforded by MAST-G in a more relaxed environment enabled patients to consider their answers more carefully than those given during FAST, and allowed clinicians more time to engage with participants' responses. If this is the case, and patients' environment affects their willingness to undergo screening, we should consider even more carefully our policy of undertaking screening of older adults in A&E. Practice implications highlighted by this service evaluation Throughout patient interactions as part of this service evaluation it has become clear that many older adults do not recognise or report their significant alcohol misuse as Table 2 The six Michigan Alcoholism Screening Test -Geriatric version (MAST-G) questions to which older adults with a history of alcohol misuse most frequently answered 'yes' Question Patients with a history of alcohol misuse answering 'yes' (n = 8), n (%) Patients with no history of alcohol misuse answering 'yes' (n = 92), n (%) harmful. It falls to healthcare staff to be opportunistic in their direct questioning about alcohol misuse and to give advice appropriately. Lack of opportunistic diagnosis and intervention from healthcare professionals not only inhibits optimal care during hospital admission, but denies patients the chance to make informed decisions about their longterm health and access to out-patient services. A proactive approach appears key to our discussions about alcohol use. This evaluation suggests that our current process is grossly underestimating true prevalence of older adult alcohol misuse. FAST has a sensitivity of 93% for alcohol misuse in the general population, but its sensitivity in older adults is less certain. The current evaluation has not identified whether it is the screening tool itself, the environment in which patients are screened, or the clinician-patient interaction during screening which has lowered our sensitivity for alcohol misuse so significantly. Importantly for our future practice, although our results indicate that MAST-G may be a more sensitive screening tool for our population, the time taken to complete its 24 questions makes it an unattractive screening test for A&E. In identifying problems with our current screening system, this service evaluation has not suggested a comparable alternative. Further prospective work is needed to determine the best way for us to accurately identify older adult alcohol misuse both efficiently and sensitively. Early opportunistic diagnosis of alcohol misuse has clear benefits for both primary and secondary healthcare provision; undiagnosed alcohol misuse has significant social and economic implications, as well as an impact on physical health. 
21 Given the findings of this service evaluation, research into the sensitivity of alcohol screening of older adults in the community will complement our work. Limitations At this hospital every adult A&E attendee considered fit to answer screening questions is offered 4-question FAST screening, whereas only eligible medical in-patients willing to take part in this evaluation undertook the longer 24-question MAST-G and formed part of our analysis. The first limitation of this approach is that we have not evaluated directly comparable tests, but nor did we aim to do so. Our aim was to identify shortcomings in our current screening method in light of putative inconsistencies with national data. We have not identified an improved method of screening as a result of this study. Rather, we have confirmed that our current procedure is failing to identify important information which patients are, in fact, willing to disclose in an alternative environment. The second limitation resulting from our selection of in-patient participants is that our sample population does not fully represent the population of older adults attending A&E or, indeed, the wider population in the community. It is possible that alcohol misuse could have been a precipitating factor for medical admission among interviewees, leading to a biased sample and skewed results. In addition, we did not ask individuals' reasons for withholding consent to complete MAST-G (seven eligible patients refused consent). Selection bias may have increased if participants unwilling to discuss ongoing alcohol misuse withheld consent to avoid detection and intervention. When considering the true prevalence of alcohol misuse in this population, clinical history of alcohol misuse was taken from patients' recent medical notes. As such, any undocumented alcohol misuse was counted as absence of alcohol misuse, leading to a potential underestimation of misuse prevalence. This limitation does not negate our findings; rather, it adds weight to our need for an effective screening protocol. As a result of the screening protocol used in our typically busy general hospital A&E department, alcohol misuse went unidentified in a population with both known (documented) and freely volunteered (via MAST-G) alcohol misuse. Studies comparing FAST with MAST-G in the same temporal and physical setting will complement this initial evaluation, allowing us to understand why we are failing to identify misuse. Analysis of the reasons for discrepancies in the results from those two screening tools will aid our understanding of how to sensitively and efficiently screen for alcohol misuse in our older adult population.
A current sensor based adaptive step-size MPPT with SEPIC converter for photovoltaic systems Efficient maximum power point tracking (MPPT) is an important problem for renewable power generation from photovoltaic systems. In this work, a current sensor based MPPT algorithm using an adaptive step-size for a single ended primary inductance converter (SEPIC) based solar photovoltaic system is proposed. Due to lower sensitivity of power to current perturbation as compared to the voltage one, such a scheme is shown to yield better efficiency at steady-state. A new adaptation scheme is also proposed for faster convergence of the MPPT technique. Hence, the proposed scheme yields better transient as well as steady-state performance. A prototype converter is used along with digital implementation of the proposed MPPT technique to demonstrate the superiority of the proposed algorithm over the fixed step-size and voltage based ones. Simulation and experimental results corroborates the same. INTRODUCTION Photovoltaic (PV) power generation is commonly used as renewable energy source because of the advantages, e.g. pollution free, noiseless, lesser maintenance and easy to install in distributed fashion and in varied sizes [1,2]. The output of PV module depends on operating conditions, such as PVcell temperature and insolation-level [3,4]. Researchers developed various techniques to extract maximum power from the PV sources. Some of the MPPT techniques are perturb and observe (P&O) [5][6][7], hill climbing (HC) [8], incremental conductance (IncCond) [9,10], fractional voltage/current MPPT control [11], fuzzy-logic (FL) [12,13], neural network (NN) [14,15], optimization techniques [16], and sliding mode (SM) control [17][18][19]. Among the conventional MPPT techniques, P&O and the IncCond techniques are widely used due to their simplicity yet being efficient [20]. A demerit of P&O like algorithms is that it drifts away from MPP [5] for sudden changes in insolation leading to lesser efficiency. In [21], the drift problem is presented and resolved by modified P&O algorithm. Other machine learning MPPT techniques, e.g. NN [14], FL [12], SM This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited. © 2021 The Authors. IET Renewable Power Generation published by John Wiley & Sons Ltd on behalf of The Institution of Engineering and Technology [17], and optimization techniques, show improved performance. But these are not commonly used due to need of expensive controllers and complexity for implementation and big data processing for the training of the system to enhance the tracking accuracy [22]. In conventional MPPT techniques, the perturbation stepsize is selected by considering convergence speed and steadystate performance. For faster convergence and to enhance the steady-state response, an adaptive step-size MPPT technique is proposed in [21]. In such adaptive methods, the stepsize is a linear function of either the derivative of power to duty-cycle (dP pv ∕dD) [23] or the derivative of power to voltage (dP pv ∕dV pv ) [24]. This leads to transient performance is improvement by reducing the tracking time. However, most of these methods are applied to voltage sensor based MPPT techniques. Both the current and the voltage sensors are used for implementing some of the MPPT algorithms, e.g. P&O that leads to increased cost [5][6][7][8][9][10][11][12][13][14][15][16]. 
Only a handful of techniques use either a voltage [21,25] or a current sensor [26]. Only voltage based methods are considerably cheaper as compared to the current based methods due to larger cost involved in current sensing. On the other hand, it is well known that the PV voltage varies logarithmically with solar insolation [27]. It makes all the voltage sensor based MPPTs to be less sensitive for large change in irradiation and hence yields slower convergence (due to large insolation change) when initial point is far away from the MPP. It is otherwise for the current sensing since the PV current varies linearly with insolation level. This property is very useful for a fast current sensor based MPPTs than voltage sensor based one for frequent change in insolation [27,28]. In the context of choice of converters, several different converters are used for impedance matching to obtain maximum power from the PV systems, e.g. buck [26], boost [2], buckboost [29], SEPIC [5,27] converters. The selection of converter topology depends on the PV module voltage and the load voltage [30]. For example, the buck topology is used only for the cases where the PV module voltage is always higher than the load voltage. Similarly, the boost topology is applicable if the PV module voltage is always lower than the load voltage. Clearly, both are extreme ended considering the fact that insolation variation in a day is over a wide range. On the other hand, the drawback of buck-boost converter is that the output is inverted which results in complex sensing and feedback circuit. It also has discontinuous input current that limits the ability of the converter. A comparison of several buck-boost converters from different point of view is given in [31]. Among these, although the SEPIC converter has poor efficiency and higher cost, it still has the merits of non-inverting polarity, continuous input current, works as buck-boost converter over wide range and low input current-ripple [5]. It is suitable for either voltage or current applications [32,33]. The main losses in the DC-DC converters are switching losses in the MOSFET and the diode, copper losses in the inductor windings and inductor core losses. A comparison of losses in converters is given in [34], it can be seen that all the converters have same number of switches (MOS-FET and diode), hence the switching losses will be the same. The losses in the inductors are lower for the SEPIC converter due to the reduced input current ripple that decreases the peak inductor current though it has two inductors. These properties make SEPIC converter a suitable candidate for PV applications. In this paper, SEPIC converter is used for development of a new MPPT technique, which is realized using a single current sensor. A Hall-effect current sensor is used for current sensing, which has many advantages like excellent accuracy and linearity, wide frequency bandwidth, optimized response time, lesser temperature drift, high resistance to external noise and current overload capacity. A new variable step-size algorithm is developed that yields a large step-size when the operating point is away from MPP. This results in faster convergence for change in MPP. An experimental setup is developed for the implementation of the Proposed MPPT technique and an ARDUINO UNO microcontroller is used for the programming of algorithm. A 40 W PV panel is considered for simulation and experiment. 
It has uses in standalone applications, such as solar lantern, solar mobile charger, small solar battery banks, solar garden lights, solar street light etc. It can also be used as portable power supply. It is shown through simulation and experimen-tal results that the proposed algorithm yields improved convergence time and thereby improves the efficiency of the system. This paper is organized as follows Section 2 presents the importance of current sensor in PV system. Section 3 presents development of the switching function for the current sensor based MPPT algorithm. A novel adaptive technique is proposed in Section 4. The design parameters of SEPIC converter is explained in section 5. Simulations as well as experimental results are shown in Section 6 and 7, respectively. This paper ends with conclusion in Section 8. THE LINEARITY OF PV CURRENT WITH INSOLATION A single-diode electrical equivalent circuit [35] is well known for modelling of PV modules. The V-I relationship of a PVmodule is given by Equation (1). where V pv and I pv are the output voltage and current of the PV-module, respectively. I rs is the reverse saturation current, R se and R p are the series and parallel resistances, respectively, n s is the number of series connected PV cells in a module, b is the ideality factor of the diode, v t = kT ∕e is thermal voltage, T is module temperature in K, e is charge on electron, and k is the Boltzmann's constant. For an ideal PV module, the series and parallel resistances in Equation (1) are zero and infinite, respectively. Then the voltage-current relationship can be written as: where I p is the photo-current and it is expressed in terms of the insolation G and the PV cell temperature T as follows. where k isc is the short circuit current temperature coefficient, I STC is the short circuit current at G STC = 1000 W/m 2 , T stc = 298 K. From Equations (2) and (3), it is clear that the V pv is nonlinear with G , whereas the PV current I pv depends linearly. Because of nonlinear relation between voltage and solar insolation, the convergence to maximum power point (MPP) is nonlinear for a voltage sensing based algorithm. However, the linear relationship between PV current with G would be beneficial for detecting solar insolation changes irrespective of the insolation level. Due to this, a current based MPPT algorithm can be adopted for uniform convergence over wide range of insolation and the same is studied in this work. It may be noted that linear characteristic is beneficial since uniform convergence can be FIGURE 1 PV system with the MPPT controller achieved using simpler logic, whereas intrinsically complex logic is required to tackle non-uniform behaviour. DEVELOPMENT OF SWITCHING FUNCTION FOR CURRENT SENSOR BASED MPPT In this section, switching functions for current sensor based MPPTs are derived. It is shown that the switching function S will depend on the converter chosen. Based on the appropriate selection of switching function, the MPPT algorithm is developed. Since the SEPIC converter is only used in this work, we start with this as described in the following. 3.1 Switching function for SEPIC converter The PV system considered is given in Figure 1. A SEPIC converter is used to supply the load form the PV panel. The duty-cycle of the converter is adapted by the MPPT controller for operation at the MPP. The proposed MPPT controller uses only one current sensor for determining the MPP. The corresponding switching function is derived below. 
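For reference, the single-diode relationships invoked in the linearity argument above (Equations (1)-(3)) take the following standard form in the paper's notation; this is a restatement rather than a new model, with the ideal case obtained by letting R_se → 0 and R_p → ∞, which makes explicit the logarithmic dependence of V_pv and the linear dependence of I_p on insolation.

```latex
% Single-diode V-I relationship of a PV module (Eq. (1)):
\[
  I_{pv} \;=\; I_{p} \;-\; I_{rs}\!\left[\exp\!\left(\frac{V_{pv}+I_{pv}R_{se}}{n_{s}\,b\,v_{t}}\right)-1\right]
  \;-\; \frac{V_{pv}+I_{pv}R_{se}}{R_{p}},
  \qquad v_{t}=\frac{kT}{e}.
\]
% Ideal module (R_se = 0, R_p -> infinity), Eq. (2), solved for the voltage:
\[
  I_{pv} \;=\; I_{p}-I_{rs}\!\left[\exp\!\left(\frac{V_{pv}}{n_{s}\,b\,v_{t}}\right)-1\right]
  \quad\Longleftrightarrow\quad
  V_{pv} \;=\; n_{s}\,b\,v_{t}\,\ln\!\left(\frac{I_{p}-I_{pv}}{I_{rs}}+1\right).
\]
% Photo-current as a linear function of insolation G (Eq. (3)):
\[
  I_{p} \;=\; \bigl[I_{STC} + k_{isc}\,(T - T_{stc})\bigr]\,\frac{G}{G_{STC}} .
\]
```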
The output voltage of SEPIC converter can be written as where V o is the output voltage and D is the duty-ratio of SEPIC converter. The efficiency of the SEPIC converter can be expressed for a load R as where R eq is the equivalent resistance of the converter. Then, R eq can be obtained from Equation (6) as The PV module power P pv can be expressed by using Equation (7) as: We can also write From Equation (9), the variation of P pv with respect to the duty ratio D can be written as: By putting the value of P pv from Equation (8) into Equation 10, we get The P pv − D characteristic waveform of a PV module is shown in Figure 2. One can observe that dP pv ∕dD is equal to zero at the MPP. Furthermore, dP pv ∕dD is greater than zero to the left of the MPP and dP pv ∕dD is less than zero to the right of the MPP. Hence, for maximum power, dP pv ∕dD = 0 and it can be calculated from Equation (12) as: Thereby, the switching function S can be written as the following: , to the right of MPP = 0, at MPP < 0, to the left of MPP (14) Note that the right of MPP indicates increased duty-ratio and the left of MPP indicates decreased duty-ratio. Hence, a MPPT controller can be developed based on the sign of switching function S . The corresponding S − D characteristic waveform is also shown in Figure 2. It is clear that S passes through zero at MPP. Switching function for buck converter The switching function S for tracking the MPP in case of buck converter can be obtained by evaluating R eq and dP pv ∕dD as follows Switching function for boost converter Similarly, the switching function S for tacking the MPP in case of boost converter can be obtained as follows From the above, it can be seen that the choice of switching function S depends on the converter type for this current sensor based MPPT technique while the conventional techniques, like P&O is independent of the DC-DC converter topology. However, for a given converter and S is chosen, the remaining of the MPPT algorithm remains the same as described in the next section. The changes in the PV module current I pv and duty-cycle D from previous iteration to the next iteration are obtained as follows. A NOVEL ADAPTIVE MPPT TECHNIQUE The position of operating point is decided by calculating sign of S . If S > 0 then the next duty-cycle D(m + 1) is decreased by ΔD(m), and if S < 0 then the next duty-cycle D(m + 1) is increased by ΔD(m) as mentioned in Equation (25). The variation in S is large during transient-state for a change in insolations say from 0 to 800 W/m 2 and from 800 to 500 W/m 2 as shown in Figure 3. Whereas the variation in S is small at steady-state as given in condition. An adaptive perturbation step-size is considered to change the ΔD, which is written in terms of S as With where is a constant to be chosen and Sign is the signum function. The value of should be calculated based on Equation (28 The upper limit of the perturbation step-size ΔD max is chosen as 0.5 (thumb rule) and lower limit ΔD min can be selected based on the steady-state performance and the resolution of the ADC (analogue to digital converter) used in the microcontroller for realizing the algorithm [26]. The lower value of perturbation step-size ΔD min = 0.005 is considered in this work. The value of ΔD will remain between ΔD min and ΔD max . The limiting values of ΔD can be taken care of by the following condition. The flow-chart of the proposed current sensor based algorithm is given in Figure 6. 
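A compact sketch of the control loop described above is given below. The switching function uses a discrete approximation of dI_pv/dD, and since Equations (14) and (25)-(29) are not reproduced here in full, the exact scaling of S and the gain value GAMMA are assumptions; only the decision logic follows the text (decrease D when S > 0, increase when S < 0, adaptive step proportional to |S| and clamped between ΔD_min = 0.005 and ΔD_max = 0.5). The example sensor readings are placeholders, not measurements from the 40 W module.

```python
D_STEP_MIN, D_STEP_MAX = 0.005, 0.5      # step-size limits given in the text
GAMMA = 0.05                             # adaptation gain (assumed value)

def switching_function(i_pv, d, di_dd):
    """S for the SEPIC stage, reconstructed up to a positive scale factor:
    S > 0 to the right of the MPP, S = 0 at the MPP, S < 0 to the left of it."""
    return i_pv - d * (1.0 - d) * di_dd

def adaptive_mppt_step(i_pv, i_prev, d, d_prev):
    """One sampling period of the current-sensor-based adaptive MPPT."""
    di_dd = (i_pv - i_prev) / (d - d_prev) if d != d_prev else 0.0
    s = switching_function(i_pv, d, di_dd)
    step = min(max(GAMMA * abs(s), D_STEP_MIN), D_STEP_MAX)   # adaptive step, clamped
    if s > 0:
        return d - step            # operating point right of the MPP: reduce duty-cycle
    if s < 0:
        return d + step            # left of the MPP: increase duty-cycle
    return d                       # at the MPP: hold

# Example update with placeholder sensor readings:
d_prev, d = 0.40, 0.405            # previous and present duty-cycles
i_prev, i_pv = 2.10, 2.16          # corresponding sensed PV currents (A)
print(adaptive_mppt_step(i_pv, i_prev, d, d_prev))
```

Because the step is driven by |S|, it is large when the operating point is far from the MPP and shrinks to ΔD_min near it, which is what produces the two-level steady-state oscillation described later in the paper.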
It can be seen that only the PV current I pv is sensed through the current sensor and thereby switching function is calculated. DESIGN OF THE SEPIC CONVERTER In this paper, a SEPIC converter is used as an interface between the PV module and the resistive load as shown in Figure 7. One advantage of this converter is that, it isolates the input and the output by using coupling capacitor C [36]. The capacitor C protects against overload and short circuit condition. In this paper the SEPIC converter is designed [37] for a 40 W PV module. The design consideration for the converter is given in Table 1. Duty-cycle consideration For continuous conduction mode (CCM) operation of the SEPIC converter, the maximum duty-cycle is calculated by By putting all the values in Equation (30), one gets Inductor selection Conventionally, the peak to peak ripple current is considered to be 20% to 40% of the maximum input current I pv at the minimum input voltage V pv (min) . Here, the peak to peak ripple current is considered to be 20%. The ripple current flowing in L in and L o can be calculated as: By putting all the values in Equation (32), one gets Thereby, the value of inductor is chosen for CCM operation as: By putting the value of ΔI L from Equation (33) into Equation (34), we get the values of inductors as: In this paper the value of inductor L in = L o = 180 H is chosen for simulation and experimental validation. Output capacitor selection The value of output capacitor is calculated by considering the output voltage ripple V ro = 0.3 V The value of C o = 220 F is selected for proposed SEPIC converter. Coupling capacitor selection The value of coupling capacitor C is calculated by considering the ripple voltage on C as V r = 1.3 V. Then The coupling capacitor C = 47 F is selected which meets the RMS current requirement that produce the small ripple voltage on C . Input capacitor selection The inductor L in is connected at the input side of the SEPIC converter. Due to this inductor the input current waveform is triangular and continuous. The inductor makes sure that the current passes through the input capacitor C in must have low ripples. The RMS current in the input capacitor is given by The input capacitor C in is selected based on the RMS current handling capability. Although in SEPIC converter C in is not so critical, a considerable capacitor value C in = 440 F or higher would prevent impedance interactions with input supply. SIMULATION RESULTS The PV module model number ELDROA 40P in MATLAB Simulink [5] is used for simulation of the PV system as given in Comparison between fixed and adaptive step-size The proposed MPPT technique is first verified by considering FSS (ΔD) = 0.005 and sampling time (T s ) = 20 ms, whereas ΔD min = 0.005 is chosen for adaptive step-size (ASS) technique [21]. The tracking performance for a decrease in insolation from 800 to 500 W/m 2 at 1 s with the FSS and ASS techniques are shown in Figure 8. The variation in PV current with respect to the change in insolation is shown in Figure 8(a). It is clear that both the techniques are effectively converges to the MPP, but the convergence time is quite large with the FSS (ΔD) = 0.005 technique. It can also be seen that the convergence time with the FSS technique during transient period for G = 800 W/m 2 is T 1 = 720 ms, and the convergence time is reduced to T 1 = 60 ms for the adaptive technique. 
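For reference, the component-selection relations applied in the converter design section above are, in their usual form, as follows; the switching frequency f_sw and diode forward drop V_D are symbols introduced here because the values of Table 1 are not reproduced, and the capacitor relations are the standard ripple-based sizing rules rather than the paper's exact expressions.

```latex
% Maximum duty-cycle for CCM operation (Eq. (30)):
\[
  D_{max} \;=\; \frac{V_{o} + V_{D}}{V_{pv(min)} + V_{o} + V_{D}} .
\]
% Inductor ripple current (20% of the maximum input current, Eq. (32)) and inductance (Eq. (34)):
\[
  \Delta I_{L} \;=\; 0.2\, I_{pv(max)},
  \qquad
  L_{in} = L_{o} \;=\; \frac{V_{pv(min)}\, D_{max}}{\Delta I_{L}\, f_{sw}} .
\]
% Output and coupling capacitors sized from the allowed ripple voltages V_ro and V_r:
\[
  C_{o} \;\geq\; \frac{I_{o}\, D_{max}}{V_{ro}\, f_{sw}},
  \qquad
  C \;\geq\; \frac{I_{o}\, D_{max}}{V_{r}\, f_{sw}} .
\]
```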
Similarly, the convergence time T 2 is reduced to 112 from 278 ms with the ASS technique as compared to the FSS technique for a decrease in insolation from G = 800 to 500 W/m 2 at 1 s. The corresponding PV voltage Figure 8(b,c), respectively. From PV power waveform, it can be observed that the steady-state oscillation is less, hence the steady-state power loss is reduced. From Figure 8(d), it can be seen that D is oscillating between two-levels at steady-state, and due to this the power loss is less compared to the three level method such as P&O as it is shown in ref. [21]. From the tracking performance given in Figure 8, it can be noticed that both the transient and the steady-state responses are improved for the proposed ASS technique. Comparison between current and voltage sensor based MPPT techniques In [21], a voltage sensor based (VSB) MPPT technique is proposed for the SEPIC converter. In this method, only PV module voltage V pv is sensed for MPP tracking and the switching function (Q) has been derived in Equation (41) to implement this algorithm, where Figure 9(a,b), it can be observed that the oscillations around MPP at steady-state is much smaller for current sensor based MPPT compared to the VSB one. For the current sensor based one, the power level is always above 30.1 W, whereas for the voltage based one, the power level goes much bellow to 30.1 W. The comparison of dynamic performance of both the MPPT techniques for the same operating condition is shown in Figure 10. The MPP convergence time of current sensor based technique is T i = 720 ms which is smaller than the convergence time of VSB technique T v = 780 ms. It is observed that the current sensor based technique is faster than the VSB technique. Figure 11 shows a comparison of S and Q variation with respect to D. From Figure 11, it can be observed that the switching function S (for the current sensor) is more regular around the MPP with a saturation characteristic visible. Moreover, it is considerably uniform (large constant value) on the right side of the MPP in comparison to the irregular variation in Q (for voltage based). Hence, it is expected that the switching function S will work better than Q for large change in insolation. Simulation results with different operating conditions The simulation results with different loads are shown in Figures 12 and 13 for uniform insolation level G = 800 W/m 2 and temperature T emp = 43 • C. The generated PV power shown in Figure 12 is same for all the loads. It is also clear that the proposed CSB technique is able to track the MPP for different load conditions and the MPP convergence time is same irrespective of the loads. From Figure 13, it is clear that the load power P o is different for all the loads. The load power increases with increase in the load resistance due to the varying efficiency of the converter. For series RL and parallel RC loads, the PV power variations are shown in Figure 14. It is clear that, for both the RL and RC loads,responses are same as the resistive one in Figure 12. However there are small changes in the transient response, though the settling times are almost similar. The simulation result with variable temperature is given in Figure 15 for uniform insolation level G = 800 W/m 2 . Any PV module generates maximum power at standard test condition (G = 1000 W/m 2 and temperature T emp = 25 • C). It can be The proposed CSB MPPT technique is also studied for different switching frequencies and the resulted PV power is shown in Figure 16. 
It can be observed that the PV power is almost same for these frequencies. The average PV power is different due to oscillations around the MPP at the steady-state. 6.4 Performance comparison FIGURE 17 Comparison of adaptive current sensor based MPPT technique and enhanced auto scaling IncCond MPPT technique with resistive load FIGURE 18 Comparison of duty-cycle at steady state for CSB (ASS) MPPT and EAS IncCond MPPT techniques Another important condition is defined to detect large variation in either load or insolation as [36]: In this paper, for the sake of comparison, the result of [36] is also simulated with Z = 1.64 and E = 0.1092 calculated from Equations (42) and (43), respectively, for change in insolation level from G = 0 to 800 W/m 2 . The large step-size ΔD LS = 0.03 and small step-size ΔD SS = 0.005 are considered. The PV power convergence responses for enhanced auto-scaling incremental conductance (EAS IncCond) MPPT [36] and CSB MPPT with ASS are shown in Figure 17, it can be seen that the MPP convergence time for EAS IncCond technique T 2 = 140 ms, which is larger than the convergence time for CSB (ASS) technique T 1 = 60 ms. This is due to the fixed step-size (though overall adaptive) used for the transient period, hence the convergence depends on how large step-size is chosen. From Figure 18, it can be seen that the EAS IncCond MPPT technique has three-level operation around MPP, whereas with the proposed technique have two-level operation. It is shown in ref. [21] that two-level operation is beneficial. Due to this, the oscillation around MPP in PV power is less with the proposed technique as compared to the EAS IncCond technique and this can be seen in Figure 17. Next, a comparison of efficiencies is made. The average MPP tracking efficiency mpp(avg) is calculated as [36]: where P mpp(avg) is the extracted average maximum power from the PV module. P * mpp(avg) is the available average maximum PV power. For insolation level G = 800 W/m 2 , it is 30.28 W. The PV power reaches at steady-state after time t = 0.14 s for both the techniques as shown in Figure 17. Hence, the values of P mpp(avg) and the corresponding average efficiencies mpp(avg) are calculated for the time ranges t = 0 to 0.2 s and t = 0.2 to 1.0 s to differentiate between the transient and steady-state efficiencies as shown in Table 2. It can be observed that the average efficiency is improved with the proposed adaptive stepsize CSB MPPT technique considerably during transient-state, whereas the average efficiency is almost similar at steady-state since the small step-size ΔD SS and minimum step-size ΔD min are same for the two techniques. A comparison of PV power for CSB (ASS) MPPT technique and EAS IncCond MPPT technique with battery load (12 V, 7 Ah) is shown in Figure 19. It can be observed that the MPP convergence times T 1 = 40 ms and T 2 = 100 ms with battery load are reduced as compared to the convergence time with resistive load shown in Figure 17. This is because the SEPIC converter works in buck-mode as the battery voltage is less than the V mpp . Hence the average tracking efficiencies are improved at transient-state due to the reduced tracking time and almost similar at steady-state as compared to resistive load. This shows that the proposed MPPT technique performs well for battery load as well. 
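The average tracking efficiency of Equation (44) reduces, as described above, to the ratio of the extracted average power to the available maximum power over a chosen window. The sketch below evaluates it separately over the transient (0 to 0.2 s) and steady-state (0.2 to 1.0 s) windows used in Table 2; the power trace is synthetic, only to illustrate the bookkeeping.

```python
import numpy as np

# Sketch of the average MPP-tracking efficiency of Equation (44):
# eta_mpp(avg) = P_mpp(avg) / P*_mpp(avg), evaluated separately over the
# transient and steady-state windows as in Table 2. The PV power trace
# below is synthetic, not a simulation output.

P_available = 30.28                       # available MPP power at 800 W/m^2 [W]
t = np.linspace(0.0, 1.0, 1001)           # time axis [s]
# Synthetic extracted power: exponential rise plus a small steady-state ripple.
P_extracted = P_available * (1.0 - np.exp(-t / 0.04)) \
              - 0.05 * np.abs(np.sin(2 * np.pi * 50 * t))

def avg_efficiency(t, P, t_lo, t_hi, P_avail):
    mask = (t >= t_lo) & (t < t_hi)
    return P[mask].mean() / P_avail

eta_transient = avg_efficiency(t, P_extracted, 0.0, 0.2, P_available)
eta_steady    = avg_efficiency(t, P_extracted, 0.2, 1.0, P_available)
print(f"transient eta = {eta_transient:.3f}, steady-state eta = {eta_steady:.3f}")
```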
EXPERIMENTAL SETUP AND VALIDATION To verify the tracking performance and functionality of the current sensor based technique, an experimental model of the SEPIC converter with controller circuit is designed in laboratory. The converter have the same parameters as given in Section 6. An ARDUINO UNO controller has been considered for implementation of the proposed technique and to provide the desired switching signal to the converter. The experimental setup of the PV system is shown in Figure 20. To implement the current sensor based technique, current measurement is required. The current is measured using LEM LTS6-NP hall effect current transducer. An IRFIZ44N power MOSFET, STPS2045CT power Schottky diode and HCPL3120 gate driver ICs are used for SEPIC converter design. An ELDORA 40P PV model having the same parameters as given in Section 6 is used to perform the experiment and halogen lamps are used for artificial insolation. The insolation of light is controlled by manual switches and the insolation is measured by using the solar power meter WACO 206. The current sensor based MPPT technique with the fixed and ASS techniques are compared for a change in insolation. The start-up tracking performance with the FSS technique at approximately G = 800 W/m 2 is given in Figure 21(a), and it can be noticed that the start-up convergence time is T 1 = 2500 ms. The tracking performance with the FSS technique for a change in solar insolation level approximately from G = 500 to 800 W/m 2 and from G = 800 to 500 W/m 2 are given in Figure 21(b,c), respectively. It can be observed that the convergence time to reach at MPP are T 2 = 2000 ms and T 3 = 1300 ms, respectively. Similarly, the tracking performance with ASS technique corresponding to start-up and change in solar insolation levels approximately from G = 500 to 800 W/m 2 and from G = 800 to 500 W/m 2 are shown in Figure 22(a-c), respectively. From Figure 22, it can be observed that the convergence time T 1 is reduced to 1000 from 2500 ms, T 2 is reduced to 600 from 2000 ms and T 3 is reduced to 550 from 1300 ms with the ASS technique compared to the FSS technique. Thus, the ASS technique is effective in terms of reduced convergence time. For evaluation of the steady-state performance of the proposed technique, experiments are performed with sampling time T s = 1 s, and the corresponding waveforms are shown in Figure 23. It can be observed that the proposed MPPT technique is giving two-level operation during steady-state, which effectively reduces the steady-state power loss as compared to EAS IncCond technique [21]. The average efficiencies of the experimental results are calculated based on Equation (44) and shown in Table 3, where the value of P * mpp(avg) is 30.28 W for the insolation-level 800 W/m 2 . From Figures 21(a) and 22(a), it is clear that the PV voltage and the current reach steady-state after time T 1 = 2.5 s. Hence the average extracted maximum powers P mpp(avg) and the corresponding average efficiencies mpp(avg) is calculated for the two time ranges, one for 0 to 2.6 s and the other for 2.6 to 9 s to differentiate the transient and steady-state efficiencies as shown in Table 3. 
It can be observed that the efficiency mpp(avg) is considerably improved with the proposed adaptive step-size Experiments are also performed for lead-acid battery load (12 V, 7 Ah) using CSB technique with ASS and EAS IncCond [36] technique, and the corresponding convergence responses for change in insolation-level from 0 to 800 W/m 2 are shown in Figures 24 and 25, respectively. It is clear that both the techniques are effectively tracking the MPP point with the battery load. The SEPIC converter always operates in buck mode, because the rated battery voltage is less than the maximum power point voltage (V mpp ). Hence, the variation in the dutycycle is less for change in insolation level, due to which the MPP convergence time (600 ms) is reduced with the battery load as compared to the resistive load (1000 ms). The average track- ing efficiencies are also calculated for the results shown in Figures 24 and 25 for the two time ranges as shown in Table 4. It can be observed that the efficiencies for resistive load and battery load are almost similar for both the time ranges. Hence, the proposed technique improves the performance for battery load as well. The comparison of simulation and experimental results using CSB (ASS) MPPT technique are shown in Table 5. CONCLUSION This paper considers the MPPT problem for PV system. A current sensor based MPPT algorithm using a novel ASS method to control the duty-cycle of SEPIC converter is proposed. Due to the linear property of current with insolation change, the proposed MPPT ensures uniform convergence compared to the voltage sensor based one. The same is validated by simulation and experimental results. The presented results show the steady-state power deviation is lesser in the current sensor based algorithm as compared to the voltage sensor based one due to the low sensitivity of the current sensor. Also, the proposed
2021-05-11T00:03:30.403Z
2021-01-20T00:00:00.000
{ "year": 2021, "sha1": "5afe6555c584ca90c6639b82bb2c41476bbdc5f1", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1049/rpg2.12091", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "ed10aa8cb76738b7be15f9322884946aad85176a", "s2fieldsofstudy": [ "Engineering", "Environmental Science" ], "extfieldsofstudy": [ "Computer Science" ] }
169173574
pes2o/s2orc
v3-fos-license
Portfolio of Loans, Guarantees and Provisions Economic development draws on the resources of society. In addition to labour and capital accumulation, financial resources play an important role in economic evolution and growth. Economic agents that do not currently hold the necessary financial resources turn to bank loans. From this perspective, the banking system is important in ensuring the financing of economic activities. The involvement of banks in lending entails risk-taking, which is often imminent. Banks therefore hold a loan portfolio that must be managed prudently. An important element in hedging these risks is provisioning. Provisions are formed at the beginning of the following year from the income reported in the balance sheet of the previous year and must be weighted against the level of the pre-calculated risks. The authors also note that bank risks can, in the end, exceed the level of the provisions set up to cover the losses. This underlines the need for the guarantees attached to loans, especially long-term loans, to be sound, real and secured, so as to cover the effect of the disappearance or diminution of the value of these guarantees. In this study, the authors focus on analyzing the correlation between the loan portfolio and provisioning. Relevant data are presented to highlight the level of contracted loans and, on this basis, the provisions made. Statistical and econometric methods and models (data series, tables, graphs etc.) are used to quantify the correlation between the loan portfolio and the provisioning requirements needed to cover credit risks. Introduction In the national economy, the use of the credit system to meet the needs of the economy is, of course, very important. Every bank has limits up to which it can grant loans; these limits are monitored under the rules of the national bank, in our case the NBR, and the loans are used by economic agents to supplement their sources of financing so that they can carry out activities that are effective and produce results. At commercial banks, credit quality is the central risk issue, in the sense that when risks are not sufficiently well correlated they lead to bad loans and to losses large enough to be recorded by the banks. In banking management, the task is to analyze permanently the correlation between the loan portfolio and the funds set aside to hedge it. In this article, the authors refer extensively to the provisions made by commercial banks out of their net results, so that the eventual occurrence and manifestation of credit risks does not affect the activity and the results recorded in the national economy by the Romanian banking system. We may recall that in the last decade of the last century a large number of commercial banks in Romania became insolvent and then went bankrupt because the effects of credit risk, as well as of other banking risks in the overall risk system, were not correctly correlated, highlighted and managed. This article describes in concrete terms the situations in which, as a result of a lack of correlation between the loan portfolio and the provisions constituted, the portfolio may be affected by banking risks. It should also be borne in mind that, in the case of loans, the economic agents who make use of them must put up solid guarantees that can be executed in the event of non-performance, thereby ensuring the recovery, in whole or in part, of the credits that have been granted.
Also, with reference to the classification of credits for the provisioning of provisions, some clarifications have been made regarding the provisions of Regulation no. 1/2018 of the National Bank of Romania, applying IFRS no. 9. Research methodology and data. Results and discussions Analysis of the loan portfolio with a view to provisioning is a stage in the credit risk management that precedes the credit granting process and aims at providing resources to cover the losses that may occur in the loan portfolio. Determining the level of provisions for credit risk is determined by the banking regulations in force and the sensitivity of the loan portfolio. In this respect, the provisions of Regulation no. 1/2018, for the classification of loans, for the provisioning of provisions, of the National Bank of Romania. This regulation implements the provisions of IFRS 9. In the data processed and presented, taking into account that banks have in their portfolio and long-term loans contracted several years ago, we also took into account the previous regulations. First, we classify the loans into five categories according to their specificity and for each category a provisioning level is set: Clients 'classification is made taking into account the assessment of the clients' financial performance and their ability to honor the accumulated debt on maturity. Financial performance is evaluated by each bank and, in this context, credits will be included in one of the following categories: -Category 1: Very good performance. which allow for debt to be matured while maintaining that performance; -Category 2: good performance but fairly certain on average; -Category 3: satisfactory financial performance, worsening trend; -Category 4: low and cyclical financial performance; -Category 5: losses and inability to repay. In general terms, the debt can be regarded as good (repayments after maturity, with a maximum delay of 7 days); (delays up to 30 days) and inadequate (delays over 30 days. Depending on these two criteria, the loans are classified according to the data in the following table: According to the lending policy, mature and doubtful maturities can be considered differently. Thus, the effective treatment of these loans involves either collecting funds from reduced claims or switching to losses, or renegotiating for the cancellation of the claim. Part of the interest rate charged by the bank on loans granted may be a way of indirectly financing risks. When determining the average interest rate negotiated with the client, the bank considers two strategies: pricing at a cost or charging according to the bank solvency rate. Cost pricing of the bank interest rate is calculated based on the cost of credit resources used by the bank. If credit resources come from a bank, a weighted average cost of borrowed funds is used. When special resources are used then their cost should be corrected by a percentage share of general expenses. The ROI is calculated based on the Financial Revenues Rate (RRF) using the relationship: Where: PR = return on profit; RRF = Financial Return Rate; t = average (average) loan share; k = capital; p = placements. Charging on own-fund coverage determines that at these base rates a risk premium is added that expresses the static risk of default. appreciated by credit quality, based on internal data from earlier periods, in terms of comparability in terms of the economic cycle phase. 
The warranty is the name used to designate any method, instrument or commitment that is ancillary to the loan agreement made available to or in favor of the bank by virtue of the contract concluded to provide the bank with a clear guarantee of the guaranteed rights, credit and cost, including interest, in the event of default by borrowers. The guarantees are executed in the event of the debtor's insolvency to recover the uncovered debit. The obligation in the credit agreement is the main obligation. The customer is obliged to repay the due installments and the related interest on the contracted terms. The additional guarantee gives rise to a second relationship. If the initial commitment was not fulfilled, the bank shall call for the guarantee to be executed. There are many types of collateral, with different features and uses, requiring specific documentation to allow banks to enjoy the rights that these guarantees give them. Providing additional guarantees has. in principle, two alternatives, namely: he guarantees with material goods, immovable property, land or financial assets which he makes available to the bank in the form of a mortgage, pledge, bank deposit etc . these are called real collateral or call on a guarantor he/she will record the obligation to pay off the debt if the one for whom he/she guarantees (the debtor) fails to fulfill his obligation under the loan agreement. Thus, it issues and presents a bank guarantee letter. In order to fulfill its purpose, the guarantor must meet certain requirements as follows: the existence of a patrimony independent of the contractual relationship, sufficiently large and demanded over time. to cover the guaranteed obligation; the guarantee is designed in such a way as to enable the bank to execute it without the debtor's opposition; the bank, as the beneficiary of the collateral, has a high degree of liquidity if it is to be executed if the client does not repay its debt. The main types of collateral used and defined in a bank's internal lending rules may be set out below. The mortgage serves to guarantee the debtor's obligations to his lender by means of a piece of immovable property in the patrimony, which is legally designated for that purpose. This guarantee is that. if at the maturity date the borrower does not reimburse the related rates and obligations, the bank may require the sale of the mortgaged property and from the amount obtained to cover its receivables. In practice, there are two major issues that need to be considered in the case of the mortgage. The first is how to evaluate. Even if a professional assessment is obtained, unexpected (force majeure) events can alter the real value of collateral. To avoid losses, the mortgaged property is secured to an insurance company and the insurance policy is endorsed by the creditor bank. The second difficulty lies in the time needed to sell the property, especially in a stagnant real estate market or in which there is a surplus of real estate and a low demand. The two aspects are covered by bank validation. Pledge means the alienation of the good and consists in the debtor giving in favor of the lender another mobile asset that. if the debt is not paid on maturity, it will be sold and the debt of the debtor will be covered by the money obtained. The pledge may be with or without dispossession (in the latter case the goods remain in the debtor's possession). 
As a rule, any good may be used as collateral if it meets the following conditions: the pledged asset has a sufficient value in relation to the guaranteed claim and the value is constant or even increasing over time in the case of a good, it is insured to a company insurance or stored for retention at specialized institutions. Bank deposit is the ideal guarantee, customers who provide a bank deposit as collateral, and will be collateral. The bank deposit is accessible at any time, it is in the form of a fixed amount. known to the bank, and when the deposit is made in a currency other than the one in which the loan was made, the bank must take into account the risks arising from exchange rate fluctuations of the two currencies in order to cover and exchange rate risk. Before accepting a guarantee, regardless of the nature of the underlying obligation or the guarantee instrument used, the bank must consider three aspects, namely: the right to property, the performance of the guarantee contract and the amount of the guarantee. In the case of real collateral, such as the mortgage, it is important to know if it is free from any other obligation. The performance of the guarantee contract is a generic term to designate all the activities and requirements imposed by the law and whose observance guarantees the bank the rights to the established guarantee. The appreciation of the guarantee depends on some aspects, such as: knowing the legal code for the use of collateral, both for compliance with specific laws and bank rules; the fulfillment of all legal formalities so that the bank, as the beneficiary, has assured the possibility of using the guarantee if the money has to be recovered; the priority documents of the guarantee must be designed to give the bank the right and the ability to execute the bank so that the borrower can oppose and any collateral involves certain costs, the bank has to assess them and determine who they bear. Basically, banks calculate the coverage level provided by the guarantee, in relation to the value of the guaranteed obligation. The goods offered as collateral are finally valued by the bank's specialists. The level of coverage reflects the bank's experience of being able to collect as much as possible from the guarantee as a result of its execution. Non-performing loans generate the highest risk management costs for the bank. The maximum cost level is reached in the case of overdue loans that can not be recovered and which are covered by the reserve fund or the risk fund. Covering leads to a corresponding decrease in the bank's assets and liabilities. Non-performing loans, as a rule, are retained in the portfolio and are not yet loss-making, with no chance of loss. For this reason, the expenses related to their management must include: the increase in administrative costs imposed by the separate and preferential administration of these credits as they may appear as an opportunity cost; the increase in legal costs in the case of appeal to the court and the deterioration of the bank's image vis-à-vis shareholders. In addition to the provisions specific to each type of credit, there is also the option of providing credit risk reserves. The legislation provides for a tax deductible amount of the general credit risk reserve of up to 2% of the balance of credits granted. Reserves can cover losses on the loan portfolio in the event of inappropriate provisioning. This results from the relationship: Where: Rgrc = the general credit risk reserve; Cr = balance of credits granted. 
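The relationship quoted above amounts to capping the general credit risk reserve at 2% of the balance of credits granted, that is Rgrc ≤ 0.02 · Cr. The sketch below combines that cap with per-category specific provisions; the category coefficients are hypothetical placeholders, since the regulation's actual percentages are not given in the text.

```python
# Illustrative sketch of the provisioning arithmetic discussed above.
# The 2 % cap on the general credit risk reserve follows the text; the
# per-category provisioning coefficients are hypothetical placeholders,
# not the percentages prescribed by the NBR regulation.

loan_portfolio = {           # outstanding balance per classification category
    "category_1": 500_000,   # very good performance
    "category_2": 300_000,
    "category_3": 120_000,
    "category_4": 50_000,
    "category_5": 30_000,    # losses / inability to repay
}

provision_rate = {           # hypothetical provisioning coefficients
    "category_1": 0.00,
    "category_2": 0.05,
    "category_3": 0.20,
    "category_4": 0.50,
    "category_5": 1.00,
}

specific_provisions = sum(loan_portfolio[c] * provision_rate[c]
                          for c in loan_portfolio)
total_credits = sum(loan_portfolio.values())
general_reserve_cap = 0.02 * total_credits   # Rgrc <= 2 % of credits granted

print(f"specific provisions: {specific_provisions:,.0f}")
print(f"general credit risk reserve (cap): {general_reserve_cap:,.0f}")
```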
Assurances are made for goods that constitute a material guarantee of the credit granted or life and accident insurance for private holders of long-term credit agreements. The problem of the debtor's guarantees is very important. As a rule, the valuation of material guarantees is carried out by the bank's specialized department or by a specialized firm approved by the bank. With all these precautionary measures, situations arise where the occurrence and manifestation of risks exceed the expected damage level. Risk Considerations are based on the establishment of own bank funds to cover any losses on the loan portfolio. Establishing funds to cover potential losses is to reduce the gross profit of the budget exercise, which calls into question the level of profitability of the bank. Of course, this problem occurs when the losses occur and the fund constituted is used. When emerging non-performing loans compete. in general, internal factors, which are related to the specificity and the manner of operation of the bank and external -environment and conjuncture. These factors often act together and, of course, independent of the will of the parties. As factors of non-performing credit we can remember: the phases of the economic cycle; economic, political, social, national or international conjuncture; (Basel agreements) or natural disasters. By detailing the occurrence and the influence of the factors that lead to the passage of credits into the category of nonperforming loans, we find that the "economic cycle phases" are the most difficult to anticipate, especially when the loan portfolio contains many long-term credits. Thus, in 2007-2008, when the effects of the global economic and financial crisis appeared on the financial-banking market, many credit was hit by the "credit risk" effect, many of which remained insufficiently covered by solid guarantees. The cumulative effect with other risks has resulted in the passage of many credits into non-performing ones. The economic, political or social, national or international conjuncture produces negative effects on banks' lending activity. For example, lending in foreign currency, usually in Swiss francs or the euro, has produced disastrous effects on customers (borrowers) and even on banks because of the change in the exchange rate of the domestic currency against the currencies underlying the contracting loans. So the conjuncture in the domestic and international financial market has changed and the effects have emerged. It was found that government intervention was attempted by strengthening a exchange rate that did not have the desired outcome. There are still enough credits in the first-house operation, whose cost has increased so that the "mortgage" category, when executed, no longer covers the effect of credit risk, not to mention the losses suffered by creditors. We will also make no mention of the causes of force majeure, which are easy to analyze as an effect on credit. These factors act on clients and therefore their identification and evaluation are done by bank specialists. 
In the internal factors category we mention: performing an erroneous credit analysis based on incomplete or inappropriate documentation, misinterpretation of financial results and customer creditworthiness, use of incomplete analysis procedures, failure to take into account some risk factors; unfavorable changes in the economic factors after the credits have been granted, so that the borrower can not achieve what he has proposed; improper management or inappropriate management changes, organizational structure or organizational structure of the client, and failure to report in time the action signal on inappropriate performance of the client's activity. That is why, in the study, we have sought to highlight the sensitivity of the level of provisions (risk hedging funds) that are created together with the possibility that some factors may lead to net effects higher than the predicted risks. Conclusions From the authors' submissions, there is a close link between the loan portfolio and the provisions. The bank management system should consider a concrete plan of measures to provide for those measures that need to be taken when the impact of credit risks arises. It is concluded that the avoidance of such a risk can be made depending on the size of the bank's loan portfolio, but it must be ensured by consistent guarantees or/and not least by provisioning, ie funds to be used in covering financial effects when bad loans occur. Another conclusion is that, in the context of the lack of a correlation between the loan portfolio and the established provisions, there may be a series of risks leading to the diminishing of the bank's results and, last but not least, the difficulty of insolvency or even bankruptcy. Lastly, a necessary conclusion is that banks must strictly abide by the rules of the central bank (the National Bank of Romania), but also the provisions of the Basel agreements stipulating and realizing, impose a strict concordance between the size of own funds, attracted sources and credits granted in such a way as to avoid as much as possible the issue of occurrence of risks.
2019-05-30T23:47:26.843Z
2018-06-21T00:00:00.000
{ "year": 2018, "sha1": "7cd1d774822f566938708a6d929df18d942d46ba", "oa_license": null, "oa_url": "https://hrmars.com/papers_submitted/4189/Portfolio_of_Loans,_Guarantees_and_Provisions.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "3e613fd68a5b1e13b5ad3137382474d416d3aafa", "s2fieldsofstudy": [ "Economics", "Business" ], "extfieldsofstudy": [ "Business" ] }
119343896
pes2o/s2orc
v3-fos-license
Stochastic optimization methods for extracting cosmological parameters from CMBR power spectra The reconstruction of the CMBR power spectrum from a map represents a major computational challenge to which much effort has been applied. However, once the power spectrum has been recovered there still remains the problem of extracting cosmological parameters from it. Doing this involves optimizing a complicated function in a many dimensional parameter space. Therefore efficient algorithms are necessary in order to make this feasible. We have tested several different types of algorithms and found that the technique known as simulated annealing is very effective for this purpose. It is shown that simulated annealing is able to extract the correct cosmological parameters from a set of simulated power spectra, but even with such fast optimization algorithms, a substantial computational effort is needed. I. INTRODUCTION In the past few years it has been realized that the Cosmic Microwave Background Radiation (CMBR) holds information about virtually all relevant cosmological parameters [1,2]. The shape and amplitude of the fluctuations in the CMBR are strongly dependent on such parameters as Ω, H 0 etc. [3]. Given a sufficiently accurate map of fluctuations it should therefore in principle be possible to extract information on the values of these parameters. In general, it is customary to describe the fluctuations in spherical harmonics where the a lm coefficients are related to the power spectrum by C l = a * lm a lm m . For purely Gaussian fluctuations the power spectrum contains all statistical information about the fluctuations [3]. The CMBR fluctuations were first detected in 1992 by the COBE satellite [4], and at present the COBE measurements together with a number of smaller scale experiments [5] make up our experimental knowledge of the CMBR power spectrum. These data are not of sufficient accuracy to really pin down any of the cosmological parameters, but the next few years will hopefully see an explosion in the amount of experimental data. Two new satellite projects, the American MAP and the European PLANCK [6], are scheduled and are designed to measure the power spectrum precisely down to very small scales (l ≃ 1000 for MAP and l ≃ 2000 for PLANCK). This should yield sufficient information to determine almost all relevant cosmological parameters. However, using CMBR data to extract information about the underlying cosmological parameters will rely heavily on our ability to handle very large amounts of data (Refs. [7][8][9][10][11][12][13] and references therein). The first problem lies in constructing a power spectrum from the much larger CMBR map. If there are m data points, then the power spectrum calculation involves inversion of m × m matrices (an order m 3 operation). For the new satellite experiments m 3 is prohibitively large [7][8][9][10][11][12][13], and much effort has been devoted to finding methods for reducing this number by exploiting inherent symmetries in the CMBR [8,9]. However, once the power spectrum has been constructed the troubles are not over. Then the space of cosmological parameters has to be searched for the bestfit model. If there are n free cosmological parameters, each sampled by q points, then the computational time scales as q n and, if n is large, the problem becomes intractable. In the present paper we assume that a power spectrum has been constructed, so that only the problem of searching out the cosmological parameter space remains. 
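The C_l definition above is an average of a*_lm a_lm over the m index; a small sketch of the corresponding estimator, C_l = (1/(2l+1)) Σ_m |a_lm|², and of the q^n cost of a brute-force parameter grid is given below with toy numbers. The a_lm here are random stand-ins, not a real CMB realization.

```python
import numpy as np

# Sketch of the empirical power-spectrum estimator implied above,
# C_l = <a*_lm a_lm> averaged over m, i.e. (1/(2l+1)) * sum_m |a_lm|^2,
# and of the q^n cost of a brute-force parameter grid. The a_lm drawn here
# are random toy values, not a real CMB realization.

rng = np.random.default_rng(5)

def estimate_cl(alm_by_l):
    """alm_by_l: dict mapping l -> complex array of the 2l+1 coefficients."""
    return {l: np.mean(np.abs(a) ** 2) for l, a in alm_by_l.items()}

alm = {l: rng.standard_normal(2 * l + 1) + 1j * rng.standard_normal(2 * l + 1)
       for l in range(2, 11)}
print("C_l estimates:", estimate_cl(alm))

# Cost of evaluating a model on a full grid: q points per parameter, n parameters.
q, n = 20, 9
print(f"grid evaluations needed: q**n = {q**n:.2e}")
```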
In general, parameter extraction relies on the fact that for Gaussian errors it is possible to build a likelihood function from the set of measurements [13] where Θ = (Ω, Ω b , H 0 , n, τ, . . .) is a vector describing the given point in parameter space. x is a vector containing all the data points. This vector can represent either the CMBR map, or the reconstructed power spectrum points. C(Θ) is the data covariance matrix. Assuming that the data points are uncorrelated, so that the data covariance matrix is diagonal, this can be reduced to the simple expression, L ∝ e −χ 2 /2 , where is a χ 2 -statistics and N max is the number of power spectrum data points [1,9]. In order to extract parameters from the power spectrum we need to minimize χ 2 over the multidimensional parameter space. In general there is no easy way of doing this. The topology of χ 2 could be very complicated, with several different local minima. However, let us for now ignore this possible problem and assume that the function is unimodal. Then there exist a vast number of algorithms for extremizing the function. The most efficient methods for optimization usually depend on the ability to calculate the gradient of the objective function, χ 2 . These methods work on completely general continuously differentiable functions, but under the right assumptions, χ 2 possesses qualities which makes it possible to improve on the simple gradient methods. In general, the second derivative of χ 2 with respect to parameters i and j is Sufficiently close to the minimum of χ 2 , the second term in the equation above should be small compared with the first. In practice this means that we get the second derivative information "for free" by just calculating the first derivative. Therefore, if we assume that the starting point for the optimization is sufficiently close to the true minimum, an algorithm utilising second-derivative information should converge much faster than a gradient method. The most popular algorithm of this type is the Levenberg-Marquardt method [14]. Note, however, that far away from the minimum, the above expression for the second derivative can be very wrong and cause the algorithm to converge much slower. Both gradient and second order algorithms are typically very efficient. However, there are several weaknesses: 1) They rely on our ability to calculate derivatives of χ 2 . Although in principle this is no problem, numerical experiments have shown that results for this derivative are not always reliable [2]. For instance, the numerical code for calculating power spectra, CMBFAST [15], is fundamentally different for open and flat cosmologies, and has no implementation of closed models, so that the derivative of χ 2 with respect to Ω 0 is not reliable at Ω 0 = 1. This is just one example, but the problem is generic as soon as points are located sufficiently near parameter boundaries. 2) The next problem is related to the fact that the above methods in general works as steepest descent methods. This means that they are very easily fooled into taking the shortest path towards some local minimum which needs not be global. If there are either many local minima or the topology of χ 2 is complicated with many near degeneracies, then the above gradient-based methods are likely to perform poorly. Unfortunately this might easily be the case with any given realization of the CMBR power spectrum. A. Multistart algorithms The above caveats lead us to look for more robust methods for finding the true minimum of χ 2 . 
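A minimal sketch of the χ² statistic for uncorrelated power-spectrum points and of the second-derivative approximation of Eq. (4), with the residual term dropped, is given below. The theoretical spectrum, its parameter derivatives and the error bars are random stand-ins here; in practice they would come from a Boltzmann code such as CMBFAST.

```python
import numpy as np

# Sketch of the chi-square statistic for uncorrelated power-spectrum points
# and of the Gauss-Newton approximation to its second derivatives, in which
# the residual term of Eq. (4) is dropped. C_th(theta) stands for a call to a
# Boltzmann code such as CMBFAST and is left abstract here.

def chi2(C_obs, C_th, sigma):
    return np.sum((C_obs - C_th) ** 2 / sigma ** 2)

def gauss_newton_hessian(dC_dtheta, sigma):
    """dC_dtheta: array of shape (n_params, n_ell) with dC_l/dtheta_i."""
    w = 1.0 / sigma ** 2
    # H_ij ~ 2 * sum_l (1/sigma_l^2) dC_l/dtheta_i dC_l/dtheta_j
    return 2.0 * np.einsum('il,jl,l->ij', dC_dtheta, dC_dtheta, w)

# Toy usage with random stand-ins for the spectra and derivatives.
rng = np.random.default_rng(0)
n_ell, n_par = 999, 6
C_th  = rng.uniform(1.0, 2.0, n_ell)
sigma = 0.1 * C_th
C_obs = C_th + sigma * rng.standard_normal(n_ell)
dC    = rng.standard_normal((n_par, n_ell))
print("chi2 =", chi2(C_obs, C_th, sigma))
print("Hessian shape:", gauss_newton_hessian(dC, sigma).shape)
```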
As soon as we are dealing with multimodal functions it is clear that we cannot contend ourselves with just running an optimization scheme based on the above method with just one starting point. The simplest possible improvement on the above method is a Monte Carlo multi start algorithm. In this case a starting point is chosen at random in the parameter space, and optimization is performed, using either a gradient or a second-order method. After the algorithm converges a new starting point is chosen. This method has the advantage that it converges to the global minimum in the asymptotic limit of infinite computational time. However, it is easy to improve on it, because the simple multistart algorithm will detect the same local minimum many times uncritically. The multi level single linkage (MLSL) algorithm [16] tries to alleviate this problem by mapping out the basins connected with the different local minima. If it detects that a trial point lies within a basin which has already been mapped, then the point is rejected. Depending on the type of objective function this algorithm can perform exceedingly well [17] In what follows we use the simple implementation of the MLSL algorithm provided by Locatelli [18]. First, we need the following definition: Let x max and x min be the maximum and minimum allowed value of parameter i. Then define a new parameter q ≡ (x − x min )/(x max − x min ), so that q ∈ [0, 1]. We use this new parameter q in the algorithm below, so that all cosmological parameters are treated on an equal footing and the allowed region is a simple hypercube spanning all values from 0 to 1 in R n . The algorithm is then devised as follow: 1) At each step, k, pick out N sample points from the allowed region and calculate the objective function. 2) Sort the whole sample of kN points in order of increasing χ 2 value and select the γkN points with smallest values. 3) For all of these points, run optimization on given point q, iff -No point y exists so that d(q, y) ≤ α and χ 2 (y) ≤ χ 2 (q) -d(q, S) > d -Optimization was not previously applied to q. Optimization is performed with a gradient method. 4) Proceed to step k + 1. In the above, d(q, y) is the Euclidean distance between x and y, and S is the set of already discovered local minima. α and d are predefined distances which should be chosen to optimize the rate of finding local minima. They are a measure of how large the basins connected with local minima are in general in that specific problem. The above method thus includes a host of different parameters which should be chosen by the user, N , d, γ and α. This can make it quite troublesome to devise an algorithm which performs optimally. In our implementation we have chosen N = 10, γ = 0.2, d = 0.1 and α = 0.1. Note that this is somewhat in conflict with the definition given by Refs. [16,18], in that α should really be a quantity which depends on k, but in order to obtain a simple implementation we have used the above values. B. Simulated annealing A completely different method, which in the next section is shown to be very effective for χ 2 minimization on CMBR power spectra, is simulated annealing. The method of simulated annealing was first introduced by Kirkpatrick et al. in 1983 [19,20]. It is based the cooling behaviour of thermodynamic systems. Consider a thermodynamic system in contact with a heat bath at some temperature, T . If left for sufficiently long the system will approach thermal equilibrium with that temperature. 
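A minimal sketch of the MLSL selection rules listed above, with the parameter values used here (N = 10, γ = 0.2, d = 0.1, α = 0.1), is given below. The objective function and the local gradient optimizer are simple stand-ins, not the CMBR χ².

```python
import numpy as np

# Sketch of the multi level single linkage (MLSL) point filter described
# above, with the parameter values used in the text (N = 10, gamma = 0.2,
# d = 0.1, alpha = 0.1). The objective chi2() and the local optimizer are
# stand-ins, only to show where a gradient routine would be called.

N, gamma, d_min, alpha = 10, 0.2, 0.1, 0.1

def chi2(q):                       # stand-in objective on the unit hypercube
    return np.sum((q - 0.3) ** 2)

def local_optimize(q):             # stand-in for a gradient-based local search
    return np.full_like(q, 0.3)

rng = np.random.default_rng(1)
sample, started, minima = [], [], []

for k in range(1, 6):                                   # iterations
    sample += [rng.uniform(0, 1, 4) for _ in range(N)]  # step 1: N new points
    ranked = sorted(sample, key=chi2)[:max(1, int(gamma * k * N))]  # step 2
    for q in ranked:                                    # step 3: start rules
        near_better = any(np.linalg.norm(q - y) <= alpha and chi2(y) <= chi2(q)
                          for y in sample if y is not q)
        near_minimum = any(np.linalg.norm(q - s) <= d_min for s in minima)
        already = any(np.array_equal(q, p) for p in started)
        if not (near_better or near_minimum or already):
            started.append(q)
            minima.append(local_optimize(q))

print(f"local searches started: {len(started)}, minima recorded: {len(minima)}")
```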
The heat bath is then cooled, and if this is done slowly enough the system maintains equilibrium in the cooling phase, and finally as T → 0 settles into the true ground state, the state with the lowest possible energy. This is very similar to global searches for minima of functions and simulated annealing relies on the fact that the function to be minimized can be considered as the energy of a thermodynamic system. If the system is then cooled from a very high "temperature" towards T = 0 it should find the global minimum, given that it maintains thermal equilibrium at all times. In practise one lets the system jump around in parameter space at random. Given a starting point i, a trial point is sought according to some prescription, and is then either accepted or rejected according to the Metropolis acceptance probability [21] where, in our case E = χ 2 . There are very many similarities between this and thermodynamic systems, at high temperatures the system visits all states freely, while at low temperatures it can visit only states very close to the minimum. For instance it has been shown that by using the above criterion the system asymptotically approaches the Boltzmann distribution, given that it is kept at constant temperature asymptotically long [22]. Also, if a system undergoes simulated annealing with complete thermal equilibrium at all times then as T → 0 the energy approaches the global minimum [22]. For absolute global convergence to be ensured it is thus necessary to allow infinite time at each temperature. In order to use simulated annealing for functional optimization it is necessary to specify three things: 1) A space of all possible system configurations 2) A cooling schedule for the system 3) A neighbourhood structure. Here, the configuration space is a hypercube in R n bounded by the limits on the individual parameters. The cooling schedule and the neighbourhood structure are both something which in general are quite difficult to choose optimally [20]. Further, they make the scheme problem dependent. For this reason adaptive simulated annealing procedures have been devised which dynamically choose the cooling rate and neighbourhood directly from the previous iterations in order to maximize the thermalisation rate [23]. The problem with this approach is that the thermodynamic behaviour is no longer welldefined. For instance the approach to a Boltzmann distribution is not ensured. In the present work we choose a relatively simple cooling schedule and neighbourhood structure, neither of which are adaptive. In practise we start with an initial temperature, T 0 , which is then lowered exponentially by the following criterion T i+1 = αT i , where α is some constant. When the temperature reaches a final value T 1 the algorithm stops. In this way α is a function of the total number of steps, N s , given as α = (T 1 /T 0 ) 1/Ns . The neighbourhood search is devised so that at high temperatures the system is prone to make large jumps whereas at lower temperatures it mostly searches the nearest-neighbour points. In our specific model the parameter space consists of a vector, x, of n free parameters, bounded from below by the vector, x min , and from above by x max . Let iteration point i have the value (x β ) i for the parameter labelled β. Then the value of this parameter at iteration i+1 has acceptance probability given as where and A β is some constant, chosen to yield a good convergence rate. 
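A minimal sketch of the annealing loop described above, with exponential cooling T_{i+1} = αT_i, α = (T_1/T_0)^{1/N_s}, and the Metropolis acceptance rule, is given below. The objective and the temperature-dependent neighbourhood move are simple stand-ins for the CMBR χ² and for the acceptance-probability prescription of the text.

```python
import numpy as np

# Sketch of the simulated-annealing loop described above: exponential cooling
# T_{i+1} = alpha * T_i with alpha = (T1/T0)^(1/Ns), and the Metropolis
# acceptance rule P = min(1, exp(-(E_trial - E_current)/T)). The objective and
# the neighbourhood move are simple stand-ins, not the paper's CMBR chi-square.

def energy(q):                              # stand-in for chi^2(theta)
    return np.sum((q - 0.7) ** 2)

def neighbour(q, T, T0, rng):
    # Jumps shrink as the temperature drops (a simple stand-in for the
    # temperature-dependent neighbourhood prescription of the text).
    step = 0.5 * (T / T0) ** 0.5
    return np.clip(q + step * rng.standard_normal(q.size), 0.0, 1.0)

rng = np.random.default_rng(2)
T0, T1, Ns = 1.0e4, 2.0, 2000
alpha = (T1 / T0) ** (1.0 / Ns)

q = rng.uniform(0, 1, 6)                    # random start in the unit hypercube
E, T = energy(q), T0
best_q, best_E = q, E
for _ in range(Ns):
    q_try = neighbour(q, T, T0, rng)
    E_try = energy(q_try)
    if E_try < E or rng.uniform() < np.exp(-(E_try - E) / T):
        q, E = q_try, E_try
        if E < best_E:
            best_q, best_E = q, E
    T *= alpha                              # exponential cooling schedule

print("best energy:", best_E)
```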
The above probability is set to 0 if (x β ) i+1 is outside the allowed interval for the given parameter. This criterion for picking out trial points has the desired quality that it makes large jumps at high temperature and progressively smaller jumps as the temperature is lowered. If the objective function depends strongly on β, then A β should be small, whereas if it is almost independent of β, A β should be large. It is well known that χ 2 is almost degenerate in the parameter Ω m h 2 [2]. Therefore it is natural to choose A Ωmh 2 to be small. In our implementation we have chosen the following values for the control parameters: T 0 = 10 4 , T 1 = 2, A Ωmh 2 = 1/32, Note that the method of simulated annealing was first applied to simulated CMBR data by Knox [24], for a relatively small model with four free parameters. A. Performance of different algorithms In order to test the relative efficiency of the different optimization schemes we have tried to run χ 2 minimization on synthetic power spectra. All the power spectra in the present paper have been calculated by use of the publicly available CMBFAST package [15]. To make calculations not too cumbersome we have restricted the calculations to a six-dimensional parameter space, characterised by the vector Θ = (Ω m , Ω b , H 0 , n S , N ν , Q). The model is taken to have flat geometry so that Ω Λ = 1 − Ω m . We start from an assumed true model with Θ = (0.5, 0.05, 50, 1, 3, 30 µK), i.e. fairly close to the currently favoured ΛCDM model [25]. Table I shows the free parameters, as well as the allowed region for each. We further assume that all C l 's up to l = 1000 can be measured without noise. That is, the errors are completely dominated by cosmic variance, with the error being equal to [1,3] σ(C l ) = 2 2l + 1 C l . From underlying statistics we have produced a single realisation which we take to be the measured power spectrum. Since we have N = 999 synthetic data points, all normally distributed, χ 2 of the data set, relative to the true, underlying power spectrum should have a χ 2 distribution with mean N , and standard error √ 2N , so that The specific synthetic data set we use has χ 2 * = 1090.98, i.e., it is within about 2σ of the expected value. If the optimization routine is optimal, then for each optimization run The average of several optimization runs should preferably yield a value which is somewhat below χ 2 * . We therefore have a measure of whether or not the optimization has been successful. We have tested four different optimization algorithms on a subset of the full six-dimensional parameter space. The algorithms are: Simple Monte Carlo multistart with: 1) gradient optimization method (G), 2) Levenberg-Marquardt method (LM), 3) multi level single linkage (MLSL), as described in Section IIa, 4) simulated annealing, as described in Section IIb. Algorithms 1-3 use optimization routines from the PORT3 library [26]. In order to make direct comparison between the algorithms, we have let them run for a fixed number of steps, where one step is defined equal to one power spectrum calculation. All methods, except simulated annealing, use gradient information, which means that additional power spectra must be calculated at each iteration. We use two sided derivatives, so that to calculate the gradi-ent (and Hessian), we need 2n more calculations, where n is the number of cosmological parameters. Fig. 1 shows the minimum χ 2 found by the different algorithms. Each point in Fig. 1 stems from a Monte Carlo run of 15 optimizations. 
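The cosmic-variance error used above is σ(C_l) = sqrt(2/(2l+1)) C_l. The sketch below draws one Gaussian realization of a fiducial spectrum with that error and checks that its χ² against the truth is of order N ± sqrt(2N), as for the synthetic data set with χ²* = 1090.98; the fiducial C_l is a toy shape, not a CMBFAST output.

```python
import numpy as np

# Sketch of the synthetic data construction described above: a fiducial C_l is
# perturbed with Gaussian cosmic-variance noise sigma(C_l) = sqrt(2/(2l+1)) C_l,
# and the chi^2 of the realization against the truth is compared with the
# expected N +/- sqrt(2N). The fiducial spectrum is a toy shape, not a
# CMBFAST model.

rng = np.random.default_rng(3)
ell = np.arange(2, 1001)                              # l = 2 .. 1000, N = 999
C_true = 1.0e-10 * np.exp(-((ell - 220) / 300.0) ** 2) + 1.0e-11   # toy C_l

sigma = np.sqrt(2.0 / (2.0 * ell + 1.0)) * C_true     # cosmic variance
C_obs = C_true + sigma * rng.standard_normal(ell.size)

chi2_star = np.sum((C_obs - C_true) ** 2 / sigma ** 2)
N = ell.size
print(f"chi2* = {chi2_star:.1f}  (expected {N} +/- {np.sqrt(2*N):.1f})")
```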
Clearly, the MLSL method improves on the simple multi start algorithm. The LM algorithm performs better than gradient optimization in some cases, but in other cases it is much worse. This is probably due to the fact that if the starting point is far away from a local minimum then the second derivative may yield false information because Eq. (4) does not hold, causing the algorithm to converge slower. This weakness could be remedied to some extent by diagonalising the matrix of second-derivatives (Fisher matrix diagonalisation), so that the correlation between different parameters is approximately broken. However, the most striking feature in Fig. 1 is that SA outperforms the other algorithms easily. Most likely this is due to the fact that χ 2 possesses valleys where the function has many almost degenerate local minima. Note that the likelyhood function does not need to be truly multi-modal for this effect to occur. It can happen either because the parameter space is constrained so that the algorithm takes a path which leads out of the allowed space, or because there are small "bumps" on χ 2 close to the global minimum, which cause the gradient algorithms to get trapped. χ 2 is not multimodal in the sense that it contains equally good local minima, separated by long distances in parameter space. For the case of four free parameters (upper panel), most of the algorithms produce acceptable results with about 1000 steps, but with five parameters (lower panel), about 2000 steps are needed. In both cases, SA needs substantially fewer steps than the other algorithms. In Fig. 2 we show four different runs of the simple gradient-based algorithm without multi-start. In two of the cases the algorithm converges towards the global minimum, whereas in the two other it becomes trapped at much higher lying points in parameter space. We have tested the effect of varying step size in the gradient calculation and found that the results do not depend on this. This figure also shows that the gradient based algorithms generally converge fairly rapidly (i.e. a few hundred steps), so that the multi-start algorithm generally runs several times even for relatively a relatively small number of total steps. B. Parameter extraction If the χ 2 minimization succeeds in finding the global minimum, then the value found should reflect the underlying measurement uncertainty. We have performed a detailed Monte Carlo study of how well the SA algorithm is able to extract parameters from the power spectrum. The test goes as follows: First, construct N MC synthetic measured power spectra, as described in the previous section. Then run optimization on each one of these spectra. This produces N MC estimated points in parameter space. In order to compare these points with the underlying uncertainty, we then need to calculate the estimated standard error on the different parameters. This is done by the standard method of calculating the Fisher information matrix. At the true point in parameter space, the likelihood function should be maximal, so that it should have zero gradient. The matrix of second derivatives is then given by (Eq. (4)) The expected error on the estimation of parameter i is then given by if we assume that all the relevant cosmological parameters should be determined simultaneously. The expected error on Ω m is σ = 0.098, given our assumed measurement precision. Note that above we have again assumed that the only uncertainty in the measurements is from cosmic variance. 
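A minimal sketch of the Fisher-matrix error estimate quoted above, σ_i = sqrt((F^{-1})_ii), with the spectrum derivatives taken by two-sided finite differences, is given below. The model function is a two-parameter toy stand-in for a Boltzmann-code call at the true parameter point.

```python
import numpy as np

# Sketch of the Fisher-matrix error estimate quoted above,
# sigma_i = sqrt((F^-1)_ii), with the spectrum derivatives obtained by
# two-sided finite differences. model() is a toy stand-in for a call to a
# Boltzmann code evaluated at the true parameter point.

ell = np.arange(2, 1001)

def model(theta):                      # toy C_l(theta); not a real CMB model
    a, b = theta
    return 1e-10 * (1 + a * np.exp(-((ell - 220 * b) / 300.0) ** 2))

theta0 = np.array([1.0, 1.0])
C0 = model(theta0)
sigma = np.sqrt(2.0 / (2.0 * ell + 1.0)) * C0   # cosmic-variance errors

# Two-sided derivatives dC_l/dtheta_i.
eps = 1e-4
dC = np.empty((theta0.size, ell.size))
for i in range(theta0.size):
    up, dn = theta0.copy(), theta0.copy()
    up[i] += eps
    dn[i] -= eps
    dC[i] = (model(up) - model(dn)) / (2 * eps)

# F_ij = sum_l dC_l/dtheta_i dC_l/dtheta_j / sigma_l^2
F = np.einsum('il,jl,l->ij', dC, dC, 1.0 / sigma ** 2)
errors = np.sqrt(np.diag(np.linalg.inv(F)))
print("marginalised 1-sigma errors:", errors)
```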
We have performed this Monte Carlo test on the 6dimensional parameter space, using 24 different synthetic spectra. We have extracted parameters using SA with a different number of total steps: 500, 2000 and 4000. Fig. 3 shows how the estimated points are distributed for the parameter Ω m . We have binned the extracted points in bins of width 1σ up to 5σ. For the optimization performed with 500 steps the distribution is very wide, showing no specific centering on the true parameter value. The optimization with 2000 steps extracts values which are centered on the true value, indicative of a good optimization. Furthermore, the optimization with 4000 steps shows little improvement over that with 2000, again indicating that the one with 2000 steps is already performing optimally. Note that both for 2000 and 4000 steps the distribution of extracted points is significantly wider than the theoretical expectation which was calculated assuming a normal distribution with σ = 0.098. One would expect this to be the case since the probability distribution of any given parameter is only normal close to the true value, even for a perfect optimization. Therefore there are likely to be more outlying points than suggested by the normal distribution. If we have N MC Monte Carlo runs, then if the optimization is perfect one should obtain a sample mean of roughly where σ s = σ/ √ N MC for a given parameter if N MC is large and the extracted parameters are drawn from a normal distribution. We can also calculate χ 2 for the sample This function should be approximately χ 2 distributed. We have calculated µ and χ 2 for the sample of extracted parameters, to see if it is compatible with the theoretical expectations. Table II shows the values found from the 24 Monte Carlo simulations. The sample mean found by the optimization with 500 steps deviates by more than 7σ from the expectation. Again this indicates a poor optimization. The optimizations with 2000 and 4000 steps succeed in recovering the true mean to within 2σ. As for χ 2 , it is much lower for the 2000 and 4000 steps optimizations than for the 500 steps. However, both are still much larger than expected from a normal distribution. As mentioned above this has to do with the fact that the distribution is not normal far away from the true parameter value, so that more outlier points are expected. These contribute heavily to χ 2 , so that a larger value can be expected, even for a perfect optimization. As seen above, even for the small 6 parameter model we use, it is necessary on average to calculate more than 10 3 power spectra. Even on a fast computer this is something which takes several hours. This must be done each time one wants to check how a new proposed cosmological model fits the data. This very clearly shows the necessity of using fast optimization algorithms for parameter extraction. Note that the models we have calculated are flat and without reionization, including either curvature or reionization significantly slows the CMBFAST [15] code. Also, more exotic models like scenarios with decaying neutrinos lead to very cumbersome CMBR spectrum calculations [27]. The above Monte Carlo method was also used by Knox [24] in order to test the χ 2 optimization efficiency for a small model with 4 free parameters. IV. DISCUSSION AND CONCLUSIONS We have tested different methods for χ 2 minimization and parameter extraction on CMBR power spectra. 
It was found that simulated annealing is very effective in this regard, and that it compared very favourably with other optimization routines. The reason for this is most likely that χ 2 posseses very nearly degenerate minima. Also, numerical noise in the CMBFAST code can cause the gradient information to become unreliable near stationary points, causing the gradient based methods to become trapped in points which are not true minima. We have also found that even for the simulated annealing algorithm, many power spectrum calculations are usually necessary in order to obtain a good estimate of the global minimum. Without a fast optimization algorithm it is very difficult to extract reliable parameter estimates from CMBR power spectra, and even with a routine like SA, it is computationally very demanding as soon as the parameter space is realistically large (9-10 dimensional). Note that all of the above calculations rely on stochastic methods in that they start out at completely random points in the allowed parameter space. This is very different from the method used by Oh, Spergel and Hinshaw [9], who use as the initial point a fit obtained by the chi-by-eye method and then optimize that initial guess using a second order method. This method surely makes the optimization algorithm converge faster, but suffers greatly from the problem of how to choose the initial point without biasing the outcome (i.e. making the algorithm find a minimum which is not global). We believe that using stochastic optimization is a much more robust way of optimization. Interestingly, there are other modern algorithms for optimization which work along some of the same principles as SA, for instance genetic algorithms [28]. Given the magnitude of the computational challenge provided by upcoming CMBR data, it appears worthwhile to explore the potential of such new algorithms.
The Ancient Varieties of Mountain Maize: The Inheritance of the Pointed Character and Its Effect on the Natural Drying Process The the Process. Abstract: The introduction of mechanized agricultural practices after the Second World War and the use of productive hybrids led to a gradual disappearance of local maize varieties. However, 13 landraces are still cultivated in North-Western Italy, in the Lombardy region; those that are cultivated in mountainous areas (roughly up to 1200 m in altitude) are often characterized by the pointed shape of their seeds (i.e., “Nero Spinoso”, “Rostrato Rosso di Rovetta”, “Spinato di Gandino” and “Scagliolo di Carenno”) and the presence of pigments (i.e., “Nero Spinoso”, “Rostrato Rosso di Rovetta”). The pointed shape of the seeds is an ancient characteristic of maize-ancestors, which negatively affects the yield by not allowing optimal “filling” of the ear. This study reports work on four different Italian varieties of pointed maize in order to assess the genetic bases of the “pointed character” and to try to explain the reasons for this adaptation to the mountain environment. The data obtained by genetic analysis, seed air-drying modeling and thermographic camera observations demonstrated that the “pointed trait” is controlled by the same genes across the different varieties studied and suggested that this peculiar shape has been selected in mountainous areas because it promotes faster drying of the seed, with the presence of pigments implementing this effect. the trait “pointed ears” in the F2 progeny obtained by crossing “Nero Spinoso” × B73/Mo17. The hypotheses made for the χ 2 test were 1:16 and 1:64 segregation values for “pointed ears”, considering two or three major genes, respectively, involved in the “pointed trait. Introduction The last century has been characterized by a serious loss of biodiversity, and it is estimated that about three quarters of the lines of living beings around the world (plants, animals and microorganisms) identified as being used in the past for nutrition and food production have disappeared [1]. Hence, the conservation and promotion of agrobiodiversity have been crucial topics in recent decades [2][3][4]. The first appearance of maize in Europe occurred after the travels of Christopher Columbus to America. After the first arrivals of samples from the Caribbean, subsequent introductions of maize germplasm from higher latitudes better adapted to European conditions boosted maize cultivation in Europe. Since then, a multitude of landraces linked to local food production and traditional farming systems have been developed [5]. Plant Material The four pointed varieties used in this study, "Nero Spinoso", "Rostrato Rosso di Rovetta", "Spinato di Gandino" and "Scagliolo di Carenno", were obtained from the germplasm collection of CREA, Stezzano (BG). The B73 inbred line from the germplasm collection of DISAA, Milan, Italy was used to study the inheritance of the pointed trait. B73/Mo17 F1 ears were used as the non-pointed benchmark for the analysis of the shape of the kernels, the laser scan 3D digitalization, and the seed air-drying modeling. Outline Analysis of the Seeds With the aim of comparing the seed shape of the pointed varieties with the B73/Mo17 control, we performed the outline analysis of the kernels. The mature seeds were collected and photographed, and the images were processed to obtain the main shape of the kernels of each variety. 
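The actual shape analysis (described in detail in the next section) was run with elliptical Fourier descriptors in Momocs under R. Purely as a simplified illustration of the idea, turning a closed kernel outline into a small set of harmonic coefficients that can then be fed to PCA/LDA, the sketch below decomposes the complex contour with a plain Fourier transform in NumPy. It is a stand-in, not the Momocs implementation, and the two toy outlines are synthetic.

```python
import numpy as np

def outline_harmonics(x, y, n_harmonics=12):
    """Fourier description of a closed outline given as sampled (x, y) points.

    The contour is centred, treated as a complex signal z = x + iy and
    decomposed with the FFT; the first `n_harmonics` coefficients are kept
    and scaled by the first harmonic to remove size. (Simplified stand-in
    for elliptical Fourier descriptors.)
    """
    z = (np.asarray(x) - np.mean(x)) + 1j * (np.asarray(y) - np.mean(y))
    coeffs = np.fft.fft(z) / z.size
    kept = coeffs[1:n_harmonics + 1]               # skip the DC term (position)
    kept = kept / np.abs(kept[0])                  # normalise for size
    return np.concatenate([kept.real, kept.imag])  # feature vector for PCA/LDA

# Toy usage: an ellipse versus an egg-shaped ("pointed") outline.
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
ellipse = outline_harmonics(np.cos(t), 0.6 * np.sin(t))
pointed = outline_harmonics(np.cos(t) * (1 + 0.3 * np.cos(t)), 0.6 * np.sin(t))
print(np.round(ellipse[:4], 3), np.round(pointed[:4], 3))
```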
Further details: fifty kernels for each maize genotype were used for the elliptical Fourier descriptors analysis (outline analysis) [36]. The grains were collected from various ears of plants cultivated in the experimental field of the University of Milan, located in Landriano, Pavia, Italy. Over-or under-developed kernels of the basal and terminal parts of the ears were not considered. The kernels were photographed in dorsal view [23] using a digital camera (Canon EOS 2000D, Amstelveen, The Netherlands ). The images were processed using Adobe Photoshop software. In particular, the shadows of the grains were removed, and the images were transformed into black and white. The outline coordinates were extracted with Momocs 1.3.0 [37][38][39] in an R environment [40] and converted into Fourier coefficients, considering 12 harmonics that gathered at least 99% of the total harmonic power [39]. The kernels were positioned in the same direction in order to control left/right asymmetry [41]; then, a landmark was defined at their base (tip cap) as a starting point for importing outline coordinates. The contours were centered, and the outline analysis was carried out without numerical normalization. Principal component analysis (PCA) was carried out on the matrix of coefficients, and the samples were plotted on the first two principal components (PCs). Linear discriminant analysis (LDA) of the principal components [42] was carried out, retaining 13 PCs. Finally, the mean shape of the kernel of each cultivar was obtained using the 'MSHAPES' function of Momocs, and multivariate analysis of variance (MANOVA) was performed to evaluate the significance of kernel shape differences between the five genotypes. Complementation Test and Constitution of F2 Segregating Population for Pointed Seed Trait A complementation test can be used to test whether two traits characterized by a similar phenotype are controlled by different genes. The two lines are crossed and, if complementation occurs, the F1 progeny will display a wild type phenotype, suggesting that the two traits are controlled by different genes. All genotypes were crossed in pairwise combinations and the F1 ears were scored by visual inspection. The F2 segregating population was obtained by crossing "Nero Spinoso" × B73/Mo17. The F1 seeds obtained were shown to obtain F2 segregating ears. Data were recorded by visual inspection in four classes: pointed, intermediate, little pointed and not pointed. For each observation, five ears were scored. Seed Air-Drying Modeling Seed air-drying was modeled using CFD (Computational Fluid Dynamics) in order to simulate the traditional air-drying adopted by farmers in the mountainous areas of the Lombardy region. In the traditional process, maize cobs were collected, stored under farmhouse roofs and exposed to the natural air stream during autumn to dry out the grain moisture. The aim of this analysis was to evaluate if pointed grain maize exhibited better performances during this traditional air-drying (less moisture content). The process modeled, seed drying, aims to remove a solvent (water) through evaporation mass transfer. Evaporation is a multiphase phenomenon that requires that the solvent is present in both the liquid and gas phase. Liquid water is contained in the kernel, and water vapor is present downstream in the air. The model was set up as a mixture multiphase model. This physics approach solves a single set of governing equations for mixture and a volume fraction transport equation for each phase. 
The external air phase was modeled as a multicomponent since it contains both the vapor and the dry air. Moreover, the solid porous region (the maize grain) was modeled as a multicomponent as well (as it contains both solid dry parts and water). The interaction between the multiphase media used the Spalding Evaporation/Condensation model [43]; Sherwood and Nusselt numbers were obtained by the Armenante-Kirwan correlation. To reduce the computational cost, the model developed was 2D; one tunnel had an air inlet and an atmospheric pressure outlet. Air entered at the inlet, evaporated the water trapped inside the porous media and exited with the vapor through the outlet. The simulation was performed using Star CCM+, a commercial software provided by SIEMENS. Aerial data inputs were imposed to represent the typical values that would be recorded on a sunny day in autumn in the pre-Alps in the Lombardy region, where pointed maize used to be grown. These data represent the ambient boundary conditions of the process adopted widely in the Lombardy Alps, Italy, up to the middle of the 20th century. Air Temperature was set up at 15 • C with a wind speed of 0.5 m/s. The turbulence intensity and the viscosity ratio were imposed according to a standard laminar flow, respectively equal to 0.01 and 10. Kernel properties were set up according to [44]: maize density was equal to 1320 kg/m 3 , with a specific heat of 2800 J/(kg· • K), a Thermal Conductivity of 0.125 W/(m· • K) and a general Porosity of 0.95%. For simplification, both kernels were simulated with the same physical properties. In this way, the differences caused by the different varieties could be neglected, and the effect of the pointed shape could be isolated. The geometry used in the CFD simulation was obtained as follows: first of all, starting from a maize cob, a laser scan digitalized the 3D shape. After that, the image was cleaned and reconstructed with Gom software. Finally, a 2D slice of the maize kernel, both pointed and control, was obtained using spline interpolation in Solidworks CAD Software ( Figure S1). The set-up model and the mesh created are reported in Figure S2. Pictures a and b report the cross section of the air tunnel with one maize kernel in the lower side. Dry air entered at the left side of the box, stabilized and flowed around the kernel. Afterwards, air and water moisture exited from the right side of the box. Pictures c and d, by contrast, represent the meshed model (in which discretization was implemented by applying the finite volume method) where the fluid dynamics equations are imposed. Qualitatively, it can be seen how cells are denser downstream of the grain which captures the effect of the moisture flow dragged downstream by the surrounding air. The Effect of Pigmentation on the Surface Temperature of the Seeds Seeds were placed in the sun for about 30 min, until they reached a constant temperature. Thermal images of "Nero Spinoso", "Rostrato Rosso di Rovetta", "Spinato di Gandino", "Scagliolo di Carenno" and B73/Mo17 (control) were taken between 11 and 12 h with a semiautomated long-wave infrared camera system (FLIR T650sc, FLIR Systems, Inc., Parkway Avenue Wilsonville, OR, USA). The temperature accumulated by seeds was then measured using the FLIR ResearchIR Max software. 
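The per-seed temperature readout was performed in FLIR ResearchIR Max; as a rough illustration of that step, the sketch below takes a matrix of per-pixel temperatures (such as one exported from the camera software), segments the warm seed regions against a cooler background and reports the mean temperature of each region. The export format, the 30 °C background threshold and the minimum region size are assumptions made for the example, not settings taken from the study.

```python
import numpy as np
from scipy import ndimage

def mean_seed_temperatures(temp_c, background_threshold=30.0, min_pixels=50):
    """Mean surface temperature of each seed in a thermal image.

    temp_c: 2-D array of per-pixel temperatures in deg C (assumed exported
    from the camera software). Pixels warmer than `background_threshold`
    are treated as seed; connected regions smaller than `min_pixels` are
    ignored. Threshold and minimum size are illustrative assumptions.
    """
    mask = temp_c > background_threshold
    labels, n_regions = ndimage.label(mask)
    means = []
    for region in range(1, n_regions + 1):
        pixels = temp_c[labels == region]
        if pixels.size >= min_pixels:
            means.append(float(pixels.mean()))
    return means

# Synthetic example: a 25 deg C background with two warm "seeds".
img = np.full((120, 120), 25.0)
img[20:50, 20:50] = 38.7      # pigmented seed
img[70:100, 70:100] = 33.1    # colourless control
print(mean_seed_temperatures(img))   # -> [38.7, 33.1]
```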
Results
In the results below, the four traditional Italian varieties shown in Figure 1 were studied with the aim of assessing the genetic bases of the "pointed character" and of explaining the reasons for the adaptation of these landraces to the mountain environment. So far as we know, this is the first work that describes the inheritance of the "pointed trait" in Italian native varieties. Figure 1 shows, from left to right, "Nero Spinoso", "Rostrato Rosso di Rovetta", "Scagliolo di Carenno" and "Spinato di Gandino". The outline analysis (Figure 2b) showed that B73/Mo17 (control) differs from the other genotypes. While in the pointed genotypes ("Nero Spinoso", "Rostrato Rosso di Rovetta", "Scagliolo di Carenno", "Spinato di Gandino") the mean shape was elliptical, in B73/Mo17 it was obovate, as these grains have no beak (Figure 2c). Results of the MANOVA test confirmed significant shape differences between the kernels of the five genotypes (F(4, 245) = 79.45; p < 0.01), and Figure 3 shows the differences between the average shape of the kernels of the control variety and that of the other genotypes.

Inheritance of Pointed Trait
Starting from the hypothesis that the genes involved in the "pointed trait" were common to all the pointed varieties and were due to the maternal genotype, the "Nero Spinoso", "Rostrato Rosso di Rovetta", "Spinato di Gandino" and "Scagliolo di Carenno" pointed varieties were crossed pairwise. The F1 seeds obtained from each cross were grown on to obtain F1 ears: all the ears obtained had pointed seeds, as did the following F2 ears (Figure 4), suggesting that the varieties under evaluation in this study have the same genetic basis for the "pointed trait". With the aim of estimating the number of genes involved in the "pointed trait", we studied the reappearance of the "pointed trait" in an F2 population created by crossing "Nero Spinoso" with B73/Mo17. As shown in Figure 4, F1 ears bore seeds that were slightly pointed, and, in the following F2 generation, the pointed seed trait was observed in 6 out of 183 F2 ears analyzed (Table 1). The hypothesis is accepted if χ² ≤ 3.84 with DF = 1. Figure 4. Inheritance of the "pointed trait". On the right is shown the cross between "Nero Spinoso" and "Rostrato Rosso di Rovetta", given as an example of all the crosses carried out pairwise (complementation test) between the four pointed varieties. The F1 hybrid ("Nero Spinoso" × "Rostrato Rosso di Rovetta") remains pointed, as does the whole of the following F2 segregating population. On the left is shown the creation of an F2 population by crossing "Nero Spinoso" with B73/Mo17. The following F2 generation segregates for the "pointed trait", permitting an estimation of the number of loci involved in this trait.

Seed Air-Drying Modeling
The results provided are reported graphically to allow a comparison between Nero Spinoso and the control kernels (B73/Mo17 hybrid). The velocity profile is reported in Figure 5. The airstream at the entrance is equal to 0.5 m/s, to simulate a natural convection wind inside a farmhouse. It can be seen from the pictures that the airflow around Nero Spinoso reaches a maximum speed of 0.67 m/s, 7% higher than that of the control maize. Moreover, the downstream flow appears more turbulent, a factor that increases the humidity exchange and transport. The second type of data extrapolated from the CFD simulation is the evaporation rate. In Figure 6, this rate is reported at different timings, i.e., 0.5 s for pictures a and b and 1 s for pictures c and d. At the beginning, after 0.5 s, the evaporation rate of Nero Spinoso is 15% higher than that of the control. Qualitatively, the downstream aerodynamic trail of humidity is also wider. After 1 s, Nero Spinoso maintained the gain of 15% versus the commercial control kernel. The two behaviors are also reported graphically in Figure 7. The integral of the curve on the left is visibly larger than the one on the right. Finally, the last comparison is reported for the volume fraction of water inside the kernel after 0.5 s and 1 s of airflow. Although the two values are very close (both simulations start with the same boundary conditions and are run for 1 s of physical time), it is the internal distribution of water that is most striking. Nero Spinoso maize has a greater stream of humidity detaching from the trailing edge of the pointed kernel. The internal distribution, moreover, is macroscopically different between the two varieties after 1 s. While for Nero Spinoso almost half of the kernel no longer contains any humidity, for the control this area (blue in Figure 8) is less than 30%.

Surface Temperature of the Seeds
Figure 9a shows representative thermal images of "Nero Spinoso", "Rostrato Rosso di Rovetta", "Spinato di Gandino", "Scagliolo di Carenno" and B73/Mo17 seeds. The temperature accumulated by the seeds was then measured using FLIR ResearchIR Max software, and statistical analysis was performed (Figure 9b). Figure 9 highlights that seed temperature was higher in the varieties that accumulate phlobaphenes (i.e., dark pigments): after thirty minutes of exposure to the sun, the seeds of "Nero Spinoso" reached an average temperature more than 5 degrees higher than that of the control line (38.7 °C vs. 33.1 °C) (Figure 9b). The other three Italian landraces were in an intermediate situation, but "Rostrato Rosso di Rovetta" reached an average temperature statistically higher than "Scagliolo di Carenno" and "Spinato di Gandino", thanks to the accumulation of phlobaphenes in the pericarp. However, these two varieties accumulated one degree more than the colorless B73/Mo17 used as a control.

Discussion
The domestication center of maize is located in Mexico, and from there it spread within the Americas and subsequently to the rest of the world, including Europe [45,46]. The cultivation of this crop all around the globe led to its local selection and its adaptation to new environments and, consequently, to the development of different landraces, mainly linked to local food production and traditional farming systems [5,46]. When compared to modern hybrids, these traditional varieties have lower yields, but are characterized by considerable phenotypic and genetic variability [47,48]. In addition, most of the landraces synthesize and accumulate pigments in the seed, such as phlobaphenes and carotenoids. The use of maize genotypes with pigmented pericarps seems beneficial for human health due to their antioxidant capacity [32,49-52] and seems promising for a reduction in Fusarium spp. infection and fumonisin accumulation [26-29].
The beneficial properties derived from the accumulation of phlobaphenes (and flavonoids in general) allow the landrace "Nero Spinoso" to be considered as a functional food compared to the colorless varieties [21]. Pointed maize, also known as beaked maize, represent a peculiarity in maize cultivation. In northern Italy, 28 pointed varieties have been surveyed, but a specific characterization has only been reported for very few of these [21,53,54]. The traditional pointed varieties used in this study were "Nero Spinoso", "Rostrato Rosso di Rovetta", "Spinato di Gandino" and "Scagliolo di Carenno". Despite the lower yield, these varieties were selected because they were better adapted to cultivation in mountain areas compared to maize with the classic spherical/parallelepiped seed shape. To our knowledge, the inheritance of the "pointed trait" has not yet been studied in the Italian landraces. Hence, starting from the hypothesis that the "pointed trait" was a quantitative character, the heritability of this trait was determined in F1 and F2 populations obtained through controlled crosses. As shown in Figure 4, the genetic basis of this trait is common among the "Rostrato Rosso di Rovetta", "Nero Spinoso", "Spinato di Gandino" and "Scagliolo di Carenno" pointed varieties, due to the fact that the crosses carried out in all pairwise combinations showed the "pointed trait" in F1 and in the following F2 generation (Figure 4). Hence, the same loci determine the phenotype "pointed trait" in the different varieties. Furthermore, with the aim of estimating the number of genes involved in the "pointed trait", an F2 segregating population was scored for the presence of ears with pointed seeds The results obtained suggest that two/three major genes acting with additive effects are responsible for the "pointed trait" in the genetic material evaluated in the present study. The topic of the study was to assess the quantitative effect of the pointed shape of the kernels during traditional drying. The general idea was that this shape enhances the airflow around the kernel, while it is still attached to the cob and not exposed to free air convection. Natural or traditional solid drying was the process adopted widely in northern Italy up to the mid-20th century. Maize cobs were collected and stored under farmhouse roofs and exposed to the natural air stream during autumn and winter. In the literature, several works related to the drying process of maize are reported. Among others, Roman et al. [55] investigated the effect of a super absorbent polymer as a desiccant. Azmir et al. [56] and Janas et al. [57], in two different research papers, evaluated the loss of humidity in maize in fluidized bed-drying. Sanghi et al. [58] used CFD to evaluate the natural convection in a solar maize dryer, while Malekjani et al. [59] collected and evaluated different simulation method for different drying processes. Regarding the simulation of the behavior of the maize kernel during artificial drying, an interesting investigation was published by Nemenyi et al. [60]. Among the literature, several works assess the drying process but none, the natural drying process specific to the northern area of Italy, where pointed kernel maize was grown. 
The results presented in this work demonstrate that, for a natural drying process with a low temperature and a low airstream speed, the pointed kernel shows a distinct advantage over a variety with "conventional" shaped maize seeds, with a volume fraction of water that is 15% less with the same exposure time. However, a parallel simulation was also run with standard parameters for the industrial drying of maize (airstream of 5-10 m/s and air temperature around 50-60 • C). Within these boundary conditions the presence of the pointed shaped kernels played no role in the process (data not shown). In further work, it is planned to simulate the drying process of the 2D section of the cob to understand better the interaction effect between one kernel and the next and to assess the effect of the less dense packing within the cobs of Nero Spinoso. Finally, with High Power Computing, it will be possible to analyze the entire 3D shape of the cobs scanned. Furthermore, the pigmentation of the pericarp seems to have an effect on heat accumulation in the seeds, and this could, therefore, help to explain the use of these traditional varieties in mountain areas, where average seasonal temperatures are lower than those in the typical flat plains dedicated to maize cultivation. In fact, "Nero Spinoso" and "Rostrato Rosso di Rovetta" accumulated more heat in their seeds, thanks to the presence of phlobaphenes in the pericarp: 38.7 • C and 37.0 • C, respectively ( Figure 6). The difference of 1.7 • C between these two varieties was due to the concentration of phlobaphenes: in "Nero Spinoso" these were higher compared to "Rostrato Rosso di Rovetta", as shown in Figure 1. Furthermore, the average temperature of seeds was statistically lower in the varieties that accumulate carotenoids ("Scagliolo di Carenno" and "Spinato di Gandino"), but the lowest temperature values were recorded in the colorless B73/Mo17 (33.1 • C) ( Figure 6). Of course, different traits are involved in adaptation to the mountainous environment: among them, earliness of and cold tolerance during heterotrophic and autotrophic growth are recognized as the most important [61,62]. Further work will be necessary to better characterize these genetic materials. Conclusions The data reported in this work suggest that pointed varieties have been selected in mountain areas (at least partly) for their seeds' property of drying quickly in a relatively cold and wet environment, which confers an advantage in comparison with normally shaped varieties by limiting the development of harmful fungi after harvest. Of course, other traits are involved in adaptation to mountain environments that could be useful in breeding programs with the aim of improving the sustainability of this culture. Genetic analysis pointed out that two/three genes are involved in the "pointed trait", and this information will help to preserve and improve the so far neglected OPV (open pollinated varieties).
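As a concrete illustration of the segregation result summarized in the conclusions (6 pointed ears out of 183 F2 ears, tested against the 1:16 and 1:64 hypotheses with the χ² ≤ 3.84 acceptance criterion at DF = 1), the sketch below re-runs the χ² goodness-of-fit test. It is an illustrative re-computation using the counts reported in the Results, not the authors' original analysis script.

```python
from scipy.stats import chi2 as chi2_dist

def segregation_chi2(observed_pointed, total, ratio):
    """Chi-square goodness-of-fit for a given 'pointed' segregation ratio (1 dof)."""
    expected_pointed = total * ratio
    expected_other = total * (1 - ratio)
    observed_other = total - observed_pointed
    chi2 = ((observed_pointed - expected_pointed) ** 2 / expected_pointed
            + (observed_other - expected_other) ** 2 / expected_other)
    p_value = chi2_dist.sf(chi2, df=1)
    return chi2, p_value

# Observed in the Results: 6 pointed ears out of 183 F2 ears.
for label, ratio in [("1:16 (two genes)", 1 / 16), ("1:64 (three genes)", 1 / 64)]:
    chi2, p = segregation_chi2(6, 183, ratio)
    # Hypothesis accepted if chi2 <= 3.84 (5% level, 1 dof), as stated in the paper.
    print(f"{label}: chi2 = {chi2:.2f}, p = {p:.3f}, accepted = {chi2 <= 3.84}")
```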
Experimental study on the explosion characteristics of hydrogen-methane premixed gas in complex pipe networks To explore the overpressure evolution laws and flame propagation characteristics in complex pipe networks after the addition of hydrogen to methane, we experimentally studied the explosive pressure wave and flame wave propagation laws for three different premixed gas mixtures with hydrogen-methane concentrations of 0, 10% and 20% when the equivalence ratio was 1. Experimental results indicate that the maximum explosion overpressure of the premixed gas increases with increasing distance from the explosion source, and it shows a gradually decreasing trend. In the complex pipe network, an overpressure zone is formed in the B–E–H and D–E sections of the network. The flame temperature is superimposed with the superimposition of the pressure, showing a trend of first increasing, then decreasing, then increasing, and finally decreasing in the complex pipe network. The flame arrival time increases with increasing distance, and the maximum flame speed shows a decreasing trend. The peak overpressure and maximum flame velocity of the premixed gas under a hydrogen volume fraction of 20% are 1.266 MPa and 168 m/s. The experimental research results could provide important theoretical guidelines for the prevention and control of fuel gas explosions in urban pipe networks. www.nature.com/scientificreports/ experimental pipeline. Zhou et al. 12 conducted an experimental study on the explosion of premixed hydrogen and air in a confined space, obtained the explosion characteristics of the premixed gas and determined that the thin-walled strain response of the pipeline was in good agreement with the explosion pressure. In addition, the study of the pipeline structure on the flame propagation characteristics and dynamic behaviour of explosions is also of great significance for the prevention and control of flammable gas explosion disasters. Zhu et al. 13 systematically studied the flame acceleration mechanism in pipe bends and bifurcation structures. Sulaiman et al. 14 used FLACS numerical simulation software and found that the presence of a 90° turning structure would increase the flame speed by approximately 2 times. Emami et al. 15 mixed hydrogen and air in 90° curved pipes and three-way pipes to conduct experiments to study the explosion characteristics, and the results showed that the mixed gas weakened the peak overpressure and flame propagation speed of the explosion in the curved pipe and that the peak overpressure and flame propagation speed of the explosion in the three-way pipeline were not affected. Zhu et al. 16 carried out experimental research and a numerical simulation on the propagation characteristics of methane explosion flames and shock waves in a parallel structure network and determined that when the shock wave propagates in parallel pipes, the peak overpressure and maximum temperature continues to decrease. Niu et al. 17 studied the propagation characteristics and attenuation laws of gas explosions in parallel network pipe networks, gradually complicating the pipeline structure. Previous work on hydrogen-methane premixed gas explosions mainly focuses on the change characteristics of explosion overpressure and flame propagation velocity in straight pipes and simple structure pipelines. However, there have been only a few studies on the propagation laws of pressure waves and flame waves in complex networks [18][19][20][21][22][23][24] . 
The crisscrossing pipeline structure makes the propagation laws of pressure waves and flame waves more complicated, and previous studies have only considered simple curves and bifurcations; it is not enough to consider only the propagation process in simple roadways such as bends and bifurcations. Therefore, a complex pipe network system was established to conduct a hydrogen-methane mixed explosion experiment in the current work. An experimental study was carried out to investigate the overpressure attenuation and flame propagation characteristics of premixed gases with three different hydrogen-methane concentrations when the equivalence ratio was 1. The aim was to determine the propagation law of hydrogen-methane explosions in a complex pipe network. The results of this work could serve as theoretical guidelines for the prevention and control of gas explosions in pipe networks. Pipe network experiment system The experimental device is shown in Fig. 1. The experimental pipeline system mainly includes 5 subsystems, namely, a high-energy ignition device, a gas distribution device, a vacuum meter, the explosion pipe network system, and a dynamic data acquisition system. In the explosion pipe network system, the volume of the explosion chamber is 0.1 m³, the inner diameter of the cylindrical pipeline is 200 mm, and the wall thickness is 12 mm. The pipes in the system are all made of carbon steel, which is resistant to high temperature and corrosion and has a pressure resistance of more than 5 MPa. Each pipe includes a 20 mm diameter hole for inserting various sensors. The various components are connected with the pipes using internal threads. The accuracy of the experiments was maximized by increasing the air tightness of the pipeline network through the installation of a silicone gasket at the connection between each component and its corresponding pipe. The ignition system mainly includes a high-energy igniter, a high-energy spark plug, a high-voltage and high-temperature-resistant cable, a power cable and an external trigger spark plug placed in the front of the explosion chamber, and a strong electric spark ignition is generated by alternating current with a voltage of 220 V and a frequency of 50 Hz. The gas distribution system mainly uses three high-sensitivity mass flow controllers for direct gas distribution in accordance with the gas partial pressure law. The vacuum pump is used to send the prepared hydrogen-methane combustible gas into the gas filling area in the pipeline. A circulating pump was then used to circulate the gas for 20 min to ensure uniform and full mixing of hydrogen, methane and air. An air compressor was used for 30 min of high-pressure ventilation after each experiment to discharge the residual exhaust gas from the explosion pipe network system. The TST6300 dynamic data acquisition and analysis system connects the dynamic data storage instrument, the pressure sensors, the flame sensors and the computer together. In the network, a group of sensors is arranged along the pipe central line at each measuring point. Eighteen groups are arranged in total, and each group includes one pressure sensor, one temperature sensor and one flame sensor (O is the explosion source; A, B, D, E, and H are bifurcated structures; F is a turning structure; and C and G are pipe outlets). The arrangement of the sensor measuring points is shown in Fig. 2.
Three tests are conducted for each experimental condition, and the numerical value obtained in the experiment is the mean of the three values. If point O is considered the origin of the coordinates, the direction O-A-D-F is the x-axis and the direction A-B-C is the y-axis, then Table 1 shows the coordinates of the measurement points and the explosion source. For a multielement combustible gas mixture, its concentration can be expressed by the fuel equivalence ratio (ψ), calculated by Eq. (1) as ψ = (F/A)/(F/A)_stoic, where F/A is the fuel-air ratio and (F/A)_stoic is the fuel-air ratio at the stoichiometric concentration. ψ < 1 indicates a lean fuel mixture, ψ = 1 indicates a mixture at the stoichiometric concentration, and ψ > 1 indicates a rich fuel mixture. The explosion experiments are carried out at the stoichiometric concentration, that is, for the CH4-H2 gas mixture under the condition ψ = 1. The volume fraction of hydrogen in the mixed fuel is expressed as φ_H2 = V_H2/(V_H2 + V_CH4). The experiment was carried out under ambient pressure (1.0 atm) and temperature (298 K). The main components are methane and hydrogen, and their purity is greater than 99.9%. Three premixed gases with hydrogen concentrations of 0, 10% and 20% were used (the equivalence ratio is 1). The specific parameters are shown in Table 2. Experimental results and analysis Overpressure propagation laws of pressure waves in the pipe network. Figure 3 shows the maximum explosion overpressure of the hydrogen-methane pressure wave propagating in the complex pipe network under the three hydrogen volume fractions. As shown in the figure, when the volume fraction of hydrogen is 20%, the maximum explosion overpressure of the premixed gas is higher than at volume fractions of 10% and 0. The maximum explosive overpressure of the premixed gas increases as the volume fraction of added hydrogen increases. When the volume fraction of hydrogen is less than 20%, the explosion intensity is reduced. In the complex pipe network, the explosion overpressure at the T1 measuring point at bifurcation structure A is the highest, and the explosion overpressure at the T15 measuring point at pipe outlet G attenuates to the lowest value. Figure 5 shows the curve of the maximum explosion overpressure over time at some measuring points of the complicated pipe network under a hydrogen volume fraction of 20%. When the pressure wave of the premixed gas propagates to measuring point T18, the maximum explosion overpressure is approximately 0.877 MPa, which is 18.2% and 2.6% higher than those of measuring point T16 and measuring point T18, respectively. Compared with measuring point T16, the pressure at measuring point T17 is increased by approximately 16.8%. The maximum explosion overpressure near the centre of the B-E-H branch gradually increases, forming an enlarged area. This result is mainly due to the appearance of opposing pressure waves in the B-E-H branch. When pressure waves from the A-B-C branch and the F-H-G branch, which are opposite to each other, meet in the B-E-H branch, the oscillating pressure waves are superimposed, causing the pressure to rise. Similarly, the pressure wave of the premixed gas explosion propagates to measurement point T7 in the middle of the pipe network. The maximum explosion overpressure there is approximately 0.745 MPa. Compared with measuring points T6, T8 and T9, the maximum explosion overpressure increases by approximately 10.1%, 28.9% and 44.7%, respectively.
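For the gas-distribution and equivalence-ratio definitions above, the volume fractions of CH4, H2 and air at ψ = 1 follow directly from the combustion stoichiometry CH4 + 2O2 and H2 + ½O2, with air taken as 21% O2 by volume. The sketch below carries out this calculation for hydrogen fuel fractions of 0, 10% and 20%; it is an illustrative calculation, not a reproduction of Table 2, and any small differences would come from the exact air composition assumed.

```python
def stoichiometric_mixture(h2_fuel_fraction, o2_in_air=0.21):
    """Volume fractions of CH4, H2 and air for an equivalence ratio of 1.

    h2_fuel_fraction is the volume fraction of H2 in the CH4/H2 fuel blend.
    Per unit volume of fuel: CH4 needs 2 volumes of O2, H2 needs 0.5.
    """
    x_h2 = h2_fuel_fraction
    x_ch4 = 1.0 - x_h2
    o2_needed = 2.0 * x_ch4 + 0.5 * x_h2        # volumes of O2 per volume of fuel
    air_needed = o2_needed / o2_in_air          # volumes of air per volume of fuel
    total = 1.0 + air_needed
    return {"CH4": x_ch4 / total, "H2": x_h2 / total, "air": air_needed / total}

for frac in (0.0, 0.10, 0.20):
    mix = stoichiometric_mixture(frac)
    print(frac, {k: round(v, 4) for k, v in mix.items()})
# Pure methane at psi = 1 gives ~9.5 vol% CH4 in air, the familiar stoichiometric value.
```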
The overpressure also gradually increases near the centre of the D-E branch, and an area of increase is likewise formed. This is because pressure waves arriving from the B-E-H branch and from the opposite direction along the A-D-F branch appear in the D-E branch; when they meet in the D-E branch, the pressure waves are superimposed and the pressure rises. Although the time for the pressure wave to propagate to measuring point T7 is shorter than that to measuring point T18, the overpressure at measuring point T18 is higher than the overpressure at measuring point T7. This is because, although the energy is shunted many times, most of it still spreads throughout the main straight pipe, and at the same time measuring point T18 accumulates the energy from measuring points T17, T7 and T8. Because branches B-E-H and D-E are located in the middle of the complex pipe network, they are readily subjected to repeated overpressure oscillations from different branches of the pipe network. Under the action of the reverse pressure wave, a high-pressure area with strong destructive power is formed in the middle of the pipe network. Flame propagation laws of pressure waves in the pipe network. For the maximum flame temperature at each measuring point in the branches, the mixture with a hydrogen volume fraction of 20% has a higher temperature than the 10% mixture and the sample that does not contain hydrogen. When the hydrogen volume fraction is 20%, the temperature in the O-A-B-C branch is 1555 K at T1; it then rises slowly to 1587 K at T10, reaches the highest point of 1647 K at T11, and finally drops. The flame temperatures of the other branches also increase first and then decrease. However, in the O-A-B-E-H-G and O-A-D-E-H-G branches, the temperatures at T3 and T11 first reach their peaks, then fall, and then reach second peaks at T7 and T17, respectively, before falling again. The overall flame temperature therefore repeatedly rises and falls, because the flame temperature increases again in the B-E-H and D-E branches entering the middle of the pipe network. This result is due to the gradual increase in gas expansion as the flame propagates through the bifurcations and the turn, which disturbs the flow and enlarges the flame surface, while the negative feedback of the compression wave on flame propagation causes a backflow of the flame in the tube, resulting in a new peak in the flame temperature in the B-E-H and D-E branches. Then, as the propagation distance increases, the maximum flame temperature at pipe outlet G decreases, and the flame attenuation is more obvious. The flame temperature during the explosion propagation of the hydrogen-methane premixed gas in the complex pipe network thus shows a trend of first increasing, then decreasing, then increasing, and then decreasing. The flame propagation velocity between adjacent sensors was calculated as v = x_n/(t_{n+1} − t_n), where v is the flame propagation velocity, x_n is the distance from the (n+1)-th flame sensor to the n-th flame sensor, t_{n+1} is the time at which the flame front arrives at the (n+1)-th flame sensor, and t_n is the time at which the flame front arrives at the n-th flame sensor. The flame velocity of the hydrogen-methane premixed gas first rises and then falls after the explosion. When the volume fraction of hydrogen is 20%, the pressure wave generated by the explosion breaks through the film.
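A minimal sketch of the velocity calculation defined above: the segment length divided by the difference of flame-front arrival times at consecutive sensors. The spacings and arrival times below are placeholders of the right order of magnitude, not measured values from the experiment.

```python
import numpy as np

def flame_speeds(distances, arrival_times):
    """Mean flame speed on each sensor-to-sensor segment.

    distances[i]    : spacing between sensor i and sensor i+1 [m]
    arrival_times[i]: flame-front arrival time at sensor i [s]
    v_i = x_i / (t_{i+1} - t_i)
    """
    d = np.asarray(distances, dtype=float)
    t = np.asarray(arrival_times, dtype=float)
    return d / np.diff(t)

# Placeholder example (not experimental data): three 0.5 m segments.
print(flame_speeds([0.5, 0.5, 0.5], [0.010, 0.016, 0.021, 0.024]))
# -> [ 83.3 100. 166.7] m/s, the same order of magnitude as the speeds reported above
```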
Under the action of high temperature and high pressure, the hydrogen-methane gas reacts fully with oxygen, and the flame begins to accelerate. The flame velocity increases from 78.3 m/s at T1 to 92.1 m/s at T2. In the initial explosion stage, the pressure wave propagates in the straight pipe, and the flame propagates slowly. The maximum velocity, 167.9 m/s, is reached at T3; the flame propagation noticeably accelerates because the reaction is intensified by turbulence, and the flame front expands rapidly after the bifurcation and turn and propagates towards the different branches. After passing through measuring point T3, the flame propagation begins to decelerate due to reasons such as insufficient fuel, wall reflection and pipe heat dissipation, making the maximum flame propagation velocity continuously decrease in each branch. By the time the flame reaches measuring point T15 at pipe outlet G, the flame propagation velocity has been reduced to its minimum. The flame propagation velocity is higher under a hydrogen volume fraction of 20% than under a 0% or 10% hydrogen volume fraction, and the velocities in the other branches are similar. From the perspective of chemical reaction kinetics, methane (CH4) has a large C-H bond energy; as a result, the burning rate of the single gas is slow, the flame propagation velocity is low and the combustion is incomplete at low concentrations, whereas the activity of hydrogen (H2) is high. Adding even a little hydrogen to methane has a great impact on the overall properties of the premixed gas, and a higher proportion of hydrogen enhances the concentration of the energy release. The flame propagation velocity of the hydrogen-methane-air mixture is mainly governed by the free-radical content produced by CH4 combustion in the gas, and the mixing of H2 promotes the flame reaction to a certain extent 25. The addition of more hydrogen significantly increases the forward reaction rate 26 and therefore significantly extends the combustion limit of methane and increases the combustion rate and flame propagation velocity. Overpressure attenuation and flame mutation in a complex pipe network. Under the three hydrogen volume fractions, the change in the peak overpressure of the explosion wave at each structure in the pipe network is expressed by the shock wave peak overpressure attenuation factor μ, calculated by Eq. (4), where P is the peak overpressure of the explosion wave before the bifurcation or turning structure in units of MPa, P′ is the peak overpressure of the explosion wave after the bifurcation or turning structure of the pipe in units of MPa, and μ is a dimensionless quantity. Similarly, the change in flame propagation velocity is expressed by the flame mutation factor ε, calculated by Eq. (5), where v is the flame propagation velocity before the bifurcation or turning structure, v′ is the flame propagation velocity after the bifurcation or turning structure, and ε is a dimensionless quantity. The experimental pipe network includes 5 bifurcation structures and 1 turning structure. Equation (4) and Eq. (5) are used to calculate the maximum explosion overpressure and flame propagation velocity changes at 11 locations in the pipe network. Figure 9a shows the overpressure attenuation factors of the pipe network. It can be observed that adding hydrogen to methane can reduce the attenuation of methane explosions in the pipe network. This is because when the more sensitive hydrogen is mixed into the less sensitive methane-oxygen premixture, the sensitivity of the entire premixed gas increases.
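Because the displayed forms of Eq. (4) and Eq. (5) did not survive extraction, the sketch below illustrates the two factors under the assumption that both are relative drops across a junction, μ = (P − P′)/P and ε = (v − v′)/v, which is consistent with larger values meaning a larger decrease; if the original paper defines them differently (for example as a simple ratio), the function bodies would change accordingly.

```python
def attenuation_factor(p_before, p_after):
    """Overpressure attenuation across a bifurcation or turning structure.

    ASSUMED definition: relative drop (P - P') / P, dimensionless, with a
    larger value meaning stronger attenuation. The paper's exact Eq. (4)
    was lost in extraction and may differ.
    """
    return (p_before - p_after) / p_before

def flame_mutation_factor(v_before, v_after):
    """Flame-speed change across a bifurcation or turn (ASSUMED form, see above)."""
    return (v_before - v_after) / v_before

# Placeholder numbers, not the measured values at the T points:
print(attenuation_factor(1.0, 0.7), flame_mutation_factor(160.0, 120.0))
```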
Compared with the case where hydrogen is not added, the highly sensitive gas has greater instability during the explosion near the pipe outlet after the addition of hydrogen. The movement is more violent and is relatively less affected by the expansion wave. Therefore, the explosion intensity near the bifurcation and turning structures of the pipe network is less attenuated after the addition of hydrogen. Figure 9b shows the flame mutation factors in the pipe network. When the volume fraction of added hydrogen increases, the flame mutation of the premixed gas near the bifurcation and turning structures of the pipe network is weakened. Figure 10 shows the influence of the structure of the complex pipe network on the overpressure attenuation and flame mutation. The pressure attenuation factors and flame mutation factors at the B and H bifurcation structures of the pipe network in this experiment are relatively large, which means that the pressure and flame decrease greatly after the branch flow at these bifurcation structures. The pressure attenuation factors and flame mutation factors at the D and E bifurcation structures are relatively small, which means that the decrease in pressure and flame after the split flow through these bifurcations is small. When the energy generated by the explosion propagates to a bifurcation structure of the pipeline, its propagation direction and magnitude change: most of the energy is concentrated in branches O-A-B-C and O-A-D-F-H-G of the pipe network, and the energy entering the B-E-H and D-E branches in the middle of the pipe network is reduced, weakening the explosion to a certain extent. However, due to the influence of the pipe network geometry, the energy meets from opposite directions in the B-E-H and D-E branches, so that the pressure and temperature increase instead of decrease. Therefore, under the repeated action of opposite energy waves travelling along different routes, a high-temperature and high-pressure zone is formed in the middle of the pipeline, and the destructive force increases. The geometric structure of the pipe network is an important factor that affects the attenuation of the hydrogen-methane premixed gas explosion energy in the pipe network. Conclusions (1) After different hydrogen volume fractions are added to an explosion of methane in a pipe network, the maximum explosion overpressure shows an increasing trend with increasing hydrogen volume fraction. The premixed gas with a hydrogen volume fraction of 20% has the most complete reaction, produces a stronger pressure wave, and has a faster flame propagation speed; its maximum explosion overpressure and flame propagation speed, 1.266 MPa and 168.7 m/s respectively, are the largest observed. (2) After the hydrogen-methane premixed gas explodes in the pipe network, the maximum explosion overpressure of the premixed gas increases with increasing distance from the explosion source in the four branches of the complex pipe network, and the maximum explosion overpressure shows a gradually decreasing trend. (3) After the hydrogen-methane premixed gas explodes in the pipe network, the maximum flame temperature first increases and then decreases. Due to the reverse pressure wave and the subsequent forward pressure wave in the B-E-H and D-E branches, the temperature of the flame increases again and eventually decreases.
The flame arrival time increases with increasing distance. The maximum flame propagation speed first rises and then gradually decreases. Near pipe outlet G, the flame speed decays to its lowest value. (4) The overpressure attenuation factor and flame mutation factor of the explosion at the bifurcation and turning structures in the pipe network increase the sensitivity of the premixed gas due to the increase in the hydrogen volume fraction. The increased sensitivity affects the explosion overpressure attenuation and flame mutation, resulting in a slow attenuation of the explosion intensity. In addition, due to the geometrical structure of the pipeline, the opposite energy waves of different paths repeatedly act on the B-E-H and D-E branches in the middle of the complex pipe network to form a high-temperature and high-pressure zone, which increase the destructive power. Data availability The datasets generated and analyzed during the current study are available from the corresponding author on reasonable request.
Stem cells and tissue engineering: an alternative treatment for craniofacial congenital malformations and articular degenerative diseases The life quality of patients with craniofacial malformations is severely affected by the physical disabilities caused by the malformation itself, but also by being subjected to bullying, which leads to a series of relevant psychological and societal effects that have an economic impact on the health sector. Orofacial clefts, notably cleft lip (CL), cleft palate, and microtia, are the most common craniofacial birth defects in humans and represent a substantial burden, both personal and societal. On the other hand, osteoarthritis is a widespread degenerative disease that is becoming more common due to the extension of the human lifespan and to an increase in injuries in young people as a result of their lifestyle. Advances in tissue engineering as a part of regenerative medicine offer new hope to patients that can benefit from new tissue engineering therapies based on the supportive action of tailored 3D biomaterials and the synergic action of stem cells that can be driven to the process of bone and cartilage regeneration. This review provides an update on recent considerations for stem cells and studies on the use of advanced biomaterials and cell therapies for the regeneration of craniofacial congenital malformations and articular degenerative diseases. INTRODUCTION According to the National Cancer Institute, degenerative disease is a pathology in which the function or structure of the affected tissues or organs worsens over time [1] . Unfortunately, neither most degenerative diseases nor craniofacial congenital malformation diseases have a cure, so they evolve until patients become severely disabled. Since stem cells became an alternative treatment, they have changed the course of these diseases. Their applications are currently being tested and have shown positive results in several of these diseases. Stem cells are cells with self-renewal and differentiation abilities. Mesenchymal stem cells (MSC) are adult stem cells that are not hematopoietic and can be found in several tissues, such as adipose tissue, bone marrow, and umbilical cord, to mention some examples. According to the International Society for Cell Therapy (ISCT), MSC must (1) be plastic-adherent; (2) express CD105, CD73, and CD90; (3) lack CD45, CD34; and (4) differentiate into osteoblasts, adipocytes, and chondroblasts [2] ; however, these criteria do not suffice to justify their therapeutic potential [3] . Besides their differentiation ability, MSC have paracrine activity in angiogenesis, cellular activation/proliferation, and immunomodulation [4,5] . Since they were first introduced in 1970 by Friedenstein, MSC have changed the treatment of individuals with orthopedic, hematologic, oncologic, ophthalmologic and dermatologic conditions. They have been used mainly to replace cell lines that have been lost or destroyed or to modify the behavior of other cells. In this paper, we will briefly describe the applications of MSC in common degenerative and congenital diseases in Mexico. DEFINING THE REGENERATIVE POTENTIAL OF MSC BEYOND BIOLOGY For many years, the use of autologous cells isolated directly from biopsies was the only alternative for tissue engineering applications. Fully differentiated cells tend to lose cellular features if they are exposed to a constant cellular division. 
These cellular features include changes in the extracellular matrix (ECM), protein synthesis, altered metabolism, and dedifferentiation. Regenerative therapies commonly need a high number of cells, leading to the search of cells with high regenerative potential and no risk of morphological features loss. Mesenchymal stromal cells have become a promising alternative since they are one of the first cells in cellular lineage with unlimited fashion propagation and an extensive differentiation ability [3] . The analysis of the potential of MSC for therapeutic purposes can be conducted at different stages. Typically, the mesenchymal phenotype according to the ISCT criteria should be verified; however, additional surface markers have been described, which include being positive for CD29, and negative for CD14, CD11b CD19, CD79 alpha, and HLA-DR surface markers. Differentiation protocols can also be analyzed based on the expression of these markers in chondrogenic, adipogenic, or osteogenic lineages. For example, osteogenic differentiation can be confirmed with alkaline phosphatase activity, calcium release after osteogenic stimulation, catalase (osteoclast inhibitor), and glutathione peroxidase 3 (osteogenic biomarker) expression [6] . Transcriptional analysis at mRNA levels is another alternative to track the therapeutic potential of MSC. It is possible to estimate cellular growth and colony-forming potential quantifying the MSC marker STRO-1 and the platelet-derived growth factor receptor A (PDGFR-alpha). A transcriptional increase of Twist-related protein-1 (TWIST-1) and Twist-related protein-2 (DERMO-1) has also been described as crucial for MSC growth and development [7] . There has been a continuous debate about whether autologous or heterologous cells are the most adequate source of MSC in regenerative therapies for congenital and craniofacial diseases. Their immunomodulatory ability is a relevant aspect exerted through the inhibition of T-cell proliferation, which regulates the immune response, and is also involved in the alloimmune response. Autologous MSC have been shown to decrease in vitro alloimmune response in host autologous cells in transplanted murine models. It has been proposed that the homing activity of MSC creates immune-privileged sites that limit the infiltration of CD4+ and CD8+ T cells in tissues, thus limiting damage and promoting regeneration [8] . Meanwhile, heterologous MSC from bone marrow (BM-MSC) have been used for the treatment of pseudoarthrosis and have been proved to promote healing of femoral fractures in a claudication animal model. Heterologous BM-MSC reached the lesion 24 h after being infused, and later promoted a periosteal reaction that lead to fracture consolidation and cartilage formation 120 days after the infusion. In comparison, BM-MSC alone formed a fibro-osteoid tissue [9] . These effects lead us to elucidate that the use of autologous versus allogeneic MSC will depend on the required clinical outcome. In recent years, clinical implications and advantages in the use of stromal vascular fraction (SVF) have opened new alternatives for tissue engineering in craniofacial or degenerative diseases. The differences between SVF and adipose-derived MSC (AD-MSC) are that an SVF is a freshly harvested, heterogeneous population of cells directly isolated from lipoaspirates by mechanical or enzymatic disaggregation that contains stromal cells (15%-30%), erythrocytes, granulocytes, monocytes, pericytes, and endothelial cells [10] . 
AD-MSC are a cultured, more homogeneous subpopulation of cells resulting from culture selection and in vitro expansion. Compared to bone marrow, adipose tissue contains 100-500-fold more MSC, and SVF contains 4-6-fold more MSC; their therapeutic impact, angiogenic stimulation, T-cell regulation, and reduction of IL-10 production make them a feasible source for tissue engineering [11]. Although the proper dose and clinical safety protocols remain to be established, there is no doubt about the potential of AD-MSC to accelerate healing processes. Therapeutic efforts for the treatment of degenerative diseases have led research groups to develop semi-automated, surgically closed systems that obtain SVF during surgery with minimal laboratory equipment, enabling the application and implantation of autologous or heterologous MSC for tissue engineering [12].
Degenerative diseases
A degenerative disease is a pathology in which the function or structure of the affected tissues or organs worsens over time [1]. As mentioned earlier, stem cells have changed the course of these diseases and have become an alternative treatment for degenerative disorders.
Osteoarthritis
Osteoarthritis (OA) is a condition that causes joints to become painful and stiff. It is the most common form of arthritis worldwide, and it mainly affects the knees (85%), hips, hands, and feet. Approximately 240 million people in the world have OA [13], and 5% of adults worldwide have either hip or knee OA. These numbers will increase as the population ages and obesity rates rise [14]. Pain is the main symptom that typically leads patients to seek medical care and also guides clinicians in treatment decision-making. Pain can be so intense that patients become unable to work, making OA the fourth leading cause of years lived with disability worldwide [15]. OA was long understood in terms of changes in the articular cartilage, but that concept has evolved, and the whole joint is now considered [16,17]. Structural damage to the joint includes (1) loss of cartilage; (2) osteophyte formation; (3) subchondral bone changes; and (4) meniscal alterations [17]. Chondral erosions caused by overload or abnormal joint kinematics turn into fissures. In an attempt to repair these lesions, hypertrophic chondrocytes increase their synthetic activity, but in doing so they increase the production of proinflammatory mediators and degradation products. These molecules stimulate the surrounding synovium, increasing its proliferation and its proinflammatory response as well. All of these inflammatory mediators favor endochondral ossification, causing bone overgrowth and osteophyte formation. Pain comes from peripheral nociceptors sensing ongoing tissue injury, as well as from inflammation in the joint [16]. Nowadays, treatment of OA is oriented towards minimizing pain, optimizing function, and modifying the process of joint damage. Pain control, as mentioned earlier, is what guides the physician's choice of treatment. Analgesics and anti-inflammatory medications are the mainstay of treatment, accompanied by lifestyle modifications such as weight loss and physical therapy/activity [18]. Since no medication has been shown to stop the process of OA, measures have been taken to prevent it. Focal cartilage lesions, if left untreated, tend to progress quickly into osteoarthritis.
A retrospective study performed in the National Institute of Rehabilitation in Mexico reported that 61% of the patients undergoing arthroscopic surgery had focal chondral lesions in the knee, with 74% of these being grade III-IV ICRS/Outerbridge [19] . Cartilage reparation techniques, such as microfractures, autologous chondrocyte implantation, and mosaicplasty have shown to delay the appearance of OA, as well as the need for total joint replacement after chondral injuries in young adults [20][21][22][23] . Some biological therapies have been researched, including drugs that promote chondrogenesis and osteogenesis [24] , matrix degradation inhibitors, apoptosis inhibitors, and anti-inflammatory cytokines [25] ; however, none of them have demonstrated sufficient symptom improvement to be included in the standard of care [26] . Mesenchymal stem cells have turned into the most explored therapeutic drug in cell-based OA treatment due to their ability to differentiate to chondrocytes and their immunomodulatory properties [27] . Furthermore, they have been used in different ways to try and modify the course of the disease. MSC seeded on scaffolds Cartilage implants: by taking advantage of the differentiation capacity of MSC to chondrocytes, MSC have been similarly used for cartilage lesion repair as matrix-assisted autologous chondrocyte implants. Previous studies using chondrocytes seeded on collagen or polyglycolic-acid matrixes have shown good mid-to long-term clinical and magnetic resonance imaging (MRI) outcomes, as well as the ability to delay degenerative changes in the knee [28][29][30][31] . A few years ago, the United States Food and Drug Administration approved MACI, a porcine collagen membrane seeded with autologous chondrocytes, for the treatment of focal chondral lesions in the knee [32] . Okano et al. [33] came up with the "cell sheet technology" consisting of multiple cell layers placed on top of another (instead of using a matrix), taking advantage of the intact ECM produced by the cultured chondrocytes and their adhesion factors. This innovative technique has been shown to form hyaline cartilage in preclinical studies and is currently undergoing clinical studies in Japan [34][35][36] . Even though these techniques have had great outcomes, they involve two surgical procedures: one to obtain the cartilage biopsy and the second one for the implantation. This makes the intervention expensive and may increase the risk of surgical complications. MSC seeded on a 3-dimensional scaffold or using the cell sheet technology can help solve this problem. Due to endogenous cell stimulation, MSC differentiate into cartilage, forming a cartilage-like tissue repair [37] . Several clinical and preclinical studies using MSC seeded on matrixes have shown positive results in forming cartilage-like tissue and alleviating symptoms [38,39] . In 2015, Kim et al. [40] conducted a comparative matched paired analysis comparing injected vs surgically implanted MSC in patients with knee osteoarthritis. Patients were evaluated with Patient-Reported Outcome Measures (PROMs), as well as a second-look arthroscopy. After a minimum follow-up of 24 months, patients who underwent MSC implantation showed better clinical and second-look arthroscopic outcomes. Despite the positive findings with this technique, it is usually employed to repair small defects and does not address larger areas related to OA. 
Problems related to the acquisition of autologous MSC and the risk of graft-versus-host reactions with allogeneic MSC have limited their use in clinical studies. Meniscus repair: menisci play an important role in load-bearing and load transmission to the cartilage and subchondral bone. Approximately 15% of knee lesions are associated with damage to the meniscus [41] . Meniscal lesions generate knee instability and further cartilage damage favoring the development of OA. Treatment for meniscal lesions is decided depending on the complexity and the location of the damage. Repair strategies are used when the rupture is small, located in the vascular areas, and the meniscus can be stabilized intra-articularly. However, partial meniscectomy or complete meniscectomy is required in complex lesions. Meniscectomies cause an increase of 235% contact pressure [42] , as well as an increase in OA incidence [43][44][45] . The use of meniscal substitutes after partial meniscectomy has shown symptom relief, as well as a slow decrease of articular degeneration; however, they do not prevent it [46,47] . Leroy et al. [46] reported a decrease in scaffold dimensions leading to a concern about the scaffold's capacity in the long term. The use of MSC in combination with meniscal substitutes have become of great interest due to the evidence of meniscal-like tissue formation after implantation in rats, pigs, and rabbits [48,49] . Olivos-Meza et al. [50] conducted a comparative study between patients who received meniscal substitution with acellular polyurethan meniscal scaffolds (APS) vs. polyurethane scaffold enriched with peripheral blood MSC (MPS). They evaluated femoral and tibial articular cartilage status using MRI T2-mapping 3, 6, 9, and 12 months after surgery, as well as clinical evaluation using PROMs. No differences were observed between APS and MPS during the 12-month follow-up; however, a longer follow-up is needed to see the scaffold degeneration and tissue formation. MSC exosomes: exosomes are extracellular vesicles that function as intercellular communication vehicles transferring lipids, nucleic acids (mRNA and microRNAs) and proteins to generate a response in recipient cells [51] . Exosomes are rich in microRNA, which can bind specific sites in transcribed mRNA, modifying their expression and transduction [51,52] . These properties have been studied to promote cartilage regeneration and decrease pro-inflammatory molecules in OA [53][54][55][56][57] . Tao et al. [56] and Toh et al. [58] reported several microRNAs (140-5p, 23b, 92a, 125b, 320, 145, 22 and 221) derived from human synovial MSC, which promote cartilage regeneration, OA suppression, and cartilage/extracellular matrix homeostasis in preclinical studies. The exosomes' potential for OA treatment, good tolerance, and minimal risk of immunogenicity and toxicity has made them one of the most important hotspots for future research. However, further studies describing how to obtain large-scale purified exosomes as well as their clinical efficacy and biosecurity are still needed. Intra-articular injections: intra-articular injections of MSC have become the main modality of cell therapy research for OA treatment due to their simple application thanks to their anti-inflammatory, immuneregulatory, and regenerative abilities. MSC can be either injected with no other components or mixed with hyaluronic acid (HA), platelet-rich plasma (PRP), or saline solution, to mention some examples. 
Preclinical studies have shown cartilage repair, reduction in proinflammatory cytokines, and improved imaging, morphology, and histology [59,60] . Mixed injections with PRP/MSC or HA/MSC have shown significantly better results on the repaired cartilage than individual uses of any of them. Several clinical trials have been developed worldwide using MSC derived from the stromal vascular fraction (SVF), umbilical cord (UC-MSC), adipose tissue (AD-MSC) or bone-marrow (BM-MSC), the latter being the most common site. BM-MSC have shown a better chondrogenic ability compared to AD-MSC [61] and have shown an improvement in cartilage quality and knee function, as well as a decrease in pain and other symptomatologies [27] . Most clinical trials that use AD-MSC and SVF have been conducted using mixed injections combined with PRP. Results have been positive, showing an increase in cartilage thickness, significant positive changes in MRI, and symptomatology improvement [62] . Few trials have been done using UC-MSC. Cartistem ® is the first approved allogeneic cell treatment for OA in the world. It was approved by the Ministry of Food and Drug Safety in Korea and is now commercially available Congenital anomalies Congenital anomalies, also known as birth defects, are structural or functional anomalies that occur during intrauterine life [65] . These defects can be identified prenatally, at birth, or even during later infancy. They occur in 2%-4% of live births [66] and are more common in stillborn spontaneous miscarriages. Approximately 50% of all congenital anomalies are not linked to a specific cause [65] ; however, they are commonly caused by genetic abnormalities and/or environmental exposures. Genetic abnormalities include chromosomal alterations (e.g., Down syndrome) or single-gene/monogenic disorders. The latter have different modes of inheritance such as autosomal dominant, autosomal recessive, or X-linked [67] . On the other hand, environmental exposure to a teratogen, any agent that causes abnormalities in the form or function of the fetus, can produce cell death, alter normal growth of tissues, or interfere with normal cellular differentiation, resulting in a congenital anomaly [68] . Birth defects are divided depending on the pathophysiology of the defect: (1) malformation when the intrinsic development is abnormal; (2) deformation when extrinsic mechanical forces modify a normally formed structure; (3) disruption when a vascular defect causes a malformation; or (4) dysplasia when there is an abnormal organization of cells into tissues [68] . These defects can be isolated or present in syndromes or associated patterns that may affect one or more organ systems. A lot of preventive measures, as well as treatment measures, have been focused on these anomalies due to their medical, surgical, psychological, and cosmetic significance. Congenital microtia Congenital microtia is the incomplete formation or growth of the auricle, leading to the small or deformed auricle. It may occur as an isolated condition or as part of a syndrome or spectrum of anomalies. Microtia severity ranges from a complete absence of the auricle (anotia) to a mild size discrepancy. Most of the time, microtia occurs unilaterally (79%-93%), the right side being the most affected side [69] . It is associated with hearing loss of the ipsilateral ear, but normal hearing in the unaffected ear. Speech and language development are usually normal. 
Individuals with microtia, however, are at a higher risk of communication delay and attention deficit disorders [70,71] . The etiology of microtia is poorly understood, though there is strong evidence supporting the importance of environmental causes such as altitude, and gestational exposure to certain drugs [72][73][74][75] . Ethnicity has been reported to be an important consideration due to the high incidence and prevalence of microtia among Asians, Hispanics, and Native Americans. In Mexico, the World Health Organization and the Mexican Registry and Epidemiological Surveillance of External Congenital Malformations (RYVEMCE) reported a prevalence of 6.15-7.37 cases per 10,000 childbirths, being one of the countries with the highest prevalence of microtia worldwide [72,75] . Due to the psychological and functional implications related to microtia, there have been several studies focusing on the surgical treatment and biotechnology measures needed to recreate an auricle as similar as possible to the native one. Auricle reconstruction with autologous rib cartilage remains the gold standard for patients with microtia/ anotia. Tanzer et al. [76] and Brent et al. [77] described this technique as an alternative to allogeneic implants in the late 1950s, overcoming several problems associated with these implants. Sculpted autologous costal cartilage graft is one of the most challenging procedures in plastic and reconstructive surgery since the surgeon has to handcraft the cartilage trying to create an ear similar in appearance to the contralateral one. Grafts have good long-term durability and grow concomitantly as the patient ages [77] . However, costal cartilage grafts are not as consistent as synthetic implants: they require long operative time, harvesting results in donor-site morbidity, and, occasionally, there is an insufficient source of cartilage. Tissue engineering techniques emerged as an alternative treatment. The idea of preformed ear structures seeded with cells goes back to the 1940s when Peer et al. [78] started using diced cartilage placed inside an auricle shaped mold. Research started focusing on scaffolds that could promote cell proliferation, as well as matrix production. Decades later, research focused on finding the ideal scaffold that would induce cellular proliferation and cartilage tissue formation. This was proved by Vacanti et al. [79] and Rodriguez et al. [80] , who conducted several preclinical studies showing that polyglycolic acid (PGA) + polylactic acid (PLA) would promote in vitro cell proliferation and matrix production, and in vivo cartilage formation after implantation. Mice were implanted with 3D ear-shaped scaffolds seeded with chondrocytes. After 12 weeks, scaffolds were almost entirely degraded; however, the neo-tissue maintained the original 3D structure and demonstrated histological cartilage appearance. These studies were the introduction of biotechnology to regenerative medicine [81] . The combination of seeded auricular chondrocytes (AuCs) to scaffolds and the computer-assisted design/ computer-aided manufacturing (CAD/CAM) technology [82][83][84] led to the start of clinical studies. The first clinical application was done in Shanghai in 2018 by Guangdong Zhou et al. [84] , where 5 patients with unilateral microtia were implanted with 3D printed PCL + PGA scaffolds seeded with autologous chondrocytes from the cartilage remnants of the microtia. 
2.5 years later, they reported the follow-up of one patient showing the formation of cartilaginous tissue after histologic evaluation, the transition from a stiff graft to a more flexible one over the time, and the degradation of the scaffold without losing the original ear shape. Currently, autologous chondrocytes from the microtia auricle are being isolated, expanded, and seeded onto the constructs, showing normal elastic cartilage on histology [85] . However, monolayer expansion of chondrocytes results in dedifferentiation [80,86] , limiting the capacity to generate robust cartilage, and needs extensive 3D construct culture before implantation [84,87] . Mesenchymal stem cells have the potential of massive expansion and the ability to differentiate into chondrocytes through co-culture or coimplantation [88] . Studies have been done using articular cartilage co-cultures with MSC, though little is known about AuCs and MSC. Pre-clinical in vivo studies have shown the formation of cartilage, but the impact of these studies is limited due to the use of non-human cells, the lack of specific markers for elastic cartilage, and the absence of mechanical evaluation [89][90][91][92][93][94] . Cohen et al. [95] conducted a comparative preclinical study evaluating cartilage formation in constructs using human AuCs vs human AuCs and MSC in a 1:1 ratio. The study showed that the auricular cartilage generated in the 1:1 constructs was similar in structure, histology, biochemical development, and mechanical properties to discs containing only AuCs and native human auricular cartilage after 3 months in vivo. To date, no clinical study using AuCs in combination with MSC has been conducted. However, these findings suggest MCSs could solve several problems related to cartilage culture and could bring other benefits related to their immunomodulatory/anti-inflammatory potential. Cleft lip and palate Cleft lip (CL) and palate (CLP) are common congenital malformations in Mexico, with an incidence of 1 in 800 births [96] . Up to 2003, CLP had a prevalence of 139,000 affected children throughout the country, with approximately 10 new cases identified daily [97] . Patients with CLP undergo (on average) 4 surgical procedures during their lifetime: (1) lip closure and primary nasal repair; (2) palate closure; (3) alveolar bone graft; and (4) rhinoseptoplasty [98] . The alveolar bone graft is the placing of bone in the primary palate to restore the continuity of the maxillary arch and separate the oral and nasal cavity [99,100] . This allows adequate dental hygiene, promotes harmonic facial growth, and provides the necessary bone matrix for the eruption of the lateral and canine incisors [101,102] . The use of cell-based therapy represents one of the most advanced methods to approach craniofacial abnormalities. Several animal models have been used to test alveolar cleft-grafting materials including mice, rabbits, cats, dogs, goats, sheep, and monkeys. Studies have shown heterogeneous results in terms of biocompatibility, bone regeneration capacity, integration, resorption, and mechanical resistance due to the physicochemical characteristics of each material [120,121] . Existing systematic reviews support the ability of bone regeneration on these materials for the treatment of small periodontal bone defects, but recommend further studies on major bone defects such as palatal fissures [122][123][124] . Scaffolds, as in all biotechnology-related applications, have been a major research topic regarding CLP. 
The ideal scaffold should have macro-geometry, micro-architecture, bioactivity, and appropriate mechanical properties [125] . The first two characteristics have been addressed with the introduction of 3D printed scaffolds. A head CT scan is performed in patients with CLP, and a scaffold with the patient's exact macroscopic geometry is created. Bioactivity and mechanical properties are determined by the scaffold material. Several different materials like polycaprolactone (PCL) with hydroxyapatite and platelet-derived growth factor-BB [125] , cryogels [126] , demineralized bone matrices, PLA, among others, have been tested to evaluate bone regeneration and cellular migration [127][128][129][130][131] . Today, the use of bioceramics, such as calcium phosphate, in combination with biomimetic polymer scaffolds, folic acid derivatives, morphogens, and stem cells are currently considered the most promising alternatives for CLP regeneration [127] . The use of mesenchymal stem cells is emerging as an alternative treatment or in combination with previously-described therapies for patients with CLP. As mentioned earlier, MSC can be obtained from different parts of the body such as adipose tissue, bone marrow, and umbilical cord. The generation of an artificial alveolar cleft and the implantation of teeth in the regenerated bone region have been accomplished in dog models using BM-MSC [132][133][134] . Ahn et al. [135] reported the first case of regeneration of an alveolar cleft defect. Patient-specific 3D-printed bioresorbable polycaprolactone (PCL) scaffolds were seeded with iliac BM-MSC and showed 45% defect regeneration 6 months after transplantation, with a 75% bone mineral density compared to the surrounding bone. AD-MSC, due to their availability and easy handling, are excellent candidates for tissue engineering in CLP patients. Preclinical studies comparing bone regeneration between AD-MSC and autogenous bone graft in canine maxillary alveolar cleft models showed no significant differences, meaning AD-MSC can be an acceptable alternative [136] . However, clinical studies are needed to confirm their efficacy and reproducibility in humans. Unlike other alternatives, MSC derived from dental tissues have been studied for CLP patients due to their higher accessibility and less invasive retrieval. Lee et al. [137] reported that stem cells from human exfoliated deciduous teeth (SHEDs) have mineralization potential after expressing bone-specific osteogenic markers following insertion into ex vivo-cultured embryonic palatal shelves and in novo culture. Furthermore, Nakajima et al. [138] compared the bone regeneration ability of SHEDs, BM-MSC, and dental pulp stem cells in mice. They concluded that after 12 weeks of transplantation, the ratio of new bone formation was not significantly different among these groups. However, SHED produced the largest osteoid and widely distributed collagen fibers. Up until now, no clinical studies have been conducted using SHEDs. Although a huge effort has been devoted to the use of tissue engineering as a solution for treating bone defects, more evidence is still needed. CONCLUSION Mesenchymal stem cells are an emerging alternative for tissue engineering therapies. Besides their differentiation ability, they also express paracrine functions, which have shown to be immunomodulatory and anti-inflammatory. Taking advantage of these functions, MSC have been studied in different fields for the medical treatment of degenerative and congenital diseases. 
Despite favorable findings in preclinical studies, more clinical studies following all the steps of translational medicine are needed to establish their efficacy, safety, and clinical applicability. The complexity of these technologies must be considered carefully, and every country must follow a single regulatory pathway.
Parametric Architecture beyond Form—Klein and Price: Pioneers in Computing the Quality of Life in Housing : This article proposes the investigation of two case studies of 20th century residential architecture that can be considered paradigmatic due to the pioneering use of parametric thinking in architecture. It deals with Alexander Klein’s plan analysis model and Cedric Price’s research on housing through his concept of 24-hour economic living toy. Both cases are analyzed using contemporary parametric tools to digitally reproduce the results of the analog diagrams developed by both architects. The reproduction of the diagrams makes it possible to recognize and make visible the specific parameters that are used in each case, demonstrating an evolution of housing research throughout the two periods. While Klein shows an observation focused on the efficiency of form, Price pursues a recognition of the uses to facilitate the adaptability of the architecture according to optimal usability. Introduction The architecture cataloged as parametric-produced from parametric and relational design processes-has now reached a certain degree of popularity as a mechanism for investigating new constructive forms. For many people it has become an inherited strategy from form finding research carried out by personalities of 20th century architecture and structural design, such as Frei Otto or Heinz Isler [1][2][3][4][5]. The great contribution of the parametric design model is the ability to interactively modify the final result of a project thanks to the fact that it has been defined through the relationships between the parameters that configure it. In this way, the relationship system is the goal of the design, while the form is only a manifestation of the result [6]. However, this relational capacity of parametric design also has great potential as a tool to achieve simple geometric artifacts, but whose determining parameters require the adoption of optimized or customized relationships by complex qualitative or quantitative criteria. As we will see below, this article explores how this model of thinking [7]-which we can consider as parametric thinking or algorithmic thinking-was already applied by researchers in residential architecture at different times in the 20th century. We will analyze two case studies: Alexander Klein and Cedric Price. Alexander Klein was a key figure in the search for new housing standards in the first half of the twentieth century. The grave economic crisis that engulfed all European nations following World War I instilled in architects a strong sense of social and political responsibility: design became a tool to build as much as possible with less cost. Rationalism was deemed to be an essential part of housing regulation, and Klein was a pioneer in this field. He investigated the topic of habitation in all of its complexities, including the psychological impacts of living situations. Klein's mathematical methodology in the design process started with comparing various dwellings to determine some critical parameters for evaluating the lodgings. The process of comparison consisted of many factors. The minimum requirements of the family and the person who lived in the lodgings were his focal points. In this method, any part of the area is designed for people's most basic needs. The space that is considered a "free zone area" of the dwelling is discarded. 
His scoring method of the successive increments was one of the most innovative parametric systems that became a manual for the design of the dwellings. A few decades later, Cedric Price's concerns started to take shape in criticism of rationalism. His research was always aimed at pinpointing the importance of users' behavior in the living environment. Throughout his career, he approached cybernetics as a science capable of modeling behavioral data to program the modification of spaces. It can be considered the first approach to architecture from a user-centered design perspective. This aspect is heavily emphasized in one of his research projects called "towards a 24-hour economic living toy." His diagrammatic comparison resembles Klein's drawings, with the significant difference of the introduction of the parameter of time. He begins to imagine how different people will use the same space in different time zones and how the spaces will be occupied throughout the dwelling's lifespan. Cedric Price's methodology and vision vindicates the parametric value of Klein's work in search of the responsive and transformative capacity of architecture. Parametric thinking has always been used to answer necessities in housing since the 20th century and should be considered among the innovative and experimental techniques displayed since the very beginning of Modernity. However, despite the fact that we have achieved many prospects in housing with today's advances in computational design, the technology has not made a design regulation to reach and open its full potential. With the help of current parametric tools, this essay investigates the works of these two renowned architects of the 20th century where the roots of parametric thinking flourished. This investigation aims to analyze their work by using the current parametric software. Thus, the two architects' concepts and parameters will be observed with today's digital tools, and we will be able to assess their impact and whether we are closer to user-centric design or rationalist principles. Hypothesis and Research Objectives The research hypothesis of this essay states that both the work by Alexander Klein and the work by Cedric Price are pioneers in parametric thinking. In both cases, investigations are being carried out on the types of housing that seek the qualitative optimization of results, and that is shown through generative tables of possibilities. Both propose in an intuitive and analogical way a parametric algorithm for the definition of relationships between the factors to be considered at every moment. In both cases it is possible to reproduce the logical process carried out and transcribe it using contemporary digital computing tools. To carry out this transcription, it will be necessary to identify both the parameters and determining factors of each investigation, as well as the operators and relationships that are proposed for optimization. Ultimately, to test the hypothesis, the most significant generative schemes of Klein and Price will be reproduced, with the aim of showing the computational algorithm used in each case. This way, it will be possible to make visible the factors that each author considers significant at every moment to address the housing problem. 
For all the above, we can identify three fundamental objectives of this research: • In the first place, the verification of the existence of a genealogy of parametric thought present in the architectural research of the 20th century and visible through key figures that we can consider especially influential. • Second, the comparison between the scientific approach to the housing problem in the interwar period and first modernity in the Western context with the approach to the same problem in the period after World War II. • Third, the use of contemporary parametric design tools to demonstrate their ability to compute problems of a conceptual nature-in this case, the quality of life provided by residential typologies-and not exclusively formal. Literature Review For the construction of this research, the state of the art of the three protagonists of the story has been observed: Alexander Klein, Cedric Price, and the concept of design and parametric thinking. In addition to the references explicitly indicated in the argumentation of the process, we want to make a brief mention of the generic bibliography that has been consulted for the most holistic approach to each of the protagonists. In the case of Alexander Klein, the author's own writings have been observed first, highlighting those works published in the 1930s that compile his various methods of plant analysis [8,9]. Direct consultation of his first essay on the graphic method for the valuation of plants, published in Berlin in 1927 [10], has also been important. Other of his articles of the time explore in a panoramic way the conditions of minimal housing [11,12], or his theoretical and methodological positions when facing the collective housing project [13]. Apart from his own writings, a particularly interesting text to get to know the figure of Klein is the compilation carried out by Matilde Baffa Rivolta and Augusto Rossari, which documents and analyzes the methods and experiences carried out by Klein as a researcher and designer [14], and that has been consulted through the Spanish translation [15]. Finally, academic articles produced in recent years have been observed. They fundamentally review the scientific nature of his methodological approach and his contributions to the history of residential architecture [16][17][18][19]. In relation to Cedric Price's work, a bibliographic source that has been fundamental to analyze his research in housing has been the collection of projects, articles, and conferences carried out by architect Samantha Hardingham and published in two volumes by the Architectural Association of London and the Canadian Center for Architecture in Montreal [20,21]. The two volumes include both the articles produced by Price between 1970 and 1972 for the Architectural Design magazine-in which details of his Housing Research are displayed-as well as projects and essays that accompanied this research in a propositional way-such as the project for the Steel House Competition (1965)(1966) or the housing projects for the Potteries Thinkbelt complex (1966)(1967). On the other hand, the attention paid to Price by contemporary academic literature is enormous, including books that critically review his entire work [22][23][24], and a great set of articles that focus on some of his most significant contributions-technological conception of projects, their temporal logic., social character of their approaches, etc. [25][26][27][28][29]. 
Finally, the research has contemplated a specific observation of literature related to the concept of design and parametric thinking. Its origin has been located in the work on Patterns by Christopher Alexander, a pioneer in the approach to generative design models [30,31]. Thanks to his conceptual approach, later works focused on the algorithmic conception of the registration of patterns were possible [32][33][34]. Understanding the scope of algorithmic thinking in the field of architecture has been a subject extensively studied by Professor Mario Carpo, an observer of the notions of repetition, copying, and variation, implicit in the design of systems and typical of this paradigm of thought [35][36][37]. The concept of parametricism has been revised and incorporated into contemporary debate by authors such as Patrick Schumacher, although in relation to the definition of a possible new style, the successor of modernism [38]. In this sense, there are abundant references to the form production capacity of parametric design [39], although its ideological and political implications are also questioned [40,41]. Finally, from the approach of this article, special attention has been paid to the relationships already established between computational design and the authors considered as case studies, such as the relationship observed between Cedric Price, Christopher Alexander, and Nicholas Negroponte [42], or the relevance of cybernetics in the particular case of Cedric Price [43]. Materials and Methods In this section, the reasons why Alexander Klein and Cedric Price's works deserve a contemporary review in relation to their parametric character will be presented. In their contexts, both architects based architectural research and knowledge production Architecture 2022, 2 4 on methods of purely scientific nature. Both tried to objectify decision-making related to contemporary living by considering functional and environmental factors, qualitative and quantitative, with the aim of optimizing the use of the available surface, reaching the benefits and requirements of residential needs. The following describes the work context of each of the authors, as well as the method and digital tools used for the translation into visual algorithms of two of their most representative synthetic schemes. Alexander Klein Although born in Russia, Alexander Klein (Odessa, 1879-New York, 1961) settled in Germany in 1920 in the full effervescence of the so-called "new objectivity" (Neue Sachlichkeit) [44,45]. After the disasters caused by World War I, the German architectural environment abandoned expressionism and the will for a new social and political commitment emerged that permeated all areas of society. By the mid-1920s, efforts to build affordable housing intensified, an area in which Klein was beginning to establish himself as an expert. In 1927 he assumed a position of responsibility in public administration of the city of Berlin, accepting the position of Baurat (responsible for building and public works) [46]. From this position, he developed management tasks related to economic issues of building, but also addressed research activities, trying to develop economic typologies of social housing, in institutions such as the RFG (Reichsforschungsgesellschaft für Wirtschaftlichkeit im Bau) [47]-organization for economic efficiency in construction. 
It was a period of exploration of new standards for housing typologies and their grouping models, of new conditions for rationalizing the quality of space (attention to factors such as ventilation, sunlight, or orientation), and of innovation in construction (observing the possibilities of prefabrication and modular coordination) [48]. In this context, Klein's ideas always worked as a scientific approach to the problem, capable of putting the subjectivity of the Modern Movement in the background in order to incorporate a quantifiable method for the valuation of homes. His great contribution, and the focus of analysis in this paper, is the method of valuation of housing plans that he developed in 1928 [49]. The method proposes a sequence of operations through which certain qualitative values of homes could be verified while quantifying some comparable indicators. The final objective is the selection of the minimum dwelling (Existenzminimum) [50] capable of integrating the necessary benefits. To achieve this objective, three phases of work are proposed:
• First, a questionnaire is proposed that addresses two types of questions: dimensional and functional. Dimensional questions can be answered numerically, while functional questions, directed at aspects related to hygiene, habitability, and comfort, are answered in a binary way (yes or no). From the application of the questionnaire, three evaluation coefficients are obtained [51]: Betteffekt (relation between built area and number of beds), Nutzeffekt (relation between useful area and built area), and Wohneffekt (relation between the areas of living spaces and bedrooms and the built area), together with a cumulative score of positive responses to the qualitative questions (a minimal code sketch of these indicators is given after this list).
• Second, the reduction of all projects to a single scale is proposed, taking into account the parameters of depth of the building and width of the façade. The different alternatives are represented in diagrams that show a complete picture of possibilities, adapting the houses to the determined dimensions of depth and width. In particular, this comparative view makes it possible to identify the most favorable values of the Betteffekt coefficient and, therefore, to assign the most efficient dimensions to homes that require a certain number of beds.
• Third, a graphical analysis method is developed that validates the results obtained in the previous stages by graphically checking the achievement of objective qualities [52]. These are: ordering of the zones for corridors and the route of circulations; concentration of free surfaces; relationships between the elements of the plan; fractionation of surfaces; etc.
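To make the arithmetic of the first phase concrete, the short Python sketch below computes the three coefficients as they are verbally defined above, together with the cumulative score of positive answers. The function names, the direction of each ratio, and the sample figures are illustrative assumptions rather than Klein's original tabulation.

```python
# Minimal sketch of Klein's first-phase indicators, following the verbal
# definitions given above. Ratio directions, names, and sample values are
# assumptions for illustration, not Klein's original tables.

def betteffekt(built_area_m2: float, n_beds: int) -> float:
    """Relation between built area and number of beds (m2 of built area per bed)."""
    return built_area_m2 / n_beds

def nutzeffekt(useful_area_m2: float, built_area_m2: float) -> float:
    """Relation between useful area and built area."""
    return useful_area_m2 / built_area_m2

def wohneffekt(living_and_bedroom_area_m2: float, built_area_m2: float) -> float:
    """Relation between the living-space and bedroom area and the built area."""
    return living_and_bedroom_area_m2 / built_area_m2

def qualitative_score(answers: list) -> int:
    """Cumulative score of positive (yes) answers to the functional questions."""
    return sum(bool(a) for a in answers)

if __name__ == "__main__":
    # Hypothetical dwelling: 68 m2 built, 54 m2 useful, 38 m2 living/bedrooms, 4 beds.
    print(betteffekt(68.0, 4))        # 17.0 m2 of built area per bed
    print(nutzeffekt(54.0, 68.0))     # ~0.79
    print(wohneffekt(38.0, 68.0))     # ~0.56
    print(qualitative_score([True, True, False, True]))  # 3 positive answers
```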
Figure 1 shows an example of the type of results provided by this comparative graphical analysis method. In it, each house floor is reproduced in a simplified way (interior bays and staircase). From left to right, the variations are arranged depending on the depth of the building; from top to bottom, depending on the width of the façade. The plans are evaluated according to the criteria set out in Table 1, with the result that the variations with the highest qualification occupy the diagonal of the diagram, where depth and width keep an adequate balance with respect to hygienic, comfort, and economic conditions. The result of the study shows the most suitable plans, indicating their surface area and showing the location of the beds.
The final objective of this analysis process is the rationalistic objectification of the conditions under which affordable housing (necessary to meet social demand) must be produced. The construction of the method aims to ensure that the institutions responsible for these processes share criteria for the observation of the variables [53]. It is therefore a process based on the consideration of a series of parameters that are adjustable according to established coordinates in order to obtain comparable results; in other words, an approach to complexity that today we can recognize as a parametric design process. With regard to this research, the production of a diagram of dwelling variations and its adaptation to variable parameters of width and depth (to achieve optimal habitability conditions according to the determined coefficients, especially the Betteffekt value) is of particular interest. This approach is clearly based on the construction of a system for the detection of housing variants that optimize their efficiency of use on the basis of dimensional parameters. This is the scheme that the research reproduces in order to make visible the parameters and operations implicit in the system.
Cedric Price
In relation to a later historical moment, Price's case is also especially significant. Cedric Price (Stone, 1934-London, 2003) has been recognized on multiple occasions as the most influential British architect of the 20th century who built the least architectural work. It has even been said that his project with the greatest impact was the construction of his own character [54]. He was an architect who was deeply critical of the role played by the architecture profession in relation to the social context of the second half of the 20th century. According to his own reflections, the profession had been institutionalized as a tool located at the end of political, territorial, or urban decision-making processes, and therefore subordinate to them [55]. However, Price's professional interest was not oriented towards the productive function of architecture but focused on processes [56]. His approach to architectural projects always began by questioning the objective for which architecture should be developed, and whether the final solution really should be a building [57]. He did so throughout his career and passed this attitude on to the students who passed through his classes at the Architectural Association in London. His perception of architecture was clearly closer to a matter of social function than to a discipline of formal production [58]. Hence, he questioned the very identification of architecture with construction and raised the time factor as an inescapable variable in processes of intervention in the physical environment. Price's proposals were always projects based on the adaptability and temporality of spaces and programs.
Projects where mobility and flexibility acted as key factors [59]. Two of his most significant built works were London Zoo Aviary [60] (1961) and Interaction Center [61] (1974)-this latter demolished in 1999-while his two best-known projects, although not built, were Fun Palace [62] (1960-1961) and Potteries Thinkbelt [63] (1964). However, a topic to which he dedicated a good part of his work as a researcher was the field of housing, an aspect that focuses our attention here. Cedric Price's approach to the housing problem must be framed in the context of British society in the 1960s, the stage immediately after postwar housing policies. The effort to eradicate slums and increase the affordable housing stock through massive blocks or new towns had not met the existing demand, nor the expectations for quality architecture [64]. This was a time when the architecture department of Greater London Council (GLC) was striving to incorporate industrialized production methods and new parameters of flexibility in housing [65], while the National Building Agency (NBA) tried to standardize plans to streamline construction [66]. On the contrary, Price's work sought to provide home users with the ability to choose, to make the most of the possibilities of the available living space. His research about housing enjoyed great popularity at the beginning of the 1970s thanks to its publication in the magazine Architectural Design between 1970 and 1972 [67][68][69][70][71] (for this, he had been invited by editor Peter Murray). In the research, he developed the conceptual and speculative project of a housing model called Short-Life House, based on the indeterminacy of uses those future occupants could make of the different spaces. It approximated a housing system model based on the diversity of choice possibilities, rather than on a definitive product. In this way, both the decisions of the inhabitants in search of optimizing their comfort, as well as future changes in the composition of the living unit could be taken care of by the system. In his own words, "the house is no longer acceptable as a pre-set ordering mechanism for family life". In relation to his research on housing, three previous works should be mentioned in which Price addresses the fundamental concepts that will lay the foundations of his proposal. First, the residential project included in his proposal for Potteries Thinkbelt. Second, the Steel House project. Third, his essay on housing as a 24-hour economic living toy [72], which will be used in this analysis to understand the determinants factors of the architect's concerns. Apart from previous work for the Potteries Thinkbelt project, in which Price developed housing typological and constructive variables for different situations, the Steel House project is important for understanding his ideological approach to housing. The Steel House was carried out as a proposal for the contest sponsored by the ECSC (European Coal and Steel Community), an entity that sought to collect ideas for a pre-industrialized steel house model, in a standardized way, and assembled as demountable modules. In collaboration with Milles Park, Douglas Smith and Frank Newby, Price developed the proposal for a structural skin as a continuous metallic envelope capable of integrating interior cells that could vary over time. In this way he responded to the approaches of his essay Towards a 24-hour economic living toy, on which he worked simultaneously while developing the Steel House project. 
The fundamental message of the essay was the realization that the house can no longer be considered a predefined mechanism for family life. Price puts in crisis the very existence of a single predetermined family model and raises the need for the typological plan to be modified over time to adapt to the changing needs of the group of people who occupy it. While the Steel House schematics show a changing pattern of house occupancy, capable of technically adapting to these changes, the essay approaches the problem critically, developing the virtual occupancy of different apartments to recognize the patterns of use that the home may take up for its modification. It is therefore again a parametric approach to the conception of the design, in which the determining factors of the configuration of the house are related to the occupation habits of its inhabitants. Consequently, the occupation scheme applied to one of the typologies tested in the Towards a 24-hour economic living toy essay is the diagram that will be reproduced with computational means in this case, to make visible the parametric nature of Cedric Price's system of analysis of the home.
Figure 2 shows the occupancy diagram of different types of dwelling based on their use by the occupants throughout the day. The time bands are shown in columns. The occupied areas are shown by different hatched patterns applied to the spaces in use by each type of occupant, defined on the basis of the responses indicated in accordance with the criteria in Table 2. The result of the diagram shows the areas with a high density of occupation and the underused spaces, providing information to the designer for the adaptation and optimization of spaces.
Parametric Translation and Visual Algorithm
Considering the concept of "parameter" as a catalyst element in the design process implies the identification of a series of variables whose absolute values determine a specific result. A parameter is an element of the system, one of the factors that determines the result, whose indicator allows the design to be quantitatively evaluated. For this reason, its modification by means of alternative values allows us to obtain variations of the design depending on this parameter. A geometric modeling process is determined by means of equations. We use parameters as unknowns of these equations, whose values allow us to obtain variables of the result of the system, and therefore of the modeling.
In short, parametric modeling-and therefore parametric design-is the mathematical system by which we can automatically generate variations in order to optimize their suitability to a defined context condition [73]. In this way, parametric design allows defining the shape of an object or structure from the relationships defined between the variables [74]. To achieve this form, a defined and finite sequence of operations must be followed as a computational method-what we know as an algorithm-in which the variables will consider the corresponding parameters [75]. The research poses a double task of algorithm construction: • First, to verify the parametric nature of Alexander Klein's methodology, this approach to his work proposes the reproduction of his comparative diagram of project variations and the evaluation of the Betteffekt coefficient using parametric design tools. • Second, to verify the parametric nature of Cedric Price's methodology, the partial reproduction of his scheme of the 24-hour economic living toy test is proposed as a result of a parametric algorithm. Thus, we can identify the character of the parameters used by Price for his housing proposal. For both cases, Grasshopper digital application will be used. It is a graphical algorithm editor built into Rhinoceros 3D modeling software. As an algorithmic modeling tool, Grasshopper allows the creation of generative shape algorithms using visual parametric nodes. In this way we can make a direct translation of Alexander Klein's generative diagram to his graphical algorithm, obtaining a visual scheme in which we can recognize the determining parameters of the form. And in the same way, we can display the factors that determine the occupation of the home in the 24-hour cycle by building an algorithm that recognizes the activity patterns of the inhabitants in the example by Cedric Price. The objective of both algorithmic translations will be to compare the nature of the parameters and variables involved in each of the cases. The contrast between these variables will demonstrate the evolution of housing concerns throughout the 20th century from the perspective of two of the architectural figures with a more scientific perspective of the design process. Klein's Method The plans were grouped on the basis of some dimensional variables and the distributive scheme, in order to be "reduced to the same size," that is, to be comparable on the basis of the number of beds, according to Alexander Klein's "Method of the Successive Increments." The planimetric diagrams were modified by increasing the length and the width of the building by constant amounts; as shown in Figure 3, they were disposed in a grid, where the rows represented the increase in depth, the columns the increase of the width. In order to realize Alexander Klein's parametric thinking approach, the drawing of the diagram is constructed by dividing the script in many chapters using Grasshopper. The first chapter is to construct a set of plans with the increment of depth and width. Based on the original table by Klein, the table of 10 × 10 has been created. The rectangle of 8.8 m by 7.7 has been presented as perimeter of the plan. The steps of 0.5 m have been added to each row and column in order to create 100 individual plans in the table. Based on this module, each function from further on will be applied to all of these plans individually, as depicted in Figure 4. 
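The Grasshopper definition itself is a visual graph, but its first chapter can be paraphrased in a few lines of Python. The sketch below generates the same 10 × 10 family of perimeter rectangles, starting from the 8.8 m × 7.7 m base plan and adding 0.5 m per row and column; the variable names and data layout are our own assumptions, and only the dimensional logic is taken from the description above.

```python
# Sketch of the first chapter of the script: a 10 x 10 table of plan
# perimeters obtained by incrementing depth and width in 0.5 m steps.
# Base dimensions follow the description in the text; names and layout
# are illustrative assumptions.

BASE_WIDTH = 8.8    # m, width of the base plan (assumed to be the facade)
BASE_DEPTH = 7.7    # m, depth of the base plan
STEP = 0.5          # m, increment per row/column
ROWS = COLS = 10    # 10 x 10 grid -> 100 plan variants

def plan_grid():
    """Return a list of (row, col, width, depth, area) tuples, one per variant."""
    variants = []
    for row in range(ROWS):          # rows: increase in depth
        for col in range(COLS):      # columns: increase in width
            width = BASE_WIDTH + col * STEP
            depth = BASE_DEPTH + row * STEP
            variants.append((row, col, width, depth, round(width * depth, 2)))
    return variants

if __name__ == "__main__":
    grid = plan_grid()
    print(len(grid))    # 100 individual plans
    print(grid[0])      # (0, 0, 8.8, 7.7, 67.76) -> the base plan
    print(grid[-1])     # (9, 9, 13.3, 12.2, 162.26) -> the largest variant
```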
By dividing the width of each plan in two, the living room is created: each plan is split into two smaller plans, one half dedicated to the living room and the other half to the two rooms and the corridor between them. To make the corridor, each length of the rectangle is divided in two, and the corridor is added by offsetting the division line between the rooms. The result is a plan with one living room and two bedrooms with a corridor in between. This function, depicted in Figure 5, is generated by the parametric software in seconds, whereas Klein spent weeks creating this grid by hand.

Figure 5. Creation of rooms, corridor, and living room in the parametric software Grasshopper. The orange color represents the living room, while the magenta color is the fixed area of the corridor between the two rooms. Blue and green represent the rooms. All the colors correspond to the drawing shown in Figure 3.
The way the script is designed (Figure 6) is based on the areas of the rooms. If an area falls within certain domains, single beds (rectangles of 1.8 × 0.9 m) or double beds (rectangles of 1.8 × 2 m) are added to the plans: the script detects the increase in area and determines how many beds should be added to each room. The function is designed to categorize the rooms into four types based on four area domains. The first typology applies when the room's area is less than 15 m²; the code adds the smallest possible beds inside the room, which are two single beds. When the area is between 15 and 20 m², the second type adds one king-size bed to the parents' room and two single beds to the children's room. When the area is between 20 and 25 m², the script adds two beds to each of the two rooms, and when the area is greater than 25 m², the script adds the maximum number of beds feasible. In Figure 6, the bottom cluster is the first division of the script, where two single beds are inserted when the area is less than 15 m²; the red switch is the visual representation of the beds shown in Figure 3.
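These area domains amount to a simple rule table. The sketch below is a hypothetical restatement of that logic in Python: the thresholds (15, 20 and 25 m²) and the bed footprints (1.8 × 0.9 m single, 1.8 × 2 m double) follow the text, while the function name, the per-room interpretation of the smallest case, and the cap used for the largest rooms are our own assumptions rather than the Grasshopper implementation.

```python
# Illustrative restatement of the bed-placement rules by room area.
# Thresholds (15, 20, 25 m²) and bed footprints (single 1.8 x 0.9 m,
# double/king 1.8 x 2.0 m) follow the text; everything else is assumed.

SINGLE = ("single", 1.8, 0.9)   # (label, length m, width m)
DOUBLE = ("double", 1.8, 2.0)

def beds_for_rooms(area_m2, max_beds=4):
    """Return an assumed bed configuration for the two bedrooms of a plan,
    given the area (in m²) available per room."""
    if area_m2 < 15:
        # smallest configuration: two single beds (interpreted here as per room)
        return {"parents": [SINGLE, SINGLE], "children": [SINGLE, SINGLE]}
    elif area_m2 < 20:
        # one king-size bed for the parents, two singles for the children
        return {"parents": [DOUBLE], "children": [SINGLE, SINGLE]}
    elif area_m2 < 25:
        # two beds in each of the two rooms
        return {"parents": [DOUBLE, SINGLE], "children": [SINGLE, SINGLE]}
    else:
        # largest rooms: as many beds as feasible (capped here for the sketch)
        return {"parents": [DOUBLE] + [SINGLE] * (max_beds - 1),
                "children": [SINGLE] * max_beds}

if __name__ == "__main__":
    for a in (12, 17, 22, 30):
        cfg = beds_for_rooms(a)
        print(a, "m² ->", {k: [b[0] for b in v] for k, v in cfg.items()})
```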
The script will work with any input plan and automatically generate the drawing for further comparison, as Klein wanted to achieve. Klein produced all the drawings one by one in order to understand their differences and efficiency, whereas with the script we can reach the same conclusion automatically. As a result, the diagram displayed in Figure 3 is obtained, which formally reproduces Klein's original drawing in Figure 1. The list of parameters used in the script would be as follows:

• Dimensions of the house
• Built area
• Number of rooms
• Dimensions of the rooms and corridor
• Number of beds

Price's Method

Cedric Price's approach to parametric thinking, on the other hand, revolved around the parameter of time inside the space. Price determined how each person would use the dwelling during the day. As shown in Figure 7, this plan was reconstructed in Grasshopper in order to reproduce Price's method of thinking. Following Price's "Towards a 24-hour economic toy", the plans are constructed using a square grid. People are represented as points inside this grid with a radius of 250 cm. By detecting each person's movement inside the space, the script automatically merges their movements and determines the unused space inside the dwelling. The algorithm depicted in Figure 8 is based on breaking the plan into many small squares: the width is divided into 150 parts and the length into 900, so that each plan consists of 135,000 squares. This subdivision supports a Boolean parameter that acts as an on/off function. By adding a radius of 250 cm around each point, the boundary of each person's movement is created. Using a function called "point in curve," the squares that fall inside this boundary curve are removed, and the unused space is determined. As a result, the diagram displayed in Figure 6 is obtained, which formally reproduces Price's original approach in Figure 2.
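The grid-occupancy test described above (subdividing the plan into cells and switching each cell on or off depending on whether it falls within an occupant's radius of movement) can also be sketched independently of Grasshopper. The Python example below is illustrative only: the grid resolution and the 250 cm radius follow the text, while the function names, the occupant data structure, and the use of a plain point-in-circle test in place of the Grasshopper "point in curve" component are assumptions.

```python
# Illustrative sketch of Price's occupancy test: divide the plan into a
# fine grid of cells and mark a cell "used" if it lies within 250 cm of
# any occupant's position at a given time band. Euclidean point-in-circle
# stands in for Grasshopper's "point in curve" test.

import math

RADIUS = 2.5  # m, radius of movement around each occupant (250 cm)

def occupancy_mask(plan_width, plan_length, occupants, nx=150, ny=900):
    """Return a dict mapping (i, j) cell indices to True (used) / False (unused).

    plan_width, plan_length : plan dimensions in metres
    occupants               : list of (x, y) positions in metres for one time band
    nx, ny                  : grid subdivision (150 x 900 = 135,000 cells)
    """
    dx, dy = plan_width / nx, plan_length / ny
    mask = {}
    for i in range(nx):
        for j in range(ny):
            cx, cy = (i + 0.5) * dx, (j + 0.5) * dy   # cell centre
            used = any(math.hypot(cx - ox, cy - oy) <= RADIUS
                       for ox, oy in occupants)
            mask[(i, j)] = used
    return mask

if __name__ == "__main__":
    # hypothetical 6 m x 12 m plan with two occupants at one time band
    mask = occupancy_mask(6.0, 12.0, occupants=[(1.5, 2.0), (4.0, 9.0)])
    used = sum(mask.values())
    print(f"used cells: {used} / {len(mask)} "
          f"({100 * used / len(mask):.1f}% of the plan)")
```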
Discussion

The comparative observation of the results allows us, in the first place, to identify the character of the parameters that are used as variables in both cases. Table 3 summarizes the main factors used in the parametric calculation in relation to their use by each of the authors; affirmative use is identified by the symbol [•], while negative use is identified by the symbol [-]. As can be seen, Alexander Klein makes use of dimensional and purely quantitative parameters. It should be clarified, however, that this observation refers exclusively to phase 2 of his work (the reduction of all projects to a single scale), in which Klein identifies the variables that best fit the Betteffekt parameter. Phase 3 of his work would complete this purely quantitative stage by qualitatively validating the graphic selection. In the case of Cedric Price, the parameters necessary to reproduce his thought mechanism require both quantitative and qualitative factors. The need emerges to incorporate a classification of the type of occupants, their age, and their activity patterns within the home. In this definition, it is necessary to construct a matrix determined by the time variable, in which the uses of the home by each occupant are converted into joint densities of use. It should be mentioned that Price's work has occasionally been criticized for the predetermination of the parameters of change, which leads to predictable architectural solutions and limits the concept of flexibility or freedom for the user that it promotes [76]. In that sense, his proposal has come to be considered, from a contemporary perspective, as close to neoliberal paradigms [77]. However, as we can see, his work in this respect is oriented above all towards making the need for change visible, giving the inhabitants the ability to obtain information on use, and therefore facilitating decision-making in the face of future changes. While Klein's work offers as a result a base of dimensions and proportions, which is the premise on which to check qualities of usability, Price's work does the reverse: it recognizes the patterns and needs of use of each user from the activities in the home themselves, in order to identify the architectural spaces that require certain qualities because of their density of use, and the architectural spaces with other potentials because of their current underuse. Klein's exercise therefore subordinates the usability of architecture to the efficiency of its form, while Price's exercise subordinates the form to the use that the inhabitants make of it. The geographies of occupation that result from Price's work are the premise for future modifications of the architecture itself and, therefore, presuppose the capacity of the domestic space to configure itself based on use. In other words, in the domestic context Price is applying the same thinking parameters as in a project as celebrated as the Fun Palace, in which cybernetics had to provide architecture with the capacity for data collection, machine learning, and technical mobility necessary to facilitate its own formal evolution. Finally, the observation of all these aspects must be completed with a critical consideration of the methodology used.
The proposal of an algorithmic translation of the works of Klein and Price could be carried out rigorously because the documentation and bibliography available for both authors include information and graphic diagrams in which it is possible to identify the parameters that were later used in the Grasshopper scripts. However, during the re-engineering of the design methods, it was observed that visually similar results could be achieved without the full participation of the same parameters (by altering the dimensional conditioning factors in the case of Klein, or by simulating different activity patterns in the case of Price). This demonstrates, on the one hand, the potential of using contemporary tools for retrospective analysis and, on the other, the risk that the manipulation of historical readings could entail. While re-engineering as an algorithmic translation of documented parameters involves the heuristic use of primary documentary sources and their computational analytical testing, a possible use of parametric tools as a reverse-engineering method lacking primary documentary sources could lead to biased or distorted historical readings.

Conclusions

In line with the starting hypothesis, it is shown that the work of both architects, Alexander Klein and Cedric Price, was carried out under a model of scientific thinking that today we can recognize as parametric thinking. In both cases, the work searches for optimal housing proposals: Klein optimizes the form, Price optimizes the use. Both make use of recognizable parameters, and their experiences are therefore reproducible and replicable. This replicability allows different context conditions to be incorporated in both cases (dimensional premises in the case of Klein, conditions of the family occupation model in the case of Price), facilitating application in different situations. For all these reasons, the research demonstrates the replicability of parametric design methodologies as support for creative decisions that do not have to be exclusively formal (there is a parametric architecture beyond form). Although Price himself developed this type of parametric thinking so that it could be used by advanced technologies to achieve adaptive architectures (as cybernetics was at the time, and as artificial intelligence could be today), its use in the field of architecture is yet to be fully developed. Even so, the landscape of contemporary architecture already has adaptive and collaborative design tools that can be considered heirs to the pioneering parametric thinking of authors such as Klein and Price. Experimental generative housing projects developed by Jeroen Van Ameijde, Sidewalk Labs, or Van Wijnen Groep; companies that offer customized parameterized housing such as Cover or Daiwa House Industry; and adaptive design platforms and applications such as Finch (https://finch3d.com, accessed on 21 December 2021) or Wikihouse (https://www.wikihouse.cc, accessed on 21 December 2021) show the potential of parametric architecture for contemporary housing production, designed in a customizable and adaptable way. We see in these new paradigms of housing design and production the same parametric thinking as in Klein and Price. The research also shows that current computational tools allow the complex thinking of two personalities ahead of their respective times to be reproduced in a simple way.
This also makes it possible to highlight the intuitive capacity of both to foresee the future computing capacity for solving architectural problems. On the other hand, it has been shown how this genealogy of thought evolves eloquently through the two case studies. We have detected this triple condition:

• Design model focused on form versus design model focused on the use that people make of architecture. While form constitutes the origin of residential design in Klein's proposal, this center shifts to the use of architecture by individuals in Price's case. There is therefore a shift towards a user-centered design model (UCD), a label that would become popular in the 1970s with the emergence of the concept of usability in information technologies [78,79].
• Static architecture versus dynamic architecture. While the concern for the housing problem in the interwar period was situated in quantitative production, and therefore in the maximum efficiency of reproducible architectures with static and standardized typological models, the concern from the second half of the century begins to incorporate the need for a responsive architecture in relation to changing needs of use.
• Standard family model versus diversity of coexistence groups. In a very concrete way, we can observe how this evolution shows a particularly eloquent parameter in relation to family models. While Klein works with the hypothesis of a nuclear family as a demographic standard, Price converts the users into variables of occupation and, therefore, of architecture. The architectural variables end up responding to the diversity of uses made by various coexistence groups.

We can therefore conclude with one last reflection: exploring the house through parametric systematization allows the conditions of uncertainty and contingency to be incorporated as factors that intervene in the design processes, demonstrating their diffuse nature [80].
Diffusion of gold nanoclusters on graphite

We present a detailed molecular-dynamics study of the diffusion and coalescence of large (249-atom) gold clusters on graphite surfaces. The diffusivity of monoclusters is found to be comparable to that for single adatoms. Likewise, and even more important, cluster dimers are also found to diffuse at a rate which is comparable to that for adatoms and monoclusters. As a consequence, large islands formed by cluster aggregation are also expected to be mobile. Using kinetic Monte Carlo simulations, and assuming a proper scaling law for the dependence on size of the diffusivity of large clusters, we find that islands consisting of as many as 100 monoclusters should exhibit significant mobility. This result has profound implications for the morphology of cluster-assembled materials.

I. INTRODUCTION

Nanometer-size clusters, or simply nanoclusters, are intrinsically different from bulk materials. 1,2 Yet, understanding of several of their most fundamental physical properties is just beginning to emerge (see for instance Refs. 3-9), thanks largely to rapid progress in the technology of fabrication and analysis, but also to considerable advances in computational tools and methodology. It has recently been demonstrated 10-12 that depositing clusters (rather than single atoms) on surfaces allows the fabrication of interesting nanostructured materials whose properties can be tailored to specific technological applications, e.g., micro-electronic, optoelectronic, and magnetic devices. 13 If single-atom deposition is used, the nanostructures have to be grown directly on the substrate through diffusion and aggregation, which depends in a detailed (and in general very complicated) manner on the interactions between surface atoms and adatoms. By contrast, for cluster deposition, the clusters are prepared before they hit the surface, giving considerably more flexibility 14 in assembling or organizing clusters for particular applications. It has been shown, for instance, that by changing the mean size of the incident carbon clusters, it was possible to modify the structure of the resulting carbon film from graphitic to diamondlike. 10 This however requires that sufficient control over the cluster deposition and subsequent growth process be achieved. 11,12

Diffusion evidently plays a central role in the fabrication of thin films and self-organized structures by cluster deposition. It has been demonstrated experimentally that gold or antimony clusters diffuse on graphite surfaces at a surprisingly high rate of about 10⁻⁸ cm²/s at room temperature, 15 quite comparable to the rates that can be achieved by single atoms in similar conditions. This was confirmed theoretically by Deltour et al. using molecular-dynamics simulations: 7 clusters consisting of particles which are incommensurate with the substrate exhibit very rapid diffusion. The cluster diffuses "as a whole", and its path is akin to a Brownian motion induced by the internal vibrations of the clusters and/or the vibrations of the substrate. This is in striking contrast with other cluster diffusion mechanisms, whereby the motion results from a combination of single-atom processes, such as evaporation-condensation, edge diffusion, etc. The latter mechanisms are more appropriate to clusters which are in epitaxy with the surface, and are likely not significant in cases where the mismatch is large and/or the substrate-cluster interactions are weak, such as in Refs. 15.
In the present paper, we re-examine the problem of cluster diffusion in the cluster-substrate-mismatched case, now using a much more accurate model: indeed, in the work of Deltour et al., 7 cluster-cluster, cluster-substrate and substrate-substrate interactions were all assumed to be of the Lennard-Jones form, which cannot be expected to correctly describe "real materials". Here, we consider a simple, but realistic, model for the diffusion of gold clusters on a graphite surface (HOPG). We are concerned with gold because it has been the object of several experimental studies, 12,17-19 but also because realistic semi-empirical, many-body potentials are available for this material. The energetics of gold atoms is described in terms of the embedded-atom method (EAM), 20 while carbon atoms are assumed to interact via Tersoff potentials; 21 the (weak) interactions between gold and carbon atoms are modeled with a simple Lennard-Jones potential. A comparable model was used recently by Luedtke and Landman to study the anomalous diffusion of a gold nanocluster on graphite; 9 diffusion was found to proceed via a stick-slip mechanism, resulting in an apparent Lévy-flight type of motion. In the present work, we examine closely the variations with temperature of the rate of diffusion, as well as the microscopics of cluster dimers (diclusters). We find the diffusivity of monoclusters to be entirely comparable to that for single adatoms. Likewise, and most important, diclusters are also found to diffuse at a rate which is comparable to that for adatoms and monoclusters. It is therefore expected that large islands, formed by the aggregation of many clusters, should also be mobile. Based on this observation, we carried out kinetic Monte Carlo simulations of island diffusion and coalescence assuming a proper scaling law for the dependence on size of the diffusivity of large clusters. We find that islands consisting of as many as 100 monoclusters exhibit significant mobility; this is consistent with the observation on graphite of large (200-monocluster) gold islands. The morphology of cluster-assembled materials is profoundly affected by the mobility of multi-cluster islands.

II. COMPUTATIONAL DETAILS

Diffusion coefficients for clusters can only be obtained at the expense of very long MD runs: there exist numerous possible diffusion paths, and there is therefore not a single energy barrier (and prefactor) characterizing the dynamics. These systems, further, do not lend themselves readily to accelerated MD algorithms. 22,23 Brute-force simulations, long enough for statistically significant data to be collected, therefore appear to be the only avenue. This rules out ab initio methods, which can only deal with very small systems (a few tens of atoms) over limited timescales (tens of picoseconds at best): empirical or semi-empirical potentials must be employed. As mentioned above, we describe here the interactions between Au particles using the embedded-atom method (EAM), 20 an n-body potential with proven ability to describe reliably various static and dynamic properties of transition and noble metals in either bulk or surface configurations. 24 The model is "semi-empirical" in the sense that it approaches the total-energy problem from a local electron-density viewpoint, but using a functional form with parameters fitted to experiment (equilibrium lattice constant, sublimation energy, bulk modulus, elastic constants, etc.).
The interactions between C atoms are modelled using the Tersoff potential, 21 an empirical n-body potential which accounts well for various conformations of carbon. The Tersoff potential for carbon is truncated at 2.10 Å, which turns out to be smaller than the inter-plane distance in graphite, 3.35 Å. Thus, within this model, there are no interactions between neighbouring graphite planes. This is of course an approximation, but not a bad one, since basal planes in graphite are known to interact weakly. (This is why graphite is a good lubricant!) A pleasant consequence of this is that the substrate can be assumed to consist of one single layer, thus greatly reducing the (nevertheless very heavy) computational load of the calculations. Last, and most problematic, is the Au-C interaction, for which no simple (empirical or semi-empirical) model is to our knowledge available. One way of determining this would be to fit an ab initio database to a proper, manageable functional potential. However, since Au-C pairs conform in so many different ways in the present problem, this appears to be a hopeless task, not worth the effort in view of the other approximations we have to live with. We therefore improvised this interaction a little and took it to be of the Lennard-Jones form, with σ = 2.74 Å and ε = 0.022 eV, truncated at 4.50 Å. The parameters were determined rather loosely from various two-body models for Ag-C and Pt-C interactions. 25 Overall, we expect our model to provide a qualitatively correct description of the system, realistic in that the most important physical characteristics are well taken into account. It is however not expected to provide a quantitatively precise account of the particular system under consideration, but should be relevant to several types of metallic clusters which bind weakly to graphite. We consider here gold nanoclusters comprising 249 atoms, a size which is close to that of clusters deposited in the experiments. 12,19 The graphite layer has dimensions 66.15 × 63.65 Å² and contains 1500 atoms. Calculations were carried out for several temperatures in the range 400-900 K. It should be noted that a free-standing 249-atom Au cluster melts at about 650 K in this model. 6 This temperature is not affected in a significant manner by the graphite substrate, as the interactions between Au and graphite-C atoms are weak. However, the dynamics of the cluster is expected to be different in the high-temperature molten state and the low-temperature solid state. All simulations were microcanonical, except for the initial thermalisation period at each temperature; no drift in the temperature was observed. Simulations were carried out in most cases using a fully dynamical substrate. In two cases, one for the single cluster and the other for the dicluster, extremely long runs using a static (frozen) substrate were performed: it has been found by Deltour et al. 7 that diffusion is quantitatively similar on both substrates (however, see Section III B below). The equations of motion were integrated using the velocity form of the Verlet algorithm with timesteps of 1.0 and 2.5 fs for dynamic and static substrates, respectively. 26 (Carbon being a light atom, a smaller timestep is needed in order to properly describe the motion.) The dynamic-substrate simulations ran between 10 and 14 million timesteps (depending on temperature), i.e., 10-14 ns.
The static-substrate simulation for the monocluster, in comparison, ran for a total of 50 million timesteps, i.e., a very respectable 125 ns = 0.125 µs; the corresponding dicluster simulation ran for 75 ns. All calculations were performed using the program groF, a general-purpose MD code for bulk and surfaces developed by one of the authors (LJL).

A. Dynamic-substrate simulations

We first discuss diffusion on a dynamic substrate, i.e., with all parts of the system explicitly dealt with in the MD simulations. Fig. 1 gives the (time-averaged) mean-square displacements (MSDs) of the cluster's center-of-mass at the various temperatures investigated, which will be used to calculate the diffusion constant, D = lim_{t→∞} ⟨r²(t)⟩/4t. As indicated above, the simulations extend over 10-14 ns, but the MSDs are only shown for a maximum correlation time of one ns in order to "ensure" statistical reliability. It is evident (e.g., upon comparing the results at 700 and 800 K) that the diffusion coefficients that can be extracted from these plots will carry a sizeable error bar. Nevertheless, it is certainly the case that (i) diffusion is very significant and (ii) it increases rapidly with temperature. There is no evidence from these plots that the MSDs obey a non-linear power-law behaviour (i.e., that the cluster undergoes superdiffusion) which could be associated with "Lévy flights": the statistical accuracy of the data is simply not sufficient to draw any conclusions. The cluster does however undergo long jumps during the course of its motion. We will return to this point below when we discuss diffusion on a frozen substrate. In the absence of a better description of the long-time behaviour of the diffusion process, we simply assume that ⟨r²(t)⟩ → 4Dt as t gets large. The resulting diffusion coefficients are plotted in the manner of Arrhenius, i.e., log D vs 1/k_BT, in the inset of Fig. 1. If the process were truly Arrhenius, all points would fall on a single straight line. This is evidently not the case here. Though we could probably go ahead and fit the data to a straight line, attributing the discrepancies to statistical error, there is probably a natural explanation for the "break" that a sharp eye can observe between 600 and 700 K: as noted above, the free 249-atom Au cluster melts at about 650 K in the EAM model. 6 The presence of the substrate raises the melting point, but very little, since the interactions between the cluster and the graphite surface are small. Thus, the cluster is solid at the lowest temperatures (400, 500 and 600 K), but liquid above (700, 800 and 900 K). The statistics are evidently insufficient to allow firm conclusions to be drawn; there nevertheless appears to be a discontinuity near the cluster melting-point temperature, with activation energies on either side of about 0.05 eV. We discuss in Section III D the implications of these findings on the kinetics of growth.

B. Static-substrate simulations

The static-substrate simulations, carried out at a single temperature (for the cluster), viz. 500 K, serve many purposes: (i) re-assess the equivalence with dynamic-substrate MD runs reported by Deltour et al.; 7 (ii) provide accurate statistics for a proper comparison of the diffusive behaviour of mono- and diclusters; (iii) examine the possible superdiffusive character of the trajectories. We focus, first, on a comparison between static- and dynamic-substrate simulations.
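For readers who wish to reproduce this kind of analysis, extracting a diffusion coefficient from a center-of-mass trajectory can be sketched as follows. This is a generic Python/NumPy illustration, not the groF analysis code: the trajectory array, sampling interval, and fitting window are placeholders.

```python
# Illustrative estimate of D from a 2D center-of-mass trajectory:
# compute the time-averaged MSD over lag times and fit <r^2(t)> ~ 4 D t.
# This is a generic sketch, not the analysis code used in the paper.

import numpy as np

def time_averaged_msd(positions, max_lag):
    """positions: (n_steps, 2) array of x, y center-of-mass coordinates.
    Returns an array msd[lag] averaged over all time origins."""
    msd = np.zeros(max_lag)
    for lag in range(1, max_lag):
        disp = positions[lag:] - positions[:-lag]
        msd[lag] = np.mean(np.sum(disp**2, axis=1))
    return msd

def diffusion_coefficient(msd, dt, fit_start, fit_stop):
    """Linear fit of MSD(t) = 4 D t over the window [fit_start, fit_stop)."""
    lags = np.arange(len(msd))
    t = lags[fit_start:fit_stop] * dt
    slope, _ = np.polyfit(t, msd[fit_start:fit_stop], 1)
    return slope / 4.0   # D in (length^2 / time) units of the input

if __name__ == "__main__":
    # synthetic random-walk trajectory as a stand-in for MD output
    rng = np.random.default_rng(0)
    steps = rng.normal(scale=0.05, size=(50_000, 2))   # Å per 1 ps frame
    traj = np.cumsum(steps, axis=0)
    msd = time_averaged_msd(traj, max_lag=500)
    D = diffusion_coefficient(msd, dt=1.0, fit_start=50, fit_stop=500)
    print(f"D ≈ {D:.4f} Å²/ps")
```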
As can be appreciated from the MSDs given in the inset of Fig. 2, there is a rather substantial difference between the two calculations: for the dynamic substrate at 500 K, the diffusion constant is 3.71 × 10⁻⁵ cm²/s, while for the frozen substrate we have 1.09 × 10⁻⁵ cm²/s. (This value is actually significantly smaller than that for the dynamic substrate at 400 K, i.e., at a temperature 100 K lower, viz. 1.70 × 10⁻⁵ cm²/s.) Again, statistical uncertainties cannot be totally excluded to account for this discrepancy, but it is difficult to imagine that they could explain all of the observed difference (cf. the inset of Fig. 1 for a better appreciation of this difference). The explanation might however be quite simple. As noted above, the cluster-substrate interactions are weak, and this likely plays an important role in determining the characteristics of the motion. Visual inspection of the x-y paths in the two situations makes it apparent that the motion has a much stronger "stick-and-jump" character on the frozen substrate than on the dynamic one. On the frozen substrate, further, the trajectory is more compact on a given timescale. This can in fact be characterized in a quantitative manner by considering, following Luedtke and Landman, 9 the function P_τ(d), which gives the distribution of displacements of length d over a timescale τ. The motion is best characterized using a value of τ corresponding to the period of vibration of the cluster in a sticking mode (see below). The function P_τ(d) (normalized to unity) is displayed in Fig. 3 for the dynamic substrate at three different temperatures (400, 500, and 900 K) and for the static substrate at 500 K. The value of τ was determined from the frozen-substrate simulations by simply counting the number of oscillations over a given period of time; we found τ = 20 ps to within about 10%. We note that, for the dynamic substrate, the period of oscillations at 400 K is about 38 ps, while no oscillations can be found at 500 K and higher temperatures, i.e., the sticking mode is absent above 500 K or so. The difference between static and dynamic substrates is striking: on the dynamic surface, P_τ(d) is a broad featureless distribution, which gets broader as temperature increases. The maximum of the distribution at low temperature lies at about 1.6-1.8 Å, roughly the distance between equilibrium sites on the graphite surface, clearly establishing that the motion proceeds in a quasi-continuous manner via "sliding hops" to nearest neighbours; the hops get longer as temperature increases. On the static substrate, in contrast, a "sticky" vibrational mode, of amplitude roughly 0.25 Å, is clearly visible. This is followed by a broad tail which corresponds, again, to the sliding jumps that are characteristic of the motion on the dynamic substrate. Sticking, therefore, is much more likely to take place on the static than on the dynamic substrate, thereby contributing to decrease the average distance traveled by the cluster over a given period of time. This conclusion is however not general: the system under consideration here is perhaps a bit peculiar in that the cluster-substrate interactions are especially weak. (In comparison, Luedtke and Landman's ε for the Au-C interaction is 0.01273 eV, even smaller than our own value.) One may conjecture that the vibrations of the surface are enough, in such cases, to completely overcome the barrier opposing diffusion, which might not be true of systems where the interactions are stronger (as in the case, e.g., of Deltour et al.'s simulations, Ref. 7).
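The displacement distribution P_τ(d) used above is straightforward to compute once a trajectory and a timescale τ are chosen. The sketch below is a generic illustration (again assuming a (n_steps, 2) trajectory array and a fixed frame interval), not the original analysis code.

```python
# Illustrative computation of P_tau(d): the normalized distribution of
# center-of-mass displacement lengths d over a fixed timescale tau.

import numpy as np

def displacement_distribution(positions, dt, tau, bins=100):
    """positions: (n_steps, 2) trajectory sampled every dt;
    tau: timescale in the same units as dt.
    Returns (bin_centers, probability_density)."""
    lag = max(1, int(round(tau / dt)))
    disp = positions[lag:] - positions[:-lag]
    d = np.linalg.norm(disp, axis=1)                          # displacement lengths
    hist, edges = np.histogram(d, bins=bins, density=True)    # normalized to unity
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, hist

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    traj = np.cumsum(rng.normal(scale=0.05, size=(50_000, 2)), axis=0)
    d, p = displacement_distribution(traj, dt=1.0, tau=20.0)  # tau = 20 ps
    print("most probable displacement ≈", round(d[np.argmax(p)], 3), "Å")
```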
It also appears that our diffusion data do not cover a timescale long enough to warrant firm conclusions on the possibility that superdiffusion might be taking place. This is certainly true, as we have seen above, of the dynamic-substrate simulations, which cover "only" 10-14 nanoseconds, but also of the static-substrate simulations (assuming, in view of the above discussion, that they are relevant to the problem under study), which extend to 125 ns. Certainly, the position of the cluster's center-of-mass does exhibit something of a self-similar character, as reported by Luedtke and Landman, 9 and as can be seen in Fig. 4. Evidently, one cannot trust statistics here over more than a decade or two in time. One might hope that superdiffusion would be more apparent in the long-time behaviour of the MSDs. To this effect, we plot in Fig. 2 log⟨r²(t)⟩ vs log t, for a maximum correlation time of (here) 40 ns; 27 the slope of such a plot is the diffusivity exponent γ. The statistical quality of the data decreases with correlation time, and becomes clearly insufficient above 5 ns or so; the large dip at about 15 ns can testify to this. Our best estimate of the slope γ at "large" (more than ∼1 ns) correlation times is anywhere between 0.9 and 1.2, i.e., mild underdiffusion or mild superdiffusion... or no superdiffusion at all! This is consistent with the value reported by Luedtke and Landman, who find γ = 1.1 based on an analysis of sticking and sliding times. One point worth mentioning is that the velocity-autocorrelation function for adatom diffusion in the intermediate- and high-friction regimes has been shown to follow a power-law behaviour at intermediate times; the exponential dependence resumes at very long times. 28

C. Diclusters

The morphology of films grown by cluster deposition depends critically on the coefficient of diffusion of monoclusters, as we have just seen, but also, because clusters aggregate, on the coefficient of diffusion of multiclusters. From simple geometric arguments, it might be argued that the rate of diffusion should scale as N^(-2/3), where N is the number of atoms in the cluster, as was in fact observed by Deltour et al. 7 for Lennard-Jones clusters. However, it can be expected that the morphology of the films depends, as well, on the shape of the multiclusters following the aggregation of monoclusters, i.e., on the kinetics of coalescence. In a previous publication, 6 we examined the coalescence of gold nanoclusters in vacuum and found it to be much slower than predicted by macroscopic theories. This state of affairs can be attributed to the presence of facets and edges which constitute barriers to the transport of particles required for coalescence to take place. 29 The "neck" between two particles was however found to form very rapidly. We conjectured that these conclusions would apply equally well to the particular case of gold nanoclusters on graphite, since the gold-graphite interactions are weak. We have verified this in the context of the present work: indeed, coalescence is little affected by the presence of the substrate, as demonstrated in Fig. 5. We considered both a free-standing and a supported pair of 249-atom gold clusters. Starting at very low temperature (50 K), the temperature was slowly and progressively (stepwise) raised to 600 K. (As noted above, the 249-atom gold cluster melts at about 650 K in this EAM model and we therefore did not go beyond this point.) We plot, in Fig. 5, the evolution with time and temperature of the three moments of inertia of the dicluster.
Since the cluster can rotate, the moments of inertia provide a more useful measure of the shape of the object than, e.g., the radii of gyration. 6 A side view of the dicluster at 200 K, i.e., after the neck between the two monoclusters has formed completely, is shown in Fig. 6. It is evident that the dicluster does not wet the surface, and therefore the substrate plays a relatively minor role in the coalescence process. As can be seen in Fig. 5, the behaviour of the free-standing and supported diclusters is almost identical, except for the initial phase of coalescence: the supported cluster forms a neck much more rapidly than the free-standing cluster, presumably because the substrate offers, through some thermostatic effect, an additional route via which coalescence (by plastic deformation) can be mediated; it is conceivable also that the substrate "forces" the atomic planes from the two clusters to align. We have not explored these questions further; it remains that the end points of the two coalescence runs are identical within statistical uncertainty. Thus, again, coalescence is hampered by the presence of facets and edges; the timescale for complete coalescence is much longer than predicted by continuum theories. The shape of islands on the graphite surface will be strongly affected, and it is also expected that the rate of diffusion will be affected (since it is determined by the contact area between substrate and cluster). The MSD of the dicluster (after proper equilibration at 500 K) is displayed in the inset of Fig. 2. As mentioned earlier, this was calculated from a static-substrate run covering 75 ns. The same limitations as noted above for the monocluster should therefore hold in the present case. It is a very remarkable (and perhaps even surprising) result that the rate of diffusion of the dicluster is quite comparable to that of the monocluster, insofar as the frozen-substrate simulations are concerned. (We expect the diffusion constants on the dynamic substrate to be different, and larger, but in a proportion that would be quite comparable to that found here.) The value of D = 1.38 × 10⁻⁵ cm²/s we obtain for the dicluster is in fact a bit larger than that for the monocluster (1.09 × 10⁻⁵ cm²/s). The difference is probably not meaningful; what is meaningful, however, is that the mono- and the dicluster have comparable coefficients of diffusion; this has profound implications for growth, as we discuss in Section III D below. The function P_τ(d) for the dicluster at 500 K is displayed in Fig. 3; here we estimate that τ = 40 ps (vs about 20 ps for the monocluster). The distribution is quite similar to that found for the single cluster on the frozen substrate, though broader and shifted to slightly larger displacements. This last result is likely due to the fact that, being larger, the dicluster is not as easily able to accommodate itself to the substrate as the monocluster; in this sense, it is more loosely bound to the substrate.

D. Comparison with experimental results

Experiments on the deposition of gold clusters on graphite were carried out in Lyon recently. 12,19 Several models have been proposed to extract the microscopic cluster diffusion coefficients from the measured island densities. 12 Of course, in order to provide a meaningful interpretation of the data, the models must take into account the precise conditions in which the experiments are performed.
In Lyon, for instance, the flux of clusters is chopped, rather than continuous, and this affects the kinetics of diffusion and growth considerably. 30,31 Previous estimates of the rates of diffusion of Au on graphite, which overlooked this important detail, are therefore in error. In Ref. 19, a diffusion coefficient of 10⁻³ cm²/s at 400 K is given; for a discussion, see Ref. 12. The "correct" number, including flux chopping, would be 1.0 cm²/s if monoclusters only were assumed to be mobile. However, as we have seen above, cluster dimers diffuse at a rate which is quite comparable to that for monoclusters, suggesting that larger clusters would diffuse as well. The Lennard-Jones simulations of Deltour et al. 7 indicate that the rate of diffusion of compact N-atom clusters scales roughly as the inverse of the contact area between the cluster and the substrate: D_N = D_1 N^(-2/3). (Compact clusters are expected to form through aggregation and coalescence; see Ref. 12.) Experimentally, however, it is almost impossible to determine whether or not multiclusters do diffuse, and at which rate. In view of this, and of the expected importance of multicluster mobility on growth, we have carried out a series of kinetic Monte Carlo (KMC) simulations in order to estimate the largest island which must be allowed to diffuse in order to account for the experimentally observed gold island density on graphite at 400 K, viz. 4 × 10⁸ islands/cm², or 1.1 × 10⁻⁵ per site. 12,19 To do so, we assume that the diffusion constant for monoclusters found in the present simulations is correct, and that the rate of diffusion of N-clusters scales according to the law given above. All other parameters (incident cluster flux, temperature, chopping rate, etc.) are fixed by experiment. Figure 7 shows the results of the KMC simulations: we plot here the island density that would be observed if the largest mobile island were of size N_max. The computational load increases very rapidly with N_max and we therefore only considered islands of sizes less than or equal to 35. The data points follow very closely a power-law relation and we can thus extrapolate to larger values of N_max, i.e., smaller island densities. We find in this way that islands up to a maximum size of about 100 monoclusters must be mobile in order to account for the observed island density of 1.1 × 10⁻⁵ per site. In what follows, we discuss in more detail the connection of this observation with experiment. We first note that, in the gold-on-graphite experiments, 12 large islands form which are "partially ramified", in the sense that the branch width is much larger than the size of the deposited clusters, each branch being formed by the coalescence of up to 200 monoclusters. In contrast, for antimony cluster deposition on graphite at room temperature, 12,15 the islands are fully ramified, i.e., have a branch width identical to the diameter of the monoclusters; this establishes unambiguously that cluster coalescence is not taking place in this case. It has been shown, further, that the mobility of the islands is negligible for antimony. 19 Our results suggest, therefore, when taken together with the work of Deltour et al., 7 that compact islands, which form through diffusion and coalescence, are mobile according to an N^(-2/3) law. In contrast, ramified islands, which form when coalescence does not take place, have much reduced mobility, certainly much less than would be expected from an N^(-2/3) law.
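A minimal sketch of the size-dependent mobility assumed in these KMC simulations is given below. It is purely illustrative: the monocluster diffusion constant, the hop length, and the way a hop event is selected are placeholder assumptions, and the real simulations also include deposition with a chopped flux, aggregation, and coalescence, which are omitted here.

```python
# Illustrative core of a rejection-free KMC step with size-dependent island
# mobility, D_N = D_1 * N**(-2/3) for islands up to N_max monoclusters
# (larger islands treated as immobile). Deposition, aggregation and
# coalescence, present in the real simulations, are omitted.

import math
import random

D1 = 1.0e-5          # cm^2/s, assumed monocluster diffusion constant
A = 2.0e-7           # cm, assumed hop length (lattice parameter of the KMC grid)
N_MAX = 100          # largest mobile island, in monocluster units

def hop_rate(n_monoclusters):
    """Hop rate of an island of n monoclusters (4D/a^2 on a square lattice)."""
    if n_monoclusters > N_MAX:
        return 0.0
    d_n = D1 * n_monoclusters ** (-2.0 / 3.0)
    return 4.0 * d_n / A**2

def kmc_step(island_sizes, rng=random):
    """Pick one island to hop, with probability proportional to its rate.
    Returns (island_index, time_increment)."""
    rates = [hop_rate(n) for n in island_sizes]
    total = sum(rates)
    if total == 0.0:
        return None, float("inf")          # nothing can move
    x = rng.random() * total
    acc = 0.0
    for i, r in enumerate(rates):
        acc += r
        if x < acc:
            break
    dt = -math.log(rng.random()) / total   # exponential waiting time
    return i, dt

if __name__ == "__main__":
    islands = [1, 1, 2, 5, 40, 250]        # sizes in monoclusters
    idx, dt = kmc_step(islands)
    print(f"island {idx} (size {islands[idx]}) hops after {dt:.3e} s")
```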
N_max, therefore, signals the crossover point between the two mobility regimes or, equivalently, the multicluster size at which the morphology of the islands crosses over from compact to ramified (or vice versa). The physical reasons underlying the relation between mobility and morphology are not clear, but there appears to be no other way to interpret the experimental results. This problem clearly deserves further study. To summarize this section, the mobility of large islands is evidently a necessary ingredient to account for the experimentally observed island density. Our simulations suggest that these islands can be as large as 100 monoclusters; while this is consistent with experiment, the exact value, as well as the precise dependence of the diffusion rate on size, cannot at present be estimated.

IV. CONCLUDING REMARKS

Cluster-deposition techniques are of great potential interest for assembling materials with specific, tailor-made applications. Yet, the fabrication process depends critically on the possibility for the clusters to diffuse on the surface in order to settle in appropriate positions, thus forming self-organized structures, or to aggregate/coalesce with other clusters in order to form larger-scale structures and eventually continuous layers. In this article, we have demonstrated, using molecular-dynamics simulations with realistic interatomic potentials, that the diffusion of large metallic clusters on graphite can take place at a pace which is quite comparable to that for single adatoms. We have also established that the rate of diffusion of cluster dimers can be very sizeable, comparable in fact to that for monoclusters. An extremely important consequence of this is that islands formed by the aggregation of clusters are also expected to be mobile. Using kinetic Monte Carlo simulations and assuming a proper scaling law for the dependence on size of the diffusivity of large clusters, we estimate that islands containing as many as 25,000 atoms (100 monoclusters) are expected to undergo diffusion at a significant rate on graphite surfaces. These findings have profound consequences for the morphology of cluster-assembled thin films.

ACKNOWLEDGMENTS

We are grateful to Laurent Bardotti, Art Voter and Tapio Ala-Nissila for useful discussions. This work was supported by the Natural Sciences and Engineering Research Council of Canada and the "Fonds pour la formation de chercheurs et l'aide à la recherche" of the Province of Québec. LJL is grateful to the Département de physique des matériaux de l'Université Claude-Bernard-Lyon-I, where part of this work was carried out, for hospitality, support, and pleasant weather.

Figure caption. Log-log plot of the time-averaged mean-square displacements for the cluster's center-of-mass on the static substrate at 500 K. The three curves correspond to different estimates: using the full extent of the run (full curve); only the first half (dashes); only the second half (dots). The difference between these curves gives a measure of the error on the estimated diffusion coefficient. Inset: time-averaged mean-square displacements for the monocluster on a static substrate (full line), the monocluster on a dynamic substrate (dashes) and the dicluster on a static substrate (dots).
Recent and rapid anthropogenic habitat fragmentation increases extinction risk for freshwater biodiversity

Abstract

Anthropogenic habitat fragmentation is often implicated as driving the current global extinction crisis, particularly in freshwater ecosystems. The genetic signal of recent population isolation can be confounded by the complex spatial arrangement of dendritic river systems. Consequently, many populations may presently be managed separately based on an incorrect assumption that they have evolved in isolation. Integrating landscape genomics data with models of connectivity that account for landscape structure, we show that the cumulative effects of multiple in-stream barriers have contributed to the recent decline of a freshwater fish from the Murray-Darling Basin, Australia. In addition, individual-based eco-evolutionary simulations further demonstrate that contemporary inferences about population isolation are consistent with the 160-year time frame since construction of in-stream barriers began in the region. Our findings suggest that the impact of very recent fragmentation may often be underestimated for freshwater biodiversity. We argue that proactive conservation measures to reconnect many riverine populations are urgently needed.

| INTRODUCTION

We are now confronted by the sixth global mass extinction, with the current rate of species losses far exceeding pre-anthropogenic background estimates (Barnosky et al., 2011). This crisis is particularly severe in freshwater ecosystems, which have shown declines of biodiversity greater than for either terrestrial or marine ecosystems (Darwall et al., 2018). Habitat loss and fragmentation are key factors leading to the genetic and demographic decline of populations that together threaten species persistence (Fischer & Lindenmayer, 2007). Over the last century, close to one million large dams and many millions of smaller in-stream barriers have been constructed globally (Jackson et al., 2001; Liermann et al., 2012). These barriers have had devastating ecological consequences by preventing or restricting connectivity among populations, leading to higher rates of genetic drift and inbreeding. This, in turn, can lead to lower fitness due to inbreeding depression and reduced evolutionary potential due to loss of genetic diversity (Frankham, 2005; Keyghobadi, 2007). Additionally, small populations become more vulnerable to extirpation due to stochastic demographic events (Lande, 1993) and, when this occurs on a regional scale, species extinctions are the inevitable result (Hanski, 1998). Landscape genetics provides a way to identify how human activities threaten the persistence of wild populations (Manel & Holderegger, 2013). The time lag between environmental change and any detectable genetic signal resulting from this change can, however, make it very difficult to disentangle the effects of historical from contemporary processes (Landguth et al., 2010). This is particularly the case for naturally structured populations such as those found in dendritic river networks (Coleman et al., 2018). The progression from landscape genetics to landscape genomics has increased both the spatial and temporal resolutions at which evolutionary processes can be examined, offering a more powerful framework with which to quantify the effects of very recent disturbance on populations (Allendorf et al., 2010; Grummer et al., 2019).
Previous landscape genetics studies investigating the impact of in-stream barriers have often focused on larger, migratory species or assessed only one, or a few, large barriers (Faulks et al., 2011; Gouskov et al., 2016; Meeuwig et al., 2010; Mims et al., 2019; Torterotot et al., 2014). For example, Muhlfeld et al. (2012) used microsatellite loci and simulations to understand the impact of placement of a single barrier on introgressive hybridization between native westslope cutthroat trout (Oncorhynchus clarkii lewisi) and non-native rainbow trout in Glacier National Park, USA. On the other hand, small-bodied but ecologically important species often receive relatively little attention from conservation managers (Olden et al., 2007; Saddlier et al., 2013). Regional-scale efforts to improve fish passage in Australia have been successful in restoring passage along the main river channel for large-bodied species (Barrett & Mallen-Cooper, 2006; Baumgartner et al., 2014); however, these measures have proved ineffective for most small fishes (Harris et al., 2017). The cumulative impact of numerous smaller in-stream barriers (e.g., weirs, farm dams and road crossings) is likely to greatly impact small-bodied and nonmigratory fishes; however, this has been the subject of much less research at a regional scale (Coleman et al., 2018; Diebel et al., 2015; but see Nathan et al., 2019). In this landscape genomics study, we examine the effects of recent habitat fragmentation on the southern pygmy perch (Nannoperca australis), a threatened small-bodied fish (<80 mm) that recently experienced major demographic declines and local extinctions across the Murray-Darling Basin (MDB), Australia (Cole et al., 2016; Hammer et al., 2013). This ecological specialist is restricted to small streams and wetlands, is typical of many native small-bodied fishes in the region, and offers a conservative model for guiding broader conservation strategies, as the impacts of fragmentation are likely to be more pronounced for larger, migratory species. Since European colonization, freshwater habitat in the MDB has rapidly deteriorated due to severe water overharvesting, land clearing, habitat loss and fragmentation (Davies et al., 2010; Kingsford, 2000), and the MDB is now considered one of Australia's most vulnerable and threatened ecosystems (Laurance et al., 2011). The MDB has very few natural in-stream barriers, but it has been heavily modified, with more than 10,000 dams, weirs, road crossings, levees and barrages constructed since the late 1850s (Baumgartner et al., 2014). As such, the MDB provides a unique opportunity to examine the consequences of recent habitat fragmentation without the confounding influence of prolonged human disturbance over hundreds of years, as is common to many northern hemisphere river basins (e.g., Hansen et al., 2014). Environmental factors, including human disturbance, are known to influence genetic diversity for N. australis (Cole et al., 2016); however, little is known about the specific role that widespread habitat fragmentation has played in the species' recent and rapid decline. We hypothesize that, after accounting for historical patterns of genetic structure, genetic differentiation among demes should increase with the number of in-stream barriers separating them. We also predict that populations most isolated by fragmentation would exhibit reduced effective population size (N_e) and lower levels of genetic diversity.
Additionally, we used forward genetic simulations to investigate whether high contemporary levels of genetic differentiation could have arisen in the relatively short time since the construction of in-stream barriers began in the MDB. Our results demonstrate that recent anthropogenic habitat fragmentation has contributed to the loss of genetic diversity and population isolation observed. They also suggest that proactive conservation measures to restore connectivity (e.g., environmental flows, habitat restoration) and increase evolutionary potential (e.g., genetic rescue) are urgently required for this, and potentially many other, poorly dispersing aquatic species. | Sampling, ddRAD genotyping and SNP filtering To minimize the number of cohorts sampled per population, we targeted adult fish of similar size from each sampling site. To avoid the inclusion of highly related individuals in the data, we estimated pairwise relatedness among individuals from each site using the dyadic likelihood relatedness estimator described in Milligan (2003) and implemented in the R package related (Pew et al., 2015). The data were then filtered to retain only variants present in at least 70% of individuals and in 70% of populations, retaining only one biallelic SNP per locus with a minimum minor allele frequency of 0.05 (see the illustrative sketch below). Population structure and other demographic parameters such as effective population size should be assessed using neutral loci (Allendorf et al., 2010; Luikart et al., 2003). To define a putatively neutral data set, F ST outlier loci were detected using a Bayesian approach with BayeScan v.2.1 (Foll & Gaggiotti, 2008) and the coalescent-based FDIST method (Beaumont & Nichols, 1996) in Arlequin v.3.5 (Excoffier & Lischer, 2010). BayeScan was run for 100,000 iterations using prior odds of 10,000. Loci different from zero with a q-value < 0.1 were considered outliers. Arlequin was run specifying the hierarchical island model with 50,000 simulations of 100 demes for each of 13 populations (based on the 13 separate catchments sampled). Loci outside the neutral distribution at a false discovery rate (FDR) of 10% were considered outliers. Loci detected as outliers by either BayeScan or Arlequin were filtered. The remaining SNPs were examined for departure from expectations of Hardy-Weinberg equilibrium (HWE) using GenoDive 2.0b27 (Meirmans & Van Tienderen, 2004). Finally, loci out of HWE at an FDR of 10% in more than 50% of populations were removed. Detailed information concerning library preparation and bioinformatics is provided in Appendix S1. | Population structure Pairwise F ST (Weir & Cockerham, 1984) was estimated among sampling sites using GenoDive (Meirmans & Van Tienderen, 2004), with significance assessed using 10,000 permutations. Bayesian clustering analysis of individual genotypes was then performed using fastStructure (Raj et al., 2014). Ten independent runs for each value of K (1-25) were completed to ensure consistency, and the most likely K was assessed by comparing the model complexity that maximized marginal likelihood across replicate runs. FIGURE 1 Nannoperca australis sampling locations in the Murray-Darling Basin (MDB). Stream sections are colour coded according to F ST estimated using the StreamTree model (Kalinowski et al., 2008). Cross markers represent the location of artificial in-stream barriers. The admixture plot is based on 3,443 SNPs depicting K = 12 clusters determined by maximum marginal likelihood using fastStructure (Raj et al., 2014).
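As a concrete illustration of the SNP filtering thresholds described in the sampling and filtering subsection above (call rate of at least 70% across individuals and across populations, minor allele frequency of at least 0.05, and a single biallelic SNP retained per locus), the minimal Python sketch below applies comparable filters to a genotype matrix. It is not the authors' pipeline: the array layout, variable names and the rule for choosing which SNP to keep per locus are illustrative assumptions, and in practice these steps are run in the dedicated bioinformatics tools referenced in Appendix S1.

```python
import numpy as np

def filter_snps(geno, pops, locus_ids,
                min_ind_call=0.70, min_pop_call=0.70, min_maf=0.05):
    """Illustrative SNP filters on a genotype matrix.

    geno      : (n_individuals, n_snps) float array, 0/1/2 minor-allele copies,
                np.nan for missing calls (hypothetical layout).
    pops      : 1-D numpy array of population labels, one per individual.
    locus_ids : 1-D array mapping each SNP column to its RAD locus.
    Returns a boolean mask over SNP columns.
    """
    n_ind, n_snp = geno.shape
    called = ~np.isnan(geno)
    keep = np.ones(n_snp, dtype=bool)

    # 1. Genotyped in at least 70% of individuals.
    keep &= called.mean(axis=0) >= min_ind_call

    # 2. Present (genotyped in at least one individual) in >= 70% of populations.
    pop_labels = np.unique(pops)
    present_in = np.zeros(n_snp)
    for p in pop_labels:
        present_in += called[pops == p].any(axis=0)
    keep &= (present_in / len(pop_labels)) >= min_pop_call

    # 3. Minor allele frequency >= 0.05, computed on non-missing genotypes only.
    counts = called.sum(axis=0)
    alt = np.nansum(geno, axis=0)
    p_alt = np.divide(alt, 2.0 * counts, out=np.zeros(n_snp), where=counts > 0)
    keep &= np.minimum(p_alt, 1.0 - p_alt) >= min_maf

    # 4. One SNP per RAD locus (here simply the first column that passed;
    #    the published pipeline may use a different rule).
    seen = set()
    for j in np.where(keep)[0]:
        if locus_ids[j] in seen:
            keep[j] = False
        else:
            seen.add(locus_ids[j])
    return keep
```

Counts produced by a script like this would only approximate the published data set; the 3,443 SNPs cited in the Figure 1 caption come from the authors' own filtering and outlier-removal steps.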
| Anthropogenic isolation of populations If anthropogenic habitat fragmentation has affected population connectivity and dispersal, we should expect genetic differentiation to increase in response to the number of in-stream barriers separating populations. To determine whether local characteristics of the stream network (i.e., in-stream barriers and other local-scale landscape heterogeneity) better explain population differentiation than isolation by distance (IBD), we used the StreamTree model of Kalinowski et al. (2008). Pairwise genetic distances among populations were modelled as the sum of the genetic distances assigned to each section of the stream network along the path connecting them. This provides a distance measure that is independent of the length of each stream section and identifies the reaches that contribute most to restricting gene flow (e.g., due to dendritic structure, in-stream barriers or other local landscape effects). Model fit was assessed by plotting the StreamTree fitted distance against observed F ST and calculating the regression coefficient of determination (R 2). This model was then compared with a model of IBD calculated using multiple matrix regression with randomization (MMRR) following the method of Wang (2013). Pairwise population distances along the river network were calculated with ArcMap v.10.2 (ESRI, 2012). Model significance for the MMRR was assessed using 10,000 random permutations. In dendritic river systems, hierarchical network structure and spatial hydroclimatic variation can also drive patterns of genetic diversity of stream-dwelling organisms (Fourcade et al., 2013; Hughes et al., 2009; Morrissey & de Kerckhove, 2009; Thomaz et al., 2016). To evaluate the relative contributions of anthropogenic habitat fragmentation and natural landscape and hydroclimatic variation, environmental variables were obtained from existing spatial data sets (Australia, 2011; Stein et al., 2014). These were assigned to one of five categories describing variation in temperature, precipitation, flow regime, human disturbance and topography. Variance inflation factor (VIF) analysis was then used to exclude highly correlated variables using a VIF threshold of 10 (Dyer et al., 2010). The remaining variables were reduced to principal components (PCs) using the dudi.pca function in the ADE4 R package (Dray et al., 2016), and Euclidean distance matrices were constructed based on the PCs with eigenvalues > 1 (Yeomans & Golder, 1982) retained for each category. All distance matrices were z-transformed to facilitate direct comparison of partial regression coefficients (Schielzeth, 2010). Each variable was initially tested in an independent univariate MMRR before significant factors were combined in a multivariate MMRR model, with 10,000 random permutations used to assess significance. | Habitat fragmentation, genetic diversity and population size To test the hypothesis that the most isolated populations exhibit reduced genetic diversity, we examined the relationship between population-specific F ST and expected heterozygosity (H E). Population-specific F ST was estimated for each sampling site using the method of Weir and Hill (2002), and H E was calculated using GenoDive. Effective population size was estimated using the linkage disequilibrium (LD) estimator implemented in NeEstimator 2.01 (Do et al., 2014).
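For readers less familiar with LD-based estimation of N e, the core relationship exploited by estimators of this kind is that, for unlinked loci in a randomly mating population, the expected squared allele-frequency correlation is approximately E[r²] ≈ 1/(3 N e) + 1/S, where S is the sample size, so a rough estimate can be obtained by inverting the sample-size-corrected mean r² (cf. Waples & Do, 2010). The sketch below shows only that back-of-the-envelope inversion; NeEstimator additionally applies bias corrections, screens out rare alleles below the chosen P crit (0.075 here) and reports confidence intervals, so this is an illustration of the logic rather than a reimplementation of Do et al. (2014), and the example numbers are invented.

```python
def ld_ne_estimate(mean_r2, sample_size):
    """Back-of-the-envelope LD estimate of effective population size.

    Assumes the simplified relationship E[r^2] ~ 1/(3*Ne) + 1/S for unlinked
    loci under random mating; real estimators use bias-corrected refinements.
    """
    r2_drift = mean_r2 - 1.0 / sample_size   # remove the sampling contribution
    if r2_drift <= 0:
        return float("inf")                   # no detectable drift signal
    return 1.0 / (3.0 * r2_drift)

# Purely illustrative numbers (not from the study):
# ld_ne_estimate(mean_r2=0.04, sample_size=30) -> about 50
```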
The LD method rests on the assumption that LD at independently segregating loci in a finite population is a function of genetic drift, and it performs particularly well with a large number of loci and where population sizes are expected to be small (Waples & Do, 2010). In the absence of significant F ST, Lower Murray sites MID and MUN were considered one population and these samples were combined for the N e estimates. NeEstimator was run assuming random mating and using a P crit value of 0.075, following guidelines for small sample sizes suggested by Waples and Do (2010). | Eco-evolutionary simulations Simulation studies are becoming an increasingly important part of landscape genomics, as a wide range of parameters can be explored for key evolutionary processes such as gene flow, genetic drift, mutation and selection (Hoban et al., 2012). In this case, we used individual-based eco-evolutionary simulations (Table S1). | Population structure High levels of genetic differentiation were evident among most sampling sites, consistent with the K = 12 clusters recovered by fastStructure (Figure 1). | Anthropogenic isolation of populations The StreamTree model was used to identify parts of the stream network that contribute more to F ST (e.g., restricted dispersal due to barriers or other local environmental conditions). Results indicated that local characteristics of the stream network better explain F ST than the null hypothesis of IBD (i.e., that the resistance to dispersal for any given stream section is determined by its length) (Figure 1). Although there was significant IBD within catchment groups (i.e., the first cluster in Figure 2b; R 2 = 0.730, β = 0.0016 [0.001-0.002 95% CI], p = 6.54 × 10−8), IBD was not significant in models across the whole basin, in contrast to models of stream hierarchy and barriers (see below). In addition, even when comparisons were limited to sites within catchments, the number of barriers still provided a better model than IBD (R 2 = 0.81 versus 0.73, respectively; Figure S4). The full MMRR results are reported in Figure 3 and Table 2. The most isolated populations were also those harbouring the least genetic variation (Figure S5). | Eco-evolutionary simulations The simulations demonstrated that contemporary levels of population differentiation are consistent with the time frame since construction of in-stream barriers began, with the smallest simulated populations reaching this level of differentiation within comparatively few generations even with just one barrier (Figure 4; Table S4). | DISCUSSION Habitat fragmentation is a key process implicated in the current and unprecedented worldwide loss of freshwater biodiversity (Fischer & Lindenmayer, 2007). Determining the contribution of recent human activities to the decline of riverine species is, however, challenging, as the genetic signal of recent disturbance can be confounded by historical patterns of dispersal shaped by hydrological network structure (Brauer et al., 2018; Coleman et al., 2018; Landguth et al., 2010). Integrating landscape genomics data with models of connectivity that account for landscape structure, we show that the cumulative effects of multiple in-stream barriers have contributed to the recent decline of a freshwater fish from the Murray-Darling Basin, Australia. Populations most isolated by recent habitat fragmentation exhibited reduced genetic diversity and increased population differentiation, and this signal remained strong after accounting for the historical effects of dendritic stream hierarchy. Interestingly, we found no evidence for isolation by environment (IBE), despite a previous genotype-environment association (GEA) study for the same species finding that several hydroclimatic variables influenced putatively adaptive genetic variation at both regional and local scales. This is likely due to the ability of GEA methods to identify signal from relatively few regions of the genome responding to selection (Forester et al., 2018).
In contrast, the approach we used is known to perform well for small populations (Do et al., 2014; Waples & Do, 2010), and our results are consistent with expectations based on remnant habitat patch sizes and with estimates obtained in an earlier microsatellite study (Cole et al., 2016). Other previous work based on coalescent analyses of microsatellite DNA data sets has demonstrated that historical population sizes of N. australis were much larger before European colonization (Attard et al., 2016), and that populations across the MDB were also more connected until that time (Cole et al., 2016). Together, our findings support these studies and the hypothesis that the low genetic diversity, small N e and high F ST observed for contemporary populations likely reflect the combined impact of both historical and recent processes, rather than being due solely to natural demographic variability over longer evolutionary time scales. In addition, several populations sampled for this study have subsequently suffered local extirpation during prolonged drought, and the small size of most remnant populations indicates they are at high risk of extinction. Since the 1800s, land use and hydrology in the MDB have been increasingly modified due to urbanization and irrigation (Leblanc et al., 2012). These changes have included the construction of thousands of barriers to fish passage across the basin (Baumgartner et al., 2014), and the MDB is now considered one of Australia's most fragmented and degraded ecosystems (Davies et al., 2010; Kingsford, 2000). The focus of most barrier mitigation actions in the MDB to date has been on restoring passage across larger dams along the main river channel (Barrett & Mallen-Cooper, 2006). Although some fishways have been designed to facilitate movement of smaller fish, these have not resolved passage problems for most small-bodied species (Baumgartner et al., 2014). [TABLE 2 caption: Results of multiple matrix regression with randomization (MMRR) tests for the relationship between pairwise genetic distance (F ST) and geographic distance, catchment membership, number of in-stream barriers and environmental distances.] Furthermore, the spatial scale of dispersal for many small-bodied MDB fishes often restricts their movements to headwater streams and wetlands away from the main channel (Harris et al., 2017). Habitat loss and fragmentation associated with the thousands of smaller barriers in headwater streams have therefore likely contributed to the widespread decline of many smaller and more sedentary MDB fishes, including N. australis (Brauer et al., 2018; Cole et al., 2016; Hammer et al., 2013; Huey et al., 2017). It is perhaps surprising, then, that there have been relatively few studies explicitly testing the genetic effects of anthropogenic fragmentation on small-bodied fishes in the MDB. One recent example in the neighbouring Yarra River catchment, however, combined a large empirical data set with spatially explicit simulations to examine the role of artificial barriers in driving local-scale patterns of genetic variation for river blackfish (Gadopsis marmoratus), a small and sedentary species also found in the MDB (Coleman et al., 2018). Based on eight microsatellite loci, genetic diversity was found to be lower for populations above barriers in small streams, with several isolated populations also exhibiting signs of inbreeding. In addition, their simulations demonstrated that the power to detect recent impacts of barriers could be improved by increasing the number of loci used, highlighting the benefit of modern genomic data for conservation genetics.
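The MMRR tests summarized in the Table 2 caption above regress one pairwise distance matrix on others and assess significance by permuting the rows and columns of the response matrix together (Wang, 2013). The sketch below shows that core idea for a single predictor matrix; it is a simplified illustration rather than the published implementation, which handles several predictor matrices at once and reports partial regression coefficients, and the matrix names are placeholders.

```python
import numpy as np

def _vectorize(mat):
    # Flatten the lower triangle of a symmetric pairwise-distance matrix.
    i, j = np.tril_indices_from(mat, k=-1)
    return mat[i, j]

def mmrr_single(y_mat, x_mat, n_perm=10_000, seed=1):
    """Matrix regression with row/column permutations of the response matrix.

    y_mat : pairwise genetic distances (e.g., linearized F_ST)
    x_mat : pairwise predictor distances (e.g., river distance or barrier counts)
    Returns the observed slope and a permutation p-value.
    """
    rng = np.random.default_rng(seed)
    y, x = _vectorize(y_mat), _vectorize(x_mat)
    X = np.column_stack([np.ones_like(x), x])
    slope_obs = np.linalg.lstsq(X, y, rcond=None)[0][1]

    n = y_mat.shape[0]
    exceed = 0
    for _ in range(n_perm):
        order = rng.permutation(n)
        # Permute rows and columns of the response matrix together, then refit.
        y_perm = _vectorize(y_mat[np.ix_(order, order)])
        slope_perm = np.linalg.lstsq(X, y_perm, rcond=None)[0][1]
        exceed += abs(slope_perm) >= abs(slope_obs)
    return slope_obs, (exceed + 1) / (n_perm + 1)
```

As in the study, predictor matrices would be z-transformed beforehand so that regression coefficients are directly comparable across variables.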
An unprecedentedly severe and prolonged drought between 1997 and 2010 caused catastrophic loss of habitat and local extirpation for some N. australis populations, particularly in the lower Murray (Wedderburn et al., 2012). In response, an emergency conservation breeding and restoration programme was implemented in the lower MDB (Attard et al., 2016; Hammer et al., 2013; Davis et al., 2015). Additionally, many species may be already depleted to the point where improved environmental conditions alone will not be sufficient to facilitate recovery. In this case, genetic rescue offers a potential solution for a broad range of threatened taxa (Ralls et al., 2018; Whiteley et al., 2015). However, despite strong evidence supporting the benefits of genetic rescue for fragmented populations, conservation managers are often reluctant to adopt these measures (Frankham, 2015). We suggest that the impacts of recent habitat fragmentation may have been underappreciated for many species, and that estimates of population structure solely attributed to historical evolutionary processes have potentially led to management frameworks that actually reinforce fragmentation and isolation at the expense of species-level genetic variation (sensu Coleman et al., 2013). [FIGURE 4 caption: Number of generations (log scale) for global F ST to reach 0.2 with increasing levels of habitat fragmentation for simulated N. australis metapopulations of N e = 1,000, N e = 500 and N e = 100. Simulations were based on a stepping-stone model assuming equal N e for each subpopulation and were allowed to run for 20,000 generations with a migration rate of 0.5 between adjacent demes, followed by 300 generations with no migration. The red dashed line indicates the approximate number of generations since construction of in-stream barriers began in the MDB (160 generations).] There is also increasing evidence that natural selection can influence the evolutionary trajectory of small and fragmented populations (Brauer et al., 2017; Fraser, 2017; Wood et al., 2016). Critically for conservation, this indicates that adaptive divergence of small populations can occur quickly following fragmentation (Brauer et al., 2017) and that even very recently isolated populations may harbour novel adaptive diversity. It is therefore important to build evolutionary resilience by facilitating genetic exchange among isolated populations to restore natural evolutionary processes and maintain species-level genetic variation, potentially valuable under a range of future selection regimes (Webster et al., 2017; Weeks et al., 2016). There is a global biodiversity crisis unfolding in freshwater ecosystems, with aquatic vertebrate populations declining by 80% over the last 50 years (Darwall et al., 2018). Restoring functional connectivity for aquatic communities across river basins via traditional mitigation approaches is simply not feasible within the time frame required to enable many currently threatened species to persist. There is also now strong empirical evidence that several long-established beliefs central to prevailing conservation practices are overly cautious, and that the current local-is-best approach increases the prospect of managing species to extinction (Frankham et al., 2017; Pavlova et al., 2017; Weeks et al., 2016). Given widespread fragmentation, habitat loss and the ongoing global decline of freshwater biodiversity, a rapid paradigm shift is needed to empower conservation practitioners to take action before demographic issues become critical.
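The Figure 4 caption above specifies the simulation design: stepping-stone metapopulations of equal-sized demes, a long connected phase with a migration rate of 0.5 between adjacent demes, then an isolation phase, with the number of generations recorded until global F ST reaches 0.2. The sketch below reproduces only the spirit of that design with a simple Wright-Fisher allele-frequency model; it is not the individual-based simulator used in the study, the demes are arranged on a ring rather than a chain for brevity, and all parameter values other than those quoted in the caption are placeholders.

```python
import numpy as np

def generations_to_fst(n_demes=12, ne=500, n_loci=200, m=0.5,
                       burn_in=2_000, fst_target=0.2, max_gen=20_000, seed=1):
    """Wright-Fisher stepping-stone sketch: generations of complete isolation
    needed for global F_ST to reach fst_target after a migratory burn-in.
    (The study used a 20,000-generation connected phase; shortened here.)"""
    rng = np.random.default_rng(seed)
    p = np.full((n_demes, n_loci), 0.5)           # allele frequencies per deme/locus

    def step(freqs, migrate):
        if migrate:                                # stepping-stone exchange on a ring
            left, right = np.roll(freqs, 1, 0), np.roll(freqs, -1, 0)
            freqs = (1 - m) * freqs + m * 0.5 * (left + right)
        return rng.binomial(2 * ne, freqs) / (2 * ne)   # binomial drift, size ne

    def global_fst(freqs):
        pbar = freqs.mean(axis=0)
        num, den = freqs.var(axis=0), pbar * (1 - pbar)
        ok = den > 0                               # ignore loci fixed everywhere
        return (num[ok] / den[ok]).mean() if ok.any() else 1.0

    for _ in range(burn_in):                       # connected phase (pre-barriers)
        p = step(p, migrate=True)
    for gen in range(1, max_gen + 1):              # isolated phase (post-barriers)
        p = step(p, migrate=False)
        if global_fst(p) >= fst_target:
            return gen
    return max_gen

# e.g. generations_to_fst(ne=100) returns far fewer generations than ne=1000,
# matching the qualitative pattern of faster differentiation for smaller demes in Figure 4.
```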
There are risks associated with any proactive management intervention such as translocation or genetic rescue. These risks, however, need to be weighed against the ever-increasing risk of doing nothing. ACKNOWLEDGEMENTS We thank the many people who helped with fieldwork, and acknowledge the PhD scholarship awarded to Chris Brauer. CONFLICT OF INTEREST None declared.
2020-02-13T09:11:02.664Z
2020-02-05T00:00:00.000
{ "year": 2020, "sha1": "2e82c3d5add9c4faf125f35b12e18f5b515df2d9", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/eva.13128", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b625aaf09f0b19beaf8de99559356091e9f1780c", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Biology", "Geography", "Medicine" ] }
158268147
pes2o/s2orc
v3-fos-license
Developing Russian Phd Students’ Academic Culture in EAP Courses for International Communication and Co-Operation The paper gives a didactic insight into the concept of “intercultural academic communication” /IAC/ analyzing its types, forms, structure and bilingual input for the purposes of improving Russian advanced students’ communication skills as intercultural speakers and writers in English-speaking academic settings. On the basis of the 2015-2017 cross-cultural analysis of Russian Master’s Degree & PhD Students’ experiences of intercultural communication it provides a didactically-oriented and competency-based classification of communicative barriers to effective cross-cultural academic communication, describing such of them as linguistic, pragmatic, sociocultural, cognitive and visual communication barriers. The paper argues that the theoretical framework for designing tasks aiming at improving PhD students’ bilingual pluricultural competence to use English as a lingua franca in intercultural academic settings is to be based on L. Vygotsky’s cultural historical theory, A.N Leontiev’s activity theory, A.A. Leontiev’s psychological theory of communication, S. Hall’s theory of cultural factors and contexts and culturally-oriented FLT approaches to developing students’ bilingual academic competences on a multidisciplinary basis. The paper concludes with some recommendations on creating a hierarchical set of multidisciplinary problem-solving tasks and activities specifically designed to help PhD students meet new 21st century challenges of intercultural communication & co-operation, avoiding culturebound academic pitfalls in today’s extremely complicated world. Among these tasks are those that involve PhD students’ into: a) observing and generalizing the similarities and differences of communicative and/or cognitive academic schemata in Russian and in English; b) classifying communicative barriers between intercultural speakers or writers (incl. English native & non-native speakers); c) interpreting the appropriacy of academic products in a FL from a global intercultural perspective; d) making suggestions for necessary pluricultural academic self-education in order to be able to foresee and/or identify communication barriers and find effective communicative tools to bridge intercultural academic gaps; e) doing thought-provoking case-studies in IAC; f) transforming interculturally inappropriate academic products in a FL into appropriate ones; g) group role-playing of IAC schema modes involving different academic roles that are typical of English-speaking international science co-operation settings; h) doing “Study & Innovate” projects. Introduction It is a well-known fact that a marked increase of international scholarly co-operation between Russian researchers and researchers from other countries has been taken place in the country since the end of the 20 th century.For the last twenty five years many Russian universities, especially research universities, have done much to encourage and promote international partnerships, collaboration and co-operation in research and education (Frumina & West, 2012).In 2016 V. 
Kaganov, the former deputy minister of Education and Science of the Russian Federation, in his speech "The Role of Russian Educational Policy and International Scientific and Educational Co-operation in the Development of Innovation and the Formation of "Knowledge Triangle"(2004) ,stated that in Russia the situation in these fields is characterized by: (a) the active participation of Russian educational and scientific organisations within the international partnership in the framework of the European Union's programs and the program "Horizon 2020" and Shanghai Co-operation Organisation's international scientific and educational programs; (b) the creation of the BRICS Network University within the research and Innovation initiative of the BRICS, which includes the implementation of mega-science projects, large-scale national programs, and also the development of joint research and innovation platform; (c) the co-operation of Russian universities with a number of international organizations and international projects (Kaganov, 2016, p.2-3) .All these steps made a great impact on developing and , then, renewing Russian Higher Education Standards in 2016-2017 in general, and on choosing new approaches to designing Master's and PhD's educational programmes, in particular.Quite recently, Russian universities have been widely discussing the Pan-European ideas and concepts of Open Education, Open Science, Open Innovation and Open to the World (The Three Os: Open Innovation, Open Science, Open to the World, 2016; G7 Science Ministry Ministers' Communiniqué, 2017) and how to implement them into training Russian researchers for effective international academic communication (Artamonova, Demchuk, Kagneev, Safonova, 2018). As English has been for a long time a lingua franca in the world of international science across the globe (including contemporary Open Education and Open Science fields), there are no doubts that it should be taught as a lingua franca of international research co-operation and collaboration, and not only with native English speakers, but with non-native English speakers as well.And that necessitates developing a pluricultural model of tertiary language education, involving teaching English for academic purposes /EAP/ with cross-cultural or pluricultural input.And though some steps have been made in EAP in terms of exploring cultural clashes experienced by international students, for example in the UK and other Englishspeaking countries (Jordan, 1997;Brick, 2006;Etherington, 2013), nevertheless the EAP methodology in general has not been fully oriented yet towards the real needs of postgraduates to become proficient intercultural academic speakers and writers, and experienced researchers in a globalized, increasingly digitalized multilingual and multicultural world of academic communication, and besides this, some national modifications of teaching EAP, for example, in Russia, are only on the way of forming linguacultural and methodological basis for solving contemporary educational EAP problems faced in its various educational contexts. 
Speaking about the linguacultural basis for teaching & Learning EAP in Russia, it seems worth mentioning that it presupposes to be formed on the results of a didactically oriented analysis of barriers to academic communication that have been faced by Russian postgraduates and postdocs in different settings of formal and informal academic communication, because in a number of works on comparative studies of sociocultural characteristics of academic communication in different cultural settings (see e.g.Jordan, 1997;Sternin, 2009;Etherington, 2013) it has been convincingly proved that sometimes international educational co-operation and research collaboration may be not effective enough because of sociocultural differences in educational, academic or research cultures (Sternin, 2009;Safonova, 2015).And as such, the latter often provoke communication gaps and barriers to efficient and successful academic communication.In other words, it seems reasonable that Russian EAP Methodology should be developed in the context of pluricultural approach (CEFR, 2001; CEFR/CV, 2018), trying to give a clear view of what the most common types of barriers Russian postgraduate students may come across in international settings of r academic communication worldwide, how to make them aware of these barriers & teach them to overcome them, at what level of tertiary education and self-education it seems most appropriately to be done and what approaches are to be used in Russian various educational contexts in order to develop step-by-step postgraduates' academic culture.Due to the considerations mentioned above, this paper discusses the concept of intercultural academic culture, focuses on providing a didactically-oriented classification of communicative barriers to Russian PhD students' effective international academic communication and gives some recommendations on designing problem-solving tasks to be used in the university classroom to help postgraduate students adopt proper communication strategies to overcome communication gaps in international academic contexts. Literature Review In the mid-1970s and early-1980s the concept of "English for Academic Purposes" was introduced into the British language methodology (Jordan, 1997) and since that time the EAP methodology has become a challenging research field not only in the UK, but across the world.Its rapid development as a branch of ESP (Jordan,1997) has been caused to a considerable extent by the intensive Internationalization and globalization of the world economy and all other spheres of human life, and, accordingly, the internationalization of Higher Education in which English functions as an academic lingua franca (Whong, 2009).Nowadays EAP is taught worldwide in a variety of sociocultural and didactic contests (Alexander, O. Argent, S. Spencer, J. 
2008).Within the last two decades much has been done in establishing the theoretical framework for teaching EAP at tertiary level (see.e.g.Alexander, Argent, Spencer, 2008;Hyland, 2009) and improving classroom practices to help university students develop their academic voice in English (Brick, 2009).The studies undertaken over the last forty years in EAP provide us with: some definitions of EAP as a complex many-sided discipline (Jordan, 1997;Alexander, Argent & Spencer, 2008;Kemp, 2017) and special emphasis in these definitions is put on EAP interdisciplinary nature (Etherington, 2011;Bruce, 2015); the relatively new methodological concepts of "general EAP" and "specific EAP" (Hyland, 2011, pp.14-15) which are crucial for designing EAP curricula/syllabi and teaching materials for an endless variety of EAP educational contexts and settings in a close collaboration between language teachers and profile subjects teachers (Hyland, 2011); methodology appropriate to EAP that has been developed within a communication-oriented, learner-centered and specificprofile-oriented paradigm of university language education (Jordan, 1997, pp.109-125); linguadidactic basis for teaching academic reading (see.e.g., Jordan, 1997; Alexander, Argent & Spencer 2008, academic listening and speaking (see, e.g., Jordan, 1997;Brick, 2006;Alexander et.al.2008;Bruce, 2015), academic writing (see, e.g., Jordan, 1997;Alexander et.al., 2008;Hyland, 2009;Bruce, 2015) and some integrated communicative and cognitive skills (Bruce, 2015); a product-based approach (Jordan,1997), a process-based approach (Jordan, 1997), a genre-based approach (Brick, 2006, Hyland, 2009;Bruce, 2015) and a corpus-based approach (Bruce, 2015) to developing university students' academic skills related to their academic language competence in English and research powers in their specific profile fields of research; much evidence of some cultural or cross-cultural challenges (Jordan,1997, Brick, 2006: Sternin, 2009; Etherington, 2011; Okamoto, 2015; Sarmadi, Nouri, Zandi, Lavasani, 2017) facing international postgrate students which could not and should not be ignored in the theory and practices of teaching EAP. 
The methodological findings on academic culture (Jordan,1997, Brick, 2006;Okamoto, 2015;Sarmadi et.al., 2017) have raised a very important question about broadening the objectives and scope of EAP as a discipline or a number of interrelated subjects.These EAP studies have given special attention to the conceptualization of the notion of academic culture, considering it as a prerogative of any didactic model aiming at developing students' efficient academic skills and appropriate academic behaviours.But the thing is what we mean by academic culture , because in the EAP research field the concept may refer to: some universal characteristics of academic culture (Bergquist & Pawlak, 2008;Brown & Coombe, 2015) and its structural components (Bergquist & Pawlak, 2008); cultural/sociocultural characteristics of a particular academic culture in a particular culture -bound educational context (Jordan, 1997, Ballard & Clanchy 1984, 1991;Sternin, 2009); levels of academic culture, such as a macro level (national science policy, Institutional infrastructure, mission of academics in society, academic knowledge in society) and a micro level (academic discourse practices, publication practices, managing academic activities, knowledge acquisition practices, discipline practices) (Okamoto, 2015); special relations in academic world, including hierarchy / status, gender, nationality / ethnicity (Okamoto, 2015); discipline-specific academic subcultures, for example, the paper "Culture Shock?Genre Shock?" by Feak (2011) argues that though a larger academic culture exists, international students should realize that different disciples need to be viewed as cub-cultures with their specific values, processes, and world of value (Feak, 2011, p. 43-44). Referring to the last point, we could agree that these discipline-specific academic subcultures may be associated with the concept of academic culture (native or no-native), but at the same time we should not overestimate their role in academic settings, and, accordingly, in EAP methodologies.In truth, what is really badly needed is a much broader conceptualization of academic culture in EAP methodology, the one that was firstly put forward by Jordan in 1997.According to Jordan ,"Academic culture consists of a shared experience and outlook with regard to the educational system, the subject or discipline, and the conventions associated with it."(Jordan,1997 p.98).While reinforcing the ideas expressed in the cited definition of academic culture further , Jordan finds it necessary to focus on such elements (that are related, from his point of view, to academic culture) as: a) academic cultural clashes recorded in different British educational contexts as a consequences of existing differences in educational background and cultural background between native teaching staff and non-native Master's and PhD Students (Jordan, 1997, p.99-101), and b) academic conventions ( a clear understanding of academic hierarchy, academic verbal and non-verbal behavior schemas ) that are to be followed by international students in a culturally new academic context (Jordan, 1997, p. 
101-103).Jordan's EAP assumptions were based on his brief analysis of the research findings reported by Thorp (1991), Coleman (1997), Holliday (1994) in their works and the research findings presented in his own study as well (Jordan,1997).All the findings and experiences in EAP discussed by Jordan lay reasonable grounds for drawing the scholars' attention to the necessity of designing a set of culture-bound courses including not only those that relate to the modes of academic behavior in the UK, but also those that help international students adapt themselves to the new cultural settings in different spheres of communication in the country.His suggestions on designing a course in British (Cultural) Studies may serve as an example of the courses dealing with general aspects of the host country. Jordan's ideas and approaches to developing students' academic culture are almost entirely based on his understanding of EAP problems that have been identified in the so-called anglophone educational contexts, in other words, in the UK universities and other universities within the Inner Circle of English (Kachru, 1996).Meanwhile, nowadays teaching EAP has also entered the non-anglophone zone within not only the Outer Circle of English, but the Expanding Circle of English (Kachru, 1996) as well, for example, in Russia.And recently in some top Russian universities EAP has started being taught through a set of interlinked subject-specific language courses with some linguacultural bilingual input.These courses have been designed to increase Russian postgraduates' employability skills and opportunities in the country and worldwide by developing their academic culture on an interdisciplinary and cross-cultural or pluricultural basis.Russian educationalists have come to a consensus that EAP courses should be designed with the view to developing postgraduate students' awareness of: global characteristics of academic communication (that is to a great extent a westernized Pan-European mode of academic patterns of perception, interaction and production); In other words, the EAP in Russia is mostly focused on internationalized academic communication as a specific phenomenon of the today's globalized, internationalized and digitalized academic world, but all the same the teaching of EAP in the country should not and would not ignore multicultural nature and pluricultural realities of contemporary academic communication. 
Not once has it been proved by scholars that academic clashes and communicative and/or cognitive barriers to effective academic communication can substantially impair students' academic achievements at university (Coleman, 1987;Jordan, 1997;Ballard andClanchy, 1984, Holliday, 1994;1991;Sternin, 2009;Feak, 2011) and their after-university professional life, however, these barriers have not been given a necessary methodological consideration in the EAP didactics yet, because till that time these barriers have been mostly studied in such fields of human knowledge as communicative linguistics (see, e.g., Sternin, 2009;Bogatikova, 2009) and cross-cultural or pluricultural studies, especially in business (see e.g.Gibson, 2002).But could we really nowadays move on in developing postgraduates' academic culture without making postgraduate students aware of those cultural clashes and barriers that they may come across in intercultural academic communication?Could we really train them for being efficient and competitive professionals and researchers without involving them in foreseeing, identifying and solving general and specific cultural academic problems that may often face them when they are involved in cross-cultural or pluricultural academic interaction?And could that be done without exploring and classifying the cultural difficulties experienced by postgraduates in a particular country's educational context or in a pluricultural environments? Conceptualizing the notions of academic communication and intercultural academic communication As has been mentioned before, the focus of EAP methodologists in anglophone contexts is on academic communication (mostly formal) related to scholastic environments.In Russia, meanwhile, especially when referring to intercultural academic communication, it is thought necessary to give a broader interpretation and conceptualization of academic communication with the view to bringing global perspectives into Russian cross-cultural / pluricultural tertiary education and into postgraduates' bilingual/trilingual and pluricultural developments through Russian and English (plus any other foreign/second language) , educating them as intercultural academic speakers and writers able to act in various national and international academic settings.Thus, international academic communication is understood as one of the spheres of professional intercultural communication that is related to scholastic environments and to research environments as well in which specifically structured verbal and non-verbal patterns of academic behavior are followed in English, general and specific characteristics of different academic discourse communities are taken into consideration while a) perceiving, collecting and evaluating academic information, b) producing academic products and reflecting on their quality, c) interacting in intercultural academic settings with native and non-native English speakers as representatives of cultural and/or subcultural and/or linguacultural academic communities, and d) using academic mediation activities (if the latter are required in particular academic situations for effective international collaboration and active academic co-operation). 
The 2015-2017 survey results of 46 Russian Arts & Humanities postdocs' showed that among the biggest problems facing them in international academic settings were as follows: Listening comprehension difficulties, because some times they could not concentrate on the academic issues that were raised, discussed or argued because of : a variety of Englishes used by the speakers (75% of the respondent) ; sociocultural terminological lacunas used by the academics (50 %) ; cross-cultural differences in research methodology, results delivery & their evaluation (91%). Speaking difficulties in formal academic settings.These problems often occurred in formal oral academic communication and they were due to: conceptual (including terminological) lacunas provoking academic misunderstanding between Russian academics and some other representatives of the English-speaking audience (84 %); sociocultural differences in English-speaking conventions of formal academic communications (e. g., in academic public speaking) and informal academic interactions (75 %). Speaking difficulties in informal academic settings.They often occur in the situations of informal academic communication and they were caused by the lack of the required knowledge and skills for being good at small talks (and not only academic ones) and lack of confidence in themselves in English informal academic environment (85%). Writing difficulties.Writing problems occurred mostly when the postdocs were to answer the calls for abstracts and papers this or that international conference, not missing the deadline, and, then, to structure their presentation texts in accordance with the conference requirements.These difficulties were mostly caused by postdocs' lack of knowledge on producing the academic genres mentioned above (75 %) in English in accordance with the international structural and content requirements. Behavioral verbal and non-verbal difficulties that were caused by: existing differences in understanding & following some sociocultural codes and schemas of academic interaction that have been established in Russian and English academic communities for years & years (86 %) ; the lack of experience in foreseeing or identifying and overcoming verbal and/or non-verbal misunderstandings that often led to break-downs in academic communication; the lack of mediating skills to repair academic communication reakdowns (92%). Classifying cross-cultural/pluricultural barriers to international academic communication Conceptually, there should be a clear differentiation between the notion of communicative barriers as regularly occurred and may be easily recorded in communication and the notion of communication break-downs that may have an occasional character. In contrast to occasional communication breakdowns, communication barriers can be defined as a permanently fixed crosscultural phenomena that can be regularly watched, identified (and if necessary & possible recorded) whenever it occurs in the formal or informal situations and settings of intercultural communication and destroys the latter.Barriers may occur in communication because speakers or writers do not share similar discourse modes of thinking , verbal and non-verbal behavioural schemas, sociocultural traditions and values. 
In terms of competence-based pluricultural approach (CEFR, 2001: CEFR/CV, 2018) communication barriers can be classified into a) linguistic barriers (including lexical., grammatical, semantic, phonological and orthographical barriers) , b) pragmatic barriers ( including discourse, functional and behavioral-scheme barriers) , c) sociocultural barriers (including cultural, sociolinguistic, ideological and ethical barriers), d) cognitive barriers and c) visual barriers (Safonova,2017).The diagrams below give some comparative information on the types of barriers that were named by MDs Students and PhD students from their own experiences .What do these diagrams say?First, these diagrams show that linguistic barriers (with the exception of terminological ones with reference to MDs students) do occur far less in their actual intercultural academic interactions than pragmatic and sociocultural barriers or cognitive and visual barriers.Second, if there was an expected difference between MDs Students and PhD students concerning how often they could face linguistic barriers, but in terms of the frequency of the appearance of pragmatic and especially sociocultural barriers in their academic communication it was a rather surprising situation because no really noticeable differences between MDs students and PhD students had been recorded, despite the fact that these two groups of postgraduates represent different levels of tertiary education.And, finally, these diagrams give us an indirect support to the ideas expressed earlier in the paper that barriers to intercultural academic communication should be careful studied in the EAP methodologies with a Pan-European dimension, especially oriented towards postgraduate levels. The data on PhD students' experiences in EAP and their self-assessment of the skills under consideration is given in tables 1-3.This data, though the number of respondent is not very large, still gives us some food for thought.First, the most part of the respondents didn't have much experience to use English even in traditional academic activities.Second, academic discussions and academic debates being very important academic activities are somehow their terra incognita .And finally, it seems, that the most of the respondents have hardly been involved in any real international academic co-operation or collaboration when English might have been used as a lingua franca of science, but without their real participation in international conferences and projects it is hardly possible for them to gain a valuable academic experience how to collaborate and co-operate efficiently with other academics and researchers.The theoretical framework for designing tasks aiming at improving postgraduate students' bilingual pluricultural competence to use English as a lingua franca in intercultural academic settings is to be based on L. Vygotsky's cultural historical theory (1934,1991), A.N Leontiev's activity theory (1975), A.A. Leontiev's psychological theory of communication (1999), S. 
Hall's theory of cultural factors (1971,1980) and contexts and culturally-oriented FLT approaches, for example, pluricultural approach (CEFR, 2001; CEFR/CV, 2018) or sociocultural approach (Safonova, 1996) or culture-sensitive approach (Holliday, 1994) to developing students' bilingual academic competences on a multidisciplinary basis.Besides, the implementation of these goals in the training model of postgraduates as international researchers through co-learnt languages (Russian, English and other FL) presupposes the development of a system of interlinked courses in teaching Russian and English (and any other FL) for academic purposes, Cultural Studies in Academic Communication and subjectspecific theoretical tandem courses that are read in the co-learnt languages.This system should be an instrument for adopting global perspective on training postgraduates as international research collaborators.The chart below shows some possible correlations between the European researcher's status (Towards a European Framework for Research Carriers, 2011) and researchers' intercultural bilingual activities. Chart 4. Global Perspectives in Researchers' Bilingual and Intercultural Development With the view to achieving the global goals mentioned above in Russia, what is suggested in the country as a didactic instrument for developing academic culture is a hierarchical set of multidisciplinary problem-solving tasks and activities specifically designed to help Russian PhD students meet new 21 st century challenges of intercultural communication & cooperation, avoiding culture-bound academic pitfalls in today's extremely complicated world.Among are those that involve PhD students' into: 1) observing and generalizing the similarities and differences of communicative and/or cognitive academic schemata in Russian and in English; 2) classifying communicative barriers between intercultural speakers or writers (incl.English native & non-native speakers); 3) interpreting the appropriacy of academic products in a FL from a global perspective and/or an intercultural perspective; 4) making suggestions for necessary pluricultural academic selfeducation in order to be able to foresee and/or identify communication barriers and find effective communicative tools to bridge intercultural academic gaps; 5) doing thought-provoking case-studies in intercultural academic communication; 6) transforming interculturally inappropriate academic products in a FL into appropriate ones; 7) group role-playing of IAC schema modes involving different academic roles that are typical of English-speaking international science co-operation settings; 8) academic and research simulations, 9) doing "Study & Innovate" projects involving PhD Students from other countries and discussing their results at Young Researchers' Forums, 10) organizing interdisciplinary conferences of Arts & Humanities PhD students with academic debates.Some of the tasks mentioned above (1-4) may be introduced into EAP courses much earlier, starting with Master's Degree programmes and even sometimes with Bachelor's programmes, because, in truth, what we really need is a three-level EAP system.A pre-condition for designing interdisciplinary problem solving tasks listed above is a comparative cross-cultural analysis /CCA/ of academic communications, first, cross-cultural, then, pluricultural, in Russian and in English.The CCA data can provide much food for thought in terms of : a) hypothesising schemas underlying a particular academic event in official and unofficial modes 
of professional intercultural communication in English; b) outlining relevant verbal and non-verbal intercultural speakers' resources and strategies; and c) making decisions on the professional core knowledge and macro skills (with a detailed set of micro skills for each of them) that may be developed and then internally assessed in the Russian university classroom. And now it is high time to do this job, without which it is hardly possible to bring real innovations into teaching EAP with global perspectives in Russia. Conclusions and Implications The teaching of EAP in Russia is undergoing serious changes, with new challenges in developing Russian bilingual/trilingual researchers in the context of Open Education, Open Science and Open to the World. What has been discussed in this paper is only a beginning of introducing changes into the EAP/FLAP methodology in this country. Further research in the field under consideration is planned to continue collecting data on academic barriers (in order to obtain statistically reliable data), to focus on a detailed comparative cultural analysis of the academic products that postgraduate students at different tertiary levels and postdocs are expected to produce professionally, and to develop evaluation instruments for measuring intercultural academic competence relating to four modes of academic communication: perception, interaction, production and mediation (CEFR/CV, 2018). Again, the results of comparative cultural analysis of academic discourse could provide grounds for outlining academic-life-based assessment criteria and designing multi-level scales for measuring core verbal and non-verbal skills that are crucial to intercultural academic communication, together with awareness of an international code of ethics in academic research, of global academic and business academic etiquette, of universal and specific academic practices in Russian academic communities and other linguacultural communities across the globe, and of international research culture in comparison with Russian and other academic and research cultures. (Diagram caption: 2017 Survey Interview Results: Types of Communication Barriers to Effective Intercultural Academic Communication Named by Russian MDs and PhD Students Specializing in Linguistics, Intercultural Communication and FL Methodology.)
Designing a hierarchical set of multidisciplinary problem-solving tasks and activities for developing PhD students' academic culture on a cross-cultural/pluricultural basis Cross-cultural studies in academic communication (cf. Jordan, 1997) might be really helpful, something like Russian-French, Russian-Swedish or Russian-Norwegian comparative studies. Additional cross-cultural academic training is surely needed to help us avoid cultural pitfalls not only in formal academic communication, but also in administrative and informal academic communication. The data in Table 3 indicate that, to date, mediation skills have not been given a proper place in Russian PhD programmes, and, I believe, not only in Russia, because one can hardly find a section on developing mediation skills at tertiary level in any EAP methodology book (see, e.g., Jordan, 1997), let alone in EAP courses (see, e.g., Alexander et al., 2008). The findings on the communication barriers to effective intercultural academic communication that occur between Russian PhD students/postdocs and other representatives of academic linguacultural communities quite obviously indicate that the EAP methodology specialists involved in designing and implementing Arts & Humanities postgraduate programmes in Russia should reconsider the existing EAP theoretical framework and teaching and learning practices in order to make university language education capable of developing PhD students as: bilingual intercultural speakers who are active and interactive academic listeners, flexible, confident and professionally interesting speakers, and who are aware of verbal and non-verbal barriers in international communication and are able to overcome them; intercultural academic writers with the advanced academic writing skills necessary to produce academic products relating to general and subject-specific academic genres; academic mediators who are able to mediate academic texts, the theories and core concepts underlying them, and academic communications, and to apply appropriate mediation strategies (CEFR/CV, 2018); and international researchers who are able to act in academic settings across the globe in accordance with the European Code of Conduct for Research Integrity (2017).
2018-12-18T00:59:15.852Z
2018-07-24T00:00:00.000
{ "year": 2018, "sha1": "ae34d762925738bbac9d99cbdc8cc37e49bdbfeb", "oa_license": "CCBY", "oa_url": "http://journals.euser.org/index.php/ejis/article/view/3552/3453", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "ae34d762925738bbac9d99cbdc8cc37e49bdbfeb", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [ "Political Science" ] }