Comparison of long-term survival and toxicity of simultaneous integrated boost vs conventional fractionation with intensity-modulated radiotherapy for the treatment of nasopharyngeal carcinoma
Aim: In recent years, intensity-modulated radiotherapy with simultaneous integrated boost (IMRT-SIB) and intensity-modulated radiotherapy with conventional fractionation (IMRT-CF) have both been used in the treatment of nasopharyngeal carcinoma (NPC). However, their clinical effects and toxicities remain controversial. Methods: Here, 107 patients with biopsy-proven locally advanced NPC treated between March 2004 and January 2011 were enrolled in this retrospective study. Among them, 54 patients received IMRT-SIB and 53 patients received IMRT-CF. Subsequently, overall survival (OS), 5-year progression-free survival (PFS), 5-year locoregional recurrence-free survival (LRFS), and relevant toxicities were analyzed. Results: All patients completed the treatment, and the overall median follow-up time was 80 months (range: 8-126 months). The 5-year OS analysis revealed no significant difference between the IMRT-SIB and IMRT-CF groups (80.9% vs 80.5%, P=0.568). There were also no significant between-group differences in 5-year PFS (73.3% vs 74.4%, P=0.773) and 5-year LRFS (88.1% vs 90.8%, P=0.903). Notably, the dose to critical organs (spinal cord, brainstem, and parotid gland) in patients treated by IMRT-CF was significantly lower than in patients treated by IMRT-SIB (all P<0.05). Conclusion: Both IMRT-SIB and IMRT-CF are effective in treating locally advanced NPC, with similar OS, PFS, and LRFS. However, IMRT-CF has advantages over IMRT-SIB in protecting the spinal cord, brainstem, and parotid gland from acute and late toxicities, such as xerostomia. Further prospective study is warranted to confirm our findings.
Introduction
Nasopharyngeal carcinoma (NPC) is one of the most common head and neck tumors in the People's Republic of China, with an incidence of 20/100,000 in southern China. Most patients with NPC are diagnosed at a locally advanced, nonmetastatic stage III or IV. 1,2 To date, radiotherapy (RT) has been recommended as the first-line treatment for NPC. Thus, it is essential to investigate multimodal RT to improve the survival of patients with NPC.
With advances in RT equipment and computer technology, intensity-modulated radiotherapy (IMRT) has been extensively utilized in the treatment of NPC owing to its benefits in accurately targeting tumor volumes, reducing toxicities to organs at risk, and enabling dose escalation. Importantly, the dosimetric advantage of IMRT has been widely recognized and improves the local control rate. 3 On the other hand, simultaneous integrated boost (SIB), also called simultaneous modulated and accelerated RT, can deliver different doses to target regions according to their level of risk, exploiting the "dose-painting" capacity of IMRT. 4 The safety and efficacy of IMRT-SIB have been confirmed in patients with NPC. [5][6][7] Meanwhile, IMRT with conventional fractionation (IMRT-CF) has also been widely used in the treatment of NPC; however, the efficacy of IMRT-CF has been little reported, because conventional fractionation (2.0 Gy/fraction) was traditionally delivered with two-dimensional conventional radiotherapy (2D-RT) or three-dimensional conventional radiotherapy (3D-RT). Besides, few studies have compared the efficacy and safety of IMRT-SIB and IMRT-CF in recent decades.
Therefore, in the present study, we enrolled 107 patients with biopsy-proven and locally advanced NPC, and analyzed overall survival (OS), 5-year progression-free survival (PFS), 5-year locoregional recurrence-free survival (LRFS), and relevant toxicities. Our study will contribute to the improvement of clinical IMRT treatment strategies in patients with NPC.
Methods

Patients
The present retrospective cohort comprised 107 newly diagnosed and previously untreated patients with histopathologically confirmed NPC. All 107 patients were treated with IMRT at Shandong Cancer Hospital and Institute between March 2004 and January 2011. The median age of all patients was 43 years (range: 16-78 years). Human participant approval was obtained from the ethics committee of Shandong Cancer Hospital and Institute, and written informed consent was obtained from the patients. Routine workup included a thorough physical examination, hematologic and biochemistry profiles, fiberoptic endoscopic examination of the nasopharynx, and magnetic resonance imaging (MRI) or contrast-enhanced computed tomography (CT) of the head and neck, with which the status of the primary tumor and regional lymph nodes could be accurately evaluated. Chest X-ray or CT, whole-body bone scan, and abdominal ultrasonography were used to exclude distant metastasis. Staging was completed using the American Joint Committee on Cancer/Union for International Cancer Control 2002 staging classification for NPC.

Radiotherapy

All patients were immobilized with a thermoplastic mask. The simulation CT images extended from the vertex of the skull to 5 cm inferior to the clavicular heads and were obtained at a slice thickness of 3 mm. All patients were treated by IMRT.
In the IMRT-SIB group, the gross tumor volume (GTV) included the primary nasopharyngeal tumor and involved lymph nodes demonstrated by contrast-enhanced CT or MRI. Clinical target volume (CTV) 1, a high-risk region, was defined by appending a 5 mm margin to the GTV and included the inferior sphenoid sinus, clivus, skull base, nasopharynx, ipsilateral parapharyngeal space, posterior third of the nasal cavity, maxillary sinuses, and Level II, III, and Va lymph nodes. CTV2, a low-risk region, was the lower neck below the cricothyroid membrane. A smaller margin (3 mm) was acceptable in regions close to critical structures, such as the brainstem, optic nerves, and optic chiasm. CTV intermediate- and low-risk regions were contoured according to Radiation Therapy Oncology Group (RTOG) recommendations. The planning target volume (PTV) was defined by adding a 5 mm margin to the CTV in all dimensions. The prescribed dose was 66 Gy/30 fractions at 2.2 Gy/fraction to the planning gross tumor volume (PGTV), 60 Gy/30 fractions at 2.0 Gy/fraction to PTV1, and 54 Gy/30 fractions at 1.8 Gy/fraction to PTV2. Total radiation doses of 66-74, 60, and 54 Gy were delivered to the PGTV, PTV1, and PTV2, respectively, in 30-34 fractions at five fractions per week.
In the IMRT-CF group, the CTV included the primary nasopharyngeal tumor, involved lymph nodes, the high-risk regions (the entire nasopharynx, skull base, clivus, inferior sphenoid sinus, retropharyngeal lymph nodal regions, pterygoid fossae, parapharyngeal space, posterior third of the nasal cavity, and maxillary sinuses), and any high- or low-risk nodal regions, including bilateral cervical lymph node Levels II-V. The PTV was generated with a 5 mm margin and treated at 2 Gy/fraction for 25 fractions to a total dose of 50 Gy, followed by two replannings. The first and second replans were generated using a "shrinking-field technique" after 25 and 30 fractions, respectively. The accumulated radiation doses were 70-74 Gy to the PTV of the GTV for the primary nasopharyngeal tumors and involved lymph nodes, 60 Gy to the high-risk PTV, and 50 Gy to the low-risk PTV. All patients were treated with one fraction daily, 5 days per week. The target prescription doses and critical structure dose limits were planned according to the RTOG trial 0225 criteria. The irradiated doses of both IMRT techniques are summarized in Table 1.
Chemotherapy
In this study, the chemotherapy regimens were cisplatin alone, cisplatin/docetaxel, and cisplatin/5-fluorouracil. These chemotherapy regimens were known to possess similar activity and effectiveness for the treatment of NPC, and were administered as neoadjuvant, concurrent, or adjuvant treatment (Table 2).
Follow-up
The duration of follow-up was calculated from the first day of treatment to either the day of death or the day of the last follow-up. Patients were interviewed and examined at least every 3 months during the first 2 years, and subsequently every 6 months. Follow-up information included clinical examination and CT or MRI of the head and neck region. When patients had potential locoregional recurrence or distant metastasis, additional examinations or imaging modalities were performed to confirm disease progression at the discretion of the treating physician. Missing data were completed by calling the patient or the treating physician. Acute and late toxicities were scored according to the Common Terminology Criteria for Adverse Events version 3.0. The diagnostic criteria for injury of the nervous system involved the following: lesions or necrosis in the nervous system showing contrast enhancement on postcontrast T1-weighted MRI, heterogeneous hyperintensity on T2-weighted MRI, and homogeneous peri-necrosis hyperintensity on T2-weighted MRI. Furthermore, tumor recurrence or metastasis had to be excluded when determining a site of radiation encephalopathy.
Last, the time to each defined event (OS, PFS, and LRFS) was assessed. OS was measured from the first day of treatment until death or the follow-up deadline. PFS was measured from the first day of treatment to the date of the first observation of local or regional recurrence or distant metastasis. LRFS was measured from the first day of treatment to the date of the first observation of local or regional recurrence.
Statistical methods
All statistical analyses were performed using SPSS 17.0 software (SPSS Inc., IBM Company, Armonk, NY, USA). The chi-square test was used for statistical group comparisons of categorical variables. Survival analysis was performed using the Kaplan-Meier method, and comparisons were calculated using the log-rank test. The Mann-Whitney test and log-rank method were used to estimate the differences between the two groups. Multivariate analyses with the Cox proportional hazards model were used to test independent prognostic factors by backward elimination of insignificant variables.
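For readers who wish to reproduce this type of analysis, the sketch below runs a Kaplan-Meier estimate and a log-rank comparison with the open-source lifelines package. It is a minimal illustration on synthetic follow-up data, not the authors' code (the paper used SPSS), and the simulated times are hypothetical:

```python
# Minimal sketch of the paper's survival methodology (synthetic data).
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
# Hypothetical follow-up times in months, censored at 126 months
t_sib = rng.exponential(scale=250, size=54).clip(max=126)  # IMRT-SIB, n=54
t_cf = rng.exponential(scale=250, size=53).clip(max=126)   # IMRT-CF, n=53
e_sib = (t_sib < 126).astype(int)  # 1 = event observed, 0 = censored
e_cf = (t_cf < 126).astype(int)

km = KaplanMeierFitter()
km.fit(t_sib, event_observed=e_sib, label="IMRT-SIB")
print(km.survival_function_at_times(60))  # 5-year OS estimate

# Log-rank test between the two arms, as used for the P-values above
result = logrank_test(t_sib, t_cf, event_observed_A=e_sib, event_observed_B=e_cf)
print(f"log-rank P = {result.p_value:.3f}")
```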
Results

Patient characteristics and baseline level
To establish the baseline comparability of all patients, we summarized their characteristics and treatment modality. As shown in Table 2, 78 males and 29 females were included, for a male/female ratio of ~2.6:1. Most patients presented with stage III (62.6%) or stage IV (32.7%) disease. There were no significant between-group differences in sex, age, T and N classifications, or clinical stage (all P>0.05), indicating that the two groups were comparable at baseline.
Dosimetric comparison
As shown in Table 3, we summarized the maximal doses (Dmax) to the critical structures (spinal cord, brainstem, crystalline lens, and optic nerve) and calculated the mean doses (Dmean) to the parotid glands. Our results revealed that the doses to the spinal cord, brainstem, and parotid gland were significantly lower in patients treated by IMRT-CF than in patients treated by IMRT-SIB (all P<0.05). Figure 1A-C illustrates the deformable image registration process in the IMRT-SIB group. After the parotids were recontoured on the 30th-fraction images, both parotid volumes were mapped onto the initial treatment plan with the same beam configurations. These plans, referred to as "simulated plans", demonstrated significant migration of the parotids, especially toward the high-dose PTV regions, which would result in an increased dose to the parotids. In addition, the dose-volume histograms showed that the estimated delivered doses to both parotids during the RT course were higher than in the initial plans (Figure 2).
Radiation toxicities
To elucidate the potential toxicities of IMRT-SIB and IMRT-CF, we summarized all types of toxicities in our study; toxicities occurring more than 6 months after RT were defined as late radiation toxicities. No treatment-related deaths were observed in either cohort. Acute and late radiation toxicities are shown in Table 4. Systemic acute toxicities were similar in both groups.
Of the 107 patients, the most common late toxicities were xerostomia and hearing loss, observed in 40.1% and 30.8% of patients, respectively. In the IMRT-SIB group, xerostomia was found in 26 patients (48.1%), whereas the IMRT-CF group had 17 patients (32.1%) with xerostomia; the difference was significant (P<0.001). The rate of late hearing loss in the IMRT-SIB group was also greater than that in the IMRT-CF group, but the difference was not statistically significant (P=0.491).
Subsequently, we summarized late toxicities of the nervous system and found that in the IMRT-SIB group, three patients (5.5%) suffered late nervous system toxicities, including two patients (1.87%) with temporal lobe injury and one patient (0.93%) with brainstem injury. In the IMRT-CF group, by contrast, no injury to the temporal lobe, brainstem, or spinal cord was observed.
Patterns of failure and survival outcome
In the present study, the overall median follow-up time was 80 months (range: 8-126 months). Of the 107 patients, seventeen experienced locoregional relapse: eight patients (7.5%) with local relapse, seven patients (6.5%) with regional relapse, and two patients (1.9%) with both local and regional relapse. Besides, two patients (1.9%) developed both locoregional and distant failure. Based on the follow-up data, the overall 5-year OS, PFS, and LRFS rates were 80.7%, 72.9%, and 89.4%, respectively. We found no significant between-group differences in 5-year OS (80.9% vs 80.5%, P=0.568), 5-year PFS (73.3% vs 74.4%, P=0.773), or 5-year LRFS (88.1% vs 90.8%, P=0.903).
Prognostic factors
To identify independent prognostic factors, we conducted multivariate analysis using the following variables: age (dichotomized at 45 years), T classification (T1-2 vs T3-4), N classification (N0-1 vs N2-3), and RT method (IMRT-CF vs IMRT-SIB). All variables were entered into the Cox proportional hazards model using the backward elimination method. Multivariate analysis revealed that age and N classification were independent prognostic factors (P=0.011 and P=0.031, respectively). However, the RT method was not a significant prognostic factor for OS, PFS, or LRFS. At the same time, it should be noted that T classification was a prognostic factor only for PFS (P=0.037) (Table 5).
Discussion
In recent decades, IMRT has been widely utilized and has benefited most patients with NPC in short-term treatment. It is reported that IMRT has advantages in improving the local control rate and quality of life. [8][9][10] To date, IMRT has developed into two models: IMRT-CF and IMRT-SIB. However, the real effects of IMRT-CF and IMRT-SIB on NPC treatment have not been well characterized. In the present study, the 2-year OS and PFS rates in IMRT-CF-treated patients with NPC were 96.2% and 94.4%, respectively. Consistent with our data, Ng et al included 193 patients in their trial and concluded that 2-year OS and PFS in IMRT-CF-treated patients with NPC reached 92% and 95%, respectively. 8 As reported, IMRT-SIB has been widely investigated in NPC treatment with a nominal total dose of 64.8-76 Gy to the GTV at 2.12-2.4 Gy/fraction over 27-35 fractions. 8,11,12 Overall, patients with NPC treated with IMRT-SIB achieved local or locoregional control rates of 88%-96% over 2-5 years of follow-up, although the SIB dose fractionation varied across these studies. Our results showed that the 5-year OS, PFS, and LRFS rates in patients treated with IMRT-SIB were 80.9%, 73.3%, and 88.1%, respectively. In accordance with our data, Sun et al 9 conducted a retrospective analysis of 868 nonmetastatic patients with NPC treated by IMRT-SIB and found that the 5-year LRFS and PFS rates for all patients were 91.8% and 77.0%, respectively.
Chen et al 13 and Nesrin et al 14 performed dosimetric studies comparing the target volume coverage and normal tissue sparing of IMRT-SIB vs IMRT-CF for NPC. Both groups found that the maximal doses to the spinal cord and brainstem were lower with IMRT-CF than with IMRT-SIB. They suggested that normal tissues embedded within the target regions may receive higher doses per fraction than the doses given by IMRT-CF delivery techniques. Therefore, IMRT-CF may be more appropriate than IMRT-SIB when the dose to normal tissues is the major concern. Our study points in the same direction, showing that IMRT-CF provided better sparing of the spinal cord and brainstem than IMRT-SIB (P=0.001 for spinal cord and P=0.001 for brainstem). This indicates that a small area of the brainstem and spinal cord, especially in T3-T4 patients, was inclined to receive a high dose per fraction (2.2 Gy/fraction) under IMRT-SIB owing to its proximity to the boost volume. Therefore, caution should be taken when applying the IMRT-SIB technique when these critical structures are very close to the boost volume. 13 On the other hand, reports by Chen et al 13 and Nesrin et al 14 demonstrated that IMRT-SIB could provide better sparing of the parotid glands than IMRT-CF. However, our study identified that IMRT-CF provided better sparing of the parotid glands than IMRT-SIB. This is probably related to two factors: (1) CT scans were performed after the 25th fraction in the IMRT-CF group but after the 30th fraction in the IMRT-SIB group, and the treatment plans were then revised according to these scans. With the shrinking of the primary tumor and nodal masses, the parotids may shift toward the GTV (involved lymph nodes) (Figure 1C), resulting in a higher actual dose to the parotid gland than expected (Figure 2). Zhang et al 15 reported a benefit to the bilateral parotids from replanning at the fifth week. (2) If the involved Level II lymph nodes were very large and close to the parotids, the calculated total dose to the parotid gland may increase.
Afterward, we analyzed radiation-induced toxicities. Of the 107 patients, the most common late toxicities were xerostomia and hearing loss, observed in 40.1% and 30.8% of patients, respectively. Consistent with our results, Peng et al 16 showed that 39.5% of patients with NPC had grade I-II xerostomia. Moreover, our results also showed that patients receiving IMRT-CF had a lower incidence of acute and late xerostomia than those who received IMRT-SIB. We attribute this to the statistical difference in the mean dose to the parotid gland. 15 Recently, Marzi et al 17 reported that even a minor increase of parotid Dmean beyond the tolerance dose can result in potentially severe xerostomia. Clearly, IMRT-CF significantly decreased the doses to the parotids (Table 3), reducing xerostomia-related symptoms and improving quality of life. At the same time, a nonsignificantly higher incidence of grade 3 dermatitis, oropharyngeal mucositis, dysphagia, and late hearing loss in the IMRT-SIB group was also observed in this work.
Finally, we demonstrated that N staging and age acted as significant predictive factors for OS, PFS, and LRFS. However, T stage was not a significant prognostic factor in this study. In contrast, previous studies showed that both T and N categories act as significant independent factors. 8,9,18 This discrepancy may reflect the relatively small sample size of our study. Most importantly, our analysis revealed that the RT method was not a prognostic factor for OS, PFS, or LRFS, and the difference between IMRT-SIB and IMRT-CF did not affect survival outcome.
In clinical practice, the overall radiotherapy treatment time of IMRT-CF was significantly longer than that of IMRT-SIB. In this study, although the IMRT-CF technique lengthened the overall radiotherapy treatment time by 5 days, it only adds about three times the treatment-planning cost for the RT (an initial plan plus two replans). Besides, acute IMRT-SIB-related toxicities increase the corresponding cost of supportive care and management. Joiner 19 reported that a reduction in overall treatment time is assumed to reduce the risk of tumor clonogen regrowth during the late phase of radiation treatment and to improve the probability of tumor control. In our study, the biologically effective dose to the PGTV was higher with IMRT-SIB (90.12 Gy10) than with IMRT-CF (88.80 Gy10). However, the overall 5-year LRFS rates were not significantly different between the two groups (88.1% vs 90.8%, P=0.903).
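These figures follow from the standard linear-quadratic biologically effective dose formula, BED = n·d·(1 + d/[α/β]), with α/β = 10 Gy for tumor. As a quick check, 74 Gy delivered in 2.0-Gy fractions gives 74 × (1 + 2/10) = 88.8 Gy10, matching the IMRT-CF value above. A minimal sketch of the calculation (our illustration, not the authors' code):

```python
# Biologically effective dose (BED) under the linear-quadratic model:
#   BED = n * d * (1 + d / (alpha/beta))
# where n is the number of fractions and d the dose per fraction (Gy).
def bed(n_fractions: float, dose_per_fraction: float, alpha_beta: float = 10.0) -> float:
    total_dose = n_fractions * dose_per_fraction
    return total_dose * (1 + dose_per_fraction / alpha_beta)

print(bed(37, 2.0))  # IMRT-CF: 74 Gy in 2.0-Gy fractions -> 88.8 Gy10
print(bed(30, 2.2))  # IMRT-SIB: 66 Gy in 2.2-Gy fractions -> 80.5 Gy10
# The paper's 90.12 Gy10 for IMRT-SIB corresponds to a boosted PGTV dose
# near the top of the reported 66-74 Gy range at 2.2 Gy/fraction.
```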
To our knowledge, this is the first single-institution study to investigate the difference between IMRT-CF and IMRT-SIB in patients with NPC. The main limitations in this study exist in retrospective property and relatively small sample size. To draw a definitive conclusion, a randomized Phase II study on IMRT-SIB than IMRT-CF in patients with locally advanced NPC is being conducted in our hospital. This study will allow for the standardization of IMRT technique for patients with NPC.
Conclusion
Both IMRT-SIB and IMRT-CF benefit patients with locally advanced NPC and improve their survival status. IMRT-CF has advantages over IMRT-SIB in protecting the spinal cord, brainstem, and parotid gland from acute and late toxicities, such as xerostomia. Thus, IMRT-CF should be recommended for patients with locally advanced NPC. The results of this study will be further validated in prospective, multicenter controlled trials.
Value chain of pangas and tilapia in Bangladesh
The study assessed the pangas and tilapia value chains and analyzed the internal and external governance of the market actors at different levels. Two hundred samples (100 each for pangas and tilapia) were included from selected areas of Bangladesh. A combination of descriptive statistics and mathematical analysis was used to analyze the data. GAMM analysis was used to address the actors and their functions, product flow, information flow, and governance of the pangas and tilapia value chain. The study reveals that, among all the actors, processors added the highest value, followed by farmers. The internal and external governance practices followed by different actors could be ranked as average, which indicates room for improvement through intervention by the respective authorities. SWOT analysis identified high demand for pangas and tilapia in domestic and international markets, inadequate market infrastructure, and the increasing cost of feed as the major strength, weakness, and threat, respectively. The study recommends that good governance be ensured from the production point to the consumer along all the actors of the value chain. The government should take steps to monitor feed quality and to improve pangas and tilapia value chain governance. Moreover, DoF, BFRI and NGOs should play their assigned roles in training the chain actors and providing extension services in order to explore the export potential of pangas and tilapia in Bangladesh.
Introduction
Bangladesh has achieved remarkable progress in the fisheries sub-sector since its independence. The contributions of the fisheries sub-sector to national GDP and agricultural GDP were 3.65% and 23.12%, respectively (MoF, 2016). Bangladesh has the third largest aquatic biodiversity in Asia, behind China and India, with about 800 species of fresh, brackish and marine waters, the world's largest flooded wetland, and three main river systems: the Ganges, Brahmaputra and Meghna (Hussain et al., 2009). Among the various fish species, pangas and tilapia are particularly important because of their size and taste. In recent years, pangas and tilapia have become the most popular commercial cultivable species due to high yield, strong response to external feeding, and the availability of seed to meet farmers' demand (Razeim et al., 2017; Rahman et al., 2012).
Pangas and tilapia production were 510,097 and 370,017 metric tons, accounting for 12.34% and 8.95% of total fish production, respectively (DoF, 2017). Within total aquaculture production, the greater Mymensingh area (i.e., Mymensingh, Sherpur, Jamalpur, Kishorganj and Netrokona) has made significant advances in commercial pangas and tilapia production (Islam, 2009). The pangas and tilapia value chain is totally controlled by the private sector. A large number of market actors are associated with pangas and tilapia farming activities, such as hatchery and nursery owners, fry/fingerling traders, producers, and fish traders (paiker, faria, aratdar, retailers, etc.), who receive a large share of the profit in the total market (Ayubu, 2017; Loc, 2009). However, competition is minimal at the wholesale level; there is an informal restriction on new entrants to the wholesale market due to the presence of a strong wholesale traders' association. At the retail level, by contrast, competition is high and open in both fingerling and fish selling; anybody can join the retail market and contact wholesalers or their agents directly, using cash purchases or credit contracts with wholesalers. Credit-retailers usually receive low-value fish from wholesalers initially, and as trust builds up over time they are allowed to sell high-value fish (Apu, 2014).
Moreover, there is no particular policy for upgrading the pangas and tilapia value chain. There are very few active organizations for fish farmers, which has resulted in exploitation by well-organized and influential actors such as fish traders, feed companies and so on. Although the fisheries sub-sector has experienced significant growth, the livelihoods of small actors have not improved much, while the principal actors (i.e., traders) of the fish value chain have accumulated the lion's share of the profits. That is why a justified upgrading of the pangas and tilapia value chain is needed in order to improve the positions of actors along the value chain. The issues bearing on upgrading the activities of pangas and tilapia market actors include: (i) high levels of competition and participation at each node of the value chain; (ii) lack of institutional organization and coordination among actors at individual value chain nodes; (iii) the exclusion of smallholder fish farmers from higher-value markets due to limited access to information; (iv) lack of formal capital appropriate for the fish production system; (v) lack of enforcement of standards and policies to enhance fish production; and (vi) widespread use of low-quality inputs (Apu, 2014).
The present study relates to a few earlier studies. Neela (2015) conducted a study on value chain and governance analysis of tilapia and found that wholesalers added the highest value to tilapia and that value chain governance was very weak in the study area. Hossain et al. (2013) reported that for small, medium and large fish, value addition was 14.0% to 23.0%, 17.0% to 23.0% and 52.0% to 69.0%, respectively. Maurice et al. (2010) studied the value chain of farmed African catfish; their main findings indicated a lack of cooperation in the domestic value chain, which had led to vulnerability of farmers even though the chain had potential for higher income. Navy et al. (2012) conducted value chain research on five key fishes in Cambodia and found that the main players in the fisheries marketing system comprise fishermen, collectors, wholesalers and retailers. Simpson (2012) explored the opportunities for small-scale suppliers within the tilapia value chain in Achavanya of Dangme West District, Ghana, and found that the value chain activities involved no value addition, since there was no processing factory in Achavanya to add value to the fish before it reached the final consumer.
The above literature review indicates that only a few studies have examined pangas and tilapia value chain analysis together with governance. This study analyzed the value chain and governance of pangas and tilapia together, which will help to identify the problems of the value chain actors, matters of importance to government, non-government organizations, business people and policy makers. The specific objectives of the study are: i) to develop a pangas and tilapia value chain map and estimate the value addition by different actors; and ii) to address the governance structure of pangas and tilapia value chain actors. The findings from this study will help to shape policy options for pangas and tilapia culture and its extension.
Study Areas and Sample Size
The pangas and tilapia farmers and other actors were selected purposively from different study areas. The total sample size was 200, of which 100 were involved with pangas and the remaining 100 with tilapia (i.e., hatcheries and nurseries, farmers, input suppliers, aratdars, wholesalers, processors, retailers and consumers) (Table 1). The study was based on both primary and secondary sources of data and information. The researchers collected first-hand data and related information through direct interviews and nine focus group discussions (FGDs) using different types of questionnaires. The findings from the FGDs were incorporated in the value chain mapping and in assessing governance issues. Secondary data and information relevant to this study were also collected for analysis.
GAMM analysis
Gendered and Adapted Market Mapping (GAMM) analysis was used to examine the value chain map and value chain governance of pangas and tilapia. GAMM analysis incorporates a series of subsector mappings for purposes of analysis (OXFAM, 2013). The three parts of GAMM analysis are: Part I: assessing and mapping the value chain; Part II: assessing and mapping the service market; and Part III: assessing and mapping the (dis)enabling environment.
Part I: Assessing and mapping value chain

GAMM starts with identifying the core value chain and includes the respective value additions and issues observed at each level. Different types and numbers of actors are identified first based on their respective roles relative to the product along the value chain. Once identified, these actors are placed along the chain according to the sequence of the flow of the product. The dynamics and issues associated with every type of actor are then articulated. Value additions of different stakeholders were estimated using the following equations (Acharya and Agarwal, 1987), where gross margin is computed against production cost for producers and against purchase price for traders:

Gross margin = Sales price - Production cost (producers) or Purchase price (traders)
Value addition = Gross margin - Marketing cost
Price spread = Consumers' purchase price - Producers' sales price
Producers' share of consumers' Tk. = (Producers' sales price ÷ Consumers' purchase price) × 100

Part II: Assessing and mapping service market

Along any product value chain, a network of actors termed the service market supports the core actors of the chain. The types of services and their providers vary across areas, products and time. As such, service market mapping incorporates a wide range of components related to this aspect, from embedded services and fee-based services, through their payments, to service delivery mechanisms and flows of benefits, amongst other related factors.
Part III: Assessing and mapping of (dis) enabling environment
Issues such as government rules and policies, social norms and practices, infrastructure, topography, natural ambience and other underlying factors are typically not emphasized in core value chain frameworks. Despite their profound influence on the core value chain, these aspects are conventionally considered 'extraneous' factors, and their primacy as significant determinants of the power dynamics and structures of product markets is dismissed as secondary. These phenomena are labeled the (dis)enabling environment.
Value chain governance
The main issues in value chain governance include coordination, communication or transmission of information, distribution of (market) power, and collaboration. Governance tools in value chains include rules (or standards), which may be product standards (e.g., food hygiene standards) or process standards (e.g., health and safety standards for employees). Internal or formal governance refers to the food safety and quality standards that buyers impose on producers and exporters and that change behaviour in supply chains. External or informal governance is the institutional framework that governs how the chains operate (Kruijssen and Young, 2012).
SWOT Analysis
SWOT is an acronym for Strengths, Weaknesses, Opportunities and Threats. By definition, Strengths (S) and Weaknesses (W) are internal factors over which people have some measure of control, whereas Opportunities (O) and Threats (T) are external factors over which they have essentially no control. SWOT analysis helps identify the positives and negatives inside an organization (S-W) and outside of it, in the external environment (O-T). Developing a full awareness of the situation can help with both strategic planning and decision making (Kotler et al., 2009). The SWOT analysis gives insight into the positive and negative sides of the pangas and tilapia value chain.
GAMM Analysis: Part I -Assessing and Mapping Value Chain
The first part of the Gendered and Adapted Market Mapping (GAMM) analysis covers the respective value additions and issues observed at each level and sketches out the value chain map. Different types and numbers of actors are identified based on their respective roles along the value chain.
Involvement of actors and activity flow in pangas and tilapia value chain
Value chain analysis involves the analysis of all supporting and primary activities in the process of transforming inputs into outputs that give a greater sense of value to the customer. There are three stages of the pangas and tilapia value chain in the study areas.
Value addition by different actors of pangas and tilapia value chain
Table 2 reveals that the gross margin of farmers was Tk. 33 per kg of pangas. The marketing cost of farmers was Tk. 3 per kg, so the value addition of the pangas farmer was Tk. 30 per kg. The gross margin of aratdars was Tk. 5 per kg; their marketing cost was Tk. 2 per kg, so their value addition was Tk. 3 per kg. Similarly, the gross margins and value additions of the wholesaler and retailer were Tk. 12 and Tk. 5, and Tk. 13 and Tk. 9 per kg, respectively. Processors generally purchase large fish, because obtaining 1 kg of fillet requires 3-4 kg of fish (more than 1.0 kg per piece) on average. The purchase price of pangas per kg of fillet was Tk. 345 and the sales price was Tk. 450. The gross margin was thus Tk. 105 per kg; the marketing cost was Tk. 50 per kg of fillet, and so the value addition of the processor was Tk. 55 per kg.
On the other hand, tilapia farmers' gross margin and value addition were Tk. 34 and Tk. 32 per kg, respectively, with a marketing cost of Tk. 2 per kg. Likewise, the gross margins and value additions of the aratdar, wholesaler and retailer were Tk. 5 and Tk. 4, Tk. 16 and Tk. 6, and Tk. 17 and Tk. 14 per kg, respectively. The processors' gross margin was Tk. 120 per kg of tilapia; the marketing cost was Tk. 55 per kg of fillet, and thus the value addition of the processor was Tk. 65 per kilogram.
Total value addition by all the market actors was Tk. 102 and Tk. 121 per kg for pangas and tilapia, respectively. Value addition was highest for the processor (53.9% and 53.7% for pangas and tilapia, respectively), followed by the farmer (29.4% and 26.4%), retailer (8.8% and 11.6%), wholesaler (4.9% and 5.0%) and aratdar (2.9% and 3.3%) (Table 2). The findings of INNOVISION (2013) were quite different: that study reported that in Bangladesh, on average, the percentages of total value addition by pangas and tilapia farmers, aratdars, paikers and retailers were 30.0, 6.0, 34.0 and 30.0 percent, and 54.1, 4.1, 14.9 and 27.0 percent, respectively. (Note: Processors sold directly to the export market and to local consumers; only a negligible portion went to retailers. Hence, the wholesaler's sales price equals the retailer's purchase price.)
Producers' share (73.9% and 68.3% for pangas and tilapia, respectively) was moderate, which is considered an indicator of marketing-system efficiency tilting in favor of the traders. It was also found that the price spread (Tk. 30 and Tk. 38 per kg for pangas and tilapia, respectively) was high, which indicates lower efficiency of the marketing system (Table 3).
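To make the arithmetic behind these tables concrete, the short sketch below recomputes value addition and actors' shares from the per-kg pangas figures quoted above. It is our illustration, not the authors' code; the wholesaler and retailer marketing costs are implied by gross margin minus value addition.

```python
# Per-kg pangas figures (Tk.) reported in the text.
gross_margin = {"farmer": 33, "aratdar": 5, "wholesaler": 12,
                "retailer": 13, "processor": 105}
marketing_cost = {"farmer": 3, "aratdar": 2, "wholesaler": 7,
                  "retailer": 4, "processor": 50}

# Value addition = gross margin - marketing cost (Acharya and Agarwal, 1987)
value_added = {a: gross_margin[a] - marketing_cost[a] for a in gross_margin}
total = sum(value_added.values())  # -> 102 Tk/kg, as reported
shares = {a: round(100 * v / total, 1) for a, v in value_added.items()}

print(value_added)  # farmer 30, aratdar 3, wholesaler 5, retailer 9, processor 55
print(shares)       # processor 53.9%, farmer 29.4%, retailer 8.8%, ...
```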
GAMM Analysis: Part II -Assessing and Mapping Service Market
The second part of the GAMM analysis concerns service market mapping, which incorporates a wide range of components related to this aspect, from embedded services and fee-based services to delivery mechanisms and the flows of product and information.
Product flow
In the study areas, feed was mainly supplied from the Trishal, Bhaluka, Maona, Savar, Dhaka and Chittagong districts. There were various feed companies, such as Index Feed Ltd., Aftab Feed Products Ltd., Krishibid Feed Ltd., Mega Feed Ltd., Quality Feed Ltd., ACI Feed Ltd., etc. There were also feed shops that supplied loose feed or the inputs for loose feed, which they brought from different districts. Private-sector companies provided different types of technical information and support through their retailers and distributors, including training sessions, demonstrations, water and soil tests, feed tests, information availability, etc. The business incentives of the private sector motivated these activities.
In the study areas, only a few hatcheries produced pangas and tilapia fingerlings. Most of the farmers collected fingerlings from Bogra district. Hatcheries supplied fingerlings to nurseries, farmers or fry traders. Farmers sold marketable-size pangas and tilapia at the arat. An arat is the place where the market agent (aratdar) arranges or negotiates sales for the sellers on a commission basis, including financing of suppliers and buyers and often dealing on their own account (Goswami, 2016). From the arat, one portion of the pangas and tilapia was supplied mainly to Sherpur, Jamalpur, Netrokona, Mymensingh sadar and different upazilas of these districts, and another portion was supplied to Chittagong, Maona, Savar, Gazipur and different markets of Dhaka by various modes of transportation. These were thus the major demand centers for pangas and tilapia. From the local arat, pangas and tilapia were purchased mainly by wholesalers of these districts.
Information flow
Different input suppliers, such as feed and medicine suppliers, fingerling traders and other input suppliers, mainly provided information about their inputs (price, quality, availability, source, volume, etc.) to farmers. Farmers provided information about the price and volume of marketable pangas and tilapia to aratdars, processors and wholesalers. Aratdars and wholesalers provided information about the price, quality, availability, source, volume and size of marketable fish to retailers, who passed the same information on to consumers. All of the market actors circulated this information mostly by mobile phone.
Backward linkage actors and their safety compliances
Backward linkage actors such as hatchery, nursery, feed supplier, transporter, credit organization, etc. of pangas and tilapia value chain, their different services and performance in safety compliances along these services are presented in Table 4.
Forward linkage actors and their safety compliances
Forward linkage actors such as farmer, aratdar, ice supplier, wholesaler, processor, retailer, etc. of pangas and tilapia value chain, their different services and performance in safety compliances along these services are presented in Table 5.
GAMM Analysis: Part III -Assessing and Mapping of (Dis) Enabling Environment
The third part of the GAMM analysis examines rules and policies regarding product quality improvement, social norms and practices, infrastructure, etc., and the perceptions of different actors about performance in maintaining these rules and policies.
Internal and external governance followed by different actors
Internal or formal governance means buyers' demands on the food safety and quality standards of the product from producers and exporters, while external or informal governance means rules and regulations such as social and business norms, relationships between buyers and sellers, political issues, etc. All the actors of the value chain have some standard governance issues which they maintain individually as part of the value chain. Table 6 presents the perceptions of different actors on different indicators of internal and external governance. Five categories were used to measure these perceptions: 'high', 'medium', 'low', 'rarely' and 'never'. It was found that internal governance was followed to a high extent by hatcheries and nurseries in developing good-quality pangas and tilapia brood stock and fry; likewise, 100% hygiene and bio-security were maintained, and quality feed, hormones and medicine were supplied for brood stock and fry. Hatchery and nursery farms had no technical limitations, but the number of skilled laborers was medium. From the external governance point of view, these farms could develop further if given better marketing facilities, outside financial support and export certificates, as they have no scarcity of inputs. Farmers were highly concerned about improvement in product quality (100%); in addition, they were 90% concerned about price reductions and 80% about buyers' rejection systems. There was little provision of technical assistance by DoF for farmers. Input suppliers were 100% concerned about hygiene and bio-security, timely deliveries and medicinal use; externally, they were 80% concerned about ensuring quality inputs. Aratdars paid low attention to hygiene and bio-security (70%) during marketing functions; they do not take direct responsibility for marketing functions because they act as middlemen in fish marketing. The internal and external governance factors of wholesalers indicate that they are not concerned about hygiene and bio-security at all (0%) and rarely have improved storage and refrigeration facilities (10%). Processors maintained commendable performance in improving grading systems and storage and refrigeration quality. Retailers were highly concerned about product quality, buyers' rejection systems and marketing facilities (70%, 80% and 80%, respectively). The findings are quite similar to those of Uddin (2009), who analyzed the food safety compliance performance of different stakeholders in value chains of Bangladesh and Thailand, from mother shrimp collection to the consumer's plate, and revealed that the competent authority monitored the hygiene and sanitation conditions of the buyer-driven value chain activities, whereas processors-cum-exporters implemented HACCP procedures in about 85.0% to 90.0% of all stages of production, distribution, processing and export of shrimp to assure quality standards.
Key Factors and Outcomes of Governance
Some factors play a vital role in overall value chain governance. These factors are crucial because the condition of the value chain governance system for a product is reflected in their outcomes. Table 7 presents these key factors and their outcomes in the study areas. The production potential of pangas and tilapia in Bangladesh is enormous, but exports are constrained by quality and management. All the actors of the value chain maintain some standards. Governance in the pangas and tilapia value chain is not well developed, owing to limited government inspection and a lack of knowledge among the respective stakeholders about good governance practice. The governance practices followed by different actors could be ranked as average, which indicates room for improvement through intervention by the respective authorities. Processing plants have ample capacity to export, and good aquaculture practices (GAP) with associated certification need to be developed. Hatcheries and nurseries should try to improve seed quality, and feed companies should try to improve their feed quality to international standards. Farmers' organizations need to be formed for producing pangas and tilapia in compliance with the demands of processors. Finally, interdepartmental committees should be established to monitor good governance from the production point to the consumer along the actors of the pangas and tilapia value chain.
The key factors and their outcomes were as follows:
- Power practice in terms of price setting: Relationships of power among all the actors were balanced, and the price of the fish was determined by bargaining power.
- Driver of the value chain: The value chain of pangas and tilapia was buyer-driven, with aratdars, wholesalers, processors, retailers, etc. setting the product specifications.
- Information flow: Information flow along the pangas and tilapia value chain was transparent; information moved among the actors mostly by mobile phone.
- Relationship: Relationships among all the actors of the pangas and tilapia chain were fair. The level of trust among the actors was so high that the product transaction cost was sometimes paid later.
- Mode of contract: Most contracts between actors within the chain were mutual; very few were made in writing.
SWOT Analysis for Pangas and Tilapia Value Chain
Table 8 presents the SWOT analysis for the pangas and tilapia value chain. The major strength was people's preference for pangas and tilapia due to their cheaper price and taste (stated by 90% of respondents). Pangas and tilapia can be cultivated with other white fish, as stated by 90% of respondents, and according to 70% of the farmers, they can be cultivated in homestead ponds, where women can play an effective role. As major weaknesses, 90% of respondents each pointed to the irregular supply of quality brood and the poor socioeconomic condition of the farmers. Questionable fry quality and lack of proper value chain governance were also identified as weaknesses, each by 80% of the farmers. The major opportunities included high demand in the domestic as well as the international market and increased income of farmers, traders and associated groups (each cited by 90% of respondents); a further opportunity, cited by 80% of respondents, was the high international demand for fillets and whole tilapia that is yet to be explored by Bangladeshi exporters. All respondents (100%) identified the increasing cost of feed as a serious threat. The complex and traditional marketing system and market control by powerful intermediaries were also identified as threats, each by 70% of respondents (Table 8).
The findings are similar in some respects to those of Apu (2014), who pointed to source of food and nutritional security, a skilled workforce and a large domestic market as major strengths; lack of quality fingerlings, small profit margins in fish trading and poor pond management practices as major weaknesses; increasing income levels, growing consumer demand and the capability to improve livelihoods as major opportunities; and the rising cost of feed and other raw materials, limited financial capital and high competition in retail fish markets as major threats to Bangladesh fisheries.
Conclusion
The study concludes that the potential of pangas and tilapia production in Bangladesh is enormous, and many farmers have devoted themselves to pangas and tilapia culture because of the high income it provides. The primary market actors of the pangas and tilapia value chain were the farmer, aratdar, wholesaler, processor and retailer. Among all the actors, processors added the highest value, followed by farmers. Moreover, processors were fully aware of the importance of quality standards and maintained all the indicators of quality standards sincerely. As profit earners, traders were the most casual group of value chain actors and were not serious about governance issues. The study identified that the major strength of pangas and tilapia production was people's preference for these fish due to their cheaper price and taste. Questionable fry quality and lack of value chain governance were identified among the major weaknesses. The major opportunities included high demand in the domestic as well as the international market, and increased income of farmers, traders and associated groups. The study also identified the increasing cost of feed as a serious threat. Some recommendations are put forward with a view to improving the entire value chain of pangas and tilapia. Good governance should be ensured from the production point to the consumer along all the actors of the value chain. The government should take steps to monitor feed quality and to improve pangas and tilapia value chain governance. Moreover, DoF, BFRI and NGOs should play their assigned roles in training the chain actors and providing extension services in order to explore the export potential of pangas and tilapia in Bangladesh.
THE EFFECT OF PLYOMETRIC TRAINING ON VERTICAL JUMP PERFORMANCE IN YOUNG BASKETBALL ATHLETES
This study investigated the effect of plyometric training (PT) on the vertical jump in young basketball athletes. A total of 39 athletes were divided into two experimental groups (male – MEG and female – FEG) and two control groups (male – MCG and female – FCG). The My Jump app quantified jump height from flight time. Data analysis relied on repeated measures ANOVA, Cohen's effect size (ES) and magnitude-based inference, with a significance level of p ≤ 0.05. Results showed that the MEG and the MCG obtained significant improvements in the countermovement jump (CMJ) and in the squat jump (SJ). The FEG and the FCG presented significant differences in the SJ, with an interaction effect; as for the CMJ, only the FEG showed improvements, with an interaction effect. Concerning ES, the MEG showed greater effects in the CMJ and in the SJ compared to the MCG; for the FEG, the ES was greater only for the CMJ in comparison with the FCG. Qualitative responses showed that PT is likely beneficial for the MEG, whereas for the FEG it is likely beneficial in the SJ and very likely beneficial in the CMJ. It is concluded that PT brought about positive effects in the MEG and the FEG, in both the CMJ and the SJ. In the control groups, both obtained significant improvements in the SJ, but only the MCG showed an improvement in the CMJ. Furthermore, results were better in the MEG and the FEG compared to the MCG and the FCG. Thus, PT is suitable for vertical jump enhancement in young basketball athletes.
Introduction
Basketball is a sport characterized by acyclic movements in an intermittent context, involving highly intense, short-distance actions interspersed with brief rest intervals 1,2. It is a predominantly anaerobic modality requiring explosive fundamental skills, as well as jumping, a very important motor skill directly associated with better sports performance 3.
The vertical jump is very present in specific basketball skills, such as rebounding, shooting and blocking; a high number of jumps is performed during a game, making improved jumping ability a fundamental requirement for success in basketball 2,4. The training methods used by coaching staffs for vertical jump development include plyometric training (PT), widely adopted in team sports to promote the development of physical capacities in a variety of age groups, e.g., young athletes 5,6. PT consists of jump exercises involving the stretch-shortening cycle, a mechanism in which the working muscle switches from an eccentric action to a fast concentric action 7,8.
The sports science literature indicates that PT is one of the main strategies used in the sports environment to optimize the vertical jump, owing to its methodological ease of application and extremely low cost 9. However, this evidence needs to be confirmed in competitive teams made up of young athletes, which is the main justification for the present investigation. In addition, as far as the authors know, this is the first study to use the My Jump tool to assess the vertical jump in research with an experimental design. Thus, the objective of this study is to investigate the effect of six weeks of a PT program on the vertical jump in young basketball athletes.
Participants
Thirty-nine young athletes playing on a basketball team participated in this study; they were selected in a non-probabilistic manner, 16 of whom were male and 23 female (Table 1). The inclusion criteria were: being registered with the Pernambuco Basketball Federation [Federação Pernambucana de Basquete] (FPB), having at least one year of experience with basketball training, and presenting no muscle injuries or issues that prevented performing the activities with maximal effort. The exclusion criteria were: failing to attend 75% of the training sessions, missing the applied assessments, or sustaining an injury during the application period of the training program. Two athletes were excluded due to injuries, and three for missing more than 25% of the training sessions. One week before the tests were run, the athletes' legal guardians and the athletes were informed about the risks and benefits of the study and signed an Informed Consent Form (ICF) and an Informed Assent Form (IAF), respectively. The study was approved by the Ethics Committee on Research Involving Humans of the Federal University of Pernambuco (Ethics Research Committee of the Federal University of Pernambuco's Health Sciences Center), under legal opinion No. 2385105.
Study Design
This study is of a descriptive nature and has an experimental design. It was carried out during the teams' training pre-season over 8 weeks. In the first week, the athletes were familiarized with the PT and with the vertical jump test, by means of two types of jump: the squat jump (SJ) and the countermovement jump (CMJ). The training sessions altogether lasted 6 weeks, starting in the second week and ending in the seventh week. In the eighth week, the athletes had their vertical jumps reevaluated through the SJ and CMJ. The young players were randomly divided into four groups: two experimental groups and two control groups. The experimental groups were divided into males (MEG) and females (FEG); both underwent PT and continued with their regular basketball training routine. The other two groups were the control groups, males (MCG) and females (FCG); they did not change their regular basketball training routine, which consisted of technical-tactical training. All athletes were instructed not to perform any training other than that proposed.
Vertical Jump Assessment
To determine vertical jump height, a protocol that assesses flight time using video recording was adopted: the athlete's takeoff and landing frames are identified, and the height of the vertical jump is obtained through the equation h = t² × 1.22625 10 . The assessment instrument was the My Jump application run on an iPhone 6s (Apple Inc., USA); the app runs on the iOS operating system, was built with the XCode software (5.0.5 for Mac OSX 10.9.2) and the Objective-C language, and is capable of recording at 240 Hz with 720p quality 11,12 . To film the jumps, the evaluator stood 1.5 meters away from the athlete, focusing the shot on the feet of the individual being evaluated. Compared with other vertical jump assessment instruments, My Jump is an important tool due to its low cost, easy applicability and portability 13,14 . For the SJ, the athletes started from a static position with their knees flexed at an angle of approximately 90 degrees. For the CMJ, they started from an upright, static position and then performed an acceleration against their own center of gravity with a knee flexion of approximately 90°. Both jumps were executed with the hands on the hips, as described by Pupo, Detanico and Santos 15 . During the evaluation, the athletes performed 3 SJs and then 3 CMJs, with a 2-minute recovery interval between jumps; the highest height in centimeters (cm) among the jumps was used for analysis, and all athletes were instructed to jump to the maximum of their individual capacity.
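As an illustration of the flight-time method just described, the short sketch below converts takeoff and landing frames into a jump height. It is not the My Jump source code; the frame numbers are hypothetical, and the only assumed constants are g = 9.81 m/s² (which yields the 1.22625 coefficient) and the 240 Hz recording rate quoted above.

G = 9.81          # gravitational acceleration (m/s^2); assumed, not from the app
FPS = 240.0       # slow-motion recording rate quoted in the text (Hz)

def jump_height_cm(takeoff_frame: int, landing_frame: int, fps: float = FPS) -> float:
    """Estimate vertical jump height (cm) from takeoff/landing frame numbers."""
    flight_time = (landing_frame - takeoff_frame) / fps   # flight time t, in seconds
    height_m = G * flight_time ** 2 / 8.0                 # h = t^2 x 1.22625
    return height_m * 100.0

# Hypothetical example: 120 frames of flight at 240 Hz (t = 0.5 s) gives ~30.7 cm.
print(round(jump_height_cm(takeoff_frame=1000, landing_frame=1120), 1))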
Training Program
The PT program started with a standardized warm-up, which included running at increasing speed guided by verbal cues, as well as dynamic stretches, lasting approximately 10 minutes. The PT was applied on two non-consecutive weekdays (Mondays and Wednesdays) for 6 weeks. Training volume was defined by the number of jumps, which was gradually raised each week ( Table 2). In all training sessions, the jump exercises progressed in intensity and complexity; intensity was scaled through a technique-based jump progression (exercise complexity). PT details are described in Table 2. Rest intervals between exercises and between sets were 30 and 120 seconds, respectively; there was no rest between individual jumps, except in the group jump, in which speed of execution was the focus. The athletes were instructed on the mechanics of the jumps and encouraged to jump with maximal effort; all jumps were executed under the same environmental and floor conditions. Basketball training took place after the application of the PT program and consisted of technical-tactical work on small-sided games, attacking moves, and individual defense; it was held three times a week (Mondays, Wednesdays and Fridays).
Statistical Analysis
Data are presented as means with standard deviation (SD). A repeated measures ANOVA was used to compare pre-training with post-training test results. The percentage of variation (Δ%) was determined through the following equation: Δ% = ([post-intervention - pre-intervention]/pre-intervention) x 100. In addition, Cohen's effect size was determined for the statistical differences identified. Effect sizes (ES) of 0.2, 0.5 and 0.8 were considered small, medium and large, respectively. Besides this test, for each variable, percentage differences in the change scores between the EG and the CG from the pre- to the post-test period were calculated together with 90% confidence intervals. The likelihoods of performance differences being better/bigger (that is, bigger than the smallest worthwhile change [0.2 multiplied by the between-subject SD, based on Cohen's d principle]) or similar or worse/smaller were calculated. The quantitative likelihoods of beneficial/better and harmful/poorer effects were rated qualitatively as follows: < 1%, almost certainly not; 1% to 5%, very unlikely; 5% to 25%, unlikely; 25% to 75%, possibly; 75% to 95%, likely; 95% to 99%, very likely; and > 99%, almost certainly 16 . A substantial effect was established at > 75%. If the likelihoods of having both beneficial/better and harmful/poorer performances were > 5%, the actual difference was deemed unclear. Significance was established at an α level of 0.05. All statistical analyses were run in the SPSS statistical package for Macintosh (version 21.0, Chicago, IL, USA).

Results

Table 3 displays the comparison of CMJ and SJ results between the CG and the EG for both sexes. These results showed that the male groups improved both their CMJ (F(2,12) = 11.11, p = 0.007) and SJ (F(2,12) = 11.07, p = 0.007), but with no interaction effect for either CMJ (F(2,…)) or SJ (F(2,…)). In the Δ% analysis, the FEG showed a variation of 13.06 for SJ and 10.75 for CMJ. The FCG, in turn, presented a Δ% of -3.01 for CMJ and 2.58 for SJ. In the MEG, Δ% values were 11.98 for SJ and 10.98 for CMJ. In the MCG, the SJ presented a Δ% of 3.71, and the CMJ a Δ% of 2.50. As for the chances of the PT program being effective, the training was rated as likely effective for the male group in both the SJ and the CMJ. In the female group, PT showed a likely chance of improving the SJ and a very likely chance of improving the CMJ.
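To make the change-score logic from the Statistical Analysis section concrete, the sketch below implements the Δ% equation, the smallest worthwhile change (0.2 × between-subject SD), and the qualitative likelihood labels. The jump heights and the 80% likelihood are hypothetical placeholders, not study data, and the full inference with 90% confidence intervals is only hinted at, not reproduced.

from statistics import stdev

def delta_percent(pre: float, post: float) -> float:
    """Percentage of variation: ((post - pre) / pre) * 100."""
    return (post - pre) / pre * 100.0

def smallest_worthwhile_change(baseline_scores: list[float]) -> float:
    """SWC = 0.2 x between-subject SD (Cohen's d principle)."""
    return 0.2 * stdev(baseline_scores)

def qualitative_label(likelihood_pct: float) -> str:
    """Map a likelihood (%) of a beneficial effect onto the scale used above."""
    bins = [(1, "almost certainly not"), (5, "very unlikely"), (25, "unlikely"),
            (75, "possibly"), (95, "likely"), (99, "very likely")]
    for upper, label in bins:
        if likelihood_pct < upper:
            return label
    return "almost certainly"

pre_cmj = [28.1, 30.4, 26.9, 31.2]             # hypothetical pre-test heights (cm)
print(delta_percent(pre=28.1, post=31.2))      # ~11.0 %
print(smallest_worthwhile_change(pre_cmj))     # 0.2 x SD of the baseline scores
print(qualitative_label(80.0))                 # "likely"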
Discussion
The present study investigated the effect of a 6-week PT on vertical jump in young basketball athletes. The findings show that both the MEG and the MCG improved their SJ and CMJ. Among females, there was improvement in both groups for the SJ, with the EG's improvement significantly greater than that of the CG, whereas for the CMJ a significant improvement occurred only in the EG.
Indeed, studies have shown the influence of PT on vertical jump in young athletes 5,17 . The present findings among females are in line with a study conducted by Idrizovic et al. 18 , who investigated the effect of an 8-week PT on young volleyball athletes; their results showed that the group subjected to PT obtained superior effects in the CMJ compared to the control group, presenting "almost certainly positive" qualitative descriptors. Post hoc analyses indicated that the group that underwent PT obtained better effects compared to the control group. Likewise, McCormick et al. 19 verified the effectiveness of PT for the CMJ among young basketball athletes. In that investigation, female athletes were divided into two groups, one that underwent PT in the sagittal plane (SPP) and one that underwent PT in the frontal plane (FPP); both groups improved their CMJ performance after the PT, but the SPP group showed a significantly greater improvement than the FPP group, with percentage increases of 10.3% and 3.8%, respectively, from pre- to post-intervention, showing that PT was effective in both groups.
The findings of the present study in the FEG for both SJ and CMJ were similar to those reported by Attene et al. 20 , who compared the effects of PT with basketball training over 6 weeks on vertical jump among young female basketball players (age 14.9 ± 0.9 years; body weight 54.0 ± 8.7 kg). That study showed that the group subjected to PT obtained significantly greater gains in the SJ parameters than in those of the CMJ, presenting pre-to-post percentage changes of 15.4% and 11.3% in the SJ and the CMJ, respectively. There are still few studies in the scientific literature addressing PT in young athletes, especially regarding its effects on females, so further research is needed to clarify this influence in this population.
Corroborating the findings for males, one study investigated the effect of a 6-week PT on vertical jump in young basketball athletes divided into 2 groups, one that underwent PT (EG) and another that only played basketball (CG), with the EG obtaining improvements compared to the CG 21 . The same study observed that the EG improved their vertical jump by 24.1%. Results from a previous study also revealed a significant improvement of 23% in the vertical jump after a 6-week PT among semi-professional basketball athletes 22 . PT is also believed to be important in other sports, such as football: Chaabene and Negra 23 investigated and compared the effects of high-volume plyometric training (HPT) and low-volume plyometric training (LPT) in pre-pubertal players. The authors showed that 8 weeks of PT promoted significant gains in SJ and CMJ for both training types, indicating that PT provides comparable gains at both volumes and has a direct influence on the athlete's sports performance.
Basketball training, due to its specificity, can contribute to improving vertical jump performance, since the execution of technical gestures requires this skill, which is constantly performed during training 2,4 . Indeed, this scenario was presented both by the FCG in the SJ (Δ% = 2.58) and by the MCG in the SJ (Δ% = 3.71) and in the CMJ (Δ% = 2.50); however, the ES was small for both groups and jumps, which requires caution before generalizing the results. Nonetheless, regular basketball training combined with PT provided greater effects, both in the FEG for the SJ (Δ% = 13.06; QR = likely) and the CMJ (Δ% = 10.75; QR = very likely) and in the MEG for the SJ (Δ% = 11.98; QR = likely) and the CMJ (Δ% = 10.98; QR = likely). The enhancement of the stretch-shortening cycle is believed to be a factor that can explain significant improvements in vertical jump performance. This cycle develops from better utilization of the elastic components of the muscles and of proprioceptive reflex stimuli through plyometric exercises, resulting in several positive adaptations in the neuromuscular system that are directly associated with improvements in vertical jump performance 24-26 .
Investigations into the influence of PT on vertical jump in athletes of both sexes, across a variety of sports, are unanimous as to the quantitative improvement of this skill and the resulting gain in athletes' performance, even when the differences are not statistically significant 18-23 .
The absence of a biological maturation assessment, the small number of participants, and the impossibility of assessing internal load, sleep routine, and diet, as well as of controlling environmental temperature, are limitations of this study. Further research on PT needs to be conducted with young basketball players, especially assessing biological maturation and internal load control, as these are factors that directly influence performance.
Conclusions
This study shows that the six-week PT induced positive effects on the CMJ and SJ of young basketball athletes. In the SJ and CMJ, the MEG and the MCG showed statistically significant increases, with greater increases in the MEG. Among females, both the SJ and the CMJ showed statistically significant increases; in the SJ, both the FEG and the FCG improved significantly, whereas only the FEG showed improvement in the CMJ.
The qualitative responses showed that, in the MEG, PT is likely beneficial for both the SJ and the CMJ, while in the FEG it is likely beneficial for the SJ and very likely beneficial for the CMJ.
In view of these results, it is possible to confirm the importance of applying PT to this age group, especially for types of training that seek to enhance the vertical jump.
KITLG Copy Number Germline Variations in Schnauzer Breeds and Their Relevance in Digital Squamous Cell Carcinoma in Black Giant Schnauzers
Simple Summary

It is well known that dark-coated dogs, especially schnauzers, have a predisposition for digital squamous cell carcinoma (dSCC). Molecular genetic studies by other researchers indicate a pathogenetic role of copy number variations (CNVs) of the KIT ligand (KITLG) gene. Our study aimed to use droplet digital PCR of blood samples to investigate whether the copy number has a predictive value for the disease in schnauzer breeds. We showed that most giant (GS), standard (SS), and miniature schnauzers (MS) had more than four and up to seven copies of this gene segment. Furthermore, the copy number in black GS with dSCC was significantly higher than in the black control GS (p = 0.02). CNV values > 5.8 indicate a significantly increased risk for dSCC, while a reduced risk can be assumed for GS with CNV < 4.7. CNV values between 4.7 and 5.8 appear as a grey area. In summary, including the germline mutation (KITLG CNV) in breeding decisions will be complex, but this diagnostic test may help to assess the individual risk for dSCC and to sensitise owners of black GS accordingly if the CNV value is high.

Abstract

Copy number variations (CNVs) of the KITLG gene seem to be involved in the oncogenesis of digital squamous cell carcinoma (dSCC). The aims of this study were (1) to investigate KITLG CNV in giant (GS), standard (SS), and miniature (MS) schnauzers and (2) to compare KITLG CNV between black GS with and without dSCC. Blood samples from black GS (22 with and 17 without dSCC), black SS (18 with and 4 without dSCC; 5 unknown), and 50 MS (unknown dSCC status and coat colour) were analysed by droplet digital PCR. The results are that (1) most dogs had a copy number (CN) value > 4 (range 2.5-7.6), with no significant differences between GS, SS, and MS, and (2) the CN value in black GS with dSCC was significantly higher than in those without dSCC (p = 0.02). CN values > 5.8 indicate a significantly increased risk for dSCC, while CN values < 4.7 suggest a reduced risk for dSCC (grey area: 4.7-5.8). Diagnostic testing for KITLG CNV may sensitise owners to the individual risk of their black GS for dSCC. Further studies should investigate the relevance of KITLG CNV in SS and the protective effects in MS, who rarely suffer from dSCC.
Squamous cell carcinomas of the toes originate in the stratum spinosum of the claws or the sole epidermis. Clinically, they may initially look like inflammation of the nail bed. There may be lameness, thickening of the toes, and ulceration of the digital tissue. The nail material often appears softened and frayed [5]. Most commonly, the forelimbs are affected, and the mean age of the canine patients is about 10 years [4][5][6]. The tumour grows locally invasive and destroys the bone as it progresses [5,7,8]. In contrast to dogs, human SCC of the nail develops very rarely [9].
Amputation of the toe is the treatment of choice in dogs with dSCC [10]. The histological grade of malignancy can be assigned by two different systems adapted from human squamous cell carcinomas [11,12] to canine digital squamous cell carcinomas [13]. The risk of local recurrence can be considered low when resected in healthy tissue. However, multiple subungual squamous cell carcinomas on different toes of one dog have been described in several cases [4,6,14]. Varying metastasis rates depending on the breed have been described in the literature [5,6]. There may be lymph node metastases [5] or systemic spread affecting various organs [15].
Dark coat colour is correlated to the development and aggressiveness of squamous cell carcinoma [1][2][3][4][5][6]13,16]. A predisposition for squamous cell carcinoma in standard schnauzers (SS) and giant schnauzers (GS) has been identified by various authors [1,[4][5][6]17]. In the United States and Canada, the giant schnauzer is among the ten breeds most commonly affected by toe amputations [2]. A data collection of 79 dogs from Italy showed that schnauzers were the most common breed (31.6%) in the study population and have a poorer prognosis than dogs of other breeds [5]. A large study from Canada confirmed the strong predisposition for the development of dSCC in giant schnauzers (odds ratio (OR): 56.7), standard schnauzers (OR: 20.3), Gordon setters (OR: 18.3), black standard poodles (OR: 11.1), Kerry blue terriers (OR: 9.4), rottweilers (OR: 7.0), and several other large black dog breeds [6]. In a German cohort of digital neoplasms, predisposed breeds for SCC included the schnauzer (log OR = 2.61), Briard (log OR = 1.78), rottweiler (log OR = 1.54), poodle (log OR = 1.40), and dachshund (log OR = 1.30) when compared with mongrels [1]. Interestingly, the miniature schnauzer (MS) seems to be rarely affected by squamous cell carcinoma [1,17]. This suggests the presence of genetic factors for the development of dSCC in dogs. Moreover, there is one report of dSCC in three members of a giant schnauzer family [18].
Cutaneous squamous cell carcinomas in man, which are mainly induced by ultraviolet light, have a different and more complex mutational landscape, e.g., with NOTCH1, NOTCH2, TP53, and RAS mutations [19]. Non-digital SCC in dogs shows the same high molecular genetic variability as human squamous cell carcinomas [20]. Thus far, canine dSCC has not been investigated for somatic oncogenic mutations.
Tyrosine kinase receptors (TKR) comprise more than 60 molecules that play an essential role in the molecular pathways of cell survival and differentiation [21]. The KIT receptor (CD117) is a TKR involved in the processes of survival and proliferation of mast cells, melanocytes, epithelial cells, and others [21]. Mutations in the c-kit gene encoding the KIT receptor are well known in canine mast cell tumours [22], in human melanomas [23], as well as in human and canine gastrointestinal tumours [24,25].
The ligand for this KIT receptor is the stem cell factor (SCF), which is encoded by the c-KIT ligand (KITLG) gene. This ligand seems to play a role in canine mast cell tumours [26], melanogenesis, and hair colour intensity in dogs [27] as well as in human familial progressive hyper-and hypopigmentation (FPHH) [28]. Overexpression of KITLG seems to be important in promoting lymph node metastasis via the JAK/STAT pathway in mouse models and cell lines of human nasopharyngeal carcinoma [29]. Furthermore, growth and invasion in human colorectal cancer cell lines are correlated with the expression of KITLG [30]. However, in human non-small cell carcinomas, no correlations were found between KITLG gene copy number, KITLG mRNA expression levels, or KITLG immunopositivity [31].
Copy number variations (CNVs) are defined as genomic regions that vary in number by amplification or deletion of DNA sequences. They play an important role in the genetic diversity necessary for evolution but are also responsible for hereditary and somatic human diseases, such as cancer [32]. Small CNVs are often benign, but those larger than 250 kb are strongly associated with pathological findings such as developmental disorders and neoplasms [33]. The KITLG CNV spans only 6 kb (chr15: 29,821,450-29,832,950) and is located 152 kb upstream of the KIT ligand (KITLG) gene [27]. A variable copy number alters KITLG expression and subsequent pigment distribution throughout the coat colour in dogs and several other species [34].
A genome-wide association study (GWAS) on standard poodle DNA found an increased risk of developing digital SCC when an increased number of specific copies (>4 copies) was expressed at the KITLG locus [35]. Furthermore, in nine dogs with digital malignant melanoma, the number of copies at the KITLG gene locus varied between four and six. Four of these nine dogs had black coats [36]. These studies were based on germline mutations detected in blood samples.
The aims of this study were (1) to investigate KITLG CNV in miniature, standard, and giant schnauzers and (2) to compare the CNV of KITLG between black giant schnauzers with and without digital squamous cell carcinomas and to evaluate its possible predictive value in this breed.

Materials and Methods

Study 1

Samples from giant and standard schnauzers were obtained from pre-operative blood samples from animals with digital masses before amputation as well as from dogs undergoing geriatric blood screening. The group of giant schnauzers (n = 39) included 13 male, 8 neutered male, 11 female, and 7 neutered female dogs aged 5 to 14 years (median 10 years). The group of standard schnauzers (n = 27) included 7 male, 7 neutered male, 5 female, and 8 neutered female dogs aged 5 to 13 years (median 10 years).
Clinical data of the standard and giant schnauzers included in study 1 were collected via questionnaires sent to the owners (breeders) and/or by telephone calls to the veterinarians. It was recorded whether the respective dogs had already suffered from SCC of the digit and which coat colour (black or pepper and salt) they had: all standard schnauzers and giant schnauzers in studies 1 and 2 were black.
Samples from miniature schnauzers mainly came from breeding animals (young and not neutered) that were routinely screened for relevant genetic diseases. The group of miniature schnauzers (n = 50) included 24 male and 26 female dogs between 1 and 9 years of age (median 1 year). Information about clinical findings or coat colour was not available.
According to the terms and conditions of LABOKLIN and the decision of the government of Lower Franconia RUF-55.2.2-2532-1-86-5, no special permission has to be obtained from the animal owners or the animal welfare commission for examinations on residual samples that are not needed for any further diagnostics.
Study 2
Two groups of black giant schnauzers were selected from the cohorts of study 1: (1) Control group (n = 11): Blood samples from black giant schnauzers aged >10 years (10-14 years; median 11.5) that remained free of digital SCC until the end of the study were used. There were 4 male dogs, 2 neutered males, 4 females, and 1 neutered female. Six younger dogs without dSCC from study 1 were excluded. (2) dSCC group (n = 22): Blood samples from black giant schnauzers aged 6-13 years (median 10) with squamous cell carcinoma of the digit diagnosed by different pathology laboratories. This group included 7 males, 6 neutered males, 6 females, and 3 neutered female dogs.
Clinical questionnaires for the standard schnauzers identified 18 black standard schnauzers with dSCC and 4 without dSCC. No data about dSCC were available in five cases; thus, they had to be excluded. As the control group of SS was too small, these data were not useful for a valid statistical analysis. We therefore did not conduct another study with these data.
SCC of the digit is not expected in the young miniature schnauzers of cohort 1. Furthermore, clinical data and coat colour were not available. Thus, no further statistical analyses were performed.
Molecular Genetics
To determine the copy number of the relevant gene segments of KITLG, molecular genetic analyses were performed on blood samples as follows: Genomic DNA was isolated from EDTA blood with the MagNA Pure 96 system using DNA Tissue Lysis Buffer and the viral NA Small RNA kit (Roche, Basel, Switzerland) according to the manufacturer's instructions. Copy number quantification of KITLG was performed by droplet digital PCR (ddPCR) using TaqMan ® probes and primers specific for the KITLG sequence, with proto-oncogene 1 (ETS1) as the reference gene, based on the paper of Bannasch et al. (2021) and as performed previously [36,37]. Measurements were taken in duplicate, and the mean value was used for further analyses. The intra-assay correlation was 0.85. The copy number was determined using the DropletReader (Bio-Rad, Feldkirchen, Germany) and QuantaSoftware 1.7.4.0917 (Bio-Rad, Feldkirchen, Germany).
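The arithmetic behind the ddPCR copy-number readout can be sketched as follows. The QuantaSoft software performs this calculation internally; the sketch assumes the ETS1 reference locus is present at two copies per diploid genome, uses hypothetical droplet-derived concentrations, and averages the duplicate measurements as described above.

REFERENCE_COPIES = 2  # assumed diploid copy number of the ETS1 reference locus

def kitlg_copy_number(target_conc: float, reference_conc: float) -> float:
    """CN = (KITLG concentration / ETS1 concentration) x 2."""
    return target_conc / reference_conc * REFERENCE_COPIES

def mean_of_duplicates(cn_a: float, cn_b: float) -> float:
    """Duplicate ddPCR runs were averaged for further analyses."""
    return (cn_a + cn_b) / 2.0

# Hypothetical concentrations (copies/ul), not measured study values.
run1 = kitlg_copy_number(target_conc=285.0, reference_conc=100.0)  # 5.7
run2 = kitlg_copy_number(target_conc=295.0, reference_conc=100.0)  # 5.9
print(mean_of_duplicates(run1, run2))  # 5.8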
Statistics
All statistical analyses and visualisations were carried out with the statistical framework R version 4.2 [37]. To compare the mean copy number values of KITLG in the three schnauzer groups (giant, standard, and miniature), we used the Kruskal-Wallis rank sum test. To compare the mean copy number values of KITLG between the two breeds (giant schnauzer and standard schnauzer) with dSCC, we used the Wilcoxon rank sum test. Multiple logistic regression was employed for binary classification. To test which variables should be included in the multiple regression model, the Bayesian information criterion (BIC) was used as a model test. To estimate the expected accuracy on new data, we performed leave-one-out cross-validation (LOOCV) as implemented in the R package "boot" version 1.3-28.1 [38]. Throughout the whole manuscript, the following significance levels were used: p < 0.05 (weakly significant), p < 0.01 (significant), and p < 0.001 (strongly significant).
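As a rough illustration of the classification workflow just described, the sketch below fits a logistic regression on copy number and age and estimates the expected accuracy by leave-one-out cross-validation. The original analysis was run in R with the "boot" package; this is a Python re-sketch with hypothetical placeholder data, not the study data.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

# Hypothetical placeholder data (not the study data).
# Columns: KITLG copy number, age in years; label: 1 = dSCC, 0 = control.
X = np.array([[5.7, 10], [6.9, 9], [6.2, 11], [5.9, 8],
              [4.5, 12], [5.0, 11], [4.8, 13], [5.2, 10]])
y = np.array([1, 1, 1, 1, 0, 0, 0, 0])

model = LogisticRegression(max_iter=1000)

# Leave-one-out cross-validation estimates the expected accuracy on new samples.
accuracy = cross_val_score(model, X, y, cv=LeaveOneOut()).mean()
print(f"LOOCV accuracy: {accuracy:.2%}")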
Results

Study 1
The copy numbers of KITLG in the cohort of study 1 (animals with and without dSCC) ranged from 2.5 to 7.6. There were no significant differences between the KITLG CNV values of the giant, standard, and miniature schnauzer groups tested by Kruskal-Wallis ( Figure 1). Multivariate regression analysis showed that there was no significant effect of sex or castration status on the copy number value. Interestingly, there was one 13-year-old female giant schnauzer that had dSCC but had a very low copy number of 2.5. In this case, the histopathological diagnosis was confirmed in our laboratory, and analyses were repeated twice to prove this unexpected finding.
The CN values from the 18 standard schnauzers with dSCC ranged from 3.8 to 6.7 (median 5.4). The copy number values in the 22 giant schnauzers with dSCC varied between 2.5 and 6.9 (median 5.7). There was no statistically significant difference between these two breeds ( Figure 2).
Study 2
Of the 39 giant schnauzers of study 1, we selected 11 black giant schnauzers that had no SCC of the digit until the end of this study (controls) and 22 black giant schnauzers with digital SCC. In the dSCC group, 19 front legs and 3 hind legs were affected. Squamous cell carcinoma was diagnosed on all toes and was distributed as follows: toe I (n = 3), II (n = 7), III (n = 2), IV (n = 2), and V (n = 8). The other six GS without dSCC were younger than 10 years and were excluded from further statistical analyses because it was too uncertain whether they would develop dSCC later in life. Further medical history of neoplasia other than dSCC was available in only three dogs: One 13-year-old male GS never had any tumour. One 13-year-old female neutered GS died of cardiac insufficiency and suffered from diabetes insipidus. One 12-year-old female GS developed a non-specified mammary tumour after the end of this study.
The number of KITLG gene copies in the control dogs ranged from 4.5 to 6.5 (median 5.2). The copy number in dogs with dSCC varied between 2.5 and 6.9 (median 5.7). The quality of the classification in giant schnauzers with and without SCC was analysed with a multiple logistic regression approach. It was identified that the copy number was significantly higher in the giant schnauzers of the dSCC group compared to the controls (p = 0.02, Figure 3). Furthermore, age was a weakly significant (p = 0.04) factor ( Figure 2) and was adjusted in the statistical analyses. Sex did not have an impact, and there was no apparent tendency that the affected toe had any correlation with the copy numbers.

An ROC curve was used to determine the specificity (true negative rate) and sensitivity (true positive rate) for predicting whether the sample of a giant schnauzer is classified as "control" or "dSCC". In general, values of the area under the curve (AUC) can range from 0.5 (random noise) to 1 (optimal classification). The AUC value in the present analysis of copy number testing of the KITLG locus in black giant schnauzers was 0.88 ( Figure 4). To estimate the prediction quality on new samples, we applied leave-one-out cross-validation. This method yields an expected accuracy (percentage of correct predictions) of 81.25%. The results identified a 50% probability threshold of 5.2 copies with 95% confidence intervals for 10-year-old giant schnauzers, as shown in Figure 5. Additionally, for the ages of 9, 11, and 12 years, the optimal thresholds for giant schnauzers are visualised in Supplementary Figure S1.
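The 50%-probability threshold follows directly from the fitted logistic model: the predicted probability of dSCC equals 0.5 exactly where the linear predictor is zero. The sketch below shows this rearrangement with hypothetical coefficients chosen only so that a 10-year-old GS lands near the reported 5.2 copies; the true fitted coefficients are not given in the text.

def cn_threshold_at_age(b0: float, b_cn: float, b_age: float, age: float) -> float:
    """Copy number at which the predicted probability of dSCC is 0.5,
    i.e. where b0 + b_cn * CN + b_age * age = 0."""
    return -(b0 + b_age * age) / b_cn

# Hypothetical coefficients: a 10-year-old GS crosses 50% near CN = 5.2.
print(cn_threshold_at_age(b0=-14.0, b_cn=2.0, b_age=0.36, age=10))  # 5.2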
Discussion
Previous publications have identified a basic risk of developing digital SCC in canine breeds with dark coats, such as standard and giant schnauzers [1,6,17]. In our study, all standard and giant schnauzers were black, and we could not investigate the effect of the coat colour variations that are common in schnauzer variants. We did not deliberately select black dogs with dSCC for our cohort, but at the time of sample collection, all dSCC cases came from black SS and GS only. As described by other authors [4,5], the front legs were more often affected than the hind legs in the dogs in our cohort, and the median age was 10 years. A regular medical check-up of the toes of dark-coated dogs, especially black standard and giant schnauzers, should be recommended as standard practice.
A KITLG copy number value >4 was found to be correlated with a predisposition to dSCC in black standard poodles compared with light-coloured standard poodles [35]. In a recent study, we found that 55 black-coated and black and tan dogs with dSCC had mean copy number values ranging from 5.5 to 5.8 in contrast to four light-coated animals with dSCC that had a mean copy number value of 4.5 [16]. Interestingly, this study has shown that there is a predictive correlation between higher germline KITLG copy number values and increased histological aggressiveness of digital SCC in dogs (mainly black-coated breeds and black and tan breeds) [16].
The present study was the first to compare the three size variants of the schnauzer breed with regard to their genetic profile at the KITLG locus. Our data did not show any significant differences in CN values among the three breed variants examined. The results of our study showed that all but one of the schnauzers had a copy number value > 4 and are thus probably at-risk animals, as has been defined for black poodles [35]. However, it is interesting that, according to the literature, only standard and giant schnauzers are predisposed to develop dSCC, while miniature schnauzers are rarely affected [1]. Thus, there must be additional protective factors in miniature schnauzers that may be correlated with body size or coat colour but have not yet been identified. Future studies should evaluate the effects of coat colour in detail. Schnauzers were originally bred in Germany, and all the dogs of our study are kept in Germany, but there is no information on the pedigree or any genetic influence from schnauzers of other countries. According to the website of the German Pinscher and Schnauzer Klub (https://psk-projekt.jimdo.com/unsere-rassen, accessed on 5 February 2023), German giant and standard schnauzers are mainly black (puppies per year: 1 pepper and salt GS for every 10 black GS and about 1 pepper and salt SS for every 1.8 black SS). In contrast, the coat colour of miniature schnauzers varies between black, pepper and salt, black-silver, and white (18.04.2007/EN FCI-Standard No. 183). The annual ratio of puppy coat colours is 1 white to 1.3 black-silver to 3.4 black to 4.7 pepper and salt (https://psk-projekt.jimdo.com/unsere-rassen, accessed on 5 February 2023).
As we only had black SS and GS in our study, we investigated whether there are any differences in KITLG copy numbers between black giant schnauzers with or without dSCC during their life. The number of standard schnauzers with reliable clinical information was too small for a valid statistical analysis.
Yet, for the first time, a predictive statement could be made about the probability of dSCC occurring in giant schnauzers. We found that in black giant schnauzers, a higher number of copies of the KITLG locus was significantly associated with the probability of developing dSCC (p = 0.02). One limitation of our GS control group is that it is based on the assumption that these animals will not develop squamous cell carcinoma of the digit even when older than 10 years. However, we designed this group according to the control group of standard poodles (older than 8 years) in the initial study mentioned above [35].
For diagnostic purposes in black GS, we recommend defining values between 4.7 and 5.8 as a grey area. A copy number value of more than 5.8 indicates a significantly increased risk for digital SCC in black giant schnauzers. A reduced risk can be assumed for black giant schnauzers with a copy number below 4.7. Further studies with higher case numbers are needed to confirm the results of this pilot study.
Nevertheless, further factors are likely to contribute to the development of such tumours, as there was one old giant schnauzer (13 years) with dSCC that had a CNV of 2.5. In this case, spontaneous dSCC can be assumed, which is independent of factors of breed predisposition.
However, the copy numbers in black standard and giant schnauzers with dSCC did not differ significantly. As we were not able to collect enough samples from control standard schnauzers by the end of this study, further investigations are necessary to determine similar diagnostic thresholds for this breed. Furthermore, this phenomenon should also be analysed by comparing black with pepper and salt standard schnauzers.
In contrast, miniature schnauzers very rarely develop digital SCC [17] but also had copy numbers ranging from 4.3-7.6 in our cohort. Thus, at the moment, the results of a KITLG CNV test should be restricted to black giant schnauzers. For the future, it is important to understand the oncogenic mechanisms and the influence of body size and/or coat colour in the schnauzer breed variants in more detail.
Thus far, only very few studies have investigated the KITLG (stem cell factor) expression level at the mRNA or protein level in human cells [30], mouse models [29], or canine samples [26]. However, no correlation was found between KITLG gene copy number, KITLG mRNA expression levels, or KITLG immunopositivity in human non-small cell carcinomas [31]. Further studies should investigate this correlation in cases of digital squamous cell carcinoma in dogs. Presuming that the ligand of the KIT receptor plays a role in oncogenesis, one would have to assume that the receptor itself is expressed in squamous cell carcinomas. Immunohistochemical analysis of the KIT receptor is well established in canine mast cell tumours [39], but, to the best of the authors' knowledge, it has not been investigated in larger case numbers of canine SCC so far. There is just one case report of a dog with an oral collision tumour of SCC and malignant melanoma that examined the expression of the KIT receptor in the neoplastic cells; only the melanocytic cells were positive [40]. In human dermal SCC, only 3 of 22 neoplasms showed expression of the KIT receptor [41]. Unfortunately, several cases of dSCC in the present study were not diagnosed in our laboratory, and samples were not available for further analyses, but prospective case collections should focus on this to achieve a better understanding of the pathogenetic correlations.
In summary, this is the first time that a prognostically relevant germline mutation has been detected for a specific canine tumour (dSCC in black giant schnauzers). However, copy number analysis of KITLG says nothing about the distribution of copies across the chromosome sets. Thus, including CNV analysis of the KITLG locus in breeding decisions is complex, and the prevention of squamous cell carcinoma of the digit in giant schnauzers remains challenging for breeders. Nevertheless, this diagnostic test can help to assess the individual risk of developing the disease and to sensitise owners accordingly if the CN value is high. It may encourage owners to screen their dogs for neoplasms if they understand that there is a well-known breed predisposition. Thus, a regular medical check-up (inspection and palpation) of the toes is recommended, especially in such high-risk dogs, and early surgery may prevent metastases.
Conclusions
In summary, detecting the copy number variation of the KITLG locus in black giant schnauzers is a promising new tool to predict the individual risk of developing squamous cell carcinoma of the digit. Further research is necessary to make comparable statements in standard schnauzers and to understand the protective mechanisms in miniature schnauzers.
Institutional Review Board Statement:
As all samples (tissue and blood) were submitted for routine diagnostic purposes, ethics committee approval was not required. All the material used was no longer needed for diagnostics. This was confirmed by the local government RUF-55.2.2-2532-1-86-5.
Informed Consent Statement: Not applicable.
Data Availability Statement: The raw data of the results presented in this study are available on request from the corresponding author.
The Effect of Incubation Time on Various Type of Local Agricultural Waste in Madiun, Indonesia to Produce Cellulases using Trichoderma viride
This study aims to determine the effect of substrate type on crude cellulase activity from local agricultural wastes such as peanut shells, coconut fibers, bran, and teak leaves, with variations in incubation time (days 1, 3, 5, and 7), produced by Trichoderma viride. Enzyme activity was measured from the amount of reducing sugar produced, using the DNS (dinitrosalicylic acid) method with a spectrophotometer at a wavelength of 540 nm. The results showed that the best substrate for cellulase production was coconut fiber with an incubation period of 7 days, which gave the highest enzyme activity of 1.340 U/ml; coconut fiber contains the highest cellulose content of the substrates tested. The lowest activity, 0.660 U/ml, was shown by the coconut fiber substrate at the day 1 incubation time, with a reducing sugar content of 0.594 mg/ml and a protein content of 0.147 mg/ml. The complexity of the chemical composition of coconut fiber meant that degradation of cellulose into glucose on day 1 was slower than on the other three substrates.
INTRODUCTION
Enzymes play an important role in industry. Enzymes have become a particular industrial commodity because their use saves energy and is friendly to the environment. The enzymes that have entered the market are mostly from the class of hydrolytic enzymes, which are still produced conventionally, are not optimal, and are imported from other countries. The need for enzymes increases every year; Indonesia was estimated to use 2,500 tons of enzymes, with an import value of around 200 billion Rupiah, in 2017. Examples of enzymes that play an important role in industrial applications are protease, xylanase, lactase, mannanase, chitinase, amylase, and cellulase. Cellulase is a commercial enzyme with a very high selling value: in the 2011 Merck catalog, the price of a 5 g pack of cellulase (Cellulase Onozuka R-10 from Trichoderma viride) is around $3,000, and a 25 g pack around $12,000, and cellulase sales continue to grow by up to 4% per year [1]. Nowadays, cellulase is widely used for various purposes in industry. In the textile industry, cellulase is used in the finishing and bio-blasting of fabrics. In the paper industry, cellulase is used to increase fiber softness. In the detergent industry, cellulase is used to increase color brightness and soften cotton. Cellulase is also widely used in the food, medicine and cosmetics industries, for example as anticholesterimic, hypolipemic, oil-absorbing, or moisturising agents [2], as well as in the management of waste resource recycling and anti-pollution treatments [3]. Microbial cellulases are also used in many other applications. Much of the local community lives as rice farmers and gardeners, and agricultural waste is usually not used optimally by the community. With the abundance of agricultural and plantation wastes containing cellulose, research is needed to produce cellulase enzymes from various types of cellulose-containing agricultural waste substrates. The application and utilization of cellulase in the industrial sector are hampered by the high price of commercial cellulase currently on the market; production of cellulase enzymes on an industrial scale requires high production costs, making the enzymes expensive. To overcome these problems, alternative production substrates are used, one of which is agricultural waste [13].
An agricultural waste substrate known to have been used in cellulase production research with Trichoderma viride is bagasse [14], where the best treatment combination for optimal crude cellulase activity was a 3% substrate concentration and 7 days of fermentation, with average values of cellulase activity (filter paperase), dissolved protein, and cellulase specific activity of 0.771 Unit/mL, 0.262 mg/mL, and 2.940 Unit/mg, respectively. A similar study was conducted by Montesqrit (2007) with a rice straw substrate, which found that the maximum cellulase activity of Trichoderma viride was obtained on the 14th day with a substrate concentration of 1.5%, with optimum cellulase activity at pH 5 and a temperature of 60 °C. A study using a banana peel substrate reported the highest cellulase activities on the 12th day: 4.4506 IU/ml for CMC-ase and 1.4943 IU/ml for FP-ase [16]. Lanasari (1999) used alang-alang rhizome as substrate and found that cellulase activity tended to increase until the 9th day of incubation and decreased at the 11th day; it is estimated that at the 9th day the molds were in the logarithmic phase and produced the highest activities, namely CMC-ase of 0.227 IU/mL, FP-ase of 0.141 IU/mL, and β-glucosidase of 0.202 IU/mL. Cellulase enzymes that have been produced must be tested for enzyme activity before industrial use; enzymes whose activity is known can be applied directly to obtain maximum results [18].
Enzyme activity is defined as the rate of substrate consumption or the rate of product formation under optimum conditions. One unit of enzyme activity is the amount of enzyme that can produce 1 µmol of glucose per minute [19]. The specific activity of an enzyme is defined as the number of enzyme units per milligram of protein. This study aims to determine the activity of crude cellulase enzymes produced by the Trichoderma viride mold from various abundant cellulose-waste substrates, namely peanut shells, coconut fibers, bran, and teak leaves. Trichoderma is a genus of filamentous fungi whose species were previously considered culture contaminants. Trichoderma is a very versatile mold: a nuisance for people, a useful fungus for industry and biocontrol, and a bane to other fungi. Trichoderma spp. are present in nearly all soils and other diverse habitats; in soil, they are frequently the most prevalent culturable fungi [20]. The results of this study are expected to make a major contribution to the optimization of cellulase enzyme production from microbes.
Substrate preparation.
Samples of the natural substrates (peanut shells, coconut fibers, bran, and teak leaves) were cleaned and chopped to a size of 2 cm; the substrates were then ground using a blender and passed through a 60 mesh sieve [21].
Trichoderma Rejuvenation on PDA media.
PDA (1.95 grams) was dissolved in 50 mL of distilled water, then heated and homogenized. Approximately 5 ml of the PDA solution was poured into each petri dish and sterilized in an autoclave for 15 minutes at 121 °C. Mold rejuvenation was done by inoculating Trichoderma viride onto the Potato Dextrose Agar (PDA) and incubating at 32 °C for 6 days [21].
Delignification of the substrate.
The substrates that had been blended and sieved were soaked in 4% NaOH solution at a ratio of 1:10 (substrate powder : 4% NaOH) for 24 hours; the substrates were then washed with distilled water until the pH was neutral. The pH was considered neutral when the pH of the wash water from the substrates equalled the pH of distilled water. The substrate samples were then dried in an oven at 50 °C and stored at room temperature for later use as substrates for Trichoderma viride growth media.
Propagation of Trichoderma viride on PDB media.
PDB (2.4 grams) was dissolved in 100 ml of distilled water and homogenized. The solution was autoclaved for 15 minutes at 121 °C at a pressure of 15 psi (2 atm) and allowed to cool. Trichoderma viride from a petri dish was taken with an inoculation loop and placed into 10 ml of sterile distilled water. The suspension was shaken until turbid and then transferred into Erlenmeyer flasks that already contained 90 ml of PDB. The culture was incubated at room temperature (27-30 °C) on an orbital rotator for 6 days [21].
Acclimatization.
The purpose of acclimatization is to adapt the mold to the nutrient medium so that it survives during the enzyme production process. Acclimatization in this study was carried out in two stages, namely acclimatization 1 and acclimatization 2; the acclimatization design is described below.
Cellulase enzyme production.
Cellulase enzyme production was carried out with 100% Mandels nutritional medium without PDB. The nutritional composition of the Mandels medium was as follows:
Reduction of sugar measurement.
Cellulase activity was quantified using DNS reagent based on the estimated amount of reducing sugar produced from 1% CMC medium. A total of 1 mL of 1% CMC medium was added to 1 ml of crude cellulase enzyme in a tube and incubated at 55 °C for 15 minutes. Then 1 ml of DNS reagent was added to stop the reaction, and the mixture was boiled at 100 °C for 5 minutes. The amount of reducing sugar released was determined with a spectrophotometer at a wavelength of 540 nm [25]. After the glucose standard curve was obtained, the equation of the line, y = ax + b, was used to determine the glucose concentration (x) of a sample from its measured absorbance. Cellulase enzyme activity was calculated from the relative glucose level, as mg of glucose produced by 1 mL of crude cellulase filtrate. One unit of enzyme activity was defined as the amount of enzyme producing 1 μmol of glucose from hydrolysis of the medium by 1 mL of crude cellulase extract during the incubation period; the magnitude of one unit of enzyme activity was calculated using the formula in [26].
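The activity calculation just described can be sketched as follows: glucose is read off the standard curve y = ax + b, converted to µmol, and divided by the incubation time and enzyme volume, so that one unit corresponds to 1 µmol of glucose released per minute. The curve coefficients and the absorbance value below are hypothetical; the 15-minute incubation and 1 mL enzyme volume come from the protocol above.

GLUCOSE_MW = 180.16   # g/mol, used to convert mg/ml of glucose to umol/ml

def glucose_mg_per_ml(absorbance: float, a: float, b: float) -> float:
    """Invert the standard curve y = a*x + b to get glucose concentration (mg/ml)."""
    return (absorbance - b) / a

def cellulase_activity_u_per_ml(glucose_mg_ml: float,
                                incubation_min: float = 15.0,
                                enzyme_ml: float = 1.0) -> float:
    """U/ml = umol of glucose released per minute per ml of crude enzyme."""
    glucose_umol = glucose_mg_ml / GLUCOSE_MW * 1000.0
    return glucose_umol / (incubation_min * enzyme_ml)

g = glucose_mg_per_ml(absorbance=0.52, a=0.8, b=0.05)  # hypothetical curve values
print(cellulase_activity_u_per_ml(g))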
RESULTS AND DISCUSSION
Comparative values of total protein content, reducing sugars, enzyme activities, and enzyme-specific activities can be observed in Table 2. The lowest reducing sugar value, 0.594 mg/ml, was found in the S2T1 treatment (coconut fiber substrate, 1-day incubation time), which also had the lowest enzyme activity, 0.660 U/ml. Enzyme activity was measured by the DNS method based on the amount of glucose (reducing sugar) produced by cellulose hydrolysis. The highest cellulase activity of Trichoderma viride in this study, 1.340 U/ml, was obtained from the S2T4 treatment (coconut fiber substrate, 7-day incubation time); the lowest activity, 0.660 U/ml, was obtained from the S2T1 treatment (coconut fiber substrate, 1-day incubation time).
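The specific-activity values in Table 2 follow from dividing the crude-enzyme activity by the total protein content, as the short sketch below illustrates using the S2T3 values quoted in the text (the small difference from the reported figure presumably reflects rounding of the inputs).

def specific_activity(activity_u_ml: float, protein_mg_ml: float) -> float:
    """Specific activity (U/mg) = enzyme activity (U/ml) / protein content (mg/ml)."""
    return activity_u_ml / protein_mg_ml

# S2T3 (coconut fiber, 5-day incubation): 1.270 U/ml and 0.354 mg/ml protein.
print(round(specific_activity(1.270, 0.354), 3))  # ~3.588 (reported as 3.616 U/mg)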
Enzyme specific activity was obtained by dividing the enzyme activity by the total protein content of each treatment sample. The data showed that the S4T2 treatment (teak leaf substrate, 3-day incubation time) displayed the highest enzyme-specific activity, 56.622 U/mg, with an enzyme activity of 0.978 U/mL and a total protein content of 0.021 mg/ml. The lowest cellulase-specific activity, 3.616 U/mg, was obtained from the S2T3 treatment (coconut fiber, 5-day incubation time), with an average enzyme activity of 1.270 U/ml and a total protein content of 0.354 mg/ml. Based on Figure 1, the protein content of the cellulase samples fluctuated across substrate types and incubation times. This condition is related to the molds' need for a carbon source to survive: when available carbon decreases, the mold responds by synthesizing cellulase to break down the cellulose in its environment into glucose, whereas when the carbon requirement is met, the mold does not synthesize cellulase.
Because cellulase is a protein, the protein content rises when the mold is actively synthesizing cellulase and falls when it is not. The fluctuating protein content tends to decrease at the beginning of fermentation, then rise until the 5th day and fall on the 6th [11]. The crude cellulase preparation contains many non-enzyme proteins; protein values that are too high or too low are assumed to arise because the protein in the crude cellulase extract is a mixture of enzyme and non-enzyme protein [27]. The increase in protein parallels mold growth because the fungal body contains nitrogen-bearing components [28]; nitrogen is a constituent of cell proteins and nucleic acids [27]. Furthermore, fungal cell walls contain 6.3% protein, while the membranes of hyphal fungi contain 25-45% protein and 25-30% carbohydrates [28]. The protein released therefore also depends on the metabolism of the mold itself in excreting the enzyme, which is a protein.
The hydrolysis of cellulose by cellulase produces reducing sugars in the form of glucose. Reducing sugar levels were measured by the dinitrosalicylic acid (DNS) method at a wavelength of 540 nm, based on the amount of reducing sugar produced by cellulose hydrolysis. DNS reagent is commonly used for measuring the reducing sugars of crude enzymes because of its high accuracy.
Based on the figure above, it can be seen that reducing sugar levels increased with incubation time. The incubation time affects the resulting reducing sugar levels: because the amount of substrate at the beginning of hydrolysis is still large, a longer incubation (hydrolysis) time yields more reducing sugar, and sugar as a nutrition source is still widely available, allowing reducing sugar levels to rise for a certain time [29]. The incubation time gives Trichoderma viride the opportunity to multiply, so the number of cells produced increases. The increase in reducing sugars also shows that the activity of Trichoderma viride in hydrolyzing cellulose into glucose and cellobiose increases. Cellulose in agricultural waste is the main substrate needed as a carbon source for energy and is degraded to synthesize metabolite products in the form of glucose. The analysis of variance showed a significant effect (p ≤ 0.05) of incubation time on reducing sugar levels but no significant effect of substrate type. The coconut fiber substrate with 7 days of incubation gave the highest concentration, presumably because coconut fiber has the highest cellulose content of the four substrate types (compared with peanut shells, bran, and teak leaves); with a higher cellulose content, more substrate can be hydrolyzed by cellulase into monomers, so glucose levels increase. Cellulase activity was tested on a CMC (carboxymethyl cellulose) substrate using DNS reagent (3,5-dinitrosalicylic acid), observed through the amount of glucose formed. Cellulase is a group of enzymes that work synergistically to break down cellulose into glucose by hydrolyzing the β(1,4) bonds in cellulose. Cellulase activity is measured using CMC because CMC is a cellulose-derived compound [10] with an amorphous cellulose structure, so when the cellulase enzyme is given an appropriate (cellulose) substrate, a hydrolysis reaction producing glucose occurs. Active cellulases work in the amorphous region of cellulose and produce cellooligosaccharides [30]. Cellulase activity increases with the length of the cellulose chain to be hydrolyzed.
The growth (log) phase is the best point at which to determine the optimal inoculation time of a culture. In relation to Figure 2, the 7-day incubation time is the optimal time for cellulase production by Trichoderma viride: the longer the incubation, the more cellulose is hydrolyzed into glucose by the cellulase the fungus produces. From these explanations it can be concluded that the higher the glucose produced, the higher the enzyme activity. Cellulase activity increases up to an optimal incubation time; the rising number and activity of enzymes causes more of the β-1,4-glycosidic bonds that form cellulose to be broken down into oligosaccharides and eventually into glucose monomers, so the cellulose level in the fermentation medium decreases [32]. Based on the analysis of variance, substrate type did not significantly affect cellulase activity. Nevertheless, activity showed high values on substrates with high cellulose content and lower values on substrates containing less cellulose. On average, agricultural waste contains 28-47% cellulose and 10-30% lignin. Lignin bound to cellulose can interfere with the hydrolysis process because cellulase acts only on the cellulose substrate; if lignin is still bound to cellulose, the enzyme's active site cannot engage it and the cellulose is difficult to hydrolyze.
Activity was highest on coconut fiber because coconut fiber has the highest cellulose content compared with peanut shells, rice bran, and teak leaves: cellulose in coconut fiber is 47.7%, with the other components including 29.9% hemicellulose, 17.8% lignin, and 0.8% ash [33]. Specific activity indicates the degree of purity of an enzyme preparation [34] and is always related to the enzyme's protein level. The relationship between enzyme protein levels and enzyme activity can be seen in Figure 4.
Viewed on the graph, protein content tracks enzyme activity: when enzyme activity is low, protein content is also low, and when enzyme activity is high, protein content is also high. However, at any given incubation time the protein content differs from one substrate to another. For example, the S1T3 treatment (peanut shells, 5 days of incubation) showed lower protein levels than the S2T3 treatment (coconut fiber, 5 days of incubation). The high protein released on coconut fiber indicates the presence of proteins other than cellulase, which may include other cell-wall-hydrolyzing enzymes [35]; consequently, the protein content on coconut fiber is higher than on the other substrates. The amount of protein released is a function of the complexity of the carbon source: the more complex the carbon source, the greater the amount of protein produced. The specific activity of cellulase is related to protein because it is obtained by dividing enzyme activity by protein content [35]. The following graph shows the relationship between protein level, enzyme activity, and specific activity: the lower the protein content, the higher the specific activity, and conversely, as protein content increases, specific activity decreases. This is supported by another study, which reported that a protein content of 0.3319 gave a specific activity of 1.4462, a protein content of 0.2883 gave a specific activity of 1.8352, and a protein content of 0.2778 gave a specific activity of 2.77812 [36]. In general, enzyme-specific activity testing is carried out on purified enzyme samples. Purification decreases protein levels; the decrease indicates that proteins other than the enzyme of interest have been removed, which increases enzyme activity because the enzyme can work without interference from impurities [37].
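To make the relation just stated concrete, here is a tiny sketch computing specific activity as enzyme activity divided by protein content; the three protein/specific-activity pairs are those quoted from [36], and the helper name is ours, not from the source.

```python
def specific_activity(activity_u_per_ml: float, protein_mg_per_ml: float) -> float:
    """Specific activity (U/mg) = enzyme activity (U/ml) / protein content (mg/ml)."""
    return activity_u_per_ml / protein_mg_per_ml

# (protein content, reported specific activity) pairs quoted from [36];
# multiplying back gives the implied enzyme activity of each sample.
for protein, spec in [(0.3319, 1.4462), (0.2883, 1.8352), (0.2778, 2.77812)]:
    activity = spec * protein
    print(f"protein={protein} mg/ml -> activity={activity:.4f} U/ml, "
          f"specific activity={specific_activity(activity, protein):.4f} U/mg")
```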
CONCLUSION
Substrate type affected crude cellulase enzyme activity. The highest cellulase activity of Trichoderma viride in this study, 1.340 U/ml, was obtained from the S2T4 treatment (coconut fiber substrate, 7 days of incubation), and the lowest, 0.660 U/ml, from the S2T1 treatment (coconut fiber substrate, 1 day of incubation). The highest reducing sugar level, 1.207 mg/ml, was also found in the S2T4 treatment, and the lowest, 0.594 mg/ml, in the S2T1 treatment. The highest protein content, 0.354 mg/ml, was shown by the S2T3 treatment (coconut fiber substrate, 5 days of incubation), whereas the S4T2 treatment (teak leaf substrate, 3 days of incubation) showed the lowest protein content, 0.021 mg/ml. | 2021-08-27T17:18:27.812Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "16151916653d3c52439624f56525f2dbc1a10a93",
"oa_license": "CCBYNC",
"oa_url": "https://www.atlantis-press.com/article/125959974.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "9935f47c7cade0df37144e526767ed1f6d122024",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology"
]
} |
259195652 | pes2o/s2orc | v3-fos-license | Pollination, pollen tube growth, and fertilization independently contribute to fruit set and development in tomato
In flowering plants, pollination, pollen tube growth, and fertilization are regarded as the first hierarchical processes of producing offspring. However, their independent contributions to fruit set and development remain unclear. In this study, we examined the effect of three different types of pollen, intact pollen (IP), soft X-ray-treated pollen (XP) and dead pollen (DP), on pollen tube growth, fruit development and gene expression in “Micro-Tom” tomato. Normal germination and pollen tube growth were observed in flowers pollinated with IP; pollen tubes started to penetrate the ovary at 9 h after pollination, and full penetration was achieved after 24 h (IP24h), resulting in ~94% fruit set. At earlier time points (3 and 6 h after pollination; IP3h and IP6h, respectively), pollen tubes were still in the style, and no fruit set was observed. Flowers pollinated with XP followed by style removal after 24 h (XP24h) also demonstrated regular pollen tubes and produced parthenocarpic fruits with ~78% fruit set. As expected, DP could not germinate and failed to activate fruit formation. Histological analysis of the ovary at 2 days after anthesis (DAA) revealed that IP and XP comparably increased cell layers and cell size; however, mature fruits derived from XP were significantly smaller than those derived from IP. Furthermore, there was a high correlation between seed number and fruit size in fruit derived from IP, illustrating the crucial role of fertilization in the latter stages of fruit development. RNA-Seq analysis was carried out in ovaries derived from IP6h, IP24h, XP24h and DP24h in comparison with emasculated and unpollinated ovaries (E) at 2 DAA. The results revealed that 65 genes were differentially expressed (DE) in IP6h ovaries; these genes were closely associated with cell cycle dormancy release pathways. Conversely, 5062 and 4383 DE genes were obtained in IP24h and XP24h ovaries, respectively; top enriched terms were mostly associated with cell division and expansion in addition to the ‘plant hormone signal transduction’ pathway. These findings indicate that full penetration of pollen tubes can initiate fruit set and development independently of fertilization, most likely by activating the expression of genes regulating cell division and expansion.
Introduction
Tomato (Solanum lycopersicum L.) is both an economically important crop in the world and a model plant for fruit science and production (Ezura, 2009). Fruit initiation and development from tomato flowers can be divided into four distinct phases (Ariizumi et al., 2013; Quinet et al., 2019), viz. fruit set (phase I), cell division (phase II), cell expansion (phase III), and fruit ripening (phase IV), all of which require the coregulation of genetic and hormonal elements via complicated pathways (Molesini et al., 2020; Fenn and Giovannoni, 2021). Both pollination and fertilization are believed to be prerequisites for fruit set and development (Quinet et al., 2019), but seedless fruits can be produced independently of fertilization, as is the case with the parthenocarpy phenomenon, which can be achieved either by exogenous hormone treatments or by genetic mutation approaches. Parthenocarpy is a highly desirable agronomic trait, as fruit formation is less affected by environmental factors (Molesini et al., 2020). Thus far, several parthenocarpic mutations, such as pat-2, iaa9-3, and pad-1, display high seedless fruit set ratios, and hence they are considered potential genetic materials for the breeding of seedless tomato fruit cultivars (Takisawa et al., 2019; Matsuo et al., 2020; Takisawa et al., 2020; Tran et al., 2021).
As in other flowering plants, pollination in tomato occurs on the stigma surface as the first step in the reproduction process. Pollination is then followed by germination of pollen grains to form unique structures known as pollen tubes. The pollen tubes provide a link between pollination and fertilization, as they act as vehicles to deliver the sperm cells in pollen grains to the egg cells in the ovules, which are located in the ovary. Interestingly, soft X-ray-irradiated pollen containing inactivated sperm cells produced standard pollen tubes which penetrated ovules and eventually resulted in parthenocarpic fruit development in watermelons (Hu et al., 2019). The soft X-ray-irradiated pollen induced both auxin signalling and the accumulation of various hormones including gibberellins, cytokinins, and auxins, and the resultant parthenocarpic watermelons were comparable in size to normal seeded fruits (Hu et al., 2019). Other reports have also demonstrated a swift activation of ethylene biosynthesis and perception during pollen tube growth in multiple plant species (Holden et al., 2003; Jia et al., 2018; Althiab-Almasaud et al., 2021). These data suggest potential roles of pollen tubes in the regulation of hormonal synthesis and signalling to induce fruit set and development even in the absence of fertilization. In tomato, however, soft X-ray irradiation applied directly to dried pollen strongly impaired pollen germination and led to the production of tiny parthenocarpic fruits (Nishiyama and Tsukuda, 1961; Uematsu and Nishiyama, 1967). No further research on this topic has been conducted since then, and hence the potential use of X-ray-irradiated pollen to produce parthenocarpic fruits in tomato remains unexplored. Furthermore, the role of pollen tubes in fruit initiation and development at the molecular level in tomato is still unknown.
In this study, we used dead, intact, and soft X-ray-treated pollen to explore the independent effects of pollination, pollen tube growth, and fertilization on fruit initiation and development in tomato. Partial pollen tube growth in the styles triggered the expression of various genes which are associated with the release of cell cycle dormancy, but these changes did not adequately initiate fruit set. However, full penetration of the pollen tubes into the ovary activated genes associated with cell expansion and division most likely through many hormonal pathways independently of fertilization and eventually initiated fruit set and development. In addition, we show that fertilization could contribute to the latter stages of fruit development by activating the expression of a distinct set of cell expansion genes. Altogether, these findings suggest that pollen tube penetration into the ovary can sufficiently trigger normal fruit set and development regardless of fertilization, a physiological function of pollen tubes that has not been established previously in tomato.
Plant material and growth conditions
Seeds of S. lycopersicum cultivar "Micro-Tom", both wild type (ID: TOMJPF00001) and an EMS parthenocarpic mutant iaa9-3 (ID: TOMJPE2811-1), were supplied by the National Bioresource Project archived in the TOMATOMA database. The plants were grown in rockwool blocks in a growth room set at 25°C under photosynthetically active light (75-110 µmol/m²/s) for 16 h and 20°C in the dark for 8 h.
Pollen preparation
Fresh anther cones were collected at the anthesis stage and then used to prepare three types of pollen, that is, dead pollen (DP), soft X-ray-treated pollen (XP), and intact/normal pollen (IP). To prepare DP, the anther cones were dried at 100°C for 2 h. IP were prepared by drying the anther cones at 35°C for 6 h. Finally, to make XP, fresh anther cones were first subjected to a soft X-ray irradiation (Model: OM-303M, OMIC Corporation, Japan) of 1000 Gy for 72.15 minutes followed by drying at 35°C for 6 h. The protocol for soft X-ray irradiation was derived from our work (unpublished data) at the University of Tsukuba.
Pollination and treatments
Flowers were emasculated one day before anthesis (-1 DAA) to avoid self-pollination. The emasculated flowers were then pollinated on the next day (0 DAA) using the pollen types described in Section 2.2. Styles were removed from IP-pollinated flowers 3 h, 6 h, 9 h, 12 h, and 24 h after pollination (denoted as IP3h, IP6h, IP9h, IP12h, and IP24h, respectively). For XP- and DP-pollinated flowers, styles were removed 24 h after pollination (denoted as XP24h and DP24h, respectively). In all cases (IP3h, IP6h, IP9h, IP12h, IP24h, XP24h and DP24h), the ovaries that remained after style removal were left on the plants (10 plants per treatment) for further histological and RNA-Seq analyses as well as assessment of fruit set ratios. The ovaries were collected at 2 DAA (for RNA-Seq analysis) and from 2-10 DAA (for histological analysis). Ovaries from emasculated but unpollinated flowers (denoted as E), with style removal 24 h after anthesis, were also collected at the same timepoints to serve as control treatments.
2.4 In vivo pollen tube growth assays and aniline blue staining

Pistils were collected 3 h, 6 h, 9 h, 12 h, and 24 h after pollination with IP, and 24 h after pollination with XP and DP. The pistils were then fixed in a 3:1 ethanol (100%):acetic acid (100%) solution for 12 h, washed in 70% ethanol and finally softened in 5 N NaOH for 12 h. The softened pistils were washed five times in distilled water and stained overnight in the dark with 0.01% aniline blue solution in K₃PO₄ buffer. The stained pistils were then mounted in 100% glycerol on a slide and observed under a UV microscope (Olympus BX50, Olympus Life Science Company, Japan). At least three pistils were observed for each of the seven treatments.
Determination of fruit set ratios
Fruit set ratios were evaluated using five different plants and a total of 21-30 flowers for each style removal (IP3h, IP6h, IP9h, IP12h, IP24h, XP24h and DP24h) treatment. The ratios were expressed as the percentage of ovaries which remained on the flowers and showed an increase in size at 10 DAA.
Histological analysis and fruit growth measurements
Histological analyses were carried out using Technovit 7100 (Kulzer Technik, Germany) according to the protocol of Yeung and Chan (2015) with slight modifications. Briefly, ovaries collected at 0, 2, 4, and 10 DAA were fixed as described in Section 2.4 and then dehydrated by passing them sequentially through graded ethanol (70% for 2 h → 80% for 2 h → 90% for 2 h → 100% overnight). This was followed by three infiltration steps in ethanol:Technovit solutions at different ratios (2:1 for 2 h → 1:1 for 2 h → 1:2 for 2 h) and overnight immersion in pure Technovit. The ovaries were then embedded in a mixture of Technovit and Hardener II (15:1 v/v) and allowed to polymerize at room temperature for 12 h before sectioning at a thickness of 5 µm using a rotary microtome (Leica RM2235, Leica Biosystems Ltd., China). Finally, the sections were mounted in water on a glass slide, dried at 40°C, and stained with toluidine blue (Sigma-Aldrich T3260, Merck, USA). A drop of Entellan New (Sigma-Aldrich 107961, Merck, USA) was added to the slide before observation under a light microscope (Olympus BX50). At least three ovaries were observed for each style removal treatment. To estimate fruit growth, cell layer and cell volume measurements were conducted on the microscope images using ImageJ software.
RNA extraction, library construction and RNA sequencing
Samples for RNA-Seq analysis included emasculated but unpollinated ovaries (E) as a control (absence of pollination, pollen tube growth and fertilization), and ovaries from the IP6h, IP24h, DP24h, and XP24h style removal treatments. These samples were selected based on the pollen tube growth observations: IP6h represented partial pollen tube growth (no penetration into the ovaries and hence no fertilization), IP24h represented full pollen tube penetration into the ovaries with fertilization, XP24h represented full pollen tube penetration without fertilization, while DP24h represented pollination only (no pollen tube growth and no fertilization). All samples were collected 48 h after pollination (2 DAA), and each sample contained 5 ovaries with three replications. Total RNA was extracted from the ovary samples using the RNeasy® Plant Mini Kit (Qiagen, Hilden, Germany) according to the manufacturer's instructions. Treatment with DNase I (Nippon Gene, Tokyo, Japan) was carried out to remove genomic DNA contamination. mRNA was then purified from the total RNA using a poly(A) mRNA magnetic isolation module kit (New England BioLabs). The purified mRNA was used to construct paired-end libraries for Illumina using the Ultra™ II directional RNA library prep kit (New England BioLabs), and sequencing was performed on an Illumina NovaSeq 6000 platform (Illumina, Inc.).
RNA-seq data analysis
RNA-Seq data were analysed on the Galaxy main server. The raw RNA sequence reads were quality-checked using the FastQC tool and then trimmed with the Trimmomatic tool (Bolger et al., 2014). The resulting high-quality reads were mapped to the tomato reference genome (v. SL4.0) using the Hisat2 tool with default parameters (Kim et al., 2019). Mapped reads were counted using the featureCounts tool (Liao et al., 2014). The read counts obtained were normalized to gene expression levels as transcripts per kilobase million (TPM). Gene Ontology (GO) enrichment and Kyoto Encyclopedia of Genes and Genomes (KEGG) analyses were conducted with the web applications iDEP (v. 0.93) and ShinyGO (v. 0.77) (Ge et al., 2018).
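TPM normalization is only named in passing above, so a minimal sketch of the computation follows; the counts and gene lengths are hypothetical, not output from this study's featureCounts run.

```python
import numpy as np

def tpm(counts: np.ndarray, lengths_bp: np.ndarray) -> np.ndarray:
    """Transcripts per kilobase million; counts is (genes x samples)."""
    rpk = counts / (lengths_bp[:, None] / 1_000)  # reads per kilobase of transcript
    return rpk / rpk.sum(axis=0) * 1_000_000      # scale each sample (column) to 1e6

# Hypothetical featureCounts output: 4 genes x 2 samples, with gene lengths in bp.
counts = np.array([[120, 95], [3400, 2800], [0, 12], [560, 640]], dtype=float)
lengths = np.array([1500, 3200, 800, 2100], dtype=float)
print(tpm(counts, lengths).round(1))
```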
Statistical analysis
Data obtained in this study are presented as average values ± SE. Mean comparisons were tested by Tukey's HSD (honestly significant difference) test at P < 0.05. All statistical analyses were performed using Statistical Tool for Agricultural Research (STAR), version 2.0.1 (Gulles et al., 2014).
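The analyses here were run in STAR, but the same Tukey HSD comparison can be sketched in Python with statsmodels; the fruit-weight values and group labels below are invented for illustration and are not the study's data.

```python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical fruit weights (g) for three treatments, three replicates each.
weights = np.array([3.6, 3.7, 3.5, 2.4, 2.3, 2.5, 1.8, 1.7, 1.9])
groups = ["IP24h"] * 3 + ["XP24h"] * 3 + ["IP9h"] * 3

# Tukey's honestly-significant-difference test at alpha = 0.05,
# mirroring the mean comparisons reported in this paper.
print(pairwise_tukeyhsd(weights, groups, alpha=0.05))
```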
Effect of pollen tube growth on fruit setting in tomato
In vivo pollen tube growth assays, following pollination of tomato flowers with IP, XP and DP, were crucial in this study to determine the independent effects of pollination, pollen tubes, and fertilization on fruit setting. For IP-pollinated flowers, pollen tubes elongated to approximately one-third and three-quarters of the styles after 3 h and 6 h, respectively (Figure 1A), and the earliest penetration of the pollen tubes into the ovaries, indicated by a limited signal below the style baseline, was observed 9 h after pollination. Full penetration of IP-generated pollen tubes into the ovaries occurred 12 h and 24 h after pollination, as the signal strengths inside the ovaries were comparable. Likewise, pollen tubes in XP-pollinated flowers elongated and fully penetrated the ovaries 24 h after pollination (Figure 1A). As expected, DP failed to germinate, and hence pollen tubes could not be observed in the styles of DP-pollinated flowers even after 24 h. Therefore, at the specified times of style removal, pollen tubes were still in the styles for IP3h and IP6h, but there was partial penetration into the ovaries for IP9h. For IP12h, IP24h and XP24h, full penetration of the pollen tubes into the ovaries had already taken place.
Figure 1. Pollen tube growth and its effect on fruit setting in tomato. (A) Images showing pollen tubes in the pistils. Pollen tubes were stained with aniline blue and appear green under UV microscopy at a magnification of 40X (n = 6). Lower panels in the IP9h, IP12h, IP24h and XP24h columns indicate pollen tube signals inside the ovary. (B) Fruit set ratios of ovaries of the specified pollen type and style removal treatments. Values are percentages of fruit numbers on the examined flowers (21-30 flowers per treatment) at 10 DAA. Different letters indicate statistical difference using Tukey's HSD (honestly significant difference) test at P < 0.05. White bars in (A) images = 50 µm. IP3h – pollinated with intact pollen and styles removed 3 h later; IP6h – pollinated with intact pollen and styles removed 6 h later; IP9h – pollinated with intact pollen and styles removed 9 h later; IP12h – pollinated with intact pollen and styles removed 12 h later; IP24h – pollinated with intact pollen and styles removed 24 h later; XP24h – pollinated with X-ray-irradiated pollen and styles removed 24 h later; DP24h – pollinated with dead pollen and styles removed 24 h later.

Ovaries that were left on the plants after style removal treatments were then assessed at 10 DAA for their ability to develop into fruits. Results indicated that IP3h and IP6h ovaries failed to initiate fruit set, similar to DP24h (Figure 1B). On the other hand, IP9h ovaries displayed a considerable fruit set ratio (70.2%), although it was significantly lower than that of IP12h (84.4%) and IP24h (93.5%) ovaries. Interestingly, XP24h ovaries also showed a noticeably high fruit set ratio (78.3%), albeit significantly lower than that of IP24h ovaries. Together, these findings indicate that pollination alone (represented by DP24h) or partial pollen tube growth (represented by IP3h, IP6h and, to some extent, IP9h) was not sufficient to initiate fruit setting. Contrarily, full penetration of pollen tubes into the ovaries, even in the absence of fertilization (as shown in XP24h), can sufficiently trigger fruit set.
Full penetration of pollen tubes results in normal fruit development
After the fruit set assessments, we asked whether full penetration of pollen tubes into the ovaries, in the absence of fertilization, can also lead to normal cell division and expansion. Histological analyses showed that both the number of cell layers and the cell sizes of pericarp tissues of XP24h ovaries were higher at 2 DAA compared to the initial 0 DAA ovaries (Figures 2A-C). It is worth noting that both the number of cell layers and the area of pericarp cells in XP24h ovaries at 2 DAA were not statistically different from those of IP12h and IP24h ovaries at the same timepoint (Figures 2B, C). By contrast, IP9h ovaries, which had only partial pollen tube penetration (Figure 1A), showed only larger cells at 2 DAA, with no significant increase in cell layer numbers. Ovaries of IP6h, DP24h and the control E showed insignificant changes in both cell area and the number of cell layers, and the cell sizes were similar to those of ovaries at 0 DAA.
Pericarp cell measurements were also conducted on the young fruits (at 4 and 10 DAA) that developed from XP24h and IP24h ovaries. As shown in Figure 2D, young fruits of XP24h and IP24h showed a comparable number of cell layers both at 4 and 10 DAA. However, IP24h fruits showed a consistently larger pericarp cell area than XP24h fruits at the evaluated timepoints. It is worth noting that from 0 DAA to 2 DAA, there were no significant differences between IP24h and XP24h with regard to the increase in either cell layer numbers (26%) or cell area (30%) (Figures 2A-C). At 4 DAA, however, the increase in cell area relative to control E ovaries at 2 DAA was remarkably higher (210%) than that of the number of cell layers (46%). Furthermore, there was only a slight increase in cell layer numbers between 4 DAA and 10 DAA (11%), but a remarkably sharp increase in cell area (830%) was recorded. Altogether, these findings suggested that during the early stages of fruit development (0-4 DAA), cell division (indicated by cell layer numbers) makes a greater contribution than cell expansion (indicated by cell area). By contrast, cell expansion makes a greater contribution than cell division during later fruit developmental stages (from 4 DAA).
To examine the impact of fertilization on late fruit developmental stages, fruit size and seed numbers in ripe fruits derived from the style removal treatments that successfully set fruits (IP9h, IP12h, IP24h and XP24h) were also determined (Figure 3). The largest fruit weight was registered in ripe fruits obtained from IP12h and IP24h (3.69 g and 3.61 g, respectively) (Figure 3A). Fruits that developed from IP9h ovaries displayed the smallest average weight (1.77 g), while XP24h fruits had a slightly higher average weight (2.38 g), albeit significantly smaller than IP12h and IP24h fruits. Both the diameter and height of IP12h and IP24h fruits were larger than those of XP24h (Figure 3D), which correlated well with the fruit weight data. These results suggested that the degree of fertilization might have a positive impact on the final size of fruits at maturity. Specifically, IP12h and IP24h ovaries underwent full penetration of pollen tubes (and hence, presumably, complete fertilization), which likely led to the development of normal-sized fruits. However, IP9h ovaries underwent only partial pollen tube penetration, and hence partial fertilization, which might account for the small fruit sizes. Indeed, there was no significant difference in the number of seeds per fruit between IP12h and IP24h, but IP9h fruits had a noticeably smaller number of seeds (Figure 3B). Likewise, XP24h ovaries did not undergo fertilization even though full penetration of pollen tubes from the sterile X-ray-irradiated pollen occurred, likely contributing to smaller fruit sizes than IP12h and IP24h. It is also worth noting that XP24h fruits did not produce regular seeds; instead, they produced many tiny seed-like structures (Figure 3D), which failed to germinate (data not shown). This finding therefore confirmed that the X-ray-treated pollen used in the present study had indeed lost its fertilization function. Finally, combining the fruit weight and seed number data revealed a strong positive correlation (R² = 0.85) between these two phenotypes (Figure 3C).
Transcriptome sequencing analysis
After the phenotypical analyses, RNA-Seq was performed to identify transcriptomic changes in ovaries at 2 DAA triggered by the different pollen types and style removal times. The 500 most variable genes were used to construct a heatmap showing the overall expression patterns based on the log₁₀-transformed TPM values. As shown in Figure 4A, the expression patterns in DP24h and IP6h ovaries were similar to those in the control (E) ovaries, while XP24h and IP24h ovaries showed highly similar patterns. Indeed, Pearson's correlation coefficient analysis, based on the top 75% of variable genes, confirmed that there were highly positive correlations between XP24h and IP24h samples (r = 0.99), while there were high correlations (r = 0.98-0.99) among E, DP24h, and IP6h samples (Figure 4B). In addition, principal component analysis (PCA) was conducted, explaining 78% of the variation in the datasets (PC1 = 74% and PC2 = 4%); the XP24h and IP24h samples were again grouped together but clearly separated from E, DP24h, and IP6h (Figure 4C).
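As a hedged sketch of the sample-level diagnostics just described (log-transformed expression, between-sample Pearson correlations, and PCA), assuming a TPM matrix with genes as rows; the matrix below is randomly generated, not this study's data.

```python
import numpy as np

rng = np.random.default_rng(0)
tpm = rng.gamma(2.0, 50.0, size=(500, 15))  # hypothetical: 500 genes x 15 samples
log_expr = np.log10(tpm + 1.0)              # log10 transform, as for the heatmap

# Pearson correlation between samples (columns as variables).
corr = np.corrcoef(log_expr, rowvar=False)
print("sample correlations:", corr.min().round(2), "to", corr.max().round(2))

# PCA via SVD on the mean-centred samples x genes matrix.
X = log_expr.T - log_expr.T.mean(axis=0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)
explained = (S ** 2 / (S ** 2).sum())[:2]
print("variance explained by PC1, PC2:", explained.round(2))
```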
Differential gene expression analysis
The DEGs with a minimal fold change of 2 and FDR < 0.01 were considered significant and selected for enrichment analysis. The numbers of DEGs identified in each comparison against the control are illustrated in Figures 5B, D. By contrast, there were no DEGs between DP24h and E samples or between XP24h and IP24h (Figure 5D).
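A minimal sketch of this thresholding, assuming a differential-expression results table with log2 fold changes and FDR-adjusted p-values; the table contents and column names are hypothetical (note that a minimal fold change of 2 corresponds to |log2FC| >= 1).

```python
import pandas as pd

# Hypothetical DE results table (gene IDs in the SolGenomics style).
de = pd.DataFrame({
    "gene": ["Solyc10g080950", "Solyc02g088100", "Solyc09g089580"],
    "log2FC": [2.8, 1.9, -2.3],
    "FDR": [1e-6, 0.03, 4e-4],
})

# Keep genes passing a minimal fold change of 2 (|log2FC| >= 1) and FDR < 0.01.
sig = de[(de["log2FC"].abs() >= 1) & (de["FDR"] < 0.01)]
up = sig[sig["log2FC"] > 0]
down = sig[sig["log2FC"] < 0]
print(len(up), "up-regulated;", len(down), "down-regulated")
```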
Venn diagrams were also constructed to visualize the commonly or uniquely up- or down-regulated DEGs under the different pollen treatments. This analysis revealed that XP24h and IP24h samples shared the majority of up-regulated DEGs (1336 genes), while there were 11, 65, and 178 uniquely up-regulated genes in IP6h, XP24h, and IP24h samples, respectively (Figure 5E). Among the down-regulated DEGs, 2800 genes were common between XP24h and IP24h samples (Figure 5F). We found that 53, 181, and 747 genes were uniquely down-regulated in IP6h, XP24h and IP24h samples, but only 1 gene was commonly down-regulated in all three samples (Figure 5F).

Figure 2. Histological analysis of ovary pericarp tissues after pollination with three different pollen types and style removal treatments. Ovaries were collected at 0 DAA and 2 DAA (A-C), and at 4 DAA and 10 DAA (D). Samples were stained with toluidine blue after carrying out the Technovit 7100 (Kulzer Technik, Germany) procedure, and observed under a light microscope at a magnification of 100X. Data in (B, D) are means ± SE. Different letters in (B, C) indicate statistical difference using Tukey's HSD test at P < 0.05. Significant differences in (D) bar charts were analysed using Student's t-test (**P < 0.01; ***P < 0.001). White bars = 100 µm in (A, D) (4 DAA) and 500 µm in (D) (10 DAA). IP6h – pollinated with intact pollen and styles removed 6 h later; IP9h – pollinated with intact pollen and styles removed 9 h later; IP12h – pollinated with intact pollen and styles removed 12 h later; IP24h – pollinated with intact pollen and styles removed 24 h later; XP24h – pollinated with X-ray-irradiated pollen and styles removed 24 h later; DP24h – pollinated with dead pollen and styles removed 24 h later; E – emasculated but unpollinated control.
Interestingly, the term enrichment analysis results for IP24h ovaries were highly similar to those of the XP24h ovaries (Figure S1B; Table S1). In contrast, the GO terms and KEGG pathways for the 65 DEGs (11 up-regulated and 54 down-regulated) identified in IP6h ovaries were different. Notable GO terms and KEGG pathways in IP6h ovaries were 'deoxyribonucleotide biosynthesis process', 'nucleosome assembly', 'chromatin remodelling', 'nucleosome', 'DNA packing complex', 'asparagine synthase (glutamine-hydrolysing) activity', 'purine metabolism' and 'pyrimidine metabolism' (Figure S1A; Table S1). These findings further supported the notion that partial pollen tube growth might have limited effects on the remodelling of genetic material. The changes induced might be essential preparations for cell division, but they are likely not sufficient to initiate fruit set in tomato.
Identification of potential genes/gene families responding to fully elongated pollen tubes
To explain the histological results showing that XP24h led to increased cell layer numbers and cell size at 2 DAA, we examined the expression patterns of some genes that are well known for regulating cell division and expansion at the early stage of tomato fruit development. As illustrated in Figure 7A, thirteen members belonging to three subclasses (A, B, and D) of the cyclin family were significantly upregulated in XP24h and IP24h ovaries; the most highly expressed genes were cyclin B1_2 (Solyc10g080950) and cyclin B2_7 (Solyc03g032190). Likewise, five members of the expansin gene family and one gene encoding an expansin precursor (Solyc02g088100) showed up to 3.6-fold higher expression in XP24h and IP24h ovaries than in IP6h and DP24h ovaries (Figure 7B).
The vital role of phytohormones in regulating tomato fruit initiation and development has been consistently illustrated in previous reports (Fenn and Giovannoni, 2021). In the present study, the KEGG enrichment analysis results also revealed that many DEGs were involved in the 'plant hormone signal transduction' pathway (Figure 6B). We therefore examined the expression patterns of genes involved in the synthesis, transport, and signalling of various hormones (Figure 7C; Table S2). As indicated in Figure 7C, ethylene and auxin seemed to be the most active hormones in XP24h and IP24h ovaries. The expression of 8 genes related to ethylene (3 encoding ethylene receptors and 5 encoding ethylene biosynthesis enzymes) and 12 genes related to auxin (10 involved in auxin response and signalling, and 2 involved in auxin transport) changed significantly. Specifically, there was significant downregulation (log₂FC = -2) of ARF7 and ARF5, which are key transcription factors that regulate fruit set and early fruit development. In addition, XP24h and IP24h ovaries displayed increased expression of two genes that positively regulate gibberellin synthesis (GA20ox1 and GA20ox2), and repression of GA2ox4, a gibberellin catabolism gene that negatively controls gibberellin levels (Figure 7C). Altogether, these results illustrated that complete penetration of pollen tubes (as in XP24h) might broadly affect hormonal responses to activate both cell division and cell expansion events by increasing the expression of cyclin and expansin genes.
The effect of fully elongated pollen tubes on the expression of genes associated with parthenocarpy
In this study, pollination with X-ray-irradiated pollen resulted in a considerably high fruit set ratio (78.3%) of parthenocarpic fruits (Figure 1B), with an average weight approximately 65% that of seeded fruits on the same plant (Figure 3A). These effects are comparable to many previously reported parthenocarpy mutations in tomato (Sharif et al., 2022). We therefore examined the expression patterns of 23 well-known parthenocarpic genes in XP24h ovaries compared to IP6h, IP24h, and DP24h. As a result, we found that 10 of these genes were differentially expressed (log₂FC ≥ 2), particularly in XP24h and IP24h ovaries (Figure 8A). The top variable genes were GA20ox1 (log₂FC = 3.09), NCED1 (log₂FC = -2.75), GA2ox2 (log₂FC = -2.26), AGL6 (pat-k) (log₂FC = -2.17), and ARF7 (log₂FC = -2.02).
Although many genes related to auxin signalling and transport were differentially expressed in XP24h ovaries, IAA9, an early auxin-responsive gene, remained unchanged (Figure 8A). To further examine whether IAA9 is involved in parthenocarpic fruit development from XP24h ovaries, we pollinated the flowers of iaa9-3, a well-known parthenocarpic mutant (Saito et al., 2011), with X-ray-irradiated pollen. Compared to the control (E), X-ray-irradiated pollen slightly increased fruit set by approximately 14% (Figure 8B) but had little effect on cell layer formation (Figure 8C). In addition, the average weight of XP24h fruits (3.82 g) was similar to that of E fruits (Figure 8D), both of which were nearly 75% of the weight of IP24h fruits. These results suggested that full pollen tube penetration might regulate auxin signalling by altering the interaction between the ARF7 and IAA9 proteins.
Discussion
The role of pollen tubes in both the production of seedless fruits and the study of parthenocarpic mechanisms has been well established in watermelons, through pollination tests with soft X-ray-irradiated pollen (Sugiyama and Morishita, 2002; Qu et al., 2016; Hu et al., 2019) or foreign (bottle gourd) pollen (Sugiyama et al., 2014). In either case, penetration of watermelon ovaries by pollen tubes led to reasonably high fruit set, and the resultant parthenocarpic fruits had virtually the same size as control seeded fruits, most likely due to accumulation of various phytohormones including auxins, gibberellins, and cytokinins (Hu et al., 2019). Attempts to replicate these findings in tomato (both S. lycopersicum and S. pimpinellifolium) have been unsuccessful to date, as irradiation of dried pollen with X-rays or gamma-rays severely affected pollen germination, reduced fruit set, and resulted in very tiny parthenocarpic fruits (Nishiyama and Tsukuda, 1961; Uematsu and Nishiyama, 1967). In the present study, X-ray irradiation was applied to fresh anther cones before drying, as opposed to previous studies where it was applied directly to dried pollen.

Figure 8. The potential involvement of well-known parthenocarpic genes in seedless fruit formation in flowers pollinated with X-ray-irradiated pollen. (A) The expression patterns of 21 genes (previously associated with parthenocarpy in tomato) as affected by the different pollen types and style removal timing. (B-D) The effect of pollination with IP or XP and varying style removal timings on fruit initiation and development in iaa9-3, a well-established parthenocarpic "Micro-Tom" mutant line. Different letters in C and D indicate statistical difference using Tukey's HSD test at P < 0.05. IP24h – pollinated with intact pollen and styles removed 24 h later; XP24h – pollinated with X-ray-irradiated pollen and styles removed 24 h later; E – emasculated but unpollinated control.

As a result, we
found that pollen germination was unaffected and that pollen tubes from XP elongated in a similar manner to those from IP, fully penetrating the ovaries 24 h after pollination (Figure 1A). This observation allowed us to further study the distinct function of pollen tubes in tomato fruit formation. It is intriguing that full penetration of the ovaries by pollen tubes emanating from XP triggered comparably high fruit set (Figure 1B), unlike partial pollen tube growth (as in IP6h), which resulted in failed fruit set. Although the parthenocarpic fruits which developed after XP pollination were slightly smaller than the seeded fruits obtained by IP pollination (Figure 3), their average weight was within the acceptable limits of most previously reported parthenocarpic tomato fruits. A possible explanation for the relatively small-sized XP-derived fruits is the lack of fertilization, evidenced by the production of empty seeds (Figure 3E). In strawberry, successful fertilization was recently shown to induce auxin biosynthesis, resulting in normal fruit growth and development (Guo et al., 2022). Increased expression of auxin biosynthetic genes coupled with accumulation of auxins was also reported in watermelon ovaries at 2 DAA following pollination with X-ray-irradiated pollen (Hu et al., 2019), which most likely accounted for the comparable sizes of parthenocarpic and seeded fruits (Sugiyama and Morishita, 2000; Sugiyama and Morishita, 2002). In the present study, auxins were not quantified, but transcriptome analysis revealed that only auxin signalling and transport genes were differentially expressed, while auxin biosynthetic genes remained unchanged in XP-pollinated ovaries at 2 DAA (Figure 7C). Given that the genes associated with cell expansion were upregulated in XP-pollinated ovaries to a similar degree as in IP-pollinated ones (Figure 3E), it is plausible that the lack of fertilization (and hence the absence of auxin accumulation) hinders normal fruit growth and development. The auxin effect most likely targets cell expansion rather than cell division (Figure 2D), ultimately affecting fruit size at maturity. This hypothesis is further supported by recent findings that the auxin content in ovaries of a new parthenocarpic tomato line "R35-P" was twice as much as that of the normal line "R35-N" (Zhang et al., 2021), resulting in similar-sized seeded and seedless fruits. Future studies should explore possible strategies to increase auxin content at early developmental stages for the production of normal-sized parthenocarpic tomatoes.
In the present study, many previously reported parthenocarpic genes were differentially expressed in response to ovary penetration by pollen tubes (Figure 8A), consistent with their respective mutants (Schijlen et al., 2007; De Jong et al., 2009; García-Hurtado et al., 2012; Martínez-Bello et al., 2015; Liu et al., 2018; Takisawa et al., 2018). This finding suggests that the regulatory mechanism by which full pollen tube penetration into the ovary induces seedless fruit development in tomato involves the coordinated action of diverse parthenocarpic genes. However, it was puzzling that the expression of IAA9 did not change (Figure 8A), yet its loss-of-function mutant iaa9-3 was shown to induce parthenocarpy (Saito et al., 2011). Pollination of iaa9-3 flowers with XP slightly increased fruit set but failed to impact fruit development (Figures 8B-D). A possible explanation lies in the downregulation of ARF7 following pollination of wild-type "Micro-Tom" flowers with XP (Figure 8A). Previously, the ARF7/IAA9 complex was shown to regulate fruit initiation in tomato by acting as a transcriptional repressor of auxin and gibberellin biosynthetic genes. Therefore, a decrease in ARF7 transcript levels points towards a weakening of the ARF7/IAA9 complex, which would then result in increased expression of gibberellin biosynthetic genes (Figure 7C), as well as expansins and cyclins (Figures 7A, B), eventually leading to fruit initiation and development.
Both ethylene production and signalling have also been reported to change significantly during pollen tube growth in tomato (Althiab-Almasaud et al., 2021), and increased ethylene content in young tomato ovaries inhibits fruit set, either by promoting pedicel abscission (Roberts et al., 1984) or by inhibiting cell division (Vandenbussche et al., 2007). In the present study, ethylene biosynthetic genes were down-regulated in ovaries at 2 DAA following full penetration by pollen tubes from XP (Figure 7C), a change that, in all likelihood, would avert pedicel abscission and promote cell division, culminating in successful fruit formation.
Based on the results of this study, we propose a revised schematic model illustrating the distinct contributions of pollination, pollen tubes and fertilization to fruit formation in tomato (Figure 9). According to this model, the effect of pollination alone is non-significant. Partial pollen tube growth (or growing pollen tubes) significantly alters several pathways associated with 'nucleosome assembly', 'chromatin assembly', 'DNA packaging complex', and 'protein-DNA complex' to release cell cycle dormancy. However, these changes are limited and cannot sufficiently activate cell division and expansion processes, resulting in failed fruit set. On the other hand, full pollen tube penetration sufficiently initiates fruit set and contributes to the early stages of fruit development (at least up to 4 DAA) by regulating different hormonal pathways, likely through diverse parthenocarpic genes such as ARF7, ARF5, CHS1, CHS2, AGL6, and GA20ox1. Finally, fertilization contributes to fruit development primarily by upregulating auxin synthesis, which then stimulates gibberellin synthesis and accelerates the expression of late-responding cell expansion genes. The effect of fertilization is evident towards the later stages (beyond 4 DAA) of fruit development. In addition, our results suggest that the contribution of cell expansion to fruit development is greater than that of cell division from 4 DAA onwards, which is a much earlier timepoint than in previous models (Ariizumi et al., 2013; Quinet et al., 2019).
Conclusion
Overall, we reported that the complete penetration of pollen tubes into ovaries regulates the expression of essential genes involved in diverse pathways to accelerate both cell division and cell expansion events in tomato ovary tissue. These effects are independent of fertilization, resulting in a high fruit set ratio and parthenocarpic fruit development. These findings contribute to a better understanding of the mechanism by which tomato fruits are set and developed.
Data availability statement
The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found below: DNA Data Bank of Japan (DDBJ) database (DRR461225-DRR461239). | 2023-06-20T13:18:06.755Z | 2023-06-20T00:00:00.000 | {
"year": 2023,
"sha1": "000d5b54be07a3e56be400aa1988dd67cef18199",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "000d5b54be07a3e56be400aa1988dd67cef18199",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Medicine"
]
} |
231862661 | pes2o/s2orc | v3-fos-license | Hybrid BW-EDAS MCDM methodology for optimal industrial robot selection
Industrial robots have different capabilities and specifications according to the required applications. Because many types of robots with different specifications are available in the market, it is becoming difficult to select a suitable robot for a specific application and set of requirements. The best-worst method (BWM) is a useful, highly consistent and reliable method for deriving criteria weights, and it is worth integrating it with the evaluation based on distance from average solution (EDAS) method, which is widely applicable and needs fewer calculations compared to other methods. An example is presented to show the validity and usability of the proposed methodology. The ranking results match those of the well-known distance-based approach (DBA), the technique for order preference by similarity to ideal solution (TOPSIS) and the VIseKriterijumska Optimizacija I Kompromisno Resenje (VIKOR) method, showing the robustness of the best-worst EDAS hybrid method. A sensitivity analysis, performed by assigning eighty per cent of the total weight to one criterion at a time, shows that the proposed hybrid MCDM methodology is stable and reliable.
Introduction
Massive utilization of robots in industries is due to extensive progress in engineering and information technology. Robots have many features, specifications and capabilities that allow them to work more accurately than manpower. A robot is a multipurpose machine that is self-controlled and reprogrammable and can perform various tasks in diverse industrial applications, for example welding, spray painting, loading, finishing, and assembly [1][2][3]. Utilization of robots has enhanced the productivity and profitability of organizations. Factors like operation speed, quality and reliability of the production process are enhanced by the implementation of modern technology in organizations. Further, due to the vast competitive market, it becomes very difficult for companies and organizations to choose the machines/robots that best fit their requirements. The key point in the robot selection process is to identify the attributes relevant to the requirements of the work [4]. These attributes are classified into two categories, beneficial and non-beneficial: beneficial attributes need to be high in value, while non-beneficial attributes should have low values; e.g. cost is non-beneficial and so requires the lowest value, whereas repeatability is beneficial and so requires the highest value. Table 1 presents the acronyms used in this paper together with their descriptions. There are many MCDM methodologies for suitable industrial robot selection. Knott and Getto [5] developed a robot selection methodology considering the uncertainty of prediction time, labour components, the net present value of several alternatives evaluated at the same time reference, and the overheads of alternatives. Dooner [6] simulated robot operations in the workspace, with the workspace aiding robot selection. According to Hinson [7], the work environment is the major factor in the selection of a robot. Huang and Ghandforoush [8] evaluated robots based on budget requirements, investment, etc. Jones et al. [9] pointed out the importance of the marginal value function for the selection of robots. Imany and Schlesinger [10] proposed a linear goal programming model for robot selection and presented a comparative analysis with ordinary least squares methods. A fuzzy method was applied by Wang et al. [11] in their decision support system for robot selection. Boubekri et al. developed an expert system for the selection of industrial robots considering functional, economic and organizational factors. Agrawal et al. [12] proposed a robot selection methodology based on TOPSIS and presented a decision support system to aid inexperienced users; the drawback of the method is that it does not consider the qualitative nature of attributes. [32] proposed an IVIHF entropy method and used FVIKOR techniques to prioritize industrial robots. Ali and Rashid [33] proposed a group BWM for industrial robot selection, weighting decision makers based on their previous experience as judged by an executive, and showed the robustness of the proposed method through a comparative study and by checking minimum violation and total deviation. All these methods have their advantages and disadvantages; most methods, especially fuzzy methods, involve extensive calculation work. Rezaei [34,35] presented BWM in 2015, which is a refined form of the AHP method with more consistency and fewer pairwise comparisons. Rezaei et al. [36,37] applied BWM to link supplier development with supplier segmentation. Gupta and Barua [38] used BWM to rank enablers of technological innovation by identifying them. Gupta et al.
[39] ranked barriers to energy efficiency in buildings using BWM, supporting the development and improvement of energy efficiency measures. Gupta [40] developed a hybrid best-worst VIKOR method to prioritize service quality attributes, evaluate and enhance service quality in the airline industry, and rank the best airline. Ren [41] employed BWM for technology selection for ballast water treatment and determined the technologies' grades. Ren et al. [42] solved an urban sewage sludge treatment problem with the help of BWM. Ali Torabi et al. [43] applied BWM to a business continuity management system as part of an enhanced risk assessment framework. Kaa et al. [44] applied BWM to evaluate the competitive advantage of fuel cell and battery-powered electric vehicles. Rezaei et al. [45] solved a supplier management problem with the help of BWM by proposing a purchasing portfolio matrix hybridized with a supplier potential matrix. The supply chain is a sensitive problem for the production industry that has been addressed by many researchers with the help of BWM, which is a great success of the method [37,[45][46][47][48][49][50][51]. Parkash Garg and Sharma [52] proposed a VIKOR method hybridized with BWM for the selection and evaluation of a sustainable outsourcing partner. Mokhtarzadeh et al. [53] prioritized twenty-three technology options for a high-tech company by finding their weights with the help of BWM (a case study in Iran). Zolfani and Chatterjee [54] presented a comparative study of the BWM and SWARA methods for household furnishing material selection. Serrai et al. [55] evaluated web service selection using BWM and compared it with the results of the VIKOR, simple additive weighting, TOPSIS and complex proportional assessment methods. Uncertain extensions of BWM have been proposed by different researchers [56][57][58][59][60]; these extensions have various advantages but require extensive calculations. Pamucar et al. [61] proposed a new full consistency method (FUCOM) for criteria weight calculation, showing that the method performs better than the BWM and AHP methods with respect to consistency and the number of pairwise comparisons. However, FUCOM requires an initial priority ranking of the criteria from the decision-maker or expert on the basis of their experience or preference, which can make it hard for DMs to state proper preferences; with BWM, the DMs only need to select the most favorable and least favorable criteria and make best-to-others and others-to-worst pairwise comparisons, which is a much easier task. Nonetheless, FUCOM is also being integrated with other MCDM methods for selection problems [62,63].
Ghorabaee et al. [64] proposed EDAS, a multiple criteria decision-making method, in 2015 and used it for the classification of inventory. EDAS is a compensatory method in which the criteria are independent; for evaluation by EDAS, qualitative attributes are converted to quantitative ones, and the decision matrix provides the input information. Using this method, an excavator was selected for a road construction company [65]. Aggarwal et al. [66] applied the EDAS method for the selection of an appropriate smartphone within a particular budget, specifically in the Indian market. Kundakci [67] applied the EDAS method combined with the measuring attractiveness by a categorical based evaluation technique for the selection of a steam boiler for the dyehouse of a textile company. Ecer [68] applied EDAS along with FAHP and the Delphi technique for the group decision selection of the best among four third-party logistics services for a marble company. Stevic et al. [69] applied the AHP and EDAS methods to evaluate and select the best of four scenarios. EDAS is prominent because its solution is obtained from the average solution, which eliminates the risk of expert bias towards an alternative. Simplicity and the need for fewer computations are the most significant characteristics of the EDAS method. BWM has achieved huge success in applications due to its consistency and fewer pairwise comparisons. Similarly, the application area of the EDAS method is very wide due to its simplicity and robustness. It is therefore advantageous to integrate BWM with the EDAS method, as BWM is more applicable and more consistent for weight calculation and the EDAS method provides more stable results at low calculation cost.
In this paper, a hybrid best-worst EDAS method is proposed for industrial robot selection. The motivation of this paper is to provide a simple, reliable, and robust MCDM methodology for industrial robot selection with low calculation cost. To the best of our knowledge, BWM has not previously been integrated with the EDAS method for the robot selection problem. BWM has three advantages for weight calculation: 1) it provides consistent results, 2) it requires fewer pairwise comparisons compared to other MCDM methods, and 3) selecting the best and the worst criteria and comparing them with the other criteria on a 1-to-9 scale is an easy task for DMs. The EDAS method is selected for ranking the robots as it is a new method with a wide application area and requires lower calculation cost compared to other MCDM methods. The ranking results are compared with the TOPSIS, VIKOR and DBA methods. A sensitivity analysis performed with respect to the criteria shows that s₃ and s₄ are sensitive to the assigned weights and are more important for the selection process.
Best-worst method
AHP is a widely applicable and frequently used method but has drawbacks in consistency and in the number of comparisons required [34]. Rezaei remedied these issues by presenting BWM in 2015 [34]. BWM is a pairwise-comparison weight-deriving process that is more consistent, needs fewer pairwise comparisons and is hence more reliable. BWM comprises five steps for calculating the weights of the criteria.
Step 1: The first step involves selecting the set of decision criteria.
Step 2: In this step, the decision makers identify the best (most favorable) criterion and the worst (least favorable) criterion; e.g. load capacity may be the best criterion and vendor's service the worst criterion.
Step 3: In this step, the preference of the best criterion over all the other criteria is determined using a 1 to 9 scale, represented by the best-to-others vector $A_{BO} = (s_{B1}, s_{B2}, \ldots, s_{Bn})$, where $s_{Bj}$ represents the preference of the best criterion $B$ over criterion $j$.
Step 4: The preference of all the other criteria over the worst criterion is determined by the decision makers using a 1 to 9 scale, represented by the others-to-worst vector $A_{OW} = (s_{1W}, s_{2W}, \ldots, s_{nW})$, where $s_{jW}$ represents the preference of criterion $j$ over the worst criterion $W$.
Step 5: The fifth step is to determine the optimal weights $(w_1^*, w_2^*, \ldots, w_n^*)$ of the criteria. BWM has three advantages: 1) it is always consistent, 2) it requires fewer comparisons compared to AHP, and 3) it requires fewer calculations. The mathematical model of BWM is given in (1):

$$\min \max_j \left\{ \left| \frac{w_B}{w_j} - s_{Bj} \right|, \left| \frac{w_j}{w_W} - s_{jW} \right| \right\}$$
$$\text{s.t.} \quad \sum_j w_j = 1, \qquad w_j \ge 0 \ \text{for all } j. \tag{1}$$
The min-max model in (1) can easily be converted to the equivalent form given in (2):

$$\min \varepsilon$$
$$\text{s.t.} \quad \left| \frac{w_B}{w_j} - s_{Bj} \right| \le \varepsilon \ \text{for all } j, \qquad \left| \frac{w_j}{w_W} - s_{jW} \right| \le \varepsilon \ \text{for all } j,$$
$$\sum_j w_j = 1, \qquad w_j \ge 0 \ \text{for all } j. \tag{2}$$

The solution of model (2) gives the weights of the criteria $(w_1^*, w_2^*, w_3^*, \ldots, w_n^*)$. Using BWM, the reliability of the comparisons is assessed through the consistency ratio (CR), calculated from Eq (3) using $\varepsilon^*$ (the optimal objective value of model (2)) and the consistency index (CI) [34], whose values are taken from Table 2:

$$CR = \frac{\varepsilon^*}{CI}. \tag{3}$$
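Model (2) is small enough to solve numerically without specialized software; below is a minimal sketch using scipy's SLSQP solver, in which each absolute-value constraint is split into two smooth inequalities. The function name and the demo comparison vectors are ours, not values from the paper.

```python
import numpy as np
from scipy.optimize import minimize

def bwm_weights(s_bo, s_ow, best, worst):
    """Solve BWM model (2): min eps s.t. |w_best/w_j - s_bo[j]| <= eps,
    |w_j/w_worst - s_ow[j]| <= eps, sum(w) = 1, w >= 0."""
    n = len(s_bo)
    x0 = np.append(np.full(n, 1.0 / n), 1.0)  # decision vector [w_1..w_n, eps]
    cons = [{"type": "eq", "fun": lambda x: x[:n].sum() - 1.0}]
    for j in range(n):
        for sgn in (1.0, -1.0):  # eps >= +/-(ratio - s) encodes the absolute value
            cons.append({"type": "ineq",
                         "fun": lambda x, j=j, s=sgn: x[n] - s * (x[best] / x[j] - s_bo[j])})
            cons.append({"type": "ineq",
                         "fun": lambda x, j=j, s=sgn: x[n] - s * (x[j] / x[worst] - s_ow[j])})
    bounds = [(1e-6, 1.0)] * n + [(0.0, None)]
    res = minimize(lambda x: x[n], x0, bounds=bounds, constraints=cons, method="SLSQP")
    return res.x[:n], res.x[n]  # criteria weights, eps*

# Demo with hypothetical comparison vectors for three criteria (best = 0, worst = 2).
w, eps = bwm_weights(np.array([1.0, 3.0, 7.0]), np.array([7.0, 3.0, 1.0]), best=0, worst=2)
print(w.round(4), round(float(eps), 4))
```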
EDAS method
In the EDAS method, the positive distance from average (PDA) and the negative distance from average (NDA) are calculated for each alternative; the optimal alternative has the largest positive distance and the smallest negative distance from the average solution. This method is useful for conflicting criteria and is attractive because it needs fewer calculations. The EDAS method comprises the following steps.
Step 1: The first step involves selecting the most important criteria for evaluating the alternatives.
Step 2: Construct the decision matrix $M$, presented in Eq (4):

$$M = \begin{bmatrix} m_{11} & m_{12} & \cdots & m_{1n} \\ m_{21} & m_{22} & \cdots & m_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ m_{r1} & m_{r2} & \cdots & m_{rn} \end{bmatrix}, \tag{4}$$

where $m_{ij}$ denotes the performance value of the $i$th alternative with respect to the $j$th criterion.
Step 3: In this step, Eq (5) is used to determine the average solution for each criterion:

AV_j = (1/r) Σ_{i=1}^{r} m_ij.   (5)
Step 4: The PDA and NDA matrices are calculated according to the benefit and cost criteria. If the jth criterion is beneficial,

PDA_ij = max(0, m_ij − AV_j) / AV_j,   NDA_ij = max(0, AV_j − m_ij) / AV_j,

and if the jth criterion is non-beneficial (a cost criterion),

PDA_ij = max(0, AV_j − m_ij) / AV_j,   NDA_ij = max(0, m_ij − AV_j) / AV_j,

where PDA_ij and NDA_ij denote the positive distance and the negative distance of the ith alternative from the average solution in terms of the jth criterion, respectively.
Step 5: The weighted sums of PDA and NDA are determined at this step, as represented by Eqs (6) and (7):

SP_i = Σ_j w_j · PDA_ij,   (6)
SN_i = Σ_j w_j · NDA_ij,   (7)

where w_j represents the weight of the jth criterion.
Step 6: The values of SP and SN are normalized as follows:

NSP_i = SP_i / max_i(SP_i),   NSN_i = 1 − SN_i / max_i(SN_i).

Step 7: The appraisal score (AS) is calculated for every alternative using Eq (8):

AS_i = (NSP_i + NSN_i) / 2,   (8)

and the alternatives are ranked in decreasing order of AS. The best-worst method is integrated with the EDAS method: the weights derived by BWM are used by EDAS to produce the priority ranking of the robots. The proposed hybrid multiple-criteria decision-making methodology is represented in Fig 1.
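A minimal code sketch of steps 2 through 7 may help make the method concrete. The decision matrix, weights, and benefit/cost flags below are hypothetical placeholders (repeatability is treated as a cost criterion on the assumption that a smaller error is better), not the paper's Table 4 data.

# A minimal sketch of EDAS steps 2-7 for ranking alternatives; all input
# values below are hypothetical placeholders.
import numpy as np

def edas_scores(M, weights, benefit):
    M = np.asarray(M, dtype=float)
    w = np.asarray(weights, dtype=float)
    av = M.mean(axis=0)                          # average solution AV_j, Eq (5)
    diff = np.where(benefit, M - av, av - M)     # sign flips for cost criteria
    pda = np.maximum(diff, 0.0) / av             # positive distance from average
    nda = np.maximum(-diff, 0.0) / av            # negative distance from average
    sp = (pda * w).sum(axis=1)                   # Eq (6)
    sn = (nda * w).sum(axis=1)                   # Eq (7)
    nsp = sp / sp.max()
    nsn = 1.0 - sn / sn.max()
    return 0.5 * (nsp + nsn)                     # appraisal score AS_i, Eq (8)

# Rows: robots 1-5; columns: load capacity, repeatability, velocity ratio, DOF.
M = [[60, 0.40, 2.5, 5],
     [55, 0.35, 2.0, 6],
     [70, 0.50, 2.2, 5],
     [65, 0.30, 1.8, 4],
     [50, 0.45, 2.6, 6]]
benefit = np.array([True, False, True, True])    # repeatability: lower is better
weights = np.array([0.25, 0.45, 0.10, 0.20])     # e.g., BWM-derived (hypothetical)
scores = edas_scores(M, weights, benefit)
print("ranking (best first):", np.argsort(-scores) + 1)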
Ranking evaluation for robot selection
In this section, the weights are derived using BWM, and the EDAS method then uses these weights to rank the robots and select the best one. Load capacity s_1, repeatability s_2, velocity ratio s_3, and degree of freedom s_4 are considered as criteria for industrial robot selection; here the decision makers identify s_3 as the worst criterion and s_2 as the best criterion. Table 3 presents the decision-makers' comparisons of the criteria.
The criteria weights are evaluated using model (9), the min-ε formulation of model (2) instantiated with the comparison values in Table 3. The solution of model (9) provides interval solutions for each w_i, and averaging each interval yields the final criteria weights. Table 4 presents the evaluation of robot 1 through robot 5 with respect to load capacity, repeatability, velocity ratio, and degree of freedom.
The matrices (10) and (11) represent the positive distance from average and the negative distance from average, respectively. The calculations of SP, NSP, SN, NSN, AS, and the resulting ranking are given in Table 5.
Comparison of results
The final results of the EDAS method are compared with the DBA, TOPSIS, and VIKOR methods in Table 6; the ranking results of all these methods are identical for the weights derived by BWM.
Sensitivity analysis
Sensitivity analysis is a process/tool for checking the consistency of an MCDM method's priority ranking. Here, the sensitivity analysis is performed by assigning eighty percent of the total weight to one criterion and distributing the remaining weight equally among the other criteria, following the methodology of Jain et al. [70]; a code sketch of these scenarios follows this paragraph. The weight-selection scenarios for the criteria are presented in Table 7. The normalized ASs of scenarios 1, 2, 3, and 4 are shown graphically in Figs 4-7, respectively, and the corresponding ranking effects on the alternative robots are shown in Table 8 and Figs 8 and 9. The results show that for scenarios 1 and 2, robot 4 and robot 5 interchange their positions but the first three preferences are unaffected. Scenarios 3 and 4, however, are more sensitive: robot 3, originally ranked first, drops to rank 4. For scenario 3 the original second and third preferences become first and second, respectively; but for scenario 4 the fourth preference becomes fifth and the fifth preference becomes first, an extreme reversal compared to the proposed result. The sensitivity analysis thus shows that criteria s_3 and s_4 are more sensitive to the assigned weights, since they produce more rank reversals among the alternative robots than criteria s_1 and s_2, for which only the fourth and fifth preferences are affected. In conclusion, criteria s_3 and s_4 are the most influential criteria for this selection process.
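The sketch below illustrates these weighting scenarios, assuming the hypothetical edas_scores function, decision matrix M, and benefit flags from the earlier EDAS sketch.

# Each scenario assigns 80% of the weight to one criterion and splits the
# remaining 20% equally among the other three, then re-ranks the robots.
import numpy as np

n = 4
for focus in range(n):                            # scenarios 1-4, one per criterion
    w = np.full(n, 0.2 / (n - 1))                 # equal share of remaining weight
    w[focus] = 0.8
    scores = edas_scores(M, w, benefit)           # from the EDAS sketch above
    print(f"scenario {focus + 1} (s{focus + 1} = 0.8):", np.argsort(-scores) + 1)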
Conclusion
The main reason for the growing use of robots in modern industrial manufacturing systems is the rapid advancement of information technology and the engineering sciences. In industrial applications, manufacturers preferably use robots to perform repetitious, uncertain, and difficult tasks with precision. Hence, for a particular task, the most difficult and crucial concern in boosting product quality and enhancing productivity in a manufacturing company is the selection of a proper and suitable robot. To support this decision-making process, load capacity, repeatability, velocity ratio, and degree of freedom are considered for appropriate robot selection in industries using the best-worst method integrated with the EDAS method. The advantage of the proposed method is that it is more consistent and needs fewer calculations. The proposed methodology is reliable and robust, as its ranking results match those of well-known existing methods. The sensitivity analysis shows that the results are stable with respect to criteria s_1 and s_2 and sensitive with respect to s_3 and s_4. The methodology has the following advantages: 1) the weight-derivation process is consistent; 2) it requires fewer pairwise comparisons; 3) it is user-friendly for DMs providing their opinions; 4) it has low calculation cost. The proposed hybrid BW-EDAS methodology can be used with any number of criteria, qualitative or quantitative, to produce a preference ranking of the robots. It is a general procedure that can help decision makers solve any industrial selection problem with a finite set of selection criteria. In future work, we will use the FUCOM method to derive weights, apply the EDAS method for the ranking process, and conduct a comparative analysis with the method proposed here. The work can also be extended to a fuzzy environment.
Managerial applications
A hybrid decision methodology has been developed to evaluate the optimal robot for industrial applications. Firms can derive advantage from the decision methodology developed in this study, which can be employed as a road map toward a consensus understanding for assessing firms' robot selection activities. Based on the findings of this study, and with the optimal robot evaluation tool developed here, firms can identify various ways to enhance their production and quality assurance. Managers can thus develop resilient relationships with their partners, build on their strengths, and take the necessary actions to overcome weaknesses. The digitalization of firms becomes possible by adopting Industry 4.0 approaches and addressing sustainability-related issues while systematically analyzing the decision problem. Moreover, the results of the study can assist managers in selecting the best robot for a specific requirement. The results are therefore highly relevant for implementing the developed decision framework to find the best-fit industrial robot. | 2021-02-11T06:19:40.572Z | 2021-02-09T00:00:00.000 | {
"year": 2021,
"sha1": "daa40210194cc06f96a67dfe4c49f7f9537f84b2",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0246738&type=printable",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "edeb12c4344b9f33d00a538430e5433cc99d6a80",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
5042769 | pes2o/s2orc | v3-fos-license | Multidimensional Mapping Method Using an Arrayed Sensing System for Cross-Reactivity Screening
When measuring chemical information in biological fluids, challenges of cross-reactivity arise, especially in sensing applications where no biological recognition elements exist. An understanding of the cross-reactions involved in these complex matrices is necessary to guide the design of appropriate sensing systems. This work presents a methodology for investigating cross-reactions in complex fluids. First, a systematic screening of matrix components is demonstrated in buffer-based solutions. Second, to account for the effect of the simultaneous presence of these species in complex samples, the responses of buffer-based simulated mixtures of these species were characterized using an arrayed sensing system. We demonstrate that the sensor array, consisting of electrochemical sensors with varying input parameters, generated differential responses that provide synergistic information about the sample. By mapping the sensing array response onto multidimensional heat maps, characteristic signatures were compared across sensors in the array and across different matrices. Lastly, the arrayed sensing system was applied to complex biological samples to discern and match characteristic signatures between the simulated mixtures and the complex sample responses. As an example, this methodology was applied to screen interfering species relevant to the application of schizophrenia management. Specifically, blood serum measurement of the antipsychotic clozapine and antioxidant species can provide useful information regarding therapeutic efficacy and psychiatric symptoms. This work proposes an investigational tool that can guide multi-analyte sensor design, chemometric modeling and biomarker discovery.
Introduction
Blood is the conduit for transporting chemicals throughout the body. These chemical species perform diverse functions such as communication and energetics, and their activities are vital to the body's effort to respond to threats and maintain homeostasis [1,2]. Thus, information about chemical compounds in blood provides a snapshot of an individual's health status and, consequently, blood tests are routinely used to discern diagnosis, facilitate prognosis and assess therapeutic actions [2][3][4][5]. Blood tests are typically performed in a central laboratory facility using common bench-top equipment (i.e., chromatography, mass spectroscopy) [6], although the need for real-time and frequent monitoring has led to the development of various point-of-care (POC) devices [7]. Such portable sensor systems can access chemical information in blood and be employed on-site by health-care professionals (i.e., at an office or pharmacy) or at home by a patient or care-giver to provide faster test results and therapeutic interventions [8]. Other benefits of POC testing include fewer unnecessary hospital admissions, better optimized drug treatment, and improved quality of life [8,9]. The most successful example of a POC device that accesses chemical information in blood is the glucose sensor, which is routinely used by diabetics to monitor their blood sugar and guide the administration of insulin [9]. Glucose measurement is enabled by a lock-and-key sensor design whereby selective recognition elements (enzymes) recognize and transduce the chemical signal of glucose into an electrical signal [4,5].
While the glucose sensor has transformed the management of diabetes, it may not be the best paradigm for other applications such as neuropsychiatric disorders, where the specific biomarkers for diagnosing or monitoring these diseases are not completely understood [10][11][12][13] and are in turn more difficult to diagnose and manage [14][15][16]. The state of these disorders may not be exemplified by a single marker but rather by integrated body responses [10][11][12][13][16][17][18]. Investigational tools that allow samples to be analyzed for population commonalities would enable better understanding of these disorders [10,11,13,15,16]. The typical lock-and-key sensor approach employing specific biological recognition elements (i.e., enzymes) may be impeded by the lack thereof, which creates a major challenge of cross-reactivity among molecular species in the complex sample (i.e., blood, urine, sweat, saliva) [7,19,20]. New sensing methodologies for mental health management that are based on multi-analyte, multi-sensor platforms can bypass the need for selective biological recognition elements by accounting for cross-reactions present in complex biological fluids.
Schizophrenia exemplifies such complex conditions lacking well-understood blood markers [10][11][12]15,16,[21][22][23]. Nonetheless, emerging evidence implicates oxidative stress as having psychophysiological significance in schizophrenia, among other conditions. For instance, measures of antioxidants have been shown to correlate inversely with worsening symptoms of psychosis [24,25]. Thus, small-molecule candidate markers may include biological antioxidants (i.e., ascorbic acid [26], uric acid [27]). Frequent monitoring of oxidative stress along with antipsychotic medication levels will allow physicians to assess the efficacy and toxicity of the treatment and guide drug titration [28]. The antipsychotic clozapine (CLZ) is a fitting example of the need for frequent monitoring in schizophrenia treatment. It is the most effective antipsychotic for treatment-resistant schizophrenia, and its narrow therapeutic range suggests continuous measurement of its blood level [28]. These markers are currently measured at centralized facilities using common bench-top equipment (i.e., chromatography, mass spectroscopy) [29]. Nonetheless, the iterative nature of this practice requires a faster and more convenient monitoring platform that can be employed at the POC to improve treatment outcomes. Incorporating miniaturizable electrochemical sensors in POC devices for schizophrenia treatment management is well-suited to probing the redox activities of blood samples and can extract information on individual CLZ and antioxidant indicators due to their electroactive nature [29][30][31].
We previously demonstrated the ability to detect CLZ with a POC device by incorporation of a semi-selective redox cycling film on electrode surfaces to enable oxidative signal amplification [32,33]. However, in measuring serum samples, electroactive components present in the matrix generated overlapping and potentially inter-dependent signals to the CLZ measurement due to selectivity limitations (i.e., no biological recognition elements).
Rather than relying on a single semi-selective electrode, a sensor array is a more suitable platform for accessing the broader scope of information available in the complex sample [16,18,22]. For instance, the diagnosis or prognosis may be inferred from a particular pattern of various semi-specific markers [18]. Thus, concepts such as the artificial tongue, which have enabled multi-analyte assessment in complex mixtures by coupling multi-sensor array responses with pattern recognition (i.e., chemometric) models, can be a more appropriate platform [22,[34][35][36][37][38]. This method relies on semi-selective sensors that each provide diverse and synergistic information regarding the markers of interest as well as other variable background signals to create fingerprint patterns for the markers of interest. Quantitative or qualitative models can be applied to obtain either presence/absence information or concentration values [34]. Moreover, advanced models such as artificial neural networks (ANN) are capable of being trained to a certain sample matrix, such as the particular baseline signal of an individual's blood sample, and can account for the sample matrix changes that may differ from person to person [39]. Thus, the diversity of a sensor array provides multidimensional information that can account for variable cross-reactive species in complex samples.
We demonstrate an integrated bottom-up and top-down approach that uses an arrayed sensing system to discern and match electrochemical signatures of CLZ and antioxidant species in buffer- and serum-based matrices towards the development of a monitoring device for schizophrenia treatment management. In a bottom-up study, a systematic methodology is employed to select, screen and identify electrochemical patterns of endogenous electroactive species in buffer-based solutions. Furthermore, the cross-reactivity of these species with the exogenous CLZ marker is characterized by testing them in simulated mixtures. To capture broader chemical information and enable multi-level comparisons, varying sensing elements are incorporated in an array. The miniaturizable sensing elements consist of differential changes in the input parameters, including changes to the electrical input signal, the sample pH and the sensor material. The sensor array was shown to produce differential responses that provide synergistic information about the chemical species. Lastly, multi-dimensional array responses from the bottom-up buffer-based studies were compared to a top-down study based on blood serum samples, to discern and match signatures between the two sample matrices, as illustrated in Fig. 1. This methodology provides an understanding of the individual and combined sensor responses of CLZ, uric acid and other components of serum. Broadly, the results outline an investigational tool for identifying signatures of interest (i.e., cross-reactive species) in any sample, which can be applied to suggest guidelines for targeted sensor design, to chemometric models, or to discover biomarkers.
Materials
Phosphate buffer saline (PBS, pH 7.4) containing 0.01 M phosphate buffer, 0.0027 M potassium chloride, and 0.137 M sodium chloride was prepared by dissolving concentrated tablets with deionized water. Pooled commercial human serum (from human male AB plasma, originated in the USA, sterile-filtered) was divided into 1 mL aliquots and stored at −20°C. All materials were purchased from Sigma-Aldrich, except urea (Fisher Scientific), L-glutamine (JRH Biosciences), and glycine (Bio-Rad).
Electrochemical methods
A glassy carbon disk electrode (GCE, 3 mm diameter), a platinum disk electrode (Pt, 2 mm diameter), a platinum wire electrode, and a Ag/AgCl (1 M KCl) electrode were purchased from CH Instruments. Electrochemical tests were performed with a CHI660D potentiostat (CH Instruments) in a three-electrode configuration of either GCE or Pt as the working electrode, platinum wire as the counter electrode and Ag/AgCl as the reference electrode. All potential values presented are written in reference to a Ag/AgCl half-cell potential. The working electrodes were cleaned by successively polishing with 1, 0.3, and 0.05 μm alumina powders and were electrochemically validated before every test. Validation was performed by measuring the ferrocyanide/ferricyanide redox couple using a cyclic voltammetry technique (CV: initial E and final E = 0.19 V, high E = 0.44 V, low E = -0.06 V, positive initial scan polarity, scan rate = 0.2 V/s, 6 sweep segments, sample interval 0.001 V). The redox couple solution consisted of 10 mM PBS, 100 mM sodium chloride, 10 mM ferricyanide and 10 mM ferrocyanide. A differential pulse voltammetry technique (DPV: initial E = 0 V, final E = 0.7 V, increment E = 0.001V, amplitude = 0.05 V, pulse width = 0.2 s, sampling width = 0.0167 s, pulse period = 0.5 s) was employed for elements A-C and F of the sensor array (Table 1). CV (initial E and final E = 0 V, high E = 0.7 V, low E = 0 V, positive initial scan polarity, scan rate = 0.01 V/s, 6 sweep segments, sample interval 0.001 V) was used for elements D-E of the sensor array ( Table 1). The pH was changed by adding 1 M HCl or 1 M NaOH under mixing conditions for elements A-C of the sensor array. All buffer-based tests were performed in same-day triplicates and the averages of the signals are shown graphically, such that larger positive responses correlate to higher oxidative current.
Serum sample preparation
The frozen serum aliquots were thawed by placing the tube on an ice bed at room temperature. Centrifree Ultrafiltration Devices (# 4104) were purchased from Merck Millipore Ltd and run with 1000 μL serum samples at a centrifugal force of 2000 × g for 150 minutes, to remove macromolecules (>30K g/mol) from the sample. A 10 mM CLZ stock solution was spiked into serum prior to measurement to a concentration of 5.6 μM CLZ that accounted for up to 1% of the total sample volume. The pH of serum was found to be about 7.70 and 8.54 before and after molecular weight filtration, respectively. Thus, the pH was adjusted according to Table 1 by adding 1 M HCl for level comparison with the buffer-based results.
Signal Visualization of Arrayed Responses
MATLAB (R2013b) was used to create heat map representations of the electrochemical responses, using the "imagesc" tool. All signals represented by heat maps were normalized by taking the absolute value and mean-centering, such that larger positive responses correlate to larger oxidative currents for the DPV and the CV's oxidative sweeps, and larger positive responses in the reductive sweep of the CV correlate to larger reductive currents. Furthermore, the measured electrochemical signal was averaged over 20 mV intervals to assemble the response into 35 groups, and a selected potential range of this smoothed data is shown in the heat maps. The sample array is composed of 8 elements as shown in Table 1: elements A-C represent changes in the sample pH prior to DPV measurement with a GCE working electrode, elements D and E represent the third-cycle oxidation and reduction signals from CV measurements with a GCE working electrode, elements ΔD and ΔE represent the difference between the first and third CV cycles of elements D and E, with positive values representing an increase of the electrochemical current between cycles, and element F represents the DPV response using a Pt working electrode. Moreover, for the analysis of elements D-F, the signal of the buffer solution in the absence of the analytes was subtracted from the electrochemical signal in the presence of the analytes. Electrochemical peaks on the heat maps are outlined with black boxes to highlight the SMA signatures. Simplified heat maps, where only the outlined peaks are shown, are used for overlaying the signatures of several samples such that they can be more conveniently compared. Raw signals not shown here are summarized in the supplemental information.
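For illustration, the sketch below reproduces the described preprocessing (absolute value, mean-centering, and 20 mV binning into 35 groups over 0-0.7 V) in Python rather than the MATLAB/imagesc pipeline actually used; the scan data are random placeholders for the eight sensing elements, not measured voltammograms.

# A minimal Python sketch of the heat-map preprocessing described above.
import numpy as np
import matplotlib.pyplot as plt

def preprocess(current, potentials, bin_mv=20):
    """Absolute value, mean-centering, and 20 mV binning (35 groups for 0-0.7 V)."""
    sig = np.abs(current)
    sig = sig - sig.mean()                        # mean-center the rectified signal
    nbins = int(round((potentials.max() - potentials.min()) * 1000 / bin_mv))
    idx = np.minimum(((potentials - potentials.min()) * 1000 / bin_mv).astype(int),
                     nbins - 1)
    return np.array([sig[idx == k].mean() for k in range(nbins)])

potentials = np.linspace(0.0, 0.7, 701)           # 1 mV sample interval, 0-0.7 V
scans = np.random.rand(8, potentials.size)        # placeholder voltammograms
rows = np.vstack([preprocess(s, potentials) for s in scans])

plt.imshow(rows, aspect="auto", extent=[0.0, 0.7, rows.shape[0], 0])
plt.xlabel("Potential (V vs. Ag/AgCl)")
plt.ylabel("Sensing element (A, B, C, D, ΔD, E, ΔE, F)")
plt.colorbar(label="Normalized current")
plt.show()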
Bottom-Up Approach
Electrochemical Signatures. DPV scans contain two major components that represent the signatures of an electroactive species, its oxidation potential (E p ) whose location varies according to the specific species' energetics, and its oxidation current (I p ) which varies according to the species concentration. For instance, a well-defined CLZ oxidation I p is seen at an E p of 0.336 V in buffer (GCE, pH 7.4) as seen in Fig. 2, consistent with the literature [29,40]. This electrochemical response represents one signature of CLZ. And the peak information can be mapped onto a heat map, as shown in Fig. 2, to facilitate comparison across various samples or sensors.
Screening of Cross-Reactivity with Serum Components. From all the endogenous (naturally-present) components that make up the complex serum matrix, 17 species were chosen for this study based on the following criteria: (1) common antioxidants [23][24][25][26], (2) high abundance [41,42], and (3) known interferences in similar sensors, as shown in Table 2. The latter includes electrochemical sensors for other CLZ sensing schemes as well as other analytes sensed in the measured potential range of 0-0.7 V [4,5,[43][44][45][46][47][48][49]. Individual species were systematically screened for electroactivity. These criteria can be further individualized for other applications depending on the sensor and analyte of interest. Only four components, aside from CLZ, were shown to be electrochemically active ( Table 2): uric acid (UA), L-cysteine (CySH), ascorbate (AA) and oxalic acid (OA) (S1 Fig.). One common theme among the former three species is their antioxidant properties [50][51][52], with UA accounting for a majority of antioxidant capacity [26], suggesting that antioxidant measurement is amenable to electrochemical sensing assays.
Additionally, cross-reactivity between the endogenous species and the exogenous CLZ species was assessed in simulated mixtures of the two species at their upper physiological concentrations (Table 2) [41,42], as suggested in the CLSI EP7-A2 guidelines [19]. The electrochemical peaks found in simulated mixtures (individual endogenous species + CLZ) were compared to CLZ-only responses to assess the effect of the simultaneous presence of the two components on the CLZ response. These synthetic mixtures are referred to as simulated because they represent interactions only between two-component mixtures, a step toward identifying species interactions in complex biological fluids. Thus, they do not, for example, consider the simultaneous presence of additional species that may lead to further cross-reactions, or effects from sample stability, preparation and storage. The cross-reactivity is represented in terms of the signal difference between the mixture and the CLZ-only solution as well as the corresponding p-values obtained from a one-way ANOVA test between the two sample populations, as seen in Table 2. The cutoff significance level was chosen to be α = 0.1 for the peak current, and α = 0.01 for the peak potential, since the latter has intrinsically less variation. UA and CySH were found to have significant cross-reactivity with CLZ with respect to both peak current and peak potential, as shown by the p-values in Table 2. Thus, their interaction with CLZ was further studied in the following section.
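As a small worked example of this screening step, the sketch below runs a one-way ANOVA on hypothetical triplicate peak currents for CLZ alone versus a CLZ + UA mixture, using the α = 0.1 peak-current cutoff described above.

# A minimal sketch of the one-way ANOVA screening; all current values (in μA)
# are hypothetical placeholders, not the measured data of Table 2.
from scipy.stats import f_oneway

i_p_clz = [1.02, 0.98, 1.05]         # CLZ-only peak currents, hypothetical
i_p_mix = [4.90, 5.10, 5.02]         # CLZ + UA mixture peak currents, hypothetical

stat, p = f_oneway(i_p_clz, i_p_mix)
alpha = 0.1                           # cutoff used for peak current
print(f"F = {stat:.2f}, p = {p:.4f}, significant: {p < alpha}")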
Multi-Modal Sensing Array. Creating a sensing methods array (SMA) enables multi-dimensional signal mapping and facilitates the determination and comparison of characteristic signatures across different conditions. The 8 various sensing elements composing the SMA are shown in Table 1 and are further described below. First, the SMA was validated using a CLZ sample and then it was applied to UA and CySH species individually and in simulated mixtures with CLZ in order to characterize their cross-reactivity.
The design of the sensing methods was chosen to yield differential responses while minimizing complexity. As shown in Table 1, each comprises only one variable change compared to a control (sensing element A); the changes span the input potentiostat settings, the pH, and the electrode materials. These design choices are further explained below, and they produce diverse signatures, as shown by the array's application to clozapine measurement.

Table 2. Electrochemical oxidation peak values of selected serum species (cross-reactive), and statistical analysis of cross-reactivity when CLZ is tested in mixture with the chosen serum species, as a method for screening significant cross-reactions. (a) E_p,CRS represents the mean peak potential of the cross-reactive species (CRS) alone, if there is observable oxidation within the testing range. (b) ΔI_p,mix = |I_p,mix| − |I_p,CLZ|, where I_p,i represents the mean peak current of either CLZ or the mixture of CLZ and CRS. (c) The p-value is the result of a one-way ANOVA test between CLZ-only and simulated-mixture peaks; the corresponding peak in the mixture was determined as the largest detectable peak. (d) ΔE_p,mix = |E_p,mix| − |E_p,CLZ|, where E_p,i represents the mean peak potential of either CLZ or the mixture of CLZ and CRS. (doi:10.1371/journal.pone.0116310.t002)
Electrode Material. The properties of the electrode material can affect molecular interactions between the electrode and the species being measured, and in turn can affect the sensor semi-selectivity across different species [53]. For instance, attraction/repulsion forces between the surface structures of the electrode and the species in solution can affect reaction kinetics [54,55]. In regard to the GCE, studies have shown that species containing amine groups can form a carbon-nitrogen linkage with GCE after oxidation of the amine group to the corresponding cation radical [56,57]. The level of substitution of the amine plays a major role in the electrochemical oxidation kinetics as well as in the GCE linkage. For instance, tertiary amines have been found to have the most facile oxidation but an undetectable degree of binding to the GCE, likely because of steric hindrance. Furthermore, secondary and primary amines have the least facile oxidation but are seen to form carbon-nitrogen linkages with GCE [57]. CLZ oxidation on GCE is shown in Fig. 2 with a clean background signal.
A metallic electrode material, Pt, undergoes different surface reaction processes (adsorption, kinetics, surface functionalization) compared to the carbon-based GCE [58,59]. As shown in Fig. 3a, the Pt electrode has background DPV peaks near 0 V and 0.57 V in the PBS solution.
In the presence of CLZ, an additional DPV peak was seen at 0.33 V, which does not overlap with the background signals (Fig. 3a). Due to the presence of background reactions, we suspect that changes in the sample matrix may not only affect the CLZ peak but also the background oxidation processes. Thus, the background signal is advantageous in this type of sensor array because it can account for environmental changes and matrix effects that may not be detectable with a GCE. Additionally, combining Pt and carbon electrodes has previously been shown to add orthogonal information in an array of several electrode materials [60].
pH Changes. Changes to the solution pH can shift the oxidation peak potential, E_p (V), which depends on the pH and the specific oxidation properties of the species according to the Nernst equation:

E_p = E° − (2.3RTm/nF) · pH,

with the potential at standard conditions E° (V), gas constant R (J K⁻¹ mol⁻¹), temperature T (K), the ratio of protons m to electrons n transferred during the redox reaction, and Faraday constant F (C mol⁻¹). The Nernst equation assumes a reversible redox reaction, fast kinetics, and concentrations at the electrode surface in equilibrium with the potential. These assumptions are not completely met for the semi-reversible, slow reaction of CLZ, but the equation serves as a general trend (a quick numeric check of the expected slope appears after this section). Fig. 3b shows the linear correlation of CLZ oxidation E_p at varying pH, with a slope of 0.048 V/pH and an intercept value of 0.682 V, as outlined in Table 3. The behavior of the I_p (Fig. 3b) is an increase between pH 6.5 and 7.4 and a decrease between pH 7.4 and 8.0, a trend similar to that seen by Huang et al [61]. This could be attributed to several factors. One possibility is changes to pH-dependent functional groups within the GCE surface that can lead to differences in attraction/repulsion forces, reaction kinetics or reaction mechanisms. An additional factor is the effect of pH on the solubility of CLZ, which is least soluble at basic pH.

[Caption fragment: Dependence of peak potential on pH based on the Nernst equation E_p = E° − (2.3RTm/nF)·pH, with the peak potential E_p (V), potential at standard conditions E° (V), gas constant R (J K⁻¹ mol⁻¹), temperature T (K), the ratio of protons m to electrons n transferred during the redox reaction, and Faraday constant F (C mol⁻¹). The slope values correspond to the ratio of protons to electrons involved in the oxidation reactions, and the intercept is an estimate of E°. The insert shows the linear trendline of peak potential as a function of pH. Each curve represents the average of triplicate measurements.]

Electrochemical Technique. In addition to DPV, a different electrochemical method with the potential to incorporate additional useful information was chosen. While DPV is a pulsing technique that is swept across potentials in one direction, cyclic voltammetry (CV) is a (non-pulsing) linear sweep performed in cycles spanning the anodic and cathodic potential directions. Applying CV can yield oxidation as well as reduction peak responses, and multiple CV cycles allow dynamic visualization of byproducts formed over time. Van Leeuwen et al. showed a scheme in which CLZ can be reversibly oxidized and reduced; due to the instability of the reaction products, byproducts are formed, some of which possess redox behavior. The electroactive byproduct is thought to correspond to hydroxylated CLZ derivatives [40]. Fig. 3c shows the oxidation and reduction peaks of the CLZ cyclic voltammogram near 0.34 V as well as the appearance of oxidation and reduction of the CLZ byproduct near 0.17 V. Furthermore, the peak changes across consecutive CV cycles demonstrate a decrease of the primary CLZ oxidation peak and an increase of the byproduct oxidation peak with cycles (time).
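The quick numeric check referenced in the pH subsection above: the theoretical Nernstian slope is 2.303RT/F × (m/n), so comparing it with the fitted 0.048 V/pH slope for CLZ gives a rough estimate of the proton-to-electron ratio.

# Theoretical Nernst slope vs. the fitted CLZ slope from Table 3.
R, T, F = 8.314, 298.15, 96485.0          # J/(K*mol), K, C/mol
nernst_slope = 2.303 * R * T / F          # ~0.0592 V per pH unit for m/n = 1
fitted_slope = 0.048                      # V/pH, fitted value for CLZ
print(f"theoretical slope (m/n = 1): {nernst_slope:.4f} V/pH")
print(f"implied m/n for CLZ: {fitted_slope / nernst_slope:.2f}")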
SMA Signatures of Individual Endogenous Species. The signatures of UA and CySH are similarly characterized using the SMA to determine their trends across the array for comparison with CLZ signatures and later, for comparison with serum-based responses.
UA has a characteristic oxidation peak at 0.245 V in element A, as shown in Fig. 4b. Elements A-C illustrate how the peak potential shifts with varying pH between 6.5 and 8.0 according to equation 1, with the trend shown in Table 3. The fitted slope value of 0.046 V/pH is similar to that of CLZ (0.048 V/pH), which suggests a similar ratio of protons to electrons produced during the oxidation reactions of these species (equation 1). Nonetheless, the intercept value, representative of the E°, provides a distinguishing parameter between the two species, as shown in Table 3. Notably, UA produces the highest response at pH 6.5 (element B) and the lowest at pH 8.0 (element C). Elements D-E (Fig. 4b) represent the CV signatures of UA. Element D shows the oxidation peak of UA positioned near 0.30 V, and element ΔD demonstrates a decrease in the UA peak over time, suggesting a large consumption of UA throughout cycling. This behavior likely arises from the electrochemically irreversible behavior of UA, evident in the lack of reduction peaks (element E). Lastly, the UA response at a Pt electrode (element F) shows a drastic shift of 0.18 V in its E_p position between the GCE and Pt electrode materials, demonstrating that different electrode processes can be achieved by varying the electrode material. This trend is a useful signature because it will enable verification of suspected UA peaks in serum by checking the trend of relative E_p positions between GCE and Pt.

[Figure caption: Responses across the sensing elements (Table 1); the peak signatures are outlined in black rectangles. Each response represents the average of triplicate measurements. Note that two scales are used in each heat map to enhance visualization.]

Fig. 4c shows the characteristic signatures of CySH across the SMA. The response at element A shows a wide oxidation peak of CySH at 0.347 V. This peak is suggested to correspond to a 2-step reaction of CySH's thiol group, which yields CyS· and subsequently the disulfide CyS-SCy [62,63]. Accordingly, an important physiological role of CySH is the formation of disulfide bonds between polypeptide chains, contributing to protein folding [62]; these bonds are formed via an oxidation reaction that yields intermolecular CySH links [64].
Elements A-C illustrate large differences in oxidation E_p and I_p across the pH range (Fig. 4c), with a distinguishable wide peak observed at neutral pH. At higher pH (8.0), this peak shifts 0.08 V to lower potentials, and at lower pH (6.5) it becomes ill-defined. Thus, CySH oxidation depends on pH as outlined in Table 3, with a slope of 0.141 V/pH and an intercept value of 1.400 V, both much higher than those measured for CLZ or UA. This suggests characteristic differences in the electrochemical processes among the species. Elements D-E show the CV signatures of CySH. A CySH oxidation is seen in the CV as a large peak near 0.48 V (element D), with a magnitude that decreases over time, as seen in element ΔD. Similar to UA, this behavior suggests consumption of CySH at the electrode throughout cycling. Moreover, no reduction reactions are detected in the CV, as seen by the lack of peaks in elements E-ΔE (Fig. 4c). Lastly, the DPV response using the Pt electrode (element F) demonstrated a 0.07 V shift of CySH oxidation to higher E_p compared to the GCE control. An additional peak also appears near 0.15 V, but no conclusions can be drawn about it, since the response between 0 and 0.2 V also corresponds to oxidation processes of the PBS background, as seen in Fig. 3a.
SMA Signatures of Simulated Mixtures. Simulated mixtures of CLZ in the presence of UA or CySH were characterized with the SMA in order to elucidate the effect of their simultaneous presence and to further mimic their combined behavior in complex serum solutions. Fig. 5a illustrates the signatures of simulated UA/CLZ mixtures as a simplified heat map. The response of element A has a single DPV oxidation peak positioned at the characteristic UA oxidation location (as seen by the blue-shaded outline). This peak has a nearly 5-fold higher amplitude compared to the CLZ peak I_p, and the CLZ signature is masked in the mixture, as seen by the lack of mixture signals near the red-shaded outline of the CLZ peak. Similar behavior is seen in the CV oxidation response (element D). Additionally, at varying pH, the linear trend closely follows the parameters seen for UA (Table 3). Even at pH 8.0 (element C), where UA exhibits the lowest signal amplitude, the CLZ peak remains masked. The CV reduction peaks of CLZ in element E were also masked in the presence of UA. This finding corresponds with previous studies, where interference from high concentrations of UA in electrochemical CLZ measurements was reported [61]. This mixture response suggests an interaction in which UA may be preventing or masking CLZ's signature oxidation peak, and the behavior may be related to UA's potent antioxidant activity. Another possibility is the formation of linkages between oxidized UA and the GCE surface due to the high concentration of UA's secondary amines, which can foul the electrode surface (as described for GCE previously) and thus decrease the current response of subsequent electrode reactions. The response of the Pt electrode in element F shows a change in the width of the UA signal in the presence of CLZ, but again the UA signal dominates over that of CLZ. Nonetheless, even though CLZ is shown to be masked by the upper UA concentration in the simulated mixtures, complex solutions may exhibit different integrated responses in the simultaneous presence of other species. This is further studied in the Top-Down section below. Fig. 5b shows the signatures of simulated CySH/CLZ mixtures. While the individual peaks of CySH and CLZ have similar E_p values around 0.35 V (for element A), as seen by the overlapping shaded signals in Fig. 5b, the mixture DPV response contains only one major peak, located at 0.284 V, with a 0.62-fold change in I_p compared to CLZ individually. In parallel, a small additional peak near 0.472 V (at pH 7.4) appeared in the mixture (S4 Fig.). Because this peak was not observed in either the CLZ or CySH individual signals, a cross-reaction between these two components is likely, and it can also explain the decreased E_p and I_p. Van Leeuwen et al. showed that the CySH-containing glutathione species can react with activated CLZ to form CyS-CLZ adducts, which are electroactive at higher potentials [40]. Thus, the peak shown at 0.472 V may be the electroactive product of the CyS-CLZ cross-reaction. Moreover, the pH dependence of the major mixture peak follows the trend shown in Table 3, with a slope of 0.059 V/pH that is closer to that of CLZ (0.048 V/pH) than to that of CySH (0.141 V/pH). Thus, this peak is likely governed by CLZ rather than CySH reaction kinetics. The additional peak near 0.472 V also has a pH-dependent slope of 0.0432 V/pH, closely resembling that of CLZ. The CV reduction response in element E shows that the CLZ reduction peak is missing.
This behavior, along with the decreased peak current, may be explained by CyS-CLZ production, which would increase the rate of CLZ consumption in a reaction with CySH and reduce CLZ's reduction reaction. Lastly, the response of the Pt electrode in element F shows signatures that resemble the CySH trend and mask the CLZ peak; this sensing element is seemingly more selective to CySH than to CLZ.
Top-Down Approach
Serum complexity makes comparison of buffer- and serum-based signal responses challenging due to matrix effects. The matrix effect refers to the differences in chemical (simultaneous presence of a combination of interfering species, pH) and physical (viscosity, electrostatic forces, temperature history) properties of the matrix compared to the calibration solution (PBS) that may lead to alterations of the signal [19,65]. Additionally, changes in matrix composition, as well as interactions between matrix components and the analyte, need to be considered. One can account for some of the matrix property differences (i.e., adjusting serum pH, choosing a buffer with similar ionic properties, and predicting the effect of preparation), but unaccounted matrix effects are likely to affect the response where highly complex cross-reactive behavior is expected. Thus, the top-down approach is the direct measurement of complex solutions such as blood serum, and it accounts for the combined behavior of the analytes in the presence of other matrix components and conditions. The previously measured simulated mixtures (of CLZ + an individual cross-reactive component) resemble serum-based solutions more closely than individual species because their response accounts for some combined cross-reactive behavior. Nonetheless, they only account for the cross-reactions of two species at a time. Thus, comparison between the signatures seen in the bottom-up simulated-mixture studies and the top-down serum-based studies is important to account for matrix effects.
Single-Sensor Analysis. The electrochemical activity of un-spiked and CLZ-spiked serum was first tested with a single GCE sensor (element A) using the DPV technique at pH 7.4. Fig. 6 shows the measured voltammograms and the corresponding heat map representing the response of serum with and without CLZ. A predominant peak was observed in the DPV response of the serum background at 0.248 V, and another peak at 0.647 V. Comparing these signatures with the corresponding buffer-based signatures, the first peak can be matched to the characteristic oxidation potential of UA (0.245 V). While roughly 6-8 times higher I_p magnitudes were expected for the UA signal given its high physiological concentrations, species degradation may have occurred during sample preparation, transportation and preservation [66,67]. Moreover, the second peak at 0.647 V did not match any of the interfering species studied in previous sections. The latter may be a result of signal changes in the simultaneous presence of several interfering species or may arise from endogenous species not assessed in this study. Lastly, current peaks that could be matched to the CySH peak signature (0.347 V) were not observed in the serum response. This species may not be detectable in serum due to its low concentration, overlap with the larger UA peak, or potential cross-reactions with other endogenous components during storage and handling. In the presence of CLZ, a third peak at 0.312 V appears. This peak is near the location of the CLZ peak measured in PBS and has an I_p about 0.78-fold that of the buffer-based response. Furthermore, the same peaks found in the un-spiked serum response remained present in the spiked serum response, with a slight decrease of the suspected UA peak. This decrease may be due to a cross-reactive dependency between CLZ and UA.
SMA Analysis. The SMA was applied to serum-based tests to demonstrate the advantages of a multi-dimensional sensor array for characterizing un-spiked and CLZ-spiked serum responses and matching them to the bottom-up studies of buffer-based samples. Fig. 7 shows the heat maps representing the integrated SMA response of the serum samples in order to facilitate comparison with the buffer-based signatures.
The responses of elements A-C illustrate the change in serum electrochemistry with varying pH at the GCE in Fig. 7. In the un-spiked serum, the suspected UA peak (at 0.248 V, element A) follows a pH response trend matching that of the individual UA response, as seen in Table 3. This result further supports the likelihood that this peak represents UA oxidation, since the fitted values corresponding to the proton-to-electron ratio and the standard potential were similar to those of UA oxidation. Interestingly, the pH trends for the two overlapping peaks seen in the CLZ-spiked serum demonstrate a consistent decrease in the slope and intercept values of the two peaks compared to the buffer-based trends, as shown in Table 3. This discrepancy in the pH trends may be caused by cross-reactions between UA and CLZ in the presence of other interfering species in this complex mixture, and it further illustrates the dependence of analyte measurement on other components of the sample. Notably, the highest resolution and differentiation of the two major peaks was seen at pH 6.5 (element B), as seen in Fig. 7b.
The responses of elements D-E illustrate the CV responses and include elements ΔD-ΔE, which represent the difference in current over a period of three CV cycles (i.e., over time). As seen in Fig. 7, the CV oxidation response of serum in element D is similar to the DPV response in element A. However, the change in the CV oxidation response over time (element ΔD) points to a larger decrease of the suspected UA peak compared to a smaller decrease of the CLZ peak, as shown in S7 Fig. Thus, this difference provides an additional signature for distinguishing the two species. Elements E and ΔE provide information about reduction reactions, which was previously shown to be another differentiating factor between UA and CLZ, because CLZ has a reduction peak whereas UA does not. As can be seen in the response of un-spiked serum (Fig. 7a, element E), no reduction peaks can be distinguished. Even when CLZ was spiked into serum, no reduction peaks were detectable (Fig. 7b, element E), which matches the behavior seen in the simulated mixtures of CLZ with UA (Fig. 5a).
The response of element F corresponds to the Pt electrode response. As seen in Fig. 7a (element F), a major peak near 0.430 V was observed for the un-spiked serum sample. Its location is drastically different from that of the background serum peak at the GCE in element A. This peak shift across the two electrodes matches the behavior seen for the individual UA buffer-based response, further corroborating that this peak likely corresponds to UA. When CLZ was spiked into serum, only one peak remained in the response, near the peak seen for un-spiked serum, although shifted toward lower potentials. Similar behavior was seen in the buffer-based UA/CLZ simulated-mixture response, which was hypothesized to represent an integrated UA/CLZ response. This sensing element suffers from reduced resolution compared to the GCE; however, it shows a characteristic signature of UA that changes in the presence of CLZ. Thus, this response can provide an integrated measure of multiple species.
Conclusions
A novel integrated bottom-up and top-down approach was employed using an arrayed sensing system to discern and match signatures in buffer and complex solutions. Multidimensional signatures of individual species and their simulated mixtures were collected using a sensing methods array (SMA) to elucidate inter-dependencies. By applying this investigational tool to the model application of serum-based measurement of CLZ and antioxidant analysis, the bottom-up study identified UA and CySH as being electroactive and cross-reactive with the antipsychotic CLZ during measurement. Using the SMA for top-down studies of the complex serum matrix, UA and CLZ signature trends were matched to bottom-up results across the elements of the array. Some differences in the signatures of these species in buffer and complex solutions were observed and attributed mainly to additional molecular cross-reactivity and integrated matrix effects. These results further highlight the advantages of using an arrayed sensing system for mapping complex solutions as well as the challenges of inter-dependence between the analyte and other matrix components. We show that collecting broader information enabled discerning of cross-reactions in simulated and complex serum matrices. The ultimate sensor design is envisioned as an integrated SMA platform incorporated with savvy pattern recognition (i.e., chemometric) data processing that takes into account the inter-dependence of cross-reactive species for measurement in complex samples. Furthermore, this methodology can be applied to the investigation and simultaneous measurement of various disease-related markers, to the assessment of various sample conditions or treatments, or to sensor characterization and optimization. | 2016-02-24T08:38:05.773Z | 2015-03-19T00:00:00.000 | {
"year": 2015,
"sha1": "f47a808ec8e985ee343f9f9afd9539ef455dfbf0",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0116310&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f47a808ec8e985ee343f9f9afd9539ef455dfbf0",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
195812091 | pes2o/s2orc | v3-fos-license | RNA Sequencing of Osteosarcoma Gene Expression Profile Revealed that miR-214-3p Facilitates Osteosarcoma Cell Proliferation via Targeting Ubiquinol-Cytochrome c Reductase Core Protein 1 (UQCRC1)
Background Osteosarcoma (OS) is a common primary malignant bone tumor for which the molecular mechanisms remain unclear. Studies on coding and non-coding RNAs are needed to determine the molecular mechanism. Material/Methods To explore the potential roles of miRNAs and mRNAs in OS, we determined the miRNA and mRNA expression profiles of 3 pairs of OS and paracancerous tissues from patients with OS by sequencing and bioinformatics analysis. The expression levels of critical miRNAs and mRNAs were verified in 10 pairs of OS and paracancerous tissues. An miRNA inhibitor and mimics were used to investigate the interactions between miRNAs and target genes. The cell counting kit-8 assay was performed to evaluate OS cell proliferation after miRNA interference. Results A total of 184 miRNAs and 2501 mRNAs were identified as differentially expressed (fold-change >2.0 in either direction, P<0.05), with up-regulation of 82 miRNAs and 1320 mRNAs and down-regulation of 102 miRNAs and 1181 mRNAs in OS tissue. The protein-protein interaction network revealed that UQCRC1 (ubiquinol-cytochrome c reductase core protein 1) is a critical gene and a potential target gene of miR-214-3p. Both UQCRC1 and miR-214-3p were significantly differentially expressed in OS tissue and cell lines (down- and up-regulated, respectively). Down-regulation of miR-214-3p increased UQCRC1 expression and suppressed OS cell proliferation. In contrast, overexpression of miR-214-3p decreased UQCRC1 expression and promoted OS cell proliferation. Conclusions High miR-214-3p expression may promote OS cell proliferation by targeting UQCRC1, providing insight into a potential therapeutic target for preventing and treating OS.
Background
Osteosarcoma (OS) is a type of bone tumor arising from primitive transformed cells of mesenchymal origin in teenagers and young adults [1,2]. Currently, most cases of OS are treated clinically by surgery combined with chemotherapy [3]. However, it is estimated that the incidence of OS is 4-5/1,000,000 and the 5-year survival rate for metastatic OS is less than 20% [4,5]. The origin and etiology of OS involve complicated genome rearrangement, highly variable patterns of gene expression, and high metastatic capacity [6,7]. Therefore, studies are needed to investigate the underlying molecular mechanisms and new strategies for diagnosis and treatment of OS.
MicroRNAs (miRNAs) are a large class of small non-coding RNAs consisting of approximately 20-25 nucleotides and have been demonstrated to regulate approximately 30% of genes via 3' untranslated regions [8,9]. Abnormal expression of miRNAs has been widely reported in cancer and plays key roles in many biological processes, such as cell proliferation, differentiation, and apoptosis [10,11]. Studies of OS in recent years have revealed important roles for miRNAs, such as miR-184, miR-664, and miR-885-5p, among others [12,13]. However, few studies have used high-throughput sequencing to evaluate OS tissues, compared with sequencing of OS cell lines. Additional studies of critical miRNAs and mRNAs, as well as their biological roles in OS tissues, are needed.
In the present study, we performed RNA-seq to investigate the miRNA and mRNA expression profiles of OS tissue. Bioinformatics analysis and further experiments were conducted to identify the key genes involved in OS dysregulation. Our study provides insight into the molecular mechanisms of OS.
Material and Methods
The research design was as follows.
Patients and controls
A total of 10 patients with primary OS (age range 10-63 years, 4 males/6 females) treated at the Department of Orthopedics of the Second Hospital of Jilin University between May 2017 and December 2018 were enrolled in this study. The diagnosis of OS was confirmed by pathological analysis. The patients had not been treated with radiotherapy or chemotherapy before surgery. Patients with concurrent congenital diseases or tumor-related diseases were excluded. OS tissues and their matched adjacent normal tissues were obtained from patients who underwent complete resection surgery. All tissue samples were immediately frozen in liquid nitrogen. This research was approved by the Ethics Committee of the Second Hospital of Jilin University (2016.169), and all patients involved in this research provided written informed consent. The clinical and demographic characteristics of the patients are summarized in Table 1.
miRNA and mRNA sequencing
The miRNA and mRNA sequencing of 3 pairs of OS and paracancerous tissues from patients with OS were performed at Beijing Novogene Co., Ltd. (Beijing, China). A total of 3 μg total RNA per sample was used as input material to prepare the miRNA and mRNA libraries.
Real-time RT-PCR analysis
Total RNA was reverse-transcribed to cDNA for miRNA and mRNA analyses using an All-in-one™ miRNA First-Strand cDNA Synthesis Kit (GeneCopoeia, Rockville, MD, USA) and the Oligo (dT) priming method (PrimeScript™ RT Reagent Kit; TaKaRa, Shiga, Japan), respectively. RT-qPCR (Applied Biosystems 7500, Foster City, CA, USA) was performed using Power SYBR® Green PCR Master Mix (Applied Biosystems). The expression levels of miRNAs and mRNAs were normalized to the levels of U6 and GAPDH, respectively.
Quantification and the fold-change of miRNA and mRNA expression were calculated with the 2^(−ΔΔCt) method. All experiments were performed in triplicate.
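As a small worked example of this calculation, the sketch below computes a 2^(−ΔΔCt) fold change from hypothetical Ct values; "target" stands for a gene of interest (e.g., UQCRC1 normalized to GAPDH, or miR-214-3p normalized to U6).

# A minimal sketch of the 2^(-ΔΔCt) fold-change calculation; the Ct values
# below are hypothetical placeholders, not measured data.
def fold_change(ct_target_case, ct_ref_case, ct_target_ctrl, ct_ref_ctrl):
    d_ct_case = ct_target_case - ct_ref_case      # ΔCt in tumor sample
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl      # ΔCt in paired normal tissue
    dd_ct = d_ct_case - d_ct_ctrl                 # ΔΔCt
    return 2.0 ** (-dd_ct)

# Hypothetical triplicate-averaged Ct values
print(fold_change(24.1, 18.0, 26.3, 18.1))        # >1 means up-regulation in tumor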
Functional enrichment analysis
The functions and pathway enrichment of the differentially expressed genes were analyzed using multiple online databases. Gene Ontology (GO, http://www.geneontology.org/) describes genes and gene products in terms of molecular function (MF), cellular component (CC), and biological process (BP) [14,15]. GO is a widely used method for identifying characteristic biological attributes of high-throughput genome or transcriptome data. The Kyoto Encyclopedia of Genes and Genomes (http://www.genome.jp/kegg/), which stores information on how molecules and genes are networked, was used for pathway mapping [16]. The Database for Annotation, Visualization and Integrated Discovery (https://david.ncifcrf.gov/) [17] provides gene annotation, visualization, and integrated discovery functions, and thus can systematically extract biological meaning from large gene or protein lists.
Integration of protein-protein interaction (PPI) network and module analysis
The Search Tool for the Retrieval of Interacting Genes (STRING) (http://string-db.org) [18] is an online tool designed to evaluate the differentially expressed mRNA-encoded proteins and PPI information. The OmicsBean (http://www.omicsbean.com:88/) database was used to construct a protein interaction relationship network and analyze the interaction relationships of differentially expressed candidate genes based on the STRING analysis results.
Cell Counting Kit-8 assay
The OS cell line (MG63) was seeded into 96-well plates at a density of 3000 cells per well in a volume of 100 µL, and proliferation ability was detected using the Cell Counting Kit-8 (Dojindo, Kumamoto, Japan).
Results

Differential expression of miRNAs and mRNAs in patients with OS
In total, 184 miRNAs and 2501 mRNAs were identified, among which 82 miRNAs and 1320 mRNAs were up-regulated, while 102 miRNAs and 1181 mRNAs were down-regulated (fold-change >2.0, P value <0.05). The expression profiles of miRNA and mRNA were distinguishable in hierarchical clustering analysis (Figure 1). Detailed information on the top 10 miRNAs (up- and down-regulated) is shown in Table 2.
GO and pathway enrichment analysis
GO analysis was conducted to identify the key roles performed by the differentially expressed genes. Cell growth and/or maintenance, lysosome, and extracellular matrix structural constituent were the most significantly enriched terms for the up-regulated genes in BP, CC, and MF, respectively (Figure 3A-3C). Energy pathways, mitochondrion, and oxidoreductase activity were the most significantly enriched terms for the down-regulated genes in each of the 3 categories (Figure 3D-3F). The pathway enrichment results showed that 54 pathways exhibited significantly differentially expressed genes, including 20 pathways with up-regulated genes and 34 pathways with down-regulated genes (Figure 3G, 3H).
PPI network of differentially expressed genes
Based on the STRING and OmicsBean databases, a PPI network of differentially expressed genes was established. A total of 410 interactions (edges), 40 proteins (nodes), and 10 pathways were involved in the PPI network according to their internal correlation (Figure 4). Importantly, ubiquinol-cytochrome c reductase core protein 1 (UQCRC1) played a critical role in the network: in all, 21 nodes and 7 pathways were found to interact with UQCRC1.
The regulatory function of miR-214-3p in OS was verified using an miRNA inhibitor or mimics to knock down or overexpress miR-214-3p. The expression of UQCRC1 in MG63 cells was examined by RT-qPCR. The results showed that UQCRC1 expression was up-regulated when miR-214-3p was knocked down (Figure 5E). When miR-214-3p was over-expressed by miR-214-3p mimics, UQCRC1 expression was substantially down-regulated (Figure 5F).
Effect of miR-214-3p on OS cell proliferation
The function of miR-214-3p in the proliferation of OS cells was measured by CCK-8 assay. The proliferation of MG63 cells was recorded every 24 h after transfection with the miRNA inhibitor or mimics. The proliferation of MG63 cells transfected with miR-214-3p mimics was significantly higher than that of control (non-transfected) cells on days 3, 4, 5, and 6. In contrast, OS cells transfected with miR-214-3p inhibitor exhibited significantly decreased proliferation (Figure 6).
Discussion
OS is a highly aggressive malignant bone tumor with high mortality, and its early diagnosis is extremely important for successful treatment. Studies are urgently needed to identify sensitive and specific biomarkers of OS. Over the past decade, numerous studies have evaluated the molecular mechanisms of OS, and multiple miRNAs and protein-coding genes have been described and identified as OS biomarkers [20][21][22][23]. However, the expression profiles of miRNA and mRNA in human OS tissues remain unclear. Furthermore, the regulatory relationship between miR-214-3p and UQCRC1 in OS has not been reported. In the current study, we determined the miRNA and mRNA expression profiles of OS tissue from patients. A total of 184 miRNAs and 2501 mRNAs were found to be differentially expressed (fold-change ≥ 2, P value < 0.05). Bioinformatics analysis revealed UQCRC1 as the key molecular target among the differentially expressed genes. We further predicted and verified that UQCRC1 is a direct target of miR-214-3p. Additionally, overexpression of miR-214-3p significantly facilitated the proliferation of OS cells (P < 0.05). In contrast, knockdown of miR-214-3p significantly decreased OS cell proliferation (P < 0.05).
However, pathway enrichment analysis of the differentially expressed genes did not reveal a large number of pathways involved in OS. Proteoglycans in cancer and the PI3K/AKT signaling pathway were significantly enriched among the up-regulated pathways. The phosphatidylinositol 3-kinase (PI3K)/Akt pathway is one of the most important oncogenic pathways in human cancer and a key target pathway for treating OS [24,25].
Several genes have been shown to be involved in the dysregulation of OS through the PI3K/Akt pathway, such as Wilms' tumor gene 1 [26], phosphatase and tensin homolog, mammalian target of rapamycin [25], and others. In our pathway enrichment results, 40 of the differentially expressed genes were enriched in the PI3K/Akt pathway. Further studies are needed to explore the potential roles of these genes in OS. UQCRC1 is a nuclear-encoded protein localized to the inner mitochondrial membrane [27]. Numerous studies have reported that UQCRC1 is dysregulated in cancer. For instance, in breast and ovarian cancers, the expression of UQCRC1 is up-regulated [28]. However, in clear cell renal cell carcinoma [29] and gastric cancer [30], UQCRC1 is significantly down-regulated. UQCRC1 was detected as a critical gene in our PPI network.
Furthermore, our results revealed that UQCRC1 was significantly down-regulated both in OS tissues and in OS cell lines (MG63, U2OS, HOS), suggesting a role in the process of OS tumorigenesis. The relationships between UQCRC1 and other key genes in the PPI network provide a basis for further studies.
When mRNAs cannot fully explain the mechanism of a disease, non-coding RNAs, as the main transcripts of genes, provide guidance for in-depth studies of the disease mechanism. miR-214, which lies within the sequence of the Dnm3os transcript, is excised from its precursor hairpin by the enzyme Dicer [31]. miR-214 plays an important role in regulating vital processes of the cell cycle, such as apoptosis, proliferation, and angiogenesis [32]. Recent studies reported that miR-214 is highly dysregulated and variable in multiple types of cancer, such as cervical cancer [33], pancreatic cancer [34], and OS [35]. However, few studies have examined the molecular mechanisms of miR-214-3p in the tumorigenesis of OS. In this study, we confirmed that miR-214-3p regulates the proliferation of OS cells by targeting UQCRC1, which may be an important regulatory target for treating OS.
In addition, the results indicate that up-regulation of miR-133a-5p weakens its anti-neoplastic effect.
Recent studies showed that miR-199a-5p is significantly associated with a variety of cancers: it suppresses tumorigenesis in hepatocellular carcinoma [41], is correlated with clear cell renal cell carcinoma [42], and promotes proliferation, metastasis, and epithelial-mesenchymal transition in cervical carcinoma [43], among other functions. More importantly, serum miR-199a-5p has been identified as a noninvasive biomarker for detecting and monitoring OS [44]. Our study showed that miR-199a-5p expression was significantly up-regulated in OS tissue. This result is consistent with previous research and expands the understanding of the important role of miR-199a-5p in OS.
There were several limitations to our study. First, few OS samples were available for sequencing, which may have affected the miRNA and mRNA expression profile data. Second, the precise targets of the miRNAs were not fully explored. Finally, the regulatory mechanisms linking the differentially expressed miRNAs and mRNAs require further analysis.
Conclusions
The miRNA and mRNA expression profiles of OS tissue from patients were identified. Differential expression of 184 miRNAs and 2501 mRNAs was detected (fold-change ≥ 2, P value < 0.05).
We also predicted and confirmed that UQCRC1 is a direct target of miR-214-3p. Additionally, miR-214-3p promotes OS cell proliferation by targeting UQCRC1. Our results provide insight for studies aimed at treating and preventing OS.
"year": 2019,
"sha1": "9c798b146f2be9344189f67fc20e057cf2d065ae",
"oa_license": "CCBYNCND",
"oa_url": "https://europepmc.org/articles/pmc6626500?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "9c798b146f2be9344189f67fc20e057cf2d065ae",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
OVERVIEW OF 3D DOCUMENTATION DATA AND TOOLS AVAILABLE FOR ARCHAEOLOGICAL RESEARCHES: CASE STUDY OF THE ROMANESQUE CHURCH OF DUGNY-SUR-MEUSE (FRANCE)
In this paper, the 3D documentation of the full structure of the Romanesque church of Dugny-sur-Meuse is discussed. In 2012 and 2013, a 3D recording project was carried out under the supervision of the Photogrammetry and Geomatics Research Group from INSA Strasbourg (France) in cooperation with C. Kraemer, archaeologist from Nancy (France). The goal of the project was, on the one hand, to propose new solutions and tools to the archaeologists in charge of the project, especially for stone-by-stone measurements. On the other hand, a simplified 3D model was required by the local authorities for communication purposes. To achieve these goals, several techniques were applied, namely GNSS measurements and accurate traverse networks, photogrammetric recordings, and terrestrial laser scanning acquisitions. The various acquired data are presented in this paper. Based on these data, several deliverables are also proposed. The generation of orthoimages from plane as well as cylindrical surfaces is considered. Moreover, the workflow for the creation of a simplified 3D model is also presented.
INTRODUCTION
The old church of Dugny-sur-Meuse in North-East France, replaced by a vaster neo-Gothic church in the second half of the XIXth century, dates in its original shape from the second half of the XIth or the second quarter of the XIIth century. Probably built in several stages, this edifice, which follows a basilical plan, remains fundamentally a flagship of the Romanesque architecture of rural Lorraine, despite late changes at the transept level and its fortification during the Hundred Years War. Moreover, it is for its hoardings that it is first mentioned in the famous reasoned dictionary of French architecture from the XIth to the XVIth century by the architect and archaeologist Eugène Viollet-le-Duc (Viollet le Duc, 1854-68).
Two communications related to the registration of the church as a Historic Monument on 28th December 1904 were presented at French archaeological congresses in Reims in 1933 and Nancy in 1991. Two other monographs are also devoted to it in two books giving a panorama of Romanesque architecture in Lorraine (Collin, 1983; Marschall and Slotta, 1984). The significant differences in the authors' points of view concerning the chronology of the original church, and the willingness of the local authorities to make this disused church a cultural centre, have largely motivated the recovery of the project, meanwhile enriched by a significant graphic documentation which completes the plan realized at the beginning of the XXth century simultaneously with the first photographs taken by the French specialist of medieval architecture Camille Enlart. This documentation was conducted on behalf of the Historical Monuments administration, at a level of detail insufficient for a fine study of building archaeology.
The use of 3D scanning and photogrammetry, already experimented with on other archaeological sites, religious (Landes et al., 2013) or not (D'Agostino et al., 2013), is intended to fill this gap while creating documentation that may be used for the patrimonial valorisation of the edifice. It also allows giving technical responses for graphical restitutions, particularly for cylindrical surfaces.
A 3D recording project was carried out with students from INSA Strasbourg (France) under the supervision of the Photogrammetry and Geomatics Research Group in 2012 and 2013 to answer the needs of both the archaeologist of the site, who cooperated during the project, and the local authorities, which supported it.
The first aim of the project was the generation of orthoimages of plane as well as cylindrical surfaces to assist the archaeologist in further studies of the church. Orthoimages are mainly used by archaeologists for stone-by-stone surveys. The second aim of the project was the creation of a simplified 3D model for the local authorities for communication purposes.
In order to achieve these goals, different types of measurement technologies were considered: GNSS measurements and accurate traverse network setup, terrestrial laser scanning (TLS) surveys, and photogrammetric recordings. Our group is experienced in merging several techniques in the field of cultural heritage modelling (Grussenmeyer et al., 2012).
In this paper, the different deliverables for structure study and conservation of historical buildings are listed in a related work section. Then the site under study and the different acquired data are presented. Finally, the creation processes of several 2D and 3D deliverables, namely orthoimages of plane surfaces and cylindrical surfaces and a simplified 3D model, are described.
RELATED WORK
Several deliverables can be considered for the structure study and the conservation of cultural heritage buildings. First of all, 3D models can be created based on point clouds. These models are either geometric models or meshed models. When the considered surface is complex and irregular and the required level of detail is high, a meshed model is used. Schueremans and Van Genechten (2009) used a meshed model for the assessment of the safety of the masonry vaults of Saint-Jacobs church, for example. Meshed models are very faithful to reality, but the associated files require a large volume of storage. For buildings constituted of geometric shapes (planes, cones, cylinders, etc.), geometric models are used. This type of model is lighter. Each element of the considered building is described by a geometric primitive through a segmentation process. Then, either the primitives are intersected between them or a boundary extraction is performed to obtain a wire-frame model (Macher et al., 2014). Hybrid models constitute a compromise between geometric models and meshed models. In this type of model, complex parts are meshed whereas continuous areas are modelled by geometric primitives or polylines. Kersten and Lindstaedt (2012) proposed a hybrid model of the imperial cathedral of Königslutter (Germany) composed of a CAD model and meshed models of small complex objects. Based on point clouds, 2D deliverables can also be created. Sectional views of churches are often considered for archaeological purposes (Soria-Medina et al., 2013). Landes et al. (2013) propose a semi-automatic method of sectional view creation based on the boundary extraction of geometric primitives. Orthoimages also constitute a 2D deliverable used to study the structure of historical buildings and determine the different steps of their construction. This study is achieved particularly through stone-by-stone surveys (Drap et al., 2000). As mentioned by Hemmleb and Wiedemann (1997), different methods can be applied to generate orthoimages depending on the type of surface. If the considered surface is a plane, a projective rectification is performed. It consists in determining coefficients which link the image plane and the projective plane thanks to control points. Digital unwrapping techniques can be applied to parametric surfaces such as cylinders and cones (Karras et al., 1996; Theodoropoulou et al., 2001; Hemmleb and Wiedemann, 1997; Meyer et al., 2004). Numerous architectural elements are described by such surfaces, for example towers, columns, or vaults. Considering undevelopable surfaces, different cases can be distinguished. For irregular surfaces, a polynomial transformation can be applied; the number of control points required and the risk of oscillations increase with the degree of the polynomial. For spherical surfaces, Guerra and Miniutti (2000) explore the cartographic projections offered by the Mapping Toolbox of Matlab. The choice of the projection depends on the needs; for example, Karras et al. (1997) advise an equivalent transformation for surface measurements. For more complex surfaces, differential rectification methods are used. These methods are widely used in aerial photogrammetry but rarely in architectural photogrammetry.
SITE UNDER STUDY
The village of Dugny is situated seven kilometres south of Verdun (France), on the left bank of the Meuse. The Romanesque church under study (Figure 1), located in the centre of its original habitat, is dedicated to the Nativity of the Virgin. This dedication, combined with the discovery of fragments of chancel and sarcophagi from the High Middle Ages found in the same area, close to the remains of a Gallo-Roman sanctuary, argues for the existence of a former Christian worship place by the tenth century. Deconsecrated in 1870 during the construction of the neo-Gothic church nearby, the Romanesque church, whose bell tower retains elements of defensive architecture attributed to the fourteenth century by Viollet-le-Duc, is one of the few fortified churches still visible in the Meuse region. This church, whose size is 17 by 29 m, is composed of: three naves having an apparent roofing framework with four spans of 2.30 m, and two aisles; a fore-choir with a circular apse decorated with frescoes of the XVth century, whose elevation of at least 60 cm from the beginning conceals the bases of square pillars; a span pursued by an apse and two apse chapels; a porch tower of 17 m height under the hoarding and 23 m under the ridge, located at the west on the first span of the nave and fully integrated into its construction; and finally, under the choir, a corridor crypt (Figure 2a) with a T shape, covered by a barrel vault of at most 2.35 m high, which could result from the elevation of the choir in a posterior phase, contemporary, perhaps, with the fortification of the church. Other works are carried out during the XIXth century on the roof of the porch tower and its south wall. The operation is repeated in 1822 and then regularly from 1856, when the tower of the choir is levelled to the nave's roofing plan and the walls of the aisles are levelled, until a decision is made to build a new, larger church for a growing community. All the other works are carried out under the protection of the edifice, classified as a historical monument in 1904. This is the case of the choir vaults and the preserved apse, restored in 1934, and, in the fifties, of the lower parts of the porch tower and its roofing. All these works of maintenance, repair, consolidation, and fortification, some of which are deduced from a visual examination of the edifice, participate in its history and deserve to be analysed in a building archaeology or monumental approach, which consists in the deconstruction of the object in order to examine successively its different parts and aspects and then in its reconstruction according to a chronological scenario (Guild, 2005), and deserve beyond that to be analysed in an anthropological concern of the construction, in other words in a concern of archaeology of the site (Bessac, 2005).
DATA ACQUISITION
Different types of acquisitions were carried out to document the church. They are presented in this section.
Definition of a reference framework
A reference framework was defined by geodetic GNSS measurements (Figure 3). Based on the measured points, an accurate traverse measurement was carried out with the Leica TS02 total station all around the church. This traverse was notably used to survey spheres placed in the scene during the terrestrial laser scanning (TLS) acquisitions. The georeferencing of the TLS data is then performed with this survey.
Terrestrial laser scanning acquisitions
The interior and exterior of the church were scanned with the Faro Focus 3D S120 laser scanner. Figure 4 presents an overview map of the church with the locations of the scans and their heights. Spheres were placed in the scene for the registration of the point clouds in a post-processing step, and some spheres were surveyed for the georeferencing of the point clouds.
Figure 4. Locations of scans in an overview map

13 scans were realized outside the church and 17 scans were realized inside the church. A density of one point every 6 mm at 10 metres was chosen, except for a scan located in the centre of the church, for which a density of one point every 3 mm was chosen.
To complete these scans, 3 scans were carried out with the Leica C10 scanner for the acquisition of the roofs and the porch tower. Indeed, the measuring range of this scanner, of almost 300 metres, is higher. A density of one point every 6 mm at 10 m was chosen. A direct georeferencing supported by the accurate traverse was used with this scanner.
Figure 5. Colorized point cloud of the church (24 million points)

Figure 5 shows the complete point cloud of the church, spatially resampled at 1 cm. Based on this point cloud, it is already possible for archaeologists working on the church to study its internal and external envelopes, considering for example sections in the point cloud. The point cloud is a visual support as well as a measurement support, but it is not adapted to stone-by-stone surveys. Indeed, the creation of orthoimages from the point cloud provides an insufficient resolution for that purpose.
Photogrammetric recordings
For the photogrammetric recordings, a Canon EOS 5D Mark II digital camera was used. It provides a 24 × 36 mm sensor with a pixel size of 6.5 microns. Several lenses were used, namely lenses of 28 mm and 50 mm respectively. A pixel size of approximately 5 mm was achieved on the object.
Different well-known rules were respected during the recordings. The focal length was notably kept constant. Photographs were taken all around the church for the exterior; two bundles were realized, for the lower part and for the higher part of the church. Considering the interior of the church, photographs were taken for each subpart of the structure, that is, along each span of the church.
GENERATION OF ORTHOIMAGES
In order to study the church, orthoimages were created. The projective rectification is well known and a few tools already exist. However, a tool was developed for the generation of orthoimages of plane surfaces but also of cylindrical surfaces, which are encountered in the church under study. Not only were the orthoimages used for the study of the church, but they were also used for the creation and the texturing of a simplified 3D model, as mentioned hereafter.
Projective rectification
If the object surface describes a plane, a projective rectification is used. A geometric transformation between the image plane and the projective plane is necessary to perform this rectification. The equations for projective rectification are given as:

X = (a1·x + a2·y + a3) / (c1·x + c2·y + 1)
Y = (b1·x + b2·y + b3) / (c1·x + c2·y + 1)

where X, Y = object coordinates; x, y = image coordinates; a1, a2, ..., c2 = rectification coefficients. Control points in the object plane are required for the calculation of the eight unknown coefficients of the projective transformation. Regarding the number of unknown coefficients, a minimum of 4 control points is necessary. Well-distributed control points were selected directly in the raw point cloud of the plane.
If several photographs describe the same plane, they are rectified individually and then grouped into a mosaic of orthoimages. They are grouped thanks to the object coordinate system linked with them. Figure 6b presents an example of an orthoimage mosaic created with 2 rectified photographs. The presented mosaic (Figure 6b) was assessed by comparing distances in the mosaic with the same distances in the raw point cloud. For 40 measurements from 0 to 16 metres, almost 70% of the deviations lie between −1 cm and 1 cm.
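To make the rectification step concrete, here is a minimal numpy sketch that estimates the eight coefficients from four or more control points by linear least squares and then rectifies an image point; the control-point values are illustrative, not measurements from the project.

```python
import numpy as np

def fit_projective(img_pts, obj_pts):
    """Estimate the eight coefficients a1..c2 of
    X = (a1*x + a2*y + a3) / (c1*x + c2*y + 1)
    Y = (b1*x + b2*y + b3) / (c1*x + c2*y + 1)
    from >= 4 control points (least squares when more are given)."""
    rows, rhs = [], []
    for (x, y), (X, Y) in zip(img_pts, obj_pts):
        rows.append([x, y, 1, 0, 0, 0, -x * X, -y * X]); rhs.append(X)
        rows.append([0, 0, 0, x, y, 1, -x * Y, -y * Y]); rhs.append(Y)
    coeffs, *_ = np.linalg.lstsq(np.asarray(rows, float),
                                 np.asarray(rhs, float), rcond=None)
    return coeffs

def rectify(coeffs, x, y):
    """Map one image point (x, y) to object-plane coordinates (X, Y)."""
    a1, a2, a3, b1, b2, b3, c1, c2 = coeffs
    w = c1 * x + c2 * y + 1.0
    return (a1 * x + a2 * y + a3) / w, (b1 * x + b2 * y + b3) / w

# Illustrative control points: pixel coordinates -> object-plane metres.
img_pts = [(10, 20), (800, 30), (790, 600), (15, 580)]
obj_pts = [(0.0, 0.0), (16.0, 0.0), (16.0, 12.0), (0.0, 12.0)]
c = fit_projective(img_pts, obj_pts)
print(rectify(c, 10, 20))  # approximately (0.0, 0.0)
```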
Unwrapping of cylindrical surfaces
Cylindrical surfaces are also encountered in the church under study, for example the choir. The creation of orthoimages of cylindrical surfaces was thus studied. For this type of surface, photographs, control points, and also a point cloud of the surface are required. A surface unwrapping technique was used (Karras et al., 1996; Theodoropoulou et al., 2001).
The idea is to unwrap the considered cylindrical entity into a planar entity. Translations and rotations of the cylinder are performed so that its revolution axis coincides with the vertical axis. By using the parameters of the cylinder calculated with the point cloud, the cylinder is unwrapped as a plane.
Based on the developed point cloud, a regular grid is created.
The interval between points of this grid corresponds to the pixel size in the final image and is fixed by the user.The defined grid is then rewrapped with an inverse transformation.
In order to determine the position of the grid points in the image, a Direct Linear Transformation (DLT) is used. This transformation allows moving from 3D coordinates to 2D coordinates through the following equations:

x = (L1·X + L2·Y + L3·Z + L4) / (L9·X + L10·Y + L11·Z + 1)
y = (L5·X + L6·Y + L7·Z + L8) / (L9·X + L10·Y + L11·Z + 1)

where X, Y, Z = object coordinates; x, y = image coordinates; L1, L2, ..., L11 = DLT parameters. A minimum of 6 control points is required to determine the eleven parameters of the DLT. As previously, well-distributed control points were selected in the raw point cloud of the considered cylindrical surface.
Once the DLT parameters are determined, the colour value of each point of the grid can be interpolated and the orthoimage can be generated by applying a colour to each pixel.
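The two geometric steps, unwrapping the cylinder and projecting grid points back into the photograph via the DLT, can be sketched as follows in numpy. The sketch assumes the cylinder has already been translated and rotated so that its axis is vertical through the origin, and the demo values are illustrative.

```python
import numpy as np

def unwrap_cylinder(points, radius):
    """Unwrap 3D points lying on a vertical cylinder (axis = z axis through
    the origin) into a plane: abscissa = developed arc length r*theta,
    ordinate = z."""
    x, y, z = points.T
    theta = np.arctan2(y, x)
    return np.column_stack([radius * theta, z])

def fit_dlt(obj_pts, img_pts):
    """Estimate the 11 DLT parameters L1..L11 from >= 6 3D<->2D control
    points, by linearizing the two projection equations above."""
    rows, rhs = [], []
    for (X, Y, Z), (x, y) in zip(obj_pts, img_pts):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -x*X, -x*Y, -x*Z]); rhs.append(x)
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -y*X, -y*Y, -y*Z]); rhs.append(y)
    L, *_ = np.linalg.lstsq(np.asarray(rows, float),
                            np.asarray(rhs, float), rcond=None)
    return L

def project(L, X, Y, Z):
    """Project one rewrapped grid point into the photograph via the DLT."""
    w = L[8]*X + L[9]*Y + L[10]*Z + 1.0
    return ((L[0]*X + L[1]*Y + L[2]*Z + L[3]) / w,
            (L[4]*X + L[5]*Y + L[6]*Z + L[7]) / w)

# Illustrative check: two points on a radius-2 cylinder.
pts = np.array([[2.0, 0.0, 0.0], [0.0, 2.0, 1.0]])
print(unwrap_cylinder(pts, 2.0))  # [[0. 0.], [~3.14 1.]]
```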
The unwrapping technique was used for the choir of the church, as depicted in Figure 7. Four photographs were used to create a mosaic of orthoimages of the cylindrical surface. The mosaic of orthoimages of the choir (Figure 7c) was assessed by comparing distances in the mosaic with the same distances in the raw point cloud previously unwrapped. For 40 measurements from 0 to 6 metres, almost 80% of the deviations lie between −1 cm and 1 cm.
Photogrammetric plot
Based on the mosaics of orthoimages, the archaeologist of the site can realize photogrammetric plots. A plot consists in a stone-by-stone survey made directly on the mosaic. Boundaries of stones as well as interstitial cement are represented.
An example of a photogrammetric plot carried out by the archaeologist of the site is presented in Figure 8. The stone-by-stone survey allows representing different phases in the structure. Figure 8a presents the stone-by-stone survey, and Figure 8b shows the phasing obtained based on it. The phasing was realised thanks to structural data (morphology of stones), aesthetics (Romanesque capitals, framing of bays), and the chronology of bibliographical and historical data. Archaeologists generally represent this type of plot at a scale of 1:20. As the deviations are of the order of magnitude of 1 cm, they represent only 0.5 mm in the plot.
The quality of the mosaics of orthoimages is thus very satisfactory for carrying out stone-by-stone measurements.
CREATION OF A SIMPLIFIED 3D MODEL
For communication purposes, a simplified 3D model was created for the local authorities. The different steps of the followed workflow are presented in Figure 9. A few refinements need to be applied to the result. Moreover, the modelling of cylindrical surfaces is not proposed. However, the Building Extractor tool is easy to use and allows modelling the church very quickly; thus, this tool seems to be a good solution for the creation of simplified models of buildings that are composed mainly of planes. A tool involving well-known techniques in photogrammetry, namely the projective rectification technique and the surface unwrapping technique, was created to generate orthoimages of plane surfaces and cylindrical surfaces. This tool was applied to the church, and the resulting orthoimage mosaics allow the archaeological study of the church through stone-by-stone surveys. For the processing of facades composed of multiple parallel vertical planes, the stone-by-stone survey is realised separately in mosaics involving the different planes, and the drawings are then combined to avoid distortions. The need to adapt the tool to archaeological issues (Duval et al., 2006; Giuliato et al., 2013) involved a close collaboration between archaeologists and surveyors. It may also be appropriate, regarding new technologies, to revisit the requirements of archaeological drawing and adapt the settings of the measurement process and the representation approach (Saint-Aubin, 2008).
A workflow was also proposed for the creation of a simplified 3D model. The 3D PDF format was an easy way to distribute the obtained model. The workflow involves different software packages and is quite long. However, since the end of the project, new tools have been released, for example the Building Extractor tool of the 3DReshaper software. This tool seems to allow a faster creation of simplified 3D models and will be studied in more detail in future work.
Figure 3. Geodetic GNSS measurements around the church
Figure 7. Unwrapping of a cylindrical surface: (a) colorized point cloud of the surface; (b) photographs; (c) mosaic of orthoimages
First, sections are created from the point cloud of the church through the RealWorks software (Trimble). Horizontal as well as vertical sections are exported in the form of 3D polylines. Horizontal sections are created at characteristic heights of the structure, whereas vertical sections are considered only for the ground and form a squaring. The exported 3D polylines are simplified in AutoCAD (Autodesk) by reducing the number of points which compose them. Then, based on the horizontal sections, surfaces are created in SketchUp (Trimble) and extruded successively between them. Some details, such as discontinuities, are modelled with orthoimages. Considering the vertical sections describing the squaring of the ground, the "Sandbox" tool provided by SketchUp allows creating the surface which follows this squaring thanks to the creation of a triangulated irregular network (TIN). Next, the obtained 3D model is textured with the generated mosaics of orthoimages, whose sizes were determined beforehand. For the roofs and the ground, the same textures are repeated.
Figure 10. Final 3D model (3D PDF format)

A high accuracy was not required for this 3D model; however, its size is proportional to the reality. Regarding 30 distances
Figure 11. Meshed model obtained with 3DReshaper tools

The state of disrepair of the choir and the apses requires, around 1618, substantial works registered in a quote preserved in the departmental archives of Moselle, in the fund of the abbey of St. Vincent of Metz, which was then collative of Dugny's church (departmental archives of Moselle, H 2361). The tower of the choir is then raised up to the levelling course of the walls of the nave. The figurative representation of the church taken from a plan of 1689 (Figure 1a; departmental archives of the Meuse, E deposit 123 DD7) confirms these changes but does not suggest the existence of a collateral in front of the lateral south wall of the nave.
"year": 2015,
"sha1": "ab312a521a6cf808498b65efb0a6fcfa7e807836",
"oa_license": "CCBY",
"oa_url": "https://isprs-archives.copernicus.org/articles/XL-5-W7/323/2015/isprsarchives-XL-5-W7-323-2015.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "ab312a521a6cf808498b65efb0a6fcfa7e807836",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Engineering"
]
} |
A Fusion Method of Optical Image and SAR Image Based on Dense-UGAN and Gram–Schmidt Transformation
Abstract: To solve problems such as obvious speckle noise and serious spectral distortion when existing fusion methods are applied to the fusion of optical and SAR images, this paper proposes a fusion method for optical and SAR images based on Dense-UGAN and Gram–Schmidt transformation. Firstly, a dense connection with a U-shaped network (Dense-UGAN) is used in the GAN generator to deepen the network structure and obtain deeper source-image information. Secondly, according to the particularity of the SAR imaging mechanism, an SGLCM loss for preserving SAR texture features and a PSNR loss for reducing SAR speckle noise are introduced into the generator loss function. Meanwhile, in order to keep more of the SAR image structure, an SSIM loss is introduced into the discriminator loss function to make the generated image retain more spatial features. In this way, the generated high-resolution image has both optical contour characteristics and SAR texture characteristics. Finally, the GS transformation of the optical and generated images retains the necessary spectral properties. Experimental results show that the proposed method can well preserve the spectral information of optical images and the texture information of SAR images while also reducing the generation of speckle noise. The metric scores are superior to those of other algorithms that currently perform well.
Introduction
With the development of remote sensing imaging technology, the advantages of various remote sensing images, such as resolution and readability, have been greatly improved. However, images from a single source inevitably encounter problems such as a single imaging mode and narrow applicable scenes, which make them difficult to utilize well [1]. Synthetic Aperture Radar (SAR) is an active side-looking radar system [2] that has high spatial resolution, can image all day and in all weather, and is sensitive to target ground objects, especially land, water, and buildings. SAR images contain rich texture characteristics and detailed information [3]. Optical imaging depends on the reflection of sunlight from the surface of ground objects; it directly reflects spectral information and contour features and has excellent visual effects. Therefore, fusing optical and SAR images can combine their effective information, so as to depict the scene accurately and display ground features from multiple angles. This has important applications in military reconnaissance and target detection [4].
At present, fusion methods for optical and SAR images can be roughly classified into two categories: transform-domain methods and spatial-domain methods [5]. The transform-domain method is an image fusion method based on traditional multi-scale transformation theory. Firstly, the source image is decomposed; then the decomposed sub-images are fused with appropriate fusion rules; finally, the sub-images are reconstructed to obtain the fused image.
GAN
In 2014, Goodfellow et al. [19] proposed an adversarial generative model based on a two-person zero-sum game. The original GAN consists of two parts: a generator G and a discriminator D. The generator is used to capture the data distribution and describe how the data are generated. The discriminator is used to distinguish the data generated by the generator from the real data. The model is widely used in image generation, style transfer, data enhancement, and other fields. In this network, the input of the generator is random noise z; after being processed by the generator, the output data G(z) are input into the discriminator D for judgment, and D outputs a true-or-false judgment result, namely D(G(z)), which indicates the probability that G(z) is close to real data. When the output probability is close to 1, the generated data G(z) are close to real data; otherwise, G(z) is fake data. Therefore, in the training process, the goal of the generator is to generate data as close to the real data as possible, while the discriminator tries as accurately as possible to discriminate the data generated by the generator as fake. The generator and the discriminator constantly play against each other; when the data generated by the generator can no longer be discriminated by the discriminator, the network reaches the "Nash equilibrium". The target loss function can be expressed as:

min_G max_D V(D, G) = E_{x∼P_data}[log D(x)] + E_{z∼P_z}[log(1 − D(G(z)))]

where E(·) represents the mathematical expectation of the distribution function; P_data represents the distribution of real data; z represents random noise, that is, the input of generator G; P_z represents the distribution of random noise z; P_g represents the data distribution generated by the generator; G represents the generator; and D represents the discriminator.
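The alternating optimization implied by this minimax objective can be sketched as a short PyTorch training loop; the toy networks and data below are illustrative stand-ins, not the architecture used in this paper.

```python
import torch
import torch.nn as nn

# Toy generator/discriminator standing in for any GAN; the alternating
# update below is the generic adversarial scheme described above.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
D = nn.Sequential(nn.Linear(2, 16), nn.LeakyReLU(0.2),
                  nn.Linear(16, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

real = torch.randn(64, 2) * 0.5 + 2.0            # stand-in for P_data
for step in range(200):
    z = torch.randn(64, 8)                       # z ~ P_z
    fake = G(z)

    # Discriminator step: push D(real) -> 1 and D(G(z)) -> 0.
    loss_d = (bce(D(real), torch.ones(64, 1)) +
              bce(D(fake.detach()), torch.zeros(64, 1)))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: push D(G(z)) -> 1, i.e. fool the discriminator.
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```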
Gram-Schmidt (GS) Transformation
GS transform is a common method in linear algebra and multivariate statistics. It eliminates redundant information by orthogonalizing matrices or multidimensional images, similar to the PCA transform. Figure 1 is the flow chart of GS fusion. The method first combines the multispectral bands according to certain weights to obtain a gray image, which is regarded as GS1 [20]. Then, GS1 is used to perform the forward GS transformation with the multispectral bands. Next, the means and standard deviations of GS1 and the generated image are calculated, and histogram matching is performed on the generated image to simulate GS1 [18]. Finally, the matched generated image is used to replace GS1 for the inverse GS transformation, and the fused image is obtained.
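The flow of Figure 1 can be summarized in a short numpy sketch. This is a simplified component-substitution variant (moment matching instead of full histogram matching, covariance-based injection gains) offered as an illustration under those assumptions, not the exact implementation used in the paper.

```python
import numpy as np

def gs_fuse(ms, pan, weights=None):
    """Simplified Gram-Schmidt component substitution.
    ms: (bands, H, W) multispectral image; pan: (H, W) high-resolution
    image (here, the generator output), already co-registered."""
    bands = ms.shape[0]
    w = np.full(bands, 1.0 / bands) if weights is None else np.asarray(weights)

    # Step 1: simulate the low-resolution panchromatic band GS1.
    gs1 = np.tensordot(w, ms, axes=1)

    # Step 2: match the mean/std of pan to GS1 (moment-based matching,
    # a simplification of the histogram matching described above).
    pan_m = (pan - pan.mean()) * (gs1.std() / pan.std()) + gs1.mean()

    # Step 3: inject the detail (pan_m - GS1) into each band with a gain
    # proportional to its covariance with GS1.
    detail = pan_m - gs1
    fused = np.empty_like(ms, dtype=float)
    for k in range(bands):
        g = np.cov(ms[k].ravel(), gs1.ravel())[0, 1] / np.cov(gs1.ravel())
        fused[k] = ms[k] + g * detail
    return fused

ms = np.random.rand(3, 64, 64)   # stand-in multispectral bands
pan = np.random.rand(64, 64)     # stand-in generated/high-res image
print(gs_fuse(ms, pan).shape)    # (3, 64, 64)
```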
Overall Structure
The Dense-UGAN network structure proposed in this paper includes a generator and a discriminator. When training the network, the registered optical and SAR images are fed into the generation network in the form of one image with multiple channels. Then, the generated single image and the label images (optical image and SAR image) are introduced into the discrimination model to complete the binary classification task. Finally, a high-quality generated image is output. After the network training is completed, we only need to import the registered optical and SAR cascade images into the trained generator and then perform the GS transformation on the output image and the optical image to obtain the final fusion result. Figure 2 shows the framework of the proposed fusion method.
Network Architecture of Generator
The purpose of the generator is to extract more and deeper image features to generate a new fused image. However, a traditional convolutional network or fully connected network inevitably suffers from problems such as information loss and wastage when transmitting information; at the same time, when there are too many layers, problems such as gradient vanishing or gradient explosion occur, making the network impossible to train. Therefore, based on the traditional GAN model, this paper uses the combination of dense connection and a U-shaped network to reconstruct the generator network structure. As shown in Figure 3, the generator is composed of a dense connection network module, mainly composed of four convolution layers, and a U-shaped network module, mainly composed of six convolution layers and five deconvolution layers. The dense connection network establishes cross-layer connections between the earlier convolution layers and the later layers, which improves the network's performance and potential through feature reuse, yields a compact model that is easy to train and parameter-efficient, and mines deep features efficiently. In the U-shaped network module, the kernel sliding stride of the C2 and C4 convolutional layers is set to 2 in order to downsample the input, halving the size of the feature map correspondingly. In the decoding process, two deconvolution modules are used to upsample and recover the feature map. In addition, the U-shaped network in this paper alternates 6 convolution layers with a stride of 1 and 4 convolution layers with a stride of 2, which reduces the compression degree of the feature map and preserves the feature completeness of the input image.
Each convolution layer and deconvolution layer in the generator has a BatchNorm layer [21] and the activation function LeakyReLU [22]. The last deconvolution layer uses Tanh as the activation function to normalize the results to the interval (−1, 1), thus realizing output normalization.
For the feature map obtained at each layer of the U-shaped network decoder, we used skip connections, which is equivalent to sequentially introducing, during the upsampling process, feature maps of the original images that have the same resolution and contain intuitive low-level semantic information. Each such feature map is concatenated with the feature map obtained by upsampling and then convolved to perform cross-channel information integration, which helps the decoder part of the network recover the image information at the lowest cost.
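A minimal PyTorch sketch of the two building blocks described above, a four-layer dense block and the decoder skip connection, is given below; the growth rate and channel counts are assumptions for illustration, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Four conv layers with dense (cross-layer) connections: each layer
    receives the concatenation of all previously produced feature maps."""
    def __init__(self, in_ch, growth=32):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_ch
        for _ in range(4):
            self.layers.append(nn.Sequential(
                nn.Conv2d(ch, growth, 3, padding=1),
                nn.BatchNorm2d(growth),
                nn.LeakyReLU(0.2)))
            ch += growth  # the next layer sees all features produced so far

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)

# Skip connection as used in the U-shaped decoder: concatenate the encoder
# feature map of matching resolution with the upsampled decoder features.
def skip_cat(decoder_feat, encoder_feat):
    return torch.cat([decoder_feat, encoder_feat], dim=1)

x = torch.randn(1, 2, 128, 128)        # optical + SAR stacked as channels
print(DenseBlock(2)(x).shape)          # torch.Size([1, 130, 128, 128])
```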
Network Architecture of Discriminator

The purpose of the discriminator is to distinguish whether the target image is an image generated by the generator or a real image, and then classify the target image by feature extraction. Its network structure is shown in Figure 4. The discriminator is a 5-layer convolutional neural network: from the first layer to the fourth layer, 3 × 3 filters are used with the stride set to 2, and these four layers all use BatchNorm to normalize the data and LeakyReLU as the activation function. The last layer is a linear layer for classification.
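The layout just described translates directly into a short PyTorch sketch; the channel widths are assumptions for illustration, and only the layer arrangement follows the paper.

```python
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """Sketch of the 5-layer discriminator: four 3x3/stride-2 convolutions
    with BatchNorm + LeakyReLU, then a linear classification layer.
    Channel widths (32..256) are assumptions, not the paper's values."""
    def __init__(self, ch=(32, 64, 128, 256), in_size=128):
        super().__init__()
        layers, prev = [], 1
        for c in ch:
            layers += [nn.Conv2d(prev, c, 3, stride=2, padding=1),
                       nn.BatchNorm2d(c), nn.LeakyReLU(0.2)]
            prev = c
        self.features = nn.Sequential(*layers)
        feat = in_size // 2 ** len(ch)     # spatial size after 4 halvings
        self.classifier = nn.Linear(prev * feat * feat, 1)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

print(Discriminator()(torch.randn(1, 1, 128, 128)).shape)  # [1, 1]
```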
Loss Function
The loss function can be used to measure the gap between the network output and the data labels. The loss function in this method consists of two parts: the loss function L_G of the generator and the loss function L_D of the discriminator; the ultimate goal is to minimize the loss functions and obtain the best training model.
Generator Loss Function
In the fusion process of optical and SAR images, it is desirable to preserve the contour information of the optical image and the texture details of the SAR image. Therefore, the loss function of the generator is mainly considered in four parts, which can be expressed as:

L_G = L_GAN(G) + λ·L_L1(G) + µ·L_SGLCM(G) + η·L_PSNR(G)

in which L_GAN(G) is the adversarial loss, L_L1(G) is the content loss, L_SGLCM(G) is the texture feature loss, and L_PSNR(G) is the peak signal-to-noise ratio loss, each of which is described in detail below; λ, µ, and η are the weight coefficients balancing the four loss functions.
• Adversarial loss

L_GAN(G) is the adversarial loss between the generator and the discriminator, which can be expressed as:

L_GAN(G) = (1/N) Σ_{n=1}^{N} (D_θd(I_f^n) − c)²

in which I_f represents the generated image, N represents the number of fused images, D_θd(I_f^n) represents the classification result, and c is the value that the generator wants the discriminator to believe for fake data.

• Content loss

The luminance information of the optical image is characterized by its pixel intensity, while the texture detail information of the SAR image can be partially characterized by its gradient. Therefore, in order to obtain a fused image with an intensity similar to the optical image and a gradient similar to the SAR image, we use L_L1(G) to express the content loss in the generation process:

L_L1(G) = (1/(H·W)) · ( ||I_f − I_v||²_F + ξ·||∇I_f − ∇I_s||²_F )

where H and W represent the height and width of the input image, respectively; ||·||_F represents the Frobenius norm of a matrix; ∇ represents the gradient operator; and ξ controls the weight between the two items [12].
• Texture feature loss

SAR images do not contain spectral color information, so texture feature analysis is particularly important. The texture of a SAR image is, in essence, the repeated appearance of specific gray levels at spatial positions; the gray-level correlation between two pixels at a certain distance in image space represents the correlation characteristics of the image texture. The gray-level co-occurrence matrix is defined as the joint distribution probability of pixel pairs, reflecting relevant indexes of the image by counting the frequency distribution of two gray levels in a specified spatial arrangement. It not only reflects comprehensive information on the image gray levels regarding adjacent direction, adjacent interval, and change amplitude, but also reflects the positional distribution characteristics among pixels with the same gray level; it is the basis for calculating texture features. Therefore, in order to make the generated image and the input SAR image have similar texture features, this paper introduces the L1 norm between the gray-level co-occurrence matrices of the generated image and the SAR image as a measure of texture similarity:

L_SGLCM(G) = ||GLCM(I_f) − GLCM(I_s)||_1

where GLCM(·) represents the gray-level co-occurrence matrix of the image. In addition, contrast measures the distribution of matrix values and the amount of local change in the image, reflecting the clarity of the image and the depth of the texture; energy measures the uniformity of the gray-level distribution; variance measures the local variation of the image texture, reflecting the degree of dispersion between image pixel values and the mean value; and homogeneity measures the similarity of image gray levels in the row and column directions, reflecting the local gray-level correlation. Therefore, in order to make full use of the texture features of SAR images, this method mainly introduces four texture features: contrast, energy, variance, and homogeneity.

• Peak signal-to-noise ratio loss

While minimizing the loss of texture details and edges of the SAR image, it is easy to produce speckle noise when fusing optical and SAR images. The peak signal-to-noise ratio is based on the error between corresponding pixels and can be used to measure the noise level in an image. Therefore, to make the generated image contain less noise and reduce image distortion, this paper introduces the peak signal-to-noise ratio loss to improve image quality. The PSNR between two images is calculated as:

PSNR = 10·log10(MAX² / MSE)

where MSE is the mean square error between the two images and MAX represents the maximum value of the image point color, which is 255 for 8-bit sampling. The loss L_PSNR(G) is built from PSNR(I_f, I_v) and PSNR(I_f, I_s). For the weight between PSNR(I_f, I_v) and PSNR(I_f, I_s), we use a pixel normalization method: the pixel points v and s are taken in the pixel histograms of the optical image and the SAR image where the area difference from the highest pixel intensity value in each histogram is 0.5 (frequent and continuous), and the weight ratio between them is obtained. The specific results are shown in Figure 5.
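Since the paper does not spell out its GLCM configuration, the following numpy sketch assumes a single horizontal offset and 16 quantization levels; it computes a normalized co-occurrence matrix, the L1 texture distance used in L_SGLCM, and the contrast feature.

```python
import numpy as np

def glcm(img, levels=16):
    """Normalized gray-level co-occurrence matrix for the horizontal
    offset (1, 0); img is a 2-D uint8 array requantized to `levels` bins."""
    q = (img.astype(int) * levels // 256).clip(0, levels - 1)
    p = np.zeros((levels, levels))
    np.add.at(p, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)
    return p / p.sum()

def sglcm_loss(fused, sar):
    """L1 distance between the GLCMs of the fused and SAR images."""
    return np.abs(glcm(fused) - glcm(sar)).sum()

def contrast(p):
    """GLCM contrast feature: sum over (i - j)^2 * p(i, j)."""
    i, j = np.indices(p.shape)
    return ((i - j) ** 2 * p).sum()

# Illustrative usage with random stand-in images.
img1 = (np.random.rand(64, 64) * 255).astype(np.uint8)
img2 = (np.random.rand(64, 64) * 255).astype(np.uint8)
print(sglcm_loss(img1, img2), contrast(glcm(img1)))
```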
Discriminator Loss Function
In actuality, even in the absence of a discriminator, a fused image containing some information from the optical and SAR images can be obtained with this method; however, the result is not particularly good. Therefore, in order to improve the image generated by the generator, we introduce the discriminator and establish a confrontation game between the generator and the discriminator to adjust the generated image. Formally, the loss function of the discriminator includes two parts: one is the adversarial loss L_GAN(D) between the generator and the discriminator; the other is the structural similarity loss L_SSIM(D), which will be described in detail below. This can be expressed as:

L_D = L_GAN(D) + δ·L_SSIM(D)

Among them, δ is the weight coefficient.
• Adversarial loss

L_GAN(D) = (1/N) Σ_{n=1}^{N} (D_θd(I_v^n) − b)² + (1/N) Σ_{n=1}^{N} (D_θd(I_f^n) − a)²

where a and b respectively represent the labels of the generated image I_f and the optical image I_v, and D_θd(I_f) and D_θd(I_v) respectively represent the classification results for the generated image and the optical image.
• SSIM loss
When the human eye observes an image, it actually extracts the structural information of the image, not the error between image pixels [23]. The peak signal-to-noise ratio loss function improves image quality based on error sensitivity and does not take into account the visual characteristics of the human eye. Structural similarity is an evaluation criterion based on structural information that measures the degree of similarity between images; it can overcome the influence of texture changes caused by illumination changes and is better suited to human subjective visual perception. By calculating the structural similarity loss between the generated image and the SAR image, the generated image can retain more of the rich texture features of the SAR image and generate edge details consistent with the human visual system.
where SSIM(I_f, I_s) represents the structural similarity (SSIM) index of image blocks in the generated image and SAR image, which can be calculated as SSIM(x, y) = ((2 µ_x µ_y + C1)(2 σ_xy + C2)) / ((µ_x² + µ_y² + C1)(σ_x² + σ_y² + C2)). In the formula, µ_x and µ_y represent the average gray levels of images x and y, σ_x² and σ_y² their variances, and σ_xy the covariance between image x and image y; C1 and C2 are non-zero constants introduced to avoid system instability when µ_x² + µ_y² and σ_x² + σ_y² are close to 0. The value range of the SSIM function is [0,1]: the larger the value, the smaller the image distortion and the more similar the two images are.
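The SSIM index can be computed directly from these statistics. The sketch below uses global means and variances over a block for brevity; the usual sliding-window averaging is omitted, and the constants C1 and C2 follow the common 0.01/0.03 defaults, which are an assumption since the paper does not specify them.

```python
import numpy as np

def ssim(x: np.ndarray, y: np.ndarray, max_val: float = 255.0) -> float:
    """Global SSIM of two image blocks (sliding-window averaging omitted)."""
    c1 = (0.01 * max_val) ** 2  # assumed stabiliser constants (common defaults)
    c2 = (0.03 * max_val) ** 2
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    num = (2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    return num / den

# A discriminator-side structural loss can then be taken as 1 - ssim(i_f, i_s),
# so that maximising similarity minimises the loss (an assumed convention).
```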
Dataset and Parameter Settings
The research site is located in Nanjing, Jiangsu Province, and its vicinity. The SAR images were collected by Canada's RADARSAT-2 satellite with a resolution of 5 m; the collection time was 11 April 2017. The optical images are several 5 m-resolution images taken by Germany's RapidEye satellite in April 2017.
First, we randomly select 60 pairs of optical and SAR images with a resolution of 256 × 256 from the dataset as the experimental training set to train the network. In order to get a better model, we set the sliding window step to 14 to clip each image [13], pad each cut sub-block to 132 × 132, and then input them into the generator. After that, the size of the generated image is 128 × 128. Next, we feed the generated image and the optical and SAR pairs into the discriminator and use the Adam optimizer [24] to continuously improve the network performance until the maximum number of training iterations is reached. Finally, we select another four pairs of images in the dataset for qualitative and quantitative analysis.
Our training parameters are set as follows: the size of batch images is set to 64, the number of training iterations is set to 10, and the training step k of the discriminator is set to 2. λ is set to 100, η is set to 100, µ is set to 2000, δ is set to 0.1 (these parameter settings are discussed later), ξ is set to 5, and the learning rate is set to 10⁻⁴. Label a of the generated image is a random number ranging from 0 to 0.3, label b of the optical image is a random number ranging from 0.7 to 1.2, and label c is also a random number ranging from 0.7 to 1.2. Because labels a, b, and c are not specific numbers, they are called soft labels [25].
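Drawing the soft labels is a one-line operation per label; the sketch below is a minimal illustration of the stated ranges.

```python
import numpy as np

rng = np.random.default_rng()
a = rng.uniform(0.0, 0.3)  # soft label for the generated image
b = rng.uniform(0.7, 1.2)  # soft label for the optical image
c = rng.uniform(0.7, 1.2)  # soft label c, drawn from the same range
```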
Evaluation Metrics
To avoid the inaccuracy of subjective evaluation, we use several objective measures to calculate the corresponding values of the fused images, such as information entropy [26], average gradient [27], peak signal-to-noise ratio [28], structural similarity [29], spatial frequency [30], and spectral distortion. These evaluation indexes assess the fused image in terms of energy, spectrum, texture, and contour, reflecting the quality of the fused image with specific values.
• Entropy (EN)
The entropy of the image can reflect the amount of information contained in the image. The greater the entropy, the better the image fusion effect.
The entropy is computed as EN = −Σ_{i=0}^{L−1} p_i log2 p_i, where p_i is the probability of the i-th grayscale value and L represents the total number of gray levels in the image.
• Average Gradient (AG)
Assuming that the size of the image is M × N, G(m, n) represents the gray value of the image at point (m, n). The value of AG reflects the expressive ability of the image in local details; the larger the value, the clearer the image. It is commonly computed as AG = (1/((M−1)(N−1))) Σ Σ √((ΔG_x(m, n)² + ΔG_y(m, n)²) / 2), where ΔG_x and ΔG_y are the gray-value differences in the horizontal and vertical directions.
• Spatial Frequency (SF)
SF can be used to detect the total activity of fused images in the spatial domain and indicates the ability to render small-detail contrast. The larger SF is, the richer the edges and textures the fused image has. It is computed as SF = √(RF² + CF²) (14), where RF represents the row (line) frequency and CF represents the column frequency.
• Spectral Distortion (SD)
Spectral distortion mainly reflects the loss of spectral information between the fused image and the source image.
Because the spectral characteristics of optical images are more consistent with the visual observation of human eyes on ground objects in remote sensing images, the spectral distortion in this paper is calculated between fused images and optical images. The smaller the SD, the better the spectral features remain.
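These four metrics can be computed in a few lines of NumPy. The sketch below follows the common definitions; in particular, the spectral distortion is implemented as the mean absolute difference from the optical reference, which is one standard choice and an assumption here, since the text does not give the formula explicitly.

```python
import numpy as np

def entropy(img: np.ndarray, levels: int = 256) -> float:
    """EN: Shannon entropy of the gray-level histogram (larger = more information)."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def average_gradient(img: np.ndarray) -> float:
    """AG: mean magnitude of local gray-value differences (larger = clearer)."""
    g = img.astype(np.float64)
    dx = g[:-1, 1:] - g[:-1, :-1]  # horizontal differences
    dy = g[1:, :-1] - g[:-1, :-1]  # vertical differences
    return float(np.mean(np.sqrt((dx ** 2 + dy ** 2) / 2.0)))

def spatial_frequency(img: np.ndarray) -> float:
    """SF = sqrt(RF^2 + CF^2) from row and column first differences."""
    g = img.astype(np.float64)
    rf = np.sqrt(np.mean((g[:, 1:] - g[:, :-1]) ** 2))  # row (line) frequency
    cf = np.sqrt(np.mean((g[1:, :] - g[:-1, :]) ** 2))  # column frequency
    return float(np.sqrt(rf ** 2 + cf ** 2))

def spectral_distortion(fused: np.ndarray, optical: np.ndarray) -> float:
    """SD: mean absolute deviation from the optical reference (smaller is better)."""
    return float(np.mean(np.abs(fused.astype(np.float64) - optical.astype(np.float64))))
```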
Results and Analysis
In this experiment, we compare the fusion performance of different methods for different scene images from subjective and objective evaluation. Figure 6 shows the selected images. These images come from the optical and SAR image pairs of different scenes in the dataset, including the scenes such as land, water, and buildings which are mainly considered in the process of image fusion.
Figure 6. Source image pairs of Group1, Group2, Group3, and Group4.
In order to avoid the problems of gradient disappearance and gradient explosion in the GAN, this paper uses the Dense-UGAN network as the main structure of the generator to realize image feature extraction. Additionally, we compare the fusion results with generative adversarial networks based on DCGAN, U-Net, and skip connections [31] to illustrate the effectiveness of Dense-UGAN in the fusion of optical and SAR images. The original GAN loss function [13] is used for training, and the results are shown in Table 1. It can be seen from Table 1 that no matter which network is used for image fusion, the final result is better than that of the original GAN network; that is, all objective evaluation parameters are generally improved. Secondly, when the Dense-UGAN network is used as the main structure of the generator for image fusion, the EN, AG, SSIM, and SF are increased by 7.13%, 43.77%, 0.62%, and 67.79%, respectively, compared with the original GAN. Among them, the SF is 13.636, which is 26.69% higher than that of SC-GAN, indicating that the proposed structure achieves better fusion performance than other excellent networks.
Therefore, we combine the generative adversarial network based on Dense-UGAN and Gram-Schmidt transform to achieve optical and SAR image fusion.
Loss Function Analysis
In this part, we first evaluate the fusion effect of networks with different loss functions. Then, the weight parameters λ, µ, η, and δ in the loss function of generator and discriminator are discussed, so as to fine-tune the model to the best setting.
The second row of Table 2 shows the experimental results of the Dense-UNet network using the original loss function in [13]; we use it as the baseline for ablation experiments against the results of introducing different loss functions. It can be seen that, compared with the original loss function, after introducing the texture feature loss L_SGLCM(G) into the generator loss function, the objective evaluation indicators EN and STD increased by 5.88% and 34.77%, respectively, indicating that the texture feature loss is beneficial to improving the performance of the image in local details. Additionally, after introducing the structural similarity loss L_SSIM(D) into the discriminator loss function, several texture feature indexes are also improved. Finally, comparing the complete loss function of this article with the original loss function, we can see that EN, STD, PSNR, SSIM, and SD have increased by 5.15%, 25.41%, 2.06%, 31.64%, and 49.9%, respectively. This shows that the loss terms proposed in this paper form an effective objective for optical and SAR image fusion tasks, urging the fused image to contain richer spectral and spatial characteristics.
For the discussion of weight parameters, there are four parameters and coupling might exist between different loss functions, so the strategy is to add the loss terms one by one in order of magnitude [32]. Firstly, we fix the weight λ of the content loss L_L1 in the generator loss function as 100 [33], then determine the weight parameter η of the peak signal-to-noise ratio loss L_PSNR, and finally determine the weight µ of the texture feature loss L_SGLCM. Similarly, the weight coefficient δ of the structural similarity loss L_SSIM in the discriminator loss function is discussed in the same way. To quantitatively evaluate the results, we use the average value of each objective evaluation index over the selected four groups of source image pairs to compare models with different weights. The experimental results are shown in Figure 7. It can be seen from the experimental results that when the weight parameter η of L_PSNR is set to 1 (×100), the weight µ of L_SGLCM is set to 20 (×100), and the weight coefficient δ of L_SSIM is set to 0.1, the objective index results of the fused image are relatively the best, and the amount of information is the largest.
Different Algorithms Comparison
To effectively evaluate the proposed optical and SAR image fusion method, this paper compares it with five other representative image fusion methods, including the multi-scale weighted gradient fusion method (MWGF) [34], the wavelet transform-based fusion method (DWT) [35], the fast filter-based fusion method (FFIF) [36], the non-subsampled contourlet transform domain-based fusion method (NSCT) [18], and the Fusion-GAN fusion method [13]. Among them, MWGF and FFIF belong to the spatial domain; DWT and NSCT are representative methods based on the transform domain, and the fusion rule adopted for NSCT is "Select-Max". Fusion-GAN is a method based on deep learning. Different methods have different fusion effects. The results of the four scenes selected in the experiment are shown in Figure 8.
From the subjective fusion results, it can be seen that the fused images obtained by the FFIF, DWT, and MWGF methods inherit the spatial characteristics of the SAR images, but do not inherit the spectral characteristics of the optical images well. The spectral features of the fused image using the NSCT transform are inherited from the optical image and the spatial features of the SAR image are retained, but the image has more speckle noise. The fused image obtained by the original GAN method is not suitable for normal perception by human eyes because only the lightness component of the optical image is considered in the fusion process. However, the spectral features of the fusion results obtained by the method in this paper are clearly well inherited; the gap between the fusion results and the optical images is smaller, the image definition is higher, and the loss of texture details and contours is less.
In addition, in order to further compare the performance of the fused image and the optical image in detail, we intercepted a part of the area from the experimental results, which includes water and residential regions. Then, we used the Canny algorithm to detect the edges of the optical image and the fused image. The experimental results are shown in Figure 9. As shown in Figure 9, the edge of the middle building is well reflected in the fusion results, and other places also show more texture details, which means that the proposed method performs well in keeping the details of the source image, and the fused image contains more content.
Further data processing was performed on the fusion results of the Group1 image under the different methods; the objective evaluation results are shown in Table 3. It can be seen from Table 3 that the performance of this method is better than that of the other methods regarding AG, PSNR, SSIM, SF, and SD. For AG and SF, compared with the better-performing MWGF method, it is improved by 45.2% and 30.42%, respectively. Additionally, compared with the NSCT method, which performs better for PSNR, SSIM, and SD, it is improved by 1.74%, 11.59%, and 34.83%, respectively. In a word, although the proposed method cannot achieve the best result in every index, the spectral distortion of the fused image has been improved, and the objective indexes show a good effect.
Moreover, we have also conducted fusion experiments on the other three groups of original image pairs in the selected dataset. Figure 10 shows the line charts of the objective results. From the objective data, it can be seen that the method proposed in this paper can be well applied to heterogeneous image fusion.
Discussion and Conclusions
Firstly, this paper presents the theory of the generative adversarial network and the Gram-Schmidt transform, then introduces the Dense-U network into the GAN generator to obtain deeper semantic information and comprehensive features of optical and SAR images. At the same time, the loss function of the generative adversarial network is constructed: the PSNR and SGLCM losses are introduced into the generator loss function, and the SSIM loss is introduced into the discriminator, to optimize the network parameters and obtain the best network model. Finally, the cascaded source image pairs are input into the trained generator to obtain a generated image, and the generated image is GS-transformed with the optical image to obtain the final fusion result. The experimental results show that the fused image obtained by this method can well retain the spectral characteristics of the optical image and the texture details of the SAR image, while reducing the generation of coherent speckle noise, and can be well applied in the pixel-level fusion of heterogeneous images. | 2021-10-27T15:09:30.245Z | 2021-10-24T00:00:00.000 | {
"year": 2021,
"sha1": "50e6df06982a9b6b43e1dfa45fceaacd9ed86580",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-4292/13/21/4274/pdf?version=1635073723",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "25e88f7ece5ea0deb754eb8701f6c719151c81be",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
251358750 | pes2o/s2orc | v3-fos-license | Plasmablastic Lymphoma Causing Adult Intussusception After Cardiac Transplantation
Intussusception in adults is rare, accounting for approximately 5% of all cases, and malignancy is the underlying cause in about half of adult cases. The most common malignancies found are primary adenocarcinoma, metastatic carcinoma, lymphoma, and gastrointestinal stromal tumors, with lymphoma being the second most common. The management of adult intussusception is generally surgical, owing to the higher likelihood of malignancy being the underlying cause. The patient's history helps to direct management and suggests the most likely underlying diagnosis. This is especially important in patients who are immunosuppressed and in those with a history of lymphoproliferative disease. Early management and proper surgical intervention allow for the best survival rate. Here we present a case of adult intussusception caused by a rare and aggressive type of non-Hodgkin lymphoma.
INTRODUCTION
Intussusception, the invagination of a bowel segment into an adjacent one, is an exceedingly rare phenomenon in adults. 1 In adults the likelihood of a malignant process causing intussusception is high. 2 From the various malignancies, lymphoma being the causative pathology is rare and for the most part discussed in isolated case reports. It is important to realize the potential severity of the underlying malignant process. We believe it is important to continue reporting these rare occurrences to increase our understanding of the disease process, presentation, and treatment options. Here we present a case of plasmablastic lymphoma, an aggressive non-Hodgkin lymphoma, causing adult intussusception after cardiac transplantation.
CASE REPORT
A 26-year-old White female was admitted with a three-day history of diffuse, colicky abdominal pain that was most severe in the right lower quadrant. She also endorsed nausea and new onset hematochezia.
The patient's medical history was significant for restrictive cardiomyopathy, for which she underwent an orthotopic heart transplant at age 11. She developed stage II Hodgkin-like post-transplant lymphoproliferative disease several years following transplantation in the setting of chronic immunosuppressive therapy. She had completed three cycles of chemotherapy and had been, at the time of the current presentation, in remission for five years. Her current immunosuppressive regimen included cyclosporine and sirolimus.
She was hemodynamically stable at the time of presentation. Physical examination revealed a thin, nontoxic-appearing young woman. Her abdomen was soft, nondistended, and tender in the right lower quadrant with localized guarding. She had no other significant physical examination findings. Initial laboratory results showed a normal white blood cell count. A basic metabolic panel revealed a creatinine of 1.4; however, she had a known history of renal insufficiency with a baseline creatinine of 1.3.
Computed tomography scan demonstrated ileocolic intussusception with associated mesenteric edema and free fluid, concerning for possible ischemia, with no lymphadenopathy (Figure 1). Subsequently, the patient was brought to the operating room for diagnostic laparoscopy, where intussusception of the ileum was confirmed (Figure 2). Additionally, an enlarged, associated mesenteric lymph node was identified. The patient then underwent a laparoscopic right hemicolectomy and mesenteric lymph node excision. Dissection began with identification of the ileocolic pedicle and high ligation with an Endo GIA stapler. This was followed by a lateral-to-medial mobilization of the right colon. As the patient was quite thin, and of relatively short stature, a Pfannenstiel incision was chosen as the extraction site, after sufficient mobilization, through which the specimens were delivered with ease. Examination of the resected bowel revealed a 3.9 cm mass within the cecum (Figure 3).
Informed consent: Dr. David Berler declares that written informed consent was obtained from the patient/s for publication of this study/report and any accompanying images.
The patient was discharged home on postoperative day three in stable condition, having suffered no postoperative complications. She was seen in the outpatient setting two weeks later, at which point she continued to do well. Immunohistochemical analyses were positive for CD138, CD45, and CD10 and negative for CD20. The Ki-67 proliferation rate was 80%-90%. Anaplastic large cell lymphoma kinase and human herpesvirus were both negative. She was diagnosed with plasmablastic lymphoma.
DISCUSSION
Intussusception is an invagination of a segment of bowel into the lumen of an adjacent segment. It is rare among adults; in fact, only 5% of all cases are found within this population. 1 Reportedly, at least 50% of these are secondary to an underlying malignancy. 2 In the current surgical literature, there are only 36 reported cases within an 11-year period that attribute lymphoma as the causative etiology in adult ileocolic intussusception, 4 elucidating the rare nature of this pathologic entity. The most common malignancies found are primary adenocarcinoma, metastatic carcinoma, lymphoma, and gastrointestinal stromal tumors, with lymphoma being the second most common. 3 Currently, the mainstay of management for adult intussusception is surgical. This is in part due to the higher likelihood of malignancy being the underlying cause. Notably, there have been reports of nonoperative management for adult intussusception, but one must take into consideration the patient's clinical circumstances. 10 A patient history indicating a higher likelihood of malignancy and clinical findings concerning for obstruction or ischemia should direct surgical management. In our case, a history of post-transplant lymphoproliferative disorder as well as a classic presentation for intussusception, in addition to convincing radiologic evidence of this disease process, heightened our clinical suspicion for an underlying malignant process. For ileocolic intussusception, it is important to consider whether to attempt reduction or not. Signs of or concern for ischemia and/or gangrene should preclude one from attempting reduction. Colonoscopy can be helpful in determining if the lesion at the lead point is benign or malignant, 11 but should not supplant or delay necessary surgical therapy (which usually entails resection). Treatment for lesions highly suspected to be benign may begin with attempts at reduction and local excision. Lesions that are suspicious for malignancy or segments that cannot be reduced should be resected en bloc with the supplying mesentery. Our case illustrates that a minimally invasive approach is safe and feasible in such scenarios. Furthermore, a Pfannenstiel extraction site (when anatomically permissible) confers a more cosmetic scar in young patients and is associated with lower rates of incisional hernia, particularly in patients with compromised wound healing secondary to chronic immunosuppressive therapy in the setting of organ transplantation.
Our patient's pathology revealed a rare, highly aggressive non-Hodgkin lymphoma known as plasmablastic lymphoma (PBL). PBL was first described in the oral cavity/jaw of HIV-infected patients. However, there are increasing cases of HIV-negative PBL found in extraoral sites, including the gastrointestinal tract. 5 PBL is male predominant, most frequently occurring in the fourth decade of life in HIV-positive patients, compared to HIV-negative patients, in whom it occurs more commonly during the sixth decade. 6 PBL is associated with other causes of immunosuppression such as the post-transplant state, as seen in our patient. 6 Chronic immunosuppressive therapy, Epstein-Barr virus (EBV) and genetic susceptibility may contribute to the increased risk of developing PBL. 6 Our patient's history included both sirolimus and cyclosporine therapy; each drug increases the risk of lymphoproliferative disease 7,8 by 1% to 6%.
PBL is an aggressive tumor with poor prognosis. The heterogeneous manifestation of this lymphoma renders its treatment challenging. Presently there is no standard of care; multiple chemotherapy regimens such as cyclophosphamide, doxorubicin, vincristine, and prednisone have been associated with different outcomes. Patients with a complete clinical response have a median overall survival of 48 months. 6 However, without complete response the median overall survival was found to be three months. 6 Other modalities include autologous stem cell transplant, immunomodulators, antiviral agents and EBV-targeted therapy, but further studies are needed in these areas. 9 We report a case of plasmablastic lymphoma as a cause of intussusception in an immunocompromised, post-cardiac transplant individual who is HIV-negative. As this is a highly aggressive tumor with no current standard of care, it is important to recognize a patient's clinical history in order to assist in diagnosis. The high incidence of malignancy in adult intussusception should be kept in mind. In order to achieve the best survival, early surgical management and oncologic treatment are required for this rare and aggressive disease. | 2022-08-06T15:09:44.731Z | 2021-11-02T00:00:00.000 | {
"year": 2021,
"sha1": "4f6154d5aed30cc93ad675eaad0b48f040abb741",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "062c9f24e81ee05f5611e5f7a19d309dccf5aeda",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
263872073 | pes2o/s2orc | v3-fos-license | Online Learning of Any-to-Any Path Loss Maps
Learning any-to-any (A2A) path loss maps might be a key enabler for many applications that rely on device-to-device (D2D) communication, such as vehicle-to-vehicle (V2V) communications. Current approaches for learning A2A maps have a number of important limitations, including i) a high complexity that increases rapidly with the number of samples, making the problems quickly intractable, and ii) the inability to cope with a time-varying environment, among others. In this letter, we propose a novel approach that reconstructs A2A path loss maps in an online fashion. To that end, we leverage the framework of stochastic learning to deal with the sequential arrival of samples, and propose an online algorithm based on the forward-backward splitting method. Preliminary simulation results show a significant decrease in complexity, while the performance is comparable to that of a batch approach.
I. INTRODUCTION
Many applications in wireless networks can benefit from information related to the spatial distribution of path loss. Among them, applications involving device-to-device (D2D) communication are the most challenging because of the fast increase in the number of communication links as the number of nodes grows. Such applications include machine-type communications (MTC) and vehicle-to-vehicle (V2V) communications.
As an example, consider a platoon of vehicles that have to constantly exchange information about their position, acceleration, and so on. If the path loss between any two vehicles along the route were known in advance, this information would give the vehicles enough time to adapt their distance accordingly and save a considerable amount of fuel [1]. Other benefits include reliability of communications and safety. However, in realistic networks, the acquisition of all path loss values in D2D communication is infeasible, mainly because we would need to measure the path loss between any location and any other location of a map. In addition, we have to take into consideration that measurements can become outdated due to changes in, e.g., the propagation environment, network density or weather conditions, to name a few. In general, any-to-any (A2A) maps are very well suited for those applications, but the challenge is to cope with a rapid increase of complexity with the size of the map while keeping high prediction accuracy. The learning of radio maps has been a major topic of interest both in academia and industry for years [2], [3]. In recent years, the framework of the tomographic projection technique (TPT) has gained a great deal of attention as a model that characterizes the long-term shadowing of links caused by objects such as buildings or trees [4]; in turn, this shadowing is used as a proxy to characterize the path loss. In TPT, a spatial loss field (SLF) captures the absorption generated by objects in a field, while a window function models the influence of each location on the attenuation that a link experiences. The shadowing is then modeled as the weighted integral of the SLF across the field. A line of research [4] has exploited the high correlation in the SLF of nearby locations, using Kriging interpolation to estimate radio maps. A different approach exploits the concept of the Fresnel zone [5], [6]. However, the main limitation of these approaches is the fact that they rely on a fixed model, and models cannot track reality under all circumstances.
More recently, the authors in [7] proposed an algorithm that learns both the SLF and the window function in a blind manner, i.e. no model is assumed and both the SLF and the window function are learned. In [8], this approach is further improved by capitalizing on the fact that both the SLF and the model are assumed to be block-sparse. A problem with elastic net regularization and multi-kernels is formulated, and an algorithm based on the ADMM is used to obtain a solution.
Both algorithms in [7], [8] have a key limitation since they are batch algorithms, i.e., upon arrival of new measurements, the algorithms are carried out again including all the historical data, which increases the complexity dramatically. This poses a tremendous hurdle for real-world applications because i) in both approaches the problem complexity and the number of variables that have to be stored in memory increase cubically with the number of pixels in the map, and ii) they are not able to track accurately a changing environment over time due to, e.g., different traffic profiles or changes in the underlying map. As a motivational example, in both approaches [7] and [8], for a map of 10 × 10 pixels, we already require the storage of floating-point vectors with up to 495K entries, and such vectors increase to 31.92M for 20 × 20 maps and to 364.095M for 30 × 30 maps. The reason for this is that TPTs require the storage of structures that track the relationship between every link and every pixel, so if an A2A map is divided into P pixels, we have P(P − 1)/2 possible links assuming reciprocity, and we need to track P²(P − 1)/2 variables. Such a rapid increase in complexity renders batch methods inadequate for real-world applications.
Because the approach proposed in this letter is based on an online algorithm, it can overcome these limitations. More precisely, we extend the work in [8] and pose an optimization problem that, upon arrival of new measurements, aims at obtaining new estimates of both the SLF and the window matrix. We do this by minimizing the least squared error regularized by elastic nets [9]. We leverage the framework of stochastic learning to address the limitation of the sequential arrival of samples. The original problem is highly ill-posed, so we impose additional structure on the problem by considering a non-linear kernel method. We propose an algorithm based on the descent version of an alternating minimization, where we take one step of the forward-backward splitting method to update the SLF, followed by another step to update the model.
II. SYSTEM MODEL
Consider a two-dimensional area A ⊂ R². We model the long-term average path loss between any two points x_i, x_j ∈ A as the superposition of a distance-dependent decay and a shadowing term, where δ > 0 is the path loss exponential decay, s : R² × R² → R represents the shadowing between two points, and ε_s > 0 is a scalar that accounts for the error in the measurements. As in [5], [7], we model the shadow fading based on the TPT as s(x_i, x_j) = Σ_{p=1}^{P} w(x_i, x_j, x_p) f(x_p) (2), where f is the SLF and w is a window function that depends on the length of the direct link and the length of the path going through an intermediate point; P_x and P_y are the numbers of horizontal and vertical pixels of the map, respectively, and x_p is the coordinate of pixel p ∈ {1, . . . , P}, with P = P_x P_y. Intuitively speaking, the shadowing between any two points in a map is potentially influenced by the SLF at any point of the map. This assumption is due to the multi-path nature of radio signal propagation. To capture these effects, the SLF is weighted by a window function that models the influence of each position on a link.
We now proceed to write (2) in matrix form. To this end, assuming channel reciprocity in the path loss between any two points, the total number of links T in a map of P pixels is T := (P(P − 1))/2, and the index set M of all links is M := {1, . . . , T}. We define a bijective mapping L : P × P → M : (i, j) → P(j − 1) + i that maps any two indexes i, j ∈ P onto an index m ∈ M. By W ∈ R^{T×P} we define a matrix containing all possible weight values of the map, with entries w_{m,p} := w(x_i, x_j, x_p) for m = L(i, j). The shadow fading vector s ∈ R^T is generated by stacking all possible values of s(x_i, x_j), such that s = W f. (3)
III. PROBLEM STATEMENT
Consider that measured path loss values arrive at a central entity at different time instants. Let ŝ_t ∈ R^{M_t} be the (noisy) shadowing measurements acquired at time instant t ∈ N, and Ω_t the measurements index set with cardinality M_t. For the sake of simplicity, we assume that M_1 = M_2 = · · · = M for the remainder of this manuscript. The elements in ŝ_t represent a selection of all elements contained in ŝ ∈ R^T, which is in turn a vector containing all possible (noisy) measurements in a map. Analogously, the matrix W_t ∈ R^{M×P} denotes the weight matrix whose rows are related to the link measurements received at time t.
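The matrix model s = W f is straightforward to assemble in code. The sketch below is a naive dense construction for illustration only; it stores exactly the T·P = P²(P − 1)/2 entries whose growth motivates this letter, and `window` is a placeholder for a concrete window function such as the elliptical model used later in Section V.

```python
import numpy as np

def build_weight_matrix(coords: np.ndarray, window) -> np.ndarray:
    """Dense W in R^{T x P}; `window` is a placeholder for w(x_i, x_j, x_p)."""
    P = coords.shape[0]
    T = P * (P - 1) // 2  # number of links under channel reciprocity
    W = np.zeros((T, P))
    m = 0
    for i in range(P):
        for j in range(i + 1, P):  # one row per unordered pair (i, j)
            for p in range(P):
                W[m, p] = window(coords[i], coords[j], coords[p])
            m += 1
    return W

def shadowing_vector(W: np.ndarray, f: np.ndarray) -> np.ndarray:
    """Matrix form of the TPT model: s = W f."""
    return W @ f
```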
The proposed approach is based on the assumptions that both the vectorized weight matrix vec(W ) and the SLF vector f are block-sparse vectors [10]. The block-sparsity of vec(W ) results from the fact that far pixels from a link have little or no impact on the shadowing experienced by the link, while the block-sparsity of f is justified by the fact that most pixels of a map represent the free space, whose absorption value is negligible compared to the absorption of solid bodies, and therefore assumed to be zero. Further, non-zero entries of f are those belonging to walls and other physical structures, therefore they are concentrated in groups.
In light of the above assumptions, an intuitive approach to estimate the weight matrix and the SLF is to minimize the least squared error regularized by elastic nets to improve block-sparsity.
Previous studies such as [7], [8] in this particular application domain have shown that attempts at minimizing W fail to give good results because the problems are in general severely ill posed. To address this limitation, we impose additional structure on W by considering a non-linear kernel approach similar to [8].
With some abuse of notation, we define the vector c_{m,p} as a two-dimensional vector with arbitrary c_m and d_{m,p} stacked together, and we assume that the window function can be written as a function of a positive definite kernel κ in the following form: w_{m,p} = Σ_{q′=1}^{MP} α_{q′} κ(c_{q′}, c_{q(m,p)}), (4) where α_{q(m,p)} ∈ R are appropriate scalars to be determined, the c_q are built from the ordered elements of Ω_t, and q(m, p) is an index obtained as q(m, p) := P(m − 1) + p. In particular, in this study we use the radial basis function (RBF) as kernel: κ(c, c′) = exp(−‖c − c′‖² / (2σ²)), where σ is the width of the kernel. Define the kernel matrix K_t ∈ R^{MP×MP} with (q, q′) element given by κ_{q,q′}, and also define the vector α_t as α_t = [α_1, · · · , α_{MP}]^T.
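A minimal sketch of the RBF kernel matrix K_t follows. The vectorised pairwise-distance construction is a standard trick; the shape of the descriptor array C, one row per (link, pixel) pair, follows the indexing q(m, p) above.

```python
import numpy as np

def rbf_kernel_matrix(C: np.ndarray, sigma: float) -> np.ndarray:
    """K_t in R^{MP x MP}: entry (q, q') is exp(-||c_q - c_q'||^2 / (2 sigma^2))."""
    sq = np.sum(C ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (C @ C.T)  # pairwise squared distances
    d2 = np.maximum(d2, 0.0)  # guard against tiny negatives from round-off
    return np.exp(-d2 / (2.0 * sigma ** 2))

# Window values for all (m, p) pairs then follow as w = K_t @ alpha_t.
```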
Considering the non-linear kernel approach, we can write an optimization problem over f and α_τ, τ = 1, . . . , t, as follows: min over f, α_1, . . . , α_t of (1/2t) Σ_{τ=1}^{t} ‖ŝ_τ − A_f K_τ α_τ‖₂² + µ_1‖f‖₂² + λ_1‖f‖₁ + Σ_{τ=1}^{t} (µ_2‖α_τ‖₂² + λ_2‖α_τ‖₁), (5) where µ_1, µ_2, λ_1 and λ_2 are selected regularization parameters [9], and A_f = I_M ⊗ f ∈ R^{M×MP} is the Kronecker product of the identity matrix I_M ∈ R^{M×M} and f.
IV. ONLINE SLF LEARNING
Problem (5) is not jointly convex in f , α 1 , . . . , α t , but it is convex in f if α 1 , . . . , α t are fixed, and vice versa. Therefore, we consider an alternating minimization strategy to address (5) where, at time t of arrival of new measurements, one set of variables is optimized while the remaining ones are kept constant until all sets of variables have been addressed or a solution is obtained.
A. Addressing the SLF Subproblem
To address the SLF subproblem, we define (∀t ∈ N), (∀α_1, . . . , α_t ∈ R^{MP}) a new function ȟ_t : R^P → R such that ȟ_t(f) := (1/2t) Σ_{τ=1}^{t} ‖ŝ_τ − A_{α_τ} f‖₂² + µ_1‖f‖₂² + λ_1‖f‖₁, where A_{α_τ} = Σ_{n=1}^{M} e_n ⊗ (α_τ^T K_τ) ∈ R^{M×P}, and e_n ∈ R^M is a unitary vector with all zeros but the nth entry one.
The motivation behind this approach lies in the fact that ȟ_t(f) is convex ∀f ∈ R^P, since it represents the sum of ℓ2 and ℓ1 norms, and the sum of convex functions is also a convex function [11]. We can rewrite ȟ_t(f) in a more convenient way for our online algorithm as ȟ_t(f) = C − (1/t) b_t^T f + (1/2t) f^T Ā_t f + µ_1‖f‖₂² + λ_1‖f‖₁, (7) where Tr(·) is the trace of a matrix, C = Tr(ŝ_t ŝ_t^T)/(2t), Ā_t = Σ_{τ=1}^{t} A_{α_τ}^T A_{α_τ} ∈ R^{P×P}, and b_t = Σ_{τ=1}^{t} A_{α_τ}^T ŝ_τ ∈ R^P. Note that the structures Ā_t and b_t do not change size with increasing t. This suggests an algorithm in which, at time t, we keep track of and update Ā_t and b_t, and a new estimate f̂_t := arg min_f ȟ_t(f) (8) is found after minimizing the function ȟ_t(f) in (7) w.r.t. f. Note that the function ȟ_t is coercive, proper and strongly convex, therefore f̂_t exists and is unique ∀α_1, . . . , α_t ∈ R^{MP} [12]. The function ȟ_t can be expressed as the sum of two functions g_1 and g_2, where g_1 is convex and differentiable, while g_2 is convex but non-smooth. More precisely, define g_1(f) := (1/2t) Σ_{τ=1}^{t} ‖ŝ_τ − A_{α_τ} f‖₂² + µ_1‖f‖₂² and g_2(f) := λ_1‖f‖₁. The problem min_f ȟ_t(f) can then be expressed as min_f g_1(f) + g_2(f). (9) This kind of problem is well understood and there is a plethora of algorithms to solve it. We propose using the forward-backward splitting method [13]. It can be shown [13] that Problem (9) admits one solution and that, for a certain γ > 0, its solution is characterized by the fixed point equation f = prox_{γ g_2}(f − γ∇g_1(f)), where prox_{γ g_2} is the proximal operator of g_2 with attracting factor γ. An iterative solution to (9) is then given by f^{(n+1)} = soft_{λ_1}(f^{(n)} − γ∇g_1(f^{(n)})), (10) where soft_{λ_1}(·) is the soft thresholding function with threshold λ_1, and n ∈ N is the iteration index.
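One forward-backward (ISTA-type) step on f, following (7)-(10), can be sketched as below. The gradient of g_1 follows from the quadratic form in (7); note that the proximal step of λ_1‖·‖₁ with step size γ soft-thresholds at γλ_1, which is how the update is implemented here (the letter's notation absorbs the step size into the threshold).

```python
import numpy as np

def soft_threshold(v: np.ndarray, thr: float) -> np.ndarray:
    """Proximal operator of thr * ||.||_1 (element-wise soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - thr, 0.0)

def fbs_step_f(f, A_bar, b, t, mu1, lam1, gamma):
    """One forward-backward step on f, cf. (7)-(10).

    grad g1(f) = (1/t)(A_bar f - b) + 2 mu1 f, from the quadratic form in (7).
    """
    grad = (A_bar @ f - b) / t + 2.0 * mu1 * f
    return soft_threshold(f - gamma * grad, gamma * lam1)
```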
B. Addressing the Subproblem to Learn the Model
Unlike the minimization of f in Problem (5), the minimization over α 1 , . . . , α t is not coupled in the summation of functions through its variables, meaning that we can separate Problem (5) with respect to α 1 , . . . , α t , while keeping f fixed.
Similarly to the previous section, we define ∀f ∈ R^P the problem of minimizing the sum of two convex functions k_1 and k_2, where k_1 is continuous and differentiable, and k_2 is continuous but non-smooth. The problem of estimating α_t becomes min over α_t of k_1(α_t) + k_2(α_t). (11)
Using again the forward-backward splitting method, we obtain the following iterations: α_t^{(n+1)} = prox_{β k_2}(α_t^{(n)} − β∇k_1(α_t^{(n)})), (12) where α_t^{(n)} denotes the nth iterate of α_t, and β is the attraction factor of the operator prox_{β k_2}.
C. Algorithmic Solution
The missing piece in the online SLF learning problem is the combination in an algorithm of the stochastic estimatesf t of (8) with the iterative solutions of f (n) t and α (n) t in (10) and (12), respectively.
One important caveat of the online algorithm is that we implement a "descent" version of the outlined alternating minimization process. This means that instead of running the iterations in (10) until a stopping criterion is met, and then proceeding with the iterations in (12) until improvements are small enough, we take only one step at a time of the iterations in (10), and another step of the iterations in (12), alternately until a combined stopping criterion is met. The rationale behind this strategy is that the improvement gained by iterating beyond the first step f^{(1)} might not be relevant enough to justify finding the optimal solutions of the two convex sub-problems in α_t and f alternately. Indeed, simulations for this particular application have consistently shown better performance and shorter execution time with the "descent" strategy.
Assume that i) the samples ŝ_1, ŝ_2, . . . are i.i.d. samples drawn from a common distribution p(ŝ) with compact support, ii) the iterates (f_t)_{t=1}^∞ are contained in a compact set, iii) γ < 2L_g^{−1} for any iteration in (10) and any t, and iv) β < 2L_k^{−1} for any iteration in (12) and any t, where L_g and L_k are the Lipschitz constants of ∇g_1 and ∇k_1, respectively. Finally, since ȟ_t is expected to be close to ȟ_{t−1} for large values of t, so are, under suitable assumptions, f̂_t and f̂_{t−1}, which makes it efficient to use f̂_{t−1} as a "warm" initialization for computing f̂_t. Our procedure is summarized in Alg. 1.
Algorithm 1 Online Algorithm
The sequence of objective values produced by Alg. 1 is non-increasing, since updates (12) and (10) decrease monotonically the objective function. Further, h_t(f, α_1, . . . , α_t) is bounded below since it represents the sum of non-negative functions. Therefore, we can state that Alg. 1 converges in the objective.
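Since the listing of Alg. 1 is not reproduced here, the following sketch illustrates the descent-type loop described above. All function names and signatures (assemble_A, step_alpha, step_f) are hypothetical placeholders standing in for building A_{α_t} from the kernel expansion and for the single forward-backward updates (12) and (10), respectively.

```python
import numpy as np

def online_slf_learning(measurements, f, alpha_init, assemble_A,
                        step_alpha, step_f, n_inner=20):
    """Sketch of the descent-type loop of Alg. 1 (names are assumptions)."""
    P = f.size
    A_bar = np.zeros((P, P))  # running sum of A_alpha^T A_alpha
    b = np.zeros(P)           # running sum of A_alpha^T s_hat
    for t, s_hat in enumerate(measurements, start=1):
        alpha = alpha_init.copy()
        for _ in range(n_inner):                 # combined stopping criterion
            alpha = step_alpha(alpha, f, s_hat)  # one step of (12), f fixed
            A = assemble_A(alpha)                # A_alpha from kernel expansion
            # One step of (10) on the running quadratic, including the new sample.
            f = step_f(f, A_bar + A.T @ A, b + A.T @ s_hat, t)
        A_bar += A.T @ A                         # fold the sample into A_bar, b
        b += A.T @ s_hat
        # f carries over as a warm start for the next arrival.
    return f
```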
D. Complexity
The complexity of Alg. 1 is dominated by the two matrix multiplications (one to compute α_t^{(n+1)}, the other to compute f^{(n+1)}) required in each iteration n, which we assume to be O(n³) in the matrix dimension. We also assume that the stopping criterion in both cases is given by a maximum number of iterations N. The complexity is then given by O(t_max N(P³M³ + P²M)).
V. NUMERICAL EVALUATION
To assess the performance of the proposed algorithm, we simulate a vehicular network whose users communicate within a synthetic map based on the Madrid scenario [14]. The original scenario has a size of 140 × 97 meters, and we discretize it into a 13 × 9 map, with each pixel being 10.8 × 10.8 meters in size. The map has four 3 × 2 buildings, one 3 × 2 park, and another four 3 × 1 buildings. The rest of the scenario represents roads connecting the different parts of the map. The normalized SLF at each location, i.e. the attenuation that a link experiences while crossing that location, is set to 1 for buildings, 0.1 for the park, and 0 for road pixels, since the SLF of the air is negligible. Vehicles are only allowed to be at road locations, which means that no measurements inside the buildings and park are acquired. After 100 iterations, we change the map to evaluate how the online algorithm adapts to changes in the underlying structures. Figures 1a and 1b show the SLFs of the two scenarios. To generate a synthetic window function, we use the normalized elliptical model from [5]: w(x_i, x_j, x_p) = 1/√(d(x_i, x_j)) if d(x_i, x_p) + d(x_p, x_j) < d(x_i, x_j) + η/2, and w(x_i, x_j, x_p) = 0 otherwise, where η is the signal wavelength. We set the wavelength to η = 0.1499 m in our simulations. The maximum number of vehicles, which coincides with the total number of road locations, is P_tx = 72. The total number of links in the map is given by T_tx = P_tx(P_tx − 1)/2 = 2556. As evaluation metric, we use the expected normalized mean squared error (NMSE) of the reconstructed vectors, given by NMSE(v, v̂) = E[‖v − v̂‖₂² / ‖v‖₂²], where v is any vector and v̂ its reconstructed version.
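A sketch of this elliptical window follows, under the common convention that a pixel contributes 1/√d(x_i, x_j) when it lies inside the ellipse defined by the link endpoints and half a wavelength of excess path length, and 0 otherwise. The exact normalisation in [5] may differ, and distinct link endpoints are assumed.

```python
import numpy as np

def elliptical_window(xi, xj, xp, wavelength=0.1499):
    """Normalized elliptical model: nonzero only inside the link's ellipse."""
    d_link = np.linalg.norm(np.asarray(xi) - np.asarray(xj))
    d_via = (np.linalg.norm(np.asarray(xi) - np.asarray(xp))
             + np.linalg.norm(np.asarray(xp) - np.asarray(xj)))
    if d_via < d_link + wavelength / 2.0:
        return 1.0 / np.sqrt(d_link)  # assumed 1/sqrt(d) normalisation as in [5]
    return 0.0
```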
We compare the proposed algorithm with the batch approach presented in [8], for which all the possible 2556 samples are used. Figure 3a shows the NMSE of the three reconstructed structures for the online algorithm, namely the normalized SLF vector f; the vectorized window matrix w := vec(W), where each entry is obtained from the vectors α as in (4); and the shadowing vector ŝ calculated from f and W as in (3). Note that, although we are focused on the reconstruction of ŝ, f conveys important information that can be exploited in other applications, since it resembles a physical map.
First, Figures 2a and 2b show the reconstructed SLF of the proposed algorithm after 100 and 200 iterations, respectively. The resemblance to the original maps in Figs. 1a and 1b is clear. In more detail, the dashed lines in Fig. 3a represent the performance of the batch algorithm for each of the three structures in question. There are two distinct regions with respect to the number of iterations displayed in Fig. 3a: from iterations 1 to 100, and from 101 to 200. The region on the left represents the results of the algorithms with the original map in Fig. 1a. At iteration 101, we modify the map and replace the park with road locations (i.e., changing the SLF of the park pixels from the original 0.1 to 0, as shown in Fig. 1b) in order to analyze how the online algorithm adapts to changes.
Note the non-monotonicity of the online reconstructed structures even when the map does not change; this is because of the nature of stochastic optimization. For iterations 1 to 100, the batch algorithm performs better than the online version for the three structures. In any case, the error of the reconstructed vector f seems to converge to the error of the batch approach. At iteration 101, one can clearly see the effect of the map changing. The online algorithm shows an instant decrease in performance but, within a few iterations, it is able to track the new scenario. The NMSEs decrease to values similar to those before the change happened. On the contrary, the performance of the batch algorithm degrades considerably, showing higher errors for ŝ, f and w compared with the online algorithm. This is due to the nature of the batch algorithm: it cannot cope with a changing environment, because once it obtains a solution, such a solution cannot be updated without running the algorithm again with complete recalculations. Therefore, a change in the environment means that the obtained solution diverges from the ground truth, with the corresponding loss in precision.
In order to compare the complexity of the online approach with the batch method in [8], note first that the complexity of the online algorithm for these numerical evaluations is O(t_max N(P³M³ + P²M)), while for the batch approach it is O(N(P³T_tx³ + P²T_tx)). Since in general T_tx is much larger than both M and t_max (here T_tx = 2556, M = 67 and t_max = 200), we state that the computational complexity of the online algorithm is several orders of magnitude smaller than that of the batch one. To analyze in more detail the reduction of computational complexity, we show in Fig. 3b a comparison between the baseline and the online approach for different map sizes. Concretely, we run simulations for maps of size 5 × 5, 10 × 10, 15 × 15 and 20 × 20 pixels, with T_tx = 100 in the case of the batch algorithm, while M = 4 and t_max = 25 for the online one. Simulations were run on a 64-bit Intel(R) Core(TM) i7-6600U machine with 2 cores (4 virtual cores) at 2.60 GHz and 16 GB of RAM. One can clearly see in Fig. 3b the exponential increase of computational time for the batch algorithm, while for the online approach the increase is almost linear.
VI. CONCLUSION
In this letter, we have addressed the online learning of path loss maps. More concretely, the original problem is defined as the minimization of the least squared error between the measurements and the TPT-based model, regularized by the elastic net, i.e. the linear combination of the ℓ1 and ℓ2 norms.
The problem is highly ill-posed, so we add certain structure by considering a non-linear kernel approach. We propose an online algorithm based on stochastic optimization and alternating minimization to tackle the challenge posed by the sequential arrival of samples. Simulation results comparing the proposed algorithm with a batch method from the literature show that a reasonable performance compared to the baseline can be achieved, while greatly reducing the complexity. | 2021-05-08T13:23:45.752Z | 2021-05-01T00:00:00.000 | {
"year": 2021,
"sha1": "f2ab13aef565935c98bdbbcf6b70a3f95e9a8bb6",
"oa_license": "CCBY",
"oa_url": "https://ieeexplore.ieee.org/ielx7/4234/9426614/09317815.pdf",
"oa_status": "HYBRID",
"pdf_src": "IEEE",
"pdf_hash": "f2ab13aef565935c98bdbbcf6b70a3f95e9a8bb6",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
220880790 | pes2o/s2orc | v3-fos-license | A nanoscale metal organic frameworks-based vaccine synergises with PD-1 blockade to potentiate anti-tumour immunity
Checkpoint blockade therapy has provided noteworthy benefits in multiple cancers in recent years; however, its clinical benefits remain confined to 10–40% of patients with extremely high costs. Here, we design an ultrafast, low-temperature, and universal self-assembly route to integrate immunology-associated large molecules into metal-organic-framework (MOF)-gated mesoporous silica (MS) as cancer vaccines. Core MS nanoparticles, acting as an intrinsic immunopotentiator, provide the niche, void, and space to accommodate antigens, soluble immunopotentiators, and so on, whereas the MOF gatekeeper protects the interiors from robust and off-target release. A combination of MOF-gated MS cancer vaccines with systemic programmed cell death 1 (PD-1) blockade therapy generates synergistic effects that potentiate antitumour immunity and reduce the effective dose of an anti-PD-1 antibody to as low as 1/10 of that for PD-1 blockade monotherapy in E.G7-OVA tumour-bearing mice, with eliciting the robust adaptive OVA-specific CD8+ T-cell responses, reversing the immunosuppressive pathway and inducing durable tumour suppression.
The clinical benefits of checkpoint blockade therapy rekindle the hope of cancer immunotherapy [1][2][3][4][5]. However, objective response rates in checkpoint blockade therapy targeting programmed cell death 1 (PD-1), cytotoxic T lymphocyte-associated protein-4 (CTLA4) or programmed cell death ligand 1 (PD-L1) remain at ~10-40% owing to multiple immunosuppressive factors, such as T-cell exclusion, immunosuppressive cells, deprivation of tumour-infiltrating lymphocytes and neoantigens, and negatively regulating markers and cytokines [1][2][3][4][5]. On the other hand, checkpoint blockade therapy is associated with significantly high costs, which greatly imposes an economic burden on patients and society 6. Most importantly, checkpoint blockade with systemic administration of antibodies (Abs) is associated with the risk of immune-related adverse events, including cytokine storm and autoimmune diseases in the long term 1,2,7,8.
To broaden the clinical benefit and minimise the therapeutic costs, the use of combination cancer immunotherapy is considered to be the future direction of cancer treatment 3,5,[9][10][11][12][13][14]. Combination cancer immunotherapy that simultaneously "releases the immunological brake" using immune checkpoint inhibitors and "presses the immunological accelerator" by stimulating antigen presentation, thus priming and activating effector T-cell responses, would be more effective than monotherapy 3,5,[9][10][11][12][13]. That is, appropriate cancer vaccines that stimulate antigen presentation and T-cell priming, when given in combination with immune checkpoint inhibitors, are expected to minimise the dose, therapeutic costs and the risk of adverse events induced by immune checkpoint inhibitors. Since the response rate in checkpoint blockade therapy depends on T-cell immunity 15, a combination with cancer vaccines may increase the response rate by strengthening the immunogenicity of cancer antigens, triggering and amplifying the specific T-cell immune responses towards cancer antigens [9][10][11][12][13][15][16][17][18].
For successful cancer vaccines, a rational design of adjuvants to integrate cancer antigens and immunopotentiators is pivotal, since the administration of these components separately may result in nonspecific immune responses in the entire body and severe side effects 19. A majority of cancer vaccines adopt a mixture of cancer antigens and immunopotentiators with or without vehicles [9][10][11][12][13]15,16. However, they have the problems of initial burst and off-target release, resulting in a decrease in vaccination efficiency. To realise the full efficacy of cancer vaccines, the requirements indispensable for adjuvants include (1) efficient encapsulation of cancer antigens and immunopotentiators to prevent their initial burst release and realise their controlled release, (2) targeted delivery to antigen-presenting cells (APCs) and lymph nodes, (3) shaping effective anti-tumour T-cell responses, and (4) good biocompatibility. Satisfying all these requirements is not easy. Inspired by the superior biomimetic mineralisation encapsulation capability of the metal organic framework (MOF) for biomolecules [20][21][22] and the excellent intrinsic immune-shaping properties of mesoporous silica (MS) 23,24, we fabricated nanoadjuvants on the basis of MOF-gated MS (MS@MOF) to realise targeted delivery to APCs and lymph nodes, and navigate antitumour immunity.
Here, we propose an ultrafast, low-temperature, universal self-assembly route to integrate a cancer antigen and an immunopotentiator into each nanoparticle consisting of MS as a core container and MOF as a gatekeeper for fabricating MS@MOF cancer vaccines as the magic bullet for combination cancer immunotherapy. An integrated formulation of cancer vaccines with a pH-switch button can realise targeted, controlled and efficient codelivery of the antigen (ovalbumin, OVA) and immunopotentiator (polyinosinic-polycytidylic acid, polyIC) to the draining lymph node, enhance their availability and minimise off-target effects. Furthermore, MS@MOF cancer vaccines, in combination with systemic checkpoint blockade at merely 10% of the dose of PD-1 blockade monotherapy 9,11,25, exhibit synergetic effects that reverse the immunosuppressive tumour microenvironment, elicit robust adaptive cancer antigen-specific immune responses, and effectively induce durable tumour suppression in tumour-bearing mice.
Results
Hierarchical self-assembling synthesis of MS@MOF nanoadjuvants. MS@MOF free of antigens and molecular immunopotentiators at various MS-to-MOF ratios were synthesised by immersing MS in solutions containing Zn2+ and 2-methylimidazole at 0°C for 15 min (Fig. 1; Supplementary Figs. 1-4). MS exhibits stellated pore channels and a dendritically open gate up to 35 nm (Supplementary Fig. 1a, b). When the Zn2+ and 2-methylimidazole concentrations are low, MOF precipitates onto the inner wall of the stellated pore channels in MS and forms a thin layer. With increasing Zn2+ and 2-methylimidazole concentrations or decreasing MS concentration, the precipitated MOF gradually increases in amount, fills the pore channels and blocks the open gate. Extremely high Zn2+ and 2-methylimidazole concentrations or low initial amounts of MS result in the aggregation of MS nanoparticles into one large particle (Fig. 1a, b). Scanning electron microscopy (SEM) images show that with increasing MOF amount, MS@MOF gradually changes in shape from isolated nanospheres of ~100 nm to aggregated nanocomplexes of ~500 nm (Fig. 1b; Supplementary Figs. 2 and 3). Wide-angle X-ray diffraction (WAXRD) patterns exhibit the crystallinity of the ZIF-8 MOF phase in MS@MOF, with intense diffraction peaks at around 7.3, 10.4, 12.7, 14.6, 16.4 and 18.0°, in contrast to the amorphous silica phase in MS with a broad peak at ~23° (Fig. 1c). The hydrodynamic diameter of MS@MOF increases from 100 to 1200 nm with increasing MOF amount, as determined by dynamic light scattering (DLS; Fig. 1d). The zeta potential of MS@MOF is centred at −18 mV in phosphate-buffered saline [PBS(−)], whereas those of MS and MOF are at around −16 and −24 mV, respectively (Supplementary Fig. 4a). The specific surface areas of MS, MS@MOF and MOF are 526, 323 and 1098 m2/g, respectively (Supplementary Fig. 4b-d). The average pore size of MS is 13.2 nm, which is large enough for the adsorption of biomolecules. In contrast, MOF shows a small average pore size of about 1.6 nm, which makes it useful as a gatekeeper that protects the interior from burst release. Scanning transmission electron microscopy (STEM) mapping shows the uniform distribution of Zn throughout the emanative channels of MS from the exterior to the interior (Fig. 1f; Supplementary Figs. 7-12). OVA was efficiently encapsulated into MOF (OVAinMOF). Increasing the OVA concentration from 0 to 25 mg/mL did not affect the particle size of the formed MOF, which maintained a size in the range of 100-200 nm. XRD patterns of OVAinMOF with various OVA concentrations exhibit the ZIF-8 phase. The encapsulation efficiency of OVA within MOF was ~75% when the OVA concentration was 25 mg/mL. Moreover, polyIC was encapsulated into MOF to obtain polyICinMOF, which exhibits a particle size in the range of 100-200 nm and XRD patterns of the ZIF-8 phase, similar to those of OVAinMOF. The encapsulation efficiencies of polyIC within MOF at 25 and 5 mg/mL of polyIC were about 55 and 80%, respectively.
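As a worked illustration of how such encapsulation efficiencies are typically computed from the unencapsulated fraction measured in the supernatant, here is a minimal Python sketch; the function name, the linear standard curve and all numbers are hypothetical, since the paper only states that a standard curve was used:

```python
# Minimal sketch (hypothetical) of an encapsulation-efficiency calculation.
# The unencapsulated biomolecule left in the supernatant is estimated via a
# linear standard curve, and the remainder is assumed to be in the particles.

def encapsulation_efficiency(added_mg, supernatant_abs, slope, intercept, volume_ml):
    """Percent of the added biomolecule retained inside the particles."""
    free_mg = (slope * supernatant_abs + intercept) * volume_ml  # mg left in supernatant
    return (added_mg - free_mg) / added_mg * 100.0

# Hypothetical numbers chosen so the result matches the ~75% reported for
# OVA at 25 mg/mL: 6.25 mg of the 25 mg added stays unencapsulated.
print(encapsulation_efficiency(added_mg=25.0, supernatant_abs=0.5,
                               slope=12.5, intercept=0.0, volume_ml=1.0))  # 75.0
```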
Then, versatile biomolecules (a model antigen, a molecular immunopotentiator and checkpoint inhibitor antibodies) were encapsulated in the open pore channels of MS in conjunction with the growth of MOF on MS by hierarchical self-assembly to fabricate MOF-gated MS cancer vaccines. In a typical synthesis, well-dispersed MS nanoparticles were initially immersed in an OVA-containing aqueous solution to adsorb OVA sufficiently, and Zn2+ and 2-methylimidazole were added to the solution to form inner OVAinMOF within the stellated channels of MS, named MS@(OVAinMOF). In the second step, the obtained MS@(OVAinMOF) particles were immersed in an aqueous solution containing either an anti-CTLA4 Ab or polyIC, together with Zn2+ and 2-methylimidazole, to fabricate (MS@OVAinMOF)@(anti-CTLA4inMOF) or (MS@OVAinMOF)@(polyICinMOF). The encapsulation efficiency of the anti-CTLA4 Ab within MOF-gated MS was calculated to be about 100% from the standard curve (Supplementary Fig. 13). The zeta potentials of OVA, anti-CTLA4 Ab and polyIC in PBS(−) were about −9.5, −2.5 and −27 mV, respectively (Supplementary Fig. 14). The zeta potential of MS@(OVAinMOF) was ~−16 mV. After the second step, the zeta potentials of (MS@OVAinMOF)@(anti-CTLA4inMOF) and (MS@OVAinMOF)@(polyICinMOF) shifted to about −10 and −25 mV, respectively. The changes in zeta potential reflect the successful encapsulation of the various biomolecules into the MOF-gated MS. We evaluated the reproducibility of the manufacturing method across different batches by XRD, SEM and TEM analyses (Supplementary Fig. 15). The results show that the morphology and particle size of samples from different batches were highly uniform without obvious differences. MS@(OVAinMOF), (MS@OVAinMOF)@(polyICinMOF) and (MS@OVAinMOF)@(anti-CTLA4inMOF) synthesised in different batches exhibit similar ZIF-8 MOF phases, with intense diffraction peaks at around 7.3, 10.4, 12.7, 14.6, 16.4 and 18.0° in the WAXRD patterns, and show similar nanosphere morphologies and sizes of ~100 nm.
The presence of the agents in the nanoadjuvants was further confirmed by sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE; Supplementary Fig. 16). OVA-loaded nanoadjuvants (free OVA, OVAonMS, MS@(OVAinMOF) and OVAinMOF) and the corresponding supernatant samples obtained after suspending them in water were tested. Herein, OVA solution was mixed with MS to prepare OVAonMS. For the supernatant of free OVA, the band of OVA was clearly detected. For the supernatant of OVAonMS, the band of OVA becomes weaker owing to the partial desorption of OVA molecules from MS. In contrast, no obvious band of OVA was detected in the SDS-PAGE for the supernatants of MS@(OVAinMOF) and OVAinMOF, indicating a strong affinity between OVA and the carriers (Supplementary Fig. 16a). Moreover, the OVA-loaded nanoadjuvants, including OVAonMS, MS@(OVAinMOF) and OVAinMOF, show the band of OVA clearly in the SDS-PAGE, similarly to the free OVA group, indicating the presence of OVA in the nanoadjuvants (Supplementary Fig. 16b).
We developed an ultrafast, low-temperature, universal aqueous-phase route to encapsulate high-molecular-weight cancer antigens and immunopotentiators into the stellated pore channels of MS in conjunction with the low-temperature growth of MOF, in an economical and highly efficient way. The open, stellated pore channels with sizes as large as 35 nm in MS and the subsequent crystallisation of MOF entrapping biomolecules make it possible to accommodate and encapsulate a wide range of high-molecular-weight biomolecules into the MOF-gated MS. Although multiple biomolecules were encapsulated in one particle of MOF-gated MS by the layer-by-layer self-assembly process here, they can also be encapsulated by a one-pot route. The present synthesis techniques are essential for cancer vaccines, since the use of a combination of various biomolecules is crucial to eradicating established tumours. This MOF-gating strategy applies not only to MS nanoparticles but also to MS scaffolds and other nanomaterials.
pH-sensitive degradation and biomolecule release from MOF-gated MS. The degradability of nanoadjuvants is a key parameter to be considered for future clinical applications. Here, we comprehensively evaluated the degradation and biomolecule release properties of MOF-gated MS (Fig. 2a-d; Supplementary Figs. 17-21). MS gradually degrades into silicic acid over time, whereas MOF degrades into Zn ions and imidazolate. The degradation profiles of MS@(OVAinMOF) and MS@MOF were studied by inductively coupled plasma atomic emission spectroscopy (ICP-AES) in acetate buffer (pH = 5) and Tris-HCl buffer (pH = 7.4). The degradation properties of OVAinMOF and OVAonMS were also investigated as controls. In neutral buffer, MS@MOF exhibited a slow and sustained release of Zn ions, with a low initial release of up to ~11 μg/mL within 1 day followed by a cumulative release of up to about 23 μg/mL within 8 days. In contrast, in acetate buffer, a burst release of Zn ions of up to about 49 μg/mL was observed within 1 day. MS@(OVAinMOF) and OVAinMOF showed a similar trend in Zn release to MS@MOF, although OVAinMOF exhibits a faster Zn release in neutral buffer than the other two. MS@MOF, MS@(OVAinMOF) and OVAonMS demonstrate a sustained release of Si ions in both neutral and acetate buffers, although the Si release in neutral buffer is faster than that in acetate buffer. In addition, Tris-HCl buffer supplemented with 10% serum was also used to test the degradation behaviour of MOF-gated MS. As a whole, the degradation curves in serum-supplemented buffer are similar to those in pure buffer (Supplementary Fig. 20a, b).
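For readers who want to reproduce such cumulative-release curves from sampled aliquots, the sketch below shows the standard sampling-with-replacement correction; the paper does not state its exact arithmetic, so the function and the values are illustrative assumptions only:

```python
# Minimal sketch (hypothetical) of the cumulative-release correction used in
# sampled dialysis experiments: each withdrawn aliquot is replaced with fresh
# buffer, so the analyte removed earlier must be added back to later readings.

def cumulative_release(readings_ug_per_ml, v_total_ml, v_sample_ml):
    """Convert raw ICP-AES readings into cumulative release (ug/mL equivalent)."""
    cumulative, removed_ug = [], 0.0
    for c in readings_ug_per_ml:
        total_ug = c * v_total_ml + removed_ug   # analyte in buffer + already withdrawn
        cumulative.append(total_ug / v_total_ml)
        removed_ug += c * v_sample_ml            # mass taken out by this sampling
    return cumulative

# Hypothetical readings resembling the slow Zn release in neutral buffer.
print(cumulative_release([11, 16, 20, 23], v_total_ml=10.0, v_sample_ml=0.5))
```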
The release of biomolecules from MOF-gated MS shows the same trend as the degradation of MOF-gated MS. OVAinMOF and MS@(OVAinMOF) exhibit a slow and sustained release of OVA in neutral buffer, whereas they show a burst release of OVA in acetate buffer (Fig. 2c, Supplementary Figs. 17 and 18). Similarly, polyICinMOF and MS@(polyICinMOF) exhibit a slow and sustained release of polyIC in neutral buffer, and a burst release of polyIC in acetate buffer (Fig. 2d, Supplementary Fig. 19). OVAonMS and polyIConMS show a burst release of OVA or polyIC in both neutral and acetate buffers (Fig. 2c, d). The presence of serum in the buffer did not markedly affect the release of biomolecules from MOF-gated MS (Fig. 2c, d; Supplementary Fig. 20c, d). Here, ferritin was used instead of OVA to investigate protein release in serum. In Tris-HCl buffer supplemented with 10% serum, ferritininMOF, polyICinMOF, MS@(ferritininMOF) and MS@(polyICinMOF) exhibit a slow and sustained release of ferritin or polyIC, whereas ferritinonMS and polyIConMS show a burst release.
The slow degradation in neutral buffer and the burst degradation in acetate buffer of MOF-gated MS are advantageous for preventing the premature release of antigens and immunopotentiators in the extracellular environment and facilitating their delivery into the intracellular environment. Accordingly, MOF-gated MS encapsulating biomolecules exhibits pH-responsive release, with a slow release in neutral buffer and a rapid release in acidic buffer. In addition, the coordination of Zn with OVA enhances the retention of OVA in MS@(OVAinMOF) and OVAinMOF, which accounts for the slower release of OVA than of other molecules. The controlled, pH-responsive degradability and release of MOF-gated MS can facilitate antigen delivery, antigen presentation and the priming of antitumour T-cell immunity. pH-responsive endosomolytic nanoadjuvants facilitate antigen escape from endo/lysosomes to the cytoplasm and the subsequent cross-presentation associated with major histocompatibility complex (MHC) I molecules 26, which suggests the potential of vaccination based on MOF-gated MS to enhance the cross-presentation of cancer antigens to CD8+ T cells.
Cellular uptake, activation and antigen presentation of dendritic cells in vitro. We first examined the impact of MOF-gated MS on the cellular uptake and activation of APCs, since the activation of APCs is the first step in initiating adaptive immune responses. Here, fluorescein (FITC)-conjugated OVA (fOVA) was used as the model antigen. Bone marrow dendritic cells (BMDCs) were cultured with fOVAonMS, MS@(fOVAinMOF) and fOVAinMOF, or with the corresponding OVA counterparts, to investigate the effects of the nanoadjuvants on cellular uptake and activation (Supplementary Figs. 22-24). The medium, free fOVA and OVA were used as the controls. The free fOVA group shows very weak green fluorescence from FITC, whereas the fOVAonMS, MS@(fOVAinMOF) and fOVAinMOF groups show intense green fluorescence. The average fluorescence intensities of BMDCs after coculture with free fOVA, fOVAonMS, MS@(fOVAinMOF) and fOVAinMOF are 47, 4594, 14383 and 8956, respectively (Fig. 2f; Supplementary Fig. 22). The presence of MS, MS@MOF or MOF facilitates the secretion of interleukin (IL)-1β and tumour necrosis factor (TNF)-α from BMDCs (Fig. 2g). OVAonMS and MS@(OVAinMOF) stimulate much more IL-1β secretion from BMDCs than free OVA and OVAinMOF. MS@(OVAinMOF) stimulates more TNF-α secretion from BMDCs than OVA, OVAonMS and OVAinMOF. BMDCs cocultured with MS@(OVAinMOF) show significantly increased MHC-I+, MHC-II+, CD80+, CD40+ and CCR7+ cell populations compared with those cultured with free OVA. BMDCs cocultured with MS@(OVAinMOF) show the highest MHC-I+, MHC-II+, CD80+ and CCR7+ cell populations among all the groups. BMDCs cocultured with OVAinMOF show higher CD40+ cell populations than those cultured with free OVA, OVAonMS and MS@(OVAinMOF) (Supplementary Figs. 23 and 24). MHC I and MHC II molecules on the surface of APCs mediate antigen presentation. The cross-presentation of exogenous antigens on MHC I molecules is necessary for priming CD8+ T-cell responses and plays a vital role in antitumour immunity 18,27. The chemokine receptor CCR7 plays a central role in mediating APC homing to lymph nodes. MOF-gated MS most efficiently enhances cancer antigen presentation, upregulates the expression of costimulatory molecules (CD40 and CD80) and promotes CCR7 expression, which implies its great potential in cancer vaccines compared with MS or MOF.
To investigate the effects of the mode of biomolecule loading on BMDC activation, fOVA- or OVA-adsorbing MOF (MOF-ad) and fOVA- or OVA-encapsulating MOF prepared by coprecipitation (MOF-en) were compared (Supplementary Figs. 25 and 26). The MOF-en group exhibits much higher cellular uptake of fOVA and higher secretion of TNF-α and IL-1β from BMDCs than the MOF-ad group. These results suggest that the encapsulation of biomolecules into MOF by coprecipitation is superior to the simple adsorption of biomolecules onto the MOF surface. This finding further supports the above-mentioned result that MOF-gated MS exhibits a much higher BMDC stimulation capability than MS.
Prolonged retention, enhanced delivery to lymph nodes and promoted cross-presentation of antigens by MOF-gated MS in vivo. We next investigated whether cancer antigens could be retained at the injection site for a long time, using Alexa Fluor 647-conjugated OVA (A647-OVA) as the model cancer antigen (Fig. 3a-d).
The average fluorescence intensities of the free A647-OVA and MS@(A647-OVAinMOF) groups at the injection site are comparable at 6 h. The MS@(A647-OVAinMOF) group tends to show an average fluorescence intensity two times higher than that of free A647-OVA at days 1 and 3.
To analyse the ability of APCs to capture and transport cancer antigens to the draining lymph nodes, MOF-gated MS loaded with fOVA was subcutaneously injected, and cryosections of the draining lymph nodes were observed 16 h after injection (Fig. 3e-g; Supplementary Fig. 27). The free fOVA group was used as the control. The green fluorescence intensities in the cryosections of lymph nodes are much higher for the fOVAonMS, MS@(fOVAinMOF) and fOVAinMOF groups than for the free fOVA group. To analyse the distribution of MOF-gated MS in vivo, MS@(A647-OVAinMOF) in the organs of mice was examined by IVIS and ICP-AES analyses (Supplementary Fig. 28). The IVIS images show that MS@(A647-OVAinMOF) mostly accumulates in the nearby draining lymph nodes, whereas its amount is negligible in other organs, including the spleen, lung, heart, kidney and liver. Mice administered MS@(A647-OVAinMOF) show significant increases in average fluorescence intensity, Si content and Zn content in the nearby draining lymph nodes compared with the control. Furthermore, cross-presentation of the OVA epitope on MHC I of DCs was analysed by flow cytometry using an antibody against the mouse OVA257-264 (SIINFEKL) peptide bound to H-2Kb. The nanoparticulate formulations fOVAonMS, MS@(fOVAinMOF) and fOVAinMOF induce the generation of larger numbers of CD11c+ MHC-I+ DCs in the lymph nodes than free fOVA, and MS@(fOVAinMOF) shows the highest efficiency of antigen cross-presentation among all the groups (Fig. 3h).
An effective adaptive antitumour immune response relies on the persistence of the cancer antigen at the distal injection site, timely immune cell communication between the periphery and the draining lymph nodes, and the subsequent antigen presentation to T cells in lymph nodes [28][29][30][31][32]. Prolonged retention of antigens within adjuvants at the injection sites is considered crucial to ensuring the long-term stimulation of DCs to break immune tolerance [28][29][30][31][32][33]. Here, A647-OVA encapsulated within MOF-gated MS is retained for a significantly longer time around the injection site than free A647-OVA (Fig. 3a-d); thus, A647-OVA encapsulated within MOF-gated MS facilitates the long-term stimulation and activation of DCs and breaks the immune tolerance towards cancer antigens 16,[28][29][30][31][32][33].
In this study, MOF-gated MS induces higher antitumour efficacy than the MOF adjuvant free of MS at an equivalent weight, which suggests that MS is a determining factor for stimulating antitumour immunity 24. MS triggers antitumour immunity, which derives not only from its function as a carrier of antigens and immunopotentiators but also from its intrinsic immunomodulatory effects 23,24. It should be mentioned here that the MS used in this study is quite different from the previously reported hollow MS 23 in pore size (several tens of nm versus 3-6 nm in the present and previous MS, respectively) and particle morphology, resulting in clearly different characteristics in the retention and release of biomolecules (Table 1).
Combination of MOF-gated MS vaccination with PD-1 blockade at a low dose significantly improves the therapeutic efficacy. Overall, the MOF-gated MS adjuvant elicited antitumour immune responses more effectively than MS without MOF. An efficient cancer vaccine adjuvant should help antigen delivery to draining lymph nodes, enhance DC maturation and cross-presentation, and lead to robust CD8+ T-cell responses. Here, the MOF-gated MS effectively encapsulates the cancer antigen and immunopotentiator, prevents their off-target release, and enhances their targeted delivery to APCs and lymph nodes, resulting in an increase in tumour-specific CD8+ T-cell populations compared with MS without MOF. We hypothesise that the pH-responsive gatekeeper property associated with MOF contributes to the efficient delivery of the high-molecular-weight antigen and immunopotentiator loaded into MS, which has pore sizes as large as several tens of nm.
Specific cytotoxic T lymphocyte assay for E.G7-OVA cancer cells. To characterise the antigen-specific killing activity of CD8+ T cells against E.G7-OVA cells, splenocytes were collected from the mice at the endpoint and subcultured with IL-2 and OVA for 7 days in vitro (Fig. 6c-e; Supplementary Figs. 33-35).

Fig. 6 Cytokine secretion in tumour sites and OVA-specific cytotoxic CD8+ T-cell killing. a, b Cytokines in tumour at the endpoint (a, d, e, f: n = 4 independent animals; b, c: n = 5 independent animals; one-way ANOVA followed by Tukey's multiple comparisons post hoc test; IL-2, p < 0.0001). c Schematic representation of the antigen-specific cytotoxic T lymphocyte assay. Splenocytes were obtained from mice at the endpoint and cocultured with CFSE-stained live E.G7-OVA cancer cells or healthy NIH3T3 cells at a ratio of E/T = 10, and the specificity of cytotoxic CD8+ T cells against OVA was analysed using Ghost Dye™ Violet 450 staining and flow cytometry. d Representative flow cytometry plots of splenocytes derived from different mice against E.G7-OVA cancer cells or healthy NIH3T3 cells. e Cytotoxicity of splenocytes derived from different mice against E.G7-OVA cancer cells or healthy NIH3T3 cells (n = 3 independent samples; one-way ANOVA followed by Tukey's multiple comparisons post hoc test; E.G7-OVA, p = 0.0002). All data (a, b, e) are presented as mean + S.D.
NIH3T3 fibroblasts and PC-12 pheochromocytoma cells were used as the controls. The splenocytes from mice treated with (MS@OVAinMOF)@(polyICinMOF) plus a low dose of i.p. anti-PD-1 (group f) show significantly higher cytotoxicity against E.G7-OVA lymphoma cells expressing OVA than those from mice treated with free OVA plus a high dose of i.p. anti-PD-1 (group a), free OVA alone (group b), free OVA plus a low dose of i.p. anti-PD-1 (group c), OVAonMS plus a low dose of i.p. anti-PD-1 (group d), or OVA/polyIConMS plus a low dose of i.p. anti-PD-1 (group e). The splenocytes from mice treated with (MS@OVAinMOF)@(polyICinMOF) plus a high dose of i.p. anti-PD-1 (group i) show the highest cytotoxicity against E.G7-OVA lymphoma among all groups. In contrast, splenocytes from all groups show mild cytotoxicity against NIH3T3 fibroblasts and PC-12 pheochromocytoma cells without significant differences. CD8+ cytotoxic T lymphocytes (CTLs) are the main effector cells in cell-mediated antitumour immunity 31,34. CD8+ cytotoxic T lymphocytes in the spleens of mice immunised with the (MS@OVAinMOF)@(polyICinMOF) cancer vaccine show a higher specific killing ability against E.G7-OVA lymphoma cells expressing the OVA epitope (SIINFEKL).
Biocompatibility of MOF-gated MS.
To confirm the safety profiles, healthy C57Bl/6J mice were subcutaneously administered 1 mg of MS@MOF or MOF, and blood biochemistry and tissue compatibility analyses were carried out (Supplementary Fig. 36). The saline-administered group was used as the control. No obvious hepatic or renal toxicity is observed for MOF-gated MS or MOF, as indicated by alanine aminotransferase (ALT) and aspartate aminotransferase (AST) levels for hepatic function and creatinine (CRE) and blood urea nitrogen (BUN) levels for renal function, compared with saline. Histological sections of the kidney, spleen, heart, liver and lung derived from mice administered saline, MOF or MS@MOF exhibit no significant differences, suggesting no obvious tissue toxicity.
Discussion
Cancer immunotherapies are increasingly recognised as a promising strategy to elicit systemic immune responses and establish wide-spectrum treatment regimens for a variety of tumour types, since they target the immune system rather than the tumour itself 13,35. Combination immunotherapy based on the synergetic effects between cancer vaccines and immune checkpoint blockade therapy can decrease the dose of immune checkpoint blockade to as low as 1/10 while maintaining effectiveness, minimising the possible immunotoxicity and the therapeutic cost. The synergetic effects of suppressing tumour growth and activating antitumour immunity arise from the enhanced immunogenicity of cancer antigens and the activation of antitumour CD8+ T-cell responses owing to the vaccine component, as well as the remediation of the immunosuppressive condition owing to the checkpoint blockade.
The integration of an antigen, immunopotentiator and adjuvant into one particle with a pH switch for locally administered cancer vaccines is considered to enable the codelivery of these components to the same APCs, the colocalisation and retention of the loaded components in lymph nodes, highly efficient antigen cross-presentation, and the maximisation of cancer-antigen-specific T-cell responses, while preventing their entry into the systemic circulation, suppressing the initiation of undesirable stimulation in the blood or tissues, and improving the safety profiles 35,36. The results of this study suggest that MOF-gated MS holds a significantly higher capability than MS nanoadjuvants to facilitate the intracellular uptake of cancer antigens by APCs, deliver the cancer antigens to lymph nodes, enhance antigen cross-presentation, and promote T-cell activation and cytokine secretion. MS nanoparticles with large pores and open channels provide the space to accommodate cancer antigens and immunopotentiators. However, when the molecule-loaded MS nanoparticles are placed in a release buffer or injected into the body, the open porous structure of MS results in the rapid leakage of the loaded components. Benefiting from the protective function of MOF as the gatekeeper, MOF-gated MS greatly prevents the premature leakage of the encapsulated cancer antigens and immunopotentiators at the injection sites, since MOF-gated MS releases small or negligible amounts at a pH of ~7.4. MOF-gated MS, showing a slow release of encapsulated cancer antigens and immunopotentiators in a neutral environment and a rapid release in an acidic environment, greatly promotes the delivery of cancer antigens and immunopotentiators to pivotal APCs, the activation and trafficking of APCs to nearby tumour-draining lymph nodes, the presentation of digested fragments to naïve T cells, the clonal expansion of immune cells such as CD4+ and CD8+ T cells, the cytokine secretion needed to gain helper functions, and thus the eradication of tumour cells.
The synergistic effects between cancer vaccines and checkpoint inhibitory antibodies act together to attack cancer cells and achieve effective therapeutic activity against tumours. Systemic administration of anti-PD-1 Ab can alter the tumour microenvironment by blocking immunosuppressive signals. However, checkpoint blockade cancer therapy only exerts its effects when the tumours are immunogenic in patients, which might explain the low response rate of checkpoint blockade (~10-40%) in clinical trials. The prerequisite for the anti-PD-1 Ab to exert its effect is that a cancer antigen-specific antitumour immune response has been initiated. Thus, external stimulation with cancer vaccines is critical in strengthening the immunogenicity of tumour antigens, stimulating antitumour immunity, enhancing CD4+ and CD8+ T-cell populations and promoting Th1 cytokine secretion. Vaccination using MOF-gated MS encapsulating OVA and polyIC greatly decreases the systemic dose of anti-PD-1 Ab when the vaccination and PD-1 blockade are combined. Under normal circumstances in a healthy body, the immune system can recognise cancer antigens and kill the cancer cells. However, once a tumour occurs in the body, the tumour's immunosuppressive microenvironment obstructs immune recognition and the cancer-immunity cycle 37, owing to the weak immunogenicity of cancer antigens, negative regulatory pathways and other mechanisms. Vaccination using MOF-gated MS encapsulating OVA and polyIC efficiently triggers antitumour immune responses, and at the same time, the administration of anti-PD-1 Ab at a low dose blocks the immunosuppressive pathways. The synergistic effects of MOF-gated MS cancer vaccines and anti-PD-1 checkpoint blockade make it easier to break down the immune equilibrium between promotive and suppressive factors, overcome the activation energy barrier associated with the immunosuppressive tumour microenvironment, and surmount the cancer-immune set point 38. The cancer-immunity cycle is then reinitiated, covering a series of steps that include the capture of cancer antigens by APCs, antigen presentation to T cells, priming and activation of effector T cells, trafficking of effector T cells to tumours, infiltration of effector T cells into the tumour bed, recognition of cancer cells by T cells, killing of cancer cells and release of cancer antigens 37.
Conclusions
In summary, cancer vaccines made from MOF-gated nanoadjuvants in combination with low-dose checkpoint blockade therapy are promising for cancer treatment. Inspired by the superior biomimetic mineralisation encapsulation capability of MOF for biomolecules and the excellent intrinsic immune-shaping properties of MS, we fabricated MOF-gated nanoadjuvants to achieve the targeted delivery of immunologically relevant large molecules to draining lymph nodes and to navigate antitumour immunity. Combining the MOF-gated vaccine with systemic anti-PD-1 Ab administration successfully decreases the dose of anti-PD-1 Ab to 1/10 while maintaining the antitumour effectiveness. Notably, the MOF-gated MS delivery system is expected to be widely applicable to various therapeutic agents, ranging from peptides, nucleic acids, molecular immunopotentiators and chemotherapeutic drugs to imaging contrast agents.
Methods
Physicochemical characterisation. The nanoadjuvants were observed using a field emission scanning electron microscope (FE-SEM, JEOL) after being coated with platinum and using a transmission electron microscope (TEM, JEOL). The hydrodynamic diameter of the nanoadjuvants was analysed with a dynamic light scattering photometer (DLS-8000HAL, Otsuka Electronics). The phases of the nanoadjuvants were analysed using a powder X-ray diffractometer employing CuKα X-rays (Model RINT 2500, Rigaku). The zeta potential of the nanoadjuvants was analysed using a Delta Nano C Particle Analyzer (Beckman Coulter Inc., USA) by dispersing the particles in calcium- and magnesium-free phosphate-buffered saline [PBS(−)]. The nitrogen gas (N2) adsorption-desorption isotherms of the nanoadjuvants were measured by a surface area and porosity analyser (TriStar II, Micromeritics, USA), and the BET specific surface areas and pore size distributions were calculated subsequently. MS was synthesised using a soft-templating method 39. Typically, hexadecyltrimethylammonium p-toluenesulfonate (CTAT, Sigma-Aldrich) and triethanolamine (TEA, Sigma-Aldrich) were added to ultrapure water with stirring at 70°C, and tetraethoxysilane (TEOS, Wako, Japan) was slowly added. The molar ratio of the reaction mixture was 1.00 TEOS : 0.06 CTAT : 0.026 TEA : 80 H2O. The reaction mixture was continuously stirred for 2 h to obtain a precipitate. The obtained product was centrifuged, washed with ultrapure water/ethanol, dried and heat-treated at 550°C. The degradation of the MS@(OVAinMOF) and OVAinMOF samples, contained in a dialysis membrane bag in an acetate buffer (pH = 5) or a Tris-HCl buffer (pH = 7.4) at a particle-to-buffer ratio of 1 mg/mL, was quantitatively analysed after incubation at 37°C by measuring Si and Zn using an inductively coupled plasma atomic emission spectrometer (ICP-AES: SPS7800, Seiko Instruments). The OVA release was determined in an acetate buffer (pH = 5) or a Tris-HCl buffer (pH = 7.4) at a particle-to-buffer ratio of 3 mg/mL at 37°C.
The MS@(polyICinMOF) and polyICinMOF samples were synthesised by the same method as that for the above MS@(OVAinMOF) and OVAinMOF samples, except that the mass ratios of MS:MOF:polyIC and MOF:polyIC were 2.4:0.6:0.5 and 3:0.5, respectively. PolyIConMS was prepared by mixing polyIC solution and MS particles at an MS:polyIC mass ratio of about 3:0.5. The polyIC release was determined in an acetate buffer or a Tris-HCl buffer at a particle-to-buffer ratio of 1 mg/mL at 37°C.
In addition, Tris-HCl buffer supplemented with 10% serum was used as a third type of medium to test the degradation of the nanoadjuvants and the associated release of molecules, using the same protocol as that for the acetate buffer and the Tris-HCl buffer. In the serum solution, ferritin was used as a model biomolecule owing to the spectral overlap between OVA and serum. The release of ferritin was quantitatively analysed by measuring Fe using ICP-AES. The release of polyIC in the serum solution was tested using a StrandBrite™ Green Fluorimetric RNA Quantitation Kit (AAT Bioquest).
The standard solutions of biomolecules for the release experiments, including OVA, ferritin and polyIC, were prepared in acetate buffer, Tris-HCl buffer or Tris-HCl buffer supplemented with 10% serum, according to the corresponding experimental parameters.
Sodium dodecyl sulfate-polyacrylamide gel electrophoresis. OVA-loaded nanoadjuvants, including OVAonMS, MS@(OVAinMOF) and OVAinMOF, were dispersed in PBS(−) at final concentrations of 200 ng/μL OVA and 800 ng/μL particles. Free OVA was used as the control. To prepare the supernatant samples, the OVA-loaded nanoadjuvants were suspended in PBS(−) for 1 h and centrifuged at 13,000 rpm for 10 min. Then, the OVA-loaded nanoadjuvants or supernatant samples were mixed with 2× SDS-PAGE sample buffer, incubated at 50°C for 10 min, loaded into the gel and subjected to electrophoresis at 30 mA for 70 min in 1× Tris-Glycine-SDS running buffer according to the manufacturer's instructions. The gels were visualised by staining with a Rapid Stain Coomassie Brilliant Blue kit.
Cellular uptake and activation of dendritic cells in vitro. Bone marrow-derived dendritic cells (BMDCs) were obtained from mouse femurs 40. After removing red blood cells and depleting cells expressing I-A/I-E, CD4 or CD8 using phycoerythrin-conjugated antibodies, the remaining cells were cultured in RPMI 1640 (Gibco) containing 10% fetal bovine serum and 20 ng/mL granulocyte macrophage colony-stimulating factor (GM-CSF, Bioreagent). The BMDCs were collected on day 9. In all, 2 × 10 5 BMDCs were precultured in a glass-bottom dish or 96-well plate for 6 h. Nanoadjuvants prepared using OVA or fluorescein-conjugated OVA (fOVA, Life Technologies), namely OVAonMS, MS@(OVAinMOF), OVAinMOF, fOVAonMS, MS@(fOVAinMOF) and fOVAinMOF, were added to the BMDC culture media at a particle concentration of 30 μg/mL and an OVA or fOVA concentration of 5 μg/mL. After overnight culture, the BMDCs were stained with Hoechst (Thermo Fisher) for cell nuclei and observed with a confocal laser microscope (Leica). Quantitative analysis of the cellular uptake fluorescence images was performed using ImageJ software. TNF-α and IL-1β in the supernatant were quantified using mouse ELISA kits (BD Biosciences) according to the manufacturer's instructions. To further measure the activation of BMDCs, 2 × 10 6 BMDCs were cocultured in a 24-well plate with free OVA, OVAonMS, MS@(OVAinMOF) or OVAinMOF at a particle concentration of 30 μg/mL and an OVA concentration of 5 μg/mL. After 3 days of culture, the BMDCs were collected using Trypsin-EDTA, blocked with anti-CD16/CD32 Ab (2.4G2, BioLegend) at 1/100 dilution and stained with an Ab against the mouse OVA257-264 (SIINFEKL) peptide bound to H-2Kb, anti-mouse MHC II (I-A/I-E) Ab, anti-mouse CD80 Ab, anti-mouse CD40 Ab and anti-mouse CD197 (CCR7) Ab (BioLegend) at 1/50 dilution. Flow cytometry was carried out using a FACSAria (BD Biosciences, USA). For all flow cytometry experiments, 1-3 million cells per sample were collected for staining; of these, at least 10,000-50,000 cells per sample were used for the flow cytometry analysis. FlowJo software was used to analyse the flow cytometry data.
In addition, to compare the difference between adsorption and encapsulation of fOVA or OVA, fOVA- or OVA-adsorbing MOF (MOF-ad) and fOVA- or OVA-encapsulating MOF prepared by coprecipitation (MOF-en) were prepared and added to the BMDC culture at a particle concentration of 25 μg/mL and an OVA or fOVA concentration of 5 μg/mL.
Antigen retention at the injection sites and particle distribution in vivo. Alexa Fluor 647-conjugated OVA (A647-OVA, Molecular Probes) or MS@(A647-OVAinMOF) was injected into the flank of female C57Bl/6J mice (n = 3 per group; CLEA Inc.) at an A647-OVA dose of 100 μg/mouse and a particle dose of 600 μg/mouse in 100 μL saline. At 6 h, 1 day and 3 days, the mice were imaged using an IVIS imaging system with an excitation wavelength of 580 nm and an emission wavelength of 680 nm. To clearly see the body distribution of the nanoparticles, the main organs (nearby draining lymph node, spleen, lung, heart, kidney and liver) of the mice were collected and observed using the IVIS imaging system on day 1. Living Image software was used to analyse the data. Moreover, ICP-AES measurement was used to quantitatively analyse the targeted distribution of the nanoparticles (Si and Zn contents) in the nearby draining lymph node.
APC-mediated delivery to lymph nodes and cross-presentation of OVA in vivo. Female C57Bl/6J mice (3 mice per group; CLEA Inc.) were immunised by injecting fOVA, fOVAonMS, MS@(fOVAinMOF) or fOVAinMOF subcutaneously into the left flank at particle and fOVA doses of 600 μg/mouse and 100 μg/mouse, respectively. The immunised mice were killed 16 h later, and the nearby draining lymph nodes were collected. Cryostat sections of the lymph nodes were prepared, stained with DAPI and observed using a fluorescence microscope (Olympus BX51) with a highly sensitive camera (Olympus DP74). Fluorescence images were acquired under identical parameter settings, and quantitative analysis of the fluorescence images was performed using ImageJ software. Moreover, the draining lymph nodes were collected, milled, vortexed and filtered through a 40-μm cell strainer to obtain a single-cell suspension. The single-cell suspension was washed with PBS(−) containing 0.5% bovine serum albumin (BSA). Non-specific staining was prevented by blocking the cells with anti-CD16/CD32 Ab (2.4G2, BioLegend) at 1/100 dilution. The cells were stained for 30 min with anti-mouse CD11c Ab and an Ab against the mouse OVA257-264 (SIINFEKL) peptide bound to H-2Kb (BioLegend) at 1/50 dilution. Flow cytometry was performed using a FACSAria (BD Biosciences, USA).
Combination cancer immunotherapy. First, live E.G7-OVA cells (2 × 10 5 cells mouse −1) were injected subcutaneously into the right flank of sixty-three female C57BL/6 mice (7 mice/group; 6 weeks old, CLEA Inc.). On days 3, 7, 14 and 21 post-tumour inoculation, the mice were divided into nine groups (groups a-i, as described above) and injected with the corresponding formulations in 100 μL saline; for example, group a received free OVA (100 μg/mouse) plus a high dose of i.p. anti-PD-1 Ab.

Cytokine contents in tumour sites and spleen. At the endpoint of prophylactic and combination cancer immunotherapy, the tumour sites and spleen were excised and lysed with a T-PER tissue protein extraction reagent (Thermo Fisher Scientific), and the amounts of cytokines in the tumour sites were quantified using mouse ELISA kits (BD Biosciences) according to the manufacturer's instructions.
Analysis of antigen-specific T-cell populations. At the endpoint of prophylactic and combination cancer immunotherapy, splenocytes were collected from the spleen, milled, vortexed and filtered through a 40-μm cell strainer to obtain a single-cell suspension. Anti-CD16/CD32 antibody (2.4G2, BioLegend) at 1/100 dilution was used to prevent nonspecific staining. Anti-mouse CD8α Ab (BioLegend) and T-Select H-2Kb OVA Tetramer-SIINFEKL (MBL) at 1/50 dilution were used to stain the cells for 30 min. Then, intracellular cytokines were stained with anti-mouse IFN-γ Ab (BioLegend) at 1/50 dilution. Flow cytometry was performed on the cell suspensions using a FACSAria cell cytometer (BD Biosciences).
Specific cytotoxic T lymphocyte assay for E.G7-OVA cancer cells. At the endpoint of combination cancer immunotherapy, mice from all groups were killed to harvest splenocytes. After 7 days of splenocyte subculture with 40 ng/mL mouse IL-2 and 20 μg/mL OVA, the splenocytes were cocultured with 5-(and-6)-carboxyfluorescein diacetate succinimidyl ester (CFSE, Dojindo)-stained live E.G7-OVA cancer cells or NIH3T3 fibroblasts at an effector-to-target (E/T) cell ratio of 10. In addition, splenocytes from group f mice were cocultured with CFSE-stained live E.G7-OVA cancer cells or PC-12 cancer cells at E/T ratios of 0, 5, 10 and 20. The cells were then stained with Ghost Dye™ Violet 450 (Bay Bioscience) 24 h later. The cytotoxicity of the splenocytes against E.G7-OVA cancer cells, NIH3T3 fibroblasts and PC-12 cancer cells was analysed using a FACSAria cell cytometer (BD Biosciences). The cytotoxicity was calculated by the following formula: cytotoxicity = (total dead target cells − spontaneous dead target cells)/(total target cells − spontaneous dead target cells) × 100%.
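The cytotoxicity formula above translates directly into code; the cell counts used in the example below are hypothetical:

```python
# Direct transcription of the cytotoxicity formula from the Methods above;
# the cell counts in the example are hypothetical.

def cytotoxicity(total_dead_targets, spontaneous_dead_targets, total_targets):
    """Percent specific lysis of CFSE-stained target cells."""
    return ((total_dead_targets - spontaneous_dead_targets)
            / (total_targets - spontaneous_dead_targets) * 100.0)

print(cytotoxicity(total_dead_targets=4200,
                   spontaneous_dead_targets=600,
                   total_targets=10000))  # ~38.3% specific killing
```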
Biocompatibility of nanoadjuvants. To examine the in vivo safety, saline, MOF-gated MS or MOF (1 mg/mouse in 100 μL saline) was subcutaneously injected into the left flank of C57BL/6J mice (5 mice per group; CLEA Inc.). Mice were euthanised 3 days later, and blood was harvested for haematology analysis. The organs, including kidney, spleen, heart, liver and lung, were collected, fixed in 10% neutral buffered formalin solution (Wako), embedded in paraffin and stained with haematoxylin and eosin.
"year": 2020,
"sha1": "7f06a6b0a0815b9039e9ddb234a4f9207ad72670",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41467-020-17637-z.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b28acf14a25c561d4eb08ac6a0f6f472d8d8224f",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
Reversible Image Processing using 'Magical Triangle'
The original images are converted into reversible matrix values by recovering the pixel values using a payload technique. The payload is used to calculate the pixel values for reversible data hiding, converting the image pixels into numeric values. The matrix form is generated by converting an image with a matrix algorithm. Data embedding converts the values into matrix pixels: given an input value, the matrix is formed from that value. The combination of the two yields the reversible matrix form.
INTRODUCTION
Data hiding techniques are young and fast-growing techniques in digital image processing. About 90% of the publications in the past six years belong to this highly multi-disciplinary field, which combines image and signal processing with cryptography, steganography and watermarking. Data hiding techniques have attracted tremendous interest from industry and from various government and military applications. Data hiding is imperative for digital images, authentication, fraud detection and cryptography. Data hiding methods can be grouped into three categories: lossless compression methods, difference expansion (DE) methods and histogram modification (HM) methods.
Data hiding is based on the 'magic triangle' trade-off between capacity, invisibility and robustness. The extraction of the hidden information is only possible after certain operations.
Fig.1 Properties of DH
The embedded information cannot be viewed without a secret key for the hidden images. Data hiding techniques provide high security for personal, private and confidential data and prevent its misuse.
Reversible data hiding (RDH) is a branch of data hiding. These techniques are widely used to recover the original image without any distortion from the hidden (marked) image, typically exploiting the zero or minimum bins of an image histogram. RDH is also called lossless data embedding into digital images in a reversible manner. Several RDH techniques have been proposed over the years, including LSB modification, histogram-shift-based methods, prediction-error methods and vector quantization techniques. The data embedding process usually helps to prevent permanent loss of the digital image.
RELATED WORK
A lossless data embedding (LDE) technique restores the host medium without any distortion after extraction. It utilises the arithmetic average of difference histogram (AADH). The generalized statistical quantity histogram (GSQH) is used to handle the overflow and underflow problems during embedding and extraction. At the sender side, the host image is processed with GSQH before going to embedding-region selection. The extraction output then goes to a subsidiary channel, which is divided into two parts: one handles the overflow and underflow problems, and the other carries the side information. In the last step, the image is embedded and extracted [1].
Reversible data embedding reconstructs the original picture by means of a payload technique. High-capacity embedding was developed, in which payload bits are embedded depending on the condition of sets of pixels; the payload technique is subject to capacity restrictions. Visual quality is another important criterion of reversibly embedded information, and the original image can be restored with high quality in digital images [2].
The block diagram of reversible data hiding shown below explains how the host image is embedded with the secret data and how the original image is then extracted at the receiver end.
DATA HIDING TECHNIQUES
The data hiding capacity is very high in the reversible watermarking algorithm developed for color images. The stored data are extracted and the original image is recovered after the procedure is reversed. Above all, to avoid overflow and underflow of the extra data, the algorithm is applied separately to each color component. A reversible component transform (color conversion) is applied to address the substandard performance of the spatial algorithm. Quadrangle-based spatial embedding, applied recursively across the color components, would be satisfactory for most applications [3].
Difference expansion (DE) has been reformulated as a transform of well-organized pixel pairs. It enables pre-estimating the embedding distortion of a given pixel block, so blocks with less distortion can be preferentially chosen to embed information. It can provide a large embedding rate in a single embedding pass. Embedding is applied to blocks of arbitrary size using an integer transform. The payload depends on a location map, which occupies only a small part of the payload. The method compares favourably with other existing schemes [4].
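To make the difference-expansion idea concrete, below is a minimal Python sketch of Tian-style DE for a single 8-bit pixel pair; the location map and the overflow/underflow checks used by the full scheme in [4] are deliberately omitted, so this is an illustration rather than the exact published algorithm:

```python
# Minimal sketch of Tian-style difference expansion for one 8-bit pixel pair.
# The location map and overflow/underflow checks of the full scheme are omitted.

def de_embed(x, y, bit):
    """Embed one payload bit into the pair (x, y); returns the marked pair."""
    l = (x + y) // 2            # integer average, invariant under embedding
    h = x - y                   # difference to be expanded
    h2 = 2 * h + bit            # expanded difference now carries the bit
    return l + (h2 + 1) // 2, l - h2 // 2

def de_extract(x2, y2):
    """Recover the payload bit and the original pair from a marked pair."""
    l = (x2 + y2) // 2
    h2 = x2 - y2
    bit, h = h2 & 1, h2 // 2    # floor division matches the embedding step
    return bit, l + (h + 1) // 2, l - h // 2

marked = de_embed(206, 201, 1)       # h = 5 expands to h' = 11
print(marked, de_extract(*marked))   # (209, 198) (1, 206, 201)
```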
Reversible data embedding allows embedding a comparatively large amount of data into an image in such a way that the original image can be restored. Two techniques are established. The first uses Sweldens' lifting scheme on the least significant bit (LSB) plane, with the control information carried in the most significant bit planes; the quality of the LSB approximation greatly improves the performance of the process. The second is Tian's technique, which achieves a high maximal capacity over multiple passes. Its distortion is low when embedding small messages, and it automatically produces sufficient capacity to embed the desired payload [5].
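A minimal sketch of plain LSB-plane substitution helps to show why the lifting-based and DE-based schemes above are needed: straightforward LSB replacement is not reversible, because the overwritten cover bits are lost. All names below are illustrative:

```python
import numpy as np

# Minimal sketch of plain LSB-plane substitution. Note that this is NOT
# reversible -- the overwritten cover bits are destroyed -- which is exactly
# why RDH resorts to expansion or histogram shifting instead.

def lsb_embed(cover: np.ndarray, bits: np.ndarray) -> np.ndarray:
    flat = cover.ravel().copy()
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits  # clear then set the LSB
    return flat.reshape(cover.shape)

def lsb_extract(stego: np.ndarray, n_bits: int) -> np.ndarray:
    return stego.ravel()[:n_bits] & 1

cover = np.array([[120, 121], [122, 123]], dtype=np.uint8)
stego = lsb_embed(cover, np.array([1, 0, 1, 1], dtype=np.uint8))
print(stego.tolist(), lsb_extract(stego, 4).tolist())
# [[121, 120], [123, 123]] [1, 0, 1, 1]
```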
In virtually all current data hiding techniques, the host image is inevitably and permanently distorted by some small amount of noise due to data hiding. Palette images appear in steganography, where the stego image looks the same as the original. The maximal capacity of concealed information is a powerful tool for achieving a variety of real-time tasks. Palette images are used to store confidential records, for example in medical images. The scheme guarantees that the appearance of the stego image is indistinguishable from the original color image despite the distortion introduced to the host image during data hiding [6].
Joint photographic experts group (JPEG)-LS based pixel-value prediction reduces the distortion caused by hiding the secret data. Reversible difference expansion increases the payload of images with the help of reversible secret data. The secret data can be conveyed by a pair of pixels: the difference value of two successive pixels is modified to embed one secret bit [8]. The original image is recovered with the help of difference expansion and base selection; the embedding base and the prediction error of the JPEG-LS predictive function are combined in a difference-expansion embedding scheme. The data embedding procedure outputs a stego image embedded with the secret data [7].
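The JPEG-LS predictive function referred to above is the median edge detector (MED); a minimal sketch of how a prediction error is formed from it is given below, with hypothetical pixel values:

```python
# Minimal sketch of the JPEG-LS median edge detector (MED) predictor; a, b
# and c are the left, upper and upper-left causal neighbours of the pixel.

def med_predict(a: int, b: int, c: int) -> int:
    if c >= max(a, b):
        return min(a, b)    # edge detected above or to the left
    if c <= min(a, b):
        return max(a, b)
    return a + b - c        # smooth region: planar prediction

# Prediction error for a pixel x given hypothetical neighbours.
x, a, b, c = 133, 128, 131, 129
print(x - med_predict(a, b, c))  # 133 - (128 + 131 - 129) = 3
```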
Reversible data hiding is an effective method to provide image authentication or copyright protection. Since it is reversible, the hidden data can be extracted and the original image can be recovered without any distortion.
Most methods distort the cover media in the process of inserting hidden data, without being able to retrieve the host medium once the message extraction occurs. However, the reversible recovery of cover media plays a vital role in applications such as medical imagery, law enforcement and fine-art fields. This paper presents a simple and effective reversible data hiding scheme for medical imagery. Experimental analysis shows that the proposed process is capable of providing a highly pure payload, resulting in no noticeable distortions [9].
SYSTEM ARCHITECTURE
In RDH methods, a buffer can be made to accommodate the encrypted data provided it is compressible. However, the capacities are not high. The payload of this method is low because each block is limited to carrying one bit.
Here, the optimal role of data hiding in RDH is identified. This paper attempts to analyse the various techniques used in steganography and to identify areas where these techniques can be applied. Apart from this, the paper also highlights reversible data hiding for message extraction and image reconstruction. To evaluate the performance of RDH, a few metrics are considered, of which the hiding rate and the marked image quality are the most important.

Fig.3 System Architecture

Fig.3 shows the proposed steps for the RDH system to find the predictive errors of the images. The prediction error is calculated over a 3x3 neighbourhood with 10 pixel points. When histogram shifting is applied, the input image is converted into an embedded image and saved. The encrypted image can be viewed using a secret key for the conversion of the encrypted image into the original recovered image [10].
ARCHITECTURE DIAGRAM
The RDH technique introduces distortion. The distortion is reduced by the predictive error values of the images via a histogram shifting algorithm, which also increases the quality of the embedded images and the payload. A histogram modification mechanism can be implemented as a difference between subsampled images and the prediction error of the host pixels. Several other effective prediction approaches have been found to improve the performance of reversible data hiding. Here, the payload is usually in the form of a binary sequence, and each block can only carry one bit at a time. The payload option provides fragile image authentication [11].
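A minimal sketch of the histogram-shifting embedding step described above is given below; the selection of the zero bin, the side information needed by the decoder and the extraction pass are simplified, so this is illustrative rather than the exact scheme of [11]:

```python
import numpy as np

# Minimal sketch of histogram-shifting embedding: pixels strictly between the
# peak bin and an empty (zero) bin are shifted by one grey level to vacate the
# bin next to the peak, and payload bits are written into peak-valued pixels.
# The decoder needs the (peak, zero) pair as side information; its handling,
# and the case where no zero bin exists, are omitted here.

def hs_embed(img: np.ndarray, bits):
    hist = np.bincount(img.ravel(), minlength=256)
    peak = int(hist.argmax())
    zero = peak + 1 + int(hist[peak + 1:].argmin())  # assume an empty bin above the peak
    out = img.astype(np.int32)
    out[(out > peak) & (out < zero)] += 1            # shift to free the bin peak + 1
    flat = out.ravel()
    for idx, bit in zip(np.flatnonzero(flat == peak), bits):
        flat[idx] += bit                             # peak -> bit 0, peak + 1 -> bit 1
    return flat.reshape(img.shape).astype(np.uint8), peak, zero

rng = np.random.default_rng(0)
img = rng.integers(100, 110, size=(8, 8), dtype=np.uint8)
stego, peak, zero = hs_embed(img, [1, 0, 1])
print(peak, zero, int(np.count_nonzero(stego == peak + 1)))
```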
The embedded images that have gone through data embedding are obtained along with their respective error values. The data extraction process is the reverse of the data embedding process, and the secret data is extracted from the encrypted image in reverse order using AES (Advanced Encryption Standard). After that, the original image is extracted using the Blowfish algorithm. The same process can be applied to videos and digital images for successful image recovery.
FUTURE ENHANCEMENT
The proposed model can be further enhanced with certain provisions. The MLSB technique is applicable post-embedding when there is a need to keep the pixelated image as close to the original as possible. It could also be of use in networking, where the exchange of keys happens through a secure channel. Lastly, in order to eliminate any distortion produced by the two keys in reversible data hiding, a superior three-key process could be followed to help attain a higher-quality image.
CONCLUSION
The reversible data hiding method is demonstrated using the magic-triangle matrix algorithm. This work demonstrated data encryption with reversible data hiding in images. Two main processes are used for data encryption: the AES and Blowfish algorithms. The proposed system also covers digital images and moving pictures. A receiver can decrypt an encrypted image with embedded data using an encryption key; the decrypted image is similar to its original reference. With the data hiding key, the spatial correlation in natural images can be accurately extracted while still recovering the original image from the given encrypted image. To ensure precise data extraction and recovery, the process might allow the block length to take a higher value. In addition, an error correction mechanism can be introduced before data hiding to help secure the additional data at the cost of payload reduction.
"year": 2021,
"sha1": "6d887970731137638261cb88b9cf53afb2751539",
"oa_license": null,
"oa_url": "https://doi.org/10.5120/ijca2021921011",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "9829119e2ebd01150e34aeef422c856d1908fbef",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Computer Science"
]
} |
Role of Micronutrients Towards Crop Productivity under Biotic and Abiotic Stresses: A Review
In the current scenario, the world's food system is highly insecure and has become a great challenge for the rapidly growing population. Agriculture is intricately tied to food security and the sustainability of life on earth. The ongoing climatic alteration forces plants to compete for nutrients from the soil. Crop yield and nutritional quality are highly influenced by environmental factors (biotic and abiotic stress), agronomic factors, pests and nutrient availability in the soil. Nanoparticles can monitor the elements in soil by sharing signals. Micronutrients help to maximize plant growth and protect plants from diseases and pathogens. However, the alteration of environmental conditions is largely responsible for nutrient limitation and growth inhibition. About 35% of the world's agricultural lands are losing their soil fertility, which leads to nutrient deficiency in plants. Insufficient crop nutrients lower the yield and nutritional quality of food and affect human health. In the reproductive stage of plant growth, environmental stress is highly responsible for flower drop, pollen tube deformation, ovule abortion, pollen sterility and yield losses. A sufficient quantity of micronutrients can help to reduce the biotic and abiotic stress in agricultural crops. The current review provides a brief overview of the current features of micronutrients in agricultural crops, focusing on the mechanism, absorption, toxicity and deficiency of micronutrients in plants and how they secure our food system by increasing yield and nutritional quality.
The rapidly growing modern civilization drastically changes our climatic scenario and disturbs the natural balance required for survival on the earth (Mrabet, 2023). Multiple human activities have warmed the atmosphere, oceans and land (Steela et al., 2022). These activities also increase carbon dioxide emissions and other greenhouse gases (GHGs). Agroeconomic conditions are also highly affected, threatening food security worldwide (Kumari et al., 2022). Environmental stresses are the foremost factors causing major losses in crop plant growth, quality and yield. Biotic stress includes the attack of various pathogens, such as fungi, bacteria and harmful insects, that directly target their hosts' nutrients, which can lead to the death of crop plants (Alessandro and Daniela, 2023). Abiotic stress is entirely different from biotic stress; these factors are the major yield-limiting factors for crop plants, harshly affecting crops through environmental conditions such as temperature, drought, heavy metals, floods and salinity (Ullah et al., 2021) (Fig 1). These stresses increase CO2 emissions and soil salinization and destroy soil quality, which can lead to total cultivation failure (Yang et al., 2023). Due to abiotic stresses, one-third of the world's arable lands are losing their fertility, causing about 50% yield losses in major food crops (Godoy et al., 2021). These increasing environmental factors greatly limit the plant growth, yield and seed quality of crops. Environmental alteration disbalances the micronutrient quantity and decreases soil quality (Zhang et al., 2023). Micronutrients are inorganic minerals that are absorbed by plant roots as ions in soil water and build a healthy plant. Boron, chlorine, copper, iron, manganese, molybdenum and zinc are the most essential minerals required for whole-plant growth and development. These micronutrients are widely involved in plants' biological functions, such as photosynthesis, respiration, chlorophyll synthesis, nitrogen fixation, nutrient uptake mechanisms and DNA synthesis (Gui et al., 2022). However, due to increasing biotic and abiotic stress, the micronutrient supply is gradually limited, restricting the quality of plant growth and production (Bolaji Umar et al., 2022) (Table 1). Micronutrients like Cu, B, Fe, Mn, Mo, Ni, Se and Zn activate scavengers of reactive oxygen species (ROS) (Tavanti et al., 2021). These act as antioxidants in plants and are vital for soil fertility and crop yield (Dumanovic et al., 2021). Plants are stimulated by external environmental stresses and then generate appropriate cellular responses (Prusty et al., 2022). Deficiency and excess of micronutrients cause abnormalities in development, yield and metabolism, or even the death of the plant (Fig 5). In a mild or short-term stress environment the plant can recover from injuries, but under severe stress the crop plant cannot survive, as flowering and seed formation are prevented and stress signals are induced (Zheng et al.,
2023). The available data show that the interaction of different pathogens with multiple hosts can increase virulent strains (Stevens et al., 2021). Therefore, various techniques have been developed to improve plant performance, such as varietal modification, exogenous supplementation of beneficial elements, growth-promoting hormones, advanced disease and pest management techniques, enzymes, and nutrient management, all aimed at developing stress-resistant plants. Among these techniques, nutrient management/regulation is eco-friendly and cost-effective (Kumar et al., 2022).
Types of micronutrients and their role in plant growth
In plant science, micronutrients are essential for various biological functions in plants such as nutrient regulation, fruit and seed development, reproductive growth, chlorophyll synthesis, plant metabolism, and carbohydrate production. Nutrient status is strongly influenced by the availability of minerals and heavy metals in the soil (Boudjabi and Chenchouni, 2023). Previous research shows that micronutrient deficiency in plants is gradually increasing and limiting plant growth and production. Plants require a specific amount of nutrients for healthy development, but increasing climatic stresses alter the micronutrient content of soil (Assuncao et al., 2022). An excess of micronutrients in the soil is proven toxic for plant cultivation and human health, while a low concentration decreases plant growth and limits productivity (Chrysargyris et al., 2022) (Fig 2). Acidic soils generally contain sufficient available micronutrients. Globally, zinc and boron deficiencies harshly disturb the productivity of crops such as maize, rice, and wheat (Dwivedi et al., 2023). Micronutrients regulate a plant's ROS (reactive oxygen species) scavenging system, which involves the enzymatic and non-enzymatic antioxidant mechanisms of plants. ROS are generated in plant cellular metabolism under light regulation and increase antioxidative activity (Zandi and Schnug, 2022). Cobalt is a mineral present in plants as a constituent of vitamin B12; it reduces the transpiration rate to increase growth and regulates plant water utilization (Gombart et al., 2020). Nickel is another essential nutrient, required in only very small amounts to build a healthy plant. Molybdenum is a cofactor of more than 60 metalloenzymes and proteins and enhances the total chlorophyll concentration in plants (Zhang and Zheng, 2020). Zinc induces several biochemical reactions in photosynthesis and is represented in all six classes of enzymes. Boron is an essential nutrient responsible for flower and fruit formation, pollination, and seed formation, and is involved in cell wall synthesis and other biological/cellular functions (Matthes et al., 2020). Unlike other micronutrients, copper is required to develop different organelles in plants, being involved in important biological processes and participating in oxidation-reduction reactions (Atri et al., 2023). Iron (Fe) deficiency reduces chlorophyll production, which causes interveinal chlorosis; Fe is also involved in plant respiratory and photosynthetic reactions (Li et al., 2021). The main symptom of Mn deficiency is likewise interveinal chlorosis, the complete yellowing of the young leaves (Santiago et al., 2020). Copper is needed for chlorophyll production, respiration, and protein synthesis; it also intensifies flavour and colour in vegetables and colour in flowers. Symptoms of Cu deficiency include chlorosis in younger leaves, delayed maturity, stunted growth, lodging, and melanosis (Laporte et al., 2020). Rapidly increasing abiotic/biotic stresses are responsible for micronutrient deficiency in the soil, to the detriment of crops and their production (Table 2).
Role of stresses in plant growth and productivity
Climate variation affects crop yield and productivity by altering plant metabolic homeostasis and modifying source-sink relationships. Under stress conditions, modification of the source-sink relationship involves two processes: (a) premature leaf senescence and yellowing, which degrade chlorophyll and its components, and (b) decreased consumption in the sink tissues, which causes accumulation of assimilates in the source leaves, suppressing photosynthesis (Shafi and Zahoor, 2020). Stresses affect multiple plant functions, such as gene expression, cellular metabolism, growth rates, and crop yields. Plants basically respond to two types of stresses, abiotic and biotic. These stresses affect soil fertility in different ways and decrease agricultural production worldwide by 20%-70% (Suleiman et al., 2021). Abiotic and biotic stress share the common feature of enhanced ROS production, which causes nutrient and water deficiency and alters soil pH, temperature, oxygen supply, and mechanical pressure, injuring plants. Under abiotic stress, plants also become infected by pathogens such as bacteria, fungi, viruses, and nematodes, and are attacked by herbivorous pests. Environmental factors like soil pH and moisture directly affect the soil microbes that help decompose soil organic matter (Fan et al., 2021). Drought stress and fungal infection impair root system architecture (RSA), reducing total root length, the number of root tips, and the magnitude of root branching (Xiong et al., 2021). Drought stress mainly reduces mass flow and micronutrient uptake by roots; it also limits the diffusion rate of nutrients in the soil toward the roots (Guarnizo et al., 2023). Under such stress, plants become contaminated, nutrient transport to the shoots is disturbed, and active transport, transpiration flux, and membrane permeability are limited. Previous studies show that drought increases Mn and Cu and decreases Fe content. Plant nutrient and physiological responses are both genotype-dependent under drought stress (Suleiman et al., 2021). Drought and different heavy metals like Ni, Cu, Co, and Cr are responsible for limiting the growth of red maple, altering the xylem structure and hydraulic conductivity (Muhammad et al., 2021). Huang et al. (2022) reported that the presence of excess organic matter with high pH in the soil is largely responsible for manganese deficiency in plants.
Salinity or salt stress is mainly responsible for nutritional disorders in plants; it adversely affects the availability of essential nutrients and thus crop productivity and quality (Gupta et al., 2021). Cold stress delays the germination of rice while enhancing starch metabolism, respiration rate, and the antioxidative defense system (glutathione) and lowering lipid peroxidation. This stress induces ionic and osmotic stress, which produce ROS in plants. High light and temperature stress cause ROS to accumulate by damaging membranes and driving photorespiration (Anderson et al., 2021). Flooding is another factor that causes hypoxia, programmed cell death, and oxidative stress in plants (Leon and Gayubas, 2020); it can also inhibit the nutrient uptake and metabolism needed for healthy growth. UV radiation causes morphological changes, inhibits growth and photosynthesis, and changes the ion permeability of the thylakoid membrane and the level of pigments (Nassour and Abdulkarim, 2021) (Fig 4). Biotic stress-causing agents are weeds, insects, fungi, bacteria, viruses, herbivores, and other plants. This stress induces a hypersensitive reaction that causes physiological and biochemical changes in the plants (Chaudhary et al., 2022). Almeida et al. (2019) reported that over 80,000 fungal species are responsible for various plant diseases. Various pathogens cause wilt, leaf spot, root rot, or root damage in plants. Insects cause severe physiological damage in plants, affecting stems, leaves, bark, and flowers. Insects also act as carriers of various viruses and bacteria, which they may transmit from infected plants to healthy plants (Kolliopoulou et al., 2020). Weeds severely damage flowers and reduce the crop productivity of plants.
Physiological activity of altered micronutrients in plants under stress
Micronutrients are absorbed and accumulate in plants through various mechanisms; they are converted to more soluble ionic forms and are taken up by specific/non-specific transporters (Pasala et al., 2022). Alteration of micronutrient levels is harmful to human health, and mineral deficiency causes yield reduction and improper plant growth. Recently, the WHO reported that every year more than 10 million people die because of micronutrient deficiency (Venkatesh et al., 2021). Under biotic and abiotic stresses, micronutrient limitation decreases the resistance of plants (Kumari et al., 2022). These stresses increase atmospheric CO2, which can change the photosynthetic rate of plants; the altered photosynthetic rate reduces plant growth and decreases the nutritional quality of crops (Huang et al., 2022) (Fig 5). Physiological activities such as photosynthesis and gas exchange, nutrient translocation, the transcriptional activity of genes, transposable elements, cell death, changes in cell wall composition, lipid signalling, metabolites, proteins, and the antioxidant profile can all change during stress. Plants can improve their nutrient uptake by increasing soil mineral availability through interaction with rhizosphere microorganisms (Zahra et al., 2021). The uptake, storage, mobilization, and translocation of micronutrients increase the seed micronutrient content, coordinated by the regulation of many genes. A recent report has shown that zinc and iron content in grains can be increased through two chromosomal regions associated with quantitative trait loci (Calayugan et al., 2020). A proper understanding of plant nutrient distribution and its mechanisms can improve plant growth and food sources and reduce human malnutrition.
Micronutrient consumption by plants and its effects on human health
The World Health Organization (WHO) has reported that, in human beings, micronutrients are present in the form of vitamins and minerals, and that human metabolism requires about 40 micronutrients for a healthy diet (https://www.who.int/health-topics/micronutrients#tab=tab_1). Nutritious food can improve infant, child, and maternal health, make pregnancy and childbirth safer, strengthen the immune system, and lower the risk of non-communicable diseases (Behera et al., 2022). Vitamins and minerals produce energy and balance body fluids inside the human body; they are also essential for immune function, blood clotting, growth, and bone health (Alagawany et al., 2020). Micronutrient deficiency strongly affects the DNA synthesis process and leads to various chronic conditions such as cancer and congenital malformations in pregnancy (Berger et al., 2022). Excessive concentrations of harmful minerals in soil limit crop production and nutritional quality and also affect human health. Sarangi et al. (2022) reported that excess quantities of manganese (Mn) and aluminium (Al) damage about 40% of the world's agricultural land by producing acid soil, which is highly toxic for crops and their production. Kumar et al. (2022) recently reported that micronutrients can prevent genome mutations and protect the genome by modulating transformation in cellular processes. Micronutrients have antimutagenic potential, and as antimutagenic agents they can stabilize the genome (Mishra et al., 2022).
CONCLUSION
Due to fluctuating climate conditions, plants lose their genetic potential for healthy growth and reproduction. Both abiotic and biotic stresses generate stressful conditions and severe damage in plants, representing a new challenge for crop improvement in plant science. The interaction of stresses and their impact on plants is known as the "disease triangle". The interaction of stresses may affect plant growth negatively or positively. A sufficient micronutrient supply can develop healthy plants and secure food resources, but climatic variation causes micronutrient deficiency and limits crop productivity worldwide. Therefore, in recent years researchers have focused on global food conservation and developed iron- and zinc-rich biofortified foods and low-cadmium rice. Agronomic practices have also been developed that can decrease the accumulation of arsenic or cadmium in rice grains. Deficiency of plant nutrients could be reduced by the supply of mineral fertilizers or by the cultivation of genetically modified (GM) crops with higher metal concentrations. The 'climate-crop disease' model, breeding, and genetic manipulation are the most efficient and reliable techniques for healthy plant growth and production under biotic and abiotic stress conditions. To achieve greater success, further scientific research on this topic is required.
Fig 1: Different types of environmental stresses and their causative agents.
Fig 2: Action of excess and limited micronutrients on plants and human health.
Fig 3: Micronutrient benefits on plants. Here M, B, Cu alleviate heavy metal stress and UV radiation; Cu, Mn, Zn alleviate biotic and abiotic stress; Co, Ni, Fe increase plant growth and yield; and Mo, Co, Fe protect from insect/pest disease.
Fig 4: Different functions of micronutrients on a plant's organelles.
Fig 5: Plant root growth and yield under fewer micronutrients in the soil.
Table 1: Types of micronutrients and their mode of action on plants (e.g., co-factor in plant reactions, chloroplast production, enzyme activation).
Table 2: Concentration of different micronutrients present in soil.
"year": 2023,
"sha1": "1931ae8c07f378a6de30bff37aa8e6942b5e9731",
"oa_license": "CCBY",
"oa_url": "https://arccarticles.s3.amazonaws.com/OnlinePublish/Final-article-attachemnt-with-doi-BKAP634-6089603e9087340b9bdcbbb1.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "841235ce3cac9d7935b18ff6a4614e10e4b2da79",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": []
} |
The influence of surface imperfections on phosphate coating performance of nodular cast iron substrates
The service performance of the nodular cast iron wheel hub was modelled by investigating the effect of surface morphology and characteristics on the phosphate coating size, morphology, and corrosion resistance of zinc phosphate conversion-coated cast iron substrates. The machined and unmachined surfaces as well as the coatings were examined by a scanning electron microscope coupled with an energy-dispersive X-ray spectrometer (SEM-EDS). The influence of the substrates' surface imperfections on the phosphate coating and the subsequent corrosion resistance was assessed and rated according to standard procedures. Surface analysis of the machined and unmachined cast iron hubs reveals the existence of graphite and foreign material inclusions on the substrate surface that impact the phosphate coating properties and resistance to corrosion. The average phosphate coating crystal size is 1.74 µm and 2.58 µm for the unmachined and machined cast iron substrates, respectively. The corrosion resistance of the coated unmachined wheel hub surfaces was rated poor and rejected based on the application requirements. The poor corrosion resistance was ascribed to the influence of the substrates' surface characteristics on the coating adhesion to the substrate. Cast surfaces should therefore be properly shot-blasted to remove any adhering foreign materials from the as-cast skin to enhance the coating adhesion.
Introduction
A crucial element of the steering system is the wheel hub assembly, an automotive part used in most vehicles: passenger vehicles and light and heavy trucks. The hub maintains the wheel's connection to the vehicle while allowing unfettered rotation. The wheel hub assembly sits at the centre of each wheel, where the wheel bearings are located. Without wheel bearings, the wheels of a vehicle would not revolve smoothly, which is the major purpose of the wheel hub assembly. The truck wheel hub forms the inner profile of the tire, supports the barrel of the tire, and is assembled at its centre on the wheel hub shaft. Along with bearings, the hub assembly also houses the speed sensors that manage the vehicle's anti-lock braking system. Without a properly functioning hub assembly, the vehicle's anti-lock braking system and traction control would be compromised, making the automobile unsafe to drive [1]. The bearings, hub, rotor, seal, and drive shaft itself make up most of the vehicle wheel hub assembly. Each of these components must work in unison with the others to manage the forces during braking and ensure safe handling of the vehicle in regular use. A typical wheel hub assembly for a truck is shown in Figure 1. Wheel hubs for trucks are usually manufactured from cast iron because of its inherent properties and manufacturability. Cast irons are multicomponent ferrous alloys that have found application in several engineering components [2-6]. The major constituents of cast iron are iron, carbon, and silicon, occasionally with various minor alloying elements. Because of the higher carbon content compared with steel, the structure of cast iron exhibits a richer carbon phase. Cast iron can solidify into either the stable iron-graphite system or the thermodynamically metastable Fe-Fe3C system, depending principally on composition, cooling rate, and melt treatment. Iron carbide (Fe3C) is the carbon-rich phase in the eutectic when the metastable solidification path is taken; when the stable solidification path is taken, graphite is the carbon-rich phase. Based on graphite morphology, cast irons can be classified as flake (lamellar) graphite, spheroidal (nodular) graphite, compacted (vermicular) graphite, and temper graphite irons [7,8]. Among the different classes of cast iron, spheroidal graphite iron (also known as ductile or nodular graphite cast iron) is typically used to manufacture truck wheel hubs due to its superior mechanical properties compared with other cast irons (Table 1). Nodular iron has excellent castability, superior damping capacity, and a good combination of strength, ductility, and toughness [9]. Over several decades, the automotive industry has used spheroidal graphite cast iron more and more frequently. It is more ductile and has higher fatigue strength than grey cast irons, and its static strength is comparable to cast steel. Its excellent castability and machinability, combined with its medium level of stress resistance, make it a viable solution for safety-critical applications and components. Typical applications of nodular cast iron in the automotive industry include wheel hubs, brackets, main covers, differential cases, steering knuckles, and other mechanical power transmission equipment.
Table 1. Typical properties of various pearlitic cast irons [8,10].

A major challenge in using cast iron for many automotive applications is its propensity to corrode in constant contact with various corrosive media under harsh service conditions. Hence, automotive components are usually coated to enhance their corrosion resistance. Several coating technologies have been applied to automobile parts to improve their service life [5,11-13]. When the functionality of parts and threaded fasteners depends on certain fixed relationships, such as the torque-tension relationship, phosphate coating is a prescribed process. This crucial procedure of phosphate-coating metal surfaces provides corrosion protection and lays the groundwork for strong paint adhesion. The most prevalent variation is the trication (Zn/Ni/Mn) phosphating procedure. The treatment involves dissolving the top surface layer via a combined spray-dip procedure. The substrate is dipped into a solution of phosphoric acid and metal ions, such as zinc, manganese, and nickel. An accelerator, such as nitrate, is introduced to the reaction and is reduced at the cathode. In addition to nitrate reduction, other reactions include metal oxidation at the anode and hydrogen reduction at the cathode [14]. Zinc phosphates are usually applied as the coating material on truck wheel hubs at the final stage of production. This surface treatment is conducted specifically to protect the component against corrosion. The phosphate coating crystals should cover the whole surface of the substrate and be uniformly distributed over it to ensure full protection. Different crystals are formed depending on the substrate, the most common ones being phosphophyllite and hopeite. Coating formation is initiated at nucleation sites on the substrate [15], which implies that the initial surface properties of the substrate are a controlling factor influencing the phosphate crystal morphology.
However, the purpose of this study is to examine the initial surface characteristics of the machined and unmachined surfaces of nodular cast iron substrates and their impact on the zinc phosphate coating morphology, and consequently to relate the coating morphology to the general corrosion behaviour of the zinc phosphate and electro-coated (BASF cathoguard 570F black) ductile cast iron substrates. The study also documents any impurities on the initial surface of the substrates that may affect the phosphate coating.
Materials and methods
Two truck wheel hubs were supplied by Automotive Components Floby, Sweden, from which samples were extracted for this investigation. One of the wheel hubs was in the as-machined condition while the other had already been phosphate-coated, as shown in Figure 2. Samples for surface analysis were obtained from the machined and unmachined areas of the uncoated and zinc phosphate-coated hubs, two from each. These four samples, shown in Figure 3, are: 1) the unmachined sample, 2) the machined sample, 3) the unmachined phosphate-coated sample, and 4) the machined phosphate-coated sample. A metallographic sample was extracted from the hub in Figure 2a, prepared according to the standard metallographic procedure, and etched in 4% Nital to reveal the pearlitic microstructure of the cast iron. The chemical composition (wt%) of this material is as follows: 3.40% C, 2.39% Si, 0.27% Mn, 0.047% Mg, 0.030% P, 0.002% S, 0.30% Cu, with the balance Fe. All the samples were characterized using an optical microscope as well as a scanning electron microscope coupled with energy-dispersive X-ray spectroscopy (SEM-EDS).
After the machining operation, the wheel hubs were pretreated with a zinc phosphate coating followed by an electro-coating process using a BASF cathoguard 570F black reagent. The component appears black at the end of the coating process, as observed in Figure 2b. Corrosion coupons were extracted from the coated hub (Figure 2b) and used for the assessment of the general corrosion of the hub according to the ASTM B117 salt spray standard procedure [16]. The corrosion coupons were subjected to a six-week accelerated salt spray corrosion test. The corrosion-tested coupons were evaluated pictorially according to the ISO 4628-3 standard [17] for rating the degree of rust formed on parts tested or used under accelerated or atmospheric conditions. The standard ISO 4628-3 rust rating scale is presented in Table 2.
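For illustration, the Ri grade in Table 2 can be assigned programmatically from a measured rusted-area fraction. The following is a minimal Python sketch; the area thresholds used here are the commonly cited ISO 4628-3 values and should be verified against the standard itself.

```python
# Minimal sketch: map a measured rusted-area percentage to an ISO 4628-3
# rust grade (Ri). Threshold values follow commonly cited ISO 4628-3
# figures and should be checked against the standard itself.

# (upper bound of rusted area in %, grade); evaluated in ascending order.
RI_SCALE = [
    (0.0, "Ri 0"),
    (0.05, "Ri 1"),
    (0.5, "Ri 2"),
    (1.0, "Ri 3"),
    (8.0, "Ri 4"),
    (50.0, "Ri 5"),
]

def rust_grade(rusted_area_pct: float) -> str:
    """Return the ISO 4628-3 rust grade for a rusted-area percentage."""
    for upper_bound, grade in RI_SCALE:
        if rusted_area_pct <= upper_bound:
            return grade
    return "Ri 5"  # anything above 50% still receives the worst grade

# Example: a coupon with 0.3% of its surface rusted is graded Ri 2.
print(rust_grade(0.3))
```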
Microstructure analysis of the substrate
Figure 4 depicts optical images and scanning electron micrographs of the substrate. It can be seen from Figures 4a and b that the substrate consists of graphite nodules with a size rating less than or equal to 6. The graphite nodules are surrounded by ferrite and the matrix is mainly pearlite.
Surface analysis of substrates
The as-cast and machined surfaces of the cast iron substrates were analyzed to identify the composition of their surfaces. The purpose of this examination is to understand how the substrate's surface characteristics influence the phosphate coating's effectiveness. Figure 5 shows images of the as-cast skin of the substrate, while Figure 6 provides micrographs of the machined surface. It is apparent from Figure 5 that the topography of the as-cast surface is uneven, with varying structural contrast. The surface of the as-cast substrate looks rougher than the machined one, with some microcracks. The machined surface looks smoother, but cracks and rough areas produced during the machining operation were noted in some regions, as seen in Figure 6. The graphite nodules are also observable on the machined and unmachined surfaces (the circled dark spots in Figures 5 and 6). Examination of Figure 7 reveals that the as-cast surface contains different oxide compounds, including aluminium oxide, magnesium oxide, silicon oxide, and iron oxide, as well as graphite. On the machined surface, only Fe, Si, and C are detected (Figure 8). Some of these notable features and compounds may affect the deposition of the phosphate coating on the substrate.

Figures 9 and 10 show the SEM images of the phosphate-coated unmachined and machined nodular cast iron substrates, respectively. Figure 9a and c reveal some unmachined areas without coating, indicated by the arrows. Cracks are also noticed on the coated surface (Figure 9a and b). Similarly, in Figure 10, some uncoated regions are seen, indicated by the arrows and broken circles (Figure 10a and b). Some rough coating areas are noticed in Figure 10c, which coincide with the machined surface with cracks and roughness shown in Figure 6. Besides, some regions with full coating coverage are observed on both the unmachined and machined surfaces of the coated substrates; see Figures 9d and 10d. Figures 11 and 12 represent the SEM-EDS elemental mapping of the phosphate-coated unmachined and machined substrates, and indicate that the regions with poor phosphate coating are rich in carbon, silicon, potassium, and aluminium. The presence of these elements suggests the existence of sand inclusions on the as-cast surface of the substrate (Figure 11). In Figure 12b, the regions with poor coating are the areas with graphite phases. It is also observed that the nature of the substrate surface affects the shape and size of the phosphate coating. The unmachined surface with the as-cast skin shows spherical phosphate coating crystals (Figure 9d), while the machined surface reveals a long tetragonal crystal structure (Figure 10d). The average phosphate crystal size was measured and is provided in Table 3. The crystal size of the unmachined phosphate-coated surfaces ranges from 0.633 µm to 3.182 µm, while that of the machined surfaces ranges from 0.828 µm to 6.240 µm. In summary, it is apparent from this investigation that the presence of graphite and adhering foreign inclusions on the substrate surfaces damages the phosphate coating, and that rough surfaces with cracks lead to poor phosphate coating. The phosphate coating size and morphology are influenced by the surface characteristics of the coated substrate.
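As a simple illustration of how the Table 3 statistics can be derived, the sketch below computes the mean and range of a set of crystal-size measurements. The measurement arrays are hypothetical stand-ins; only the reported ranges and means of the actual data are published.

```python
import numpy as np

# Hypothetical SEM crystal-size measurements in micrometres; the real raw
# data are not published, only the ranges and means reported in Table 3.
unmachined = np.array([0.633, 1.12, 1.48, 1.90, 2.35, 3.182])
machined = np.array([0.828, 1.40, 2.10, 2.95, 4.60, 6.240])

for label, sizes in [("unmachined", unmachined), ("machined", machined)]:
    print(f"{label}: mean = {sizes.mean():.3f} um, "
          f"range = {sizes.min():.3f}-{sizes.max():.3f} um")
```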
Corrosion resistance of the phosphate-coated substrates
Based on the automotive application and the harsh service environment of the coated wheel hubs, the component is usually subjected to a six-week accelerated corrosion test following the ASTM B117 salt spray standard procedure [16]. Before the salt spray corrosion test, scribe lines were created on the unmachined and machined surfaces of the test coupons. This is done deliberately to simulate damage to the coating and to assess the corrosion resistance of the coated wheel hubs after prolonged exposure to a corrosive environment. The electro-coat film thickness on the machined surfaces ranges between 22 µm and 30 µm. The appearance of the coated wheel hubs before the corrosion test is displayed in Figure 13. At the end of the exposure to the corrosive medium, the corroded coated test coupons were assessed using the standard procedures for evaluating painted or coated specimens subjected to corrosive environments. Figure 14 shows pictures of some of the coupon specimens after the corrosion test. The inner corners of the unmachined surface of the wheel hubs were much more corroded than the outer surfaces of the unmachined region. The machined surfaces of the components are free of rust, implying excellent corrosion resistance. The corrosion performance of the coatings on the surface of the wheel hubs after exposure to the corrosive medium was evaluated by the ASTM D1654 standardized procedure [18]. This standard describes the scribing of coated test coupons through the coating layer into the substrate using one of several scribe tools. Analyzing the corrosion that occurs close to the scribed line allows a qualitative evaluation of the coating's corrosion resistance. An estimate of the coating's corrosion resistance can be made using this method, but it does not reveal anything about the mechanisms that prevent corrosion [19]. The coated substrate is intentionally scribed (damaged) to expose the coating/metal interface to corrosive media in order to assess the function of passivating agents (if any), the level of coating adhesion to the substrate, and the surface state of oxides on which the coating is applied [19]. Threadlike filaments may emerge when scribes are exposed to corrosive environments. The developed filaments can be assessed and scored according to the ASTM D1654 technique for corrosion testing of metallic and other inorganic coatings on metallic surfaces. In addition, the general corrosion resistance of the coated wheel hubs exposed to the accelerated cyclic corrosion test was evaluated and rated according to the ISO 4628-3 standard [17]. Based on these standard methods, the corrosion resistance of the coated substrates was examined and scored as presented in Table 4. For this component to pass the corrosion resistance requirements, the width of the scribe lines after exposure to the corrosive environment should be less than or equal to 8 mm, and the general corrosion over the entire surface of the coated substrates must be rated a grade less than or equal to Ri 1 on the ISO 4628-3 rust rating scale. As observed in Table 4, all five samples tested passed the scribed-coated-substrate corrosion resistance requirement. The coated machined surfaces of the wheel hub substrates also passed the general corrosion resistance requirement, but most of the unmachined surfaces of the coated substrate did not fulfil it.
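The acceptance logic just described (scribe width ≤ 8 mm and a general rust grade of at most Ri 1) can be expressed compactly. The following sketch encodes only the criteria stated above; it is an illustration rather than the project's actual evaluation script.

```python
# Sketch of the pass/fail logic stated in the text: a coupon passes the
# scribe criterion if the post-exposure scribe width is <= 8 mm, and the
# general-corrosion criterion if its ISO 4628-3 grade is Ri 0 or Ri 1.
PASSING_GRADES = {"Ri 0", "Ri 1"}

def coupon_passes(scribe_width_mm: float, rust_grade: str) -> bool:
    return scribe_width_mm <= 8.0 and rust_grade in PASSING_GRADES

# Example: a machined surface with 3 mm scribe creepage and grade Ri 0.
print(coupon_passes(3.0, "Ri 0"))   # True
print(coupon_passes(9.5, "Ri 1"))   # False: scribe criterion violated
```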
In general terms, the corrosion resistance of the coated unmachined surfaces failed the application requirements of the component. The poor corrosion resistance of the unmachined surfaces of the coated substrate could partly be influenced by the characteristics of the surface and the coating process parameters. To improve the surface properties of the unmachined areas, the cast component should be properly shot-blasted to remove any adhering foreign materials on the as-cast skin surface, which may hinder the adherence of the coatings to the substrate surface.

Table 4. Post-corrosion evaluation of the coated parts based on the ASTM D1654 and ISO 4628-3 standards [17,18].
Conclusions
This investigation examined the surface characteristics of machined and unmachined surfaces of ductile cast iron wheel hubs. The influence of these surface imperfections on the zinc phosphate coating and the subsequent corrosion resistance was assessed and rated according to standard procedures. Surface analysis of the machined and unmachined hubs revealed the existence of graphite and adhered moulding sand on the substrate surface, possibly affecting the phosphate coating. Foreign material on the as-cast surfaces and cracks caused poor phosphate coating. The average phosphate coating crystal size is 1.742 µm and 2.578 µm for the as-cast and machined substrates, respectively. The substrate surface roughness influences the size and shape of the phosphate coating crystals. It is recommended that the as-cast surface be properly cleaned to remove adhered sand from the as-cast skin, and that surface cracks on the machined surface be minimised to ensure complete phosphate coating. Generally, the corrosion resistance of the coated unmachined surfaces failed the application requirements of the wheel hubs. The poor corrosion resistance of the unmachined surfaces of the coated substrate was attributed to the influence of the substrates' surface characteristics on the coating processes. The cast surfaces should be properly shot-blasted to remove any adhering foreign materials from the cast skin to enhance the adhesion of the phosphate coatings to the substrate surface.
Figure 1. A typical wheel hub (a) and wheel hub assembly (b) for trucks.
Figure 5. Micrographs of the unmachined surface of the cast iron substrate.
Figure 6. Micrographs of the machined surface of the cast iron substrate.
Figure 8. SEM-EDS elemental mapping of the machined substrate.
Figure 11. SEM-EDS elemental mapping of the phosphate-coated unmachined substrate.
Figure 13. The appearance of the corrosion coupons before the test, where the solid and broken lines represent the unmachined and machined surfaces, respectively.
Figure 14. The appearance of the sample coupons after six weeks of accelerated corrosion test, where the solid and broken lines represent the unmachined and machined surfaces, respectively, and broken circles indicate the scribed lines.
Table 3. Phosphate coating crystal size.
"year": 2023,
"sha1": "f872a62be41337da656252b659019165084340a3",
"oa_license": "CCBYNC",
"oa_url": "https://ojs.piscomed.com/index.php/MPC/article/download/3343/3122",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "e2a72e722051ac3fd41b54fe647a1d4d6bdf5f73",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": []
} |
Improving gene regulatory network inference and assessment: The importance of using network structure
Gene regulatory networks are graph models representing cellular transcription events. Networks are far from complete due to the time and resources consumed by experimental validation and curation of the interactions. Previous assessments have shown the modest performance of the available network inference methods based on gene expression data. Here, we study several caveats in the inference of regulatory networks and in methods assessment, through the quality of the input data and gold standard, and through the assessment approach, with a focus on the global structure of the network. We used synthetic and biological data for the predictions and experimentally-validated biological networks as the gold standard (ground truth). Standard performance metrics and graph structural properties suggest that methods inferring co-expression networks should no longer be assessed equally with those inferring regulatory interactions. While methods inferring regulatory interactions perform better in global regulatory network inference than co-expression-based methods, the latter are better suited to infer function-specific regulons and co-regulation networks. When merging expression data, the size increase should outweigh the noise inclusion, and graph structure should be considered when integrating the inferences. We conclude with guidelines for taking advantage of inference methods and their assessment based on the applications and available expression datasets.
Introduction
A gene regulatory network (GRN) is responsible for sensing environmental cues and responding accordingly. It represents directed regulatory interactions between genes coding transcription factors (TFs) and their target genes (TGs). Successful developments in synthetic biology require that the designed circuit properly integrates into the global and local regulatory circuits. This is a current challenge as there is not a single complete experimentally-validated GRN (Escorcia-Rodriguez et al., 2020); only a handful (< 4) of bacterial organisms have a known GRN with completeness > 70%, and experimental GRN reconstruction is a time- and resource-consuming task. Consequently, computational network inference is frequently used. Whereas previous works have evaluated network inference tools using synthetic and experimental data for several organisms (Marbach et al., 2010; Marbach et al., 2012; Chen and Mar, 2018), they did not assess several criteria essential for the inference of GRNs, such as variation in data noise and the global structure of the predictions and the gold standard (GS). Riet De Smet and Kathleen Marchal reviewed the advantages and limitations of several inference methods through the biological interpretation of the network structure but did not use the structure itself to assess the predictions (De Smet and Marchal, 2010).
Employing artificial data with varying amounts of noise, Deniz Seçilmiş et al. recently evaluated various tools and discovered that methods using the perturbation design matrix outperformed methods without it (Secilmis et al., 2022). Synthetic data are the first alternative for benchmarking inference methods (Van den Bulcke et al., 2006). However, the generation of synthetic data relies on simulation parameters (e.g., dimension and noise of the dataset), which may not reflect the variability of biological data. Regarding the transcriptomic technique, most of the tools developed for GRN inference from microarray data have been indiscriminately coupled with RNA-seq (Iancu et al., 2012; Salleh et al., 2018; Zhang et al., 2019), even though tools for bulk RNA-seq data have already been developed (Proost et al., 2017; Imbert et al., 2018).
The authors of the DREAM5 network inference challenge evaluated a plethora of genome-scale transcriptional regulatory network predictions from gene expression data. Their results provided insights into the difficulty of GRN inference using correlation and mutual information between gene pairs, and showed that, contrary to synthetic data, in biological data the dependencies between genes interacting in the cell barely exceed the dependencies between non-interacting gene pairs. Interestingly, with synthetic and Escherichia coli data, the correlations between genes regulated by identical sets of TFs exceeded those between genes interacting in the actual regulatory network (Supplementary Note S5 in Marbach et al. (2012)), but most of those interactions between co-regulated genes would be false positives (e.g., structural genes shaping a transcription unit). Recently, Simon Larsen et al. performed an in-depth analysis of this matter; their results show that the correlation of random gene pairs is indistinguishable from that of pairs involved in known regulatory interactions in E. coli (Larsen et al., 2019). Doglas Parise et al. confirmed these results in Corynebacterium glutamicum (Parise et al., 2021).
According to the DREAM5 team, integrating predictions from different inference techniques through the Borda count method ("community network") is the best strategy because method performance is not consistent across species (Marbach et al., 2012). Since then, the community approach has been broadly applied (Akesson et al., 2021; Zorro-Aranda et al., 2022). ComHub is a pipeline for integrating predictions from various methods to rank regulators according to their average out-degree using gene expression (Akesson et al., 2021). Recently we inferred a GRN for Streptomyces coelicolor and identified the global regulators by applying the NDA (natural decomposition approach) (Freyre-Gonzalez et al., 2008; Freyre-Gonzalez et al., 2012) on the across-methods community network, preserving only TF-TG interactions. However, some methods are better suited to particular global topological structures (Stolovitzky et al., 2009). Thus, the hubs may differ across methods and have different biological interpretations in each global network due to the inherently different structure.
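To make the Borda-style rank aggregation behind such community networks concrete, the sketch below sums rank-based points per edge across methods; the edge lists are invented for the example, and the exact weighting used in the cited studies may differ.

```python
from collections import defaultdict

# Minimal sketch of Borda-count integration: each method contributes
# points inversely proportional to the rank it assigns an edge, and edges
# are re-ranked by their total points. Edges a method does not predict
# simply receive no points from it. Edge lists are illustrative only.
predictions = {
    "method_A": [("tfA", "g1"), ("tfB", "g2"), ("tfA", "g3")],
    "method_B": [("tfB", "g2"), ("tfA", "g1"), ("tfB", "g4")],
}

scores = defaultdict(float)
for ranked_edges in predictions.values():
    n = len(ranked_edges)
    for rank, edge in enumerate(ranked_edges):
        scores[edge] += n - rank  # Borda points: top rank earns most

community = sorted(scores, key=scores.get, reverse=True)
print(community)  # [('tfA', 'g1'), ('tfB', 'g2'), ...]
```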
The inferences are commonly assessed using standard performance metrics such as the area under the precision vs. recall curve (AUPR) and the area under the ROC curve (AUROC). These metrics rely heavily on the ranking of the interactions (Marbach et al., 2010). Depending on the ranking scheme and the cutoff value, the global network will also have a different structure. For example, using the Pearson correlation coefficient with no post-processing step as the ranking score, co-regulated genes from the same transcription unit (TU) will be at the top of the prediction and the global network will be shaped by interactions between co-expressed genes. This would be a good co-regulation network, but it will be highly penalized if it is assessed against a GRN. The edges represent different biological associations (De Smet and Marchal, 2010); therefore, the networks have different global structures and are better suited to different purposes (Michoel et al., 2009). Nevertheless, methods designed to infer co-expression are still being assessed and integrated directly alongside those inferring regulation (Marbach et al., 2012; Bellot et al., 2015; Pratapa et al., 2020; Secilmis et al., 2022).
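For concreteness, a minimal sketch of this standard assessment in Python (using scikit-learn, where average_precision_score approximates AUPR): the gene and edge names are illustrative only, and one common convention, not necessarily the one used by every cited study, is to treat unpredicted TF-TG pairs as lowest-scored negatives before computing the metrics.

```python
import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score

# Sketch of the standard assessment: score every candidate TF-TG pair,
# label it 1 if present in the gold standard and 0 otherwise, and compute
# AUPR/AUROC over the ranked predictions. All names are illustrative.
gold_standard = {("tfA", "g1"), ("tfB", "g2")}
predicted = {("tfA", "g1"): 0.9, ("tfA", "g2"): 0.4,
             ("tfB", "g2"): 0.7, ("tfB", "g1"): 0.1}

y_true = np.array([1 if e in gold_standard else 0 for e in predicted])
y_score = np.array(list(predicted.values()))

print("AUPR :", average_precision_score(y_true, y_score))
print("AUROC:", roc_auc_score(y_true, y_score))
```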
We previously explored structural properties and systems-level components to analyze curated and inferred GRNs for Streptomyces coelicolor. Here, we focus on the factors influencing the inference of GRNs and their assessment: mainly the structural characteristics of the GS and the inferred networks, the quality of the input data and the GS, and the assessment strategy. Besides synthetic data with varying noise and completeness levels, we use biological data for Escherichia coli, Bacillus subtilis, and Pseudomonas aeruginosa along with their experimentally-validated GRNs (Escorcia-Rodriguez et al., 2020) as the GS. Because the networks used as GS are not complete, unknown actual interactions identified in the predictions will be misclassified as false positives. To check whether our results hold when the GS networks are more complete, we used historical snapshots with different completeness levels and evidence (Escorcia-Rodriguez et al., 2020). Figure 1 summarizes the complete workflow.
Results and discussion
We reviewed the literature to assemble a collection of network inference tools. After applying filter criteria (see Materials and methods), 15 tools were selected for assessment, along with "Community" reconstructions integrating interactions from several tools. We then arranged the inference tools into three groups according to output network type (Table 1; Figure 1): 1) The COEX tools infer interactions between genes with correlated expression profiles.
2) The CAUS tools use a list of TFs to infer regulatory interactions between the TFs and their TGs (i.e., GRNs) (Hecker et al., 2009). 3) The HYBR (hybrid) group comprises ANOVA and Friedman, which are based on analysis of variance and therefore do not infer causality; however, we used a list of TFs to keep only TF-TG interactions. The classification of Community depends on the type of interactions it includes: it is considered HYBR when it integrates interactions from different network types, CAUS if it only integrates interactions from CAUS tools, and, similarly, COEX if it only contains interactions from COEX tools. See Table 1 and the Introduction section of the Supplementary Information for a detailed description of the tools.
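A minimal sketch of this TF-based trimming step is given below; the tool output and regulator names are illustrative, not taken from the actual pipeline.

```python
# Sketch of the post-processing applied to the HYBR tools: keep only the
# edges with a known transcription factor at one endpoint, discarding
# TG-TG pairs, and orient the retained edges from the TF to the target.
tf_list = {"tfA", "tfB"}  # illustrative regulator list
coexpression_edges = [("tfA", "g1"), ("g1", "g2"), ("g3", "tfB")]

trimmed = []
for a, b in coexpression_edges:
    if a in tf_list:
        trimmed.append((a, b))   # already TF -> TG
    elif b in tf_list:
        trimmed.append((b, a))   # orient the edge from the TF
print(trimmed)  # [('tfA', 'g1'), ('tfB', 'g3')]
```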
Tools for inferring co-expression networks should be assessed apart from those for inferring causality

We used synthetic and biological datasets to assess the tools inferring networks from microarray data (Figure 2A). We assessed the inferred networks using 30 synthetic gene expression datasets with varying noise levels and sample sizes against the biological regulatory network used to generate the synthetic data. There was an overall improvement with larger datasets with less noise (Figure 2B and Supplementary Figure S2). GENIE3 and Inferelator performed the best, even better than Community, contrasting with the results of the DREAM5 challenge where Community outperformed all the single-tool predictions in the assessment with synthetic data (Marbach et al., 2012). On the other hand, ANOVA and WGCNA showed poor performance regardless of the data variations. There was no clear difference among the tools at the group level.
We collected gene expression data for E. coli, B. subtilis, and P. aeruginosa from GEO and generated three datasets for each organism, each with a different preprocessing level: raw data, Robust Multiarray Averaging (RMA) normalization, and RMA normalization plus batch correction (R-B). For the GS, we retrieved experimentally-supported GRNs from Abasy Atlas for the three organisms. As a group, CAUS performed the best, followed by HYBR; COEX showed poor results. Among the CAUS tools, GENIE3, Inferelator, and TIGRESS performed the best across the three organisms. GENIE3 was the best method in E. coli and P. aeruginosa, but TIGRESS and Inferelator outperformed it in B. subtilis, the organism with the smallest dataset (Supplementary Figure S3). This could be due to the lower stability of GENIE3 predictions under data size variations, in contrast with TIGRESS and Inferelator. Among the HYBR tools, Friedman and Community improved their performance with R-B data, while ANOVA showed inconsistent results. Most of the tools performed better with fully preprocessed R-B data (Figure 2C).
For each inference tool, we averaged its prediction score with the highest-quality data: R-B for each organism and the complete synthetic dataset with the lowest noise level (5%). GENIE3 obtained the highest overall score, followed by Inferelator and TIGRESS (Figure 2D). Community ranked fourth in the overall score despite including interactions from the COEX predictions. In concordance with the DREAM5 challenge (Marbach et al., 2012), this suggests that despite the integration of low-scored predictions, Community still performs reliably. A community integration seems to be a safer choice because the rank of individual tools differs among organisms, but CAUS tools outperformed COEX tools with biological data every time (Figure 2D).

Figure 1. Workflow of this work. We generated synthetic data using GeneNetWeaver for E. coli and collected several biological microarray datasets from GEO for E. coli, B. subtilis, and P. aeruginosa, as well as RNA-seq data from GEO and PRECISE for E. coli (left column). The synthetic and biological datasets were used as input for the inference methods (middle row). The inference methods were classified according to their final network type. COEX tools generate undirected networks. CAUS tools generate directed networks using a list of regulators to compute the predictions as part of their algorithm. HYBR includes Friedman and ANOVA implementations (Zorro-Aranda et al., 2022) that generate co-expression networks trimmed to only include regulations mediated by a known transcription factor. The Community networks are classified according to the type of tools they include. We used biological networks as the gold standard to perform the assessment and analyses (right column). From the directed gold standard ("CAUS" GS) we generated a co-regulation gold standard (GS CO-REG). We performed the standard statistical assessment and a structure-based assessment. SS: steady-state data, TS: time-series data, GS: gold standard, TF: transcription factor, TG: target gene. See Supplementary Figure S1 for further details.
Unlike the COEX tools, the CAUS and HYBR tools require a list of the genes coding for TFs (Table 1) to keep only TF-TG interactions and avoid TG-TG edges, which are not expected in a GRN such as the networks used as GS. On the other hand, only a few of the interactions inferred by the COEX tools include a TF; i.e., most edges are TG-TG interactions (Supplementary Figure S4). In an effort to perform a fair assessment of the COEX tools, we modified the E. coli GS to resemble a co-regulation network where each regulon, a set of co-regulated genes, is a clique (every node is interconnected). This way, the COEX tools outperformed the rest (Figure 2E).
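A sketch of how such a co-regulation gold standard can be derived from a directed GRN is shown below; the toy network is illustrative, and the actual construction may handle overlapping regulons and self-loops differently.

```python
import itertools
from collections import defaultdict

# Sketch: the targets of each TF form a regulon, and every pair of
# co-regulated genes becomes an undirected edge, so each regulon is a
# clique in the resulting co-regulation gold standard.
grn = [("tfA", "g1"), ("tfA", "g2"), ("tfA", "g3"), ("tfB", "g3")]

regulons = defaultdict(set)
for tf, tg in grn:
    regulons[tf].add(tg)

coregulation = set()
for targets in regulons.values():
    for a, b in itertools.combinations(sorted(targets), 2):
        coregulation.add((a, b))
print(sorted(coregulation))  # [('g1', 'g2'), ('g1', 'g3'), ('g2', 'g3')]
```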
The performance of every tool declined with the biological datasets in contrast to the synthetic ones. This is expected because the synthetic datasets were generated from the network used as GS. Besides, training and evaluating the tools with biological data is rare due to data accessibility (Marbach et al., 2010). There is a clear difference between the performance of CAUS and COEX tools with the biological datasets and a GRN as the GS (Figures 2C,F). On the other hand, the COEX tools succeeded with a simulated co-regulation network as the GS (Figure 2E). C3NET obtained the highest overall score, followed by CLR, ARACNE, and WGCNA.
These results suggest that even though we should use CAUS tools for the inference of GRNs, tools inferring co-expression networks should be assessed apart from those inferring causality. Ignoring the direction of the GS interactions to make a fairer comparison (Chen and Mar, 2018) is not enough. Because of the nature of the network, the interactions inferred by COEX tools will be closer to representing co-expression and co-regulation rather than regulation. Moving to regulation is not trivial, but some approaches are already trying to infer causality from co-regulation and co-expression networks (Aibar et al., 2017; Chen and Liu, 2022).
Inference methods based on Bayesian approaches take advantage of time-series data to infer causal relationships (Lo et al., 2012). We assessed two tools based on a Bayesian approach, scanBMA (Young et al., 2014) and iterativeBMA (Annest et al., 2009), along with a Community reconstruction integrating both predictions. The performance with synthetic data improved with larger datasets and lower noise levels. iterativeBMA obtained the best scores, slightly better than Community (Supplementary Figure S5). Then, we assessed the tools with biological data: one time-series experiment for E. coli and one for P. aeruginosa. We used only raw (non-normalized) and RMA preprocessing steps, as batch correction is not necessary for single-source samples. Overall, scanBMA performed better than iterativeBMA (Figure 2G). Both tools with Bayesian approaches performed poorly despite their advantage over other methods in inferring causal relationships, perhaps because of the few samples available. Future data availability along with experimental annotation might improve the performance of Bayesian approaches.
Figure 2. Assessment of network inference tools for microarray data. 100% of the synthetic dataset contains a total of 788 conditions. The Community network is the integration of the single-tool predictions using the Borda count method (Marbach et al., 2012). (A) Network classification. Network inference tools for microarray data were classified according to the type of network they infer. (B) GENIE3 is the best tool for synthetic data. Synthetic gene expression datasets with different levels of noise and completeness were generated from the biological network of E. coli (511145_v2017_sRDB16_eStrong). The same network was used as the GS for the assessment. (C) Batch correction and knowledge of the transcription factors improve the inference of transcriptional GRNs. Causal and hybrid tools outperformed co-expression tools in the assessment of GRNs using biological data for E. coli, B. subtilis, and P. aeruginosa with different levels of data normalization: raw data, Robust Multiarray Averaging (RMA), and RMA plus batch correction. Inferences were assessed with experimentally-validated GRNs. (D) GENIE3 is the best tool for the inference of GRNs. (E) Assessment for the inference of co-regulation networks. The COEX tools outperformed CAUS and HYBR tools; C3NET performed the best. (F) Boxplot representation of data in panel C to highlight the differences across tool groups. (G) scanBMA outperformed iterativeBMA with biological data. The Community network for this panel only integrates interactions from scanBMA and iterativeBMA.
RMA with batch correction on large datasets improves the predictions
To provide deeper insight into the effects of data normalization on network inference, we contrasted the results using no preprocessing (raw), RMA, and R-B preprocessing levels. The removal of batch effects on top of RMA normalization (R-B) seems to slightly improve the predictions (Figure 3A and Supplementary Figure S6). RMA normalization without batch correction worsens the performance of the tools. This is because some tools might be leveraging data heterogeneity or information lost in the normalization process (Sirbu et al., 2010). Besides, the assumptions made by normalization pipelines could be violated, resulting in spurious predictions (Evans et al., 2018). Therefore, either raw data or normalized and batch-effect-corrected data should be used for network inference with highly heterogeneous datasets.
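To make the batch-correction idea concrete, here is a toy sketch that removes per-batch offsets by mean-centering each gene within each batch. It is only an illustration: published pipelines typically use RMA plus established batch-correction methods (e.g., ComBat or limma's removeBatchEffect), which also model batch-specific variance and covariates.

```python
import numpy as np

# Toy illustration of batch correction by per-batch mean-centering of
# each gene; real pipelines use dedicated methods such as ComBat.
rng = np.random.default_rng(0)
expr = rng.normal(size=(5, 8))                 # genes x samples
batches = np.array([0, 0, 0, 1, 1, 1, 2, 2])   # platform/series of origin

corrected = expr.copy()
for b in np.unique(batches):
    cols = batches == b
    # remove the batch-specific offset for every gene
    corrected[:, cols] -= corrected[:, cols].mean(axis=1, keepdims=True)
corrected += expr.mean(axis=1, keepdims=True)  # restore overall gene mean
print(corrected.shape)
```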
In addition to data preprocessing, the dataset size should be considered a relevant factor in the prediction outcome. The dataset for E. coli was collected from three GEO platforms with different numbers of samples (see Materials and methods): GPL73 (12 GSM), GPL199 (759 GSM), and GPL3154 (1379 GSM) (Figure 3B and Supplementary Figure S7). We assessed the predictions using the individual GEO platforms with the three preprocessing levels as input (Figure 3C and Supplementary Figure S8). In general, prediction scores improve with larger datasets. The scores with GPL199 and GPL3154 are considerably higher than the score for the smallest platform (GPL73). However, there is no remarkable difference between GPL199 and GPL3154 with RMA and R-B normalization. In the case of raw data, there seems to be an improvement as the data size increases. From these results, we can conclude that the larger the dataset, the better the predictions. However, previous studies have shown that not only the dataset size but also the variability of conditions is a relevant factor for network inference (Sastry et al., 2019). This is evident with the smallest platform, which seems to have the least heterogeneity of the three. In contrast, the other two platforms give better results alone than together, which suggests that both contain redundant information. Otherwise, normalized datasets with on the order of hundreds of samples would be good enough for network inference. These results are consistent across the three tool groups.
A network-type-driven selective community is the best choice when a GS is not available

A previous DREAM challenge suggested that integrating multiple single-tool predictions into a community network is a safe choice, especially when there is no partial network to use as GS (Marbach et al., 2012). The AUPR and AUROC tend to be constrained to higher values as more single-tool predictions are integrated (Supplementary Figure S10). This is due to the poor predictive power of some tools, which perform better only when integrated with several other predictions (e.g., ANOVA). The beginning of the prediction list is critical for the performance of the tools (Marbach et al., 2010). While COEX tools tend to have their true positive interactions scattered throughout the entire prediction, CAUS tools include most of their true positive interactions from the beginning (Supplementary Figure S11).

Figure 4. Effects of results integration, GS incompleteness, and regulon-level assessment. (A) Probability of a tool outperforming Community when integrated with others (# tools) into a selective community. CAUS tools are affected rather than improved by the others. (B) Assessment of GRN inference methods with the historical reconstruction of the E. coli GRN. The incompleteness of the GRN used as GS does not affect the AUPR score. (C) AUPR ratio between a "strong" GS and a "weak" one. In most cases, the tools performed better when a "weak" GS was used. The "weak" GS is a superset of the "strong" GS including interactions supported by non-directed experiments. (D) Regulon prediction assessment with the Matthews correlation coefficient (MCC). Each dot represents a regulon inference for an E. coli TF; higher is better. Out-degree connectivity (Kout) for the TF controlling the regulon is normalized by the maximum connectivity (Kmax) of the E. coli network.
We assessed the predictions with snapshots of the historical reconstruction of the E. coli GS, each network in two versions: one with all the interactions discovered at a specific timepoint ("all") and one with only validated protein-DNA interactions ("strong"). The assessment methodology showed robustness to the incompleteness of the GS (Figure 4B), with CAUS tools outperforming COEX tools with every snapshot of the GS, regardless of its completeness level. Moreover, even though all the tools improved their performance with the "all" GS, the difference is bigger for COEX tools (Figure 4C). While the "strong" GS contains only direct TF-DNA interactions, the "all" GS may contain non-direct interactions (i.e., interactions mediated by a third biological entity) (Escorcia-Rodriguez et al., 2020). Gene expression data capture both direct and non-direct regulatory events. Therefore, inference tools based solely on gene expression data tend to also infer non-direct interactions, especially COEX tools (Figure 4C). This consideration may shed light on the search for consistency between GRNs and gene expression data (Larsen et al., 2019; Parise et al., 2021). On the other hand, every tool performs better with the "strong" GS on AUROC (Supplementary Figure S12), but this is because of the highly unbalanced positive/negative ratio (Saito and Rehmsmeier, 2015).
We assessed the predictions at the regulon level using the F1 score. The CAUS tools performed better on large regulons (i.e., those of global regulators) (Supplementary Figure S13). On the other hand, the COEX tools are the best alternative for local regulators, which are associated with function-specific regulons. To discard potential bias induced by the F1 metric (Chicco and Jurman, 2020), we also used the Matthews correlation coefficient (MCC), obtaining consistent but less pronounced patterns (Figure 4D). The explanation for this is that COEX tools distribute the interactions among all the genes, underestimating the number of TGs for global regulators, while CAUS and HYBR tools distribute the interactions only among the TFs in the provided list, overestimating the number of TGs for each TF, especially for local TFs (Supplementary Figure S14).
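As a concrete illustration of this regulon-level scoring, the short sketch below computes the MCC for a single TF by comparing its predicted target set against the cognate regulon in the GS. The helper function and its argument names are our own; only the use of MCC over per-regulon target sets follows the text.

    # Minimal sketch of a per-regulon MCC assessment; the helper and its
    # argument names are hypothetical, only the metric follows the text.
    from sklearn.metrics import matthews_corrcoef

    def regulon_mcc(tf, predicted_edges, gold_edges, all_genes):
        """Edges are (regulator, target) pairs; genes in neither regulon
        count as true negatives."""
        pred_targets = {t for r, t in predicted_edges if r == tf}
        gold_targets = {t for r, t in gold_edges if r == tf}
        y_true = [gene in gold_targets for gene in all_genes]
        y_pred = [gene in pred_targets for gene in all_genes]
        return matthews_corrcoef(y_true, y_pred)

    # Example: one regulon scored against the GS.
    gs = [("crp", "araB"), ("crp", "lacZ"), ("fur", "entC")]
    pred = [("crp", "araB"), ("crp", "entC")]
    print(regulon_mcc("crp", pred, gs, ["araB", "lacZ", "entC", "fur", "crp"]))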
Unsupervised learning with global structural properties segregates COEX inferences from the rest of the networks

Beyond assessing the tools solely with the standard statistical metrics, we analyzed global structural differences among the networks. We computed the following structural properties for the regulatory networks: density, number of regulators, maximum out-connectivity, feedforward and complex feedforward circuits (Alon, 2007; Freyre-Gonzalez and Tauch, 2017), 3-feedback loops, size of the giant component, average clustering coefficient, diameter, average shortest path length, and the coefficients of determination for the degree distribution P(k) and the clustering coefficient distribution C(k) (Albert, 2005). See Supplementary Table S2 for the definitions of the structural properties. Then, we clustered the networks based on their normalized global structural properties (Materials and methods).
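A minimal sketch of how a few of these properties can be computed with networkx is shown below; the function is our own illustration rather than the paper's implementation, and the feedforward-loop counter is a naive enumeration, not an optimized motif search.

    # Hedged sketch: a subset of the global structural properties listed
    # above, for a directed GRN given as (regulator, target) pairs.
    import networkx as nx

    def global_structural_properties(edges):
        g = nx.DiGraph(edges)
        n = g.number_of_nodes()
        regulators = [u for u in g if g.out_degree(u) > 0]
        giant = max(nx.weakly_connected_components(g), key=len)
        und = g.to_undirected()
        und.remove_edges_from(nx.selfloop_edges(und))  # ignore self-loops
        ffl = sum(  # naive feedforward-loop count: u->v, v->w, u->w
            1
            for u, v in g.edges()
            for w in g.successors(v)
            if w != u and g.has_edge(u, w)
        )
        return {
            "density": nx.density(g),
            "fraction_of_regulators": len(regulators) / n,
            "max_out_connectivity": max(d for _, d in g.out_degree()),
            "giant_component_fraction": len(giant) / n,
            "avg_clustering": nx.average_clustering(und),
            "feedforward_loops": ffl,
        }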
For the E. coli networks, the COEX tools were clustered into one group (Figure 5A). On the other hand, the CAUS and HYBR tools were clustered into a second group, excluding ANOVA. Even though the GS was not clustered with either of the two major groups, it was closer to the latter (Figure 5A). We obtained similar results with the networks for B. subtilis (Figure 5B) and P. aeruginosa (Figure 5C), although the GS for B. subtilis was much closer to the CAUS and HYBR group (Figure 5B).
The clusters were not conserved with the synthetic-data inferences, suggesting that inferences with synthetic data were structurally similar regardless of their type of interactions (Figure 5D). Contrary to biological data, GeneNetWeaver generates the synthetic data following the topology of the network provided (Schaffter et al., 2011), making it easier for the tools to recover that topology. Several structural properties are constrained by the graph complexity and characterize the GRNs with causal interactions, despite different network completeness levels (Campos and Freyre-Gonzalez, 2019; Escorcia-Rodriguez et al., 2021). Therefore, we expect such properties to remain similar in the final GS, and the overall topological assessment of the predicted networks will resemble the one performed with the current GS.
We then used an in-house Python implementation of the previously reported D-value (Schieber et al., 2017), which assesses network similarity based on topological evidence, taking centrality into account. For the biological datasets, CAUS tools were always clustered with Community and Friedman but never with the GS (Supplementary Figure S15). Notably, the GS was not clustered with the COEX tools either; instead, it was isolated, as was the ANOVA network. Overall, the results remain consistent across organisms, clustering the CAUS networks apart from the COEX ones. Further topological analysis with all the historical GSs for E. coli showed that, despite GS completeness, the same conclusions are expected (Supplementary Figure S16). Notably, two networks might have identical global structures with no intersection between their regulations (shuffled node labels). This explains why ANOVA was repeatedly clustered with the GS, despite its poor performance with standard assessment metrics. However, of the two strategies to assess the structure of the networks, the one based on the normalized structural properties of GRNs (Figure 5) is more consistent with the standard metrics. We suggest using this approach as a complementary assessment whenever a GS is available. When no GS exists for the organism of interest, the structural assessment can be used along with other biological networks and random models to show that the prediction is structurally more similar to a biological network than to a random one. We recently used this approach to assess network inferences for Rhizobium etli (Taboada-Castro et al., 2022).
Analyzing the structural properties individually (Supplementary Table S2 and Supplementary Figure S17), COEX tools have a higher density and fraction of regulators. Given that the predictions have the same number of interactions, having a higher fraction of regulators results in lower max out-connectivity. On the other hand, synthetic predictions tend to have higher max out-connectivity values than their biological counterparts. Notably, the max out-connectivity for the P. aeruginosa GS might be underestimated due to low genomic coverage (Escorcia-Rodriguez et al., 2020). Regarding normalized path-related properties, the COEX tools have the largest normalized diameter and average path length due to the small fraction of nodes in their giant component (Supplementary Table S2). Contrary to COEX tools, which reach more than 200 components, CAUS and HYBR tools predict networks with a few components (Supplementary Figure S18) because their maximum is constrained to the number of TF-coding genes, and every interaction connecting regulons decreases the number of components. A high P(k) coefficient of determination (R²) was found across all biological predictions and all GSs. The C(k) R² was good only for the COEX and HYBR biological predictions, suggesting a modularity like that found in the GS. Regarding network motifs, the COEX inferences were the most similar to the GS. This agrees with the motif search in DREAM5, where feed-forward loops were recovered most reliably by mutual-information and correlation-based methods (Marbach et al., 2012) (i.e., COEX tools).
GENIE3 outperformed tools developed for bulk RNA-seq
We interrogated the dependence of GRN inference performance on the transcriptomic technique, comparing two COEX inference tools developed exclusively for bulk RNA-seq data (RNAseqNet (Imbert et al., 2018) and LSTrAP (Proost et al., 2017)) and the best CAUS microarray-based approach (GENIE3). We retrieved RNA-seq datasets for E. coli and performed a cross-evaluation between the tools, exchanging the input data. First, we used a subset (see Materials and methods) of our raw and RMA microarray datasets of E. coli to reduce the impact of data size variation and observed that GENIE3 significantly outperformed RNAseqNet and LSTrAP (Supplementary Figure S19). Next, we used the RNA-seq datasets (raw counts, normalized with DESeq2, and PRECISE (Sastry et al., 2019)) as input. The COEX RNA-seq-based tools performed better with the largest, most homogeneous RNA-seq dataset, PRECISE (Supplementary Figure S19). Despite the improvement of RNAseqNet and LSTrAP with the RNA-seq data, GENIE3 still performed better (Supplementary Figure S19). These results agree with a previous synthetic gold-standard-based benchmarking of network inference methods for scRNA-seq data, where GENIE3 was still placed within the top-performing tools (Pratapa et al., 2020), making GENIE3 a top-performing tool regardless of the transcriptomic technique. Furthermore, we assessed the predictions based on their global structure (Supplementary Figure S20). We only considered the inferences from the datasets with the best MCC scores (Supplementary Figure S19): PRECISE for RNA-seq and raw for microarray data. Both datasets and metrics showed consistent results, clustering the GS with GENIE3, RNAseqNet, and Community, leaving LSTrAP out (Supplementary Figure S20). This suggests that RNAseqNet infers networks with structural properties more similar to the GS than LSTrAP does. However, non-ranked interactions might be a shortcoming for RNAseqNet.
Overall, compared to how well the tools performed with microarray data, RNA-seq data did not significantly improve their performance. This agrees with a previous assessment in A. thaliana, where networks derived from simple correlations and microarray data obtained higher scores than inferences with RNA-seq data (Giorgi et al., 2013). Although RNA-seq has progressively replaced microarrays (Lowe et al., 2017), the gene coverage cited as an advantage of RNA-seq is less of an issue in model prokaryotes, where new microarrays have overcome the coverage problem (Swarbreck et al., 2008). Despite the amount of available RNA-seq data, most organisms do not have an appropriate annotation (Salzberg, 2019), while large microarray-based transcriptomic datasets have continuously grown in public databases (Barrett et al., 2013; Athar et al., 2019).
Conclusions and guidelines
All the CAUS tools (GENIE3, TIGRESS, Inferelator, and Statmodel) outperformed the COEX tools when assessed with a GRN as the GS (TF-TG interactions) on both biological and synthetic data, taking advantage of a list of TFs. Even though we filtered TF-TG interactions from the co-expression inferences (the HYBR approach), the performance of the CAUS tools was still better. GENIE3 and Inferelator performed the best for synthetic and biological data. GENIE3 also outperformed inference tools developed for bulk RNA-seq data. COEX tools performed better when assessed with a GS resembling a co-regulation network (interactions among co-regulated genes). Regarding time-series tools, scanBMA performed the best, although it is highly affected by dataset size.
Larger datasets result in better predictions but require a selective inference-integration process and batch correction to mitigate technical variability; applying only RMA worsened the predictions. The probability of CAUS tools outperforming Community decreases as more tools are integrated into a community network, suggesting the use of a selective community based on the desired output network type (co-regulation or GRN). Although CAUS tools are the best alternative to infer global GRNs, COEX tools are better at inferring regulons for function-specific (i.e., local) TFs. An assessment against a GS including potential indirect interactions suggests that COEX tools might be the best alternative to identify indirect regulations. This is useful when the goal is to identify all the regulators affecting the expression of a gene, and not only DNA-binding TFs.
Based on global structural properties, COEX tools segregate from CAUS tools when using biological predictions, highlighting the differences in their output network type. Individual structural properties support the similarity between the CAUS inferences and the GRNs used as GS. However, no clear clusters were found with synthetic data, suggesting that biological data are required for the structural assessment because synthetic data generation is based on the topology of the input network (Schaffter et al., 2011). Historical snapshots of the GS suggest that the statistical and structural assessments are robust to GS incompleteness.
The overall modest performance of the tools is evident, and the potential pitfalls inherent in the conjecture that statistical relationships between expression profiles correspond to regulatory interactions have been noted previously (Pratapa et al., 2020; Freyre-Gonzalez et al., 2022). Recent works leveraging prior networks, structural constraints, and sequence motifs to improve transcriptomic-based GRN inference have shown promising results (Castro et al., 2019; Lim et al., 2022; Zorro-Aranda et al., 2022). In the following, we provide guidelines for inference and assessment:
Inference
• Identify the best kind of tool for your purposes:
  • CAUS or Community for whole GRNs or regulons of global TFs.
  • COEX for regulons of local TFs (few targets), co-expression, or co-regulation networks.
• Using a list of TFs to filter co-expression-based inferences (e.g., ANOVA and Friedman) into a causal network is not enough to infer a global GRN. Integrate the TFs into the inference pipeline.
• A selective community based on the type of network required is better than an all-inclusive community.
• If you want to use one COEX tool, use C3NET, but keep in mind you will obtain a co-expression network, not a GRN.
• If you want to use one CAUS tool, use GENIE3, regardless of the type of gene expression data used.
• Merge datasets only when the final size of the data outweighs the noise of merging different sources.
• In prokaryotes, dataset size and preprocessing are more important than the transcriptomic technique used to generate the data.
• Normalize your data using batch correction if necessary. Using only RMA is not recommended.
• If it is feasible, take advantage of biological information such as a list of TFs.
Assessment

• Using synthetic data to assess the predictions might provide insights into the performance of the tools, but expect the performance to worsen when assessed with biological data, and expect the inferred networks to have a different global structure.
• Take advantage of the several experimentally validated bacterial GRNs available for use as GS (e.g., https://abasy.ccg.unam.mx/ for bacteria).
• Whenever possible, use historical snapshots or network sampling to prove that the results hold despite GRN incompleteness.
• Use MCC to perform a regulon-level assessment of the predictions.
• Compare network structural properties to assess the global topology of the networks inferred from biological data.
• A structural assessment of the predictions applies to biological data only. Because the synthetic data are generated following the topology of an input network, predictions with synthetic data have a similar structure despite inherent differences.
Selection of GRN prediction methods to be assessed
We thoroughly reviewed the literature and selected methods that were able to infer a GRN from an expression data matrix. We also considered usability, taking into account 1) open-source availability, 2) sufficient documentation, and 3) the ability to be run from a command line.
Synthetic data
The synthetic datasets, all with 788 conditions (rows) and 197 genes (columns), were generated using the GeneNetWeaver software (Schaffter et al., 2011) applying the standard procedure reported by the DREAM5 consortium, with the E. coli network (511145_v2017_sRDB16_eStrong) from Abasy Atlas (Escorcia-Rodriguez et al., 2020) used as the seed. To explore the effects of noise levels on GRN inference, we generated datasets with 20%-step values for the noise parameter, as well as the 5% noise level selected for the DREAM5 challenge. To study the effect of sample size on GRN inference, we sampled each of the previous datasets at 10, 25, 50, 75, and 100% of the experimental conditions, preserving an equal representation of each experimental condition. The same procedure was followed to generate the time-series data (4,207 conditions and 197 genes).
Microarray data extraction and processing
The microarray data for Escherichia coli K-12 MG1655, Bacillus subtilis 168, and the pathogen Pseudomonas aeruginosa PAO1 were retrieved from the Gene Expression Omnibus (GEO) database using three main inclusion criteria: A) records were associated with public Affymetrix platforms and had an available CEL file, useful to perform Robust Multi-chip Averaging (RMA) normalization with the oligo R package and the corresponding array annotation package; B) an available GEO Series Matrix, i.e., an expression matrix annotated as non-normalized data, referred to here as raw data; and C) more than one available sample. In addition, we excluded GEO samples related to more than one organism. For E. coli, a total of 2,154 GEO samples (GSM) from 182 GEO Series (GSE) were retrieved from the GEO platforms GPL73 (1 GSE, 12 GSM), GPL199 (33 GSE, 759 GSM), and GPL3154 (153 GSE, 1,379 GSM). After applying RMA, we kept the genes shared among the E. coli GPLs belonging to the K-12 MG1655 strain, obtaining a total of 4,003 genes, which comprise 87.7% of the genome. For B. subtilis we used the platform GPL343 and retrieved 7 GSE with a total of 64 GSM, obtaining a total of 4,010 genes, which comprise 88.5% of the genome. Finally, for P. aeruginosa we used the GPL84 platform with 125 GSE and a total of 1,133 GSM, obtaining a total of 5,548 genes, which comprise 97.4% of the genome. Microarray raw data (CEL files) were normalized with the RMA implementation in the R package oligo (Carvalho and Irizarry, 2010), using default parameters. Next, we removed all the conditions in which NaNs or zeros were present due to normalization effects. Lastly, we performed a batch-effect correction using the ComBat (Johnson et al., 2007) implementation in the sva R package with a non-parametric adjustment approach (parameters: mod = NULL, par.prior = FALSE, mean.only = FALSE).
Time-series microarray data and condition sampling
Since GEO does not provide a feasible way to filter TS experiments, we used all the public sample metadata to identify GSE records with a timeline progression and filtered them with our inclusion criteria. From the identified TS GSE list we selected the largest record for each organism. For E. coli we used GSE12411 and retrieved 28 GSM covering three time points with 4, 12, and 12 replicates, respectively. For P. aeruginosa we used GSE52445 with 28 GSM representing 14 time points, each with two replicates. For the assessment, we used only raw and RMA-preprocessed data; the batch correction step was not necessary for these single-source samples.
We sampled the Abasy Atlas networks (Escorcia-Rodriguez et al., 2020) to allow dimensionality reduction by the Bayesian tools (Annest et al., 2009; Young et al., 2014). We sampled the networks 511145_v2018_sRDB18_eStrong for E. coli and 208964_v2015_s11-RTB13 for P. aeruginosa. We applied snowball sampling (Heckathorn and Cameron, 2017), also known as link-tracing, using the network node with the highest degree centrality as the seed and 198 as the cutoff value for the sampling, to get the same data size as in the in silico time-series assessment. The final sample sizes were 139 samples × 198 genes for E. coli and 45 samples × 198 genes for P. aeruginosa. We used 198 genes for consistency with the time-series synthetic data.
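A minimal sketch of this link-tracing procedure is shown below; the helper is our own illustration (breadth-first tracing from the highest-degree node), not the exact implementation used in the paper.

    # Hedged sketch of snowball (link-tracing) sampling; the helper is
    # hypothetical, only the seed choice and the 198-node cutoff follow
    # the description above.
    from collections import deque
    import networkx as nx

    def snowball_sample(g: nx.DiGraph, cutoff: int = 198) -> nx.DiGraph:
        seed = max(g, key=g.degree)              # highest degree centrality
        seen, queue = {seed}, deque([seed])
        while queue and len(seen) < cutoff:
            node = queue.popleft()
            for nb in nx.all_neighbors(g, node):  # links in both directions
                if nb not in seen and len(seen) < cutoff:
                    seen.add(nb)
                    queue.append(nb)
        return g.subgraph(seen).copy()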
Data collection and assessment for cross-evaluation
To compare the performance dependency of the RNA-seq-based and microarray-based inference methods, we swapped their transcriptomic input data and compared the results with those obtained from the original corresponding input. Due to the diversity of RNA-seq-based methods, we preselected LSTrAP, RNAseqNet, and VCNet, which were developed exclusively for GRN inference from bulk RNA-seq. However, we excluded VCNet from the analysis since it cannot be applied to a large number of samples unless the computational complexity inherent in its loop-based code is optimized. On the other hand, RNAseqNet and LSTrAP are fast algorithms that aim to increase the reliability of inference from biologically related datasets (Imbert et al., 2018).
Bulk RNA-seq data extraction and processing
We collected two bulk RNA-seq datasets for E. coli K-12 MG1655. The smaller one was retrieved from GEO NCBI (GSE73673) (Kim et al., 2016); we downloaded the 87 sample files with the processed reads (Kim et al., 2016) for 3,923 genes. Next, we applied DESeq2 normalization (Love et al., 2014), a commonly used method that has been evaluated against different normalization methods (Dillies et al., 2013; Maza et al., 2013; Soneson and Delorenzi, 2013; Smid et al., 2018). For the larger one, we downloaded the available processed dataset (log_tpm.tsv) from PRECISE 1.0 (Sastry et al., 2019), a Precision RNA-seq Expression Compendium for Independent Signal Exploration, built from 15 studies derived with a standardized protocol by the same research group that developed PRECISE. We kept only the genes shared between the PRECISE dataset and our microarray dataset, resulting in 278 conditions and 3,557 genes.
Microarray data transformation
We sampled a subset of 87 samples from our collected E. coli microarray dataset. We used only the raw and RMA versions, since batch correction was not applicable. The RNAseqNet algorithm takes as input read counts or TMM-normalized count data; thus, we avoided sampling negative values. To the best of our knowledge, RNAseqNet is not able to work with microarray or RNA-seq datasets without filtering genes to at least 70% sample coverage.
Gold standards
We used strongly supported, meta-curated GRNs from Abasy Atlas v2.2 (Escorcia-Rodriguez et al., 2020) as GSs for E. coli (511145_v2018_sRDB18_eStrong), B. subtilis (224308_v2008_sDBTBS08_eStrong), and P. aeruginosa (208964_v2015_s11-RTB13). The nodes of Abasy GRNs depict genes, regulatory sRNAs, or regulatory protein complexes. For this work, we converted the networks with genes and regulatory protein complexes into gene-gene networks to use as GS, since only those interactions can be inferred. We removed the genes for which no expression data were retrieved, since their interactions could not be inferred by the methods assessed in this work. We obtained a total of 4,075 interactions among 1,780 genes for E. coli, 2,294 interactions among 1,298 genes for B. subtilis, and 1,297 interactions among 868 genes for P. aeruginosa. For the GS incompleteness analysis, we also retrieved from Abasy various public versions of the E. coli GRN (hereafter referred to as historical snapshots) with different completeness levels.
For the construction of the GS with interactions between co-regulated genes, we connected each regulon of 511145_v2018_sRDB18_eStrong so that each of them forms a clique, obtaining a total of 737,913 interactions among the same genes and overestimating the density of the network. Note that, in this network representation, the TGs of a regulon form a clique that includes the TF only if it regulates its own transcription. For the synthetic GS, we used 511145_v2017_sRDB16_eStrong as input for GeneNetWeaver (Schaffter et al., 2011) to generate datasets with 5, 20, 40, 60, 80, and 100% noise variations. From such datasets, we generated subsamples with 20, 25, 50, 75, and 100% completeness.
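The clique expansion can be sketched in a few lines; the helper below is our own illustration of the rule just described (the TGs of each regulon fully connected, the TF joining only when it is also one of its own targets).

    # Hedged sketch of the co-regulation GS construction; the helper and
    # its input format are hypothetical, the clique rule follows the text.
    from itertools import combinations

    def coregulation_gs(regulons):
        """`regulons` maps each TF to its set of target genes. Undirected
        interactions are stored once as sorted pairs."""
        edges = set()
        for tf, targets in regulons.items():
            members = set(targets)  # TF is in `targets` only if self-regulated
            for a, b in combinations(sorted(members), 2):
                edges.add((a, b))
        return edges

    # Example: two small regulons give 3 + 1 = 4 co-regulation interactions.
    print(len(coregulation_gs({"crp": {"araB", "lacZ", "malE"},
                               "fur": {"entC", "fepA"}})))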
Integration of individual predictions into a community network
A confidence score provided by each tool (when available) was used to rank the predictions, and missing interactions were ranked right after the last predicted one. Therefore, longer predictions penalize missing interactions more heavily. Inferred interactions sharing a common score within a method were ranked equally. The average rank is used as the score for the Community. For biological data, predictions were previously trimmed to the number of interactions that the complete organism-specific GRN would have according to previous work (Campos and Freyre-Gonzalez, 2019; Escorcia-Rodriguez et al., 2020). Those values correspond to 12,000 for E. coli and B. subtilis and 16,000 for P. aeruginosa.
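The average-rank integration can be sketched as follows; the helper is our own and simplifies tie handling (interactions are ranked by list position rather than by shared scores).

    # Hedged sketch of the average-rank community; the helper is
    # hypothetical, the penalty for missing interactions follows the text.
    def community_ranking(predictions, trim=12000):
        """`predictions` maps tool -> list of (regulator, target) edges,
        best first. Missing edges get the rank right after the last one."""
        ranks, edges = {}, set()
        for tool, pred in predictions.items():
            pred = pred[:trim]
            ranks[tool] = {e: i + 1 for i, e in enumerate(pred)}
            edges.update(pred)
        avg_rank = {
            e: sum(ranks[t].get(e, len(ranks[t]) + 1) for t in ranks) / len(ranks)
            for e in edges
        }
        return sorted(avg_rank, key=avg_rank.get)  # lower average rank = better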
Assessment
Unless otherwise described in the analysis, network predictions larger than the expected number of interactions in the complete GRN were trimmed (Campos and Freyre-Gonzalez, 2019; Escorcia-Rodriguez et al., 2020). The first 12,000 inferred interactions were considered for the assessment with E. coli and B. subtilis, and the first 16,000 inferred interactions for P. aeruginosa. Interactions shaping the GS were used as the positive set (P), while interactions absent in the GS were labeled as the negative set (N). Inferred interactions were considered true positives (TP) if they were present in the GS and false positives (FP) otherwise. Interactions in the GS that were not recovered by the algorithm were considered false negatives (FN). The Area Under the Receiver Operating Characteristic (AUROC) and Area Under the Precision-Recall (AUPR) curves were used to assess the predictions. While AUROC relates the false positive rate (FP/N) and the sensitivity (TP/P) of the prediction compared with the whole set of potential interactions, AUPR focuses on the list of predictions and its precision (TP/(TP + FP)) as well as the sensitivity of the algorithm. We selected PR as the main assessment measure due to the imbalance between the positive and negative sets (Saito and Rehmsmeier, 2015). For the overall score, we used the average score for the complete dataset with 5% noise for the synthetic data, and the scores obtained with RMA plus batch-effect correction for the biological data. For the study of the effect of GS incompleteness, we used each historical snapshot of the E. coli GRN as the GS. Inferred interactions sharing the same score were considered equally ranked by the method, and genes present neither in the GS nor in the expression data were not considered for this assessment. For the assessment of predictions not providing a score for each interaction, we used the MCC, which is the best-suited coefficient for imbalanced datasets (Boughorbel et al., 2017). Note that MCC was used only for the comparative assessment between GENIE3 and RNAseqNet, and for the regulon-level assessment. RNAseqNet does not score its predictions; therefore, we considered the first 12,000 interactions to assess its prediction with MCC, so that the ranking of the interactions does not impact the score. Note that this is not ideal, as the true positives (as well as novel interactions) may be at the bottom of the prediction, making it disadvantageous for the experimental validation of such inferred interactions. For the regulon-level assessment, we trimmed the predictions to the expected number of interactions once the corresponding network is complete and compared each of the regulons against the cognate regulon in the GS using MCC and the F1 score. The scores were plotted against the normalized out-connectivity.
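A minimal sketch of this evaluation with scikit-learn is given below; the helper, its argument names, and the score assignment for unpredicted pairs are our own simplifications, and average precision is used as a stand-in for AUPR.

    # Hedged sketch of the AUPR/AUROC assessment against a GS; the helper
    # is hypothetical, average precision approximates the AUPR above.
    from sklearn.metrics import average_precision_score, roc_auc_score

    def assess(ranked_edges, gold_edges, universe):
        """`ranked_edges`: trimmed prediction list, best first; `universe`:
        every scorable (regulator, target) pair. Unpredicted pairs score 0."""
        score = {e: len(ranked_edges) - i for i, e in enumerate(ranked_edges)}
        y_true = [e in gold_edges for e in universe]
        y_score = [score.get(e, 0) for e in universe]
        return {
            "AUPR": average_precision_score(y_true, y_score),
            "AUROC": roc_auc_score(y_true, y_score),
        }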
For the predictions of the COEX methods, we duplicated every interaction in the prediction list with the direction reversed, because the outputs provide interactions between two genes with no direction (e.g., a symmetric adjacency matrix). Given the nature of the assessment with a directed network as the GS, we considered every interaction to hold in both directions. While this increases the number of predictions, consideration of the direction is required. On the other hand, for the assessment of the predictions against a co-regulation GS, we did not consider the direction of the interactions. This way, the direction of the interactions predicted by a CAUS method was not considered.
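This direction handling reduces to a few lines of bookkeeping; the helper below is our own sketch of the duplication step for undirected COEX outputs.

    # Hedged sketch: duplicate each undirected COEX interaction in both
    # directions for assessment against a directed GS; the helper is ours.
    def symmetrize(undirected_edges):
        directed = set()
        for a, b in undirected_edges:
            directed.add((a, b))
            directed.add((b, a))
        return directed

    print(symmetrize({("araB", "lacZ")}))  # {('araB','lacZ'), ('lacZ','araB')}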
Combinatorial
We constructed selective communities with all the possible combinations of the 12 methods used in the assessment with biological data. We used the dataset normalized with RMA and batch correction for the three organisms. To measure the effect of each method on the community network, we computed the dominance score, defined as the probability of a selective community network containing a given tool outperforming the all-inclusive community network:

dominance = freq(AUC_Tool > AUC_comm) / maxT

where maxT is the theoretical maximum number of selective communities containing each tool, n is the number of methods (n = 12) used for the combinations, and r is the number of elements in each combination (r = 2, …, 11).
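The enumeration behind this score can be sketched as follows; the function and its input format are our own, and it assumes the AUC of every selective community has already been computed.

    # Hedged sketch of the dominance score; names and the input format are
    # hypothetical, the combinatorics (r = 2..11 of n = 12) follow the text.
    from itertools import combinations

    def dominance(tool, auc_of, auc_all_inclusive, tools, r_values=range(2, 12)):
        """`auc_of` maps a frozenset of tools to the AUC of that selective
        community; `tools` lists all 12 methods."""
        wins = total = 0
        others = [t for t in tools if t != tool]
        for r in r_values:
            for combo in combinations(others, r - 1):  # communities with `tool`
                total += 1
                if auc_of[frozenset(combo) | {tool}] > auc_all_inclusive:
                    wins += 1
        return wins / total  # `total` equals maxT for this tool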
Structural properties
We computed several structural properties for the GRNs at a global scale and normalized them as follows. Regulators, self-regulations, maximum out-connectivity, and giant component size were normalized by the network size (number of nodes). Density was used as its product with the fraction of nodes acting as regulators. The network diameter was normalized by the number of nodes minus 2 (as if no shortcuts existed). Network motifs were normalized by the number of potential motifs in the network, defined as

N_potential = C(TF_n, TF_m) × C(n − TF_m, r − TF_m),

where n is the size of the network, r is the number of nodes in the motif (r = 3), TF_n is the number of TFs in the network, TF_m is the number of TFs required for each type of motif (TF_m = 2 for feedforward and complex feedforward loops; TF_m = 3 for 3-feedback loops), and C(·,·) denotes the binomial coefficient. We scaled the property values across networks between 0 and 1. We clustered networks and properties using Ward's method. Further, we used pairwise Pearson correlation for the network property vectors and clustered them according to the Euclidean distance using Ward's method.
We used an in-house Python implementation of the dissimilarity measure proposed by Schieber et al. (2017) to quantify the differences in structural topology between two networks, considering global structural properties, node-level structural properties, and centrality. We used the parameters the authors recommend (0.45, 0.45, 0.1) and applied the measure to compare the networks pairwise. The dissimilarity matrix was clustered using Pearson correlation and Ward's method.
Data availability statement
The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author.
Funding
This work was supported by the Programa de Apoyo a Proyectos de Investigación e Innovación Tecnológica (PAPIIT-UNAM) IN202421 to JF-G. | 2023-01-18T14:09:16.520Z | 2023-02-17T00:00:00.000 | {
"year": 2023,
"sha1": "ee4f773ea673da78ca8f1948bcd4d2acb5a6d246",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fgene.2023.1143382/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "34d667a6f37ac0ede21f017f05cdf8a147068f49",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
117904738 | pes2o/s2orc | v3-fos-license | Set theory and cyclic vectors
Let H be a separable, infinite dimensional Hilbert space and let S be a countable subset of H. Then most positive operators on H have the property that every nonzero vector in the span of S is cyclic, in the sense that the set of operators in the positive part of the unit ball of B(H) with this property is comeager for the strong operator topology. Suppose \kappa is a regular cardinal such that \kappa \geq \omega_1 and 2^{<\kappa} = \kappa. Then it is relatively consistent with ZFC that 2^\omega = \kappa and for any subset S \subset H of cardinality less than \kappa the set of positive operators in the unit ball of B(H) for which every nonzero vector in the span of S is cyclic is comeager for the strong operator topology.
Introduction
Let H be a separable, infinite dimensional Hilbert space and let B(H) be the set of bounded linear operators A : H → H. A closed subspace E of H is invariant for such an operator A if A(E) ⊂ E. The invariant subspace problem (ISP) for Hilbert spaces asks whether there exists an operator A ∈ B(H) whose only closed invariant subspaces are {0} and H.
It was shown by Enflo [2] that there exists a bounded operator on a Banach space that has no proper closed invariant subspaces. Read [7] showed that the Banach space could be taken to be l^1 = l^1(N). A simplified version of Read's example is given in [1]. The ISP remains open for Hilbert spaces; it is also unknown whether there exists a separable, infinite dimensional Banach space on which every bounded operator has a proper closed invariant subspace. This note was motivated by the observation that the ISP for Hilbert spaces can be rephrased as a question about the existence of a generic filter on a certain poset. (This material is not needed for Section 2.) The construction is this. Let P be the poset consisting of all partially defined operators A on the Hilbert space l^2 = l^2(N) with the properties [...]. Order P by reverse inclusion. For any unit vectors v, w ∈ l^2 define D_{v,w} = {A ∈ P : there exists n such that A^n(v) is defined and ⟨A^n v, w⟩ ≠ 0}. It is not too hard to see that every D_{v,w} is dense in P, and a filter of P which intersects every D_{v,w} defines a bounded operator with no proper invariant subspaces. Conversely, if there is such an operator it can be scaled to have norm < 1, and then its finite dimensional restrictions define a filter of P which intersects every D_{v,w}. Thus, the ISP for Hilbert space can be cast in set-theoretic terms: it is equivalent to the existence of a D-generic filter on P, where D is the family of all the sets D_{v,w}. The poset P is not ccc, but this is not essential; for example, it can be replaced by the countable poset of all finite matrices with rational entries, ordered by a reasonable notion of approximate inclusion. Thus, one can apply Martin's axiom (see, e.g., [6]) to obtain the consistency of an operator which meets "many" of the sets D_{v,w}. This raises the possibility that the ISP for Hilbert space may be independent of ZFC. However, the assertion that A ∈ B(l^2) has no invariant subspaces is Σ_1 (it can be reformulated as "for all unit vectors v, w ∈ l^2 there exists n such that ⟨A^n v, w⟩ ≠ 0") and hence absolute, so if the ISP is independent, this cannot be shown using a straightforward forcing argument.
A relative consistency result
As we indicated above, although Martin's axiom alone will not suffice to prove the consistency of an operator with no proper closed invariant subspaces (unless this can already be proven in ZFC), it does allow one to prove partial results in this direction. In this section we present perhaps the strongest natural result along these lines. It was in fact originally proven directly from Martin's axiom, but here we give a better proof based on a suggestion of Kenneth Kunen.
It is easy to see that the operator A ∈ B(H) has no proper closed invariant subspaces if and only if every nonzero vector is cyclic, i.e., for every nonzero v ∈ H the span of the sequence (A^n v) is dense in H. Thus, the more cyclic vectors A has, the "closer" it gets to being a counterexample to the ISP.
If X is a Banach space then we let [X]_1 denote its closed unit ball. The strong operator topology on B(H) is generated by the basic open sets [...]. If H is separable then this topology is second countable and its restriction to [...].

Proof. Let (x_n) be an orthonormal basis of H. For m ∈ N and δ > 0 let U_{m,δ} be the set [...]. Fix m and δ for the remainder of the proof. We first show that U_{m,δ} is open. Let A ∈ U_{m,δ} and for each unit vector v ∈ E let f(v) be the smallest integer such that [...]. Then the function f is upper semicontinuous on the unit sphere of E (which is compact), so f is bounded. Let N be an upper bound for f and let [...]. Since E is finite dimensional, so is F. Also, by compactness of the unit sphere, δ′ < δ.
Then every vector in the unit sphere of F has a neighborhood on which g is bounded away from 0, so g must be bounded below by some positive ε, and we have A ∈ U_ε ⊂ U_{m,δ} for this ε. Thus U_{m,δ} is strong operator open.
Now we must show that U_{m,δ} is strong operator dense in [B(H)]_1^+. Fix A ∈ [B(H)]_1^+, a finite dimensional subspace F of H which contains E and x_m, and ε > 0. We will find an operator B ∈ U_{m,δ} such that ‖(B − A)w‖ < ε for all w ∈ [F]_1. Let F′ = span(F + A(F)), let n = dim(F′), let P_{F′} be the orthogonal projection of H onto F′, and let A′ = P_{F′}AP_{F′}. Note that A′v = Av for all v ∈ F. Next, choose an integer r > 4n/δ² and let X be a subspace of H of dimension nr which contains F′. We can identify X with the tensor product space K^n ⊗ K^r (where K is the scalar field, K = R or K = C) in such a way that F′ is identified with K^n ⊗ {(r^{−1/2}, . . . , r^{−1/2})}.
Since A is positive and ‖A‖ ≤ 1, the same is true of A′. Thus F′ is spanned by eigenvectors of A′, each of which belongs to an eigenvalue between 0 and 1, inclusive. Let v′_1, . . . , v′_n be an orthonormal set of eigenvectors belonging to the eigenvalues λ_1, . . . , λ_n. (The λ_i need not be distinct.) According to the above identification we have v′_i = v_i ⊗ (r^{−1/2}, . . . , r^{−1/2}) for some orthonormal basis {v_i} of K^n.
Let {w_j : 1 ≤ j ≤ r} be the standard orthonormal basis of K^r. Then the vectors v_i ⊗ w_j constitute an orthonormal basis of X ≅ K^n ⊗ K^r. Define B′ ∈ B(X) by setting [...]. To complete the proof, we will show that there exists B ∈ [B(X)]_1^+ such that ‖B − B′‖ < ε and BP_X ∈ U_{m,δ}. We will define B by choosing an orthonormal basis {e_ij} of X and corresponding values 0 ≤ σ_ij ≤ 1 and setting Be_ij = σ_ij e_ij. If each e_ij is sufficiently close to v_i ⊗ w_j and each σ_ij is sufficiently close to λ_i then we will have ‖B − B′‖ < ε. Thus, we must show that there exist {e_ij} and {σ_ij} arbitrarily close to {v_i ⊗ w_j} and {λ_i} which achieve BP_X ∈ U_{m,δ}.
First, we claim that there exist orthonormal bases {e_ij} arbitrarily close to the basis {v_i ⊗ w_j} with the property that every n-element subset of the set {P_{F′}(e_ij)} is linearly independent. That is, any n vectors in the basis orthogonally project to independent vectors in F′. This is true because, for any n indices i_1j_1, . . . , i_nj_n, the family of bases {e_ij} for which the vectors P_{F′}(e_{i_1 j_1}), . . . , P_{F′}(e_{i_n j_n}) are dependent is a variety of codimension 1 in the manifold of all orthonormal bases. Thus, the family of bases for which some n elements project onto a dependent set is a union of (rn choose n) meager sets, and hence meager. So, we can perturb the basis {v_i ⊗ w_j} by an arbitrarily small amount and achieve this condition. Now, having chosen {e_ij} so as to satisfy the previous claim, we conclude by showing that any choice of distinct values σ_ij such that each difference |σ_ij − λ_i| is sufficiently small will ensure BP_X ∈ U_{m,δ}. To see this, observe first that for any nonzero v ∈ F′ at most n − 1 of the inner products ⟨v, e_ij⟩ are zero. Otherwise, n of the vectors e_ij would be orthogonal to v, and hence n of the vectors P_{F′}(e_ij) would be orthogonal to v, which would imply linear dependence since dim(F′) = n. This contradicts the previous claim. Now for each nonzero v ∈ F′ let F_v = span{e_ij : ⟨v, e_ij⟩ ≠ 0}.
Distinctness of the σ_ij implies that the vectors B^k v are linearly independent for 0 ≤ k < dim(F_v); since F_v clearly contains span{B^k v : k ∈ N} (it contains v and is invariant for B), this shows that the two are equal. Thus, we must show that d(x_m, F_v) < δ. But x_m ∈ F′, so |⟨x_m, v_i ⊗ w_j⟩| ≤ r^{−1/2} for every i and j. We may therefore assume that |⟨x_m, e_ij⟩| ≤ 2r^{−1/2} < δ/√n for every i and j. It follows that ‖x′_m‖ < δ, where x′_m denotes the component of x_m orthogonal to F_v, since F_v contains all but at most n − 1 of the vectors e_ij. This shows that d(x_m, F_v) < δ, as desired.
Since a countable intersection of comeager sets is comeager, the lemma immediately implies the following result. [...] Theorem 1 is related to the main theorem of [5]. That result implies, for instance, that for any countable linearly independent subset S of a separable, infinite dimensional Hilbert space H there exists A ∈ B(H) for which every v ∈ S is cyclic.
Sophie Grivaux has pointed out to me that in Theorem 1 one can explicitly construct an operator for which every nonzero vector in the span of S is cyclic. Namely, first find an orthonormal basis of H whose span contains S (this can be accomplished by applying the Gram-Schmidt algorithm to S); then one can show directly that the operator V + V*, where V is the unilateral shift for the new basis, has the desired property. Moreover, with a little work this idea can be used to show that the set of operators in [B(H)]_1^+ for which every nonzero vector in the span of S is cyclic is dense for the strong operator topology [4]. | 2018-12-29T23:34:26.369Z | 2002-02-25T00:00:00.000 | {
"year": 2002,
"sha1": "93d35fac176cba2d9a2a4dd7089baab6a550d779",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "7854af3c92fb696b575e9f6a34229e49e2a767fc",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
33239329 | pes2o/s2orc | v3-fos-license | Structure-activity relationships regarding the antioxidant effects of the flavonoids on human erythrocytes
The effects of eleven flavonoids on lipid peroxidation, protein degradation, deformability and osmotic fragility of human erythrocytes exposed in vitro to 10 mM H2O2 for 60 min at 37˚C have been studied. The flavonoids quercetin, rutin and morin significantly protected erythrocytes against lipid peroxidation caused by H2O2. This inhibition of lipid peroxidation could be explained by the presence of at least two hydroxyl groups in ring B of the flavonoid structure, regardless of their positions. However, the flavonoids quercetin, 3,5,7-trihydroxy-4'-methoxy flavone-7-rutinoside and 3-hydroxy flavone significantly protected erythrocytes against protein degradation. This inhibition could be explained by the presence of a hydroxyl group at C-3 in ring C of the flavonoid structure. Quercetin and 3,5,7-trihydroxy-4'-methoxy flavone-7-rutinoside significantly protected erythrocytes against loss of deformability and increased osmotic fragility, indicating that the loss of erythrocyte deformability and the increase in osmotic fragility of erythrocytes exposed to H2O2 are related to protein degradation rather than to lipid peroxidation. The other flavonoids (chrysin, 2-carboxy ethyl dihydroxy flavone, apigenin, cirsimaritin, α-naphtho flavone and flavanone) failed to protect erythrocytes against the observed oxidative damage. The results demonstrate the importance of the chemical groups substituted on the basic skeleton of the flavonoids in dictating the type of antioxidant activity, and also demonstrate the hemorheological potential of flavonoids that have particular protein-antioxidant activities.
INTRODUCTION
Erythrocytes are susceptible to oxidative damage as a result of the high polyunsaturated fatty acid content of their membranes and the high cellular concentrations of oxygen and haemoglobin [1]. In healthy erythrocytes, significant oxidative damage is prevented by a very efficient antioxidant system, consisting of a number of antioxidant compounds and enzymes [2]. However, when free radicals overwhelm the capacity of the antioxidant system in the erythrocyte, oxidative damage may occur, endangering the integrity of the erythrocytes [3,4]. In vitro exposure of erythrocytes to oxygen-radical-generating systems (such as H2O2, ascorbate/Fe3+, cumene hydroperoxide, tert-butyl hydroperoxide, etc.) was shown to induce lipid peroxidation, protein degradation, loss of deformability, an increase in osmotic fragility, membrane lipid bilayer perturbation, inhibition of enzymes and hemolysis [5-12]. Free radical reactions occur in the human body and in food systems. Free radicals, in the form of reactive oxygen and nitrogen species, are an integral part of normal physiology. An over-production of these reactive species can occur in the human body due to oxidative stress brought about by an imbalance between the body's antioxidant defense system and free radical accumulation. These reactive species can react with biomolecules, causing cellular injury and death. This may lead to the development of chronic diseases such as those involving the cardio- and cerebrovascular systems, as well as cancers. The consumption of fruits and vegetables containing antioxidants has been found to offer protection against these diseases. Dietary antioxidants can augment cellular defenses and help to prevent oxidative damage to cellular components [13]. Plants are rich in phenolic compounds and flavonoids, which have been reported to exert multiple biological effects, such as antioxidant activities, free radical scavenging abilities, and anti-inflammatory and anti-carcinogenic effects [14]. Epidemiological and in vitro studies on medicinal plants and vegetables strongly support the idea that plant constituents with antioxidant activity are capable of exerting protective effects against oxidative stress in biological systems [15-17]. Crude extracts of herbs and other plant materials are rich in phenols and flavonoids, and several studies have reported a positive linear correlation between the total phenolic compounds and the antioxidant activities of aqueous and methanolic extracts of different plant species [18,19]. The antioxidant activity of phenols and flavonoids is mainly due to their redox properties, which allow them to act as reducing agents, electron/hydrogen donors, and singlet oxygen quenchers. In addition, they have a metal-chelating potential [20]. The unique chemical structures of phenolic compounds, characterized by an aromatic ring bearing one or more hydroxyl substituents, are predictive of their antioxidant potential in terms of radical scavenging, hydrogen- or electron-donating and metal-chelating capacities [21]. The flavonoids contain a C6-C3-C6 flavone skeleton (Table 1) in which the three-carbon bridge is cyclized with oxygen [22]. The antioxidant activity of phenolic compounds, including flavonoids, is related to the acid moiety and the number and relative positions of hydroxyl groups on the aromatic ring structure [20,23].
An earlier study from our laboratory indicated that the antioxidant effects of medicinal plants on human erythrocytes can vary, being either anti-lipid-peroxidant, anti-protein-degradant, or both [24]. This variation in antioxidant activity could be due to structural variation among the antioxidant compounds involved. The present study therefore aimed to screen selected flavonoids with known chemical structures (Table 1) for their protective effects against lipid peroxidation, protein degradation, deformability loss and increased osmotic fragility of human erythrocytes exposed to H2O2. This study also aimed to draw conclusions on the structure-activity relationships underlying the antioxidant properties of the flavonoids.
Flavonoids
Stock solutions of the flavonoids (all from Aldrich Chemical Company, Milwaukee, USA) were prepared by dissolving the powder form of the flavonoid in a few drops of 0.1 N NaOH and then making up the required volume with normal saline. These stock solutions (1.2 mg/ml) were stored in the refrigerator at 4˚C until use. The flavonoids used and their chemical structures are shown in Table 1.
Exposure of Erythrocytes to H2O2 with and without Flavonoids
Washed erythrocyte suspensions were prepared from heparinized whole blood drawn from adult volunteers, who were informed of the objectives of this study and gave their consent before venesection. The blood was centrifuged to remove the buffy coat layer, and the packed cells were then washed three times with cold phosphate buffered saline (PBS), as described by Dacie and Lewis [25]. Washed erythrocyte suspensions were pre-incubated with 2 mM sodium azide for 60 min at 37˚C in a shaking water bath to inhibit catalase. Next, equal volumes of cell suspension and 20 mM H2O2 were mixed and incubated for a further 60 min at 37˚C. Controls contained PBS instead of H2O2. Following the incubation period, the suspensions were mixed and used for MDA and alanine determinations as well as for deformability and fragility studies [12]. A given flavonoid at a final concentration of 90 µg/ml (i.e., 75 µl/ml of stock) was added to the erythrocyte suspensions at 30 min of the pre-incubation period with sodium azide.
Malonyldialdehyde (MDA) Determination
Erythrocyte MDA was determined as a measure of lipid peroxidation according to Stocks and Dormandy's method [26] as modified by Srour et al. [27]. The principle of this method depends on extraction of MDA from the erythrocyte suspension with trichloroacetic acid (TCA) solution. MDA in the TCA extract is then reacted with thiobarbituric acid (TBA), giving a pink colored complex (absorption maximum 532 nm). To 2.0 ml of erythrocyte suspension (2.5% in PBS), 1.0 ml of TCA-arsenate (TCA 28%, arsenate 0.1 M) was added and mixed. This mixture was centrifuged for 15 min at 1050 g (6000 rpm). An aliquot (2.0 ml) of the supernatant was mixed with 0.5 ml of TBA solution (1% in 0.05 M NaOH). This mixture was placed in a boiling water bath for exactly 15 min, then immediately cooled under tap water. The absorption of the reaction mixture was read at 532 nm against a reagent blank. For experiments that involved sodium azide, the absorbance was also read at 600 nm, and the difference between 532 nm and 600 nm was used as the basis for calculation of the MDA concentration. For preparation of the standard curve, standard MDA (1,1,3,3-tetraethoxypropane) dissolved in distilled water (4-20 nmol/ml) was assayed as above. All MDA concentrations were expressed in nmol/g Hb. The Hb was measured by the cyanmethemoglobin method as described by Dacie and Lewis [25].
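The conversion from azide-corrected absorbance to MDA concentration via the linear standard curve can be sketched as below; the helper is our own, and the extract-volume factor is a hypothetical placeholder since the exact dilution arithmetic is not spelled out in the text.

    # Hedged sketch: absorbance -> MDA (nmol/g Hb) via the 4-20 nmol/ml
    # standard curve; the helper and the volume factor are hypothetical.
    import numpy as np

    def mda_nmol_per_g_hb(a532, a600, std_conc, std_abs, hb_g):
        slope, intercept = np.polyfit(std_conc, std_abs, 1)  # linear fit
        conc = ((a532 - a600) - intercept) / slope           # nmol/ml extract
        extract_ml = 2.0   # assumption: the 2.0 ml supernatant aliquot
        return conc * extract_ml / hb_g

    std_conc = [4, 8, 12, 16, 20]             # nmol/ml standards
    std_abs = [0.05, 0.10, 0.15, 0.20, 0.25]  # hypothetical readings
    print(mda_nmol_per_g_hb(0.18, 0.02, std_conc, std_abs, hb_g=0.02))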
Alanine Determination
Alanine is not synthesized de novo in erythrocytes, so net production of alanine can only occur via protein degradation. Erythrocyte alanine concentration, determined as a measure of protein degradation, was measured by the alanine dehydrogenase (ADase) method at alkaline pH as described by Davies and Goldberg [28] and modified by Srour et al. [12]. To 3.0 ml of erythrocyte suspension, 1.0 ml of cold 1.6 M perchloric acid was added. After vortexing and 10 min on ice, the suspensions were centrifuged at 500 g for 10 min. Then 1.0 ml aliquots of the supernatants were taken to pH 9.0 with 0.2 ml of 2.0 M KOH and buffered by the addition of 0.8 ml of 0.5 M Tris-HCl (pH 9.0). After 1-2 hrs on ice, during which the perchlorate precipitated, the alanine content of each sample supernatant was measured. To a 0.5 ml aliquot of each supernatant, the following reagents were added: 0.5 ml of 0.8 M Tris-HCl (pH 9.0) buffer containing 0.04 M EDTA, 0.5 ml of 6.6% hydrazine hydrate solution (pH 9.0), 0.1 ml of 20 mg/ml NAD+ and 0.1 ml of 6 IU/ml alanine dehydrogenase. After incubation for 60 min at 37˚C, the alanine content was determined spectrophotometrically at 340 nm by the reduction of NAD+ to NADH. Standard alanine solutions (5-25 nmol/ml) were assayed alongside the erythrocyte suspensions to construct a standard curve. All alanine concentrations were expressed in nmol/g Hb. The Hb was measured by the cyanmethemoglobin method as described by Dacie and Lewis [25].
Deformability Studies
Leukocyte-depleted and platelet-depleted erythrocyte suspensions were prepared by pre-filtration of heparinized whole blood from adult volunteers through Imugard IG500 cotton wool (Termo Corporation, Tokyo, Japan) as described by Bilto et al. [29]. The cotton-wool-filtered erythrocytes were resuspended in PBS at a hematocrit of 7% and then exposed to H2O2 with and without flavonoids as described above. Erythrocyte deformability was measured by filtration of the erythrocyte suspension through 5 μm pore diameter polycarbonate membranes (Nuclepore Corporation, Pleasanton, USA) using a temperature-controlled Hemorheometre MK1 [30] at 37˚C. A small batch of 12 membranes was used and reused after cleaning by ultrasonication in aqueous sodium dodecylsulfate (1%, w/v) for 10 seconds [31]. Results were expressed as an index of filtration (IF), the flow time for the erythrocyte suspension relative to buffer, corrected for hematocrit [29]. An increase in IF indicates loss of deformability.
Osmotic Fragility Measurements
Aliquots (0.2 ml) of erythrocyte suspensions (2.5% hematocrit), incubated with H2O2 in the presence and absence of flavonoids at a final concentration of 90 µg/ml (i.e., 75 µl/ml of stock), were added to 1.8 ml of buffered saline solutions of decreasing concentrations, pH 7.4 (NaCl range 9.0-1.0 g/l). The suspensions were mixed, allowed to stand for 30 min at room temperature, mixed again and then centrifuged for 5 min at 1200 rpm. The supernatants were removed and the amount of lysis was determined spectrophotometrically at 540 nm. The percentage of hemolysis was calculated from the ratios of the absorbances [25].
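The final calculation is a simple absorbance ratio; the sketch below assumes, as is conventional but not stated explicitly here, that the lowest-salt tube represents complete lysis.

    # Hedged sketch of the percent-hemolysis calculation; assuming the
    # 1.0 g/l NaCl tube gives 100% lysis is our own assumption.
    def percent_hemolysis(a540_tube, a540_full_lysis):
        return 100.0 * a540_tube / a540_full_lysis

    a540 = {9.0: 0.02, 5.0: 0.35, 3.0: 0.78, 1.0: 0.80}  # hypothetical readings
    curve = {nacl: percent_hemolysis(a, a540[1.0]) for nacl, a in a540.items()}
    print(curve)  # % hemolysis at each NaCl concentration (g/l)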
Statistical Analysis
The results presented are means ± SD of 5 separate experiments for each test, with duplicate tubes. Statistical significance was determined using one-way analysis of variance followed by Student's t-test for paired samples. Differences were considered significant when p < 0.05.
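For illustration, the paired comparison described here can be run with SciPy as below; the numeric values are hypothetical placeholders, not data from this study.

    # Hedged sketch of the paired Student's t-test (n = 5 experiments);
    # the values are hypothetical, only the test choice follows the text.
    import numpy as np
    from scipy import stats

    control = np.array([20.1, 18.7, 22.4, 19.5, 21.0])   # e.g., MDA, nmol/g Hb
    h2o2 = np.array([402.0, 390.2, 415.8, 398.1, 407.4])
    t, p = stats.ttest_rel(control, h2o2)
    print(f"t = {t:.2f}, p = {p:.2e}, significant: {p < 0.05}")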
RESULTS
Incubation of erythrocytes with H2O2 for 60 min at 37˚C caused a significant increase in intracellular MDA (i.e., an increase in lipid peroxidation) from 20.0 ± 3.7 nmol/g Hb without H2O2 to 402.0 ± 56.7 nmol/g Hb with H2O2 (Table 2). When erythrocytes were pre-incubated with 90 µg/ml of the tested flavonoids, only quercetin, rutin and morin caused a significant inhibition of MDA production (Table 2).
In an attempt to explain the observed effects of H2O2 and the tested flavonoids on erythrocyte deformability, the osmotic fragility of erythrocytes exposed to H2O2 and to the tested flavonoids was studied. As shown in Figure 1, erythrocytes exposed to H2O2 showed increased osmotic fragility (the osmotic curve shifted to the right) when compared to control erythrocytes incubated in the absence of H2O2. Pre-incubation of erythrocytes with rutin and α-naphtho flavone, which had no effect on either alanine production or erythrocyte deformability, showed no effect on the osmotic fragility of erythrocytes exposed to H2O2, since they did not affect the position of the osmotic curve for these erythrocytes (Figure 1). In contrast, quercetin and 3,5,7-trihydroxy-4'-methoxy flavone-7-rutinoside, which inhibited alanine production and prevented the deformability loss, improved the fragility of erythrocytes exposed to H2O2, since they shifted the osmotic curve for these erythrocytes towards the left (Figure 1).
DISCUSSION
The present study evaluated the antioxidant properties of eleven flavonoids using human erythrocytes exposed to 10 mM H2O2 (Table 2). The lipid peroxidation results (Table 2) showed that flavonoids with no or only one hydroxyl group in ring B (e.g., 3,5,7-trihydroxy-4'-methoxy flavone-7-rutinoside, 3-hydroxy flavone, chrysin, 2-carboxy ethyl dihydroxy flavone, α-naphtho flavone, flavanone, cirsimaritin and apigenin) had no effect on MDA production, whereas quercetin and its 3-rutinoside (rutin), which both have ortho 3',4'-dihydroxy substitution in ring B, significantly inhibited MDA production in H2O2-treated erythrocytes (Table 2). This emphasizes the importance of the ortho 3',4'-dihydroxy substitution in ring B for the inhibition of lipid peroxidation. However, the hydroxyl groups at C-3 and C-5 do not seem to be important for this inhibition, as the C-3 hydroxyl group in rutin (which inhibited lipid peroxidation) is replaced by a disaccharide, and the flavonoids which have C-5 hydroxyl groups did not inhibit lipid peroxidation. These results are consistent with those of other researchers [32], who found that quercetin protected erythrocytes against H2O2-induced lipid peroxidation; with others [33], who found that quercetin and rutin protected low-density lipoproteins (LDL) against Cu2+ ion-dependent oxidation; and with others [34], who found that flavonoids which possess only one hydroxyl group at C-4' on the B ring and which also lack the C-3 hydroxyl group (i.e., apigenin and chrysin) did not show scavenging activity towards peroxynitrite, whereas quercetin showed the most prominent scavenging activity of all the tested flavonoids. The precise mechanism by which the ortho 3',4'-dihydroxy groups protect lipids against peroxidation is uncertain; likely mechanisms include their oxidation by the free radicals (thus scavenging the free radicals) and/or chelation of iron, thereby reducing free radical formation [33,35-40].
As lipid peroxidation was also inhibited by the flavonoid morin (Table 2), which has meta 2',4'-dihydroxy substitution in ring B, it seems that the number of hydroxyl groups located in ring B of the flavonoid could also be important for the inhibition of lipid peroxidation, with two hydroxyl groups being the minimum requirement, regardless of their positions. In support of this, it has been reported that the scavenging activity of flavonoids for hydroxyl radicals increases with the number of hydroxyl groups substituted in ring B, and that the presence of a hydroxyl group at C-3 or its glycosylation does not further increase the scavenging effect [37,41]. Consequently, it seems that oxidation of the meta 2',4'-dihydroxy groups by free radicals could have been responsible for the anti-lipid-peroxidant activity of morin observed in the present study. The anti-lipid-peroxidant activities of quercetin, rutin and morin were also observed by Affany et al. [42], who also found no effect for the flavonoid flavone, which further supports our results and the conclusion reached above.
Quercetin, 3,5,7-trihydroxy-4'-methoxy flavone-7-rutinoside and 3-hydroxy flavone significantly protected erythrocytes against protein degradation as compared with those treated with H₂O₂ alone (Table 2). In contrast, the flavonoids rutin, morin, chrysin, 2-carboxy ethyl dihydroxy flavone, apigenin, cirsimaritin, α-naphtho flavone and flavanone were not able to stop this protein degradation (Table 2). This emphasizes the importance of the hydroxyl group at C-3 in ring C for the inhibition of protein degradation, while other hydroxyl groups in the A and B rings do not seem to be important for the observed inhibitory activity. The precise mechanism by which these flavonoids protected proteins against degradation is uncertain. However, Brown et al. [33] reported that the presence of a C-3 hydroxyl group in the flavonoid structure enhances the oxidation of quercetin and kaempferol in the presence of Cu²⁺ ions, whereas luteolin and rutin, each lacking the C-3 hydroxyl group, do not oxidize as readily.
Although morin has a hydroxyl group at C-3 in the C ring, it did not protect proteins against degradation. This is presumably due to the presence of a hydroxyl group at C-2' in the B ring of morin. In support of this, it has been reported that the addition of a hydroxyl group at C-2' lowers the phosphodiesterase (PDE) and H⁺,K⁺-ATPase inhibitory activities of flavonoids [36,43].
The antioxidant activity of flavonoids could also depend on their partitioning between aqueous and lipophilic environments [44,45]. The present study does suggest that the anti-protein-degradation activity requires the flavonoid to partition towards the lipophilic environment: the flavonoids which showed anti-protein-degradation activity ranged from the very lipophilic 3-hydroxy flavone (insoluble in water) to quercetin, which distributes equally between aqueous and lipophilic environments, whereas rutin, which is more hydrophilic than quercetin (due to the presence of a disaccharide at C-3), did not prevent protein degradation [33]. On the other hand, it seems unlikely that the anti-lipid-peroxidant activity depends on the partitioning abilities of the flavonoids, as this activity has been reported for antioxidants ranging from very water-soluble compounds such as vitamin C to very lipophilic compounds such as vitamin E [24,42].
Quercetin and 3,5,7-trihydroxy-4'-methoxy flavone-7-rutinoside significantly protected erythrocytes against the loss of deformability as compared with those treated with H₂O₂ alone (Table 2). In contrast, the flavonoids rutin, morin, chrysin, 2-carboxy ethyl dihydroxy flavone, α-naphtho flavone and flavanone showed no effects on IF values (Table 2). The protective activity of 3,5,7-trihydroxy-4'-methoxy flavone-7-rutinoside against the loss of erythrocyte deformability appeared to be independent of lipid peroxidation, since this flavonoid inhibited protein degradation in erythrocytes exposed to H₂O₂ without affecting lipid peroxidation (Table 2). Thus, these findings support our previous reports that under oxidative stress the loss of deformability in erythrocytes or neutrophils is related to protein degradation and independent of lipid peroxidation [12,24,46].
Erythrocytes exposed to H₂O₂ exhibited increased sensitivity to osmotic shock when compared to controls. When these erythrocytes were pre-incubated with the flavonoids rutin and α-naphtho flavone, the osmotic fragility of H₂O₂-treated erythrocytes was not changed (Figure 1). On the contrary, pre-incubation of erythrocytes with the flavonoids quercetin and 3,5,7-trihydroxy-4'-methoxy flavone-7-rutinoside significantly decreased the osmotic fragility of H₂O₂-treated erythrocytes (Figure 1), presumably because of their ability to inhibit protein degradation. This suggests that both quercetin and 3,5,7-trihydroxy-4'-methoxy flavone-7-rutinoside make the cytoskeleton of erythrocytes more resistant to mechanical insult by protecting it against protein degradation. These findings also support our previous reports that under oxidative stress the increase in osmotic fragility of the erythrocyte and the loss of erythrocyte deformability are related to protein degradation rather than to lipid peroxidation [12,24].
In conclusion, the flavonoids which had two hydroxyl groups in ring B, regardless of their positions, were found to be anti-lipid-peroxidant, and the flavonoids which had a hydroxyl group at C-3 were found to be anti-protein-degradant as well as rheologically protective against oxidant stress.
Table 1. List of studied flavonoids with their chemical structures.
Table 2. Alanine and MDA concentrations, and IF of normal erythrocytes incubated at 37˚C for 60 min with or without 10 mM H₂O₂, or with H₂O₂ plus 90 µg/ml of the tested flavonoids. Values are presented as mean ± S.D. of 5 experiments with duplicate tubes.
Perturbative Roughness Corrections to Electromagnetic Casimir Energies
Perturbative corrections to the Casimir free energy due to macroscopic roughness of dielectric interfaces are obtained in the framework of an effective low-energy field theory. It describes the interaction of electromagnetic fields with materials whose plasma frequency $\omega_p$ determines the low-energy scale. The naïve perturbative expansion of the single-interface scattering matrix in the variance of the profile is sensitive to short-wavelength components of the roughness correlation function. We introduce generalized counter terms that subtract and correct these high-momentum contributions to the loop expansion. To leading order the counter terms are determined by the phenomenological plasmon model. The latter is found to be consistent with the low-energy description. The proximity force approximation is recovered in the limit of long correlation length and gives the upper limit for the roughness correction to the Casimir force. The renormalized low-energy theory is insensitive to the high-momentum behavior of the roughness correlation function. Predictions of the improved theory are compared with those of the unrenormalized model and with experiment. The Casimir interaction of interfaces with low levels of roughness is found to be well reproduced by that of flat parallel plates with the measured reflection coefficients at a distance that is slightly less than the mean separation of the rough surfaces.
I. INTRODUCTION
FIG. 1. Two semi-infinite slabs of the same material separated by vacuum. The low-energy electromagnetic properties of the material are described by a bulk permittivity ε(ζ = iω) that only depends on the frequency of the electric field. In Cartesian coordinates the planar interface is at z = −a and the mean separation of the two interfaces is a. The surface of the rough slab is at z = h(x), where h(x) is a profile function that generally depends on both transverse coordinates x = (x, y). We develop a perturbative expansion valid for |h(x)| ≪ a with no restrictions on the profile other than that it be single-valued. h(x) in particular need not be as smooth as shown here.
Roughness corrections to the Casimir free energy to leading order in the variance are derived. They differ from earlier results by the inclusion of a counter term that corrects uncontrolled high-momentum contributions to loop integrals. We obtain the limits of very large and very small correlation length as well as the ideal metal limit, and determine the plasmon coupling at low energies by analyticity arguments. Sec. V develops the low-energy effective field theory of electromagnetic interactions with materials to one loop, including generalized counter terms. We state the renormalization conditions that determine them. Sec. VI presents our numerical results and compares them to unrenormalized perturbation theory and experiment. Sec. VII is a summary of the approach. Basic ingredients and some detailed calculations are relegated to four appendices.
II. THE ELECTROMAGNETIC FREE ENERGY OF A ROUGH AND A FLAT MATERIAL INTERFACE
The present approach is based on Schwinger's low-energy effective field theory [3] for electromagnetism. The partition function in this model is a functional of the local dielectric permittivity tensor ε(ζ_n, x, z) of the material and of an external polarization source P_n(x) = P(ζ_n, x). It is the product of contributions from (independent) thermal modes [31] of the electric field at Matsubara frequency

ζ_n = 2π|n|T ≥ 0 for all n ∈ ℤ. (1)

The partition function Z_T[P; ε] formally is given by the functional integral of Eq. (2); it is the partition function of QED in axial gauge A_0 = Φ = 0 for a medium with local dielectric permittivity ε(ζ, x). In this gauge E_n = ζA_n, B_n = ∇ × A_n, and the current source is j_n = ζP_n. We consider the standard Casimir configuration of two parallel semi-infinite plates at an average separation a that is much less than their transverse dimension [1]. In the following we restrict the discussion to the configuration shown in Fig. 1 of two semi-infinite dielectric (metallic) slabs of the same material separated by vacuum, only one of which is rough:

ε₃(ζ) = 1, ε₂(ζ) = ε(ζ) = ε₁(ζ). (3)
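For concreteness, here is a minimal sketch (our own notation, not code from the paper) of the two ingredients just introduced: the Matsubara frequencies and a plasma-model permittivity on the imaginary frequency axis, the form used explicitly later in the text.

# Matsubara frequencies and plasma-model permittivity (hbar = k_B = 1).
# Parameter values are illustrative.
import numpy as np

def matsubara_frequencies(T, n_max):
    """zeta_n = 2*pi*n*T for n = 0, 1, ..., n_max."""
    return 2.0 * np.pi * T * np.arange(n_max + 1)

def eps_plasma(zeta, omega_p):
    """Plasma-model permittivity eps(i*zeta) = 1 + (omega_p/zeta)**2, zeta > 0."""
    return 1.0 + (omega_p / zeta) ** 2

zetas = matsubara_frequencies(T=0.025, n_max=5)  # T ~ room temperature in eV
print(eps_plasma(zetas[1:], omega_p=9.0))        # the n = 0 mode is treated separately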
We forego the ability to address lateral Casimir forces, which are finite and vanish if one of the interfaces is flat. Lateral Casimir forces depend on cross-correlations of the two profiles. At separations a ≫ σ they are small and involve only low momenta. For corrugated plates they have been computed in [32]. We here are interested in the effect of profiles on the normal Casimir force. The physical interpretation and consistent subtraction of (potentially divergent) contributions will be our main concern.
The rough interface is assumed to be without enclosures, and the deviation from a flat interface at z = 0 is described by a single-valued function h(x) that satisfies

⟨h(x)⟩ = 0. (4)

The point of reference for defining the separation a of the two slabs should be irrelevant. However, a consistent perturbative expansion is feasible only in the absence of so-called tadpole contributions. These vanish if the separation a is such that Eq. (4) holds; Eq. (4) in this sense defines the distance a between the interfaces. When the cross-sectional area A of the slab is taken arbitrarily large, boundary effects can be ignored and the 2-point correlation function

D₂(x − y) = ⟨h(x)h(y)⟩ (5)

is invariant under transverse translations. The roughness variance

σ² = D₂(0) = ⟨h²(x)⟩ (6)

is a measure of the roughness amplitude. The dielectric permittivity function ε(ζ, x) in this effective low-energy field theory is of the form of Eq. (7), in which V(ζ, h(x), z) is the deviation due to the roughness profile h(x) from the dielectric permittivity of a transversely homogeneous medium,

V(ζ, z) = 𝟙[ε₃(ζ) + (ε₂(ζ) − ε₃(ζ))θ(z) + (ε₁(ζ) − ε₃(ζ))θ(−z − a)] + δV_h(ζ, z).
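A profile with the properties of Eqs. (4)-(6) is easy to synthesize numerically. The following sketch (a standard spectral method, not the paper's code; all parameter values are illustrative) generates a periodic 1-D profile with zero mean and a Gaussian 2-point correlation:

# Generate h(x) with <h> = 0 and D2(r) = sigma^2 * exp(-r^2/lc^2)
# by filtering white noise in Fourier space.
import numpy as np

rng = np.random.default_rng(0)
N, L, sigma, lc = 4096, 400.0, 1.0, 5.0
dx = L / N
q = 2.0 * np.pi * np.fft.rfftfreq(N, d=dx)

# 1-D spectrum of the Gaussian correlation:
spec = sigma**2 * lc * np.sqrt(np.pi) * np.exp(-(q * lc) ** 2 / 4.0)

noise = rng.normal(size=N)
h = np.fft.irfft(np.fft.rfft(noise) * np.sqrt(spec / dx), n=N)
h -= h.mean()          # enforce <h> = 0, cf. Eq. (4)
print(h.std(), sigma)  # the sample standard deviation should be close to sigma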
We shall argue that the counter term δV_h(ζ, z) to the dielectric permittivity of three flat slabs is necessary for a consistent perturbative expansion in the framework of a low-energy theory. δV_h(ζ, z) depends on gross properties of the profile h(x) but not on the transverse position x nor on the separation of the two interfaces. This counter term ensures that the single-interface scattering matrix is reproduced by the low-energy theory. To leading order δV_h(ζ, z) is proportional to the variance σ² of the rough interface. We are thus calculating the perturbative expansion for the rough interface at z = 0 about an effective x-independent (bare) permittivity,

ε_eff(ζ, z) = 𝟙ε₂(ζ)θ(z) + 𝟙ε₃(ζ)θ(−z) + δV_h(ζ, z).
δV_h(ζ, z) has support near the surface at z ∼ 0 only. To approximate scattering off a rough interface by an effective ε_eff(ζ, z) is a conceptually appealing idea and not new [33,34]. We develop a consistent low-energy approach in which this is realized perturbatively. Contrary to commonly used ansätze for the effective ε_eff, δV_h(ζ, z) generally is not isotropic.
The inherent limitations of the effective low-energy description derive from the fact that electromagnetic interaction with matter is encoded in the permittivity function. They are not restricted to a perturbative analysis. The dimensionless permittivity ε(ζ) = ε(ζ/ω_p) depends implicitly on a scale that can be identified with the plasma frequency ω_p of the material. At momentum- or energy-transfers (or temperatures) that are much larger than ω_p, the effective low-energy theory of Eq. (2) fails to incorporate non-linear effects or to account for the creation of free charges. The ansatz that the permittivity does not depend on the profile furthermore is incorrect at wavelengths comparable to the plasma wavelength l_p = 2π/ω_p, and a description in terms of the bulk permittivity of the homogeneous material is not warranted within the plasma skin depth of order l_p. For the gold surfaces commonly used, ω_p ∼ 0.046 nm⁻¹ ∼ 9 eV. The low-energy description of electromagnetic interactions with such materials by Eq. (7) therefore is already questionable at wave numbers q ≳ ω_p ∼ 0.046 nm⁻¹ that resolve less than 20 nm or about 200 gold atoms. We will find that roughness corrections to the Casimir force with correlation lengths l_c ≲ 1/ω_p depend on momentum transfers q ≳ ω_p that are inadequately described by the low-energy theory. The conservative approach is to use the effective low-energy theory to only compute roughness corrections with l_c ≫ 1/ω_p ∼ 20 nm, a regime where the PFA generally is quite accurate. We improve on this by introducing phenomenological input.
It is interesting in this regard that many comparisons of theory with experiment in the literature are for correlation lengths l_c ∼ 25 nm ∼ 1/ω_p(Au). The unimproved theory is highly sensitive to l_c in this regime and (re)produces large variations with only small changes in parameters. Roughness corrections computed with this unimproved model for such short correlation lengths are uncontrolled and in fact physically untenable [22].
We here compute roughness corrections that are consistent with the low-energy effective model by using low-energy (experimental) data to systematically subtract and correct high-momentum contributions to the loop expansion. The method is quite general [26,27] and has been successfully applied to low-energy effective field theories as diverse as chiral perturbation theory [28] and (quantum) gravity [27]. In our case it yields a consistent expansion in σ/a for any value of 0 < l_c ω_p < ∞, at the expense that the reflection of electromagnetic radiation perpendicular to the rough plate has to either be measured or be reliably modeled.
A. The Green's Function and Casimir Energy of Two Parallel Flat Interfaces
Schwinger obtained the free energy and the response to an external polarization source P_n(x, z) = P(x, z; ζ_n) for three parallel slabs in the framework of the low-energy effective field theory given by Eq. (2). The free energy in this case [3] is given by Eq. (11). Here F_T is the well-known Casimir free energy for three parallel slabs, Eq. (12), expressed in terms of the reflection coefficients at the i-th interface of area A for the TE- and TM-modes.
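Since the displayed Lifshitz-type formulas did not survive extraction, the following sketch reconstructs the standard zero-temperature version for two identical flat half-spaces under the plasma model (ħ = c = 1). It is our illustration of the content of Eqs. (11)-(13), not the paper's code:

# Lifshitz free energy per unit area of two flat parallel half-spaces at T = 0.
import numpy as np
from scipy.integrate import quad

def casimir_energy_per_area(a, omega_p):
    def integrand_k(k, zeta):
        kappa0 = np.hypot(k, zeta)                    # vacuum decay constant
        eps = 1.0 + (omega_p / zeta) ** 2             # plasma model, zeta > 0
        kappa_e = np.sqrt(k * k + eps * zeta * zeta)  # decay constant in the medium
        r_te = (kappa0 - kappa_e) / (kappa0 + kappa_e)
        r_tm = (eps * kappa0 - kappa_e) / (eps * kappa0 + kappa_e)
        e2 = np.exp(-2.0 * kappa0 * a)
        return k * (np.log1p(-r_te**2 * e2) + np.log1p(-r_tm**2 * e2))

    def integrand_zeta(zeta):
        val, _ = quad(integrand_k, 0.0, 30.0 / a, args=(zeta,), limit=200)
        return val

    val, _ = quad(integrand_zeta, 1e-8, 30.0 / a, limit=200)
    return val / (4.0 * np.pi**2)

# Sanity check: for omega_p*a >> 1 this approaches the ideal-metal value.
print(casimir_energy_per_area(1.0, 1e3), -np.pi**2 / 720)

The nested quadrature is slow but adequate for a consistency check; in the ideal-metal limit both reflection coefficients approach unity and the textbook result −π²/(720a³) is recovered.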
The response to the n-th Matsubara mode of an external source of polarization is given by Eq. (14). Due to translational invariance in transverse directions, G(x, z, y, z′; ζ, a) is a function of x − y, and the Fourier representations entering Eq. (14b) are

G(x, z, y, z′; ζ, a) = ∫ d²k/(2π)² e^{ik·(x−y)} G(k, z, z′; ζ, a) and P_n(k, z) = ∫ d²x e^{−ik·x} P_n(x, z).
G can be decomposed into a single-interface Green's dyadic G_|(x − y, z, z′; ζ) = G(x, z, y, z′; ζ, a → ∞), from which the second interface has been removed, and the correction G_|a|(x − y, z, z′; ζ, a) due to the presence of a second flat interface at mean separation a. In momentum space the latter vanishes exponentially for a → ∞. Explicit expressions for the components of G_|(k, z, z′; ζ) and G_|a|(k, z, z′; ζ) when z and z′ are in slab #2 or slab #3 are collected in App. A.
B. Perturbative Roughness Correction to the Casimir Free Energy: Green's Function Formalism
A straightforward perturbative expansion in the roughness potential V_h is possible only for media with ε₂ − ε₃ ≪ 1. Since the Casimir free energy itself is rather small, roughness corrections are not very important in this weak-coupling scenario. However, the support of V_h is restricted to |z| ≤ max_x |h(x)| ∼ σ ≪ a, and a perturbative expansion in σ/a may exist even for media whose permittivity is rather large. This expansion in fact is possible even for ideal metals.
The part of the free energy that captures the dependence on the average separation a of the two interfaces is by definition the Casimir free energy due to their interaction. In terms of the Greens-dyadic G of three parallel slabs satisfying Eq. (15), the full Greens-dyadic G for the combination of a rough and a flat interface formally solves a Dyson equation with the roughness potential V of Eq. (19). The change in free energy due to roughness of one interface therefore is given by Eq. (20) [32,35], where the trace includes a summation over Matsubara frequencies and over a complete set of scattering states. The expression in Eq. (20) is rather formal because it includes the change in free energy due to roughness in the absence of the second (flat) interface. This infinite single-body contribution to the free energy does not depend on the mean separation a. Subtracting from ∆F_T[h, a] its value when the two interfaces are infinitely far apart gives the correction to the Casimir free energy due to roughness of an interface, Eq. (21), where T_h of Eq. (22) is the formal scattering matrix due to the roughness potential V. T_h does not depend on the separation a and describes scattering due to roughness in the absence of the second (flat) interface. Since high momenta are exponentially suppressed in G_|a|, the Volterra series of ∆F_Cas converges when the norm of T_h G_|a| is bounded and sufficiently small.
III. THE ROUGHNESS SCATTERING MATRIX T_h
Noting that the component G_|zz(k, z, z′; ζ) in Eq. (A1) includes a δ-function singularity, Eq. (22) can be rewritten in the form of Eq. (24), in terms of the Green's dyadic G̃_| with the Fourier components of Eq. (25) and a new potential Ṽ, which to order σ² is given by Eq. (26). G̃ is devoid of δ-function singularities (but not continuous at z = 0); its components are given in Eq. (A7). The reformulation of Eq. (22) in the form of Eq. (24) resums local contributions of the same order in h. It allows the formulation of a consistent perturbative expansion in σ even in the ideal metal limit ε(ζ) → ∞. Just as for V, the support of Ṽ is restricted to the interval |z| < max_x |h(x)| ∼ σ. Since G̃_| is free of ultra-local δ-function singularities, contributions to T_h of n-th order in Ṽ are at least of n-th order in the standard deviation σ of the profile h.
To second order in σ we need only consider the first two terms of the Volterra series of Eq. (27), since the counterterm potential δṼ_h is itself of order σ² (as will be seen). The second-order contribution T^(2) of Eq. (27) is at least of order σ², and its integrated expectation to this order is given by Eq. (28). Because ∫dz Ṽ_h(x, z, ζ) already is of order σ, the Fourier components of t^(2) are those of Eq. (29). Since the zz components of the dyadic (see Appendix A) are discontinuous at z = 0, one has to separately consider correlators of positive and negative components of the roughness profile in Eq. (29). With h_±(x) = h(x)θ(±h(x)), these signed correlators are given by Eq. (31) for a roughness correlation function D₂(r) that is positive and monotonically decreasing with r = |x − y|. The signed correlators do not vanish and approach ±σ²/(2π) for r → ∞ if D₂(r ∼ ∞) ∼ 0. At small separations r = |x − y| ≪ l_c, cos φ = D₂(r)/D₂(0) ∼ 1 − βr^α; thus φ ∝ r^{α/2} for r ∼ 0 with an exponent α > 0. The expressions of Eq. (31) for small φ then imply the behavior of Eq. (32). After Fourier transformation the asymptotic behavior at large momenta ql_c ≫ 1 of D₊₊(q) = D₋₋(q) is the same as that of ½D(q ≫ 1/l_c), whereas the mixed correlations D₊₋(q) = D₋₊(q) fall off more rapidly. For l_c ≲ 1/ω_p, high-momentum contributions are appreciable or even dominate the 1-loop corrections to the diagonal components of the scattering matrix in Eq. (29); Eq. (33) is an example. Whether or not loop integrals like Eq. (33) diverge depends on the roughness correlation function. For Gaussian correlations the integral converges, but the roughness "correction" becomes (arbitrarily) large for l_c ∼ 0. This invalidates the perturbative expansion in σ/a and, for sufficiently small l_c, violates unitarity. It furthermore is unphysical that roughness corrections to the scattering matrix for profiles of fixed variance become arbitrarily large as l_c → 0. For a scalar field and Gaussian roughness correlation, higher orders in the loop expansion are of the same order in σ/l_c in this limit [22]. Assuming the scalar model is valid at all energy scales, we resummed the leading σ/l_c contributions to the scalar Casimir energy and found that they amount to a change in the effective separation ∆a ∼ σ²/l_c of the two interfaces.
However, the effective low-energy electromagnetic theory of Eq. (2) evidently is not valid for momenta that far exceed the plasma frequency ω_p. One furthermore is not assured that summing incorrect higher-loop contributions in this effective low-energy theory improves the situation. We therefore will not follow that line and proceed differently in this case.
For correlation functions with non-vanishing slope at r = |x − y| = 0, that is D₂′(r = 0) ≠ 0, the situation is even more serious. For instance, the 2-dimensional Fourier transform of the exponential correlation function of Eq. (35) decays as a power law proportional to q⁻³ at large momenta. The integral in Eq. (33) and other (diagonal) components of the roughness correction t^(2) in Eq. (29) to the single-interface scattering matrix in this case are logarithmically UV-divergent for any correlation length l_c > 0. Experiment [19] does not distinguish Gaussian roughness correlations, and roughness profiles with correlation lengths l_c ω_p ≲ 1 are readily manufactured. Restricting the model to a particular form for the roughness correlation would not address the fact that the effective low-energy theory does not describe high-momentum contributions to loop integrals correctly.
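The different large-q behavior is explicit in the closed-form 2-D spectra of the two correlation models. These are standard Hankel-transform pairs; the normalization below is our convention with σ² = ∫d²q D(q)/(2π)²:

# 2-D Fourier transforms ("power spectra") of Gaussian vs exponential
# roughness correlations with equal variance and correlation length.
import numpy as np

def D_gauss(q, sigma, lc):
    # 2-D FT of sigma^2 * exp(-r^2/lc^2)
    return np.pi * sigma**2 * lc**2 * np.exp(-(q * lc) ** 2 / 4.0)

def D_exp(q, sigma, lc):
    # 2-D FT of sigma^2 * exp(-r/lc): decays only as q**-3
    return 2.0 * np.pi * sigma**2 * lc**2 / (1.0 + (q * lc) ** 2) ** 1.5

q = np.logspace(-1, 2, 7)
print(D_gauss(q, 1.0, 1.0))
print(D_exp(q, 1.0, 1.0))

The (ql_c)⁻³ tail of the exponential spectrum is what renders the naive one-loop integral logarithmically divergent, while the Gaussian spectrum cuts off exponentially.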
From a practical point of view the problem is that roughness corrections to the Casimir free energy and other low-energy observables are exceptionally sensitive to high-frequency components of the profile, because G_|(k, 0, 0; ζ) grows like k at large momenta. (Ref. [12] uses a correlation in the affine class D_s ∝ K_s(r√(2s)/l_c) with s = 0.9, for which the loop integral converges but remains sensitive to high-momentum contributions.) Fig. 2 depicts typical roughness profiles for three different correlation functions with the same correlation length and variance: a) exponential as in Eq. (35), b) Gaussian as in Eq. (34), and c) rational, D_Rational(r) = σ²/(1 + (r/l_c)²)². It is evident from Fig. 2 that the three profiles differ only in their high-frequency components. However, to leading order in the variance, corrections to the low-energy scattering matrix are extremely different for the three types of profiles. The roughness correction diverges in the exponential case a) but is finite for profiles b) and c). This sensitivity can be traced to the UV behavior of the 1-loop integrands like that of Eq. (33). It is unphysical and an artifact of taking the low-energy effective theory beyond its limits.
Analogous difficulties arise in any non-renormalizable low-energy effective field theory [27,28] and we here resort to a similar cure: whereas high momenta may dominate loop corrections to the scattering matrix, they generally are sufficiently suppressed in differences thereof. Differences of elements of the scattering matrix often can be reliably estimated within the framework of the low-energy effective field theory. However, phenomenological input is required to determine high-momentum contributions to loop integrals that are beyond the reach of the low-energy theory.
One for instance can rewrite t^(2)_xx(k, ζ) in the subtracted form of Eq. (36), where q = k′ − k, κ_ε = √((k + q)² + ζ²ε(ζ)) and κ′_ε = √(q² + ζ²ε(ζ)) in the last expression. The one-loop correction to t^(2)_xx(0, ζ) in Eq. (36) converges for any D(q) with finite variance.

FIG. 3. The counter potential Ṽ_h includes two contributions of order σ². It subtracts the one-loop contribution to the average scattering matrix at vanishing (transverse) momentum and replaces it by the phenomenological one. The latter is modeled by the tree-level plasmon contribution at vanishing transverse momentum. The plasmon couples to radiation due to the roughness of the surface only, and its coupling g²σ² to this order is proportional to the variance of the roughness profile. The plasmon propagator (dashed) is the one-interface Green's function G̃(z = z′ = k = 0). We show in the text that g²(ζ/ω_p, l_c ω_p) = 1 at low frequencies.
More importantly, the correction term in Eq. (36) is of order (kσ)² and thus small at low transverse momenta for any correlation length l_c of the profile. This correction to t^(2)_xx(0, ζ) thus is reliably computed in the framework of the low-energy theory.
It remains to estimate t^(2)(0, ζ). This is the roughness correction to the (analytically continued) scattering matrix of an electromagnetic wave of frequency ω = iζ incident perpendicular to the rough plate. t^(2)(0, ζ) is a single-interface low-energy characteristic that, at least in principle, can be derived from ellipsometric measurements of the rough interface. Instead of directly incorporating such experimental data, we here model the corrections of order σ² to the low-energy scattering matrix by the coupling to surface plasmons induced by roughness. We determine the coupling by demanding that this phenomenological description of t^(2)(0, ζ) be consistent with the low-energy field theory in the limit of large correlation length and that the ideal metal limit exist at any correlation length.
Roughness couples electromagnetic radiation to surface plasmons [37]. At low transverse wave numbers this coupling is of the order of the rms roughness σ. To order σ² the corresponding tree-level correction to the scattering matrix is shown schematically in Fig. 3. The diagram depicts the creation, propagation and subsequent annihilation of a surface plasmon by an incident electromagnetic wave.
For k → 0 a surface plasmon on the interface of a flat plate at z = 0 propagates with the dyadic of Eq. (38). To second order in σ, the correction t^(2)(0, ζ) to the scattering matrix at vanishing momentum transfer from surface plasmons thus is that of Eq. (39), where g(ζ/ω_p; l_c ω_p) is a dimensionless coupling that depends only on the frequency of the plane wave incident perpendicular to the rough plate. The coupling g(ζ/ω_p; l_c ω_p) in general is not calculable within this low-energy effective model and has to be determined phenomenologically. We argue below that g² ∼ 1 at low energies. Since g(ζ/ω_p; l_c ω_p) is a phenomenological function rather than just a constant, one could have directly modeled t^(2)(k = 0, ζ). However, the ansatz of Eq. (39) is consistent with the low-energy scattering theory in the sense that roughness correlation functions for large correlation length l_c ω_p ≫ 1 approach representations of the δ-distribution, Eq. (40). Loop integrals in the limit l_c → ∞ become trivial and furthermore involve only momenta q ≪ ω_p. Predictions of the low-energy theory therefore should be reliable in the limit l_c → ∞. Evaluating the loop integrals of Eq. (29) for k → 0 using Eq. (40) and comparing with the plasmon contribution of Eq. (39) requires Eq. (41) to hold. We will find that Eq. (41) not only ensures consistency, but also the existence of an ideal metal limit. It in addition ensures that the PFA to the Casimir free energy is recovered in the limit l_c ω_p → ∞. At finite l_c ω_p the coupling g(ζ/ω_p, l_c ω_p) in principle has to be determined phenomenologically. However, the coupling is severely constrained if we impose some theoretical requirements. Since the range of frequencies ζ that contribute to the Casimir energy satisfies ζa ≲ 1 ≪ ω_p a and the plasmon coupling does not diverge at low frequencies, we in the following ignore the ζ-dependence of g(ζ/ω_p, l_c ω_p) and at low frequencies use the approximation of Eq. (42) in Eq. (39). Eq. (42) assumes that the plasmon coupling is strongest for an ideal metal. Note that the fact that g is dimensionless links the ideal metal to the large-l_c limits.
To order σ², the subtraction of the one-loop contribution t^(2)(k = 0, ζ) and its replacement by phenomenological plasmon scattering is implemented by a counter-term potential δṼ(ζ, z), local in transverse coordinates, of the form of Eq. (43). Note that the support of δV_h(ζ, z) is in the immediate vicinity of z = 0 only. Due to rotational and translational symmetry of the rough plate, this "counter potential" is local and diagonal but anisotropic.
As mentioned in Sec. II, the counter potential may be interpreted as the modification of the dielectric permittivity (to order σ²) in the vicinity of the flat interface that is necessary to describe the rough interface with permittivity ε and roughness correlation D₂(x − y). There is no compelling reason for perturbing about a flat interface with the same permittivity as the rough one. We have seen that the expansion about a flat plate with the same permittivity is not consistent with the low-energy description, since it implies unacceptably high momenta in the loop integrals. Expanding instead about the bare permittivity function of Eq. (9) yields a better controlled approximation, and Eq. (43) strongly suppresses high-momentum contributions at one loop.
IV. ROUGHNESS CORRECTION TO THE CASIMIR FREE ENERGY OF ORDER σ²
We now evaluate the roughness correction to the Casimir free energy within the framework of the improved low-energy effective field theory. From Eq. (23) and Eq. (27) we have altogether the four contributions of order σ² listed in Eq. (44). We consider them in turn.
The first is the seagull contribution of Fig. 4a, given by Eq. (45). This roughness contribution to the free energy is entirely local and does not depend on the correlation length l_c. The loop integral over transverse momenta and the sum over Matsubara frequencies are exponentially restricted to momenta 2aκ ≲ 1, and the evaluation of the seagull diagram using the low-energy propagators should be accurate for all aω_p ≳ 0.5, that is for a ≳ 12 nm in the case of gold plates. Due to the κ_ε factor of the integrand, the contribution of Eq. (45) is proportional to ω_p σ²/a⁴ for aω_p ≫ 1 ≫ Ta and diverges in the ideal metal limit. Fortunately the seagull is not the whole story at order σ².
B. The Single Diffusive Scattering Contribution Tr Ṽ_h G̃_| Ṽ_h G_|a|
The other contribution to the Casimir free energy of order σ² from a single scattering off the rough interface corresponds to the diagram of Fig. 4b. This unsubtracted 2-loop contribution is formally given by Eq. (46), with q = |k − k′| and the interaction vertices of Eq. (47). The signed correlation functions in Eq. (46) combine, and Eq. (48) depends on the roughness correlation D(|k − k′|) only. In App. C the integral over θ in Eq. (48) is performed analytically for the class of correlations D_s(q), but this angular integral in general has to be evaluated numerically. More importantly, the leading term of order ω_p in the limit ω_p → ∞ of Eq. (48) cancels the leading asymptotic behavior ∝ ω_p of the seagull term in Eq. (45). The limit of Eq. (48) for large correlation length l_c ≫ 1/ω_p is found using Eq. (40) to trivially evaluate the k′-integrals. Some algebraic manipulations simplify the expression in this limit to Eq. (49).
C. The Counterterm Correction
As for t^(2) in Eq. (29), the loop integral of Eq. (48) generally includes high-momentum contributions k ≫ ω_p for which the low-energy description is not justified. The same 1-loop counter potential of Eq. (43) that corrects roughness corrections to the scattering matrix also removes the uncontrolled high-momentum contributions to the Casimir free energy and replaces them by the phenomenological plasmon contribution.
The correction of the Casimir free energy by this counter potential is shown diagrammatically in Fig. 4c, and the two Feynman diagrams of this counter term are depicted in Fig. 3. To order σ² the contribution to the Casimir free energy from the counter potential δṼ of Eq. (43) is given by Eq. (50). This correction to the Casimir free energy remains finite in the ideal metal limit when Eq. (41) is satisfied. The existence of this limit is assured by the consistency of the low-energy theory in the limit l_c ≫ 1/ω_p. Using Eq. (40), the counterterm correction of Eq. (50) for l_c ω_p ≫ 1 becomes Eq. (51) and vanishes when Eq. (41) is enforced. This should be expected of a model that is valid at low energies. Note that the reason magnetic and electric modes do not enter the counter-term correction symmetrically even at large correlation length is that we subtracted at k = 0: the factor κ²_ε/(εk² + κ²_ε) in Eq. (51) differs from unity at order k²/ω_p² only.
D. Contributions of Second Order in the Roughness Scattering Matrix
Both loop integrals of this contribution (represented in Fig. 4d) to the Casimir free energy are exponentially constrained to low momenta k, k′ ≲ 1/(2a) ≪ ω_p, a regime in which the low-energy description is expected to hold. We find the expression of Eq. (52). For profiles with large correlation length l_c ≫ 2a ≫ 1/ω_p, Eq. (52) simplifies to Eq. (53) when Eq. (40) holds. Although l_c ≫ a is a necessary condition for the PFA, the limiting expressions of Eqs. (49) and (51) evidently hold only when l_c is large compared to a and 1/ω_p. The latter restriction arises because the scattering matrix locally can be approximated by that of a flat surface only if the plasma length is shorter than the typical length scale of the surface structure.
For a rough profile with l_c ≫ max(1/ω_p, a), Eqs. (49), (51) and (53) should all be reasonable approximations. Including the seagull term of Eq. (45), the roughness correction to the Casimir free energy of Eq. (44) in the limit of large correlation length l_c ≫ max(1/ω_p, a) is

∆F^Cas_T(a) = (σ²/2) ∂²F_T(a)/∂a², (54)

where F_T(a) is the Casimir free energy for two flat parallel semi-infinite slabs at a separation a given by Eq. (12). This is precisely the roughness correction in PFA for a rough surface with ⟨h(x)⟩ = 0 and ⟨h²(x)⟩ = σ². Although trivial, one should note that the PFA here emerges in the limit of large l_c from requiring consistency of the low-energy effective field theory. It is due to the absence of high-momentum contributions in this limit and does not require any additional phenomenological input.
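The PFA statement follows from a one-line Taylor expansion of the flat-plate free energy in the local separation (shown here for completeness; this is the standard argument, not a quotation of the paper):

\langle F_T(a-h)\rangle
  = F_T(a) - \langle h\rangle\,\partial_a F_T(a)
    + \tfrac{1}{2}\,\langle h^2\rangle\,\partial_a^2 F_T(a) + \dots
  = F_T(a) + \tfrac{\sigma^2}{2}\,\partial_a^2 F_T(a) + \mathcal{O}(\sigma^3),

where ⟨h⟩ = 0 and ⟨h²⟩ = σ² were used.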
F. Ideal Metal Limit ε → ∞
It perhaps is remarkable that the requirement of Eq. (41) not only guarantees that the PFA is recovered in the l_c → ∞ limit but also ensures the existence of an ideal metal limit. If g² is analytic at ζ = 0 one can argue that ζ/ω_p and 1/(l_c ω_p) corrections are absent (see Eq. (58)) and that g² for large ω_p has the expansion g² = 1 + O(ζ²/ω_p²). The ideal metal limit in this case is uniquely given by Eq. (55). Note that the counter-term contribution of Eq. (55c) does not vanish and cancels the contribution from high k′-momenta in Eq. (55a) also for the ideal metal. High-momentum contributions to the roughness correction thus persist in the ideal metal limit. Without the counter term this perturbative correction would diverge for l_c → 0 (and for some correlations would diverge for all l_c). This apparently is at odds with exact calculations for square-wave profiles [16] and demands an explanation. The reason for convergence of the exact calculations in the limit l_c → 0 (and divergence of the unsubtracted perturbation theory) for such profiles is subtle and related to the fact that for l_c ≪ σ the leading term in the exact calculation is O(σ) and not O(σ²) as perturbation theory suggests [16]. The non-analytic dependence on σ for l_c → 0 arises due to an effective UV-cutoff of O(σ) in the exact calculation: there is no other scale to compare with in this limit. Ignoring this effective cutoff (as a perturbative expansion in σ does) leads to a UV-divergent expression in the limit l_c → 0. The non-analyticity of the exact result for σ/a ≪ 1 in the limit l_c → 0 is only possible if wave numbers of order 1/σ of the profile contribute significantly. The non-analyticity in σ in this sense implies that high momenta 1/a < k < 1/σ must dominate the exact Casimir energy calculation for an ideal metal in the limit 0 ≤ l_c < σ ∼ 0. A simple model that qualitatively reproduces this explanation of the non-analytic dependence on σ is obtained by replacing l_c → l_c + γσ in the Gaussian correlation function of Eq. (34), where the constant γ is of O(1). For l_c ≫ σ one recovers the quadratic perturbative dependence on σ in leading order, but for 0 ≤ l_c ≪ σ → 0 the k′-integral of Eq. (55a) is of order σ²/(l_c + γσ), which tends to σ/γ for l_c ≪ σ, as in the exact calculation. The UV-divergence ∝ 1/σ³ of the k′-integral that gives this leading (non-analytic) behavior is due to momenta k′ ∼ 1/σ ≫ 1/a. Although the exact evaluation of such high-momentum contributions is of itself correct, the low-energy description used to compute them is not justified. The fact that the plasmon contributes and the counter term of Eq. (55c) removes high-momentum contributions even for an ideal metal indirectly supports the assertion that roughness corrections of real materials in fact remain analytic in the variance σ² even in the limit of uncorrelated roughness.
G. The Limit of Uncorrelated Roughness and the Plasmon Coupling g²

The limit l_c ≪ 1/ω_p of strong roughness is obtained by examining the loop integrals in Eqs. (45), (48), and (50) at large momentum transfers q = |k′ − k|. In the limit of uncorrelated roughness, l_c → 0, the correction is given by Eq. (56). Note that the correction to the Casimir free energy for l_c = 0 is strictly negative when g² ≤ 1. The Casimir free energy of a rough interface thus is always larger in magnitude than that of a flat one at the same average separation. We believe this is the result of two opposing effects. The specular reflection off a rough surface with vanishing l_c but finite σ never is quite the same as that off a flat interface with the same bulk permittivity: the situation is analogous to the change in bulk permittivity due to the inclusion of sub-wavelength spheres of a different material. Since the included "material" in this case is vacuum with ε = 1, the effective reflection coefficient decreases compared to that for the flat plate. This effect by itself would tend to decrease the Casimir free energy in magnitude for l_c → 0. However, this decrease is more than compensated by the reduced separation to this effective interface. The ideal metal limit of Eq. (56) exists only for g² → 1 and is analytically given by Eq. (57). The ideal metal and l_c → 0 limits in fact commute, and g² → 1 is required for the ideal metal limit to exist. Assuming that g²(ζl_c, l_c ω_p) is analytic in both arguments, the existence of an ideal metal limit implies

1 = lim_{ω_p→∞, l_c ω_p = β} g²(ζl_c, l_c ω_p) = g²(0, β). (58)
We therefore have that g² = 1 at low frequencies for any value of l_c and ω_p. In the following we therefore consider only g² = 1.
V. THE EFFECTIVE LOW-ENERGY FIELD THEORY OF ELECTROMAGNETIC INTERACTIONS WITH ROUGH SURFACES.
Although we obtained a roughness correction that is compatible with the low-energy theory of Schwinger by a Green's function approach, it is instructive to construct the effective low-energy field theory from which these corrections derive. The effective field theory allows one in principle to explore other approximations and corrections. It also provides a general framework for systematically taking into account higher orders or for including other interactions. In this formulation the necessity of counter terms furthermore is readily apparent.
A. The Generating Functional of Roughness Correlations
The construction of the field theory is based on the generating functional of the roughness correlation functions rather than the roughness correlations themselves. This approach was already used in the scalar case [22]. The n-point roughness correlation functions for an interface of (large) area A with a particular profile h(x) are the averages of Eq. (60). The interface is assumed large enough for boundary effects to be negligible. Transverse translational invariance then implies that these correlations depend only on differences of the transverse coordinates. Isotropy of the roughness profile yields further restrictions: the n-point correlation functions in this case depend only on the distances between the points. We assume that the profile and therefore all n-point correlation functions of Eq. (60) can, at least in principle, be measured when the rough interface is far removed from the other. The mean separation a between the two interfaces is determined so that Eq. (4) holds, that is, the (constant) one-point function D₁ vanishes. We formally collect all roughness correlation functions of Eq. (60) for a particular profile h(x) in a single generating functional Z_h[α], Eqs. (61) and (62). In general Eq. (62) is only the leading quadratic term in a cumulant expansion of Z_h. A Gaussian generating functional relates all higher-order correlations to the 2-point function. In App. B we for instance determine signed correlation functions in terms of D₂ for such a model. Stochastic roughness is fully described by the covariance of the profile, and a Gaussian model by definition is exact in this case. A Gaussian model for the generating functional also suffices to obtain corrections to the free energy and the scattering matrix to leading order in the variance of the roughness profile.
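For orientation (our notation; Eqs. (61)-(62) themselves are not reproduced in this copy), a Gaussian generating functional with vanishing mean and covariance D₂ has the form

Z_h[\alpha] = \Big\langle \exp\Big(\int d^2x\,\alpha(x)\,h(x)\Big)\Big\rangle
            = \exp\Big(\tfrac{1}{2}\int d^2x\,d^2y\;\alpha(x)\,D_2(x-y)\,\alpha(y)\Big),

so that functional derivatives with respect to α(x) generate the n-point functions, and all cumulants beyond the second vanish.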
To order σ² the correlations of a periodic 1-dimensional profile h_ω(x) = σ sin(ωx) can be found using a Gaussian model, but the four-point correlation function in this case is only half of what the Gaussian model asserts, Eq. (65). To correctly obtain effects due to a periodic profile to order σ⁴ requires the inclusion of a 4th-order cumulant. Note that the 2-point correlation D^ω₂ in Eq. (64) of a periodically corrugated profile is not positive definite and has no probabilistic interpretation. However, in momentum space it is proportional to the sum of two δ-functions and therefore positive semi-definite.
The basis for a field-theoretic approach to roughness is that any analytic functional R[h] of the profile h(x) with translation-invariant coefficients can be evaluated using Z_h[α]. To show this, consider a typical monomial in the Taylor expansion of R[h] for small profiles, as in Eq. (66).
B. The Partition Function of the Low-Energy Effective Field Theory
In the presence of external sources of polarization P_n(x, z) = P(x, z; ζ_n), Schwinger's free energy for two parallel interfaces is given by Eq. (11). The partition function for a flat and a rough interface described by the profile h(x), corresponding to the potential V(ζ, h(x), z) of Eq. (19), therefore formally is given by Eq. (67), where V_n[h] is the functional derivative operator representing the interaction of the n-th Matsubara mode with the roughness profile h(x).
C. Counter Terms of the Low-Energy Effective Field Theory
The counter potential of Eq. (43) corresponds to a functional derivative operator of the analogous form, with coefficients c₀, c₁(a, T) and c₂(x − y) that are fixed by renormalization conditions.
This counter term to the free energy therefore does not affect thermodynamic state functions like the entropy or pressure. It cancels loop contributions to the energy (at T → 0) when the flat interface is removed (a → ∞). The Casimir free energy remains untouched (its finite, a-dependent value at T = 0 is the Casimir energy).
In obtaining the Casimir free energy by the Green's function method, the contribution to the free energy from the counter-term coefficient c₂(x − y) was implicitly taken into account by subtracting ∆F_T[h, ∞] in Eq. (21). Requiring the absence of one-loop corrections to the 2-point roughness correlation at large separation a and temperature T = 0 determines c₂(q). The Feynman diagrams involved in this condition are shown in Fig. 5. The counter term c₂ also ensures that there is no single-interface correction to the Casimir energy at T = 0. For T > 0 a finite, a-independent contribution to the single-interface free energy remains that we have not calculated here.
The Green's function approach implicitly also accounted for contributions of c₁(a, T) by simply assuming that Eq. (4) holds to order σ². c₁(a, T) cancels tadpole contributions to the scattering matrix (see Fig. 6), and 1-particle reducible contributions to the Casimir free energy like those of Fig. 7 vanish in this case.
We defined the mean separation a by Eq. (4), and demanding that corrections to ⟨h_±(x)⟩ vanish determines c₁(a, T) to one loop.

FIG. 7. 1-particle reducible dumbbell contributions to the free energy that are cancelled by the c₁ counter term given in Eq. (72). 1-particle reducible contributions to the free energy are of order 1/T at low temperatures and would violate Nernst's theorem.

The diagrammatic form of this condition is shown in Fig. 6 and evaluates to Eq. (72), where c₁(∞, T) is the (infinite) one-interface contribution that does not depend on the separation a. The interpretation of Eq. (72) is straightforward and could have been anticipated: for ⟨h⟩ ≠ 0, the separation a is redefined at one loop. To leading order in ⟨h⟩, the c₁ counterterm arises from the free energy of two parallel flat interfaces at separation a_B, where a = a_B + ⟨h⟩ is the separation at which Eq. (4) holds.
The a-independent but temperature-dependent contribution from c₁(∞, T) similarly is the difference in free energy due to a shift of a flat interface by −⟨h⟩. The bulk contribution to the free energy density thereby increases by the difference of F^γ_T[ε]/V and its vacuum value, where F^γ_T[ε]/V is the free energy density of a photon gas in a homogeneous medium with permittivity ε(ζ). The difference in free energy density in the dielectric and in vacuum depends on the permittivity ε(ζ). For the plasma model with ε(ζ) = 1 + (ω_p/ζ)², this separation-independent contribution to the free energy is given by Eq. (75), where the modified Bessel function K₂(x) is normalized so that K₂(x ∼ 0) ∼ 2/x². The generally infinite constant c₁(∞, 0) depends neither on temperature nor on the separation a. It is sensitive to the behavior of ε(ζ) at energies ζ ≳ ω_p. Estimating this contribution to the free energy in the framework of the low-energy effective theory is meaningless, since the loop integral is dominated by momenta and energies k, ζ ≳ ω_p. For the sake of completeness, this formal contribution with a proper-time cutoff β is given in Eq. (76). It is a quadratically and logarithmically UV-divergent constant contribution to the total energy of the system. It may be absorbed in the counter term c₀ and, in the absence of gravitational interactions, has no physical implications.
D. The Complete Low-Energy Effective Field Theory
Since the Green's function G of parallel interfaces as well as the counter terms are invariant under transverse translations, the partition function Z_T(P = 0, h) defined in Eq. (67) for vanishing polarization sources is a functional of the roughness profile h(x) with translation-invariant coefficients. We thus can use Eq. (66) to evaluate it using the correlation functions of the profile h(x) rather than the profile itself. We therefore have Eq. (77), with Z_α[h] defined by Eq. (61). The complete generating functional of the Gaussian model we are considering thus is given by Eq. (78). Instead of employing the Green's function approach, one can derive the loop corrections to the free energy from Eq. (78). The Casimir free energy to one loop is the same in both approaches. However, the generating functional Eq. (78) of the low-energy effective theory has conceptual and methodical advantages: once the set of counter terms is determined, the field theory yields consistent low-energy results not just for the Casimir energy, but for the scattering matrix as well. No ad-hoc arguments and procedures are required to cancel uncontrolled high-energy loop corrections, and the necessity of the counter terms and their interpretation is readily apparent.
VI. NUMERICAL INVESTIGATIONS
We numerically investigated the correction ∆F^Cas_T(a) to the Casimir free energy given in Eq. (44) due to the roughness of an interface. To order σ² this correction is linear in the roughness correlation function, and one may define [17] a response function R_T(q, a) as in Eq. (79).

A. The Response with and without Counter Term

Fig. 8 gives the normalized response when the counter term of Eq. (50) is omitted, as a function of the dimensionless variable q/ω_p. The low-energy theory is valid in the shaded momentum region q/ω_p < 1. Note the linear rise of the low-energy response function for all separations a in the uncontrolled region q/ω_p ≳ 1. The integration weight qD(q) for Gaussian and exponential roughness correlation with a typical correlation length l_c ∼ 1/ω_p is superimposed. A sizable contribution to the roughness correction in Eq. (79) evidently is due to loop momenta q > ω_p for which low-energy expressions are unreliable.
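Assuming the isotropic reading ∆F^Cas_T(a) = (2π)⁻¹ ∫₀^∞ dq q D(q) R_T(q, a) of Eq. (79), assembling the correction from a tabulated response is a one-line quadrature. The response below is a toy stand-in with the qualitative renormalized behavior, not the loop-integral result of the text:

# Toy evaluation of a correction linear in the roughness correlation.
import numpy as np
from scipy.integrate import quad

def D_gauss(q, sigma, lc):
    # 2-D spectrum of a Gaussian correlation sigma^2 * exp(-r^2/lc^2)
    return np.pi * sigma**2 * lc**2 * np.exp(-(q * lc) ** 2 / 4.0)

def R_toy(q, a):
    # finite at q = 0, approaching a smaller constant at large momenta
    return -(1.0 + 2.0 * np.exp(-q * a)) / a**4

def delta_F(a, sigma, lc):
    val, _ = quad(lambda q: q * D_gauss(q, sigma, lc) * R_toy(q, a),
                  0.0, 50.0 / lc, limit=200)
    return val / (2.0 * np.pi)

print(delta_F(a=10.0, sigma=0.5, lc=2.0))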
Fig. 9 shows the response functions with and without the counterterm contribution of Eq. (50). With the same model for the bulk permittivity of gold, the response function shown in Fig. 3 of Ref. [17] is reproduced when the counter potential is omitted. Inclusion of the counter potential gives a constant high-momentum response, and the correction to the Casimir (free) energy is of order σ². Note that with g² = 1 the response at q = 0 does not change.
The correction to the Casimir energy at T = 0 for Gaussian roughness with and without inclusion of the counter term of Eq. (50) is shown in Fig. 10. Whereas the PFA limit l_c → ∞ coincides for both cases, the behavior is remarkably different at finite l_c. Including the counter term of Eq. (50), the roughness correction to the Casimir energy decreases in magnitude with decreasing correlation length and approaches a finite (uncorrelated) limit for l_c → 0. Roughness increases the Casimir force, but the PFA is an upper bound in this case. The ratio of the roughness correction to the PFA furthermore approaches a constant, l_c-dependent value with increasing separation, rather than increasing indefinitely as in the unsubtracted case (for exponential roughness, the roughness correction without the counter term of Eq. (50) would diverge at any separation and for all l_c). Let us also note that for l_c ≲ 1/ω_p the roughness correction at large separations is less than 50% of the PFA prediction. Although we here are considering only perturbative roughness corrections, the suppression at large separations for l_c ∼ 1/ω_p is of a similar magnitude as that observed [25] for machined profiles with correlation length l_c ∼ 1/ω_p.

FIG. 8. The dimensionless normalized response ρ(q, a) = R_T(q, a)/R_T(0, a) without counter potential (δV_h = 0) for the permittivity ε(ζ) = 1 + (ω_p/ζ)², to leading order in σ² at T = 0. The dependence on q/ω_p of this ratio of the roughness response function R_T(q, a) (defined by Eq. (79)) is shown for aω_p = 2.31 (− −), 9.24 (· · · · ·) and 18.48 (−−− −). For the plasma frequency ω_p = ω_p(Au) ∼ 0.046 nm⁻¹, this normalized response without counter potential is identical with that obtained by Ref. [17]. [For ω_p = 0.046 nm⁻¹ the curves here correspond to those of Fig. 4 in Ref. [17] at separations a = 50, 100, and 200 nm.] Note the change in behavior and subsequent linear rise in the region q/ω_p ≳ 1. The region q/ω_p ≲ 1 where the effective low-energy theory is valid is shaded light green. We superimpose typical integration densities for the response function in Eq. (79): the momentum-space function qD(q) for Gaussian and exponential 2-point roughness correlation with l_c = 1/ω_p. The roughness correction to the Casimir energy with exponential correlation diverges logarithmically, and even for Gaussian roughness correlation the (unshaded) region q/ω_p > 1 contributes significantly in this uncorrected case. Note that for a gold surface the correlation length here is l_c = 1/ω_p(Au) ∼ 21 nm.
B. (In)sensitivity to High-Momentum Components of the Roughness Correlation
The counter potential δV_h was introduced to correct for uncontrolled high-momentum contributions to loop integrals with the help of phenomenological input. We therefore investigated numerically the sensitivity of the roughness correction to the correlation function D(q). Fig. 11 shows the ratio of the correction for Gaussian and for exponential roughness of the same correlation length l_c. The two are identical for l_c = 0 and l_c ∼ ∞ (PFA) at any aω_p. The (dimensionless) ratio of these corrections never drops below 85% for any separation aω_p and correlation length l_c ω_p. Without counter potential this ratio is infinite. Exponential roughness always gives a smaller correction than Gaussian roughness of the same correlation length and variance. The two correlation functions provide rather similar descriptions of low-energy scattering, and the low-energy effective theory with counter potential depends only weakly on their (very different) behavior at high momenta.

FIG. 9. The response functions with and without the counter term; without it they correspond to those of Fig. 3 in Ref. [17] at separations of a = 50, 100, and 200 nm. Note that the renormalized roughness response is monotonically decreasing and approaches a constant at large momenta that is a factor of 2-3 smaller than the response at q = 0. Most of the correction to the Casimir energy in this case arises from the shaded integration region q/ω_p < 1 where the low-energy description is valid.
C. Comparison with Experiment
The low-energy theory for electromagnetic interactions with rough surfaces ultimately must be compared to experiment. Unfortunately only very few studies are dedicated to the systematic investigation of Casimir forces between rough surfaces. Many employ non-isotropic machined surfaces with rather large σ/a ratios [24,25] that are not accessible perturbatively. Nevertheless, these experiments qualitatively contradict the prediction of exact calculations that essentially any kind of roughness tends to increase the Casimir force above the PFA estimate. A notable exception is a series of investigations of isotropically rough surfaces by Palasantzas et al. [11,12]. For sufficiently rough surfaces, this group does observe (see Fig. 3 of Ref. [12]) an increase of the Casimir force by 200-400% at small separations. This sharp increase in the force was attributed to particularly high islands of the surface profile that can also be seen in some of the AFM scans of the gold surfaces. The pronounced effect of such islands is beyond the scope of a perturbative analysis and was explained by a semi-empirical approach [13] based on the PFA.
However, gold films with 100 nm and 200 nm thickness of relatively low roughness appear to be almost free of such buildup effects. At small separations the force in these cases is smaller than the PFA prediction. In Fig. 12 we compare the low-energy theory to the measurements of Ref. [12] on these thin films. The experiments measure the force between a gold-coated sphere and a gold-coated plate. Both surfaces are rough, but their profiles are uncorrelated. For two parallel rough gold-coated plates, the correction to the Casimir energy to leading order in σ/a is that for a single rough plate whose roughness correlation function is the sum of the roughness correlation functions of the sphere and the flat plate. We use Derjaguin's PFA approximation [5] to correct for the curvature of the sphere of radius R = 100 µm ≫ a. The force f_T(a) at temperature T between the sphere and a plate with (closest) separation a in this approximation is

f_T(a) = 2πR F_T^Cas[a]/A,

where F_T^Cas[a]/A is the Casimir free energy per unit area (not the pressure) of two parallel rough plates. Due to the large radius of the sphere, this is an excellent approximation for separations a < 200 nm ∼ R/500. Fig. 12a gives the ratio ρ(a) of this force to the Casimir energy per unit area F_T[a]/A of two flat parallel gold plates with separation a, at T = 0. The experimental Casimir force for the rough sphere and plate at separations σ < a < l_c is up to 30% greater than the corresponding flat-plate value.

[Displaced figure caption: the l_c → 0 curve (red) is a lower bound that exists only in the renormalized case. The counter term vanishes in the PFA limit l_c → ∞ (orange), and this limit is the same for both. Whereas the PFA is an upper bound for the magnitude of the roughness correction when the counter potential is included, it is a lower bound without. The ratio of the roughness correction to the PFA at finite l_c approaches a finite value at large separations when the counter term is included, whereas it otherwise increases indefinitely. The roughness correction in the subtracted case at large separations is less than 50% of the PFA prediction when l_c ≲ 1/ω_p. Except for l_c = 0, the roughness correction approaches the PFA estimate at sufficiently small separation, but it quickly decreases and approaches the lower bound for l_cω_p < 1.]
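As a numerical illustration of the PFA relation quoted above, the following sketch (Python; an illustration only, not the analysis code of this paper) evaluates f_T(a) = 2πR F[a]/A using the ideal-metal zero-temperature Casimir energy per unit area as a stand-in for the rough-plate free energy; the gold permittivity and roughness corrections of the full theory are deliberately omitted.

import numpy as np

HBAR_C = 3.1615e-26   # hbar*c in J*m
R = 100e-6            # sphere radius in m (R >> a)

def casimir_energy_per_area(a):
    """Ideal-metal Casimir energy per unit area at T = 0: E(a) = -pi^2 hbar c / (720 a^3)."""
    return -np.pi**2 * HBAR_C / (720.0 * a**3)

def pfa_sphere_plate_force(a, R=R):
    """Derjaguin PFA: f(a) = 2 pi R * E(a)/A for a sphere of radius R >> a."""
    return 2.0 * np.pi * R * casimir_energy_per_area(a)

for a in (50e-9, 100e-9, 200e-9):
    print(f"a = {a*1e9:5.0f} nm  ->  |f| = {abs(pfa_sphere_plate_force(a))*1e12:8.1f} pN")

For a = 100 nm this gives a force of roughly 270 pN, the order of magnitude probed in the sphere-plate experiments discussed here.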
Since we do not differentiate between contributions from high and low peaks of the roughness profile and only use a single correlation function, all standard deviations of Ref. [12] were multiplied by a factor of 1.7. We used σ_Sph = 8 nm, σ_100 = 2.6 nm and σ_200 = 4.3 nm for the coatings of the sphere, the 100 nm and the 200 nm thick films, respectively. These standard deviations also approximately correspond to those estimated from the AFM scans of these surfaces (see Fig. 1 in Ref. [12]). The correlation lengths l_c^Sph = 33 nm, l_c^100 = 21 nm and l_c^200 = 25 nm are those of Ref. [12]. The ratio ρ(a) for the 200 nm thick film is well reproduced by the low-energy theory with exponential as well as with Gaussian correlations. We only show the result for exponential roughness in Fig. 12, but the fit for Gaussian roughness is of similar quality. For comparison we show the roughness correction in PFA for the same standard deviations.
The ratio ρ(a) is close to unity at larger separations 100 nm < a < 150 nm, where roughness corrections are relatively small. While this on average is approximately observed for the 200 nm film, the ratio for the 100 nm film is systematically about 6% above unity at larger distances. To correct for this (unexplained) discrepancy we multiplied the force observed on the 100 nm thick film by 0.94 before comparing with theory.

[Displaced figure caption: in the PFA (l_c → ∞) and uncorrelated (l_c → 0) limits the corrections coincide but differ by up to 15% at some separations. For the same variance σ² and correlation length l_c, the roughness correction with exponential correlation is always smaller than with Gaussian correlation. Note that the two types of roughness correlation approach the PFA quite differently: at large separations the corrections still differ by over 5% even for l_cω_p ∼ 100.]
From a practical point of view, the comparison in Fig. 12b with the Casimir energy of two parallel flat plates at a slightly smaller separation a_eff = a − δa perhaps is more useful. The Drude-model permittivity describing reflection off these effective flat plates in Ref. [12] was obtained from ellipsometric measurements on the rough surfaces. We merely adjusted δa for the best fit. Fig. 12b shows that effective flat surfaces at a reduced separation a − δa reproduce the low-roughness data remarkably well. [The force data of the 100 nm film was multiplied by the same correction factor of 0.94 as in the graph of Fig. 12a.] Since ellipsometric measurements on thin films are quite standard, this observation essentially reduces low-roughness corrections to Casimir energies to a determination of the optimal shift δa. Instead of measuring the absolute average distance between the profiles of two rough surfaces (in itself a delicate procedure that involves a number of corrections), we suggest that precision Casimir studies with low-roughness surfaces simply determine an effective separation for flat plates with the measured (perpendicular) reflection coefficients. Fig. 12b is evidence that the data at small separations robustly determines this distance to better than 1 nm, at the same time all but eliminating the need for roughness corrections.
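Determining the optimal shift δa as suggested here is a one-parameter least-squares problem. A minimal sketch follows (Python; a_meas and f_meas are hypothetical measured arrays, and the ideal-metal PFA stands in for the flat-plate model, which in practice would use the ellipsometrically measured reflection coefficients).

import numpy as np
from scipy.optimize import minimize_scalar

def flat_plate_force(a, R=100e-6, hbar_c=3.1615e-26):
    # placeholder flat-plate model (ideal-metal PFA), for illustration only
    return 2 * np.pi * R * (-np.pi**2 * hbar_c / (720 * a**3))

# hypothetical measurement: data generated with a true shift of 3 nm plus 1% noise
rng = np.random.default_rng(0)
a_meas = np.linspace(30e-9, 150e-9, 40)
f_meas = flat_plate_force(a_meas - 3e-9) * (1 + 0.01 * rng.standard_normal(a_meas.size))

def misfit(delta_a):
    resid = (f_meas - flat_plate_force(a_meas - delta_a)) / f_meas
    return np.sum(resid**2)

best = minimize_scalar(misfit, bounds=(0.0, 10e-9), method='bounded')
print(f"optimal shift delta_a = {best.x*1e9:.2f} nm")

With data of this quality the fit recovers the assumed 3 nm shift to well within 1 nm, consistent with the robustness claimed above.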
VII. CONCLUSION
We obtained roughness corrections to low-energy scattering and the Casimir free energy in the framework of Schwinger's effective theory of low-energy electrodynamics. The energy scale in this theory is the plasma frequency ω_p ∼ 0.046 nm⁻¹ ∼ 9 eV of typical materials like gold. We found that roughness corrections generally include large contributions from high-momentum excitations. Evaluating them in the low-energy framework is inconsistent and notoriously unreliable. We emphasize that this is not a limitation of the perturbative approach developed here: exact (numerical) solutions of a model can also only be as accurate as the model itself. The Casimir energy of short-wavelength periodic rectangular profiles, for instance, involves momenta at which a description in terms of the bulk permittivity of the material breaks down, and the mathematically exact analysis of such a model can lead to physically erroneous conclusions. Using the bulk permittivity to describe scattering off profile structures with sizes of the order of the inverse plasma frequency or smaller (about 25 nm for gold) is not justified. Effects due to roughness on the scale of the plasma frequency generally are grossly overestimated by the uncorrected low-energy theory. This has been experimentally verified for machined profiles with a period λ ≲ 2π/ω_p: the exact calculations [38,39] for such profiles tend to overestimate the observed [25] Casimir force by factors of 2-3.

[Displaced caption of Fig. 12: ratio, Eq. (82), of the Casimir force between a rough gold-coated sphere and a rough gold-coated plate to the Casimir energy between ideal flat plates; experimental data from Ref. [12]; gold coating on the flat plate 100 nm (upper graphs) and 200 nm (lower graphs) thick; exponential roughness correlation and a Drude parametrization of the permittivity (ω_p = 9 eV, γ = 0.045 eV); sphere profile σ_Sph ∼ 8 nm, l_c^Sph ∼ 33 nm. (a) Ratio of the force on the rough plate to the Casimir force between a gold-coated flat plate and a smooth sphere at the same mean separation; (red) dots are the experimental ratio, with the force on the 100 nm thick plate multiplied by the correction factor 0.94 (see text); the solid (blue) line is the best theoretical fit with the indicated parameters for the roughness correlation function of the plate in Eq. (80); the ∼30% enhancement at separations a ∼ 20 nm is well reproduced for both films; the dashed line gives the PFA for roughness of the same total variance. (b) Ratio of the force on the rough plate to that between a smooth sphere and a flat plate at the separation a − δa; the effective ω_p^eff of the flat plate was obtained from ellipsometric measurements [12] on the rough surfaces, and the same effective plasma frequency ω_p^Sph,eff = 7.5 eV was assumed for the sphere as for the (similarly rough) 200 nm film; this ratio is close to unity for all separations.]
We presented a perturbative analysis of roughness corrections based on a low-energy effective field theory that employs counter-terms to correct for uncontrolled high-momentum contributions. The counter term subtracts high-momentum contributions to loop integrals at the cost of phenomenological input. Apart from correlations of the roughness profile itself, we in addition modeled the averaged single-interface scattering matrix at vanishing transverse momentum by the plasmon contribution. To leading order in the roughness variance σ², this semi-empirical ansatz depends on a single coupling constant g². Consistency of the low-energy theory and the existence of an ideal-metal limit at any correlation length constrain this dimensionless coupling to g² = 1 at low energies (see Eq. (59)). The resulting low-energy theory is free of high-momentum contributions to one-loop integrals, approaches the PFA for l_c → ∞ and has a finite ideal-metal limit for any l_c. It is relatively insensitive to the high-momentum behavior of the roughness correlation function and has a drastically different but more transparent dependence on l_c than the uncorrected model. Instead of large (infinite) differences, roughness correlation functions that differ only at high momenta now give similar low-energy predictions. Roughness of shorter correlation length no longer increases the Casimir force (indefinitely). Instead, the magnitude of the force decreases with decreasing correlation length and approaches a finite lower bound for uncorrelated roughness.
Although the coupling g² in the plasmon contribution to the counter-term potential of Eq. (43) was constrained to g² = 1 by self-consistency and the existence of certain limits of the effective low-energy theory, this remains a model for the roughness contribution to the average scattering matrix at low transverse momenta. It may be phenomenologically preferable to parameterize empirical data for this component of the scattering matrix instead. However, there is some evidence that the plasmon describes low-energy scattering due to roughness reasonably well. In this sense it is a reasonable model for the leading roughness correction that is relatively simple and consistent with the low-energy theory.
Interestingly, the PFA is accurate at small separations only for l_c ≳ 1/ω_p, and at large separations it may overestimate the correction to the force by up to 250% (see Fig. 10). For l_c ≲ 1/ω_p the roughness correction to the Casimir energy is significantly (a factor of ∼ 1/2-1/3) below the PFA prediction at all but the smallest separations. The ratio remains approximately constant for a → ∞ and does not increase with increasing separation as in the uncorrected model. Although we considered only isotropic roughness profiles, it perhaps is interesting that the reduction of the correction compared to the PFA prediction by a factor of 2 for l_c ∼ 1/ω_p is of the same order of magnitude as the experimental reduction in the overall force observed [25] in experiments with corrugated rectangular wave profiles.
The Casimir energy of low-roughness profiles was found to be essentially that of flat plates with the measured reflection coefficients at a distance that is slightly smaller than the mean separation of the interfaces. The change in separation is less than the standard deviation of the rough profile. Although the precise value of this shift depends on properties of the profile, this observation enables one to empirically correct for (low-level) roughness and accurately calibrate the effective separation in the plate-sphere geometry.
For conceptual reasons we here derived all expressions for the Casimir free energy at finite temperature, but only investigated implications of this theory at T = 0. We intend to extend the numerical investigations to finite temperature in the future. Although the roughness correction at finite temperature is not expected to change at small separations, the regime 1 < a/l_c < aT, where temperature and roughness corrections are of similar importance, could be of some interest. At this point we only wish to observe that the summands in all expressions at finite temperature are finite when ζ → 0 for any reasonable permittivity function (Drude or plasma model). Predictions of this low-energy effective field theory at temperatures 2πT > ω_p ∼ 2 × 10⁴ K nevertheless would be meaningless.
We divide the reduced Green's functions into g_i for a single flat plate and its correction. | 2014-02-17T22:43:16.000Z | 2014-02-11T00:00:00.000 | {
"year": 2014,
"sha1": "ba3eb32f91e73dcc51f60f9ef6ba80b1250ec379",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1402.2527",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "ba3eb32f91e73dcc51f60f9ef6ba80b1250ec379",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
129252213 | pes2o/s2orc | v3-fos-license | TRAJECTORY CONTROL OF A TWO-WHEELED ROBOT
Robots have been used in many applications in the past three decades. One type of robot is a two-wheeled robot that requires control for both balancing and maneuvering. This thesis shows the design of a controller that can both balance and provide trajectory control for a two-wheeled robot. The controller is a digital tracking system that utilizes pole placement for system stability. The thesis provides a detailed description of modeling the plant, designing the controller, simulating the final system and implementing it in hardware. Three pole placement methods are analyzed, as well as tracking designs that use steady-state or ramp tracking. Each of these controllers has simulation results, and the controller with the best simulation performance is implemented in the hardware. The final controller is implemented in LEGO® Mindstorms® hardware using Matlab® and Simulink®. The results of the simulations are compared to the results found in hardware.
The ability to control robots has been in the forefront of control theory in the past three decades. Robots have been used in many applications. They can be used to make everyday living for the handicapped easier while allowing tasks to be performed more efficiently in industry. These robots are becoming more sophisticated to where they can autonomously perform tasks.
The control of each type of robot becomes its own engineering problem.
Robots can move around using wheels, tracks or belts. They may even stay in place and perform tasks along an assembly line. The actuators can be electromechanical or use hydraulics. All these differences result in different kinematics and dynamics of the robot.
There are many examples of controlling an inverted pendulum. Seul Jung and Sung Su Kim [1] use a neural network to provide tracking control for a single-input-multiple-output (SIMO) two-wheeled robot. The cart is controlled forward and back, tracking a sinusoidal motion. Vaccaro [2] uses digital state feedback tracking to control a SIMO inverted pendulum that uses a cart coupled to a drive screw. In work introduced by S.W. Nawawi et al. [3], a sliding mode controller was used to balance a two-wheeled robot; however, it does not indicate that the controller can be used to control the robot's trajectory.
Problem Identification
The two-wheeled robot is an inverted pendulum that can also move about in a two dimensional plane. The robot contains sensors that provide feedback for balancing and trajectory tracking. Pole placement will be used in the design of the controller. Therefore, the two-wheeled robot will need to be modeled. Since pole placement is used to design the controller the method of placement needs to be scrutinized for both stability and performance. Consideration will also need to be taken when implementing the new controller in real hardware.
Contributions of this Thesis
The two-wheeled robot has some challenges that are not seen in the inverted pendulum problem. In the inverted pendulum problem found in the book by Vaccaro [2], the pendulum is mounted to a cart. The cart is directly coupled to the drive motor, while the pendulum moves freely on a mounted encoder. The two-wheeled robot has the inverted pendulum directly coupled to the drive motor.
Also, the inverted pendulum rotates on a single shaft. The two-wheeled robot does not rotate on one shaft, but uses two shafts, one for each wheel. This problem was examined by Yamamoto [4] by controlling the average of the two wheels.
Yamamoto was successful in creating a closed-loop control for balancing; however, the controller did not provide closed-loop control for trajectory tracking. Rather than control the average of the wheels (a single control input), the controller in this thesis will send independent coordinated signals to each motor (two control inputs).
The design of this system will require the tools of multivariable control theory.
Tests with the hardware robot will demonstrate the applicability of multivariable control theory to a real-world system. Modifications may have to be made for the theoretical results to be implemented in hardware.
Overview of this Thesis
The research for this master's thesis will require a scientific approach. The dynamics of the system need to be carefully scrutinized, as does the microcontroller architecture, which can introduce delays [5]. A plant model exists [4] and some system parameters will be found directly from measurements of the hardware system. The final plant will be used to design a controller based on digital tracking system theory. The design will include a comparison of multiple methods used to calculate a feedback matrix for the desired closed-loop pole locations, and a stability analysis to determine robustness. Simulations will be used to observe the response of the system. The controller that provides the best stability margins and performance will be loaded into hardware for testing. Finally, collected data from the hardware and simulations will be compared.

The two-wheeled robot model used in this thesis was derived by Yamamoto [4]. The state-space equation for the two-wheeled robot is based on the coordinate system in Figure 2.2. The equations that define the motion of the two-wheeled robot in the coordinate system are shown in Equations (2.1) through (2.7) and
have the initial heading along the positive x-axis at t = 0. These equations show that the final state-space equation only requires the variables θ, Ψ, and φ.
State-Space Equations
The state-space equation is derived from the translational kinetic energy, rotational kinetic energy, potential energy, DC motor torque and the DC motor viscous friction. Equation (2.8) is the motion equation for the two-wheeled robot for the variables θ and Ψ.
The matrices E,F,G and H are defined in Equation (2.9) and the constants are defined in Equation (2.10).
Equation (2.11) is the motion equation for the two-wheeled robot for the variable φ. The constants are defined in Equation (2.12).
The state vector is defined by x and the input vector is defined by u in Equation (2.14). The matrices A and B are defined in Equation (2.15) along with the definitions of the constants.
Experimental Parameters
The model that was provided in reference [1] had all the required parameters defined. However, it was determined that some of the parameters could be more accurate if they were extracted experimentally. The distance to the center of mass L and the friction coefficient between the body and DC motor f m were two of these parameters. L was chosen since the current model assumed that the body was rectangular. This is clearly not true since the shape of the body is not rectangular. f m was chosen because the friction can easily change from one DC motor to another.
The first parameter that was extracted was L. This was accomplished by using the natural frequency of the freely hanging robot. By suspending the robot upside down by the axle and not rotating the wheels, Equation (2.8) can be simplified.
Since the wheels are not rotating, the angle θ and Ψ are equal and u is equal to 0. This reduces Equation (2.8) to the state Equation (2.16).
The following equation can be used to characterize an inverted pendulum by its natural frequency ω_p, as stated in Vaccaro [2].
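Since the exact relation is not reproduced here, the following sketch assumes the standard compound-pendulum relation ω_p² = MgL/J_axle for small oscillations of the hanging robot about the axle; the values of M and J_axle are placeholders, not the thesis's measured parameters.

import numpy as np

def natural_frequency(t, theta):
    """Estimate w_p [rad/s] from the zero crossings of a recorded free oscillation."""
    crossings = np.where(np.diff(np.sign(theta)) != 0)[0]
    period = 2.0 * np.mean(np.diff(t[crossings]))   # two zero crossings per period
    return 2.0 * np.pi / period

# placeholder parameters -- substitute the robot's measured mass and inertia
M = 0.6          # body mass, kg (assumed)
J_axle = 4e-3    # moment of inertia about the axle, kg m^2 (assumed)
g = 9.81

w_p = 7.0        # rad/s, e.g. from natural_frequency(t, theta) on logged data
L = w_p**2 * J_axle / (M * g)   # from the assumed relation w_p^2 = M g L / J_axle
print(f"estimated L = {L*100:.1f} cm")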
Since a speed controller is being used for the experiment, ẋ₃ in Equations (2.20) and (2.21) drops out once the two wheels have reached a steady-state speed. Also, setting v_l and v_r to u results in Equation (2.22).
The ratio of the measured output to the measured input is defined as α in Equation (2.23). Using Equation (2.23) and substituting the constants C4 and C6 from Equation (2.10), the value of f_m can be determined.
CHAPTER 3 Digital Tracking System Theory
The two-wheeled robot must balance while maneuvering. Balancing the robot is the well-known inverted pendulum problem. However, maneuvering the robot with closed-loop control is not an easy task. The robot as described in Equation (2.15) has six state variables, which all contribute to the stability of both maneuvering and balancing of the robot. The task also has an added level of difficulty due to the fact that there are two control actuators.
The continuous-time plant is discretized with a zero-order hold (ZOH), giving the discrete-time model

x[(k+1)T] = Φ x[kT] + Γ u[kT],

where T is the sampling interval and k is the discrete-time index.
The digital control toolbox contains the function zohe that will convert the continuous state-space matrices A and B to Φ and Γ. The function requires A, B and T as inputs. It uses a single matrix exponential, shown in Equation (3.2), that can simultaneously calculate the matrices Φ and Γ. The method is credited to C. F. Van Loan [2].
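The single-matrix-exponential construction can be reproduced in a few lines; the sketch below (Python with scipy, not the thesis's Matlab toolbox) builds the augmented matrix and extracts Φ and Γ. The equivalent library call scipy.signal.cont2discrete((A, B, C, D), T, method='zoh') gives the same result.

import numpy as np
from scipy.linalg import expm

def zoh_discretize(A, B, T):
    """Zero-order-hold discretization via one matrix exponential (Van Loan):
    expm([[A, B], [0, 0]] * T) = [[Phi, Gamma], [0, I]]."""
    n, p = B.shape
    M = np.zeros((n + p, n + p))
    M[:n, :n] = A
    M[:n, n:] = B
    E = expm(M * T)
    return E[:n, :n], E[:n, n:]   # Phi, Gamma

# example: double integrator sampled at T = 10 ms
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Phi, Gamma = zoh_discretize(A, B, T=0.01)
print(Phi)    # [[1, 0.01], [0, 1]]
print(Gamma)  # [[5e-05], [0.01]]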
Controllability
The robot must be determined to be controllable prior to the design of the controller. A discrete-time system is controllable if the controllability matrix, W_c = [Γ, ΦΓ, ..., Φ^(n-1)Γ], has a determinant that is not equal to zero or has rank equal to n, where n is the number of states. In a single-input-single-output (SISO) system, Φ is an n × n matrix and Γ is an n × 1 matrix. This means that W_c is an n × n square matrix and the determinant can be calculated. The two-wheeled robot is a multiple-input-multiple-output (MIMO) system; therefore, Γ is an n × p matrix, where p is defined as the number of inputs. In this case controllability of the ZOH model can only be determined by the rank, not the determinant, of the matrix W_c in Equation (3.3). The ZOH model must also be controllable.
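The rank test is a few lines once Φ and Γ are known; a minimal sketch:

import numpy as np

def controllability_matrix(Phi, Gamma):
    """W_c = [Gamma, Phi*Gamma, ..., Phi^(n-1)*Gamma]."""
    n = Phi.shape[0]
    blocks, block = [], Gamma
    for _ in range(n):
        blocks.append(block)
        block = Phi @ block
    return np.hstack(blocks)

def is_controllable(Phi, Gamma):
    Wc = controllability_matrix(Phi, Gamma)
    return np.linalg.matrix_rank(Wc) == Phi.shape[0]

For the two-wheeled robot, n = 6 and p = 2, so W_c is 6 × 12 and only the rank test applies.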
Tracking System
The control of the two-wheeled robot is accomplished by using a digital tracking system as shown in Figure 3.1.
Additional Dynamics
Additional dynamics are required in order to have a tracking system. In a digital tracking system the additional dynamics are implemented as a state-space equation, as seen in Figure 3.1. The inputs to the additional dynamics are the difference between the measured outputs of the plant and the commanded reference.
The outputs of the additional dynamics are based on the feedback matrix L 2 .
The additional dynamics drive the system to zero steady-state error by adjusting the output. This can be implemented to remove steady-state error only (single integrator) or remove error to a ramp (double integrator).
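A minimal sketch of both variants as discrete state-space blocks follows (Python; one block per tracked output, with r the reference, y the measured output, and a hold flag implementing the disable logic discussed below).

import numpy as np

def integrator_dynamics(ramp=False):
    """Additional dynamics per tracked output: single integrator for
    steady-state tracking, double (chained) integrator for ramp tracking."""
    if not ramp:
        Phi_a = np.array([[1.0]])
        Gamma_a = np.array([[1.0]])
    else:
        Phi_a = np.array([[1.0, 0.0],
                          [1.0, 1.0]])   # two chained integrators
        Gamma_a = np.array([[1.0],
                            [0.0]])
    return Phi_a, Gamma_a

def step(x_a, Phi_a, Gamma_a, r, y, hold=False):
    """x_a[k+1] = Phi_a x_a[k] + Gamma_a (r - y); state frozen when hold is True."""
    return x_a if hold else Phi_a @ x_a + Gamma_a * (r - y)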
In hardware the output of a controller can saturate. At the point of saturation, or just before saturation, the additional dynamics need to be disabled. Disabling the additional dynamics prevents the integrators from running away, or "winding up".
Feedback Matrix
The feedback matrix is used to place the closed-loop poles.
The desired continuous-time poles are mapped to the discrete-time domain by z = e^(sT), where z is the discrete-time pole location, s is the continuous-time pole and T is the sampling interval.
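The mapping is applied to each desired pole before the feedback gain is computed; for example (the pole values are arbitrary placeholders, and T is the 10 ms interval used later):

import numpy as np

T = 0.01                                               # sampling interval, s
s_poles = np.array([-8.0, -9.0, -10.0 + 4.0j, -10.0 - 4.0j])
z_poles = np.exp(s_poles * T)                          # z = e^{sT}
print(z_poles)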
The command fbg from the controls toolbox can now be used to calculate the feedback matrix. There are also other tools provided by Matlab®, such as place and acker, that can be used to generate the feedback matrix. The command acker will not be examined in this thesis: it only produces results for a single-input system, and the two-wheeled robot is a multiple-input system. There will also be another, recently developed algorithm called TFBG.

The closed-loop system matrix Φ_d − Γ_d L has poles λ_i and eigenvectors Ψ_i that satisfy

(Φ_d − Γ_d L) Ψ_i = λ_i Ψ_i.

This equation can be rewritten as

P(λ_i) [Ψ_i ; LΨ_i] = 0, with P(λ_i) = [Φ_d − λ_i I, −Γ_d].

The dimensions of P(λ_i) are n × (n + p), where n is the number of states and p is the number of inputs. This means that the null space has dimension p, and each null-space vector is partitioned into an n-vector Ψ_i and a p-vector LΨ_i. There are up to p linearly independent vectors that can be chosen for each pole location in the closed-loop system. The number of vectors chosen for each pole is equal to the multiplicity of the pole.
To find the value of L, each null-space vector for a desired pole location is partitioned into a "top part" t_i and a "bottom part" b_i. Using Equation (3.10), t_i can then be used to solve for L.
Since n is equal to the number of desired poles, this is repeated n times, grouping all the top and bottom null-space vectors. This results in a top matrix T, which is n × n, and a bottom matrix B, which is p × n, related by B = LT, from which a unique L can be solved: L = BT⁻¹. Although this provides a unique solution for L, the construction of B and T is not unique. Different constructions of B and T will result in the desired closed-loop poles but may not produce the optimal performance for the closed-loop system [4]. It should also be noted that, to avoid L having complex values, complex poles need to be included together with their complex conjugates.
The algorithms fbg and Matlab's place are based on methods introduced by Kautsky, Nichols, and Van Dooren [5]. This method calculates eigenvectors that are as orthogonal as possible while achieving the desired closed-loop poles. This is accomplished using an iterative method. The matrix T is first found by initializing it using the method above. Then, one at a time, each eigenvector in T is projected to be as orthogonal as possible to all other eigenvectors, which produces the optimal eigenvector. Before calculating the next eigenvector in T, the new eigenvector replaces the old one. Once all the eigenvectors of T are calculated, the process is repeated. The iteration that results in the nonsingular matrix T with the best condition number is then used to calculate B. In order to calculate the matrix B, a vector α is introduced. The vector α can be found with the knowledge of T(λ_i) and Ψ_i by using Equation (3.16).
The knowledge of α_i allows b_i to be determined, and repeating this n times forms the matrices T and B. Then Equation (3.14) can be used to find L.
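A minimal sketch of the null-space construction described above is given below (Python; distinct real poles only — complex poles would have to be added in conjugate pairs with the real part of L taken at the end, and no orthogonality optimization is performed). For production use, scipy.signal.place_poles implements the Kautsky-Nichols-Van Dooren iteration that fbg and place are based on.

import numpy as np
from scipy.linalg import null_space

def place_nullspace(Phi, Gamma, poles):
    """Pole placement via the null space of P(lam) = [Phi - lam*I, -Gamma].
    For each desired pole one null-space vector [t_i; b_i] with b_i = L t_i
    is selected (here simply the first basis vector); then L = B T^{-1}.
    Sketch only: assumes distinct real poles and a nonsingular T."""
    n, p = Gamma.shape
    T_cols, B_cols = [], []
    for lam in poles:
        P = np.hstack([Phi - lam * np.eye(n), -Gamma])
        v = null_space(P)[:, 0]          # any vector in the p-dim null space
        T_cols.append(v[:n])
        B_cols.append(v[n:])
    T = np.column_stack(T_cols)          # n x n
    B = np.column_stack(B_cols)          # p x n
    return B @ np.linalg.inv(T)          # L, p x n

# quick check on a toy 2-state, 1-input discrete system
Phi = np.array([[1.0, 0.1], [0.0, 1.0]])
Gamma = np.array([[0.005], [0.1]])
L = place_nullspace(Phi, Gamma, [0.90, 0.85])
print(np.linalg.eigvals(Phi - Gamma @ L))   # ~ [0.90, 0.85]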
There are slight differences between fbg and Matlab's place that will be seen later numerically. The most likely place where they may differ is the criterion used to choose the initial null-space vectors.
The new method of calculating L is based on optimizing a stability robustness norm, which is described in Burl [6]. The calculation of L is performed in the function TFBG. TFBG is initialized using the same method as fbg, which is the only commonality between the two algorithms. The new algorithm searches for stability robustness in a system with a MIMO plant. The stability robustness indicates how big ∆(z), the error in the plant, can be before the system goes unstable, where I + ∆(z) is a MIMO transfer function cascaded with the plant.
The inputs and outputs of ∆(z) are y_d and w_d respectively, and if ∆(z) = 0 then the control system is using the nominal plant model. The condition for stability of the control system is that the size of ∆(z) remain below the robustness norm δ_max, which can be found from δ_max = 1/‖N(z)‖∞, where N(z) is the closed-loop transfer function from w_d to y_d. The objective of TFBG is to find the L that results in the largest possible δ_max. The two-wheeled robot requires another constraint, where the rows of L have the same magnitude and certain columns have opposite signs.
Stability Margins
The value of δ_max is directly related to the gain and phase margin of the control system. In a SISO system the stability margins can be found using q, which represents gain and phase uncertainty of the plant in classical control. In a MIMO system q could be used; however, the resulting stability margins are representative only of the individual plant inputs, without any consideration of simultaneous errors on each plant input. Therefore, there would be stability margins only for each individual input. Assuming δ_max has been calculated for a given control system and ∆(z) is taken to be a complex number C with |C| ≤ δ_max, the corresponding classical gain margins can be found from q = 1 + C, where the upper gain margin (UGM) and lower gain margin (LGM) are

UGM = 20 log₁₀ q_max    (3.24)
LGM = 20 log₁₀ q_min    (3.25)

The UGM and LGM are to be greater than 3 dB and less than −3 dB respectively, which requires δ_max to be greater than or equal to 0.4.
In order to find the phase margin, the equation q = e^(−jφ) = 1 + C is used, where 1 is the center of the circle and C lies in a disk of radius δ_max. Therefore, the phase margin (PM) is PM = 2 sin⁻¹(δ_max/2). To have a PM of at least 30°, δ_max needs to be greater than or equal to 0.5.
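These relations are easy to tabulate; the sketch below assumes the disk model of ∆ just described, with q_max = 1 + δ_max and q_min = 1 − δ_max.

import numpy as np

def margins_from_dmax(d_max):
    """Classical margins implied by a robustness norm d_max (disk model of Delta)."""
    ugm = 20 * np.log10(1 + d_max)             # upper gain margin, dB
    lgm = 20 * np.log10(1 - d_max)             # lower gain margin, dB
    pm = np.degrees(2 * np.arcsin(d_max / 2))  # phase margin, deg
    return ugm, lgm, pm

for d in (0.4, 0.5, 0.6):
    ugm, lgm, pm = margins_from_dmax(d)
    print(f"d_max={d:.1f}: UGM={ugm:+.2f} dB, LGM={lgm:+.2f} dB, PM={pm:.1f} deg")

For δ_max = 0.5 this gives roughly +3.5 dB, −6.0 dB and 29°, consistent with the thresholds quoted above.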
Therefore, a δ_max that has good PM will always have good gain margins.

List of References
[2] C. F. Van Loan, "Computing integrals involving the matrix exponential," IEEE Transactions on Automatic Control, vol. AC-23, pp. 395-404, June 1978.
CHAPTER 4
Hardware

The two-wheeled robot is built using the LEGO® Mindstorms® NXT system.
Design Tools
The controller for the robot is designed using Matlab® and Simulink®.
Bluetooth Interface
The bluetooth interface provides a communications link from a bluetooth enabled device to the Lego Mindstorm NXT main processor. The link contains the information shown in
GUI
The PC has a graphical user interface (GUI) that was designed for this application and can be seen in Figure 4.4. This custom interface provides a method to select and open a communications port. It also provides a command to balance, a command for θ, a command for φ, and a method to record data.
The first step in the operation of the GUI is to make a connection through the bluetooth communications port. This requires the Lego Mindstorm NXT to be paired with the computer by performing the following steps adapted from Chikamasa [3].
1. On the robot, in the main NXT screen scroll to the bluetooth symbol.
2. In the bluetooth menu scroll to the ON/OFF and ensure that it is on.
3. Return to the bluetooth menu.
4. Place the PC's bluetooth into Discovery mode.
5. In the NXT bluetooth menu scroll and select search, then select the PC that the NXT should be paired to.
6. The NXT will provide a pairing number and once that number is accepted it will attempt to connect to the PC.
7. The PC should indicate that the NXT is trying to connect and requires the pairing number. Provide the number and finalize the connection.
8. Then, to determine what communications port will be used, go to the bluetooth settings on the PC and locate the port associated with "outgoing NXT 'Dev B'". The communications port that was found using the pairing method above is the port selection used in the GUI. This port will remain the link to the Lego Mindstorm NXT until the device is removed from the PC. Now that the bluetooth is configured, the GUI can be used to control the robot. The following steps are taken to run the robot and assume that the controller with the bluetooth interface block has been preloaded.
1. Turn on the Lego Mindstorm NXT and press the orange button until the nxtOSEK screen flashes.
2. Select "Run" by pressing the right arrow. The robot will beep once the calibration of the Gyro is complete. The robot should not be moved prior to the beep since this will result in an abnormal gyro calibration.
3. Run the GUI in Matlab.
4. In the drop down menu, select the communications port for the bluetooth interface.
5. Click on "Connect". Once a connection is made the "Balance" button will activate.
6. Place the robot in an upright position on a surface and press the "Balance" button. Once the robot starts to balance, let go and let the controller do the work.
7. In the test panel, select from the four tests that can be performed.
8. Clicking "Start" will send the profile and collect data. Once the profile is finished, the data is available in Trial.mat.
Controller Design Considerations
The main processor is a 32-bit fixed-point processor. The Real-Time Workshop allows floating-point calculations to be performed on the 32-bit fixed-point processor. These operations will consume more processing time. The controller will be implemented in floating point; therefore, the extra computation is a factor.

The data that will be sent back and forth over the bluetooth communications also requires processing time. These factors determined that a 10 ms sampling interval should be used. This sampling interval is also much longer than the interval at which the I2C communications are handled with the AVR. Therefore, the I2C delays will not be a factor in the design and analysis of the controller.

The hardware will also affect the design of the controller since the resolution of the sensors is low. This requires the controller to have a slower settling time so that the controller does not respond to the bit changes. The plant is then examined and determined to be controllable by obtaining W_c from Equation (3.3) and verifying that the rank is equal to 6.
Additional Dynamics
The next step is to determine the additional dynamics. The ability to track steady-state references or ramps in θ and φ was examined for both cases. The additional dynamics for steady-state tracking are implemented as a discrete integrator per tracked output. The closed-loop poles need to be determined before the feedback matrix can be calculated. The two tracking designs, steady-state and ramp, will each have their own set of closed-loop pole locations. This is due to the fact that step tracking requires an 8th-order closed-loop system and ramp tracking requires a 10th-order closed-loop system. In both ramp and step tracking, three of the closed-loop poles from the plant are used.
Feedback Matrix
The design of the controller can utilize one of three algorithms for calculating the feedback matrix. These gains were calculated for both the steady-state and ramp tracking. Therefore, six sets of gains were determined in all. The stability margins are also calculated for the six controller designs. The performance of each of these designs is later examined in the Matlab/Simulink Model.
Stability Margins
The stability margins for each of the closed-loop systems were calculated using the equations in Section 3.5. The system N(z) used to calculate δ_max is the state-space model in Equation (5.5). The gain margins in Table 5.4 do indicate that using TFBG will provide the most robust controller. These controllers are examined further for performance in Chapter 6.
Plant Inputs and Outputs
In order to implement the controller, the feedback and commands of the robot need to be transformed into the states shown in Equation (2.14). The feedback signals from the robot are θ_ml, θ_mr and Ψ̇, which are converted from degrees to radians and then used to find the required states. θ and φ are determined by using Equations (2.1) and (2.2). Ψ is calculated by using a forward Euler integration of Ψ̇.
The state variables φ̇ and θ̇ require the derivative of θ and φ. The derivative was implemented as a second-order filter with a specific settling time in order to reject any frequencies that are higher than the system response.
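One way to realize such a band-limited derivative is sketched below (Python; this is not the thesis's exact filter, and the transfer function and settling-time rule of thumb are assumptions): a derivative cascaded with a critically damped second-order low-pass whose bandwidth is set from the desired settling time.

import numpy as np
from scipy.signal import cont2discrete, lfilter

def filtered_derivative(x, T, t_settle):
    """Differentiate a sampled signal through a critically damped 2nd-order low-pass.
    Assumed H(s) = w0^2 s / (s + w0)^2, with w0 chosen from the desired settling
    time (t_settle ~ 6/w0 for a critically damped pair).  Sketch only."""
    w0 = 6.0 / t_settle
    num = [w0**2, 0.0]                 # w0^2 * s
    den = [1.0, 2.0 * w0, w0**2]       # (s + w0)^2
    numd, dend, _ = cont2discrete((num, den), T, method='bilinear')
    return lfilter(numd[0], dend, x)

# example: derivative of a 1 Hz sine sampled at 10 ms, 0.2 s settling time
t = np.arange(0, 2, 0.01)
dx = filtered_derivative(np.sin(2 * np.pi * t), T=0.01, t_settle=0.2)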
Estimates of the state variables φ̇ and θ̇ could have been implemented using a linear observer system. However, modeling has shown that non-linearities due to quantization make it difficult to use the linear observer accurately.
The outputs of the plant require scaling. Since the DC motor voltage is PWM based, the voltage from the controller is scaled and limited by the battery voltage (±V_bat). The commands sent to the motors are PWM_l,r = 100·V_l,r/V_bat. The presence of the limiters requires that the additional dynamics, which are integrators, be held in order to keep them from rapidly growing while at the limits. The integrators are not held exactly at ±V_bat but at a voltage slightly lower. Limiting the integrators at a lower voltage than the battery allows the controller to respond as a regulator with the remaining voltage headroom. The regulator is the main component used to maintain the robot's balance, giving the system a priority ordering: first balancing, second maneuvering.
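The scaling and integrator-hold logic can be summarized in a few lines (a sketch; the reduced hold fraction v_hold_frac is an assumed tuning value, not a number from the thesis):

def motor_command(v_cmd, v_bat, v_hold_frac=0.9):
    """Scale a controller voltage to a PWM duty cycle and flag integrator hold.
    PWM = 100 * V / V_bat, clipped to +/-100%.  The additional dynamics are
    held once |V| exceeds v_hold_frac * V_bat, leaving headroom for the
    balancing regulator.  Sketch only."""
    hold_integrators = abs(v_cmd) > v_hold_frac * v_bat
    v_limited = max(-v_bat, min(v_bat, v_cmd))
    pwm = 100.0 * v_limited / v_bat
    return pwm, hold_integrators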
Modeling and Simulations
The controller design is first implemented in a computer model utilizing Matlab and Simulink. The model used can be seen in Figure 6.1.
Performance Testing
The model was used to determine the performance of the controller design for each of the feedback matrix calculation algorithms to both steady-state and ramp tracking. In Chapter 5 the stability margins for each feedback matrix calculation algorithm with steady-state and ramp tracking were calculated. The stability margins are a good indication of how the design will perform. However, even though the stability margins may show the system to be more robust, the dynamics may still not produce the best performance.
Steady-State Tracking
Testing was performed using the steady-state tracking designs utilizing place, fbg, and TFBG. The following figures were generated from data collected from the model. The inputs to the model in the first test are a ramping profile in φ and a constant zero reference for θ. Figure 6.4 shows that the place algorithm allows θ to be affected by a command in φ, while the other algorithms hold to the reference. In all cases balancing is achieved; however, Figure 6.6 shows that place introduces disturbances into Ψ. This is an example where place had higher stability margins than fbg, as seen in Table 5.4, and yet does not have better performance. The model was also executed with a ramping profile in θ and a constant zero reference for φ. The voltages in Figure 6.7 show that both wheels are given commands in the same direction, which causes the robot to move forward and back.
However, it is evident that the left wheel has much more damping than the right wheel with the place algorithm. Figure 6.8 shows that all three of the algorithms produce tracking systems with zero steady-state error, and all have approximately the same error to a ramp. Figure 6.9 shows that the place algorithm allows φ to be affected by a command in θ, while the other algorithms hold to the reference.
In all cases, balancing is achieved, and a disturbance is introduced in Ψ as shown in Figure 6.10. The steady-state tracking modeling shows that place had the worst performance. TFBG seems to show marginally better performance than fbg. Using place would cause the robot to move very sloppily because the states are highly coupled to each other. This would make it very difficult to coordinate commands in θ and φ that would result in a desired trajectory. In all cases steady-state tracking was achieved. While it is not expected that any of these designs track a ramp perfectly, since they were designed for steady-state tracking, it would be desirable that they track with as little error as possible.
Ramp Tracking
Testing was performed using the ramp tracking designs utilizing place, fbg, and TFBG. The following figures were generated from data collected from the model. The inputs to the model in the first test were a ramping profile in φ and a constant zero reference for θ. Figure 6.11 shows the voltage commanded to both wheels; as expected, the voltages are equal in magnitude and opposite in direction, which results in the robot turning. The voltage commands show that the TFBG algorithm has a faster response, which can also be seen in Figure 6.13 where TFBG has less overshoot. Figure 6.12 shows that the place and fbg algorithms allow θ to be affected by a command in φ, while the TFBG algorithm holds to the reference. In all cases balancing is achieved. However, Figure 6.14 shows that place and fbg introduce disturbances into Ψ. In this case TFBG, per Table 5.4, has the best performance and stability margins. [Figure 6.14: measured Ψ state [rad] while tracking a ramping φ, for fbg, TFBG and place.] The model was also executed with a ramping profile in θ and a constant zero reference for φ. It is expected that the voltages should have commands with equal magnitude and direction, as in Figure 6.15, for the robot to move forward and back. However, it is evident that the right wheel has much more damping than the left wheel with the place algorithm. It can also be seen that TFBG has a slower response to commands in θ, and therefore has the most overshoot to a commanded θ, as seen in Figure 6.16. Figure 6.17 shows that the place and fbg algorithms allow φ to be affected by a command in θ, while the TFBG algorithm holds to the reference. The state Ψ shown in Figure 6.18 indicates that during maneuvering the robot remains balanced. Tracking a figure eight will test the controller's ability to track both φ and θ at the same time. During this testing only the design that uses TFBG will be used.
The performance testing in the previous section shows that using place would cause the system response to be very sloppy. It also shows that fbg is sloppy when performing ramp tracking, even with the ramp tracking design. TFBG has better performance because it not only optimizes phase and gain margin, it also ensures that the feedback matrix has a certain symmetry, which can be seen in Tables 5.2 and 5.3. The fbg algorithm had shown this symmetry in the steady-state tracking design, which had good performance. In the ramp tracking design, fbg did not have this symmetry. This explains why fbg performed better in steady-state tracking than in ramp tracking. Therefore, since neither fbg nor place guarantees the symmetry needed to get the best performance, TFBG will be used in the controller design for figure eight tracking. The figure eight will help to determine whether steady-state or ramp tracking is better for hardware implementation.
Figure Eight Profile Generation
The figure eight profile generation is calculated in the commands block of the model. Commands for θ and φ are calculated from the desired x_m and y_m. The desired x_m and y_m are calculated from sinusoids, where A_x is the amplitude of x_m, A_y is the amplitude of y_m and T sets the period of the figure eight. This means that a figure eight can be commanded to remain in an area of A_x × A_y and complete the figure eight in 2πT seconds. In the following tests T is set to 10, A_x to 50 cm and A_y to 100 cm.
The locations x_m and y_m are transformed to θ and φ using Equation (2.4), which results in expressions involving tan⁻¹. The tan⁻¹ is calculated by using atan2 in order for all quadrants to be properly handled. The atan2 function is bounded by ±π, so an "unwrapper" is used to remove the discontinuities in the atan2 output. This is needed because the robot expects absolute position, not relative position. The 2π wrap would cause the robot to receive a large step in rotational position that would try to spin the robot one full revolution very rapidly, saturating the voltage commands.
The initial angle φ is a large step. The large command takes time to settle out, so the command generator applies the initial angle φ and then waits for the system to settle before starting the figure eight commands.
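The profile generation can be sketched as follows (Python). The exact sinusoids of the thesis were lost in extraction, so a 1:2 Lissajous pattern consistent with the stated bounds (±A_x, ±A_y) and period 2πT is assumed, and the wheel radius R_w used to convert path length to wheel angle is a placeholder.

import numpy as np

def figure_eight_profile(t, A_x=0.5, A_y=1.0, T=10.0, R_w=0.04):
    """Generate theta/phi commands for a figure eight (sketch).
    Assumed Lissajous form: x = A_x sin(2t/T), y = A_y sin(t/T); one full
    figure eight every 2*pi*T seconds.  R_w is an assumed wheel radius [m]."""
    x = A_x * np.sin(2.0 * t / T)
    y = A_y * np.sin(t / T)
    dx, dy = np.gradient(x, t), np.gradient(y, t)
    phi = np.unwrap(np.arctan2(dy, dx))     # heading, 2*pi discontinuities removed
    s = np.concatenate(([0.0], np.cumsum(np.hypot(np.diff(x), np.diff(y)))))
    theta = s / R_w                         # average wheel angle from path length
    return theta, phi

t = np.arange(0.0, 2.0 * np.pi * 10.0, 0.01)
theta_cmd, phi_cmd = figure_eight_profile(t)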
Steady-State Tracking
The following plots show how the system responds to the figure eight commands with the TFBG steady-state design. Figures 6.20 and 6.21 clearly show that there is a delay in both θ and φ. This delay results in a figure eight that is shifted, as seen in Figure 6.23. The figure eight is clearly out of the limits of ±50 cm on the x axis and slightly out of ±100 cm on the y axis. Figure 6.23 shows the extent of the error in both x and y. The trend seems to show that as time passes the figure eight would continue to shift toward the negative x and y axes. Figure 6.22 shows that the robot will remain balanced while maneuvering the figure eight.
Ramp Tracking
The following plots show how the system responds to the figure eight commands with the TFBG ramp tracking design. Figures 6.26 and 6.27 show that there is very little delay in both θ and φ. There is, however, a start-up delay in θ which does cause a misalignment in time between φ and θ. This results in a slightly malformed figure eight, as seen in Figure 6.29. The figure eight is out of the limits of ±50 cm on the x axis and slightly short of the ±100 cm on the y axis. The figure eight does have less error than the steady-state design, which can be seen in Figure 6.30. This design also results in very stable balancing, which can be seen in Figure 6.28.

The gyro signal has an offset that needs to be removed. This is accomplished by using a 10 Hz low-pass filter for 1 second at the start of the program. The output of the filter is the gyro offset, provided that the robot has been stationary. The filter is disabled after 1 second, which holds and outputs the offset value. The controller itself resides in the "Controller" block shown in Figure 7.7. The block contains the state-space for the additional dynamics, which is enabled when the output is in range. When the additional dynamics block is disabled, it holds the last output value until it is enabled once again. The enabling of the subsystem is handled by the "Integrator Logic", which ensures that the magnitude of the voltage commands is less than the battery voltage. The block will have the robot generate a beep whenever the output voltage command is out of range. The "Controller" block also generates the error for θ and φ and adds the output of the −L₁ gain to the output of the additional dynamics. The hardware tests repeat the command profiles from the simulations so that the results can be compared.
Balancing
This test was conducted to see how well the robot would balance in place.
Theta Ramping
In the θ ramping tests, the voltages in Figure 7.15 generally behave as expected, with the same direction and magnitude for each wheel. The right wheel does show extra dynamics that the left wheel does not. The voltages are comparable to the voltages that can be seen in Figure 6.15 from simulation, which is a good indication that the model and the real hardware respond very similarly.
This can also be seen in Figure 7.16, where the state variable θ has an overshoot of approximately 4 radians, as in Figure 6.16 of the simulation. The response of φ to a ramping θ, as shown in Figure 7.18, is not ideal since φ looks to be perturbed, unlike in the simulation.
Phi Ramping
In the φ ramping tests, the voltages in Figure 7.23 are comparable to the voltages that can be seen in Figure 6.11. The voltages are opposite in direction and have equal magnitude for each wheel. Figure 7.25 has an overshoot of approximately 0.5 radians, as in Figure 6.13 of the simulation. The response of θ to a ramping φ, as shown in Figure 7.26, is not ideal since θ is perturbed, unlike in the simulation.

The figure eight in this test was commanded with A_x = 50 cm, A_y = 100 cm and T = 10. The voltages in Figure 7.31 are comparable to the voltages that can be seen in Figure 6.25 of the simulations, except they are much noisier. Figure 7.32 shows that the commanded θ is tracked very well and is comparable to the simulation results in Figure 6.26. This is also seen in the tracking of φ when comparing Figure 7.34 to Figure 6.27 of the simulations. Figure 7.36 shows that Ψ has very little movement, so the robot is well balanced. Finally, the XY plot in Figure 7.38 shows how the robot moved in the XY plane. The figure eight is slightly deformed, but its deformity can be compared to Figure 6.29. This error is due to how well the states θ and φ match in time. The biggest issue is the start-up of θ, which does not track for the first 3 seconds. This causes θ and φ to always be misaligned. The robot does track ramps in both θ and φ very well.

The feedback control algorithms place, fbg and TFBG were all examined.
Analysis and simulations showed that even though a feedback matrix may have better stability margins, it may not have the best performance. These algorithms were all compared to one another in both steady-state tracking and in ramp tracking. The place algorithm did not produce very good results, which has been the case in other studies [2]. TFBG showed the best results. The algorithm searches for the best stability margins for both inputs. It also searches for a desired symmetry in the feedback matrix.
The controller was able to track θ and φ, which was demonstrated in both simulation and in the hardware. The hardware and simulation showed difficulty in tracking a figure eight at a high speed. In both simulation and hardware, slowing down the commands by changing the parameter T from 10 to 20 allowed the system to track much better. The result was a very slow-moving system that tracked. | 2017-11-28T20:10:12.985Z | 2012-01-01T00:00:00.000 | {
"year": 2012,
"sha1": "8ea958f980bd993072376942018090538b3bb31f",
"oa_license": "CCBY",
"oa_url": "https://digitalcommons.uri.edu/cgi/viewcontent.cgi?article=1113&context=theses",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "79fc3081c6b69ac7dbe891d413d5b96d2a865b49",
"s2fieldsofstudy": [
"Engineering",
"Computer Science"
],
"extfieldsofstudy": [
"Geography"
]
} |
266157900 | pes2o/s2orc | v3-fos-license | A paper-based ratiometric fluorescent sensor for NH3 detection in gaseous phase: Real-time monitoring of chilled chicken freshness
Highlights
• A ratiometric fluorescent probe with high sensitivity and selectivity.
• A portable paper-based sensor is prepared to detect NH3.
• The LOD of the sensor for NH3 in the gaseous phase is 288 nM.
• This sensor can monitor chicken freshness online.
Introduction
In recent years, global chicken sales have increased year after year. Because of its high protein content and low fat content, chicken is well aligned with modern consumers' needs for a healthy diet, and it has become one of the main sources of meat consumption (P.Y. Li, Rao, Wang, & Hu, 2022). Consumers' requirements for chicken quality and safety are also rising (Z. Jia, Luo, Wang, Dinh, Lin, Sharma, et al., 2021; Kim, Park, Kim, & Shin, 2022; Kuswandi, Jayus, Oktaviana, Abdullah, & Heng, 2014; Lu, Yang, Liu, Liu, Ma, Wu, et al., 2020). To meet consumers' demand for information about the freshness of chicken meat, freshness indicator labels have attracted increasing attention from researchers, as they can provide consumers with intuitive, rapid indicators of chicken freshness. At present, the most widely studied approach is the application of colorimetric sensing membranes based on pH changes as meat freshness indicators (Lee, Baek, Kim, & Seo, 2019; Lee, Park, Baek, Han, Kim, Chung, et al., 2019; Mastnak, Mohr, & Finsgar, 2023). The basic principle of colorimetric sensing membranes is that, as meat degrades, protein is broken down by enzymes and microorganisms, producing nitrogenous volatile compounds, the total volatile basic nitrogen (TVB-N), which includes ammonia, dimethylamine, trimethylamine, and other substances (Qin, Ke, Faheem, Ye, & Hu, 2023; Wojnowski, Namiesnik, & Plotka-Wasylka, 2019). Rising TVB-N levels gradually shift the atmosphere within the package toward alkaline conditions as meat spoilage progresses, and pH-based colorimetric sensing films can detect these changes (Mastnak, Mohr, & Finsgar, 2023; Y. Sun, Zhang, Adhikari, Devahastin, & Wang, 2022). However, acid-base neutralization can also occur. For example, chicken is easily infected by Salmonella, which metabolizes protein to produce acidic H2S and similar compounds (Chen, Ding, Zhu, Guo, Tang, Xie, et al., 2023; Lu, et al., 2020; Rosniawati, Rahayu, Kusumaningrum, Indrotristanto, & Nikastri, 2021). This causes acid-base neutralization of the headspace gas, so evaluating chicken freshness from changes in headspace gas pH alone could result in errors (see Fig. 1).
Usually, volatile compounds such as NH3, TMA and DMA are considered together as total volatile basic nitrogen (TVB-N). Among these substances, NH3 has the most considerable odor-release strength; it is produced as a result of the destructive activities of microorganisms and is considered one of the most important freshness indicators for monitoring the quality and safety of meat products (Cai, Song, Zhang, Wang, Jian, Xu, et al., 2022; Jiao, Sun, Hui, Dong, Li, Hui, et al., 2022; Meng, Luo, Dong, Zhang, He, Long, et al., 2020; Moosavi-Nasab, Khoshnoudi-Nia, Azimifar, & Kamyab, 2021; Rosniawati, Rahayu, Kusumaningrum, Indrotristanto, & Nikastri, 2021). NH3 was therefore selected as the target analyte and chicken-freshness indicator in this study. Detecting the freshness of chicken places high demands on timeliness and non-destructiveness (Mastnak, Mohr, & Finsgar, 2023), so the development of an extremely sensitive and selective sensor for the online detection of ammonia gas around chilled chicken is crucial. An electronic nose is the most common method for rapid non-destructive testing of gases (Luo, Zhu, Lv, Wu, Yang, Zeng, et al., 2023). However, electronic noses based on metal oxide gas sensors need to work at high temperatures and are not suitable for chicken freshness detection. Visible-light colorimetric technology is the most intuitive NH3 gas visualization detection method (W.Y. Li, Sun, Liu, Rotaru, Robeyns, Singleton, et al., 2022), but it cannot avoid errors caused by ambient light interference. Infrared spectroscopy can also be used for NH3 gas analysis (Xiao, Sui, Yu, Chen, Yin, & Ju, 2019); however, it is susceptible to ambient humidity. The storage and transportation environment of chilled chicken is humid, cold and dim, so these detection methods are difficult to use for real-time online monitoring of chicken freshness. Among the available detection methods, fluorescence detection of NH3 is simpler and more convenient (Mallick, Chandra, & Koner, 2016). Zhang (2019) designed a cellulose-based ratiometric fluorescent material and proposed a visualization method for monitoring the freshness of seafood; the prepared ratiometric fluorescent material has a fast and reversible response to ammonia gas. These fluorescent probes respond well to ammonia gas, but some problems remain in practical applications. For example, the synthesis process of organic probes is very complex, which greatly complicates the actual preparation process. The complex volatile components of chicken require high probe selectivity, and the relatively low concentration of ammonia produced by chicken requires high probe sensitivity. In addition, some probes even operate in the "off" state, with relatively low sensitivity.
In this study, coordination polymers (CPs) were used as sensors; stable, porous, three-dimensional complexes with different properties can be obtained by combining different metal ions with different organic ligands (Lü, Chen, Li, Wang, Müllen, & Yin, 2019). However, because the properties of coordination polymers formed from different metal-ion and organic-ligand combinations differ, designing CPs capable of detecting organic amines is challenging. Compared with single-emission fluorescence, ratiometric fluorescent probes show better self-calibration (J. Wang, Li, Ye, Qiu, Liu, Huang, et al., 2021). Therefore, a solid-state sensor based on the Zn(PA)@CNQDs composite coordination polymer was designed as the probe to detect fatty amine gases. The electron-rich amine gas combines with the functional sites on the surface of the CPs to turn on the fluorescence of the polymer, and detection of amines is achieved through the dual-emission fluorescence signals of Zn(PA) and the CNQDs.
Preparation of Zn(PA)(BPE)@CNQDs
The CNQDs were synthesized as follows: ammonium chloride (0.53 g), sodium citrate (0.1 g) and water (5 ml) were mixed, added to a Teflon-lined autoclave, heated to 180 °C for 4 h, and then left at ambient temperature to cool naturally. To remove unreacted precursors, the mixture was dialyzed through a 1000 Da dialysis membrane after freezing. The final product was centrifuged for 5 min at 8,000 rpm to remove any large, insoluble particulates, and the purified CNQDs solution was kept at 4 °C.
Zn(PA)(BPE)@CNQDs was synthesized according to the following method: 0.8 mmol Zn(NO3)2·6H2O was mixed and stirred with 8 ml of ultrapure water to obtain solution A, and 0.4 mmol of the PA ligand was mixed with 0.8 mmol of bis(4-pyridyl)ethane (BPE) in 40 ml ultrapure water to obtain solution B. The pH of solution B was adjusted to 8 with 1 M KOH; solutions A and B were then mixed, stirred for 5 min, placed into an autoclave and heated at 120 °C for 3 days to obtain a solid powdery product. The solid powder Zn(PA)(BPE) was obtained after vacuum freeze-drying for 2 days. Then 3 mg of the CPs was dispersed in ultrapure water and mixed with 100 μl of the CNQDs; after standing for 1 h, the mixture was centrifuged, the supernatant was removed, and the precipitate was redispersed in 4 ml of ultrapure water.
Zn(PA)@CNQDs complex for detection of ammonia
Ammonia solutions with different concentration gradients were prepared and reacted with the Zn(PA)@CNQDs complex, and the concentration of the complex was optimized as described in the supporting information. The emission peak intensities at 440 nm and 540 nm were recorded under excitation at 365 nm, and all relevant measurements were performed 3 times.
Detection of total nitrogen content in chicken breast (TVB-N)
Fresh chicken breasts were provided by a local slaughter plant (Fengyuan Co., Zhenjiang, China) within one hour after slaughter, in insulated polystyrene boxes on ice. A Kjeldahl nitrogen analyzer was used to detect TVB-N in chicken breast. First, a 10 g chicken breast sample was homogenized with 75 ml ultrapure water. Next, 1 g MgO was added to the mixture to liberate the basic nitrogenous compounds from the chicken breast sample, which were then distilled off and absorbed in boric acid. The nitrogen content was then determined by titration with 0.1 mol/L HCl. The TVB-N content in chicken breast samples was measured every 24 h and was calculated as follows:

X = (V1 − V2) × C × 14 / m × 100

where X is the content of TVB-N in chicken breast, in mg/100 g; m is the weight of the chicken breast sample, in g; V1 is the volume of HCl consumed by the sample, in ml; V2 is the volume of HCl consumed by the blank, in ml; C is the concentration of HCl, in M; and 14 is the molar mass of nitrogen, in g/mol.
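The calculation amounts to converting the titrated HCl to nitrogen mass per 100 g of sample; a minimal sketch (the example titre values are hypothetical):

def tvbn_mg_per_100g(v1_ml, v2_ml, c_hcl=0.1, m_sample_g=10.0):
    """TVB-N content X [mg/100 g]: X = (V1 - V2) * C * 14 / m * 100."""
    return (v1_ml - v2_ml) * c_hcl * 14.0 / m_sample_g * 100.0

# example: 1.2 ml sample titre vs 0.1 ml blank on a 10 g sample
print(f"TVB-N = {tvbn_mg_per_100g(1.2, 0.1):.1f} mg/100 g")   # -> 15.4 mg/100 g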
Preparation of portable ratiometric fluorescence sensors
The complex (7.5 mg/128 ml) was dissolved in 2 ml of ultrapure water, and the mixture was placed in a 10 ml centrifuge tube after uniform shaking; 20 μl of the sample was dropped on qualitative filter paper, and a solid paper-based sensor was obtained after nitrogen blowing for 10 min. Finally, the sensor was placed in a sealed gas generator, and ammonia solutions of different concentrations were injected to react with the sensor. The change in the RGB value of the sensor was observed under a UV four-purpose analyzer, and its value was recorded with a smartphone (Huawei P40). The distribution of ammonia between the liquid and gas phases was analyzed, the volatilization of ammonia gas from various volumes of aqueous ammonia solution was estimated, and the precise calculation method is presented in the supporting information. The standard concentration of ammonia gas was generated as follows: dry fluorescent paper-based sensors were placed in homemade gas generators, aqueous ammonia solutions of different concentrations were added to the sample cell, and the ammonia was partially ionized in the water and volatilized into the air (H. Wang, Cui, Arshad, Xu, & Wang, 2018).
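One possible way to extract the G/B response from the smartphone photographs is sketched in Python below (this is an illustration only, not the authors' code; the file name, crop box, and blank ratio are hypothetical):

```python
import numpy as np
from PIL import Image

def delta_g_over_b(image_path, box=(100, 100, 200, 200), g0_over_b0=1.0):
    """Mean G/B ratio of the cropped sensor region, expressed as the change
    relative to the blank sensor's G/B ratio (g0_over_b0)."""
    rgb = np.asarray(Image.open(image_path).convert("RGB").crop(box), dtype=float)
    g_mean, b_mean = rgb[..., 1].mean(), rgb[..., 2].mean()
    return g_mean / b_mean - g0_over_b0

# Usage: compare a photo of the exposed sensor against the blank ratio.
# print(delta_g_over_b("sensor_24h.jpg", g0_over_b0=0.95))
```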
The partial pressure of gaseous ammonia follows Henry's law:

$$P(\mathrm{NH_3,g}) = K_B \, C_{BX} \tag{1}$$

where P(NH₃,g) is the partial pressure of gaseous ammonia, K_B is Henry's constant (0.297 Pa per mol/L for NH₃ in water at 298.15 K), and C_BX is the molar concentration of NH₃ in mol/L. The amount of gaseous ammonia follows from the ideal gas law:

$$n(\mathrm{NH_3,g}) = \frac{P(\mathrm{NH_3,g}) \, v(\mathrm{NH_3,g})}{RT} \tag{2}$$

where v(NH₃,g) = 0.3927 L is the volume of the sealed flask used in the home-built NH₃ generator, R = 8.314 J/(mol·K), T = 298.15 K, M(H₂O) = 18 g/mol, M(NH₃) = 17 g/mol, and n(NH₃,g) is the amount of ammonia in the gaseous state. Mass balance on the ammonia initially dissolved in the water gives

$$n_0(\mathrm{NH_3{\cdot}H_2O}) = n(\mathrm{NH_3,l}) + n(\mathrm{NH_3,g}) + n(\mathrm{OH^-}) \tag{3}$$

and the liquid-phase molar concentration is

$$C_{BX} = \frac{n(\mathrm{NH_3,l})}{v(\mathrm{H_2O})} \tag{4}$$

where n₀(NH₃·H₂O) is the amount of ammonia in the water in the initial state, n(NH₃,l) is the amount of ammonia remaining in the liquid state, n(OH⁻) is the amount of OH⁻ formed by ionization, v(H₂O) is the volume of water, ρ(H₂O) is the density of water, and M(H₂O) is the molar mass of water. On the basis of equations (1)-(4), the concentration of gaseous ammonia generated from a given aqueous ammonia solution can be calculated.
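A minimal sketch of this calculation in Python, assuming the constants listed above, complete equilibrium in the sealed flask, and K_B expressed in Pa per mol/L (this is an illustration, not the authors' code):

```python
R = 8.314          # gas constant, J/(mol*K)
T = 298.15         # temperature, K
V_GAS = 0.3927e-3  # flask volume, m^3 (0.3927 L)
K_B = 0.297        # Henry's constant, Pa per (mol/L), NH3 in water at 298.15 K

def gaseous_ammonia_mol(c_liquid_mol_per_l):
    """Moles of NH3 in the gas phase at equilibrium with an aqueous solution
    of the given molar concentration (Eq. (1) Henry's law + Eq. (2) ideal gas)."""
    p_nh3 = K_B * c_liquid_mol_per_l      # Eq. (1), Pa
    return p_nh3 * V_GAS / (R * T)        # Eq. (2), mol

# Example: 0.1 mol/L aqueous ammonia in the 0.3927 L headspace
print(gaseous_ammonia_mol(0.1))  # ~4.7e-9 mol
```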
Use of fluorescent paper-based sensor to detect chicken breast sample freshness
Fresh chicken breast weighing 30 g was placed in a single-use plastic container. The paper-based fluorescent sensor was positioned at the bottom of the box and kept at 4 °C. Sensor fluorescence images were captured every 24 h.
Complex structural analysis
Zinc metal complexes have a strong absorption peak between 350 nm and 400 nm, and under 365 nm excitation the complex has a weak emission peak at 540 nm. As shown in Fig. 2(a), two emission peaks can be observed at an excitation wavelength of 365 nm: the emission peak at 440 nm is the characteristic peak of the CNQDs, while the emission peak at 540 nm is the characteristic peak of the Zn complex. As can be seen from the SEM image (Fig. 2(b)), Zn(PA)(BPE) appears blocky, while TEM shows that the CNQDs are adsorbed on the surface of Zn(PA)(BPE) (Fig. 2(c)). From the electron microscopy images, we preliminarily confirmed the combined Zn metal complex-CNQDs composite. As shown in the potential diagram (Fig. 2(d)), Zn(PA)(BPE) carries a positive surface charge while the CNQDs carry a negative surface charge; therefore, the positively charged Zn(PA)(BPE) and the negatively charged CNQDs were adsorbed together by electrostatic interaction. The infrared spectrum of the composite (Fig. 2(e)) shows the characteristic peaks of both the CNQDs and the Zn complex. For the CNQDs, the peak at 1406 cm⁻¹ is assigned to stretching modes of C-N heterocycles. For the CPs, the sharp absorption peak at 810 cm⁻¹ was attributed to ring bending vibration modes characteristic of C-O, and the peaks at 1456 cm⁻¹ and 1374 cm⁻¹ correspond to typical stretching vibration modes of C-N heterocycles. For the composite, the peak at 3200-3000 cm⁻¹ is assigned to O-H and N-H stretching (Mani, Ojha, Reddy, & Mandal, 2017); although these two stretching vibration peaks largely overlap, subtle differences can still be observed, from which it is inferred that the composite carries the surface structure of both substances.
Analysis of optical properties of complexes
The optical signal of the complex at different pH was further studied. Before proceeding, the complex concentration was optimized to ensure the maximum optical-signal response during the complex's reaction. According to Fig. 3, complexes of different concentrations undergo different degrees of fluorescence enhancement after reacting with 0.1 M ammonia. As shown in Fig. 3(a), when the concentration of the complex was 1.5 mg/ml, an inner filter effect occurred after the complex reacted with ammonia, resulting in a decrease in fluorescence, while a low concentration of the complex resulted in a weak fluorescence enhancement and a weak visualization effect (Fig. 3(h)). A concentration of 3 × 10⁻² mg/ml was therefore selected for the study of the optical properties of the complex; at this concentration, the complex showed an obvious fluorescence enhancement effect.
Sensitivity and selectivity analysis of complex detection of ammonia
In order to investigate the fluorescence characteristics of the complex at different pH, this study measured the fluorescence enhancement of a fixed concentration of Zn(PA)@CNQDs complex at different pH. According to Fig. 4(a), in strongly acidic and strongly alkaline environments the fluorescence intensity of the composite decreases significantly at 540 nm, while in moderately alkaline environments the fluorescence intensity at this wavelength is greatly enhanced. This indicates that alkaline environments can increase the fluorescence intensity of the complex, but the magnitude of the enhancement is not determined solely by the degree of alkalinity. Similarly, the effect of different ammonia solution concentrations on the fluorescence intensity of the composite was investigated (Fig. 4(d)). The composite has two emission wavelengths, 440 nm and 540 nm, at an excitation wavelength of 365 nm. With the addition of increasing concentrations of aqueous ammonia, the emission peak at 540 nm gradually increased and the emission peak at 440 nm gradually decreased. To further explore the mechanism by which ammonia influences the fluorescence of the complex, Gaussian software was used to calculate the LUMO and HOMO orbital energy levels of the complex and of ammonia (Fig. 4(b)). The complex structure was optimized by density functional theory (DFT), and theoretical values were calculated using the B3LYP/6-31G basis set. From Fig. 4(b), the LUMO orbital of the complex lies between the energy level orbitals of ammonia, and previous studies have shown that fluorescence is enhanced when the LUMO orbital of a fluorescent material lies between the energy level orbitals of the analyte (Jiang, Feng, Wang, Gu, Wei, Chen, et al., 2013). This indicates that electrons in the ammonia are transferred to the orbital of the complex, resulting in enhanced fluorescence. At the same time, the emission peaks of the CDs overlap to a certain extent with the absorption peaks of the complex (Fig. 4(c)), and according to the FRET effect, the emission peak at 440 nm weakens as the 540 nm peak is enhanced (Huang, Zhang, Huo, & Yin, 2021) (Fig. 4(d)). In the ammonia concentration range of 0 to 3 mM, the intensity ratio I540/I440 is positively correlated with ammonia concentration, with a correlation coefficient of 0.996 and a detection limit of 10.4 μM (Fig. S1).
Construction of a portable ratiometric fluorescence sensor for ammonia detection
Inspired by the response of the complex to aqueous ammonia solution, this study further attempted to prepare a fluorescent paper-based sensor for detecting gaseous ammonia. Various volatile substances are produced during the spoilage of chicken breast, among which ammonia and amines are the main characteristic spoilage gases. As shown in Fig. 5(a), the fluorescence intensity of the sensor no longer changes after 24 h of exposure to a given ammonia environment, and with a blank sample the fluorescence intensity of the sensor did not change significantly; the sensor therefore has good stability in this environment, and we consider that the sensor can complete detection after 24 h at a given ammonia concentration. To explore the detection limit of the solid-state sensor for ammonia, different concentrations of ammonia were placed with the sensor in a sealed reaction device (Fig. 5(b)), and after a waiting period the change in RGB value was obtained from smartphone images and analysis (Fig. 5(c)). To assess the anti-interference capability of the sensor, several volatile components (hydrogen sulfide, acetone, hexane, cyclohexane, trichloroethane, ethylene, isopropanol, and methanol) were tested; as can be observed from Fig. 5(d), the composite sensor shows good selectivity toward ammonia. By correlating the obtained RGB values with ammonia concentration, it is concluded that in the ammonia gas concentration range of 0 to 1.296 μM the detection limit of the solid sensor is 288 nM, and the sensor has good detection sensitivity and selectivity (Fig. 5(e)). The specific concentration of ammonia in the gas phase was obtained from Henry's law (Eq. (6)).
Application of solid-state sensor in chicken breast freshness detection process
In order to follow the fluorescence change of the sensor during chicken breast spoilage, the fluorescent paper-based sensors were placed with the chicken breast in the same box at 4 °C and the fluorescence change was observed (Fig. 6(a)). The sensor was photographed every 24 h to analyze the change in the sensor's RGB value with storage time, and the relationship between the change in sensor fluorescence and the freshness of the chicken breast was analyzed. At the same time, the Kjeldahl method was used to measure the TVB-N content of the chicken breast every 24 h. In the first three days, the TVB-N concentration was no more than 15 mg/100 g (Fig. 6(b)), meaning that the chicken breast was fresh, and the ΔG/B value of the fluorescent paper-based sensors was less than 0.6 (Fig. 6(c)). After 4 days, the TVB-N level was above 20 mg/100 g and the ΔG/B value of the fluorescent paper-based sensors climbed to 1.0 because the chicken breast was no longer fresh. An increase in the ΔG/B value above 1.0 demonstrates that the membrane's color has shifted into a different color space, as shown in Fig. 6(b). To compare the fluorescent paper-based sensor with chemical detection techniques in terms of sensitivity, the relationship between the TVB-N concentration (y) and the ΔG/B value (x) was established by regression.
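The fitted coefficients of this relation are not reproduced here; the sketch below shows how such a linear relation and its correlation coefficient could be obtained from paired measurements (the data points are hypothetical, chosen only to mimic the reported trend):

```python
import numpy as np

# Hypothetical paired measurements: sensor dG/B vs. Kjeldahl TVB-N (mg/100 g)
dgb  = np.array([0.2, 0.4, 0.6, 0.8, 1.0, 1.2])
tvbn = np.array([6.0, 10.5, 14.8, 19.6, 24.9, 29.7])

slope, intercept = np.polyfit(dgb, tvbn, 1)   # linear fit y = slope*x + intercept
r = np.corrcoef(dgb, tvbn)[0, 1]
print(f"TVB-N ~ {slope:.1f} * (dG/B) + {intercept:.1f}, r = {r:.3f}")
```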
The data analysis revealed that the fluorescence paper-based sensor's ΔG/B value can accurately reflect the chemical quality parameters of chicken breast, particularly when those indicators involve typical volatile components. The predictive correlation coefficient was greater than 0.98. The ΔG/B value of the fluorescent paper-based sensor can deliver real-time information on the condition of the chicken breast to users (manufacturers, merchants, and consumers).
Conclusion
In order to develop an anti-interference, intuitive, and sensitive intelligent label for monitoring the freshness of chicken breast, a ratiometric fluorescent ammonia gas sensor was constructed in this study. First, the two fluorescent materials, CNQDs and Zn(PA)(BPE), form a relatively stable composite through electrostatic interaction. When the composite is excited at 365 nm, its fluorescence gradually changes from blue to green as the ammonia concentration increases. This fluorescent probe can detect NH₃ in the liquid phase with a linear range of 0-3 mM and an LOD of 10.4 μM. Second, in order to judge the freshness of chicken breast, the composite was fabricated into fluorescent paper-based sensors to detect gaseous ammonia, with an LOD of 288 nM. The fluorescent paper-based sensors were successfully employed as a low-cost, highly sensitive, NH₃-responsive intelligent tagging system for real-time monitoring of chicken breast freshness, which was verified against the standard food-monitoring method, TVB-N. The correlation between the TVB-N concentration and the ΔG/B values of the fluorescent paper-based sensors was also established, with a predictive correlation coefficient above 0.98. These fluorescent paper-based sensors are of great significance for ensuring food safety and have great practical application value in the food industry.
; E. S. Zhang, Hou, Yang, Zou, & Ju, 2020). Zhang et al. (J. Zhang, Xu, Shi, & Yang, 2021) proposed a highly selective NH₃ detection strategy based on N,S co-doped carbon dots (N,S-CDs). The prepared N,S-CDs exhibited excellent photoluminescence performance and fluorescence stability, and showed fluorescence quenching over a wide linear range in the presence of 2-80 mmol/L NH₃. Sun et al. (L. Sun, Rotaru, & Garcia, 2022) designed a new high-performance colorimetric gas-sensing material based on a porous Fe(II) complex for detecting NH₃; it performs simple color recognition through smartphones for real-time monitoring and in situ evaluation of meat freshness. Jia et al. (R. Jia, Tian, Bai, Zhang, Wang, &
Fig. 4. (a) Fluorescence intensity of complexes at different pH; (b) HOMO and LUMO energy level orbitals of the complex and of ammonia; (c) CDs emission spectra and Zn(PA)(BPE) absorption spectra; (d) Fluorescence intensity change of the complex over the 0-50 mM ammonia concentration range; (e) Standard curve between ammonia concentration and fluorescence intensity ratio.
Fig. 5. (a) Sensor fluorescence intensity over time at different ammonia concentrations; (b) Ammonia gas reactor; (c) Sensor fluorescence change diagram over the 0-1.29356 μM ammonia concentration range; (d) Changes in fluorescence intensity of sensors in different volatile gas environments (5 μM); (e) Standard curve between ammonia concentration and sensor ΔG/B.
Fig. 6. (a) Color evolution of composite fluorescence paper-based sensors to monitor fresh chicken breast deterioration at 4 °C; (b) TVB-N content in chicken breast at 4 °C; and (c) ΔG/B values of the fluorescence paper-based sensors used to monitor fresh chicken breast spoilage at 4 °C. | 2023-12-11T16:07:48.708Z | 2023-12-01T00:00:00.000 | {
"year": 2023,
"sha1": "5a02cda985ea90089020abf8b205bed4fc3b16a6",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.fochx.2023.101054",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "84c9c51290583a31799011d0626c4349771540b9",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
246979689 | pes2o/s2orc | v3-fos-license | A Case of Peritoneal Dialysis-Related Peritonitis Caused by Ewingella americana
Peritoneal dialysis (PD)-related peritonitis is a frequent complication. PD units should be aware of all possible pathogens and share their experience regarding prevention and optimal management. Uncommon bacteria, a special group with growing incidence in PD practice, may require particular considerations. A case of peritonitis due to Ewingella americana, a rare human pathogen, is reported, with a favorable outcome. To date, only three other cases have been described in the literature. New evidence is necessary for a better understanding of this pathogen and its consequences in the PD modality.
Introduction
Peritonitis is a frequent and potentially serious complication of peritoneal dialysis (PD), directly related to adverse outcomes, including technique failure and mortality [1,2]. Prevention of PD-associated peritonitis should, therefore, be a focus of every PD unit. Knowing the source of peritonitis, including transmission patterns of pathogens, is essential for a personalized approach when it comes to retraining the patient after an infection. For example, coagulase-negative staphylococcal species and Staphylococcus aureus, known colonizers of human skin, are responsible for the majority of PD-related peritonitis cases [3,4]. Atypical organisms, on the contrary, show increasing relevance in our daily practice and must be handled with care, particularly in patients from impoverished or rural environments, where the home environment takes on special importance.
Case Description
A 45-year-old female patient on PD for the past 2 years was admitted to the hospital with diffuse abdominal pain and vomiting. She lived in a rural area, and her background was relevant for end-stage renal disease due to IgA nephropathy. Presently, she was on automated PD (APD), and her previous PD history included one peritonitis episode due to Streptococcus salivarius, with a favorable outcome after a course of intravenous vancomycin and one chronic exit-site infection requiring removal of the PD catheter and temporary transition to hemodialysis before a new catheter could be safely inserted.
On admission, she was afebrile (tympanic temperature 36.4°C), with stable vital signs (blood pressure 127/77 mmHg and pulse 83/min). Physical examination revealed abdominal tenderness without rebound discomfort, and the peritoneal dialysate was hazy on macroscopic observation. Inspection of the exit site did not show signs of infection. Laboratory workup showed a white blood cell (WBC) count of 14.1 × 10⁹/L (12.3 × 10⁹/L neutrophils) and C-reactive protein (CRP) of 225.12 mg/L, and peritoneal dialysate analysis revealed a WBC count of 7261 cells/μL with polymorphonuclear predominance (6252 cells/μL). A diagnosis of peritonitis was established, and empirical treatment with intravenous vancomycin and intraperitoneal (IP) ceftazidime was started (Table 1).
Samples of dialysate were obtained and sent to the microbiology department for analysis. After 48 hours of incubation, Gram-negative bacilli were detected on Gram staining. Identification of the bacteria was initially performed using the conventional VITEK 2™ automated system and later confirmed by MALDI-TOF MS, with both methods identifying the bacilli as Ewingella americana. Blood cultures were negative. Antimicrobial susceptibility testing was performed and found ampicillin, ceftazidime, trimethoprim/sulfamethoxazole, and ciprofloxacin to be effective against this microorganism. After these results, vancomycin was discontinued, and the patient completed a 3-week course of IP ceftazidime, with total recovery.
When questioned about her PD technique, the patient denied shortcuts or inadequate hygiene but reported using water from a nearby fountain as her domestic water supply.
Discussion
Ewingella americana was first described in 1983 by Grimont et al., and its generic name honors the American bacteriologist William Ewing, while the species name refers to the American source of the clinical isolates described. It is a rare member of the order Enterobacterales and the only known species in the genus. The rarity of reported infections in humans raised initial doubts as to its true pathogenicity. However, though sparse and scattered in time, increasing reports have confirmed clinical infections due to E. americana in multiple contexts, including bacteremia [6][7][8][9], pneumonia [10], conjunctivitis [11,12], Waterhouse-Friderichsen syndrome [13], and peritonitis [14][15][16]. Susceptible populations include immunocompromised patients, but previously healthy patients have been described too.
A recent review by Khurana et al. [16] found only three reported cases of peritonitis secondary to E. americana. To the best of our knowledge, this is the fourth case of peritonitis caused by this organism worldwide and the first ever reported in Portugal. According to Khurana et al., all three previous cases occurred in female patients, as in our case; however, our patient is considerably younger. The isolates in the previous cases were found to be nonsusceptible to commonly used empiric antibiotics, such as cephalosporins, unlike in our case; nonetheless, all patients had a favorable outcome, without the need for catheter removal.
Despite not being a recently discovered organism, little is known about its natural habitat. Available data from case reports suggest that E. americana survives without significant nutritional requirements and preferably grows at 4°C. It was also proposed in two case reports that water could be a reservoir for this pathogen [14,16]. Similarly, in our case, although the source of this Gram-negative microorganism remains undetermined, we may presume that the use of contaminated water and a break in sterile technique could help explain this infection. As stated before, prevention of PD-associated peritonitis is a key feature of patient management, and proper care of the catheter exit site plays a pivotal role in prevention. The patient's previous PD history was suggestive of a precarious technique, which supports this assumption.
Conclusion
We describe a rare case of peritonitis due to Ewingella americana in a patient on PD, the first ever reported in Portugal and the fourth worldwide. To date, there is limited evidence concerning the natural habitat of this organism and its clinical significance in humans. Still, available reports point to a nonaggressive and treatable infection, with a favorable outcome. Future studies are needed to clarify the clinical potential of E. americana and its ecology, including the possible role of contaminated water as a source of this pathogen.
Data Availability
The literature used to support the findings of this case report is included within the article.
Ethical Approval
Hospital Geral de Santo António does not require ethical approval for reporting individual cases or case series.
Consent
Written informed consent was obtained from the patient for their anonymized information to be published in this article.
Conflicts of Interest
The authors declare no conflicts of interest. | 2022-02-20T16:12:36.455Z | 2022-02-18T00:00:00.000 | {
"year": 2022,
"sha1": "94d251d98e8c86f8df100a95c50127e537344141",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/criid/2022/5607080.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5b5a3ac9c1b887cfc87ca2442c8937419bc82431",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
13832716 | pes2o/s2orc | v3-fos-license | Improved Multiscale Entropy Technique with Nearest-Neighbor Moving-Average Kernel for Nonlinear and Nonstationary Short-Time Biomedical Signal Analysis
Analysis of biomedical signals can yield invaluable information for prognosis, diagnosis, therapy evaluation, risk assessment, and disease prevention which is often recorded as short time series data that challenges existing complexity classification algorithms such as Shannon entropy (SE) and other techniques. The purpose of this study was to improve previously developed multiscale entropy (MSE) technique by incorporating nearest-neighbor moving-average kernel, which can be used for analysis of nonlinear and non-stationary short time series physiological data. The approach was tested for robustness with respect to noise analysis using simulated sinusoidal and ECG waveforms. Feasibility of MSE to discriminate between normal sinus rhythm (NSR) and atrial fibrillation (AF) was tested on a single-lead ECG. In addition, the MSE algorithm was applied to identify pivot points of rotors that were induced in ex vivo isolated rabbit hearts. The improved MSE technique robustly estimated the complexity of the signal compared to that of SE with various noises, discriminated NSR and AF on single-lead ECG, and precisely identified the pivot points of ex vivo rotors by providing better contrast between the rotor core and the peripheral region. The improved MSE technique can provide efficient complexity analysis of variety of nonlinear and nonstationary short-time biomedical signals.
Introduction
Biomedical signals are characteristic of their corresponding physiological events and carry specific signatures [1]. Consequently, deciphering signal characteristics provides information regarding underlying processes that can be useful to inform or guide therapy. Most physiological processes are characterized by specific signals that reflect the nature and activities of such processes, which can contain biochemical, electrical, or physical information coming from molecular, cellular, organ, or systemic level sources [2]. Hence in a disease state, alterations to these physiological processes yield signal signatures that are different in some aspects from the normal state [1]. Electrocardiogram (ECG), electroencephalogram (EEG), electromyogram (EMG), electroretinogram, and so on are some examples of electrical signals that are commonly acquired for risk assessment, prognosis, diagnosis, therapy evaluation, and prevention of various diseases [3].
Biomedical signal analysis requires accurate quantification of the system state to distinguish between normal and pathological function or to predict the future state of the system using only short time series data that may only last a few seconds. Signal analysis is typically complicated by contamination with electromagnetic interference, power line interference, zero mean white noise, pink noise, brown noise from electrode movement, and other random noise [2].
Many biomedical signals are captured for only 3-8 s and are therefore short nonstationary and/or nonlinear time series, which prevents ordinary biomedical analysis algorithms from completely capturing their intrinsic complexity. For instance, Shannon entropy (SE) is commonly used for biomedical complexity analysis of EEG and ECG recordings [3][4][5]. However, a major limitation of the SE approach is that, for nonstationary and/or nonlinear time series data, it works well for long data segments but is not robust for short ones. Several other symbolic dynamic approaches that use various entropy-based measures, such as Kolmogorov entropy, spectral entropy, wavelet entropy, permutation entropy, approximate entropy, and sample entropy, have been proposed to capture the intrinsic dynamics of nonstationary time series data and quantify their complexity [6][7][8][9][10][11][12]. However, it has been shown that these various entropy-based methods are efficient only for long time series and do not completely capture the complexities of shorter nonstationary time series data [13].
Recently, a multiscale entropy (MSE) technique was proposed with coarse-grained time-scaling procedures to offer a more robust determination of the complexity of time series data [14]. Such coarse-graining procedures may result in invalid entropy estimates for shorter time series, and this limitation was addressed by implementing a moving-average time series estimate [15]. However, the moving average in prior work was performed only in the forward direction, which can lead to significant underestimation of the complexity information present in the time series data [15]. Several variants of MSE have thus been proposed [16], but all of them provide only slight modifications of the original technique [15] and specifically depend on a one-sided moving average, which yields biased entropy estimates over different time scales. Several variants of MSE have been applied to synthetic biomedical test datasets without a rigorous demonstration of their feasibility for a biomedical application [17][18][19]. Therefore, the use of entropy-based techniques for rigorous complexity analysis of biomedical signals in normal and diseased states has been very limited. Several researchers have used the MSE technique for a variety of cardiac signal analyses [20][21][22][23][24][25], showing some promise for complexity assessment to aid diagnosis. However, the authors identify a major limitation of these MSE variants in the systematic bias of the one-sided average, which may have affected the results. This bias becomes extremely important to consider because most biological signals embed only subtle changes in short time series data, changes that may have significant diagnostic potential and could be lost with such bias.
The challenge of short time series data analysis comes from the fact that the complexity of the data may not be directly apparent in the raw signal. Previously developed MSE techniques introduced time-averaged time series over multiple time scales for short time series analysis [15]. However, forward averaging introduces a systematic bias into the complexity estimation. To overcome this limitation, we propose a nearest-neighbor moving-average kernel to better capture the complexity of nonlinear, nonstationary short time series data. We introduce the concept of "memory" by taking both past and future time series values into account while computing the nearest-neighbor moving average. The time-scale factor "τ" therefore represents time scaling in both the forward and reverse directions with respect to a particular time point. Once this new time series is derived, the MSE estimate can be obtained by calculating the entropy of the new time series over multiple time scales to fully capture the intrinsic complexity of nonlinear and nonstationary time series data.
In this work, we propose an improved MSE technique that includes a significant and robust modification of the previously described MSE techniques. Specifically, we propose computing the new time series with a nearest-neighbor moving-average kernel that uses information from "past" and "future" values to accurately capture the intrinsic dynamics of the short time series. This modification allows robust analysis of nonlinear and nonstationary time series.
The efficacy and robustness of the improved MSE technique will be validated by performing noise analysis with respect to white, pink, and brown noise, which are commonly present in cardiac signals such as the ECG. Since SE has been used widely for biomedical signal complexity analysis so far, we will use it as a "gold standard", and we will compare the performance of the novel MSE technique with SE. We further hypothesized that the improved MSE technique will robustly quantify the complexity of nonlinear and nonstationary short time series data. We tested this hypothesis by applying the improved MSE technique for the analysis of the two physiological applications: (i) discrimination between normal sinus rhythm (NSR) and atrial fibrillation (AF) using a single-lead ECG and (ii) the accurate identification of the pivot point of rotors, which are potential ablation targets for AF and other arrhythmias.
An Improved MSE Technique with Nearest-Neighbor Moving-Average Kernel
The improved MSE algorithm consists of several steps, as described below. Let $x = \{x_1, x_2, x_3, \ldots, x_N\}$ represent the electrogram time series of length N.
(1) The nearest-neighbor moving-averaged time series z^(τ) is computed for the chosen time-scale factor τ, as illustrated in Figure 1, using the following equation:

$$z_j^{(\tau)} = \frac{1}{2\tau + 1} \sum_{i=j}^{j+2\tau} x_i, \qquad 1 \le j \le N - 2\tau \tag{1}$$

so that each new point averages a raw sample together with its τ nearest neighbors on both sides. Figure 1 shows the schematic of this nearest-neighbor moving-window-averaging approach used to obtain the new time series.
(2) Template vectors y_k^m of dimension m and delay δ are constructed from z^(τ) (see Figure 1) at each specific τ as follows:

$$y_k^m = \left\{ z_k^{(\tau)}, z_{k+\delta}^{(\tau)}, \ldots, z_{k+(m-1)\delta}^{(\tau)} \right\} \tag{2}$$

(3) The distance d_ij^m between each pair of template vectors {y_i^m, y_j^m} is calculated using the infinity (Chebyshev) norm:

$$d_{ij}^m = \max_{0 \le l \le m-1} \left| z_{i+l\delta}^{(\tau)} - z_{j+l\delta}^{(\tau)} \right| \tag{3}$$

where 1 ≤ i, j ≤ N − mδ and j > i + δ. In this manuscript, the value of r is chosen to be 0.2 times the standard deviation of the raw time series x, and the delay factor δ is chosen to be 1. (4) A pair of template vectors is counted as matched when d_ij^m < r, and the total number of matched template vector pairs is denoted by n(m, δ, r).
Steps 2-4 are then repeated for dimension m + 1, and the total number of matched template vector pairs is denoted by n(m + 1, δ, r).
Finally, the improved MSE is calculated as:

$$\mathrm{MSE}(x, m, \delta, r) = -\ln \frac{n(m+1, \delta, r)}{n(m, \delta, r)} \tag{5}$$
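The study's processing was implemented in custom MATLAB software; the Python function below is a minimal sketch of steps 1-5 as written above (the centered kernel and the Chebyshev distance follow the text, and the defaults m = 2, δ = 1, and r = 0.2 × SD follow the manuscript, but the code itself is an illustration, not the authors' implementation):

```python
import numpy as np

def improved_mse(x, m=2, tau=1, delta=1, r_factor=0.2):
    """Improved multiscale entropy (steps 1-5 above) using a
    nearest-neighbor moving-average kernel of half-width tau."""
    x = np.asarray(x, dtype=float)
    r = r_factor * np.std(x)                    # tolerance from the raw series

    # Step 1: centered moving average, tau neighbors on each side (Eq. (1))
    kernel = np.ones(2 * tau + 1) / (2 * tau + 1)
    z = np.convolve(x, kernel, mode="valid")    # length N - 2*tau

    def matched_pairs(dim):
        # Step 2: template vectors of dimension `dim` with delay delta (Eq. (2))
        n_vec = len(z) - (dim - 1) * delta
        templates = np.array([z[k:k + dim * delta:delta] for k in range(n_vec)])
        # Steps 3-4: count pairs (j > i + delta) with Chebyshev distance < r (Eq. (3))
        count = 0
        for i in range(max(n_vec - delta - 1, 0)):
            d = np.max(np.abs(templates[i + delta + 1:] - templates[i]), axis=1)
            count += int(np.sum(d < r))
        return count

    n_m, n_m1 = matched_pairs(m), matched_pairs(m + 1)
    # Step 5: MSE = -ln(n(m+1)/n(m)); undefined when no matches exist (Eq. (5))
    return -np.log(n_m1 / n_m) if n_m > 0 and n_m1 > 0 else float("nan")
```

For example, the MSE of a 10 s, 250 Hz ECG segment at scale factor τ = 3 would be obtained as `improved_mse(ecg, m=2, tau=3)`.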
Materials and Methods
3.1. Noise Analysis. We evaluated the performance of the improved MSE technique and compared it with the performance of the SE approach with respect to the most common sources of noise: (i) zero-mean white noise, (ii) pink noise, which has the inverse frequency response (1/f), and (iii) brown noise, which has the inverse frequency squared response (1/f²) [26,27].
White, pink, and brown noises were simulated in MATLAB™, with 10,000 sample points. Ten short time series (TS) versions of these data were created with 250, 500, 750, 1000, 2000, 4000, 5000, 6000, 8000, and 10,000 samples. MSE was calculated via (5) for each noise using different time-scale factors "τ" from 1 to 20 over varying time series lengths. Normalized MSE (for τ = 1, 2, 3, 5) and SE were calculated by dividing the MSE (and SE) values by the maximum value of MSE (and SE) across varying time series. MSE and SE results for τ > 5 are quantitatively similar to that of τ = 5 and therefore are not shown.
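A sketch of this procedure in Python, under the assumption that pink and brown noise are obtained by FFT spectral shaping of white noise, and reusing the `improved_mse` function sketched earlier (the original study used MATLAB):

```python
import numpy as np

rng = np.random.default_rng(0)

def colored_noise(n, exponent):
    """Gaussian noise with power spectrum ~ 1/f**exponent
    (0 = white, 1 = pink, 2 = brown), via FFT spectral shaping."""
    spec = np.fft.rfft(rng.standard_normal(n))
    f = np.fft.rfftfreq(n)
    f[0] = f[1]                       # avoid division by zero at DC
    spec *= f ** (-exponent / 2.0)    # amplitude ~ f^(-exponent/2)
    return np.fft.irfft(spec, n)

lengths = [250, 500, 750, 1000, 2000, 4000, 5000, 6000, 8000, 10000]
for exponent, name in [(0, "white"), (1, "pink"), (2, "brown")]:
    noise = colored_noise(10000, exponent)
    mse = [improved_mse(noise[:n], m=2, tau=1) for n in lengths]
    print(name, np.round(mse, 3))
```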
Description of Datasets for Noise Analysis.
To test the robustness of the improved MSE technique in the presence of various noises, we used (1) a simplified non-physiological sinusoidal wave and (2) a physiological ECG signal, which is the most commonly used time series signal for the diagnosis of various diseases of the heart.
(1) A sinusoidal wave with single frequency of 10 Hz and a multifrequency sinusoidal wave with superposition of 2, 5, 10, 15, and 20 Hz frequencies were used. Ten short time series versions of the data were simulated in MATLAB.
(2) Noise-free flat baseline ECG was obtained using an electronic ECG simulator with 10,000 sample points at 250 Hz sampling rate. Ten short time series versions of these data were created.
White, pink, and brown noises were added to the noise-free signals, and the analysis was performed as described in Section 3.1 to compare the performance of the MSE and SE techniques.
NSR and AF ECG Discrimination Analysis.
Publicly available ECG datasets were obtained from the MIT-BIH Physionet database during NSR and AF [28]. Ten NSR and ten AF datasets of 10-second duration and 250 Hz sampling rate were used for analysis. The signals were not preprocessed for noise removal, and τ = 3 was used for the MSE calculation. NSR and AF datasets were compared using custom MATLAB software. A Mann-Whitney test with a p value of 0.01 was used for testing statistical significance and was performed using OriginPro software (OriginLab Corporation, Northampton, Massachusetts).

Figure 1: Schematic illustration of the production of the nearest-neighbor moving-average time series with scale factor τ = 1 for the MSE algorithm. Blue squares represent raw time series data, and red dots represent the nearest-neighbor moving-averaged time series from which MSE is obtained. Brown squares represent the moving-window-averaging kernel for the second raw time point (x₂) that averages one neighbor on both sides with τ = 1 to produce the first new time series point z₁^(1). Similarly, the green square produces z₂^(1), and so on (orange square), with the blue square producing the last time series point z_{i+1}^(1).
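The statistical comparison described above could be sketched in Python with SciPy instead of OriginPro (the per-dataset MSE values below are placeholders, not the study's data):

```python
from scipy.stats import mannwhitneyu

# Placeholder per-dataset MSE values (tau = 3) for 10 NSR and 10 AF ECGs
mse_nsr = [0.42, 0.38, 0.45, 0.40, 0.36, 0.44, 0.39, 0.41, 0.37, 0.43]
mse_af  = [0.88, 0.95, 0.81, 0.90, 1.02, 0.86, 0.93, 0.79, 0.97, 0.85]

stat, p = mannwhitneyu(mse_nsr, mse_af, alternative="two-sided")
print(f"U = {stat}, p = {p:.4f}; significant at p < 0.01: {p < 0.01}")
```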
Optical Mapping Data from Isolated Rabbit Hearts.
Optical mapping movies during a single rotor or figure-of-8 reentry were obtained from an isolated rabbit heart by inducing ventricular tachycardia via burst pacing as described previously [29,30]. The movies were 3 seconds long, acquired at 600 frames per second temporal resolution and 64 × 64-pixel spatial resolution. Two-dimensional (2D) MSE maps were generated for both the single rotor and the figure-of-8 reentry using the MSE values with scale factors τ = 1, 2, and 3 at each pixel location across all frames. For comparison purposes, the 2D SE map was computed. A custom MATLAB (MathWorks Inc., Natick, MA) program was developed for all processing. Supplemental videos SV1 and SV2 are provided for reference, showing the phase movies of the single and double rotor, respectively.

Figure 2 shows the robustness of the MSE and SE techniques with respect to different types of noise: white (a), pink (b), and brown (c). For white noise, MSE is expected to decrease with increasing scale factor [14][15][16] because of the nearest-neighbor averaging, which is seen in the middle panel of Figure 2(a). For pink noise, which has a 1/f response, higher MSE than white noise is expected, with a constant value across multiple time scales [14]. As expected, MSE levels out for time series lengths above 1000 sample points across the different time scales in Figure 2, whereas SE is very small for short time series and only gradually increases with length; MSE retains high values even for the shortest time series, thereby capturing the complexity better than SE. Overall, the results indicate that if at least 1000 sample points are available, MSE can capture the complexity robustly compared to SE. For most physiological monitoring, a 250 Hz sampling frequency is common, which indicates that 4 s of short time series data should be sufficient for robust analysis using MSE.

Figure 3 demonstrates the robustness of MSE compared with the SE technique for a single-frequency sinusoidal wave in the absence and presence of different noises, showing MSE and SE as a function of TS length. Our results suggest that MSE captures the complexity of sinusoidal waves better than SE in the presence of these noises. Figures 4 and 5 show the results for the multifrequency sinusoidal wave and the noise-free flat ECG, respectively. Similar to the response seen in Figure 2 for raw noise, panels (b)-(d) of Figures 4 and 5 demonstrate that SE is very small for short TS and gradually increases with increasing TS length, while MSE has high values even for the shortest TS, thereby capturing the complexity better than SE. The results demonstrate the efficacy of the novel MSE technique in quantifying the complexity of complex time series data in the presence of noise better than the commonly used SE approach.

Figure 6 shows the raw ECG with NSR (a) and AF (b). Note that visual inspection of these traces cannot be used to correctly discriminate between NSR and AF. Figure 6(c) shows the boxplot of MSE values for the 10 AF and 10 NSR datasets, demonstrating statistically significant differences (p < 0.01) and therefore accurate discrimination between NSR and AF. As observed in Figures 6(a) and 6(b), it is visually difficult to distinguish NSR from AF on the ECG, as the chaotic nature of AF manifests itself in small morphological disturbances that require robust algorithms to effectively capture the complexity. MSE robustly discriminates NSR from AF.
Identification of Pivot Point of the Rotor.
A snapshot of a phase movie of a single rotor in an isolated rabbit heart is shown in Figure 7(a). In this movie, different colors represent different phases of the action potential, and the pivot point of the rotor can be easily identified as the point where different phases converge. Corresponding voltage traces from the core (pixel "1") and the periphery of the rotor (pixel "2") are also shown. At the core of the rotor, a broader distribution of voltage amplitudes occurs due to the chaotic nature of the rotor pivot point, and therefore a higher MSE value was expected; at the periphery of the rotor, more uniform electrical activity is observed, and hence a lower MSE value was expected. Figure 7(b) shows the 2D MSE maps for three time-scale factors, τ = 1, 2, and 3. Note that the MSE technique can accurately identify the location of the pivot point of the rotor for each τ. As seen from (b), the pivot point has higher MSE values than the periphery, thereby enabling its precise localization, and higher values of τ result in better contrast between the rotor core and the periphery. Figure 7(c) shows the normalized 2D SE map of the same single rotor. It is important to note that although SE can correctly identify the pivot point of the rotor, the contrast between SE values at the core and the periphery is low, which challenges accurate identification. Similar results are observed for the figure-of-8 reentry data in Figure 8, where the pivot points are identified for each τ and the performance of the MSE technique is much better than that of SE (Figures 8(b)-8(c)). As seen in Figure 7(b), a scale factor of τ = 1 was sufficient to provide the necessary contrast to identify the rotor pivot points, with higher MSE values at the rotor pivot point than in the periphery, while higher scale factor values provided improved contrast. It is interesting to note that at pixel location "1," the rotor meanders to some extent, which is also captured robustly by MSE compared to SE.
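A per-pixel sketch of the 2D MSE mapping in Python, reusing the `improved_mse` function from earlier; the array shape follows the 3 s, 600 fps, 64 × 64-pixel description, and the file name is hypothetical (the original processing used custom MATLAB code):

```python
import numpy as np

def mse_map(movie, m=2, tau=3):
    """Per-pixel MSE map from an optical mapping movie of shape
    (frames, rows, cols); higher values are expected at the rotor core."""
    _, rows, cols = movie.shape
    out = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            out[i, j] = improved_mse(movie[:, i, j], m=m, tau=tau)
    return out

# Example: a 3 s movie at 600 frames/s with 64 x 64 pixels
# movie = np.load("rotor_movie.npy")    # shape (1800, 64, 64)
# pivot_map = mse_map(movie, tau=3)     # pivot point = region of maximal MSE
```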
Discussion
In this study, we developed an improved MSE technique with a nearest-neighbor moving-average kernel and demonstrated that it can be successfully used for the analysis of nonlinear and nonstationary short time series biomedical signals. We demonstrated that for both single-frequency and multifrequency sinusoidal waves with added noise, SE underestimated the complexity for short time series in all three noise cases and performed better at longer time series lengths. However, MSE was robust even for shorter time series with 1000 sample points in the presence of the three types of noise. The results suggest the value of the MSE technique in analyzing complex short time series physiological signals that can be contaminated with these noises, and its use for the prognosis and diagnosis of various disease states.
Noise-Free ECG Analysis.
ECG analysis is very commonly used for a wide variety of cardiac conditions to yield information regarding the state of the heart. Since most remote and ambulatory real-time ECG monitoring presents at most 3-5 seconds of ECG data, conventional complexity analysis methods such as SE are limited. However, we demonstrated that MSE robustly estimated the complexity of short time series ECG data even in the presence of noise.
Discrimination between NSR and AF.
AF is the most common sustained cardiac arrhythmia; it is associated with increased risk of stroke, heart failure, and death, and affects more than 2.3 million people in the United States and over 30 million people worldwide [31]. Although the persistent form of AF can be detected relatively easily, detecting paroxysmal AF is often a challenge, since continuous monitoring is required, which in turn requires methods to discriminate NSR from AF through large quantities of data [32].
Although there are several methods available for NSR and AF discrimination, they face limitations in detecting AF with high sensitivity and specificity using short-time ECG data [32][33][34]. The major issues with these approaches are that they often distort the ECG through several preprocessing and filtering steps, they do not provide reliable discrimination using short ECG time series data, and many of them lack real-time capability, which makes it difficult to trust the data for diagnosis and treatment. Here, we demonstrated that the improved MSE technique can robustly discriminate AF from NSR using a single-lead ECG. The results motivate the application and use of this MSE technique in many hand-held and remote ECG monitors to autodetect AF.
Identification of Pivot Points of Rotors.
Catheter ablation to treat paroxysmal AF has been shown to be up to 87% successful using pulmonary vein (PV) isolation [35][36][37][38][39][40]. However, ablation is challenging in patients with persistent AF, since the location of the triggers is unclear, and it has been shown that triggers commonly arise outside the PVs. Recent research suggests that AF ablation has a success rate of 28% in persistent AF, rising to 51% after multiple repeat procedures [41].
It is believed that rotors are caused by reentrant mechanisms that might be responsible for maintaining persistent AF. Identification of the rotor pivot point as a suitable ablation target has been a research focus for many investigators. However, these investigations are challenged by short time series data in the clinical setting. Here, we used optical mapping data in which rotors can be clearly visualized, and we demonstrated that the improved MSE technique can precisely identify pivot points in both single rotor and figure-of-8 reentry, thus offering a robust mapping tool to guide identification of AF ablation targets. In the clinical setting, electrogram recordings are frequently limited to 2.5-5-second segments due to the need for frequent catheter repositioning during the procedure, challenging conventional mapping approaches in precisely identifying substrates in AF and other arrhythmias.
Limitations.
A limitation of the improved MSE technique is the need to select an appropriate time-scale factor τ. Since nearest-neighbor moving averaging is employed, large time scales will cause excessive smoothing of the data, which may lead to loss of some complexity information; therefore, caution should be used in the choice of scaling factor. The results from this study suggest that a scale factor of τ = 3 may be a reasonable starting point for many applications, but clinical validation is needed. In addition, our analysis was limited to a relatively small number of datasets. More rigorous evaluation using a larger number of datasets is critical in order to validate these findings for ECG discrimination as well as for rotor identification. Finally, we did not specifically evaluate ex vivo examples of AF, but only of more organized cardiac arrhythmias, to determine critical rotor elements. Given the higher-order complexity associated with AF, further study is needed in experimental models of AF to validate the use of MSE for characterization of rotors in these arrhythmia examples.
Conclusions
An improved MSE technique with a nearest-neighbor moving-average kernel was developed to eliminate the systematic bias of one-sided averaging. The results demonstrate that the MSE technique can be successfully used for the analysis of nonlinear and nonstationary short time series physiological data. Compared to the commonly used SE approach, MSE robustly estimated complexity from short time series data contaminated with various noises, such as white, pink, and brown noise. MSE discriminated NSR and AF on single-lead 10 s ECG recordings without any preprocessing steps and precisely identified the pivot points of rotors from 3 s optical mapping data from isolated rabbit hearts by providing better contrast between the rotor core and the peripheral region than the SE approach. Wide-ranging application of this technique to a variety of time series data can open new avenues for analysis and interpretation.
Future Work
Future work will focus on further validating the efficacy of NSR and AF discrimination on a larger dataset. Also, the MSE algorithm will be validated with a variety of rotor data for accurate identification of ablation targets using both optical mapping and intracardiac electrograms that can guide patient-specific mapping and ablation.
Ethical Approval
No animal studies were performed as part of this study. Data from previous animal studies were used, for which all applicable international, national, and/or institutional guidelines for the care and use of animals were followed. This article does not contain any studies with human participants performed by any of the authors.
Conflicts of Interest
S. P. Arunachalam declares that he has no conflict of interest. S. Kapa declares that he has no conflict of interest. S. K. Mulpuru declares that he has no conflict of interest. P. A. Friedman declares that he has no conflict of interest. E. G. Tolkacheva declares that she has no conflict of interest. | 2018-05-03T02:53:34.281Z | 2018-03-05T00:00:00.000 | {
"year": 2018,
"sha1": "090895c38ef86c9e0ea6abf62022f088af32193f",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/jhe/2018/8632436.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "742615d58332947a4ca418096534f191367a9f39",
"s2fieldsofstudy": [
"Computer Science",
"Engineering",
"Medicine"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
271570362 | pes2o/s2orc | v3-fos-license | Choroidal thinning in myopia is associated with axial elongation and severity of myopic maculopathy
High myopia can lead to pathologic myopia and visual impairment, whereas its causes are unclear. We retrospectively researched high myopia cases from patient records to investigate the association between axial elongation and myopic maculopathy. Sixty-four eyes were examined in patients who visited the department between July 2017 and June 2018, had an axial length of 26 mm or more, underwent fundus photography, and had their axial length measured twice or more. The average axial length was 28.29 ± 1.69 mm (mean ± standard deviation). The average age was 58.3 ± 14.4 years old. Myopic maculopathy was categorized as mild (grades 0 and 1) and severe (grades 2, 3, and 4). The severe group had longer axial lengths than the mild group (P < 0.05). Moreover, the severe group exhibited thinner choroidal thickness than the mild group (P < 0.05). When subjects were grouped by axial elongation over median value within a year, the elongation group showed thinner central choroidal thickness than the non-elongation group (142.1 ± 91.9 vs. 82.9 ± 69.8, P < 0.05). In conclusion, in patients with high myopia, the severity of maculopathy correlated with choroidal thickness and axial length. Thinner choroidal thickness was associated with axial elongation based on the baseline axial length.
defined as having pathologic myopia 8. The risk of myopic maculopathy increases exponentially with the severity of myopia 9. The Tajimi Study in 2006 revealed that myopic maculopathy accompanying pathologic myopia was the leading cause of monocular blindness, accounting for 22.4% of blindness cases 10.
The choroid, located between the retinal pigment epithelium and the sclera, plays roles in suppressing and absorbing light scatter, providing nutrients to the outer retina, and regulating eye temperature. When the retina detects defocus signals, the choroidal thickness modulates to adjust the focus of the eye to the image 11. It is known that the choroid becomes thinner with increasing refractive error and axial elongation in myopia 12. Furthermore, several studies have reported choroidal thickness to be associated with high myopia or myopic maculopathy 13-16. However, risk factors for the progression of myopic maculopathy have not been clearly identified. To adequately explain the relationship between choroidal thickness and myopic maculopathy, more research is needed.
In this study, we extracted cases of high myopia from the medical records of patients who visited the Keio University Hospital Department of Ophthalmology and investigated the relationships among axial length, myopic maculopathy category in the International Classification of Myopic Maculopathy, and choroidal thickness.
Patient characteristics of the entire cohort
Patient characteristics are presented in Table 1. The total number of cases was 64 eyes, comprising 32 eyes of male patients and 32 eyes of female patients; one eye was included per case. The mean age was 58.3 ± 14.4 years, with the youngest patient at 22 years and the oldest at 81 years. The average axial length was 28.29 ± 1.69 mm, and the mean choroidal thickness was 111.1 ± 85.7 µm; choroidal thickness was measured at the central choroid. The average observation period for axial length was 4.3 ± 2.2 years, with the longest being 7.3 years and the shortest being 0.2 years. All patients were phakic, and no cases underwent surgical interventions during the observation period.
The degree of myopic maculopathy is correlated with choroidal thickness and axial length.
To investigate the relationship between the degree of myopic maculopathy and choroidal thickness as well as axial length, the myopic maculopathy categories were classified: categories 0 and 1 were considered the mild group, while categories 2, 3, and 4 were considered the severe group (Table 2). The axial length was 27.65 ± 1.15 mm in the mild group and 29.42 ± 1.90 mm in the severe group, significantly longer in the severe group (Fig. 1A). Furthermore, the choroidal thickness was 153.6 ± 78.9 µm in the mild group and 35.7 ± 20.5 µm in the severe group, with significant thinning observed in the severe group (Fig. 1B).
The association between posterior staphyloma, choroidal thickness, and axial length
A posterior staphyloma is an outpouching of a circumscribed region of the posterior fundus and has been considered a hallmark of pathologic myopia 17. It has been reported that cases with posterior staphyloma have a thinner choroid 18. In this study, 26 eyes did not have a posterior staphyloma, while 38 eyes did. The axial length in the group without posterior staphyloma was 27.58 ± 1.23 mm, whereas in the group with posterior staphyloma it was 28.77 ± 1.80 mm, showing a significantly longer axial length in the group with posterior staphyloma (Fig. 2A). Moreover, the choroidal thickness was 171.7 ± 82.9 µm in the group without posterior staphyloma and 74.4 ± 64.4 µm in the group with posterior staphyloma, indicating significantly thinner choroidal thickness in the group with posterior staphyloma (Fig. 2B).
The relationship between gender, myopic maculopathy, choroidal thickness, and axial length
Being female is considered a risk factor for the progression of myopia 19. Therefore, we investigated the relationship between gender and myopic maculopathy, choroidal thickness, and axial length. The central choroidal thickness in males was 126.4 ± 75.9 µm, while in females it was 95.2 ± 93.4 µm, with no significant difference (Fig. 3A). The axial length was 27.91 ± 1.51 mm in males and 28.66 ± 1.79 mm in females, with no significant difference (Fig. 3B). The annual change in axial length was −7.35 µm in males and 22.06 µm in females, with no significant difference (Fig. 3C).
No significant difference was found in the relationship between the International Classification of Myopic Maculopathy and gender (Fig. 3D).
The relationship between axial length elongation, choroidal thickness, and international classification of myopic maculopathy
To examine the relationship between axial length elongation, choroidal thickness, and the International Classification of Myopic Maculopathy, the cohort was divided into groups based on whether the axial length elongated beyond the median change of 5.27 µm over one year. The group with axial length elongation was labeled the "elongation group," while the rest were categorized as the "non-elongation group." The choroidal thickness was 137.2 ± 82.1 µm in the non-elongation group and 92.9 ± 84.5 µm in the elongation group, with significant thinning observed in the elongation group (Fig. 4A). Regarding axial length elongation over 1 year, there was a change of 8.14 ± 48.41 µm in the mild myopic maculopathy group and 5.94 ± 126.01 µm in the severe myopic maculopathy group, without a statistically significant difference (Fig. 4B).
Factors associated with progression of myopic maculopathy
To investigate factors associated with the progression of myopic maculopathy, logistic regression analysis was conducted. The severity of myopic maculopathy was found to be correlated with choroidal thickness. However, no statistically significant associations were observed between baseline axial length or gender and the progression of myopic maculopathy (Table 3).
Factors associated with axial length elongation
To explore factors associated with axial length elongation, logistic regression analysis was conducted. Axial length elongation was found to be correlated with choroidal thickness and baseline axial length. However, no statistically significant associations were observed between axial length elongation and gender, or between axial length elongation and the International Classification of Myopic Maculopathy category (Table 4).
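A sketch of this kind of logistic regression in Python with scikit-learn (the rows below are hypothetical; the paper's actual covariates were choroidal thickness, baseline axial length, gender, and maculopathy category, with axial elongation beyond the median as the outcome):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical per-eye covariates:
# [choroidal thickness (um), baseline axial length (mm), female (0/1), severe maculopathy (0/1)]
X = np.array([[ 60, 29.5, 1, 1], [180, 27.2, 0, 0], [ 45, 30.1, 1, 1],
              [150, 27.8, 1, 0], [ 80, 28.9, 0, 0], [200, 26.9, 0, 1],
              [ 55, 29.8, 1, 1], [140, 27.5, 0, 0]])
y = np.array([1, 0, 1, 0, 1, 0, 1, 0])  # axial elongation beyond the median (0/1)

model = LogisticRegression(max_iter=1000).fit(X, y)
print("coefficients:", model.coef_)       # sign indicates direction of association
print("odds ratios:", np.exp(model.coef_))
```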
Discussion
Myopic maculopathy is a critical complication of high myopia and can potentially lead to vision impairment and blindness. Choroidal thickness has been reported to be associated with the severity of myopic maculopathy 16. This study confirmed that myopic maculopathy tends to worsen as choroidal thickness decreases. Additionally, it is known that longer axial lengths are associated with thinner choroidal thickness 18. This study also validated this relationship.
While it was known that myopic eyes tend to have thinner choroids, the underlying mechanism was not fully understood. However, experiments using LDL Receptor Related Protein 2 (LRP2) and vascular endothelial growth factor (VEGF) gene-altered mice have shown that inducing choroidal thinning leads to axial elongation. Reduced VEGF derived from the retinal pigment epithelium (RPE) causes choriocapillaris underdevelopment, leading to choroidal thinning, which in turn promotes axial elongation and the development of myopia. Adequate levels of VEGF derived from the RPE are necessary for normal eye development to maintain choroidal thickness. In this study, it was found that myopic maculopathy was correlated with choroidal thickness and that thinner choroids were associated with axial elongation. The blood flow in the choroid has been reported to be associated with choroidal thickness21. Decreased choriocapillaris diameter and density have been found in a myopia animal model22. Additionally, in the study by Jeong et al., bunazosin hydrochloride (BH), an alpha-1-adrenergic blocker that selectively inhibits α1-adrenergic receptors in vascular smooth muscle cells and alleviates vasoconstriction, was shown in animal experiments to suppress choroidal thinning and increase choroidal blood flow23. Further research may prove beneficial in maintaining choroidal thickness, preventing axial elongation, and inhibiting the progression of maculopathy by suppressing the reduction of choroidal blood flow.
Advancements in optical coherence tomography (OCT) have greatly contributed to the diagnosis and treatment of retinal and choroidal diseases. This study demonstrated the potential of predicting axial elongation through choroidal thickness measurements. Moreover, given the relationship between choroidal thickness and myopic maculopathy, measuring choroidal thickness might become a predictive factor for the progression of myopic maculopathy. Considering the rapid advancements in myopia treatment, early detection of such abnormalities in highly myopic eyes is crucial. Based on the results of this study, measuring choroidal thickness using OCT in patients with high myopia could become an important indicator for predicting axial elongation and the progression of myopic maculopathy.
This study has several limitations. First, the number of cases was limited by the requirement of having at least two measurements of axial length. Second, due to the hospital-based nature of the study, most patients who underwent retinal imaging were expected to have some form of retinal pathology, potentially inflating the prevalence of these conditions. Furthermore, because of the hospital-based setting, a certain number of glaucoma patients were assumed to be included, so intraocular pressure was not adopted as an analytical item, as it might have been misleadingly reflected in the multivariate analysis. Third, the measurement time for choroidal thickness, particularly whether AM or PM, varied from case to case and was not unified. Fourth, outdoor activity time and near-work time were unknown in this study. Fifth, per-year elongation was calculated from the degree of elongation over the observational period; because the degree of axial elongation has been reported to differ by season, calculating per-year elongation by multiplying the observed elongation by the reciprocal of the corresponding years could be inaccurate24. Sixth, measuring choroidal thickness was challenging because thin choroids are more common in myopic eyes, and measurement discrepancies were particularly prevalent in cases with thin choroids.
In conclusion, this study revealed that the severity of myopic maculopathy was correlated with choroidal thickness. Furthermore, it was demonstrated that thinner choroids were associated with axial elongation, which may suggest that thinning of the choroid causes axial elongation and, as a result, affects the severity of myopic maculopathy.
Ethical guidelines
This study was retrospective research conducted in a hospital setting. It adhered to the principles of the Declaration of Helsinki, ethical guidelines for medical and health research involving human subjects, and local regulatory requirements. The study was conducted under the approval of all institutional review boards (IRBs) and ethics committees, and was approved by the Keio University School of Medicine IRB (Approval Number: 20180189). Consent was obtained by an opt-out method using a website, which was approved by the IRB.
Participants
Data were collected and analyzed from medical records of patients who visited the Keio University Hospital Department of Ophthalmology between July 1, 2017, and June 30, 2018. Unnecessary imaging or laboratory tests were not performed for the purpose of this study. Refraction and axial length are both indicators of myopia, but refraction can change due to factors such as a history of cataract surgery; therefore, this study used axial length as the indicator of myopia instead of refraction. Participants were selected based on the following criteria: axial length measured to be 26 mm or greater by optical biometry (IOLMaster 500, Zeiss, Jena, Germany), and subjects who underwent ultra-widefield retinal imaging (Optos, Nikon, Japan, or TRC-50DX, Topcon, Japan) capturing 9-directional fundus images, with at least two measurements of axial length taken (Fig. 5). Patients with a history of cataract surgery were excluded.
Data analysis
Patient information and medical history data were retrieved from medical records. The classification of myopic maculopathy was based on the diagnostic guidelines of the META-PM study. OCT images were captured using the NIDEK RS3000, and the central choroidal thickness was measured from the RPE to the choroid-sclera interface using built-in scales (Fig. 6). Both measurements and determinations were conducted by two ophthalmologists. Per-year elongation was calculated by multiplying the observed elongation by the reciprocal of the corresponding observation period in years, as in the formula below.
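The formula itself appears to have been lost in extraction; the following is a minimal reconstruction from the description above, where ΔAL_obs (observed axial length elongation) and T (observation period in years) are symbols introduced here for illustration:

$$\Delta AL_{\text{per year}} = \Delta AL_{\text{obs}} \times \frac{1}{T}$$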
Statistics
Data were presented as mean ± standard deviation. All obtained data were used for statistical analysis. Statistical analysis was performed using SPSS version 27.0 for Windows (IBM, Armonk, NY, USA), employing chi-square tests, t-tests, and logistic regression analysis. Statistical significance was defined as p < 0.05. Continuous variables showing parametric distribution were analyzed with Student's t-test, and variables showing nonparametric distribution were analyzed with the Mann-Whitney U test. Logistic regression was used to estimate odds ratios (ORs) and 95% confidence intervals (CIs).
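As an illustration of the logistic regression step described above (the authors used SPSS), a minimal Python/statsmodels sketch follows; the CSV file and column names are hypothetical placeholders, not the study's actual data:

```python
# Hypothetical sketch of the logistic regression behind Tables 3 and 4;
# the file name and column names are placeholders, not the study's data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("myopia_cohort.csv")  # one row per eye (hypothetical)

# Outcome: progression of myopic maculopathy (1 = progressed, 0 = stable)
y = df["maculopathy_progression"]

# Predictors examined in the paper: choroidal thickness, baseline axial
# length, and gender
X = sm.add_constant(df[["choroidal_thickness_um",
                        "baseline_axial_length_mm",
                        "is_female"]])

fit = sm.Logit(y, X).fit()
print(np.exp(fit.params))      # odds ratios (ORs)
print(np.exp(fit.conf_int()))  # 95% confidence intervals (CIs)
```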
Figure 2. (A) The axial length in the group without posterior staphyloma is 27.58 ± 1.23 mm, and in the group with posterior staphyloma it is 28.77 ± 1.80 mm, showing a significantly longer axial length in the group with posterior staphyloma (P < 0.01). (B) The choroidal thickness is 171.7 ± 82.9 µm in the group without posterior staphyloma, and 74.4 ± 64.4 µm in the group with posterior staphyloma, indicating that the choroidal thickness is significantly thinner in the group with posterior staphyloma (P < 0.01). **P < 0.01. Data are shown as mean ± SD.
Figure 3. (A) The central choroidal thickness in males was 126.4 ± 75.9 µm, and in females it was 95.2 ± 93.4 µm, with no statistically significant difference observed (P = 0.16). (B) The axial length was 27.91 ± 1.51 mm in males and 28.66 ± 1.79 mm in females, and there was no statistically significant difference (P = 0.07). (C) The annual change in axial length was −7.36 µm in males and 22.06 µm in females, with no statistically significant difference observed (P = 0.16). (D) There was also no statistically significant difference found in the relationship between the International Classification of Myopic Maculopathy and gender (P = 0.07).
Figure 4. (A) The choroidal thickness was 137.2 ± 82.1 µm in the non-elongation group and 92.9 ± 84.5 µm in the elongation group, with statistically significant thinning observed in the axial length elongation group (P < 0.05). (B) The axial length elongation over one year was 8.14 ± 48.4 µm in the mild myopic maculopathy group and 5.94 ± 126.0 µm in the severe myopic maculopathy group, with no statistically significant difference observed (P = 0.937). *P < 0.05. Data are shown as mean ± SD.
Figure 5. Flow chart of the selection of subjects in this study. We conducted a retrospective study by investigating cases of patients who visited our department between July 2017 and June 2018, had an axial length of 26 mm or more, underwent fundus photography, and had their axial length measured twice or more, based on the medical records.
Figure 6. The method for measuring central choroidal thickness. OCT images were captured using the NIDEK RS3000, and the central choroidal thickness was measured from the RPE to the choroid-sclera interface using built-in scales. This is an OCT image of the right eye of a 66-year-old male with an axial length of 26.70 mm.
Table 1. Patient background of all cases. | 2024-08-01T06:16:32.397Z | 2024-07-30T00:00:00.000 | {
"year": 2024,
"sha1": "edafaae91b23b0466cf03fb453e8d35a6aa8f45b",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "b03f6e5d0a6ab18cfe680b38dcaa1020bb254a7a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
248509182 | pes2o/s2orc | v3-fos-license | Cisplatin exposure acutely disrupts mitochondrial bioenergetics in the zebrafish lateral-line organ
Cisplatin is a commonly used chemotherapeutic agent that causes debilitating high-frequency hearing loss. No targeted therapies currently exist to treat cisplatin ototoxicity, partly because the underlying mechanisms of cisplatin-induced hair cell damage are not completely defined. Zebrafish may offer key insights to cisplatin ototoxicity because their lateral-line organ contains hair cells that are remarkably similar to those within the cochlea but are optically accessible, permitting observation of cisplatin injury in live intact hair cells. In this study, we used a combination of genetically encoded biosensors in zebrafish larvae and fluorescent indicators to characterize changes in mitochondrial bioenergetics in response to cisplatin. Following exposure to cisplatin, confocal imaging of live intact neuromasts demonstrated increased mitochondrial activity. Staining with fixable fluorescent dyes that accumulate in active mitochondria similarly showed hyperpolarized mitochondrial membrane potential. Zebrafish expressing a calcium indicator within their hair cells revealed elevated levels of mitochondrial calcium immediately following completion of cisplatin treatment. A fluorescent ROS indicator demonstrated that these changes in mitochondrial function were associated with increased oxidative stress. After a period of recovery, cisplatin-exposed zebrafish demonstrated caspase-3-mediated apoptosis. Altogether, these findings suggest that cisplatin acutely disrupts mitochondrial bioenergetics and may play a key role in initiating cisplatin ototoxicity.
Introduction
Cisplatin is a chemotherapeutic agent that is commonly used in the treatment of solid tumors. Cisplatin's anti-tumor activities arise from its ability to intercalate into DNA, thereby promoting the formation of DNA adducts that block transcription and trigger apoptosis in rapidly dividing cells (Ciccarelli et al., 1985; Ormerod et al., 1994; Pinto and Lippard, 1985). However, treatment with cisplatin also causes permanent hearing loss in more than 50% of adults (Brock et al., 2012; Schmitt and Page, 2018) and up to 60% of children (Brock et al., 2012; Knight et al., 2017). Hair cells are not mitotically active, but they are still susceptible to cisplatin injury, suggesting that cross-linking of nuclear DNA may not be the predominant mechanism of cisplatin-induced hearing loss. Numerous studies have suggested that the pathologic generation of reactive oxygen species (ROS) plays a critical role in cisplatin ototoxicity. However, multiple pathways can mediate oxidative stress, and it is unclear how cisplatin drives ROS production (Gentilin et al., 2019; Kros and Steyger, 2019; Sheth et al., 2017).
Mitochondria are one of the main sources of endogenous ROS, and they have been strongly implicated to play a role in cisplatin ototoxicity. In vitro studies of cancer and non-cancer cell lines demonstrate that cisplatin exposure generates mitochondrial-dependent oxidative stress, and that more metabolically active cells show greater susceptibility to cisplatin (Marullo et al., 2013). Mitochondria are also involved in the later phases of cisplatin-induced hair cell death. In Mongolian gerbils, cisplatin ototoxicity was associated with increased expression of Bax, a protein that induces the mitochondrial apoptotic pathway, and decreased levels of Bcl-2, a protein that promotes cell survival, throughout all turns of the cochlea (Alam et al., 2000). Similar findings were recapitulated in cisplatin-treated guinea pigs (Wang et al., 2004) and in UB/OC-1 cells (Borse et al., 2017), suggesting that impaired mitochondrial function is a significant contributor to cisplatin ototoxicity. However, the specific cellular events that drive mitochondrial dysfunction, oxidative stress, and hair cell death remain an open question.
A major barrier to understanding the cellular mechanisms that underlie cisplatin-induced hair cell death is an inability to observe dynamic processes within the cochlea. Research on cochlear cisplatin ototoxicity has been complemented by studies using zebrafish models because they possess hair cells along the surface of their body within the lateral-line sensory system in organs called neuromasts (Domarecka et al., 2020;Wertman et al., 2020). The superficial location of neuromasts enables reliable drug treatment protocols and optical accessibility for live-imaging techniques to directly observe hair cell response to cisplatin in vivo.
The goal of the present study was to investigate the effect of cisplatin on hair cell mitochondrial activity and homeostasis within intact neuromasts of the zebrafish lateral-line organ. Transgenic zebrafish with genetically encoded biosensors were treated with cisplatin for two hours and their neuromasts were imaged live immediately after completion of drug exposure. Hair cell mitochondrial activity (as measured by uptake of tetramethylrhodamine ethyl ester; TMRE) and hair cell mitochondrial calcium levels (as measured by the genetically encoded indicator mitoGCaMP3 fluorescence) were both increased in response to cisplatin treatment. These changes in mitochondrial function were also associated with an increase in ROS production. Subsequent canonical activation of caspase-3-mediated apoptosis in hair cells treated with cisplatin was observed, suggesting that mitochondrial dysfunction may be an essential component of cisplatin ototoxicity. Cumulatively, these results indicate that changes in hair cell mitochondrial function may be among the first events to occur following cisplatin exposure and suggest that the intrinsic (mitochondrial) apoptosis pathway drives cisplatin-induced hair cell death.
Zebrafish husbandry and lines
All experiments and procedures on zebrafish were performed in accordance with the Washington University Institutional Animal Use and Care Committee.
Adult zebrafish were maintained in group housing and standard conditions at the Washington University Zebrafish Facility. Embryos were maintained in embryo media (EM: 15 mM NaCl, 0.5 mM KCl, 1 mM CaCl 2 , 1 mM MgSO 4 , 0.15 mM KH 2 PO 4 , 0.042 mM Na 2 HPO 4 , 0.714 mM NaHCO 3 ) at 28 °C with a 14-hour light and 10-hour dark cycle (Westerfield, 2000). After 4 days post-fertilization (dpf), larvae were raised in 100 -200 mL EM in 250-mL plastic beakers and fed rotifers daily. The sex of the animal was not considered because it cannot be determined in zebrafish larvae. Experiments were started in the mid-morning and completed by the late afternoon. At their conclusion, zebrafish were euthanized by quick chilling to 4 °C in an ice water bath.
The transgenic lines Tg(myo6b:mitoGCaMP3) (allele number: w119Tg) and Tg(TNKS1bp1:EGFP) (allele number: y229Gt) were used in this study (Behra et al., 2012; Esterberg et al., 2014). Tg(myo6b:mitoGCaMP3) fish were used to quantify mitochondrial calcium levels in response to cisplatin. Tg(TNKS1bp1:EGFP) fish label neuromast supporting cells and were used to outline neuromast hair cells in live imaging experiments. This approach was used instead of labeling hair cells with 4′,6-diamidino-2-phenylindole (DAPI) in live imaging experiments because imaging DAPI-labeled nuclei requires a high-frequency laser, which may unintentionally injure hair cells during live acquisition. Larval zebrafish were screened for transgenic fluorophores at 3 - 5 dpf under sedation with 0.01% tricaine in EM using a Leica MZ10 F stereomicroscope with fluorescence equipped with a GFP and a DSR filter set.
Exposure of neuromast hair cells to cisplatin
Lateral-line hair cells were treated with cisplatin, a chemotherapeutic drug with well-established ototoxic effects (Domarecka et al., 2020; Ou et al., 2007), by exposing free-swimming zebrafish larvae at 6 dpf. Groups of ~20 - 30 larvae were placed in 25 mm cell strainers (Corning Cell Strainer) and incubated for 2 h in 30 mL of EM containing 0.1% dimethyl sulfoxide (DMSO) with either 250 μM or 1 mM cisplatin (Abcam) at 28 °C. A systematic review of cisplatin ototoxicity studies in zebrafish indicated that studies used a wide range of exposure durations (45 min to 24 h) and concentrations of cisplatin (50 μM to 1 mM). Based on these data, a relatively shorter exposure time of 2 h and moderate to high concentrations of cisplatin were used to produce a consistent hair cell lesion while minimizing systemic toxicity, emulating a clinically relevant exposure protocol.
Larvae were then rinsed 3x in 30 mL EM and incubated in various fluorescent indicators for live imaging, or were maintained in 30 mL EM for an additional 2 to 4 h, depending on the specific experiment. Control larvae underwent an identical protocol, but were treated with 0.1% DMSO. Although DMSO increases cisplatin potency and is not a required carrier for intracellular uptake, zebrafish are commonly used as a model for otoprotective drugs so experiments were designed to be generalizable to otoprotective drug studies (Domarecka et al., 2020;Uribe et al., 2013).
Live hair cell labeling, imaging, and analysis
For live imaging experiments, larvae that had just completed the treatment protocols described in Section 2.2 were incubated for 30 min in either 8 mL of 250 nM TMRE (Invitrogen) in EM or 1 mL of 5 or 10 μM CellROX Deep Red (Invitrogen) in darkness and then rinsed 2x in EM. TMRE and CellROX Deep Red were used to measure mitochondrial activity and oxidative stress, respectively. Two different concentrations of CellROX Deep Red were used to account for batch-to-batch variability in the fluorescent indicator.
Larvae (6 dpf) underwent live imaging, following previously published methods. Briefly, individual larvae were sedated with 30 mL EM with 0.01% tricaine and mounted lateral-side up within a small amount of 2% low-melt agarose on a FluoroDish (World Precision Instruments, Cat# FD3510). Mounted larvae were then submerged in ~0.5 - 1 mL EM with 0.01% tricaine. Z-stack images (step size of 1 μm) from neuromasts L3 - L6 of the posterior lateral-line were acquired with an ORCA-Flash 4.0 V3 camera (Hamamatsu) using a Leica DM6 Fixed Stage microscope with an X-Light V2TP spinning disk confocal (60 μm pinholes) and a 63x/0.9 N.A. water immersion objective. TMRE imaging used a 555 nm wavelength laser (RFP), operating at 20% power, with 120 ms/frame exposure time. CellROX Deep Red imaging used a 646 nm wavelength laser (Cy5), operating at 20% power, with 150 ms/frame exposure time. Images of mitoGCaMP3 activity were obtained using a 470 nm wavelength laser (GFP), at 20% power and 120 ms/frame acquisition times. TNKS1bp1:EGFP fish were imaged with a 470 nm wavelength laser (GFP), at 20% power and 150 ms/frame scan time. All image acquisition was controlled by MetaMorph software (Molecular Devices).
Digital images of neuromasts were processed using ImageJ software (Schneider et al., 2012) and Adobe Illustrator. Single-channel z-stacks were individually measured for each neuromast (L3 -L6). Background subtraction was performed using a rolling ball radius at the following sizes: 100 pixels for TMRE, and 150 pixels for mitoGCaMP3 and CellROX Deep Red. Maximum intensity projections of each z-stack were generated, their corresponding neuromast was outlined, and mean fluorescent pixel intensities of each neuromast were measured. Lastly, to account for fish-to-fish variability, the mean fluorescent intensities of neuromasts originating from the same zebrafish were averaged. Within each experimental trial, the average intensity per zebrafish was normalized to the median value of the average intensity per control zebrafish to address experiment-to-experiment variability.
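The authors performed these steps in ImageJ; as a sketch only, the same pipeline (rolling-ball background subtraction, maximum intensity projection, mean intensity within the neuromast outline, per-fish averaging, and normalization to the control median) could be re-implemented in Python with scikit-image, assuming hypothetical file paths and a precomputed neuromast mask:

```python
# Sketch of the quantification pipeline described above (the original work
# used ImageJ); paths, masks, and group labels are hypothetical placeholders.
import numpy as np
from skimage import io, restoration

def neuromast_mean_intensity(stack_path, neuromast_mask, radius=100):
    """Mean fluorescence of one neuromast from a single-channel z-stack."""
    stack = io.imread(stack_path).astype(float)          # shape (z, y, x)
    # Rolling-ball background subtraction, applied slice by slice
    stack = np.stack([s - restoration.rolling_ball(s, radius=radius)
                      for s in stack])
    mip = stack.max(axis=0)                              # max intensity projection
    return mip[neuromast_mask].mean()                    # mean within the outline

def normalize_per_fish(fish_to_neuromast_means, control_fish_means):
    """Average neuromasts L3-L6 per fish, then scale to the control median."""
    fish_means = {fish: np.mean(vals)
                  for fish, vals in fish_to_neuromast_means.items()}
    control_median = np.median(control_fish_means)
    return {fish: m / control_median for fish, m in fish_means.items()}
```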
Whole-mount immunohistochemistry, imaging, and analysis of fixed specimens
2.4.1. MitoTracker experiments
For fixable MitoTracker experiments, neuromast hair cell labeling was performed as previously described. MitoTracker was used as an additional indicator of mitochondrial activity. Briefly, larvae that had just completed 2 hr of cisplatin exposure were incubated for 4 min in 15 mL EM with 5 μg/mL 4′,6-diamidino-2-phenylindole (DAPI; Invitrogen) at 28 °C in the dark, and then rinsed 2x in 30 mL EM. Larvae were then incubated for 30 min in 15 mL EM containing 500 nM MitoTracker Red CMXRos and 500 nM MitoTracker Deep Red at 28 °C in darkness, followed by 3x rinses in 30 mL EM. At this point, specimens were euthanized by rapid cooling of fish medium, and fixed overnight in 4% paraformaldehyde in 0.1 M phosphate-buffered solution (PBS, pH 7.4) at 4 °C. The following day, fixed specimens were rinsed 3x in PBS. MitoTracker-treated larvae were then mounted on glass slides in elvanol (13% w/v polyvinyl alcohol, 33% w/v glycerol, 1% w/v DABCO (1,4-diazabicyclo[2.2.2]octane) in 0.2 M Tris, pH 8.5) and covered with #1.5 cover slips prior to imaging.
Larvae stained with MitoTracker dyes were imaged on an X-Light V2TP spinning disk confocal microscope, using a 63x/0.9 N.A. oil immersion objective (Leica). Image stacks of posterior lateral-line neuromasts L4 - L6 were acquired at 1 μm step size. MitoTracker CMXRos Z-acquisition parameters were a 555 nm wavelength laser (RFP) set to 20% power and 120 ms/frame. MitoTracker Deep Red data were collected with a 646 nm wavelength laser (Cy5), at 20% power and 100 ms/frame. DAPI images were acquired with a 405 nm wavelength laser, at 20% power and 100 ms/frame. All image acquisition was controlled by MetaMorph software (Molecular Devices).
Digital images were processed using ImageJ software (Schneider et al., 2012) and Adobe Illustrator. Whole neuromast fluorescence intensity was measured as in Section 2.3 with a rolling ball radius of 200 pixels for both MitoTracker dyes. The mean intensity of the fluorescent indicators for each neuromast was measured. Average neuromast intensity was normalized to the median value of the average neuromast intensity observed in DMSO controls.
2.4.2. Caspase-3 experiments
After completing a 2 h exposure to cisplatin followed by a 2 h recovery period in EM, fish were euthanized by rapid chilling and fixed in paraformaldehyde as described in Section 2.4.1. In preparation for cleaved caspase-3 labeling, fixed specimens were blocked for 2 h with 5% normal horse serum (NHS) in PBS with 1% Triton X-100, at room temperature and with gentle agitation. The blocking solution was then replaced with primary antibodies diluted in PBS, 2% NHS, and 1% Triton X-100, and incubated overnight at room temperature and with gentle agitation. The primary antibodies were HCS-1 (hair cells, mouse monoclonal, 1:500, DSHB, University of Iowa) and anti-cleaved caspase-3 (rabbit polyclonal, 1:400, Cell Signaling). The following day, larval zebrafish were rinsed 3x in PBS, incubated for 2 h in DAPI (5 μg/mL) and secondary antibodies (anti-mouse IgG conjugated to Alexa Fluor 488 and anti-rabbit IgG conjugated to Alexa Fluor 555; Invitrogen), both diluted 1:500 in PBS with 2% NHS and 1% Triton X-100, at room temperature in darkness. Specimens were then rinsed 3x in PBS and mounted as described above.
The posterior lateral-line neuromasts (L3 -L6) of fixed zebrafish immunolabeled with HCS-1 and anti-cleaved caspase-3 were imaged with an LSM 700 laser scanning confocal microscope (Carl Zeiss). Each neuromast was evaluated for the presence or absence of cleaved caspase-3 labeling in hair cells and data were expressed as the percentage of neuromasts with activated caspase-3. Z-stack images of representative cisplatin-exposed and control neuromasts (step size of 0.29 μm) were acquired on the scanning confocal microscope using a 63 × 1.4 N.A. Plan-Apochromat oil immersion objective (Carl Zeiss).
Statistical analysis
Statistical analyses were performed with Prism 9 (GraphPad Software Inc). Normality of data was determined with the D'Agostino-Pearson test. Statistical significance between two groups was determined using an unpaired Student's t-test or a Mann-Whitney U test, as appropriate. Comparison of multiple groups was evaluated by one-way ANOVA or Kruskal-Wallis test with appropriate post hoc tests as needed.
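As an illustration (the authors used Prism 9), the normality-based choice between the two-group tests could be sketched in Python with SciPy; the input arrays are hypothetical per-fish values:

```python
# Hypothetical sketch of the two-group test selection described above;
# Prism 9 was used in the actual study.
from scipy import stats

def compare_two_groups(control, treated, alpha=0.05):
    # D'Agostino-Pearson normality test on each group
    is_normal = all(stats.normaltest(g).pvalue > alpha
                    for g in (control, treated))
    if is_normal:
        result = stats.ttest_ind(control, treated)   # unpaired Student's t-test
    else:
        result = stats.mannwhitneyu(control, treated,
                                    alternative="two-sided")
    return result.statistic, result.pvalue
```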
Cisplatin exposure causes an acute increase in mitochondrial activity in hair cells
Although hair cells are susceptible to cisplatin injury, they do not proliferate, suggesting that cisplatin-induced hearing loss takes place through a mechanism other than the induction of adducts in nuclear DNA (Martens-de Kemp et al., 2013;Ou et al., 2007;Yimit et al., 2019). Since hair cells are highly metabolically active and cisplatin may accumulate within hair cell mitochondria (Yang et al., 2006), one hypothesis is that cisplatin directly affects mitochondrial bioenergetics, leading to mitochondrial dysfunction and subsequent oxidative stress. Initial experiments characterized the association between mitochondrial activity and cisplatin exposure. Prior in vivo and in vitro studies demonstrate increased mitochondrial ROS and upregulated proteins associated with mitochondrial-induced apoptosis after cisplatin exposure (Alam et al., 2000;Borse et al., 2017;Wang et al., 2004). To explore acute changes in hair cell mitochondrial activity following cisplatin treatment, live intact neuromasts were imaged after uptake of TMRE, a cell-permeant fluorescent indicator sequestered by active mitochondria. Confocal imaging of Tg(TNKS1bp1:EGFP) zebrafish at 6 dpf captured mitochondrial activity immediately following a 2 h exposure to 250 μM cisplatin or to 0.1% DMSO (control) (Fig. 1A -F). Each fish was considered as a single biological sample. Data were derived from confocal images of neuromasts L3 -L6 (4 neuromasts/zebrafish, n = 15 -16 zebrafish/group, N = 4 experimental trials), and indicate an acute increase in hair cell TMRE intensity in cisplatin-treated zebrafish (Fig. 1G, **p = 0.0021, unpaired t-test).
We next compared changes in mitochondrial membrane potential in response to two different doses of cisplatin. Fish (6 dpf) were loaded with CMXRos and MitoTracker Deep Red, which are fixable fluorescent indicators of mitochondrial membrane potential in live cells. Following dye loading, fish were treated for 2 h with 0.1% DMSO (control), 250 μM cisplatin, or 1 mM cisplatin. They were then rinsed in EM, stained with DAPI and MitoTracker, euthanized, and fixed. Confocal images of neuromasts L4 -L6 were obtained, to determine the effect of cisplatin on mitochondrial membrane potential ( Fig. 2A -I; 3 neuromasts/zebrafish, n = 45 neuromasts/group, N = 3 experimental trials). Data derived from these images demonstrated that mitochondrial membrane hyperpolarization trended upwards in response to increasing concentrations of cisplatin (Fig. 2J -K). However, Tukey's multiple comparisons test of MitoTracker CMXRos only detected a significant difference between DMSO and 1 mM cisplatin (Fig. 2J, **p = 0.0039). In contrast, Tukey's multiple comparisons test of MitoTracker Deep Red showed significant differences between all three treatment conditions (Fig. 2K, **p = 0.0011; ****p < 0.0001). Interestingly, hair cells within the same neuromast appeared to heterogeneously accumulate MitoTracker regardless of treatment group instead of being equally distributed throughout the entire neuromast. This heterogeneous uptake may indicate that there are underlying factors that affect individual hair cell susceptibility to cisplatin. When considered with the data from the TMRE experiments (Fig. 1), these observations suggest that mitochondrial activity increases following cisplatin treatment in a dose-dependent manner.
Cisplatin exposure causes acute increases in hair cell mitochondrial calcium levels and ROS production
Mitochondria serve as buffers, sensors, and modulators of intracellular calcium signaling (Rizzuto et al., 2012). Prior in vitro studies on cancer cells and hair cell-like cell lines demonstrate that dysregulated mitochondrial calcium handling in response to cisplatin may initiate toxic levels of ROS production and mitochondria-mediated apoptosis (Bernardi, 1999;Kleih et al., 2019;Lu et al., 2019;Zhao et al., 2022). To investigate the effect of cisplatin on mitochondrial calcium and oxidative stress, we employed confocal imaging of live intact neuromast hair cells expressing the mitochondrial calcium indicator mitoGCaMP3. Tg(myo6b:mitoGCaMP3) zebrafish at 6 dpf were treated for 2 h with 0.1% DMSO or 250 μM cisplatin, rinsed in EM, incubated in fluorescent ROS indicator, CellROX Deep Red, and used for live imaging. Representative images of neuromasts in the control and cisplatin groups are depicted in Fig. 3A -F. Data were derived from confocal images of neuromasts L3 -L6, demonstrating that cisplatin exposure increases hair cell mitochondrial calcium levels (Fig. 3G, **p = 0.0074, Mann-Whitney U test, 4 neuromasts/zebrafish, n = 32 -34 zebrafish/group, N = 8 experimental trials) and ROS production (Fig. 3H, *p = 0.014, Mann-Whitney U test, 4 neuromasts/zebrafish, n = 26 -27 zebrafish/group, N = 7 experimental trials). As with the mitochondrial membrane potential assays, mitochondrial calcium levels of individual hair cells within the same neuromast were heterogeneous regardless of treatment group. However, the present methods were unable to delineate whether this heterogeneity affects susceptibility to cisplatin.
Cisplatin-induced hair cell death is mediated by activated caspase-3
Prior studies indicate that cisplatin causes hair cell apoptosis by elevating the ratio of Bax to Bcl-2, resulting in mitochondrial membrane permeability, leakage of cytochrome-c and activation of canonical caspase-3-mediated apoptosis (Borse et al., 2017;Devarajan et al., 2002;Wang et al., 2004). To investigate the effect of cisplatin on mitochondrially-mediated apoptosis in neuromast hair cells, 6 dpf zebrafish were treated for 2 h with 0.1% DMSO, 250 μM cisplatin, or 1 mM cisplatin. They were then thoroughly rinsed and maintained for an additional 2 h in EM, at which point they were euthanized, fixed and processed for visualization of hair cells and cleaved caspase-3. Representative confocal images of neuromasts L3 -L6 (4 neuromasts/fish, n = 30 neuromasts/group, N = 3 experimental trials) are depicted in Fig. 4A -C, and demonstrate the presence of activated caspase-3 in hair cells of cisplatin-exposed zebrafish, but not in DMSO-exposed controls (Fig. 4D, p < 0.0001, Kruskal-Wallis test). Analysis of the percentage of neuromasts containing a subset of hair cells that were immunoreactive for cleaved caspase-3 suggest that cisplatin induces neuromast hair cell apoptosis at moderate (250 μM) and high (1 mM) doses (Fig. 4D, **p = 0.003, ****p < 0.0001, Dunn's multiple comparisons test).
Discussion
Cisplatin is commonly used in the treatment of solid tumors in both adult and pediatric populations. Permanent hearing loss is a frequent consequence of cisplatin chemotherapy and no FDA-approved methods exist to mitigate cisplatin ototoxicity. Although mitochondria have been identified as potential drivers of cisplatin injury in hair cells , characterizing changes in mitochondrial function in response to cisplatin will identify critical cellular events that result in apoptosis. While drug delivery to the inner ear, off-target effects, and appropriate patient selection continue to challenge successful translation of an otoprotective drug (Freyer et al., 2020;Hazlitt et al., 2018;Yu et al., 2020), understanding the effect of cisplatin on mitochondria will address a major gap in knowledge that in part, prevents the development of otoprotective therapies.
The primary functions of mitochondria are to produce ATP and mediate intracellular calcium homeostasis (Marullo et al., 2013; Rizzuto et al., 2012). Prior in vitro and in vivo studies indicate that cisplatin rapidly enters hair cells (Thomas et al., 2013), accumulates within mitochondria (Yang et al., 2006), and leads to canonical caspase-3-mediated apoptosis (Borse et al., 2017; Devarajan et al., 2002; Wang et al., 2004). While these studies have established an important framework for determining the contribution of mitochondrial dysfunction to cisplatin ototoxicity, a common limitation is that they observe the downstream effects of mitochondrial dysfunction and are unable to characterize dynamic changes in mitochondrial bioenergetics that occur within a live intact hair cell. In this study, we measured the intensity of fluorescent indicators and genetically encoded biosensors within live and fixed transgenic zebrafish exposed to cisplatin, in order to identify acute changes in hair cell mitochondrial function. We found that cisplatin hyperpolarized hair cell mitochondria (Figs. 1 and 2), elevated hair cell mitochondrial calcium levels (Fig. 3), and increased oxidative stress (Fig. 3) within hair cells of the zebrafish lateral line. Such knowledge enhances our understanding of when mitochondrial dysfunction begins, and further supports the notion that mitochondrial dysfunction is an essential component of cisplatin ototoxicity.
The timing of mitochondrial dysfunction after exposure to cisplatin and its association with hair cell death has not been previously characterized. A recent series of studies exploring neomycin ototoxicity have demonstrated that mitochondrial dysfunction is among the first events to occur after exposure to neomycin, beginning with excessive mitochondrial calcium uptake and hyperpolarization, followed by rapid collapse of mitochondrial membrane potential within 30 min (Esterberg et al., 2014(Esterberg et al., , 2016Owens et al., 2007). These events ultimately cause the generation of pathologic levels of ROS and subsequent hair cell death (Esterberg et al., 2016). Our data suggest that cisplatin may produce acute changes in mitochondrial bioenergetics, similar to those observed after exposure to neomycin, e.g., mitochondrial hyperpolarization (Fig. 2), elevated mitochondrial calcium levels (Fig. 3), and increased hair cell ROS production (Fig. 3). These findings are consistent with studies of the mammalian ear that indicate cisplatin exposure leads to calcium accumulation in the cytosol and mitochondria of hair cells (Lu et al., 2019;Zhao et al., 2022). Since disrupted mitochondrial bioenergetics have previously been associated with mitochondrial ROS production (Gentilin et al., 2019;Kros and Steyger, 2019;Sheth et al., 2017) and caspase-3-mediated hair cell death (Fig. 4), we propose that hyperpolarized membrane potential and elevated calcium levels immediately following cisplatin exposure may be initiating events in cisplatin ototoxicity.
Our data further indicate that there appears to be heterogeneous mitochondrial activity and mitochondrial calcium levels among hair cells within the same neuromast regardless of treatment group (Fig. 3A and D). The biological and clinical significance of this heterogeneity remains an open question. Nonetheless, we speculate that it may represent as-yet unidentified factors that contribute to variability in hair cell vulnerability, and that these differences may be rooted in mitochondrial function. Prior studies of aminoglycoside ototoxicity report that hair cell vulnerability may be linked to cumulative mitochondrial activity, rather than acute changes in mitochondrial activity (Pickett and Raible, 2019; Pickett et al., 2018). This notion corresponds with clinical observations that age is an independent risk factor for cisplatin ototoxicity (Fernandez et al., 2019; Theunissen et al., 2015), and that the more metabolically active high-frequency outer hair cells are at greater risk of cisplatin injury than low-frequency inner hair cells (Prayuenyong et al., 2021). A limitation of this study is that the live imaging techniques used enable whole-neuromast analysis at a single point in time. Such techniques cannot explore the effect of baseline mitochondrial bioenergetics on hair cell survival, which requires repeated measures of individual hair cells across multiple time points. In future studies, methods similar to those used to investigate the susceptibility of individual hair cells to aminoglycoside ototoxicity can be used to test the contribution of hair cell mitochondrial age, activity, and redox history to cisplatin vulnerability (Lukasz et al., 2022; Pickett et al., 2018).
The relationship between cisplatin and oxidative stress has been well established (Gentilin et al., 2019;Kros and Steyger, 2019;Sheth et al., 2017), and prior studies demonstrate that mitochondria play a key role in cisplatin-induced oxidative stress in cancer and non-cancer cell lines (Marullo et al., 2013) as well as hair cells (Li et al., 2021;Lu et al., 2022). Here, we build on this literature and demonstrate that oxidative stress occurs shortly after cisplatin exposure. However, multiple pathways cause ROS production and the possibility of cytosolic sources of ROS cannot be ruled out. Future studies may utilize time-lapse live imaging of transgenic zebrafish expressing genetically encoded ROS indicators for ratiometric analysis (e.g. HyPer3, (Bilan et al., 2013)) after incubating with a mitochondrial oxidation dye (e.g. mitoSOX Red), in order to estimate mitochondrial contributions to ROS production in response to cisplatin. Using these methods, changes in ROS production may also be correlated with time-lapsed live imaging of transgenic zebrafish expressing mitoGCaMP3 in hair cells, to characterize the dynamic relationship between mitochondrial dysfunction and oxidative stress over time. Such experiments may provide further mechanistic insights to the role of mitochondria in cisplatin ototoxicity.
Conclusion
In summary, our results show that the function of hair cell mitochondria is acutely disrupted by cisplatin, ultimately leading to canonical caspase-3-mediated apoptosis. Specifically, cisplatin causes hyperpolarized mitochondrial membrane potential, elevated mitochondrial calcium levels, and increased ROS production. Whether these changes lead to eventual collapse of mitochondrial membrane potential and terminal mitochondrial dysfunction remain unknown, but our findings further implicate mitochondrial impairment as a critical early-stage cellular event in cisplatin ototoxicity. | 2022-05-04T14:08:20.854Z | 2022-05-01T00:00:00.000 | {
"year": 2022,
"sha1": "7a6ff35fcaeedb144b7f1f32f97c418725d2a336",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.heares.2022.108513",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "6df65407647b67045ebb10dead51c9ecd82a71f8",
"s2fieldsofstudy": [
"Medicine",
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
88278986 | pes2o/s2orc | v3-fos-license | Isolation and Identification of Inhibitory Compounds from Morus alba cv. Kuksang on α-amylase and α-glucosidase
The objective of this research was to evaluate the inhibitory activities against α-amylase and α-glucosidase of phenolic compounds isolated from the leaves of 109 types of mulberry (Morus alba). The inhibitory activities of the water extract from Morus alba cv. Kuksang against α-amylase and α-glucosidase were determined to be 93.8% and 48.7%, respectively. The total phenolic content of extracts from Morus alba cv. Kuksang was 9.7 ± 0.2 mg/g soluble in water and 14.3 ± 0.2 mg/g soluble in ethanol. At a phenolics concentration of 200 μg/ml, the inhibitory activities of the water extract from Morus alba cv. Kuksang against α-amylase and α-glucosidase were determined to be 100% and 82.6%, respectively. Purification of the inhibitory compounds was carried out by Sephadex LH-20 and MCI-gel CHP-20 column chromatography using gradient elution procedures of the normal-phase type (EtOH → distilled water) and reverse-phase type (distilled water → MeOH). The chemical structure of the inhibitory compound against α-amylase and α-glucosidase was confirmed to be quercetin by spectroscopic analysis of FAB-MS, NMR, and IR spectra.
Introduction
With the human desire for longevity, various studies have been undertaken to find available materials from natural sources that have varied physiological functions, such as antibacterial, antioxidant, and anticancer functions, or strong immunosuppressive activity [22]. Most biologically active materials in plants are phenolic compounds, mainly composed of flavonoids, simple phenols, phenolic acids, phenylpropanoids, and phenolic quinones. They also play important roles in antibacterial, anti-allergy, antioxidant, anticancer, anti-tumor, anti-mutans, anti-heart disease, and anti-diabetes activities [9].
Morus alba cv. Kuksang has been used to treat diabetes, stroke, and beriberi. Diabetes is divided into three types: insulin-dependent, insulin-independent, and insulin-requiring; of these, 91% are the insulin-independent type, which occurs mainly after 40 years of age because of low insulin activity or the production of only small amounts of insulin [9,20,21]. Epidemiologic investigations show that insulin-independent diabetes rarely occurs in local residents who have kept a traditional lifestyle, but occurs remarkably often in people who have moved from a developing country to an advanced country [18]. Insulin-independent diabetes, which constitutes the larger portion of diabetes cases, has been studied with the aim of controlling the disease and the absorption of sugar using α-amylase inhibitors [2,23].
α-Amylase is the enzyme that hydrolyzes the α-D-(1,4)-glucan bonds of carbohydrates, so it is very important for humans, animals, bacteria, and insects. α-Amylase inhibitors for treating carbohydrate-related diseases (diabetes, obesity, hyperglycemia, etc.) have originated from wheat, barley, and leguminous plants [3,6,16,24] and are almost all glycoproteins, but there are only a few reports of inhibitory materials found in medicinal plants and bacteria [7,14]. In addition, no anti-diabetes foods have yet been developed from natural materials for the prevention of diabetes. In this study, we aimed to obtain basic data for determining and developing functional food materials by evaluating the inhibitory effects of phenolic compounds isolated from Morus alba cv. Kuksang extracts on α-amylase and α-glucosidase.
Total phenolic assay
One milliliter each of the water extract and the 80% ethanol extract was added to test tubes and mixed with 5 ml distilled water and 1 ml 95% ethanol, and then 0.5 ml of 1 N Folin-Ciocalteu reagent was added. After 5 min, 1 ml of 5% Na2CO3 solution was added, the reaction mixture was allowed to stand for 60 min, and the absorbance was then measured at 725 nm. Total phenolic content was calculated from a standard curve prepared with gallic acid [1].
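As an illustrative sketch of the standard-curve step, a linear gallic acid calibration could be computed as follows; the concentrations and absorbance readings below are hypothetical, not the study's measurements:

```python
# Hypothetical gallic acid standard curve for the Folin-Ciocalteu assay;
# the calibration points below are illustrative, not measured data.
import numpy as np

std_conc = np.array([0.0, 50.0, 100.0, 200.0, 400.0])   # µg/ml gallic acid
std_a725 = np.array([0.00, 0.12, 0.24, 0.47, 0.95])     # absorbance at 725 nm

slope, intercept = np.polyfit(std_conc, std_a725, 1)    # linear fit

def gallic_acid_equiv(a725):
    """Convert a sample absorbance at 725 nm to µg/ml gallic acid equivalents."""
    return (a725 - intercept) / slope
```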
α-Amylase inhibitory activity
The α-amylase inhibitory activity was determined by the agar diffusion method [8]. The plate was prepared by adding 5 g agar and 5 g soluble starch to 500 ml distilled water and then sterilizing at 121℃ for 15 min. The control was prepared by mixing 0.8 ml distilled water and 0.2 ml α-amylase (1,000 unit/ml), and samples were prepared by mixing Morus alba cv. Kuksang extracts with the enzyme; these were then added to disc paper on the plate, incubated at 37℃ for 3 days, treated with 5 ml I₂/KI (5 mM I2 in 3% KI), and colored for 15 min to determine the percentage of enzyme inhibition. Next, the produced glucose was measured by absorbance at 550 nm [12]. Glucose was calculated using a standard curve prepared with pure glucose, and the percentage of enzyme inhibition was calculated as inhibition (%) = [1-(glucose products of sample/glucose products of control)] ×100.
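The inhibition formula quoted above translates directly to code; the example glucose amounts are hypothetical illustrations, not measured values:

```python
# Direct implementation of the inhibition formula quoted above; the example
# glucose amounts are hypothetical illustrations, not measured values.
def inhibition_percent(glucose_sample, glucose_control):
    """inhibition (%) = [1 - (sample glucose / control glucose)] x 100"""
    return (1.0 - glucose_sample / glucose_control) * 100.0

print(inhibition_percent(0.125, 2.0))  # -> 93.75
```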
α-Glucosidase inhibitory activity
The α-glucosidase inhibitory activity was measured by the method of Tibbot et al. [25]. The IR spectrum was assayed by the halogenic alkali tablet method. The nuclear magnetic resonance (NMR) spectra (1H- and 13C-NMR) were obtained by dissolving 10 mg of the purified subject in DMSO and comparing with a tetramethylsilane (TMS) standard using proton magnetic resonance (PMR, 300 MHz). The mass spectrum was measured as a negative-ion fast atom bombardment mass (FAB-MS) spectrum with a 1 g sample under reduced pressure (10⁻⁴ to 10⁻⁶ mmHg). Thioglycerol was used as the solvent, and mass analysis was carried out at an emitter current of 22 - 28 eV and an accelerating voltage of 6 - 7 kV at the ion source. Elemental analysis was performed on a 1 mg sample from which moisture had been removed by drying under reduced pressure for 48 hr; the amounts of hydrogen and carbon were analyzed with an automatic elemental analyzer, and oxygen was calculated based on molecular weight [15].
Results and Discussion
Inhibitory activity against α-amylase and α-glucosidase of extracts from various mulberry leaves
Inhibitory materials against α-amylase and α-glucosidase, which are essential enzymes in carbohydrate metabolism, were sought, and activities were found in various mulberry leaf extracts (Table 5). Among the mulberry leaves of 109 species, three species that showed inhibitory activity against both α-amylase and α-glucosidase were selected; they were identified as Napal, Kuksang, and Sacheongeum. In particular, the Kuksang leaf had the most excellent inhibitory activities, 93.75% against α-amylase and 48.7% against α-glucosidase, so Kuksang was chosen as the sample for this study. Thus, because mulberry leaf extracts act at the final step of starch digestion by inhibiting the activities of α-amylase and α-glucosidase, it could be possible to use mulberry leaves in diabetes treatment, remedying the faults of diabetes medicines and resolving their side effects.
Content of phenolic compounds in Morus alba cv. Kuksang extracts
Phenolic compounds are secondary metabolites; they bond easily with large molecules because of their various structures and molecular compositions, and they have various biological functions, such as antioxidant and antibacterial activities [Blois, 1958]. In this study, the phenolic compound contents were 9.7±0.2 mg/g soluble in water and 14.3±0.2 mg/g soluble in ethanol (Table 2; each value represents the mean±SD, n=6). Compared to the report of Moon et al. [17], the phenolic compound content of Morus alba cv. Kuksang extracts was higher than that of Camellia sinensis (10.9 mg/g), Phellinus lieus (17.9 mg/g), and Artemisia iwayomogi (6.7 mg/g).
α-Amylase inhibitory activity of Morus alba cv. Kuksang extracts
Since α-amylase is an essential enzyme in carbohydrate metabolism, we compared the phenolic compounds of the Morus alba cv. Kuksang extracts (Table 3). The α-amylase inhibitory rate was 93.8±1.1% for the water extract and 28.6±0.8% for the ethanol extract. Paek and Kim [20] reported that Distylium racemosum has abundant phenolic compounds at the tips of its leaves to protect itself from pathogenic bacteria and vermin; accordingly, the phenolic contents of Crataegi fructus extracts, which show high inhibitory effects against α-amylase, are presumed to account for those α-amylase inhibitory effects.
Lee et al. [13] reported that PA inhibits some functions related to carbohydrates. Most α-amylase-inhibitory materials from plants are proteins, but PA is a phenolic compound and inhibits both α-amylase and α-glucosidase. Therefore, the phenolic material of Morus alba cv. Kuksang extracts can also be suggested to inhibit the enzymes of carbohydrate hydrolysis. The extract was partitioned into solvent fractions (Table 4). Phenolic content was highest in the H2O fraction, and α-glucosidase inhibitory activity was highest in the n-butanol fraction, at 27.6% (Table 5), although α-glucosidase inhibitory activity was relatively low overall, remaining below 30%. Therefore, the butanol layer, which had higher activity against both enzymes, was used to purify, isolate, and identify the active substance. To purify the inhibitory substance against α-amylase and α-glucosidase from Morus alba cv. Kuksang extracts, Sephadex LH-20 column (5×120 cm) chromatography was used, and 5 fractions were obtained (Fig. 1). Phenolic content was highest in fractions D and E, at 32.2 μg/ml and 29.4 μg/ml, respectively (Table 6). When the inhibitory activities against α-amylase and α-glucosidase were measured with the phenolic concentration of the fractions fixed at 200 μg/ml, fractions D and E showed 100% inhibitory activity against α-amylase and also inhibited α-glucosidase (Fig. 2). Hulme and Johnes [10] reported that when plant extracts were fractionated, enzyme inhibition was high in specific fractions, and Cho et al. [5] reported that polyphenolics can be partitioned according to the number of hydroxyl groups in the phenolics; this study can be concluded to be similar to the report of Hulme and Johnes [10] because enzyme inhibition was confirmed in specific fractions (fractions D and E) of Morus alba cv. Kuksang extracts.
The first fractions were partitioned with Sephadex LH-20 dextran gel and MCI-gel, which can isolate structurally isomeric phenolics, by a gradient of ethanol→distilled water as the normal-phase type and distilled water→methanol as the reverse-phase type. As shown in Fig. 1, three substances (comp. D-1 to D-3) were yielded from fraction D and two substances (comp. E-1 and E-2) were obtained from fraction E, and every substance showed inhibitory activity. In particular, fraction E-1 showed the highest inhibitory activity against α-glucosidase, at 35.25±1.42%. Among the partitions of fractions D and E, fractions D-3 and E-1 were confirmed to have enzyme inhibitory activities, so they were lyophilized (Table 7). The lyophilized compounds were purified by Sephadex LH-20 column (3×50 cm) chromatography using distilled water→ethanol (100%) gradient elution; D-3-1 and D-3-2 were obtained from fraction D-3, and E-1-1, E-1-2, and E-1-3 were obtained from fraction E-1 (Fig. 1). When the enzyme inhibition effect was determined, fractions D-3-2 and E-1-1 showed inhibitory activities (Table 8), so these two substances were confirmed to be purified by HPLC (Fig. 3) and were found to be the same substance.
The identification of the purified compound
Analysis of the purified compound, which had inhibitory activity against α-amylase and α-glucosidase, gave spectroscopic data (Table 9) similar to the report of Jang [11], and the purified compound was therefore identified as quercetin (Fig. 4). | 2019-03-31T13:42:31.447Z | 2015-08-30T00:00:00.000 | {
"year": 2015,
"sha1": "50620bf0b4fe3e828b565e4068f41ca55c048247",
"oa_license": null,
"oa_url": "http://society.kisti.re.kr/sv/SV_svpsbs03V.do?cn1=JAKO201528672604267&method=download",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "646d4d2cb6211880777cd50cd69a95c8e67ab79e",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
250185586 | pes2o/s2orc | v3-fos-license | Dosimetry of Occupational Eye Lens Dose Using a Novel Direct Eye Dosimeter, DOSIRIS, during Interventional Radiology Procedures
In response to the recommendation by the International Commission on Radiological Protection to lower the equivalent eye dose limit, the Japanese Government in April 2021 lowered the equivalent dose limit for the eye lens for occupational exposure. A considerable number of interventional radiology operators are exposed to levels above the new limit. For this reason, a need exists to more accurately evaluate eye lens dose in interventional radiology operators by using a novel direct eye dosimeter, the DOSIRIS™ (IRSN, France), which is capable of measuring a 3-mm dose equivalent under protective glasses. The DOSIRIS is a thermoluminescent dosimeter that exhibits good energy dependence and better directional properties than other dosimeters. Dosimetry using DOSIRIS might be accurate and compatible with the latest regulations.
Introduction
Interventional radiology (IR) has become increasingly important as a medical technique in recent years due to its minimal invasiveness, and radiologists have increasing opportunities for involvement in IR procedures. Recently, IR procedures have become more time-consuming due to increased technical complexity, which has led to an increase in adverse events such as skin disorders from greater radiation exposure in patients, together with concerns over greater exposure for IR operators. The International Commission on Radiological Protection (ICRP) Statement from the 2011 Seoul Meeting recommended that the equivalent dose limit for the eye lens should be 20 mSv in a year, averaged over defined periods of 5 years, with no single year exceeding 50 mSv [1,2]. In response to the recommendation by the ICRP to lower the equivalent eye dose limit, the Japanese Government in April 2021 lowered the equivalent dose limit for the eye lens for occupational exposure and made revisions to the Ordinance on Prevention of Ionizing Radiation Hazards, including changes to the dose measurement and calculation methods. The previous equivalent dose limit for the lens of the eye was 150 mSv/year, but the revised regulations lowered this to an equivalent dose for the lens of the eye of ≤ 100 mSv over 5 years and ≤ 50 mSv over 1 year, and they also added measurement at a 3-mm dose equivalent to the dosimetry method. Following this latest reduction in the dose limit for the eye lens, a considerable number of IR operators are found to demonstrate doses above the limit when measured using the current method of eye lens dosimetry with a glass badge without protection [3]. Consequently, a need exists to more accurately assess the eye lens dose in IR operators using a novel direct eye dosimeter, the DOSIRIS™ (IRSN, France), which is capable of measuring a 3-mm dose equivalent under protective glasses.
In this review, the basic features of DOSIRIS, efforts to reduce exposure, and our measurement of eye lens doses in operators using DOSIRIS during IR procedures such as transcatheter arterial chemoembolization (TACE) for hepatocellular carcinoma (HCC) are described.
Basic Features of DOSIRIS
DOSIRIS was developed as a dosimeter by the French Institut de radioprotection et de sûreté nucléaire. It can be worn under protective glasses and measures the dose equivalent at a depth of 3 mm. It is compatible with the International Atomic Energy Agency (IAEA) Guideline recommendation that the equivalent dose to the eye lens be assessed as the dose at 3 mm depth, Hp(3), with a dosimeter worn as close as possible to the eye.
Basic structure
Since it uses lightweight materials, DOSIRIS weighs only 12 g, is barely noticeable when worn, and reduces the burden on the wearer. The device has an articulated arm that can be adjusted to a variety of head and face shapes, and the position of the detector can be finely adjusted at the corner of the eye behind protective glasses (Fig. 1a and b); however, the author felt slight discomfort and mild pain on the head and face after wearing the device for a long period. Recently, DOSIRIS could be attached to the left side of the protective glasses and placed close to the left side of the left eye under the glasses (Fig. 2), which could improve the discomfort and pain.
Measurement of 3-mm dose equivalent
The detector is a thermoluminescent dosimeter (TLD) that uses ⁷LiF:Mg,Ti as the TLD element and is encapsulated in a 3-mm-thick polypropylene capsule (Fig. 3, DOSIRIS TLD element). This TLD element demonstrates good energy dependence because its main component is LiF, which has a low effective atomic number. A metal filter is normally used to correct for the energy dependence of TLD elements, but the angular dependence is poorer when a metal filter is used [4].
Because of its good energy dependence, the DOSIRIS does not require a metal filter to correct for energy dependence, and this is one of the reasons for its good angular dependence. During IR procedures, adjusting the angular direction of the X-ray tube is sometimes necessary, which also changes the direction of the scattered radiation. Additionally, the operator is often required to change the position of their head during procedures. Therefore, it is crucial that the dosimeter demonstrates good angular dependence so that eye lens exposure can be evaluated accurately. The DOSIRIS is ideal for this purpose.
The DOSIRIS measures X-rays and gamma rays in the energy range from 25 keV to 1.25 MeV and beta particles at 0.8 MeV (mean energy). The 3-mm dose equivalent reported by Chiyoda Technol Corporation (Tokyo, Japan) ranges from 0.1 mSv to 1 Sv.
Eye Lens Dosimetry with DOSIRIS during Transcatheter Arterial Chemoembolization (TACE) for Hepatocellular Carcinoma (HCC)
We have measured the eye lens dose of operators performing TACE using the DOSIRIS and glass badges since 2017.
The angiography system used for the study was the Allura Xper FD20/20 (Philips, 2013), with the system conditions set to reduce exposure below the default conditions for abdominal angiography. We presented the results of exposure during TACE for HCC using the exposure reduction mode to the Radiological Society of North America and other meetings. The exposure reduction mode involved the use of pulsed fluoroscopy, a 50% reduction in the imaging frame rate, a lower radiation dose, and an exposure-reducing 0.4 mm Cu + 1.0 mm Al accessory filter. Measurement based on numerical dose determination showed that the air kerma and dose area product in patients treated under the exposure reduction mode could be reduced to approximately 65% of the values under the standard-mode default settings of conventional systems. Almost no loss of image quality occurred, adverse events did not increase, and the therapeutic effect was the same [5].
During TACE, the protective ceiling plate was interposed between the flat panel detector and the operator (Fig. 3). For eye lens protection, the operator wore a Panorama Shield with an acrylic lens containing lead with a lead equivalence of 0.07 mm Pb (Toray Medical Co., Ltd., Tokyo, Japan). The DOSIRIS was used as the dosimeter, and it was worn in the region of the left eye under protective glasses to measure the dose to the eye lens in accordance with the IAEA recommendation. The operator also wore a glass badge and a pocket dosimeter on the left side of the cap, and the doses measured by the dosimeters were compared. The glass badge was worn in the usual way behind a protector (Fig. 1b).
Mean eye lens dose during TACE: Comparison of dose measured by DOSIRIS and glass badge
By dividing the reported monthly eye lens dose from Chiyoda Technol by the number of TACE procedures performed per month, the eye lens dose per TACE procedure was estimated.
The eye lens dose measured by the DOSIRIS was 34.8 μSv/TACE with protection, and the eye lens dose measured by the glass badge was 57.6 μSv/TACE without protection.
The number of TACE procedures that can be conducted annually within the eye lens dose limit of 20 mSv/year is 347/year based on the glass badge measurement of eye lens dose and 574/year based on the DOSIRIS measurement. For reference, when an operator used the same angiography system in exposure reduction mode without any protective glasses, the eye lens dose measurements from the two devices were very similar, at 240 μSv/TACE measured by the DOSIRIS and 243.3 μSv/TACE measured by the glass badge. Under these conditions, the operator could perform 83 TACE procedures per year without exceeding the 20 mSv/year eye lens dose limit. Because IR operators must perform many high-dose procedures in addition to TACE, it is likely that the eye lens dose limit of 20 mSv/year would be exceeded without protective devices. Therefore, it is essential that IR operators have at least some personal protection from exposure. Previous literature on eye lens doses during TACE reported values of 270-1,070 μSv without protective devices, 16-64 μSv with protective devices [6], and a mean left eye lens dose of 421 μSv (range: 94-894 μSv) without protective devices measured by TLD [7]. These reports show very high eye lens doses without protection and highlight the importance of protective glasses and other protection.
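As a cross-check of these figures, the arithmetic behind the annual procedure capacity is simple enough to script. The following is an illustrative sketch (not software used in the study), using the per-procedure doses quoted above:

```python
# Annual TACE capacity under the 20 mSv/year eye lens dose limit.

ANNUAL_LIMIT_USV = 20_000.0  # 20 mSv/year expressed in microsieverts

def procedures_within_limit(dose_per_procedure_usv):
    """Whole number of procedures per year that stays within the limit."""
    return int(ANNUAL_LIMIT_USV // dose_per_procedure_usv)

for label, dose in [("DOSIRIS, with protective glasses", 34.8),
                    ("glass badge, without protection", 57.6),
                    ("DOSIRIS, no protective glasses", 240.0)]:
    print(f"{label}: {dose} uSv/TACE -> {procedures_within_limit(dose)} TACE/year")
# Prints 574, 347 and 83 procedures/year, the figures quoted in the text.
```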
Literature on Eye Lens Dosimetry Using DOSIRIS
Some published reports have analyzed the differences and the relationship between eye lens equivalent doses, mainly in IR operators and staff, as measured by the DOSIRIS under protective glasses and by glass badges worn at the neck. In these reports, personal glass badge dosimeters at the neck tended to overestimate the eye lens dose measured by the DOSIRIS, but the two values correlated relatively well. Additionally, the Panorama Shield protective glasses (Toray Medical Co., Ltd.) have a reported shielding rate of about 60% based on the eye lens dose measured by the DOSIRIS. There are concerns that, if IR operators do not use protective glasses, the eye lens equivalent dose measured by the DOSIRIS would exceed 20 mSv/year [8,9].
In this review, the use of the DOSIRIS to evaluate the eye lens dose during TACE for HCC was described. Of course, IR operators must perform various IR procedures under different imaging conditions for each modality. We are currently conducting a comparative study of eye lens doses measured by the DOSIRIS and by glass badges without protection during various IR procedures at our hospital.
Conclusion
The use of the DOSIRIS under protective glasses to measure the 3-mm dose equivalent during IR procedures can provide accurate eye lens dose measurements and is ideal for this purpose. This method is also important for reducing the excessive number of IR operators whose exposure is above the dose limit. However, it may be practical to provide the DOSIRIS only to IR operators, since providing all staff in the IR rooms with the DOSIRIS could be too costly. | 2022-07-02T15:28:04.490Z | 2022-07-01T00:00:00.000 | {
"year": 2022,
"sha1": "6bc186cd007fb12c70d8a8aa3a361ffb7c1bdf4b",
"oa_license": "CCBYNC",
"oa_url": "https://www.jstage.jst.go.jp/article/interventionalradiology/7/2/7_2022-0005/_pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e50f4693e27c4f86acf874751e95958990b03398",
"s2fieldsofstudy": [
"Medicine",
"Physics"
],
"extfieldsofstudy": []
} |
119630026 | pes2o/s2orc | v3-fos-license | Entanglement fidelity and measurement of entanglement preserving in quantum processes
The entanglement fidelity provides a measure of how well the entanglement between two subsystems is preserved in a quantum process. By using a simple model we show that in some cases this quantity in its original definition fails in the measurement of the entanglement preserving. On the contrary, the modified entanglement fidelity, obtained by using a proper local unitary transformation on a subsystem, is shown to exhibit the behavior similar to that of the concurrence in the quantum evolution.
Quantum entanglement is a key element for applications of quantum communications and quantum information. A complete discussion of this has been given in Ref. [1]. Characterizing and quantifying the entanglement is a fundamental issue in quantum information theory. For pure and mixed states of two qubits, the problem of describing the entanglement has been well elucidated [2,3,4,5,6,7]. Recently, Jordan et al. [8] considered two entangled qubits, one of which interacts with a third qubit, named the control qubit, that is never entangled with either of the two entangled qubits. They found that the entanglement of these two qubits can be both increased and decreased by the interaction of the control qubit with just one of them. If we regard the control qubit as an environment and the state of the qubit interacting with the control qubit as the information source, this example is just a model for the time evolution of quantum information via a noisy quantum channel originating from the interaction with the control qubit. Schumacher [9] and Barnum et al. [10] have investigated a general situation where $R$ and $Q$ are two quantum systems and the joint system $RQ$ is initially prepared in a pure entangled state $|\Psi^{RQ}\rangle$. The system $R$ is dynamically isolated and has a zero internal Hamiltonian, while the system $Q$ undergoes some evolution that possibly involves interaction with the environment. The evolution of $Q$ might represent a transmission process via some quantum channel for the quantum information in $Q$. They introduced a fidelity $F_e = \langle\Psi^{RQ}|\rho^{RQ\prime}|\Psi^{RQ}\rangle$, which is the probability that the final state $\rho^{RQ\prime}$ would pass a test checking whether it agrees with the initial state $|\Psi^{RQ}\rangle$. This quantity is called the entanglement fidelity (referred to hereafter as EF). The EF can be defined entirely in terms of the initial state $\rho^Q$ and the evolution of system $Q$, so the EF is related to a process, specified by a quantum operation $\varepsilon^Q$ (which we shall discuss later in more detail) acting on some initial state $\rho^Q$. Thus, the EF can be denoted in the functional form $F_e(\rho^Q, \varepsilon^Q)$. The EF is usually used to measure how well the state $\rho^Q$ is preserved by the operation $\varepsilon^Q$ and to identify how well the entanglement of $\rho^Q$ with other systems is preserved by the operation $\varepsilon^Q$. A complete discussion of the EF can be found in [9,11]. In the present work we investigate the following question: Is the EF a good measure of entanglement preservation? Using the example of Jordan et al., we find that in some cases the EF defined above completely fails to measure entanglement preservation, though it may be a good measure in the case of slight noise. We also find that, in order to make the EF indeed equivalent to an entanglement measure, the modified entanglement fidelity (MEF) should be used. Some detailed discussions of the MEF have been given in [9,12,13]. Recently, Surmacz et al.
[14] have investigated the evolution of the entanglement in a quantum memory and showed that the MEF can be used to measure how well a quantum memory setup can preserve the entanglement between a qubit undergoing the memory process and an auxiliary qubit. For the example of Jordan et al., we derive an analytic expression of the MEF and compare it with the concurrence.
A quantum operation $\varepsilon^Q$ is a map for the state of $Q$, $\rho^Q \rightarrow \rho^{Q\prime} = \varepsilon^Q(\rho^Q)$. Here $\rho^Q$ is the initial state of system $Q$, and after the dynamical process the final state of the system becomes $\rho^{Q\prime}$. The dynamical process is then described by $\varepsilon^Q$.
In the most general case, the map $\varepsilon^Q$ must be a trace-preserving and positive linear map [15,16], so it includes all unitary evolutions. It also includes unitary evolving interactions with an environment $E$. Suppose that the environment is initially in the state $\rho^E$. The operation can then be written in the Kraus form (Eq. (2)), $\varepsilon^Q(\rho^Q) = \sum_j E^Q_j \rho^Q E^{Q\dagger}_j$, where $\sum_i p_i |i\rangle\langle i|$ is the spectral decomposition of $\rho^E$, with $\{|i\rangle\}$ being a basis in the Hilbert space $H^E$ of the environment $E$, and $E^Q_j = \sum_i \sqrt{p_i}\,\langle j|U|i\rangle$. Now we can use Eq. (2) to obtain the intrinsic expression of $\langle\Psi^{RQ}|\rho^{RQ\prime}|\Psi^{RQ}\rangle$, i.e., $F_e(\rho^Q, \varepsilon^Q)$, because one has (Eq. (4)) $F_e(\rho^Q, \varepsilon^Q) = \sum_j |\mathrm{Tr}(\rho^Q E^Q_j)|^2$. If systems $R$ and $Q$ both have zero internal Hamiltonian and there is no interaction between $R$ and $Q$, the operation $\varepsilon^Q$ originates entirely from the interaction between $Q$ and the environment. In this sense the example of Jordan et al. is a special case of this situation. We consider two entangled qubits, $A$ and $B$, and suppose that qubit $A$ interacts with a control qubit $C$. Then $A$, $B$ and $C$ respectively correspond to the systems $Q$, $R$ and environment $E$ that we have just referred to. We suppose that the initial states of the three qubits are given in terms of the Pauli matrices $\sigma^{A(B)}_i$, $i = 1, 2, 3$, for qubit $A(B)$. $\rho^{AB}_+$ and $\rho^{AB}_-$ are two Bell states, representing maximally entangled pure states of the combined system of qubits $A$ and $B$. The total spins of the states $\rho^{AB}_-$ and $\rho^{AB}_+$ are 0 and 1, respectively. We consider an interaction between qubits $A$ and $C$ described by a unitary transformation, where $\lambda$ is the strength of the interaction, and $|\alpha\rangle$ and $|\beta\rangle$ are two orthonormal vectors for system $C$. The evolved density matrix for the combined system of qubits $A$ and $B$ can then be calculated (Eq. (9)).
The evolved density matrix $\rho^{AB\prime}_\pm$ generally represents a mixed state. In order to quantify its entanglement we use the Wootters concurrence [5], defined as $C(\rho) = \max\{0, \sqrt{\lambda_1} - \sqrt{\lambda_2} - \sqrt{\lambda_3} - \sqrt{\lambda_4}\}$, where $\rho$ is the density matrix of the investigated state of the combined system of $A$ and $B$, $\lambda_1, \lambda_2, \lambda_3, \lambda_4$ are the eigenvalues of $\rho\,\sigma^A_2\sigma^B_2\,\rho^*\,\sigma^A_2\sigma^B_2$ in decreasing order, and $\rho^*$ is the complex conjugate of $\rho$. From Eq. (9) we find that at time $\lambda t = \pi/2$ the state $\rho^{AB\prime}_\pm$ has changed from the maximally entangled state at $t = 0$ into a separable state, and at time $\lambda t = \pi$ the state $\rho^{AB\prime}_\pm$ returns to the maximally entangled state. The explicit calculation of $\rho^{AB\prime}$ and $C(\rho^{AB\prime}_\pm)$ can be found in [8]. Now we adopt the EF to investigate this example. Using Eqs. (2), (5), (7), and (8), we obtain the quantum operation on qubit $A$. Substituting the corresponding Kraus operators into Eq. (4), and noting that $\rho^A \equiv \mathrm{Tr}_B(\rho^{AB}_\pm) = \frac{1}{2}\mathbb{1}$, we obtain the EF $F_e$. We can easily see the disagreement between the evolutions of $F_e$ and $C(\rho^{AB\prime}_\pm)$. At $\lambda t = \pi$, the state $\rho^{AB\prime}_\pm$ returns to the maximally entangled state, as can be seen from the concurrence, but its entanglement fidelity is zero ($F_e = 0$). On the contrary, the initially maximally entangled state has changed into a separable state at $\lambda t = \pi/2$, but the EF at this time is not zero. The evolutions of the EF $F_e$ and the concurrence $C(\rho^{AB\prime}_\pm)$ are depicted in Fig. 1.
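For readers who wish to evaluate such curves numerically, the Wootters concurrence defined above translates directly into a short routine. The following is a minimal sketch assuming only NumPy, not code from the original work:

```python
import numpy as np

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix rho (4x4).

    Uses C = max(0, sqrt(l1) - sqrt(l2) - sqrt(l3) - sqrt(l4)), where
    l1 >= l2 >= l3 >= l4 are the eigenvalues of
    rho (sigma_y x sigma_y) rho* (sigma_y x sigma_y).
    """
    sy = np.array([[0.0, -1.0j], [1.0j, 0.0]])
    yy = np.kron(sy, sy)
    R = rho @ yy @ rho.conj() @ yy
    lam = np.sort(np.linalg.eigvals(R).real)[::-1]
    lam = np.sqrt(np.clip(lam, 0.0, None))  # clip tiny negative numerical noise
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

# Sanity checks: a Bell state has C = 1, the maximally mixed state C = 0.
bell = np.zeros((4, 4), dtype=complex)
bell[0, 0] = bell[0, 3] = bell[3, 0] = bell[3, 3] = 0.5  # |Phi+><Phi+|
print(concurrence(bell))           # -> 1.0
print(concurrence(np.eye(4) / 4))  # -> 0.0
```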
In fact, $F_e(\rho^Q, \varepsilon^Q) = F_s^2(\rho^{RQ}, \rho^{RQ\prime})$, where $F_s(\rho^{RQ}, \rho^{RQ\prime})$ is the static fidelity [11]. The static fidelity satisfies $0 \le F_s(\rho^{RQ}, \rho^{RQ\prime}) \le 1$, and it equals 1 if and only if $\rho^{RQ} = \rho^{RQ\prime}$. The concept of the EF arises from the mathematical description of the purification of mixed states. Any mixed state can be represented as a subsystem of a pure state in a larger Hilbert space. The entanglement of a pure state may cause the states of the subsystems to be mixed. The EF is usually used to measure how faithfully a channel maintains the purification, or, equivalently, how well the channel preserves the entanglement. In the above simple example, however, we have found that, except for some special cases, only in the case of slight noise, i.e., $\lambda t \rightarrow 0$, does the EF approximately agree with the concurrence. This means that this quantity may not be a good measure for the evolution of the entanglement in processes of interaction with the environment.
In fact, Schumacher [9] noted that the EF can be lowered by a local unitary operation whereas the entanglement cannot. From this consideration he defined the MEF as a maximization over local unitaries, where $U^Q$ is any unitary transformation acting on $Q$. It is clear that $F'_e \ge F_e$. Since by a proper local unitary operation we can turn the Bell state $\rho^{AB}_\pm$ into the Bell state $\rho^{AB}_\mp$, we find that in the above example $F'_e = 1$ at time $\lambda t = \pi$, whereas $F_e = 0$ at this time. So at $\lambda t = \pi$ the MEF equals the concurrence. Using the quantum operation discussed above, we can obtain the intrinsic expression of the MEF (Eq. (15)), $F'_e = \max_U \sum_j \mathrm{Tr}(\rho^Q U E^Q_j)\,[\mathrm{Tr}(\rho^Q U E^Q_j)]^*$. For this example we can derive an analytic expression of $F'_e$. Suppose $U$ is an arbitrary unitary operation on a single qubit. Then it can be written as $U = e^{i\alpha} R_z(\beta) R_y(\gamma) R_z(\delta)$ [11], where $\alpha$, $\beta$, $\gamma$ and $\delta$ are real numbers, and $R_{y(z)}$ is the rotation operator about the $y$ ($z$) axis. We need to find a unitary operator $U$ which makes $\sum_j \mathrm{Tr}(\rho^A U E^A_j)\,[\mathrm{Tr}(\rho^A U E^A_j)]^*$ take its maximum value. Since $\cos^2(\beta/2 + \delta/2 + \lambda t/2) \ge 0$ and $\cos^2(\beta/2 + \delta/2 - \lambda t/2) \ge 0$, we can take $\gamma = 0$. One then obtains $F'_e = 1 + \cos^2(\beta/2 + \delta/2)\,\big(2\cos^2(\lambda t/2) - 1\big) - \cos^2(\lambda t/2)$.
When $2\cos^2(\lambda t/2) - 1 \ge 0$ we take $\cos^2(\beta/2 + \delta/2) = 1$ and obtain $F'_e = \cos^2(\lambda t/2)$; when $2\cos^2(\lambda t/2) - 1 < 0$ we take $\cos^2(\beta/2 + \delta/2) = 0$ and obtain $F'_e = 1 - \cos^2(\lambda t/2)$. The evolutions of the MEF $F'_e$ and the concurrence $C(\rho^{AB\prime}_\pm)$ are depicted in Fig. 2. We find that the MEF and the concurrence exhibit similar behavior, although their values do not exactly agree at all times. When the state $\rho^{AB\prime}_\pm$ returns to the maximally entangled state, the MEF is equal to 1. The maximal difference between them occurs at the separable states, where the MEF equals 1/2 while the concurrence is zero.
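The branch structure of the analytic MEF is easy to tabulate numerically. The sketch below is illustrative; in particular, the closed form $C(\rho^{AB\prime}_\pm) = |\cos\lambda t|$ is an assumption consistent with the limits quoted in the text (maximal entanglement at $\lambda t = 0$ and $\pi$, separability at $\lambda t = \pi/2$), not a formula taken from the paper.

```python
import numpy as np

lt = np.linspace(0.0, 2.0 * np.pi, 201)  # lambda * t

# Analytic MEF from the text: the optimal unitary switches branches
# when 2 cos^2(lt/2) - 1 changes sign.
mef = np.maximum(np.cos(lt / 2.0) ** 2, 1.0 - np.cos(lt / 2.0) ** 2)

# Assumed concurrence, consistent with the limits quoted in the text.
conc = np.abs(np.cos(lt))

for x in (0.0, np.pi / 2.0, np.pi):
    i = np.argmin(np.abs(lt - x))
    print(f"lt = {lt[i]:.3f}: MEF = {mef[i]:.3f}, C = {conc[i]:.3f}")
# At lt = pi/2 the MEF is 1/2 while the concurrence vanishes,
# the maximal discrepancy noted in the text.
```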
We have mentioned that the EF equals 1 if and only if $\rho^{RQ} = \rho^{RQ\prime}$. This means that the EF can be used to measure the difference between a quantum channel and the identity channel. If the concern is the preservation of entanglement in an evolution process, however, one has to use the MEF, because the EF can be lowered by a local unitary operation in this process whereas the entanglement cannot. If a quantum channel is just a unitary operator, the entanglement is certainly invariant and the MEF always equals 1 in the quantum process. In this sense the MEF can be used to measure the difference between a quantum channel and an arbitrary unitary operator.
In summary, for the example of Jordan et al., we have derived analytic expressions of both the EF and the MEF and compared them with the concurrence. From these we find that the MEF faithfully reflects the preservation of entanglement in a quantum process. | 2007-05-13T03:16:05.000Z | 2007-04-23T00:00:00.000 | {
"year": 2007,
"sha1": "0f64cfe248633e6ba5b1aa38e79aeed37038d027",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/0704.2973",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "0f64cfe248633e6ba5b1aa38e79aeed37038d027",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
248299721 | pes2o/s2orc | v3-fos-license | Layered Complex Networks as Fluctuation Amplifiers
In complex networked systems theory, an important question is how to evaluate the system robustness to external perturbations. With this task in mind, I investigate the propagation of noise in multi-layer networked systems. I find that, for a two-layer network, noise originally injected in one layer can be strongly amplified in the other layer, depending on how well connected the complex networks in each layer are and on how much the eigenmodes of their Laplacian matrices overlap. These results make it possible to predict potentially harmful conditions for the system and its sub-networks, where the level of fluctuations is important, and how to avoid them. The analytical results are illustrated numerically on various synthetic networks.
Introduction.-Complex networked systems are widely present in physical and man-made applications, ranging from neurons in the brain to large-scale power transmission networks [1]. These systems are made of individual elements, with their own internal dynamics, that interact together. The interplay between the internal degrees of freedom and the coupling typically gives birth to organized collective dynamics such as consensus or synchronization [2,3]. The coupling between the elements is usually modelled by complex networks. Recently a lot of effort has been put into refining this approach in order to account for specific structures that might exist in the coupling [4][5][6][7]. Indeed, many complex systems are interdependent or hierarchically influence each other [8], as e.g. in the brain, where various regions with different functional networks interact with each other [9], or in power networks, where layers with different voltage levels are connected [10], or in social networks, where people are part of interacting communities [11]. For such applications, the extension to multi-layer networks as depicted in Fig. 1(a) is particularly relevant. The latter are composed of many different layers of networks interacting with one another. The nodes in the different layers might represent the same individual element taking part in different coupled dynamics, or distinct elements belonging to separate systems that somehow interact together. Also, these systems are unavoidably subject to noise coming from the interacting elements or from the environment [12]. Such noisy conditions might have different effects on the coupled system. Indeed, depending on its amplitude and time-dependence, noise might induce fluctuations around a stationary state or even escapes from an initial basin of attraction [13][14][15]. Both outcomes are potentially harmful to the system, as fluctuations might damage some network components in the long run, and basin escapes can lead to large system failures. Therefore, to prevent such events and in order to devise more resilient interdependent complex systems [16,17], it is of particular interest to understand how noise in one layer impacts the dynamics of other layers.
In this letter, I consider two-layer systems where only one layer is subjected to noise. Interestingly, I find that, depending on the smallest eigenvalues of the sub-network Laplacian matrices and the overlap of their eigenmodes, fluctuations can be significantly amplified or reduced in the second layer. I show this analytically by comparing the second moment of the degrees of freedom in each layer. I also illustrate the theory numerically on different networks and extend the principle to networks with more than two layers. Two-layer networked system.-I consider two layers of diffusively coupled agents, $\{x_k\}$, $\{y_k\}$. Each layer has $n$ nodes and its own undirected complex coupling network. The two layers are coupled together in a directed way that is detailed later. The dynamics of the $2n$ agents is governed by the following set of differential equations,

$\dot{x}_i = -\sum_j L^{(1)}_{ij} x_j + \eta_i(t)\,,$  (1)

$\dot{y}_i = -\sum_j L^{(2)}_{ij} y_j + f_i(\{x_k\})\,,$  (2)

where $\{x_k\}$, $\{y_k\}$ are the nodal degrees of freedom in layers 1 and 2 respectively, $L^{(l)}_{ij}$ is the Laplacian matrix of the undirected coupling network in the $l$-th layer, and $f_i$ is a coupling function between the two layers. As set in Eq. (2), the inter-layer interaction is directed, i.e. Eq. (1) does not depend on $\{y_k\}$, while Eq. (2) does depend on $\{x_k\}$. To investigate the propagation of noise from the first to the second layer, only the first one is subjected to ambient noise $\eta_i$, taken as white, i.e. $\langle\eta_i(t)\eta_j(t')\rangle = \delta_{ij}\,\eta_0^2\,\delta(t - t')$ [18]. Without loss of generality, I take the initial conditions to be $x_i(0) = 0$, $y_i(0) = 0$ for $i = 1, \dots, n$. Indeed, in the following I consider the long-time limit of the second moment, which is independent of the initial conditions. Importantly, Eqs. (1), (2) can represent the linear approximation of a nonlinear system around a stable fixed point.
In the following, I denote by $\lambda^{(k)}_\alpha$ and $u^{(k)}_\alpha$ the eigenvalues and eigenvectors of the Laplacian matrix $L^{(k)}$. As I consider connected undirected networks in each layer, $0 = \lambda^{(k)}_1 < \lambda^{(k)}_2 \le \dots \le \lambda^{(k)}_n$ for $k = 1, 2$, with the first eigenvector being constant, i.e. $u^{(k)}_1 = (1, \dots, 1)/\sqrt{n}$. Noise propagation.-First, one has to choose an inter-layer coupling function. The simplest choice is that the first layer tunes the natural velocities of the second one through the component of $\{x_k\}$ orthogonal to the zero mode $u_1$ [19]. Using this coupling function together with a modal decomposition over the eigenvectors of $L^{(1)}$ and $L^{(2)}$, one obtains the solution of Eqs. (1), (2) given in Eq. (3) (see [20,21] for details). The response of the first layer therefore depends on the scalar products between $\eta$ and the $u^{(1)}_j$, while in the second layer it depends on those between $x$ and the $u^{(2)}_j$. In order to investigate how noise is transmitted from one layer to the other, one may calculate the second moments of the degrees of freedom, namely of the $\{x_i\}$ and $\{y_i\}$. Using Eq. (3), one has in the long-time limit for the first layer [21],

$\langle x_i^2 \rangle = \frac{\eta_0^2}{2} \sum_{\alpha \ge 2} \frac{\big(u^{(1)}_{\alpha,i}\big)^2}{\lambda^{(1)}_\alpha}\,,$  (5)

which can be related to the inverse of the resistance centrality of the $i$-th node [21,22]. This means that the higher the resistance centrality, the smaller the fluctuations. In the model considered here, the $x_i$'s modify the natural velocities of the second layer. In order to calculate the second moment in the second layer, one needs the two-point, time-shifted correlators of the $x_i$'s. Due to the coupling in the first layer, the latter are non-trivial and read, in the stationary regime,

$\langle x_i(t)\,x_j(t')\rangle = \frac{\eta_0^2}{2} \sum_{\alpha \ge 2} \frac{u^{(1)}_{\alpha,i}\,u^{(1)}_{\alpha,j}}{\lambda^{(1)}_\alpha}\, e^{-\lambda^{(1)}_\alpha |t - t'|}\,.$  (6)

Therefore, for the second layer, the input is a time- and space-correlated noise. For equal times, i.e. $t = t'$, one recovers the pseudo-inverse of $L^{(1)}$ [23]. Then, plugging Eq. (6) into Eq. (4) yields the second moment in the second layer, Eq. (7). The fluctuation of $y_i$ is thus a non-trivial function of the overlaps between $u^{(1)}_\gamma$ and $u^{(2)}_\beta$ and of their corresponding eigenvalues. Equation (7) is obtained for two different complex networks in the two layers. It is easier to understand what is going on by considering the specific case of similar layers, i.e. with the same complex network in both. In this particular setting, one has $u^{(1)}_\alpha = u^{(2)}_\alpha \equiv u_\alpha$ and $\lambda^{(1)}_\alpha = \lambda^{(2)}_\alpha \equiv \lambda_\alpha$, and Eq. (7) reduces to

$\langle y_i^2 \rangle = \frac{\eta_0^2}{4} \sum_{\alpha \ge 2} \frac{u_{\alpha,i}^2}{\lambda_\alpha^3}\,.$  (8)

Now, to see how the noisy first layer affects the second one, it is important to compare Eq. (8) to Eq. (5). Indeed, while in Eq. (5) one has the first power of $\lambda_\alpha$ in the denominator, in Eq. (8) it is the third power of $\lambda_\alpha$. This seemingly tiny difference might have serious implications. In particular, the slowest modes of the coupling network, i.e. the $\lambda_\alpha$'s that are closest to zero, primarily determine whether the noise is amplified or reduced in the second layer. Therefore, one expects poorly connected networks with low algebraic connectivity to be good candidates to amplify the fluctuations, while well-connected networks might efficiently reduce the fluctuations.
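The comparison between Eqs. (5) and (8) can be explored numerically. The sketch below is an illustration, not the author's code; it assumes NetworkX for graph construction and that the random graph comes out connected:

```python
import numpy as np
import networkx as nx

def layer_variances(G, eta0=1.0):
    """Node-averaged <x_i^2> (Eq. 5) and <y_i^2> (Eq. 8) for identical layers."""
    L = nx.laplacian_matrix(G).toarray().astype(float)
    lam, U = np.linalg.eigh(L)   # lam[0] ~ 0 is the zero mode
    lam, U = lam[1:], U[:, 1:]   # drop the zero mode
    x2 = (eta0**2 / 2.0) * np.sum(U**2 / lam, axis=1)
    y2 = (eta0**2 / 4.0) * np.sum(U**2 / lam**3, axis=1)
    return x2.mean(), y2.mean()

n = 50
for name, G in [("cycle", nx.cycle_graph(n)),
                ("Erdos-Renyi p=0.15", nx.gnp_random_graph(n, 0.15, seed=1))]:
    x2, y2 = layer_variances(G)
    print(f"{name}: <x^2> = {x2:.3g}, <y^2> = {y2:.3g}, "
          f"amplification = {y2 / x2:.3g}")
# The poorly connected cycle amplifies the noise strongly, while the
# well-connected random graph suppresses it, as argued in the text.
```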
Numerical Results.-First, I illustrate the case with the same coupling network in both layers, i.e. $L^{(1)} = L^{(2)}$. Figure 1(b)-(d) show the trajectories of the $x_i$'s and $y_i$'s in layers 1 and 2 respectively. Following Eq. (8), to obtain an amplification of the fluctuations I choose cycle networks, whose algebraic connectivity decreases with the size as $\lambda_2 = 2 - 2\cos(2\pi/n)$. This is shown in panel Fig. 1(b), where one clearly sees that the fluctuations in the second layer (bottom panel) are strongly amplified compared to the input noise of the first layer (top panel). In the thermodynamic limit, the amplification factor for such systems scales with the size as $\langle y_i^2 \rangle / \langle x_i^2 \rangle \propto n^4$. The latter is observed by comparing panels (b) and (c), where the network size has been doubled, and is further confirmed by the amplification factors given in the caption of Fig. 1. To obtain the opposite effect, namely a reduction of the fluctuations, one needs a network with larger algebraic connectivity. Therefore, I simulate two layers of the same Erdős-Rényi network with edge probability p = 0.15, which has a considerably larger $\lambda_2$ [see panel (d)]. The second moment in the second layer is significantly reduced compared to the first one.
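Continuing the sketch above, the $n^4$ scaling of the amplification factor on cycle networks can be checked directly:

```python
# Reuses layer_variances() from the previous sketch.
for n in (20, 40, 80):
    x2, y2 = layer_variances(nx.cycle_graph(n))
    print(f"n = {n}: amplification = {y2 / x2:.4g}")
# Doubling n increases the ratio by roughly 2^4 = 16, consistent with
# the <y_i^2>/<x_i^2> ~ n^4 scaling quoted in the text.
```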
Second, I consider the case of different networks in each layer. Here, going back to Eq. (7), fluctuations in the second layer crucially depend on the overlap between the Laplacian eigenvectors of the two layers. In particular, if the eigenvectors do not overlap enough, only little fluctuation will propagate into the second layer. To demonstrate this, I consider the overlap between the Fiedler modes of each layer and the amplification factor for all combinations of five coupling networks of size n = 50, namely a single cycle, two Watts-Strogatz (WS) networks with different rewiring probabilities [24], a Barabási-Albert (BA) network with m = 4 [1], and an Erdős-Rényi (ER) network [25]. This is shown in Tab. I. One observes a strong amplification for combinations of the cycle and WS I. This is due to the small algebraic connectivity in each layer and the overlap that is still significant (0.405) due to the low rewiring probability of WS I. One also notices a strong reduction of the fluctuations for combinations of BA and ER networks, which satisfy $\lambda^{(1)}_2, \lambda^{(2)}_2 \gg 1$. Interestingly, for the networks investigated here, having BA or ER in the first layer only allows reduction of the fluctuations in the second layer, while having a cycle, WS I or WS II in the first layer can lead to both amplification or reduction, depending on the type of network in the second layer.
More than two layers.-So far I considered two-layer systems. However, it is interesting to go one step further and have more than two layers. For example, one may add a third layer that is influenced by the second one exactly as the latter is influenced by the first one. The dynamics of that third layer would then read

$\dot{z}_i = -\sum_j L^{(3)}_{ij} z_j + f_i(\{y_k\})\,,$  (9)

where $f_i$ is the inter-layer coupling function defined above. Even though it is possible to calculate analytically the second moment of the $z_i$'s, here I only show numerical results. I illustrate two interesting effects in the case of three layers. First, the amplification of the fluctuations obtained in Fig. 1(b),(c) for the same network with low algebraic connectivity in each layer becomes even stronger when a third layer of the same network is added. Indeed, Fig. 2(a) shows the case of three layers of the same cycle network with nearest-neighbour coupling. Fluctuations in the third layer are strongly amplified compared to the first one. Significant amplification factors might then be achieved simply by adding layers of the same network with low algebraic connectivity. Second, amplification of the fluctuations in the third layer is still possible even if they were reduced in the second one. The latter is shown in Fig. 2(b), where the first and the third layers are cycle networks with nearest-neighbour coupling and the second layer is an Erdős-Rényi network with a higher algebraic connectivity. Due to the only marginal overlap between the eigenmodes $u_\alpha$, $\alpha \ge 2$, the second layer is able to follow closely the slowest modes of the first layer and therefore to transmit them to the third layer. Then, as the third layer has the same coupling network with low algebraic connectivity as the first layer, fluctuations may still be amplified.
More general coupling function.-The results obtained so far are for the simple choice of coupling function given above Eq. (3). Here I briefly show that similar effects are possible for a more general coupling function, given in Eq. (10), where $\tilde{x}_i = x_i - n^{-1}\sum_j x_j$ and $\mu, \nu \in \mathbb{R}$. Following the same steps as for the previous coupling function, one eventually obtains the second moment in the second layer, Eq. (11). Then, assuming that the two layers have the same coupling network leads to Eq. (12). First, for $\mu = 0$, $\nu = 1$ one correctly recovers Eq. (8). Second, if $\nu = 1$, $\mu > 0$, the amplification effect obtained for the simpler coupling function is still possible as long as $|2\lambda_2 + \mu| < 1$. Third, quite interestingly, for $\nu = -1$, $\mu < 0$ and $|\lambda_2 + \mu| < 1$, i.e. repulsive coupling, significant amplification might occur. Apart from these three cases, one can ensure that no amplification happens by simply choosing $\mu > 1 - \lambda_2$. The more complicated case of two different networks is left for further investigation.
Colored noise.-The amplification phenomenon described in the previous sections was derived analytically for the $\eta_i$'s being white noises, i.e. uncorrelated in time and spatially independent. However, it is not limited to white noise and might be even more important for colored noise, i.e. noise correlated in time. For example, one may take noisy sequences $\eta_i$ such that $\langle\eta_i(t)\eta_j(t')\rangle = \delta_{ij}\,\eta_0^2\, e^{-|t - t'|/\tau_0}$, where $\tau_0$ is the typical correlation time. The latter are standardly obtained from an Ornstein-Uhlenbeck process. Even though the calculations are doable, the resulting expressions are complicated and not very insightful. I therefore only discuss the limit of long correlation time, i.e. $\lambda_2 \tau_0 \ge 1$. In such a situation the second moments in the first and second layers are given by Eqs. (13) and (14). Unlike in the white-noise case, here one clearly sees what happens, even for two different networks in the two layers. Given that the overlap between $u^{(1)}_\gamma$ and the slowest modes $u^{(2)}_\beta$ is finite and not too small, fluctuations can be strongly amplified by having a second layer with $\lambda^{(2)}_\alpha \ll 1$. Comparing Eqs. (13), (14) to Eqs. (5), (7), one concludes that colored noise with a long correlation time leads to stronger amplification or reduction of the fluctuations than white noise.
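For completeness, such exponentially correlated noise is generated from an Ornstein-Uhlenbeck process via the standard exact discrete-time update; a minimal sketch (a textbook recipe, not code from this letter) is:

```python
import numpy as np

def ou_noise(n_steps, dt, tau0, eta0, rng=np.random.default_rng(0)):
    """OU sequence with <eta(t) eta(t')> = eta0^2 exp(-|t - t'|/tau0)."""
    a = np.exp(-dt / tau0)                 # exact one-step decay factor
    s = eta0 * np.sqrt(1.0 - a**2)         # keeps the stationary variance eta0^2
    eta = np.empty(n_steps)
    eta[0] = eta0 * rng.standard_normal()  # draw from the stationary distribution
    for k in range(1, n_steps):
        eta[k] = a * eta[k - 1] + s * rng.standard_normal()
    return eta

# Check the autocorrelation against the prescribed decay time tau0.
eta = ou_noise(200_000, dt=0.01, tau0=1.0, eta0=1.0)
lag = 100  # time shift of 1.0 = tau0
print(np.mean(eta[:-lag] * eta[lag:]))  # ~ exp(-1) ~ 0.37
```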
Conclusion.-Using a modal decomposition of a multi-layer system, I showed that noise originally injected in one layer might be significantly amplified when transmitted to other connected layers. In particular, the amplification strongly depends on the overlap between the eigenmodes of the network Laplacian matrices in each layer and their corresponding eigenvalues. On the one hand, when the overlap is finite and the algebraic connectivity low, fluctuations are amplified in the second layer. On the other hand, if the overlap is marginal or if the algebraic connectivity is high, then fluctuations are likely to be reduced in the second layer. Moreover, important amplification factors can be achieved in multi-layer systems by simply adding layers with the same poorly connected network. In addition to that, amplification of the fluctuations might still happen even when they are reduced at intermediate layers. As shown above, fluctuation amplification might be even more significant in the case of colored noise with correlation times longer than the intrinsic system time-scales. These results highlight how interdependent systems might be vulnerable to noisy signals. Ways to correct such vulnerabilities would be either to make sure that the algebraic connectivity in each layer is large enough or that the eigenmodes of connected layers do not overlap. The latter should be considered in future research. The amplification effect described here might also be useful for developing inference algorithms [26].
Interestingly, the results presented in this letter give indications about how a noise input should be correlated in space and time in order to induce larger or smaller fluctuations in single-layer diffusively coupled systems. Extensions of this work should consider this point in more detail, as well as different inter-layer coupling functions. Also, investigating moments higher than the second [27], or including inertia for the dynamical agents to potentially uncover resonances in the case of time-dependent inter-layer coupling [28], represent two research avenues of interest. | 2022-04-22T06:47:32.183Z | 2022-04-21T00:00:00.000 | {
"year": 2022,
"sha1": "ce5d9e8e6bb28d5bf5e838295c521cc684f5682b",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1088/2632-072x/ac7e9d",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "3ef871b4ba2917e6b3cfc73918d0af91b6bc2fae",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Physics"
]
} |
234504215 | pes2o/s2orc | v3-fos-license | Development of a photoionisation spectrometer for detection of atmospheric 85Kr
A higher than baseline atmospheric concentration of the radioactive fission product 85Kr is considered to be the best indicator of clandestine plutonium separation. Additionally, its high volatility makes it suitable for detecting leaks in nuclear waste containers and damaged fuel cladding. A spectrometer for ultra-trace analysis is currently under development and will be optimised for atmospheric monitoring of 85Kr. This device is based on an adapted form of collinear resonance ionisation spectroscopy, a technique developed at ISOLDE-CERN for performing precision measurements on exotic nuclei. The motivation for this device is explored, along with an overview of progress and future developments.
Introduction
85 Kr is a radioactive noble gas present in the atmosphere. Around 0.09 TBq of the total atmospheric inventory is produced naturally through the interaction of cosmic neutrons with the stable isotopic neighbour 84 Kr [1]. However, an estimated 5500 PBq of atmospheric 85 Kr originates from anthropogenic sources. Most of this is released by nuclear reprocessing facilities in the extraction of plutonium from spent nuclear fuel. 85 Kr is produced in nuclear fission with a fission yield of 0.3% [2]. Unless the fuel cladding is damaged, the 85 Kr formed in reactors is retained in the spent fuel rods, which are usually kept in storage for a minimum of 6 months after use in order to reduce their activity. A higher than baseline concentration of 85 Kr in the environment of a storage facility can indicate a leak in the fuel containers, as the gas will escape through any cracks in the cladding due to its high volatility.
During the dissolution of spent fuel for reprocessing, the 85 Kr in the rods is released into the atmosphere, leading to a spike in the local atmospheric concentration. If detected, this spike can be used as a signal that a nuclear facility is operating in the vicinity of the measurement. Detection of illegal reprocessing facilities is challenging and cannot be done with satellite imaging, as the buildings used are indistinguishable from other industrial facilities. Because of this, the International Atomic Energy Agency (IAEA) has introduced environmental sampling to the Nuclear Non-Proliferation Treaty (NPT) safeguards [3]. A study published by the IAEA [4] concluded that atmospheric monitoring for radioactive tracers is more likely to be successful than analysis of soil and groundwater, due to large uncertainties and low quantities of the isotopes involved in these measurements. Of the gaseous fission products, 85 Kr is the most suitable for use as a tracer; however, its 10.76-year half-life has resulted in a worldwide atmospheric background that would render releases from a facility undetectable at distances of more than a few hundred kilometres [5]. For this reason, random air sampling with a mobile detector system is preferable over a network of fixed monitoring stations, which would incur a high cost. Considering the current background in the Northern hemisphere, at least 50 samples per day would be required to monitor a region of 10 million km² for the absence of reprocessing activities of more than 270 g Pu per day [4]. Therefore, a 85 Kr monitoring system needs to have high throughput, fast turnaround for sample analysis and a compact design. 85 Kr is present in the atmosphere with an isotopic abundance of 10⁻¹² and an atmospheric concentration of 10⁻¹⁸. Because of this, a 85 Kr detector needs high sensitivity and mass resolution to be able to measure the 85 Kr signal over the much larger 84 Kr component, as well as to distinguish it from any isobaric contamination. The most effective technique currently available for 85 Kr measurement is Atom Trap Trace Analysis (ATTA), which selectively traps isotopes in a magneto-optical trap using a resonant laser beam and detects individual atoms through their fluorescence. ATTA is capable of analysing trace noble gases with sub-ppt (10⁻¹²) isotopic abundances [14]. To perform an ATTA measurement, at least 2 µl of pure krypton gas is separated from an air sample [12]. Including the time taken for this separation process, ATTA can determine the concentration of 85 Kr present in a sample in about 5 hours.
Collinear Resonance Ionisation Spectroscopy
Collinear Resonance Ionisation Spectroscopy (CRIS) is a laser spectroscopy technique developed at the ISOLDE (Isotope Separator On-Line DEvice) facility at CERN for performing high-precision measurements on exotic nuclei [22]. During a typical CRIS experiment, an ion beam is neutralised with sodium or potassium vapour and overlapped collinearly with a pulsed laser beam. The laser is tuned to the precise frequency required to excite an electronic transition in the element under investigation. The excited atoms are then ionised using a second laser beam or an electric field. The resulting ions are then electrostatically guided to a charged-particle detector for counting [13]. At CERN, CRIS is used to measure the hyperfine structure of atoms in order to extract relative charge radii and magnetic moments. Due to the high selectivity, sensitivity and resolution of the technique, CRIS is also well suited to ultra-trace element analysis, in which atoms of an element that makes up less than 0.0001% of a sample by weight are counted. When modest beam energies are required (5 to 30 keV), as is the case for this project, CRIS can be implemented as a compact, benchtop device, which will be adapted to be part of a mobile detector system.
A key problem in trace isotope analysis is eliminating background due to isobars and neighbouring masses. By maintaining a vacuum in the ionisation region of 10⁻¹⁰-10⁻⁹ mbar, background due to collisional ionisation is minimised [6]. In conventional resonant ionisation mass spectroscopy (RIMS) methods, selectivity is achieved through mass-separating the beam within the ion source and magnet, suppressing the signal from neighbouring masses by as much as 10⁸ [16]. However, when measuring extremely rare isotopes with abundant neighbours, such as 85 Kr, the desired signal is drowned out by the tail of the signal from the abundant neighbour. The collinear aspect of the CRIS method introduces improved resolution and additional suppression of neighbouring isotopes. Atoms of all isotopes in a sample are initially ionised and accelerated to a common energy of $qV$, where $q$ is the charge on the accelerated ion and $V$ is the accelerating potential, resulting in a reduction of the Doppler linewidth of the transition. Due to their differing masses, ions of different isotopes move with different velocities, experiencing different Doppler shifts of their transition energies. This introduces an additional kinematic shift between isotopes, $\Delta\nu$, given to first order by

$\Delta\nu = \pm\,\nu_0\,\sqrt{\frac{2qV}{c^2}}\left(\frac{1}{\sqrt{m_1}} - \frac{1}{\sqrt{m_2}}\right),$

where $\nu_0$ is the rest-frame atomic transition frequency and $m_1$ and $m_2$ are the masses of neighbouring isotopes. The plus and minus signs correspond to shifts induced when the ion beam propagates collinearly and anti-collinearly with the laser beam, respectively [19]. By increasing the separation between neighbouring isotopic peaks, the selectivity of the technique is increased. For example, the natural isotope shift between the neighbouring isotopes 85 Kr and 84 Kr for the 5s[3/2]₂ → 5p[3/2]₂ transition is 43.4 ± 6.3 MHz [20]. When accelerating to 5 keV, the induced kinematic shift is 1.1 GHz. With a natural linewidth 2Γ of 4.4 MHz, the selectivity of this transition is 7 × 10⁴. As selectivities are multiplicative, using the CRIS method with a two- or three-step resonant scheme can result in a total selectivity of more than 10⁸. A single CRIS spectrometer is capable of performing selective measurements on elements across the periodic table by adjusting the frequency of the lasers to the relevant transitions and optimising the potentials guiding ions to the detector. This allows for a trace analysis instrument with multi-element capability, which will be optimised for the measurement of 85 Kr as well as other radioactive tracers such as 14 C and 90 Sr. As part of this optimisation process, a krypton resonance ionisation scheme will be developed. Simulations based on electron-capture cross-section calculations [11] have demonstrated that after neutralisation by sodium vapour through the charge-exchange process, the metastable 5s[3/2]₂ state in krypton will be 63% populated. Metastable atoms will be resonantly excited to high-lying Rydberg states through an infrared transition, from which they will be ionised with a strong electric field. The overall efficiency of the CRIS technique using field ionisation with a 5 keV beam energy is estimated to be 20%. The estimated efficiency of each step is given in Table 1. These values are based on measurements from previous experiments and simulations [11] [21].
Table 1. Estimated efficiencies of processes involved in the CRIS technique [11] [21].
Process                                   Efficiency
Transmission from ion source to trap      > 90%
Transmission through trap                 64%
Neutralisation at 5 kV                    63%
Field ionisation from Rydberg states      80%
Detection efficiency                      80%
Transmission from trap to MCP             > 90%
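As a quick consistency check, the 20% overall efficiency quoted above is simply the product of the step efficiencies in Table 1; a short sketch (taking the "> 90%" entries at their 90% lower bounds) confirms this:

```python
# Overall CRIS efficiency as the product of the step efficiencies in Table 1.
steps = {
    "source -> trap":   0.90,  # lower bound, quoted as > 90%
    "through trap":     0.64,
    "neutralisation":   0.63,
    "field ionisation": 0.80,
    "detection":        0.80,
    "trap -> MCP":      0.90,  # lower bound, quoted as > 90%
}
eff = 1.0
for e in steps.values():
    eff *= e
print(f"overall efficiency ~ {eff:.1%}")  # ~ 20.9%, i.e. the quoted 20%
```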
Project outline
A CRIS spectrometer is under development at The University of Manchester for ultra-trace analysis of samples containing rare radioactive isotopes. This device will be optimised for the measurement of 85 Kr/Kr isotopic ratios. A permanent magnet electron cyclotron resonance (ECR) ion source will be used for the production of krypton ions from atmospheric samples. A radiofrequency quadrupole (RFQ) ion trap is under development to improve the properties of the beam for higher-quality measurements and increased duty cycle, and a field ionisation unit will be installed for the ionisation step. The experimental set-up for 85 Kr/Kr measurement is shown in Figure 1.
Figure 1. A schematic of the design of the CRIS spectrometer for 85 Kr measurement.
Ion production
A permanent-magnet ECR source with a continuously flowing gas feed and a maximum magnetic field of 875 Gauss has been developed and is under testing, based on those designed and constructed at Peking University [15]. The ECR source consists of a volume containing electrons and an applied magnetic field, into which 2.45 GHz microwaves are injected. A sample gas is also injected into the volume. When the frequency of the microwaves matches the gyration frequency of the electrons, the electrons gain enough kinetic energy to ionise the surrounding gas, creating a plasma. The plasma is extracted using electrostatic plates held at high voltage. It has been shown [7] that intense ion beams of up to 50 mA can be produced from air with compact ECR sources. This is beneficial for a CRIS spectrometer, which requires a high throughput to analyse enough air for reliable 85 Kr measurement in as short a time as possible. The CRIS ECR source can continuously and directly sample the air at a flow rate of 1 ml per minute, without the need for sample preparation. This flow rate corresponds to a krypton gas intake of 1 nl per minute. Taking the CRIS efficiency as 20%, as outlined in Table 1, and an 85 Kr isotopic abundance of 10⁻¹², this results in the detection of 53 atoms of 85 Kr in 10 minutes of sampling, from 10 nl of krypton gas in 10 ml of air. Therefore, to achieve a 10% statistical uncertainty with a 50 mA beam, the atmosphere would need to be sampled for under 20 minutes. The ECR source will be used in conjunction with a dipole magnet to initially separate 85 Kr⁺ ions from the primary beam. The 84 Kr⁺ component will be monitored on a Faraday cup. The rest of the beam, primarily consisting of nitrogen and oxygen, will be dumped onto another Faraday cup. As the light-mass beam will carry up to 250 W, this beam dump will be cooled with water.
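The detection-rate estimate above follows from an ideal-gas atom count. A short illustrative sketch of the arithmetic (assuming a molar volume of 22.4 l/mol at STP) reproduces the quoted figures:

```python
N_A = 6.022e23          # Avogadro's number, 1/mol
V_MOLAR = 22.4          # molar volume at STP, l/mol

kr_flow_nl_min = 1.0    # krypton intake quoted in the text, nl/min
abundance_85 = 1e-12    # 85Kr/Kr isotopic abundance
efficiency = 0.20       # overall CRIS efficiency from Table 1

kr_atoms_per_min = kr_flow_nl_min * 1e-9 / V_MOLAR * N_A
detected_per_min = kr_atoms_per_min * abundance_85 * efficiency
print(f"detected in 10 min: {10 * detected_per_min:.1f} atoms")
# ~53.8, consistent with the ~53 atoms quoted in the text.

# 10% statistical uncertainty requires ~100 counts (1/sqrt(N) = 0.1):
print(f"time for 100 counts: {100 / detected_per_min:.0f} min")
# ~19 min, i.e. under 20 minutes.
```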
Radiofrequency quadrupole (RFQ) ion trap
RFQ traps are used at radioactive beam facilities to improve the properties of ion beams for better efficiency, reduced background and higher quality of measurement, by trapping the ions between 4 cylindrical or hyperbolic electrode rods. Applying an RF potential that is 180° out of phase on adjacent rods generates a 2-d RF field with a minimum at the beam axis, confining the ions radially to the centre. The trapping volume is filled with a buffer gas which cools the beam as ions collide with gas particles. The combined effect of cooling and axial confinement decreases the emittance of the ion beam, leading to better transmission through the beamline and a greater probability of interaction with the lasers [8]. Additionally, cooling minimises the energy spread of the beam and reduces Doppler broadening of spectral lines, improving the resolution of the technique. A prototype RFQ trap has been developed and is currently under testing with a liquid-metal ion source. In the interest of designing a compact device, the RFQ trap has been built with a trapping region of 20 cm. The trap is filled with a helium buffer gas at a pressure between 10⁻² and 10⁻¹ mbar. The DC potential is applied to copper pads set in 3-d printed circuit boards, slotted in between the quadrupole rods. Both the rods and the DC elements are mounted on polylactic acid (PLA) 3-d printed end caps. The use of 3-d printed materials allows for rapid prototyping and design changes [9], and such materials have been shown to be vacuum compatible in previous work [10]. Ions are injected into and extracted from the trap using a series of circular electrodes.
Particle tracing simulations carried out using the COMSOL Multiphysics simulation platform have suggested that the current prototype RFQ trap will be capable of cooling and transmitting a beam of 85 Kr⁺ with the desired efficiencies. Collisions of ions with the helium buffer gas were modelled elastically. By performing parametric sweeps, optimal values of the potentials applied to the injection and extraction electrodes have been calculated for maximal transmission through the trap. These values are displayed in Figure 2. The transmission efficiency of the injection optics is 94%. Primary losses in beam current occur during the initial extraction phase. The optimal average pressure of the buffer gas in the trapping volume was found to be 0.04 mbar. The prototype trap is currently under testing with an RF frequency of 0.75 MHz. Combinations of RF frequencies and amplitudes for a constant q value of 0.6 were tested to determine the extent of the impact of using larger RF amplitudes. In these simulations, an emittance of 35 π mm mrad was used, with a beam current of 100 pA and a gas pressure of 0.04 mbar. The results are displayed in Table 2. Permanent-magnet ECR ion sources have been shown to provide beams with transverse emittances (ε) of less than 0.2 π mm mrad [17]. The ISOLDE beam emittance depends on the combination of target and ion source used, but has been measured to be 35 π mm mrad [18]. Both of these values were tested in the simulation with an RF frequency of 1.2 MHz, an RF amplitude of 210 V and a beam current of 100 pA. Intermediate values were also tested. The resulting transmission efficiencies are displayed in Table 2. The quoted emittances are root-mean-squared values, using the convention that the area of the trace-space ellipse A = πε.
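For context, the quoted constant q value of 0.6 is the Mathieu stability parameter of the quadrupole. The sketch below uses the standard relation q = 4eV_RF/(m r₀² Ω²); note that conventions for the RF amplitude vary between set-ups, and the field radius r₀ is not stated in the text, so the value used here is purely illustrative:

```python
import math

def mathieu_q(V_rf, f_rf, mass_u, r0, charge=1):
    """Mathieu parameter q = 4 e V_rf / (m r0^2 Omega^2) for an RF quadrupole."""
    e = 1.602e-19                  # elementary charge, C
    u = 1.661e-27                  # atomic mass unit, kg
    omega = 2.0 * math.pi * f_rf   # angular RF frequency, rad/s
    return 4.0 * charge * e * V_rf / (mass_u * u * r0**2 * omega**2)

# 85Kr+ with the quoted 210 V amplitude at 1.2 MHz; r0 = 5.3 mm is an
# assumed, illustrative field radius (not given in the text).
print(f"q = {mathieu_q(210.0, 1.2e6, 85.0, 5.3e-3):.2f}")  # ~ 0.60
```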
Conclusion and future work
A CRIS spectrometer for ultra-trace analysis of radioactive isotopes is under development. The design of this instrument will be optimised for the measurement of atmospheric 85 Kr concentrations, which has multiple applications in nuclear safety. A compact ECR ion source will be used with a dipole magnet to produce a Kr⁺ beam from air samples that will be fed continuously into the source at 1 ml per minute. A compact RFQ trap that will cool and bunch the beam is under testing, with simulations suggesting the trap will be able to successfully cool and transmit a beam of 85 Kr with an efficiency of 60 to 80%, depending on the properties of the beam produced by the ECR source.

Figure 2. A COMSOL simulation tracing 85 Kr⁺ ion transport through the RFQ trap. For this simulation, the buffer gas was at 0.04 mbar, U_RF at 210 V with a frequency of 1.2 MHz, beam emittance 0.8 π mm mrad and beam current 100 pA. Consecutive DC segments differ in potential by 5 V. The colour scale represents the kinetic energy of the ions.

As primary losses occur during the initial extraction phase, a redesign of the extraction optics will be considered. Simulations suggested a significant improvement in transmission with larger RF amplitudes, which will be verified experimentally after the initial phase of testing. As the simulations did not account for space-charge effects, these will be incorporated into the next phase of simulations. The trap will also be simulated for bunching the beam, which improves the CRIS efficiency by minimizing duty-cycle losses. The CRIS method will be used to selectively excite and ionise 85 Kr atoms. The high sensitivity and selectivity of the technique, as well as its capacity to produce fast results from a compact set-up, makes this a competitive method for analysis of trace isotopes in environmental samples, with 85 Kr atoms expected to be detected to within 10% statistical uncertainty in less than 20 minutes. | 2020-12-24T09:04:54.496Z | 2020-12-01T00:00:00.000 | {
"year": 2020,
"sha1": "6c2826a1b97ba907f4facc2ac9b8a81541f72e1d",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1643/1/012038",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "c38560152bd1aa06c46e9c79318a0d80a5ed8b63",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Physics"
]
} |
128069413 | pes2o/s2orc | v3-fos-license | Investigation of Aerosol Properties and Structures in Two Representative Meteorological Situations over the Vipava Valley Using Polarization Raman LiDAR
Vipava valley in Slovenia is a representative hot-spot for complex mixtures of different aerosol types of both anthropogenic and natural origin. Aerosol loading distributions and optical properties were investigated using a two-wavelength polarization Raman LiDAR, which provided extinction coefficient, backscatter coefficient, depolarization ratio, backscatter Ångström exponent and LiDAR ratio profiles. Two different representative meteorological situations were investigated to explore the possibility of identifying aerosol types present in the valley. In the first case, we investigated the effect of strong downslope (Bora) wind on aerosol structures and characteristics. In addition to observing Kelvin–Helmholtz instability above the valley, at the height of the adjacent mountain ridge, we found new evidence for Bora-induced processes which inject soil dust aerosols into the free troposphere up to twice the height of the planetary boundary layer (PBL). In the second case, we investigated aerosol properties and distributions in stable weather conditions. From the observed stratified vertical aerosol structure and specific optical properties of different layers we identified predominant aerosol types in these layers.
Introduction
The Vipava valley, located in the southwestern part of Slovenia, about 30 km inland from the Bay of Trieste in the Adriatic (Figure 1), represents a typical case of a complex Alpine terrain configuration. Due to the morphological structure of this region, characterized by a multitude of basin valleys surrounded by mountains, ventilation in these valleys is predominantly poor, which leads to the formation of strong vertical aerosol gradients in the lower troposphere. In stable atmospheric conditions within the lower troposphere, especially in the winter, local emissions of anthropogenic aerosols (biomass burning and traffic) cause the Vipava valley to become a local air pollution hot-spot [1]. As a part of the Mediterranean region, the area is also frequently affected by long-range transport of Saharan dust from North Africa across the Mediterranean and the Adriatic Sea, as well as by sea salt. The most important phenomenon in the region that ventilates the valleys are episodes of Bora [2], a katabatic wind capable of mixing particulate matter and trace gases from the surface into the free troposphere [3,4]. To monitor the spatial loading distribution of different aerosol types and to understand the effects of various aerosol sources, experimental data with sufficiently high temporal and spatial resolution are needed. A variety of weather conditions with specific localized impacts on vertical mixing need to be investigated [5]. In comparison to air-based in situ measurements, high temporal and spatial resolution LiDAR measurements have proven to be very useful for identifying aerosol sources and transport mechanisms as well as the evolution of the PBL [6][7][8][9][10]. Although satellite-based global observation tools [11][12][13][14] could in principle be used, these measurements have considerable uncertainty and are available only in limited time slots, as satellites have typical revisit times of two weeks.
Aerosol optical properties depending on aerosol size, shape and refractive index (for example, the LiDAR ratio and the depolarization ratio) can, however, be reliably obtained using ground-based multiple-wavelength polarization Raman LiDARs [15][16][17]. The investigation was therefore performed using a dedicated two-wavelength polarization Raman LiDAR system, developed for this study, which provided experimental data on aerosol optical properties not available in previous studies. The most important parameters that were used for aerosol identification are the particle depolarization ratio (PDR), available from the measurement of different polarizations of the backscattered signal at 355 nm (PDR differs considerably between spherical particles, such as water droplets or smoke soot, and non-spherical particles, such as ice crystals or mineral dust [15,18,19]), the backscatter Ångström exponent (BAE, related to the aerosol size distribution [16,[20][21][22]) and the LiDAR ratio (LR) between the extinction and backscatter coefficients (related to the aerosol size distribution and refractive index [17,[23][24][25][26][27]). Aerosol identification was made using a combination of PDR, LR, and BAE [17,25,[28][29][30][31]. The selection of the predominant aerosol type was additionally verified by in situ measurements and by modeling of backward airmass trajectories [32,33]. The aim of this paper is to reveal aerosol vertical structures and the spatial distribution of specific aerosol types under two representative meteorological conditions for this region, during a Bora onset and in stable weather. In the first study of this kind in Slovenia, using a two-wavelength polarization Raman LiDAR as an aerosol identification tool, we attempted to reveal the properties and processes of aerosol vertical mixing induced by Bora, as well as to investigate mixtures of different aerosol types of both anthropogenic and natural origin in the case of elevated aerosol concentrations at a local hot-spot location.
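To make the retrieved quantities concrete, the BAE follows from the two backscatter coefficients via the power-law relation β(λ) ∝ λ^(−BAE), and the LR is the extinction-to-backscatter ratio. The sketch below is illustrative only; the single-bin values are invented, not measurements from this campaign:

```python
import numpy as np

def backscatter_angstrom_exponent(beta_355, beta_1064):
    """BAE from beta(lambda) ~ lambda^(-BAE) between 355 and 1064 nm."""
    return np.log(beta_355 / beta_1064) / np.log(1064.0 / 355.0)

def lidar_ratio(alpha, beta):
    """LiDAR ratio (sr): extinction-to-backscatter ratio."""
    return alpha / beta

# Hypothetical single-range-bin values, in km^-1 sr^-1 and km^-1:
beta355, beta1064, alpha355 = 2.0e-3, 8.0e-4, 0.10
print(f"BAE = {backscatter_angstrom_exponent(beta355, beta1064):.2f}")
print(f"LR  = {lidar_ratio(alpha355, beta355):.0f} sr")
```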
Methodology
The measurements took place between 31 August and 22 December 2017 at the Vipava valley floor, in the town of Ajdovščina (45.87° N, 13.90° E, 125 m a.s.l.), where the LiDAR system and a wind sensor were installed. Additionally, aerosol absorption coefficients were measured by an aethalometer at Otlica (951 m a.s.l., 5 km horizontally displaced from Ajdovščina), where the measuring site was located on the valley-side edge of the Trnovski gozd plateau (Figure 1). The purpose of setting up two measurement sites was to distinguish advected aerosols from the local background, as well as to obtain information about aerosol vertical mixing. Radiosonde data from Rivolto (113 m a.s.l., about 67 km away from Ajdovščina) were also used. In total, we accumulated 192 h of LiDAR data over 32 days without precipitation or low clouds.
LiDAR System
A two-wavelength polarization Raman LiDAR system developed at the University of Nova Gorica was used to simultaneously provide vertical profiles of aerosol backscatter coefficients at 355 nm and 1064 nm, the LiDAR ratio at 355 nm, the depolarization ratio at 355 nm and the backscatter Ångström exponent between 355 and 1064 nm. Combining these data facilitates improved detection of aerosol types, properties and origin.
In order to be able to take data in all weather conditions, including operation during very strong winds, the LiDAR is deployed indoors and accesses the atmosphere through a rooftop UV-transparent window. Two separate lasers with synchronized triggers were used to emit 355 nm (UV) and 1064 nm (IR) light. The ranges for complete overlap between the field of view of the telescope and the divergence of the laser pulses were measured to be about 200 m for the IR and about 300 m for the UV laser. Four separate backscattering channels (the vibrational nitrogen Raman signal at 386.7 nm, two Mie-Rayleigh signals at 355 nm with different polarization planes and the Mie-Rayleigh signal at 1064 nm) were used in this study. A half-wave plate was installed in front of the polarization channels to calibrate the volume depolarization ratio (VDR) using the so-called ±45° method, and to correct the polarization plane offset [35]. The main components of the system are listed in Table 1. The configuration and optimization of the LiDAR system is described in detail in [36].
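As a rough illustration of the ±45° calibration mentioned above, the sketch below (a hypothetical minimal implementation, not the authors' actual processing code; all function names are our own) computes the gain ratio of the two polarization channels as the geometric mean of the cross-to-parallel signal ratios recorded at the two half-wave-plate positions, and then applies it to obtain the VDR.

```python
import numpy as np

def gain_ratio_pm45(cross_p45, par_p45, cross_m45, par_m45):
    """Gain ratio of the polarization channels from the +/-45 deg method.

    At half-wave-plate angles of +45 and -45 degrees the atmospheric
    contribution to both channels is equal, so the geometric mean of the
    cross/parallel signal ratios isolates the instrumental gain ratio.
    """
    return np.sqrt((cross_p45 / par_p45) * (cross_m45 / par_m45))

def volume_depolarization(cross, par, gain_ratio):
    """Volume depolarization ratio from gain-calibrated channel signals."""
    return (cross / par) / gain_ratio
```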
In Situ Measurements
The aerosol absorption coefficient and black carbon (BC) concentration were measured using a Magee Scientific aethalometer AE33 at Otlica. The device provides aerosol absorption coefficients at seven different wavelengths (370 nm, 470 nm, 520 nm, 590 nm, 660 nm, 880 nm and 950 nm) [37]. The BC concentration is calculated from measurements of the rate of increase of light attenuation at the 880 nm wavelength with 1 min time resolution. A mass absorption cross section of 7.77 m² g⁻¹ at 880 nm and a multiple scattering parameter C = 1.57 were used to convert the attenuation measurements to BC mass concentration according to the supplement in [37].
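The attenuation-to-concentration conversion can be sketched as follows; this is a simplified illustration of the relations cited above, in which the function names, the argument set and the omission of the AE33's internal filter-loading compensation are our own assumptions.

```python
def bc_from_attenuation(datn_dt, spot_area_m2, flow_m3_s,
                        mac_m2_g=7.77, c_mult=1.57):
    """Convert the rate of attenuation increase at 880 nm to a BC mass
    concentration (g m^-3). ATN is taken as 100 * ln(I_ref / I); the AE33's
    internal filter-loading compensation is omitted in this sketch.
    """
    b_atn = (spot_area_m2 / flow_m3_s) * datn_dt / 100.0  # attenuation coefficient, m^-1
    b_abs = b_atn / c_mult        # correct for multiple scattering in the filter
    return b_abs / mac_m2_g       # divide by the mass absorption cross section

# BC in ng m^-3 follows by multiplying the result by 1e9.
```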
Meteorological Data
Relevant vertical profiles of meteorological data (temperature, pressure, relative humidity, wind speed and direction) were obtained from the two radiosonde launch stations closest to Ajdovščina, Ljubljana (50 km away) and Rivolto (67 km away). Both stations provide data on a daily basis, at 05:00 CET (Ljubljana) and between 00:00 and 01:00 CET (Rivolto). All heights in the radiosonde data were reprocessed to be relative to Ajdovščina. Despite the moderate horizontal distance of both radiosonde sites from our measurement area, comparable results from the two indicate that the atmospheric vertical structure in the free atmosphere above the Vipava valley, which is located between them, can be well described by the available radiosonde data [38,39]. A wind sensor (ultrasonic anemometer Vaisala WMT700) was co-located with the LiDAR and its data were used to select downslope wind cases. The HYSPLIT model [40] was used in its backward-trajectory mode, with the final point of each trajectory set to the location of our LiDAR site; the resulting backward airmass trajectories were used to identify long-range transport of airmasses into the Vipava valley region and to suggest the regions in which the sources of the observed aerosol types lie.
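Static stability, which is later inferred from these radiosonde profiles, can be checked with the dry potential temperature; the short sketch below is our own illustration, with assumed function names.

```python
import numpy as np

def potential_temperature(T_K, p_hPa, p0_hPa=1000.0, kappa=0.2854):
    """Dry potential temperature theta = T * (p0 / p)**kappa from
    radiosonde temperature (K) and pressure (hPa) profiles."""
    return np.asarray(T_K) * (p0_hPa / np.asarray(p_hPa)) ** kappa

def stable_layers(theta, z_m):
    """Boolean mask of levels where d(theta)/dz > 0, i.e. where vertical
    motion is suppressed (as in the stable-atmosphere case of the paper)."""
    return np.gradient(theta, z_m) > 0.0
```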
Data Processing
Aerosol backscatter coefficient profiles were extracted from the IR and UV Mie-Rayleigh scattering signals using the Klett method [41,42]. Only data with a signal-to-noise ratio above 5, at least up to the reference height (which was above 5 km), were used. In the nighttime, extinction and backscattering coefficient profiles at 355 nm were additionally retrieved from the N₂ Raman signal and the Mie-Rayleigh UV signal using the standard Raman method [43,44], which allowed us to determine the LiDAR ratio [27]. When available, backscatter coefficient profiles at 355 nm retrieved by the Raman method were used in the subsequent atmospheric studies. While other methods to calculate the VDR exist [35], we selected the method used in [45] to be able to compare our results to previous measurements and to classify the observed aerosols into established aerosol categories. The particle depolarization ratio (PDR) was separated from the VDR by evaluating the molecular backscatter coefficient using the properties of the standard atmosphere and radiosonde data [45]. The value of the molecular depolarization ratio (MDR) depends on the temperature and on the filter bandwidth of the corresponding detectors. The MDR in our system was taken to be between 0.007 and 0.008 for temperatures between 173 K and 273 K [46]. The backscatter-related Ångström exponent (BAE) was calculated from the backscatter coefficients measured at 355 nm and 1064 nm. Small BAE values indicate large aerosol size, and zero is consistent with no wavelength dependence of scattering, which is true for particles much larger than either of the two wavelengths used. All LiDAR techniques used and their differences are reviewed in detail in [36].
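The two derived quantities just described reduce to short formulas; the sketch below is our own hedged illustration (the PDR expression is the standard relation commonly used with this type of calibration, and all function names are assumptions).

```python
import numpy as np

def backscatter_angstrom_exponent(beta_355, beta_1064):
    """BAE from the spectral dependence beta ~ lambda**(-BAE); BAE -> 0 for
    particles much larger than both wavelengths."""
    return np.log(beta_355 / beta_1064) / np.log(1064.0 / 355.0)

def particle_depolarization(vdr, mdr, backscatter_ratio):
    """PDR from the volume ratio (vdr), the molecular ratio (mdr, here
    0.007-0.008) and the backscatter ratio R = (beta_mol + beta_aer)/beta_mol
    evaluated from standard-atmosphere and radiosonde molecular profiles."""
    R = backscatter_ratio
    return (((1.0 + mdr) * vdr * R - (1.0 + vdr) * mdr)
            / ((1.0 + mdr) * R - (1.0 + vdr)))
```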
The uncertainties of the retrieved atmospheric optical properties (backscatter coefficients, extinction coefficient, LiDAR ratio, Ångström exponent, depolarization ratio) were considered separately for each type of investigated representative weather conditions. In the case of aerosol properties in Bora, which were studied during the day, only elastic scattering signals were available. Due to Bora-induced vertical mixing, mineral dust was identified to be the predominant aerosol type below 3 km, and the height dependence of the dust LiDAR ratio was expected to be weak [45]. With the assumed LiDAR ratio for dust of 50 ± 10 sr, the uncertainty of the backscatter coefficients was taken to be 10% [22,47], and the uncertainty of the resulting backscatter Ångström exponent was taken to be 15%. In the case of the stable atmosphere, where all LiDAR channels were available, the main uncertainties arose from the assumption of the extinction Ångström exponent (5%) [27], the selection of the "aerosol-free" reference range value for signal normalization (5%) and the height-dependent signal-to-noise level and applied smoothing (5%). Due to the use of simultaneously available nearby radiosonde data, the uncertainties of the temperature and pressure profiles were neglected. The total uncertainty of the retrieved LiDAR ratio was thus assumed to be 10%. The uncertainty of the depolarization ratio was 20% after the calibration using the method described in [48,49].
The absorption Ångström exponent (AAE) for completely black spherical aerosols can be obtained from the absorption coefficients measured by the aethalometer. In our case, the AAE was determined between 470 nm and 950 nm, which allowed primary aerosol sources to be separated into traffic and biomass burning [50]. BC aerosols were categorized based on the value of the AAE, where AAE ≤ 1 refers to pure traffic and AAE > 1.7 refers to pure biomass burning [50,51].
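A minimal sketch of this source attribution, assuming the two AE33 channels named above (the function names and the handling of intermediate values as "mixed" are our own choices):

```python
import numpy as np

def absorption_angstrom_exponent(b_abs_470, b_abs_950):
    """AAE from the spectral dependence b_abs ~ lambda**(-AAE)."""
    return np.log(b_abs_470 / b_abs_950) / np.log(950.0 / 470.0)

def classify_bc_source(aae):
    """Coarse attribution used in the paper: AAE <= 1 -> traffic,
    AAE > 1.7 -> biomass burning; values in between are treated as mixed."""
    if aae <= 1.0:
        return "traffic"
    if aae > 1.7:
        return "biomass burning"
    return "mixed"
```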
Results and Discussion
The retrieved remote sensing and in situ data were classified with respect to local wind conditions, which had previously been categorized in a long-term statistical study of wind conditions in the valley [39]. In the 32 days when both datasets were simultaneously available, two predominant categories were found. One category (29 days) refers to calm and stable atmospheric conditions, which are often accompanied by elevated aerosol concentrations within the valley and a stratified atmospheric structure. To benefit from simultaneous measurements in all available LiDAR return channels, aerosol properties and structures were studied during the night. The second category (3 days) refers to Bora episodes, which are common in the Vipava valley. The Bora case is characterized by strong and turbulent airflow close to the valley floor, and periodic structures can be found at the approximate height of the orographic barrier [39]. A typical case from each category was selected for detailed analysis.
Aerosol Properties in Bora
Typical Bora conditions were present on 8 September 2017, with the downslope wind from the NE reaching peak velocities of about 16 m/s (Figure 1). The Bora flow was accompanied by the presence of Kelvin-Helmholtz instability above the valley at the height of the barrier, as suggested by the different ground (NE) and radiosonde (SW) wind directions above the barrier as well as by the periodicity of the atmospheric structures retrieved by LiDAR (Figure 2). Due to differences in airmass density, wind speed and wind direction above and below the barrier, waves formed at the interface between the layers, pushing the denser air above the less dense air and leading to convective instability, wave breakdown and turbulence production [2]. The investigation of aerosol structures and properties above the valley is based on LiDAR data between 10:30 and 14:00 CET. As the measurements took place in the daytime, only elastic scattering data are available. The first occurrence of a large gradient in the IR LiDAR return signal was taken to correspond to the top of the PBL, which was in this case at about 1 km above the surface. Using aerosols as tracers carried by the airmasses, the elastic scattering information also provides insight into airflow properties, such as circulations that lift particles or trace gases into the free troposphere [3]. These circulations can be seen as distinct aerosol peaks of short duration above the PBL, especially after 13:00 CET in Figure 2a. With time, the aerosol peaks grew stronger and reached higher above the PBL. They appear at 5-10 min intervals, which may be due to the Kelvin-Helmholtz instability and other Bora properties. The BAE, related to the size distribution of aerosols (Figure 2b), indicates that relatively large particles enter the PBL from the surface (blue) and are driven above the PBL in the observed aerosol peaks. The performance of the BAE and PDR retrieval was in this case also checked using the layer of scattered water clouds present at heights between 2 and 3 km. The BAE values in the cloud (Figure 2b at 12:00 CET) are almost zero and are the lowest in the entire measurement. The same is true for the PDR values of 0.1 in the cloud (Figure 2c), which are typical for water droplets [52]. Two to three times larger PDR values (between 0.2 and 0.25) found elsewhere indicate the presence of non-spherical particles (such as dust picked up from the surface), which agrees with the observed small BAE values.
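The quantity plotted in Figure 2a and the timing of the aerosol peaks can be illustrated with the following sketch; the peak-timing helper is a deliberately crude, assumed approach (simple threshold crossings), not the analysis actually applied in the paper.

```python
import numpy as np

def log_range_corrected(signal, ranges_m):
    """Logarithm of the range-square-corrected LiDAR return, ln(P(r) * r^2),
    the quantity shown in Figure 2a."""
    return np.log(signal * ranges_m**2)

def peak_intervals(time_s, series, threshold):
    """Time differences between successive upward threshold crossings, a
    rough way to check the 5-10 min periodicity of aerosol peaks above
    the PBL. time_s and series are matching 1-D numpy arrays."""
    above = series > threshold
    onsets = np.where(above[1:] & ~above[:-1])[0] + 1
    return np.diff(time_s[onsets])
```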
Aerosol Properties in Stable Atmosphere
Atmospheric stratification over complex terrain is common in stable weather conditions. As an example of the presence of different aerosol types from various sources, we present the conditions in a cloudless period of the night of 31 August 2017, between 23:40 CET and 00:10 CET on the next day. The presence of marine aerosols from the Adriatic and the Mediterranean was expected because of long-range aerosol transport, while local aerosols mainly originated from combustion emissions. Atmospheric stability was verified using the two radiosonde profiles (the downstream profile at Ljubljana and the upstream profile at Rivolto on 1 September 2017, Figure 3), which were found to be very similar. Increasing potential temperature with height indicates stably stratified conditions and the suppression of vertical air motion. The SW wind above 1.4 km, obtained from the radiosonde, indicates the arrival of airmasses from the Mediterranean, which carried marine aerosols. The IR LiDAR return as well as the BAE and the PDR show distinct atmospheric stratification (Figure 4). The PBL is visible below 0.7 km, the residual layer (RL) between 0.7 km and 1.4 km and an elevated aerosol layer (EAL) above 1.4 km with a peak aerosol concentration at about 2 km. The same stratification was observed in the radiosonde data as well, where the boundaries between the layers could be identified by large changes in the gradients of the relative humidity and potential temperature. In addition to lower relative humidity, the airmasses in the residual layer between 0.7 km and 1.4 km move in a different direction (Figure 3). This could cause some mixing at the top of the PBL; however, it was inherently intermittent, so it did not influence the conditions in the PBL. Optical properties of the aerosols in each of the three layers were investigated based on the retrieved extinction coefficient, backscatter coefficients, depolarization ratio, BAE and LR profiles (Figure 5). The backscatter coefficients (9 × 10⁻³ km⁻¹ sr⁻¹ at 355 nm and 8 × 10⁻³ km⁻¹ sr⁻¹ at 1064 nm) and the extinction coefficient (0.4 km⁻¹) were found to be largest within the PBL. The values of the observables sensitive to aerosol characteristics (LR, PDR and BAE) were found to differ significantly between the layers (Table 2) and indicate the presence of different aerosol types. The backscatter coefficients and the extinction coefficient were used for the estimation of aerosol loading, while LR, PDR and BAE were used for the assessment of predominant aerosol types. The PDR was the most important parameter for the identification of long-range transport aerosols, which in the Mediterranean region often include mineral dust from Northern Africa. The predominant type of aerosols in each layer was assessed by comparing our LR, PDR and BAE values to those from previous experiments (Table 2), where known aerosol sources were observed. The measured aerosol optical properties in the PBL correspond well to those from combustion. Due to atmospheric stability and the absence of mineral dust in the PBL (PDR values were below 10%), the predominant source of aerosols was expected to be anthropogenic. The primary sources of these aerosols were determined based on the in situ aethalometer measurements at Otlica. Increased AAE values during the night (when the LiDAR measurements took place), reaching values of 1.6 after 21:00 CET, indicate a predominant presence of biomass burning aerosols. Daytime AAE values are in contrast close to 1, which suggests that the predominant aerosol source in the PBL during the day was traffic. The daily evolution
of BC concentration and AAE is shown in Figure 6 (right). Based on the significantly (3 times) lower PDR and LR and the 30% lower BAE, the predominant type of aerosols in the EAL was expected to be marine. This choice is supported by the 48-h HYSPLIT backward airmass trajectories ending within the EAL, which passed at heights well below 300 m above the Adriatic Sea (Figure 6, left), so they can be expected to carry sea-salt aerosols. The presence of well-mixed marine aerosols above 2 km is supported also by the radiosonde measurements: the relative humidity (Figure 3a) consistently exceeded 80%, and the modeled SW arrival direction agrees with the radiosonde wind data. In the RL, intermediate values of the identification parameters suggest a mixture of biomass burning and marine aerosols.
Conclusions
Based on the remote sensing data retrieved under two representative types of local weather conditions, we demonstrated that the predominant aerosol types in specific atmospheric layers can be successfully identified using a combination of information on aerosol optical properties and on the vertical atmospheric structure. In the case of strong Bora, the aerosol identification capability of the LiDAR system allowed us to make the first direct observation of the injection of dust aerosols from the ground into the free troposphere, as high as twice the PBL height. Vertical mixing, driven by turbulent Bora airflows, is an important mechanism for long-range transport of local aerosols from the Vipava valley. The observed 5-10 min periodicity of aerosol peaks above the PBL may be related to atmospheric oscillations such as gravity waves or the Kelvin-Helmholtz instability, whose periodicity above the Vipava valley was found to be 2-12 min [4]. The obtained information on atmospheric structures may in the future be used to improve the representativeness of local-scale numerical airflow modeling during Bora episodes. In particular, the period of gravity waves or of atmospheric structures indicating airmass motion may be used to evaluate model performance. In the case of stable atmospheric conditions and a stratified atmosphere, the predominant aerosols in the PBL were found to originate from local anthropogenic sources. The choice of predominant aerosol type was made based on other measurements of optical properties for well-known aerosol sources in Europe. The performance of the LiDAR-based aerosol identification was cross-checked using radiosonde data, BC measurements and backward trajectory modeling. Long-range transport aerosols, such as the marine aerosols in the presented stable-atmosphere case, appeared above the Vipava valley but did not significantly affect the aerosol composition within the PBL. The results of the present study also indicate that systematic monitoring of aerosol characteristics retrievable from LiDAR data, in combination with weather information and in situ measurements of the properties of atmospheric particulate matter, could be used to investigate processes such as aerosol formation and aging mechanisms and the extent of their vertical mixing, which would help to improve local air quality under unfavorable weather conditions.
Figure 1.
Figure 1. Terrain configuration of the Vipava valley [34] and its location in central Europe (inlay, licensed under https://creativecommons.org/licenses/by-sa/3.0 (CC BY-SA 3.0) by A. Mladenovic). Towards the Adriatic coast in the south-west, the valley (125 m above sea level) is closed by the Karst plateau (up to 300 m a.s.l.), while to the north-east the terrain rises steeply to the Trnovski gozd plateau (up to 1200 m a.s.l.). The measurement sites, horizontally displaced by about 5 km, are marked by black points. The wind rose shows the wind speed and direction distribution in Ajdovščina for the Bora outbreak on 8 September 2017. The predominant direction was NE, with speeds exceeding 16 m/s.
Figure 2.
Figure 2. Temporal variation of the aerosol distribution over the Vipava valley during a Bora episode on 8 September 2017 from 10:30 to 14:00 CET. (a) Logarithm of the range-square-corrected LiDAR return at 1064 nm (arbitrary units). (b) BAE based on the combination of the 355 nm and 1064 nm backscatter coefficients. (c) PDR. For all three panels the data were re-sampled to 18.75 m range resolution. The values are shown for heights within the complete LiDAR overlap range. The heights are relative to Ajdovščina.
Figure 3.
Figure 3. Radiosonde profiles from Ljubljana (blue) at 5:00 CET and Rivolto (red) at 1:00 CET on 1 September 2017. (a) Potential temperature (θ); (b) relative humidity; (c) wind speed and (d) wind direction. Horizontal lines at 0.7 km and 1.4 km denote significant slope changes in the relative humidity, potential temperature and wind speed profiles. The height is relative to Ajdovščina.
Figure 4.
Figure 4. Temporal variations of the IR LiDAR return (a), BAE (b) and PDR (c) in stable atmosphere conditions on 31 August 2017 between 23:40 CET and 00:10(+1) CET show a stratified atmosphere with no apparent mixing between different layers. The PBL was found to be below 0.7 km, the residual layer (RL) between 0.7 km and 1.4 km and an elevated aerosol layer (EAL) above 1.7 km with a peak aerosol concentration at about 2 km. All data were re-sampled to 18.75 m range resolution. The values are shown for heights within the complete LiDAR overlap range. The heights are relative to Ajdovščina.
Figure 5.
Figure 5. Aerosol optical properties obtained from LiDAR data in stable atmosphere on 31 August 2017 at 23:40-00:10(+1) CET. (a) BS coefficients at 1064 nm (red) using the Klett and at 355 nm (blue) using the Raman method; (b) extinction coefficient at 355 nm; (c) VDR and PDR at 355 nm; (d) LR at 355 nm and (e) the BAE between 355 and 1064 nm. Error bars represent the total uncertainty of each presented quantity. Horizontal lines (black and green) indicate the boundaries between different elevated aerosol layers.
Table 2.
Table 2. Aerosols in the three observed atmospheric layers (PBL, RL and EAL) were characterized using observables sensitive to intrinsic aerosol properties. The particle depolarization ratio (PDR) and LiDAR ratio (LR) were obtained at 355 nm, while the backscatter Ångström exponent (BAE) was retrieved using both the 355 nm and 1064 nm backscattering coefficients. The predominant aerosol type in each layer was chosen based on the values of aerosol optical properties of particular aerosol types investigated in the reference papers listed in the last column.
Table 1.
Main components of the custom-made polarization Raman-Mie LiDAR in Ajdovščina, which was the main device for the investigation of aerosol properties above the Vipava valley. | 2019-04-23T13:23:43.359Z | 2019-03-08T00:00:00.000 | {
"year": 2019,
"sha1": "35cadad7edaa9ccb3cdfa21391d6583fe7be54b9",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4433/10/3/128/pdf?version=1552041298",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "35cadad7edaa9ccb3cdfa21391d6583fe7be54b9",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
268250377 | pes2o/s2orc | v3-fos-license | Modeling and analysis of hybrid-blood nanofluid flow in stenotic artery
Current communication deals with the flow impact of blood inside cosine shape stenotic artery. The under consideration blood flow is treated as Newtonian fluid and flow is assumed to be two dimensional. The governing equation are modelled and solved by adopting similarity transformation under the stenosis assumptions. The important quantities like Prandtl number, flow parameter, blood flow rate and skin friction are attained to analyze the blood flow phenomena in stenosis. The variations of different parameters have been shown graphically. It is of interest to note that velocity increases due to change in flow parameter gamma and temperature of blood decreases by increasing nanoparticles volume fraction and Prandtl number. In the area of medicine, the most interesting nanotechnology approach is the nanoparticles applications in chemotherapy. This study provides further motivation to include more convincing consequences in the present model to represent the blood rheology.
This approach enables a comprehensive examination of the intricate physiological processes involved, utilizing mathematical equations to quantify the effects of stenosis on blood flow. For future research, there is an opportunity to enhance existing mathematical models by incorporating more realistic parameters. This could involve considering patient-specific characteristics, accounting for dynamic changes in stenosis severity over time, or incorporating variations in blood viscosity. The practical insights gained from improved mathematical models and simulations offer a promising avenue for addressing the complexities of blood flow in stenotic conditions, making the research appealing to readers interested in both theoretical advancements and practical applications in the field. We analyzed the flow of blood in the narrowed artery with the addition of nanoparticles, considering the non-Newtonian nature of blood. The obtained PDEs are transformed into ODEs with the use of similarity transformations. A numerical solution has been calculated for the temperature and velocity of blood by using MATLAB bvp4c. The obtained results are shown graphically and also in tabular form. This innovative approach not only contributes to a deeper scientific understanding of blood flow issues associated with stenosis but also paves the way for developing more precise medical interventions and personalized treatment strategies.
Flow geometry and coordinate system
The following relevant assumptions are made:
i. We considered the flow of blood through a stenotic artery with a cosine-shaped constriction.
ii. Blood acts as a steady, two-dimensional, incompressible Newtonian fluid.
iii. The length of the stenosis is L₀/2, the width of the unobstructed region is 2R₀, the radius of the artery is R(x) and the maximum height of the stenosis is denoted by λ.
iv. Blood flows along the x-axis, and the r-axis is perpendicular to the flow.
v. The region of stenosis is chosen as
Problem formulation and method of solution
The governing steady boundary-layer equations of continuity, momentum and energy for the Newtonian hybrid nanofluid are defined, together with the corresponding boundary conditions. The physical properties of the nanofluid are defined as follows [13]: ρ_hnf and µ_hnf are the density and viscosity of the hybrid nanofluid of Cu-Al₂O₃ nanoparticles and blood, k_f and k_hnf are the thermal conductivities of the base fluid and the hybrid nanofluid, and the heat capacity of the fluid is (ρC_p)_hnf; the values of all these properties are given in Table 1. With the stream function ψ defining u and v as presented in Eq. (8), the continuity Eq. (2) is satisfied.
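The paper's specific property relations and Table 1 values are not recoverable from this text, so the sketch below assumes the correlations most commonly used for Cu-Al₂O₃ hybrid nanofluids (volume-fraction mixing for density and heat capacity, Brinkman viscosity and a two-step Maxwell conductivity); all names and the dictionary interface are our own.

```python
def hybrid_nanofluid_properties(phi1, phi2, base, p1, p2):
    """Thermophysical properties of a hybrid nanofluid (particles p1 = Al2O3,
    p2 = Cu, base = blood). Each dict holds rho [kg/m^3], cp [J/(kg K)] and
    k [W/(m K)]; base additionally holds mu [Pa s].
    """
    rho = (1 - phi2) * ((1 - phi1) * base["rho"] + phi1 * p1["rho"]) + phi2 * p2["rho"]
    rho_cp = ((1 - phi2) * ((1 - phi1) * base["rho"] * base["cp"]
                            + phi1 * p1["rho"] * p1["cp"])
              + phi2 * p2["rho"] * p2["cp"])
    mu = base["mu"] / ((1 - phi1) ** 2.5 * (1 - phi2) ** 2.5)  # Brinkman model

    def maxwell(kf, ks, phi):
        # Maxwell effective conductivity of a dilute particle suspension
        return kf * (ks + 2 * kf - 2 * phi * (kf - ks)) / (ks + 2 * kf + phi * (kf - ks))

    k = maxwell(maxwell(base["k"], p1["k"], phi1), p2["k"], phi2)
    return {"rho": rho, "rho_cp": rho_cp, "mu": mu, "k": k}
```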
where the axial coordinate is scaled as x/L₀. After successful implementation of the similarity transformation, Eqs. (9)-(10) finally take their dimensionless form. The dimensionless form of Eq. (1) is expressed through f = R(x)/R₀, where ε = λ/R₀ is the non-dimensional measure of the stenosis in the reference artery, and the boundary conditions are likewise cast in dimensionless form. The dimensionless quantities in Eqs. (12) and (13) are the flow parameter γ = ν_f L₀/(u₀R₀²), the Prandtl number Pr = (µC_p)_f/k_f, and the Cu and Al₂O₃ nanoparticle volume fractions, denoted by φ₁ and φ₂.
Physical quantities
The physical quantities of the flow field, i.e., the skin friction coefficient C_f and the local Nusselt number Nu_x, are defined through the wall shear stress τ_w and the wall heat flux q_w in Eqs. (18)-(19); in their non-dimensional form, Re denotes the Reynolds number.
Numerical solution
The numerical solution of Eqs. (12) and (14) is obtained by using the MATLAB bvp4c technique. MATLAB bvp4c solves boundary value problems for ordinary differential equations. The results for the temperature and velocity profiles are obtained and presented graphically.
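Since the paper's Eqs. (12) and (14) are not reproduced in this text, the sketch below solves a classical Blasius-type stand-in system of the same structure (a third-order momentum equation coupled to a second-order energy equation) with SciPy's solve_bvp, the Python counterpart of MATLAB bvp4c; the equations, the Prandtl number value and all names are placeholders, not the paper's model.

```python
import numpy as np
from scipy.integrate import solve_bvp

PR = 21.0  # assumed order of magnitude for blood, not taken from the paper

def odes(eta, y):
    # y = [f, f', f'', theta, theta']; placeholder system:
    # f''' + f f'' = 0,  theta'' + Pr f theta' = 0
    f, fp, fpp, theta, thetap = y
    return np.vstack([fp, fpp, -f * fpp, thetap, -PR * f * thetap])

def bcs(ya, yb):
    # f(0) = 0, f'(0) = 0, theta(0) = 1; f' -> 1 and theta -> 0 far from the wall
    return np.array([ya[0], ya[1], ya[3] - 1.0, yb[1] - 1.0, yb[3]])

eta = np.linspace(0.0, 10.0, 200)
y0 = np.zeros((5, eta.size))
y0[0] = eta - 1.0 + np.exp(-eta)   # crude initial guesses
y0[1] = 1.0 - np.exp(-eta)
y0[2] = np.exp(-eta)
y0[3] = np.exp(-eta)
sol = solve_bvp(odes, bcs, eta, y0)

# f''(0) and -theta'(0) are the wall quantities that enter the skin-friction
# and Nusselt-number expressions once combined with the nanofluid property ratios.
print(sol.status, sol.y[2, 0], -sol.y[4, 0])
```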
Graphical results and explanation
The blood flow problem through a stenotic artery with the addition of hybrid nanoparticles is studied, and the effects of various parameters on the flow are investigated. Figure 1 shows the geometry of the stenotic artery. Figure 2 describes the effect of Pr on temperature: the temperature decreases as Pr increases through 2.0, 3.0, 4.0 and 5.0. Pr is the ratio of momentum diffusivity to thermal diffusivity, which implies an inverse relation to heat transfer from the artery wall; for smaller values of Pr, heat diffuses faster than momentum. Figure 3 depicts the impact of the nanoparticles on the temperature field; the curve decreases as φ increases through 0.01, 0.05, 0.1 and 0.2. Figure 4 depicts the effect of γ on the blood temperature; the curve shows increasing behavior as γ increases through 0.1, 1.8, 2.5 and 3.4. The impact of the nanoparticles on the velocity distribution is shown in Fig. 5: with increasing nanoparticle volume fraction, the velocity curve of the blood bends downward. This is consistent with the physical picture that as the nanoparticle volume fraction rises, the nanoparticles loaded in the blood slow the flow. Figure 6 shows the effect of γ on the velocity profile F′(η); the velocity of the blood increases with γ = 1.0, 1.4, 2.2, 3.7. Figure 7 shows the effect of γ on the velocity profile F(η); here too the velocity increases with γ = 1.0, 1.4, 2.2, 3.7. Figure 8 presents the skin friction variations due to changes in the nanoparticle volume fraction and the flow parameter; the skin friction profile goes down with rising γ values. Figure 9 presents the heat transfer coefficient, whose curve shows decreasing behavior. Table 1 lists the property values of the (blood) base fluid and the solid hybrid nanoparticles. Table 2 describes the impact of Pr and γ on the Nusselt number: the coefficient of heat transfer rises with increasing γ, while it goes down with increasing Pr. Table 3 shows the effect of γ and φ on the skin friction: with a rising flow parameter γ the skin friction coefficient rises, and with increasing φ it decreases. Hybrid nanoparticles show remarkable properties that cannot be attained by any single component alone. Motivated mainly by the applications of nano drug delivery in biofluid dynamics and in the treatment of arterial diseases, a detailed computational study of hybrid nanoparticles and heat transfer through a stenotic artery is presented. The results of the present study may be useful for tuning the blood flow during surgical procedures.
Conclusion
The flow of blood through a stenotic artery with the addition of hybrid nanoparticles is studied. The mathematical study of blood flow in stenosis involves modeling complex fluid dynamics, vessel geometry and rheological properties to understand the impact of narrowed passages on flow patterns. A numerical method has been used to attain
Figure 9.
Figure 9. Effects of Pr and γ on the heat transfer coefficient.
Table 2.
Nusselt number variations with respect to Pr and γ.
Table 3.
Numerical values of the skin friction coefficient with respect to φ and γ. | 2024-03-07T06:16:45.948Z | 2024-03-05T00:00:00.000 | {
"year": 2024,
"sha1": "942f917b43878526edac56b1fef9a792dc9939e3",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-024-55621-5.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "56b1d429008c4370508bb4afb665b78099cccadf",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
56052525 | pes2o/s2orc | v3-fos-license | Strategies for climate change impacts on irrigated crops in National Capital Region of India
Irrigation has helped in increasing food production and achieving food security in India. However, climate change is expected to affect crop production in irrigated areas, particularly in groundwater-irrigated areas. This study was undertaken to suggest strategies for coping with climate change impacts on irrigated crops, based on projected changes in crop water requirement and groundwater availability for irrigation in the National Capital Territory of Delhi. The prevailing groundwater recharge in the study area during the monsoon was 4.01 MCM (million cubic meters). Across the various scenarios it varied from -15.47 MCM to 5.08 MCM. It was revealed that groundwater recharge would increase if it is estimated based on climate predictions made using local weather data. The impact of climate change on groundwater availability is evident in the scenarios based on the INCCA and IPCC predictions, where it varied from -2.66 MCM to 1.02 MCM. Contrary to common perceptions, the crop water requirement of the prevailing cropping system would not increase in the future if all the important climatic parameters are considered in its prediction. This may be due to the fact that the effect of an increase in temperature on crop water requirement may be compensated by decreases in other climatic parameters such as wind speed and the duration of daily sunshine hours. The results indicated that climate change may not have much impact on the sustainability of the prevailing cropping system as far as crop water requirement is concerned. Based on the water requirement and groundwater availability under the various climate change scenarios, appropriate strategies to cope with the climate change impact on irrigated crops have been suggested.
INTRODUCTION
It is reported that the global as well as the regional climate is changing due to the increased concentration of greenhouse gases in the atmosphere (IPCC, 2007). The important parameters that control the climate of a region are temperature, rainfall, relative humidity, wind velocity, duration of sunshine hours and the amount of solar radiation reaching the earth's surface. Trenberth et al. (2007) reported that the global mean surface temperature increased by 0.74 °C ± 0.18 °C during the period 1906-2005. According to IPCC (2007), the global mean air surface temperature would increase by 1.4 to 5.8 °C by the end of 2100 under different emission scenarios. Climate change is expected to influence the hydrologic cycle, which would result in changes in evapotranspiration, precipitation and its distribution, soil moisture status, etc. (IPCC, 2007). Numerous investigators have reported that in the event of climate change, irrigated agriculture would be severely affected due to increased crop water requirement and decreased water resource availability, especially in the arid and semi-arid regions of the world, including India (Mahmood, 1997; Goyal, 2004; De Silva et al., 2007; INCCA, 2010; Shahid, 2011). The crop water requirement is further expected to increase due to the increase in cropped area to meet the increasing food demand. However, there are also contradictory reports on the impact of climate change on crop water requirement (Yano et al., 2007). Shahid (2011) reported that the irrigation requirement of Boro rice will increase by 0.8 mm day⁻¹ in northwest Bangladesh. According to Doria and Madramootoo (2009), the irrigation water requirement of vegetable crops would increase by 40-100% and that of potatoes by 80% in the southern Quebec province of Canada. A case study on the impacts of climate change on paddy irrigation water requirements conducted in Sri Lanka suggests that the potential evapotranspiration of paddy increased by 3.5% to 5.0% and consequently the water requirement increased by 23% and 13% (De Silva et al., 2007). Tung and Haith (1998) evaluated the impact of climate change on irrigated corn in the arid zone of Rajasthan and found that a 1% increase in temperature from the base period might increase evapotranspiration by 15 mm, which would require 34.275 MCM of additional water for irrigation. Doll (2002) conducted a global analysis of the impact of climate change and climate variability on irrigation water requirements and reported that the irrigation requirement in two-thirds of the global area having irrigation facilities would increase. Contrary to the above reports, several investigators (Yano et al., 2007; Chattopadhay and Hulme, 1997; Peterson and Keller, 1990) suggested that there would not be any increase in crop water requirement as a result of climate change. Yano et al. (2007) reported that the actual evapotranspiration (ETa) from wheat cropland would decrease by 28 and 8% during 2070 and 2079, respectively, and that ETa and the irrigation water requirement of maize in 2070 and 2079 would decrease by 24 and 15% and by 28 and 22%, respectively. Chattopadhay and Hulme (1997) reported that both pan evaporation and evapotranspiration decreased during the period 1961 to 1992 in various regions of India. Coping strategies for climate change impacts on irrigated crops must consider the changes in both water requirement and water availability for irrigation. Numerous strategies and technologies have been
suggested to cope with the impact of climate change on crop water requirement and water resource availability (Kabat et al., 2003). These include efficient utilisation of water resources, efficient irrigation methods, land leveling, zero tillage, direct-seeded rice, the system of rice intensification, crop diversification, conservation agriculture, appropriate irrigation scheduling, improved weather forecasting, use of drought-tolerant varieties, site-specific soil and water conservation structures, rainwater harvesting, desalinisation of brackish water, reuse of waste water and improved agronomic practices (Kabat et al., 2003). The applicability of these techniques for coping with the impacts of climate change needs to be adapted to the climate variability and change of a particular region. The present study was undertaken to suggest coping strategies for climate change impacts on irrigated crops, based on projected changes in crop water requirement and groundwater availability in the National Capital Region (NCR) of India.
Study area:
The study was carried out for the agriculturally dominant Najafgarh Block under the South West District of the National Capital Territory (NCT), Delhi. It is located between 28°30′10″ and 28°39′30″ N latitude and 76°51′45″ and 77°6′15″ E longitude in the National Capital Region (NCR) of India, which falls in a semi-arid region (Fig. 1). The Najafgarh Block, dominated by agriculture, was selected for investigating the impact of climate change on crop water requirement and groundwater recharge. The study area is about 20064 ha. The major crops of the study area are rice, pearl millet, maize and pigeon pea during the kharif season and wheat and mustard during the rabi season. The area under each crop is presented in Table 1.
Methodology: To evaluate the impact of climate change on crop water requirement, a total of nine climate change scenarios based on local weather data (one scenario), Indian Network for Climate Change Assessment (INCCA) predictions (2 scenarios) and Inter-Governmental Panel on Climate Change (IPCC) predictions (6 scenarios) were considered. On the other hand, a total of twelve climate change scenarios were considered for the analysis of the impacts of climate change on groundwater recharge and its availability. Out of the twelve, seven scenarios pertained to varying levels of groundwater pumping and recharge. The level of groundwater pumping was decided based on the prevailing rate of pumping in the study area, which was estimated to be 0.4946 m y⁻¹. The scenarios considered for assessing the impact of climate change on crop water requirement were: scenario 1, predictions for the 2030s using local weather data, with an increase in average air temperature by 0.26 °C and in relative humidity by 4% and a decrease in wind speed by 5.15 km day⁻¹ and in sunshine hours by 0.26 h; scenarios 2 and 3, increases in average air temperature by 1.7 °C and 2.0 °C, respectively, as predicted by INCCA for the 2030s; and scenarios 4, 5, 6, 7, 8 and 9, the IPCC predictions for the 2100s of increases in average temperature by 1.1 °C, 1.4 °C, 2.9 °C, 3.8 °C, 5.4 °C and 6.4 °C, respectively. The scenarios considered for the assessment of groundwater recharge were: scenario 1, the recharge based on predictions of climatic parameters for the 2030s using local weather data; scenarios 2 and 3, recharge based on the IPCC predictions of average air temperature rises of 1.1 °C and 6.4 °C for the 2100s, respectively; and scenarios 4 and 5, recharge based on the INCCA predictions for the 2030s of average temperature rises of 1.7 °C and 2.0 °C, respectively. The natural recharge of the year 2005 was increased by 10%, 20%, 30%, 40% and 50% for scenarios 6, 7, 8, 9 and 10, respectively, under prevailing pumping, and decreased by 5% and 10% for scenarios 11 and 12, respectively. Scenarios 6-12 are based on anticipated increases or decreases in groundwater pumping due to increases or decreases in demand. Scenarios 11 and 12 were also considered to evaluate the impact of additional water supply from other sources or an equivalent increase in recharge through artificial means such as rainwater harvesting for groundwater recharge. Climate change scenarios were generated by the ARIMA model for the 2030s using local weather data, which include average air temperature, relative humidity, sunshine hours, wind speed and rainfall. The ARIMA predictions for the 2030s, the IPCC predictions for 2100 and the INCCA predictions for the 2030s were used to evaluate the impact of climate change on crop water requirement using the CROPWAT model of FAO. The crops rice, pearl millet, pigeon pea (arhar), maize, wheat and mustard,
usually grown in the region, were considered for evaluating the impacts on crop water requirement. Assessment of the impacts of climate change on groundwater recharge and its availability for irrigation was done using the HYDRUS-1D and MODFLOW software. The groundwater recharge rate and the total recharge in the study area were predicted for the various climate change, recharge and pumping rate scenarios. The crop water requirements of the cropping systems practiced in the NCR were also estimated under all assumed scenarios.
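The CROPWAT model referred to above computes the reference evapotranspiration with the FAO-56 Penman-Monteith equation and scales it with crop coefficients; the sketch below is our own simplified rendering of those standard relations (CROPWAT itself additionally handles stage-dependent Kc values and effective rainfall), with assumed function names.

```python
def fao56_et0(delta, Rn, G, gamma, T, u2, es, ea):
    """FAO-56 Penman-Monteith reference evapotranspiration (mm/day).
    delta, gamma: slope of the vapour pressure curve and psychrometric
    constant (kPa/degC); Rn, G: net radiation and soil heat flux
    (MJ m^-2 day^-1); T: mean air temperature (degC); u2: wind speed at
    2 m (m/s); es, ea: saturation and actual vapour pressure (kPa)."""
    num = 0.408 * delta * (Rn - G) + gamma * (900.0 / (T + 273.0)) * u2 * (es - ea)
    return num / (delta + gamma * (1.0 + 0.34 * u2))

def crop_water_requirement_mm(et0_mm_day, kc, n_days):
    """Crop water requirement over a growth stage, ETc = Kc * ET0 (mm)."""
    return et0_mm_day * kc * n_days
```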
Based on the predicted water table fluctuations under the various scenarios, the total groundwater recharge in the study area was also estimated in order to evaluate the impact of climate change on groundwater availability for irrigation purposes.
RESULTS AND DISCUSSION
Crop water requirement: The crop water requirements of the selected crops were estimated using the CROPWAT software and are presented in Tables 1 and 2. The water requirements of rice and pearl millet were estimated to be 19.61 and 22.1 MCM, respectively (Table 1). The water requirement of these two crops under all scenarios except scenario 1 is higher than under the reference scenario. This indicates that there is no impact of climate change on water requirement if it is estimated using local weather data. This can be supported by the findings of Chattopadhay and Hulme (1997), who reported that both pan evaporation and evapotranspiration decreased during the period 1961 to 1992 in various regions of India. Similar observations were also made by Peterson et al. (1990). In another study, Yano et al. (2007) reported that the actual evapotranspiration (ETa) from wheat cropland would decrease by 28 and 8% during 2070 and 2079, respectively. The same study suggests that ETa and the irrigation water for maize would decrease by 24 and 15% and by 28 and 22% in 2070 and 2079, respectively. In the case of scenarios 2 to 9, the crop water requirement will be higher than under the reference scenario. This is supported by the study of Tung and Haith (1998), who evaluated the impact of climate change on irrigated corn in the arid zone of Rajasthan. They reported that a 1% increase in temperature from the base period might increase evapotranspiration by 15 mm, which would require 34.275 MCM of additional water for irrigation. They suggested that irrigation and the appropriate selection of planting date and cultivars can be potential management options to reduce the impact of climate change. It is worth mentioning that under scenarios 2 to 9, the crop water requirement was estimated from the rise in temperature only; the effect of other climatic parameters was not considered. It is also worth mentioning that even under the prevailing pumping conditions the groundwater table is declining, which can be attributed to other causes and not merely to climate change. However, the results obtained from the other scenarios indicated that the crop water requirement would increase in the future. This is mainly due to the fact that the crop water requirement under the different scenarios was estimated using the increase in air temperature only. In such cases, pearl millet could be an alternative crop to be grown, as it requires less water. The water requirements of wheat, mustard, maize and pigeon pea were 1920, 1830, 3835 and 3432 m³/ha, respectively. For these crops also, the water requirement is higher under all scenarios except scenario 1 (Table 2). In the case of scenario 1, the water requirement does not increase in the event of climate change. However, if the water requirement increases due to climate change, as found for the IPCC and INCCA predictions, pearl millet and maize can be alternative crops during kharif and mustard would be an alternative to wheat in rabi.
The crop water requirements of the important cropping systems, including rice-wheat, rice-mustard, pearl millet-wheat, pearl millet-mustard, maize-wheat, maize-mustard, pigeon pea-mustard and pigeon pea-wheat, practiced in this region under the various climate change scenarios are presented in Table 3. The water requirement was found to be the highest (783.6 mm) for the rice-wheat cropping system and the lowest (421.1 mm) for pearl millet-mustard under all scenarios (Table 3). The results also indicated that the crop water requirement under scenario 1 was lower than under the reference scenario. This implies that the crop water requirement of the prevailing cropping system would not increase in the future if all the important climatic parameters are considered in its prediction. This is supported by the study conducted by Parekh and Prajapati (2013).
Conclusion
It was concluded that, based on the IPCC and INCCA predictions, the rice-wheat cropping system in the study area needs to be replaced with pearl millet-mustard, pigeon pea-mustard, pearl millet-wheat or pigeon pea-wheat. The estimated groundwater recharge in the study area during the monsoon was 4.01 MCM, and across the scenarios it varied from -15.47 MCM (scenario 10) to 5.08 MCM (scenario 1). The groundwater availability based on the INCCA and IPCC predictions ranged between -2.66 and 1.02 MCM. Rainwater harvesting and artificial groundwater recharge need to be made mandatory in the study area for increasing groundwater recharge and, consequently, groundwater availability.
Fig. 1.
Fig. 1. Location of the Najafgarh Block in the National Capital Territory of Delhi.
Table 2.
Volume of water required by the different crops grown in the NCR under various climate change scenarios.
Table 3.
Water requirement of the different cropping systems in the NCR under various climate change scenarios.
Table 4.
Groundwater recharge during the monsoon season under various climate change scenarios.
quality water, in situ moisture conservation measures, zero tillage, direct-seeded rice, the system of rice intensification, and the use of drought-tolerant and short-duration varieties could be other potential strategies for coping with the impact of climate change on irrigated crops.
"year": 2015,
"sha1": "0dd136dbe1aab9ab04f7c21c2a8069ad3e575ff7",
"oa_license": "CCBYNC",
"oa_url": "https://journals.ansfoundation.org/index.php/jans/article/download/621/578",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "0dd136dbe1aab9ab04f7c21c2a8069ad3e575ff7",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
64294216 | pes2o/s2orc | v3-fos-license | Is mean heart dose a relevant surrogate parameter of left ventricle and coronary arteries exposure during breast cancer radiotherapy: a dosimetric evaluation based on individually-determined radiation dose (BACCARAT study)
Background: Intra-individual heterogeneity of cardiac exposure is an issue in breast cancer (BC) radiotherapy that was poorly considered in previous cardiotoxicity studies, which were mainly based on the mean heart dose (MHD). This dosimetric study analyzes the distribution of individually determined radiation doses to the heart and its substructures, including the coronary arteries, and evaluates whether MHD is a relevant surrogate parameter of the dose to these substructures. Methods: Data were collected from the BACCARAT prospective study, which included left- or right-sided unilateral BC patients treated with 3D conformal radiotherapy (RT) between 2015 and 2017 and followed up for 2 years with repeated cardiac imaging examinations. A coronary computed tomography angiography (CCTA) was performed before RT for all patients. Registration of the planning CT and CCTA images allowed delineation of the coronary arteries on the planning CT images. Using the 3D dose matrix generated during treatment planning and the added coronary contours, dose distributions were generated for the whole heart and the following substructures: the left ventricle (LV), left main coronary artery (LMCA), left anterior descending artery (LAD), left circumflex artery (LCX) and right coronary artery (RCA). A descriptive analysis of the physical doses in Gray (Gy) was performed; Dmean was the volume-weighted mean dose.
Results: Dose distributions were generated for 89 left-sided BC patients (MHD = 2.9 ± 1.5 Gy, Dmean_LAD = 15.7 ± 3.1 Gy) and 15 right-sided BC patients (MHD = 0.5 ± 0.1 Gy; Dmean_RCA = 1.2 ± 0.4 Gy). For left-sided BC patients, the ratio Dmean_LAD/MHD was around 5. Pearson correlation coefficients between MHD and Dmean for the delineated substructures were all statistically significant. However, for all substructures, the coefficient of determination R² indicated that the proportion of the variance in Dmean of the substructure predictable from MHD was moderate to low (in particular R² = 0.45 for the LAD). Among left-sided BC patients with MHD < 3 Gy, 56% of patients could nevertheless receive LAD doses above 40 Gy (V40 > 0).
Introduction
Radiotherapy (RT) for breast cancer is an essential part of adjuvant cancer treatment. RT reduces the risk of local recurrence and the risk of breast cancer mortality [1].
However, left-sided RT, especially, has been shown to induce excess cardiovascular mortality and morbidity [2-6]. The study by Darby et al. [5] found a linear relationship between the mean heart dose (MHD) and the rate of major coronary events, which increased by 7.4% per Gy of MHD. These results on radiation-induced ischemic heart disease were confirmed in a more recent study of BC patients treated with three-dimensional conformal radiation therapy (3D-CRT) [7]. The predominance of ischemic heart disease observed in these studies indicates that the coronary arteries may be the critical structures for the development of radiation-induced heart morbidity. Among the three major coronary arteries (left anterior descending, circumflex and right), the left anterior descending (LAD) supplies a major part of the myocardium. Therefore, occlusion of the LAD may cause a large area of myocardial necrosis and lead to severe left ventricle impairment and congestive heart failure.
However, in many epidemiological studies on post-radiotherapy cardiotoxicity, doses are typically described as those received by the entire heart, i.e., the MHD [5,8]. Therefore, the MHD is used as the reference dose for analyzing the dose-response relationship. But the dose distribution in the heart is not homogeneous: the highest cardiac radiation doses can be observed in the apex and in the apical-anterior segment, and some hot spots > 50 Gy persist in some parts of the heart [9]. Nevertheless, the anatomic distribution of RT-associated coronary artery disease has been poorly studied, whereas it can be supposed that the increased risk of coronary artery disease would manifest largely in the coronary arteries that are directly within the radiation field. This was confirmed by a Swedish study [10], where patients with left-sided breast cancer RT had a statistically significantly increased rate of stenosis in the LAD when compared with those with right-sided cancer. This observation was concordant with the location of typical RT fields and with the fact that the highest doses are likely to be delivered to the anterior heart, including the LAD. As a consequence, MHD may be a poor surrogate parameter for the dose to the cardiac substructures, especially the LAD [11], and MHD may thus be a poorly relevant dose criterion for RT-induced cardiotoxicity studies and dose-response relationships.
However, a major issue is that the coronary arteries are difficult structures to delineate because of their small volumes. Some atlases and auto-segmentation methodologies were developed for contouring cardiac substructures [12,13], but they presented limits due to uncertainties for the smallest heart structures, namely the coronary arteries. Another approach for estimating radiation doses to the coronary arteries was presented in a previous work by Moignier et al. [14], where radiotherapy simulation CT scans and coronary computed tomography angiographies (CCTA) of patients treated for mediastinal Hodgkin lymphoma were used to merge thoracic and detailed cardiovascular anatomies and allowed personalized coronary artery dose calculations.
In 2015, we launched the BACCARAT study, a cohort of a hundred breast cancer patients treated with 3D-CRT and followed prospectively for 2 years with repeated cardiac imaging examinations, including echocardiography and CCTA, and measurement of circulating biomarkers. The BACCARAT study aims to enhance knowledge on the detection and prediction of early subclinical cardiac dysfunction and lesions induced by breast RT, and on the biological mechanisms potentially involved, based on functional and anatomical cardiac imaging combined with simultaneous assessment of multiple circulating biomarkers and accurate heart dosimetry.
With the rare opportunity given by the BACCARAT study to have simultaneous dosimetric and precise anatomical information from the simulation CT and CCTA images for all patients, the aim of this dosimetric study was to analyze the distribution of personalized, individually determined radiation doses to the heart and its substructures, in particular the coronary arteries, and to evaluate whether MHD is a relevant surrogate parameter for the dose to the cardiac substructures, in particular the LAD.
Material and methods
The BACCARAT study, a monocentric prospective cohort study further detailed in a previous article [15], consisted of the inclusion of 114 women treated with adjuvant 3D-CRT for left or right unilateral breast cancer at the Clinic Pasteur in Toulouse, France, without chemotherapy, and followed for 2 years after RT (ClinicalTrials.gov: NCT02605512). The inclusion period lasted from October 2015 to December 2017. Follow-up of patients is still ongoing; the end is foreseen for early 2020.
After the surgical treatment of breast cancer, all patients were treated with 3D-CRT with 6 and 25 MV photon beams delivered by tangential fields. Patients underwent a planning CT scan, not contrast enhanced, with CT slices every 1.25 mm.
Patients were planned for RT with or without irradiation of the supraclavicular or internal mammary lymph nodes. The planning target volume dose was 50 Gy delivered in 5 weeks with 25 daily doses of 2 Gy, or 47 Gy delivered in 5 weeks with 20 daily doses of 2.35 Gy. An additional boost of 9-15 Gy could be applied to the tumor site with electron/photon beams with energies ranging from 6 MeV to 18 MeV. Patients were positioned on a breast board with both arms above the head. The treatment planning system (TPS) used to perform dose calculations was Eclipse™ with the Analytical Anisotropic Algorithm (AAA v13.6) (Varian Medical System, Palo Alto, CA, USA). Each patient's radiotherapy was planned such that the dose distribution was optimized and normalized to the International Commission on Radiation Units and Measurements (ICRU) reference point of the breast. Breath-hold gating was used for left-sided breast cancer patients whose heart was very close to the anterior chest wall and in order to achieve the dose constraints (the Quantitative Analysis of Normal Tissue Effects in the Clinic (QUANTEC) recommends keeping the volume of heart receiving at least 25 Gy (V25) below 10% to keep the risk of cardiac mortality under 1% [16]; in addition, the Clinique Pasteur attempts to keep MHD < 5 Gy). The dose-volume histogram (DVH) for the heart was generated by the Clinic Pasteur radiotherapy department.
Before RT, a coronary computed tomography angiography (CCTA) was performed for all patients as planned in the BACCARAT protocol. One of the challenges of the BACCARAT study was to evaluate precisely the absorbed doses to cardiac substructures, in particular the coronary arteries, in order to enhance, in a second step, the dose-response analysis of early cardiotoxicity. The coronary arteries are narrow and moving structures; consequently, the simulation CT scan makes proper contouring difficult or impossible. For the dosimetric evaluation, the simulation CT scan, the CCTA, and the RT dose and RT structure files in DICOM format were used, as done in a previous study [14]. Merging of the anatomical information from the simulation CT scan and the CCTA was performed. The CCTA provided an optimal coronary visualization, but at a given moment of the cardiac cycle, generally the diastole. In contrast, the simulation CT scan provided images integrating the breathing and heart-beating motion. Consequently, a slight rotation of the heart around the cranio-caudal axis, a homothetic deformation along the three dimensions and a translation to match the coronary artery origins were carried out for the registration. Once inserted in the ISOgray TPS (version 4.2, Dosisoft, Cachan, France; http://www.dosisoft.com/en/radiotherapy/planning-products.html), manual delineation was performed for the left ventricle (LV), the left main coronary artery (LMCA), the left anterior descending artery (LAD), the left circumflex artery (LCX) and the right coronary artery (RCA). Using the 3D dose matrix generated during treatment planning and the newly delineated substructures, DVHs for the LV and coronary arteries were generated with the ISOgray TPS by the dosimetric department of IRSN in collaboration with the Clinic Pasteur radiotherapy department (Fig. 1).
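The registration just described (rotation about the cranio-caudal axis, homothetic scaling, translation) amounts to applying a simple affine transform to the CCTA coordinates; the sketch below is our own illustration, with manually chosen per-patient parameters assumed, and is not the authors' actual registration software.

```python
import numpy as np

def register_ccta_points(points_mm, theta_z_rad, scale_xyz, translation_mm):
    """Map CCTA points (N x 3 array, mm) into planning-CT coordinates with a
    rotation about the cranio-caudal (z) axis, an anisotropic homothetic
    scaling and a translation matching the coronary ostia."""
    c, s = np.cos(theta_z_rad), np.sin(theta_z_rad)
    Rz = np.array([[c, -s, 0.0],
                   [s,  c, 0.0],
                   [0.0, 0.0, 1.0]])
    S = np.diag(scale_xyz)
    return points_mm @ (Rz @ S).T + np.asarray(translation_mm)
```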
From the DVHs, the following absorbed dose metrics were calculated for all delineated cardiac structures: Dmean (in Gy), the volume-weighted mean dose; D2 (in Gy), the minimal dose received by the most irradiated 2% of the structure volume, which can be considered the near-maximum dose; and V5 (in %), the relative volume exposed to at least 5 Gy (with similar definitions for V10, V25, etc.). Descriptive analysis of the physical doses in gray (Gy) was performed. Continuous variables are presented with mean, standard deviation, median and range values; categorical variables are presented as percentages. The chi-square test was used to compare categorical variables, and Student's t-test, or the Wilcoxon non-parametric test where necessary, was used to compare continuous variables. We defined the individual ratio D2/Dmean as a kind of Homogeneity Index (HI) adapted to organs at risk, as HIs are usually used for planning target volumes. Pearson's correlation coefficients were used to analyse the correlation between MHD and mean doses to the different substructures. The relationship between MHD and mean doses to the different substructures was further investigated with linear regressions providing the R² value, the coefficient of determination indicating the proportion of the variance in the substructure's Dmean predictable from MHD. R² < 0.70 was considered insufficient for prediction and surrogate-parameter purposes. P < 0.05 was considered statistically significant. All statistical analysis was performed with SAS software V9.2.
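For reference, these metrics can be computed directly from a structure's per-voxel doses. The sketch below assumes equal voxel volumes (an assumption made for illustration; TPS exports may weight voxels differently) and reproduces Dmean, D2, Vx and the D2/Dmean ratio:

```python
import numpy as np

def dose_metrics(voxel_doses_gy, x_gy=5.0):
    """Summary dose metrics for one structure from per-voxel doses (Gy)."""
    d = np.asarray(voxel_doses_gy, dtype=float)
    dmean = d.mean()                        # volume-weighted mean dose
    d2 = np.percentile(d, 98)               # minimal dose to the hottest 2% of the volume
    vx = 100.0 * np.mean(d >= x_gy)         # % of the volume receiving at least x Gy
    return {"Dmean": dmean, "D2": d2, f"V{x_gy:g}": vx, "D2/Dmean": d2 / dmean}

# Hypothetical LAD-like distribution: mostly low dose plus a hot anterior segment
rng = np.random.default_rng(0)
doses = np.concatenate([rng.normal(2.0, 0.5, 900), rng.normal(40.0, 5.0, 100)])
print(dose_metrics(doses, x_gy=40.0))
```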
Results
Retrospective dosimetric evaluation of all cardiac substructures was available for 104 BC patients (89 left-sided and 15 right-sided) from the BACCARAT study (Table 1). The mean age of the population was 58 years, with no significant difference between right-sided and left-sided BC patients. Most patients (85/104) were diagnosed with an invasive ductal carcinoma and underwent breast-conserving surgery. The prescribed radiation dose was 50 Gy in 25 sessions for more than three-fourths of the population. An additional boost of 9-15 Gy was applied to the site of the tumor if clinically indicated. Regional lymph node irradiation was performed in 31 left-sided BC patients and 1 right-sided BC patient.
Results for each dose metric for the heart and its substructures are shown in Table 2. The mean MHD was 2.95 ± 1.49 Gy for left-sided RT and 0.46 ± 0.12 Gy for right-sided RT. All patients met the heart dose constraint (V25 < 10%), with a maximum V25 value of 8.7%. The inter-patient variability in MHD was substantial, especially for left-sided RT, with values ranging from 0.87 to 6.72 Gy, further confirmed by the range in near-maximum dose D2 from 3.95 to 48.87 Gy. The intra-patient variability in heart doses was also very high, with an average individual D2/MHD ratio of 8.4 for left-sided and 4.6 for right-sided BC patients, illustrating the heterogeneity of doses within the whole-heart structure. Considering the other cardiac structures, the LV and LAD were the most exposed, with mean doses of 6.2 Gy and 15.7 Gy respectively (as illustrated for one patient in Fig. 1), but intra-patient variability in doses (D2/Dmean) was lower than observed for the whole heart (5.7 and 2.6 respectively). For the RCA, we observed that the mean absorbed dose was higher for right-sided irradiation than for left-sided irradiation (1.25 ± 0.51 Gy vs. 0.74 ± 0.53 Gy), and higher than the MHD for right-sided BC.
The ratios of Dmean for each delineated structure to MHD are presented in Table 3. For left-sided BC patients, the mean ratio Dmean LAD/MHD was around 5 and Dmean LV/MHD around 2. All other ratios were below 1, except for the RCA in right-sided BC patients (2.7 ± 0.7). Considering the left-sided BC RT patients, even in the 'low exposed' group (MHD < 3 Gy), high exposure of the LV and LAD was observed, with D2 values reaching 55.48 Gy and 56.42 Gy respectively, higher than in the group of patients with MHD > 3 Gy (Table 4). Moreover, in the 'low exposed' group, 56% of the population could receive doses above 40 Gy to the LAD (75% in the high exposed group), involving up to 51.6% of the LAD volume, and 3 patients even received doses above 50 Gy to LAD volumes of 1.2%, 5.3% and 24.1% respectively.
Discussion
Although doses to the heart have decreased considerably over the past few years [8,17,18], radiation-induced heart disease remains a concern given the improving survival of breast cancer patients. Mean heart dose has often been used as the reference measure in cardiotoxicity studies [5]. However, there is increasing evidence of the importance of considering individual cardiac substructures, as some studies have implicated the left anterior descending artery [10,19], as well as the left ventricle [7], as components of the heart strongly associated with radiation-induced heart disease.
Our study highlighted that the MHD is not representative, even in order of magnitude, of the mean dose to the left ventricle, still less of the dose to the most exposed coronary artery, the LAD. The mean LAD dose (15.6 Gy) observed among the 89 left-sided BC patients was substantially higher than the mean heart dose (2.9 Gy), as previously observed [20]. The inter-individual variability in LAD dose, illustrated by its wide range from 1.68 Gy to 37.60 Gy, could be explained by several factors: variation in heart size (larger hearts tend to push the LAD into the radiation field); variation in breast size (which modifies the extent of the tangential fields); individual coronary topology (coronaries are more or less tortuous [9], and in this study some were delineated over longer segments than others because of intrinsic limitations of CCTA imaging); and the presence or absence of boost and regional lymph node irradiation. The difference between the LAD dose and MHD is striking, but it can be explained by the fact that the LAD is located in the anterior region of the heart, where it is exposed to the tangential fields used in left-sided breast RT. Moreover, we observed a mean ratio Dmean LAD/MHD of around 5, confirming prior findings [21]. However, the association between the MHD and the delineated substructures, in particular the LAD mean dose, was not strong enough to predict individual patient dose with confidence, as shown in Fig. 2 and previously observed [22]. Despite the significant correlations in Table 3, this analysis showed that the predictive value of the MHD was poor for cardiac substructures including the coronary arteries, even the LAD. More than 55% of patients with a mean heart dose < 3 Gy could still be exposed to 40 Gy or more in part of the LAD, as illustrated in Table 4. In Darby's study [5], mean doses to the LAD and the MHD were estimated using a methodology based on a computed tomography scan of a patient with typical anatomy, because patients were included in a pre-computed-tomography era. Such a non-individually-determined method could have led to inaccurate LAD dose estimations due to difficulties in localizing this structure, in contrast with our method based on individual CT and CCTA. This could explain why the MHD was a better predictor of the rate of major coronary events than the mean dose to the LAD and, even more surprisingly, why adding the mean LAD dose to the MHD failed to improve the prediction of major coronary event rates, even though the coronary arteries, in particular the LAD, are critical to the subsequent development of radiation-induced cardiac disease. In a more recent study [7], dose to the left ventricle (V5, precisely) was a better predictor of major coronary events than the MHD. Similarly, in our study, the left ventricle was chosen as a structure of interest since its dose may be more representative of cardiac endpoints than the MHD. Indeed, the left ventricle pumps blood through the aorta into the systemic circulation to most of the body (while the right ventricle fills only the lungs). The most common reason for referral to echocardiography is assessment of left ventricular function, which is extremely important as it correlates with symptoms, prognosis, events and complications in a large number of conditions, and many decisions in cardiology are based on left ventricular function [23]. From a dosimetry point of view, the association between the MHD and the LV dose, as illustrated in Fig. 1, showed that the MHD also failed to predict the individual LV dose with confidence.
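The surrogate-parameter question discussed here reduces to a per-structure linear regression of substructure Dmean on MHD, judged against the R² < 0.70 criterion from the Methods. A minimal sketch with hypothetical dose vectors (the study's actual per-patient values are in its tables and figures):

```python
import numpy as np

def r_squared(mhd, substructure_dmean):
    """R^2 of a simple linear regression of substructure Dmean on MHD."""
    x, y = np.asarray(mhd, float), np.asarray(substructure_dmean, float)
    slope, intercept = np.polyfit(x, y, 1)
    ss_res = np.sum((y - (slope * x + intercept)) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Hypothetical data: LAD doses scatter widely at any given MHD
mhd = np.array([1.2, 2.0, 2.8, 3.5, 4.1, 5.0])
lad = np.array([8.0, 25.0, 10.0, 30.0, 14.0, 35.0])
r2 = r_squared(mhd, lad)
verdict = "usable surrogate" if r2 >= 0.70 else "MHD insufficient to predict LAD dose"
print(f"R^2 = {r2:.2f}: {verdict}")
```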
In the cardiotoxicity assessment of breast cancer RT, dosimetric evaluation of the RCA has been poorly studied. However, the RCA is the second largest artery supplying the heart, after the left main coronary artery. During treatment of the right breast, the proximal part of the RCA can be included in the irradiation field. In our study we observed, among right-sided BC patients, a mean heart dose of 0.46 Gy but a mean RCA dose of 1.25 Gy. These results are concordant with previously published findings [24] of a mean dose to the RCA of 1.92 Gy. The mean ratio Dmean RCA/Dmean Heart was around 3, and to a certain extent the RCA of right-irradiated patients may play the same role as the LAD does for left-irradiated patients.
Because of their small volumes and inherent contouring variability, the coronary arteries (in particular the LAD) are difficult structures to delineate. Aiming at a 'Swiss Army knife' for cardiac doses, some studies have developed methodologies based on atlases and auto-segmentation that could help standardize contouring for these structures [12,13,25,26]. These algorithms were able to generate valid segmentations for major cardiac structures (whole heart, left ventricle, right ventricle, …) but not for small structures such as the coronary arteries. In our study, rather than considering a generic coronary model based on anatomic charts or an atlas, we had the opportunity to perform individually determined cardiac dosimetry, including small cardiac substructures, based on registration of the planning CT and CCTA images. A limitation of our study, however, is that doses calculated by the TPS might not be reliable enough in some regions (out-of-field, lung/heart interface), but this is currently what is available in the usual clinical context. Further investigations based on our data, taking into account some of these dosimetric parameters derived from heart DVH parameters and possibly other available clinical parameters, might lead to a better estimation of the coronary dose, even when the coronaries are not delineated.
Very few previous studies have clearly defined the anatomic distribution of RT-associated coronary artery disease [14,19]. One would reasonably hypothesize that the increased risk of coronary artery disease is dose-dependent and manifests largely in the coronary arteries that lie directly within the radiation field [27], as previously observed [19]. Our study showed that the value of the MHD is of limited help in understanding the dose-effect relationship for the LAD. More research is needed to determine which indicator of heart dose from breast RT is the most relevant for determining cardiac toxicity and morbidity. Doses to the heart have to be as low as possible. Currently, the QUANTEC (Quantitative Analysis of Normal Tissue Effects in the Clinic) recommendations specify that the heart should always be contoured and that V25 should be < 10% (in 2 Gy per fraction) [16]. No dose recommendation exists for the coronary arteries, except that one Danish study proposed a threshold with a maximum of 20 Gy to be given to the LAD [28]. With the MHD kept as low as possible, coronary artery exposure can also be kept as low as possible, and gating is an option to decrease these doses [22,29], but we could not test it in our study as only 4 patients were concerned.
Conclusion
To our knowledge, this is the first study to use patient CCTA combined with the patient-specific simulation CT scan to estimate doses to the substructures of the heart, including the coronary arteries, in such a large number of patients. It showed that considering the MHD alone can nevertheless mask excessive cardiac substructure irradiation, and that the predictive value of the MHD for cardiac substructures, including the coronary arteries, is poor. The results indicate that precise studies of radiotherapy-induced cardiotoxicity should assess the dose delivered to the whole heart as well as to the cardiac substructures, in particular the LAD. For our BACCARAT study, this accurate heart dosimetry should prove fruitful in the analysis of myocardial dysfunction based on cardiac echography and in the precise analysis of coronary artery segments based on CCTA.
Availability of data and materials
The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request. Abbreviations: MHD, mean heart dose; D2, minimal dose received by the most irradiated 2% of the structure volume; V40 (in %), relative volume exposed to at least 40 Gy. | 2019-02-07T17:39:17.864Z | 2019-02-07T00:00:00.000 | {
"year": 2019,
"sha1": "098bddc85913ae6fd75b794cd78b24caa69d79ec",
"oa_license": "CCBY",
"oa_url": "https://ro-journal.biomedcentral.com/track/pdf/10.1186/s13014-019-1234-z",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5889a06bc75d88887a37bdea8229da57095f3715",
"s2fieldsofstudy": [
"Medicine",
"Physics"
],
"extfieldsofstudy": [
"Medicine"
]
} |
258364472 | pes2o/s2orc | v3-fos-license | Not heading in the right direction: Five hundred psychiatrists’ views on resourcing, demand, and workforce across New Zealand mental health services
Objective: To explore the views of psychiatrists (including trainees) regarding the current state and future direction of specialist mental health and addictions services in Aotearoa New Zealand. Methods: Psychiatrists and trainee psychiatrists (registrars) in Aotearoa New Zealand were surveyed in August 2021. Of 879 eligible doctors, 540 participated (83% qualified and 17% trainee psychiatrists), a response rate of over 60%. Data were analysed quantitatively and with content analysis. Results: Psychiatrists thought specialist mental health and addictions services had been neglected during recent reforms, with 94% believing current resourcing was insufficient, and only 3% considering future planning was heading in the right direction. The demand and complexity of on-call work had markedly increased in the preceding 2 years. Ninety-eight percent reported that people needing specialist treatment were often (85%) or sometimes (13%) unable to access the right care due to resourcing constraints. The pressures were similar across sub-specialties. A key theme was the distress (sometimes termed ‘moral injury’) experienced by psychiatrists unable to provide adequate care due to resource limitations, ‘knowing what would be a good thing to do and being unable to do it . . . is soul destroying’. Recommendations were made for addressing workforce, service design and wider issues. Conclusion: Most psychiatrists in Aotearoa New Zealand believe the mental health system is not currently fit for purpose and that it is not heading in the right direction. Remedies include urgently addressing identified staffing challenges and boosting designated funding to adequately care for the 5% of New Zealanders with severe mental health and addiction needs.
Introduction
Despite aspirational policy statements (e.g. United Nations General Assembly, 2007), access to mental health care remains variable and mental health conditions persist as the leading cause of global disability (World Health Organization, 2022). Australia and Aotearoa New Zealand's rates of mental health-associated disability sit among the highest in the world (Global Burden of Disease Mental Disorders Collaborators, 2022). This has obvious human costs but also confers a heavy financial burden, with serious mental health and addiction issues costing an estimated $12 billion a year in New Zealand alone (New Zealand Ministry of Health, 2017).
The escalating demand for mental health care shows little sign of slowing down. The number of individuals seen by the specialist mental health services and non-government organisations 'ring-fence' funded to serve the 'top 3%', or those with moderate-to-severe needs (New Zealand Government, 2018a), increased by approximately 40% between 2010 and 2020 (Association of Salaried Medical Specialists, 2021b). This jump in demand has been attributed to an increasing population and improvements in data capture, as well as the increased incidence and awareness of mental health issues (New Zealand Ministry of Health, 2019). The rising demand for mental health support has put considerable pressure on the health system and beyond (Office of the Health and Disability Commissioner, 2018). Meanwhile, New Zealanders have struggled to access appropriate and timely mental health services, with long waits for treatment while their mental health deteriorates (New Zealand Government, 2018b). At a time when mental health presentations were already increasing, the situation has been complicated by the effects of the COVID-19 pandemic, which disproportionately affected vulnerable groups, including people with recognised mental conditions, young people and essential workers (Bell et al., 2021, 2022; Every-Palmer et al., 2020).
Improving mental health service provision in New Zealand has been a key focus for the Government in recent years. In 2018, the final report of the Government Inquiry into Mental Health and Addiction, He Ara Oranga, made a number of recommendations, with a particular focus on prevention and early intervention and on expanding access for the 20% of the population who experience mild-to-moderate mental health issues (New Zealand Government, 2018b). A major recommended strategy was a shift from 'Big Psychiatry' to a less medically driven 'Big Community'. 'This is all very well as far as it goes', explained Mulder and colleagues, 'but runs into a major flaw: "big" psychiatry in New Zealand is actually rather small' (Mulder et al., 2022). Specifically, they pointed to New Zealand's comparatively low number of psychiatric beds per capita; in 2020, New Zealand had 32 beds per 100,000 people, compared to the Organisation for Economic Co-operation and Development (OECD) average of 65 beds. In addition, the country has for many years had the lowest number of psychiatrists per capita compared with 10 other similar countries including the United Kingdom, Australia and Canada (Association of Salaried Medical Specialists, 2021b). Mulder et al. (2022) concluded, 'it would have been more accurate for the [He Ara Oranga] Report to have used the term "Small Psychiatry", which would have helped explain the problems facing the public sector'.
Following publication, the He Ara Oranga report received some criticism from mental health experts for a perceived comparative neglect of the needs of those with serious mental illness and of the resourcing of specialist mental health and addiction services (Allison et al., 2019; Royal Australian and New Zealand College of Psychiatrists, 2019). However, He Ara Oranga was hopeful that 'demand for specialist services will reduce as issues are dealt with earlier, before they escalate . . .' (New Zealand Government, 2018b). To date, this optimistic prediction does not seem to have transpired. As 2021 Committee members of Tu Te Akaaka Roa (the New Zealand committee of the Royal Australian and New Zealand College of Psychiatrists [RANZCP]), the authors of this paper were receiving increasing reports from members concerned about demand, resourcing and workforce shortages in their areas. With increasing access problems and expanding wait times for specialist mental health and addiction services also being reported in the media (Meier and Lourens, 2021), we considered it timely to canvass the experiences of the frontline psychiatry workforce in a systematic way.
Aim
The aim of this study was to collect data from psychiatrists regarding their views about the state of mental health services in New Zealand. The targeted domains for inclusion were as follows:
Method
The study was approved by the University of Otago Human Ethics Committee (reference D21/230). The survey was fielded from 27 July to 24 August 2021. The sampling frame was all clinicians practising psychiatry in New Zealand, either as vocationally registered psychiatrists, provisionally vocationally registered psychiatrists or psychiatry trainees (registrars). Both members and non-members of the RANZCP were eligible to participate. There were around 870 people within the eligible population.
A cross-sectional survey design was used. The survey was designed to be brief, both to encourage participation and to ensure compatibility across media platforms. The content was refined using the 'group mind' process, with 10 colleagues iteratively testing and rigorously critiquing pilot versions, and successive improvements made based on their comments (Bradburn et al., 2004).
The Survey Monkey® questionnaire was distributed by email (from the RANZCP's mailing list) and took approximately 13 minutes to complete. After viewing the information sheet, consenting to participate and successfully completing the eligibility questions, participants anonymously answered questions about their experiences of mental health services in New Zealand. Demographic information collected included age, gender (male, female, gender diverse), area of specialisation, years of experience and ethnicity. Additional free-text boxes were provided. Please see Supplementary File 1 for the survey questions.
Analysis
Numerical data were quantitatively analysed using Microsoft Excel® software. Percentages reported represent the responses to each question rather than the total number of participants (participants could skip questions, and a small number of participants did not respond to every question). For measures where the absolute numbers were low, results were not broken down by other demographic variables. Free-text data were analysed using manifest and latent content analysis (Hsieh and Shannon, 2005). Content analysis can capture the meanings within data, including data from questionnaires, and involves establishing categories and identifying the frequency with which they occur (Crowe et al., 2015).
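The sub-specialty comparisons reported in the Results (for example, the p = .05 differences in perceived staffing changes) are contingency-table tests on categorical counts. The study's calculations were done in Excel; purely as an illustration, the sketch below back-calculates counts from the reported percentages (64% of 236 adult and 42% of 52 old age psychiatrists reporting decreased staffing) and uses scipy in place of the authors' workflow:

```python
from scipy import stats

# Counts back-calculated from reported percentages (illustrative reconstruction)
table = [[151, 85],   # adult psychiatrists: staffing decreased / did not
         [22, 30]]    # old age psychiatrists: staffing decreased / did not
chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p:.3f}")
```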
Data were analysed in the first instance by M.L.G. and S.E.-P. To enhance trustworthiness, key ideas and themes were shared and developed through successive iterations of the manuscript with co-authors (H.T., O.H., M.L., M.J. and S.R.). In terms of positionality, all authors were affiliated with the RANZCP. M.L.G. was a policy analyst, and all other authors were psychiatry members of Tu Te Akaaka Roa, the New Zealand committee of the RANZCP.
Results
There were 540 responses in total, representing a response rate of approximately 61%, and 90% of participants completed the whole survey. Eighty-nine participants (17%) were still training as psychiatrists (psychiatry registrars/residents). Participant demographics are summarised in Table 1. The sample was generally representative of New Zealand vocationally registered psychiatrists (46.4% of the RANZCP membership in New Zealand are female and 4.4% identify as Māori). However, there was relative underrepresentation of trainees, with 17.1% of participants being registrars compared with 26.5% of the New Zealand RANZCP membership.
Participants were asked for their views on the current state and direction of the mental health and addiction system. Results are summarised in Table 2. A clear majority (82%) said that the configuration was currently not fit for purpose, and a similar proportion (88%) said that funding was not fit for purpose. A majority disagreed that the system was 'heading in the right direction' (71% in relation to configuration and 75% in relation to funding).
Regarding the configuration of services, psychiatrists commented that services were often disjointed and constructed in a way that did not work well for service users.Many participants also commented on a poorly integrated system for addressing the broader social determinants of mental health.
Many participants discussed the increased funding for primary mental health care in the 2019 budget. Participants were generally supportive of this investment but noted that, in their experience, it had not led to any tangible reduction in pressure on secondary services; on the contrary, it had increased referrals from primary care. While improvements in primary care were considered helpful for addressing some people's needs, service users with serious and complex mental health issues were thought to also need the support of specialist services, with this population currently being underserved.
The pervasive rhetoric seem[s] to centre around there being plenty of service[s] for the severely mentally ill and the mild to moderate group missing out. Making two hungry people fight over scraps is an unacceptable tactic. The severely mentally ill are [being] very poorly served.
The cohort which uses specialised psychiatric care is not the same that benefit from primary interventions. They need highly trained and structured treatment programmes which are not accessible through primary care providers.
Resourcing of secondary mental health and addiction services
Across all of the questions, the highest level of consensus concerned the current resourcing of inpatient and secondary mental health and addiction services: 94% of participants disagreed or strongly disagreed that resourcing was currently fit for purpose. Most participants also disagreed that this was heading in the right direction. The number of participants who thought that resourcing was fit for purpose or heading in the right direction was strikingly small (only 2% and 3%, respectively). No specialists in old age, consultation-liaison, intellectual disability, addiction or psychotherapy expressed positive views in this domain.
Free-text comments on resourcing were made by 190 participants. The most common themes were inadequate staffing and/or the lack of inpatient beds.
On call 'crisis' work
After-hours work was a particular source of consternation. Of participants who worked on an acute on-call roster, 90% reported that demand in their after-hours work had increased or increased a lot, and 88% reported that the complexity of people's needs after hours had increased or increased a lot. A small number of participants reported that demand and complexity had not changed (9% and 12%, respectively), but not a single participant reported that they had reduced.
Over 120 participants left free-text comments regarding after-hours work. A key theme was the distress psychiatrists experienced at not being able to provide adequate mental health crisis services to people in crisis.
Being on call is now the worst part of my job and it makes me consider resigning my job. The main problem . . . is the lack of inpatient beds that are available . . . it is knowing what would be a good thing to do and being unable to do it, and then making an alternative plan that you know is a bad one but you have no other options - that is soul destroying.
It is extremely stressful to consistently be having to provide sub-standard care for people in need, not to mention the lack of recovery time before returning back to work for your day job. I have begun to dread being on call . . . On call is a disaster. I resent having to undertake this much complex work with such limited resource . . . This feels hopeless and abusive for patients and staff.
Other key themes mirrored a perceived paucity of available inpatient beds and understaffing. Comments on after-hours staffing shortages highlighted that this was not confined to a lack of psychiatrists; there were shortages of all the mental health professionals needed to provide care. Participants reported that bed shortages meant that acutely unwell people were either not admitted or were discharged precipitously to create space for the next admission, which then created a cycle of crisis-driven reactive care. Participants described the lack of beds as self-perpetuating: when people have to wait longer to access treatment, they become more unwell in the interim, and then take longer to recover (which further reduces capacity).
The main stress on call is a resource issue not a complexity issue. Service users are presenting more often with worse symptoms than two years ago, but this is not due to them being more complex than two years ago, but because they have been unable to receive adequate treatment due to demand.
Approximately one-third of participants mentioned substance use, housing and/or other socioeconomic factors as contributing to both demand and complexity. These factors were thought to complicate the support of people in crisis and make it more difficult for them to recover in the community. Consequently, people got 'stuck' in inpatient facilities or there was a 'revolving door' of frequent readmissions.
We have difficulties with getting people discharged from hospital as they have no proper accommodation, therefore when you are on call there are often no beds to admit acute patients into. This makes the entire crisis-response system less efficient, and crisis patients end up waiting longer for service. Homelessness also makes it more difficult to keep [people] well in the community, leading to more crisis presentations.
[There is] huge unmet need and complex needs including housing instability, severe deprivation, complex unmet physical health needs and significant unmet need for addiction services.
In participants' usual area of clinical practice
Similar patterns were reported in psychiatrists' usual areas of practice, with 93% reporting that demand had increased or increased a lot, 87% reporting that the complexity of people's needs had increased or increased a lot, and 86% reporting that waiting lists had increased or increased a lot. Conversely, few participants reported an increase in staffing levels (60% reported staffing had decreased or decreased a lot, and 32% reported no change). Some elaborated that this was a relative decrease, e.g. there might have been an increase in staff numbers, but this had not kept pace with the increasing numbers of people accessing care.
Almost two-thirds of comments discussed staffing pressures. Many were concerned about the impact of burnout and stress on recruitment and retention of staff. Again, this was seen as self-perpetuating: staff were leaving due to stress, which put further strain on the remaining staff.
There were a few statistically significant (but still relatively minor) variations across different sub-specialties. For example, adult (n = 236) and child and adolescent (n = 91) practitioners were more likely than other sub-specialties to report that staffing levels had decreased (64% and 59%, respectively), compared with 42% of old age (n = 52) and 45% of addiction psychiatrists (n = 33) (p = .05). A subgroup analysis of the 97 child and adolescent participants has been published elsewhere (Every-Palmer et al., 2022). Around 86% of old age psychiatrists (n = 51) reported an increase in demand and 75% reported an increase in waiting lists, which was lower than other sub-specialties (90-97%, p = .05).
Service users' access to care
Participants were asked, 'Thinking of your area(s) of usual clinical practice in the last two years, how often are patients/consumers unable to access the right mental health care at the right time because of resourcing constraints?' Most people (85%) reported this occurred often or very often. Similar results were seen across different sub-specialties (see Figure 1). However, there were some statistically significant differences: old age psychiatrists (n = 53) were less likely than other sub-specialties to report service users were 'often/very often' unable to access the right care (70%), while consultation-liaison psychiatrists (n = 35) were more likely to report this (100%).
Of the 150 participants who provided additional comments, over a third mentioned a shortage of psychologists or psychological therapies as key barriers to people accessing the care they required.
What is working well?
When asked 'Thinking of your area(s) of clinical practice, what are some things that are currently working well?', participants often referred to their team or the commitment of staff (44% of responses). Others reported how much they enjoyed working with service users.
There are lots of hard working/committed team members who do their best for the clients and each other.
The current staff! Those mental health professionals that are sticking at it, going above and beyond their duties. They need to be recognized and valued.
Many participants applauded the movement over time from a medico-centric model towards a more holistic multidisciplinary approach, with the evolution of valuable new cultural and peer-support workforces.
. . .holistic approaches, continued therapeutic empathy, appreciation for the person behind the illness, and commitment to the public.
What would psychiatrists change to improve services?
More than 430 participants provided a suggested priority to improve the mental health and addiction system (total n = 837 suggestions, with an average of two suggestions per participant). Many participants provided non-specific responses; 'increase staffing', 'more funding' or 'the whole system needs change' were common themes. However, a number of participants provided detailed responses and specific ideas.
Workforce issues received the greatest number of mentions (n = 158). Participants described significant and pervasive workforce shortages across the board, with inadequate numbers of regulated mental health and addiction staff (all professions) and peer-support workers to meet current demand. Recruiting additional staff or creating more positions was mentioned 120 times. Employing more multidisciplinary staff was emphasised as important to ensure the diverse needs of people seeking support could be met.
Addressing the acute shortages of mental health clinicians who could deliver evidence-based psychological therapies in the public system was viewed as a priority.
Increasing the number of psychologists employed or alternatively, increasing the number of psychiatric registrars [trainees] and encouraging them to do more therapy.
Participants also emphasised the importance of valuing and retaining existing staff, noting the risk of burnout and job dissatisfaction leading to a self-perpetuating downward spiral of workforce attrition.
Turnover of staff is a major issue. Each time a disillusioned staff member leaves, their caseload is distributed among already stressed remaining clinicians.
Improve inpatient facilities
A desire for more acute resources and improved inpatient facilities was another strong theme. Participants described a disconnect between the demand for inpatient beds and the number of beds currently available.
Ensure inpatient units are fit for purpose and regularly upgraded in line with other hospital areas.
Some noted that a plan was needed to develop more rehabilitation services and intermediary services so that service users could transition better between services suited to their current needs. Some participants described the inpatient units in their areas as unfit for purpose and in dire need of refurbishment or replacement.
Increase funding for specialist services
Specialist services (including services for older people, infants, children, adolescents and intellectual disability) were identified as being in urgent need of increased funding and resourcing to adequately support the demand for services. It was noted that funding should be based on adjustable models aligned with health need and population dynamics, rather than fixed models. Others commented that more funding should be directed towards prevention and that mental health funding should be ring-fenced.
Improve integration of primary and secondary services
Some 47 mentions were made regarding primary mental health and addiction services. Participants felt greater integration was required between primary and secondary services for the system to work effectively, and that primary care required greater support from secondary care. This included more training, increased access to psychotherapy, and more psychiatry and psychology consultations to support GPs.
Improve relationship between primary to secondary care including increasing training for GPs, perhaps embedding a psychologist in GP practices.
Several participants stated that greater investment in primary care increased the workload and stress on the secondary sector by driving more referrals; therefore, a whole-of-system approach is required. Several participants noted people might not be able to afford to see a GP for ongoing mental health care and would present to EDs or crisis services instead.
I do think that an increase in funding directed to primary care is the right direction. However, there is a misconception that if you treat things in primary care this will translate to less demand on acute services, when the opposite is the case.
Address underlying social determinants
As well as addressing health system issues, there was a strong desire to direct funding towards addressing the social and economic factors that drive poor mental health outcomes and inequities, such as institutional racism and colonialism, poverty, violence and abuse, homelessness, poor-quality housing, substance use, inequitable access to education and health care, and inadequate support for parenting skills (n = 64).
Multi-agency response to address social determinants of mental health particularly housing, education, social deprivation - these are driving the complexity and prevalence of crisis presentations.
Invest in wrap around services for disadvantaged children - inconsistent care placements, cultural disconnection, and trauma are setting children up for a lifetime of hardship, disadvantage and contact with MHS [mental health services] which may have been preventable.
Investing in correcting the socioeconomic determinants of mental health was considered a priority. The housing crisis was mentioned eight times, and it was noted that more houses are required to address poverty and homelessness.
More robust clinical governance
Many psychiatrists felt they 'weren't being listened to' and were undervalued. Some 39 participants voiced concerns that service design decisions were being made by people with limited understanding of the complexities of mental illness and substance use. Other participants reported not feeling trusted by managers. It was considered that better clinical 'leadership', rather than 'management', would result in a more effective healthcare system and greater satisfaction and retention of staff.
Discussion
Our study indicates that psychiatrists in New Zealand are pessimistic about the state of mental health and addiction services. Most psychiatrists think the system is not fit for purpose and believe that people are unable to access mental health care due to resourcing constraints. These concerns exist in an environment where psychiatrists report escalating demand, increasing complexity, growing waitlists and reduced staffing levels.
These issues are not new (Association of Salaried Medical Specialists, 2021b; Meier and Lourens, 2021; Monasterio et al., 2020); nor are such concerns unique to psychiatrists in New Zealand. However, the country should be in a somewhat unique position to improve its mental health and addiction system following a major government inquiry into mental health and addiction in 2018 (New Zealand Government, 2018b), substantial funding injections via recent government budgets (New Zealand Ministry of Health, 2022a, 2022b) and a recent overhaul of the health system (New Zealand Ministry of Health, 2022c).
The government-promoted premise that increased upstream intervention would reduce the need for specialist services is contentious. Thus far, 'upstream' has meant increasing access to brief and novel forms of primary care support (19,20). To date, this has had little discernible impact on more than a decade's worth of increasing demand for psychiatrists and specialist mental health services (18). It has been estimated that, for specialist services, 'the average funding per client in real terms actually has decreased by an estimated 38% from 2008/09 to 2019/20' (Association of Salaried Medical Specialists, 2021b). The true prevalence of moderate-to-severe mental illness is closer to 5% than the 3% ring-fence set in the 1990s. In Te Rau Hinengaro, an epidemiological survey of high-prevalence mental health conditions undertaken in 2003-2004, 4.7% of the adult population had severe mental health needs in any year (Oakley Browne et al., 2006). Ring-fenced funding for specialist mental health services should be increased to cover the 5% of the population with significant needs.
In the context of the COVID-19 pandemic, there has also been limited additional funding to boost immediate care for those with moderate-to-severe issues, and this has led to increased concern and risk of burnout among a perennially limited pool of psychiatrists (Chambers and Frampton, 2022). A long 'tail' of pandemic-related psychological sequelae is anticipated, and gains from any type of upstream prevention effort may take decades to manifest (Clift et al., 2022). As a result, there is likely to be continued demand for current or greater levels of specialist (including psychiatric) services for quite some time.
While our study demonstrated concern about the future of the mental health and addiction system, participants highlighted the strengths of a committed, multidisciplinary workforce with increased cultural and peer-expert capacity. Most feedback highlighted the need to boost access to specialist staffing and services in the short term, while improving longer-term health- and non-health-related upstream prevention of mental health and addiction issues. In terms of psychiatry, training schemes across the country have noticed a significant upswing of interest in the discipline, with more eligible candidates than there are places available (Alan Faulkner, Chair, RANZCP New Zealand training committee, February 2023, personal communication). If more positions were to be funded, as is required, there is a promising talent pool available to fill them.
Participants recognised that all levels of the health system are inter-related and that focusing on one part of the system (such as primary care) simply moves the pressure point to a different area. Providing adequate facilities that match the service user's journey is also a key requirement. In particular, participants noted a lack of long-term community facilities for those with more complex presentations, and challenges in finding appropriate accommodation for discharging service users who need support.
Our study derives these views and suggestions from the doctors working at the coal-face of New Zealand's mental health and addiction system. Strengths of our study include the high response rate and the collection of both quantitative and qualitative feedback. Limitations include the subjective nature of the findings, the finite depth of questioning and potential selection bias (those doctors with stronger opinions were probably more motivated to take part). This survey only recruited psychiatrists. It would be useful for future research to also seek the views of service users, families and other mental health clinicians.
As noted by Braun and Clarke (2006), data are not coded in an epistemological vacuum. Six of this study's seven authors were psychiatrists, and all of us were members of the New Zealand Committee of the RANZCP. The framing of the questionnaire and our interpretation of free-text themes may have been biased by our professional backgrounds and positionality. Furthermore, words/text do not operate as external signs of internal meaning for the individual participant but rather as a predetermined system for the allocation of meaning (Crowe, 1998). Perceptions are dynamic, so the same questionnaire may have elicited different responses at different points in time. However, our results can be triangulated with a survey of New Zealand psychiatrists published by the Association of Salaried Medical Specialists in November 2021, which reported similar increases in caseload and complexity and similar dissatisfaction with the resourcing of specialist mental health services (Association of Salaried Medical Specialists, 2021a). In addition, in that survey, 45% of psychiatrists said they would leave their current job if they could. These data, alongside our representative sample and high response rate, give validity to our finding that New Zealand psychiatrists are very concerned about the current state and future direction of their mental health and addiction system.
Further exploration of the perspectives of the non-psychiatric workforce and of patients/service users is needed to better understand to what extent the views of psychiatrists are role-dependent or a genuine reflection of the state of specialist services. Longitudinal studies, perhaps using the Integrated Data Infrastructure (IDI), may help determine the veracity of the premise that upstream intervention will reduce demand for specialist services.
Conclusion
The majority of psychiatrists believe specialist mental health and addiction services in New Zealand are not currently fit for purpose and that they are not heading in the right direction. Remedies include boosting ring-fenced funding to reflect the 5% of New Zealanders with severe mental health needs; actively addressing the critical workforce pipeline (including the number of psychiatrists); and urgently improving the experiences of mental health staff. A skilled and adequately staffed multidisciplinary workforce that includes peer support and cultural capability is an extant strength. This workforce needs to be grown and nurtured to achieve the highest attainable standard of mental health for the people of New Zealand.
Me mahi tahi tātou, mō te oranga o te iwi katoa (we work together for the well-being of everyone).
Figure 1. How often patients/service users are unable to access the right mental health care at the right time because of resourcing constraints, reported by specialty area.
Table 1. Demographics of participants.
Table 2. Frequency of agreement to survey questions. The configuration of Aotearoa's mental health and addiction system is . . . | 2023-04-28T15:11:40.359Z | 2023-04-25T00:00:00.000 | {
"year": 2023,
"sha1": "9373e0207126733123c397c8911bb0bac05636d0",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.1177/00048674231170572",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "9e0ead1ee4a928556e97113705ec57463e70eec6",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
199491963 | pes2o/s2orc | v3-fos-license | Saturation mutagenesis of twenty disease-associated regulatory elements at single base-pair resolution
The majority of common variants associated with common diseases, as well as an unknown proportion of causal mutations for rare diseases, fall in noncoding regions of the genome. Although catalogs of noncoding regulatory elements are steadily improving, we have a limited understanding of the functional effects of mutations within them. Here, we perform saturation mutagenesis in conjunction with massively parallel reporter assays on 20 disease-associated gene promoters and enhancers, generating functional measurements for over 30,000 single nucleotide substitutions and deletions. We find that the density of putative transcription factor binding sites varies widely between regulatory elements, as does the extent to which evolutionary conservation or integrative scores predict functional effects. These data provide a powerful resource for interpreting the pathogenicity of clinically observed mutations in these disease-associated regulatory elements, and comprise a rich dataset for the further development of algorithms that aim to predict the regulatory effects of noncoding mutations.
It was previously shown that the cancer-associated allele (A) led to a significant increase in promoter activity (~8-fold versus empty vector) compared to the unassociated allele (~3.5-fold versus empty vector) [4]. Our MPRA for this promoter was carried out in HeLa cells along with the cotransfection of USF1 and USF2. We did not observe a significant effect on promoter activity for rs1867277 (Supplementary Table 10). Overall, only 6 out of 1,923 variants with a minimum of 10 associated tags showed a significant effect (p-value < 10⁻⁵) on promoter activity, and these had small expression effects. Our three technical replicates also had the lowest correlation of all MPRA experiments (0.16 Pearson correlation; Supplementary Table 7), precluding our ability to provide definitively interpretable results for this promoter.
GP1BB
Bernard-Soulier syndrome (BSS) is a rare autosomal recessive bleeding disorder characterized by defects of the GPIb-IX-V complex, a platelet receptor for von Willebrand factor (VWF). Most of the mutations identified in the genes encoding the GP1BA (GPIbα), GP1BB (GPIbβ) and GP9 (GPIX) subunits prevent expression of the complex at the platelet membrane or its interaction with VWF. A mutation in the GP1BB promoter, NM_000407.4:c.-160C>G, is thought to disrupt the second GATA consensus binding site and was shown to decrease promoter activity by 84% (Supplementary Table 10) [5]. In our MPRA, which was carried out in the HEL 92.1.7 cell line (erythroblasts originating from human bone marrow), we also observed a reduction in promoter activity for this mutation (52% of baseline promoter activity). We also saw a significant reduction across the whole GATA binding site. Several additional regions led to reduced promoter activity. For example, chr22:19,710,900 (GRCh37), which is annotated by JASPAR to have binding sites for Ets-related factors [6], showed significant reduction of promoter activity due to various mutations. Additional regions such as chr22:19,710,912-19,710,923, chr22:19,710,940-19,710,949 and chr22:19,711,001-19,711,032 (all GRCh37) also encompassed several mutations that led to significant reductions in promoter activity. Overall, we observed a good correlation (0.88 Pearson correlation; Supplementary Table 7) between replicates for this MPRA.
HBB
Mutations in the Hemoglobin Subunit Beta (HBB) gene cause beta thalassemias (β0 or β+), characterized by reduced hemoglobin levels that lead to lower oxygen levels in the body. The severity of the disease depends on the nature of the mutation. There are 31 clinical variants in the HBB promoter reported as disease-causing in the Leiden Open Variation Database (https://lovd.bx.psu.edu/home.php?select_db=HBB) [7]. In our MPRA, which was carried out in HEL 92.1.7 cells (erythroblasts originating from human bone marrow), 17 of the 31 clinical variants had a significant effect on promoter activity (p-value < 10⁻⁵) (Supplementary Table 10). We observed several blocks where mutations led to reduced activity, possibly corresponding to transcription factor binding sites. For example, positions c.-142 to c.-136 contain a known binding site for the erythroid Krüppel-like factor (EKLF), a zinc-finger transcription factor that plays a critical role in erythropoiesis and the regulation of β-globin switching [8]. We observed a decrease in expression of about 12%-19% for previously reported nucleotide variants within this binding site (Supplementary Table 10). In contrast, the promoter mutation NM_000518.4:c.-18C>G, which has previously been shown to decrease steady-state mRNA levels to 85.2% [9] and is thought to disrupt the binding and transactivation of EKLF (causing mild β+ thalassemia [8]), did not show a significant effect in our study (Supplementary Table 10). We identified 12 promoter-activating variants (showing greater than 20% upregulation in promoter activity); for example, variant c.-99C>G showed a 128% increase in activity (Supplementary Table 10) and could potentially have a strong effect on HBB expression. Overall, we observed a good correlation between technical replicates (0.77 Pearson correlation; Supplementary Table 7). There could be several reasons why some of our MPRA results do not completely match previous results for these clinically associated variants: our minimum p-value threshold might be too stringent (e.g. with a p-value threshold of 0.01, 23 of the 31 variants show a significant effect on promoter activity), the clinical associations might be incorrect, and/or experimental differences could be contributing.
HBG1
Work carried out in human erythroleukemia cells (K562 and HEL) has identified several proteins that may bind to the HBG1 promoter and either activate or repress its activity [10]. The variant c.-255C>T decreases the similarity to the SP1 recognition site and is associated with a more moderate increase in HBG1 expression [10,11].
We carried out MPRA on the HBG1 promoter in HEL 92.1.7 cells (erythroblasts originating from human bone marrow), obtaining high reproducibility between technical replicates (0.92 Pearson correlation; Supplementary Table 7). We detected multiple blocks where mutations led to reduced promoter activity, suggesting they serve as binding sites for activating transcription factors. We saw significant repressive effects for mutations within the CAAT, CCAAT and CACCC blocks. However, we did not observe a continuous block of significant variants in our MPRA for the ATGCAAAT octamer sequence (c. Table 10). For example, c.-228T>C showed a strong increase in expression of over 50%, and c.-167C>T led to a 65% reduction in promoter activity. Similar to HBB, these results could be due to: 1) our p-value threshold being too stringent (with a p-value threshold of 0.01, 8 of the 14 variants show a significant effect on promoter activity); 2) the clinical associations being incorrect; or 3) experimental differences. A further limitation for this promoter is that its regulation is driven by a complex pattern of repressor and activator binding whose components are temporally expressed.
HNF4A P2 promoter
Mutations in the Hepatocyte Nuclear Factor 4 Alpha (HNF4A) gene lead to maturity-onset diabetes of the young (MODY), a monogenic autosomal dominant subtype of early-onset diabetes mellitus caused by defective insulin secretion of pancreatic beta cells [12,13]. HNF4A has two characterized promoters, P1 and P2. We focused on the distant upstream P2 promoter, which lies 46 kb 5' of the P1 promoter and is thought to be the major promoter regulating HNF4A expression in pancreatic beta and hepatic cells [12]. Five mutations in this promoter have been associated with MODY [80,81] and are thought to affect the binding of various transcription factors and reduce promoter activity. Transfection assays with various mutations have identified functional binding sites for HNF1A, HNF1B, pancreatic and duodenal homeobox 1 (PDX1) and other transcription factors known to be associated with MODY [12]. They also showed that mutations c.-136A>G and c.-169C>T lead to reduced promoter activity compared to the wild-type sequence in HEK293 cells [12,13].
We carried out MPRA in HEK293T cells and observed an average Pearson correlation of 0.89 across replicates (Supplementary Table 7). Of the five MODY-associated mutations, we only observed a significant effect for c.-192C>G, which increased promoter activity by 20% (Supplementary Table 10; Supplementary Figure 8).
MSMB
A SNP, rs10993994, in the promoter (-57 bp from the TSS) of the Microseminoprotein Beta (MSMB) gene was found to be associated with prostate cancer risk in two separate GWAS [14,15] and showed differential reporter activity [16,17]. The risk allele (T) of this SNP reduced promoter activity to 30% and 13% of the unassociated allele in HEK293T human embryonic kidney cells and in a prostate cancer cell line (LNCaP), respectively [18,19]. We carried out MPRA for this promoter in the HEK293T cell line and observed a good correlation between replicates (0.88 Pearson correlation; Supplementary Table 7). For rs10993994, we obtained a 15% increase in promoter activity for the risk allele (T), rather than the reduction in promoter activity observed in LNCaP cells (Supplementary Table 10). This is likely due to the difference in cell lines used. The MSMB promoter has two potential androgen response element half-sites (TGTTCT), one located -113 to -118 bp 5' of the TSS and another 23 bp upstream of SNP rs10993994 [18] (chr10:51,549,467-51,549,472; GRCh37), which are likely to be activated only in LNCaP cells and not in HEK293T cells. We observed multiple blocks with significant effects on promoter activity, which align with ENCODE motifs and conservation in the 3' half of the promoter, but not in the 5' half (Supplementary Figure 8).
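Half-sites such as these can be located with a simple two-strand motif scan. A minimal sketch follows; the 60-bp fragment is hypothetical and stands in for the actual MSMB promoter sequence, which would be retrieved from the GRCh37 coordinates given above:

```python
def find_motif(seq, motif="TGTTCT"):
    """Return 0-based start positions of the motif on both strands of seq."""
    comp = str.maketrans("ACGT", "TGCA")
    rev_comp = motif.translate(comp)[::-1]   # AGAACA for TGTTCT
    hits = {"+": [], "-": []}
    for strand, m in (("+", motif), ("-", rev_comp)):
        start = seq.find(m)
        while start != -1:
            hits[strand].append(start)
            start = seq.find(m, start + 1)
    return hits

# Hypothetical 60-bp promoter fragment containing one half-site on each strand
fragment = "ACCTGTTCTGGAAATCCGTTAAGGCACTGACGTTTAAACCGGTTAAGAGAACATTGGCAA"
print(find_motif(fragment))  # {'+': [3], '-': [47]}
```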
PKLR
Three different single nucleotide mutations (NM_000298.5:c.-324T>A, c.-83G>C, c.-72A>G) and a polymorphic deletion, c.-248delT, in the promoter of the Pyruvate Kinase L/R (PKLR) gene have been found in patients with pyruvate kinase (PK) deficiency 20,21 , which leads to anemia. Functional analysis of both the rat and human PKLR promoters revealed conserved motifs (CAC/SP1, PKR-RE, GATA1) within the first 250 bp of the upstream region 21 . Both c.-72A>G and c.-83G>C significantly reduced promoter activity. The effect of c.-72A>G has been attributed to disruption of the consensus binding motif for GATA-1 at c.-69 to c.-74. c.-83G>C was shown to be part of a core binding motif (CTCTG) of a novel regulatory element (PKR-RE1) in the erythroid-specific promoter of PKLR, in close proximity to a regulatory GATA-1 binding site. c.-324T>A did not show an effect on promoter activity, and c.-248delT turned out to be a non-functional polymorphism 20,21 .
We carried out MPRA for this promoter using two different post-transfection time points (24 and 48 hours) in K562 cells and observed a good correlation between replicates (0.86 and 0.91 Pearson correlation for 24hr and 48hr, respectively; Supplementary Table 7). We observed a significant reduction in promoter activity for two of the disease-causing mutations, c.-72A>G (24h=-1.77, 48h=-2.40) and c.-83G>C (24h=-1.25, 48h=-2.19). For c.-324T>A and c.-248delT we did not observe a significant effect, which is consistent with the previous functional study 20,21 . Around 15% of all tested variants showed a significant effect, with the majority of these variants located near either the 3' or the 5' end, suggesting that these two regions are important for PKLR promoter activity. A comparison of functional, conservation and motif scores with all tested promoter elements is given in Supplementary Tables 12 and 13.
Enhancers (in alphabetical order)
BCL11A
B cell CLL/lymphoma 11A (BCL11A) is a transcription factor that regulates hemoglobin switching and has a characterized erythroid enhancer where common genetic variation has been associated with fetal hemoglobin (HbF) levels 22 . Saturation mutagenesis of the BCL11A enhancer via CRISPR-Cas9 guide RNA libraries uncovered key functional regions in this enhancer and implicated DNase hypersensitive site (DHS) +58 as having the most significant effect on BCL11A expression and HbF levels 23 . We thus used DHS +58 for our MPRA, which was carried out in the HEL92.1.7 erythroblast cell line. For the BCL11A experiments, we did not observe high correlations between replicates (0.38 Pearson correlation; Supplementary Table 7), possibly due to low fold activation by this enhancer in our assay/cell line (2.5 fold compared to empty vector) and a high mutation rate from the error-prone PCR step (creating an average of 6 variants per construct and very few wild-type sequences; factors that negatively impact model fit). Only 3% of the tested variants had a significant effect (p-value < 10^-5). The average log2 expression effect was only 0.064 (4.5%) across all variants and 0.21 (16%) for the subset of significant variants. The low activation we observed for this enhancer could be due to the use of different cell lines: HEL92.1.7 is an established erythroblast cell line collected from bone marrow, whereas HUDEP-2 24 , which was used for the CRISPR-Cas9 assays, are human erythroid progenitors derived from the umbilical cord and are thought to be more similar to adult erythroid cells. It could also be due to differences in experimental methodologies, i.e. endogenous mutations via CRISPR-Cas9 that assay phenotypic changes versus MPRAs that test the ability of the sequence to drive regulatory activity.
IRF4
A SNP, rs12203592, within an intron of the gene encoding Interferon Regulatory Factor 4 (IRF4), a transcription factor involved in pigmentation, is strongly associated with sensitivity of skin to sun exposure, freckles, blue eyes and brown hair color 25 . IRF4 is activated by the melanocyte master regulator melanocyte inducing transcription factor (MITF) along with the transcription factor AP-2 alpha (TFAP2A). The pigmentation-associated allele, rs12203592-T, is thought to disrupt a TFAP2A binding site and was shown to reduce enhancer activity 25 . We carried out saturation-based MPRA for this enhancer in human melanoma SK-MEL-28 cells and observed strong correlations between technical replicates (0.99 Pearson correlation; Supplementary Table 7). The rs12203592-T allele led to a significant reduction in enhancer activity, 36% compared to wild-type (Supplementary Table 11). Overall, we observed that 37% of the variants had a significant effect on enhancer activity. Conservation appears to be an informative measure of activity here, and IRF4 shows one of the best agreements between the different annotations and absolute expression effects (see Results and Supplementary Tables 12 and 13).
IRF6
A SNP, rs642961, in an Interferon Regulatory Factor 6 (IRF6)-associated enhancer was shown to confer an 18% attributable risk for isolated cleft lip 26 . This variant is part of a common haplotype, rs76145088 (G>A), rs642961 (G>A) and rs77542756 (A>G); the AAG associated haplotype is thought to disrupt AP-2α binding sites and increase enhancer activity compared to the unassociated haplotypes in human foreskin keratinocyte HFK cells 26 . However, this difference did not reach statistical significance. We carried out saturation-based MPRA for this IRF6 enhancer in the human keratinocyte HaCaT cell line and obtained a strong correlation between technical replicates (0.96 Pearson correlation; Supplementary Table 7). We observed that around 19% of all variants show a significant effect on enhancer activity. We did not observe a significant effect on enhancer activity for rs642961 and rs77542756 by themselves (Supplementary Table 11). However, the region where rs642961 and rs77542756 are located (chr1:209,989,152-209,989,314; GRCh37) had several activating mutations. For SNP rs76145088, our MPRA detected a significant repression of 18% (Supplementary Table 11). We could not detect an enrichment of significant variants at the AP-2α binding sites and, because of the design of our MPRA, the associated haplotype AAG is not covered. IRF6 has several conserved sites, but most of them do not align with blocks of significant expression effects (Supplementary Figure 9), leading to a low correlation between conservation and functional scores (Supplementary Tables 12 and 13). Of note, although rs642961 shows minor effects on reporter gene expression in cell culture, it is possible that it has significantly larger or even opposing effects on IRF6 expression in vivo in its native chromosomal context and/or in the presence of trans-acting variants 26 .
MYC rs6983267 and rs11986220
Activation of the MYC proto-oncogene, bHLH transcription factor (MYC) gene is a hallmark of cancer initiation and maintenance. Several GWAS reported association between the G allele of rs6983267 and increased risk for various types of cancers, such as colorectal and prostate cancer [27][28][29] . Studies have also shown that a region including rs6983267 has enhancer activity and interacts with the proto-oncogene MYC during early stages of colorectal cancer development 30,31 . rs11986220 resides ~100 kb away from rs6983267 and is associated with prostate cancer risk independent of rs6983267 32 . rs11986220 resides within a FOXA1 binding site, and the prostate cancer risk allele (A) is thought to facilitate both stronger FOXA1 binding and androgen responsiveness 32 . To study the individual variant effects of these SNPs and their surrounding sequences, we generated saturation mutagenesis libraries of 600 bp around rs6983267 and 464 bp around rs11986220. We tested the MYC rs6983267 MPRA library in human embryonic kidney HEK293T cells, adding 20 mM LiCl to activate the Wnt pathway, and the MYC rs11986220 library in human prostate adenocarcinoma LNCaP cells grown in a medium with 100 nM dihydrotestosterone to stimulate androgen activity.
For MYC rs6983267, we observed a good correlation between replicates (0.75 Pearson correlation; Supplementary Table 7). We did not see a change in activity (coefficient p-value 0.05; log2 expression effect -0.08 / 95% residual activity) for the cancer-associated rs6983267 allele (T) (Supplementary Table 11). We observed a stretch of nucleotides that led to a reduction in enhancer activity at chr8:128,413,089-128,413,107 (GRCh37). This region coincides with an annotated CTCF binding site as determined by ERB v90 33 . Of note, two variants, chr8:128,413,279C>G and chr8:128,413,289G>T (GRCh37), led to a strong increase in enhancer activity, by 86% and 84% respectively.
For MYC rs11986220, we did not obtain a good correlation between replicates (0.31 Pearson correlation; Supplementary Table 7). rs11986220 showed a 14% repressive effect, but this was not statistically significant (Supplementary Table 11). In total, only 11 out of 1685 variants had a significant expression effect on enhancer activity at a significance level of 10^-5.
RET
A common non-coding variant, rs2435357, within an enhancer in intron 1 of the gene Rearranged During Transfection Proto-oncogene (RET) is significantly associated with Hirschsprung disease (HSCR) susceptibility 34 . The disease-associated variant leads to a significant reduction in reporter activity (6-8 fold) compared to the unassociated allele in Neuro-2a cells 34 . Saturation mutagenesis based MPRA was carried out for this enhancer in Neuro-2a cells, with the library cloned into the pGL3 vector, similar to the experiment that originally tested this enhancer 34 . Overall, we observed a reasonable correlation between technical replicates (0.71 Pearson correlation; Supplementary Table 7). We did not detect a significant effect on enhancer activity for rs2435357 (Supplementary Table 11). However, this SNP is located within a sequence block that has multiple variants with significant activating and repressing effects (chr10:43,582,096-43,582,232; GRCh37). Both rs2506005 and rs2506004, which reside 76 bp upstream and 217 bp downstream of rs2435357 respectively, and are in linkage disequilibrium with this SNP, also showed no significant effect on enhancer activity. A region located towards the 3'-end of the sequence (chr10:43,582,390-43,582,526; GRCh37) is enriched in variants that led to differential enhancer activity.
TCF7L2
GWAS for type-2 diabetes (T2D) found a significant association with SNP rs7903146, located within an intron of the gene Transcription Factor 7 Like 2 (TCF7L2). The rs7903146 risk T allele is associated with impaired insulin secretion, incretin effects and an enhanced rate of hepatic glucose production 35 . Luciferase assays of both alleles in mouse pancreatic beta MIN6 cells exhibited significant allele-specific differences, with the T allele leading to a threefold stronger enhancer activity compared to the non-risk allele (Supplementary Table 11) 102 . We carried out saturation mutagenesis based MPRA for this enhancer in MIN6 cells, which covered over 99% of all mutations in the tested region (Supplementary Table 3), and obtained high variant effect correlations between replicates (0.76 Pearson correlation; Supplementary Table 7). Functional and conservation scores do not correlate well with the observed variant effects, but we observed some correlation with JASPAR motifs (Supplementary Tables 12 and 13). We do observe a significant effect on enhancer activity for rs7903146, albeit with only a 19% increase in expression.
UC88
Ultraconserved elements (UCEs) are sequences greater than 200 bp in length that are identical between human, mouse and rat 36 . Half of the UCEs were shown to be functional enhancers in developing mouse embryos, the majority of which are active in the central nervous system 37 . UC88 is located at chr2:162,094,920-162,095,508 (GRCh37) and was shown to be a robust forebrain enhancer at mouse embryonic day E11.5 (hs416; VISTA enhancer browser 38 ). UC88 was tested for enhancer activity in mouse neuroblastoma Neuro-2a cells and showed significant enhancer activity (9.3 fold compared to empty vector). Saturation mutagenesis based MPRA was performed on UC88 in Neuro-2a cells. Replicates had a lower correlation (0.64 average Pearson coefficient); however, this increased to 0.9 for significant variants (using a lenient p-value threshold of < 0.01) (Supplementary Table 7). This limited our interpretability to 10% (196 out of 1964 variants, p-value < 0.01) of all measured variants in UC88, not allowing us to see clear patterns with respect to known motifs or other annotations (Supplementary Figure 9). Overall, we obtained low correlations of functional, conservation and motif scores (Supplementary Tables 12 and 13).
ZFAND3
A type 2 diabetes GWAS found an association with rs58692659, which lies ~10 kb upstream of the zinc finger AN1-type containing 3 (ZFAND3) gene 39 . The rs58692659 SNP maps to an open chromatin region, identified previously by both islet FAIRE-seq and ChIP-seq for H2A.Z, and is thought to be bound by several beta cell transcription factors, including PDX1, FOXA2, NKX2.2, NKX6.1 and NEUROD1. Further TFBS analyses and EMSA suggest that the T allele alters the binding of NEUROD1 39 . Luciferase assays showed reduced enhancer activity (~5 fold) for the rs58692659 risk allele (T) compared to the C allele in mouse MIN6 beta cells 39 . We carried out saturation mutagenesis based MPRA on this enhancer in MIN6 cells and obtained high variant effect correlations between replicates (0.89 Pearson correlation; Supplementary Table 7). For rs58692659, we observed a significant reduction in enhancer activity (70% residual activity) for the disease-associated allele (T), which is consistent with the previously reported luciferase results. Furthermore, we found that rs58692659 resides in a region (chr6:37,775,451-37,775,758; GRCh37) that harbors numerous mutations leading to either a reduction or an increase in enhancer activity (Supplementary Table 11). This region is highly conserved in vertebrates and contains predicted binding sites for the islet-enriched transcription factors RFX4, MAX and NHLH1 39 , as annotated by JASPAR 2018 6 . Owing to the high conservation, the tested functional scores also correlate well with the observed effects (Spearman correlation ~0.4; Supplementary Tables 12 and 13).
ZRS
Mutations in the Sonic Hedgehog (SHH) limb enhancer, called the zone of polarizing activity regulatory sequence (ZRS), lead to a wide array of limb malformations 40 . These include polydactyly, triphalangeal thumb (TPT), syndactyly, tibial hypoplasia and Werner mesomelic syndrome (OMIM #188770) [41][42][43][44][45][46] . Several of these mutations were tested for enhancer activity using mouse transgenic assays, with the majority showing ectopic expression compared to the wild-type sequence 40 . In vitro assays were also done for a 1.7 kb version of this enhancer in NIH3T3 mouse fibroblasts which showed higher enhancer activity upon co-transfection of homeobox d13 (Hoxd13) and heart and neural crest derivatives expressed 2 (Hand2). As we were limited in length due to technical reasons, we first tested a 485 bp ZRS sequence that encompasses the majority of disease causing mutations in this in vitro system. We observed a significant fold change for co-transfections with Hoxd13 by itself (4.2 fold) or along with Hand2 (2.0 fold) compared to the empty vector. We thus set out to do saturation mutagenesis MPRA in NIH3T3 cells using both conditions (i.e. Hoxd13 or Hoxd13+Hand2).
We observed reasonable correlations between technical replicates (0.76 Pearson correlation for both Hoxd13 and Hoxd13+Hand2; Supplementary Table 7). Overall, we did not detect a high fraction of significant variant effects, and those that were significant were distributed over the entire region, with stronger effects towards the 3' end. For most disease-causing mutations, we did not observe a significant effect, likely for technical reasons: this enhancer has important spatial and temporal activities in the developing limb that are unlikely to be represented in an in vitro setting that uses a single cell type along with the co-transfection of additional factors. However, we did observe an effect for some known mutations. These include chr7:156,584,153T>C (GRCh37; ZRS417), which resulted in a 16% and 9% increase with Hoxd13 and Hoxd13+Hand2 respectively (Supplementary Table 11), and is known to cause a more severe limb phenotype that includes polydactyly of the four extremities and bilateral tibial deficiency 44 . Another interesting mutation is chr7:156,584,285C>A (GRCh37; ZRS285), which led to 17% and 13% higher enhancer activity with Hoxd13 and Hoxd13+Hand2 respectively (Supplementary Table 11), and is known to cause polydactyly in Silkie chickens 47 .
Supplementary Figures
Supplementary Figure 1
Promoter and enhancer luciferase assays. Bar charts representing the relative luciferase activity of the 10 selected promoters (A) and 10 selected enhancers (B) for either the wild-type (blue) or the saturation mutagenesis library (mutant library; red), normalized to the empty vector. In (A), for FOXE1, 7.5 µg of upstream transcription factor (USF) 1 and 2 were co-transfected along with the vector/library. All promoter activities were measured 24 hours after transfection except for PKLR, which was measured after 48 hours. In (B), for ZRS, 3.75 µg of HOXD13, or HOXD13 plus HAND2, were co-transfected along with the vector/library. For MYC (rs11986220), 10 nM dihydrotestosterone (DHT) was also added. For MYC (rs6983267), 20 mM of LiCl was added after 24 hours to induce Wnt pathway activity, and luciferase activity was measured at 32 hours. For all other enhancers, luciferase activity was measured 24 hours after transfection. All results are mean ± standard deviation of at least 3 independent experiments.

SORT1 enhancer assays. Bar charts representing the relative luciferase activity of the SORT1 enhancer wild-type sequence (blue), mutant library (red) and flip library (light red). Luciferase levels were measured after 24 hours, and results are plotted as mean ± standard deviation of at least 3 independent experiments.
Tags assigned from saturation mutagenesis libraries. Plots for all 24 assignments described in Supplementary Table 3. Variants are sampled evenly across all regions. The number of tags per variant follows previously characterized biases of error-prone PCR with Taq polymerase, with a preference for transitions over transversions and a T-A preference among transversions. Insertions were rare, while short deletions occurred at rates similar to those of the rare transversions. Shape denotes the reference allele and color the alternative allele of the variant. The "-" in the legend stands for a 1 bp deletion.
Quantitative RT-PCR validation results of TERT-GBM short-interfering RNA experiments.
To measure siRNA knockdown efficiency, qPCR was performed using the Ambion Power SYBR Green Cells-to-Ct kit to measure mRNA abundance of GABPA and TERT with previously used primer sequences (see Methods). Relative expression levels were calculated using the delta-Ct method against the housekeeping gene GUSB.
Supplementary Figure 8
Overlay of inferred variant effects for SNVs from promoter elements with available annotation data. Motif predictions overlaid with biochemical evidence from ChIP-seq experiments were obtained from Ensembl Regulatory Build (ERB) 33 and ENCODE 3 annotations. JASPAR motif predictions were obtained from the JASPAR 2018 6 release.
Supplementary Figure 9
Overlay of inferred variant effects for SNVs with available annotation data for all enhancer elements. Motif predictions overlaid with biochemical evidence from ChIP-seq experiments were obtained from Ensembl Regulatory Build (ERB) 33 and ENCODE 3 annotations. JASPAR motif predictions were obtained from the JASPAR 2018 6 release.
Supplementary Figure 10
Receiver operating characteristic curves of computational scores used for the binary classification of promoter variants. Plots are divided into the classification of the top 200, 500 and 1000 high-effect promoter variants (p-value < 10^-5, min tags 10) versus the same number of random variants (per element) with no expression effect (log2 expression effect < 0.05, min tags 10). Curves show the average performance across 100 classification rounds (see Methods). If multiple scores exist per method, only the best score is plotted (see Supplementary Table 16).
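The caption above summarizes the benchmarking protocol: top-N high-effect variants versus an equal number of matched no-effect variants, scored by each computational method and averaged over 100 sampling rounds. A compact, hypothetical sketch of that loop on synthetic stand-in data is shown below; scikit-learn's roc_auc_score stands in for the full ROC/PR curves, and all arrays are invented for illustration.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-ins: per-variant |log2 effect| and one computational score
# that loosely tracks effect size (as a real annotation method might).
n = 5000
abs_effect = rng.exponential(0.2, n)
score = 2 * abs_effect + rng.normal(0, 0.5, n)

def mean_auc(top_n=200, rounds=100):
    """Average AUC over repeated draws of the top-N effect variants versus
    matched random no-effect variants, mirroring the caption's protocol."""
    high = np.argsort(-abs_effect)[:top_n]       # stand-in for the p < 1e-5 top set
    none = np.flatnonzero(abs_effect < 0.05)     # "no effect" pool (|log2| < 0.05)
    aucs = []
    for _ in range(rounds):
        neg = rng.choice(none, size=top_n, replace=False)
        labels = np.r_[np.ones(top_n), np.zeros(top_n)]
        scores = np.r_[score[high], score[neg]]
        aucs.append(roc_auc_score(labels, scores))
    return float(np.mean(aucs))

print(f"mean AUC (top 200 vs matched no-effect): {mean_auc():.3f}")
```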
Supplementary Figure 11
Precision-recall curves of computational scores used for the binary classification of promoter variants. Plots are divided into the classification of the top 200, 500 and 1000 high-effect promoter variants (p-value < 10^-5, min tags 10) versus the same number of random variants (per element) with no expression effect (log2 expression effect < 0.05, min tags 10). Curves show the average across 100 classification rounds (see Methods). If multiple scores exist per method, only the best score is plotted (see Supplementary Table 16).
Supplementary Figure 12
Receiver operating characteristic curves of computational scores used for the binary classification of enhancer variants. Plots are divided into the classification of the top 200, 500 and 1000 high-effect enhancer variants (p-value < 10^-5, min tags 10) versus the same number of random variants (per element) with no expression effect (log2 expression effect < 0.05, min tags 10). Curves show the average across 100 classification rounds (see Methods). If multiple scores exist per method, only the best score is plotted (see Supplementary Table 17).
Supplementary Figure 13
Precision-recall curves of computational scores used for the binary classification of enhancer variants. Plots are divided into the classification of the top 200, 500, and 1000 high-effect enhancer variants (p-value < 10^-5, min tags 10) versus the same number of random variants (per element) with no expression effect (log2 expression effect < 0.05, min tags 10). Curves show the average across 100 classification rounds (see Methods). If multiple scores exist per method, only the best score is plotted (see Supplementary Table 17).
Supplementary Tables
Supplementary Table 1
Tested promoter sequences. Note that the construct length of MSMB is 2 bp longer than the described reference genome coordinate range, due to an insertion polymorphism present in the construct. Vector sequences for pGL4.11b and pGL4.11c are available for download from NCBI GenBank (accessions MK484103 and MK484104, respectively) and from our OSF project 75B2M.

Supplementary Table 2
Tested enhancer sequences. Note that the construct length of MYC (rs11986220) is 1 bp longer than the described reference genome coordinate range, due to an insertion polymorphism present in the construct. Human ZRS (hZRS, a limb-specific enhancer of the Sonic hedgehog gene) was co-transfected with HOXD13 and HOXD13+HAND2. UC88 is an ultraconserved element, which has not been previously associated with a phenotype. Vector sequences for pGL3c, pGL4.23c/d, and pGL4Zc are available for download from NCBI GenBank (accessions MK484107, MK484105/MK484106, and MK484108, respectively) and our OSF project 75B2M.
Supplementary Table 7
Pearson correlation across different transfection replicates. Correlation from the three transfection replicates of fitted SNV and 1 bp deletion variant effects divided by their standard deviation (requiring at least 10 tags in each replicate). MYC (rs11986220) is abbreviated to MYCs1 and MYC (rs6983267) to MYCs2. TERT-HEK293T, TERT-GBM, TERT-GBM-SiGABPA and TERT-GBM-SiScramble-2 are abbreviated to TERT-H, TERT-G, TERT-GAa and TERT-GSc, respectively. The left side considers all estimates obtained; the right side tries to exclude nonsignificant allele effects by requiring a lenient p-value threshold (<0.01) in at least one of the replicates. We use the reduction in included alleles as a proxy for the proportion of alleles with regulatory activity for each element (Frac. …).

Supplementary Table 9
Effects of specific substitutions, and transition (ts) vs. transversion (tv) effects, for significant readouts across a representative experiment of each element. The number of mutations with a significant p-value (< 10^-5) is reported from models combining transfection replicates. The overlap with single nucleotide variants present in gnomAD r2.1 is reported. Counts across the reported elements:
Total SNVs: 28937, 4601, 2756, 602, 689, 144, 75, 13
Total Tv: 19113, 2411, 1755, 419, 244, 30, 21, 8
Total Ts: 9824, 2190, 1001, 183, 445, 114, 54, 5

Supplementary Table 10
Previously described variants of promoters. Clinical variants of promoters (HBB, HBG1, PKLR, F9, GP1BB, TERT, HNF4A, MSMB, and FOXE1) measured in this MPRA study, as well as additional described variants in Supplementary Note 1. Corresponding RefSeq transcripts of the HGVS annotation are listed in Supplementary Table 1. Fold changes are reported as "non significant" (n.s.) if the associated p-value is higher than 10^-5. Variants that were not available from our experiments are marked as "not covered" (n.c.). We include effects for the PKLR_24h and PKLR_48h (PKLR_24h/PKLR_48h) as well as TERT-GBM and TERT-HEK experiments (TERT-GBM/TERT-HEK). Previously reported phenotypes from the literature are summarized in the Phenotype column.
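The numbers in Supplementary Table 7 are Pearson correlations, between transfection replicates, of per-variant effect estimates divided by their standard deviations, after a ≥10-tag filter in each replicate. A minimal version of that computation on hypothetical replicate arrays could look like this; all values below are simulated stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-variant estimates from two transfection replicates:
# fitted log2 effects, their standard deviations, and supporting tag counts.
n = 2000
truth = rng.normal(0, 0.4, n)                      # shared underlying effects
eff1 = truth + rng.normal(0, 0.15, n)
eff2 = truth + rng.normal(0, 0.15, n)
sd1 = np.full(n, 0.1)
sd2 = np.full(n, 0.1)
tags1, tags2 = rng.poisson(30, n), rng.poisson(30, n)

keep = (tags1 >= 10) & (tags2 >= 10)               # >= 10 tags in each replicate
z1 = eff1[keep] / sd1[keep]                        # effect divided by its std. dev.
z2 = eff2[keep] / sd2[keep]
r = np.corrcoef(z1, z2)[0, 1]
print(f"replicate Pearson correlation: {r:.2f}")
```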
Supplementary Table 16
Performance of computational scores for binary classification of promoter variants with high and low expression effect. The top 200, 500, and 1000 highest expression effect promoter variants (p-value < 10^-5, min tags 10) were selected and matched with no-effect variants (log2 expression effect < 0.05, min tags 10). The same number of effect/no-effect variants was sampled for each element, without normalizing each element's contribution to the overall set.
Supplementary Table 17
Performance of computational scores for binary classification of enhancer variants with high and low expression effect. The top 200, 500, and 1000 highest expression effect enhancer variants (p-value < 10^-5, min tags 10) were selected and matched with no-effect variants (log2 expression effect < 0.05, min tags 10). The same number of effect/no-effect variants was sampled for each element, without normalizing each element's contribution to the overall set. In fact, the three elements contributing the … (Supplementary Table 14), for which cell types were publicly available (HEK293T, HeLa S3, HepG2, K562, and LNCaP). In cases where an annotation is based on positions rather than alleles, we assumed the same value for all substitutions at each position. Bold text marks the best performing method.
Supplementary Table 21
Primers for preparing association libraries for short enhancers/promoters. These primers are also used to amplify longer enhancers and then those products are fragmented for subassembly (Supplementary Table 22). Plasmid DNA libraries are amplified with sequencing adaptor primers capturing the cloned sequence with its tag and adding the P5/P7 Illumina flow cell sequences. All sequences are provided 5' to 3'.
"year": 2019,
"sha1": "4ac01f7730583953316157845063510478028bc7",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41467-019-11526-w.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "fa897ec69999af912e25393d83d3618cb8fa637c",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
Life Cycle Assessment of Straw Solid Particle Molding Fuel Manufacturing Process
Taking a 500,000 t/a straw briquette fuel plant in the Baoding area of Hebei Province as an example, this paper establishes a life cycle energy consumption and environmental emission analysis model for straw solid fuel. Based on this model, the energy saving and greenhouse gas emission reduction potential of straw solid particles, as a typical briquette fuel, is quantitatively analyzed, and the life cycle energy consumption and environmental emissions are evaluated. Finally, from the point of view of energy consumption and environmental emissions, the potential environmental impact is calculated by standardization and a sensitivity analysis is carried out. The results show that the energy consumption of the straw granulation stage (compression molding stage) accounts for the largest proportion of the solid particle manufacturing life cycle, reaching 32.30%. Compared with fossil fuels, solid particles offer a large emission reduction advantage, which provides basic data for the utilization of straw solid granular forming fuel.
Introduction
China is rich in crop straw resources and forest resources. According to statistics, the annual output of crop straw is about 600 million tons, equivalent to 300 million tons of standard coal, and forestry residues amount to about 150 million tons [1]. How to efficiently and comprehensively utilize biomass energy sources such as crop straw and forestry residues has become an important research topic in many countries [2]. Biomass fuels have the dual characteristics of renewability and environmental friendliness and are considered an important energy source in a future sustainable energy system; they can be regarded as a green, clean energy source. Biomass processed into solid briquette fuel has the advantages of easy storage and transportation, a wider application range, about 20% higher combustion efficiency and less soot pollution. For example, Hebei Province is an important commodity grain production base in China, and the high yield of crops brings with it the production of a large amount of straw. Hebei has become the fourth largest straw producing area in China, with a total amount of about 80 million tons, but nearly half of the straw is directly used as fertilizer for returning to the field, and only 2% is used for straw energy [3]. It can be seen that the utilization of straw is not yet mature.
In this paper, the life cycle assessment (LCA) method is used to establish an analysis model for straw solid pellet briquette fuel. Focusing on the technology and energy utilization of straw solid briquette fuel in the Baoding area, the energy balance and greenhouse gas emissions of straw converted into biomass solid briquette fuel are quantitatively evaluated. The environmental emissions of straw pellets at different stages of their life cycle are analyzed, and the influence of key parameters on the accounting results is determined by sensitivity analysis, thus providing a reference for the correct evaluation of the energy sustainability of biomass solid briquette fuels in China.
Project overview
We take a biomass solid forming granule factory in Baoding as an example. The plant is designed to produce 2,000 t of pellets per day (water content 9.91%), or 500,000 t per year. In autumn, the moisture content of straw is 30%-70% (a mid-value of 50% is taken for the project). At an output of 2,000 t of pellets, 4,000 t of straw per day, or 1 million t per year, are needed. Because a certain amount of straw is consumed in the pretreatment stage of actual production, the consumption of straw is estimated with a 110% ratio (based on the life cycle assessment of the environmental impact assessment of biomass energy engineering). The daily consumption of straw is thus 4,400 t, and the annual demand is 1.1 million t.
Referring to the actual working conditions of the plant, the LCA boundary division is shown in Figure 1. The life cycle system, from the straw transportation stage to the biomass solid pellet briquette fuel application stage, is divided into three stages: transportation of straw from the field to the plant, processing and briquetting of the biomass solid pellet fuel, and transportation of the biomass solid pellet briquette fuel. The boundary is divided as follows:
Raw Material Transportation Phase
Straw is mainly transported by diesel trucks with an average fuel consumption of 0.08 L/(t·km). The single-trip transportation distance is calculated with a collection radius model. The amount of straw that can be collected is:

Straw collection = total collection area × output of straw per unit planted area × ratio of planting area to total area × ratio of straw used for energy

The project is located in Baoding City, Hebei Province, with a total area of 22,100 km²; the maize planting area is 4,613 km²; the total amount of straw collected for the project is 1.1 million tons; and the proportion of straw used for energy is 43% [4]. Setting the collected amount equal to the annual demand gives the collection radius

M = π R² · M0 · A · β, i.e. R = √(M / (π · M0 · A · β)),   (1)

where R is the collection radius of straw, m; M is the total amount of straw collected per year, kg; M0 is the amount of straw waste per unit area, kg/m²; A is the proportion of crop planting area; and β is the proportion of straw waste used for energy. The radius of collection is about 10 km. From (1), the annual diesel consumption of the transportation process can then be estimated from the fuel consumption rate, the annual tonnage and the haul distance.
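A small sketch of this arithmetic is given below. The per-unit-area straw yield M0 is not stated in the text, so the value used here is an assumption and the printed radius is illustrative only; the diesel estimate uses the paper's reported ~10 km radius.

```python
import math

def collection_radius_m(M, M0, A, beta):
    """Radius R (m) such that pi * R^2 * M0 * A * beta, the straw recoverable
    inside the circle, equals the annual demand M (kg)."""
    return math.sqrt(M / (math.pi * M0 * A * beta))

M = 1.1e9                # annual straw demand, kg
A = 4613.0 / 22100.0     # maize planting area / total area (Baoding)
beta = 0.43              # share of straw available for energy use
M0 = 0.9                 # kg straw per m^2 of planted land -- assumed, not from the text

R = collection_radius_m(M, M0, A, beta)
print(f"collection radius (with assumed M0): {R / 1000:.1f} km")

# Annual transport diesel at 0.08 L/(t*km), using the paper's ~10 km radius.
diesel_litres = 0.08 * 1.1e6 * 10.0
print(f"annual transport diesel: {diesel_litres:,.0f} L")
```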
Stage of Straw Production
The production of solid particles comprises the following steps: screening of straw raw material, straw drying, straw crushing, straw screening, material transportation, granulation, particle transportation, cooling, particle screening, and bagging/warehousing of the finished product. The energy consumption of each step is calculated with a corresponding model.

Table 1. Technical parameters of the high-efficiency crusher: machine type GXP130-100; motor power 220 kW; production efficiency 5 t/h.

According to the total amount of straw required and the production efficiency of the high-efficiency crusher, two units are needed when the equipment runs all year round. Equipment power consumption is calculated as: equipment power consumption = motor power × equipment operation time. The annual power consumption of the two units is 143 × 10^5 kWh. The other steps of the straw production process are treated similarly, so the power consumption calculation is shown for one step only and not repeated.
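The per-step electricity model is just motor power × operating hours × number of units. The sketch below applies it to the crushing step with the Table 1 parameters; the annual operating schedule is an assumption, since the text does not spell out the hours behind its own figure, so the result will not necessarily reproduce the quoted 143 × 10^5 kWh.

```python
# Electricity model for one processing step: power * hours * units.
motor_kw = 220            # GXP130-100 crusher motor power (Table 1)
units = 2                 # crushers needed for year-round operation
hours_per_year = 8760     # assumed continuous operation -- not stated in the text

annual_kwh = motor_kw * units * hours_per_year
print(f"crushing-step electricity: {annual_kwh:.3e} kWh/a")
```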
Summary of energy consumption in each stage
According to the calculations above, the proportion of energy consumption in each stage of straw production is shown in Table 2.
As shown in Table 2, in the life cycle of solid particle manufacturing the straw granulation stage (compression molding stage) accounts for the largest proportion of energy consumption, reaching 32.30%, while the straw screening stage accounts for the smallest, at only 0.05%.
Environmental impact assessment
The purpose of impact assessment is to identify and evaluate, quantitatively or qualitatively, the environmental impacts of the system under study, and thus to determine the resource consumption and pollutant discharge of the research system.
Environmental impact potential
The environmental impact potential (EIP) is the sum of the effects of all similar environmental emissions in the whole system. The environmental impact potential of the same kind of pollutants is converted to a reference substance by an equivalency coefficient (Table 4), and the environmental impact potential is calculated for each type of pollutant. The formula for calculating the environmental impact potential is

EP(m) = Σn EP(m)n = Σn Q(m)n × EF(m)n,   (3)

where EP(m) is the mth environmental impact potential in the product life cycle; EP(m)n is the mth environmental impact potential of the nth emission; Q(m)n is the amount of the nth emission; and EF(m)n is the mth environmental impact equivalency factor of the nth emission [5]. The values of the various environmental impact potentials are shown in Table 5.
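Eq. (3) is a weighted sum of inventory flows by equivalency factors; for the global warming category it reduces to the familiar CO2-equivalent sum. A sketch with an illustrative inventory is given below — the factors are IPCC AR5 GWP100 values standing in for the paper's Table 4 coefficients, and the emission amounts are invented.

```python
# Environmental impact potential: EP(m) = sum_n Q(m)_n * EF(m)_n,
# i.e. each emission scaled to the category's reference substance (CO2 here).
emissions_kg = {"CO2": 1.0e6, "CH4": 2.0e3, "N2O": 1.0e2}   # illustrative inventory
gwp100 = {"CO2": 1.0, "CH4": 28.0, "N2O": 265.0}            # kg CO2-eq per kg (IPCC AR5)

ep_gwp = sum(q * gwp100[name] for name, q in emissions_kg.items())
print(f"global warming potential: {ep_gwp:.3e} kg CO2-eq")
```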
Sensitivity Analysis
When a process parameter changes, the energy consumption and environmental impacts vary, but the specific changes are difficult to analyze quantitatively and qualitatively. In this study, the sensitivity analysis is based on perturbing the straw transportation radius and the power data of the various straw processing machines by +10%; on this basis the sensitivity factors in the life cycle of the system are determined. The sensitivity of each input to the results is obtained by comparing the environmental impact changes caused by the fluctuation of each input. The sensitivity S is defined as:

S = [(evaluation result after changing the inventory data) − (original evaluation result)] / (original evaluation result)

First, the sensitivity of the straw gas direct supply scenario was analyzed, carrying out the life cycle assessment while changing the data of each stage in turn. The results, shown in Table 10, are as follows: when the transportation radius changed from 78.1 km to 85.91 km, the change ratio of the global warming potential was 0.09%, and when the power consumption increased by 10%, the change ratio of the global warming potential was 232.56%. The change ratio of the manufacturing process is thus much larger than that of the transportation process, so solving the problems of energy saving, emission reduction and environmental impact in the manufacturing process is an important issue for biomass solid forming technology.
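The S formula is a simple relative difference. The sketch below reproduces the two quoted change ratios from hypothetical baseline and perturbed GWP results; the absolute values are invented, only the ratios match the text.

```python
def sensitivity(baseline, perturbed):
    """S = (result after changing inventory data - original result) / original result."""
    return (perturbed - baseline) / baseline

BASE_GWP = 1.00e6  # hypothetical baseline GWP, kg CO2-eq

# Transport radius +10% and electricity +10%, tuned to the quoted change ratios.
print(f"transport radius +10%: {sensitivity(BASE_GWP, 1.0009e6):+.2%}")  # ~ +0.09%
print(f"electricity +10%:      {sensitivity(BASE_GWP, 3.3256e6):+.2%}")  # ~ +232.56%
```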
Conclusions
1) In the life cycle evaluation of straw solid pellet briquette fuel, both the transportation of straw and the screening of straw consume energy. Although the energy consumption of the straw granulation stage (compression molding) is the largest, the volume of the compressed briquette fuel is reduced by a factor of 6-8, the fuel density is 1.0-1.4 t/m³, and the energy density is equivalent to that of a medium bituminous coal, which improves transport and storage capacity, facilitates storage and transportation, and allows efficient use. At the same time, the pollution caused by burning straw in situ is reduced and fossil fuels such as coal are saved.
2) When the parameters change, straw transportation has little impact on the environment, although it is the most labor-intensive stage of the whole process; in the straw production stage, by contrast, changing the parameters has a very large impact on the environment, so changes there are not recommended.
3) There is still much room for the development of biomass pellet fuel products, with a focus on the research and development of straw granulation technology to improve the fuel calorific value.
4) According to the global warming potential (GWP) coefficients, biomass solid particles also reduce the contribution to global warming.
"year": 2019,
"sha1": "22f6d3c7db99f5a69e1eef5da96b2882e20abab7",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1755-1315/310/4/042031",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "87ed574af2bf20090994542b58e8f03e5bccfb49",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
Distinguishing Signatures of top- and bottom-type heavy vectorlike quarks at the LHC
An SU(2) vectorlike singlet quark with charge either +2/3 (t') or -1/3 (b') is predicted in many extensions of the Standard Model. The mixing of these quarks with the top or bottom leads to flavor-changing Yukawa interactions and neutral currents. The decay modes of the heavier mass eigenstates are therefore different from those of Standard Model type chiral quarks. The Large Hadron Collider (LHC) provides an ideal environment to look for the signals of these exotic quarks. Considering all decays, including those involving Z and Yukawa interactions, we show how one can distinguish between t' and b' from ratios of event rates with different lepton multiplicities. The ability to reconstruct the Higgs boson with a mass around 125.5 GeV plays an important role in this differentiation.
Introduction
The Standard Model (SM) of particle physics has enjoyed remarkable success in explaining various experimental data. Occasionally some observations have shown deviations from the SM expectation, only to disappear later with increased statistics. Nonetheless, the possibility of new physics being indicated by experimental data [1,2] has constantly driven physicists in an inspired quest. On the theoretical front, the SM does have some shortcomings: it has many free parameters and offers no answer to some fundamental questions. An example is the issue of naturalness, i.e. the stability of the Higgs boson mass against quadratically divergent corrections. In order to address such questions, many theoretical scenarios are explored, with or without direct connections to the questions asked.
In a general context, vectorlike quarks can be either top-like (charge +2/3) or bottom-like (charge −1/3) weak singlets. We consider vectorlike isosinglet quarks in both sectors, taking one at a time in addition to the three generations of SM chiral fermions. The extension of the SM through the inclusion of a weak isospin singlet fermion leads to mixing between the singlet fermion and the SM doublets, and hence to phenomenological consequences different from those predicted by the SM. The aim of our work is to distinguish between the top- and bottom-type singlets (t′ and b′) from their decays. In particular we wish to utilise the fact that decay into the Higgs is possible, and that the mass of the Higgs is now known. The dominant decays of t′ and b′ are expected to be

t′ → bW⁺, tZ, tH  and  b′ → tW⁻, bZ, bH.

With the discovery of a Higgs-like boson by the LHC experiments [9,10], the decay mode t′/b′ → Ht/b is a channel of interest. We make use of this fact and tag five b's, along with the requirement of two b pairs giving invariant mass peaks at m_H, for both isosinglets. We find that we can distinguish between the signals arising from t′ and b′.
The collider phenomenology of these vectorlike isosinglets has been considered extensively in the literature (for the most recent studies, see [11]-[22]). In earlier works, events are mainly selected with a final state composed of W or Z bosons and jets consistent with the decay of the heavy quarks. Once a signal is obtained, it becomes necessary to pinpoint the new physics scenario that leads to it. Since both t′ and b′ mimic the same signal through these channels, we address this question through the Higgs decay channel.
The outline of the paper is as follows. In section 2 we discuss the couplings of the t′ and b′ separately to the SM fields, through the effective Lagrangian in a model-independent way. In section 3 we discuss the signal and the background along with the methodology adopted for the analysis of the signal. In section 4 the results of our numerical analysis based on Monte Carlo simulations are presented. We summarise and conclude in section 5.
2 Phenomenology of t′ and b′

Strong processes can produce both t′- and b′-type quarks at the LHC with identical rates, through gluon fusion or quark-antiquark annihilation. Such pair production, whose rate is independent of the degree of singlet-doublet mixing, is the mode relevant for our study. Though single production is also possible, perhaps with less phase space suppression, it is (a) driven by electroweak couplings, and (b) suppressed by the singlet-doublet mixing angle(s).
The left- and right-handed components of the vectorlike quarks have the same quantum numbers under SU(3) × SU(2)_L × U(1)_Y, unlike the SM quarks, which are chiral. We consider, one at a time,

t′_{L,R}: a color-triplet SU(2)_L singlet with electric charge +2/3, which mixes with t;

b′_{L,R}: a color-triplet SU(2)_L singlet with electric charge −1/3, which mixes with b.
Mixing and Coupling of t ′ and b ′
With the addition of the isosinglet quarks to the SM content, we assume mixing to take place mainly with the third generation of quarks. We show below the general scheme of mixing in the down sector; the pattern is similar in the up sector. The weak eigenstates, denoted by (d_w, s_w, b_w, b′_w), are related to the mass eigenstates by [23,24]

(d_w, s_w, b_w, b′_w)^T = U_{4×4} (d, s, b, b′)^T ,   (2.1)

with the analogous relation in the up sector being Eq. (2.2). The unprimed fields denote the basis in which the mass matrix of the up-type quarks is diagonalized. The submatrix V_{3×4}, consisting of the first three rows of the U_{4×4} matrix, is the charged current matrix analogous to the SM CKM matrix and is not unitary; the addition of the fourth row restores the unitarity of U. The charged current interaction in the mass basis is now given by

L_CC = (e / (√2 sinθ_W)) ū_i γ^μ P_L V_{ij} d_j W⁺_μ + h.c. ,   (2.3)

where e is the electromagnetic coupling constant, θ_W is the weak mixing angle and V_{ij} is the relevant 3×4 submatrix of U. The indices i, j run over the quark generations (i = 1-3, j = 1-4), with u = (u, c, t) and d = (d, s, b, b′). As a consequence of mixing between fields with different weak isospin (T_3), flavor changing neutral current (FCNC) processes appear at tree level, something that is absent in the SM framework; we therefore get b′bZ and b′bH interactions. In addition, the SU(2) singlet field b′ can have a gauge invariant 'bare' mass term, contrary to d, s and b. As a result the mass and Yukawa coupling matrices cannot be simultaneously diagonalised, and the physical states can have flavor changing Yukawa interactions. The neutral current interaction in the mass basis is given by

L_NC = (e / (2 sinθ_W cosθ_W)) Z_μ [ ū_k γ^μ (P_L − (4/3) sin²θ_W) u_k − d̄_i γ^μ ((V†V)_{ij} P_L − (2/3) sin²θ_W δ_{ij}) d_j ] ,   (2.4)

where the index k runs from 1 to 3, whereas i and j run from 1 to 4. The electromagnetic current J^μ_em is diagonal in the mass basis,

J^μ_em = (2/3) ū_k γ^μ u_k − (1/3) d̄_i γ^μ d_i .   (2.5)

The FCNC coupling, as seen from Eq. (2.4), is controlled by V†V, which is a 4×4 matrix. Since the mixing matrix V_{3×4} is embedded within the unitary matrix U, from Eqs. (2.1), (2.2) we obtain the relation

(V†V)_{ij} = δ_{ij} − U*_{4i} U_{4j} .   (2.6)

Now let us come to the full explanation of the flavor changing Yukawa couplings already mentioned above. There are gauge invariant mass terms proportional to b̄′_L b′_R and b̄′_L f′_R in the Yukawa sector, where f′ = (d, s, b), due to the non-chiral nature of the vector quarks. As these terms do not arise from the Yukawa coupling, the Yukawa matrix cannot be simultaneously diagonalized with the mass matrix through a biunitary transformation, consequently giving rise to non-diagonal Yukawa couplings among the physical quarks. The relations go analogously for the top sector. To leading order in the mixing, the couplings of the gauge and Higgs bosons to the vector quarks and SM quarks are of the schematic form

g_{WFf} ≈ (g/√2) sinθ (left-handed),  g_{ZFf} ≈ (g/(2 cosθ_W)) sinθ (left-handed),  y_{HFf} ≈ (m_F/v) sinθ ,   (2.7)

where f and F are dominantly SU(2)_L doublet and singlet respectively, both in the up and down sectors, and generically stand for mass eigenstates; f here denotes third generation quarks only. The CKM matrix elements of the SM involving the light quarks are directly determined from experiment and are therefore tightly constrained. These experimental results also give unitarity limits on the other elements, which cannot be directly determined. The non-observation of FCNC decays in the top sector at the Tevatron [25] and the analysis of single top production at LEP [26] have set bounds on the CKM matrix elements involving the top quark at 95% CL. A detailed analysis of the allowed mass range and mixing angle θ in accordance with precision electroweak data, flavor physics and the oblique parameters is presented in [27].

They have presented the range of CKM matrix elements allowed for different quark masses in different scenarios. For our analysis we assume a simplified version of the matrix given in Eqs. (2.1) and (2.2), and describe all the interactions arising from the addition of an isosinglet fermion by the mixing matrix

U = | 1    0      0      0    |
    | 0    1      0      0    |
    | 0    0    cosθ   sinθ   |
    | 0    0   −sinθ   cosθ   |   (2.8)

We consider here the mixing of the vector quark with the third generation only, as the effect of mixing with the lighter generations is very small for massive vector quarks; the other elements of the mixing matrix are fixed to their SM values. Thus we are essentially considering the isosinglet quark in either sector mixing with the third family alone, the mixing angle being consistent with all existing constraints.

The current phenomenological constraints on vectorlike quarks come from direct production bounds at the various colliders and from flavor physics. There are various direct limits on their masses depending on the decay channel analysed. The CDF collaboration has excluded a heavy t′ with SM-like couplings at 95% CL up to 358 GeV [28] and a heavy b′ with SM-like couplings at 95% CL up to 372 GeV [29]. The search mode in the collider experiments is mainly the pair production of these exotic quarks, further assuming that they decay only through a particular channel. The bound obtained by CDF on b′ was set by looking for pair-produced heavy quarks with a 100% branching ratio to W + SM quark, an analysis that mainly constrains models predicting this final state. Recent LHC bounds from ATLAS [30]-[34] and CMS [35] data have set lower limits on the charge 2/3 and -1/3 exotic quark masses, through the investigation of either a particular decay channel or assumed branching ratios to the W, Z and H decay modes in the context of different models. The excluded mass of the vectorlike quarks depends mainly on the strength of their couplings.

All possible decay modes of the heavy quarks are considered in our analysis. Flavor constraints are also significant for vectorlike quarks. The mixing of the new quark with the SM quarks leads to non-unitarity of the 3×3 SM CKM matrix, and non-unitarity of this form is tightly constrained, as the unitarity triangle of the SM is measured with great precision. The presence of FCNC in this case contributes to processes such as b → sγ, where the quark b changes its flavor by emitting or absorbing a Z or Higgs boson, along with the emission of a photon. The FCNC also contributes to B meson mixing, such as B_d−B̄_d and B_s−B̄_s mixing [27,36]. Given the enormous activity in the flavor sector, it is expected that experimental data from this sector can be used to constrain the heavy quarks and their mixing with the SM ones. The constraints obtained in this case are largely model dependent [37,38] and we do not consider them further in our analysis. The benchmark points that we consider for our calculation are presented in Table 1.

Table 1: The various benchmark points used for our analysis, with θ the mixing angle of the vector quarks (masses in GeV).
m_t′ : 350, 500
m_b′ : 350, 500
θ : 5, 10, 15

We present in Fig. 1 the cross section for vectorlike quark pair production at the 14 TeV LHC. The main production channels are gluon-gluon fusion and qq̄ annihilation. The production cross section decreases with the mass, is independent of the mixing angle θ, and is the same for the two cases considered here. We next show in Fig. 2 the branching ratios of the various decay modes of the vector quarks in the two cases, plotted as a function of their mass.
We have kept the Higgs mass fixed at m_H = 125.5 GeV. The branching ratios are sensitive to the Higgs mass, and have a very weak dependence on θ; we have therefore shown our results for a fixed value of θ.
Signal and Backgrounds
We consider, as already mentioned, the pair production of both t′t̄′ and b′b̄′, taking one at a time, via quark-antiquark annihilation as well as gluon-gluon fusion.
Signals
With the recent discovery of the Higgs with a mass around 125−126 GeV, we get an added edge, given the fact that the Higgs dominantly decays to bb̄ in this mass range. With appropriate tagging, it should be possible to reconstruct the Higgs from the invariant mass of the b-jet pairs. Also, for the allowed range of the mixing matrix elements, the branching fractions of t′ or b′ to Higgs are substantial for moderate masses.
Signal for t ′
We will be mainly concentrating on the decay mode of t ′ to Higgs and a top quark.
In this case there can be three possible outcomes depending on the decay modes of the two W's: both W's decay leptonically, one decays leptonically and the other hadronically, or both decay hadronically. Thus the final states arising from t′t̄′ → H t H t̄ production, with H → bb̄ and t → bW, are

6b + 2ℓ + missing p_T , 6b + 1ℓ + 2 jets + missing p_T , 6b + 4 jets.

The same final states can also be obtained from other decay modes of t′; the processes which mimic the t′t̄′ decay channel considered for our analysis are those where one or both t′ decay to Zt, with Z → bb̄. The contribution from these decay channels is proportional to the branching ratio of Z → bb̄, which is about 15% and thus too small. It must be remarked that, in addition to the small branching ratio, their contribution to the signal is further filtered by the various cuts and by the Higgs invariant mass requirement explained later.
Signal for b ′
As for t′, in this case too we mainly concentrate on the decay mode of b′ to a Higgs and a bottom quark, with a final state consisting of 6 b's.
There are other modes for b ′ which can give rise to the final state of 6b's. Of course, the contributions from all these processes are proportional to the respective branching fractions of Z → bb.
The hadronic decay mode of the W's from t′ gives the same final state signature as b′, provided the jets emitted from the W's are light. For both t′ and b′, we look for the following final state signals, with 5 tagged b's among which two b-jet pairs reconstruct the Higgs in the mass range 123−128 GeV.
Signal 1: 5b + 2ℓ + missing p_T
Signal 2: 5b + 1ℓ + missing p_T
Signal 3: 5b
The package CalcHEP v2.5.6 [39] is used to calculate the cross section for the signal process and the respective branching ratios of t′ and b′.
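The double-Higgs requirement is combinatoric: among the tagged b jets, two disjoint pairs must both give an invariant mass in the 123−128 GeV window. A sketch of that selection over four-momenta, assuming b-jet four-vectors (px, py, pz, E) are already available from the reconstruction, follows.

```python
import math
from itertools import combinations

def inv_mass(p1, p2):
    """Invariant mass of the sum of two four-momenta given as (px, py, pz, E)."""
    px, py, pz, e = (a + b for a, b in zip(p1, p2))
    return math.sqrt(max(e * e - px * px - py * py - pz * pz, 0.0))

def two_higgs_candidates(bjets, lo=123.0, hi=128.0):
    """True if two disjoint b-jet pairs both reconstruct a mass in [lo, hi] GeV."""
    idx = range(len(bjets))
    for pair1 in combinations(idx, 2):
        remaining = [i for i in idx if i not in pair1]
        for pair2 in combinations(remaining, 2):
            if (lo < inv_mass(bjets[pair1[0]], bjets[pair1[1]]) < hi and
                    lo < inv_mass(bjets[pair2[0]], bjets[pair2[1]]) < hi):
                return True
    return False

# Toy event: five b jets, two back-to-back pairs each giving m ~ 125 GeV.
jets = [(62.5, 0, 0, 62.5), (-62.5, 0, 0, 62.5),
        (0, 62.5, 0, 62.5), (0, -62.5, 0, 62.5), (10, 20, 30, 37.4)]
print(two_higgs_candidates(jets))  # -> True
```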
Backgrounds
There are many SM processes which can fake the signals listed above. The dominant backgrounds arise from tt̄, tt̄bb̄ and tt̄H production in association with jets. We have computed the cross sections for all the background processes except pp → tt̄HH with ALPGEN [40], which takes into account all spin correlation and finite width effects. The cross section for tt̄HH production is computed with CalcHEP v2.5.6 [39] and is found to be about 0.0005 pb at 14 TeV, for a Higgs mass of 125.5 GeV. Since it is too small to be a threat to our signal, we do not consider this process further in our analysis. A similar argument applies to the background process of W⁺W⁻HH production in association with jets, whose cross section is of the order of 10⁻⁵ pb; we therefore also ignore this process in the further analysis of the background. The QCD factorisation and renormalisation scales (Q²) used in ALPGEN for the different background processes are presented in Table 2.
Event Selection Criteria
For the numerical evaluation of both the signal and the background rates, we have used the CTEQ6L parton distribution functions with m_t = 172 GeV, m_b = 4.8 GeV, m_H = 125.5 GeV and a centre-of-mass energy √s = 14 TeV. The signal events, along with the decay branching fractions, are generated with CalcHEP v2.5.6 [39]. The renormalisation and factorisation scale used for the calculation of the production cross sections is the default scale used in CalcHEP, i.e. the squared subprocess centre-of-mass energy (m_ij² = ŝ = (p_i + p_j)²). These signal events are passed to PYTHIA-6.4.24 [41] for showering and hadronization with the help of the CalcHEP-PYTHIA interface program [42]. In PYTHIA we take into account the initial and final state radiation due to QED and QCD, along with multiple interactions accounting for pile-up. The showering of the SM background events is done by passing the output of ALPGEN [40], in the form of unweighted events, to PYTHIA. ALPGEN performs the matching of the jets produced in the showering routine to the partons obtained from the matrix element calculation using the MLM matching procedure [43]. Jet formation is done with FastJet 3.0.2 [44] using the anti-k_t algorithm, with radius parameter R = 0.4. The event selection criteria, or cuts, applied are the same for both the signal and the background and are detailed below.
• Identification of Isolated Leptons (cut 1): 1) For the lepton trigger, electron candidates are required to have p_T^e > 25 GeV and |η| < 2.47; moreover, the electron is vetoed if it lies in the region 1.37 < |η| < 1.52 between the barrel and endcap electromagnetic calorimeters. Muons are required to satisfy p_T^μ > 25 GeV and |η| < 2.5. 2) Since we are interested only in leptons coming from the decay of on-shell W's, they are further required to be isolated. a) The total E_T of stable particles within a cone of radius ∆R = √((∆η)² + (∆φ)²) < 0.2 around the lepton should be less than 10 GeV. b) In order to keep the leptons and jets well separated, we further apply a lepton-jet separation cut, ∆R_lj ≥ 0.4, on the lepton for all jets formed with p_T > 20 GeV. The jets are formed with FastJet [44], with R = 0.4, using the anti-k_t jet algorithm; all particles other than the leptons, with p_T > 20 GeV and |η| < 2.5, form the input for FastJet. c) To exclude the contribution of same-flavor leptons that might come from the decay of a Z boson, the invariant mass M_ll of isolated lepton pairs is calculated, and any pair with mass within M_Z ± 10 GeV is discarded. The events chosen after this are listed as those passing cut 1. For the selection in the case of signals with one or two isolated leptons, all events with one or two isolated leptons survive after the application of cut 1.
• Missing $E_T$ ($\not{E}_T$) (cut 2): For the events with one or two isolated leptons, $\not{E}_T$ is calculated by computing the vector sum of the visible transverse momenta of all particles,

$$\not{E}_T = \sqrt{\left(p_x^{\rm vis} + p_x^{\rm unc}\right)^2 + \left(p_y^{\rm vis} + p_y^{\rm unc}\right)^2}. \qquad (3.1)$$

In Eq. (3.1), $p_{x,y}^{\rm unc}$ receives contributions from the unclustered components, which consist of the leptons and hadrons in each event that do not pass the primary trigger selection but have $p_T > 0.5$ GeV and $|\eta| < 5.0$. A cut of $\not{E}_T > 40$ GeV, referred to as cut 2, is applied to all events surviving cut 1.
• b tagging (cut 3): Jets with $E_T > 40$ GeV and $|\eta| < 2.5$ are selected as candidates for the identification of b jets. A jet is tagged as a b jet if it has a b parton within a cone of $\Delta R < 0.4$ around the jet axis, and a tagging efficiency of 60% is incorporated. Events with five or more b's tagged in this manner are selected and tabulated as events surviving cut 3. The cuts above are mainly motivated by the need to suppress the background and to discriminate the signals of the t′ and b′; a schematic implementation of cuts 1-3 is sketched below.
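The sketch below re-implements the three cuts under an assumed event format; it is an illustrative sketch, not the analysis code, and the data model (dictionaries with pid/pt/eta/phi/et keys) and helper names are invented. Only the thresholds come from the text; the Z-window veto is indicated in a comment.

```python
import math
import random

def delta_r(a, b):
    """Angular separation in the (eta, phi) plane."""
    dphi = abs(a["phi"] - b["phi"])
    if dphi > math.pi:
        dphi = 2.0 * math.pi - dphi
    return math.hypot(a["eta"] - b["eta"], dphi)

def isolated_leptons(leptons, stable_particles, jets):
    """Cut 1: trigger, crack veto, cone isolation, lepton-jet separation.
    The Z-window veto (discard same-flavour pairs with invariant mass within
    M_Z +/- 10 GeV) would then be applied to the surviving pairs."""
    out = []
    for lep in leptons:
        if lep["pt"] < 25.0:
            continue
        if abs(lep["pid"]) == 11:       # electron: |eta| < 2.47, crack veto
            if abs(lep["eta"]) > 2.47 or 1.37 < abs(lep["eta"]) < 1.52:
                continue
        elif abs(lep["eta"]) > 2.5:     # muon: |eta| < 2.5
            continue
        cone_et = sum(p["et"] for p in stable_particles
                      if p is not lep and delta_r(p, lep) < 0.2)
        if cone_et >= 10.0:             # isolation: E_T in cone < 10 GeV
            continue
        if any(delta_r(lep, j) < 0.4 for j in jets if j["pt"] > 20.0):
            continue                    # lepton-jet separation dR_lj >= 0.4
        out.append(lep)
    return out

def missing_et(visible, unclustered):
    """Cut 2: magnitude of the vector sum of all transverse momenta,
    the unclustered part covering pT > 0.5 GeV, |eta| < 5 objects."""
    px = sum(p["pt"] * math.cos(p["phi"]) for p in visible + unclustered)
    py = sum(p["pt"] * math.sin(p["phi"]) for p in visible + unclustered)
    return math.hypot(px, py)

def tagged_bjets(jets, b_partons, eff=0.60, rng=None):
    """Cut 3: jets with E_T > 40 GeV, |eta| < 2.5, matched to a b parton
    within dR < 0.4 and tagged with a flat 60% efficiency."""
    rng = rng or random.Random(0)
    return [j for j in jets
            if j["et"] > 40.0 and abs(j["eta"]) < 2.5
            and any(delta_r(j, b) < 0.4 for b in b_partons)
            and rng.random() < eff]
```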
Numerical Results
In section 3 we briefly discussed the final-state signals along with the various cuts applied in our analysis. In this section we present the actual number of events surviving after each cut, for a given integrated luminosity of 100 fb^-1. The different final states are briefly described below.
• First of all, we consider the number of events with the final state $5b + 2l + \not{E}_T$ (signal 1), i.e. 5 tagged b jets, with two b-jet pairs whose invariant masses peak at the Higgs mass (123−128 GeV), along with 2 isolated leptons; we call this number N1. The number of events surviving after each cut for this final state, at the considered integrated luminosity, is presented in Table 3 for both the signal and the background processes.
We can see from the table that, with our choice of cuts, we are able to discriminate between the signals of the t′ and b′ vector quarks. From this table we can make the following observations.
- At the production level, the number of events for both types of signal and for one of the backgrounds, tt, is of the same order of magnitude; the other two SM backgrounds, ttH and ttbb, are smaller but comparable. This remains the case even after the application of cuts 1 and 2. In the case of b′b′ this is because the dominant decay mode of the b′ is W−t, so that the final state consists of 4 W's and 2 b's, resulting in a large number of events satisfying cuts 1 and 2. Similarly, the leptons from the tt process survive cuts 1 and 2, as the tops produced are highly boosted. It is only after the application of cut 3, which requires at least 5 tagged b's each with a minimum transverse energy of 40 GeV, that the background gets washed away.
- After applying cut 3, the discrimination between the two kinds of signal, t′t′ and b′b′, starts to show up. The further demand of reconstructing two Higgs bosons from the tagged b's makes the distinction between the two kinds of signal events quite clear. We conclude from this table that, for the $5b + 2l + \not{E}_T$ final state after all cuts, the dominant contribution is expected from the top-like vector quark t′. The trend is the same for both masses.
• We next consider the number of events with at least 5 tagged b's and 1 isolated lepton, i.e. $5b + 1l + \not{E}_T$ (signal 2), in the final state, again with two b-jet pairs whose invariant masses peak at the Higgs mass (123−128 GeV); we call this number N2. We present the results for this final state in Table 4 for both t′ and b′ along with the background. The argument for the number of events surviving after each cut is similar to the previous case. The background events that survive even after the Higgs invariant-mass reconstruction are mainly due to combinatorics. We next show, in Figs. 3-6, the kinematic distributions for this final state: the p_T of the isolated lepton, the p_T of the hard and soft b jets, and the opening angle between the two reconstructed Higgs. The p_T of the isolated lepton mostly varies between 40−80 GeV, whereas the p_T of the hard b jet varies over a wide range of 80−150 GeV. The p_T of the soft b jet is found to be less than 80 GeV, with a peak around 50 GeV. It is seen from the figures that the distribution patterns of t′ and b′ are the same, differing only in statistics. In Figure 6, however, a mild difference is noticed. This is because the lepton final states do not arise at lowest order in pp → b′b′ → bbHH; the signal is then a consequence of initial/final-state radiation and of cases in which it actually arises from b′ → tW, which accidentally mimic two Higgses in an uncorrelated manner.
• Finally, we consider the events with at least 5 tagged b jets and zero leptons (signal 3), again with two b-jet pairs whose invariant masses peak at the Higgs mass (between 123 GeV and 128 GeV); we call this number N3. The results are presented in Table 5 for both t′ and b′. For the zero-lepton final state of Table 5, the cuts involving isolated leptons and missing energy, i.e. cuts 1 and 2, are omitted for obvious reasons. Since both t′ and b′ favour the hadronic decay mode, it can be seen from the table that the number of surviving events is similar even after cut 3. It is only after the Higgs mass reconstruction from the b jets that t′ and b′ show different behaviour. As for signal 2, we show in Figs. 7, 8, 9 and 10 the various distributions with all the kinematic cuts imposed. The p_T of the hard b jet peaks around 200 GeV for b′b′ pair production, whereas for t′t′ it peaks around 100 GeV, as can be seen from Fig. 7. This is mainly because the b's from b′b′ are produced directly in the decay of the b′, whereas for t′t′ the b is produced in the decay chain t′ → tH/tZ followed by t → Wb or H/Z → bb. The discrimination is reduced when one moves to the p_T distribution of the softer jets, Fig. 8, with t′t′ and b′b′ behaving similarly for the p_T distribution of the soft b jet, Fig. 9. The distribution of the opening angle between the two reconstructed Higgs is shown in Fig. 10; the pattern is the same for both t′ and b′. It can be seen from the figure that most of the reconstructed Higgs pairs have small opening angles, even though the Higgses reconstructed in the signal also arise in part from combinatorics. This is because these are real signals, as opposed to the events in Figure 6, and the boost of the parton centre-of-mass frame produces a small opening angle in a large number of cases. From the above analysis, we find a significant difference in the number of events surviving after all cuts for the two signals. We nevertheless also compute the ratios of the numbers of events surviving after all cuts for the different signals N_i (i = 1, 2, 3, as defined above) for both t′ and b′. The relevant ratios are N13 = N1/N3 and N23 = N2/N3. We consider these ratios because working with them cancels most of the systematic uncertainties. The results are presented in Table 6, and it can be seen that they differ significantly for t′ and b′ (a schematic sketch of the Higgs-pair reconstruction and of these ratios is given after the observations below).

Table 6: The ratios N13 and N23 for t′ and b′.

  Mass (GeV)   Isosinglet   N13      N23
  350          t′           0.012    0.215
  350          b′           0.0006   0.008
  500          t′           0.012    0.246
  500          b′           0.0009   0.032

We can make the following observations.
• The ratio N13 differs between t′ and b′ by more than an order of magnitude (a factor of about 20 at 350 GeV and about 13 at 500 GeV).
• The ratio N23 differs by a factor of about 25 at 350 GeV and by a factor of about 8 at 500 GeV. It turns out that N13 is a better distinguishing observable than N23, and it remains so as the t′, b′ mass increases, whereas N23 is sensitive to the t′, b′ mass and can serve as a distinguishing observable only for quark masses up to about 700 GeV.
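As referenced above, a minimal sketch of the combinatoric two-Higgs reconstruction and of the ratios is given below. It assumes b jets stored as (E, px, py, pz) four-vectors; the actual analysis may resolve the combinatoric ambiguities differently.

```python
from itertools import combinations

def inv_mass(j1, j2):
    """Invariant mass of two (E, px, py, pz) four-vectors."""
    e = j1[0] + j2[0]
    p2 = sum((j1[k] + j2[k]) ** 2 for k in (1, 2, 3))
    return max(e * e - p2, 0.0) ** 0.5

def two_higgs_candidates(bjets, lo=123.0, hi=128.0):
    """True if two disjoint b-jet pairs both fall in the Higgs mass window."""
    for pair1 in combinations(range(len(bjets)), 2):
        if not lo <= inv_mass(bjets[pair1[0]], bjets[pair1[1]]) <= hi:
            continue
        rest = [i for i in range(len(bjets)) if i not in pair1]
        for pair2 in combinations(rest, 2):
            if lo <= inv_mass(bjets[pair2[0]], bjets[pair2[1]]) <= hi:
                return True
    return False

def ratios(n1, n2, n3):
    """Discriminating ratios built from the counts after all cuts."""
    return n1 / n3, n2 / n3   # N13, N23
```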
Summary and Conclusions
In this work we have made an attempt to distinguish, at a luminosity of 100 fb^-1 and a centre-of-mass energy of 14 TeV at the LHC, between the top-like and bottom-like isosinglet quarks which are predicted in several extensions of the SM. On account of being vectorlike, they mix with the third-generation chiral quarks, which leads to flavour-changing Yukawa interactions along with FCNCs. These quarks have the decay modes t′ → Zt, Ht, W+b and b′ → Zb, Hb, W−t.
We have in this work tried to address the question of distinguishing the signatures of these isosinglet vectorlike quarks once they are discovered.
Choosing in particular the Higgs decay channel out of these possibilities for both t′ and b′, we have tried to make a distinction between the two cases. The Higgs decays further to a pair of b quarks, and we demand that two Higgs bosons be reconstructed in the mass range 123−128 GeV from the tagged b's. The recent discovery of a Higgs-like resonance at 125.5 GeV at the LHC strengthens our analysis. We choose three final states with 2, 1 and 0 leptons along with five tagged b's, which is attainable at the LHC since it can efficiently detect leptons and also tag b's. We find that, with a suitable choice of cuts, the SM background is very small for both signals. Our study reveals that, empowered by our recent information on the Higgs, we can clearly differentiate between t′ and b′ from the ratios of events with various lepton multiplicities in the final state along with two reconstructed Higgs. | 2014-04-13T12:27:39.000Z | 2014-04-13T00:00:00.000 | {
"year": 2014,
"sha1": "5792a030dd5a9b9538cd909e56fe16993af6fcc0",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1404.3374",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "5792a030dd5a9b9538cd909e56fe16993af6fcc0",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
260848186 | pes2o/s2orc | v3-fos-license | Current and emerging therapies for the treatment of age-related macular degeneration
Age-related macular degeneration (AMD) is the leading cause of vision loss in the industrialized world. In the last few decades, the mainstay of treatment for choroidal neovascularization (CNV) due to AMD has been thermal laser photocoagulation. In the last decade, photodynamic therapy with verteporfin extended treatment to more patients. While both of these treatments have prevented further vision loss in a subset of patients, improvement in visual acuity is rare. Anti-vascular endothelial growth factor A (VEGF) therapy has revolutionized the treatment of AMD-related CNV. Pegaptanib, an anti-VEGF aptamer, prevents vision loss in CNV, although its performance is similar to that of photodynamic therapy. Ranibizumab, an antibody fragment, and bevacizumab, a full-length humanized monoclonal antibody against VEGF, have both shown promising results, with improvements in visual acuity with either agent. VEGF trap, a modified soluble VEGF receptor analogue, binds VEGF more tightly than all other anti-VEGF agents and has also shown promising results in early trials. Other treatment strategies to decrease the effect of VEGF have used small interfering ribonucleic acid (RNA) to inhibit VEGF production and VEGF receptor production. Steroids, including anecortave acetate, have shown promise in controlled trials for the treatment and prevention of CNV. Receptor tyrosine kinase inhibitors, such as vatalanib, inhibit downstream effects of VEGF and have been effective in the treatment of CNV in early studies. Squalamine lactate inhibits plasma membrane ion channels, with downstream effects on VEGF, and has shown promising results with systemic administration. Among other growth factors, pigment epithelium-derived growth factor administered via an adenoviral vector has shown promising initial results, and ciliary neurotrophic factor is currently being studied in some patients for inhibiting the progression of geographic atrophy. Combination therapy has been investigated and may prove to be more effective in the management of AMD-associated CNV. Ongoing and future studies will be crucial for optimizing the treatment of patients with AMD.
Introduction
In 2002, the World Health Organization estimated there were 161.2 million visually impaired people in the world, 14 million (8.7%) of whom had age-related macular degeneration (AMD) (World Health Organization 2007). In industrialized countries, AMD is the leading cause of adult vision loss. Choroidal neovascularization (CNV) is responsible for approximately 90% of the cases of severe vision loss due to AMD, and geographic atrophy (GA) is responsible for the remaining 10% (Eye Diseases Prevalence Research Group 2004; Klein et al 2006). In 2004, the estimated prevalence of AMD-related CNV in industrialized countries was 1.02% for people over the age of 40 years, and 8.18% for people over the age of 80 years (Eye Diseases Prevalence Research Group 2004). The magnitude of this problem has driven enormous research efforts regarding the prevention and treatment of AMD-related CNV and GA. Until recently, laser photothermal and photodynamic therapy have been the only treatments that have demonstrated benefit in large controlled clinical trials for the management of AMD-related CNV. For some patients with dry AMD, the use of oral micronutrient supplementation has reduced the rate of disease progression. Better understanding of the pathophysiologic mechanisms of AMD-related CNV and GA has allowed for the recent emergence of pharmacotherapy as a more targeted treatment approach toward both conditions. This article reviews the current level of understanding regarding several of these new treatments for AMD-related CNV and GA (Table 1).
Dry age-related macular degeneration and geographic atrophy
Dry AMD includes the spectrum of findings from minimal drusen through geographic atrophy (GA), so long as no evidence of neovascularization is present. For patients with mild disease, such as extensive small drusen (≤63 μm in diameter) or nonextensive intermediate-size drusen (>63 μm, ≤125 μm), the risk of losing vision or developing CNV or GA over 5 years is relatively low, at 1.3% (Age-Related Eye Disease Study Research Group 2001). At the other end of the dry spectrum, GA occurs when areas of the retinal pigment epithelium (RPE) gradually disappear, resulting in growing and coalescing areas of total RPE atrophy. Absence of the RPE leads to fallout of the underlying choriocapillaris and overlying photoreceptors, and affected areas correspond to scotomata (Spraul et al 1996).
Antioxidant therapy
The only therapy with proven benefit in patients with dry AMD is a combination of oral antioxidant supplements, containing vitamin C (500 mg) and vitamin E (400 IU), beta-carotene (15 mg, often labelled as equivalent to 25,000 IU of vitamin A), and zinc as zinc oxide (80 mg) and copper as cupric oxide (2 mg) (Age-Related Eye Disease Study Research Group 2001). This study found that patients with the intermediate (extensive intermediate-size drusen or at least 1 large druse (≥125 μm)) or advanced (CNV or central GA in 1 eye) stages of AMD experience a statistically significant reduction in the 5-year rate of moderate vision loss and progression to CNV or central GA. Therefore, the AREDS formula is recommended for those patients with the intermediate stage of AMD or worse, unless there are contraindications to the use of these oral micronutrient supplements. Copper was added to the AREDS formulations containing zinc to prevent copper deficiency anaemia, a condition associated with high levels of zinc intake.
Because the effects of high-dose beta-carotene are harmful in some patient populations, including smokers (Alpha-Tocopherol, Beta Carotene Cancer Prevention Study Group 1994), and the AREDS formula did not include other potentially beneficial micronutrients, the AREDS Research Group is currently enrolling a second trial (AREDS2) to investigate alternatives to the original AREDS formula. AREDS2 will address the role of omega-3 fatty acids and lutein, as well as decreased dosages of beta-carotene, in patients with intermediate or advanced AMD.
Although oral antioxidant supplementation helps prevent vision loss, local delivery of antioxidants is an area of current interest, because a small but significant increase in the rate of hospitalization for urinary tract infections, among other conditions, has been reported with oral supplementation (Age-Related Eye Disease Study Research Group 2001). The National Eye Institute is sponsoring a phase II trial of the antioxidant OT-551 (Othera Pharmaceuticals, Inc., Exton, PA), a low-molecular-weight compound metabolized to TP-H, which is a potent free-radical scavenger (National Institute of Health 2007). The study drug is administered topically as a drop three times daily in patients with bilateral GA, the progression of which will be assessed over a 2-year period.
Anecortave acetate
Anecortave acetate (Retaane®, Alcon Research, Ltd., Fort Worth, TX, USA) is an angiostatic steroid that does not exhibit glucocorticoid receptor-mediated activity. Its use in ocular disease is attractive because of its dosing and delivery: 15 mg delivered as a posterior juxtascleral depot every 6 months. Because of its favourable dosing schedule and delivery and low risk profile, anecortave acetate is being studied in the Anecortave Acetate Risk Reduction Trial (AART) for the prevention of CNV in patients with eyes at high risk of converting from non-neovascular to neovascular AMD. This study targets patients with bilateral large-size drusen and pigment changes, who have a particularly high 5-year risk of progression to advanced AMD (47%) based on the AMD simplified severity scale (Ferris et al 2005).
Age-related macular degeneration-related choroidal neovascularization
In choroidal neovascularization, anomalous choroidal vessels grow under or through the RPE (Green et al 1993). Although specific stimuli for CNV growth remain unknown, it generally occurs in the presence of soft drusen, breaks in Bruch membrane, and a pro-inflammatory and pro-angiogenic milieu, characterized by elevated levels of vascular endothelial growth factor A (VEGF) or platelet-derived growth factor (PDGF) (Ambati et al 2003).
Choroidal neovascularization has various patterns and configurations of proliferation that have been described based on its appearance with fluorescein angiography (FA). This highly useful diagnostic test allows one to determine the pattern, boundaries, composition, and location of the neovascular lesions with respect to the centre of the fovea. These patterns of fluorescence have been shown to be reliable and reproducible in multi-centre clinical trials and in practice (Macular Photocoagulation Study Group 1982; Macular Photocoagulation Study Group 1986; Macular Photocoagulation Study Group 1991; Macular Photocoagulation Study Group 1993; Macular Photocoagulation Study Group 1994; Treatment of Age-related Macular Degeneration with Photodynamic Therapy (TAP) Study Group 2001; Treatment of Age-related Macular Degeneration with Photodynamic Therapy (TAP) Study Group 2003). Classic CNV refers to a discrete, well-demarcated focal area of hyperfluorescence seen during the early images of the FA that increases in intensity as the FA images progress into the later phases. The hyperfluorescence not only increases in intensity but also extends beyond the boundary of the initial lesion seen in the early FA images. Occult CNV refers to angiographic patterns lacking the features of classic CNV and is characterized by stippled or speckled hyperfluorescence that is frequently seen in the mid to late FA images. Occult CNV has been divided into two types: fibrovascular pigment epithelial detachment (FVPED) and late leakage of an undetermined source (LLUS). In FVPED, the lesion has ophthalmoscopically or photographically appreciable thickness or elevation when viewed stereoscopically; the stippled hyperfluorescence may become well defined in the later FA images. In LLUS, the lesion is not elevated when viewed stereoscopically; the choroidal-based stippled or speckled hyperfluorescence appears in the mid to late FA images and has no classic or FVPED angiographic qualities. CNV is also described by location. In subfoveal CNV, a component of the lesion resides underneath the geometric centre of the fovea. In extrafoveal CNV, the edge of the lesion is no closer than 200 micrometers to the foveal centre. Lesions whose edges reside within 1-199 micrometers of the foveal centre are juxtafoveal (Macular Photocoagulation Study Group 1991). When at least 50% of a choroidal neovascular lesion's composition is of a particular pattern, the qualifier predominantly is applied, as in predominantly classic, predominantly occult, or predominantly hemorrhagic. When less than 50% of a lesion's composition is of a particular pattern, the term minimally is applied, as in minimally classic (Treatment of Age-related Macular Degeneration with Photodynamic Therapy (TAP) Study Group 2003).
Natural history data have indicated that 62% of eyes with predominantly classic subfoveal CNV lose 3 or more lines of visual acuity at 2 years with 30%-48% losing 6 or more lines (Macular Photocoagulation Study Group 1993). The prognosis for eyes with CNV that does not involve the centre of the fovea is slightly worse, with 49%-62% losing 6 or more lines at 3 years, likely due to better visual acuity at baseline (Macular Photocoagulation Study Group 1986; Macular Photocoagulation Study Group 1994). Visual acuity outcomes are worse for eyes with larger lesions but are slightly better for eyes with occult angiographic patterns (Treatment of Age-related Macular Degeneration with Photodynamic Therapy (TAP) Study Group 2003). Since poor visual outcomes occur without treatment, the prompt administration of safe and effective therapy is paramount in the management of CNV due to AMD.
Thermal laser photocoagulation
The Macular Photocoagulation Study (MPS) compared focal thermal laser photocoagulation of choroidal neovascularization to observation for CNV in AMD patients, and consisted of multiple randomized clinical trials. Within 1 year of treatment, 25% of eyes with extrafoveal CNV due to AMD had lost 6 or more lines of vision with laser, compared to 60% of eyes in the observation group (Macular Photocoagulation Study Group 1982), a difference that persisted through 3 years (Macular Photocoagulation Study Group 1986). Two years after treatment, 21% of eyes with subfoveal CNV had lost 6 or more lines of vision compared to 38% of eyes in the observation group (Macular Photocoagulation Study Group 1991). Unfortunately, eyes lasered for subfoveal CNV initially experience a marked drop in vision due to damage to the macular centre from photocoagulation. This undesirable effect, and the advent of newer treatment options, make thermal laser photocoagulation a seldom-used treatment for patients with subfoveal CNV. Today, MPS-style thermal laser photocoagulation for extrafoveal CNV is still considered an effective method of treatment and is used in selected cases.
Photodynamic therapy with verteporfin
Photodynamic therapy (PDT) involves an intravenous infusion of a photosensitizing agent that selectively binds to the increased number of lipoprotein receptors on the endothelium of abnormal vessels, including CNV. The photosensitizing agent is then activated with laser light, usually in the far-red spectrum where light transmission through tissue and blood is higher. Activation of the photosensitizer, which is usually structurally related to porphyrin, results in free-radical formation, endothelial damage, and clotting cascade activation with thrombosis of the affected, abnormal vasculature. Verteporfin (Visudyne®, Novartis AG, Basel, Switzerland) was the first photosensitizer approved for use in exudative AMD. It is administered at a dose of 6 mg/m² and is activated by 689-691 nanometer laser light. The Treatment of Age-related Macular Degeneration with Photodynamic Therapy (TAP) study demonstrated a particular treatment benefit for patients with predominantly classic subfoveal CNV. At 2 years, PDT with verteporfin prevented loss of 3 or more lines of vision in 67% of these patients, compared to 39% of controls (Treatment of Age-related Macular Degeneration with Photodynamic Therapy (TAP) Study Group 2001). However, 33% of patients still lost 3 or more lines of vision over 2 years. In addition, it has been observed that PDT with verteporfin initiates an inflammatory response and upregulates VEGF and other growth factors that contribute to recurrence of CNV (Schmidt-Erfurth et al 2003). As such, PDT with verteporfin is not currently preferred as monotherapy for the management of AMD-related CNV.
VEGF-binding agents
The vascular endothelial growth factor family consists of seven members (vascular endothelial growth factors A through F, and placental growth factor), which are secreted polypeptides that share common structural domains but have different biological and physical properties (Ferrara 2004). VEGF plays a role in normal and pathologic angiogenesis and has at least six known isoforms, which are formed through alternative splicing and are named based on their number of amino acids: 121, 145, 165, 183, 189, and 206 (Ferrara 2004). VEGF165 is a potently pro-angiogenic molecule that has been shown to effect retinal vascular proliferation and increased vascular permeability in the eye (Adamis et al 2005). Neovascular membranes in patients with AMD contain both the VEGF165 and VEGF121 isoforms (Rakic et al 2003). The 165 isoform can be cleaved by plasmin to form the 110-amino-acid fragment (VEGF110), which retains biologic activity (Keyt et al 1996). Animal models have also suggested that VEGF165 plays a role in inflammation, induction of leukocyte recruitment, and neuroprotection against ischemic injury (Nishijima et al 2007). This suggests that VEGF blockade may have both deleterious and salubrious effects in eyes with vascular disease. VEGF modulation has therefore been a target of many CNV treatments (Figure 1).
Pegaptanib
The first available anti-VEGF treatment for use in the eye was pegaptanib (Macugen®, Eyetech Pharmaceuticals, Inc., New York, NY, USA), an aptamer that targets VEGF165. In the VEGF Inhibition Study in Ocular Neovascularization Clinical Trial (VISION), pegaptanib was shown to be safe and effective in preventing vision loss in patients with AMD-related CNV for over 2 years when compared to controls (Gragoudas et al 2004). However, visual decline occurred over time in a pattern similar to that seen with PDT with verteporfin. This persistent loss of vision despite treatment with pegaptanib may be due to its selective binding to VEGF165 only, leaving all other isoforms uninhibited and available to stimulate neovascularization. Furthermore, its dosing schedule of intravitreal injections every 6 weeks, compared with the less invasive quarterly dosing of PDT with verteporfin, gave pegaptanib no competitive advantage in the treatment of neovascular AMD.
VEGF antibodies and antibody fragments
Both bevacizumab (Avastin®, Genentech, Inc., South San Francisco, CA, USA), a full-length humanized antibody against VEGF, and ranibizumab (Lucentis®, Genentech, Inc.), a humanized antigen-binding fragment against VEGF, are able to bind and inhibit all isoforms of VEGF. Although ranibizumab has one binding site for VEGF compared to bevacizumab's two, ranibizumab has been affinity matured, exhibiting 3- to 6-fold higher affinity for VEGF than bevacizumab (Presta et al 1997; Chen et al 1999).
Ranibizumab is the first therapy for neovascular AMD to result in a significant improvement in visual acuity. Two phase III studies of monthly intravitreal injections of ranibizumab, MARINA and ANCHOR, have been completed (Rosenfeld et al 2006a). MARINA evaluated 716 patients with minimally classic or occult neovascular AMD. At 2 years, MARINA demonstrated a mean gain of 5.4 letters (0.3 mg group) to 6.6 letters (0.5 mg group) for patients treated with monthly ranibizumab injections, compared to a mean loss of 14.9 letters for patients undergoing monthly sham injections (Rosenfeld et al 2006a). ANCHOR evaluated 423 patients with predominantly classic neovascular AMD. At 1 year, ANCHOR demonstrated a mean gain of 8.5 letters (0.3 mg group) to 11.3 letters (0.5 mg group) for patients treated with monthly ranibizumab injections, compared to a mean loss of 9.5 letters for patients treated with verteporfin every 3 months. The 0.5 mg dose prevented loss of 3 or more lines of vision in 90% (MARINA) to 96.4% (ANCHOR) of patients, compared to sham (52.9%) or verteporfin (64.3%). The 0.5 mg dose resulted in 3 or more lines of vision gain in 33.8% (MARINA) to 40.3% (ANCHOR), compared to sham (5.0%) or verteporfin (5.6%). All comparisons were statistically significant (p < 0.001). Serious adverse events for patients treated with ranibizumab included endophthalmitis (1% (MARINA) to 1.4% (ANCHOR)) and uveitis (0.7% (ANCHOR) to 1.3% (MARINA)). Arterial thromboembolic events occurred in 3.8% of the sham group compared to 4.6% in each of the ranibizumab groups (MARINA), and in 2.1% of the verteporfin group compared to 2.2% (0.3 mg group) and 4.3% (0.5 mg group) of the ranibizumab groups (ANCHOR). Although some might consider these data to demonstrate a high level of safety, neither study was powered to evaluate safety concerns. In fact, at a 6-month interim evaluation, the SAILOR study, a phase IV evaluation of the safety and efficacy of 0.3 mg and 0.5 mg doses of as-needed ranibizumab, reported a statistically significant increase in the rate of stroke (0.3% compared to 1.2%) with the higher (commercially available) dose, whereas for arterial thromboembolic events of myocardial infarction or vascular death the differences between the doses were not statistically significant [personal communication, Genentech, Inc., January 24, 2007]. The burden of monthly injections of ranibizumab, coupled with the appreciable risk of adverse events, prompted the PIER and PrONTO studies (Rosenfeld 2006b; Schmidt-Erfurth 2007). Each study used less frequent dosing than ANCHOR and MARINA, with PIER using a fixed dosing schedule and PrONTO an "as needed" schedule. PIER was a phase IIIb randomized, double-masked, sham-controlled study of 184 subjects who received monthly injections of ranibizumab or sham for 3 months, followed by quarterly injections. At 12 months, the sham group had lost 16 letters, whereas the ranibizumab group initially gained one line but had lost 0.2 letters by 12 months (Schmidt-Erfurth 2007). Although the study did not include a monthly ranibizumab arm, comparison to the ANCHOR and MARINA data would suggest inferiority of the PIER dosing schedule.
Figure 1. Mechanisms of inhibition of vascular endothelial growth factor-A (VEGF). Pegaptanib, ranibizumab, bevacizumab, and VEGF trap bind and sequester VEGF, preventing it from binding and activating the VEGF receptor. Inhibitors of VEGF receptor tyrosine kinases (such as vatalanib) prevent transduction of the VEGF binding signal. Small interfering RNA molecules prevent translation of VEGF (bevasiranib) or VEGF receptor-1 (Sirna-027). Squalamine interferes with the function of various ion transport channels, the activity of which is required for angiogenesis. Double dotted lines represent cellular plasma membranes and the single dotted line represents the nuclear membrane.

In the PrONTO study, patients received a mean of 5.6 injections over 12 months; 35% gained 3 or more lines, and 95% lost fewer than 3 lines of vision (Rosenfeld 2006b). Although the criteria for re-treatment have not been systematically evaluated, and some clinicians would support treating based on factors other than those evaluated in PrONTO, decreasing the number of required injections would substantially reduce the burden of AMD treatment on patients.
Bevacizumab is a full-length antibody with similar properties to ranibizumab, and is approved for systemic use in some solid tumours; it is therefore available for off-label use in the treatment of AMD-related CNV (Hurwitz 2004). An open-label, uncontrolled clinical study of intravenous infusion of bevacizumab (5 mg/kg) at baseline, with 1-2 repeated doses at 2-week intervals, in 18 patients reported a statistically significant median gain of 14 letters and a decrease in OCT thickness from 379 to 255 μm (Moshfeghi et al 2006). The only significant ocular or systemic adverse event was hypertension, which occurred or worsened in 10 patients and was readily treated with oral medications.
Concerns about thromboembolic events with the use of intravenous bevacizumab in the treatment of cancer prompted investigators to use bevacizumab intravitreally. Multiple retrospective studies have reported on the use of 1.25 mg doses of intravitreal bevacizumab, citing three or more lines of vision improvement in 38.3% (Spaide et al 2006) to 44% of treated patients, and median gains of 15 to 20 letters (Avery et al 2006), at 8-12 weeks. One prospective study of monthly injections of 2.50 mg bevacizumab reported a median gain of 6 lines of vision and normalization of central retinal thickness from 350 to 211 μm using optical coherence tomography (OCT) (Bashshur et al 2006). Although the OCT data in these reports are compelling, the visual acuity data did not employ Early Treatment Diabetic Retinopathy Study (ETDRS) protocol visual acuities and as such may not accurately reflect visual acuity improvement in these patient series.
In each of the reports on intravitreal bevacizumab, negligible adverse events were noted. This was an important finding, since numerous concerns had been raised about intravitreal injections of a humanized antibody, such as the incitement of intraocular inflammation or the promotion of systemic thromboembolic side effects. Acknowledging the bias inherent in self-reporting, an internet survey of adverse events following 3810 intravitreal bevacizumab injections in 3034 patients reported low rates of treatment-related and drug-related adverse events, citing rates of 0.03% for endophthalmitis and 0.1% for all forms of cerebrovascular events (Fung et al 2006). Concerns regarding the durability of compounded bevacizumab have been put to rest by in vitro studies of VEGF binding following refrigeration or freezing for up to 6 months (Bakri et al 2006).
At this time, efficacy data are stronger for ranibizumab than for bevacizumab, and each has demonstrated reasonable safety, with the exception of the concern for increased rates of stroke demonstrated by SAILOR. In an effort to clarify these issues, the National Eye Institute has agreed to sponsor a multi-centre trial comparing ranibizumab to bevacizumab with fixed monthly versus variable dosing schedules. The Comparison of Age-related Macular Degeneration Treatment Trial (CATT) is scheduled to begin enrolling patients in the fall of 2007. Until this question can be answered by the CATT, we will be forced to treat patients with medication availability and financial feasibility issues in mind.
VEGF trap
The VEGF trap (Regeneron Pharmaceuticals, Inc., Tarrytown, NY, USA) is a fusion protein of portions of VEGF receptors 1 and 2 and the Fc region of human IgG, which binds all VEGF isoforms and does so more tightly than other available VEGF-binding agents. The Clinical Evaluation of Antiangiogenesis in the Retina (CLEAR) study was a randomized, double-masked, ascending-dose, placebo-controlled phase I trial of 18 patients with neovascular AMD who received either placebo or 1 of 3 systemic doses of intravenous VEGF trap (0.3, 1.0, and 3.0 mg/kg). The study found a dose-dependent increase in systemic blood pressure, which was clinically significant above the 1.0 mg/kg dose, and further studies of systemic VEGF trap were halted. CLEAR IT-1 was a phase I dose-escalation study of a single intravitreal injection of multiple doses of VEGF trap (0.05, 0.15, 0.5, 1, 2, and 4 mg). At 6 weeks, the mean gain in visual acuity was 4.8 letters, and mean OCT central retinal thickness decreased from 298 to 208 μm. The potential benefit of VEGF trap is a sustained effect compared to single injections of other VEGF-binding agents, although this has not been demonstrated in a head-to-head trial. A phase II VEGF trap study (CLEAR-AMD) is currently in the process of enrolling patients.
Small interfering RNA
The 2006 Nobel Prize in Physiology or Medicine was awarded to Andrew Fire and Craig Mello for their work in RNA interference, the process by which small interfering RNA (siRNA) molecules inactivate messenger RNA, thereby suppressing RNA translation. For clinical use, these drugs are administered as double-stranded RNA molecules that are imported across the cellular membrane and processed by an enzyme, Dicer, which shortens the siRNA to 21-24 nucleotides. The processed siRNA is incorporated into a RNA-induced silencing complex (RISC), which, when activated, binds complementary mRNA and digests it. This allows a single molecule of siRNA to degrade multiple copies of mRNA.
The first drug to employ this mechanism in the treatment of neovascular AMD is bevasiranib (formerly Cand5, Acuity Pharmaceuticals, Philadelphia, PA), which is targeted against VEGF mRNA. The Cand5 Anti-VEGF RNA Evaluation (CARE) study was a phase II study of 3 doses of bevasiranib (0.2, 1.5, and 3.0 mg) injected intravitreally 6 weeks apart (n = 127 eyes). At 12 weeks after the initial injection, mean loss of vision was 4.1 letters (0.2 mg dose), 6.9 letters (1.5 mg dose), or 5.8 letters (3.0 mg dose), with 71.8%-79.4% losing less than 3 lines. Adverse events included stroke (0.8%), arrhythmia (0.8%), and hypertension (5.5%) (Tolentino 2006). Compared to the improvement in vision seen with other therapies, including ranibizumab, bevacizumab, and VEGF trap, the vision loss in these patients does not bode well for the use of bevasiranib as monotherapy. Because siRNA targets a relatively upstream component of the VEGF pathway, its effect on the disease process is currently considered likely to be delayed. The recent FDA approval of ranibizumab may prompt a design change in the phase III study of bevasiranib to include injection of a VEGF-binding agent at baseline. The combination of VEGF binding by ranibizumab or bevacizumab with the mRNA interference of bevasiranib has the theoretical advantage of preventing further vision deterioration by immediately binding existing VEGF while interfering with the upstream production of VEGF.
A second siRNA drug, Sirna-027 (Sirna Therapeutics, San Francisco, CA), directed against VEGF receptor 1 (VEGFR1), has demonstrated efficacy in animal models (Shen et al 2006) and reasonable safety in a phase I trial, and is currently undergoing phase II evaluation.
VEGF receptor tyrosine kinase inhibition
A further method of inhibiting the effect of increased VEGF within the eye with CNV is to inhibit the tyrosine kinase activity of VEGF receptors. Vatalanib® (formerly PTK-787, Novartis International AG, Basel, Switzerland) is a potent inhibitor of all known VEGF receptor tyrosine kinases: VEGFR1 (sFlt-1), VEGFR2 (KDR), and VEGFR3 (Flt-4). The use of vatalanib is an attractive alternative to intravitreally or intravenously injected medications because of its satisfactory oral bioavailability. Preclinical data have demonstrated inhibition of experimental retinal and choroidal neovascularisation (Maier et al 2005). Clinical studies in healthy individuals have shown no serious adverse events, and adverse events from the use of vatalanib in patients with solid and hematologic malignancies have included nausea/vomiting, fatigue, dizziness, diarrhoea, and hypertension (Joondeph et al 2006). The Study of Vatalanib and Photodynamic Therapy with Verteporfin in Patients with Subfoveal Choroidal Neovascularization (CNV) Secondary to Age-related Macular Degeneration (ADVANCE) is currently enrolling for a phase I/II comparison of PDT to PDT plus vatalanib.
PEDF
Pigment epithelium-derived growth factor (PEDF) is a factor with neurotrophic, neuroprotective, and antiangiogenic properties (Steele et al 1993; Mori et al 2002). The potential benefit of PEDF in preventing damage due to neovascular AMD lies in its multifaceted protection of the retina and retinal pigment epithelium and its inhibition of angiogenesis. cDNA encoding human PEDF has been shown to inhibit ocular neovascularization when introduced into the vitreous and subretinal space of animal models via an adenoviral vector (AdPEDF.11, GenVec, Inc., Gaithersburg, MD, USA) (Imai et al 2005). A phase I study of AdPEDF.11 in 28 patients demonstrated no dose-limiting toxicity and suggested that its anti-angiogenic effect lasts for months (Campochiaro et al 2006). Furthermore, the therapy does not seem to be associated with a systemic immune response, thereby theoretically allowing for repeat injections (Hamilton et al 2006). The sustained effect and repeatability of this therapy make it an attractive approach, given the relatively short duration of action of anti-VEGF agents.
Squalamine lactate
Squalamine lactate (Evizon®, Genaera Corporation, Plymouth Meeting, PA) is an anti-angiogenic aminosterol derived from the cartilage of the dogfish shark, Squalus acanthias. Its mechanism of action includes blockade of cell membrane ion transporters that regulate cell function by controlling pH and metabolism. When bound to calmodulin, squalamine also blocks the action of VEGF and integrin expression, thereby inhibiting angiogenesis. Squalamine is ineffective when administered intravitreally and therefore requires intravenous dosing. However, systemic dosing has yielded promising results in rats (Ciulla et al 2003) as well as in humans (Kaiser 2007). In a phase I/II clinical trial of 40 patients who received 25 or 50 mg/m² weekly for 4 weeks, no patients lost vision and 26% gained 3 or more lines. Genaera has since abandoned this product, and it is no longer in clinical development.
Anecortave acetate
Anecortave acetate has also been used in patients with AMD-related CNV. In these patients, anecortave acetate produced visual acuity results similar to those of PDT with verteporfin in the C-01-99 study (Slakter et al 2006). With improved therapies and outcomes, anecortave acetate is not appropriate monotherapy for active CNV. However, the NEI is sponsoring the BRIDGE study, which will evaluate combination therapy with anecortave acetate and ranibizumab.
Vitreoretinal surgery
The suggestion that surgical removal of CNV before the development of subretinal fibrosis would allow for reapposition of healthy RPE and photoreceptors, thereby improving visual acuity, is an attractive one. Multiple techniques and modifications have been tried to improve visual acuity outcomes in patients with CNV, including submacular surgical removal of CNV (Hawkins et al 2004), pneumatic displacement of large subretinal haemorrhage (Hassan et al 1998), and macular translocation (de Juan et al 1998; Mruthyunjaya et al 2004). To date, none of these therapies has proven effective and, with rare exception, none is an appropriate primary treatment strategy for AMD-associated CNV.
Combination therapy
As with many therapies borrowed from oncology for use in the treatment of neovascular AMD, the concept of "induction and maintenance" is employed in several combined management approaches in an effort to improve therapeutic efficacy and reduce treatment frequency. As indicated previously, PDT has been implicated in the undesirable effect of increasing inflammation and VEGF levels (Schmidt-Erfurth et al 2003). By combining treatment modalities, an opportunity exists to refine the blockade of angiogenesis and vascular permeability. For example, the combination of PDT, an intravitreal anti-VEGF agent, and a steroid could in theory allow for blockade of CNV at multiple levels. The FOCUS study is a phase I/II trial comparing as-needed dosing of verteporfin followed by monthly dosing of either ranibizumab or sham injection. At 2 years, the ranibizumab/PDT group gained a mean of 4.6 letters, whereas the sham/PDT group lost a mean of 7.8 letters. The number of treatments through 2 years averaged 1.4 in the ranibizumab/PDT group compared to 4.0 in the sham/PDT group (Heier, Antoszyk, et al 2006; Heier, Boyer, et al 2006).
In a prospective, non-comparative, interventional case series, 104 patients underwent triple therapy with intravitreal dexamethasone (800 μg) and bevacizumab (1.5 mg) administered within a mean of 16 hours after PDT (Agustin et al 2007). All 104 patients received one triple-therapy cycle, while 5 patients received a second triple treatment due to remaining CNV activity. The triple therapy was complemented in 18 patients (17.3%) by an additional intravitreal injection of bevacizumab. The mean follow-up period was 40 weeks (range, 22-60 weeks). The mean increase in visual acuity was 1.8 lines (p < 0.01), and the mean decrease in retinal thickness was 182 micrometers (p < 0.01). No serious adverse events were observed. The authors concluded that triple therapy results in significant and sustained visual acuity improvement after only one cycle of treatment in patients with AMD-associated CNV. In addition, the therapy offered a good safety profile, potentially lower cost compared with therapies that must be administered more frequently, and convenience for patients (Agustin et al 2007). The PDT was performed with a reduced time of light delivery (70 seconds) in an effort to reduce choroidal damage.
The Neovascular Age-related macular degeneration, Periocular corticosteroids and Photodynamic therapy (NAPP) trial evaluated 67 patients with AMD and subfoveal CNV. Thirty-four patients were given a single periocular injection of corticosteroid followed immediately by PDT with verteporfin, and 33 patients were given PDT alone. No difference in visual acuity or angiographic leakage between the groups was observed at 6 months (NAPP Trial Research Group et al 2007).
Several other studies combining therapeutic modalities are currently underway, including the BRIDGE study (ranibizumab plus anecortave acetate), the PROTECT study (ranibizumab plus PDT with verteporfin), and the VISION phase IV study (pegaptanib plus PDT).
Future directions
Angiogenesis has also been targeted through non-VEGF pathways, including bioactive lipids. Sphingomab (Lpath Inc., San Diego, CA) is a monoclonal antibody targeted against sphingosine-1-phosphate, which has been implicated in angiogenesis, scar formation, and inflammation (Sabbadine 2006). Studies have been very preliminary, but the reminder that VEGF is not the only contributor to angiogenesis is important and may play a further role in combination therapy.
Encapsulated Cell Technology (ECT), developed by Neurotech, Lincoln, RI, involves implantation into the vitreous cavity of a small semi-permeable polymer capsule. The capsular implant is lined with cultured cells that have been engineered to secrete certain proteins or peptides. The initial studies of ECT in humans have been phase I trials of ECT containing modified human retinal pigment epithelial cells, programmed to secrete ciliary neurotrophic factor (CNTF), in the treatment of retinitis pigmentosa and non-neovascular AMD (Tao et al 2002; Sieving et al 2006). Assuming that these capsules demonstrate immune privilege, ECT allows for theoretically sustained low-dose delivery of a single protein or a combination of proteins or other cellular products into the vitreous cavity.
Rather than replacing specific defective genes or supplying deficient proteins, some have advocated replacing entire cell lines and tissues.
Although not yet ready for human trials, stem cell transplantation and retinal pigment epithelium transplantation are enticing goals for the management of chronic blinding diseases (Francis et al 2003). Significant challenges with these therapeutic modalities include identifying pluripotent cells, controlling their differentiation, and understanding the relationships between modified cells and the immune system.
Recent advances in the understanding of the relationship between genetic susceptibility and AMD have raised new questions regarding the influence of immunity on the occurrence of CNV. One example is the recent association between complement factor H haplotypes and AMD (Edwards et al 2005). Further understanding will allow for genetic testing and identification of individuals at high risk for CNV and of those appropriate for intensive prophylactic therapies.
Conclusion
In the last 3 years, the tertiary intervention for AMD-related CNV, the leading cause of irreversible severe vision loss in AMD, has shifted from a predominantly laser-based treatment approach to a more targeted pharmacotherapeutic approach. Pharmacotherapy has proven superior to laser-based treatment in many patients with AMD-related CNV, allowing for better visual acuity and anatomic outcomes, and has made treatment available to neovascular AMD patients who were previously poor candidates for laser-based therapy.
In this time, we have also learned that non-selective VEGF blockade by agents like ranibizumab and bevacizumab appears more efficacious in the short-term treatment of AMD-associated CNV than selective VEGF blockade by agents like pegaptanib. At this juncture, caution still needs to be exercised, as the long-term risks of non-selective VEGF blockade are entirely unknown. Given the potential neuroprotective role of VEGF, complete and sustained VEGF blockade might, in theory, result in long-term vision loss in AMD-related CNV.
The use of pharmacologic agents as monotherapy has allowed patients to recover vision faster than with previous treatment modalities, but the effects are frequently, though not always, short-lived. A sustained beneficial effect has only been shown with treatment schedules requiring frequent intravitreal injections. As our understanding of the pathophysiologic mechanism of angiogenesis and of the pharmacodynamics of anti-angiogenic agents improves, we have the opportunity to refine our treatment approach. The use of combination therapy, involving manipulation of multiple aspects of the angiogenesis cascade, is being investigated in patients with AMD-related CNV. Providing rapid and sustained improvement in vision and function, while reducing the risks and treatment burden of pharmacologic monotherapy, may be the way of the very near future. While pharmacotherapy has helped tremendously in the care of patients with VEGF-mediated disease, long-term goals in the management of AMD will also need to address other sequelae, such as vision-limiting macular ischemia, atrophy, and subretinal fibrosis, in patients with disease non-responsive to anti-VEGF agents or with inactive but advanced disease. In addition, further investigations aimed at preventing the progression of both the neovascular and non-neovascular forms of the disease prior to the onset of vision loss will be instrumental in allowing patients to maintain a better quality of life in this period of increasing life expectancy. | 2014-10-01T00:00:00.000Z | 2008-06-01T00:00:00.000 | {
"year": 2008,
"sha1": "3377447d248cf884532d31c3b94856b96857639e",
"oa_license": "CCBYNC",
"oa_url": "https://www.dovepress.com/getfile.php?fileID=2926",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3377447d248cf884532d31c3b94856b96857639e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
118273007 | pes2o/s2orc | v3-fos-license | Some remarks about the baffling Higgs physics and the particle mass problem
A statistical model is proposed which ascribes the Z^0 mass to the screening properties of the neutrino Fermi sea (neutrino vacuum). Concerning the fermion masses, some puzzling features of the Higgs mechanism are examined. Arguments are advanced, based on the Zitterbewegung theory of the electron substructure, showing that in low-energy experiments the electron behaves as an extended distribution of charge, though its size comes out less than 10^-16 cm in high-energy experiments. This might be a clue for explaining the origin of fermion masses without resorting to the Higgs field.
1 - Introduction. - This year, the reconditioned LHC at CERN in Geneva will start to operate, so that experiments either confirming or refuting the existence of the Higgs boson will at last be carried out. This renews the interest in this elusive particle, which was hypothesized to explain the masses of the bosons that mediate the weak force and which should give mass to all massive fermions, leptons and quarks. For these reasons, it seems timely to reconsider the question of masses, also in order to highlight possible explanations different from the Higgs hypothesis.
2 - The weak boson masses. - The main feature of the weak interaction is its very short range. In Fermi's 1933 theory it was regarded as a "contact" interaction acting at zero spatial separation, in contrast with the electric force, mediated by photons, which acts at large distances. The Higgs mechanism ascribes to the weak bosons a "true" inertial mass originated by the interaction with a doublet of scalar fields in SU(2) space, that is, the Higgs field [1]. Owing to the energy-time uncertainty principle, weak bosons of mass M last a time $\delta t \leq \hbar/Mc^2$, so that the range of the weak force is $\hbar/Mc$, the boson Compton wavelength. This is like what occurs with the Yukawa force mediated by massive pions. An alternative, more conservative explanation is based on the effect of the neutrino Fermi sea (neutrino vacuum) on the weak boson propagation. Indeed, owing to the vanishingly small neutrino mass, the neutrino sea is not bordered from above by a forbidden energy gap, unlike the electron Fermi sea. Consequently, it screens the weak force much as the electrons in the conduction band of metals screen the electric force [2]. In this way the range of the weak force is curtailed, which is equivalent to having massive bosons.
Working out a rigorous treatment of the above screening effect is a rather exacting task, so it appears suitable to use a simplified approach, based on a special application of the Thomas-Fermi (TF) method and already examined ten years ago [3]. A short account of it is given here.
3 - Screening of weak force in neutrino vacuum. - According to the SU(2)⊗U(1) symmetry, weak interactions are mediated by the W± and Z⁰ bosons. If mass is assigned to these particles, their masses turn out to be related by

$$M_W = M_Z \cos\vartheta_w, \qquad (1)$$

with $\vartheta_w \simeq 28.6^\circ$ standing for the electroweak angle [1]. Equation (1) holds independently of the mass-generating mechanism; therefore it is sufficient to consider only the neutral Z⁰ boson. In the unperturbed sea, $\nu_L$ neutrinos of kinetic energies w = cp ranging from 0 to −∞ are present. Taking into account only one spin component, the neutrino density of states is related to the kinetic energy by [4]

$$\rho(cp) = \frac{(cp)^2}{2\pi^2 (\hbar c)^3}. \qquad (2)$$

In the presence, at the point r = 0, of a steady fermion, only the time component Z⁰ of its potential is different from zero, so that the perturbed neutrino energy is

$$w = cp + U(r), \qquad (3)$$

with $U(r) = -Q_{\nu_L} e Z^0(r)$ standing for the neutrino potential energy and $Q_{\nu_L} = 1/\sin 2\vartheta_w$ for the electroweak neutrino charge in units of e. Let us examine first the case in which U is negative, that is, U = −|U|. On the line w = U(r) it follows from equation (3) that cp = 0. Consequently, the neutrino sea is divided into two regions: one above the line U(r), where cp is positive, the other below this line, where cp is negative. Denying that the neutrino sea is perturbed up to infinite depth, we assume that an energy $w_F$ ($w_F = |w_F|$) can be found, great enough with respect to |U(r)| ($w_F \gg |U(r)|$ for any r), such that neutrinos with energy below $-[w_F - |U(r)|]$ remain unperturbed. This amounts to saying that the energy $-w_F$ sets a cut-off in the depth of the neutrino sea. Thus the neutrino density in the unperturbed Fermi sea is

$$n_F = \int_{-w_F}^{0} \rho\, d(cp) = \frac{w_F^3}{6\pi^2 (\hbar c)^3}. \qquad (4)$$

In the perturbed sea, the integration over cp covers the range $-(w_F - |U|)$ to 0 in the negative region and 0 to |U| in the positive one. Accordingly, the perturbed neutrino density is

$$n = \frac{(w_F - |U|)^3 + |U|^3}{6\pi^2 (\hbar c)^3}. \qquad (5)$$

By subtracting the unperturbed density $n_F$ we obtain, to the lowest orders in $|U|/w_F$,

$$\delta n = n - n_F \simeq -\frac{w_F^2 |U|}{2\pi^2 (\hbar c)^3}\left(1 - \frac{|U|}{w_F}\right). \qquad (6)$$

When the potential energy U is positive, that is, U = |U|, equation (6) is found again but with the term $|U|/w_F$ reversed in sign [3]. In reality, taking into account that $w_F$ is large with respect to |U(r)|, the term $|U|/w_F$ is small and can be disregarded.

Letting $Q_f e$ be the fermion electroweak charge, the potential Z⁰ turns out to be ruled by the Poisson-like equation

$$\nabla^2 Z^0 = -4\pi Q_f e\, \delta^3(\vec r) - 4\pi Q_{\nu_L} e\, \delta n, \qquad (7)$$

that is, utilizing equation (6),

$$\nabla^2 Z^0 = -4\pi Q_f e\, \delta^3(\vec r) + \frac{Z^0}{\lambda_S^2}, \qquad (8)$$

where

$$\frac{1}{\lambda_S^2} = \frac{2\alpha}{\pi}\, Q_{\nu_L}^2\, \frac{w_F^2}{(\hbar c)^2}. \qquad (9)$$

It follows that the screened potential is

$$Z^0(r) = \frac{Q_f e}{r}\, e^{-r/\lambda_S}. \qquad (10)$$

This result entitles us to define a "screening mass" $M_{Z^0}$ by means of a formal Compton wavelength,

$$\lambda_S = \frac{\hbar}{M_{Z^0} c}. \qquad (11)$$

We now consider high-energy collisions. Let $w_{(-)}, w_{(+)}$ be the energies and $\vec p_{(-)}, \vec p_{(+)}$ the momenta of the colliding electron-positron pairs. Assuming momentum $\vec p_{(-)}$ opposite to momentum $\vec p_{(+)}$, we have

$$w_{c.m.} = w_{(-)} + w_{(+)}, \qquad p_{c.m.} = |\vec p_{(-)}| + |\vec p_{(+)}|, \qquad (12)$$

standing for the energy and relative momentum in the centre of mass. Disregarding the rest energy $2m_e c^2$, the wavelength corresponding to $p_{c.m.}$ is

$$\lambda_{c.m.} = \frac{\hbar}{p_{c.m.}} \simeq \frac{\hbar c}{w_{c.m.}}. \qquad (13)$$

A resonant collision originates when the wavelength $\lambda_{c.m.}$ becomes equal to the width of the potential well which allows for the electron-positron interaction, that is,

$$\lambda_{c.m.} = \lambda_S. \qquad (14)$$

So, taking into account equations (14), (11) and (13), we get

$$w_{c.m.} = M_{Z^0} c^2, \qquad (15)$$

which relates the collision energy to the Z⁰ mass. When resonance occurs, the energy $w_{c.m.}$ is released through lepton and quark emissions mediated by flavour-diagonal interactions [1]. Utilizing equations (11) and (9), the Z⁰ mass, in energy units, turns out to be

$$M_{Z^0} c^2 = \sqrt{\frac{2\alpha}{\pi}}\; Q_{\nu_L}\, w_F, \qquad (16)$$

with α standing for the fine-structure constant. Apart from the neutrino electroweak charge, it is related only to $w_F$, the energy cut-off in the depth of the neutrino sea. Without this cut-off, that is, for $w_F \to \infty$, the Z⁰ mass diverges, reducing to zero the range $\lambda_S$ of the weak force and recovering the old Fermi "contact" theory. By comparing $M_{Z^0} c^2$ with its experimental value of about 91 GeV, we obtain $w_F$ as large as 1120 GeV, while the screening length $\lambda_S$ turns out to be 2.2·10⁻¹⁶ cm. As for the meaning of these figures, it is to be pointed out that on a temperature scale $w_F$ corresponds to 1.3·10¹⁶ K. Consequently, for T > 10¹⁶ K negative-kinetic-energy neutrinos are excited to positive energies and the neutrino sea becomes partially empty; this hinders the screening properties of the sea. The Higgs mechanism also gives 10¹⁶ K as the temperature which restores the symmetry broken at low temperature [5]. This fact is not surprising, because the Higgs and screening mechanisms are calibrated on equivalent experimental data. The range 2.2·10⁻¹⁶ cm of the weak force entails that the electron size is at least three orders of magnitude smaller than the classical electron radius 2.8·10⁻¹³ cm.
By letting Q f e be the fermion electroweak charge, potential Z 0 turns out to be ruled by the Poisson-like equation that is, utilizing equation (6), where It follows that the screened potential is This result entitles us to define a "screening mass" M Z 0 by means of a formal Compton wavelength We consider now the high-energy collisions. Let w (−) , w (+) be the energies and − → p (−) , − → p (+) the momenta of the colliding electron-positron pairs. Assuming momentum − → p (−) opposite to momentum − → p (+) , we have standing for energy and momentum in the centre of mass. Disregarding the rest energy 2m e c 2 , wavelength corresponding to p c.m. is A resonant collision is originated when wavelength λ c.m. becomes equal to the width of the potential well which allows for the electron-positron interaction, that is, So, taking into account equations (14), (11) and (13), we get which relates collision energy to Z 0 mass. When resonance occurs, energy w c.m. is released through lepton and quark emissions mediated by flavor-diagonal interactions [1]. Utilizing equations (11) and (9), Z 0 mass, in energy units, turns out to be α standing for the fine structure constant. Apart from the neutrino electroweak charge, it is related only to w F , the energy cut-off in neutrino sea depth. Without this cut-off, that is, for w F → ∞, the Z 0 mass diverges so reducing to zero the range λ S of weak forces and recovering the old Fermi's "contact" theory. By comparing M Z 0 c 2 with its experimental value of about 91 GeV, we obtain w F as large as 1120 GeV, while the screening length λ S turns out to be 2.2 · 10 −16 cm. As for the meaning of these figures, it is to be pointed out that on a temperature scale w F corresponds to 1.3 · 10 16 K. Consequenly, for T > 10 16 K negative kinetic energy neutrinos are excited to positive energies and the neutrino sea becomes partially empty. This hinders the sea screening properties. Higgs mechanism also gives 10 16 K as the temperature which restores the simmetry broken at low temperature [5]. This fact is not surprising because Higgs and screening mechanisms are calibrated on equivalent experimental data. The range 2.2 · 10 −16 cm of the weak force entails that the electron size is at least three orders of magnitude smaller than the classic electron radius 2.8 · 10 −13 cm.
Conclusion: the above statistical treatment, though not rigorous, readily explains how massless weak bosons may originate short-range interactions without resorting to Higgs physics.
- The fermion masses. - The Higgs field, which has been assumed to be at the origin of the weak boson masses, is also considered in connection with the fermion masses. The theory accommodates the masses of the electrons and quarks of the three flavors and sets the neutrino masses to zero as a consequence of the non-existence of right-handed neutrinos. Masses are assumed proportional to the Higgs vacuum expectation value. Accordingly, three arbitrary coupling factors g_e, g_u, g_d are considered for the first flavor. So, taking into account the second and third flavors, the theory contains nine undetermined parameters [1].
To detect the Higgs boson, various kinds of experiments have been devised based on its decay. But the mere existence of a decay showing the features expected for the Higgs is not sufficient to conclude that it really concerns the "true" Higgs. It is also necessary that the results found allow an independent determination of the above-mentioned nine parameters. Obviously, lacking this, the Higgs remains nothing more than a conjecture.
But another point challenges the correctness of the Higgs hypothesis. It is a well-established result of classic electrodynamics that the electromagnetic (e.m.) field shows inertial properties. This field, indeed, carries a momentum density E × H/(4πc) parallel to the Poynting propagation vector. On this ground, at the beginning of the past century considerable endeavour was devoted to explaining the electron mass as the e.m. mass of a charge distribution of definite size. The advent of quantum mechanics set an end to these attempts, because the supposed point-like nature of elementary particles entails a divergent electron mass. For this reason, the unsolved problem of the electron mass was merely put aside by applying renormalization procedures purposely devised. It follows that a viable Higgs mechanism, besides the mass, should likewise explain how the divergent electron e.m. mass is turned off.
In our opinion, in order to manage the tough problem of particle masses, two basic arguments should be considered. The first is that, according to the Copenhagen interpretation of quantum physics, the electron is an observable object, not an absolute entity. This means that its features, as expected from theory, depend on the special experiment considered. The second concerns a peculiar property of the Dirac equation: the so-called electron Zitterbewegung (Zb) [6,7,8]. It has been shown, when dealing with the expected electron velocity, that in the Fourier expansion of the spinor components each element dp_x dp_y dp_z of momentum space is associated with oscillations on the x, y, z axes of amplitudes and phases depending on p. These oscillations are caused by interference beats between positive and negative energy states. By integrating over momenta, we have for the x axis an oscillation whose amplitude is set by λ_C (equation (17)), with λ_C standing for the Compton wavelength, and

T_Zb = h / (m_e c²) (18)

for twice the oscillation period. It follows that a dynamic substructure is originated in which the electron moves in space around its centre of mass along a complex tissue of closed trajectories [9]. Keeping the above arguments in mind, we point out that high-energy collisions are very fast processes. The collision time τ can be roughly identified with the ratio between the impact parameter b and the electron-positron velocity c, that is, τ ≃ b/c [10]. But, while the impact parameter in the direction orthogonal to the velocities can be assumed equal to the electron size, that is, b_⊥ ≃ 10⁻¹⁶ cm, in the parallel direction it is reduced by Lorentz contraction, that is, b_∥ ≃ (1 − β²)^(1/2) · 10⁻¹⁶ cm. Since for 45 GeV electrons we have (1 − β²)^(1/2) = 1.1 · 10⁻⁵, we obtain τ ≃ b_∥/c = 3.7 · 10⁻³² s. This time is short in comparison with the Zb period; in fact, τ/T_Zb = 4.6 · 10⁻¹². It follows that when electrons collide the oscillations are stopped and the Zb substructure is not observable. This is like what occurs with a high-speed camera, which allows one to take steady pictures of a propeller even if it spins very fast.
The situation is opposite in low-energy experiments where, in general, energies are determined with high accuracy. For instance, in atomic spectroscopy the indetermination δw is less than about 10⁻⁷ eV, that is, δw/m_ec² ≲ 2 · 10⁻¹³. This follows from the fact that the Rydberg constant (R_H = 13.6056981 eV) is known with seven decimal digits. Considering that equation (18) allows us to write the energy-time uncertainty principle δw δt ≃ h as the reciprocity relation (δt/T_Zb)(δw/m_ec²) ≃ 1, we obtain δt/T_Zb ≳ 5 · 10¹². This large indetermination in time compels us to eliminate time in equation (17), so that the oscillations are changed into distributions of probability lying along the electron trajectories (⁴). Consequently, the observable electron turns out to be a static distribution of charge of definite size. In this way, it might allow for a finite e.m. mass, evading divergent results [11]. Opposite to the previous propeller example, this is like what occurs with a low-speed camera, which takes a picture of the spinning propeller in the form of a uniform disk.
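The two timescale comparisons above are easy to verify numerically. A minimal sketch, assuming the reconstructed relation T_Zb = h/(m_ec²) for twice the Zitterbewegung oscillation period; all other numbers are standard constants or figures taken from the text:

```python
# Numerical check of the Zitterbewegung timescale estimates quoted above.
h_eV_s = 4.1357e-15          # Planck constant (eV*s)
m_e_c2_eV = 0.511e6          # electron rest energy (eV)

T_Zb = h_eV_s / m_e_c2_eV    # ~8.1e-21 s, twice the Zb oscillation period

# Collision time for 45 GeV electrons: b_par = sqrt(1 - beta^2) * 1e-16 cm
gamma = 45e9 / m_e_c2_eV
b_par_cm = (1.0 / gamma) * 1e-16         # sqrt(1 - beta^2) = 1/gamma
tau = b_par_cm / 3.0e10                  # c in cm/s
print(f"tau = {tau:.2e} s, tau/T_Zb = {tau / T_Zb:.2e}  (text: 4.6e-12)")

# Spectroscopic case: delta_w ~ 1e-7 eV -> delta_t = h / delta_w
delta_w = 1e-7
delta_t = h_eV_s / delta_w
print(f"delta_t/T_Zb = {delta_t / T_Zb:.2e}  (text: ~5e12)")
```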
- Final remarks. - The opinion that the Zb electron substructure should be considered in connection with the mass problem is not new [9]. So far, however, it has found scarce attention because most physicists consider the Zb oscillations a meaningless feature of Dirac's equation and assume that the electron always behaves as a point-like object which, in the absence of external forces, cannot change its own velocity. Recently, an experiment has been performed showing clear evidence against this belief [12]. In this experiment, a single ⁴⁰Ca⁺ ion trapped in an electromagnetic cage simulates a free electron in an extremely fast quivering motion superimposed on a slow drift, that is, just the Zb motion.
It is to be pointed out, on the other hand, that the TF statistical model, precisely in order to allow for the finite mass of the weak bosons, rules out the assumption of a point-like electron devoid of a dynamic substructure. In fact, such an electron would originate a divergent neutrino potential energy, |U(r)| → ∞ for r → 0, which according to equation (3) would prevent us from considering a finite cut-off energy w_F and, consequently, a finite boson mass. | 2010-01-27T16:15:52.000Z | 2009-12-16T00:00:00.000 | {
"year": 2009,
"sha1": "d75560b747c8eacbaef52e93f43365febbd1ab72",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "d75560b747c8eacbaef52e93f43365febbd1ab72",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
237441867 | pes2o/s2orc | v3-fos-license | Anti-inflammatory cytokine-eluting collagen hydrogel reduces the host immune response to dopaminergic cell transplants in a rat model of Parkinson’s disease
Abstract In cell replacement approaches for Parkinson’s disease, the intracerebral implantation of dopamine neuron-rich grafts generates a neuroinflammatory response to the grafted cells that contributes to its varied outcome. Thus, the aim of the present study was to fabricate an anti-inflammatory cytokine-eluting collagen hydrogel capable of delivering interleukin (IL)-10 to the brain for reduction of the neuroinflammatory response to intracerebral cellular grafts. In vitro assessment revealed that cross-linker concentration affected the microstructure and gelation kinetics of the hydrogels and their IL-10 elution kinetics, but not their cytocompatibility or the functionality of the eluted IL-10. In vivo evaluation revealed that the hydrogels were capable of delivering and retaining IL-10 in the rat striatum, and reducing the neuroinflammatory (microglial) response to hydrogel-encapsulated grafts. In conclusion, IL-10-eluting collagen hydrogels may have beneficial anti-inflammatory effects in the context of cellular brain repair therapies for Parkinson’s disease and should be investigated further.
Background
Cell transplantation is a promising disease-modifying therapy that could develop into an alternative treatment for Parkinson's disease [1]. However, a significant limitation of this therapy that has prevented its progression into the clinic is its variable outcome, which is associated with multiple factors including the host brain's inflammatory response to the cellular implant [2]. It is well known that the transplantation of exogenous cells into the brain generates an innate inflammatory response [3][4][5][6][7][8][9] and that only a small portion (1-20%) of the grafted dopaminergic neurons from fetal midbrain tissue survive the transplantation process [10]. In the first instance, when cells are transplanted into the brain, a gliotic reaction is initiated, with the recruitment of microglia to the vicinity of the graft site, alongside the recruitment of astrocytes to seal off the traumatic injury caused by the implant. Thus, the host innate immune response is characterised by the recruitment of microglia and astrocytes to the grafted site and its surroundings. This host innate immune response is very quick [6,9] - present within hours after injection - and contributes to the variable reparative outcome of such transplantation approaches [3,5]. Thus, there is a clear window to explore the potential benefits of targeting the host innate immune response at the grafted site to counteract the extensive death of dopaminergic neurons within the first stages after transplantation. However, despite this, the effects of targeting the host innate immune response locally at the site of cell transplantation to enhance cell survival during the early transplantation phase have been poorly investigated. In recent years, biomaterial scaffolds have been investigated in an attempt to reduce or overcome the current challenges surrounding the cell transplantation process [2,11]. In particular, chemically cross-linked collagen-based scaffolds have been broadly used in the tissue engineering field due to collagen's low immunogenicity, biodegradability, high availability and versatility [12]. In Parkinson's disease research, these naturally derived, in situ-forming, injectable scaffolds have been proven to improve the survival, re-innervation and functional capability of intrastriatally transplanted fetal dopaminergic neurons when used alone or in conjunction with the neurotrophin, glial cell line-derived neurotrophic factor (GDNF), in a rodent model of Parkinson's disease [13,14]. Additionally, these collagen scaffolds are capable of substantially reducing the innate inflammatory response around the grafted cells by creating a physical barrier between the transplanted and host cells [14,15]. Since these hydrogels have the capacity to retain and release therapeutic factors, it should be possible to further enhance their protective effect by the incorporation of anti-inflammatory molecules, such as the anti-inflammatory cytokine, interleukin (IL)-10, which is well known to suppress microglial activation [16].
Therefore, the present study aimed to generate and characterise a collagen scaffold in an injectable hydrogel form capable of delivering functional IL-10 to the brain to reduce the inflammatory response to dopamine neuron-rich grafts in the context of cell-based brain repair for Parkinson's disease. To do so, we generated several hydrogel compositions using different concentrations of cross-linker and assessed these in vitro for their microstructure and gelation kinetics, as well as their IL-10 elution kinetics, anti-inflammatory functionality and cytocompatibility (Figure 1). After this initial characterisation, we assessed the IL-10-eluting hydrogels in vivo in terms of their ability to deliver and retain IL-10 in the striatum and their ability to reduce the neuroinflammatory response to dopaminergic grafts (Figure 2).
In vitro studies
Fabrication of type I bovine collagen hydrogels

All components were maintained on ice to prevent premature gelation. For a final volume of 1000 μl, 400 μl of 5 mg/ml type I bovine collagen (Vornia Biomaterials) stock solution, neutralised with 1 M NaOH, was added to 200 μl of 10× phosphate-buffered saline (PBS) containing the required concentration of the cross-linker, poly(ethylene glycol) ether tetrasuccinimidyl glutarate (4s-StarPEG). The volume was made up to 1000 μl by adding 400 μl of PBS alone or, for the IL-10-eluting hydrogels, PBS containing the required concentration of human IL-10 (GFH83-100, Cell Guidance Systems). Thus, the resulting collagen hydrogel had a final concentration of 2 mg/ml of type I bovine collagen. Based on our previous studies using collagen hydrogels for release of the neurotrophin, GDNF [13,14], we focused primarily on hydrogels cross-linked with 1, 2 and/or 4 mg/ml cross-linker for the IL-10-eluting hydrogels in vitro and in vivo. Some additional concentrations were included for the preliminary visualisation and gelation assays. Hydrogels with all cross-linker concentrations were injectable as, because they were kept on ice before injection, they remained in their liquid state and could easily pass through the injection cannula.

Figure 2. In vivo experimental designs. Following in vitro characterisation, two in vivo studies were completed to assess (A) the neurocompatibility of the IL-10-eluting hydrogel and its ability to deliver and retain IL-10 in the striatum in naïve rats, and (B) the ability of the IL-10-eluting hydrogel to reduce the host inflammatory response to primary fetal dopamine neuron-rich transplants.
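As an illustration of the mixing arithmetic in the protocol above, the sketch below computes the component volumes for one batch; the helper function, its name and its defaults are ours, not part of the published protocol:

```python
# Sketch of the hydrogel mixing arithmetic described above (structure is
# ours; volumes and concentrations follow the protocol text).

def hydrogel_recipe(final_volume_ul=1000.0,
                    collagen_stock_mg_ml=5.0,
                    target_collagen_mg_ml=2.0,
                    crosslinker_mg_ml=4.0,
                    il10_ng=50.0):   # placeholder dose, e.g. 50 ng per gel in vitro
    """Return component volumes for one collagen hydrogel batch."""
    # Collagen volume needed to reach the target final concentration.
    collagen_ul = final_volume_ul * target_collagen_mg_ml / collagen_stock_mg_ml
    pbs10x_ul = 0.2 * final_volume_ul        # 10x PBS carrying the cross-linker
    diluent_ul = final_volume_ul - collagen_ul - pbs10x_ul  # PBS +/- IL-10
    crosslinker_mg = crosslinker_mg_ml * final_volume_ul / 1000.0
    return {
        "collagen (ul)": collagen_ul,            # 400 ul of 5 mg/ml stock
        "10x PBS + 4s-StarPEG (ul)": pbs10x_ul,  # 200 ul
        "PBS or PBS + IL-10 (ul)": diluent_ul,   # 400 ul
        "4s-StarPEG (mg)": crosslinker_mg,
        "IL-10 (ng)": il10_ng,
    }

print(hydrogel_recipe())
```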
Visualisation of the collagen hydrogel microstructure
To visualise the differently cross-linked collagen hydrogels, 200 μl collagen hydrogels with increasing cross-linker concentrations (2-12 mg/ml) were generated and frozen immediately in liquid nitrogen. Once frozen, the collagen hydrogels were fractured (to allow for visualisation of the inside of the sample) and freeze-dried overnight. Samples were placed onto aluminium stubs and gold-coated in an Emitech K550 (Quorumtech, U.K.) or Emscope SC500 (Quorumtech, U.K.) gold sputter coater. Images of the sample structure were taken using a Hitachi S-4700N Pressure Scanning Electron Microscope (Hitachi, U.K.). To determine the impact of cross-linker concentration on the porosity of the hydrogels, pore diameter measurements were taken from the scanning electron microscopy (SEM) photomicrographs using ImageJ software (20 randomly selected pores per hydrogel; 3 hydrogels per concentration).
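A minimal sketch of how the pore-diameter measurements described above could be summarised (mean and standard deviation per cross-linker concentration); the diameter values below are synthetic placeholders, not the measured data:

```python
# Summarising pore diameters: 20 pores per hydrogel, 3 hydrogels per
# concentration, as described in the text. Values are placeholders.
import numpy as np

# diameters_um[concentration] -> array of shape (3 hydrogels, 20 pores)
diameters_um = {
    2: np.random.default_rng(0).normal(3.0, 0.5, size=(3, 20)),
    12: np.random.default_rng(1).normal(1.2, 0.3, size=(3, 20)),
}

for conc, d in diameters_um.items():
    print(f"{conc} mg/ml 4s-StarPEG: "
          f"mean pore diameter = {d.mean():.2f} +/- {d.std(ddof=1):.2f} um")
```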
To observe the distribution of the cells within the hydrogel, SH-SY5Y cells (ATCC; 5 × 10 6 cells/ml) were encapsulated in 50 μl collagen hydrogels cross-linked with either 6 or 12 mg/ml of 4s-StarPEG and fixed after 48-h incubation period in plating media at 37 • C. To limit the impact of sample processing on the cell-encapsulated collagen hydrogels, the samples were dehydrated in ascending ethanol concentrations (50, 70 and 100%) and hexamethyldisilazane and left to air-dry overnight. Samples were mounted and gold sputter-coated. Images were taken using a Hitachi S-4700N Pressure Scanning Electron Microscope (Hitachi, U.K.).
Assessment of the collagen hydrogel gelation time
Since the concentration of cross-linker used in the hydrogels affects the speed and extent of polymerisation, the gelation time of the collagen hydrogels with increasing concentrations of cross-linker was assessed in vitro. Fifty microlitres of collagen hydrogel with increasing cross-linker concentrations (1-12 mg/ml) was transferred to a previously sterilised (UV radiation) superhydrophobic surface (Teflon®), placed in an incubator at 37°C and 5% CO2, and checked every 5-10 min to evaluate the gelation time.
Assessment of IL-10 release from collagen hydrogels
The in vitro release of IL-10 from the collagen hydrogels was evaluated using a human IL-10 ELISA. In short, 50 ng of human IL-10 was loaded into 50 μl collagen hydrogels with either 2 or 4 mg/ml of cross-linker and left to polymerise at 37°C. Subsequently, each 50 μl collagen hydrogel was incubated in a well of a 24-well plate with SH-SY5Y cells (50000 cells/cm²). IL-10-containing supernatant was collected at several time points until the collagen hydrogels were fully degraded (not detectable when the medium was removed; 4 days after incorporation into the medium). The amount of IL-10 present in the medium was analysed with a human IL-10 ELISA kit (R&D Systems; DuoSet DY217B) following the manufacturer's protocol.
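The elution readout above can be summarised as cumulative release. A minimal sketch, assuming the collected medium is replaced at each time point (the protocol does not state this explicitly) and using placeholder values:

```python
# Converting ELISA supernatant readings into cumulative IL-10 release
# (% of the 50 ng loaded). Time points and amounts are placeholders.
loaded_ng = 50.0
# (time in h, IL-10 measured in the collected supernatant in ng)
samples = [(2, 6.0), (8, 9.5), (24, 12.0), (48, 10.0), (72, 8.0), (96, 4.0)]

total = 0.0
for t, ng in samples:
    total += ng  # assumes sampled medium is replaced at each collection
    print(f"t = {t:3d} h: cumulative release = {100.0 * total / loaded_ng:5.1f} %")
```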
Assessment of the functionality of the IL-10 released from collagen hydrogels
To determine if the IL-10 released from the collagen hydrogels was fully functional, a Poly I:C challenge was used. Embryonic day 14 (E14) primary ventral mesencephalic (VM) cultures (50000 cells/cm 2 ) were pretreated with IL-10 (50 ng/ml) either as a single bolus or delivered in a 50 μl collagen hydrogel with 4 mg/ml cross-linker, and 1 h later, Poly I:C (InvivoGen, 20 or 50 μg/ml) was administered. After 24 h of incubation with Poly I:C, medium was collected. Levels of the pro-inflammatory cytokine, IL-1β, released from the VM cultures in response to the inflammagen, Poly I:C, were measured using an ELISA kit (Peprotech; 900-K91) to confirm the functionality of IL-10. Untreated cultures of VM cells, without either Poly I:C, IL-10 or hydrogel treatment, were included as a baseline for IL-1β release.
Assessment of the cytocompatibility of the collagen hydrogels
Before using the hydrogels in vivo, their biocompatibility with VM cultures was assessed. VM cultures were incubated with preformed collagen hydrogels (2 × 50 μl gels) cross-linked with 1, 2 or 4 mg/ml of cross-linker for 24 h or left untreated. Following incubation, cells were either assessed for cell viability using the AlamarBlue® (Invitrogen) assay or fixed for subsequent immunocytochemical staining for neurons (using mouse anti-β-III tubulin from Millipore @ 1:200), dopaminergic neurons (using mouse anti-tyrosine hydroxylase from Millipore @ 1:1000) or astrocytes (using rabbit anti-GFAP from Millipore @ 1:2000). For analysis, cell counts were quantified from five randomly selected sample sites per well, in three technical replicates per experimental condition, with three biological replicates. The β-III tubulin fluorescence was quantified by measuring the threshold area of each image using ImageJ software.
In vivo studies
Ethical statement and surgical procedures

All procedures involving the use of animals were approved by the Animal Care and Research Ethics Committee at the National University of Ireland, Galway, were completed under licence from the Irish Health Products Regulatory Authority, and were carried out in compliance with the European Union Directive 2010/63/EU and S.I. No. 543 of 2012. Male Sprague-Dawley rats (weighing 200-225 g on arrival) and time-mated female Sprague-Dawley rats were sourced from Charles River, U.K. Animals were housed in groups of four per cage, on a 12:12-h light/dark cycle, at 19-23°C, with relative humidity maintained between 40 and 70%. For the duration of the experiment, animals were allowed food and water ad libitum. All ex vivo analyses were carried out by an experimenter blind to the treatment of the animals.
All stereotaxic surgeries were performed under isoflurane anaesthesia (5% in O2 for induction and 2% in O2 for maintenance) in a stereotaxic frame with the nose bar set at −2.3 (intrastriatal) or −4.5 (intra-medial forebrain bundle (MFB)). For infusions into the striatum (in situ-forming collagen hydrogels), the coordinates were AP 0.0, ML ±3.7 (from bregma) and DV −5.0 below dura, at a total volume of 6 μl per injection. For infusions into the MFB (6-hydroxydopamine lesions), the coordinates were AP −4.0, ML −1.3 (from bregma) and DV −7.0 below dura, at a total volume of 3 μl per injection.
To obtain tissue for E14 VM cultures and suspensions, time-mated female Sprague-Dawley rats were deeply anaesthetised with isoflurane (5% in O 2 ) and rapidly decapitated using a guillotine. The E14 embryos were obtained by laparotomy and the VM was microdissected from each embryo as previously described [17]. From this tissue, single-cell suspensions were generated for cell culture and intracerebral transplantation as previously described [13,14].
In vivo study in naïve rats to assess the neurocompatibility of the IL-10-eluting hydrogel as well as its ability to deliver and retain IL-10 in the striatum

To evaluate the effects of cross-linker concentration in vivo, 24 adult male Sprague-Dawley rats received a bilateral intrastriatal infusion of 1000 ng of human IL-10 loaded into 6 μl of collagen hydrogel cross-linked with either 1, 2 or 4 mg/ml of 4s-StarPEG (n=4 per group per time point). A bolus intrastriatal infusion of human IL-10 (1000 ng in 6 μl PBS) was used as a control. The animals were then killed at days 1, 2 or 4 post-surgery by terminal anaesthesia (50 mg/kg pentobarbital i.p.) and transcardial perfusion-fixation for post-mortem immunohistochemical analysis of collagen polymerisation and biodegradation, IL-10 delivery and retention, and the host inflammatory response (using rabbit anti-collagen from Abcam @ 1:1000; rabbit anti-IL-10 from Peprotech @ 1:200; mouse anti-CD11b from Millipore @ 1:400; rabbit anti-GFAP from Dako @ 1:2000) as previously described [13,14]. The volume of immunostaining was assessed using cross-sectional areas measured from photomicrographs of a 1:6 series of sections throughout the rostro-caudal axis of the striatum, while for optical density analyses, the staining density was assessed from photomicrographs of three sections along the rostro-caudal axis. To do so, the mean grey values were determined using ImageJ software, and converted into optical density (arbitrary units) by applying the following formula: OD = log10 (255/mean grey value).
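The optical density conversion described above is straightforward to implement; a minimal sketch of the stated formula OD = log10(255/mean grey value):

```python
# Optical density conversion used above: OD = log10(255 / mean grey value).
# Image loading and ROI selection are assumed to be done elsewhere
# (e.g. in ImageJ, as in the text).
import math

def optical_density(mean_grey_value: float) -> float:
    """Convert an 8-bit mean grey value into optical density (a.u.)."""
    return math.log10(255.0 / mean_grey_value)

# Example: three sections along the rostro-caudal axis (placeholder values)
mean_greys = [180.0, 150.0, 165.0]
ods = [optical_density(g) for g in mean_greys]
print([round(od, 3) for od in ods])
print(f"mean OD = {sum(ods) / len(ods):.3f}")
```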
In vivo study in parkinsonian rats to assess the effects of an IL-10-eluting collagen hydrogel on the host inflammatory response to a VM cell transplant
Once the hydrogel neurocompatibility and ability to release IL-10 were determined, an in vivo study was carried out to assess if the collagen hydrogel could reduce the host innate immune response after a VM cell transplant in the 6-hydroxydopamine lesioned rat model of Parkinson's disease [19]. Male Sprague-Dawley rats (n=13) received a unilateral intra-MFB 6-hydroxydopamine lesion (12 μg in 3 μl). Rats were then assigned into three groups (n=4-5 per group) to receive intrastriatal transplants of E14 VM cells (300000 cells) encapsulated in an IL-10-loaded collagen hydrogel (cross-linked with 4 mg/ml 4s-StarPEG) with either 0, 500 or 1000 ng of IL-10. The animals were killed 4 weeks post-transplantation by terminal anaesthesia (50 mg/kg pentobarbital i.p.) and transcardial perfusion-fixation for post-mortem assessment of the host inflammatory response to the transplant (using mouse anti-CD11b from Millipore @ 1:400 and rabbit anti-GFAP from Dako @ 1:2000). Volume and optical density analyses of immunostaining were completed as described above. The 4-week timepoint was chosen as the endpoint of the study as we have previously shown a pronounced host inflammatory response to the transplant as early as 2 weeks after VM cell transplantation [14].
Statistical analysis
All data are expressed as mean ± standard error of the mean, and were analysed using one-way, two-way or two-way repeated-measures ANOVA as appropriate, with post-hoc Bonferroni tests where required. Throughout the 'Results' text, the main effects from the initial ANOVA are cited in the body of the 'Results' section, while the results of the post-hoc analyses are shown in the corresponding figure and explained in the figure legend.
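For illustration, a minimal sketch of the simplest analysis named above (one-way ANOVA with Bonferroni-corrected pairwise comparisons) using SciPy; the group values are placeholders, and the two-way and repeated-measures designs used in parts of the study are not covered here:

```python
# One-way ANOVA with Bonferroni-corrected pairwise t-tests (sketch).
from itertools import combinations
from scipy import stats

groups = {  # placeholder measurements, one list per experimental group
    "bolus":   [0.12, 0.15, 0.11, 0.14],
    "gel 1mg": [0.13, 0.16, 0.12, 0.15],
    "gel 4mg": [0.20, 0.22, 0.19, 0.23],
}

f_stat, p = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p:.4f}")

pairs = list(combinations(groups, 2))
alpha_corrected = 0.05 / len(pairs)   # Bonferroni correction
for a, b in pairs:
    t, p_pair = stats.ttest_ind(groups[a], groups[b])
    flag = "significant" if p_pair < alpha_corrected else "n.s."
    print(f"{a} vs {b}: p = {p_pair:.4f} ({flag} at alpha = {alpha_corrected:.4f})")
```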
In vitro characterisation of differentially cross-linked collagen hydrogels
Several collagen hydrogels with a fixed concentration of type I collagen and different 4s-StarPEG concentrations were generated and characterised. The porous structure of the hydrogels was clearly visible using SEM ( Figure 3A), allowing for the homogeneous encapsulation of SH-SY5Y cells ( Figure 3B). As expected, increasing the cross-linker concentrations reduced the size of the hydrogel pores ( Figure 3C). Unsurprisingly, the cross-linker concentration was also strongly linked to the gelation kinetics, with a higher concentration of cross-linker leading to shorter hydrogel gelation times ( Figure 3D; cross-linker concentration, F (4,10) = 697.60, P<0.0001).
To assess the IL-10 elution kinetics, IL-10 was loaded into collagen hydrogels (2 and 4 mg/ml cross-linker), and when fully polymerised, collagen hydrogels were added to SH-SY5Y neuronal cultures. Both collagen hydrogel compositions successfully retained and released IL-10 into the medium over time until fully degraded at 4 days post administration (collagen hydrogels were not visible when medium was removed) ( Figure 3E; time, F (6,28) = 25.39, P<0.0001). The collagen hydrogels with a lower concentration of cross-linker degraded more rapidly, consequently releasing IL-10 into the medium faster ( Figure 3E; cross-linker concentration, F (1,28) = 57.34, P<0.0001).
Furthermore, to ensure the IL-10 released from the collagen hydrogels was functional, its effects were assessed using a Poly I:C challenge. The administration of Poly I:C to VM cell cultures generated an inflammatory response, as observed by the release of the pro-inflammatory factor IL-1β into the medium (Figure 3F; Poly I:C, F(2,18) = 11.84, P<0.001). Pretreatment of VM cultures with IL-10 - either as a bolus or released from a collagen hydrogel - 1 h before Poly I:C addition attenuated this response (Figure 3F; IL-10, F(2,18) = 93.44, P<0.01). Although the kinetics of IL-10 release into the cell culture medium are distinctly different when delivered as a bolus (same dose but higher immediate concentration in the medium) vs from the hydrogel (same dose but slower, more sustained release), the purpose of this assay was simply to confirm that the IL-10 retained its functionality after elution from the gel.

Before conducting the in vivo experiments, the collagen hydrogel cytocompatibility was assessed by exposing primary VM cultures to preformed collagen hydrogels with increasing concentrations of cross-linker (1-4 mg/ml) for 24 h. All collagen hydrogel compositions were compatible with these primary neural cultures, as no change in the populations of neuronal cells (Figure 4A; cross-linker concentration, F(3,8) = 0.25, P>0.05), dopaminergic neurons (Figure 4B; cross-linker concentration, F(3,8) = 0.21, P>0.05) or astrocytes (Figure 4C; cross-linker concentration, F(3,8) = 0.39, P>0.05) was observed. Additionally, the overall viability of VM cultures, as assessed by the AlamarBlue® assay, was not compromised by exposure to any of the collagen hydrogels (Figure 4D; cross-linker concentration, F(3,8) = 1.41, P>0.05). In support of these results, the morphology of neuronal cells, dopaminergic neurons and astrocytes in the VM cultures was not modified by any of the tested hydrogel compositions (Figure 4E).
Taken together, these data indicate that the properties of the hydrogel can be tuned by regulating the cross-linker concentration. More importantly, we have shown that collagen hydrogels are cytocompatible, and can retain and release a functional anti-inflammatory cytokine in vitro.
In vivo study in naïve rats to assess the neurocompatibility of the IL-10-eluting hydrogel as well as its ability to deliver and retain IL-10 in the striatum

After ensuring that the hydrogels were cytocompatible and able to retain and release IL-10 in vitro, we assessed these properties in vivo. To do so, IL-10-loaded collagen hydrogels with increasing concentrations of cross-linker (1-4 mg/ml) were injected into the striatum of naïve rats.
We first determined the host neuroinflammatory response to the collagen hydrogels by evaluating the microgliotic and astrocytic reaction in the implant site vicinity ( Figure 5). We found that the density of microgliosis (IL-10 delivery, F (3,34) = 3.137, P<0.05) and astrocytosis (IL-10 delivery, F (3,34) = 1.824, P>0.05) around the implant site was similar when IL-10 was delivered as a bolus or within the hydrogel. This result suggests that hydrogels cross-linked with 4s-StarPEG at a concentration range of 1-4 mg/ml are biocompatible for intrastriatal delivery of IL-10.
Within the same study, type I bovine collagen immunostaining showed the presence of collagen hydrogel in the striatum throughout all experimental groups (Figure 6A), confirming in situ polymerisation. The collagen volume was similar between all hydrogel compositions, although the collagen hydrogel with 4 mg/ml of cross-linker showed a trend toward greater collagen staining. As expected, the collagen staining tended to decrease with time, indicating the biodegradability of the hydrogel.
Once the neurocompatibility, in situ polymerisation and biodegradation of the hydrogel compositions were verified, the ability of the hydrogels to deliver and retain IL-10 in the surrounding striatum was assessed using IL-10-immunostained photomicrographs. Analysis showed significantly more IL-10 immunostaining in the striatum at 24 h post-implantation when delivered within the hydrogel relative to a bolus injection (Figure 6B; IL-10 delivery, F(3,36) = 3.824, P<0.05). Indeed, the volume of IL-10 staining increased from 0.14 ± 0.106 mm³ to 10.12 ± 7.656 mm³, which represents a 70-fold increase in retention of IL-10. Because of the rapid biodegradation of the collagen hydrogel, the volume of IL-10 in the striatum also decreased quickly over time (Figure 6B; time, F(2,36) = 9.034, P<0.001).
In vivo study in parkinsonian rats to assess the effects of an IL-10-eluting collagen hydrogel on the host inflammatory response to a VM cell transplant
In the final study, we assessed whether the IL-10-loaded collagen hydrogels could reduce the inflammatory response elicited by the intrastriatal transplantation of primary dopamine neuron-rich grafts. To do so, 6-hydroxydopamine-lesioned rats were transplanted with E14 VM cells encapsulated in either unloaded collagen hydrogels or in collagen hydrogels loaded with IL-10 (500 or 1000 ng). In line with our previous studies, the delivery of the cell-seeded collagen hydrogels with 4 mg/ml of cross-linker did elicit a host innate immune response surrounding the transplantation site (Figure 7). However, the transplantation of VM cells within IL-10-loaded hydrogels (1000 ng dose) significantly reduced the density of microglia surrounding the VM graft (Figure 7A; IL-10 concentration, F(2,10) = 6.243, P<0.05), showing that the inflammatory response elicited by the intrastriatal transplantation of primary dopamine neuron-rich grafts can be reduced by encapsulation within the IL-10-loaded collagen hydrogel. Of note, no collagen or IL-10 immunostaining was detected at this 4-week timepoint, indicating that the hydrogels had fully degraded (data not shown).
Discussion
Because one of the factors that contributes to the varied outcome of cell-based brain repair for Parkinson's disease is the host brain's inflammatory response to the implanted cells, in the present study we sought to generate a collagen hydrogel matrix enriched with the anti-inflammatory cytokine, IL-10, for reduction of this neuroinflammatory response. In the first instance, we generated several hydrogel compositions using different concentrations of cross-linker and assessed these in vitro for their microstructure and gelation kinetics, as well as their IL-10 elution kinetics, anti-inflammatory functionality and cytocompatibility. After this initial characterisation, we assessed the IL-10-eluting hydrogels in vivo for their ability to deliver and retain IL-10 within the striatum, and their ability to reduce the neuroinflammatory response to dopaminergic grafts. In vitro studies revealed that increasing the cross-linker concentration reduced the hydrogels' pore size and accelerated their gelation kinetics, but did not affect the cytocompatibility of the hydrogels. Most importantly, the in vitro studies demonstrated that the hydrogels could release IL-10 over time and that the released cytokine retained its anti-inflammatory functionality. Subsequently, the in vivo studies demonstrated the ability of the IL-10-loaded collagen hydrogels to deliver and retain the cytokine within the striatum (relative to a bolus injection of the cytokine) and, most importantly in the context of cell-based brain repair for Parkinson's disease, to reduce the microglial neuroinflammatory response to dopaminergic grafts. Taken together, these data suggest that anti-inflammatory functionalised hydrogels merit further exploration as a delivery matrix for cellular reparative strategies in Parkinson's disease.
Biomaterials, particularly injectable hydrogels, have the potential to improve the outcome of brain repair strategies for Parkinson's disease through multiple mechanisms. Several studies have shown that the intrastriatal transplantation of dopaminergic grafts encapsulated in collagen hydrogels can provide the transplanted cells with a more favourable microenvironment during and after transplantation that ultimately improves the efficacy of this approach [13,14]. One mechanism through which hydrogels function is by providing a physical barrier between the exogenous transplanted cells and the host brain's innate inflammatory cells, thereby dramatically reducing the recruitment of astrocytes and microglia around the graft site [14,15]. Several studies have focused on enhancing the neurotrophic properties of the hydrogels through enrichment with dopaminergic neurotrophins such as GDNF [13,14]. Still, to date, no study has focused on enhancing their anti-inflammatory properties. Thus, in the present study, we sought to generate a collagen hydrogel enriched with the anti-inflammatory cytokine, IL-10, to further enhance the favourable microenvironment provided by the hydrogel to the cells encapsulated within it.
In the studies presented here, we have used collagen hydrogels fabricated with bovine type I collagen and multiple concentrations of 4s-StarPEG (1-12 mg/ml) as the main components of the biomaterial, since they have been successfully used for cell transplantation previously [13,14]. Type I collagen was chosen since it has been widely used for cell encapsulation due to its ability to mimic the extracellular matrix and to polymerise and form an in situ hydrogel [12], whereas 4s-StarPEG can stabilise and stiffen the resulting collagen hydrogel while being non-toxic, of low immunogenicity and already approved by the U.S. Food and Drug Administration [18]. The concentration of collagen in the collagen hydrogels was not investigated in this work, as collagen hydrogels with 2 mg/ml of type I bovine collagen have successfully been used in cell transplantation studies for Parkinson's disease [13,14].
In the initial studies, as the hydrogel structure can be regulated by the cross-linking agent [18], we analysed several collagen hydrogel compositions (with increasing concentration of cross-linker) to determine how the concentration of the 4s-StarPEG cross-linker altered its properties. The degree of cross-linking is a crucial determinant of many of the hydrogel's properties, such as elasticity, rigidity and swelling behaviour. Here, we have shown that the cross-linker concentration did modify the fibrous microstructure of the collagen hydrogel. Furthermore, we showed that the higher the cross-linker concentration, the quicker the polymerisation occurred in vitro at 37°C. We have also demonstrated here that the hydrogels could release IL-10 over time in in vitro neuronal cultures, until the hydrogel was fully degraded. More importantly, we have shown by using a Poly I:C challenge that the released IL-10 can exert its biological functions, such as reducing the levels of the pro-inflammatory cytokine IL-1β. Since chemical cross-linking can result in toxicity, the compatibility of the biomaterial must be assessed in vitro before using the hydrogels in the brain. Our preliminary in vitro assessments showed that the incubation of hydrogels with VM cultures did not have any negative impact on cell survival at any of the cross-linker concentrations assessed. Together, the in vitro data suggested that collagen hydrogels with lower concentrations of cross-linker (1-4 mg/ml of 4s-StarPEG) may be the best candidates for in vivo delivery, since they were cytocompatible, had slow gelling properties and were able to retain and release functional IL-10.
In our first in vivo study, all collagen hydrogel compositions (with 1-4 mg/ml of cross-linker) injected into the striatum successfully polymerised in situ, and the collagen hydrogel with the greatest cross-linker concentration (4 mg/ml) showed the strongest and most defined collagen staining at 24 h post-injection. The collagen staining was less well-defined for the 1 and 2 mg/ml cross-linked hydrogels, probably reflecting poor in situ formation and/or rapid biodegradation. As expected, the 4 mg/ml collagen hydrogel degraded quickly over time, being mostly degraded by day 4 post-injection, as seen previously [14]. Similar to other type I collagen hydrogels injected into the brain [13][14][15], we determined that the collagen hydrogels were compatible with the host striatum, considering that the host immune response generated by microglia and astrocytes was comparable to that of an IL-10 bolus injection.
However, the most striking finding of our first in vivo study was the extent to which the collagen hydrogel, with 4 mg/ml of cross-linker, retained IL-10 in the striatum relative to a bolus injection at 24 h post-implantation. Although the retention time of IL-10 was short, it is well established that the immediate post-grafting window is the most critical for dopaminergic cell transplants [19,20] and that the host inflammatory response to the cells manifests within hours after transplantation [3][4][5][7][8][9][10]. This suggests that the increased retention of this anti-inflammatory cytokine, even for short periods, could have beneficial consequences for grafting. Indeed, we have already reported significant benefits from retaining trophic factors for short periods alongside transplanted cells. For example, in the Moriarty et al. studies [13,14], even though GDNF was fully degraded by day 4 post-injection, it dramatically improved the survival of primary dopaminergic cells.
Having established that the IL-10-eluting hydrogels were well tolerated in the brain and capable of IL-10 retention in the striatum, we sought to assess if they could reduce the inevitable host inflammatory response to intrastriatal dopamine neuron-rich grafts. Although the delivery of cells in collagen scaffolds for neural repair has been investigated, this is the first time that cells have been delivered in an anti-inflammatory cytokine enriched collagen scaffold. In this work, we have shown that the encapsulation of VM cells in an IL-10-loaded collagen hydrogel reduced the density of microglial cells around the graft site at 4 weeks post-transplantation. The suppression of microglial activation by IL-10 is well established [16] but this is the first time it has been shown in the context of brain repair for Parkinson's disease. Ultimately, further long-term studies will be required to determine if this anti-inflammatory effect provides additional benefit to the encapsulated cells and further improves the efficacy of this approach.
In conclusion, we found that collagen hydrogels enriched with the anti-inflammatory cytokine, IL-10, were highly cytocompatible in vitro and neurocompatible in vivo, and could release functional IL-10 both in cellular models and in the Parkinsonian brain where it reduced the microglial response to dopaminergic grafts. Taken together, these data suggest that anti-inflammatory functionalised hydrogels merit further exploration as a delivery matrix for cellular reparative strategies in Parkinson's disease.
Data Availability
The data that support the findings of the present study are available from the corresponding author, upon reasonable request. | 2021-09-09T05:27:40.190Z | 2021-08-23T00:00:00.000 | {
"year": 2021,
"sha1": "dc6102ec245cd438f47a4152c6310c3638f5b1b6",
"oa_license": "CCBY",
"oa_url": "https://portlandpress.com/neuronalsignal/article-pdf/5/3/NS20210028/919547/ns-2021-0028.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "dc6102ec245cd438f47a4152c6310c3638f5b1b6",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
237730753 | pes2o/s2orc | v3-fos-license | Synthesis of magnetic CuFe2O4/Fe2O3 core-shell materials and their application in photo-Fenton-like process with oxalic acid as a radical-producing source
ABSTRACT In this work, we proposed to synthesize CuFe2O4/Fe2O3 core-shell materials with different Fe2O3 contents in order to create new and efficient photo-Fenton-like catalysts for the degradation of methylene blue with oxalic acid as a radical-producing source. The catalysts were prepared in two stages: first, CuFe2O4 was prepared by a hydroxide coprecipitation-annealing method; then, Fe2O3 was immobilized on the CuFe2O4 surface by a simple impregnation-annealing procedure. According to the experimental results, our CuFe2O4/Fe2O3 core-shell materials exhibit high photo-Fenton-like catalytic activity for the degradation of methylene blue under both UVA light and visible light, as well as good ferromagnetic properties, which allow them to be easily separated from the solution by a magnet. Among them, the catalyst prepared with the molar CuFe2O4/Fe2O3 ratio of 1:2 showed the best catalytic performance, with rate constants of 2.103 h⁻¹ under UVA light and 0.542 h⁻¹ under visible light, about 2 times higher than those of the CuFe2O4 sample. The enhanced catalytic activities of our core-shell materials can be attributed to the high content of surface Fe³⁺ species, the high specific surface area and the presence of rod-like Fe2O3 particles on their surface.
Introduction
The existence of various dye molecules in textile wastewater has been widely considered one of the major environmental problems that our world is facing today. Due to their toxicity, organic dyes have many negative effects not only on human health but also on aquatic life [Berradi et al. 2019, Hassan et al. 2018, Ito et al. 2016, Lellis et al. 2019]. Therefore, the treatment of textile wastewater containing organic dyes has become an urgent need for environmental protection. Unfortunately, the decomposition of organic dyes is usually ineffective since these organic molecules are remarkably resistant to conventional biological methods such as activated sludge or anaerobic digestion [Katheresan et al. 2018]. Over the past decades, homogeneous Fenton processes based on the reactions of Fe³⁺/Fe²⁺ ions and H2O2 were commonly known as a potentially better oxidation approach owing to the generation of highly reactive oxygen species like hydroxyl radicals, which can completely mineralize most organic species [Fenton 1894; Gligorovski et al. 2015, Neamtu et al. 2003, Wang and Xu 2012]. More specifically, it was reported that the production of hydroxyl radicals was greatly improved under UV light [Zepp et al. 1992]. However, the homogeneous Fenton and photo-Fenton technologies still present some shortcomings for practical applications. Firstly, Fe³⁺/Fe²⁺ ions as homogeneous catalysts are completely dissolved in the reaction solution, leading to enormous challenges in catalyst recovery after wastewater treatment. Secondly, for the recovery, these homogeneous catalysts usually require a post-neutralization process, which possibly causes an increase in the treatment cost as well as the formation of ferric sludge as a secondary pollution source [Catrinescu et al. 2003, Hanna et al. 2008].
In order to overcome these obstacles, several magnetic photo-Fenton-like catalysts were developed in the literature [Heidari et al. 2019, Liu et al. 2012, Sharma et al. 2015, Sharma and Singhal 2018] (in general, the photo-Fenton-like reactions are advanced oxidation processes using oxidants other than H2O2 and transition-metal catalysts other than iron under illumination [Rodríguez-Narváez et al. 2019]). These heterogeneous magnetic catalysts rely on ferrite materials, which belong to the spinel structure with the common formula M²⁺Fe³⁺₂O₄ (M = Fe, Mn, Cu, Zn, Ni). Owing to the tunability of their structure and composition, they not only display promising photo-Fenton catalytic activities but also exhibit strong ferromagnetic properties, which help them to be easily separated from the solution after the treatment [Sharma et al. 2015].
Nevertheless, the activity of ferrite catalysts is still limited and needs to be further improved for practical applications. Recently, Guo et al. proved that the addition of tartaric acid into the H2O2-ferrite-photo system could enhance the decolorization of methylene blue from 52% to 92.1% within 80 minutes [Guo et al. 2019]. Unfortunately, it is widely recognized that H2O2 is difficult to store for long periods because this compound is unstable and easily decomposes to form oxygen gas and water during storage.
According to our previous reports, the replacement of H2O2 by oxalic acid can extend the storage time and notably ameliorate the photo-Fenton performance of these catalysts [Dinh et al. 2017, Ngo TPH and Le TK 2018]. In fact, the ferric species on the surface of ferrite catalysts and the oxalic acid dissolved in solution seem to be able to create ferrioxalate complexes, which can enhance the photoabsorption of UV-visible light to produce more hydroxyl radicals and thus improve the photo-Fenton efficiency [Liu et al. 2012]. Besides, our previous works also proved that the amounts of different ions on the surface of ferrite catalysts greatly affect their photo-Fenton performance [Ngo TPH and Le TK 2018]. It was observed that the photo-Fenton catalytic activity tends to rise when the surface Fe content of our catalysts gradually increases [Ngo TPH and Le TK 2018]. With that in mind, this work aims to prepare new magnetic photo-Fenton catalysts based on CuFe2O4/Fe2O3 core-shell materials with different Fe2O3 contents on their surface in order to enhance the photo-Fenton catalytic activity. Actually, before our study, the combination of Fe2O3 and CuFe2O4 was carried out by Silva et al., who used the modified Pechini method to form new heterogeneous α-Fe2O3/CuFe2O4 catalysts [Silva et al. 2020]. These new mixed oxides exhibited both magnetic behavior and high photo-Fenton catalytic activity for the degradation of methylene blue under visible light. However, since the modified Pechini method is based on the polyesterification between citrate salts and ethylene glycol followed by pyrolysis at high temperatures, this technique is not only inappropriate for synthesizing core-shell materials but also complicated, requiring careful control of the polymerization and a long reaction time (24 hours). Therefore, in this study, we proposed to apply a facile impregnation-annealing method to synthesize our CuFe2O4/Fe2O3 core-shell materials with enhanced catalytic activities. Moreover, oxalic acid was used as a stable and effective radical-producing source instead of H2O2 to further improve the photo-Fenton performance. The influences of the Fe2O3 content on the crystal structure, morphology, surface composition and magnetic properties of our catalysts are also discussed in detail.
Sample preparation
In our work, all chemicals were commercially available and used without further purification. The preparation of CuFe2O4/Fe2O3 core-shell materials with different Fe2O3 contents was carried out in two stages. In the first stage, CuFe2O4 nanoparticles were synthesized by a simple hydroxide coprecipitation-annealing method using NaOH as the precipitator. Briefly, Cu(NO3)2·3H2O and Fe(NO3)3·9H2O (>98%, purchased from Sigma-Aldrich) in a molar ratio of 1:2 were dissolved in 200 mL distilled water to form a solution containing 0.10 mol.L⁻¹ Cu²⁺ and 0.20 mol.L⁻¹ Fe³⁺. Next, 400 mL of 0.40 mol.L⁻¹ NaOH solution was slowly dropped into the above solution under constant stirring to obtain a brown coprecipitate. This coprecipitate was washed with distilled water, dried at 150°C for 2 hours and then annealed in an electric furnace at 800°C for 2 hours to produce magnetic CuFe2O4 nanoparticles.
In the second stage, Fe2O3 was immobilized on the surface of the CuFe2O4 nanoparticles via a facile impregnation-annealing process. Firstly, 1.20 g of the synthesized CuFe2O4 powder was dispersed in 400 mL of NaOH solution (0.225 mol.L⁻¹). The suspension was constantly stirred by a mechanical agitator. On the other hand, a series of aqueous Fe(NO3)3 solutions were prepared with different concentrations (0.05, 0.10, 0.15 mol.L⁻¹). These concentrations were calculated according to the desired molar CuFe2O4/Fe2O3 ratios (1:1, 1:2, 1:3, respectively). Then, the aqueous Fe³⁺ solutions were quickly poured into the suspension containing NaOH and CuFe2O4 to form a slurry. This slurry was regularly stirred for 30 minutes. After that, the magnetic powders were separated from the solution by a magnet, dried at 150°C for 1 hour and annealed at 500°C for 2 hours. Finally, the products were washed with distilled water, collected by a magnet and dried again at 150°C for 1 hour. All the samples were named CuFe2O4/Fe2O3-X (X = 0, 1, 2 and 3, corresponding to the molar Fe2O3/CuFe2O4 ratios). Besides, Fe2O3 nanoparticles (without magnetic CuFe2O4 cores) were also prepared from aqueous Fe(NO3)3 and NaOH solutions under the same conditions (annealing at 500°C for 2 hours) in order to compare their photo-Fenton activity with that of our CuFe2O4/Fe2O3 samples.
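A minimal sketch of the precursor arithmetic implied by the procedure above; only the 1.20 g CuFe2O4 mass, the target molar ratios and the Fe(NO3)3 concentrations come from the text, while the molar masses are standard values and the solution volume is derived from them:

```python
# Precursor arithmetic for the impregnation step (illustrative sketch).
M_CuFe2O4 = 63.55 + 2 * 55.85 + 4 * 16.00    # g/mol (standard atomic masses)
n_CuFe2O4 = 1.20 / M_CuFe2O4                 # mol of CuFe2O4 core powder

for ratio, conc in [(1, 0.05), (2, 0.10), (3, 0.15)]:  # CuFe2O4 : Fe2O3 = 1 : ratio
    n_Fe2O3 = ratio * n_CuFe2O4
    n_Fe = 2 * n_Fe2O3                       # 2 mol Fe(NO3)3 per mol Fe2O3
    volume_mL = n_Fe / conc * 1000           # solution volume at the stated conc.
    print(f"1:{ratio}: {n_Fe*1000:.2f} mmol Fe(NO3)3 "
          f"-> {volume_mL:.0f} mL of {conc} mol/L solution")
```

Running this shows that the three stated concentrations correspond to the three target ratios at a common solution volume of about 200 mL, which is consistent with concentrations scaling in proportion to the ratio.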
Characterization
The structural and phase analyses of the CuFe2O4 and CuFe2O4/Fe2O3 samples were performed by powder X-ray diffraction (XRD) using a BRUKER Binary V3 X-ray diffractometer with a monochromatic Cu Kα source (λ = 1.5406 Å) operated at 40 kV and 40 mA. Phase identification and Rietveld refinement were carried out using the Joint Committee on Powder Diffraction Standards database (JCPDS cards) and the Fullprof 2009 software, respectively. The surface morphology of our samples was characterised via field-emission scanning electron microscopy (FE-SEM) images taken on a HITACHI SU8000 with an accelerating voltage of 5 kV. The specific surface area (S_BET) of the samples was measured through nitrogen adsorption-desorption isotherms recorded at 77 K using a NOVA 1000e analyzer (Quantachrome Instruments).
The Fourier transform infrared (FT-IR) studies for all our samples were carried out in a KBr matrix using a Bruker VERTEX 70 spectrometer. The FTIR spectra were recorded at a wavenumber resolution of 4 cm⁻¹ in the wavenumber range of 4000-400 cm⁻¹. The surface atomic composition of our samples was also investigated by low-voltage energy-dispersive X-ray spectroscopy (EDX) on a HITACHI SU8000 instrument operated at 5 kV (corresponding to a penetration depth of 50 nm [Takano 2011]).
The magnetic properties of the CuFe2O4 and CuFe2O4/Fe2O3-2 samples were measured at room temperature using a vibrating sample magnetometer PPMS6000 (Quantum Design) in the magnetic field range varying from −11 kOe to 11 kOe. The saturation magnetization (M_S), the remanent magnetization (M_R) and the coercivity (H_c) of each sample were determined from the corresponding hysteresis loops.
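A minimal sketch of how M_S, M_R and H_c can be read off a measured magnetization branch; the loop below is synthetic, not the PPMS6000 data:

```python
# Extracting hysteresis parameters from one ascending loop branch
# (field H in Oe, magnetization M in emu/g). Data are placeholders.
import numpy as np

H = np.linspace(-11000, 11000, 2001)     # applied field (Oe)
M = 30 * np.tanh((H + 400) / 2000)       # synthetic ascending branch

M_S = np.max(np.abs(M))                  # saturation magnetization
M_R = np.interp(0.0, H, M)               # remanence: M at H = 0
H_c = np.interp(0.0, M, H)               # coercivity: H where M crosses 0

print(f"M_S = {M_S:.1f} emu/g, M_R = {M_R:.1f} emu/g, H_c = {abs(H_c):.0f} Oe")
```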
Catalytic tests
The photo-Fenton-like catalytic activities of the CuFe2O4, Fe2O3 and CuFe2O4/Fe2O3 samples with H2C2O4 as a radical-producing source were evaluated through the degradation of methylene blue (MB, purchased from Merck) under UVA light and visible light illumination. The typical experiments were conducted in a batch-type reactor based on a glass beaker containing a solution of MB (2 × 10⁻⁵ mol.L⁻¹) and H2C2O4 (10⁻³ mol.L⁻¹). Firstly, 0.125 g of catalytic powder was dispersed into this solution, which was then constantly stirred by a mechanical agitator in the dark for 60 minutes until the MB adsorption-desorption equilibrium was established. The solution temperature was maintained at about 30°C by a water circulation system. Next, for the photo-Fenton-like reactions, the suspension was irradiated by a visible light (VIS) lamp (9 W Osram Dulux S with a visible-light intensity of 12.5 W.m⁻²) or an ultraviolet A (UVA) lamp (9 W Radium 78 with a UVA-light intensity of 33.0 W.m⁻²) fixed 10 cm above the solution surface. The light intensity of both lamps was measured by an Ocean Optics USB4000 spectrometer. It should be noted that MB has its maximum absorption at 664 nm, whereas the light spectra of both our UVA and visible lamps (Figure 1, measured by a StellarNet USB4C00211 spectrometer) do not show any peak around this wavelength. Furthermore, the capacity of our lamps is only 9 W, much lower than that used in other works [Liu et al. 2012, Sharma et al. 2018], which allows us to avoid the self-sensitization of MB molecules. At given time intervals, aliquots (5 mL) of the solution were collected, followed by the separation of the catalytic powder from the solution by a magnet. Finally, the concentrations of the remaining dye were analyzed using a Helios Omega UV-VIS spectrophotometer (Thermo Fisher Scientific, USA) at 664 nm. The total organic carbon (TOC) of the solution was also determined by a Shimadzu TOC-VCPH analyzer. The calibration curve was prepared using potassium hydrogen phthalate (99.99%, purchased from Merck).
Crystal structure and phase composition
XRD characterization was used to determine the crystal structure of our magnetic CuFe2O4 powder and CuFe2O4/Fe2O3 core-shell materials (Figure 2). From their patterns, the quantitative analysis of phase composition was carried out using the Fullprof 2009 program, and the results are given in Table 1.
Morphology and surface specific area
The FE-SEM images of the CuFe2O4 and CuFe2O4/Fe2O3 core-shell samples are represented in Figure 3. It can be seen that the CuFe2O4 sample is composed of agglomerated cubic particles with sizes ranging between 80 and 200 nm (Figure 3a). After the immobilization of Fe2O3, rod-like particles were observed on the surface of the CuFe2O4 core during the synthesis process. Interestingly, the FE-SEM observation (with a magnification of 100 K) also showed the evolution of particle size when the content of Fe2O3 increased. For the CuFe2O4/Fe2O3-1 sample, the rod-like particles are about 100 nm in length and 40 nm in diameter (Figure 3b). The length tends to increase to 150 nm whereas the diameter tends to decrease to 25 nm for the core-shell material prepared with the molar CuFe2O4/Fe2O3 ratio of 1:2 (Figure 3c). However, when the molar CuFe2O4/Fe2O3 ratio was up to 1:3 (Figure 3d), the rod-like particles were strongly shortened (60-70 nm in length). In particular, in this sample, some rod-like particles seem to have transformed into spherical particles with a size of about 60 nm. This result suggests that loading a very large quantity of Fe³⁺ ions on the surface of CuFe2O4 may reduce the available surface area for the development of Fe2O3 seeds and, as a consequence, hinder the growth of rod-like Fe2O3 nanoparticles.
Besides, we also noticed the difference in surface texture between the CuFe2O4 and CuFe2O4/Fe2O3-2 samples via FE-SEM images at higher magnification (×200 K). Owing to the presence of randomly distributed rod-like particles, the surface of the CuFe2O4/Fe2O3-2 sample (Figure 3f) becomes rougher, with more porosity than the CuFe2O4 surface (Figure 3e), likely resulting in a higher surface area for the core-shell materials. In order to better investigate the effect of loading Fe2O3 on the surface morphology of CuFe2O4, nitrogen adsorption-desorption isotherm analysis was carried out. For the CuFe2O4/Fe2O3-2 sample, the BET specific surface area was found to be 11.498 m².g⁻¹, which is about ten times higher than that of the CuFe2O4 sample (only 1.159 m².g⁻¹). An enhanced surface area was also observed for ZnO/Fe2O4 hollow nanospheres [Li et al. 2014] and MgFe2O4@SiO2 core-shell nanocomposites [Tiwari and Kaur 2020]. These FE-SEM and BET results reinforce the fact that Fe2O3 nanoparticles were successfully bound to the surface of CuFe2O4, making the material surface rougher and thereby increasing the specific surface area.
Magnetic properties
The magnetic hysteresis curves of our samples are depicted in Figure 5a. From these curves, the magnetic parameters such as saturation magnetization, remanent magnetization and coercivity were determined. The magnetization of the core-shell samples is reduced relative to the pure CuFe2O4 powder, which can be attributed to the growth of the weakly magnetic hematite phase [Bhowmik and Saravanan 2010] on the surface of CuFe2O4. However, the magnetic properties of our core-shell materials are still good enough for them to be easily separated from the solution by a magnet (Figure 5b).
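For completeness, a small sketch of how the three loop parameters can be read off a measured M-H branch; the loop below is a synthetic stand-in, not our magnetometry data:

```python
import numpy as np

def hysteresis_parameters(H, M):
    """Estimate Ms, Mr and Hc from one branch of an M-H loop.
    H in Oe, M in emu/g; the branch must be monotonic in H."""
    Ms = np.max(np.abs(M))                      # saturation magnetization
    Mr = abs(np.interp(0.0, H, M))              # remanence: M at H = 0
    order = np.argsort(M)                       # invert M(H) to find H(M=0)
    Hc = abs(np.interp(0.0, M[order], H[order]))
    return Ms, Mr, Hc

# Illustrative branch of a soft-ferrite-like loop (synthetic data):
H = np.linspace(-10000, 10000, 401)
M = 40.0 * np.tanh((H + 150.0) / 2000.0)
Ms, Mr, Hc = hysteresis_parameters(H, M)
print(f"Ms = {Ms:.1f} emu/g, Mr = {Mr:.1f} emu/g, Hc = {Hc:.0f} Oe")
```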
Photo-Fenton catalytic activity
The photo-Fenton catalytic performance of CuFe2O4, Fe2O3 and the CuFe2O4/Fe2O3 core-shell materials was evaluated through the MB degradation under UVA light and visible light (Figures 6a and 6b, respectively). The apparent rate constant (k) for each sample was calculated using the pseudo-first-order Langmuir-Hinshelwood kinetic model (Figures 6c and 6d) and is listed in Table 3. The MB degradation rate first increased with the Fe2O3 content; however, when the Fe2O3 content was further enhanced, the rate of MB degradation tended to decrease. These results prove that the photo-Fenton-like catalytic activity of our samples can be controlled by the CuFe2O4/Fe2O3 ratios used in catalyst preparation. Moreover, the UVA-light-induced catalytic activity is always higher than the visible-light-induced catalytic activity for all catalysts, indicating that the light energy is also a factor affecting their catalytic performance. In order to verify the stability and the reusability of our catalysts, the leaching test and the reuse tests for the CuFe2O4/Fe2O3-2 sample were carried out as follows: after the first photo-Fenton catalytic run with our CuFe2O4/Fe2O3-2 sample and oxalic acid under UVA light or visible light, the catalyst was removed from the solution by a magnet, washed with distilled water and dried at 150 °C for 1 hour. Then, the catalyst was reused for four subsequent consecutive runs under the same conditions. Besides, the iron concentration in the solution of the first run was evaluated by atomic absorption spectrometry (AAS) using a Shimadzu AA-6300 spectrometer. Within 0.5 hour under UVA light or 3 hours under visible light, the catalytic performance of the CuFe2O4/Fe2O3-2 sample was only slightly reduced after four reuses (Figure 7a). The FE-SEM image also shows that rod-like and spherical Fe2O3 nanoparticles still cover nearly the entire surface of this sample after 5 consecutive catalytic tests (Figure 7b), indicating the reusability and the potential of this catalyst for practical applications. Furthermore, the AAS result shows a very low concentration of leached Fe3+ ions (1.56 mg L−1), suggesting that our heterogeneous catalyst is stable under the experimental conditions and that the dissolution of iron species by oxalic acid can be neglected. Finally, Figure 8 compares the TOC evolution versus irradiation time for CuFe2O4 and our core-shell CuFe2O4/Fe2O3-2 sample. The CuFe2O4/Fe2O3-2 sample always showed better TOC removal than CuFe2O4 under both UVA light and visible light. Nevertheless, for both samples, the TOC removal is slower than the discoloration of the MB solution, which indicates that most MB molecules were effectively broken down by the highly reactive oxygen species, but the small organic intermediates produced from the MB degradation are difficult to remove from the solution.
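The rate-constant extraction itself is a one-line linear fit of the linearized kinetic model; a minimal sketch, with a synthetic decay standing in for a measured C/C0 series:

```python
import numpy as np

def pseudo_first_order_k(t_hours, c_over_c0):
    """Apparent rate constant k (h^-1) from ln(C0/C) = k*t, the linearized
    pseudo-first-order Langmuir-Hinshelwood kinetic model."""
    t = np.asarray(t_hours)
    y = -np.log(np.asarray(c_over_c0))
    return np.sum(y * t) / np.sum(t ** 2)  # least-squares slope through origin

# Illustrative UVA-light run (C/C0 sampled every 0.1 h, synthetic decay):
t = np.array([0.0, 0.1, 0.2, 0.3, 0.4, 0.5])
c = np.exp(-3.3 * t)
print(f"k = {pseudo_first_order_k(t, c):.2f} h^-1")
```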
Discussion
As shown in Figure 6 and Table 3, Fe2O3 nanoparticles (without magnetic CuFe2O4 cores) exhibited great photo-Fenton catalytic activities for MB degradation (k = 3.285 h−1 under UVA light and k = 0.655 h−1 under visible light). However, since these nanoparticles are antiferromagnetic and well dispersed in the solution, they are difficult to recover and reuse. In contrast, although CuFe2O4 and other ferrite materials can be easily recovered owing to their ferromagnetic properties, their catalytic performance is not high enough for practical applications due to the limited iron species on their surface. Therefore, the combination of Fe2O3 and CuFe2O4 can be a promising solution for the improvement of magnetic photo-Fenton catalysts. On the other hand, it should be noted that our catalysts did not display photocatalytic activity under the experimental conditions (using a 9 W Osram Dulux S lamp as the visible light source and a 9 W Radium 78 lamp as the UVA light source), although some studies have reported the photocatalytic activity of CuFe2O4 materials. In those studies, the authors usually used a 500 W xenon lamp [Zhu et al. 2013] or a 150 W xenon arc lamp [Ismael et al. 2020], whose power is much higher than that of our lamps (only 9 W). There are also some works on CuFe2O4 photocatalysts using an 8 W lamp, but this lamp emits light in the UVC region, whereas our study only uses a UVA lamp and a visible lamp. Hence, photocatalytic activity of our samples can be excluded from this study. According to the experimental results, via the immobilization of Fe2O3 onto CuFe2O4 particles, we effectively improved the photo-Fenton-like catalytic activity of CuFe2O4 for the degradation of MB under both UVA light and visible light. This enhancement of activity can be explained by various factors, including the change in phase composition, the evolution of morphology and the variations of functional groups on the surface of our catalysts. In fact, the growth of Fe2O3 nanoparticles on the surface of the magnetic CuFe2O4 cores not only increased the hematite phase in the structure of the samples but also modified the distribution of metallic ions on their surface. As displayed in the FTIR spectra, the magnetic CuFe2O4 powder shows an extremely weak M_octa-O peak, indicating a very limited presence of surface octahedral metal ions (Cu2+ and Fe3+). In contrast, when the surface of CuFe2O4 was coated by Fe2O3 nanoparticles, the intensity of this peak remarkably increased, which can be associated with the fact that our Fe2O3 nanoparticles crystallize in the corundum structure and contain all Fe3+ ions in octahedral sites [Kraushofer et al. 2018, Li et al. 2016]. Interestingly, the M_tetra-O peak of these CuFe2O4/Fe2O3 core-shell materials is still intense, even more intense than that of the CuFe2O4 sample. These results demonstrate that our CuFe2O4/Fe2O3 core-shell catalysts contain high amounts of both tetrahedral and octahedral Fe3+ ions on their surface, which can be considered the main reason for the improvement of the photo-Fenton-like catalytic performance. Among all our catalysts, the CuFe2O4/Fe2O3-2 sample seems to show the highest content of surface Fe3+ ions, owing to the most intense M_octa-O and M_tetra-O peaks in its FTIR spectrum. The enhanced Fe content on the surface of our core-shell catalysts is also supported by the EDX study. In fact, these surface Fe3+ ions can react with oxalic acid to form ferrioxalate complexes [Jeong and Yoon 2005, Liu et al. 2012, Ngo TPH and Le TK 2018]. Then, under light irradiation, these ferrioxalate complexes are excited to produce numerous radicals such as C2O4•−, O2•− and •OH (eq. 1-4) [Liu et al. 2012, Mulazzani et al. 1986], which are highly reactive and thus able to degrade MB molecules in solution effectively. Therefore, the best performance of the CuFe2O4/Fe2O3-2 catalyst is likely attributed to the highest surface Fe3+ content of this sample.
Secondly, when Fe2O3 nanoparticles were combined with CuFe2O4, the specific surface area of our catalysts was strongly enhanced (about ten times higher than that of CuFe2O4). This can increase the number of reactive sites on their surface, leading to the enhancement of their photo-Fenton-like activity. Moreover, we also noticed a clear correlation between the shape of the Fe2O3 particles and the catalytic activity. Depending on the molar CuFe2O4/Fe2O3 ratios, the Fe2O3 particles could be transformed between the spherical shape and the rod-like morphology. It seems that the rod-like Fe2O3 particles in the CuFe2O4/Fe2O3-1 and CuFe2O4/Fe2O3-2 samples showed better performance than the spherical Fe2O3 particles in the CuFe2O4/Fe2O3-3 catalyst. Although the reason for the enhanced activities of rod-like particles is still unclear and needs to be further studied, this phenomenon was also observed in some previous works [Chaudhari et al. 2012, Liang et al. 2015]. Chaudhari et al. compared the peroxidase mimic activity of hematite iron oxides with different nanostructures, including hexagonal prism, cube-like and rod-like particles, and found that the rod-like particles showed the best activity [Chaudhari et al. 2012]. Likewise, Liang et al. reported that the photocatalytic degradation of rhodamine B can be improved when using X-shaped α-Fe2O3 nanocrystals, which are composed of two rod-like particles crossing each other [Liang et al. 2015].
These results indicate that the particle shape also plays an important role in catalytic performance. As a result, by adjusting the molar CuFe2O4/Fe2O3 ratios, we can control the particle shape of Fe2O3 and consequently the photo-Fenton-like activity of our materials.
Conclusion
In summary, we successfully developed new and effective magnetic photo-Fenton-like catalysts based on the immobilization of Fe2O3 nanoparticles on the surface of CuFe2O4 particles. These core-shell materials exhibited excellent activities for the degradation of methylene blue in the presence of oxalic acid under both UVA light and visible light. Among them, the CuFe2O4/Fe2O3-2 catalyst shows the best catalytic performance, which can be assigned not only to the highest surface Fe3+ content of this sample but also to its high specific surface area and the presence of rod-like Fe2O3 particles immobilized on the surface of its magnetic CuFe2O4 cores. Moreover, owing to the presence of the magnetic CuFe2O4 cores, our core-shell catalysts can be easily separated from the solution and recovered by using a magnet, making them suitable for practical applications.

Declarations
Ethics approval and consent to participate
Not applicable.
Consent for publication
Not applicable.
Availability of data and materials
Not applicable.
"year": 2021,
"sha1": "336b9b8cbcabac0059acd417624003aefec3d8b6",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/21870764.2021.1939241?needAccess=true",
"oa_status": "GOLD",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "d9c6e28d0d5eaeda8f00d209c8dcd5a8c7c9584c",
"s2fieldsofstudy": [
"Chemistry",
"Environmental Science",
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
Confinement-deconfinement transitions for two-dimensional Dirac particles
We consider a two-dimensional massless Dirac operator coupled to a magnetic field $B$ and an electric potential $V$ growing at infinity. We find a characterization of the spectrum of the resulting operator $H$ in terms of the relation between $B$ and $V$ at infinity. In particular, we give a sharp condition for the discreteness of the spectrum of $H$ beyond which we find dense pure point spectrum.
Introduction
Graphene, a two-dimensional lattice of carbon atoms arranged in a honeycomb structure, has attracted great attention in the last few years due to its unusual properties [1,13]. The dynamics of its low-energy excitations (the charge carriers) can be described by a two-dimensional massless Dirac operator $D_0$ [23,6], where the speed of light $c$ is replaced by the Fermi velocity, $v_F \sim 10^{-2}c$. A remarkable property of these Dirac particles is their lack of localization in the presence of electric potential walls (i.e., potentials $V$ with $V(x) \to \infty$ as $|x| \to \infty$). Indeed, if we assume that $V$ is rotationally symmetric and of 'regular growth', the spectrum of the operator $D_0 + V$ equals the whole real line and is absolutely continuous [21,15,4,17]. It is also known, at least in three dimensions, that a much larger class of potentials growing at infinity do not produce eigenvalues [22,9].
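For concreteness, a minimal sketch of this operator in standard notation (our transcription; the source's conventions, e.g. units with $v_F = 1$, may differ):

```latex
\[
  D_0 \;=\; v_F\,\boldsymbol{\sigma}\cdot\mathbf{p}
      \;=\; v_F\begin{pmatrix} 0 & p_1 - i p_2 \\ p_1 + i p_2 & 0 \end{pmatrix},
  \qquad p_j = -i\,\partial_j .
\]
```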
One way to localize Dirac particles (in the sense that the Hamiltonian has nontrivial discrete spectrum) is through inhomogeneous magnetic fields when they are asymptotically constant [3,11] as well as when they grow to infinity. Consider, for instance, a magnetic field B = curl A with B(x) → ∞ as |x| → ∞ and denote by D A the corresponding Dirac operator coupled to B. It is known that the spectrum of D A is discrete away from zero and zero is an isolated eigenvalue of infinite multiplicity (see Section 3).
In this article we consider two-dimensional massless Dirac operators coupled to both an electric potential $V$ and a magnetic field $B$. We study the combination of the two effects described above: the deconfinement effect associated with $V$ and the confinement effect associated with $B$.
Before presenting our main results let us first discuss this problem assuming that $V$ and $B$ are sufficiently regular, positive, rotationally symmetric functions. In this case, the Dirac operator admits an angular momentum decomposition $D_A + V = \bigoplus_{j\in\mathbb{Z}} h_j$ and its spectrum, $\sigma(D_A+V)$, satisfies $\sigma(D_A+V) = \overline{\bigcup_{j\in\mathbb{Z}}\sigma(h_j)}$. Let
$$A(r) = \frac{1}{r}\int_0^r B(s)\,s\,\mathrm{d}s$$
be the modulus of the magnetic vector potential in the rotational gauge. It is easy to show, on the one hand, that if $A(r) \to \infty$ as $r \to \infty$ and
$$\lim_{|x|\to\infty} V(|x|)/A(|x|) < 1, \tag{1}$$
the spectrum of $h_j$ is discrete, for each $j \in \mathbb{Z}$ (see Proposition 1 in the appendix for the precise statement). On the other hand, as opposed to the non-relativistic case, it is known [18, Proposition 2] that if $V(r) \to \infty$ as $r \to \infty$ and
$$\lim_{|x|\to\infty} V(|x|)/A(|x|) > 1, \tag{2}$$
the spectrum of each $h_j$ equals the whole real line and is purely absolutely continuous. This phenomenon was recently discussed from the physical point of view in [7]. In that article a device was proposed to control the localization properties of particles in graphene by manipulating the electro-magnetic field at infinity, i.e., far away from the sample.
Clearly, if condition (2) is satisfied, the spectrum of the full operator $H := D_A + V$ is also absolutely continuous and equals $\mathbb{R}$. Conversely, one has pure point spectrum if condition (1) holds. It is, however, unclear whether the eigenvalues of $H$ accumulate. Assume that $V(x), B(x) \to \infty$ as $|x| \to \infty$. One may expect that when the quotient $|V(|x|)/A(|x|)|$ is sufficiently small, for large $|x|$, the main effect of the electric potential is to remove the zero modes of $D_A$, yielding purely discrete spectrum for $H$. However, as this quotient grows, the eigenvalues of the $h_j$ might accumulate, creating points in the essential spectrum of $H$.
The aim of our work is to shed some light on the spectrum of H in terms of the relation between B and V at infinity. We emphasize that most of our results do not assume rotational symmetry. In fact, besides some regularity conditions on B and V we only require that the potential V grows subexponentially fast.
Let us describe our results disregarding technical assumptions (for the precise statements see Section 2). We show (see Theorem 1) that the spectrum of $H$ is discrete if
$$\limsup_{|x|\to\infty} \frac{V^2(x)}{|2B(x)|} < 1, \tag{3}$$
and moreover that this condition is sharp in the following sense: if the quotient in (3) converges to 1 along a sequence $(x_n)_{n\in\mathbb{N}} \subset \mathbb{R}^2$ with $\lim_{n\to\infty}|x_n| = \infty$, the essential spectrum of $H$, $\sigma_{\mathrm{ess}}(H)$, is not empty. In fact, we prove (see Theorem 3) that zero belongs to the essential spectrum of $H$ if, for some natural number $k$, the quotient $V^2(x_n)/|2B(x_n)|$ converges to $k$ sufficiently fast, as $n \to \infty$. In addition, we find (see Corollary 1) that the essential spectrum of $H$ covers the whole real line if there is a continuous path $\gamma: \mathbb{R}_+ \to \mathbb{R}^2$ with $|\gamma(t)| \to \infty$ as $t \to \infty$ such that $V^2(\gamma(t))/|2B(\gamma(t))|$ converges to infinity with moderate speed (see Remark 5), as $t \to \infty$.
In order to get a better picture of our results consider the example when $V(x) = V_0|x|^t$ and $B(x) = B_0|x|^s$ for some constants $V_0, B_0, t > 0$ and $s \geq 0$. In this case $A$, in the rotational gauge, satisfies $|A(|x|)| = B_0|x|^{s+1}/(s+2)$.
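As a quick check, this modulus follows from the definition of $A(r)$ above by a one-line integration:

```latex
\[
  A(r) \;=\; \frac{1}{r}\int_0^{r} B_0\,u^{s}\cdot u\,\mathrm{d}u
        \;=\; \frac{B_0}{r}\cdot\frac{r^{\,s+2}}{s+2}
        \;=\; \frac{B_0\,r^{\,s+1}}{s+2}.
\]
```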
From Theorems 1 and 3 and Corollary 1 (see Example 1) we conclude the following: if we fix $B$ and increase the strength of $V$ (at infinity), we observe a transition from purely discrete (c) to purely absolutely continuous spectrum (b), passing through dense pure point spectrum (e). In other words, we observe a transition from a strongly confined system, in the sense that the energy required to 'bring a particle to infinity' is not finite, to a completely deconfined system. Between these two regimes there is another one (e) belonging to a (presumably) weaker form of confinement.
The organization of this article is as follows: In the next section we state our main results precisely. We recall some basic facts about magnetic Dirac operators in Section 3. In Section 4 we prove some useful commutator estimates used in the proof of Theorem 1 which is given in Section 5. Theorem 2 is proven in Section 6 and the proofs of Theorems 3 and 4 are given in Section 7. The main text is followed by an appendix containing some auxiliary results.
Main results
Assume that $V$ and $B$ are continuous functions on $\mathbb{R}^2$. We define the two-dimensional massless Dirac operator coupled to a magnetic field $B$ on $\mathcal{H} := L^2(\mathbb{R}^2, \mathbb{C}^2)$ a priori as $D_A := \boldsymbol{\sigma}\cdot(-i\nabla - A)$ on $C_0^\infty(\mathbb{R}^2,\mathbb{C}^2)$. Similarly we define $H := D_A + V$. In view of [2], $D_A$ and $H$ are essentially self-adjoint. We denote the self-adjoint extensions of $D_A$ and $H$ by the same symbols and their domains by $\mathcal{D}(D_A)$ and $\mathcal{D}(H)$ respectively.
Remark 1. A similar result was obtained in [19]. However, there the statement was only proved when the limit superior in (8) is strictly smaller than 1/4 instead of 1. Our proof is quite different from the one given in [19]: we split the analysis on the spaces $\ker D_A$ and $(\ker D_A)^\perp$ and estimate the cross terms with the commutator bounds derived in Section 4.
The next two theorems state that the constant 1 in (8) above is in fact sharp.
Remark 2. Note that conditions (9) and (11) are equivalent to for some constant c > 0.
In order to state the next two results we use the following definition: we say that a function $f: \mathbb{R}^2 \to \mathbb{R}$ varies with rate $\nu \in [0,1]$ at infinity if there are constants $R > 1$ and $C > 0$ such that, for any $\alpha$, the corresponding estimate holds. Clearly, if $f$ varies with rate $\nu \in [0,1]$ then it also varies with rate $\nu'$ at infinity for all $\nu' \in [0,\nu]$. Note also that power functions of $|x|$ with positive power vary with any rate $\nu \in [0,1]$.
Remark 4. From this definition it follows that a corresponding lower bound holds for some constant $c > 0$. Note, however, that the converse is not in general true.
We also remark that, under somewhat different assumptions, Theorem 3 (with $k = 1$) improves the statement of Theorem 2. The proof of Theorem 2 is based on the construction of an infinite dimensional subspace of the operator domain on which $H$ stays bounded. This space is constructed using the zero-modes of the operator $D_A$. The proof of Theorem 3 is, in contrast, based on the construction of a Weyl sequence of functions localized around the points $(x_n)_{n\in\mathbb{N}}$, the points where the potential $V$ has the same value as the $k$-th Landau level of the magnetic Dirac operator with constant field $B(x_n)$. This idea of $V$ crossing through the (local) Landau levels can also be used in the case when $V^2(x)/2B(x) \to \infty$ as $|x| \to \infty$ (along some sequence) to obtain the following result.
Remark 5. Due to (16) and (19) we have that |V (x n )| → ∞ and moreover for some constant c > 0. Observe that conditions (17) and (18) give an upper bound for the growth of the ratio V 2 (x n )/|2B(x n )|.
Note in addition that the theorem is also applicable for bounded magnetic fields. In this case condition (18) can only be satisfied for $\nu > 0$.
Remark 6. It is easy to see that the regularity conditions on $V$ and $B$ in Theorems 3 and 4 can be weakened to hold only outside some compact set $K \subset \mathbb{R}^2$. Inside $K$ it is sufficient that these functions are bounded. The same holds true for Theorem 1 (compare with Lemma 7).
The theorem above can also be used to find other points in the essential spectrum of $H$. To this end it suffices to find a sequence $(x_n)_{n\in\mathbb{N}}$ satisfying the conditions of the theorem for $V - E$ instead of $V$. As an example we get the following result.
For completeness we give a proof of this corollary.
Proof. Let $E \in \mathbb{R}$. Due to (20) and the continuity of $V^2/B$ along $\gamma$ we find a constant $N > 0$ and a sequence $(x_n)_{n\in\mathbb{N}}$ on the range of $\gamma$ with $|x_n| \to \infty$, satisfying the required estimates. Since $|V(x_n)| \to \infty$ as $n \to \infty$, it is clear that the conditions (17), (18), and (19) are also satisfied for $V(x_n) - E$ instead of $V(x_n)$. This implies the claim.
Remark 7. As we already mentioned in the Introduction, one can combine Corollary 1 and Proposition 1 from the Appendix for functions $V$ and $B$ that are rotationally symmetric. In this case one obtains that $\sigma(H) = \mathbb{R}$ is pure point, i.e., $H$ has dense pure point spectrum (see Example 1 below). Note that the same type of spectral phenomenon occurs for $\sigma(D_A)$ when $B$ is rotationally symmetric and decays at infinity but $B(x)|x| \to \infty$ as $|x| \to \infty$ [12] (see also [20, pp. 208]).
Let us now apply Corollary 1 to some particular cases.
Then $|\nabla V|$ and $|\nabla B|$ vary with any rate $\nu \in [0,1]$ at infinity, and (17) is satisfied for any sequence if and only if $|x|^{2t-s-1} \to 0$ as $|x| \to \infty$, which is the case whenever $2t < s+1$. For these exponents, (18) and (19) are clearly fulfilled for any sequence which tends to infinity. Hence, Corollary 1 states that for $0 \leq s < 2t < s+1$ we have $\sigma(H) = \mathbb{R}$. Furthermore, in view of Proposition 1, we get that the spectrum in this case is pure point.
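A short arithmetic check of this exponent window, using $|A(|x|)| = B_0|x|^{s+1}/(s+2)$ from Example 1:

```latex
\[
  \frac{V^2(x)}{2B(x)} \;=\; \frac{V_0^2}{2B_0}\,|x|^{\,2t-s}
  \;\longrightarrow\; \infty
  \;\Longleftrightarrow\; 2t > s,
  \qquad
  \frac{V(x)}{A(|x|)} \;=\; \frac{V_0\,(s+2)}{B_0}\,|x|^{\,t-s-1}
  \;\longrightarrow\; 0
  \;\Longleftrightarrow\; t < s+1 .
\]
```

Since $2t < s+1$ in particular forces $t < s+1$, both requirements coexist precisely on the window $0 \leq s < 2t < s+1$.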
Supersymmetry and zero modes
In this section we recall some basic facts about magnetic Dirac operators which will be useful in our proofs. As we mentioned in Section 2, the operator $D_A$ has an off-diagonal structure in terms of operators $d$ and $d^*$, where $p_j = -i\partial_j$, $j = 1, 2$, is the momentum operator in the $j$-th direction. The operators $d, d^*$ satisfy the commutation relation $[d, d^*] = 2B$. We now investigate further the relation between $dd^*$ and $d^*d$. We note that the kernel of $D_A$ fulfills $\ker(D_A) = \ker(d) \oplus \ker(d^*)$. Due to the matrix structure of $D_A$, $dd^*$ and $d^*d$ are related through the unitary operators introduced below. We now recall some results concerning the structure of the kernel of $D_A$. For a given $B \in C(\mathbb{R}^2,\mathbb{R})$ one finds a weak solution $\phi \in C^1(\mathbb{R}^2,\mathbb{R})$ of the equation $\Delta\phi = B$; see, e.g., [5], where a much larger class of fields $B$ is considered. (Using standard elliptic regularity it is easy to see that if $B$ belongs to some Hölder class, the solution $\phi \in C^2(\mathbb{R}^2,\mathbb{R})$.) A direct computation then yields the zero modes of $D_A$ in terms of $\phi$. Moreover, it is known [16] that the kernel is infinite dimensional whenever condition (28) holds. With these observations we can easily go through the following example that will be useful later on.
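For orientation, a standard way to write this supersymmetric structure (a sketch in common conventions; the source's signs and the ordering of $d$ and $d^*$ may differ):

```latex
\[
  D_A \;=\; \boldsymbol{\sigma}\cdot(p - A) \;=\;
  \begin{pmatrix} 0 & d \\ d^{*} & 0 \end{pmatrix},
  \qquad
  d \;=\; (p_1 - A_1) - i\,(p_2 - A_2), \qquad p_j = -i\,\partial_j,
\]
\[
  d^{*}d \;=\; (p-A)^2 - B, \qquad dd^{*} \;=\; (p-A)^2 + B,
  \qquad [d, d^{*}] \;=\; 2B .
\]
```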
Remark 8. If we assume further that $B(x) \to \infty$ as $|x| \to \infty$, we have by [8] that the spectrum of $D_A$ is discrete away from zero. Since $B$ fulfills (28) in this case, this implies that $\sigma_{\mathrm{ess}}(D_A) = \{0\}$.
Useful commutator estimates
We denote by $P_0$ the orthogonal projection onto $\ker(D_A)$ and set $P_0^\perp := 1 - P_0$. In this section we show some commutator bounds between the electric potential $V$, $P_0$, and the sign of $D_A$, denoted by $\mathrm{sgn}(D_A)$. We use these bounds in Section 5 to show Theorem 1.
Throughout this section we use the following notation: for $0 < V \in C^1(\mathbb{R}^2,\mathbb{R})$ such that $\|\nabla V/V\|_\infty < \infty$ we define the operator $T$, which formally equals $[D_A, V^{-1}]V$, where $[\cdot,\cdot]$ is the symbol for the commutator. We define, for $z \in \varrho(D_A)$, the maps $\Theta_r(z)$ and $\Theta_l(z)$. Proof. For $z \in \varrho(D_A)$ we get the corresponding resolvent relation on $\mathcal{H}$. Since $\|T R_A(z)\| < 1$, we get that $\pm 1 \in \varrho(T R_A(z))$ by the Neumann series. Therefore, we get the desired expression (32) and the estimate on $\Theta_r(z)$. Equation (33) and the bound for $\Theta_l(z)$ follow similarly.
As stated in Lemma 7 in the appendix, we can reduce the proof of Theorem 1 to potentials $V \in C^1(\mathbb{R}^2,\mathbb{R})$ and magnetic fields $B \in C(\mathbb{R}^2,\mathbb{R})$ satisfying the following conditions: there exist constants $\delta, \eta \in (0,1)$ such that conditions (35)-(37) hold for all $x \in \mathbb{R}^2$. Due to Lemma 1 we see that under these assumptions 0 is an isolated eigenvalue of $D_A$ of infinite multiplicity and that $\sigma(D_A) \setminus \{0\}$ is discrete. Moreover, we find a spectral gap $(-2\beta_0, 0) \cup (0, 2\beta_0) \subset \varrho(D_A)$. Lemma 3. Let $V \in C^1(\mathbb{R}^2,\mathbb{R})$, $B \in C(\mathbb{R}^2,\mathbb{R})$, and $A \in C^1(\mathbb{R}^2,\mathbb{R}^2)$ with $B = \mathrm{curl}\,A$. Assume further that the conditions (35)-(37) are fulfilled for $\delta \in (0, \tfrac12)$ and $\eta \in (0,1)$. Then: (a) the operators $[P_0^\perp, V^{-1}]V$ and $V[P_0^\perp, V^{-1}]$ are well-defined on $C_0^\infty(\mathbb{R}^2,\mathbb{C}^2)$ and extend to bounded operators on $\mathcal{H}$. The same holds true if we replace $P_0^\perp$ above by $P_0$.
Proof of Theorem 1
We note that the assumptions in Theorem 1 imply, by the continuity of $V$, that either $V(x) \to \infty$ as $|x| \to \infty$ or $V(x) \to -\infty$ as $|x| \to \infty$. We may assume without loss of generality that $V$ is positive at infinity. Similarly, it suffices to consider the case $B(x) \to \infty$ as $|x| \to \infty$, since otherwise we just have to change the roles of $d$ and $d^*$ in the proof.
In order to prove Theorem 1 it suffices to find a constant $c > 0$ such that (42) holds; see, e.g., [19]. Moreover, according to Lemma 7 we may assume that $V$ and $B$ fulfill the conditions (35)-(37), where $\eta \in (0,1)$ is some fixed constant and $\delta \in (0,1)$ can be chosen arbitrarily small.
Since $\sigma_{\mathrm{ess}}(D_A) = \{0\}$, it is convenient to show (42) by splitting $\varphi$ as the sum of $P_0\varphi$ and $P_0^\perp\varphi$. Using the bounds derived in Section 4 we can then estimate the cross terms.
Proof of Theorem 1. Let $\varphi \in C_0^\infty(\mathbb{R}^2,\mathbb{C}^2)$. We compute, using Lemma 3, and estimate each of the resulting terms separately. Observe that for any $\varepsilon \in (0,1)$ the corresponding elementary inequality holds. An application of Lemma 3 yields the first bound, where in the last equality we use (35). Further, by Lemma 3 (a), we obtain the analogous estimate. Therefore, in view of (43), (44), and (45), it suffices to show the claimed positivity for $\delta > 0$ small enough and some $\varepsilon \in (0,1)$; we set the constant $c_{\varepsilon,\delta}$ accordingly. In view of Lemma 1 we have the corresponding decomposition, where $\pi$ denotes the orthogonal projection onto $\ker(d)^\perp$. Using this, (22), and writing $\varphi = (\varphi_1, \varphi_2)^T$, we get the stated estimate. According to condition (37) and (23), the corresponding bound holds. In order to give a lower bound to $\|d\pi\varphi_1\|^2 - c_{\varepsilon,\delta}\|V\pi\varphi_1\|^2$ we will use the isometries $s, s^*$ given in (24). A simple computation yields the required identity. We note that $V[s^*, V^{-1}]$ is one of the components of the operator under consideration. Using the definition of the operator norm, we obtain the bound, where in the last step we use Lemma 4. Combining this with (49) and proceeding as in (48), we obtain the analogous estimate. Choosing $\varepsilon = 1 - \delta^{1/2}$ we get that $c_{\varepsilon,\delta} = 1 + O(\delta^{1/2})$ as $\delta \to 0$. This implies that, for $\delta > 0$ sufficiently small, the terms in (48) and (50) are positive. This concludes the proof of the theorem.
Proof of Theorem 2
In this section we prove Theorem 2 for $B > 0$. The case $B < 0$ can be done similarly. As we mentioned in Section 3, there is a function $\phi \in C^2(\mathbb{R}^2,\mathbb{R})$ satisfying $\Delta\phi = B$. We choose $A(x) = (-\partial_2\phi, \partial_1\phi)$ as the vector potential in the Hamiltonian $D_A$. A key element in our proof is that the space $X$ defined in (51) is infinite dimensional. Since $B > B_0$, we see that $X$ is a subspace of $\ker(d)$ (see (27)). Let us first state a technical result concerning this space, whose proof is given at the end of this section.
Lemma 5. Assume that the conditions of Theorem 2 are fulfilled. Let $\phi \in C^2(\mathbb{R}^2,\mathbb{R})$ be such that $B = \Delta\phi$. Then we have, for $\Omega \in X$: (a) $\Omega \in \mathcal{D}(d^*) \cap \mathcal{D}(V)$, together with the stated bounds. Proof of Theorem 2. We first show that the dimension of $X$ (defined in (51)) is infinite. To this end define $\tilde\phi = \phi - \tfrac12\ln(B)$. Thus, by the discussion in Section 3 (see (28)), the space $Y$ is infinite dimensional. The claim now follows since clearly $X$ and $Y$ are isomorphic. Let us define the subspace $W$. By Lemma 5 we have, for any $\Omega \in X$, the required bounds. Therefore, the map $X \ni \Omega \mapsto (d^*\Omega, -V\Omega)^T \in W$ is bijective and we conclude that $\dim W = \infty$.
Proof of Theorems 3 and 4
We roughly explain the main ingredients of the proofs of Theorems 3 and 4. Note first that it suffices to construct a sequence $(\psi_n)_{n\in\mathbb{N}} \subset \mathcal{D}(H)$ which converges weakly to zero such that $\|H\psi_n\|/\|\psi_n\| \to 0$ as $n \to \infty$ (a Weyl sequence). For simplicity we consider the case when $B(x) = B_0$. Recall that the corresponding magnetic Dirac operator $D_A$ has the (Landau) eigenvalues $l_n := \mathrm{sgn}(n)\sqrt{2|n|B_0}$, $n \in \mathbb{Z}$. Now assume that there is a sequence $(x_n)_{n\in\mathbb{N}}$ such that $V(x_n) = \sqrt{2nB_0} = l_n$ for all $n \in \mathbb{N}$ (this is a simplification of condition (16)). The bulk of the Weyl sequence consists of eigenfunctions $(\varphi_n)_{n\in\mathbb{N}}$ of $D_A$ centered around $x_n$ and with eigenenergies $-l_n$. It is well known that these eigenfunctions are almost Gaussian-like localized. Due to this strong localization one has that $(V\varphi_n)(x) \approx V(x_n)\varphi_n(x) = l_n\varphi_n(x)$ in the sense of $L^2$. Therefore, we get the desired smallness, where the error terms will be controlled using the remaining assumptions of the theorems.
This consideration can also be extended to non-constant magnetic fields. In this case we construct the Weyl sequence using eigenfunctions of the magnetic Dirac operator with constant magnetic field $B(x_n)$. Such a local linearization can be well controlled using, again, that the Landau eigenfunctions are strongly localized.
Throughout this section we will assume without loss of generality that $B(x_n) \geq B_0$ is positive. In order to prepare the proof of Theorems 3 and 4 we introduce some notation: for a sequence $(x_n)_{n\in\mathbb{N}} \subset \mathbb{R}^2$ we set $V_n := V(x_n)$ and $B_n := B(x_n)$ and define the magnetic vector potentials $A_n$. We define the operators $d_n$ and $d_n^*$ through the corresponding relation. Let $(p_n)_{n\in\mathbb{N}}$ be a sequence of natural numbers. As already mentioned above, an important ingredient in our proof is that $2p_nB_n$ is the square of the $p_n$-th Landau level of the Dirac operator $D_{A_n}$. For all $n \in \mathbb{N}$ we define the corresponding eigenfunctions and note their properties; we have, for any $k \in \mathbb{N}$, the standard estimates (see e.g. [20, Section 7.1.3]). Next we define the localization functions. Let $\chi \in C_0^\infty(\mathbb{R}^2, [0,1])$ be such that $\chi(x) = 1$ for $|x| \leq 1$ and $\chi(x) = 0$ for $|x| \geq 2$, and rescale it by radii $r_n > 0$, which will be chosen in the proofs later on. Finally, we observe that since $\mathrm{curl}(A - A_n) = 0$ there exists a function $g_n \in C^2(\mathbb{R}^2,\mathbb{R})$ such that $\nabla g_n = A - A_n$ on $\mathbb{R}^2$.
We define the Weyl functions, for all $n \in \mathbb{N}$, in terms of these quantities; clearly, $(D_A + V)\psi_n$ then splits into four terms. In order to prove Theorems 3 and 4 we will choose $r_n > 0$ such that the $\psi_n$ are linearly independent and each of the four terms tends to zero uniformly in $\psi_n$ as $n \to \infty$. In the next lemma we estimate the norms $\|\psi_n\|$. Lemma 6. Assume that $V_n^2/(2p_nB_n) \to 1$ as $n \to \infty$. Then, for all $n \in \mathbb{N}$ large enough, the stated norm estimate, involving the integral $\int s^{p_n} e^{-s}\,\mathrm{d}s$, holds.
Proof of Theorem 3. In this proof we use Lemma 6 with $p_n = k$ ($n \in \mathbb{N}$), where $k$ is some fixed natural number, and choose the localization radii accordingly. Since $r_n \to 0$ as $n \to \infty$ we can assume that the $\psi_n$ have disjoint supports, for otherwise we can extract a subsequence satisfying this property. In view of Remark 4 we see that, as $n \to \infty$, the error term involving $\int s^{k} e^{-s}\,\mathrm{d}s$ tends to 0.
Then, according to Lemma 6, there exists an $N > 0$ such that the stated bound holds for all $n > N$. Next we estimate the corresponding terms of (56). A simple calculation gives the first bound. For the last two terms in (56) we use (62) and (65) to get, for $\nu \in [0,1]$, the corresponding estimates. This implies by (18) that, for $\nu \in [0,1]$, the ratio $r_n/|x_n|^\nu \to 0$ as $n \to \infty$. Hence, on the one hand, we see that the supports of the $\psi_n$ are mutually disjoint (at least for a subsequence). On the other hand, since $|\nabla V|$ and $|\nabla B|$ vary with rate $\nu \in [0,1]$, this enables us to use the mean value theorem, where we use again (62) combined with (65) and (64). Analogously, we get the remaining estimate. Hence, in view of (17) we get that $\|(D_A + V)\psi_n\|/\|\psi_n\| \to 0$ as $n \to \infty$, which proves the theorem.
Appendix A. Results for rotationally symmetric potentials

In this appendix we study some properties of the Dirac operator $H = D_A + V$ when $V$ and $B$ are rotationally symmetric. Let $A$ be given by the rotational gauge, where $r = |x|$. We can decompose $H$ into a direct sum of operators on the $j$-th angular momentum eigenspaces, i.e., there is a unitary map $U$ such that $UHU^* = \bigoplus_{j\in\mathbb{Z}} h_j$, where $v(|x|) := V(x)$ and $m_j = j + 1/2$, $j \in \mathbb{Z}$ (see e.g. [20]). Since $H$ is essentially self-adjoint on $C_0^\infty(\mathbb{R}^2,\mathbb{C}^2)$ we deduce that $h_j$ is also essentially self-adjoint on $[U C_0^\infty(\mathbb{R}^2,\mathbb{C}^2)]_j \subset L^2(\mathbb{R}_+,\mathbb{C}^2;\mathrm{d}r)$ for any $j \in \mathbb{Z}$.
Proposition 1. Assume that $A \in C^1(\mathbb{R}_+,\mathbb{R})$ and $v \in C(\mathbb{R}_+,\mathbb{R})$ are such that the conditions (68) and (69) are fulfilled. Then, for all $j \in \mathbb{Z}$, the spectrum of $h_j$ is purely discrete. In particular, the spectrum of $D_A + V$ is pure point.
Proof. Since $\limsup_{r\to\infty} v^2(r)/A^2(r) < 1$, we find constants $\mu > 1$ and $r_0 > 0$ such that $A^2(r) > \mu v^2(r)$ for $r > r_0$. We pick $\lambda \in (\mu^{-1}, 1)$ and define, for $j \in \mathbb{Z}$, the potential function $w_j$ on $\mathbb{R}_+$. For any $j \in \mathbb{Z}$ the matrix-valued potential $w_j$ is real-valued and diagonal. Moreover, the diagonal entries of $w_j$ are bounded from below and converge to $+\infty$ as $r \to \infty$ due to (68) and (69). As a consequence we get, for every $j \in \mathbb{Z}$, a self-adjoint operator $s_j$ with form core $[U C_0^\infty(\mathbb{R}^2,\mathbb{C}^2)]_j$. Moreover, $s_j$ is bounded from below and has purely discrete spectrum. In order to show that $h_j$ also has purely discrete spectrum we observe that, for any $\psi \in [U C_0^\infty(\mathbb{R}^2,\mathbb{C}^2)]_j$, the quadratic form estimate holds, which is equivalent to $h_j^2 \geq (1-\lambda)s_j$ for $j \in \mathbb{Z}$. An application of the min-max principle (see [14, Section XIII.1]) gives that $h_j^2$ has purely discrete spectrum. This implies the claim for $h_j$.
Next we define the magnetic field $\hat B$. In view of (76) and (74), $\hat V$ and $\hat B$ satisfy conditions (35)-(37). Since the function $B - \hat B$ has compact support in $\mathbb{R}^2$, we find a function $G \in C^1(\mathbb{R}^2,\mathbb{R}^2)$ such that $\|G\|_\infty < \infty$ and $B - \hat B = \mathrm{curl}\,G$ on $\mathbb{R}^2$. We define $\hat A(x) := A(x) - G(x)$ for $x \in \mathbb{R}^2$. By construction we know that $(D_A + V) - (D_{\hat A} + \hat V)$ is bounded on $L^2(\mathbb{R}^2,\mathbb{C}^2)$. Hence, the resolvent difference of $D_A + V$ and $D_{\hat A} + \hat V$ is compact if one of the resolvents is itself compact. From this the claim follows.
"year": 2012,
"sha1": "4e988956d94189dcbec6c4f9539811d72cf29909",
"oa_license": "implied-oa",
"oa_url": "https://doi.org/10.1016/j.jfa.2013.07.018",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "4e988956d94189dcbec6c4f9539811d72cf29909",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Mathematics"
]
} |
Facile synthesis of near-infrared responsive on-demand oxygen releasing nanoplatform for precise MRI-guided theranostics of hypoxia-induced tumor chemoresistance and metastasis in triple negative breast cancer
Background Hypoxia is an important factor that contributes to chemoresistance and metastasis in triple negative breast cancer (TNBC), and alleviating the hypoxic microenvironment can enhance anti-tumor efficacy and inhibit tumor invasion. Methods A near-infrared (NIR) responsive on-demand oxygen releasing nanoplatform (O2-PPSiI) was successfully synthesized by a two-stage self-assembly process to overcome hypoxia-induced tumor chemoresistance and metastasis. We embedded drug-loaded poly(lactic-co-glycolic acid) cores into an ultrathin silica shell attached with paramagnetic Gd-DTPA to develop a Magnetic Resonance Imaging (MRI)-guided NIR-responsive on-demand drug releasing nanosystem, where indocyanine green was used as a photothermal converter to trigger the oxygen and drug release under NIR irradiation. Results The O2-PPSiI nanoplatform could deliver oxygen and release it under NIR irradiation to relieve hypoxia, improving the therapeutic effect of chemotherapy and suppressing tumor metastasis. This smart design achieves the following advantages: (i) the O2 in this nanosystem can be precisely released by an NIR-responsive silica shell rupture; (ii) the dynamic biodistribution process of O2-PPSiI was monitored in real time and quantitatively analyzed via sensitive MR imaging of the tumor; (iii) O2-PPSiI could alleviate tumor hypoxia by releasing O2 within the tumor upon NIR laser excitation; (iv) the migration and invasion abilities of the TNBC tumor were weakened by inhibiting the process of EMT as a result of the synergistic therapy of NIR-triggered O2-PPSiI. Conclusions Our work proposes a smart tactic guided by MRI and presents a valid approach for the rational design of NIR-responsive on-demand drug-releasing nanomedicine systems for precise theranostics in TNBC. Supplementary Information The online version contains supplementary material available at 10.1186/s12951-022-01294-z.
Introduction
Triple-negative breast cancer (TNBC) accounts for almost 15-20% of all breast cancer cases and presents a poor prognosis due to its metastatic nature and high recurrence rate [1,2]. Owing to the lack of ER, PR and HER2, TNBC cannot benefit from the FDA-approved targeted therapies which have proved efficacious for the other breast cancer subtypes [3,4]. Systemic chemotherapy using paclitaxel (PTx) remains the first-line treatment of advanced TNBC, but its efficacy is limited by severe side effects and acquired drug resistance, leading to the failure of chemotherapy and to tumor migration and invasion [5,6]. The hypoxic tumor microenvironment has been reported to be involved in different tumorigenesis mechanisms of TNBC, such as invasion, immune evasion, chemoresistance, and metastasis [7]. Hypoxia is a pathophysiological characteristic of the tumor microenvironment that results from abnormalities in the tumor vasculature and a disproportion between oxygen provision and oxygen utilization in tumors [8,9]. Poor efficacy in hypoxic areas is often linked with the distance from cancer cells to supply vessels and with the permeability of the newly formed tumor vascular system [10-12]. Meanwhile, hypoxia upregulates the activity of P-glycoprotein in tumor cells, leading to drug resistance [13]. In addition, most chemotherapeutic agents, including PTx, induce cytotoxicity in proliferating cells, but hypoxic cancer cells tend to proliferate more slowly than normal tissue, making tumor cells tolerant to chemotherapy. What's more, hypoxia has increasingly emerged as a crucial microenvironmental factor in the regulation of tumor metastasis, accompanied by the activation of hypoxia-inducible transcription factors (HIFs), which can activate the process of epithelial-to-mesenchymal transition (EMT), resulting in tumor metastasis and leading to multidrug resistance [14-16]. Therefore, hypoxia is an important factor that contributes to TNBC metastasis and multidrug resistance to chemotherapy, and alleviating the hypoxic microenvironment can enhance the anti-TNBC efficacy of chemotherapy and also inhibit invasion and metastasis.
Tumor oxygenation can relieve hypoxia to achieve better therapeutic effects [17,18]. For example, producing oxygen in situ with catalysts that promote the degradation of endogenous hydrogen peroxide (H2O2) can relieve tumor hypoxia and strengthen the therapy [19-21]. In addition, improving intratumoral blood flow can also relieve tumor hypoxia [22,23]. However, the limited available H2O2 within the tumor, as well as the poor distribution of oxygenated red blood cells reaching the tumor vessels, often leads to insufficient intratumoral oxygen delivery and unsatisfactory tumor reoxygenation. Biocompatible perfluorocarbons (PFCs) have been applied as artificial blood substitutes for many years and have often been used in different nanocarriers to transport oxygen into tumors for alleviating tumor hypoxia [24-26]. However, O2 is physically dissolved in PFCs, and the release of oxygen from PFOB depends on diffusion along oxygen concentration gradients [27]. This property also makes it difficult to retain a high oxygen level in perfluorocarbon-based nanosystems for a long time in the natural state, which would impact the blood circulation time and the oxygen accumulation in the tumor. Ensuring both the stability of O2 transport and the rapidity of release is the key requirement for a PFC-based O2 carrier. It has been reported that near-infrared (NIR)-induced photothermal therapy (PTT) not only can kill the tumor [28-30] but also accelerates O2 release from PFC-based O2 carriers to mitigate tumor hypoxia with remarkably synergistic effects [31,32]. Therefore, sealing the oxygen in a nanosystem and releasing it by NIR may be a strategy for delivering O2 to tumors.
Hence, a near-infrared responsive on-demand oxygen releasing nanoplatform (O2-PPSiI) was established to deliver O2 to the tumor and trigger its release under NIR irradiation. Poly(lactic-co-glycolic acid) (PLGA) served as a core to load both the oxygen carrier perfluorooctyl bromide (PFOB) and the chemotherapy drug PTx. An ultrathin-walled silica shell was then coated onto it to seal the O2 in this nanosystem. Indocyanine green (ICG) served as a photothermal converter to trigger the rupture of the silica shell and the O2 release. Furthermore, in order to realize precise and controllable on-demand drug release, the non-invasive, radiation-free modality MRI and its T1 contrast agent, paramagnetic gadolinium (Gd3+) complexes, both widely used for breast imaging in the clinic, were applied to monitor the distribution of the nanosystem in real time. In addition, arginine-glycine-aspartic acid (RGD) peptide and urokinase plasminogen activator (uPA), whose receptors integrin αvβ3 and uPAR are overexpressed on human tumor cells [33,34], were decorated on the surface of O2-PPSiI to transport the drug into the tumor. On-demand drug and oxygen release was achieved with an NIR-induced photothermal effect to realize precise treatment against TNBC via synergistic chemotherapy, PTT, and hypoxia mitigation, without toxic side effects. In addition, NIR-triggered O2-PPSiI could inhibit the natural procedure of epithelial-mesenchymal transition (EMT) in TNBC and weaken its migration and invasion abilities (Scheme 1). This study offers a smart "Trojan Horse" strategy guided by Gd-enhanced MRI and provides a valid approach for the rational design of an NIR-responsive on-demand drug-releasing nanosystem for precise theranostics in TNBC.
Synthesis of O2-PPSiI
The synthesis of O2-PPSiI was described previously [17,35]. Briefly, 180 μL of liquid PFOB and 20 mg of PTX were mixed into 10 mL of acetone containing 70 mg of PLGA to form a clear solution. The solution was then added into a cetyltrimethylammonium bromide (CTAB) solution (0.2 g/40 mL), emulsified by ultrasonication in an ice bath for 10 min, and the acetone was then completely volatilized at room temperature. The PFOB/PTX@PLGA nanoparticles (PP) were thus obtained.
Next, the PP suspension was dispersed into 45 mL of distilled water and injected into a 100 mL round-bottom flask under an O2 atmosphere. Stirring at 200 rpm for 1 h allowed the nanoparticles to fully absorb oxygen. Then 5 mL of isopropanol was added with stirring for 30 min, followed by the addition of 0.5 mL TEOS and 0.01 mL APTES. After reacting for 12 h, the products were added into a dichloromethane solution containing 50 mg DL-menthol and reacted for another 12 h at 35 °C, and the O2-PFOB/PTX@SiO2 nanoparticles (O2-PPSiI) were collected by centrifugation. Afterwards, ICG, Gd-DTPA, and O2-PPSiI were mixed in an aqueous solution. After 12 h, uPA and RGD were linked to the surface of the silica shells with the catalysis of 1-ethyl-3-(3-dimethylaminopropyl) carbodiimide hydrochloride (EDC) and N-hydroxysuccinimide (NHS).
Oxygen storage and release of O2-PPSiI
To examine the stability of the oxygen storage in O2-PPSiI, the nanoparticles were dispersed in deoxygenated water and divided into seven samples for different times at room temperature. A portable oxygen meter (YSI-550A, YSI, USA) was employed to monitor the dissolved O2 concentrations in the solutions excited with an 808 nm NIR laser at a density of 1.5 W/cm2 for 6 min.
To examine the oxygen-release performance of O2-PPSiI, an ultrasound imaging device (Philips iU-Elite ultrasound system, USA) was used to visualize the process of oxygen release from O2-PPSiI, and images of the sample or the tumor before and after irradiation were obtained at a gain of 69% or 79%, a depth of 2.5 or 1.3 cm, and a mechanical index of 0.6.
Evaluation of cell migration and invasion
Cell migration was examined with a wound-healing assay. Briefly, MDA-MB-231 cells (2 × 10^6 cells/well) were seeded into 6-well plates. After culturing for 24 h, a scratch was made in the middle of the cell monolayer in each well. The medium was then removed, the cells were washed with PBS, and DMEM with 3% FBS was added. Next, the cells were exposed to the different treatment groups and incubated for 24 h. After the addition of Hoechst 33342 (1 μg/mL) for 30 min, images of cell migration were obtained using a fluorescence microscope.
Cell invasion was examined with a transwell assay. Briefly, a Boyden transwell chamber (Corning, USA) was coated with dissolved ECM gel and incubated for 4-8 h at 37 °C. MDA-MB-231 cells were suspended in DMEM and seeded into the Boyden chamber (5 × 10^5 cells/mL). A total of 500 μL of DMEM with 10% FBS was added to the lower chamber. Next, the different treatment agents at the same concentration were added into the chamber for 24 h at 37 °C. Thereafter, the cells that had invaded through the upper chamber were fixed with methanol, stained with crystal violet, and observed with an inverted optical microscope (Olympus, Japan).
Scheme 1 The rational design of the NIR-responsive on-demand drug releasing nanomedicine system to relieve tumor hypoxia, enhance chemotherapy and inhibit tumor metastasis
Establishment of MDA-MB-231 orthotopic xenografts
All animal studies were performed with the approval of the Animal Experimentation Ethics Committee of Jinan University. Female BALB/c nude mice, 4-5 weeks of age, were bought from Vitalriver Inc. (Beijing, China) and raised in a specific pathogen-free environment. MDA-MB-231 cells in logarithmic growth phase were digested and collected as a suspension (2 × 10^5 cells/40 μL), followed by injection into the fat pad of the third right breast to establish MDA-MB-231 orthotopic xenografts. After about 28 days of growth, mice whose tumors reached approximately 300 mm3 were included in the subsequent experiments. All mice were anesthetized with an intraperitoneal injection of 2% pentobarbital sodium (75 μL/200 g) before the examinations.
In vivo antitumor activity
To investigate the antitumor activity of O2-PPSiI, the included tumor-bearing mice were divided into 7 groups treated with saline, laser, PTX, O2-PPSiI, PPSiI (the nanoparticles without O2) irradiated with the NIR laser (simplified as "PPSiI + Laser"), O2-PSiI (the nanoparticles without PTX) irradiated with the NIR laser (simplified as "O2-PSiI + Laser"), and O2-PPSiI irradiated with the NIR laser (simplified as "O2-PPSiI + Laser"). All agents were injected through the caudal vein at an equivalent PTX concentration of 10 mg/kg, and the 808 nm NIR laser was applied at a density of 2 W/cm2 for 5 min. All groups were treated twice a week with an interval of 3 days, and the NIR laser irradiation was conducted 8 h after the intravenous injection, a time point precisely identified according to the MR imaging of O2-PPSiI in vivo.
The structural and functional MRI, including three-dimensional T2-weighted imaging (3D-CUBE T2WI), blood oxygenation level-dependent MRI (BOLD-MRI) and intravoxel incoherent motion diffusion-weighted imaging (IVIM-DWI) sequences, was performed to monitor the antitumor activity of O2-PPSiI before treatment and 7, 14 and 21 days after treatment with a 1.5 T Signa HDxt superconducting clinical magnetic resonance system (GE Medical, Milwaukee, USA). The 3D-CUBE T2WI was scanned with TR/TE, 2000/83.1 ms; FOV, 60 mm × 60 mm; matrix, 192 × 160; slice thickness/spacing, 1.5/0 mm; and the volume of each tumor was calculated through the volume-render program at the post-processing workstation (AW4.5, GE Healthcare). The BOLD-MRI was scanned with TR/TE, 235/3.9-104.7 ms; FOV, 50 mm × 50 mm; matrix, 192 × 128; slice thickness/spacing, 2.0/0.2 mm; and R2* was mapped through the Functool R2star program at the post-processing workstation (AW4.5, GE Healthcare). The IVIM-DWI was scanned with TR/TE, 3000/101.7 ms; FOV, 50 mm × 50 mm; matrix, 128 × 96; slice thickness/spacing, 2.0/0.2 mm; b values, 0, 25, 50, 75, 100, 150, 200, 400, 600, 800, 1000, 1200 and 1500 s/mm2; the diffusion-related parameter D and the perfusion-related parameter f were then mapped via the Functool MADC program at the post-processing workstation (AW4.5, GE Healthcare). Besides, the body weight of all mice in each group was recorded every two days. All results were normalized to their percentage difference, ΔX(%) = (X_i − X_base)/X_base × 100%, where i represents the different time points and X refers to the volume, R2*, D, f and body weight mentioned above. After 21 days, all mice in each group were sacrificed; the tumors were separated and weighed, and organs and blood samples were also collected for hematoxylin and eosin (H&E) staining and biochemical analysis.
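To make the IVIM post-processing concrete, the sketch below fits the standard bi-exponential IVIM model to a voxel's signal decay over the b-values listed above; the signal is synthetic and the parameter bounds are typical literature choices, not the Functool MADC defaults:

```python
import numpy as np
from scipy.optimize import curve_fit

def ivim_model(b, f, d_star, d):
    """Bi-exponential IVIM model: S(b)/S0 = f*exp(-b*D*) + (1-f)*exp(-b*D)."""
    return f * np.exp(-b * d_star) + (1.0 - f) * np.exp(-b * d)

def fit_ivim(b_values, signal):
    """Fit perfusion fraction f and diffusion coefficient D from one voxel."""
    s0 = signal[b_values == 0].mean()
    popt, _ = curve_fit(ivim_model, b_values, signal / s0,
                        p0=[0.1, 0.01, 0.001],
                        bounds=([0.0, 0.003, 0.0], [0.5, 0.1, 0.003]))
    f, d_star, d = popt
    return f, d

# Synthetic voxel signal over the protocol's b-values (s/mm^2):
b = np.array([0, 25, 50, 75, 100, 150, 200, 400,
              600, 800, 1000, 1200, 1500], dtype=float)
s = 1000.0 * ivim_model(b, f=0.15, d_star=0.02, d=0.0009)
f, d = fit_ivim(b, s)
print(f"f = {f:.2f}, D = {d * 1e3:.2f} x10^-3 mm^2/s")
```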
Statistical analysis
All data are presented as mean ± standard deviation (SD). Differences among groups were evaluated with one-way analysis of variance (ANOVA) and the least significant difference (LSD) t-test. Pearson correlation analyses between the MRI-derived parameters and the expression of E-cadherin, vimentin, and Snail-Slug were performed. A difference of P < 0.05 (*) or P < 0.001 (**) was considered statistically significant. All statistical analyses were completed using the SPSS software package (Version 14.0, SPSS Inc., Chicago, IL, USA).
Facile synthesis and characterization of the O2-PPSiI nanosystem
The O2-PPSiI nanosystem was chemically synthesized by a two-stage self-assembly process: the PFOB core served as the oxygen carrier and was then encapsulated into an ultrathin-walled silica shell (Fig. 1A). In the first stage, the PFOB core (i.e., the core without the silica coating) was synthesized via an emulsion/solvent-evaporation method [35], and PTx was mixed into PLGA to form the PFOB/PTX@PLGA (abbreviated as PP) nanocapsule. The PP nanocapsule was then used as the carrier to absorb O2. In the second stage, the ultrathin-walled silica shell was synthesized by hydrolysis and condensation of TEOS. The silica shell was heterogeneous in nature, and thus biocompatible DL-menthol was used to prevent oxygen release from the nanosystem. The loading efficiency of PTX in O2-PPSiI was about 20%.
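For reference, the ~20% figure corresponds to the usual drug-loading definition; a minimal sketch with hypothetical masses (the 14 mg recovered-drug value is an assumption chosen only to reproduce a 20% loading, not a measured quantity):

```python
def loading_efficiency(drug_in_np_mg, np_total_mg):
    """Drug loading content (%) = drug mass inside nanoparticles / total NP mass."""
    return 100.0 * drug_in_np_mg / np_total_mg

def encapsulation_efficiency(drug_in_np_mg, drug_fed_mg):
    """Encapsulation efficiency (%) = encapsulated drug / drug initially fed."""
    return 100.0 * drug_in_np_mg / drug_fed_mg

# Hypothetical example: 20 mg PTX fed, 14 mg recovered inside 70 mg of NPs.
print(f"Loading: {loading_efficiency(14.0, 70.0):.0f} %, "
      f"Encapsulation: {encapsulation_efficiency(14.0, 20.0):.0f} %")
```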
Transmission electron microscopy (TEM) images presented the morphology of O2-PPSi, which showed a typical spherical shell/core structure (Fig. 1B) with an average diameter of 203 nm (Fig. 1C). Compared to PP (−8.8 mV), the zeta potential of O2-PPSi rose to +22.9 mV (Fig. 1D), owing to the formation of the aminated silica shell on the PP surface. The elemental mapping images of the O2-PPSi nanoparticles showed the presence of Si and F elements (Fig. 1E), which further verified the loading of PFOB and the coating of the silica shell. Benefiting from the abundant amino and silicon hydroxyl groups on the surface, the tumor-targeting ligands uPA and RGD were covalently modified onto the surface of the O2-PPSi nanoparticles; the photothermal agent ICG and the MR contrast agent Gd-DTPA (Gd) were also attached by electrostatic interaction. The morphology of the resulting nanosystem (named O2-PPSiI) was also observed by TEM and scanning electron microscopy (SEM). The X-ray photoelectron spectroscopy (XPS) and Fourier transform infrared spectroscopy (FTIR) results further proved that O2-PPSiI was successfully synthesized. Silicon and gadolinium peaks were seen in the XPS survey spectra of the O2-PPSiI surfaces, suggesting that the silica shell and Gd-DTPA were successfully coated and modified on the shell of O2-PPSiI (Fig. 1I). The high-resolution Gd 4d XPS spectra for O2-PPSiI inserted in Fig. 1I revealed characteristic peaks centered at 142.8 eV and 153.7 eV that could be attributed to Gd 4d5/2 and Gd 4d3/2, suggesting the existence of Gd(III) derived from Gd-DTPA [36]. A satellite peak at 148.2 eV in the Gd 4d XPS spectra may derive from the coordinate bond between Gd(III) and DTPA. The XPS spectrum of C 1s of O2-PPSi is shown in Additional file 1: Fig. S1A; the peak components at 284.5 eV and 286.6 eV were assigned to the C-C and C-N groups, respectively (Additional file 1: Fig. S1B). The chemical structure of O2-PPSiI was analyzed using FTIR. Versus the FTIR spectrum of O2-PPSi, the broad band at ≈1600 cm−1 resulted from the C=N band, and the peaks at 1414 cm−1 represent the vibrational stretching of the C=C groups (Fig. 1J), both of which are from the ICG. The spectrum of DTPA exhibited the characteristic asymmetric and symmetric carbonyl (C=O) stretches of the anhydride at 1738 cm−1. Meanwhile, a characteristic UV absorption peak of ICG emerged at 808 nm after the ICG was grafted on the surface of O2-PPSi (Additional file 1: Fig. S2), which matched the wavelength of the NIR laser applied for PTT. Versus O2-PPSi, the color change of O2-PPSiI indicated that the ICG was coated on the surface (Additional file 1: Fig. S3).
We also evaluated the magnetic properties of O2-PPSiI using T1-weighted imaging (T1WI) and T1-mapping MRI, and the results showed that the T1WI signal of the nanosystem was linear and concentration-dependent. The T1 relaxivity (r1) of O2-PPSiI was found to be 27.812 mM−1 s−1 (Additional file 1: Fig. S4), dramatically superior to free Gd-DTPA (4-5 mM−1 s−1), indicating a favorable T1WI contrast effect of O2-PPSiI. These results demonstrated that O2-PPSiI could enhance T1 positive contrast. Therefore, we further examined the accumulation of O2-PPSiI in vivo by MRI. The accumulation of O2-PPSiI in the tissue was quantitatively evaluated by the decrease of the T1 value (longitudinal relaxation time, normalized as ΔT1 to the baseline). The injected O2-PPSiI selectively accumulated in the tumor and reached its maximum at 8 h (ΔT1 26.28%) after systemic administration in vivo (Fig. 1K). However, the accumulation of Gd-DTPA showed no significant selectivity between the tumor and normal muscle tissue; it reached its maximum at 30 min (ΔT1 18.45%) but recovered to baseline 2 h after injection (Fig. 1L). These results demonstrated that O2-PPSiI could enhance the T1 positive contrast of Gd-DTPA and possessed tumor-targeting ability against TNBC in vivo.
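The relaxivity itself comes from a linear fit of the relaxation rate against concentration; a minimal sketch with an illustrative phantom series (the concentrations and T1 values below are assumptions, not our measured data):

```python
import numpy as np

def relaxivity_r1(conc_mM, t1_ms):
    """r1 (mM^-1 s^-1) from the slope of R1 = 1/T1 vs. contrast-agent concentration."""
    r1_rates = 1000.0 / np.asarray(t1_ms)  # convert T1 in ms to R1 in s^-1
    slope, intercept = np.polyfit(np.asarray(conc_mM), r1_rates, 1)
    return slope

# Illustrative phantom series (concentration in mM, measured T1 in ms):
conc = np.array([0.02, 0.05, 0.10, 0.20])
t1 = np.array([980.0, 680.0, 420.0, 240.0])
print(f"r1 ~ {relaxivity_r1(conc, t1):.1f} mM^-1 s^-1")
```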
Photothermal ability of the O2-PPSiI nanosystem
The photothermal ability of O2-PPSiI was then examined. The absorption of O2-PPSiI at 750-850 nm showed obvious concentration dependence (Fig. 2A). The extinction coefficient of O2-PPSiI at 808 nm, determined across different concentrations, was 82.16 L g−1 cm−1 (Additional file 1: Fig. S5), indicating that the O2-PPSiI nanosystem has high photothermal-conversion performance due to the ICG modification. The photothermal-conversion efficiency (η) of the O2-PPSiI nanosystem was determined as 45.45% based upon the data analysis in Fig. 2B and C. To estimate the photothermal-conversion performance of O2-PPSiI, it was stimulated by an 808 nm NIR laser with increasing power densities (0.5, 1.0, 1.5, and 2 W cm−2) at a fixed concentration of 300 µg mL−1. As shown in Fig. 2D, the temperature of O2-PPSiI increased by 35 °C (reaching 61 °C) over 3 min of irradiation (1.0 W cm−2), and thermal images of this process were documented with an IR thermal camera (Additional file 1: Fig. S6), demonstrating that O2-PPSiI has excellent photothermal-conversion performance. The packaging of ICG in O2-PPSiI contributes to improved photostability versus free ICG, as proved by a reproducible photothermal effect under repeated laser cycling for five cycles [37] (Additional file 1: Fig. S7).
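An η value of this kind is conventionally extracted with the Roper energy-balance method from heating/cooling curves such as those in Fig. 2B and C; the sketch below shows that calculation with illustrative inputs (the mass, cooling time constant, absorbance and power are assumptions, not our measured values):

```python
def photothermal_efficiency(t_max, t_surr, tau_s, m_g, c_p, power_w, a808, q_dis=0.0):
    """Photothermal conversion efficiency via the Roper cooling-curve method:
    eta = (h*S*(Tmax - Tsurr) - Q_dis) / (P * (1 - 10**(-A808))),
    where h*S is obtained from the cooling time constant: h*S = m*c_p / tau_s."""
    h_s = m_g * c_p / tau_s  # heat-transfer coefficient x surface area, W/K
    return (h_s * (t_max - t_surr) - q_dis) / (power_w * (1.0 - 10.0 ** (-a808)))

# Illustrative inputs: 1 g of solution (c_p ~ 4.2 J/(g*K)), 250 s cooling
# constant, 26 K rise under 1 W illumination, A(808 nm) = 1.0, and the
# solvent background heating neglected (q_dis = 0).
eta = photothermal_efficiency(t_max=51.0, t_surr=25.0, tau_s=250.0,
                              m_g=1.0, c_p=4.2, power_w=1.0, a808=1.0)
print(f"eta ~ {eta:.1%}")
```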
NIR-triggered O2 release and drug delivery
The system has NIR-induced photothermal effects from ICG and active oxygen release from PFOB [38]. We encapsulated the oxygen reservoir and antitumor agent into the nanocore and developed an NIR-responsive on-demand drug-releasing nanomedicine system to overcome the drug leakage of conventional nano-based drug delivery systems (NDDSs). We measured the change in O2 concentration of O2-PPSiI in deoxygenated water under a nitrogen atmosphere and found that the O2 concentration increased significantly in the O2-PPSiI solution; many bubbles appeared with longer NIR irradiation times (Fig. 2E and Additional file 1: Fig. S8). Subsequently, ultrasound imaging was used to visualize the NIR-triggered O2 release and drug delivery in vitro. Figure 2F and Additional file 1: Fig. S9 show an apparently enhanced echo intensity in the O2-PPSiI solution after NIR irradiation, further confirming the successful development of the NIR-responsive on-demand drug-releasing nanomedicine system in this study. Meanwhile, the O2 concentration of the O2-PPSiI solution without NIR irradiation at room temperature showed no obvious change within 24 h (Fig. 2G), suggesting that the stability of O2-PPSiI was favorable. This proved that O2-PPSiI has a higher O2 storage ability than other O2 delivery nanosystems. The temperature elevation of O2-PPSiI under NIR laser irradiation was the key contributor to the oxygen release from O2-PPSiI (Fig. 2G and Additional file 1: Fig. S10; Fig. 2J). This temperature increase could also ablate the tumor cells (Additional file 1: Fig. S11). These results demonstrated that O2-PPSiI offered NIR-responsive on-demand drug release in vitro and in vivo.
Distribution of O 2 -PPSiI nanosystems in vivo
The targeting efficacy of the nanoparticle toward TNBC, conferred by RGD and uPA, was investigated. We first compared the intracellular uptake of O 2 -PPSiI between MDA-MB-231 cells and normal breast cells (Hs 578Bst): O 2 -PPSiI exhibited much higher cellular uptake in MDA-MB-231 cells but obviously lower uptake in the normal breast cells (Additional file 1: Fig. S12A and B). A competition assay with the free targeting molecules further proved that the highly selective intracellular uptake of O 2 -PPSiI in MDA-MB-231 cells was mediated by RGD and uPA (Additional file 1: Fig. S12C). In addition, the in vivo targeting efficacy conferred by RGD and uPA was assessed by fluorescence imaging of nanosystems carrying one or both targeting molecules. As shown in Additional file 1: Fig. S12D, the dual-targeted nanosystem (O 2 -PPSiI with both uPA and RGD) accumulated in the tumor to a higher degree than the nanosystems carrying uPA or RGD alone. These results proved that the dual-targeting design underlies the high targeting efficacy of O 2 -PPSiI toward TNBC. Subsequently, MRI was used to monitor the distribution of the O 2 -PPSiI nanosystem in vivo at various time points. Figure 3A shows that intravenously injected O 2 -PPSiI selectively accumulated in the tumor region and reached its maximum at 8 h. Therefore, dual-modal imaging can effectively and precisely monitor the distribution of the O 2 -PPSiI nanosystem in vivo and identify the optimal time point for NIR irradiation.
Transporting drugs into deep tumor tissue is limited owing to abnormal angiogenesis and irregular tumor blood flow. Functional nanosystems have provided new strategies to enhance tumor penetration and achieve effective antitumor therapy [39]. Here, intravoxel incoherent motion diffusion-weighted imaging (IVIM-DWI) was used to evaluate the in vivo tumor vascularity. As a non-contrast functional MRI sequence, IVIM-DWI is an attractive approach for assessing not only tumor cellularity but also microperfusion without exogenous contrast agents, which allows it to be used repeatedly for quantitative monitoring of the therapeutic response in vivo [40]. In this work, we applied IVIM-DWI to evaluate the microperfusion of the baseline tumors and drew two orthogonal lines in the dorsal/ventral (Do/Ve) and medial/lateral (Me/La) directions crossing the tumor center. Each radius was divided into 10 segments of equal length, numbered 1-10, with the outermost segment assigned 1 and the innermost segment 10 at each radial position. We then confirmed the heterogeneous vascularity of TNBC: the intratumor blood flow at a normalized radius ≥ 6 in both the Do/Ve and Me/La directions was significantly decreased relative to the outermost segment (Fig. 3B and C), consistent with the observations of Gaustad [41]. Based on these results, we divided the tumor into a peripheral zone (outermost segment 1 to segment 5) and a central zone (segment 6 to innermost segment 10). The injected O 2 -PPSiI infiltrated deep into the central region, with no significant difference in △T1 between the central and peripheral zones (Fig. 3D). By contrast, Gd-DTPA accumulated mainly in the peripheral region, with the central zone presenting a lower △T1 than the periphery (Fig. 3E). These results confirmed a favorable intratumor penetration of O 2 -PPSiI; the underlying mechanism may involve cell-to-cell transport through active targeting of uPAR in tumor and stromal cells [42].
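A short sketch may help make the radial segmentation concrete: each tumor radius is split into 10 shells of equal width (1 = outermost, 10 = innermost), and shells 1-5 versus 6-10 define the peripheral and central zones over which a parameter map is averaged. The geometry and map below are synthetic placeholders.

```python
# Synthetic example of the 10-segment radial zoning used for the IVIM/T1 analysis.
import numpy as np

param_map = np.random.rand(128, 128)   # e.g. an IVIM f-map (placeholder)
cy, cx, radius = 64.0, 64.0, 40.0      # tumor centre and radius in pixels (assumed)

yy, xx = np.indices(param_map.shape)
r_norm = np.sqrt((yy - cy) ** 2 + (xx - cx) ** 2) / radius   # 1 at rim, 0 at centre

inside = r_norm < 1.0
# Shell index: 1 = outermost shell, 10 = innermost core, 0 = outside the tumor.
segment = np.where(inside, 10 - np.floor(r_norm * 10).astype(int), 0)

peripheral = param_map[(segment >= 1) & (segment <= 5)].mean()
central    = param_map[(segment >= 6) & (segment <= 10)].mean()
print(f"peripheral mean = {peripheral:.3f}, central mean = {central:.3f}")
```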
We further investigated the biodistribution of the injected O 2 -PPSiI in tumor-bearing nude mice. The T1 MRI data showed that, apart from the intratumor accumulation, the injected O 2 -PPSiI was mainly taken up by the kidney within 24 h (Additional file 1: Fig. S13A), with a fluctuating yet increasing trend in △T1 after 24 h (Additional file 1: Fig. S13B), which may reflect urinary excretion of O 2 -PPSiI. In addition, the injected O 2 -PPSiI was also taken up by the liver and spleen, which showed fluctuating changes in △T1 (Additional file 1: Fig. S13C-D). Moreover, fluorescence images showed that most of the injected O 2 -PPSiI accumulated in the intestinal system within 8 h and then gradually vanished after 12 h, except for the retention in the tumor region. These results indicated urinary and intestinal excretion of O 2 -PPSiI, further supporting its favorable biodegradability for tumor theranostics.
Antagonizing the tumor hypoxia status by NIR-triggered oxygen release in vivo
Hypoxia poses multiple problems in the treatment of cancers [43], and antagonizing the tumor hypoxia status is important for tumor therapy [44,45]. Therefore, the efficacy of O 2 -PPSiI nanosystems in relieving tumor hypoxia was further investigated in MDA-MB-231 tumor-bearing nude mice. First, based upon the paramagnetic effect of deoxyhemoglobin, we used blood oxygenation level-dependent magnetic resonance imaging (BOLD-MRI) to monitor in real time the improvement of tumor hypoxia and the efficacy of the NIR-triggered oxygen-shuttle nanomedicine in promoting tumor oxygenation. Improvements in tumor hypoxia were evaluated quantitatively using the decrease in R2* values from BOLD-MRI [46]. Figure 3F shows that the tumors treated with NIR-triggered O 2 -PPSiI and O 2 -PSiI (without PTX; 808 nm, 1 W cm −2 ) demonstrated a reduction in R2* (normalized as △R2* to the baseline) in both the central and peripheral zones, indicating a satisfactory tumor oxygenation level in the local tissue. Versus the saline group, the tumors treated with a single NIR laser, PTX, or O 2 -PPSiI alone showed no significant difference in △R2*, while the NIR-triggered O 2 -PPSiI and O 2 -PSiI treatment groups had a significant decrease in △R2* at the endpoints of the treatment in both the central and peripheral zones (Fig. 3G). The tumor treated with NIR-triggered PPSiI served as a negative control to verify the contribution of O 2 release to the hypoxia-relieving effect. Versus the NIR-triggered PPSiI group, the tumor treated with NIR-triggered O 2 -PPSiI showed a significantly decreased △R2* in both the central and peripheral zones, which further proved the improvement of the tumor hypoxia status by NIR-triggered oxygen release [47]. Furthermore, we performed HIF-1α immunofluorescence staining to confirm the efficacy of NIR-triggered O 2 -PPSiI in promoting tumor oxygenation (Fig. 3H). Consistent with the BOLD-MRI results, the tumor treated with NIR-triggered O 2 -PPSiI showed decreased expression of HIF-1α, whereas tumors treated with NIR alone, PTX, O 2 -PPSiI alone, or NIR-triggered PPSiI exhibited high HIF-1α expression, with strong green immunofluorescence at different levels. The BOLD-MRI and immunofluorescence staining data suggested that NIR-triggered O 2 -PPSiI could mitigate the hypoxic microenvironment of TNBC in vivo, especially in the central tumor, which presents as an avascular region with poor blood flow.
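All the Δ-quantities used in this work (△T1, △R2*, △D, △Volume, △Body weight) follow the same convention of normalizing a quantity to its own baseline. A minimal helper, assuming the percentage convention implied by values such as △T1 = 26.28%:

```python
def delta_rel(value, baseline):
    """Relative change versus baseline, in percent (negative = decrease)."""
    return 100.0 * (value - baseline) / baseline

# e.g. an R2* drop from 120 s^-1 to 90 s^-1 gives delta R2* = -25 %:
print(delta_rel(90.0, 120.0))
```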
Synergistic therapeutic efficacy of NIR-triggered O 2 -PPSiI
The in vitro cytotoxicity of the O 2 -PPSiI nanosystem was investigated in MDA-MB-231 cells by the MTT assay [48]; Additional file 1: Fig. S14 shows the cytotoxicity of O 2 -PPSiI with and without NIR irradiation. To further investigate the cancer-cell-killing mechanism of O 2 -PPSiI in vitro, we measured the ROS and 1 O 2 generated by O 2 -PPSiI [49]. Versus PTX and NIR-triggered PPSiI, O 2 -PPSiI under NIR laser irradiation showed more efficient ROS and 1 O 2 generation in solution and in MDA-MB-231 cells (Additional file 1: Fig. S15), indicating an important role of O 2 release in the production of 1 O 2 and ROS [50,51]. Subsequently, the in vivo synergistic antitumor effect of the NIR-triggered O 2 -PPSiI nanosystem was investigated in MDA-MB-231-bearing nude mice. Based on the precise monitoring of the distribution of O 2 -PPSiI by T1 mapping MRI in vivo (Fig. 3A), NIR irradiation was applied at 8 h after the injection of PPSiI, O 2 -PSiI, or O 2 -PPSiI. The relative changes in tumor volume and weight are shown in Fig. 4A-C. Over 21 days of treatment, the relative tumor volume (normalized as △Volume to the baseline) of mice treated with saline only, O 2 -PPSiI only, or NIR irradiation only increased significantly (Fig. 4A). The mice treated with PTX or NIR-triggered O 2 -PSiI (without PTX) presented a slightly increased tumor volume compared to day 0, demonstrating only moderate tumor inhibition by single chemotherapy or phototherapy against TNBC. In contrast, the tumor volume of mice treated with NIR-triggered PPSiI showed no significant difference at 14 and 21 days compared to day 0, while the NIR-triggered O 2 -PPSiI group showed significantly decreased tumor volumes (Fig. 4B); the volume as well as the tumor weight at the endpoint of both groups were significantly lower than in the PTX and NIR-triggered O 2 -PSiI groups (Fig. 4C), demonstrating an obvious inhibition and regression effect of the synergistic chemo-phototherapy. After 21 days of treatment, photographs of the excised tumors in each group presented results consistent with the tumor volumes and weights reported above (Additional file 1: Fig. S16). Moreover, the tumor volume and weight of the mice treated with NIR-triggered O 2 -PPSiI were significantly lower than those of mice treated with NIR-triggered PPSiI after 21 days, consistent with the 3D-CUBE T2WI results at 21 days (Fig. 4D). The excellent in vivo antitumor efficiency of the NIR-triggered O 2 -PPSiI nanosystem can be ascribed to the enhanced synergistic chemo-phototherapy induced by NIR-triggered O 2 release.
To further investigate the antitumor effect of NIR-triggered O 2 -PPSiI in vivo, the IVIM-DWI parameters (D and f values) were used to track tumor cellularity and perfusion during therapy [52]. Figure 4E and Additional file 1: Fig. S17 show that the △D value (normalized as △D to the baseline) in both the central and peripheral regions of the NIR-triggered O 2 -PPSiI group increased significantly compared with the PTX, NIR-triggered O 2 -PSiI, and PPSiI groups, demonstrating reduced cellular density and activity of the tumor in vivo [53]. These results suggest that the enhanced synergistic chemo-phototherapy, driven by the improvement of tumor hypoxia under NIR laser irradiation, could effectively inhibit tumor cell proliferation, especially in the avascular central tumor. Moreover, the suppression of tumor progression by the synergistic chemo-phototherapy was confirmed by microscopic analysis of tumor tissues stained with hematoxylin and eosin (H&E). Figure 4F shows that O 2 -PPSiI under NIR laser irradiation effectively decreased cell viability in the tumor regions, offering additional support for the synergistic antitumor effect.
Fig. 4 In vivo antitumor therapeutic efficacy of NIR-triggered O 2 -PPSiI. A-C The relative changes in volume (△Volume) in each group before and after the treatments, together with the △Volume and weight of tumors in each group at 21 days after treatment. Significant differences between the saline and treatment groups are indicated at the P < 0.05 (*) or P < 0.001 (**) level. D Representative images of tumor volume derived from 3D-CUBE T2WI in each group at 21 days after treatment. E IVIM-DWI-derived D mapping of the tumor in each group before and after treatment.
The suppression of tumor metastasis
Tumor metastasis is a major concern in the clinic, and it has been reported that the hypoxic microenvironment in the tumor can influence the progression of metastasis [54]. The combination of O 2 -PPSiI and the NIR laser releases O 2 upon rupture of the silica shell and relieves tumor hypoxia, which might antagonize hypoxia-induced tumor metastasis. Therefore, we first used the wound-healing migration assay together with the transwell assay to evaluate the in vitro suppression of tumor metastasis by O 2 -PPSiI under NIR laser irradiation; Figure 5 shows that NIR-irradiated O 2 -PPSiI markedly suppressed the migration and invasion of the tumor cells in both assays. In the hypoxic tumor microenvironment, signaling pathways that facilitate cell survival and metastasis are activated, giving tumor cells the ability to migrate and invade via the epithelial-mesenchymal transition (EMT) [55,56]. EMT is a biological process that promotes the transformation of immotile epithelial cells into motile mesenchymal cells, involving a reduction of epithelial markers and an increase of mesenchymal markers in the tumor [57]. To further explore the effect of NIR-triggered O 2 -PPSiI on suppressing tumor metastasis in vivo, the epithelial-specific marker E-cadherin together with the mesenchymal-specific markers vimentin and Snail-Slug were chosen for immunohistochemistry microscopy [58]. Figure 5D and E show that the expression of the epithelial-specific marker E-cadherin in tumors treated with NIR-irradiated O 2 -PPSiI was much higher than in the single PTX, O 2 -PSiI + Laser, or PPSiI + Laser groups. In addition, the expression of the mesenchymal-specific markers Snail-Slug and vimentin in tumors treated with NIR-irradiated O 2 -PPSiI was much lower than in the single PTX, O 2 -PSiI + Laser, or PPSiI + Laser groups. These results demonstrated that NIR-irradiated O 2 -PPSiI could inhibit the EMT process in the tumor and decrease its migration and invasion, owing to the improvement of tumor hypoxia induced by oxygen release under NIR laser irradiation and the synergistic chemo-phototherapy. These effects were further verified by the correlation analysis presented in Additional file 1: Fig. S18. In addition, the uPA/RGD dual-targeting molecules of O 2 -PPSiI may play a positive role in this process, as indicated by the significantly downregulated expression of the mesenchymal-specific markers vimentin and Snail-Slug in the O 2 -PPSiI group compared with the saline group [58,59].
Furthermore, partial activation of the EMT program is considered a major driver of tumor progression from initiation to metastasis. In particular, zinc finger E-box binding homeobox 1 (ZEB1) and transforming growth factor β (TGF-β) play essential roles in the proliferation, migration, and invasion of tumor cells. We therefore examined whether O 2 -PPSiI could reduce ZEB1 and TGF-β levels in orthotopic MDA-MB-231 tumor-bearing mice. Immunofluorescence for ZEB1 verified that O 2 -PPSiI combined with the NIR laser efficiently reduced the expression of ZEB1 within the tumor in comparison with the saline group, suggesting that this combined strategy suppressed the activation of cell motility and stemness (Fig. 6A and D). Compared with the PPSiI + Laser group, O 2 -PPSiI combined with the NIR laser distinctly inhibited the expression of ZEB1, which confirmed that relieving tumor hypoxia could effectively abrogate hypoxia-induced EMT. Meanwhile, EMT was also reduced through TGF-β downregulation due to the synergistic effects of NIR-irradiated O 2 -PPSiI (Fig. 6B and D).
Matrix metalloproteinase-2 (MMP2) is a member of the zinc-binding endopeptidase family and plays an essential role in the invasion and metastasis of cancer cells; upregulation of MMP2 expression can promote tumor metastasis [60]. We therefore examined the expression of MMP2 to further verify that O 2 -PPSiI combined with the NIR laser could inhibit tumor metastasis. Figure 6C and D show that O 2 -PPSiI under NIR laser irradiation effectively downregulated MMP2 expression. The correlation analysis between E-cadherin, vimentin, and Snail-Slug on the one hand and ZEB1, TGF-β, and MMP2 on the other demonstrated that the O 2 -PPSiI + NIR laser group effectively suppressed EMT (Fig. 6E and Additional file 1: Fig. S19). Taken together, these results indicated that the O 2 -delivering strategy could alleviate hypoxia in the tumor tissue and abrogate hypoxia-induced EMT, thereby inhibiting tumor metastasis.
Biosafety evaluation of NIR-triggered O 2 -PPSiI in vivo
The in vivo biosafety of O 2 -PPSiI was systematically assessed via body weight monitoring, blood biochemical analysis, and H&E staining [61]. Additional file 1: Fig. S20 shows that the body weight (normalized as △Body weight to the baseline) of the mice in the NIR-triggered O 2 -PPSiI group increased over the 21 days of treatment and was significantly higher than the baseline and the saline group at 21 days. The body weight of PTX-treated mice declined significantly compared with the baseline over 9 days after injection. The blood biochemical analysis of mice treated with PTX also indicated impaired metabolic function of the liver, with significantly elevated alanine transaminase (ALT) and aspartate aminotransferase (AST) in the blood (Fig. 7A). This was further verified by the presence of patchy lymphocyte infiltration and hepatocyte swelling in liver H&E staining (Fig. 7B); no obvious hematological or histological abnormalities were observed in the other groups (Fig. 7A and B). These results indicated the hepatotoxicity of single PTX treatment and supported the biosafety of O 2 -PPSiI for clinical translation.
Fig. 7 A Results of the blood biochemical analysis in each group at 21 days after treatment; significant differences between the saline and treatment groups are indicated at the P < 0.05 (*) or P < 0.001 (**) level. B H&E staining of heart, lung, liver, spleen, and kidney in each group at 21 days after treatment. Black arrows point to regions of patchy lymphocyte infiltration and hepatocyte swelling.
Conclusion
Herein, the near-infrared-responsive on-demand oxygen-releasing nanoplatform O 2 -PPSiI was chemically synthesized by a two-stage self-assembly process; it delivers oxygen and releases it under NIR irradiation to relieve hypoxia, thereby improving the therapeutic effect of chemotherapy and suppressing tumor metastasis. This smart design achieves the following advantages: (i) the O 2 in this nanosystem can be precisely released via NIR-responsive rupture of the silica shell; (ii) the dynamic biodistribution of O 2 -PPSiI can be monitored in real time and analyzed quantitatively via sensitive MR imaging of the tumor; (iii) O 2 -PPSiI can alleviate tumor hypoxia by releasing O 2 within the tumor upon NIR laser excitation; and (iv) the migration and invasion abilities of the TNBC tumor are weakened by inhibition of the EMT process as a result of the synergistic therapy of NIR-triggered O 2 -PPSiI. This NIR-responsive on-demand oxygen-releasing strategy offers a new perspective for the development of controllable drug-releasing nanomedicine systems for precise theranostics in TNBC.
Supplementary Information
The online version contains supplementary material available at https://doi.org/10.1186/s12951-022-01294-z.
Figure S4. The magnetic properties of O 2 -PPSiI.
Figure S5. Normalized absorbance intensity at λ = 808 nm divided by the characteristic length of the cell (A/L) at varied concentrations of O 2 -PPSiI.
Figure S6. Representative images of PBS and O 2 -PPSiI after irradiation with an NIR laser (808 nm, 2 W cm −2 ) for 4 min.
Figure S7. Temperature changes of ICG and O 2 -PPSiI solutions with the NIR laser (2 W cm −2 ) switched on and off for 5 cycles.
Figure S8. Images of O 2 -PPSiI before and after NIR laser irradiation; the appearance of bubbles indicates O 2 release from O 2 -PPSiI after NIR laser irradiation.
Figure S9. The gray value of ultrasonography for water and the O 2 -PPSiI nanosystem solution before and after laser irradiation in vitro.
Figure S10. The raised temperature of O 2 -PPSiI under NIR laser treatment was the main contributor to oxygen release from O 2 -PPSiI. Scale bar = 100.
Figure S11. The gray value of ultrasonography for mice before and after treatment with the O 2 -PPSiI nanosystem and laser irradiation.
Figure S13. The distribution of O 2 -PPSiI in liver, spleen and kidney quantified by T1 mapping; significant differences between groups at the same time point are indicated at the P < 0.05 (*) level.
Figure S14. Cytotoxicity of O 2 -PPSiI with and without NIR irradiation against MDA-MB-231 cells.
Figure S15. The overproduction of 1 O 2 and ROS induced by NIR-triggered O 2 -PPSiI.
Figure S16. The excised tumors of each group at 21 days after treatment.
Figure S17. IVIM-DWI-derived D mapping of the tumor in each group before and after treatment; significant differences in relative D values (△D) between the saline and treatment groups are indicated at the P < 0.05 (*) or P < 0.001 (**) level.
Figure S18. Pearson correlation analysis between MRI-derived parameters and the expression of E-cadherin, vimentin and Snail-Slug; statistically significant correlation coefficients (r) are indicated at the P < 0.05 (*) or P < 0.001 (**) level.
Figure S19. Pearson correlation analysis between the expression of Snail-Slug/vimentin and ZEB1, TGF-β and MMP2; statistically significant correlation coefficients (r) are indicated at the P < 0.05 (*) or P < 0.001 (**) level.
Figure S20. (A) The relative body weight (△Body weight) changes of tumor-bearing mice in each group at different time points, and (B) the comparison of △Body weight (%) between the saline and treatment groups at 21 days after treatment. Significance is indicated at the P < 0.05 (*) or P < 0.001 (**) level.
"year": 2022,
"sha1": "4ea32f657f250a6dae8f7ae4033f74e9c996a923",
"oa_license": "CCBY",
"oa_url": "https://jnanobiotechnology.biomedcentral.com/track/pdf/10.1186/s12951-022-01294-z",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "50ef963d0fdce8aab29a70b22f505f7088221388",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Constant theoretical conductance via changes in vessel diameter and number with height growth in Moringa oleifera
Intricate anatomical adjustments allow plant theoretical hydraulic capacity per unit leaf area to remain constant as trees grow in height.
Introduction
Leaves are the engines of plant primary productivity, so it is important to understand how leaves maintain productivity as plants grow in height. As a tree grows from a seedling to maturity, the distance that water must be transported increases, and longer conductive path lengths could reasonably be expected to be associated with greater hydraulic resistance, lower conductance per unit leaf area (leaf-specific conductance), and lower productivity (Ryan and Yoder, 1997; Niklas and Spatz, 2004; Koch et al., 2004). Yet, because carbon fixation should directly impact growth and fitness, changes in productivity with height growth should be a major target of selection. Given variation among the members of a species, individuals whose leaf-specific productivity declines markedly with height growth will necessarily be selected against in favor of those whose productivity declines less or even remains constant. Individuals whose per-leaf area productivity declines least with height growth will by definition have more carbon to invest in growth and reproduction per unit leaf area than individuals with greater declines. As a result, selection should always push in the direction of constant leaf-specific productivity with height growth, or minimal possible declines in productivity. If constant productivity can be achieved, it necessarily involves constant conductance per unit leaf area, given the link between conductive rate and photosynthetic productivity (Santiago et al., 2004; Brodribb, 2009). Given the relationship of leaf-specific conductance with terrestrial primary productivity, testing the prediction that leaf-specific conductance should remain constant with height growth, and understanding the mechanisms that make this possible, is a priority.
The invariance of leaf-specific conductance with height growth is assumed to be possible because of a xylem vascular system made up of conduits that widen from the stem tip to the base at a rate that compensates for the resistance imposed by increasing path length (West et al., 1999; Anfodillo et al., 2006). Because height growth in a tree takes the leaves ever farther from the base of the trunk, the conductive path length becomes longer as trees become taller. Conduit walls impose friction, creating resistance to flow, and, all else being equal, longer conduits have greater wall area, greater friction, and thus greater resistance to flow than shorter ones (Comstock and Sperry, 2000). Therefore, if conduits remain the same diameter, conductive rates per unit leaf area will inevitably drop as path length increases. Instead, conduits widen from the stem tip to the base with height, across plant habits, such as trees, shrubs, and lianas, across monocots and dicots, as well as across biomes (Anfodillo et al., 2006; Coomes et al., 2007; Petit et al., 2009; Olson et al., 2014, 2018; Rosell and Olson, 2014; Morris et al., 2018). This widening occurs at the rate predicted to offset the resistance imposed by increasing conductive path length and keep leaf-specific conductance constant, or at least minimize the drop in conductance. In the simplest models of hydraulic architecture (West et al., 1999), a unit of leaf area is assumed to have a constant number of conduits as height increases, with basipetal vessel widening leading to constant resistance regardless of conduit length, and xylem conduits are assumed to have constant terminal conduit diameters (conduit diameter at the twig apex).
Our recent work suggests that, at least in angiosperms, terminal twig vessel diameter is not constant and instead increases predictably with height growth, suggesting that vessel number per unit leaf area might also change with height. In surveys across angiosperm species, we found that vessel diameter (VD) increased predictably with height (H) not only at the stem base, where it scaled as approximately VD base ∝ H 0.4 , but also at the tip, where it scaled as VD apex ∝ H 0.2 (Rosell and Olson, 2014). Wider conduits at the stem tip in taller trees would imply that flow within a single conduit also increases. Given two conduits with identical lengths and basal vessel diameters, the one with the wider apical diameter will offer less resistance to flow. If the number of conduits does not change, then increasing the terminal conduit diameters would imply increasing the flow per unit leaf area. Such 'oversupply' seems unlikely, given that natural selection should favor fluid distribution networks that minimize network-level fluid volume and therefore construction costs (Banavar et al., 2002). Assuming that conductance remains constant per unit leaf area, if conductance per unit of pipeline increases with tree size due to wider conduits at the stem tips in taller trees, then the number of conduits per unit leaf area must decrease accordingly. Although not conforming with the exact predictions of existing hydraulic scaling hypotheses (West et al., 1997, 1999; Hölttä et al., 2011, 2013; Drake et al., 2015; Couvreur et al., 2018), coordinated variation between tip conduit diameter, widening rate, and conduit number might still occur in such a way that a constant water supply per unit of leaf area is maintained as predicted.
Although the results of comparative studies to date suggest that the maintenance of leaf-specific conductance is plausible, studies of universal vessel scaling at the stem base and apex have been largely based on species-level mean values across many species, and have not included leaf area (Anfodillo et al., 2006; Olson et al., 2014, 2018; Rosell and Olson, 2014; Morris et al., 2018). As a result, whether conductive capacity does indeed scale isometrically with leaf area with height growth as predicted has not been tested. Moreover, because the patterns studied to date have been mostly comparative, direct documentation is missing to test whether the increases in terminal and basal vessel diameter occur within species and in such a way that they keep pace with leaf area.
We tested the prediction that the xylem network should supply a unit of leaf area with a constant flow of water during height growth in a plantation experiment with the tropical tree Moringa oleifera Lam. Focusing on a single species allowed us to factor out much of the variation that is introduced in comparative studies covering wide spans of habit, wood density, leaf size, leaf mass per unit area, growth ring types, and other factors that could potentially influence vessel dimensions. Moringa is useful for such studies because it grows quickly (8 m in the first year from seed is common, and plants fruit within 6 months) and saplings are monopodial, making accurate measurement of conductive path length feasible. We grew plants at a relatively high density for several months, and competition between plants resulted in a wide range of heights and leaf areas for the same age, ideal for estimating the rates of change with height in leaf area, vessel-widening rate, vessel diameter, total vessel number, and a whole-plant conductance index estimated from these measurements.
Materials and methods
To provide the necessary variation in plant height and total leaf area in an acceptable time frame, we carried out an experiment using the tropical tree Moringa oleifera Lam. (Moringaceae, Brassicales). Moringa oleifera is a highly tractable woody plant study system because of its simple anatomy (wide vessels with simple perforation plates in a background of libriform fibers, with axial parenchyma limited to 1-2 layers adjacent to vessels; Olson, 2001; Olson and Carlquist, 2001) and its very fast growth. Moringa oleifera trees easily reach 8 m in their first year from seed, making it possible to produce plants of a wide range of heights under identical growing conditions and of the same age. The saplings grow tall and straight, meaning that conductive path length, as approximated by the stem-tip-to-base distance, can be measured with precision (see Supplementary Fig. S1 at JXB online). To maximize genetic diversity, and thus the likelihood of finding differing relationships between leaf area, vessel diameter, and stem length, we gathered seeds from 13 individuals of M. oleifera with cultivated provenances from Africa, Asia, Madagascar, and the Americas, from the International Moringa Germplasm Collection (http://www.moringaceae.org), on the lowland tropical Pacific coast of Mexico in Jalisco State.
Experimental design
We planted 200 seeds directly in the ground at the International Moringa Germplasm Collection near the Chamela Biological Station, located at 19°29′54.34ʺ N, 105°2′40.46ʺ W. The region experiences a tropical monsoonal climate with marked dry and wet seasons, with most of the precipitation falling between June and October. The study area is characterized by an annual low temperature of 14.9 °C, a mean annual temperature of 24.9 °C, and an annual average rainfall of 752 mm (Bullock, 1986; Bullock and Solis-Magallanes, 1990; García-Oliva et al., 2002). The elevation of the plantation site was ~100 m a.s.l. We planted the trees 1 m apart to encourage straight growth and competition to ensure a wide range of heights in plants of the same age. We watered the plants weekly throughout the prolonged dry season to encourage rapid growth.
Sampling
We sampled from 5 to 13 April 2016. We harvested individuals by cutting them at soil level, and measured their heights and basal diameters. We fixed samples of the basal and terminal stem xylem from 139 individuals in 50% aqueous ethanol. All the slope estimations were performed including samples from all 139 individuals. We measured the fresh leaf area and dry leaf mass of 50 individuals to generate an equation to extrapolate leaf area from the dry leaf mass of each individual in our population. We scanned the leaves using a digital scanner (Seiko Epson, Tokyo, Japan), and used WinFolia (Regent Instruments Inc., Canada) to measure leaf area from digital images, including petioles and rachises of M. oleifera's large, pinnately compound leaves. We dried the leaves and weighed them using an analytical balance (Sartorius Corporation, Gottingen, Germany).
Anatomical characterization
We cut thin sections for light microscopy from the basal and terminal stem of each individual to measure vessel diameters and estimate the total number of vessels at the stem base. We used a sliding microtome (Gärtner et al., 2014) to make transverse sections from the basal and terminal wood of each individual. We made temporary and permanent slides, and measured vessel diameter with a light microscope (Zeiss, Oberkochen, Germany). We measured vessel radial diameter, following radial transects from the pith to the vascular cambium to capture intra-individual variation in vessel diameter. We measured 35 vessels per section to obtain per-individual mean basal and apical vessel diameters. We estimated the total number of vessels by measuring vessel density (vessels mm −2 ) and multiplying this value by the total xylem area. We measured vessel density by counting the number of vessels in 35 optical fields spanning pith to cambium. We calculated the xylem area by subtracting the pith area from the xylem+pith area. Our analyses of xylem respiratory activity using triphenyl-tetrazolium chloride (Shain and Mackay, 1973; Spicer and Holbrook, 2007) suggest that, for the range of sizes studied, all the xylem is active sapwood (data not shown). Xylem conductive area was calculated by multiplying the mean vessel area by the total number of vessels.
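A minimal numerical sketch of the vessel-count and conductive-area bookkeeping just described; the field counts, field area, and section areas below are invented for illustration, with only the mean basal diameter chosen near the reported population mean.

```python
# Vessel number = vessel density x xylem area; xylem area = (xylem + pith) - pith.
import numpy as np

counts_per_field = np.array([12, 9, 11, 10, 13])   # vessels per optical field
field_area_mm2 = 0.25                              # area of one field (assumed)
vessel_density = counts_per_field.mean() / field_area_mm2   # vessels mm^-2

area_xylem_plus_pith_mm2 = 420.0                   # measured on the section (assumed)
area_pith_mm2 = 12.0
xylem_area_mm2 = area_xylem_plus_pith_mm2 - area_pith_mm2

total_vessels = vessel_density * xylem_area_mm2
mean_vessel_diam_um = 108.0                        # near the reported basal mean
mean_vessel_area_mm2 = np.pi * (mean_vessel_diam_um / 2000.0) ** 2
conductive_area_mm2 = mean_vessel_area_mm2 * total_vessels

print(f"NV ~ {total_vessels:.0f}, conductive area ~ {conductive_area_mm2:.1f} mm^2")
```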
Statistical analyses
To estimate the slopes of all bivariate relationships, we used standardized major axis (SMA) regression. In many of the relationships we examined, it seems likely that, instead of one variable causing variation in the other, the variables affect each other mutually; SMA is appropriate in these cases (Smith, 2009). To reduce the influence of outliers on slope estimation, we used so-called robust SMA methods for estimating the slopes of all bivariate relationships and for slope tests (Warton et al., 2006; Warton, 2011, 2013). In all models we used log 10 -transformed variables, with all analyses carried out in R (R Core Team, 2018) and graphs generated using the R package ggplot2 (Wickham, 2016). Standardized major axis line fitting and slope tests were performed using the R package smatr3 (Warton et al., 2012).
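For readers without R at hand, the geometric core of an SMA fit is easy to reproduce: on log10-transformed data the SMA slope is sign(r) × SD(y)/SD(x). The sketch below implements the plain (non-robust) estimator in Python; the robust routines in smatr3 modify the variance estimates but keep the same geometry. The synthetic data are placeholders.

```python
# Plain standardized major axis (SMA) fit on log10-transformed variables.
import numpy as np

def sma_fit(x, y):
    """SMA slope and intercept for y versus x (symmetric in the x and y roles)."""
    r = np.corrcoef(x, y)[0, 1]
    slope = np.sign(r) * np.std(y, ddof=1) / np.std(x, ddof=1)
    intercept = np.mean(y) - slope * np.mean(x)
    return slope, intercept

# Synthetic allometry, LA ~ H^1.86, mimicking the height range of the study:
rng = np.random.default_rng(0)
logH = np.log10(rng.uniform(13.0, 420.0, 139))
logLA = 1.86 * logH + 0.1 * rng.normal(size=139)
print(sma_fit(logH, logLA))   # slope should recover ~1.86
```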
Testing the prediction of constant conductive capacity per unit leaf area
We used two complementary approaches to test the prediction that conductance should remain constant per unit leaf area as plants grow taller. Using the empirical scaling exponents given in Table 1, we estimated whole-plant conductance in the following way. We first estimated mean apical vessel diameter for a given plant height from the relationship between apical vessel diameter and height, VD apex ∝ H a . Subsequently, we used the relationship between basal vessel diameter and height, VD basal ∝ H b , to estimate the tip-to-base widening rate of vessels, assuming a power-law relationship VD ∝ DistTip c , where DistTip is the distance from the stem tip and c = b − a. Given this tip-to-base widening rate, we calculated the total resistance of a widened tube of length H, made up of segments of 1 cm length, with initial diameter given by VD apex ∝ H a . We estimated the theoretical flow within single widened tube segments (1/R) by calculating the resistance R of 1 cm long segments using the Hagen-Poiseuille equation (Sperry et al., 2005), R = 128ηL/(πD 4 ), where η is the viscosity of water, L is the segment length, and D is vessel diameter. We summed the resistances of the 1 cm segments to calculate the total resistance of the widened tube along the entire conductive pathway length. We then divided the resistance of a single widened tube by the empirical total number of vessels in each plant, the vessels acting in parallel, to estimate whole-plant resistance. The reciprocal of the whole-plant resistance gave the whole-plant flow per plant. Whole-plant flow was transformed into a whole-plant conductance index (K plant ), assuming a driving pressure equivalent to a 50 cm water column. First, we tested whether the whole-plant conductance index scaled against total leaf area with a slope of 1. Second, we estimated the scaling relationships of total leaf area versus tree height and whole-plant conductance versus tree height to test the prediction that the resulting scaling exponents should be statistically indistinguishable. If whole-plant conductance keeps pace with total leaf area as plants grow taller, then the whole-plant conductance index and total leaf area should scale with height with slopes that do not differ significantly from one another. These two complementary approaches tested the prediction of an isometric increase of total stem water flow with total leaf area during height growth. All tests were performed at α=0.05.
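A minimal numerical sketch of the K plant calculation follows. The apex diameter and vessel number are set near the reported population means, the widening exponent is c = 0.341 − 0.268 = 0.073, and the unit conventions (cm, MPa) are our own assumptions; the fitted intercepts of Table 1 are not reproduced here.

```python
# Whole-plant conductance index: a tube widening from the apex as VD ~ DistTip^c
# is cut into 1 cm segments, Hagen-Poiseuille resistances are summed, and the
# single-tube resistance is divided by the vessel number (parallel vessels).
import numpy as np

ETA = 1.0e-9      # viscosity of water, MPa s (about 20 deg C)
C_WIDEN = 0.073   # tip-to-base widening exponent (0.341 - 0.268)

def k_plant(height_cm, vd_apex_um, n_vessels):
    d_tip = np.arange(1, int(height_cm) + 1, dtype=float)    # distance from tip, cm
    diam_cm = (vd_apex_um * 1e-4) * d_tip ** C_WIDEN         # widened diameters
    seg_R = 128.0 * ETA * 1.0 / (np.pi * diam_cm ** 4)       # R of 1 cm segments
    tube_R = seg_R.sum()                                     # whole widened tube
    plant_R = tube_R / n_vessels                             # vessels in parallel
    dP = 0.0049                                              # 50 cm water column, MPa
    return dP / plant_R                                      # flow index, cm^3 s^-1

print(k_plant(height_cm=175.0, vd_apex_um=63.5, n_vessels=3170))
```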
Results
Our results were consistent with the prediction of constant conductance per unit leaf area with height growth. The plants sampled encompassed over four orders of magnitude of difference in total leaf area and over one order of magnitude of difference in height. Mean plant height (H) was 174.69 cm, ranging from 13 to 420 cm, and most of the plants flowered. Mean per-individual total leaf area (LA) was 8212 cm 2 , ranging from 16.53 to 31 431 cm 2 . Mean per-individual basal vessel diameter (VD basal ) was 108.42 µm, ranging from 39.86 to 176.90 µm. Mean per-individual terminal twig vessel diameter (VD apex , 'apical vessel diameter') was 63.54 µm, ranging from 32.00 to 107.31 µm. Mean xylem area in the basal section (XA) was 407.68 mm 2 , ranging from 1.122 to 3604.30 mm 2 . The mean total vessel number (NV) per individual in the basal xylem was 3170, ranging from 232 to 28 354. We used data from these trees to derive empirical exponents for the rate of change of total leaf area with height, and the rates of change of total vessel number, mean vessel diameter at the stem base, and mean apical vessel diameter with both height and total leaf area. We then used these exponents to test the prediction that whole-plant conductance should scale isometrically with leaf area as plants grow taller. All traits analyzed scaled closely with plant height (Table 1). Total leaf area was predicted well by plant height, and scaled with height with a slope of 1.859 (Fig. 1A). Apical vessel diameter increased predictably with height with a slope of 0.268 (Fig. 1B). Basal vessel diameter was also closely predicted by height, and increased with a slope of 0.341 (Fig. 1C). The resulting vessel-widening rate in our population was therefore VD ∝ DistTip 0.073 , with exponent c = 0.341 − 0.268 = 0.073. Total vessel number increased with total leaf area with a slope of 0.789 (Fig. 1D), and its confidence interval did not include isometry (Table 1). Congruently, the total number of vessels also increased with height at a lower rate than total leaf area, with a slope of 1.493 (Fig. 1E). As a result of the increase in basal vessel diameter with height, basal vessel diameter also scaled predictably with total leaf area (Fig. 1F). Conductive area (CA), understood as the total vessel lumen area, increased at a higher rate than xylem area, with a slope of 1.063, and its confidence interval did not include isometry (Table 1). These scaling exponents were used to estimate a whole-plant conductance index and test whether it increased at the same rate as total leaf area with height growth.
Fig. 1 (D-F, caption fragment recovered from the text): total vessel number increases at a lower rate than total leaf area, presumably because wider apical and basal vessels in taller plants increase individual vessel conductance, allowing a similar leaf area to be supplied by fewer vessels; (E) accordingly, total vessel number increases with height at a lower rate than leaf area; (F) average vessel diameter at the stem base becomes wider in taller plants, leading to an increase in basal vessel diameter with total leaf area. Confidence intervals for slopes and intercepts are shown in Table 1.
The rate of increase in the whole-plant conductance index was statistically indistinguishable from the rate of increase in total leaf area with height growth, and congruently the whole-plant conductance index increased isometrically with total leaf area as trees increased in height. Calculated from the empirical exponents in Table 1, whole-plant conductance increased with height at a rate of K plant ∝ H 1.825 (Fig. 2A), which did not differ significantly from the rate of increase in total leaf area with height (F 1,137 =0.644, P=0.424), which was LA ∝ H 1.859 (Fig. 1A). Also in agreement with predictions, the whole-plant conductance index scaled with leaf area with an exponent of 0.969 (Fig. 2B) and did not differ significantly from isometry (F 1,137 =1.249, P=0.266). Thus, both methods of calculation are consistent with the prediction that whole-plant conductance should scale isometrically with leaf area with height growth.
Discussion
In Moringa oleifera, whole-plant conductance increased at the same rate as leaf area with height, providing clues regarding a major vector of natural selection on plant hydraulic systems (Fig. 2). This coincidence with predictions is remarkable since it emerges from the concatenation of a series of empirical scaling exponents (Table 1), namely, leaf area scaling with height LA ∝ H 1.859 , stem tip vessel diameter with height VD apex ∝ H 0.268 , the rate of widening of vessel diameter with distance from the stem tip VD ∝ DistTip 0.073 , and the total number of vessels with leaf area NV ∝ LA 0.789 . Even relatively small variation, due to error or biological reality, in any of the scaling exponents involved would lead to substantial deviations in the whole-plant conductance-leaf area scaling relationship, making the coincidence we observed between results and predictions striking.
Constant whole-stem conductance per unit leaf area would have important implications at both individual and community scales. Predictable tip-to-base vessel widening within leaves accounts for vessel diameter scaling predictably with individual leaf area across species (Sack et al., 2012; Gleason et al., 2018). Standardizing measurements of vessel diameter for leaf area should reveal that leaves within a tree are similar from the point of view of vessel diameter and widening rate. This expectation implies that leaves can in principle be functionally equivalent in terms of their vessel dimensions on a per-area basis, regardless of their position in the crown and distance from the stem base. At the community scale, constant leaf-specific conductance allows estimation of whole-community productivity as proportional to the sum of the leaf area of trees within the community (West et al., 2009). Therefore, knowing how leaf area (and thus productivity) scales as individual trees increase in size, together with the assumption that forest resource use is always maximal (meaning that a community is saturated with individuals, whether it is composed of many small or few large individuals), allows prediction of the slope of tree size distributions in any community (Simini et al., 2010; Anfodillo et al., 2012). Deviation from the predicted distribution can even be diagnostic of forest disturbance (Coomes et al., 2003; Sellan et al., 2017). This energy equivalence expectation, supported here by constant conductance per unit leaf area, represents a potential link between hydraulics, stand productivity, and the shaping of optimal canopy height and density by microsite (Eagleson, 1982; Cabon et al., 2018). The finding of constant conductance per unit leaf area with height growth is therefore important for understanding forest function.
Fig. 2. Leaf-specific theoretical conductance remains constant with height growth in Moringa oleifera. (A) Theoretical whole-plant conductance (K plant ) scales statistically identically with height as does leaf area, as K plant ∝ H 1.825 (see Fig. 1A). (B) K plant scales isometrically with total leaf area, congruent with expectations regarding the way in which selection should favor individuals with constant photosynthetic productivity per unit leaf area with height growth. Confidence intervals for slopes and intercepts are shown in Table 1. P<0.001 for all R 2 . n=139 in all cases.
With regard to understanding plant hydraulic architecture, our results point to an intricate coordination between plant height, a constant tip-to-base vessel-widening rate, and variation in vessel diameter at both the stem base and the stem apex; the latter is a feature not predicted or included in any model of plant hydraulic function to date (Figs 1 and 3). As an individual becomes taller, the vessels at the stem apex become wider, with VD apex ∝ H 0.268 (Fig. 1B). Given a constant tip-to-base vessel-widening rate, wider vessels at the stem apex will lead to even wider vessels at the stem base with height growth (Fig. 1C) than would be produced with uniform terminal stem vessel diameters. Given two conduits of identical length and basal diameter, the one with the wider terminal diameter will have lower resistance with only a minimally greater investment in conduit wall material, given that flow is expected to increase as VD 4 while conduit wall material increases proportionally to vessel perimeter and therefore only linearly with vessel diameter. As a result, terminal twig conduit widening with increasing height seems likely to be a major aspect of plant economy, minimizing resource investment per unit leaf area with height growth (Bettiati et al., 2012; Olson et al., 2014). It is also likely what allowed the total number of vessels in our study plants to increase with leaf area with an exponent of less than 1, NV ∝ LA 0.789 (Fig. 1D). This means that taller plants supply the same leaf area with proportionally fewer vessels (Fig. 1A, E), and again the widening of terminal conduit diameter and the reduction in resistance it brings is surely involved in this proportional reduction in vessel number (Fig. 3). In turn, this reduction is likely the result of selection favoring minimized increases in sapwood carbon cost per unit leaf area with height increase (Rosell et al., 2017); indeed, it is hard to imagine what other process would lead to terminal conduit widening with height. Our results in Moringa, together with patterns of vessel scaling with stem length across vessel-bearing angiosperms (Anfodillo et al., 2006; Olson et al., 2014, 2018; Rosell and Olson, 2014), show that as plants grow taller, not only basal vessel diameter but also terminal twig vessel diameter becomes wider, with a concomitant reduction in the number of vessels per unit leaf area, giving clues to significant vectors of selection on plant hydraulic systems.
One of these vectors is carbon economy, and our results suggest an additional feature that is not incorporated into any optimality model of plant hydraulic construction, which is that vessel lumen transectional area relative to the rest of the xylem becomes slightly higher with height increase. In Moringa, the number of vessels per unit leaf area scales as NV ∝ LA 0.789 , implying that taller plants have fewer vessels per unit leaf area. Yet, even though there are fewer vessels per unit leaf area with height growth, the proportion of vessel lumen area per unit leaf area increases in taller plants. Vessel diameter scales with leaf area as VD basal ∝ LA 0.180 (Fig. 1F). Consequently, individual vessel area scales with leaf area as VA ∝ LA 0.360 , and since the total number of vessels scales with leaf area as NV ∝ LA 0.789 , conductive area scales with leaf area as CA ∝ LA 1.149 . These findings coincide very closely with the predictions of West et al. (1999), where NV ∝ H 3 , VA ∝ H 0.5 , and LA ∝ H 3 , such that CA ∝ LA 1.15 . While this finding is remarkably congruent with the predictions of West et al. (1999), when expressed in terms of xylem area it points to additional details not incorporated in their model, namely that conductive area scales as CA ∝ XA 1.06 , meaning that the proportion of vessel lumen area relative to the rest of the xylem becomes slightly higher with height growth. This suggests that as trees grow taller, they divert some carbon from support tissue, such as fibers, to vessels. This property very likely helps trees keep leaf-specific conductance constant with height growth while minimizing the increases in carbon costs per unit leaf area expected to occur as plants grow taller (Hölttä et al., 2011).
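For convenience, the exponent bookkeeping of the preceding paragraph can be collected in a single display; this simply restates the arithmetic above using the Table 1 exponents.

```latex
\begin{aligned}
VD_{\mathrm{basal}} \propto LA^{0.180}
  \;&\Rightarrow\; VA \propto VD_{\mathrm{basal}}^{2} \propto LA^{0.360},\\
NV \propto LA^{0.789}
  \;&\Rightarrow\; CA = NV \cdot VA \propto LA^{0.360+0.789} = LA^{1.149},\\
CA \propto XA^{1.063}
  \;&\Rightarrow\; CA/XA \propto XA^{0.063}
  \quad\text{(a slightly rising lumen fraction with size).}
\end{aligned}
```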
The isometric increase between leaf area and whole-plant conductance estimated using only vessel dimensions also suggests that other properties of plant hydraulic systems that are known to contribute to the resistance of the vascular system must also increase proportionally with vessel diameter. Our index of conductance was calculated using the Hagen-Poiseuille equation assuming continuous tubes running the entire lengths of stems. This assumption omits much biological detail, including the irregularity of internal vessel walls, the constrictions imposed by perforation plates, and the finite lengths of vessels with the concomitant passage of water through pits from vessel to vessel. Yet, this simplified set of assumptions accurately predicts vessel diameter-stem length exponents, as well as yielding results consistent with predictions of constant leaf-specific conductance with height growth. This suggests that the other sources of resistance, including the distribution of vessel lengths from the tip to the base of a tree, must scale similarly with vessel diameter. End-wall resistance and lumen resistance increase isometrically among a diverse set of tracheid- and vessel-bearing plants (Sperry et al., 2005), suggesting that selection minimizing decreases in conductance has also been an important factor influencing conduit length distributions. Similarly, total pit permeable area increases isometrically with tracheid lumen area (Schulte, 2012; Lazzarin et al., 2016), and since pore diameter also increases proportionally to pit membrane area, pit resistance likely also remains a constant fraction of total resistance as trees increase in size (Schulte et al., 2015). These proportionalities must occur for simple Poiseuille assumptions to predict conduit diameter scaling accurately. Ultimately, proportionality between different components of the total resistance of the conductive system is expected to result from selection favoring constant conductance per leaf area as trees increase in height.
Fig. 3. The intricate covariation of plant height, tip-to-base vessel-widening rate, stem tip vessel diameter, stem base vessel diameter, and number of vessels per unit leaf area. The white cylinders represent plants of different heights; the black cones within them represent vessels. In our experiment, for a given unit of leaf area, represented by the constant-sized leaf above each stem, the number of vessels decreased with increasing plant height. At the same time, terminal vessel diameter increased, represented by the wider apices of the vessels. Given tip-to-base vessel widening, the rate of which is constant over height increases, vessel diameter also becomes predictably wider at the stem base with height growth, represented by the wider bases of the vessels. This delicate synchronization of multiple factors led to a similar whole-plant conductance per unit leaf area as individuals become taller. (This figure is available in colour at JXB online.)
In summary, our results show that during height growth, plants maintain a constant supply of water per unit leaf area through an intricate interplay between the tip-to-base vessel-widening rate, terminal twig vessel diameter, and the total number of vessels. Our findings support the general prediction that increasing height is accompanied by compensatory changes in xylem structure that keep leaf photosynthetic productivity constant, a finding that underwrites the scaling of carbon assimilation from leaves to trees to communities.
Supplementary data
Supplementary data are available at JXB online.
Fig. S1. View of the experimental Moringa oleifera plantation and an individual tree.
"year": 2019,
"sha1": "90966b820dd9473d71da72f245d73ae2cfa8bb15",
"oa_license": "CCBYNC",
"oa_url": "https://academic.oup.com/jxb/article-pdf/70/20/5765/30297340/erz329.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "2e758d92db8dad1781774624285ffb7b6655ba17",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Medicine",
"Materials Science"
]
} |
Complex powers for cone differential operators and the heat equation on manifolds with conical singularities
We obtain left and right continuous embeddings for the domains of the complex powers of sectorial $\mathbb{B}$-elliptic cone differential operators. We apply this result to the heat equation on manifolds with conical singularities and provide asymptotic expansions of the unique solution close to the conical points. We further show that the decomposition of the solution in terms of asymptotics spaces, i.e. finite dimensional spaces that describe the domains of the integer powers of the Laplacian and are determined by the local geometry around the singularity, is preserved under the evolution.
Introduction
The domains of the complex powers of sectorial operators play an important role in the regularity theory of partial differential equations (PDEs). Concerning the theory of maximal regularity, under elementary embeddings (see e.g. [2, (I.2.5.2)] and [2, (I.2.9.6)]), they determine the interpolation spaces of Banach couples appearing in the evolution of linear and quasilinear parabolic problems (see e.g. [1, Theorem 7.1], [4, Theorem 2.7] or [7, Theorem 2.1]). In the case of usual elliptic differential operators, the domains of the complex powers can be determined by standard pseudodifferential theory and in general they are described by fractional Sobolev spaces. Of particular interest is the case of degenerate differential operators, where such knowledge can be applied to the study of PDEs on singular spaces.
In this article we are interested in conically degenerate operators. This class contains the naturally appearing operators on manifolds with conical singularities. It is well known (see e.g. [10], [14], [19] or [20]) that a cone differential operator which satisfies certain ellipticity assumptions (i.e. is B-elliptic) and is defined as an unbounded operator on an arbitrary (weighted) Mellin-Sobolev space has several closed extensions, called realizations, which differ by a finite dimensional space, called the asymptotics space. The structure of the asymptotics space is standard and is determined by the coefficients of the operator close to the singularity. Therefore, the description, under continuous embeddings, of the complex power domains of sectorial B-elliptic cone differential operators is related to the interpolation between Mellin-Sobolev spaces and direct sums of Mellin-Sobolev spaces and asymptotics spaces. We obtain such an estimate by using only elementary interpolation theory and basic facts concerning the description of the maximal domain of a B-elliptic cone differential operator. Embeddings for the domains of the complex powers are then recovered by standard theory in the case of a sectorial closed extension. Concerning the property of sectoriality, we point out the associated theory and provide an example related to the Laplacian on a conic manifold.
As a next step, we apply the above result to the theory of degenerate parabolic PDEs. In order to emphasize concrete results, we deal only with the linear theory. More precisely, we consider the heat equation on a manifold with conical singularities. However, our complex power result can also be applied to non-linear problems on such spaces, such as the porous medium equation or the Cahn-Hilliard equation, and can provide information concerning the asymptotic behavior of the solutions close to the singularity, as we remark later on.
Concerning now the heat equation on a conic manifold, instead of choosing as usual an appropriate realization of the Laplacian on a Mellin-Sobolev space, as e.g. in [5], [8], [17] and [19], we choose a closed extension by passing arbitrarily high along the power scale of a specific R-sectorial realization of the Laplacian. In this way, using standard maximal L q -regularity theory, we show well-posedness of the problem on spaces consisting of sums of Mellin-Sobolev spaces and asymptotics spaces.
As a consequence, it is shown that an appropriate decomposition of the initial data in terms of a Mellin-Sobolev space part and an asymptotics space part is preserved under the evolution. Furthermore, the asymptotics space expansion of the solution can become arbitrarily long, and hence can provide arbitrarily sharp information concerning the asymptotic behavior of the solution close to the conical tips, depending on the regularity of the right hand side of the heat equation. Hence, e.g. in the case of the homogeneous problem, the complete asymptotic expansion of the solution close to the singularity is recovered, which turns out to depend on the local geometry of the cone.
Acknowledgment. We thank Elmar Schrohe for helpful discussions concerning Theorem 3.3.
Basic Maximal L q -regularity Theory for Linear Parabolic Problems
Let X_1 ֒→ X_0 be a continuously and densely injected complex Banach couple. We start with a basic decay property of the resolvent of an operator which allows us to define complex powers.
Definition 2.1 (Sectorial operator). Let θ ∈ [0, π), let S_θ = {λ ∈ C | |arg(λ)| ≤ θ} ∪ {0} and denote by P(θ) the class of all closed densely defined linear operators A in X_0 such that S_θ ⊂ ρ(−A) and

(1 + |λ|)‖(A + λ)^{−1}‖_{L(X_0)} ≤ K_{A,θ}, λ ∈ S_θ,

for some K_{A,θ} ≥ 1 that is called sectorial bound of A and depends on A and θ. The elements in P(θ) are called (invertible) sectorial operators of angle θ.
Definition 2.2 (R-sectorial operator). Let {ε_k}_{k=1}^∞ be the sequence of the Rademacher functions and θ ∈ [0, π). An operator A ∈ P(θ) is called R-sectorial of angle θ if, for any choice of λ_1, ..., λ_N ∈ S_θ\{0} and x_1, ..., x_N ∈ X_0, N ∈ N\{0}, we have that

‖Σ_{k=1}^{N} ε_k λ_k (A + λ_k)^{−1} x_k‖_{L^2(0,1;X_0)} ≤ R_{A,θ} ‖Σ_{k=1}^{N} ε_k x_k‖_{L^2(0,1;X_0)},

for some constant R_{A,θ} ≥ 1 that is called R-sectorial bound of A and depends on A and θ.
For any q ∈ (1, ∞) and φ ∈ (0, 1), denote by L^q(0, T; X_0) the X_0-valued L^q-space and by (·,·)_{φ,q} the real interpolation functor of exponent φ and parameter q. Consider the abstract linear parabolic problem

u'(t) + Au(t) = f(t), t ∈ (0, T), u(0) = u_0, (2.1)

where T > 0, f ∈ L^q(0, T; X_0), u_0 ∈ (X_1, X_0)_{1/q,q} and −A : X_1 → X_0 is the infinitesimal generator of a bounded analytic semigroup on X_0. The operator A has maximal L^q-regularity if for some q (and hence for all, according to a result by G. Dore) the problem (2.1) admits a unique solution

u ∈ W^{1,q}(0, T; X_0) ∩ L^q(0, T; X_1) (2.2)

that depends continuously on the data f, u_0. Finally, recall the standard embedding of the maximal L^q-regularity space (see e.g. [2, Theorem III.4.10.2]), namely

W^{1,q}(0, T; X_0) ∩ L^q(0, T; X_1) ֒→ C([0, T]; (X_1, X_0)_{1/q,q}). (2.3)

If we restrict to Banach spaces belonging to the class of UMD (i.e. having the unconditionality of martingale differences property, see [2, Section III.4.4]), then R-sectoriality implies maximal L^q-regularity (it actually characterizes this property in UMD spaces due to [22, Theorem 4.2]), as we can see from the following fundamental result.

Theorem 2.3 (see [22, Theorem 4.2]). Let X_0 be a UMD Banach space and let A be an R-sectorial operator in X_0 of angle θ ≥ π/2. Then A has maximal L^q-regularity.

Finally, we note that the property of maximal L^q-regularity is preserved on power scales of R-sectorial operators in UMD spaces, as we deduce from the following elementary result.
Lemma 2.4. Let X_0 be a complex Banach space and let A : D(A) → X_0 be an R-sectorial operator of angle θ ∈ [0, π). For any k ∈ N\{0} let A_k be the restriction of A to D(A^{k+1}), regarded as an operator in D(A^k) endowed with the graph norm. Then A_k is R-sectorial of angle θ and R_{A_k,θ} ≤ C for some constant C ≥ 1.
Complex Powers for Cone Differential Operators
Let B be a smooth (n + 1)-dimensional, n ≥ 1, manifold with possibly disconnected closed (i.e. compact without boundary) boundary ∂B. Endow B with a Riemannian metric g which in a collar neighborhood [0, 1) × ∂B of the boundary admits the form

g = dx^2 + x^2 h(x),

where (x, y) ∈ [0, 1) × ∂B are local coordinates and x ↦ h(x) is a family of Riemannian metrics on the cross section ∂B that is smooth up to x = 0 and does not degenerate up to x = 0. We call B = (B, g) a conic manifold or manifold with conical singularities, the singularities being identified with the set {0} × ∂B. When h is independent of x we have straight conical tips; otherwise the tips are warped.
Denote by ∂B = ∂B_1 ∪ ... ∪ ∂B_{k_B} the decomposition of the boundary into its connected components, where the ∂B_i are smooth, closed and connected. The naturally appearing differential operators on B degenerate and belong to the class of cone differential operators or conically degenerate operators. A cone differential operator A of order µ ∈ N is a µ-th order differential operator with smooth coefficients in the interior B° of B such that, when it is restricted to the collar part (0, 1) × ∂B, it admits the form

A = x^{−µ} Σ_{k=0}^{µ} a_k(x)(−x∂_x)^k, where a_k ∈ C^∞([0, 1); Diff^{µ−k}(∂B)).

If the a_k, k ∈ {0, ..., µ}, do not depend on x close to zero, we say that A has x-independent coefficients.
We associate two special symbols to a cone differential operator. If (ξ, η) are the corresponding covariables to the local coordinates (x, y) ∈ [0, 1) × ∂B near the boundary, then we define the rescaled symbol by

σ̃_ψ(A)(y, η, ξ) = Σ_{k=0}^{µ} σ_ψ(a_k)(0, y, η)(−iξ)^k,

where σ_ψ(a_k) denotes the principal symbol of a_k. Furthermore, the following holomorphic family of differential operators defined on the boundary,

σ_M(A)(λ) = Σ_{k=0}^{µ} a_k(0)λ^k : H^s_p(∂B) → H^{s−µ}_p(∂B), λ ∈ C,

is called conormal symbol of A, where ∂B = (∂B, h(0)), s ∈ R, p ∈ (1, ∞) and H^s_p(∂B) denotes the usual Sobolev space. We may then extend the notion of ellipticity to the case of conically degenerate differential operators as follows.

Definition 3.1 (B-ellipticity). A cone differential operator A is called B-elliptic if its principal pseudodifferential symbol σ_ψ(A) is invertible on T*B°\{0} and, in addition, its rescaled symbol σ̃_ψ(A)(y, η, ξ) is invertible for all y ∈ ∂B and all (η, ξ) ≠ 0.

Cone differential operators act naturally on scales of weighted Mellin-Sobolev spaces, which are defined as follows. Fix a cut-off function ω near the boundary, i.e. a smooth function on B with values in [0, 1] that equals 1 near {0} × ∂B and is supported in the collar [0, 1) × ∂B. Further, take a covering κ_i : U_i ⊆ ∂B → R^n, i ∈ {1, ..., N}, N ∈ N\{0}, of ∂B by coordinate charts and let {φ_i}_{i∈{1,...,N}} be a subordinated partition of unity. For any s ∈ R and p ∈ (1, ∞) let H^{s,γ}_p(B) be the space of all distributions u on B° such that

‖u‖_{H^{s,γ}_p(B)} = Σ_{i=1}^{N} ‖S_γ (1 ⊗ κ_i)_* (ωφ_i u)‖_{H^s_p(R^{1+n})} + ‖(1 − ω)u‖_{H^s_p(B)}

is defined and finite, where * refers to the push-forward of distributions and S_γ denotes the conjugation by the weight function combined with the change to cylindrical coordinates (x, y) ↦ (−log(x), y) (see e.g. [19]). The space H^{s,γ}_p(B) is independent of the choice of the cut-off function ω, the covering {κ_i}_{i∈{1,...,N}} and the partition {φ_i}_{i∈{1,...,N}}.
Hence, a cone differential operator A of order µ induces a bounded map

A : H^{s+µ,γ+µ}_p(B) → H^{s,γ}_p(B), s, γ ∈ R, p ∈ (1, ∞).

We recall next some basic facts concerning the domain of a B-elliptic cone differential operator A. Further details can be found in [6], [10], [14], [18], [19] or [20]. We regard A as an unbounded operator in H^{s,γ}_p(B). The domain of the minimal extension (i.e. the closure) A_min of A is given by

D(A_min) = { u ∈ ∩_{ε>0} H^{s+µ,γ+µ−ε}_p(B) | Au ∈ H^{s,γ}_p(B) }.

If in addition the conormal symbol of A is invertible on the line {λ ∈ C | Re(λ) = (n+1)/2 − γ − µ}, then we have that D(A_min) = H^{s+µ,γ+µ}_p(B). Concerning the domain of the maximal extension A_max of A, which as usual is defined by

D(A_max) = { u ∈ H^{s,γ}_p(B) | Au ∈ H^{s,γ}_p(B) },

we have that

D(A_max) = D(A_min) ⊕ E_{A,γ}.

Here E_{A,γ} is a finite-dimensional space independent of s, called asymptotics space, which consists of linear combinations of C^∞(B°) functions that vanish on B\([0, 1) × ∂B) and in local coordinates (x, y) on the collar part (0, 1) × ∂B are of the form

ω(x)c(y)x^{−ρ} log^m(x), (3.6)

where c ∈ C^∞(∂B), ρ ∈ C and m ∈ N.
For the powers ρ of x^{−1} describing E_{A,γ} we have that ρ ∈ Q_{A,γ}, where Q_{A,γ} is a finite set of points in the strip

{ λ ∈ C | (n+1)/2 − γ − µ < Re(λ) < (n+1)/2 − γ } (3.7)

that is determined explicitly by the poles of a recursively defined family of symbols built out of the inverted conormal symbol (σ_M(A)(·))^{−1} and its shifts, where by T_σ, σ ∈ R, we denote the action (T_σ f)(λ) = f(λ + σ) (see e.g. [19, (2.7)-(2.8)]). The logarithmic powers m are related to the orders of the above poles. If A has x-independent coefficients, then Q_{A,γ} coincides with the set of poles of (σ_M(A)(·))^{−1} in the strip (3.7) and

E_{A,γ} = ⊕_{ρ∈Q_{A,γ}} E_ρ, (3.8)

where E_ρ, ρ ∈ Q_{A,γ}, is a finite-dimensional space independent of s consisting of C^∞(B°) functions that vanish on B\([0, 1) × ∂B) and in local coordinates (x, y) on the collar part (0, 1) × ∂B are of the form

ω(x) Σ_{i=0}^{m_ρ} c_i(y) x^{−ρ} log^i(x),

with c_i ∈ C^∞(∂B) and certain m_ρ ∈ N depending on the order of ρ.
Under (3.8), let the closed extension A of A in H^{s,γ}_p(B) be given by

D(A) = D(A_min) ⊕ ⊕_{ρ∈Q̄_{A,γ}} Ē_ρ, (3.10)

where Q̄_{A,γ} ⊆ Q_{A,γ} is a given subset and Ē_ρ is a subspace of E_ρ. The maximal domain structure, together with standard properties of interpolation spaces, implies the following result concerning real interpolation between Mellin-Sobolev spaces and direct sums of Mellin-Sobolev spaces and asymptotics spaces, which is inspired by [17, Lemma 5.2].

Theorem 3.3. Let s, γ ∈ R, θ ∈ (0, 1), p, q ∈ (1, ∞), let A be B-elliptic with x-independent coefficients and let A be the closed extension (3.10). Then, for any ε > 0 the following embeddings hold:

H^{s+θµ+ε,γ+θµ+ε}_p(B) ֒→ (H^{s,γ}_p(B), D(A))_{θ,q} ֒→ H^{s+θµ−ε,γ+θµ−ε}_p(B) ⊕ ⊕_{ρ∈Q̄_{A,γ}} Ē_ρ.

The proof interpolates between the minimal and the maximal domain; in particular, the first embedding follows by (3.11), (3.12) and (3.13).
By standard properties of interpolation spaces and according to (3.9), the operator (3.15) maps the left hand side of (3.14) to the corresponding Mellin-Sobolev space, for any δ > 0 and certain η ∈ N. Therefore, by the construction of (3.15), for the last sum in (3.14) we deduce that σ ∈ Q_{A,γ}.

Assume now that A has x-independent coefficients and let A be the closed extension (3.10). Take k ∈ N, k ≥ 1, and consider the integer powers A^k of A defined as usual by

D(A^k) = { u ∈ D(A^{k−1}) | A^{k−1}u ∈ D(A) }.

Since A^k is also B-elliptic, by regarding A^k as a closed extension of A^k in H^{s,γ}_p(B) we have that

D(A^k) = D((A^k)_min) ⊕ ⊕_{ρ∈Q̄_{A^k,γ}} F̄_ρ, (3.16)

where Q̄_{A^k,γ} ⊆ Q_{A^k,γ} and F̄_ρ ⊆ F_ρ denotes the usual asymptotics space corresponding to the pole ρ. Recall that for the minimal domain in general we have that

H^{s+kµ,γ+kµ}_p(B) ⊆ D((A^k)_min) ⊆ H^{s+kµ−ε,γ+kµ−ε}_p(B) for every ε > 0.

Let s, γ ∈ R, p ∈ (1, ∞), c ≥ 0, k ∈ N, k ≥ 1 and z ∈ C with 0 < Re(z) < k. Assume that A is B-elliptic with x-independent coefficients and that, for the closed extension A given by (3.10), A + c is sectorial, i.e. it belongs to the class P(0). Then, according to (3.16), for all ε > 0 we have that

H^{s+µRe(z)+ε,γ+µRe(z)+ε}_p(B) ֒→ D((A + c)^z) ֒→ H^{s+µRe(z)−ε,γ+µRe(z)−ε}_p(B) ⊕ ⊕_{ρ∈Q̄_{A^k,γ}} F̄_ρ.

As examples of sectorial closed extensions of B-elliptic cone differential operators we refer to [8, Proposition 1] and [19, Theorem 4.3]. A typical one is obtained by the Laplacian ∆ on B induced by the metric g. In the collar neighborhood (0, 1) × ∂B near the boundary this operator is of the form

∆ = x^{−2}( (x∂_x)^2 + (n − 1)(x∂_x) + ∆_{h(x)} ) + x^{−1}b(x)(x∂_x),

where ∆_{h(x)} is the Laplacian on the cross section ∂B induced by the metric h(x) and b is smooth up to x = 0 and vanishes when h is independent of x. The conormal symbol of ∆ is given by σ_M(∆)(λ) = λ^2 − (n − 1)λ + ∆_{h(0)}. Clearly, (σ_M(∆)(λ))^{−1} is defined as a meromorphic in λ ∈ C family of pseudodifferential operators with values in L(H^s_p(∂B), H^{s+2}_p(∂B)), s ∈ R, p ∈ (1, ∞). More precisely, if σ(∆_{h(0)}) = {λ_i}_{i∈N} is the spectrum of ∆_{h(0)}, then the poles of (σ_M(∆)(·))^{−1} coincide with the set

{ (n−1)/2 ± √(((n−1)/2)^2 − λ_i) | i ∈ N }.

Therefore, the pole zero of (σ_M(∆)(·))^{−1} is always contained in the strip (3.7) provided that γ ∈ ((n−3)/2, (n+1)/2). In this case, denote again by C the subspace of E_{∆,γ} in (3.6) obtained under the choice ρ = m = 0 and c|_{∂B_i} = c_i, c_i ∈ C, i ∈ {1, ..., k_B}, i.e. C consists of smooth functions that are locally constant close to the boundary.
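For concreteness, the location of these poles follows from a direct computation: restricting σ_M(∆)(λ) to a λ_i-eigenspace of ∆_{h(0)} (with the sign convention that the eigenvalues λ_i are non-positive, which is an assumption we make explicit here) reduces the inversion to a scalar quadratic equation, namely

\[
\lambda^{2}-(n-1)\lambda+\lambda_{i}=0
\quad\Longleftrightarrow\quad
\lambda=\frac{n-1}{2}\pm\sqrt{\Big(\frac{n-1}{2}\Big)^{2}-\lambda_{i}},\qquad i\in\mathbb{N}.
\]

In particular, for λ_0 = 0 the two roots are 0 and n − 1, and the root 0 lies in the strip (3.7) (with µ = 2) exactly when (n−3)/2 < γ < (n+1)/2.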
Such a realization can satisfy the property of maximal L^q-regularity, as we can see from the following result.

Theorem 3.6. Let s ≥ 0, p ∈ (1, ∞) and let the weight γ be chosen as

(n−3)/2 < γ < min{ (n+1)/2, −1 + √(((n−1)/2)^2 − λ_1) }, (3.18)

where λ_1 is the greatest non-zero eigenvalue of the boundary Laplacian ∆_{h(0)}. Consider the closed extension ∆ of the Laplacian in H^{s,γ}_p(B) with domain

D(∆) = H^{s+2,γ+2}_p(B) ⊕ C. (3.19)

Then, for any θ ∈ [0, π) there exists some c > 0 such that c − ∆ is R-sectorial of angle θ.
The Heat Equation on Manifolds with Conical Singularities
We consider the following well known linear parabolic equation, which describes the heat distribution in a given domain:

u'(t) − ∆u(t) = f(t), t ∈ (0, T), (4.20)
u(0) = u_0, (4.21)

for appropriate functions f and u_0. The above problem, called heat equation, was treated in [8], [17] and [19] on manifolds with straight conical tips, where existence, uniqueness and maximal L^q-regularity of the solution on Mellin-Sobolev spaces were shown. More precisely, in [8, Theorem 6] maximal L^q-regularity for (4.20)-(4.21) is shown by employing the minimal extension of the Laplacian on a weighted L^p-space. Then, this result is extended to dilation invariant extensions of the Laplacian in [19, Theorem 5.8]. In [17] a non-linear generalization, called porous medium equation, is considered, and maximal L^q-regularity is shown on arbitrary order Mellin-Sobolev spaces [17, Theorem 4.2] as well as on spaces with asymptotics in the sense of the domain of the bi-Laplacian [17, Proposition 7.5]. The same problem is treated in [12] on surfaces with straight conical tips by using the Friedrichs extension of the Laplacian. Finally, we also refer to [5] for an alternative approach to the problem with similar results, as well as to [21] for the properties of the bi-harmonic heat kernel on such spaces.
In order to study the evolution on asymptotics spaces, we consider here the same problem with the difference that the Laplacian is chosen on the power scale defined by the realization (3.19).
The maximal L^q-regularity of the solution obtained in the above theorem, together with the interpolation results of the previous section, shows that the asymptotics space decomposition of the initial data u_0 in (4.21) is preserved under the evolution induced by (4.20)-(4.21), at least when h is independent of x, so that ∆ has x-independent coefficients. Let s ≥ 0, k ∈ N, k ≥ 1, p, q ∈ (1, ∞), let γ be chosen as in (3.18) and let ε > 0. Suppose that the asymptotics spaces involving the initial data determine the domain of the k-th power ∆^k of the realization (3.19), i.e. we have that

u_0 ∈ (H^{s,γ}_p(B), D(∆^k))_{1−1/q,q} ⊆ H^{s+2k(1−1/q)−δ,γ+2k(1−1/q)−δ}_p(B) ⊕ ⊕_{ρ∈Q̄_{∆^k,γ}} F̄_ρ

for all δ > 0, with Q̄_{∆^k,γ} ⊆ Q_{∆^k,γ} and F̄_ρ ⊆ F_ρ according to (3.10). Then, for each T > 0, the unique solution u of the problem (4.20)-(4.21) on [0, T) × B obtained by Theorem 4.1 satisfies

u(t) ∈ H^{s+2k(1−1/q)−ε,γ+2k(1−1/q)−ε}_p(B) ⊕ ⊕_{ρ∈Q̄_{∆^k,γ}} F̄_ρ for each t ∈ [0, T).

From the above result we deduce that the more regularity we have for f and u_0, the better information we obtain concerning the asymptotic behavior of the solution u of (4.20) close to {0} × ∂B. In the case of the homogeneous heat equation, i.e. when f = 0, we can recover the complete asymptotic expansion of the solution in terms of the local geometry around the singularities; in this case, moreover, for any k ∈ N we have that u ∈ C^∞((0, ∞); D(∆^k)).
Remark 4.4. The porous medium equation on manifolds with straight conical tips was studied in [17]. In [17, Section 7] the equation was considered in sums of higher order Mellin-Sobolev spaces and asymptotics spaces, and existence, uniqueness and maximal L^q-regularity of the solution were shown (see [17, Theorem 7.8]). Furthermore, the Cahn-Hilliard equation on manifolds with possibly warped conical tips was considered in [16], and similar results were shown in terms of higher order Mellin-Sobolev spaces (see [16, Theorem 4.6] and [16, Theorem 5.9]). By the embedding (2.3), Remark 3.4 and [9, Corollary 7.3] combined with [17, Theorem 7.8] and [16, Theorem 4.6], we can obtain in each case more precise information concerning the asymptotic behavior of the solutions close to the singularities in terms of the description of the domain of the bi-Laplacian. | 2018-04-17T12:21:14.000Z | 2017-02-01T00:00:00.000 | {
"year": 2017,
"sha1": "d09e2b9f5622b331f6f255fe7dea2343800be579",
"oa_license": "publisher-specific, author manuscript",
"oa_url": "https://doi.org/10.1090/proc/13986",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "d09e2b9f5622b331f6f255fe7dea2343800be579",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
2605433 | pes2o/s2orc | v3-fos-license | An Enhanced Three-Factor User Authentication Scheme Using Elliptic Curve Cryptosystem for Wireless Sensor Networks
As an essential part of the Internet of Things (IoT), wireless sensor networks (WSNs) have touched every aspect of our lives, such as health monitoring, environmental monitoring and traffic monitoring. However, due to their openness, wireless sensor networks are vulnerable to various security threats. User authentication, as the first fundamental step to protect systems from various attacks, has attracted much attention, and numerous user authentication protocols armed with formal proofs are springing up. Recently, two biometric-based schemes were proposed and claimed to be resistant to the known attacks, including offline dictionary attack, impersonation attack and so on. However, after scrutinizing these two schemes, we found them not as secure as claimed, and demonstrated that they suffer from various attacks, such as offline dictionary attack and impersonation attack, and fail to provide user anonymity and forward secrecy. Furthermore, we propose an enhanced scheme to overcome the identified weaknesses, and prove its security via Burrows-Abadi-Needham (BAN) logic and heuristic analysis. Finally, we compare our scheme with other related schemes, and the results show the superiority of our scheme.
Introduction
With their strong self-organization, low cost, limited resources and data-centered design, wireless sensor networks (WSNs) have been widely deployed in harsh environments such as military, industrial and transportation settings, and even battlefields. Unlike systems with distributed architectures [1,2], there are three participants in WSNs. Each participant has different computational and storage power, and only the gateway can store the long-term key. Furthermore, most sensor nodes are distributed in an unattended environment, which means a sensor node is prone to be attacked. It should also be noted that the communications between users and sensor nodes are usually carried over an open channel, and an adversary can eavesdrop on or modify messages in the network. Therefore, the privacy and security of WSNs are always thorny and vital issues. To deal with these security issues, it is common practice to establish a security mechanism that shares a secret key between the communicating parties and encrypts the data from remote parties. In this context, a remote user authentication protocol [3][4][5][6][7] with a session key is an essential security strategy for secure and practical communication over an untrusted and complicated network. It guarantees that the communicating parties can verify the validity of each other and negotiate a session key for encrypting the future transmitted messages. The major challenge in designing an authentication protocol in WSNs is to balance the relationship between security, privacy and computational cost.
Generally, we authenticate a remote user from three aspects: what he knows, such as a password; what he owns, such as a smart card; and who he is, i.e., his biometrics. A scheme using "X" aspects to verify the remote user is called an "X-factor" authentication protocol. With the development of biotechnology and the increasing demands on security, three-factor (password + smart card + biometrics) user authentication schemes have become widely applied.
Related Works
In 2009, Das [8] introduced a password-based scheme with a smart card for WSNs; it aroused intense discussion and greatly promoted the development of user authentication in WSNs. Many researchers [4,[9][10][11] identified the security pitfalls in Das's scheme [8] (such as being prone to offline password guessing attack, impersonation attack and insider attack), and then proposed many enhanced versions. However, none of these schemes was secure enough to resist various attacks while achieving low computational cost.
In 2011, Fan et al. [12] criticized the weaknesses of previous schemes and designed a new scheme with lightweight operations. With its lower computational cost, their scheme seems quite suitable for a resource-limited environment such as WSNs. In 2012, Das et al. [13] proposed a new scheme which supports the dynamic addition of new nodes and involves only some lightweight operations. It has to be admitted that Das et al.'s scheme provides many desired attributes. Unfortunately, Wang et al. [14] identified that both schemes are vulnerable to many attacks: Fan et al.'s scheme [12] can neither achieve user anonymity nor avoid the smart card lost attack, the insider attack, etc.; Das et al.'s scheme cannot resist the insider attack, the smart card lost attack, etc.
In 2013, Xue et al. [15] introduced an efficient authentication scheme with admirable features and lightweight computational cost. However, it was revealed by Wang et al. [3] that this scheme fails to achieve user anonymity. Furthermore, Li et al. [16] demonstrated its vulnerability to offline dictionary attack, insider attack, stolen-verifier attack, etc., and proposed a new scheme, which is nevertheless still insecure against offline dictionary attack. In the same year, Li et al. [17] identified the weaknesses (no resistance to dictionary attack, session key disclosure attack, etc.) in Yeh et al.'s scheme [18].
In 2014, Choi et al. [19] showed that a previous scheme [20] suffers from sensor energy exhausting attack, offline password guessing attack and the session key attack, and then proposed a new scheme. After demonstrating the security flaws in Xue et al.'s scheme [15], Jiang et al. [21] also designed an improved one. However, both the scheme of Choi et al. [19] and that of Jiang et al. [21] were shown by Wu et al. [22] not to be as secure as claimed.
In 2015, He et al. [23] described a temporal-credential-based scheme for WSNs, yet it was soon shown to be subject to impersonation attack, smart card lost attack and tracking attack. In the same year, Chang et al. [24] proposed an enhanced dynamic identity authentication scheme; once again, it was proved insecure against offline password guessing attack, user impersonation attack, etc. by Jung et al. [25] and Park et al. [26]. To strengthen the security of the scheme, Jung et al. [25] and Park et al. [26] both added a biological characteristic as a new factor and proposed three-factor enhanced versions. Furthermore, they both proved the security of their schemes formally, so they were confident in the security of their schemes.
Motivations and Contributions
When revisiting Jung et al.'s scheme [25] and Park et al.'s scheme [26], it was regretful to find that the two schemes are still not as secure as claimed, though they both are equipped with complete formal proofs and, furthermore, add a biometric factor to improve the security of the previous scheme. Ironically, the two improved schemes, armed with a biometric factor and a formal proof, cannot even provide the same level of security assurance as the previous ones. We find them vulnerable to offline password guessing attack and impersonation attack, and lacking user anonymity, forward secrecy, etc.
In fact, it is pretty common that a scheme with a formal security proof is later found insecure. Though user authentication in wireless sensor networks has been developed over almost ten years since Das [8] first proposed a two-factor scheme, there is not yet a secure and practical scheme. Even more alarming is the fact that many schemes violate some basic design principles that have been proposed. Such an unsatisfactory situation prompts us to design a secure but efficient scheme for wireless sensor networks. Furthermore, a common consensus on the system architecture, adversary model and security requirements should be reached. In conclusion, our contributions are as follows:
1. We depict the system architecture, adversary model and security requirements of wireless sensor networks. Though these factors are the basis of any authentication scheme, researchers usually ignore them.
2. We demonstrate that: (1) Jung et al.'s scheme cannot resist offline password guessing attack and impersonation attack, and fails to achieve user anonymity and forward secrecy, etc.; (2) Park et al.'s scheme suffers from offline password guessing attack and provides no user anonymity. Furthermore, we explain the inherent reasons for these attacks.
3. We propose an improved scheme with various desirable attributes, and prove its security via BAN logic and heuristic analysis. Then, we compare our scheme with other related schemes.
The results show the great advantage of our scheme.
Organization of the Paper
The remainder of this paper is organized as follows: we describe the system architecture and adversary model in Section 2, and analyze Jung et al.'s scheme and Park et al.'s scheme in Sections 3 and 4, respectively; in Section 5, we propose an enhanced scheme; the security and performance analyses are given in Sections 6 and 7, respectively; and the conclusions are drawn in Section 8.
Preliminaries
This section introduces the preliminaries of user authentication schemes, including the computational problems, the system architecture, the adversary model and the security requirements.
Computational Problems
Given two large primes p and q, let F_p be a finite field, E/F_p be an elliptic curve over F_p, and G be a q-order subgroup of E/F_p. Then, for α, β ∈ Z*_p and a point P in G, we can define the discrete logarithm problems over the elliptic curve as follows:
1. Elliptic curve discrete logarithm problem: given (P, αP), it is computationally infeasible to compute α within polynomial time.
2. Elliptic curve computational Diffie-Hellman problem: given (P, αP, βP), it is computationally infeasible to compute αβP within polynomial time.
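To make these hardness assumptions concrete, the following toy Python sketch (all parameters are illustrative textbook values, far too small for real security) implements double-and-add scalar multiplication on the curve y^2 = x^3 + 2x + 2 over GF(17), with base point P = (5, 1) of prime order 19, and checks the Diffie-Hellman relation α(βP) = β(αP) = αβP that the schemes below rely on.

p, a = 17, 2          # curve y^2 = x^3 + a*x + 2 over GF(p); toy parameters
P = (5, 1)            # base point of prime order 19
O = None              # point at infinity

def add(Q, R):
    # elliptic curve point addition (chord-and-tangent rule)
    if Q is None: return R
    if R is None: return Q
    (x1, y1), (x2, y2) = Q, R
    if x1 == x2 and (y1 + y2) % p == 0:
        return O                                            # Q = -R
    if Q == R:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p    # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p           # chord slope
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def mul(k, Q):
    # double-and-add scalar multiplication k*Q
    R = O
    while k:
        if k & 1: R = add(R, Q)
        Q = add(Q, Q); k >>= 1
    return R

alpha, beta = 3, 7                        # ephemeral secrets of two parties
X_i, Y_j = mul(alpha, P), mul(beta, P)    # the points actually exchanged
assert mul(alpha, Y_j) == mul(beta, X_i)  # shared secret alpha*beta*P

Recovering alpha from (P, X_i) is the discrete logarithm problem, and recovering αβP from (P, X_i, Y_j) is the computational Diffie-Hellman problem; both are feasible here only because the toy group has 19 elements.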
System Architecture
Wireless sensor networks (as shown in Figure 1) attract worldwide attention with the prevalence of the Internet of Things (IoT). People may be more familiar with distributed systems, which involve two participants: a set of users and a single server. By contrast, there are three participants in the user authentication of WSNs: a number of sensor nodes, a gateway node and a set of users. In a wireless sensor network, tens to thousands of sensor nodes are deployed in a particular area. They work together to collect data from the physical world and have limited computing and storage power. Furthermore, they are usually left in an unattended environment, so the adversary can easily capture them to acquire secret parameters. The gateway node acts as a registration center. In WSNs, an authentication protocol usually consists of four basic phases: registration, login, verification, and password update. Sometimes, a dynamic node addition phase is suggested to meet the demand of adding new sensor nodes. In the registration phase, users and sensor nodes submit their personal information to the gateway; the gateway then issues users a smart card with some sensitive parameters physically (face to face or via mail), and distributes a shared secret key to the sensor nodes. When a user wants to access a sensor node, he/she can initiate an access request to the gateway in the login phase. After checking the legitimacy of the user, the gateway informs the corresponding sensor node about the request. Then, the user and the sensor node verify the legitimacy of each other, via the gateway (or directly), and negotiate a session key in the verification phase. The user can change the password in the password update phase. In addition, new sensor nodes can join the network in the dynamic node addition phase.
Adversary Models
When considering cryptanalysis of user authentication schemes in WSNs, the adversary A is supposed to have the following capacities:
1. A can fully control the open channel, i.e., A can intercept, eavesdrop on, modify, replay or delete the messages transmitted in it.
2. A can enumerate all the items in D_pw * D_id in polynomial time, where D_pw and D_id denote the password space and the identity space, respectively [28,29].
3. When evaluating forward secrecy, A can get the long-term secret key [28,30].
4. A can acquire the password of a legitimate user by a malicious card reader, or get the parameters in the smart card, but cannot achieve both [28,30].
5. A can get the data in sensor nodes, for they are usually left unattended [3,31].
6. A can get the past session keys [30].
7. A can get the user's biometrics [29,32].
The capacity of acquiring biometrics is the most controversial. Many researchers view biometrics as a quite strong factor that cannot be broken. However, this is impractical. For example, the adversary can at least get the biometrics via a malicious terminal. Moreover, unlike the password, which may change across different applications, the biometrics is unique to every particular person. Thus, the adversary can collect one's biometrics via any biometric-based terminal. This indicates that the adversary can acquire both the password and the biometrics, or both the smart card and the biometrics. Furthermore, this hypothesis has been accepted in many schemes, such as [29,32,33].
It should be noted that a secure three-factor authentication scheme should guarantee that the breaking of any two of the three factors neither exposes the remaining one nor compromises the security of the system.
Security Requirements
Understanding the security requirements of user authentication is a fundamental step to analyze or design a protocol. Thus, we summarize the security requirements of user authentication in wireless sensor networks:
S1 Mutual authentication. It is an essential requirement in all authentication schemes. It requires the participants to authenticate each other [34,35].
S2 User anonymity. It is a privacy protection requirement for individual users, not directly related to system security. Many systems have such a requirement, including distributed systems [36], while privacy protection in wireless sensor networks is more pressing, since the information among sensor nodes (usually unreliable) is transmitted in a broadcast fashion. Protecting user anonymity means stopping A from computing the user's identity or linking transcripts to the same user. Note that such a requirement does not apply to the gateway, but to sensor nodes, for they are untrusted.
S3 Key agreement. It is also an essential requirement in most authentication schemes. The session key is used to encrypt the further communications to achieve confidentiality.
S4 Forward secrecy. It addresses the final collapse of the whole system, and it requires that the previous communications remain secure even if the system collapses (usually meaning that the adversary owns the long-term key of the system).
S5 User friendliness. It is an additional requirement to improve the user experience with the development of the network. A user friendly scheme usually: lets the user U_i select the password freely and change it locally [30]; and, when U_i finds the smart card insecure, lets him/her revoke it and re-register to the system with the original identity.
S6 No stolen-verifier attack. It is a requirement related to the security of the whole system (as are the following attacks), which requires that the verifier table does not expose any sensitive information allowing A to impersonate the participants or learn/control the session key.
S7 No insider attack. It requires that the participants cannot get any sensitive information which may provoke an attack.
S8 No dictionary attack. It requires that A cannot conduct a brute-force attack.
S9 No replay attack. It stops A from conducting an attack via replaying a history message, which requires that the participants can check the freshness and validity of the received message.
S10 No parallel session attack. This requirement is a bit similar to the replay attack, but it considers a condition where A conducts an attack via initiating multiple sessions simultaneously.
S11 No de-synchronization attack. A de-synchronization attack in wireless sensor networks is more destructive than in traditional networks, since a gateway may connect to even hundreds of sensor nodes. It requires that the parameters among corresponding participants remain consistent.
S12 No impersonation attack. It is a very important requirement in authentication, which requires that an outside adversary (an inside adversary has been considered in the insider attack) is not able to impersonate any participant. A scheme resistant to impersonation attack requires that the participants can verify whether the corresponding communication party is a counterfeit one. Note that the occasion where A performs a user impersonation attack using a password obtained from a dictionary attack is not included; such an attack belongs to the dictionary attack.
S13 No known key attack. It is a requirement related to the session key: A, who knows the current session key, cannot compute the session keys of other sessions.
Cryptanalysis of Jung et al.'s Scheme
In 2017, Jung et al. [25] demonstrated several attacks against Chang et al.'s [24] two-factor user authentication scheme in WSNs. To improve the security and practicability of the scheme, they devised an enhanced version of Chang et al.'s scheme [24] by "employing biometrics information with the biohashing technique". They proved their scheme secure against various attacks, such as offline dictionary attack, using the Burrows-Abadi-Needham (BAN) logic. However, as we will show in this section, Jung et al.'s scheme still suffers from offline dictionary attack, impersonation attack, etc., which makes it even less secure than the previous one. For convenience of illustration, some notations are listed in Table 1.

Table 1. Notations.
h(·): collision-free one-way hash function
Gen(BIO_i): one part of the fuzzy extraction function; outputs a biometric key R_i and a helper string P_i
Rep(BIO_i, P_i): one part of the fuzzy extraction function; outputs the biometric key R_i produced in Gen(BIO_i)
→: an insecure channel
⇒: a secure channel
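As an aside, the Gen/Rep pair in Table 1 can be pictured with a simplified construction. The following Python sketch is a didactic toy only; real fuzzy extractors use proper error-correcting codes and randomness extraction. It masks a repetition-encoded random key with the biometric reading, so that a slightly noisy re-reading of the biometrics still reproduces the same key R_i.

import os

REP = 9  # repetition factor: each key bit tolerates up to 4 flipped reading bits

def gen(bio_bits):
    # Gen(BIO_i): returns a biometric key R_i and a public helper string P_i
    key = [os.urandom(1)[0] & 1 for _ in range(len(bio_bits) // REP)]
    code = [b for b in key for _ in range(REP)]       # repetition encoding
    helper = [c ^ b for c, b in zip(code, bio_bits)]  # mask code with the reading
    return key, helper

def rep(bio_bits, helper):
    # Rep(BIO_i', P_i): majority-decodes the unmasked codeword to recover R_i
    noisy = [h ^ b for h, b in zip(helper, bio_bits)]
    return [int(2 * sum(noisy[i:i + REP]) > REP) for i in range(0, len(noisy), REP)]

bio = [os.urandom(1)[0] & 1 for _ in range(90)]            # enrollment reading
R_i, P_i = gen(bio)
noisy_bio = bio[:]; noisy_bio[3] ^= 1; noisy_bio[40] ^= 1  # two reading errors
assert rep(noisy_bio, P_i) == R_i                          # same key is recovered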
A Brief Review of Jung et al.'s Scheme
In this section, we briefly review Jung et al.'s scheme [25]. Their scheme consists of four phases: registration, login, verification and password change. The password change phase is omitted, since it has little relevance to this work.
Registration Phase
In Jung et al.'s paper, there is only a user registration phase, in which U_i registers at GW and receives a smart card containing the parameters {A_i, E_i, C_i, D_i}. However, according to the paper, the sensor node S_j preserves a private key X_sj, so we deduce that the sensor node registration phase was omitted. For completeness, we add it as below:
1. S_j ⇒ GW: {SID_j}.
2. GW computes X_sj = h(SID_j||x) and issues X_sj to S_j via a secure channel.
3. S_j stores X_sj as a secret key.
Login Phase and Verification Phase
1. U_i → GW: {DID_i, M_{Ui,G}, C_i, T_1}. U_i inputs ID_i and PW_i, and his biometrics BIO_i; then, the smart card computes the login message and sends it to GW.
2. GW first checks the freshness of T_1, then computes and forwards the corresponding verification message to S_j.
3. S_j first checks T_2, and computes its response.
4. Finally, U_i checks the reply; if the check succeeds, U_i believes the legitimacy of GW and the authentication phase ends successfully. Otherwise, the authentication fails.
Security Flaws in Jung et al.'s Scheme
Jung et al. [25] criticized that Chang et al.'s scheme [24] fails to resist offline password guessing attack and the session key attack. Thus, they added a new factor to enhance the security of the previous two-factor scheme, forming a three-factor one. Despite being armed with the biometric factor and a provable security proof, their scheme suffers from the same (even more serious) security issues.
Offline Dictionary Attack
Offline dictionary attack is exactly what most schemes suffer from, and resisting it is the major security requirement of a user authentication protocol. Jung et al. [25] showed that Chang et al.'s scheme [24] cannot resist this attack once the adversary breaches the victim's smart card and eavesdrops on the messages from the open channel. Unfortunately, as we show below, the same attack also works on Jung et al.'s own scheme. In addition, Jung et al.'s scheme is vulnerable to other kinds of offline dictionary attacks with less attack cost.
According to the adversary capabilities mentioned in Section 2.3, it is natural to suppose that the adversary A somehow possessed U_i's smart card and then revealed the message {A_i, E_i, C_i, D_i} in it; acquired U_i's biometrics BIO_i by a malicious terminal or other ways; and intercepted the transcripts {DID_i, M_{Ui,G}, C_i, T_1} via the public channel. Then, A can obtain U_i's password PW_i as follows:
1. Guesses the value of PW_i to be PW*_i and ID_i to be ID*_i from the dictionary spaces D_pw and D_id, respectively. In fact, according to Wang et al. [28], once an adversary picks up the victim's (U_i's) smart card, it is easy to learn the corresponding identity ID_i of the user U_i.
2.-6. Computes HPW*_i = h(PW*_i||H(BIO_i)) and, step by step, the remaining login parameters, ending with the candidate verifier M*_{Ui,G}, where T_1 is from the public channel.
7. Verifies the correctness of PW*_i and ID*_i by checking if the computed M*_{Ui,G} is equal to the intercepted M_{Ui,G}.
8. Repeats Steps 1-7 of this procedure until the correct value of PW_i and ID_i is found.
The time complexity of the above attack is O(|D_pw| * |D_id| * 5T_H), where T_H is the running time of a hash computation and |D_pw| denotes the number of passwords in D_pw. |D_pw| and |D_id| are very limited, generally |D_id| < |D_pw| < 10^6 [30,37], so the above attack is quite efficient.
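The structure of such an attack is easy to picture in code. In the following Python sketch everything is a made-up stand-in (the toy dictionaries, the aux material and the forge_verifier chain do not reproduce Jung et al.'s actual formulas); it only illustrates why the cost grows as |D_id| * |D_pw| hash chains.

import hashlib, itertools

def h(*parts):  # stand-in for the scheme's one-way hash
    return hashlib.sha256("|".join(parts).encode()).hexdigest()

D_id = ["alice", "bob", "carol"]        # toy identity space
D_pw = ["123456", "qwerty", "hunter2"]  # toy password space

def forge_verifier(idx, pwx, aux, T1):
    # placeholder for Steps 2-6: rebuild the verifier from a guess plus the
    # smart-card and biometric material assumed to be in A's hands
    return h(idx, pwx, aux, T1)

aux, T1 = "smart-card+biometric-material", "T1"
M_intercepted = forge_verifier("bob", "qwerty", aux, T1)  # victim's transcript

for idx, pwx in itertools.product(D_id, D_pw):  # |D_id| * |D_pw| trials
    if forge_verifier(idx, pwx, aux, T1) == M_intercepted:
        print("recovered:", idx, pwx)
        break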
Besides the above kind of offline dictionary attack, Jung et al.'s scheme suffers from another kind of offline dictionary attack, where the adversary A obtains the victim's smart card and the biometrics BIO_i. Then, A can conduct the attack as follows (Steps 1-5 are the same as in the above attack, so they are omitted):
6.-7. Computes the local verifier E*_i from the guessed values and checks it against E_i stored in the smart card.
8. Repeats Steps 1-7 of this procedure until the correct value of PW_i and ID_i is found.
The time complexity of this attack is the same as that of the former attack. Actually, these two attack strategies are not new, and many researchers [32,36,[38][39][40] have used these two attack scenarios to break numerous schemes. Nevertheless, such vulnerable designs are still rampant.
Remark 1.
As we mentioned before, a true three-factor authentication scheme should ensure that even if any two of the three factors are compromised, the remaining factor cannot be breached and the entire system remains secure. Obviously, this protocol is intrinsically not a three-factor protocol. This indicates that the biometric factor is not a master key that settles all problems in user authentication. On the contrary, a scheme armed with a biometric factor may fail to provide even the same security level as a two-factor authentication scheme. Simply adding more factors into an authentication protocol is not the essential way to design a more secure protocol.
In the scheme of Jung et al. [25], the only obstacles for an adversary A to compute the verification value M_{Ui,G} are PW_i and ID_i, so A can guess the values of PW_i and ID_i, then verify the guessed values by comparing the computed M*_{Ui,G} with the intercepted M_{Ui,G}. This is exactly the essential reason for the former kind of offline dictionary attack. Similarly, E_i is the fuse of the latter kind of attack. However, the functions of the two parameters are quite different: M_{Ui,G} is the key for GW to authenticate U_i, while E_i contributes to changing the password locally and detecting incorrect input timely. Therefore, M_{Ui,G} is indispensable to an authentication protocol, while E_i helps improve the usability of a scheme. Furthermore, the "public-key principle" is necessary to resist the former attack [41], and the combination of "honeywords" + "fuzzy-verifiers" is suggested by Wang et al. [30] to deal with the latter attack.
Impersonation Attack
Suppose an adversary A is also a legal user U_a; then he can get the secret key x as follows:
1. Extracts the random number u from D_a, where D_a is from his own smart card.
2. Computes TID_a = h(ID_a||u).
3. Computes HPW_a = h(PW_a||H(BIO_a)).
4. Computes H_ID_a = A_a ⊕ h(HPW_a||TID_a), where A_a is from the card.
5. Computes x = C_a ⊕ H_ID_a, where C_a is from the card.
Obviously, the time complexity of the above attack is O(5T_H + 3T_R), where T_R is the running time of an exclusive-or operation. With the secret x, A has the same capacity as the GW; thus, A can impersonate the GW or S_j, which indicates that the security of the whole system collapses.
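The core of the problem is that C_a masks the long-term key x with a single XOR by a value that every legitimate user can derive. The following short Python sketch, with random toy values standing in for the scheme's parameters, shows how trivially such a mask is removed.

import os

x = os.urandom(32)          # gateway long-term key (toy stand-in)
H_ID_a = os.urandom(32)     # value derivable by the legitimate user U_a
C_a = bytes(s ^ t for s, t in zip(x, H_ID_a))  # card parameter C_a = x XOR H_ID_a

recovered = bytes(s ^ t for s, t in zip(C_a, H_ID_a))  # one XOR undoes the mask
assert recovered == x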
Actually, not only can an inside legal user carry out such an attack; an adversary who has obtained the PW and ID of any user by the above offline dictionary attack can also perform it. The construction C_i = x ⊕ H_ID_i is the fundamental reason for such an attack: to a legitimate user who knows H_ID_i, the secret key x is actually exposed. Therefore, protecting such a significant parameter with only an "XOR" operation is a risky practice that is far from sufficient.
User Anonymity
User anonymity is of great significance to privacy protection. It requires that the adversary can neither confirm who transmits the messages nor recognize whether the messages come from the same user. In wireless sensor networks, numerous sensor nodes are deployed in an unattended environment, and the information is transmitted in a broadcast fashion. Therefore, user anonymity in WSNs is an essential requirement. However, in Jung et al.'s scheme [25], the user-specific parameters DID_i and C_i are transmitted via an open channel. Thus, by following DID_i or C_i, the adversary A can identify the transmitted messages carrying DID_i and C_i among the large amount of messages in the open channel, and link them to the user U_i. Then, for the purpose of marketing or even worse attempts, A can learn the user U_i's habits, such as the time he initiates an access request and the kinds of sensor nodes he visits. Therefore, Jung et al.'s scheme fails to achieve user anonymity.
Forward Secrecy
Forward secrecy requires that even if the long-term secret key is exposed, the adversary still cannot compute the previous session keys. In other words, when the long-term key is compromised, the protocol cannot promise the security of further communications, but it should guarantee the security of the previous communications. Forward secrecy is the last umbrella of system security, but Jung et al.'s scheme fails to achieve it.
Supposing that an adversary A got the secret key x and intercepted the parameters DID_i and M_j in the channel, A could recover the previous session key as follows:
1. Computes X_sj = h(SID_j||x).
2. Recovers the random number R from the intercepted M_j using X_sj.
3. Computes the previous session key from DID_i and R.
Remark 2.
In this scheme, the session key consists of a fixed parameter DID_i and a random number R from GW. As DID_i is exposed in the open channel, the only challenge in computing the session key is the value of R. On the one hand, the sensor node S_j has to know R to form the session key, which means that S_j is capable of computing R. On the other hand, S_j's special and only secret parameter is X_sj, where X_sj = h(SID_j||x). Thus, once acquiring X_sj and the transmitted messages in the open channel, anyone can compute the session key. Therefore, when an adversary learns the long-term key x, he/she has the same capability as S_j and can, of course, compute the correct session key. In fact, a more secure way is to establish the session key via a challenge-response mechanism between the two communicating sides. Anyway, all this corroborates that a protocol without any exponentiation operations conducted on the server side cannot achieve forward secrecy [41].
Cryptanalysis of Park et al.'s Scheme
Similar to Jung et al., Park et al. [26] also criticized Chang et al.'s scheme [24], and improved this two-factor scheme into a three-factor one. They claimed that their new scheme overcomes the weaknesses in [24], and proved the security of the scheme via BAN logic. Unfortunately, we once again found this scheme insecure: it cannot resist two kinds of offline dictionary attacks and provides no user anonymity.
A Brief Review of Park et al.'s Scheme
This section describes Park et al.'s scheme [26] briefly.
Registration Phase
Note that the sensor node registration phase is the same as in Jung et al.'s scheme [25], so it is omitted here.
1. U_i ⇒ GW: U_i submits his/her registration request to the gateway.
2. GW issues U_i a smart card containing the parameters {A_i, B_i, C_i, TID_i}, where TID_i is a random number and TID°_i is initialized to NULL.
3. U_i inputs P_i into the smart card. Note that, in Park et al.'s scheme [26], this step is not mentioned, but according to the scheme this step is necessary; we speculate that it was omitted.
Login Phase and Verification Phase
1. U_i → GW: {DID_i, X_i, M_{Ui,G}, T_i}. U_i inputs ID_i and PW_i, and the biometrics BIO_i; then, the smart card verifies the input and computes the login message. Otherwise, it ends the session.
2. GW → S_j: {DID_i, M_{G,Sj}, X_i, T_G}. GW first checks T_i, then derives H_ID_i and computes the corresponding verifier.
3. S_j → GW: upon a successful check, S_j chooses β ∈ Z*_p and computes Y_j = βP together with its response; GW further computes the reply for U_i, updates (TID_i, TID°_i) as (TID_i_new, TID_i), and then sends {e_i, M_{G,Ui}, Y_j, T_G} to U_i. Otherwise, it exits the session.
4. U_i checks T_G, computes the session key, and updates TID_i as h(H_ID_i||T_i). Otherwise, it exits the session.
Security Flaws in Park et al.'s Scheme
Compared with Jung et al. [25], Park et al. [26] deployed an elliptic curve cryptosystem, trying to achieve user anonymity and resist offline dictionary attack. Though Wang et al. [3,41] pointed out that a public key algorithm is necessary to achieve user anonymity and resist offline dictionary attack, this does not mean that, once a public key algorithm is added, the system will be secure. Deploying a public key algorithm requires some skill, and we will propose a sound scheme as an example to explain such skills in Section 5. In this section, we show that Park et al.'s scheme suffers from offline dictionary attack and fails to provide user anonymity.
Offline Dictionary Attack
Suppose the adversary A got the message {A_i, B_i, C_i, P_i, TID_i} in the card, acquired U_i's biometrics BIO_i, and also intercepted the transcripts {DID_i, X_i, M_{Ui,G}, T_i, TID_i}. Then, A conducts an offline dictionary attack as follows:
1. Guesses PW_i to be PW*_i and ID_i to be ID*_i.
2.-5. Computes R*_i = Rep(BIO_i, P_i), HPW*_i and, step by step, the candidate verifier M*_{Ui,G}, where A_i is from the card, and X_i and T_i are from the channel.
6. Verifies the correctness of PW*_i and ID*_i by checking whether M*_{Ui,G} == M_{Ui,G}.
7. Repeats Steps 1-6 of this procedure until the correct value of PW_i and ID_i is found.
The time complexity of the above attack is O(|D_pw| * |D_id| * (3T_H + T_RE)), where T_RE is the running time of the fuzzy extraction computation. Thus, the above attack is quite efficient.
Similar to the analysis in Section 3.2.1, the adversary can also select B_i as the verifier to test the guessed values of PW*_i and ID*_i.
User Anonymity
Park et al. [26] attempted to update some parameters to provide user anonymity. However, such a method is not as effective as they expected. On the one hand, the gateway has to update its database in every session, which is inefficient; on the other hand, if the adversary A acquires the verifier table, he/she can link the transmitted TID_i to the corresponding user and trace the user's activities, so user anonymity is still broken.
Proposed Scheme
In this section, we propose a new enhanced scheme (as shown in Figure 2) which not only provides some desirable attributes but also resists the known attacks. We improve the scheme in the following aspects:
1. Based on Wang et al. [3,41], we apply a public key algorithm to resist offline dictionary attack via verification values from the open channel. In such an attack, as we analyzed above, the key issue is the way the verification parameter between the user and the gateway node is constructed. Once the verification parameter contains a "challenge" built with a public key algorithm, a trapdoor is created, and only the one who owns the corresponding secret key can compute the correct "challenge" (i.e., X in our scheme). In Park et al.'s scheme, though a public key algorithm is deployed, it is not used to construct a "challenge" for authentication. More specifically, all the parameters in the verifier M_{Ui,G} (= h(A_i||X_si||X_i||T_i)) can be computed from the static or open knowledge on the user side and the open channel, so A can compute all the parameters (A_i, X_si, X_i, T_i) with a guessed password and then use M_{Ui,G} to verify the guessed value. In our new scheme, a "challenge" X is built: besides the static or open knowledge on the user side, A has to know the ephemeral α or the long-term key to compute X, and thus fails to conduct such an attack.
2. As introduced in Section 3.2.1, we use "honeywords" + "fuzzy-verifiers" to resist offline dictionary attack via verification values from the smart card [30,42].
3. We do not protect user anonymity by updating parameters as Park et al. did, but deploy a dynamic identity technique via a public key algorithm [3].
Figure 2. Proposed scheme.

The details of our scheme are described as follows.
Registration Phase
The registration phase of the sensor node is similar to those of Jung et al. [25] and Park et al. [26], so it is omitted. When a new user wants to become a legitimate user of the system, he/she submits his/her personal information to the gateway to initiate the user registration phase as follows:
1. U_i ⇒ GW: U_i submits his/her registration information to the gateway.
2. GW issues U_i a smart card and stores (ID_i, r_i, Honey_List) in its database, where Honey_List counts the number of failures in the user login phase and is initialized to NULL. Once its value is bigger than the predetermined threshold, the corresponding smart card is suspended until the user re-registers.
3. U_i inputs P_i into the smart card.
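The Honey_List mechanism amounts to a gateway-side failure counter. The following Python fragment is illustrative only; in particular, resetting the counter after a successful login is our own assumption and is not explicitly stated above.

THRESHOLD = 10
database = {"alice": {"r_i": "salt-value", "honey_list": 0}}  # toy record

def on_login_attempt(user_id, verified):
    record = database[user_id]
    if record["honey_list"] >= THRESHOLD:
        return "suspended"            # card unusable until re-registration
    if not verified:
        record["honey_list"] += 1     # count the failed attempt
        return "rejected"
    record["honey_list"] = 0          # assumption: clear counter on success
    return "accepted"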
Login Phase and Verification Phase
After registering, the user U_i can log in to the system with the password, identity and biometrics, and get authenticated via exchanging information with the corresponding communication parties. Finally, after finishing the authentication successfully, the user and the sensor node build a session key to protect the security of the subsequent communications.
1. U_i → GW: {DID_i, X_i, M_{Ui,G}, T_i}. U_i inputs his/her identity ID_i, password PW_i and biometrics BIO_i; then, the smart card computes the local verifier B*_i. If B*_i == B_i, the card accepts the user, selects a random number α ∈ Z*_p, and computes X_i, the challenge X, the dynamic identity DID_i and the verifier M_{Ui,G}.
2. GW → S_j: {X_i, M, M_{G,Sj}, T_G}. GW first checks T_i, computes X from X_i with the long-term key x, recovers ID_i from DID_i, and then finds r_i and Honey_List via ID_i. If Honey_List ≥ the preset value (for example, 10), GW considers this smart card suspended and rejects the request. Otherwise, GW verifies M_{Ui,G}; if the verification fails, GW rejects the request and sets Honey_List = Honey_List + 1 (once Honey_List is bigger than the preset value, the corresponding smart card is suspended). Otherwise, it computes M and M_{G,Sj} and sends {X_i, M, M_{G,Sj}, T_G} to S_j to convey U_i's request.
3. S_j → GW: {Y_j, M_{Sj,G}, T_j}. S_j first checks the validity of T_G, and computes h(k_i)* = M ⊕ h(X_sj). If M*_{G,Sj} ≠ h(h(k_i)*||X_sj||X_i||SID_j||T_G), S_j does not believe GW and rejects the session. Otherwise, S_j chooses β ∈ Z*_p and computes Y_j = βP together with its verifier M_{Sj,G}.
4. GW → U_i: {Y_j, M_{G,Ui}, T_G}. GW verifies M_{Sj,G} and, if the check succeeds, computes M_{G,Ui} and sends {Y_j, M_{G,Ui}, T_G} to U_i to transmit S_j's response. Otherwise, it exits the session.
5. U_i first checks T_G, and if M_{G,Ui} == h(X_si||k_i||X_i||Y_j||X||T_G), U_i authenticates GW and computes SK_i = h(X_i||Y_j||αY_j) to finish the authentication successfully. Otherwise, the authentication fails.
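To illustrate how the public-key pieces fit together, the following self-contained Python sketch re-implements the toy curve from the example in Section 2.1 and mimics the trapdoor role of the challenge X as well as the session key agreement. The shapes DID_i = ID_i ⊕ h(X_i||X) and SK = h(X_i||Y_j||αY_j) follow our scheme, but all encodings and the remaining values are simplified stand-ins, not a faithful implementation.

import hashlib

p, a, P = 17, 2, (5, 1)   # toy curve y^2 = x^3 + 2x + 2 over GF(17), |P| = 19

def add(Q, R):
    if Q is None: return R
    if R is None: return Q
    (x1, y1), (x2, y2) = Q, R
    if x1 == x2 and (y1 + y2) % p == 0: return None
    lam = ((3*x1*x1 + a) * pow(2*y1, -1, p) if Q == R
           else (y2 - y1) * pow(x2 - x1, -1, p)) % p
    x3 = (lam*lam - x1 - x2) % p
    return (x3, (lam*(x1 - x3) - y1) % p)

def mul(k, Q):
    R = None
    while k:
        if k & 1: R = add(R, Q)
        Q = add(Q, Q); k >>= 1
    return R

def h(*parts):
    return hashlib.sha256("|".join(str(t) for t in parts).encode()).digest()

x_gw = 13                # gateway long-term key (toy)
P_pub = mul(x_gw, P)     # public key xP, assumed known to the smart card

# user side: ephemeral alpha, challenge X, dynamic identity DID_i
ID_i, alpha = b"alice", 5
X_i = mul(alpha, P)
X = mul(alpha, P_pub)                               # X = alpha*x*P
mask = h(X_i, X)
DID_i = bytes(s ^ t for s, t in zip(ID_i, mask))    # DID_i = ID_i XOR h(X_i||X)

# gateway side: the long-term key is the trapdoor that reproduces X
X_rec = mul(x_gw, X_i)
assert X_rec == X
unmask = h(X_i, X_rec)
assert bytes(s ^ t for s, t in zip(DID_i, unmask)) == ID_i  # ID_i recovered

# sensor side and key agreement: both ends derive the same session key
beta = 11
Y_j = mul(beta, P)
SK_user = h(X_i, Y_j, mul(alpha, Y_j))   # SK = h(X_i||Y_j||alpha*Y_j)
SK_node = h(X_i, Y_j, mul(beta, X_i))    # SK = h(X_i||Y_j||beta*X_i)
assert SK_user == SK_node

An eavesdropper sees X_i and Y_j but, without α, β or x, computing X or αY_j amounts to solving the problems from Section 2.1.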
Password Change Phase
Once the user wants to change the password for security considerations, he/she can achieve it through the following steps:
1. U_i inputs ID_i, PW_i and the new password PW_new_i.
2. The card computes the local verifier B*_i. If B*_i ≠ B_i, the card does not permit U_i to change the password. Otherwise, it further computes the new parameters corresponding to PW_new_i and stores them in place of the old ones.
Revocation Phase
The revocation phase, as an emergency response strategy, is of great significance to the security of the system. It provides an efficient way to protect the account from being abused. When the user finds his/her smart card breached, he/she can revoke the smart card as follows:
1. U_i first gets authenticated by the card in the same way as step 1 in Section 5.2.
2. U_i → GW: {DID_i, X_i, M_{Ui,G}, T_i, revoke_request}. As described in Section 5.2, the smart card computes DID_i, X_i, M_{Ui,G} and sends {DID_i, X_i, M_{Ui,G}, T_i, revoke_request} to the gateway.
3. After receiving the revocation request from U_i, GW first verifies U_i. If GW authenticates U_i successfully, it sets Honey_List to a number bigger than the preset value. Then, the smart card is revoked, and nobody can log in to the system with the card unless U_i re-registers. Otherwise, GW rejects the request.
Re-Register Phase
If a user U_i with the correct password and identity is still rejected by S_j, then he/she can re-register as follows:
1. U_i ⇒ GW: U_i submits his/her identity and re-registration request to the gateway.
2. GW looks for ID_i in User-list and checks whether Honey_List ≥ the preset value. If so, GW believes the card has been suspended, and then performs the corresponding steps in Section 5.1.
Security Analysis
To prove the security of our scheme, we analyze it from two aspects: a formal way using the Burrows-Abadi-Needham (BAN) logic [43], and an informal/heuristic way. Through the formal way, we prove that our scheme achieves four basic security goals. These goals ensure that the user and the sensor node trust each other mutually and both compute the session key successfully; furthermore, the session keys computed by them are equal. Through the informal/heuristic way, we prove that our scheme not only satisfies many desired attributes such as user anonymity and forward secrecy, but is also resistant to various attacks such as offline dictionary attack, impersonation attack and de-synchronization attack.
Formal Analysis Based on BAN Logic
The BAN logic [43] is a simple and efficient way to analyze the design logic and security of a protocol. It has a set of particular notations (shown in Table 2) to depict the logic of the protocol. We will prove the security of our scheme according to its notations and rules.

Table 2. Notations of the BAN logic.
P |≡ X: P believes X, i.e., the principal P believes that the statement X is true.
P ⊲ X: P sees X, i.e., the principal P receives a message that contains X.
P |⇒ X: P has jurisdiction over X, i.e., the principal P can generate or compute X.
P |∼ X: P said X, i.e., the principal P has sent a message containing X.
#(X): X is fresh, i.e., X is sent in a message only at the current run of the protocol; it is usually a timestamp or a random number.
P ←K→ Q: K is the shared key for P and Q.
P ⇌Y Q: Y is a secret known only to P and Q or to principals trusted by them.
⟨X⟩_Y: X combined with Y, where Y is usually a secret.
{X}_K: X encrypted with K.

In BAN logic, the goals of our authentication scheme are defined as:
Goal 1: U_i |≡ (U_i ←SK→ S_j);
Goal 2: U_i |≡ S_j |≡ (U_i ←SK→ S_j);
Goal 3: S_j |≡ (U_i ←SK→ S_j);
Goal 4: S_j |≡ U_i |≡ (U_i ←SK→ S_j).
According to the proof steps in BAN logic, we re-describe our scheme into an idealized form consisting of the four messages M_1 (U_i → GW), M_2 (GW → S_j), M_3 (S_j → GW) and M_4 (GW → U_i). Then, some assumptions H_1-H_7 are defined; for instance:
• H_4: GW |≡ #(T_j).
Based on the definitions above, we perform the BAN logic proof as follows.
From M_1 we obtain S_1. Then, according to H_7, S_1 and RULE(1), we get S_2. According to H_3 and RULE(4), we get S_3. In addition, according to S_2, S_3 and RULE(2), we get S_4.
Analogously, from M_2 we obtain S_5. Then, according to H_7, S_5 and RULE(1), we get S_6: S_j |≡ GW |∼ (X_i, h(k_i), SID_j, T_G). According to H_3 and RULE(4), we get S_7: S_j |≡ #(X_i, h(k_i), SID_j, T_G). In addition, according to S_6, S_7 and RULE(2), we get S_8: S_j |≡ GW |≡ (X_i, h(k_i), SID_j, T_G).
From M_3, it is easy to get S_9: GW ⊲ ⟨X_j, k_j, h(k_i), T_j⟩_{X_sj}. Then, according to H_7, S_9 and RULE(1), we get S_10: GW |≡ S_j |∼ (X_j, k_j, h(k_i), T_j). According to H_3 and RULE(4), we get S_11: GW |≡ #(X_j, k_j, h(k_i), T_j). In addition, according to S_10, S_11 and RULE(2), we get S_12: GW |≡ S_j |≡ (X_j, k_j, h(k_i), T_j).
From M_4, it is easy to get S_13: U_i ⊲ ⟨X_j, k_i, X, T_G⟩_{X_si}. Then, according to H_7, S_13 and RULE(1), we get S_14: U_i |≡ GW |∼ (X_j, k_i, X, T_G). According to H_3 and RULE(4), we get S_15: U_i |≡ #(X_j, k_i, X, T_G). In addition, according to S_14, S_15 and RULE(2), we get S_16: U_i |≡ GW |≡ (X_j, k_i, X, T_G).
Therefore, we have proved that our scheme achieves Goals 1-4 successfully. In other words, our scheme promises that U_i and S_j authenticate each other mutually, and that they further compute and share the same session key SK.
Informal Analysis
The heuristic way plays an important role in testing the security of user authentication protocols. It makes up for the defects of formal proofs in some security requirements. For example, formal proofs cannot capture user anonymity and user friendliness problems. Therefore, in this section, we apply the heuristic method to prove the security of our scheme.
Mutual Authentication
In step 2 and step 5 of Section 5.2, the gateway node and the user authenticate each other via their shared secret parameters X_si and X. On the user side, only with the correct password, biometrics and the corresponding smart card can U_i compute X_si, so the gateway can authenticate U_i via this parameter. On the gateway side, after receiving X_i, only the one with the long-term key x can compute X, so the user authenticates GW via X.
In step 3 and step 4 of Section 5.2, the gateway node and the sensor node authenticate each other via X_sj. If an adversary wants to compute X_sj, then he/she has to guess the long-term key x, and the probability of such an event is negligible.
Therefore, the user and the sensor node have authenticated the gateway, and the gateway has also authenticated them. Furthermore, through the authentication relationship among the three parties, the user and the sensor node equivalently authenticate each other. All in all, our scheme achieves mutual authentication well.
User Anonymity
In our scheme, ID_i is concealed in DID_i, which changes with X in every session. To get ID_i, A has to compute X, which means that A, without α or x, has to solve the elliptic curve discrete logarithm problem. As we introduced in Section 2.1, such a problem cannot be solved in polynomial time. Thus, in our scheme, the user identity is not only well protected but also untraceable.
Furthermore, note that an obvious difference regarding user anonymity between a wireless sensor network and a distributed network is whether the user identity can be known by other participants. In a distributed network, there are only two participants: the user and the server. In such a condition, the user identity can be known by the server to build a session key. In a wireless sensor network, there are three participants: the user, the gateway node and the sensor node. The gateway node acts as a registration center and is well protected, so it may know the user identity. The sensor node, however, is usually deployed in an unattended environment and is highly likely to be controlled by the adversary; thus, the user identity should not be exposed to it. Our scheme achieves such a goal: the user identity is never transmitted to the sensor node.
Forward Secrecy
The session key SK = h(X_i||Y_j||βX_i) = h(X_i||Y_j||αY_j), so the key parameter is βX_i or αY_j. If an adversary A intercepts the messages in the open channel and acquires the secret key x, then A knows X_i and Y_j. Thus, A needs to compute βX_i or αY_j. However, computing βX_i or αY_j is for A equivalent to solving the elliptic curve computational Diffie-Hellman problem, and this attempt is bound to fail. Therefore, A cannot compute SK, and our scheme achieves forward secrecy.
Offline Dictionary Attack
A sound three-factor user authentication scheme should ensure that even if A gets any two of the three factors, he/she cannot break the system. In our scheme, if A gets the password and biometrics, he/she still cannot compute X_si to construct a valid login request; if A gets the password and the smart card, he/she can neither compute X_si nor guess the biometrics, and thus also fails to perform an attack; if A gets the smart card and biometrics, then A may conduct an offline dictionary attack by using M_{Ui,G} or B_i as the verification parameter to check the correctness of the guessed values.
If A uses B_i, the offline dictionary attack proceeds as follows: guess ID_i and PW_i as ID_i* and PW_i*, compute R_i* = Rep(BIO_i, P_i) and HPW_i* = h(PW_i*||R_i*), then verify the guesses by checking B_i ?= h(h(HPW_i*) ⊕ h(ID_i*) ⊕ h(P_i)) mod n_0. However, even if A finds a pair {ID_i*, PW_i*} satisfying this check, it need not be the correct {ID_i, PW_i}, since there are |D_pw| · |D_id| / n_0 ≈ 2^32 candidate pairs passing the check (where n_0 = 2^8 and |D_pw| = |D_id| = 10^6) [30]. A therefore has to test each surviving {ID_i*, PW_i*} by sending a login request to the gateway node, and once the number of login failures exceeds the preset value, the smart card is suspended and the attack fails.
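The candidate count in the argument above can be checked with a few lines of Python; the dictionary sizes are the assumed bounds cited from [30].

```python
from math import log2

n0 = 2**8       # modulus of the fuzzy verifier B_i
D_pw = 10**6    # assumed password-dictionary size [30]
D_id = 10**6    # assumed identity-dictionary size [30]

candidates = D_pw * D_id / n0
print(f"about 2^{log2(candidates):.1f} (ID, PW) pairs pass the B_i check")
# -> about 2^31.9, i.e. roughly 2^32 as stated above
```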
If A uses M_{Ui,G}, he/she can compute HPW_i* as above and further compute X_si* = A_i ⊕ HPW_i* ⊕ P_i, k_i = h(X_si*||T_i), and DID_i = ID_i* ⊕ h(X_i||X). However, A cannot compute X, as explained in Section 6.2.2, and thus cannot complete the attack.
In conclusion, our scheme is resistant to offline dictionary attacks.
Privileged Insider Attack
In our scheme, the user submits {ID_i, HPW_i, P_i} to the gateway node. The password is masked in HPW_i = h(PW_i||R_i) by the secret value R_i, so GW cannot learn any useful information about it. Therefore, our scheme is secure against privileged insider attacks.
Verifier-Stolen Attack
The verifier table stored in GW does not contain any sensitive information; even if an adversary acquires the table, he/she cannot mount an attack with it. Thus, our scheme is resistant to verifier-stolen attacks.
Replay Attack
Timestamps are used to prevent replay attacks. On the one hand, if A replays a past message directly, the receiving party detects it by checking the freshness of the timestamp. On the other hand, if A tries to forge a message on the open channel, such as {DID_i, X_i, M_{Ui,G}, T_i}, then he/she has to know X_si; but computing X_si requires knowing x, or U_i's password, biometrics, and smart card together, which is infeasible. Similarly, A can neither replay nor forge the other message flows.
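A minimal sketch of the freshness check is given below; the size of the tolerance window is our assumption, since the scheme does not fix a concrete value here.

```python
import time

MAX_SKEW = 5.0  # assumed tolerance window (seconds)

def is_fresh(T_i: float) -> bool:
    # A receiver accepts a message only if its timestamp T_i lies within
    # MAX_SKEW of the local clock; a directly replayed message carries a
    # stale T_i and is rejected.
    return abs(time.time() - T_i) <= MAX_SKEW
```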
Performance Analysis
To better evaluate our scheme, we compare it with related schemes for wireless sensor networks [25,26,29,44]. From Table 3, it is clear that our scheme is more competitive: it achieves all the security requirements, whereas each of the other schemes [25,26,29,44] fails to satisfy one or more of them, and its computational cost is similar to, or only slightly higher than, that of the other schemes. Furthermore, meeting all the security requirements matters more for an authentication scheme than raw efficiency, and it is not advisable to sacrifice security for efficiency.

Table 3. Performance comparison among relevant schemes in wireless sensor networks.
"year": 2017,
"sha1": "97c06bdc9a2db7d0462f0fa36e8c21f46d9f3e52",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1424-8220/17/12/2946/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "97c06bdc9a2db7d0462f0fa36e8c21f46d9f3e52",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
From Linguistic Descriptions to Language Profiles
Language catalogues and typological databases are two important types of resources containing different types of knowledge about the world’s natural languages. The former provide metadata such as number of speakers, location (in prose descriptions and/or GPS coordinates), language code, literacy, etc., while the latter contain information about a set of structural and functional attributes of languages. Given that both types of resources are developed and later maintained manually, there are practical limits as to the number of languages and the number of features that can be surveyed. We introduce the concept of a language profile, which is intended to be a structured representation of various types of knowledge about a natural language extracted semi-automatically from descriptive documents and stored at a central location. It has three major parts: (1) an introductory; (2) an attributive; and (3) a reference part, each containing different types of knowledge about a given natural language. As a case study, we develop and present a language profile of an example language. At this stage, a language profile is an independent entity, but in the future it is envisioned to become part of a network of language profiles connected to each other via various types of relations. Such a representation is expected to be suitable both for humans and machines to read and process for further deeper linguistic analyses and/or comparisons.
Introduction
Approximately 7,000 distinct languages constitute our record of linguistic diversity (Hammarström, 2015). Languages are equal witnesses - English being but one - to the variation and constraints of the unique communication system of our species (Evans and Levinson, 2009). They harbour information on what happens to language over tens of millennia of diversification, under all imaginable circumstances of human interaction. As such they may be used to investigate theories that could otherwise not be tested with anything less than a laboratory the size of human history. Two web publications maintain catalogues of the languages of the world: Ethnologue (Eberhard et al., 2019) and Glottolog (Hammarström et al., 2019). Ethnologue provides metadata such as number of speakers, location (in prose), and literacy; Glottolog provides classification, location (in GPS coordinates), and bibliographical references. For in-depth information about a lesser-known language, specialists typically consult any available descriptive grammar. For example, for the language Ulwa (ISO 639-3 language code: yla) of Papua New Guinea, there exists Barlow, Russell (2018), A grammar of Ulwa, University of Hawai'i at Mānoa doctoral dissertation, xiv+546 pp.
Around 4,000 languages have at least one published grammatical description, but the breadth, depth, and quality of these vary (Hammarström et al., 2018). For analysis of the languages themselves, there are a number of databases which record various characteristics (also known as linguistic features) of individual languages. For example, the World Atlas of Language Structures (WALS; Haspelmath et al. 2005) contains information on some 200 features spanning 2500 languages (but is sparsely filled in). A very extensive list of linguistic databases can be found at http://languagegoldmine.com/ (accessed 2020-04-05). These inventories and databases are highly useful resources but have clear limits on the number of features and/or languages they contain. As such, they do not represent all the information available about the same languages in descriptive publications. This situation is inevitable because (1) a fixed list of linguistic features is designed for a database, but languages differ from each other in a myriad of ways which cannot be known a priori; and (2) databases are curated manually by reading the descriptive documents, which is a time-consuming activity.

For these reasons we aim to go beyond the manual curation of linguistic databases in order to capture the valuable knowledge about many other languages and features that remains within descriptive publications. Thus, our aim is to extract all the information about a language described in a publication and represent it in a structured manner. These structured representations can be successively normalized and thus form the basis for large-scale comparison of languages. If successful, this will widen the scope of investigations and comparisons across languages considerably. Toward this end, advances in natural language processing and information extraction may be exploited.

A related concern is that various types of knowledge about languages are maintained separately. Consequently, one has to explore different resources to access knowledge about the same language. For example, general and referential data (i.e. language names, the number and names of dialects, the areas where they are spoken, the number of speakers, etc.) are often maintained in digital inventories, attributive data (i.e. various phonological, morphological, and grammatical features) are maintained in typological databases, and many other details are found in descriptive documents (grammars, dictionaries, etc.) and, more recently, increasingly in web pages, blogs, etc. Further, several of the important resources on natural languages are not open-access; for example, Ethnologue keeps most of its information behind a paywall. Since only a particular creative arrangement of words - but not facts in general - can be copyrighted, the prospects for free and open structured representations are much better, even when extracted from copyrighted source materials.

In this paper, we present the concept of a language profile in order to address the above-mentioned limitations and concerns. A language profile can be envisaged as a digital representation of a natural language containing various types of information about the language, stored at a central location in a structured format and publicly available for further use. It aims to be a dynamic representation in the sense that it is not tied to a predefined set of features (like typological databases) but targets any traceable features.
Also included is introductory and referential information about a target language extracted from the descriptions and other available resources. The various types of information about a language are grouped into sections, and the resulting structure is called a language profile. In the present paper, we describe the concept of a language profile only. In future work, we plan to describe how language profiles can be linked into a full network (a LangNet) using different kinds of comparisons/relations (e.g., genetic, geographical, or typological similarity). Conceptually, such a network of languages is similar to other networks in the area of NLP such as WordNet, VerbNet, FrameNet, etc., except that it operates at the level of languages. We believe that such a rich representation model and the resulting network of languages will be a useful resource for linguistic studies. The remainder of the paper is organized as follows: Section 2 describes in detail the structure and components of a language profile, while details on the semi-automatic development of a language profile from linguistic descriptions are given in Section 3.
Language Profiles
As mentioned in the introduction, a language profile is essentially a structured digital representation of a natural language. In this section, we present the structure and the proposed components of a language profile. In doing so, we use the natural language 'Ulwa' and build a minimal part of its profile. At this stage, this language profile is built semi-automatically, but a long-term objective is to automate the process as much as possible. We indicate which parts are built automatically and which manually, and provide suggestions, wherever possible, for automating the corresponding parts.
1. Metadata Part: The metadata part contains basic metadata such as official language name, number of speakers, areas where spoken, etc., and referential (e.g., ISO code and/or glottocode, language family, etc.) information. Table 1 shows this part of the 'Ulwa' language profile.
In this case, most of the fields and their values in this part of the profile are available in the language catalogue Ethnologue (Eberhard et al., 2019) in the Yaul entry (https://www.ethnologue.com/language/yla). As such it resembles information already available in language inventory databases, but improves on these by being more dynamic, linkable, and aggregatable. The list of possible metadata fields is not fixed and can be extended indefinitely. Each field in the profile and the information within it has a structured representation. For example, the location in the above profile is not a simple string but rather a geographical location with a name and coordinates. This can be linked to existing inventories of geographical locations such as GeoNames (http://www.geonames.org). The same applies to the dialect names, the families and branches in the classification field, the official and alternative language names, etc. Appropriate data structures will be proposed for the various fields, with proper IDs to be used for various types of inter- and intra-profile connections. Further, each piece of information has a recorded source, which may be weighted according to usage needs whenever there are many different sources for the same field.
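As an illustration of such a structured representation, a fragment of the metadata part might be encoded as below. The concrete field names and the linked-inventory slots are our own sketch; the actual values would come from the Ethnologue and Glottolog entries referenced above.

```python
# Hypothetical structured encoding of part of the 'Ulwa' metadata section.
ulwa_metadata = {
    "name": {"official": "Ulwa", "alternative": ["Yaul"]},
    "iso639_3": "yla",
    "classification": {"family": None, "branches": []},  # filled from Glottolog
    "location": {
        "name": "Papua New Guinea",  # prose location, as in Ethnologue
        "geonames_id": None,         # link into GeoNames, to be resolved
        "coordinates": None,         # GPS pair, as provided by Glottolog
    },
    "sources": ["Eberhard et al. 2019", "Hammarström et al. 2019"],
}
```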
2. Attributive Part: This is the major part of a language profile and is intended to contain the typological and other structural information of a target language. Again, other databases exist with a similar type of information (e.g., WALS - see above). The key difference is as follows. The attributive part of a language profile does not contain answers to a predefined set of typological and other questions. Rather, it contains all attributive (i.e. phonological, morphological, and grammatical) information which can be extracted (semi-)automatically from the available descriptive data about a given language. As an example, consider the attributive part of the 'Ulwa' language profile given in Table 2. The information in this part was automatically extracted from a language description (Barlow, 2018). (A description of the automatic extraction of the typological information is given in Section 3.) In this example, there is no categorization of the features. In the future, we intend to divide the attributive part into various subparts, e.g., phonological, morphological, and grammatical attributive information. The feature ID field is left blank intentionally at this stage, and a detailed set of feature IDs is to be worked out later. Every item of information in each section of the language profile has a source linked to an entry from the reference section. The maintenance of references within the profile ensures that the crucial source links can be kept in sync.
Building a Language Profile
Building a language profile is a complex process. It requires gathering information about a language from all available sources, i.e., manuals, digital inventories, linguistic descriptions, etc. This is a long-term process and will require gradual efforts to incrementally develop a large set of rich language profiles. At this stage, we have relied on manual collection of information for the introductory as well as the reference part, although parts of this can be automated (information about language name and number of speakers can be extracted automatically using the frame-based methodology explained below, which was used to build the attributive part automatically).

The automatic extraction of typological information from descriptive grammars is a novel task, and only a few studies and systems have been reported previously (Virk et al., 2017; Virk et al., 2019). In Virk et al. (2019), a frame-semantic approach is proposed for developing a parser to automatically extract typological linguistic information from plain-text grammatical descriptions of natural languages. As a case study, the authors show how the parser can be used to extract the value of an example typological feature. However, the system has not been used for any actual typological work. We continue that work and use the parser to extract the typological feature values (shown in Table 2) of a language profile. A brief description of the parser and of how it has been used for our purposes follows.

The parser relies on a lexico-semantic resource, LingFN (Malm et al., 2018), and its frame-labeled data for training machine learning models. The development of LingFN itself is based on the theory of frame semantics (Fillmore, 1976; Fillmore, 1977; Fillmore, 1982) and is motivated by the development of Berkeley FrameNet (Baker et al., 1998) and other, domain-specific framenets (e.g. a framenet covering medical terminology (Borin et al., 2007), and Kicktionary, a soccer-language framenet). Let us take an example to better understand what LingFN is and how its frame-labeled data is used to build the frame-semantic parser, which in turn is used for the automatic extraction of typological features. Consider the following sentence, taken from a descriptive grammar of the Ulwa language.
In Ulwa, adjectives in NPs sometimes precede their head nouns.
The sentence contains information about the relative position (sequencing) of two syntactic categories, i.e., 'adjectives' and 'head nouns'. Their position with respect to one another is not always the mentioned one but could be the other way around, as conveyed by the adverb 'sometimes'. This is useful information that we are interested in extracting automatically. One possible approach is to develop a frame-semantic information extraction system. For that purpose, the first step is to design (or reuse from the Berkeley FrameNet) special structures to represent this type of phenomenon (i.e., sequencing). In frame semantics such structures are called semantic frames, and in general, a semantic frame is a structured representation of an entity, an object, or a scenario. In our case, a semantic frame represents a linguistic entity (e.g., nouns, verbs, etc.) or phenomenon (e.g., affixation, agreement, sequencing, etc.). Let us say that for the sequencing phenomenon we have designed a semantic frame with the structure shown in Table 3. (For more details on the development of the SEQUENCE and other linguistic-domain semantic frames with annotated example sentences, we refer the reader to Malm et al. (2018).)
Table 3: The SEQUENCE frame and its frame elements.

Frame | Frame elements
SEQUENCE | Entity 1, Entity 2, Entities 3, Order, Frequency, Language Variety

String segments labeled as one of the frame elements are enclosed within a pair of brackets, while the frame-element label (bold) follows an underscore sign. Note that in the case of the above sentence, the word 'precede' is both a lexical unit (a word triggering a particular frame) and a frame element. Now imagine that we have enough sentences annotated with the SEQUENCE frame (and other frames from LingFN); one could then train machine learning models for automatic labeling of these frames on un-annotated data. This is exactly what is proposed in (Virk et al., 2019), where a parser was developed for this purpose. The parser takes un-annotated sentences containing typological linguistic information and annotates them with linguistic-domain frames and their associated frame elements.
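To make the annotation format concrete, a frame-labeled sentence could be represented as a small data structure like the one below, together with a toy rule that turns it into a feature-value entry for the attributive part. The assignment of spans to frame elements for the example sentence is our own reading, not the LingFN gold annotation.

```python
annotation = {
    "sentence": "In Ulwa, adjectives in NPs sometimes precede their head nouns.",
    "frame": "SEQUENCE",
    "lexical_unit": "precede",
    "elements": {  # span -> frame-element label (assumed assignment)
        "Language_Variety": "Ulwa",
        "Entity_1": "adjectives in NPs",
        "Frequency": "sometimes",
        "Order": "precede",
        "Entity_2": "their head nouns",
    },
}

def to_feature(ann: dict) -> tuple:
    # Toy rule-based conversion of a SEQUENCE annotation into a
    # (language, feature, value) triple for the attributive part.
    e = ann["elements"]
    feature = f"order({e['Entity_1']}, {e['Entity_2']})"
    value = f"{e['Order']} ({e.get('Frequency', 'always')})"
    return (e["Language_Variety"], feature, value)

print(to_feature(annotation))
# ('Ulwa', 'order(adjectives in NPs, their head nouns)', 'precede (sometimes)')
```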
As suggested by the authors of the same paper, the annotations can be converted into typological information in any required format using a rule-based module. This is exactly what we have done to extract the feature values shown in Table 2 for the Ulwa language. Note that only the SEQUENCE frame was used to extract all the information in Table 2. In the future we plan to extend this work to other typological features and thus enrich the attributive part of a language profile.
Conclusions and Future Work
We have presented the idea of a language profile, which is envisaged as a structured digital representation of a natural language. It has two major objectives. The first is to overcome a major limitation of existing typological databases, which contain information about a predefined set of linguistic features only; we propose to work towards automatically extracting information about all the features described in a descriptive document. The second is to collect the various types of information available about a language in a structured way and at a common place, together with information about the sources. The idea is at an embryonic stage and will be further matured and extended in the future.
"year": 2020,
"sha1": "5a66c5417263a0a1185584335858edc3bd0c8db3",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "ACL",
"pdf_hash": "4c318d7a367aa6d84b4edc56de7f19c25f1de306",
"s2fieldsofstudy": [
"Linguistics",
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Diagnostic and therapeutic strategies for arrhythmogenic right ventricular dysplasia/cardiomyopathy patient
Abstract

Arrhythmogenic right ventricular dysplasia/cardiomyopathy (ARVD/C) is a rare inherited heart muscle disease characterized by ventricular tachyarrhythmia, predominant right ventricular dysfunction, and sudden cardiac death. Its pathophysiology involves close interaction between genetic mutations and exposure to physical activity. Mutations in genes encoding desmosomal proteins are the most common genetic basis. Genetic testing plays important roles in diagnosis and screening of family members. Syncope, palpitation, and lightheadedness are the most common symptoms. The 2010 Task Force Criteria is the standard for diagnosis today. Implantation of a defibrillator in high-risk patients is the only therapy that provides adequate protection against sudden death. Selection of patients who are the best candidates for defibrillator implantation is challenging. Exercise restriction is critical in affected individuals and at-risk family members. Antiarrhythmic drugs and ventricular tachycardia ablation are valuable but palliative components of the management. This review focuses on the current diagnostic and therapeutic strategies in ARVD/C and outlines future areas of development in this field.
Introduction
Arrhythmogenic right ventricular dysplasia/cardiomyopathy (ARVD/C) is an inherited heart muscle disease characterized by ventricular arrhythmias, an increased risk of sudden death, and predominant right ventricular dysfunction. 1 The pathological hallmark of ARVD/C is myocyte loss with fibro-fatty replacement. Since the first case series in 1982, 2-4 our understanding of its pathogenesis, clinical manifestations, and long-term outcomes has advanced significantly. Arrhythmogenic right ventricular dysplasia/cardiomyopathy was initially considered to be a congenital defect in the right ventricular myocardium and designated as a dysplasia, but was later reclassified as a cardiomyopathy. 5 While involvement of the right ventricle typically predominates in early disease, biventricular and left-dominant forms are now recognized to be an important part of the disease spectrum. 6 The purpose of this review article is to provide an update concerning the diagnostic and treatment strategies in ARVD/C.
Epidemiology
Arrhythmogenic right ventricular dysplasia/cardiomyopathy is a rare condition 7 with an estimated prevalence of 1:5000. 8 Patients typically present in the 2nd to 4th decade of life. [9][10][11] Approximately 20% of patients present after the age of 50 years; late presentation does not confer a benign prognosis. 12 Men are more commonly affected, with earlier onset than women, and have worse outcomes once diagnosed. [13][14][15] This may be explained by sex-related differences in hormone profiles and exercise participation. 16,17

Genetic basis

Arrhythmogenic right ventricular dysplasia/cardiomyopathy is mainly inherited in an autosomal-dominant pattern with incomplete penetrance and variable expressivity. 18 Autosomal recessive forms have also been described and can be associated with woolly hair and palmoplantar keratoderma, as in Naxos disease and Carvajal syndrome. 19 The discovery of the genetic basis of Naxos disease identified the first disease-causing gene 20 in ARVD/C. It is located at chromosome 17q21 and encodes plakoglobin (JUP), a desmosomal protein important in cell-to-cell adhesion. This was followed by the discovery of mutations in genes that encode other desmosomal proteins including desmoplakin (DSP), 21 plakophilin-2 (PKP2), 22 desmoglein-2 (DSG2), 23,24 and desmocollin-2 (DSC2), 25 all of which cause autosomal-dominant forms of ARVD/C.
Mutations in genes encoding non-desmosomal proteins have also been identified. Most are associated with other cardiomyopathies and arrhythmia syndromes and represent phenotypic overlap. These include the intermediate filament protein desmin (DES), 26 the cardiac sodium channel Nav1.5 (SCN5A), 27 lamin A/C (LMNA) 28 on the nuclear membrane, titin (TTN), 29 phospholamban (PLN), 30 and filamin C (FLNC). 31 Pathogenic variants in genes encoding proteins of the area composita, α-T-catenin (CTNNA3) 32 and N-cadherin (CDH2), 33 have been identified in a small group of ARVD/C patients with typical right-predominant disease. The p.S358L founder mutation in TMEM43, encoding transmembrane protein 43, is common among French Canadian ARVD/C patients. 9 Possible mutations in transforming growth factor beta-3 (TGFB3) 34 have been described, but their association with ARVD/C remains to be confirmed.
Among index cases, PKP2 is the most commonly mutated gene (20-46%), followed by DSP (3-15%), DSG2 (3-20%), DSC2 (1-8%), and JUP (<1%). 35,36 Non-desmosomal genes account for less than 10% of all pathological variants. 35 Attempts have been made to correlate genotypes with phenotypes. Patients with more than one mutation (4-16%) have earlier disease onset and worse disease outcomes, including arrhythmia, sudden cardiac death, and heart failure. 37,38 As in Carvajal syndrome, DSP mutations are also associated with left ventricular dysfunction, sometimes referred to as arrhythmogenic left ventricular dysplasia. 37,39 Table 1 provides a list of the genes for which ARVD/C-associated disease-causing mutations have been identified. Novel mutations can be found and registered at www.arvcdatabase.info. 41

Pathophysiology

Arrhythmogenic right ventricular dysplasia/cardiomyopathy is hypothesized to be a disease of desmosomal dysfunction based on its genetics. This is supported by the remodelling of the intercalated disc 42 and the loss of desmosomes observed in ultra-structural studies. 43 Desmosomes and gap junctions are responsible for maintaining cell adhesion, signal transduction, and electrical integrity, all of which have been implicated in the disease pathophysiology.
Defective desmosomal proteins lead to loss of adhesion between cardiac myocytes, followed by inflammation, fibrosis, cell death, and replacement by fibrofatty tissue. 20 This may be aggravated by wall stress from physical activity, which disproportionately affects the right ventricle compared with the left ventricle. 44 Besides providing mechanical cell adhesion, desmosomes are also important in intracellular and intercellular signal transduction. 45,46 Desmosomal alteration redistributes plakoglobin from the cell surface to the cytosol and nucleus, which suppresses canonical Wnt signalling by competing with β-catenin. 47 It promotes adipogenesis in the heart by increasing adipogenic factors and reducing inhibitors of adipogenesis, which may explain the typical fibroadipocytic replacement in ARVD/C. 48,49 Lipogenesis and apoptosis have been reproduced by induction of adult-like metabolism in an in vitro model generated from patient-specific mutant-PKP2 induced pluripotent stem cell-derived cardiomyocytes. 50 Apart from protein relocation, epitope masking has also been reported to cause reduced plakoglobin immunoreactivity in intercalated discs. 51

Desmosomes also maintain electrical integrity by forming a coordinated protein network called the 'connexome' at the intercalated disks, interacting with ion channels and gap junctions. 52,53 Disruption of electrical coupling may explain the increased arrhythmia susceptibility in ARVD/C even before the onset of significant fibrosis and necrosis. 54 The aggregation of voltage-gated sodium channels with cell adhesion molecules and the reduced sodium current from desmosomal mutations may explain the phenotypic overlap between ARVD/C and Brugada syndrome. 55,56 Clinically, both conditions can manifest as right precordial electrocardiography (ECG) repolarization abnormalities, right ventricular outflow tract (RVOT) conduction disturbances, and ventricular arrhythmias from the right ventricle. 57 Pathologically, fatty infiltration of the myocardium has been reported in both conditions. 58,59 As a result, ARVD/C and Brugada syndrome might lie at the ends of a spectrum of structural myopathies and sodium current deficiency that share a common origin in an abnormal connexome. 56

Exercise has been shown to exacerbate phenotypes in mice with ARVD/C-associated mutations. 60,61 In clinical studies, exercise is associated with disease penetrance and ventricular arrhythmia in ARVD/C. 17,62,63 On a population level, ARVD/C has also been reported as a leading cause of sudden cardiac death in competitive athletes. 64
Clinical manifestations
The presentation of ARVD/C varies considerably, ranging from proband patients with symptoms related to arrhythmia and heart failure to asymptomatic family members diagnosed in the context of cascade screening. Based on experience from referral centres, index patients usually present with palpitations (30-60%), lightheadedness (20%), and syncope (10-30%). 7,13,18,66,67 These symptoms are in turn linked to the presence of non-sustained or sustained ventricular arrhythmias. Up to 19% of ARVD/C patients present with cardiac arrest. 68 Arrhythmogenic right ventricular dysplasia/cardiomyopathy can occasionally manifest as chest pain accompanied by transient ischaemic electrocardiographic changes and troponin elevation, mimicking acute myocarditis or myocardial infarction. 69 Atypical chest pain may be explained by small-vessel disease producing spasm. 70 Although not at the origin of the disease, myocarditis is thought to be a superimposed phenomenon, 71,72 and certain mutations may increase the susceptibility to it in ARVD/C. 39

Three stages have been described in the natural history of ARVD/C. 5,73 In the initial 'concealed stage', individuals are asymptomatic and there are no or only subtle structural changes in the right ventricle. The risk of cardiac arrest occurring in the concealed phase is extremely small but not zero; about 60% of cardiac arrest victims were asymptomatic before their event. 68 As the disease progresses into the 'clinically overt stage', symptoms emerge as ventricular arrhythmias and morphological abnormalities in the right ventricle appear. Ventricular arrhythmias range from premature ventricular complexes to non-sustained or sustained ventricular tachycardia and ventricular fibrillation 18 (Figure 1). Atrial arrhythmias and atrial fibrillation have also been associated with ARVD/C. [74][75][76][77] Haemodynamically tolerated ventricular tachycardia has been found in up to 60% of index patients at their initial presentation. 78

In a subset of patients, the disease may progress to right, left, or biventricular failure. We recently reported that 49% of ARVD/C patients had at least one heart failure symptom, with exertional dyspnoea being more common than volume overload. Among ARVD/C patients with heart failure, approximately 80% have isolated right ventricular dysfunction. Female sex and lateral precordial T-wave inversions have been associated with heart failure. 79 According to experience at Johns Hopkins and the Nordic ARVD/C Registry, heart failure is the most common (90%) indication for heart transplantation in ARVD/C, 80 and age at first symptoms under 35 years predicts transplantation. 81 Among patients undergoing heart transplantation, 58% had biventricular failure, 29% right ventricular failure, and 3% left ventricular failure. Sudden cardiac death also occurs at the end stage of the disease from overt heart failure.
Diagnosis of arrhythmogenic right ventricular dysplasia/cardiomyopathy

2010 Task Force Criteria
Arrhythmogenic right ventricular dysplasia/cardiomyopathy is a clinical diagnosis with no single gold-standard test. The 2010 Task Force Criteria (TFC) is the standard for diagnosis today 82 (Table 2). It is a scoring system combining six categories of disease features: right ventricular structural alteration, tissue characterization, repolarization abnormalities, depolarization abnormalities, arrhythmias, and family history. In each category, a major, minor, or no criterion can be met. Definite ARVD/C is diagnosed if an individual scores at least four points, with a major criterion worth two points and a minor criterion worth one point; a score of three points is classified as borderline ARVD/C. The criteria must come from different categories. The 2010 TFC revised the 1994 international task force guideline 83 with the addition of quantitative criteria for structural alterations and of genetic testing. Its application has reduced the number of individuals satisfying the imaging criterion, increased the number of family members being diagnosed, and increased the overall diagnostic yield of ARVD/C compared with its 1994 counterpart. 84,85 Although the current gold standard, the 2010 TFC do not apply to left-dominant forms. 6 (Table 2, adapted from Marcus et al., 82 lists the full criteria, including the quantitative imaging thresholds for regional RV akinesia, dyskinesia, or dyssynchronous contraction combined with RVOT dimensions, fractional area change, the ratio of RV end-diastolic volume to body surface area, and RV ejection fraction, as well as the family history criteria of premature sudden death (<35 years) due to suspected ARVD/C in a first-degree relative or ARVD/C confirmed pathologically or by current TFC in a second-degree relative. The resulting classification is: definite = 2 major, or 1 major plus 2 minor; borderline = 1 major plus 1 minor, or 3 minor; possible = 1 major, or 2 minor.)
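The point-based reading of the classification can be written down as a short function. This is a simplified sketch for illustration only: it does not enforce the rule that criteria must come from different categories, and it is of course not a clinical tool.

```python
def tfc_classification(n_major: int, n_minor: int) -> str:
    # 2010 TFC scoring: a major criterion counts 2 points, a minor 1 point.
    points = 2 * n_major + n_minor
    if points >= 4:
        return "definite"    # e.g. 2 major, or 1 major + 2 minor
    if points == 3:
        return "borderline"  # 1 major + 1 minor, or 3 minor
    if points == 2:
        return "possible"    # 1 major, or 2 minor
    return "not classified"
```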
General diagnostic approach
When ARVD/C is suspected, the initial evaluation should include clinical inquiry about symptoms related to arrhythmia and heart failure, a detailed multi-generation family history, an exercise history, 12-lead ECG, ambulatory Holter monitoring, echocardiography, and cardiac magnetic resonance imaging (CMR). If the diagnosis remains inconclusive after this non-invasive approach, electrophysiological testing can be considered to characterize the arrhythmic substrate and stratify arrhythmic risk. 87,88 Endomyocardial biopsy is rarely performed because of its invasiveness and poor sensitivity and specificity. Right ventriculography has been largely replaced by echocardiography and CMR. Genetic testing is indicated for index patients with an unequivocal phenotype and for cascade screening of family members. 89
Electrocardiography
The standard 12-lead ECG is rarely normal in ARVD/C patients. 90 It provides critical clues to the depolarization and repolarization abnormalities in ARVD/C. T-wave inversion from V1 to V3 in the absence of complete right bundle branch block is a major TFC criterion and the most common ECG finding 9,91 (Figure 2). T-wave inversion in V1 and V2 is a minor criterion. 82 Negative T waves may extend to V5 and V6, indicating severe structural abnormality and left-sided involvement. 92 Incomplete right bundle branch block is seen in 15% of ARVD/C patients, and T-wave inversion through V3 maintains optimal sensitivity and specificity in its presence. In the presence of complete right bundle branch block, the most sensitive and specific diagnostic parameter is an r/s ratio <1. 91 It is important to recognize that the left-dominant form of ARVD/C has a different ECG pattern than the more common right-dominant form: inverted T waves can be confined to the (infero)lateral leads. 93 Low precordial QRS voltage may be seen but is not included in the TFC. 93,94 It can appear in advanced stages of the disease 95 and particularly in PLN mutation carriers. 96,97

Epsilon waves are small-amplitude deflections after the end of the QRS but preceding the T wave. While epsilon waves are a major criterion for ARVD/C, we discourage their use when evaluating patients for ARVD/C. This reflects the fact that a recent study has shown very poor inter-reader reproducibility when panel experts were asked to identify epsilon waves on ECG tracings of leads V1, V2, and V3. 98 Furthermore, this study revealed that epsilon waves do not impact the diagnosis of ARVD/C, as they are only seen in patients with severe disease in whom the diagnosis is met on other structural and electrical criteria. 11,98 In some difficult cases, epsilon waves may be better observed with the Fontaine lead system 99 or an insertable loop recorder. 100 Terminal activation delay, defined as a duration from the nadir of the S wave to the end of the QRS ≥55 ms, 101 is a minor criterion for depolarization abnormality. Although late potentials detected by signal-averaged electrocardiogram (SAECG) are also listed as a minor criterion for ARVD/C, 82 we have found them to have poor sensitivity and specificity relative to other criteria and have therefore abandoned performing SAECGs when evaluating patients for possible ARVD/C.
Ambulatory Holter monitoring
Ambulatory Holter monitoring captures premature ventricular complexes and sustained or non-sustained ventricular tachycardia, both of which are important for diagnosis and follow-up in ARVD/C. The presence of >500 premature ventricular complexes per 24 h is a minor TFC criterion, 82 while >1000 per 24 h is predictive of ventricular arrhythmia terminated by defibrillators. 67,102 Ventricular tachycardia with a left bundle branch block pattern is consistent with the ARVD/C diagnosis, 82 and its presence is associated with arrhythmic risk. 1 Arrhythmogenic right ventricular dysplasia/cardiomyopathy patients can be followed with annual Holter monitoring to assess their risk of arrhythmia. Because electrical abnormalities on Holter monitoring may precede detectable structural changes in ARVD/C, 103 serial ambulatory Holter monitoring can be used in the follow-up of family members to facilitate early diagnosis.
Echocardiography
Echocardiography is the first-line imaging tool when ARVD/C is suspected. Right ventricular wall motion abnormalities, ventricular dilation, and reduced systolic function are included in the 2010 TFC. 82 An RVOT long-axis diastolic dimension >30 mm occurred in 89% of probands and 14% of healthy controls. 104 Echocardiography has been used in clinical follow-up and has shown marked variability in the rate of disease progression. 105,106 However, more views than in a standard echocardiographic study are required to diagnose ARVD/C. Quantitative assessment of right ventricular function also requires high expertise but can lead to effective early detection of structural abnormalities 107 or even of abnormal right ventricular deformation prior to structural abnormalities. 108
Cardiac magnetic resonance
Cardiac magnetic resonance is the preferred imaging modality for ARVD/C 109 in experienced centres in the absence of an implantable cardioverter-defibrillator (ICD). It provides comprehensive structural, functional, and tissue characterization of the ventricles. As in echocardiography, only wall motion abnormality, ventricular dilation, and systolic dysfunction are included in the 2010 TFC. 82 Microaneurysm is not a criterion because of concern that overuse of this term would lead to false positive diagnoses. Although the presence of late gadolinium enhancement on CMR has been shown to be a diagnostic feature of ARVD/C and correlates with the myocardial fibro-fatty changes, 110 this MRI parameter was not included in the 2010 diagnostic criteria. Despite this, we believe that it is of diagnostic value, especially in those with the biventricular or left-dominant forms. Myocardial fat and wall thinning are not diagnostic for ARVD/C either. 111 Nevertheless, tissue characterization on CMR deserves recognition, and combining it with the 2010 TFC may improve diagnostic accuracy. 112 The location of wall motion abnormalities was not addressed in the 2010 TFC. Although the right ventricular apex is part of the initial report describing the triangle of dysplasia, 2 it is not typically affected in isolation. Instead, the basal inferior and anterior right ventricle and the posterolateral left ventricle are commonly affected. 113 Common normal variants mischaracterized as ARVD/C include right ventricular free wall tether, pectus excavatum, and isolated apicolateral bulge 111 (Figure 3). It is important to recognize that MR imaging is not the gold standard for diagnosis of ARVD/C but only one of a number of diagnostic tests that should be considered when evaluating patients for suspected ARVD/C. In healthcare systems where CMR is not easily accessible, echocardiography in experienced hands using multiple views (with or without contrast medium or deformation imaging) can still be very valuable.
Family history and genetic testing
Arrhythmogenic right ventricular dysplasia/cardiomyopathy is most commonly characterized by autosomal dominant inheritance with reduced penetrance. As a result, a definite diagnosis in a first-degree family member is considered a major criterion, even when no pathological mutation is detected in the family. 82 According to a recent study, one third of first-degree relatives develop ARVD/C, with siblings having the highest risk of disease. 114 Genetic testing has been incorporated into the 2010 TFC. Genetic counselling and genetic testing are recommended for making the diagnosis in probands and as part of cascade screening of family members of probands with mutations. 115 Mutations are detected in approximately 60% of index patients, so the absence of mutations does not exclude the diagnosis. 78 Genetic testing is recommended to identify pathological mutations in probands fulfilling phenotypic criteria and to facilitate cascade screening of family members. 18 The population prevalence of ARVD/C-related mutations is unclear. A recent population-based study showed no phenotype in cases with definite loss-of-function desmosomal variants. 116 Therefore, genetic testing is not recommended in individuals with only one minor 2010 TFC criterion and may be considered when one major or two minor criteria are met. 117 Given the complexity of result interpretation and the rapidly evolving nature of genetics in ARVD/C, genetic counselling is strongly advised for patients and family members. 118
Endomyocardial biopsy
The tissue characterization criteria in the 2010 TFC focus on the severity of myocyte loss and specify quantitative parameters of fibrosis. 82 However, endomyocardial biopsy is invasive, and its diagnostic sensitivity may be limited by the patchy distribution of the disease. Although the right ventricular free wall is often affected, biopsy specimens are usually taken from the septum for fear of perforation, which further limits sensitivity. 45 Immunohistochemical analysis of desmosomal protein localization is not specific for ARVD/C because similar patterns can also be seen in sarcoidosis and giant cell myocarditis. 119 Today, endomyocardial biopsy is rarely performed; its diagnostic value in ARVD/C is mainly for excluding other cardiomyopathies, myocarditis, and sarcoidosis. 11
Isoproterenol testing
Owing to the high prevalence of catecholamine-facilitated ventricular tachycardia in ARVD/C, 120 ventricular arrhythmogenicity during isoproterenol testing has been proposed to assist diagnosis. 121 In the article by Denis et al., 121 isoproterenol is infused continuously for 3 min at 45 μg/min. The test is considered positive if polymorphic premature ventricular contractions with at least one couplet, or ventricular tachycardia with a left bundle branch block pattern excluding an RVOT origin, are induced. Its sensitivity, specificity, and positive and negative predictive values for diagnosing ARVD/C are 91.4%, 88.9%, 43.2%, and 99.1%, respectively. Although not included in the 2010 TFC, its high sensitivity may help improve diagnosis at an early disease stage. The high-dose isoproterenol challenge is also being evaluated as a risk predictor in ARVD/C.
Differential diagnosis
The common differential diagnoses for ARVD/C are idiopathic ventricular tachycardia and sarcoidosis. 11 Other conditions to consider include dilated cardiomyopathy, pulmonary hypertension, and Uhl's disease.
Idiopathic ventricular tachycardia, whether from the RVOT or the aortic sinus cusps, is usually associated with a normal ECG and echocardiogram. 122 A scoring system combining T-wave inversions in V1-V3, QRS duration in lead I ≥120 ms, QRS notching, and late precordial transition effectively distinguishes idiopathic RVOT ventricular tachycardia from ARVD/C. 123 Ectopic QRS morphology (intrinsicoid deflection time >80 ms, QS pattern in lead V1, and QRS axis >90°) has recently been reported to aid in differentiating idiopathic RVOT ventricular tachycardia from early ARVD/C. 124 An exhaustive work-up is not needed when the ECG and echocardiogram are normal and the family history is negative for sudden death.
Cardiac sarcoidosis can overlap with ARVD/C. Older age at symptom onset, presence of cardiovascular comorbidities, a non-familial pattern of disease, signs of conduction defect, significant left ventricular dysfunction, myocardial delayed enhancement of the septum, first-degree or higher atrio-ventricular (AV) block, and mediastinal lymphadenopathy should raise the suspicion of cardiac sarcoidosis. 125 Cardiac positron emission tomography can be very important for the differential diagnosis. 126 Endomyocardial biopsy may be required to assist the diagnosis.
Biventricular ARVD/C with severe left-sided involvement may mimic dilated cardiomyopathy. A dilated right ventricle can also be seen in congenital heart disease such as Ebstein anomaly, Uhl's disease, and anomalous venous return, as well as in pulmonary hypertension or right ventricular infarction. 11
Management of arrhythmogenic right ventricular dysplasia/cardiomyopathy
The clinical management of ARVD/C patients consists of six aspects: (i) establishing an accurate diagnosis; (ii) risk stratification for sudden death; (iii) prevention of sustained ventricular arrhythmia; (iv) prevention of the development of heart failure; (v) cardiac transplantation; and (vi) screening and follow-up of family members. We discuss each element of this six-pronged approach here.
Establishing an accurate diagnosis
An accurate diagnosis is the foundation of management. As noted above, lack of awareness of the 2010 TFC and misinterpretation of CMR are the most common reasons for misdiagnosis.
Risk stratification for sudden death and decision for an implantable cardioverter-defibrillator
The prognosis of ARVD/C is predominantly related to ventricular arrhythmia, which may lead to sudden death. 78 The estimated overall mortality varies from 0.08% per year to 3.6% per year. 82,127 Although not tested in randomized controlled trials, the efficacy of ICD therapy has been established by multiple observational studies. 7,67,[128][129][130] Between 40% and 78% of patients received appropriate ICD interventions after implantation, and the survival benefit was estimated at 26%. 128 However, this benefit comes at the expense of device/lead-related complications (3.7% per year) and inappropriate ICD interventions (4.4% per year). 1,102 Therefore, assessing sudden death risk is central to disease management.
Commonly recognized predictors of life-threatening arrhythmia are summarized here. The strongest predictors are prior cardiac arrest from ventricular fibrillation and sustained ventricular tachycardia. 45 Other major risk factors include arrhythmogenic syncope, non-sustained ventricular tachycardia, and severe systolic dysfunction of the right, left, or both ventricles. 45,18 A premature ventricular complex count >1000 per day, T-wave inversion in more than three leads, male sex, younger age at presentation, and proband status are minor risk factors. 1 Inducibility at electrophysiology study has been associated with ventricular arrhythmia treated by ICD in the Johns Hopkins and Zurich experience 67,102,130,131 but not in other studies. 128,132 Atrial fibrillation may reflect electrical instability and has been associated with life-threatening arrhythmic events. 66 The risk of ventricular arrhythmia does not seem to increase significantly during pregnancy. 133 Unwillingness to restrict exercise has also been associated with a higher risk of ventricular arrhythmia. 17,62 The absence of risk factors confers a low risk of sudden death (<1% per year), and an ICD is not indicated. 1 Based on the reported risk factors and the estimated probability of a life-threatening arrhythmic event, the International Task Force Consensus Statement on treatment of ARVD/C grouped patients into high (>10% per year), intermediate (1-10% per year), and low (<1% per year) risk categories 1 (Figure 4). This grouping is useful in the risk-benefit discussion with patients and families about whether the recommendation of an ICD is appropriate. Another critical variable is a specific patient's preferences and values: whereas some patients are unwilling to accept even a small risk of sudden death and consider placement of an ICD reassuring, other patients are adamantly against having an ICD implanted and are willing to accept a small risk of sudden death.
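The consensus risk bands cited above translate into a simple mapping, sketched below for illustration only; this is not a clinical decision tool.

```python
def arrhythmic_risk_category(annual_event_risk: float) -> str:
    # Bands from the International Task Force Consensus Statement cited above;
    # annual_event_risk is the estimated per-year probability of a
    # life-threatening arrhythmic event.
    if annual_event_risk > 0.10:
        return "high"          # ICD generally recommended
    if annual_event_risk >= 0.01:
        return "intermediate"  # individualized risk-benefit discussion
    return "low"               # ICD not indicated
```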
Single-chamber ICDs are recommended to minimize the risk of long-term lead-related complications, especially in young patients. Although the number of inappropriate interventions may be decreased by a dual-chamber detection system, the additional lead predisposes to a higher risk of short- and long-term complications and should only be employed in the setting of symptomatic bradycardia or AV block (which is virtually never seen in patients with ARVD/C). 45 Anti-tachycardia pacing is highly successful in terminating ventricular arrhythmia and should be programmed in all devices. 132 The role of the subcutaneous ICD is under investigation.
Prevention of sustained ventricular arrhythmias
Pharmacologic therapy is commonly utilized in the management of ARVD/C despite the lack of randomized clinical trials. Beta-blockers are recommended as first-line therapy for all patients with definite ARVD/C. 1 This recommendation is an extrapolation of the beta-blockers' efficacy in sudden death prevention in heart failure. 134 It also relies on the observation that ventricular arrhythmia in ARVD/C is often effort-related and catecholamine-facilitated. 120 The use of anti-arrhythmic drugs rarely eliminates the risk of sudden death 45 and serves mainly to reduce the arrhythmia burden. 1 If beta-blockers are ineffective, the conventional 'wisdom' is to try sotalol followed by amiodarone; however, evidence from case series regarding the efficacy of sotalol and amiodarone has been conflicting. 135,136 The addition of flecainide to beta-blockers may be effective according to a recent case series. 137 Overall, the use of anti-arrhythmic drugs is empirical, and the field awaits well-designed clinical trials to guide their use in ARVD/C management.
Catheter ablation is reserved for ARVD/C patients who fail pharmacologic therapies and continue to have frequent sustained or symptomatic non-sustained ventricular arrhythmias. 1 According to the 2015 Treatment Consensus, ARVD/C patients with incessant ventricular tachycardia (VT) or frequent ICD interventions for VT despite maximal pharmacological therapy should be referred for catheter ablation (Class I indication). 1 Despite a high short-term success rate, the major limitation of catheter ablation is the recurrence of sustained ventricular arrhythmias in 50-70% of patients after 3-5 years of follow-up. [138][139][140][141] Epicardial VT ablation or a combined endo-epicardial approach has been associated with improved short- and long-term efficacy, with a 30% recurrence rate at 2 years of follow-up. 138 New arrhythmogenic foci created by the progression of fibrofatty replacement may explain the not insignificant recurrence rate. 140 Nevertheless, catheter ablation remains an important palliative procedure that improves patients' quality of life by reducing the arrhythmia burden. Bilateral sympathectomy may be considered for refractory ventricular arrhythmias. 142

Exercise restriction is recommended for all affected patients and desmosomal mutation carriers. 1 This is because exercise has been associated with the development and severity of the ARVD/C phenotype in both animal 61,46 and human studies 17,62,63,143 (Figure 5). Importantly, continuing to participate in competitive sports after diagnosis was associated with a higher risk of ventricular tachyarrhythmia in definite and borderline ARVD/C probands. 62 Desmosomal mutation carriers who remained in the top quartile of exercise duration after presentation had the highest risk of incident ventricular arrhythmia. 17

Prevention of progression and development of heart failure

Few studies have examined structural progression in ARVD/C. It has been shown that the structural dysfunction of the disease is progressive but with substantial inter-patient variability. 105 As noted above, exercise restriction is believed to slow disease progression. Because of their proven efficacy in heart failure, angiotensin-converting enzyme inhibitors have also been used in most patients with ARVD/C, especially in the presence of structural changes. 1
Cardiac transplantation
Heart transplantation is rarely needed in ARVD/C and is the last resort in cases of either end-stage heart failure or debilitating lethal arrhythmia. 80 Patients requiring transplantation often have disease onset at a younger age. Cardiac transplantation, when needed, is generally performed 10-20 years after the initial presentation. Overall, the survival of patients with ARVD/C at 1, 5, and 10 years is 87%, 81%, and 77%, similar to non-ARVD/C recipients and better than ischaemic cardiomyopathy. 144

Screening approach to family members

After a proband is diagnosed with ARVD/C, there are three scenarios applicable to family members: (i) presence of a pathological mutation; (ii) absence of a pathological mutation; and (iii) presence of a variant of uncertain significance. 45 In the presence of a pathological mutation, mutation-specific 'cascade' genetic testing is recommended to identify mutation-carrying family members. 18 Non-carriers are unlikely to develop the disease. For family members with disease-causing mutations, serial examination at one- to three-year intervals starting at the age of 10 years, together with exercise restriction, is recommended. 17 This serial screening should consist of an ECG, a Holter, and a CMR or echocardiogram. In the absence of a pathological mutation, ARVD/C is still considered hereditary, and all family members are recommended to undergo diagnostic testing at 1-3 year intervals. When a variant of uncertain significance is identified, phenotyping of all at-risk relatives remains important, and cascade genetic screening should not be performed to assist clinical management (although it may assist in the refinement of variant classification). Again, genetic counselling is strongly recommended in the management of ARVD/C. 118

Directions for future research

Over the past three decades, much has been learned about the diagnosis and treatment of ARVD/C. The complexity of the disease and advances in technology create tremendous opportunities to improve the care we deliver to these patients. Better delineation of disease progression will deepen the understanding of its natural history and facilitate early diagnosis. As data interpretation catches up with the explosion of sequencing technology, we will gain better insight into its genetic aspects. Detailed and objective measurement of exercise exposure with prospective follow-up will make personalized recommendations on exercise restriction possible. Refinement of risk stratification for sudden death will enable clinicians to implant an ICD in the right patients. Basic mechanistic research is needed in the search for a curative therapy. Ultimately, promising therapies will need to be tested in randomized controlled trials. Because ARVD/C is a rare disease, institutional and international collaboration will be required for these exciting endeavours.
"year": 2018,
"sha1": "d22fa095527c9d1451d0d843adb9fefa471faa0d",
"oa_license": "CCBYNC",
"oa_url": "https://academic.oup.com/europace/article-pdf/21/1/9/27387270/euy063.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "d22fa095527c9d1451d0d843adb9fefa471faa0d",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Phase diagrams and crystal-fluid surface tensions in additive and nonadditive two-dimensional hard disk mixtures
Using density functionals from fundamental measure theory, phase diagrams and crystal-fluid surface tensions in additive and nonadditive (Asakura-Oosawa model) two-dimensional hard disk mixtures are determined for the whole range of size ratios $q$ between disks, assuming random disorder in the crystal phase. The fluid-crystal transitions are first-order due to the assumption of a periodic unit cell in the density functional calculations. Qualitatively, the shape of the phase diagrams is similar to the case of three-dimensional hard sphere mixtures. For the nonadditive case, a broadening of the fluid-crystal coexistence region is found for small $q$ whereas for higher $q$ a vapor--fluid transition intervenes. In the additive case, we find a sequence of spindle type, azeotropic and eutectic phase diagrams upon lowering $q$ from 1 to 0.6. The transition from azeotropic to eutectic is different from the three-dimensional case. Surface tensions in general become smaller (up to a factor 2) upon addition of a second species and they are rather small. The minimization of the functionals proceeds without restrictions and optimized graphics card routines are used.
I. INTRODUCTION
The fluid-crystal transition in two-dimensional (2D) systems of hard disks has been of fundamental interest over the past years. Only recently has it been established in the one-component system by simulations [1] and experiments [2] that the transition happens via a first-order transition from the fluid to the hexatic phase and a continuous transition from the hexatic to the crystal phase. Although the crystal phase is not strictly periodic (it does not have infinitely long-ranged positional order), in simulations and experiments it has practically the appearance of a conventional, periodic crystal. Therefore, 2D hard disks have a similar status as a model system for crystallization in films and monolayers as 3D hard spheres have for crystallization in the bulk. Besides simulations, classical density functional theory (DFT) for hard particle systems has reached a certain maturity and accuracy owing to the development of fundamental measure theory, starting with the work of Rosenfeld [3]. For 2D hard disks, a functional has been proposed in Ref. [4] which gives a very accurate description of fluid structure in one- and two-component systems [5], as well as values for the fluid and crystal coexistence densities which are rather close to the ones of the first-order fluid-hexatic transition [4]. In these FMT calculations, strict periodicity of the crystal phase was assumed.
Crystals in binary hard disk systems were studied some time ago by "older" density functional methods in Refs. [6,7] (variants of weighted-density functionals with restricted minimizations). For substitutionally disordered crystals, a sequence of phase diagram types (spindle, azeotropic, eutectic) was found upon lowering the disk size ratio, similar to the case of 3D hard spheres [8], although the exact shape and the transition size ratios differ considerably between Refs. [6,7]. In these cases, the crystals were assumed strictly periodic as well. In Ref. [9], a survey of possible alloy phases was undertaken in a special zero-temperature limit (identifying the highest-packing-fraction structure among all candidates). In the last decade, phase field crystal (PFC) models have emerged as an efficient tool to phenomenologically describe phase diagrams of binary systems in 2D and 3D [10,11] and the crystal-fluid surface tension [12]. The PFC models employ a certain Taylor expansion of the direct correlation functions among species to produce the desired crystal structure and use several parameters to capture material properties. Whereas the approach is suited to describe the mesoscale behavior of solidification generically, a link to the density distributions in specific crystals is difficult to establish. An example is the hard sphere system, where PFC fails to describe quantitatively vacancy concentrations, surface tensions, and the associated density profiles [13].
Only recently, a binary mixture with a fixed size ratio of 1/1.4 was investigated by simulations addressing the fate of the hexatic phase in disordered crystals [14]. The hexatic phase was found to disappear quickly upon addition of the smaller species; overall, a phase diagram of eutectic type was found for this size ratio.
Here, we employ the FMT functional of Ref. [4] to study phase diagrams and crystal-fluid surface tensions for additive and nonadditive binary hard disk mixtures. The nonadditive case is the 2D variant [15] of the well-known Asakura-Oosawa (AO) model [16,17], originally formulated for a mixture of 3D hard spheres in which there are no interactions between particles of the second component (the depletant). The depletants lead to an effective attractive potential between particles of the first species (which for small size ratios is strictly a two-body potential); therefore, the study of the AO model is equivalent to a study of hard disks with additional short-ranged attractions. The derivation of the AO functional from the functional of Ref. [4] proceeds by employing the "linearization trick" already studied in the 3D case [18,19]. We examine the case of random disorder over the whole range of possible size ratios. Random disorder includes the cases of substitutional disorder when the disk sizes are comparable, interstitial disorder for small size ratios, and superpositions of alloy configurations for intermediate size ratios.
The paper is organized as follows. In section II, we introduce the theoretical background for the AO model, FMT and the FMT-based AO functional. In section III, we discuss the numerical treatment of the full minimization of the functionals for bulk crystals and crystal-fluid surfaces. In section IV, we present our results for density distributions in the crystal and crystal-fluid interfaces, for phase diagrams and surface tensions. In the final section, we summarize and discuss our results.
II. THEORY

A. Hard disk mixture and Asakura-Oosawa model
We consider a mixture of large (l) and small (s) disks with diameters σ_l and σ_s, respectively, and q = σ_s/σ_l denoting the size ratio. In the case of an additive system (denoted as the HD mixture), one may define interaction diameters d_ij = σ_i/2 + σ_j/2 with i, j ∈ {l, s}. The pair potential Φ_ij(r) between two particles with center-center distance r is ∞ for r < d_ij and 0 for r > d_ij.
In the case of an AO mixture, the interaction diameter d_ss for the interaction between two small disks is zero, i.e. there is no interaction among the small disks and they behave as an ideal gas. The other interaction diameters d_sl and d_ll remain unchanged. The small disks act as a depletant and induce an effective, attractive two-body potential Φ_AO(r) between the large disks (depletion potential). Its shape is determined by the overlap of the exclusion areas of two large disks at distance r, where the exclusion areas (which are forbidden to the centers of small disks) are circles of diameter σ_l + σ_s centered at the midpoints of the large disks [15,16]. Here, η determines the magnitude of the depletion potential (ρ_s is the bulk number density of small disks). Furthermore, β = 1/(k_B T), with k_B denoting Boltzmann's constant and T the temperature. For small size ratios q < 0.155, the AO mixture can be mapped exactly onto a single-component model with an effective two-body potential given by the depletion potential above. For larger q, the effective potential should include n-body overlaps of excluded area (n ≥ 3). Furthermore, in the dilute limit of the (additive) HD mixture (with the number density ρ_l of large disks being small), the effective potential between large disks is identical to Eq. (1) [15].
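For orientation, the depletion attraction can be evaluated from the two-circle overlap-area construction described above; the sketch below assumes the proportionality βΦ_AO(r) = −ρ_s A_ov(r) (valid as a two-body potential for small q) and uses function names of our own choosing.

```python
import numpy as np

def overlap_area(r, R):
    """Overlap area of two circles of radius R whose centers are a distance r apart.

    Standard lens formula; the overlap vanishes for r >= 2R.
    """
    r = np.asarray(r, dtype=float)
    area = np.zeros_like(r)
    mask = r < 2.0 * R
    rm = r[mask]
    area[mask] = 2.0 * R**2 * np.arccos(rm / (2.0 * R)) - 0.5 * rm * np.sqrt(4.0 * R**2 - rm**2)
    return area

def beta_phi_AO(r, sigma_l, sigma_s, rho_s):
    """Depletion potential between two large disks in units of k_B T.

    The exclusion circles have diameter sigma_l + sigma_s; the attraction is
    taken proportional to the overlap of the exclusion areas, i.e. to the
    area freed for the ideal-gas depletants.
    """
    R_excl = 0.5 * (sigma_l + sigma_s)       # radius of one exclusion circle
    return -rho_s * overlap_area(r, R_excl)

# Example: contact value of the depletion attraction for q = 0.15
sigma_l, q, rho_s = 1.0, 0.15, 0.5
print(beta_phi_AO(np.array([sigma_l]), sigma_l, q * sigma_l, rho_s))
```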
B. Fundamental Measure Theory (FMT)
We consider inhomogeneous mixtures with density profiles ρ(r) = {ρ_s(r), ρ_l(r)}. In classical density functional theory (DFT), crystals are treated as inhomogeneous fluids whose equilibrium density profile ρ_eq(r) = {ρ_s,eq(r), ρ_l,eq(r)} minimizes the grand potential

Ω[ρ] = F[ρ] + Σ_i ∫ dr ρ_i(r) [ V_i^ex(r) − μ_i ],

where F is the Helmholtz free energy, and μ_i and V_i^ex are the chemical potential and the external potential for species i, respectively. F can be further decomposed into the ideal gas part F_id and the excess free energy F_ex. The exact form of F_id is

βF_id[ρ] = Σ_i ∫ dr ρ_i(r) { ln[ λ_i² ρ_i(r) ] − 1 },

where λ_i is the thermal wavelength for species i. In the following, we put λ_i = 1.
FMT is the most accurate route to density functionals of hard body mixtures. Most FMT functionals assume an excess free energy density which is local in a set of weighted densities n_α(r), which are convolutions of the density profiles with geometrically motivated weight functions [3]. For hard spheres in 3D, the original derivation of the functionals proceeds from an exact low-density form ("deconvolution of the Mayer f-bond") and subsequently uses scaled particle arguments [3]. Such a functional does not describe crystals, though. In this case, a possible derivation proceeds via dimensional crossover. Here, one requires that by confining an arbitrary density profile to 1D (a line) and 0D (a collection of points), the functional delivers the correct free energies, whose exact forms are known from other arguments [20,21]. Using this route, the properties of hard-sphere crystals and crystal-fluid interfaces are described in quantitative agreement with simulations [13,22,23]. Also, the low-density form remains exact.
In the derivation of a genuine 2D functional along these lines, problems are encountered. Maintaining the exact low-density form, or having the exact free energy for a density distribution consisting of two sharp peaks, is not possible with an excess free energy density local in weighted densities [20]. An approximate solution to this problem was derived in Ref. [4]. The excess free energy is given by

βF_ex[ρ] = ∫ dr Φ({n_α(r)}),

where the weighted densities n_α are sums over convolutions of the HD species density profiles with weight functions,

n_α(r) = Σ_{i=s,l} ∫ dr′ ρ_i(r′) w_α^i(r − r′) =: n_α^s + n_α^l,

where α indicates the type of weight function and i the species (l = large and s = small). The weight functions are of the form θ(R_i − |r|) and δ(R_i − |r|), where R_i is the radius of species i, θ(r) is the Heaviside step function, and δ(r) is the Dirac delta function; w_T^i(r) is a tensorial weight function with cartesian components αβ. The free energy density Φ is given in Ref. [4] in terms of these weighted densities and contains three parameters C_0, C_1, and C_2. The functional gives the correct second virial coefficient if C_0 + C_2/2 = 1. Furthermore, the correct free energy for a single, sharp density peak requires C_0 + C_1 + C_2 = 0. Thus, the dependence on the three parameters can be reduced to a dependence on a single parameter a. For one component, a best fit to the Mayer f-bond gives a = 11/4, whereas a fit to crystal pressures obtained by simulation gives a = 3 [4]. For binary systems in the fluid phase, the functional delivers an excellent description of pair correlation functions when compared to experiments [5].
Recently, a functional for 2D rods (discorectangles) has been derived which maintains the exact low-density form by using weighted densities which are two-center convolutions with a weight function (fundamental mixed measure theory, FMMT) [24]. In the limit of the 2D rods becoming disks, the functional (7) is an approximation to the FMMT functional. However, fluid-crystal coexistence densities in the one-component case are approximately equal, and the numerical effort in FMMT is considerably higher. Therefore we will not consider the FMMT functional in this work.
A functional for the AO mixture can be obtained by the "linearization recipe": a functional for a genuine hard-body mixture (such as the one in Eq. (7)) is linearized in the density (or, equivalently, in the weighted densities n_α^s) of the small species. This entails that the direct correlation function between two particles of the small species,

c_ss^(2)(r, r′) = −β δ²F_ex / (δρ_s(r) δρ_s(r′)),

vanishes, consistent with the small species behaving as an ideal gas. In 3D, such a functional (derived from the original Rosenfeld functional [3]) describes structural properties and wetting transitions in the fluid phase very well [25]. Recently, an extension using functionals from the dimensional crossover route has been studied which allows the description of the crystal phase in 3D [19]. According to the linearization recipe, the AO mixture excess free energy density is given by

Φ_AO = Φ|_{n_α^s = 0} + Σ_α n_α^s (∂Φ/∂n_α^s)|_{n_α^s = 0}.
III. NUMERICAL METHODS

A. Free minimization and phase coexistence
For the crystal phase, we assume periodicity and consider a rectangular unit cell with side lengths L and √3 L for a triangular lattice (see Fig. 1). Since we only consider solid solutions (random disorder), we assume that the triangular lattice is formed by equilateral triangles, as in the one-component case.
The free parameters in this free-energy minimization problem are the density profiles ρ_l(r) and ρ_s(r) in the unit cell as well as the length L. We parametrize the latter via an effective vacancy concentration n,

n = 1 − (√3/2) L² (ρ̄_l + ρ̄_s).

In the one-component case, an ideal crystal has 2 particles in the unit cell; therefore, n > 0 indeed corresponds to the vacancy concentration in the equilibrium crystal. For a HD mixture, n may also be negative, corresponding to an effective interstitial concentration, which is easily possible if small disks are inserted into a crystal of large disks. The full minimization for given average densities ρ̄_l, ρ̄_s proceeds via

F_min(ρ̄_l, ρ̄_s) = min_n [ min_{ρ_l(r), ρ_s(r)} F[ρ_l, ρ_s] ],

i.e. in two steps [22]. The first minimization step is achieved by an iterative solution of the Euler-Lagrange equations (for fixed n, L),

ln ρ_i(r) = βμ_i − Σ_α ∫ dr′ (∂Φ/∂n_α)(r′) w_α^i(r′ − r),

where Φ is given by Eq. (7) in the case of the HD mixture and by Eq. (9) in the case of the AO mixture. The chemical potentials μ_i are adapted in each iteration step to keep ρ̄_l, ρ̄_s constant. Iteration is done using a combination of Picard steps and DIIS (direct inversion in the iterative subspace) [13,22]. The Picard steps are performed according to

ρ_i^(j+1)(r) = (1 − ξ) ρ_i^(j)(r) + ξ ρ_i^(j),EL(r),

where i is the species index, j labels the iteration step, ρ_i^(j),EL is the profile obtained from the Euler-Lagrange equation, and ξ is a Picard mixing parameter, which we chose in the range from 10⁻³ to 10⁻² for bulk crystal and also for interface minimizations. The DIIS steps are performed using between 5 and 9 forward profiles. As a side remark, we also minimized F[n_α] by dynamic DFT with the exponential time differencing algorithm [26] for a one-component system. By choosing the time step dt = 10⁻³ (in units of the Brownian time), the thermodynamic properties in equilibrium are identical to the ones from the Picard-DIIS method, but the dynamic DFT method requires much more computational resources. The second minimization step, the minimization with respect to n (and thus L), amounts to doing the first minimization for a few values of n within an interval of starting width ∼10⁻³ and determining the minimum via a quadratic fit. The procedure is iterated with smaller interval widths until we have reached 3 digits of confidence or the interval width is less than 10⁻⁵.
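As an illustration of the Picard step combined with the chemical-potential update that keeps the average density fixed, the following is a minimal one-species sketch; `excess_field` is a placeholder we assume for the convolution Σ_α (∂Φ/∂n_α) ⋆ w_α, not code from the authors.

```python
import numpy as np

def picard_step(rho, beta_mu, excess_field, rho_bar, xi=1e-2):
    """One Picard iteration for the Euler-Lagrange equation
    ln rho(r) = beta*mu - excess_field(rho)(r),
    followed by a rescaling of exp(beta*mu) that keeps the average density fixed.

    rho          : current density profile on a 2D grid
    beta_mu      : current chemical potential in units of k_B T
    excess_field : callable returning beta * dF_ex/drho(r) for a given profile
    rho_bar      : prescribed average density
    xi           : Picard mixing parameter (1e-3 ... 1e-2 in the paper)
    """
    rho_el = np.exp(beta_mu - excess_field(rho))   # profile from the EL equation
    scale = rho_bar / rho_el.mean()                # adapt mu to conserve rho_bar
    rho_el *= scale
    beta_mu += np.log(scale)
    rho_new = (1.0 - xi) * rho + xi * rho_el       # linear Picard mixing
    return rho_new, beta_mu

# toy usage: in the ideal-gas limit (zero excess field) the iteration
# relaxes towards the flat profile with the prescribed average density
rho, bmu = 0.5 + 0.1 * np.random.rand(64, 64), np.log(0.5)
for _ in range(100):
    rho, bmu = picard_step(rho, bmu, lambda r: np.zeros_like(r), rho_bar=0.5)
print(rho.std())  # small after mixing towards the flat solution
```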
The procedure is slightly modified in the case of an AO mixture; see also Ref. [19] for more details. Here, we define the vacancy concentration by

n = 1 − (√3/2) L² ρ̄_l,

i.e. it corresponds to the concentration of sites unoccupied by the large particles. Furthermore, we define a semi-grand free energy (at fixed ρ̄_l and μ_s),

F̃[ρ_l, ρ_s] = F[ρ_l, ρ_s] − μ_s ∫ dr ρ_s(r),

which is minimized in step 1 for fixed n, L. In each iteration step, the density profile ρ_s(r) of the small disks is computed from the grand-canonical equilibrium condition δF̃/δρ_s(r) = 0, which, owing to the linearity of the AO functional in ρ_s, can be solved explicitly. Phase coexistence requires P_cr = P_fl and μ_i,cr = μ_i,fl with i = l, s; i.e., coexisting fluid [crystal] states form two lines in the ρ_l-ρ_s plane. In practice, we first choose ρ_l and ρ_s for the crystal and treat ρ_l = ρ_l,cr as the parameter on which the other three coexistence densities depend. Fully minimizing F/N with respect to n delivers P_cr and μ_i,cr. Through μ_i,cr = μ_i,fl and the fluid equation of state, we can find P_fl, ρ_l, and ρ_s in the fluid. In general, P_fl ≠ P_cr, and thus we change ρ_s,cr iteratively until |P_cr − P_fl| < 5 × 10⁻⁶.
B. Surface tension
A surface tension in 2D is a line tension, defined as γ = (Ω + PA)/L, where P is the pressure, A is the area of the system, and L is the length of the interface. In this paper, we are interested in the planar surface tension γ, which is determined by the slope of the free energy density versus the inverse length of the numerical box in the direction of the interface normal, with the average particle density fixed [27].
In general, γ depends on the angle θ between the crystal and the interfacial normal. For small anisotropies, γ can be approximated by γ(θ) = γ_0 (1 + ε sin(6θ)); in FMT, ε = O(10⁻³) for the one-component crystal-fluid interface, and in experiments [2], ε = O(10⁻²). Due to the smallness of ε, in this paper γ_0 is determined directly. The density profiles are initialized similarly to Ref. [13]. In the iterations, we chose a Picard mixing parameter constant in space (this works here in 2D but not in 3D [13]). We fix the average densities ρ̄_i = (ρ̄_i,cr + ρ̄_i,fl)/2 by adapting μ_i in the iterations, where ρ̄_i,cr/fl is the bulk average density in the crystal/fluid phase for species i at coexistence, and then finally perform the free minimization.
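Extracting γ_0 from this slope amounts to a linear fit of the free energy density against the inverse box length; the sketch below uses made-up sample numbers purely for illustration (in practice they come from converged interface minimizations, and the number of interfaces in the periodic box contributes a trivial prefactor).

```python
import numpy as np

# free energy density f for several box lengths L_x along the interface normal
# (illustrative numbers only, not data from the paper)
L_x = np.array([96.0, 128.0, 160.0, 196.0])            # box lengths (unit cells)
f   = np.array([0.41210, 0.41158, 0.41127, 0.41102])   # free energy density

# f(L_x) ~ f_bulk + gamma / L_x  =>  gamma is the slope of f versus 1/L_x
slope, f_bulk = np.polyfit(1.0 / L_x, f, deg=1)
print(f"estimated surface tension ~ {slope:.4f}, bulk free energy density ~ {f_bulk:.5f}")
```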
C. Further numerical details
Here we briefly discuss further computational details. The crystal phase requires double precision, with numbers of grid points from 64² up to 256² for one unit cell.
The crystal-fluid interfaces require an extension of the numerical box between 1 × 96 and 1 × 196 unit cells to give reliable surface tensions. Heavy usage of Fourier transforms is required for the minimization. Weighted densities (Eq. (6)) are computed using

n_α^i = F⁻¹[ F(ρ_i) F(w_α^i) ],

and functional derivatives (Eq. (13)) by

β δF_ex/δρ_i = Σ_α F⁻¹[ F(∂Φ/∂n_α) F(w_α^i)* ],

with F denoting the Fourier transform and * the complex conjugate. For F(w_α^i), the analytic forms using Bessel functions are used [28]. For accelerating the numerics, all calculations are executed on high-performance Nvidia Tesla K80 or K40 GPUs with massive parallelization through the developer environment CUDA [29]. For a detailed description of GPU utilization in two- and three-dimensional FMT, we refer the reader to the paper by Stopper et al. [28]. CUDA has a wide range of tools and libraries, such as the template library thrust and the fast Fourier transforms (cuFFT), the latter usually being the bottleneck in DFT calculations. With a potential speed gain of up to 40 times relative to a serial CPU program [28], our calculations gained a factor of 15-20, since we try to maximize the system size; thus, our largest system is 4 times larger than those in Ref. [28]. The minimization of a unit cell (first minimization step) usually takes a few seconds (∼500 Picard-DIIS steps) and that of an interface about 15-30 minutes (∼5000 Picard-DIIS steps) for a one-component system.
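A minimal NumPy sketch of the FFT route to one weighted density is given below; it samples a Heaviside ("area") weight on the real-space grid instead of using the analytic Bessel-function transforms of Ref. [28], a simplification made here for brevity.

```python
import numpy as np

def weighted_density(rho, weight, dx):
    """Compute n(r) = (rho * w)(r) via FFTs on a periodic 2D grid.

    rho    : density profile, shape (N, N)
    weight : weight function sampled on the same grid, centered at r = 0
             in 'wrap-around' order consistent with np.fft
    dx     : grid spacing (the convolution integral carries a factor dx**2)
    """
    return np.real(np.fft.ifft2(np.fft.fft2(rho) * np.fft.fft2(weight))) * dx**2

# grid and a Heaviside weight theta(R - |r|) for a disk of radius R
N, box, R = 128, 8.0, 0.5
dx = box / N
x = np.arange(N) * dx
x = np.minimum(x, box - x)                    # periodic distance to the origin
rr = np.hypot(*np.meshgrid(x, x, indexing="ij"))
w2 = (rr <= R).astype(float)                  # theta(R - |r|), wrap-around order

rho = np.full((N, N), 0.6)                    # homogeneous test profile
n2 = weighted_density(rho, w2, dx)
print(n2.mean(), 0.6 * np.pi * R**2)          # should agree up to grid errors
```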
IV. RESULTS

A. One-component system
In the last decade, two-dimensional one-component HD systems were extensively studied, with quantitative agreement now reached between experiments [2] and Monte Carlo (MC) simulations [1] on the type and location of the phase transition. For a summary, Table I provides a comparison between FMT, MC, and experiments. Experiments and simulations find a first-order transition between the fluid and the hexatic phase, and a continuous transition between the hexatic and the crystal phase. The surface tension in the experiments [2] (see the Supplementary Material therein) is for hexatic-fluid coexistence. The FMT results are for an assumed first-order transition between fluid and crystal.
From Table I we see that the coexistence packing fractions and the surface tension are described very well by FMT, even though the strict periodicity assumption for the crystal in FMT differs from the character of the hexatic and crystal phases in experiments/simulations. This good correspondence is in line with the quantitative description of fluid structure given in earlier work [4,5].

TABLE I: Thermodynamic properties of the one-component crystal-fluid transition. γ_0 denotes the averaged planar surface tension, σ the HD diameter, μ the chemical potential, P the pressure, η = (π/4)σ²ρ the packing fraction, superscript (1c) one-component, and subscript (co) coexistence of the crystal (cr) and fluid (fl), respectively. Note that for Exp and MC, the two values for η_cr correspond to the packing fraction of the hexatic phase at fluid-hexatic coexistence and the packing fraction at the hexatic-crystal continuous transition, respectively. The FMT coexistence values for a = 11/4 differ slightly from those in Ref. [4], which suffer from a small numerical error.

B. Binary systems: crystal structures

When the radii of the disks are comparable (q close to 1), we observe a clear substitutional disorder. Density peaks for both species are centered on the triangular lattice points and their magnitude is essentially determined by the composition of the crystal. An example can be seen in the crystal part of the crystal-fluid density profile shown in Fig. 7d below.
For small size ratios q ≪ 1, we observe interstitial disorder, i.e. the small disks almost exclusively occupy the interstitial space between the large disks, which in turn occupy the triangular lattice points. An example can be seen in the crystal part of the crystal-fluid density profile shown in Fig. 7a below. The HD and AO cases are very similar, and qualitatively the AO crystal density profiles in 3D show the same behavior [19].
For intermediate q and the HD case, we observe a superposition of substitutional and interstitial disorder, and the interstitial disorder may show a transition to different alloy configurations upon changing the composition. We exemplify this for q = 0.45. The density peaks of the large disks are again centered on the triangular lattice positions (not shown). For low small-disk concentrations (c_s = 0.03), we observe interstitial disorder superficially compatible with an AB_2 structure (see the small-disk distribution in Fig. 2a). From the large and small disks drawn in Fig. 2a, one sees, however, that the small disks are too big for the formation of a true AB_2 phase. For higher small-disk concentrations (c_s = 0.39; see Fig. 2b for the small-disk distribution), the lattice constant becomes smaller (the large disks on the triangular lattice points almost touch) and the interstitial density peaks of the small disks are compatible with an AB_3 structure.
Here, remarkably, the large disks drawn around the triangular lattice points and the small disks drawn around the interstitial peak positions reveal two packed AB_3 configurations. In the AO case, we only observed small-disk density distributions of the type shown in Fig. 2a.
Here, we have not investigated whether the minimized crystal structures with disorder are stable with respect to phase separation into different alloy phases. This requires more extensive investigations beyond the scope of this work. However, our results illustrate that a free minimization of the FMT functional is capable of generating alloy structures without the need to explicitly parameterize the density profiles (e.g., by suitably chosen Gaussian peaks, as is often done).
C. Binary systems: phase diagrams
For two-component hard systems, equilibrium states lie on a surface in a three-dimensional space spanned by, e.g., the packing fractions η_l, η_s and the pressure P. Consequently, binodals are lines in this three-dimensional space and they are often displayed by their two-dimensional projections, e.g., lines in the η_l-P or c_s/l-P plane, where c_s/l = ρ_s/l/(ρ_l + ρ_s) is the relative concentration of small/large disks. In the AO model, customarily the η_l-μ_s plane is chosen, but the topology of the phase diagrams is very similar to the one in the η_l-P plane.
Small size ratios q
For a size ratio q = 0.15, the phase diagram is shown in Fig. 3 in two different projections. For both HD and AO mixtures, the addition of the small species leads to an increased coexistence pressure for the fluid-crystal transition, i.e. the fluid phase is stabilized. The AO mixture shows the typical widening of the coexistence gap (η_l,cr − η_l,fl) with increasing concentration of the small species (see Fig. 3a), smoothly leading to a sublimation line. For η_s ≲ 0.01, the HD mixture binodal follows the AO binodal, i.e. it also shows an initial widening of the coexistence gap. This could be expected, since for these small concentrations the small disks only act as depletants and their mutual interaction is irrelevant. For higher η_s, the binodals separate. The choice of the parameter a in the functional has a significant influence on the location of the binodal. This is similar to the observation in Ref. [19] that also in the 3D case the binodal differs considerably between the White Bear II (tensor) and the Rosenfeld (tensor) functional, although the differences in the one-component case are not that significant.
For the size ratios q = 0.3 and q = 0.45, the phase diagrams are shown in Fig. 4 in the η_l-P plane. For the AO mixture, the liquid (rich in large disks)-vapor (poor in large disks) transition has become stable, which leads to the appearance of a triple point above which sublimation (the vapor-crystal transition) is stable. The triple point pressure decreases with increasing q. The difference in the location of the liquid-vapor transition between the FMT results for the two different values of a is only a consequence of normalizing the pressure axis by P_co^1c (for the two a values, it differs by ∼15%, see Table I). For the HD mixtures, there is no fluid-fluid transition, and there is hardly any widening of the coexistence gap of the fluid-crystal transition visible. The results for the AO mixture are very similar to the 3D case [19]. Experimentally, it is possible to realize such 2D systems by sedimented monolayers of colloidal spheres (as in Refs. [2,5]) to which nonadsorbing polymers can be added. For small size ratios q ≲ 0.15, it would be interesting to study, experimentally or by simulations, the fate of the established melting scenario for hard disks as the polymer concentration is increased. As we have seen, the coexistence gap continuously widens in this case, and we expect that towards the sublimation regime only the first-order transition survives.
Size ratios q close to 1
For size ratios q in the vicinity of 1, we only focus on the HD mixture. In the AO mixture, the phase diagram becomes rather uninteresting with regard to crystal phases; there, upon addition of the smaller, polymeric species, the behavior is again very similar to the 3D case, and a detailed discussion can be found in Ref. [19]. For HD mixtures, phase diagrams are shown in Fig. 5. For q very close to 1, the phase diagram is of a type commonly denoted as spindle type (which would be directly visible in the c_l-P plane or in the c_s-P plane): the coexistence pressure increases continuously upon addition of smaller disks and reaches its maximum for the pure small-disk system (see Fig. 5(a)). Upon lowering q, the type of phase diagram crosses over to azeotropic (see Fig. 5(b) and (c)): there, a maximum pressure for a stable fluid is found for a certain finite composition, i.e. for a truly mixed system. At this point of maximum pressure, the coexisting fluid and crystal have the same composition (azeotropic point). The precise value of q where this transition happens depends on the parameter a in the functional; it is around 0.91 for a = 3 and around 0.93 for a = 11/4. The transition from spindle-type to azeotropic phase diagrams has also been observed in simulations of hard sphere mixtures in 3D [8]. There, the transition happens at around q = 0.94. Furthermore, in 3D the azeotropic phase diagram changes to a eutectic phase diagram already at around q = 0.88. From our results, this happens in 2D at much lower q (see below).
Intermediate size ratios q
Again, we will only discuss HD mixtures. The phase diagram for q = 0.6 is shown in Fig. 6(a) and for q = 0.7 in Fig. 6(b),(c). For q = 0.6, we observe a phase diagram of eutectic type. It is actually very similar to the phase diagram found in simulations for q = 1/1.4 (see Ref. [14], Supplementary Material). The crossover to the azeotropic phase diagram (as seen for q = 0.75 in Fig. 5(c)) is surprising according to the FMT results. For q = 0.7, a three-dimensional phase diagram in η_l-η_s-P space is presented in Fig. 6(c). The coexistence surface with a majority of large disks (black surface) is close to the one with a majority of small disks (blue surface), but the two do not cross. Upon increasing q, the two surfaces touch and then form an azeotropic type of phase diagram. Returning to the η_l-P plane, the branch with a majority of large disks distorts to form an azeotropic point (see the black lines in Fig. 6(b)), whereas the branch with a majority of small disks remains approximately unchanged compared with q = 0.6 (blue lines in Fig. 6(b)). Thus, above the azeotropic point pressure, there is a stable and a metastable coexistence between a crystal with a majority of small disks and a mixed fluid.
D. Binary systems: Interface density profiles
Fig. 7 shows representative density profiles of the crystal-fluid interface for hard disk mixtures with q = 0.15, 0.45, and 0.75. For all size ratios q, the density of the large disks is always peaked on the triangular lattice sites (see Fig. 7a, upper panel), while the density of the small disks changes from interstitial to substitutional disorder with increasing q (see also the discussion in Sec. IV B). For the AO mixture, we found similar density profiles for q < 0.5, except for the case of Fig. 7c. From the profiles, one infers a rather broad interface.
We analyze the interface structure further by employing the methods of Ref. [30]. Smooth average density and crystallinity modes can be extracted from the Fourier transform of the full density profiles by picking a lateral reciprocal lattice vector (K_y) and cutting out a window around a reciprocal lattice vector K_x parallel to the interface normal. The average modes are the inverse Fourier transforms of the cut-out window. The average density mode M_0 is obtained by choosing K_x = K_y = 0, and the leading crystallinity mode M_1 is obtained by choosing K_y = 0, K_x = 4π/(√3 L), where L is the length of the rectangular unit cell side which is parallel to the interface; see Figs. 2 and 7. M_1 is complex in general; in the figures we show its absolute value only.
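A sketch of this mode extraction could look as follows; the grid conventions, the window half-width, and the function name are our own choices for illustration.

```python
import numpy as np

def interface_mode(rho, box_Lx, Kx, half_width=3):
    """Extract a smooth interface mode from a 2D density profile rho(x, y).

    The K_y = 0 lateral component is taken, a small window around the
    reciprocal lattice vector K_x (parallel to the interface normal, here
    the x axis) is cut out, and the window is transformed back to give a
    slowly varying complex amplitude along x.
    """
    Nx, Ny = rho.shape
    rho_k = np.fft.fft(rho, axis=1)[:, 0] / Ny     # lateral average: K_y = 0 row
    rho_kk = np.fft.fft(rho_k) / Nx                # transform along the normal
    kx = 2.0 * np.pi * np.fft.fftfreq(Nx, d=box_Lx / Nx)
    idx = np.argmin(np.abs(kx - Kx))               # grid index closest to K_x
    window = np.zeros_like(rho_kk)
    sel = np.arange(idx - half_width, idx + half_width + 1) % Nx
    window[sel] = rho_kk[sel]
    return Nx * np.fft.ifft(window)                # complex mode profile M(x)

# usage: M0 is obtained with Kx = 0; M1 with Kx = 4*pi/(sqrt(3)*L)
# of the unit cell, and |M1| is what would be plotted.
```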
In Fig. 8 we compare laterally averaged density profiles with the extracted density and crystallinity modes for the four interfaces of Fig. 7. Several observations can be made. First, looking at the density and crystallinity modes of the large disks (middle column in Fig. 8), we note that, coming from the fluid side, crystallinity sets in earlier than densification (except for the case q = 0.45, c_s = 0.39). This has also been noted before in the 3D case of one-component hard spheres [30]. Second, looking at the density and crystallinity modes of the small disks (right column in Fig. 8), we observe that for small q = 0.15 (interstitial disorder) and large q = 0.75 (substitutional disorder), the small-disk crystallinity is essentially proportional to the large-disk crystallinity. Since the crystal has a smaller concentration of small disks than the fluid, the density mode increases from left to right but stays monotonic. For the intermediate size ratio q = 0.45, we note that the crystallinity of the small disks is peaked at the interface, and for c_s = 0.39 this also applies to the density mode. Thus, we see an interfacial enrichment of ordered small disks. This interfacial enrichment can also be seen in the laterally averaged density profiles (left column in Fig. 8), which exhibit an increase in the oscillation amplitude of the small-disk density (red lines) in the interfacial region. However, the quantification of this effect is easier using the crystallinity and density modes.

E. Binary systems: surface tensions

For small to moderate size ratios of up to 0.6, we may view the small disks as depletants, at least for small concentrations c_s. In Fig. 9, we show the associated crystal-fluid planar surface tension γ_0^2c versus c_s for both AO and HD mixtures.
For q = 0.15 (Fig. 9a), we have computed the surface tension for c_s up to 1. We remind the reader of the associated phase diagrams (Fig. 3), which in the AO case show the typical widening of the coexistence gap. In the HD case, the widening of the coexistence gap follows the AO case only for small c_s. It is a bit surprising that the surface tension decreases upon addition of small disks, with the results for the HD mixture lying on top of those for the AO mixture up to c_s ∼ 0.4. In the depletion picture, the addition of small disks leads to an increasing attraction between the large disks. In mean-field approximation, the increasing attraction, together with an increasing coexistence gap, should lead to a higher surface tension. Such an increase is seen both for the AO model and the HD case only for rather large c_s, after a minimum has been reached around c_s ≈ 0.6 (Fig. 9a). In the HD case, for c_s → 1 we reach the monocomponent case for small disks; thus, the surface tension should reach γ_0^2c(c_s = 1) = (σ_l/σ_s) γ_0^2c(c_s = 0) = γ_0^1c/q.
The peculiar behavior of an initially decreasing surface tension is also seen for q = 0.3 (Fig. 9b), q = 0.45 (Fig. 9c), and q = 0.6 (Fig. 9d), although the decrease becomes smaller with increasing q. With increasing size ratio, the HD and AO results also differ more and more already for small c_s, and we note that the choice of the parameter a in the FMT functional influences the results considerably. Overall, the surface tensions are rather small on the thermal energy scale. For the monocomponent case, this leads to strong interface fluctuations, as observed in Ref. [2]. Owing to the decrease in γ_0^2c upon addition of small disks, we would expect that these fluctuations also become stronger. For q ≥ 0.75, the phase diagram of the HD mixture is of azeotropic or spindle type (see Fig. 5); thus, we can determine γ_0^2c in the whole range of concentrations from c_s = 0 up to 1. In Fig. 10, the surface tension γ_0^2c versus c_s is shown for four size ratios q ≥ 0.75 and the two values of the parameter a. Qualitatively, there is no significant dependence on a for these size ratios. As before (for small q), the initial decrease of γ_0^2c for small c_s is present. There is a minimum in the surface tension around c_s = 0.5, and it reaches the correct monocomponent value γ_0^2c(c_s = 1) = γ_0^1c/q. The surface tensions can actually be well described by a fit function, Eq. (20), involving one fit parameter κ, where γ_0^1c and η_cr^1c on the right-hand side of Eq. (20) are the monocomponent surface tension and the coexistence crystal packing fraction (see Table I), and η_l/s,cr are the coexistence crystal packing fractions of the large/small HD. For the fit parameter κ, we note that lim_{q→1} κ(q) = 2. For q < 0.75, Eq. (20) is not valid, which may be due to the complicated transition from an azeotropic to a eutectic phase diagram (as discussed before).
V. SUMMARY AND CONCLUSION
Using density functional theory (fundamental measure theory), we have performed an extensive study of the phase diagram and crystal-fluid surface tensions in binary hard disk systems, both for the additive case and the non-additive (Asakura-Oosawa like) case. Since we assumed a periodic crystal, we find first-order transitions only. These correspond to the first-order fluid-hexatic transition for the one-component case and presumably to first-order fluid-crystal transitions (which become stable upon admixing a second component, see e.g. Ref. [14]). Overall, the phase diagrams are qualitatively very similar to 3D. In the AO case and for small size ratios q, the typical continuous widening of the coexistence gap is observed upon addition of the smaller species, and for intermediate q a vapor-liquid transition becomes stable. In the additive case, the phase diagrams show the sequence spindle → azeotropic → eutectic upon lowering q from 1 to 0.6 (similar to 3D). However, the transition from azeotropic to eutectic is different from what is known in 3D hard sphere systems (see the phase diagram in Fig. 6(b),(c) for q = 0.7).
The results for the crystal-fluid surface tensions reveal two things. First, their values are much smaller than 1 in thermal units 1/(βσ_l). For the one-component case, the resulting large thermal fluctuations of the interface have been observed experimentally [2]. Second, the addition of a second component leads in general to a substantial decrease in the surface tension. This holds for the AO case (for q ≲ 0.6) and also for the additive case (here for the whole range of q). Complementary, dedicated simulation or experimental results on this are clearly desirable, also in view of the relevance of the surface tension for nucleation processes; see Ref. [31] for a review of more qualitative results on 2D crystal and defect formation. The observed decrease in surface tension should lead to a considerable decrease in the time scales of crystal nucleation.
In contrast to phase diagrams, results on crystal-fluid surface tensions in binary 3D systems are scarce. For binary hard spheres with a size ratio of q = 0.9, results are reported in Ref. [32]. For that q, the phase diagram is azeotropic. The surface tension is found to increase monotonically with the addition of small spheres. These findings are similar to those for a 3D binary Lennard-Jones system with zero size mismatch, but a ratio of interaction strengths of 0.75 (leading to a spindle-type phase diagram) [33], but they are different from the nonmonotonic behavior found here in the 2D system (see Fig. 10(a),(b)).
The full minimization of the FMT functionals shows interesting effects in the density distributions in the crystal unit cells and at the crystal-fluid interfaces. For intermediate size ratios (examples shown for q = 0.45), superpositions of substitutional and alloy structures are found, and an enhanced crystallinity and density of small disks is observed right at the interface between crystal and fluid. Clearly, an extension of the present studies to the global stability of alloy phases and their interfaces is desirable but requires considerably more effort.
Appendix

For completeness, we also present results for the liquid-vapor surface tension γ_lv in the AO model for size ratios q = 0.3 ... 0.7; see Fig. A.1(a). Similar to the crystal-fluid surface tension, the numerical values of γ_lv are much smaller than 1 in thermal units 1/(βσ_l), even far away from the critical point. In Fig. A.1(b), we show the extracted exponent α for the assumed power-law relation γ_lv ∝ (Δη_l)^α, where Δη_l = η_l,liq − η_l,vap is the difference between the coexistence packing fractions of large disks in the liquid and vapor phases. For mean-field models, α = 3 close to the critical point, and this behavior is found to hold not only in the immediate vicinity of the critical point. This is similar to results from density functional studies of the 3D AO model [34,35].
"year": 2018,
"sha1": "d388ebd6d03795ab022bf5c78b11bb25d5b3d2de",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1805.03742",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "d388ebd6d03795ab022bf5c78b11bb25d5b3d2de",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Medicine",
"Physics",
"Materials Science"
]
} |
A General Maximum Progression Model to Concurrently Synchronize Left-Turn and Through Traffic Flows on an Arterial
In the existing bandwidth-based methods, through traffic flows are considered as the coordination objects and are offered progression bands accordingly. However, at certain times or nodes in the road network, when the left-turn traffic flows have a higher priority than the through traffic flows, it would be inappropriate to still provide the progression bands to the through traffic flows; the left-turn traffic flows should instead be considered as the coordination objects to potentially achieve better control. Considering this, a general maximum progression model to concurrently synchronize left-turn and through traffic flows is established by using a time-space diagram. The general model can deal with all the patterns of the left-turn phases by introducing two new binary variables into the constraints; that is, these variables allow all the patterns of the left-turn phases to be handled within a single formulation. By using the measures of effectiveness (average delay time, average vehicle stops, and average travel time) acquired with the traffic simulation software VISSIM, the validity of the general model is verified. The results show that, compared with MULTIBAND, the proposed general model can effectively reduce the delay time, vehicle stops, and travel time and, thus, achieve better traffic control.
Introduction
As an important approach applied in urban traffic signal control, arterial traffic signal coordination has tremendous importance in reducing vehicle delay time and vehicle stops, improving the passing efficiency of arterials, and alleviating traffic congestion. Studies on arterial traffic signal coordination can be divided into two types based on their different optimization objectives. The first type aims to minimize the delay time, whereas the second type aims to maximize the progression bandwidths. A delay-based method establishes a relational expression between the delay time and the traffic signal parameter combinations (such as cycle length, split, and offset) and calculates the delay time for different signal parameter combinations, thereby obtaining, by comparison, the optimal signal parameter combination with the minimum delay time. A bandwidth-based method, accordingly, aims to maximize the two-way progression bandwidths of an arterial, so that vehicles driving within the progression bands can pass through the entire arterial without any stops. Bandwidth-based methods are the preferred choice of numerous traffic engineers because they can visually display the coordination control effects. Morgan and Little were pioneers who studied the problem of arterial traffic signal coordination and formulated the concept of bandwidths [1]. Subsequently, Little extended their previous work and established a mixed-integer linear programming model for arterial traffic signal coordination by considering the maximum two-way progression bandwidths as the objective function and the cycle length, offsets, and progression speeds as the decision variables [2]. Messer et al. developed a traffic signal progression program to optimize multiphase sequences to obtain the maximum progression bandwidths [3]. In 1981, Little et al. further extended their previous research by considering issues such as the queue clearance time and the left-turn phase sequence and proposed a more universal coordination model, the MAXBAND model [4]. Chang et al. extended the research by Little and developed the MAXBAND-86 model [5]. In comparison with the MAXBAND model, the MAXBAND-86 model can solve the traffic signal coordination problem of a closed road network enclosed by multiple arterials. A progression bandwidth may not be realized, or may be only partially realized, when the signal timings generated by the MAXBAND model and PASSER II are actually applied to arterials. Tsay and Lin proposed a new algorithm for solving the maximum progression bandwidth to ensure that users obtain a more realistic range [6]. In actual situations, different road sections have different traffic and saturation volumes, and so their demands for the progression bandwidths also differ. Therefore, it is preferable to assign different progression bandwidths to different road sections to better satisfy the actual traffic requirements. Gartner et al.
took this into consideration and proposed a variable progression bandwidth model, the MULTIBAND model, which could generate different progression bandwidths for different road sections [7]. Subsequently, Stamatiadis and Gartner extended the MULTIBAND model to network coordination by developing the MULTIBAND-96 model [8]. Gartner and Stamatiadis thereafter proposed an optimized solution algorithm specific to MULTIBAND-96 and thereby significantly improved the computational efficiency [9,10]. Concurrently, Chaudhary and Messer developed an arterial traffic signal coordination software package, PASSER IV, which could optimize the progression bandwidth in grid networks [11]. In recent years, relevant scholars have further studied bandwidth-based methods and obtained relatively detailed results based on the MAXBAND or MULTIBAND model. Tian and Urbanik proposed a bandwidth-oriented signal coordination method based on system partition technology, which divided a large signalized arterial into several subsystems and optimized each subsystem to obtain the maximum progression bandwidth [12]. Lu et al. noted that the MAXBAND model failed to consider the dispersion of a platoon and added a platoon dispersion model as a constraint to the MAXBAND model, thereby proposing a modified MAXBAND model [13]. Tian et al. conducted quantitative evaluations of two signal timing issues (phase sequence and number of intersections) related to the progression bandwidth. The research results indicated that lead-lag phasing yields maximum progression bandwidth solutions more easily than leading or lagging phasing; however, a larger number of intersections implies more difficulty in obtaining the maximum progression bandwidth [14]. Chen et al. found that the queue clearance time changes dynamically with the offset instead of being a fixed value and accordingly derived the functional relation between the two parameters. This functional relation was then combined with MAXBAND to develop an improved MAXBAND model considering the dynamic queue clearance time [15]. Lin et al. proposed a new mixed-integer nonlinear programming model with the aim of providing the maximum number of nonstop movements [16]. In consideration of the issue that the MAXBAND and MULTIBAND models may not be able to provide maximum progression bandwidths under certain weight factors, Lu et al. proposed a new type of maximum progression bandwidth model suitable for nonbalanced bandwidth demands by introducing proportional impact factors [17]. The overlapping and split phases are two classical phase design methods; however, MAXBAND considers only the overlapping phase. To overcome this limitation, Lu et al. developed a bandwidth-oriented model with a comprehensive consideration of both the overlapping and split phases [18]. Cesme and Furth presented an inductive progression coordinated control method based on self-organizing traffic signals using secondary extension and dynamic coordination [19]. Li proposed a two-phase approach to solve the problem of travel time uncertainty. In this approach, first, a modified MAXBAND model is used to generate multiple optimal or suboptimal plans; next, the Monte Carlo method is applied to simulate random travel times, evaluate the plans, and rank them according to their reliability [20]. Gomes introduced a vehicle arrival function into signal coordination control and then developed a new bandwidth-oriented model, providing a detailed analysis of pulse arrivals and Gaussian arrivals [21]. Ye et al.
noted that the available coordination control methods could not accurately calculate the queue clearance length and, therefore, developed a calculation formula specific to it that was incorporated into the coordination control model [22]. Based on the MAXBAND model, Yang et al. proposed a multipath maximum progression bandwidth model that could provide progression bands for multiple paths on an arterial [23]. Zhang et al. considered the constraint of the MULTIBAND model requiring the progression bands to be symmetrical and proposed an asymmetric multiband model, the AM-Band model, allowing the progression bands to be asymmetrical [24]. Kim et al. proposed a dynamic progression bandwidth analysis method to adjust the bandwidth dynamically using closed-loop signal data [25]. Based on the MAXBAND model, Dai et al. developed a new bandwidth-based model suitable for a bus lane system by taking into consideration factors such as the bus speed, the location of the bus stops, and the dwell time [26]. The existing models are highly dependent on fixed green splits; to overcome this shortcoming, Shirvani and Maleki proposed a modified multibandwidth model based on the acceptance degree of a bandwidth by using fuzzy control theory [27]. Zhang et al. studied long arterials and grid networks and then developed two bandwidth-oriented models (denoted as the MAXBANDLA and MAXBANDGN models) to address the signal coordination problems of long arterials and grid networks [28]. When a common signal cycle is applied to a minor intersection whose actual signal cycle length is approximately half of the common signal cycle length, it tends to cause excessive delay for drivers on the crossing road. Therefore, Zhou et al. proposed an uneven double cycle control method to solve this issue [29]. By introducing concepts such as the green centre line, the ideal intersection position, and the ideal intersection spacing, Lu et al. proposed a network progression band model with the minimum time difference between the actual and ideal green centre points as the objective function [30].
In general, all the bandwidth-based models mentioned above considered the through traffic flows as their objects and assigned the progression bands accordingly. On an arterial, the through traffic flows generally have larger volumes and should be awarded a higher priority and consequently assigned the progression bands. However, at certain times or nodes, when the left-turn traffic flows have a higher volume than the through traffic flows, it would be inappropriate to continue to provide the progression bands to the through traffic flows. Under such conditions, it is obvious that transferring the progression bands to the left-turn traffic flows, which then have the higher priority, could result in better traffic control. An arterial consisting of three signalized intersections, as shown in Figure 1, is selected as an example to illustrate the focus of this paper. The outbound through traffic flow at the first intersection (west approach), when driving to the middle intersection, divides there into three streams: a left-turn flow, a through flow, and a right-turn flow. Similarly, the inbound through traffic flow at the third intersection (east approach) also divides into left-turn, through, and right-turn flows at the middle intersection; the other flows follow the same pattern. The available bandwidth-based methods typically consider the arterial through traffic flows (the outbound and inbound through flows) as their coordination subjects and assign progression bands accordingly. By this approach, the through traffic flows can pass through the entire arterial without any stops, or with only a few stops, thereby achieving a good passing efficiency. Nevertheless, when the left-turn traffic flows at the middle intersection have higher volumes than the through traffic flows, and consequently a higher priority and a more urgent demand for progression bands, it would be inappropriate to still coordinate the through traffic flows; instead, the progression bands should be relayed to the left-turn traffic flows to achieve a better passing efficiency.
Summarizing, the issue examined in this study is as follows. First, we assume that the bidirectional left-turn traffic flows at the middle intersection have a higher demand for the progression bands than the through traffic flows there. Therefore, the bidirectional left-turn traffic flows will be taken as the coordination objects and considered first during modelling, which is significantly different from the existing methods. Next, the outbound through traffic flow from the middle intersection to the downstream intersection and the inbound through traffic flow from the middle intersection to the upstream intersection will also be considered during the modelling. This means that the left-turn and through traffic flows can be synchronized concurrently.
In addition, we note that the numbers of upstream and downstream intersections of the coordinated intersection are equal in Figure 1. However, this is not necessary; that is, the coordinated intersection is not required to be located in the middle of the arterial. Figure 1 is only used to better illustrate the issue described in this paper.
The remainder of this paper is organized as follows. In Section 2, a general maximum progression model is presented, which can simultaneously coordinate the left-turn and through traffic flows and handle all the patterns of the left-turn phases by introducing two binary variables. In Section 3, a case study is discussed to compare the proposed model with the MULTIBAND model by using the traffic simulation software VISSIM. Section 4 provides the conclusions, with final remarks and a description of possible future work.
Model Formulations
The necessity of coordinating the bidirectional left-turn traffic flows at the coordinated intersection was described in Section 1. There are four possible patterns of left-turn phases at this intersection, as shown in Figure 2.
Figure 2 shows the four possible patterns of the left-turn phases. However, some readers may question the nonconsideration of the symmetric phase (in which the outbound and inbound green times for the through flows are equal, and the outbound and inbound green times for the left-turn flows are also equal) at the coordinated intersection in Figure 2. In fact, the symmetric phase is only a special case of Patterns I and II, and the specific explanation follows from Figure 2.
General Model of Simultaneously Optimizing Offsets and Phase Sequences

According to the above analysis, there are various patterns of left-turn phases at the coordinated intersection (see Figures 2 and 3). In this section, a general maximum progression model that can deal with all the above patterns of the left-turn phases is established by introducing two new binary variables. In addition, the key notation used in the modelling is listed in Notations.
To avoid repetition, we select Pattern I (see Figure 2) as the example for modelling; the modelling processes for the other patterns can be conducted similarly. The time-space diagram for Pattern I, based on the MULTIBAND model (Gartner et al. [7]), is presented in Figure 4.
Owing to the differences in the modelling objects (the through traffic flows are taken as the objects in the MULTIBAND model, whereas the left-turn and through traffic flows are simultaneously considered as the objects in the general model), the loop integer and inequality constraints in the MULTIBAND model cannot be directly applied to the general model. Thus, the new loop integer and inequality constraints suitable for the general model established in this paper are deduced as follows.
First, the derivation process of the new loop integer constraints is given. Two of the relevant quantities in Figure 4 can be expressed as Eqs. (3) and (4); Eq. (3) is applicable between the two marked points in Figure 4.
Combining Eqs. (5)-(7), the new loop integer constraint between the adjacent intersections can be expressed as Eq. (8). Second, because the new inequality constraints are relatively simple and can be obtained directly from the time-space diagram, the corresponding expressions are provided directly in this section (see constraints (15)-(18)).
The three intersections are considered as the model subjects and, similar to the MULTIBAND model, the objective is to maximize the sum of the two-way weighted progression bandwidths. The general model established in this paper can be expressed as follows.
In the proposed model, the objective function and constraints are linear. In particular, because some of the variables are integer or binary, the model is a mixed-integer linear program. Eq. (9) is the objective function of the proposed model, which aims to maximize the left-turn and through progression bandwidths. Constraint (11) is used to indicate the direction with the higher weight and to identify the outbound and inbound directions; the direction with the higher weight will be assigned the larger progression bandwidth. Constraint (12) is used to limit the value range of the cycle length. Constraint (13) is employed to limit the travel speed on each road section to within a reasonable range. Constraint (14) is applied to limit the travel speed change between consecutive road sections to within a reasonable range. Constraint (15) is used to limit the minimum values of the band position variables at the upstream and downstream intersections. Constraint (16) is used to limit the minimum values of the corresponding variables at the coordinated (middle) intersection. Constraint (17) limits the maximum values of the band position variables at the upstream and downstream intersections. Constraint (18) is applied to limit the maximum values of the corresponding variables at the coordinated intersection. Constraint (19) is used to indicate that the variables in this constraint are nonnegative. Constraint (20) specifies that the variables in this constraint are integers. Constraint (21) indicates that the variables in this constraint are valued as 0 or 1.
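To make the structure of such bandwidth-maximization MILPs concrete, here is a deliberately simplified, one-way, two-intersection sketch using the PuLP library. It is not the general model of this paper (no left-turn bands, weights, speed constraints, or phase-sequence binaries); the variable names and the cycle-unit normalization are our own choices.

```python
from pulp import LpProblem, LpVariable, LpMaximize, value

# All times in cycle units. Intersection i shows green during
# [theta_i, theta_i + g_i] (mod 1).
g = [0.45, 0.40]          # green ratios
tau = [0.0, 0.30]         # cumulative travel time from intersection 0

prob = LpProblem("one_way_green_wave", LpMaximize)
b = LpVariable("bandwidth", lowBound=0)           # progression bandwidth
s = LpVariable("band_start", lowBound=0)          # departure of the band's leading edge
theta = [LpVariable(f"offset_{i}", lowBound=0, upBound=1) for i in range(2)]
m = [LpVariable(f"loop_{i}", cat="Integer") for i in range(2)]  # loop integers

prob += b  # objective: maximize the bandwidth
for i in range(2):
    # the band [s + tau_i, s + tau_i + b] must fit inside some green window
    prob += s + tau[i] >= theta[i] + m[i]
    prob += s + tau[i] + b <= theta[i] + g[i] + m[i]

prob.solve()
print("b =", value(b), "offsets =", [value(t) for t in theta])
```

The integer loop variables play the same role as the loop integer constraints above: they let the band land in the green window of any later cycle, which is what makes offsets on a ring of intersections consistent.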
Correspondence Relationships between Binary Variables and Phase Sequences.
Although the derivation of the general model proposed in this paper is based on Pattern I, the general model can also deal with the remaining phase patterns (Patterns II to VI) through the two new binary variables ($\delta_{i+1}$ and $\gamma_{i+1}$) introduced into the inequality constraints (see constraints (16) and (18)). This implies that the maximum progression model established herein is a general mathematical model that can deal with all the above-mentioned phase patterns.
The correspondence between the values of $\delta_{i+1}$ and $\gamma_{i+1}$ and the phase patterns is shown in Table 1.
As can be seen from Table 1, $\delta_{i+1} = 0$ and $\gamma_{i+1} = 1$ correspond to two patterns, namely Patterns I and VI. A question to be addressed is how to distinguish between Patterns I and VI when the values $\delta_{i+1} = 0$ and $\gamma_{i+1} = 1$ are obtained after the general model is solved. The specific method to achieve this is given below. Before the model is solved, the green time of each traffic flow at $I_{i+1}$ is obtained (i.e., the green time is a known parameter). If the outbound and inbound green times for the through traffic flows are equal and those for the left-turn traffic flows are also equal, then the case corresponds to Pattern VI; otherwise, it corresponds to Pattern I. Similarly, when $\delta_{i+1} = 1$ and $\gamma_{i+1} = 0$ (as listed in Table 1), Patterns II and V can be distinguished.
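As an illustration only, the disambiguation rule just described can be written as a few lines of Python. The paper gives the rule in prose; the function below is a hypothetical rendering of it, with argument names of our choosing:

```python
def resolve_pattern(delta: int, gamma: int,
                    g_thru_out: float, g_thru_in: float,
                    g_left_out: float, g_left_in: float) -> str:
    """Map the solved binary variables to a unique phase pattern,
    using the (known) green times at the intersection to break ties."""
    symmetric = (g_thru_out == g_thru_in) and (g_left_out == g_left_in)
    if (delta, gamma) == (0, 1):
        return "VI" if symmetric else "I"
    if (delta, gamma) == (1, 0):
        return "V" if symmetric else "II"
    # The remaining value combinations each map directly to a single
    # pattern (per Table 1 in the paper; not reproduced here).
    raise ValueError("combination resolved directly via Table 1")
```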
Numerical Example
3.1. Basic Parameters of the Arterial. Yinhai Road, which includes five signalized intersections in Yiwu City, is selected to verify the validity of the established general model. The distribution of each intersection on the arterial is shown in Figure 5. Chouzhou Road is an important traffic attraction because various trade marts are located on this road, which causes more traffic flows to turn left at $I_3$. Therefore, Yinhai Road provides a good testing arterial to verify the effectiveness of the model proposed herein. In Table 2, LT, TH, and RT are the abbreviations for left-turn, through, and right-turn, respectively; and W, N, E, and S denote the west, north, east, and south approaches of the intersections, respectively. The design speed for the progression bands in both the outbound and inbound directions on the arterial is 45 km/h.
3.2. Time-Space Diagrams. Based on the traffic flow data presented in Table 2 for each intersection and the Webster theory, the distribution ratio of the phase green time for each intersection is listed in Table 3 (the saturation flow of one lane is assumed as 1800 veh/h).
Table 2 shows that, at $I_3$, the outbound left-turn traffic volume (1310 veh/h) is larger than the outbound through traffic volume (828 veh/h), and the inbound left-turn traffic volume (932 veh/h) is also larger than the inbound through traffic volume (546 veh/h). Thus, the left-turn traffic flows at $I_3$ have a stronger and more urgent demand for the progression band than the through traffic flows. An attempt has therefore been made to assign the progression bandwidths to the outbound and inbound left-turn traffic flows at $I_3$.
The method is to input the distances between the intersections, the design speed of the progression bands, the signal timing (as shown in Table 3), the common cycle length (calculated as 100 s using the Webster cycle formula, assuming a total lost time of 12 s at each intersection and the traffic flow volume data presented in Table 2), and other data into the MULTIBAND model and the established general model. These models are solved using the optimization tool LINGO.
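To make the optimization step concrete, the following is a minimal, self-contained sketch of a bandwidth-maximization MILP of the same family, written in Python with PuLP in place of LINGO. It is deliberately simplified to a one-way progression band (the paper's model is two-way and also covers left-turn bands), and every number in it is an illustrative assumption, not the arterial's data:

```python
import pulp

# Illustrative inputs (fractions of a unit cycle), NOT the paper's values.
g = [0.55, 0.45, 0.50]   # green fractions at three intersections
t = [0.30, 0.40]         # travel times between consecutive intersections
n = len(g)

prob = pulp.LpProblem("one_way_progression", pulp.LpMaximize)
b = pulp.LpVariable("bandwidth", lowBound=0, upBound=min(g))
theta = [pulp.LpVariable(f"offset_{i}", lowBound=0, upBound=1) for i in range(n)]
m = [pulp.LpVariable(f"wrap_{i}", lowBound=-5, upBound=5, cat="Integer")
     for i in range(n)]
s = pulp.LpVariable("band_start", lowBound=0, upBound=2)

prob += b  # objective: maximize the progression bandwidth

cum = 0.0  # cumulative travel time from the first intersection
for i in range(n):
    if i > 0:
        cum += t[i - 1]
    # The band [s + cum, s + cum + b] must fit inside the green window of
    # intersection i; the integer variable wraps it modulo the cycle.
    prob += s + cum >= theta[i] + m[i]
    prob += s + cum + b <= theta[i] + g[i] + m[i]

prob += theta[0] == 0  # fix the reference offset
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("max bandwidth (cycles):", pulp.value(b))
print("offsets (cycles):", [round(pulp.value(v), 3) for v in theta])
```

The integer "wrap" variables play the same role as the loop integer variables discussed above: they let the band fall inside any repetition of the periodic green window.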
The solution of the general model is $\delta_3 = 0$ and $\gamma_3 = 1$, showing that the optimal pattern at $I_3$ is Pattern I or Pattern VI (see Table 1). At the same time, it can be seen from Table 3 that, at $I_3$, the outbound distribution ratio of the green time for the through flows (0.31499) is not equal to the inbound one (0.19697). In addition, the outbound distribution ratio of the green time for the left-turn traffic flows (0.47258) is also not equal to the inbound ratio (0.35456). Therefore, the optimal phase pattern obtained by the general model at $I_3$ is Pattern I. The solution of the MULTIBAND model gives both of its binary variables at $I_3$ equal to 1, indicating that the optimal pattern obtained by the MULTIBAND model at $I_3$ is Pattern II (the correspondence between the MULTIBAND binary variables and the phase patterns is given in [7]). The optimal solutions obtained by LINGO are displayed in the time-space diagrams in Figure 6 (for the MULTIBAND model) and Figure 7 (for the general model established in this paper), respectively.
From the above analysis, although the MULTIBAND model yields optimal phase patterns that differ from those of the established general model, both models produce the theoretical maximum progression bandwidth corresponding to their optimal phase pattern. Because their modelling objects are different, the derived progression bands still differ as follows. In Figure 6, at $I_3$, the MULTIBAND model selects the outbound and inbound through traffic flows as its modelling subjects for the progression bands; the bandwidth for the outbound through traffic flow between $I_2$ and $I_3$ is $b_2 = 28$, and that for the inbound through traffic flow between $I_3$ and $I_4$ is $\bar{b}_3 = 14$. In Figure 7, the general model takes the outbound and inbound left-turn traffic flows as its modelling subjects; the bandwidth for the outbound left-turn traffic flow between $I_2$ and $I_3$ is $b_2 = 35$, and the bandwidth for the inbound left-turn traffic flow between $I_3$ and $I_4$ is $\bar{b}_3 = 31$. The two models thus differ mainly in their modelling objects and, consequently, in the progression bandwidths they provide. At $I_3$, the MULTIBAND model provides a total progression bandwidth of $b_2 + \bar{b}_3 = 42$ (shown in Figure 6) for the through traffic flows, whereas the general model provides $b_2 + \bar{b}_3 = 66$ (shown in Figure 7) for the left-turn traffic flows. The general model, being superior to the MULTIBAND model in terms of progression bandwidth, may therefore achieve better traffic control (such as less delay, fewer stops, and shorter travel time), which is validated below using traffic simulation.
Simulation and Results Analysis
To evaluate the proposed general model and the MULTIBAND model, the average delay time, average vehicle stops, and average travel time were used as measures of effectiveness (MOEs), obtained using the traffic simulation software VISSIM. The traffic signal coordination solutions generated by the two models were input into VISSIM (including the cycle length, offsets, phases, and phase sequences). To avoid the randomness caused by a one-time traffic simulation, ten random seeds were selected to form ten traffic simulation runs. The performance of the two models was evaluated and compared on two levels.
Level 1: the main aspect that differentiates the general model from the MULTIBAND model is that it provides the progression bands to the outbound left-turn traffic flow ($L_{2,3}$, from $I_2$ to $I_3$) and the inbound left-turn traffic flow ($\bar{L}_{3,4}$, from $I_4$ to $I_3$), instead of the outbound through traffic flow ($T_{2,3}$, from $I_2$ to $I_3$) and the inbound through traffic flow ($\bar{T}_{3,4}$, from $I_4$ to $I_3$) as defined by the MULTIBAND model. The traffic flows (including the left-turn and through traffic flows) between intersections $I_2$, $I_3$, and $I_4$ were therefore selected as the evaluation objects for Level 1, to verify the effectiveness of coordinating the left-turn flows. The traffic simulation data of the ten random seeds were averaged, and the obtained average delay time, average travel time, and average vehicle stops were used to compare the two models. See Figure 8 for the comparison results for Level 1.
Figure 8 shows that, at Level 1, the average delay time, average travel time, and average vehicle stops obtained from the general model are smaller than those derived from the MULTIBAND model by 30.44%, 16.23%, and 32.58%, respectively. This implies that, at Level 1, it is feasible to provide the progression bands for the outbound and inbound left-turn traffic flows ($L_{2,3}$ and $\bar{L}_{3,4}$) at $I_3$ and concurrently obtain better traffic control, which validates the general model established in this paper.
Level 2: the traffic flows on the entire arterial ($I_1$ to $I_5$) were considered as the evaluation objects. The traffic simulation data of the ten random seeds were again averaged, and the obtained average delay time, average travel time, and average vehicle stops were used to compare the two models. Refer to Figure 9 for the comparison results for Level 2.
Figure 9 shows that, at Level 2, the average delay time, average travel time, and average vehicle stops derived from the general model are smaller than those of the MULTIBAND model by 32.51%, 14.10%, and 27.59%, respectively. This implies that the general model established herein achieves better control than the MULTIBAND model over the entire arterial.
At both Level 1 and Level 2, the general model achieves better control than the MULTIBAND model, as shown by its shorter average delay time, fewer average stops, and shorter average travel time. This validates the concept of providing the progression bands to the left-turn traffic flows to achieve better control.
Discussion.
It is necessary to highlight that the traffic flows on the crossing roads were not considered in the traffic simulation evaluation, because of which the comprehensiveness of the performance evaluation of the two models could be questioned. The reasons why the traffic flows on the crossing roads were not included in the simulation evaluation are as follows: (1) considering the traffic flow from the north approach at $I_1$ as an example, this traffic flow comes from the upstream intersection ($I_{N1}$) on the north side of $I_1$. The delay time, vehicle stops, and travel time of this traffic flow are affected by the offset between $I_{N1}$ and $I_1$. Therefore, the simulation evaluation of such a traffic flow should be conducted on the north-south road from $I_{N1}$ to $I_1$, rather than on the east-west arterial from $I_1$ to $I_5$; (2) all the bandwidth-based methods (the MULTIBAND model and the general model herein) take the traffic flows on arterials as their research subjects; the traffic flows on the crossing roads intersecting the arterials should be evaluated on the corresponding crossing roads instead of on the arterial.
Conclusions
The existing bandwidth-based methods typically take the through traffic flows on an arterial as their study object when providing the progression bands, implying that the through traffic flows have a higher priority and a more urgent requirement for them. However, when the left-turn traffic flows have a higher priority and a more urgent demand for the progression bands than the through traffic flows, it is inappropriate to still provide the progression bands to the through traffic flows; better control may be achieved by selecting the left-turn traffic flows as the study object and providing them with the progression bands. In view of this, a general maximum progression model to concurrently coordinate the left-turn and through traffic flows was established in this paper using time-space diagrams. The general model can deal with all the patterns of the left-turn phases through the introduction of two new binary variables. A numerical example was selected to compare the MULTIBAND model and the general model established herein, the results of which indicated that the latter can effectively reduce the average delay time, average vehicle stops, and average travel time. Thus, the appropriateness and practicability of the general model established herein were validated.
In future research work, the following two directions can be focused on: (1) the general model established herein takes the two-way left-turn traffic flows (the outbound left-turn traffic flow from $I_i$ to $I_{i+1}$ and the inbound left-turn traffic flow from $I_{i+2}$ to $I_{i+1}$, as shown in Figure 1) at $I_{i+1}$ as its study objects and provides progression bands accordingly. In the future, the general model could be expanded to take one-way left-turn traffic flows (such as the outbound left-turn traffic flow from $I_i$ to $I_{i+1}$) together with one-way through traffic flows (such as the inbound through traffic flow from $I_{i+2}$ to $I_{i+1}$) at $I_{i+1}$ as the study objects.
(2) After Internet of Vehicles technology is deployed and matures in the future, the demand of the traffic flows on different roads in a road network for the progression bands could be obtained in real time and dynamically. Such traffic flows could then be selected as the study objects and assigned the progression bands, so as to better adapt to traffic flow variation and provide the progression bands more accurately to the traffic flows in urgent need.
Notations
$b_i$ ($\bar{b}_i$): Outbound (inbound) progression bandwidth between $I_i$ and $I_{i+1}$
$a_i$ ($\bar{a}_i$): Weight coefficient of the outbound (inbound) bandwidth
$k_i$: Target ratio between the inbound bandwidth and the outbound bandwidth
$T_1$, $T_2$: Upper limit and lower limit of the cycle length
$z$: Reciprocal of the cycle length
$r_i$ ($\bar{r}_i$): Red time for the through traffic flows in the outbound (inbound) direction at $I_i$
$l_i$ ($\bar{l}_i$): Green time for the left-turn traffic flows in the outbound (inbound) direction at $I_i$
$t_{i,i+1}$ ($\bar{t}_{i,i+1}$): Travel time from $I_i$ ($I_{i+1}$) to $I_{i+1}$ ($I_i$) in the outbound (inbound) direction
$\tau_i$ ($\bar{\tau}_i$): Queue clearance time at $I_i$ in the outbound (inbound) direction
$d_i$ ($\bar{d}_i$): Distance between $I_i$ ($I_{i+1}$) and $I_{i+1}$ ($I_i$) in the outbound (inbound) direction
$e_i$, $f_i$ ($\bar{e}_i$, $\bar{f}_i$): Upper limit and lower limit of the travel speed in the outbound (inbound) direction
$g_i$, $h_i$ ($\bar{g}_i$, $\bar{h}_i$): Upper limit and lower limit of the travel speed change in the outbound (inbound) direction
$\delta_{i+1}$ ($\gamma_{i+1}$): 0/1 variables, depending on the left-turn phase patterns
$\Delta_i$: Time difference from the centre of $\bar{r}_i$ to the nearest centre of $r_i$
Figure 1: Comparison of the coordinated traffic flows.
Figure 4: Time-space diagram for Pattern I.
Figure 5: Distribution of the intersections on the arterial.
Figure 6: Time-space diagram generated by the MULTIBAND model.
Figure 7: Time-space diagram generated by the proposed general model.
Figure 8: MOE comparison between the proposed general model and the MULTIBAND model at Level 1.
Table 2: Traffic flow volumes at each intersection on the arterial (veh/h).
Table 3: Distribution ratio of phase green time for each intersection. | 2018-12-29T15:04:35.865Z | 2018-02-21T00:00:00.000 | {
"year": 2018,
"sha1": "a713df53171f332df0a68f32a0f8b99d88ab1228",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/mpe/2018/2453246.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "a713df53171f332df0a68f32a0f8b99d88ab1228",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
8728767 | pes2o/s2orc | v3-fos-license | Post-Transplant Immunosuppression: Regulation of the Efflux of Allospecific Effector T Cells from Lymphoid Tissues
A functional sphingosine-1-phosphate (S1P) receptor antagonist specifically inhibited the egress of activated allospecific T cells from draining popliteal lymph nodes in alloantigen-sensitised mice. The level of S1P receptor 1 (S1PR1) mRNA was similarly reduced 1 and 3 days after mitogenic activation of T cells. However, the response of these cells to the S1PR1-specific agonist SEW2871 was only reduced on the first day after T cell activation with normal receptor-mediated Akt-phosphorylation restored by day 3. Longitudinal analysis of CD69 expression showed that almost all T cells expressed this antigen on days 1 and 3 after activation. However, the absolute level of cell-surface expression of CD69 peaked on undivided T cells and was then halved by each of the first 3 cycles of mitosis. CD69-specific small interfering RNA (siRNA) reduced the maximal level of CD69 expression by undivided, mitogen-stimulated T cells. These cells retained their capacity to phosphorylate Akt in response to stimulation with SEW2871. These data show that S1P receptors are involved in controlling the egress of activated T cells from lymph nodes, and that S1PR1 function is regulated by the level of T cell surface CD69. They suggest a potential for augmentation of this process to deplete alloreactive effector cells after organ transplantation.
Introduction
Immunosuppression produced by the inhibition of calcineurin has greatly enhanced the success of allogeneic organ transplantation. However, long-term administration of calcineurin inhibitors can cause a range of morbidities. For this reason there is a continuing pressure to develop new immunosuppressive drugs which function through different pathways. One novel strategy for the induction of immunosuppression involves inhibition of the efflux of activated T cells from the lymphoid tissues [1].
Under normal conditions, naïve T cells recirculate continually through blood and lymphatic tissues. Homeostatic chemokines, principally CCL19, CCL21 and CXCL12, drive entry into lymph nodes by promoting firm adhesion of T cells to high endothelial venules (HEV) followed by endothelial diapedesis [2]. These T cells remain in normal lymph nodes for between 6 and 24 hours before exiting via the cortical sinuses [3,4]. This egress is driven by a positive concentration gradient of sphingosine-1-phosphate (S1P) from the lymph node to lymph, which stimulates the T cell-surface receptor S1PR1 [5,6]. This model of T cell egress is supported by the action of the drug FTY720, which disrupts lymphocyte recirculation by inhibiting the normal response to S1P by binding S1PR1. This drug-receptor complex is then internalized and targeted for ubiquitination and degradation rather than recycling to the cell surface [7][8][9].
The drug FTY720, which is phosphorylated to FTY720-P in vivo, was found to prevent allograft rejection as effectively as calcineurin inhibitors [10,11]. However, clinical trials were terminated as a consequence of adverse effects which, most significantly, included bradycardia. More specific analogues of FTY720 include AUY954 [1] and KRP-203, which is known to extend cardiac allograft survival in rodents [12]. Although new clinical trials are planned in transplantation, it is clear that our fundamental knowledge of the regulation of the efflux of activated T cells from lymphoid tissues remains incomplete.
At the start of an adaptive immune response, only a small proportion of the T cells retained within a particular node is primed by TCR-specific interaction with antigen-presenting cells bearing immunogenic MHC antigen-peptide complexes. These T cells are triggered to proliferate and differentiate, generating effector and memory cells. Evidence suggests that the first effector T cells move into the circulation 3 days after the initial priming event [5]. This correlates with the period required for reacquisition of the responsiveness to S1PR1 stimulation of activated T cells within the node [5].
It has been suggested that an activation-induced increase in the expression of S1PR1 and decrease in CCR7 are the most important factors resulting in the reacquisition of S1PR1 signalling and migration of effector T cells from lymph nodes [6]. However, this model does not include the potential involvement of CD69.
CD69 is a type II transmembrane protein of the C-type lectin family [13] and exists in its mature form as a disulphide-linked dimer [14]. For a long time it was known simply as a marker that is upregulated on the surface of T cells within hours of cell activation [15,16]. More recently, however, data from several studies have indicated an important role for the protein in the regulation of immune cell trafficking. Transgenic overexpression studies showed that CD69 inhibited thymocyte egress from the thymus [17], knockout of CD69 was shown to prevent lymph node shutdown [18], and adoptive transfer experiments with CD69-negative cells showed it was involved in the relocation of memory T cells into the bone marrow compartment [19]. Immunoprecipitation and crosslinking-reporter assays showing direct interaction of CD69 with S1PR1 indicate a likely mechanism behind these phenomena: CD69 can form a complex with S1PR1 in the plasma membrane, which leads to receptor internalization and degradation [18,20].
The magnitude and dynamics of CD69 expression by T cells vary depending on the stimulus applied. For example, treatment for 1 h with type 1 IFNs induces low-level expression of cell-surface CD69 [18,21], whilst mitogenic T cell activation induces gene transcription leading to high-level CD69 expression [16,22]. These differences could be important for the control of S1PR1 signalling and the regulation of the egress of T cells from lymph nodes.
The current study was designed to define the roles played by the regulation of S1P-mediated signalling in controlling the egress of alloantigen-activated T cells from lymph nodes. An initial series of experiments was performed in vivo to demonstrate the potential of FTY720 to prevent the normal export of activated, alloantigen-specific T cells from reactive murine lymph nodes. Further experiments were performed in vitro to validate the relationship between the level of human T cell-surface CD69 expression and the response of S1PR1. S1PR1 function was tested using SEW2871 as an agonist, as this molecule has been shown to be highly selective for that S1P receptor. GTPγS binding assays using S1PR transfectants showed strong binding and signalling activity of SEW2871 through S1PR1, but a complete lack of activity of the ligand on S1PR2, 3, 4, and 5 at a concentration of 10 µM [23].
Materials and Methods
Reagents
FTY720 (S)-phosphate (FTY720-P) was purchased from Cambridge Bioscience (Cambridge, UK). It was dissolved in ethanol at 1 mg/ml for storage at −20 °C. On the day of use, 1 mg/ml FTY720-P was diluted to 100 µg/ml in sterile water with 2% β-cyclodextrin (Sigma-Aldrich; Poole, UK) and then mixed thoroughly. SEW2871 was purchased from Enzo Life Sciences (Exeter, UK).
Animals and Procedures
Female BALB/c (H-2d) and C57BL/6 (H-2b) mice (6-8 weeks old; Charles River, Margate, UK) were maintained under pathogen-free conditions. All procedures were performed in accordance with UK Home Office and EU Institutional Guidelines and within the parameters of current personal and project licences.
C57BL/6 mouse footpad injections (on day zero) were unilateral, subcutaneous, and comprised 5×10^6 BALB/c splenocytes suspended in 25 µl RPMI 1640 medium (Sigma-Aldrich). Between days 2 and 5, some mice were injected daily, intraperitoneally, with 100 µl of 100 µg/ml FTY720-P or an equal volume of drug vehicle. The mice were killed on day six and the popliteal lymph nodes draining the injected footpads removed. Cell suspensions were prepared from the nodes by pressing the tissue through 70 µm cell strainers (BD Biosciences; Oxford, UK) into RPMI 1640 medium. Popliteal lymph node-derived cells were washed twice with RPMI 1640 medium before further use.
BALB/c splenocytes were prepared as follows. Spleens were mechanically disrupted then the tissue forced through cell strainers into RPMI 1640 medium. The cells were purified by density centrifugation (Histopaque-1083; Sigma-Aldrich).
Enzyme-linked Immunosorbent Spot (ELISPOT) Assay
Ninety-six-well format Immobilon MultiScreen-P plates (Millipore; Watford, UK) were coated with IFN-γ capture antibody [clone AN18] (Mabtech; SE-131 28 Nacka Strand, Sweden) diluted in carbonate-bicarbonate buffer (Sigma-Aldrich) overnight at 4 °C. These plates were washed twice with PBS and blocked with RPMI 1640 medium supplemented with 10% FBS, 100 U/ml penicillin, and 0.1 mg/ml streptomycin (all Sigma-Aldrich) for 1 h at room temperature. Mixed leukocyte reaction assays were then performed by mixing, in each well, 1×10^4 C57BL/6 popliteal lymph node-derived cells with 2×10^5 BALB/c splenocytes in a total volume of 200 µl RPMI 1640 culture medium containing 50 µM 2-mercaptoethanol (Sigma-Aldrich). After incubation for 18 h at 37 °C, the cells were discarded and the plates washed six times with PBS. A biotinylated IFN-γ detection antibody [clone R4-6A2] (Mabtech), diluted in PBS, was applied overnight at 4 °C. The plates were washed six times with PBS. The streptavidin-alkaline phosphatase conjugate (Mabtech) was diluted in PBS and applied for 1 h at room temperature. The plates were washed another six times with PBS, and 50 µl of BCIP/NBT liquid substrate system for membranes (Sigma-Aldrich) was added to each well. After development for 5 min at room temperature, the reaction was stopped by removal of the substrate and washing with tap water. Developed spots were enumerated using an ELISPOT reader (AiD; Strassberg, Germany).
Cell Isolation, Culture and Activation
Total CD3+ T cells were derived from human peripheral blood or platelet-depleted leukocyte cones (NHS Blood and Transplant Service, UK). They were isolated using an erythrocyte-rosetting negative-selection kit (StemCell Technologies; Grenoble, France) and cell separation by centrifugation across Lympholyte H (Cedarlane Laboratories; Ontario, Canada). T cells were cultured in X-VIVO 15 medium (Lonza; Slough, UK) and activated with Dynabeads Human T-Activator CD3/CD28 (Life Technologies; Paisley, UK), at a ratio of one bead per cell and a starting culture density of 1×10^6 cells per ml.
Antibodies, Cell Labelling and Flow Cytometry
For analysis of mouse cells, the antibodies were CD3 [clone KT3] (AbD Serotec) and CXCR3 [clone 220803] (R&D Systems). The antibodies used for human cells were S1PR1 [clone 218713] (R&D Systems) and CD69 [clone FN50] (BD Biosciences). In some cases, T cells were labelled by incubation with 1 µM CFSE-DA (carboxyfluorescein diacetate N-succinimidyl ester; Sigma-Aldrich) for 5 min at 37 °C in PBS + 0.1% (v/v) FBS. They were then chilled rapidly on ice and washed with cold RPMI 1640 before use. Data were acquired on a FACSCanto II instrument (BD Biosciences) and analysis performed using FACSDiva 6.1.3 (BD Biosciences) and FlowJo 7.6 (Treestar; Ashland, Oregon, USA) software.
Real Time PCR
RNA was prepared from cell pellets using TRI Reagent (Sigma-Aldrich). cDNA was synthesised from approximately 1 µg RNA per sample using SuperScript III Reverse Transcriptase and random hexamers as primers (Life Technologies). Expression of S1PR1 was determined semi-quantitatively using a specific TaqMan Gene Expression assay (Hs00173499_m1), with 18S ribosomal RNA as the reference (Hs99999901_s1; both Applied Biosystems, Life Technologies; Paisley, UK). The amplifications were run on an ABI Prism 7000 instrument (Applied Biosystems). For western blotting, the secondary antibody was horseradish peroxidase-conjugated anti-rabbit IgG (Sigma-Aldrich).
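The paper does not state the exact quantification model used for the semi-quantitative analysis. As an illustration only, the widely used comparative Ct (2^-ΔΔCt) approach for a target gene normalized to a reference gene can be computed as follows; all Ct values below are hypothetical:

```python
def relative_expression(ct_target: float, ct_ref: float,
                        ct_target_cal: float, ct_ref_cal: float) -> float:
    """Comparative Ct: fold change of the target gene, normalized to the
    reference gene and expressed relative to a calibrator sample."""
    d_ct = ct_target - ct_ref              # normalize to 18S in the sample
    d_ct_cal = ct_target_cal - ct_ref_cal  # normalize in the calibrator
    return 2.0 ** -(d_ct - d_ct_cal)

# e.g., activated vs resting T cells (hypothetical Ct values):
print(relative_expression(26.5, 12.0, 24.8, 12.1))  # <1 -> reduced S1PR1
```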
Protein Assays
Phospho(S473)-Akt and total Akt were also detected using a cell-based ELISA (R&D Systems). Approximately 10,000 T cells were added to each well of a 96-well format plate (previously coated with 10 µg/ml poly-lysine (Sigma-Aldrich) for 30 min) and left to attach for 1 h. The cells were treated for the desired length of time and then fixed with 8% formaldehyde. Phospho(S473)-Akt and total Akt were detected by double immunoenzymatic labelling. The relative quantities of the proteins were deduced from the intensities of spectrally distinct fluorescences associated with each target. The detector was a Dynex MFX instrument (Worthing, UK) operating with excitation and emission wavelengths of 540 nm and 600 nm for phospho(S473)-Akt, and 360 nm and 450 nm for total Akt.
Gene Knockdown
Resting T cells were transfected with 100 pmol siRNA per million cells by electroporation (Nucleofector I instrument; Lonza). For CD69 knockdown, an equimolar mixture of three 21-nucleotide duplexes (MISSION siRNA; Sigma-Aldrich) was used. The single-strand sequences were:
5′-GAGUUAGAUGUUGGUACUA-3′
5′-CUACUCUUGCUGUCAUUGA-3′
5′-CUCUCAUUGCCUUAUCAGU-3′
Separately, as a control, cells were transfected with an equal mass of MISSION siRNA Universal Negative Control 1 (Sigma-Aldrich).
After transfection, the T cells were rested for about 5 h in X-VIVO 15 before addition of Dynabeads Human T Activator CD3/CD28 at a ratio of one bead per cell. 24 hours later the cells were analysed directly for CD69 expression or viable cells were sorted using a FACS Aria II (BD Biosciences) instrument for phospho-protein analysis.
Statistics
Prism 4.0c (GraphPad Software; La Jolla, California, USA) was used for statistical analyses. Comparisons between groups were made using Student's t-test for unpaired data. P values ≤0.05 were considered significant.
The Egress of Alloactivated T cells from Lymph Nodes is Dependent on Intact S1P Receptor Signalling
An initial series of experiments demonstrated that unilateral footpad sensitisation of C57BL/6 mice with BALB/c splenocytes produced a difference (P<0.01) in popliteal node cellularity after 6 days, with a mean of 2.01×10^6 cells (s.e.m. 0.17×10^6) recovered from the sensitised side and 4.88×10^5 (s.e.m. 2.92×10^5) from the uninvolved contralateral node (data not shown). Efficacy of the drug FTY720-P in our animal model was tested by measuring the effect of its administration on the frequency of peripheral blood CD3+ cells. Daily dosing over two days resulted in a greater than 90% depletion of CD3+ cells from peripheral blood (Fig 1A and B). A second series of experiments showed that the overall yield of cells from the reactive popliteal node of sensitised animals was not changed from control values by daily treatment with FTY720-P (P>0.05; Fig 1C).
Figure 3. The capacity of S1PR1 to signal in T cells before and after cell activation. T cells were either left at rest or stimulated with CD3/CD28 beads for 1, 3 or 6 days. (A) The cells were then treated for 10 minutes with either 10 µM SEW2871 or vehicle before being lysed. Akt phosphorylated at residue serine 473 (pAkt) and total Akt were detected in cell lysates by western blotting. Blots are representative of those from three independent experiments. (B) Cell-based ELISA was used to quantify the relative amounts of pAkt and Akt in each cell population before and after cell stimulation with SEW2871. The relative amounts of pAkt in the samples were calculated by normalising the fluorescent signal value corresponding to pAkt to that of total Akt in each experimental well. Graph shows means ± s.e.m., n = 3 for each time point. doi:10.1371/journal.pone.0045548.g003
Measurement by ELISPOT of the frequency of IFN-γ-producing, alloreactive T cells in the population recovered from reactive popliteal lymph nodes showed a difference (Fig 1D; P<0.01) between animals treated with the drug vehicle and FTY720-P, with a mean 1.9-fold enrichment of these cells in nodes from the drug-treated animals. Analysis of the expression of the T cell activation-associated chemokine receptor CXCR3 in the popliteal node-derived cell population (Fig 1E) also showed a mean 1.9-fold enrichment of these cells in drug-treated animals (Fig 1F; P<0.05).
T cell Activation is Associated with a Transient Loss in S1PR1 Signalling Capacity
Longitudinal analysis of the expression of the S1PR1 gene following T cell activation by stimulation of CD3 and CD28 (Fig 2) showed a marked reduction in transcription after 24 h; a similarly reduced level of mRNA encoding this receptor was observed 3 days after T cell activation.
To test for the presence of functional S1PR1 at the cell surface, T cells were stimulated with the specific agonist SEW2871 and activation of the downstream signalling component Akt was measured. When resting cells were stimulated with SEW2871, there was rapid formation of the phosphoserine-473 derivative of Akt (pAkt) (Fig 3A). Stimulation of T cells with this agonist 24 h after T cell activation did not increase the level of pAkt. However, an increased level of pAkt was observed when T cells were stimulated with SEW2871 on either day 3 or day 6 after mitogenic activation. Additionally, a cell-based ELISA was used to quantify the relative amounts of pAkt and Akt in each cell population before and after cell stimulation with SEW2871. Data from three independent donors (Fig 3B) show that the transient reduction in S1PR1-mediated intracellular signalling was only apparent 1 day after T cell activation.
Analysis of CD69 Expression Following T cell Activation
Flow cytometric analysis of human T cell-surface expression of CD69 showed that almost no resting cells expressed this antigen (Fig 4A). However, almost all of these cells expressed CD69 when analysed 1 and 3 days after mitogenic T cell activation (Fig 4A). Although the T cells analysed on days 1 and 3 after activation were uniformly positive for CD69 expression, quantification of the median level of antigen expression per cell demonstrated a marked reduction between days 1 and 3 after T cell activation (Fig 4B).
The rate of loss of CD69 from the surface of activated T cells was analysed by fractionating the dividing cells on the basis of CFSE dilution (Fig 5A). This experiment showed that none of the cells had divided on day 1, maintaining maximum CD69 expression. However, the overall level of CD69 expression observed on day 3, for example, was the sum of separate levels expressed by T cell subpopulations which had divided 0, 1, 2 or 3 times (Fig 5B). On both days 3 and 6 after activation, analysis of the rate of decrease of CD69 expression between cell cycles 0 and 3 showed a similar half-life of 1.15 and 0.97 mitotic division cycles respectively (Fig 5C).
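As an illustration of how such a per-division half-life can be computed (the medians below are hypothetical, not the study's data), a log2-linear fit of median CD69 intensity against division number yields the half-life in units of mitotic cycles:

```python
import numpy as np

# Hypothetical median CD69 fluorescence for cells that have completed
# 0, 1, 2, or 3 divisions (a perfect halving would give slope -1).
divisions = np.array([0, 1, 2, 3])
median_cd69 = np.array([980.0, 510.0, 240.0, 130.0])

slope, intercept = np.polyfit(divisions, np.log2(median_cd69), 1)
half_life = -1.0 / slope  # in mitotic division cycles
print(f"CD69 half-life: {half_life:.2f} divisions")
```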
Loss of T cell Expression of CD69 Correlates with Increased S1PR1 Signalling
A series of experiments was performed to demonstrate the potential to reduce CD69 expression by transfection of resting human T cells with sequence-specific siRNA prior to mitogenic activation. After gating to exclude the T cells damaged during transfection, the percentage of cells expressing CD69 1 day after mitogenic activation (Fig 6) was reduced from 35.6% (control siRNA) to 13.6% (CD69-specific siRNA); the corresponding median fluorescence values were 24.1 and 11.4, respectively.
A series of three separate experiments was then performed in which viable and non-viable T cells were separated after transfection by fluorescence-activated cell sorting. These cells were then activated by stimulation of CD3 and CD28 for 1 day, and the pAkt/total Akt ratio was quantified after stimulation with the specific S1PR1 agonist SEW2871. This study demonstrated that treatment with the CD69-specific siRNA sequence markedly increased the pAkt signal generated by stimulation of S1PR1 (P<0.001; Fig 7).
Discussion
The current model for regulation of the egress of T cells from lymphoid tissue has two facets: the situation in normal circumstances and that during non-specific lymph node inflammation. The current study extends this by using in vivo and in vitro methods to examine how the exit of antigen-activated T cells from lymph nodes is controlled.
Published work has shown that the S1P agonist FTY720 can block the routine efflux of lymphocytes from lymphoid tissue during homeostasis. However, there has been some debate as to the extent to which the efflux of activated T cells from lymph nodes is S1P-dependent. Habicht et al. adoptively transferred TCR-transgenic, allospecific T cells along with an allograft and showed that effector memory T cells are trapped in regional lymphoid tissues by FTY720 treatment [24]. It has been known for some time that whereas FTY720 treatment efficiently depletes naïve T cells from peripheral blood, antigen-experienced cells are not affected [25]. Furthermore, using an in vivo model of contact hypersensitivity, Nakashima et al. showed that FTY720 did not significantly suppress the delayed-type hypersensitivity reaction if administered during the afferent phase [26]. An in vivo model of alloimmunity was used in the current study to determine whether or not the egress of activated T cells from lymph nodes is dependent on S1P receptor signalling. The functional S1P receptor antagonist FTY720-P was administered two days after sensitisation with allogeneic cells in the footpad to minimise unwanted effects on immune priming. This treatment did not alter the total number of cells within the draining popliteal node but led to specific accumulation of the subpopulations of alloreactive and CXCR3-expressing T cells, suggesting that the ability of activated T cells to exit the reactive node is S1P-dependent.
Previous work has shown that it is signalling of S1PR1, and not that of other S1P receptors, that overrides lymph node retention signals to allow T cell exit from lymph nodes [1]. Regulation of this receptor following T cell activation was therefore the focus of the study. The observation that S1PR1 gene expression was decreased by mitogenic activation of human T cells is consistent with a previous demonstration that S1PR1 expression is reduced in mouse T cells following TCR activation [27]. Indeed, it is known that stimulation through the TCR suppresses expression of the transcription factor KLF2, which is a positive regulator of S1PR1 transcription [28]. It is interesting that the magnitude of the signalling response to stimulation of S1PR1 on three-day activated T cells was comparable to that of resting cells, even though the gene expression was approximately three-fold lower in the activated cells. Because of the lack of a monoclonal antibody which can detect cell-surface S1PR1, the relative amounts of protein on resting, one-day, and three-day activated cells could not be directly compared. It also ruled out the use of techniques such as fluorescence resonance energy transfer (FRET) to study directly protein-protein interactions with S1PR1. Nevertheless, it is possible to infer from our data that control of S1PR1 gene expression has only a minor role in the regulation of receptor-mediated intracellular signalling following T cell activation.
The expression of CD69 on the T cell surface was at a maximal level 24 h after mitogenic activation. At this time the potential for agonist-induced S1PR1 signal transduction to elicit phosphorylation of Akt was completely inhibited. Although most of these T cells still showed positive cell-surface CD69 after 3 days, the median level of CD69 expression was greatly reduced. At this time the cells showed an intracellular signalling response following stimulation with the S1PR1-specific agonist SEW2871 which was similar to that of resting T cells. This is consistent with a report showing that T cells which have been activated in vivo re-acquire at least some responsiveness to S1P by day 3 [5].
Cell cycle analysis of changes in the cell-surface expression of CD69 by activated T cells showed that the level of this antigen decreased by almost exactly one half during each of the first 3 cell divisions. This is consistent with there being little neosynthesis of CD69 after the first mitotic division, with the existing protein dividing equally between the daughter T cells. A previous study supports this model by demonstrating the mRNA encoding CD69 is only detectable during the first 24 hours after T cell activation [16]. The current study showed no evidence of T cell proliferation during this period.
The dilution of CD69 protein during T cell mitosis provides a potential control mechanism for S1PR1 signalling and is consistent with an established model [29] in which the division history of an individual T cell is a major determinant of its fate. Those T cells that have a short division history would be least differentiated but, by expressing a high level of CD69 protein, would be inhibited from exiting the lymph node. In contrast, T cells that have divided several times would become more differentiated. These cells would express low levels of CD69 allowing the egress of short-lived effector T cells from the lymph nodes.
The potential to regulate S1PR1 activity directly by reduction of the level of CD69 protein expression was examined by transfection of T cells with specific siRNA sequences prior to mitogenic activation. The act of transfection of T cells with either control or CD69-specific siRNA sequences non-specifically prevented a subpopulation of these cells from upregulating CD69 expression following T cell activation. Nevertheless, stimulation of control and experimental T cell transfectants with SEW2871 after mitogenic activation for 24 h revealed a highly significant difference. In common with non-transfected cells, those T cells which had received a control siRNA sequence showed no significant accumulation of pAkt following treatment with the specific S1PR1 agonist. In contrast, T cells which had been transfected with the specific siRNA sequence showed a marked accumulation of pAkt following stimulation with the agonist. These data suggest that the re-acquisition of responsiveness to S1PR1 stimulation observed 3 days after T cell activation is a consequence of the decrease in CD69 expression produced by between 1 and 3 cycles of mitosis.
The existence of a CD69-mediated mechanism to control S1PR1 signalling, and therefore by implication to inhibit normal T cell efflux from lymph nodes, supports the potential for therapeutic reinforcement of this natural process in order to produce the depletion of activated allospecific T cells from the circulation of transplant patients. This provides further impetus for clinical development of novel immunosuppressive agents which inhibit the S1P receptor system. | 2017-04-13T14:43:10.182Z | 2012-09-18T00:00:00.000 | {
"year": 2012,
"sha1": "ca389c6e2082364336fc016a560855948895970a",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0045548&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2c594137e9bb672e2bd499fe50f864d284f4eea4",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
257200277 | pes2o/s2orc | v3-fos-license | Estimation of Ground Contact Time with Inertial Sensors from the Upper Arm and the Upper Back
Ground contact time (GCT) is one of the most relevant factors when assessing running performance in sports practice. In recent years, inertial measurement units (IMUs) have been widely used to automatically evaluate GCT, since they can be used in field conditions and are user-friendly, easy-to-wear devices. In this paper we describe the results of a systematic search, using the Web of Science, to assess what reliable options are available for GCT estimation using inertial sensors. Our analysis reveals that estimation of GCT from the upper body (upper back and upper arm) has rarely been addressed. Proper estimation of GCT from these locations could permit an extension of the analysis of running performance to the public, where users, especially vocational runners, usually wear pockets that are ideal to hold sensing devices fitted with inertial sensors (or even using their own cell phones for that purpose). Therefore, in the second part of the paper, an experimental study is described. Six subjects, both amateur and semi-elite runners, were recruited for the experiments, and ran on a treadmill at different paces to estimate GCT from inertial sensors placed at the foot (for validation purposes), the upper arm, and upper back. Initial and final foot contact events were identified in these signals to estimate the GCT per step, and compared to times estimated from an optical MOCAP (Optitrack), used as the ground truth. We found an average error in GCT estimation of 0.01 s in absolute value using the foot and the upper back IMU, and of 0.05 s using the upper arm IMU. Limits of agreement (LoA, 1.96 times the standard deviation) were [−0.01 s, 0.04 s], [−0.04 s, 0.02 s], and [0.0 s, 0.1 s] using the sensors on the foot, the upper back, and the upper arm, respectively.
Introduction
Ground contact time (GCT) is defined as the amount of time a runner is in contact with the ground for each step (from initial foot contact to final foot contact). GCT has been addressed as one of the most influential biomechanical factors that affect running economy [1,2].
The use of inertial measurement units (IMUs) for the measurement and evaluation of sports or human motion has been an established technique for a long time [3]. The possibility of using these sensors outside of the laboratory environment has made them interesting for application in localization [4], occupational health and safety [5], or pathological gait analysis [6], among others. This has led, in the past decade, to an exploration of the possibility of estimating GCT from IMUs attached to different parts of an athlete's body.
The upper body is, from a practical point of view, an ideal place for wearable sensor placement. In fact, runners, especially vocational runners, mainly motivated by the necessity of wearing their cellular phones during sports practice, usually wear pockets attached to the upper back or to the upper arm that are ideal to hold sensing devices. Therefore, a correct estimation of GCT from such places could permit an extension of the analysis of the performance of their running to the public, even allowing them to do this from their own cellular phone, that is in general provided with inertial sensors, and could be easily provided with an app to record the sampled signals and even to estimate GCT. Therefore, in this work we conduct an experimental study to estimate the GCT from an inertial sensor located at the upper back and the upper arm. We include advanced (semi-elite) and amateur runners (six in total), running on a treadmill at different rhythms. We compare the performance between our estimations and those obtained from estimations based on a ground truth optical motion capture system (Optitrack). For assessment purposes, we also address the estimation of the GCT from an IMU placed at the foot, using a validated method proposed in the state of the art [7].
The paper is organized as follows. In Section 2 we describe the design, execution, and results of a systematic search based on the Web of Science, to select and analyze the most relevant works for GCT estimation from body-worn IMUs. Sections 3 and 4 describe the experimental methods and results. The paper is finished with the discussion and conclusions in Sections 5 and 6.
State of the Art
A systematic search was run using the Web of Science database on the 4th of February 2022. The following sentence was used for that purpose:
TS = ( ( ( (foot OR initial OR terminal OR ground) NEAR/3 (contact) ) OR ("toe off") OR (gait NEAR/3 event*) ) AND (acceleromet* or inertial or gyroscop* or IMU) AND (run*) ) AND (DT==("ARTICLE")))
The inclusion criteria for the prior screening of the references were:
• Conference papers were discarded;
• The primary objective of the study had to be the timing of step events or ground contact time as a summarized result. Works with a different primary objective were not considered, even though event detection was addressed for that purpose. This consideration was applied sequentially over the title, abstract, and the whole paper, to screen the recorded papers.
After this preliminary screening, 11 papers were selected for a more exhaustive screening. From these, two papers were later discarded, as they only addressed identification of the initial contact event [16,17]. Figure 2 summarizes the process using the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-analyses) flow diagram.
After the reference screening, two researchers independently analyzed each reference, and findings were discussed later. The reading was focused on:
• The identification of the location of the IMU on the body;
• The identification of the participants in the experiments (number of people, running experience, gender, velocities, and duration);
• The identification of the performance of the method described to estimate the GCT with respect to the gold standard method used for validation of the results (accuracy, precision, or similar performance metrics).
Two papers were removed in this phase [10,13]. While they addressed the identification of initial and final contact events, they did not make a paired analysis that could permit the calculation of the GCT. Table 1 contains the information extracted from the selected references. For outdoor experiments, optical [14], photoelectric bar [12], and force platform [9] systems were used as the gold standard. Indoor experiments (four of the seven) used an optical MOCAP system [11] or an instrumented treadmill as the gold standard. Elite participants were considered in only two of the seven studies. In the rest, recreational runners, mixing male and female participants, were involved in the experiments. We include in Table 1 the number of runners for each paper that were used for the GCT estimation and whose results were validated against the gold standard. This refers to the 25 subjects defined as the validation set in [7], and the 14 subjects that were validated against the gold standard in [8]. With regard to [14], experiments were performed with amateur and elite runners; we include results from the elite athletes that were used to test the estimation algorithms. For the rest of the analyzed references, all the subjects involved in the experiments were considered.
In these studies, recreational runners ran at recreational running velocities, except in [7], where higher running rhythms, up to 5.56 m/s (20 km/h, 3 min/km), were considered.
GCT estimation performance is reported in the different works, using different metrics. For our study, we decided to interpret the information reported in the different papers using a generic measure of accuracy and precision. To interpret accuracy and precision values (those included in Table 1), we adopted the following decisions:
• Positive values for the accuracy were used to indicate that GCT estimations from the IMU are higher than GCT estimations from the ground truth (negative values were used in the opposite case);
• In [7], the central tendency and dispersion of estimation errors are indicated, respectively, using (i) the median of the mean error, and (ii) the median of the standard deviation, between the IMU- and the force-platform-based GCT estimations in the different trials (person/speed). We have taken these values as being representative of the accuracy and precision of estimations;
• Accuracy and precision were compiled from [8,12] and [14] using Bland-Altman plots between the accelerometer-based and the gold-standard-based identification of GCT. We used the offset as accuracy. Variability is reported in these works in terms of 95% LoA (we have interpreted this as 1.96 times the standard deviation unless a different value was specified in the paper); the values included in our table refer to the standard deviations and are calculated from them. In [14], the included values were identified from a figure, so a small error may exist in the values included in the table;
• The authors of [9] directly reported the mean and standard deviation of the error between each IMU-based method and the ground truth;
• The authors of [15] reported the average accuracy and LoA for errors from estimations compared to a force platform at the different velocities. We include in the table the median of these values, as representative of the accuracy and precision of the method. Similarly, [11] reports average accuracy and precision values for errors from estimations compared to a force platform at the different velocities; we include in the table the median of these values as representative values.
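For clarity, the short helper below (variable names ours; the usage is hypothetical) shows how the accuracy and precision columns relate to a Bland-Altman analysis of paired GCT estimates:

```python
import numpy as np

def bland_altman(gct_imu: np.ndarray, gct_gold: np.ndarray):
    """Bias, SD, and 95% limits of agreement for paired GCT estimates."""
    diff = gct_imu - gct_gold        # positive -> IMU overestimates GCT
    bias = float(diff.mean())        # reported in the table as "accuracy"
    sd = float(diff.std(ddof=1))     # reported in the table as "precision"
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)
    return bias, sd, loa
```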
Experiments
Experiments were conducted on a treadmill. Two rounds of experiments were developed (see Figure 3 for a graphical description of the experimental procedures). A preliminary experimental round, involving three volunteer adults recruited from the scientific team (one female, two male, age 41 ± 14.52 years, weight 75.33 ± 8.39 kg, height 175 ± 8.66 cm), was used to define the estimation methods. Subjects ran at a comfortable running pace, and camera and IMU signals were collected and matched to define the estimation methods from the IMUs.
The validation experiment involved six healthy adults from the University of Oviedo athletics team (two female, four male, age 37 ± 12.38 years, weight 60.17 ± 8.84 kg, height 168.67 ± 10.19 cm) without any symptomatic musculoskeletal injuries. Two of them were semi-elite runners. Written informed consent was obtained in advance from all the participants. Runners were told to warm up at their desired running pace for two minutes. After that, they ran at three different rhythms (Z1, Z2, Z3), one minute for each. Velocities corresponding to each rhythm were specified by their coach attending to the personal conditions of each one (see Table 2). Runners were equipped with Xsens DOT V2 inertial sensors (see Figure 4, left). Two of them were firmly attached, using elastic bands, to the upper back and the upper arm of the subjects. An additional sensor was attached to the right foot, close to the third metatarsal, using an adhesive band. The sampling frequency was fixed at 100 Hz. Sensor measurements between the different IMUs were synchronized using the option available in the configuration software provided by the manufacturer, the Xsens DOT app for Android. Sensors were also configured to store the sampled data in their internal memory, to avoid data loss. Data were transferred to a personal computer once the experiments were completed. Three Flex 3 Optitrack (Figure 4, right) cameras were positioned so that the markers on the athlete's right foot were always visible. Reflective markers were placed on the heel and the third metatarsal of the foot (Figure 4, left-bottom), positioned so as not to impede the correct running technique. The four cameras were connected via USB to a personal computer, on which the Optitrack Motive Tracker 2 software was run. An additional Flex 3 camera was used to record the image of the foot for testing purposes.
Synchronization between the IMU system and the Optitrack was performed manually for each runner. The Optitrack system and the IMUs were each started on their own time base. After initiating the data collection in both systems, the athlete remained static for 5 s and then tapped their foot vertically on the treadmill, remaining static for another 5 s. The time of contact of the foot with the floor, visually identified from the corresponding signal peaks in both sensor signals, was used as the initial time, with all other events measured with respect to it.
GCT Estimation from the Optical System
The position signals measured by the MOCAP system were filtered by a bidirectional FIR filter, designed using the window method, with order 6 and a cutoff frequency of 3 Hz.
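A possible implementation of this filtering stage is sketched below in Python with SciPy. The 100 Hz sampling rate is an assumption carried over from the IMU configuration; the paper does not state the camera rate, so the value would need to be adjusted to the actual MOCAP frequency.

```python
import numpy as np
from scipy.signal import firwin, filtfilt

FS = 100.0  # Hz; assumed rate (the IMUs ran at 100 Hz; the MOCAP rate is not stated)

# Order-6 FIR low-pass (7 taps) designed with the window method, 3 Hz cutoff.
FIR_TAPS = firwin(numtaps=7, cutoff=3.0, fs=FS)

def filter_bidirectional(signal: np.ndarray) -> np.ndarray:
    """Zero-phase (forward-backward) FIR filtering, i.e., a bidirectional filter."""
    return filtfilt(FIR_TAPS, [1.0], signal)
```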
The steps were segmented from the vertical marker position signal located on the toe of the foot. Local minima, with a prominence of at least 40 mm, were located on this marker and used as step markers. To detect initial and final contacts, a threshold was placed at each step, corresponding to 20 mm above the recorded minimum of the toe of the foot (we observed that the minimum of the marker at the heel systematically occurred later). The instant at which this value was reached was identified as IC, while the instant at which this height was exceeded again was identified as FC.
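The segmentation and thresholding rules just described might look as follows; this is a sketch under the stated units (marker height in millimetres) and assumes the signal has already been low-pass filtered as above.

```python
import numpy as np
from scipy.signal import find_peaks

def detect_ic_fc_from_marker(z_mm: np.ndarray) -> list[tuple[int, int]]:
    """(IC, FC) sample indices from the filtered vertical toe-marker position."""
    # Local minima with at least 40 mm prominence act as step markers.
    step_minima, _ = find_peaks(-z_mm, prominence=40.0)
    events = []
    for m in step_minima:
        thr = z_mm[m] + 20.0   # threshold: 20 mm above the step's recorded minimum
        ic = m                 # walk back to the descending threshold crossing
        while ic > 0 and z_mm[ic - 1] <= thr:
            ic -= 1
        fc = m                 # walk forward to where the threshold is exceeded again
        while fc < len(z_mm) - 1 and z_mm[fc] <= thr:
            fc += 1
        events.append((ic, fc))
    return events
```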
GCT Estimation from the Inertial Sensors
Based on previous work [7], the initial and final foot contacts can be detected from the angular velocity of the pitch of a foot-mounted inertial sensor. As reported, three local minima can be identified in each footfall cycle (see Figure 5), the first of which corresponds to the start of contact and the second to the end of contact.
To detect these minima, the angular velocity signal was filtered using the same bidirectional FIR filter used for the camera signals. All local minima and maxima present in this filtered signal were detected. Those where the slew rate was positive were discarded. Maxima were used to segment the signal into single steps.
In each cycle (step) of the filtered signal, three minima appear. The first of the minima present is identified as IC, while the absolute minimum of the cycle is identified as FC.
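A sketch of this foot-sensor rule, reusing the bidirectional FIR filter helper defined earlier; the sign convention of the pitch angular velocity is an assumption and may need flipping for a different sensor mounting.

```python
import numpy as np
from scipy.signal import find_peaks

def detect_ic_fc_from_foot_gyro(pitch_w: np.ndarray) -> list[tuple[int, int]]:
    """(IC, FC) indices from the pitch angular velocity of the foot-mounted IMU."""
    w = filter_bidirectional(pitch_w)          # same FIR filter as the camera signals
    maxima, _ = find_peaks(w)                  # maxima segment the signal into steps
    minima, _ = find_peaks(-w)
    events = []
    for a, b in zip(maxima[:-1], maxima[1:]):  # one footfall cycle per pair of maxima
        cycle = minima[(minima > a) & (minima < b)]
        if len(cycle) == 0:
            continue
        ic = cycle[0]                          # first minimum of the cycle -> IC
        fc = cycle[np.argmin(w[cycle])]        # absolute minimum of the cycle -> FC
        events.append((ic, fc))
    return events
```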
In our training experiments, we found that the initial and final foot contact events can be identified from the minimum and maximum, respectively, of the low-pass filtered resulting acceleration recorded by the inertial sensor mounted on the upper back (see Figure 6). For this purpose, the modulus of the acceleration at each instant of time was calculated. This signal was then filtered with a bidirectional moving-average filter of order 20.
Since the signals vary in amplitude in different cycles, a dynamic threshold was established, which was used to differentiate the absolute maximum of each cycle from the rest of the maxima present. The threshold was taken as the average between the maximum and the minimum value recorded in the last 50 measurements.
Each time a local maximum was encountered, if the acceleration value was higher than the set threshold, the maximum was identified as the final contact of the step.
The initial contact was identified as the local minimum detected just before the FC (see Figure 6).
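Putting the upper-back rules together (acceleration modulus, order-20 bidirectional moving average, 50-sample dynamic threshold, FC at the above-threshold maximum, IC at the preceding minimum) gives roughly the following sketch.

```python
import numpy as np
from scipy.signal import filtfilt, find_peaks

def moving_average_bidirectional(x: np.ndarray, order: int) -> np.ndarray:
    """Forward-backward moving-average filter of the given order."""
    return filtfilt(np.ones(order) / order, [1.0], x)

def detect_ic_fc_from_upper_back(acc_xyz: np.ndarray) -> list[tuple[int, int]]:
    """(IC, FC) indices from an (N, 3) acceleration array of the upper-back IMU."""
    a = moving_average_bidirectional(np.linalg.norm(acc_xyz, axis=1), order=20)
    minima, _ = find_peaks(-a)
    events = []
    for p in find_peaks(a)[0]:
        lo = max(0, p - 50)              # dynamic threshold over the last 50 samples
        thr = 0.5 * (a[lo:p + 1].max() + a[lo:p + 1].min())
        if a[p] <= thr:
            continue                     # not the absolute maximum of this cycle
        prior = minima[minima < p]
        if len(prior):
            events.append((prior[-1], p))  # IC: minimum just before the FC maximum
    return events
```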
Finally, we proposed a method to estimate the GCT from the IMU mounted on the upper arm, estimating the initial contact as the maximum of the jerk in the vertical axis (which corresponds to the inflexion point of the vertical acceleration), and the final contact from the local minima of the derivative of the resulting acceleration (see Figure 7). For this purpose, the accelerations were filtered using a bidirectional moving average filter of order 10.
The step cycle was segmented using the resulting acceleration signal. For this purpose, a dynamic threshold was set as the mean between the maximum and minimum of the last 50 samples, and the steps were segmented considering the maximum of the resulting acceleration exceeding this value.
The initial contact of each step was located as the last maximum of the vertical jerk acceleration signal before the maximum of the resulting acceleration.
The final contact of each step was located as the first minimum of the vertical acceleration after the maximum of the resulting acceleration. In all cases, we sought to achieve a logical sequence of detections, in which each IC was followed by its corresponding FC. If any of the algorithms detected two ICs in a row, or two FCs in a row, the second one was eliminated, to maintain the logical sequence of events.
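The upper-arm procedure and the event-sequence cleanup can be sketched as below, reusing the moving-average helper from the previous sketch; the choice of axis index 2 as "vertical" is an assumption about the sensor frame, and the jerk is approximated with a discrete gradient.

```python
import numpy as np
from scipy.signal import find_peaks

def detect_ic_fc_from_upper_arm(acc_xyz: np.ndarray, vertical_axis: int = 2):
    """(IC, FC) indices from the upper-arm IMU, following the rules above (sketch)."""
    res = moving_average_bidirectional(np.linalg.norm(acc_xyz, axis=1), order=10)
    vert = moving_average_bidirectional(acc_xyz[:, vertical_axis], order=10)
    jerk_maxima, _ = find_peaks(np.gradient(vert))  # candidate ICs: vertical-jerk maxima
    vert_minima, _ = find_peaks(-vert)              # candidate FCs: vertical-accel. minima
    events = []
    for p in find_peaks(res)[0]:
        lo = max(0, p - 50)
        thr = 0.5 * (res[lo:p + 1].max() + res[lo:p + 1].min())
        if res[p] <= thr:
            continue                                 # segment steps on above-threshold maxima
        before = jerk_maxima[jerk_maxima < p]
        after = vert_minima[vert_minima > p]
        if len(before) and len(after):
            events.append((before[-1], after[0]))    # IC before, FC after the resultant maximum
    return events

def enforce_ic_fc_sequence(labeled_events):
    """labeled_events: time-ordered ('IC'|'FC', index) pairs. Drops the second of any
    two consecutive events of the same type, keeping a strict IC -> FC alternation."""
    out = []
    for kind, t in labeled_events:
        if out and out[-1][0] == kind:
            continue
        out.append((kind, t))
    return out
```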
On the other hand, while the MOCAP system and the sensor located on the foot only detected the events corresponding to the right foot, the inertial sensors located above the waist (on the back and on the arm) detected the initial and final contacts of both feet. To make a comparison between the different methods, the events corresponding to the left foot were not considered for further analysis.
Experimental Results
A total of 1722 steps were identified from the cameras, 1708 from the foot-attached IMU, 3421 from the back-attached IMU, and 3414 from the arm-attached IMU. From the arm and back, a much higher number of steps were identified, since the steps corresponding to both the right and left foot were detected, while from the camera and from the foot, only those of the right foot were detected.
To perform step-by-step paired comparisons, the times recorded as the initial contact of the steps identified from the cameras, and from each of the IMUs, were checked. Each of the steps identified from the camera was matched to the step recorded by each of the IMUs closest in time. Steps from the IMUs not matched to steps from the cameras were discarded. The total number of steps matched was 1705 from the foot IMU, and 1720 from each of the arm and back IMUs. Finally, we discarded those steps where either method provided clearly erroneous results, including step times of less than 15 ms or contact times of less than 8 ms. A total of 1673 steps were finally included in the statistical study. In summary, more than 99% of the steps were identified from all the IMUs (99.8% from the back and arm sensors). Likewise, 98.1% of the total number of detected steps presented consistent step times and contact times, and were included in the statistical study. Table 3 shows the GCT (mean ± std) estimated for every subject from the different sensors, and the corresponding step time estimated from the ground truth. An ANOVA analysis revealed that the mean GCT estimated from the cameras was significantly different for each subject and running velocity (p ≈ 0). The test also revealed that the mean estimation errors between the camera and each of the IMUs were significantly different (p ≈ 0) for the different IMUs, subjects, and running velocities, with the exception of estimation errors from the upper arm, where a p-value of 0.066 was found for the influence of the running velocity.
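A minimal sketch of this matching and screening step; the nearest-in-time pairing and the 15 ms/8 ms consistency thresholds follow the description above, with times in seconds.

```python
import numpy as np

def match_steps(camera_ic_times: np.ndarray, imu_ic_times: np.ndarray):
    """Pair each camera-detected step with the IMU step closest in time."""
    return [(t, imu_ic_times[int(np.argmin(np.abs(imu_ic_times - t)))])
            for t in camera_ic_times]

def is_consistent(step_time_s: float, contact_time_s: float) -> bool:
    """Screen out clearly erroneous detections, per the thresholds above."""
    return step_time_s >= 0.015 and contact_time_s >= 0.008
```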
Table 3. Ground contact time in seconds (mean ± std) for the different subjects using the different sensors.
The right-hand column shows aggregated results for the step time identified using the cameras. The table's columns correspond to the IMU on the foot, the IMU on the upper back, the IMU on the upper arm, and the step time.
Figure 8 shows the Bland-Altman plots for the error analysis of the estimations from the cameras compared to each of the accelerometers. The average difference between the estimations from the cameras and those from both the foot and the upper back IMUs is 0.01 s (one sample) in absolute value (a positive bias means that the IMU overestimates the cameras, and a negative difference means that the IMU underestimates the cameras). This average error grows to 0.05 s (five samples) for the difference between the estimation from the cameras and that from the upper-arm-attached IMU. Figure 8 also shows correlation plots that relate the GCT estimated from the cameras and from the different IMUs; a positive relation is found in all cases.
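The quantities behind such a Bland-Altman analysis reduce to a few lines; this sketch returns the bias and the 95% limits of agreement for paired GCT estimates.

```python
import numpy as np

def bland_altman(gct_ref: np.ndarray, gct_imu: np.ndarray):
    """Bias and 95% limits of agreement between reference and IMU GCT estimates (s)."""
    diff = gct_imu - gct_ref          # positive bias: the IMU overestimates the cameras
    bias = float(diff.mean())
    half = 1.96 * float(diff.std(ddof=1))
    return bias, (bias - half, bias + half)
```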
Discussion
A great deal of work has been done in recent years addressing the estimation of GCT from inertial sensors placed on different body segments. To obtain a clear picture of the state of the art, in the initial phase of our study we performed a systematic search in one of the widest and most recognized databases of scientific research (Web of Science), finding the related works. We found that lower body segments have usually been selected for sensor placement. The upper body has rarely been considered for that purpose, with the exception of some sporadic studies that considered a sensor placed on the trunk. This motivated our experimental work, given the potential benefits of estimating GCT from these positions, which may extend the improvement of running technique to the general public, permitting them to do this even from consumer devices such as cellular phones.
To design the experiments, we adopted the usual configuration for indoor studies based on treadmill running. In our case, an instrumented treadmill was not available, so we used a standard treadmill and a gold standard visual MOCAP system, as proposed in previous work [11]. Methods to estimate GCT from the upper back and the upper arm sensors were designed in advance, from an initial study considering unstructured experiments involving three amateur subjects. For the validation experiments, we included a mixture of recreational (four) and semi-elite (two) runners, with the aim of extending our results to a wide population. Running velocities in our experiments (3.3 m/s to 5.6 m/s) were in the range of normal to high rhythms (2.78 m/s to 5.56 m/s), as proposed in [7].
The IMUs were placed in comfortable positions on the upper back and arm, inspired by the commercially available utility pockets (mainly used for mobile phones) in the form of armbands around the upper arm or harnesses on the upper back. A third IMU was placed on the foot, as a representative of state-of-the-art IMU-based GCT estimation methods. This third IMU was used to interpret the results of the novel methods proposed in our work in the light of the results of other methods tested in the state of the art. With this paired analysis, we tried to mitigate the effect of the differences between our experimental conditions and those used in the original experiments, including the actual sensors used. The work in [7] was selected as representative for several reasons. In the first place, the original work considered the largest population and the widest range of velocities among the references analyzed. Most importantly, the algorithm was based on simple techniques to detect the initial and final contacts. The described signal events were clearly found in our signals, without complex processing, and therefore the estimation from the reproduced algorithm was expected to be less prone to implementation error than could be the case for other algorithms analyzed.
GCT estimation requires a prior step segmentation from the sampled signals. The success of this phase was not generally reported in the references analyzed in the systematic search, with the exception of [8], which reported a step rate detection of 97%, and [12], which reported values close to 100% for step detection success. Our results also show a high step detection rate from all the IMUs, identifying above 98% of the actual steps identified by the optical system.
Using the estimations from the optical system, we found (Table 3) an average step time of 0.32 s and an average GCT from the cameras of 0.18 s. These results are in agreement with previous studies for treadmill running [18], where, for velocities between 12 km/h and 20 km/h (our velocity range), the authors reported an average step time of 0.34 s and an average GCT of 0.22 s.
The GCT estimated from the IMU attached to the foot showed an average bias of 0.014 s and a precision of 0.013 s. These values mean that, on average, we get an estimation error of one sample with a precision of one additional sample. This is an acceptable error, as an improvement over this would suppose nearly perfect detection. This result confirms that the estimation from the accelerometers works properly in our experimental setup. The results of our experiments differ from those reported in the original work that used the same estimation method applied to a foot-worn inertial sensor [7], which reported a larger estimation error, with an accuracy of −0.03 s and a precision of 0.004 s, while [11] reported an improved estimation accuracy of −0.008 s, with a precision of ±0.004 s. The experimental conditions are different in our setup, which may have caused the differences.
We found an average error of 0.008 s in the estimations from the upper-back-attached sensor compared to the ground truth, with a precision (standard deviation) of 0.016 s. This error is similar to that found in the estimation from the foot-attached sensor, which may lead one to expect a similar performance, in general terms, between the foot-attached and the upper-back-attached IMUs, although the difference was found to be significant in statistical terms (ANOVA, p ≈ 0). Regarding estimations from the upper arm, GCT estimations were less accurate and precise, reporting up to 0.049 s of average error with a precision of 0.027 s (standard deviation). However, these values are in the range of reported values for GCT estimation in the state of the art. The authors of [7] and [9] reported average errors around 0.03 s, and [8] and [9] reported a similar average error, around 0.05 s. A precision with an estimated standard deviation around 30 ms was less frequent. Only [9] reported a similar standard deviation of error, of 34.1 ms, for estimation errors from a shank-mounted inertial sensor.
As a summary, Figure 9 shows, using error bars, the precision (as absolute values, to facilitate the comparison) and accuracy of the methods reported in the state of the art and those analyzed in this paper. Varied performance values have been reported. There are, on the one side, very accurate and precise methods [12,14], and the estimations from the pelvis and the foot performed by [9]. This high performance can be justified, as in these studies the experimental conditions were very controlled, which may have reduced the variability of the steps analyzed. In [12], 50 m of stable running were monitored from elite athletes; the authors of [9] and [14] analyzed only a reduced number of steps (ten and four, respectively). In any case, the estimation method seems to affect the results, as [9] reports, from the same experiments, an improved performance for estimations based on the pelvis and on a combination of the shank and the foot. For the remaining references, performance is varied, and the methods analyzed in this paper (shown in orange on the right) are comparable to them, with estimations from the foot and the upper back having among the lowest errors and, on the contrary, estimations from the upper arm among the highest.
Figure 9. Error bars (accuracy ± precision) reported in the references analyzed from the state of the art and the methods proposed in this paper. X-axis labels show the reference and a letter (as indicated in Table 1).
This work is, to our knowledge, the first experimental study to estimate GCT from the upper arm and the upper back. The results confirm the feasibility of obtaining a significant estimation of GCT from the upper back and the upper arm. In line with previous studies [7], further studies addressing a systematic assessment of other upper back/arm signal events, as indicators of the initial and final foot contact, may eventually confirm the optimality of the proposed algorithm, or lead to improvements of the estimation results. Tests with a greater population, and outside laboratory conditions, are other lines of future research. | 2023-02-26T16:05:12.511Z | 2023-02-24T00:00:00.000 | {
"year": 2023,
"sha1": "de01d862c8aa26af0d54fe2d084eeb8e7ce20d58",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1424-8220/23/5/2523/pdf?version=1677228911",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "24dd598b3c1efdd1b164eb8a2aa4640a7329e7c8",
"s2fieldsofstudy": [
"Engineering",
"Medicine"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
201170985 | pes2o/s2orc | v3-fos-license | Monochorionic twin pregnancy increases perinatal risk and is a high risk factor for selective intrauterine growth restriction
Twin pregnancy can cause many complications; in monochorionic twin pregnancies especially, the perinatal infants suffer from many diseases and have a high mortality rate. By analyzing and comparing twin pregnancies, in particular the condition and complications of the pregnant women and puerperae and the condition of the babies, we hoped to find the causes of fetal growth restriction in monochorionic twin pregnancies, so as to provide a reference for perinatal health care, complication prevention, and perinatal outcome. We divided 489 cases of twin pregnancy into two groups, monochorionic twin and dichorionic twin, and compared their clinical features. Finally, we used logistic regression analysis to analyze the risk factors of selective intrauterine growth restriction (sIUGR).
Abstract
Background
Twin pregnancy can cause many complications; in monochorionic twin pregnancies especially, the perinatal infants suffer from many diseases and have a high mortality rate. By analyzing and comparing twin pregnancies, in particular the condition and complications of the pregnant women and puerperae and the condition of the babies, we hoped to find the causes of fetal growth restriction in monochorionic twin pregnancies, so as to provide a reference for perinatal health care, complication prevention, and perinatal outcome.
Methods
We divided 489 cases of twin pregnancy into two groups, monochorionic twin and dichorionic twin, and compared their clinical features. Finally, we used logistic regression analysis to analyze the risk factors of selective intrauterine growth restriction (sIUGR).
Results
The incidences of premature rupture of membranes and sIUGR were significantly higher in monochorionic twins, and twin-twin transfusion syndrome (TTTS) occurred only in monochorionic twins. The weights of the newborn babies, both the larger and the smaller babies, were significantly lower in monochorionic twins. The neonatal transfer rate was significantly higher in monochorionic twins. Gestational weeks and the weight of the newborn babies were high risk factors for sIUGR.
Conclusions
The type of chorion has a great influence on the course and outcome of pregnancy. Monochorionicity is a high risk factor for sIUGR, which suggests that the main cause of sIUGR lies in the placenta, so sIUGR is a kind of "placental origin disease".
Background
When there are two babies in the uterine cavity of a pregnant woman, the pregnancy is usually called a twin pregnancy. Hellin [1] once calculated that the rate of multiple pregnancies was 1:89^(n-1) (with n referring to the number of babies). However, with the use of assisted reproductive technology (ART) and ovulation-inducing medicine, the rate of multiple pregnancies has been increasing in recent years.
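Written out, Hellin's rule and the resulting rates for twins and triplets are:

```latex
P(n) = \frac{1}{89^{\,n-1}}, \qquad
P(2) = \frac{1}{89} \approx 1.1\%, \qquad
P(3) = \frac{1}{89^{2}} \approx 0.013\%.
```

Note that the twin rate of roughly 1.1% given by this rule is consistent with the 1.1%-1.2% multiple-pregnancy rate cited in the Discussion below.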
Maternal complications and the risk of adverse pregnancy outcomes are higher in twin pregnancies than in singleton pregnancies, as are the rates of premature birth and perinatal mortality. Twin pregnancies are much more prone to complications, especially monochorionic twin pregnancies, which have higher rates of mortality and complications. Our research retrospectively analyzed and compared twin pregnancies, especially the condition of the pregnant women and newborn babies, the clinical characteristics, and the complications, between monochorionic twins and dichorionic twins. We hoped to find the high risk factors of fetal growth restriction in order to provide a reference for the perinatal health care and perinatal outcome of monochorionic twins.
Research object
We collected the cases of twin pregnancy in Zhujiang Hospital of Southern Medical University: 864 cases of twin pregnancy were identified, of which 489 cases were used in our research.
Case diagnosis standard and condition
(1) The gestational weeks of all cases were more than 28 weeks and the chorionicity was defined. Cases in which the chorionicity could not be defined, the gestational weeks were less than 28 weeks, or the pregnancy was a multiple pregnancy after fetal reduction surgery were not enrolled. (5) We checked whether the puerperae had an adverse pregnancy history, including two or more abortions, embryo damage, fetal anomaly, fetal death, or postpartum hemorrhage.
Research method
We used retrospective clinical analysis and multiple-factor logistic regression analysis to analyze the 489 cases of twin pregnancy. The occurrence of sIUGR was entered into the logistic regression analysis, and we analyzed the possible factors affecting the occurrence of sIUGR. A value of P < 0.05 was selected as the level of significance.
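A minimal sketch of such a multiple-factor logistic regression in Python with statsmodels follows. The file name and column names here are hypothetical placeholders; the paper's actual variable coding is given in its Table 1.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical data layout: one row per pregnancy, binary outcome 'siugr'.
df = pd.read_csv("twin_pregnancies.csv")
X = sm.add_constant(df[["gestational_weeks", "newborn_weight", "monochorionic"]])
model = sm.Logit(df["siugr"], X).fit()
print(model.summary())
print(np.exp(model.params))  # odds ratios for each factor
```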
Compare general information
As shown in Figure 1 and Table 2, the general data of the monochorionic twin and dichorionic twin groups were compared.
Compare the complications
The incidences of premature rupture of membranes and sIUGR were significantly higher in monochorionic twins, and TTTS existed only in monochorionic twins (Table 3).
Compare the perinatal infant outcome
The weights of the newborn babies, both the larger and the smaller babies, were significantly lower in monochorionic twins. The neonatal transfer rate was significantly higher in monochorionic twins (Table 4).
Analyze the high risk factors of sIUGR
We found that gestational weeks and the weight of the newborn babies were high risk factors for sIUGR (Table 5).
Discussion
The rate of multiple pregnancy is about 1.1%-1.2%, among which the rate of monochorionic twins is about 20%-30% [1,2]. In our research, the rate of monochorionic twins was 30%, which is consistent with the results of previous studies. Compared with singleton pregnancy and dichorionic twins, monochorionic twins have worse perinatal outcomes: the rate of perinatal mortality in monochorionic twins is twice that in dichorionic twins, and four times that in singleton pregnancy [3]. The rate of fetal abortion during 10-24 weeks of pregnancy in monochorionic twins is six times that in dichorionic twins and singleton pregnancy [4]. The growing environment of monochorionic twins in the uterus is very complicated.
Monochorionic twins increase the burden on pregnant women and carry more risk factors, such as premature delivery, perinatal mortality, fetal growth restriction, and congenital malformation, and all of these risks appear earlier and are more serious than in singleton pregnancy.
SIUGR is one of the most common complications in multiple pregnancy. It has a high rate of adverse pregnancy outcomes, can affect the cardiovascular function and endocrine system of newborn babies, and can even have a great influence after they grow up [5].
Because there have not been adequate researches on the pathogenesis of this disease, no consensus has been reached on the diagnosis and treatment of sIUGR in monochorionic twins. The research of sIUGR has become a popular topic in complicated multiple pregnancy. SIUGR can happen both in dizygotic twins and in monozygotic twins, but for different reasons. When sIUGR happens in dichorionic twins, it is associated with the placenta, the umbilical cord, and congenital malformation of one of the two babies; when it happens in monochorionic twins, it is associated with TTTS, the placenta, the umbilical cord, and the babies themselves. In dichorionic twins, sIUGR usually happens in mid or late gestation. But its onset in monochorionic twins is very unpredictable: it can happen in early, mid, or late gestation. The earlier it happens, or the greater the difference in weight between the two babies, the more serious the sequelae among the babies and the higher the rate of perinatal mortality. Even when it happens in mid or late gestation, as the weight difference between the two babies increases, the rates of premature delivery, perinatal asphyxia, and death also increase accordingly.
Our research found that the incidence of sIUGR in monochorionic twins was 41.0%, while the incidence in dichorionic twins was 24.1%. The age of the pregnant women, the rate of ART fertilization, and the rate of surgical delivery were significantly lower in monochorionic twins.
The incidences of premature rupture of membranes and sIUGR were significantly higher in monochorionic twins. The weights of the newborn babies, both the larger and the smaller babies, were significantly lower in monochorionic twins. The neonatal transfer rate was significantly higher in monochorionic twins. The results of our research show that the type of chorion has a great influence on the process and outcome of pregnancy; monochorionicity is a high risk factor for sIUGR, which means the main cause of sIUGR is from the placenta, so sIUGR is a kind of "placental origin disease". The diagnostic standard of sIUGR is still in dispute. At present, the widely recognized classification is based on the umbilical artery blood flow of the growth-restricted twin; one type is associated with the shortest gestational weeks, the highest rates of intrauterine fetal death and disease deterioration, the highest infant mortality rate, and the lowest survival rate within 6 months of age. In Type III, the umbilical artery blood flow intermittently disappears and reverses in end-diastole; this type has the best clinical consequences.
In conclusion, the perinatal fetal prognosis for monochorionic twins is worse than for dichorionic twins, and sIUGR in monochorionic twins involves a more complicated situation in pregnancy outcome and perinatal infant condition, so we should monitor these pregnancies closely, take effective interventions promptly, and improve the prognosis.
Table 1. Logistic regression model two-classification variable assignment table.
Table 2. Comparison of the general data in monochorionic twin and dichorionic twin.
Table 3. Comparison of the complications in monochorionic twin and dichorionic twin. | 2019-08-23T02:03:36.232Z | 2019-08-03T00:00:00.000 | {
"year": 2019,
"sha1": "627a64cb46e299dc1a883b4ec151f69ef5ca439c",
"oa_license": "CCBY",
"oa_url": "https://www.researchsquare.com/article/rs-3161/latest.pdf",
"oa_status": "GREEN",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "3d27f957c4e88b9476713ac54c7fd9a3d3a333b9",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
10865253 | pes2o/s2orc | v3-fos-license | The Impact of Severity of Illness at the Community Level
This study evaluated the impact of severity of illness on hospital inpatients within the metropolitan area of Syracuse, New York, during January-December 2014. It demonstrated that patients with Major and Extreme severity of illness generated a substantial majority of the excess lengths of stay and adverse outcomes during this period. These patients were associated with 77 percent of the excess days for adult medicine and 100 percent of the excess days for adult surgery. They also generated hospital readmission rates that were at least 50 percent higher than those of patients with Minor and Moderate severity of illness. They were also associated with more than 75 percent of inpatients with the most frequent post admission complications. The data suggested that these populations need to be a focus of efforts to improve hospital efficiency and outcomes.
Introduction
In the United States, efforts to improve health care continue to focus on utilization and outcomes. Studies and professional opinions have suggested that these efforts can result in better and more efficient care [1]-[4].
Much of this interest involves hospitals because acute care is the most visible and most expensive component of the health care system. In recent years, hospitals have been a focus for specific initiatives to improve utilization and outcomes such as the Medicare programs to reduce inpatient readmissions and complications, as well as statewide initiatives to focus nursing home admissions on patients with the greatest care needs [5]-[7].
These developments in health care have focused considerable attention on the need to evaluate patients by severity of illness. Because of advances in medical care and other factors, numbers of patients with multiple diagnoses have increased. Especially among the elderly, it is no longer unusual for patients to experience several chronic conditions. These factors have created challenges for health professionals involved with the identification and treatment of these individuals [8]-[11].
Looking toward the future, evaluation of severity of illness is also important as hospitals focus on the highest risk populations. Supporting this process involves the movement of lower levels of illness to ambulatory and long term care services in the community. It will also involve managing patients with high severity of illness in Accountable Care and other approaches [12].
The development of electronic health care data has produced opportunities to identify and quantify the impact of severity of illness in health care patient populations. During the 1990's, 3M TM Health Information Systems developed a system for evaluating severity of illness based on a number of factors including the patient's principal diagnosis, secondary diagnoses, and age. This system made it possible to identify and study large numbers of patients according to their relative levels of illness [13]. Software developed recently has also made it possible to apply severity of illness to outcomes such as inpatient hospital readmissions and complications [14] [15].
An important context for the evaluation of health care data concerning severity of illness is the community level. This is because most health care services are provided in local communities.
Additionally, in the United States, national data are not available by severity of illness because of the absence of a national database that includes all health care payors. For this reason, it is not possible to describe the distribution of patients by severity of illness at the national level.
Although national and statewide data are important, it is at the community level that health services are used through interactions between patients and providers and interactions among providers. From this perspective, within communities, the parameters and the impact of severity of illness and other indicators can be analyzed among services that work together on a daily basis.
Population
This study evaluated the severity of illness among hospital inpatients in a single acute care system, the metropolitan area of Syracuse, New York. The City of Syracuse and Onondaga County include three hospitals, Crouse Hospital (19,919 discharges excluding well newborns, 2014), St. Joseph's Hospital Health Center (25,532 discharges), and Upstate University Hospital (26,649 discharges). Within the local health care system, the hospitals work with a combined medical staff of 1830 physicians and 12 nursing homes. The hospitals provide acute care services to a local population of approximately 600,000 and tertiary services to a population of approximately 1,400,000.
Historically, the Syracuse hospitals have competed and cooperated in the provision of acute care within the service area through their joint planning organization, the Hospital Executive Council. This cooperation has included the use of community wide acute care data to improve efficiency and outcomes as well as the development of multihospital services [16].
Since the 1990s, the Syracuse hospitals and the Hospital Executive Council have worked with 3M TM Health Information Services in the application of severity of illness data to a variety of health care issues. These have included reduction of hospital lengths of stay, monitoring of inpatient admissions, reduction of inpatient readmissions, and reduction of inpatient complications [17]. These efforts have included use of the All Patients Refined Diagnosis Related Group (APR DRG) severity of illness system, as well as the Potentially Preventable Readmissions and Potentially Preventable Complications software.
Methods
This study evaluated the severity of illness of hospital inpatients in the acute care system of Syracuse, New York using the All Patients Refined Diagnosis Related Group Severity of Illness System. It focused on the impact of this algorithm on hospital lengths of stay, readmissions, and complications. The All Patients Refined Severity of Illness System was developed by 3M TM Health Information Systems as an adjunct of the All Patients Refined Diagnosis Related Groups. It is based on the identification of a level of severity of illness of the principal diagnosis and each secondary diagnosis for every hospital inpatient. It includes four levels of severity for each diagnosis, Minor, Moderate, Major, and Extreme. At the patient specific level, the system also includes a summary severity of illness based on the severity levels by diagnosis, as well as the patient age and other factors. The formula for patient severity of illness includes clinical relative weights for each indicator [13].
The study data were provided by each of the Syracuse hospitals to the Hospital Executive Council staff as administrative data through business associate agreements. In the absence of a community wide ethics committee, the study was reviewed with staff from each of the hospitals.
The study employed the Potentially Preventable Readmissions and the Potentially Preventable Complications systems to evaluate the relationships between severity of illness and inpatient outcomes. These algorithms were developed by 3M Health Information Systems for use with the APR DRGs and severity of illness [15] [18].
The analyses which comprise this study were based on simple descriptive statistics. The data analysis was based on identification and comparisons of hospital inpatient discharges by severity of illness.
In this study, the All Patients Refined System was used to identify the severity of illness in populations discharged from the Syracuse hospitals during 2014. Three indicators were used in the analysis, inpatient length of stay, inpatient readmissions, and post admission complications. The analysis focused on differences in each of these indicators by severity of illness and on the relative distributions of patients among the four severity of illness categories.
The initial component of the analysis involved inpatient lengths of stay by severity of illness for adult medicine and adult surgery, the services with the largest numbers of discharges in the Syracuse hospitals. It was based on inpatient data for the combined hospitals for January-December 2014. The analysis included numbers of discharges and lengths of stay for each service by severity of illness. It also included comparisons of these stays with those of a national sample of hospital inpatients by service and severity of illness.
The second component of the study involved inpatient Potentially Preventable Readmission rates for adult medicine and adult surgery in the Syracuse hospitals. This analysis was based on readmissions within 30 days of the previous discharge. Potentially Preventable Readmissions included all of those where a clinical relationship between the initial admission and the readmission was present. Between 2009 and 2014, these services accounted for more than 75 percent of total readmissions in the Syracuse hospitals. This analysis employed inpatient hospital data for January-December 2014 to identify hospital readmission rates for each of the hospitals and the combined total. It included comparison of these rates among severity of illness categories for each of these populations.
The third component of the analysis involved Potentially Preventable Complications for pneumonia and urinary tract infections, the diagnoses associated with the largest numbers of post admission complications in the Syracuse hospitals. The analysis was based on data for Crouse Hospital and St. Joseph's Hospital Health Center, two hospitals which account for more than 60 percent of hospital inpatients in Syracuse. They were the only hospitals for which complete data were available for January-December 2014. The analysis was based on numbers of inpatient complications by severity of illness category for patients with pneumonia and urinary tract infection as a Potentially Preventable Complication and those without each complication.
Results
The first component concerned hospital lengths of stay in Syracuse, New York by severity of illness. Relevant data for the combined Syracuse hospitals for January-December 2014 are summarized in Table 1.
These data identify numbers of hospital discharges and lengths of stay in the Syracuse hospitals by All Patient Refined Severity of Illness for the two largest acute care services, adult medicine and adult surgery. This information demonstrated that, for each service, mean hospital stays increased with higher severity of illness.
Lengths of stay for patients with Moderate severity of illness were 38.3 percent longer than those with Minor severity of illness for adult medicine and 46.3 percent longer for adult surgery. Mean stays for patients with Major severity of illness were 56.4 percent longer than those with Moderate severity of illness for adult medicine and 126.0 percent longer for adult surgery. Stays for patients with Extreme severity of illness were 88.5 percent longer than those with Major severity of illness for adult medicine and 166.3 percent longer for adult surgery.
The data in Table 1 also identified differences in the distribution of hospital inpatients in Syracuse for the two major services. For adult medicine, 50.8 percent of inpatients were at the highest levels of severity of illness (Major or Extreme), while for adult surgery, 26.5 percent were at these levels. Perhaps most importantly with respect to hospital lengths of stay, the data in Table 1 identified differences in hospital stays and related inpatient utilization compared with national averages adjusted for severity of illness. These comparisons were adjusted to compare national and Syracuse populations with the same severity of illness. The results of these comparisons were quantified as unit differences between stays and utilization differences in patient days.
The results of these comparisons demonstrated that a substantial majority of the excess patient days in the Syracuse hospitals for each service were generated by patients at Major or Extreme severity of illness. For Adult Medicine, of the 8355 patient days difference from the national average, 76.9 percent were produced by patients at Major or Extreme severity of illness. For Adult Surgery, all 5906 excess patient days were generated by patients at Major or Extreme severity of illness.
The second component of the study concerned patients readmitted to the Syracuse hospitals within 30 days of their initial inpatient discharge by severity of illness. Relevant Potentially Preventable Readmissions data for adult medicine and adult surgery combined between January and December 2014 are summarized in Table 2.
The data in Table 2 demonstrated that increases in severity of illness were associated with increases in monthly hospital readmission rates in the Syracuse hospitals between January-March and July-December 2014. Between patients with Minor and Moderate severity of illness within each time period, the rates increased for all three quarters at Crouse Hospital and St. Joseph's Hospital Health Center and for two of the three quarters at Upstate University Hospital. For most of these comparisons, the rates increased by more than 50 percent. Between patients with Major and Moderate severity of illness, the rates increased at all three hospitals. Between patients with Major and Extreme severity of illness, the rates increased for all three quarters at Crouse Hospital and St. Joseph's Hospital Health Center and for two of the three quarters at Upstate University Hospital.
The readmission rates for high severity of illness patients also provided information concerning the relative sizes of these populations. For patients with Major severity, between 10 and 13 percent of medical-surgical inpatients were readmissions. For patients with Extreme severity, between 12 and 25 percent of medical-surgical inpatients were readmissions.
In a separate analysis, the Pearson Correlations between monthly readmission rates for medical-surgical patients and those in specific severity of illness categories between January 2013 and March 2015 were identified. Among the Syracuse hospitals, correlations between these rates for patients with Minor or Moderate severity of illness and rates for all medical surgical inpatients ranged between 0.3173 and 0.5620 among the hospitals, while correlations between rates for patients with Major or Extreme severity of illness and rates for all medical surgical inpatients ranged from 0.7187 to 0.8735. These correlations demonstrated that patients with Major and Extreme severity of illness were the principal drivers of variations in readmission rates.
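The correlation computation described here is standard; a sketch with SciPy follows, using made-up monthly rates purely to illustrate the call (the study's actual rates are not reproduced here).

```python
import numpy as np
from scipy.stats import pearsonr

# Illustrative values only: monthly readmission rates (percent) for all
# medical-surgical patients and for the Major/Extreme severity subgroup.
monthly_all = np.array([7.1, 6.8, 7.4, 7.0, 6.9, 7.3])
monthly_major_extreme = np.array([11.9, 11.2, 12.6, 11.8, 11.5, 12.3])
r, p = pearsonr(monthly_major_extreme, monthly_all)
print(f"r = {r:.4f}, p = {p:.4f}")
```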
The third component of the study concerned patients who experienced post admission complications in the Syracuse hospitals for pneumonia and urinary tract infection, the diagnoses with the largest numbers of patients for this outcome. Data for these Potentially Preventable Complications at Crouse Hospital and St. Joseph's Hospital Health Center during 2014 are summarized in Table 3.
This information demonstrated that substantial majorities of the hospital inpatients who experienced the two complications in each of the hospitals were at Major or Extreme severity of illness. Patients at these levels of severity of illness accounted for 89.3-92.3 percent of all inpatients with pneumonia as a complication and 75.9-79.1 percent of inpatients with urinary tract infection as a complication. These data were also reflected in the relatively small numbers of patients at Minor or Moderate severity of illness with pneumonia as a complication (10-11) and with urinary tract infection as a complication (19-27). These data suggested that patients with high severity of illness were probably more susceptible to these complications, even in hospital inpatient settings.
Further comparisons addressed the potential relationships between patients with and without each of the complications evaluated in the two Syracuse hospitals. For pneumonia, 89.3-92.3 percent of inpatients who experienced the complication were at Major and Extreme severity of illness, compared with 40.2-41.1 percent of those who did not. For urinary tract infection, 75.9-79.1 percent of inpatients who experienced the complication were at Major or Extreme severity of illness, compared with 40.4-41.1 percent of those who did not. High severity of illness was frequently associated with each of these complications.
During the same period, for pneumonia, 7.6-10.7 percent of inpatients in the hospitals who experienced the complication were at Minor and Moderate severity of illness, compared with 58.9-59.7 percent of those who did not. For urinary tract infection, 20.9-24.1 percent of inpatients in the hospitals who experienced the complication were at Minor and Moderate severity of illness, compared with 58.8-59.5 percent of those without it.
Discussion
As health care evolves in the United States, the importance of evaluating the degree of illness for the whole patient is increasing. The need to evaluate severity of illness at this level is generated by the requirement for providers to treat patients with a number of diagnoses, as well as the need to plan and manage care within resource constraints. This study described the application of severity of illness to inpatient data in the three hospitals of Syracuse, New York, which comprise a single, community wide acute care system. It was based on the application of the All Patients Refined Diagnosis Related Group severity of illness system developed by 3M TM Health Information Systems to three indicators of utilization and outcomes. The study data were collected for indicators of efficiency and outcomes during the same twelve month period, January-December 2014.
The study demonstrated that the quantities of each of the three indicators included in the system increased with rising severity of illness. For adult medicine and adult surgery inpatient lengths of stay, medical-surgical inpatient readmission rates within 30 days, and post admission complication rates, levels increased from Minor to Moderate, Moderate to Major, and Major to Extreme severity of illness. It was interesting that these progressions occurred, especially for Potentially Preventable Readmissions and Potentially Preventable Complications that were developed in 2006 and 2007, more than ten years after the creation of the severity of illness algorithm.
These data suggested that severity of illness, at least in this version, is a valid tool for evaluating some forms of hospital utilization and outcomes. The fact that the analysis was carried out using administrative data indicated that the approach can be made widely available to providers and purchasers of care.
Perhaps more importantly, the study data also demonstrated that most problematic hospital stays, readmission rates, and complications in the Syracuse hospitals were focused in the populations with the highest severity of illness. Patients with Major and Extreme severity of illness were the source of the obstacles to improving efficiency and outcomes identified in the data.
The study data indicated that most excess patient days in the Syracuse hospitals were generated by patients with high severity of illness. The data also indicated that these patients were associated with increased risk of inpatient readmissions and post admission complications.
The study suggested that the use of these data can contribute to the improvement of population health by challenging providers to improve care for patients at all levels of severity of illness. The sharing of information concerning these efforts could help improve population health on a local, regional, and national basis.
Conclusions
This information suggests that health care providers and payers will need to focus resources on management of populations by severity of illness as they work to improve the efficiency and effectiveness of care. It also suggests that they may be able to do so because many of the concerns that they need to address are related to these populations.
To be sure, the treatment and management of patients with high severity of illness is not an easy task. It requires addressing multiple diagnoses, as well as other clinical and demographic factors, simultaneously. This challenge will require some of the highest creativity and related efforts that health care providers have to offer.
This area of care should generate useful directions for health systems at the community level. It addresses the need for effective care delivered in an efficient manner, especially for those patients who have the greatest needs. Understanding and managing care by severity of illness holds the potential for raising the bar for this population and for all of health care. Using severity of illness data can help providers and payers to reach these objectives.
Table 1. Adult medicine and adult surgery discharges by severity of illness, Syracuse hospitals, 2014.
Note: Complete data for April-June unavailable because of implementation of electronic medical record systems at two of the hospitals; based on the 3M Health Information Systems Potentially Preventable Readmissions algorithm applied to adult medical/surgical definitions by APR DRG for readmissions within 30 days; prepared by the Hospital Executive Council.
90490771 | pes2o/s2orc | v3-fos-license | Influence of olive leaves extract on hepatorenal injury in streptozotocin diabetic rats
Medicinal plants have always been an important source of new, effective alternative compounds for human therapy. Currently, much scientific evidence indicates that medicinal plants contain many hypoglycemic chemical compounds. The purpose of the present study was to determine the influence of olive leaves extract on hepatorenal injury in diabetic male rats. Experimental diabetes was induced by streptozotocin (STZ). The levels of serum glucose, alanine aminotransferase, aspartate aminotransferase, alkaline phosphatase, gamma glutamyl transferase, total bilirubin, creatinine, blood urea nitrogen, uric acid and malondialdehyde were significantly increased, while the levels of serum superoxide dismutase, glutathione and catalase were significantly decreased in untreated diabetic rats. Moreover, histopathological examination showed several alterations in the structure of the liver and kidney in untreated diabetic rats. Treatment of diabetic rats with low and high doses of olive leaves extract markedly reduced and protected against these physiological and histopathological alterations. The highest treatment efficacy was noted in diabetic rats treated with the high dose, followed by the low dose, of olive leaves extract. Additionally, the results of this study showed that the antioxidant activities of olive leaves extract played a vital role against the hepatorenal injury induced by diabetes. Finally, this study indicates the importance of olive leaves extract as a promising alternative and complementary therapeutic agent against diabetes and its complications.
Introduction
Diabetes mellitus (DM) is a syndrome characterized by chronic hyperglycemia and disturbances of carbohydrate, lipid and protein metabolism associated with an absolute or relative deficiency in insulin secretion and/or action (Akinnuga et al., 2010). Concern regarding this chronic disease is focused on serious DM-related complications which can affect multiple vital organ systems, thereby leading to more severe and irreversible pathological conditions such as nephropathy, retinopathy, vasculopathy, neuropathy and cardiovascular diseases, as well as hepatopathy (Reid, 2006). Long-term hyperglycemia promotes general oxidative stress and increases the incidence of diabetic nephropathy and liver disease (El-Serag and Everhart, 2002; El-Serag et al., 2004). DM, by most estimates, is now the most common cause of liver disease, and liver disease is an important cause of death in diabetic people (de Marco et al., 1999). Diabetic nephropathy is one of the most alarming worldwide health problems at present, and diabetes leads to microvascular (retinopathy, neuropathy and nephropathy) and macrovascular (heart attack, stroke and peripheral vascular disease) complications in many countries of the world (Umar et al., 2010). Diabetic nephropathy is a microvascular diabetic complication that leads to end-stage renal disease. Multiple factors are involved in the onset of diabetic nephropathy; oxidative stress is believed to link these factors (Wolf, 2004; Brownlee, 2005). Free radicals also promote the development of liver diseases by inducing hepatocyte apoptosis, hepatic inflammatory response and fibrogenesis (Albano, 2006; Novo and Parola, 2008).
Recently, medicinal plants have been widely used, and experimental studies have shown that many species of medicinal plants with different compounds can be used as hypoglycemic agents. Medicinal plants provide a useful source of oral hypoglycemic compounds for the development of new pharmaceutical leads as well as dietary supplements to existing therapies (Kavishankar et al., 2011). The olive tree (Olea europaea) has been widely accepted as one of the species with the highest antioxidant activity via its oil, fruits, and leaves. It is well known that the activity of olive tree byproduct extracts in medicine and the food industry is due to the presence of some important antioxidant and phenolic components that prevent oxidative degradation. The olive tree has long been recognized as having antioxidant molecules, such as oleuropein, hydroxytyrosol, oleuropein aglycone, and tyrosol (Jemai et al., 2008a, 2008b). Moreover, experimental studies have shown the ability of olive leaves to treat and alleviate different diseases and physiological, biochemical and histopathological alterations (Omagari et al., 2010; Grawish et al., 2011; Zari and Al-Attar, 2011; Wainstein et al., 2012; Al-Attar and Abu Zeid, 2013; Shawush, 2014, 2015; Kumral et al., 2015; Al-Attar et al., 2016). The present study aimed to evaluate the influence of olive leaves extract on hepatorenal injury in diabetic male rats.
Preparation of olive leaves extract
The method of Al-Attar and Abu Zeid (2013), with some modifications, was used to prepare the extract of olive leaves. The dried olive leaves (200 g) were powdered and added to 7 L of hot water. After 3 h, the mixture was slowly boiled for 30 min. After the boiling period, the mixture was cooled at room temperature and gently subjected to an electric mixer for 20 min. Thereafter, the solution of olive leaves was filtered. The filtrate was evaporated in an oven at 40°C to produce dried residues (active principles). Relative to the powdered samples, the mean yield of the leaf extract was 20.3%. The extract was prepared every two weeks and stored in a refrigerator for subsequent experiments.
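The quoted yield is simple arithmetic; the short sketch below back-calculates the expected mass of dried residue from the 20.3% mean yield and the 200 g of starting leaf powder. The residue mass itself is not reported in the text and is computed here only for illustration.

# Percent yield of the aqueous extract, as described above
dried_leaves_g = 200.0   # starting mass of powdered leaves (from the text)
mean_yield_pct = 20.3    # reported mean yield (from the text)
residue_g = dried_leaves_g * mean_yield_pct / 100.0
print(f"Expected dried residue: {residue_g:.1f} g")  # 40.6 g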
Animal and experimentation
Sixty Wistar male rats (113.2-183.8 g) used in this study were obtained from the Experimental Animal Unit of King Fahd Medical Research Center, King Abdulaziz University, Jeddah, Saudi Arabia.
The experimental animals were housed 10 per cage in a room with 65% humidity and a 12:12 h light:dark cycle at an ambient temperature of 20 ± 1°C. Standard diet, commercial feed pellets and tap water were freely available. DM was induced by intraperitoneal injection of streptozotocin (STZ; Sigma-Aldrich Corp, St. Louis, MO, USA) at a single dose of 60 mg/kg body weight dissolved in saline solution. DM was confirmed by determination of fasting blood glucose levels in rats treated with STZ; rats with blood glucose levels over 17 mmol/L were considered diabetic. The normal (n = 30) and diabetic rats (n = 30) were divided into six experimental groups. The first group served as the normal healthy control and intraperitoneally received saline solution. The second group was the diabetic control. The third group consisted of diabetic rats supplemented orally with olive leaves extract at a low dose (LD) of 200 mg/kg body weight/day. The fourth group consisted of diabetic rats supplemented orally with olive leaves extract at a high dose (HD) of 400 mg/kg body weight/day. The fifth group consisted of non-diabetic rats that intraperitoneally received saline solution and were treated with olive leaves extract at the same dose given to the third group. The sixth group consisted of non-diabetic rats that intraperitoneally received saline solution and were supplemented with olive leaves extract at the same dose given to the fourth group. After eight weeks, all rats were fasted for 12 h; water was not restricted. Blood samples were taken from the orbital venous plexus under total anesthesia with diethyl ether and used to separate serum for analysis of biochemical parameters. The level of serum glucose was measured according to the method of Trinder (1969). Serum alanine aminotransferase (ALT) and aspartate aminotransferase (AST) levels were measured using the method of Reitman and Frankel (1957). Serum alkaline phosphatase (ALP), gamma glutamyl transferase (GGT) and total bilirubin were estimated using the methods of MacComb and Bowers (1972), Szasz (1969) and Doumas et al. (1973), respectively. The methods of Larsen (1971), Patton and Crouch (1977), and Young (1990) were used to evaluate the levels of creatinine, blood urea nitrogen (BUN) and uric acid, respectively. The methods of Beutler et al. (1963), Nishikimi et al. (1972), Ohkawa et al. (1979) and Aebi (1984) were used to measure the levels of serum glutathione (GSH), superoxide dismutase (SOD), malondialdehyde (MDA) and catalase (CAT), respectively. After blood sampling, rats were dissected, and liver and kidney tissues were collected from each group for histopathological examination. The collected tissues were fixed in 10% buffered formaldehyde, sectioned and stained with hematoxylin and eosin. The resulting slides were observed under a light microscope (Olympus BX61, USA) connected to a motorized controller unit (Olympus BXUCB, USA) and photographed with a camera (Olympus DP72, USA).
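Since all doses above are expressed per kilogram of body weight, the worked example below converts them into absolute amounts for a single rat. The 150 g body weight is a hypothetical value within the reported range, not a figure from the study.

body_weight_kg = 0.150                 # hypothetical 150 g rat
stz_mg = 60 * body_weight_kg           # single STZ injection -> 9.0 mg
extract_ld_mg = 200 * body_weight_kg   # daily low dose  -> 30.0 mg
extract_hd_mg = 400 * body_weight_kg   # daily high dose -> 60.0 mg
print(stz_mg, extract_ld_mg, extract_hd_mg)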
Statistical analysis
All results were expressed as mean ± standard deviation (S.D.). One-way analysis of variance (ANOVA) was used to evaluate differences among experimental groups. Statistical analysis was performed using the Statistical Package for the Social Sciences (SPSS for Windows, version 22.0). The results were considered statistically significant when P < 0.05.
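For readers without SPSS, the snippet below is a minimal open-source analogue of the one-way ANOVA described here, using scipy. The serum glucose values for the three groups are invented solely to make the example runnable and do not come from the study.

from scipy.stats import f_oneway

# Hypothetical serum glucose values (mmol/L) for three of the groups
control  = [5.1, 5.4, 4.9, 5.2, 5.0]
diabetic = [22.3, 24.1, 21.8, 23.5, 22.9]
treated  = [12.4, 11.8, 13.1, 12.0, 12.7]

f_stat, p_value = f_oneway(control, diabetic, treated)
print(f"F = {f_stat:.2f}, P = {p_value:.4f}")
if p_value < 0.05:
    print("Difference among groups is statistically significant (P < 0.05)")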
Results
The levels of serum glucose, ALT, AST, ALP, GGT and total bilirubin in all experimental groups are presented in Fig. 1A-F. The levels of serum glucose were significantly increased in diabetic rats of group 2 (340.0%), diabetic rats of group 3 treated with the LD of olive leaves extract (190.2%) and diabetic rats of group 4 treated with the HD of olive leaves extract (119.8%) compared with normal control rats of group 1. The levels of serum ALT (133.0%), AST (58.5%), ALP (137.9%), GGT (85.7%) and total bilirubin (56.6%) were significantly elevated in diabetic rats of group 2. The levels of serum ALT (73.8%), AST (18.3%), ALP (41.3%) and GGT (49.1%) were significantly elevated in diabetic rats of group 3, while the level of serum total bilirubin was unchanged compared with normal control rats of group 1. In diabetic rats of group 4, the levels of ALT (32.6%) and ALP (18.3%) were significantly increased, while the levels of AST, GGT and total bilirubin were statistically unchanged compared with normal control rats of group 1. Additionally, insignificant changes were noted in the levels of serum glucose, ALT, AST, ALP, GGT, and total bilirubin in normal rats treated with the LD (group 5) and HD (group 6) of olive leaves extract.
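The percentages quoted in parentheses throughout these results are changes relative to the normal control mean. A one-line sketch of that arithmetic follows, with hypothetical means chosen so that the computed value matches the 340.0% quoted for group 2.

def pct_change(group_mean, control_mean):
    # Percentage change relative to the normal control mean
    return 100.0 * (group_mean - control_mean) / control_mean

control_glucose = 5.0   # hypothetical control mean (mmol/L)
group2_glucose = 22.0   # hypothetical diabetic mean (mmol/L)
print(f"{pct_change(group2_glucose, control_glucose):.1f}%")  # 340.0%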
Measured values for serum creatinine, BUN and uric acid in all groups are given in Fig. 2A-C. Statistically significant increases in the levels of creatinine (35.6%), BUN (112.2%) and uric acid (37.7%) were detected in diabetic rats of group 2. The levels of serum creatinine (15.7%) and BUN (60.2%) were significantly raised in diabetic rats of group 3, while the level of uric acid was unchanged compared with normal control rats of group 1. Serum BUN (30.3%) was significantly increased in diabetic rats of group 4, while the levels of creatinine and uric acid were unchanged compared with normal control rats of group 1. In comparison with normal control rats of group 1, normal rats supplemented with the LD (group 5) and HD (group 6) of olive leaves extract showed insignificant alterations in the levels of serum creatinine, BUN and uric acid.
Table 1 presents the levels of serum SOD, GSH, MDA and CAT in all groups. The levels of serum SOD (52.5%), GSH (62.1%) and CAT (55.9%) were significantly diminished in diabetic rats of group 2 compared with normal control rats of group 1. Likewise, the levels of serum SOD (37.3% and 27.2%), GSH (39.8% and 16.3%) and CAT (33.2% and 31.0%) were significantly declined in diabetic rats treated with the LD and HD of olive leaves extract, respectively, compared with normal control rats of group 1. Noticeable increases of serum MDA were observed in diabetic rats of group 2 (183.6%), diabetic rats exposed to the LD of olive leaves extract (109.4%) and diabetic rats treated with the HD of olive leaves extract (58.4%) compared with normal control rats of group 1. In comparison with normal control rats of group 1, there were no significant alterations in the levels of serum SOD, GSH, MDA and CAT in normal rats treated with the LD (group 5) and HD (group 6) of olive leaves extract.
Table 1. The levels of serum SOD, GSH, MDA and CAT of control, STZ, STZ plus LD of olive leaves extract, STZ plus HD of olive leaves extract, LD of olive leaves extract and HD of olive leaves extract treated rats. Percentage changes are included in parentheses.
Histopathological examination indicated a normal structure of the liver in the normal control rats (Fig. 3A) as well as in the diabetic rats treated with the LD of olive leaves extract (Fig. 3C), the diabetic rats treated with the HD of olive leaves extract (Fig. 3D), and the normal rats treated with the LD (Fig. 3E) and HD (Fig. 3F) of olive leaves extract. The liver structure of diabetic rats of group 2 showed several changes, including disarrangement of hepatic strands, rupture of liver cells (hepatocytes), mild hepatocellular necrosis, dilation and congestion of blood vessels with mild hemorrhage, dense lymphocytic infiltration around the central vein, and dark-stained hepatocytic nuclei indicating cell pyknosis (Fig. 3B). Histopathological examinations of kidney or renal sections from all groups are represented in Fig. 4A-H. Areas of renal cortex containing renal corpuscles and associated tubules showed more pronounced changes in treated rats compared with normal control. Therefore, these areas were selected for histological examination with the light microscope. The normal renal corpuscle consists of a tuft of capillaries, the glomerulus, surrounded by a double-walled epithelial capsule called Bowman's capsule. Between the two layers of the capsule is the urinary or Bowman's space (Fig. 4A). In comparison with normal control rats, the renal sections from diabetic rats treated with the LD of olive leaves extract (Fig. 4E) and HD of olive leaves extract (Fig. 4F) showed normal structures.
Moreover, no detectable histological differences were observed by light microscopy between renal sections of normal control rats and normal rats treated with the LD (Fig. 4G) and HD (Fig. 4H) of olive leaves extract. In diabetic rats of group 2, there were pronounced alterations in the structure of the renal corpuscles, including hemorrhage, shrinkage, and a high degree of degeneration and necrosis of the glomeruli and Bowman's capsules (Fig. 4B-D).
Discussion
The incidence and prevalence of DM have continuously increased over the last 20 years; an estimated 387 million people worldwide currently suffer from DM (Aziz et al., 2015). At present, besides insulin, the most widely used medications for DM are oral hypoglycemic drugs. However, clinical use of the current drugs is accompanied by unpleasant side effects such as severe hypoglycemia, lactic acidosis, peripheral edema and abdominal discomfort (Lorenzati et al., 2010). Modern therapies are far too costly and beyond the reach of tribal people and the majority of DM sufferers; thus, the ethnopharmacological use of herbal remedies for the treatment of DM is an area of study that is ripe with potential as a starting point in the development of alternative, inexpensive therapies (Rajendran and Manian, 2011). Therefore, the search for new antidiabetic agents with more effectiveness and fewer side effects has continued. In the present study, untreated diabetic rats showed highly significant increases in the levels of serum glucose, ALT, AST, ALP, GGT, total bilirubin, creatinine, BUN, uric acid and MDA, while the levels of SOD, GSH and CAT were significantly decreased. Moreover, the histopathological examination of the liver and kidney showed several changes. These findings are generally in agreement with previous experimental diabetes studies (Mohamed et al., 2009). The observed increases in the levels of ALT, AST, ALP, GGT and total bilirubin are major diagnostic indicators of hepatic damage and disease (Hukkeri et al., 2002; Chatterjea and Shinde, 2005; Porchezhian and Ansari, 2005; Malarvizhi and Srinivasan, 2015). Moreover, serum or plasma enzyme levels have been used as markers for monitoring chemically induced tissue damage (Lin and Wang, 1986; Ngaha et al., 1989; Obi et al., 2001). The present liver injury was confirmed by the measurement of liver markers in serum together with the histopathological changes of liver structure in untreated diabetic rats.
The present study demonstrates that untreated diabetic rats display a pronounced impairment in renal function, which is confirmed by the elevation of serum levels of creatinine, BUN and uric acid, and by histopathological changes. BUN is a byproduct of protein breakdown. About 90% of the urea produced is excreted through the kidney (Walmsley et al., 2010). Meanwhile, creatinine is a waste product of muscle creatine, which is used during muscle contraction. Creatinine is commonly measured as an index of glomerular function (Treasure, 2003). The BUN level can be increased by many other factors such as dehydration, antidiuretic drugs and diet, while creatinine is more specific to the kidney, since kidney damage is the only significant factor that increases the serum creatinine level (Cheesbrough, 1998). Additionally, creatinine is excreted exclusively through the kidney. Damage to the kidney therefore makes it inefficient at excreting both urea and creatinine and causes their accumulation in the blood, so high levels of blood urea and creatinine indicate kidney damage (Dollah et al., 2012).
In the present study, the levels of serum SOD, GSH and CAT were significantly decreased, while the level of MDA was increased in STZ-treated rats. These findings clearly showed that STZ induced oxidative stress in experimental diabetic rats. Diabetic complications are linked to hyperglycemia-induced oxidative stress, which eventually overcomes the endogenous antioxidant defense system through glucose autoxidation, induction of nonenzymatic glycosylation of various macromolecules, and generation of reactive oxygen species (ROS) (Ademiluyi and Oboh, 2012). The human body possesses several enzymes associated with antioxidant defense and repair mechanisms against oxidative stress (Gul et al., 2013). Abundant clinical evidence demonstrates that diabetes correlates closely with oxidative stress, resulting in increased ROS production or a reduction in the antioxidant defense system (Susztak et al., 2006). Hyperglycemia in STZ-treated animals leads to the formation of hydrogen peroxide, which subsequently generates free radicals such as superoxide (O2−) and hydroxyl (OH−) radicals. These reactive compounds can cause peroxidation of lipids, resulting in the formation of hydroperoxy fatty acids and endoperoxides (Pushparaj et al., 2000). However, many studies have demonstrated that the levels of these biochemical parameters differ and change in diabetic rats compared with non-diabetic rats (Miao et al., 2015; Nwaehujor et al., 2015; Roy et al., 2015; Obi et al., 2016; Sheweita et al., 2016; Zhu et al., 2016).
From the present study, it is obvious that the treatment of diabetic rats with the LD and HD of olive leaves extract attenuated the marked increases of serum glucose, ALT, AST, ALP, GGT, total bilirubin, creatinine, BUN and uric acid. Furthermore, histopathological examination showed that treatment of diabetic rats with the LD and HD of olive leaves extract protected the hepatic and renal structures. Administration of the LD and HD of olive leaves extract reduced the severe alterations of serum SOD, GSH, MDA and CAT in diabetic rats. These findings indicate that the LD and HD of olive leaves extract play a protective role against hepatorenal injury in diabetic rats. Moreover, the highest treatment efficacy was observed in diabetic rats treated with the HD, followed by the LD, of olive leaves extract. Al-Janabi et al. (2013) showed that the use of olive leaves extract improved the levels of blood glucose, albumin, total protein and creatinine in diabetic rats induced by STZ. Furthermore, the administration of olive leaves extract to diabetic rats caused a modulation in the regeneration of the β-cells of the pancreas compared with untreated diabetic rats. Laaboudi et al. (2016) demonstrated that the oral administration of olive fruit and leaf extracts decreased the levels of serum glucose, total protein, total cholesterol, triglycerides, LDL-C, ALT, AST, creatinine, urea and uric acid, and increased the level of HDL-C in STZ diabetic male rats. Sakr et al. (2016) showed that injection of STZ provoked a significant increase in serum glucose, ALT, AST, total cholesterol, triglycerides, LDL-C and VLDL-C, while the level of serum HDL-C was significantly decreased in male rats. Moreover, serum MDA was increased, and the antioxidant enzymes SOD and CAT were decreased. Diabetic rats showed many histopathological alterations in the structure of the pancreas and liver. When diabetic rats were treated with olive leaves extract, an improvement was observed in the biochemical parameters and the histology of the pancreas and liver. They concluded that the ameliorative effect of olive leaf extracts against the toxicity of diabetes in rats may be attributed to the presence of its phenolic compounds. Most phenolic and flavonoid compounds have been described as having antioxidative action in living systems, as they act as scavengers of free radicals (Rice-Evans et al., 1997). Polyphenols of olive leaves, especially oleuropein, have interesting effects on the human body, such as antioxidant, antihypertensive, hypoglycemic and hypocholesterolemic activities (Vogel et al., 2014). Based on the above observations, it can be concluded that olive leaves extract supplementation is beneficial in lowering the level of blood glucose and in ameliorating the associated hematobiochemical parameters and histopathological alterations in experimental diabetic rats. The hepatorenal protective role of the LD and HD of olive leaves extract is attributed to its polyphenolic components, which act as antioxidant factors.
"year": 2017,
"sha1": "cfc05c76f9fe3f47c7ee2f82280b3c7da46e313e",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.sjbs.2017.02.005",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7ca7fd82c1fc1f695d1b5c62a3e3bcd7444afd9a",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
67913070 | pes2o/s2orc | v3-fos-license | Leonora Carrington on and off Screen: Intertextual and Intermedial Connections between the Artist’s Creative Practice and the Medium of Film
This article explores the under-researched intertextual and intermedial connections between Leonora Carrington's transdisciplinary practice and the medium of film. The analysis focuses on the artist's cameo appearances in two 1960s Mexican productions—There Are No Thieves in This Village (Alberto Isaac 1964) and A Pure Soul (Juan Ibáñez 1965)—which mark her creative collaborations with Surrealist filmmaker Luis Buñuel and Magic Realists Gabriel García Márquez and Carlos Fuentes. Carrington's cameo roles are analyzed within a network of intertextual translations between her visual and literary works that often mix autobiographical and fictional motifs. Moreover, it is argued that Carrington's cinematic mediations employ the recurring Surrealist tropes of anti-Catholic and anti-bourgeois satire. The article also investigates Carrington's creative approach towards art directing and costume design, expressed in the Surrealist horror film The Mansion of Madness (Juan López Moctezuma 1973). The analysis examines the intermedial connections between Carrington's practice of cinematic set design and her earlier experiments with theatrical scenography. Overall, this study aims to reveal undiscovered aspects of Leonora Carrington's artistic identity and her transdisciplinary oeuvre.
Introduction
While the British-born/Mexican Surrealist, Leonora Carrington, has claimed no direct interest in expressing herself artistically through the medium of film, her vibrant life across Britain, France, Spain, the USA and Mexico has provoked scholars' reactions that seem emblematic of the possible relationship between Carrington and cinema. Susan Aberth has described Carrington's life as "cinematic in its scope and dramatic intensity" (Aberth 2004, p. 7), while Susan Rubin Suleiman has noted that "it would make a wonderful movie" (Suleiman 1993, p. 97). This article rather aims to explore the under-researched intertextual and intermedial connections between Carrington's transdisciplinary oeuvre and film. The study focuses on the artist's cameo appearances in two 1960s Mexican productions—There Are No Thieves in This Village (Alberto Isaac 1964, En este pueblo no hay ladrones) and A Pure Soul (Juan Ibáñez 1965, Un alma pura)—that mark the artist's creative collaborations with Surrealist film director Luis Buñuel and Magic Realist novelists Gabriel García Márquez and Carlos Fuentes. Carrington's cinematic mediations are explored within her Surrealist artistic practice that often fuses autobiographical elements with fictional motifs towards the creation of a complex intertextual expression. The discussion of Carrington's cameo roles reveals under-recognized aspects of the artist's transdisciplinary creativity and the production of her female Surrealist subjectivity in relation to the medium of film. Next, the analysis explores Carrington's creative approach towards art directing and costume design, manifested in the Mexican surrealist art horror, The Mansion of Madness (Juan López Moctezuma 1973, La mansión de la locura), and contextualized within her experiments with theatrical scenography. Carrington's art directing involves aesthetic references to her visual, literary and theatrical works, which expands her oeuvre into a network of intermedial translations across artistic disciplines.
The early scholarship on Carrington started with Whitney Chadwick's feminist understanding of the active role of female artists associated with Surrealism. As Chadwick claims in Women Artists and the Surrealist Movement (1985), Carrington's background to a great extent embodies and typifies other female artists' rebellion (Chadwick 1985, p. 67). Born in Lancashire in 1917 and determined to become an artist, the young Leonora Carrington left England in revolt against the patriarchal norms of her upper-class upbringing. In 1937, she joined her lover and collaborator Max Ernst in André Breton's Surrealist circle in Paris, where she refused the gendered roles of 'muse' and femme-enfant, famously stating "I didn't have time to be anyone's muse…I was too busy rebelling against my family and learning to be an artist" (Chadwick 1985, p. 66).
In Subversive Intent: Gender, Politics and the Avant-Garde (1990), Susan Suleiman analyzes further the subversive role of Carrington's intertextual feminist critique. Departing from Gayatri Spivak's argument that feminist expressions gain little from being associated with the work of male predecessors, Suleiman claims substantive ideological and existential differences between feminist avant-garde practices and the formal innovations by male avant-garde artists, whose works still reproduce patriarchal structures. In relation to Mikhail Bakhtin's theory on the subversive potential of carnivalesque discourse and the grotesque body, Suleiman defines the double-voicedness or double allegiance of female avant-garde expressions that mimic, re-use and make parody of the formal artistic achievement of male Surrealists in the construction of a feminist critique. "This double allegiance-on the one hand, to the formal experiments and some of the cultural aspirations of the historical male avant-gardes; on the other hand, to the feminist critique of dominant sexual ideologies, […] may be the most innovative as well as the most specifically "feminine" characteristic of contemporary experimental work by women artists," as Suleiman argues (1990, pp. 162-63). Adopting Julia Kristeva's conceptualization of intertextuality (also inspired by Bakhtin's theories on carnival and dialogism), Suleiman addresses the subversive potential of Carrington's intertextual novel, The Hearing Trumpet (written in the early 1960s), which can be understood as "a feminist parodic rewriting of, among other old stories, the quest of the Holy Grail" (Suleiman 1990, p. 144). In this context, it will be discussed to what extent Carrington's cameo roles intertextually re-cycle recurring Surrealist tropes of anti-Catholic and anti-bourgeois satire (e.g., emblematic of Buñuel's cinematic work). At the same time, the analysis of Carrington's art directing will explore the intertextual and intermedial connections with her multifaceted oeuvre. Susan Aberth's monograph, Leonora Carrington: Surrealism, Alchemy and Art (2004), investigates, in depth, Carrington's development from early creative efforts towards artistic maturity during the artist's Mexican years. Before her self-exile to Mexico, Carrington's cinematic life included a traumatic breakdown and hospitalization in a mental institution in Spain after Ernst's incarceration as an enemy alien by the French and later by the Nazi regime in 1939. In 1941, Leonora Carrington escaped to New York with the help of her first husband, the Mexican diplomat and journalist Renato Leduc. A year later, the couple moved to Mexico City, where they separated amicably. In 1946, Carrington married the Hungarian photographer Emerico "Chicki" Weisz, with whom she had two sons, Gabriel and Pablo. In 2011, Carrington died in Mexico City as a renowned Mexican Surrealist, whose practice covered a plethora of artistic disciplines-painting, sculpture, tapestry, creative writing, theatrical experiments and cinematic involvements.
The interrelation between Carrington's work and the medium of film has nevertheless gained little attention within the proliferation of research perspectives on her oeuvre in the last decade. In this sense, this study attempts to reveal unknown aspects of Carrington's creative practice and artistic identity, which also contributes to understanding the development of Surrealism in Mexico.
As Gloria Orenstein points out, Antonin Artaud's Mexican pilgrimage in 1936 and André Breton's visit to Mexico in 1938, followed by the 1940 International Surrealist Exhibition in Mexico City, had a profound significance for the internationalization of the Surrealist movement (Orenstein 1975, p. 6). During the 1930s and the 1940s, under President Lázaro Cárdenas, Mexico opened its borders to the refugees and victims of the Spanish Civil War and the Second World War (Rivera 2010, p. 136). Thus, European artists, writers and intellectuals in exile, such as Surrealist filmmaker Luis Buñuel, poet Benjamin Peret, Leonora Carrington, painters Remedios Varo and Wolfgang Paalen, and poet/artist Alice Rahon, gradually became an intrinsic part of Mexico's cultural life, along with the Mexican artist Frida Kahlo and writer Octavio Paz. Although officially welcomed by the Mexican government, the European artist émigrés were initially viewed as foreign "colonizers" by the politicized Mexican avant-garde (Kunny 1996, p. 172). As Dawn Ades argues, the reception of surrealism in Latin America "has often been distorted by cultural nationalism and also needs to be disentangled from Magic Realism" (Ades 2010, p. 393). Ades considers that, in the context of Latin America, "surrealism has been accused of neo-colonialism, of being too fantastic, or not fantastic enough, too irrational, or not irrational enough" (Ades 2010, p. 395). The complex relationship between the European Surrealists in exile and the local Mexican avant-gardes is also embedded in the sharp distinction between the Surrealist marvelous as a 'surrealist fantasy' and the Latin American lo real maravilloso as a 'magic reality'-an opposition drawn by the Cuban writer Alejo Carpentier in 1949 (Ades 2010, p. 412). To a certain extent, the initially antagonistic relationship between the Mexican community and the European artist expatriates nevertheless reached dialogical and collaborative transformations in the aftermath of the 1940 International Surrealist Exhibition in Mexico City (Cruz Porchini and Ortega Orozco 2017). Carrington's associations with the Mexican intelligentsia in the 1950s and involvement with the Surrealist theatre group Poesía en Voz Alta (Poetry Out Loud) offered opportunities for collaborative projects and triggered her creative interest in theatrical forms (Kunny 1996, p. 174; Orenstein 1975, p. 6; Plunkett 2017, p. 74). Simultaneously, her close friendships and creative partnerships with Luis Buñuel and Chilean-born filmmaker Alejandro Jodorowsky, documented in their memoirs, My Last Breath (1982) and The Spiritual Journey of Alejandro Jodorowsky (2008), possibly initiated Carrington's artistic involvement with cinema.
Within the sketched-out conceptual framework and outlined art historical context, this research aims to expand the existing scholarship on Leonora Carrington's versatile work towards uncovering the intertextual and intermedial links between her transdisciplinary practice and the medium of film. By tracing multiple creative collaborations and artistic trajectories, the article also attempts to reflect on Carrington's artistic identity and her creative impact within the international avant-garde in Mexico.
Leonora Carrington's Cameo Roles-Subversive Intertextuality and Carnivalesque Expression
The recent exhibition, Leonora Carrington: Magical Tales (2018), organized by Mexico City's Museum of Modern Art (MAM), offers a specific focus and a thematic section on Carrington's cinematic collaborations. The retrospective claims that Leonora Carrington makes an uncredited appearance in Luis Buñuel's film Los olvidados (1950), which prompted her artistic involvements with cinema. This prompting hypothetically relates either to her alleged cameo in Los olvidados or to the episode that Leonora Carrington and Luis Buñuel share in There Are No Thieves in This Village. Francisco Peredo Castro points out the intricacies of discovering Carrington's personal motivation behind taking part in There Are No Thieves in This Village (1964) and A Pure Soul (1965). Castro describes Carrington's participation as an act of "fraternal solidarity with her intellectual friends, artists and film crews" (Castro 2018, p. 321). Furthermore, Castro suggests that Leonora Carrington's interest in the films could be ascribed to an empathic attitude towards the protagonists of both stories and the fact that they "alluded to deep complexities in the human soul" (Castro 2018, p. 328). From a different perspective, this article argues that Carrington's cameo roles could rather be studied within a network of intertextual references to her overall oeuvre and within the context of shared creative approaches among representatives of the 1960s Mexican intelligentsia, such as Luis Buñuel.
Buñuel's third Mexican film, Los olvidados (1950, The Young and the Damned), is an urban drama that represents a stark and realistic depiction of poverty and crime, incest and domestic abuse through the misfortunes of a group of destitute children in the Mexico City slums. Initially received as an insult to Mexican national identity, the film obtained critical acclaim after it premiered at the Cannes Film Festival in 1951 and Octavio Paz praised Buñuel's grasp of Mexican sensitivity (Acevedo-Muñoz 2003, p. 74). While Los olvidados functioned as a critique of moral decay in Mexican urbanized society, its cinematic language introduced a novel approach towards Mexican social reality that marked a turning point towards a "new wave" of cinematic expressions. Buñuel's Mexican films are central to understanding the two defining periods of Mexican cinema-the Classical, dominated by the representation and the construction of national myths, and the "New Cinema" of the 1960s (Acevedo-Muñoz 2003, p. 30). The New Cinema emerged as a cinephile group, called El Grupo Nuevo Cine, composed of young intellectuals and filmmakers who expressed criticism towards the state of Mexican national cinema and published a manifesto in 1961 that urged for the creation of a national cinematheque and an institution to teach filmmaking, as well as a network of cineclubs and specialist publications that would generate a wave of new directors, critics and informed spectators (Ramírez Berg 1992, p. 46; Flaherty 2016, p. 182). The Nuevo Cine collective was influenced by Buñuel's work and included film critics Carlos Monsivais and Emilio García Riera, cinematographer Rafael Corkidi and director Alberto Isaac, among others, whose films advocated creative collaborations with artists and émigré intellectuals, such as Leonora Carrington. The Nuevo Cine group opposed the bureaucratic/business structures in the official agencies and the cultural imperialism exerted by Hollywood, responsible for the mainstream Mexican film culture at the time. The artistic objective of this young generation of filmmakers was to establish a novel and honest approach towards Mexican social reality by drawing upon Italian neo-realism, the French New Wave, the new American directors, as well as Brazilian Cinema Novo and the emerging innovative and independent filmmaking by contemporary Cuban, Chilean and Bolivian directors (Mora 1997, pp. 44-45). The group's manifesto aimed to "ensure the filmmakers' freedom of expression and technical innovation outside the structures of the Mexican film industry [and] foster a democratizing cultural space within Mexico's cinema" (Baugh 2004, p. 26). Nuevo Cine's formative period between 1960 and 1965 advanced the emergence of a larger transformative project that continued in the years to follow-the New Mexican Cinema of the late 1960s and the 1970s. Also known as the cinema of apertura democrática (democratic liberalization), the New Mexican Cinema was a political project and an economic initiative of President Luis Echeverría Álvarez (1970-1976), who aspired to open up new markets for Mexican film-making (Treviño 1979, pp. 27-28). In the same vein, the New Mexican Cinema directors created "films that dealt frankly with social issues and that were more politically daring, more sexually explicit, and to a degree narratively and aesthetically experimental" (Ramírez Berg 1992, p. 29).
There Are No Thieves in This Village (1964, En este pueblo no hay ladrones), directed by Alberto Isaac, thematically and aesthetically resembles Buñuel's Los olvidados. The film represents the emerging turn towards "a 'new' Mexican cinema, bringing refreshment to a stagnant film industry," and it keeps up with Alberto Isaac's status as an avant-garde director and "his own philosophy of 'seeking some new thing'" (Schwartz 1997, p. 91). Based upon a story by the not yet famous Colombian writer and Nobel Prize winner, Gabriel Garcia Marquez, the film follows the experiences of the unemployed and unfaithful Damaso and his pregnant and devoted wife, Ana, set in a poor Mexican village. One night, Damaso breaks into the local pool parlor and steals the village's only billiard balls. The owner of the tavern takes advantage of the situation and claims the loss of an additional 200 pesos that had never been in his possession. Damaso's anonymous act initiates a chain of accusations, xenophobia and aggression targeting a foreigner as the criminal. The theft destroys the mundane life of the village, so that Damaso gradually loses enthusiasm to sell the billiard balls and turn his deed into a profitable business. The guilt felt by his wife, Ana, convinces him further to return the stolen items and restore peace. However, in the act of bringing back the billiard balls, Damaso gets caught and convicted of stealing the fictitious amount of money.
Shot in black and white and slowly paced, Alberto Isaac's debut film won the second prize in the 1965 First Contest of Experimental Cinema, launched a year earlier by the Sindicato de Trabajadores de la Producción Cinematográfica, or STPC (Union of Cinematic Production Workers). The competition publicly introduced some of the new directors who would later be incorporated into the New Mexican Cinema. As Jesús Salvador Treviño has argued, "the talent, themes and potential evident in such films as La formula secreta by Rubén Gámaz and En este pueblo no hay ladrones by Alberto Isaac, both high point winners in the contest, signaled that new cinema was indeed in the offing" (Treviño 1979, p. 27). There Are No Thieves in This Village became known for Isaac's avant-garde cinematic language and for the cameo performances by a number of representatives of the 1960s Mexican intelligentsia featured in the film. The cast includes Gabriel Garcia Marquez himself (in the first of only two cinematic appearances in his career), the esteemed writer Juan Rulfo, journalist and critic Carlos Monsivais, cartoonists Ernesto Garcia Cabral and Abel Quezada, film director Arturo Ripstein, as well as Luis Buñuel and Leonora Carrington-both of whom share the same episode. Buñuel performs a vigorous religious doublespeak as a priest, whose Sunday sermon is attended by a group of devout women, among whom the camera traces Leonora Carrington in the role of a pensive widow. The Sunday mass episode was specifically developed by Alberto Isaac and writer/film critic Emilio Garcia Riera for the cinematic adaptation of the original storyline. In the film, Marquez humorously appears as the ticket vendor of the local makeshift cinema, who sporadically falls asleep. Along similar lines, Buñuel and Carrington's cameos refer to a broader range of satirical and subversive anti-Catholic repertoires that recur throughout their individual Surrealist oeuvres.
Buñuel's sacrilegious performance as a priest resembles his favorite childhood game depicted in his memoir, My Last Breath (1982): "I used to play at celebrating Mass in the attic of our house, with my sisters as attendants. I even owned an alb, and a collection of religious artifacts made from lead." (Buñuel [1982] 1985, p. 12) Moreover, he recounts his love to disguise himself as a priest and "walk around the city-a felony punishable by five years in jail" (Buñuel [1982] 1985, p. 227). The described act resonates with an episode in Un Chien Andalou (1929, An Andalusian Dog)-after the famous eye-slitting scene, the main male character is seen bicycling through the streets in a nun-like attire. Educated by Jesuits and growing up opposite the town's church (also having a priest for an uncle), Buñuel establishes the use of religious tropes as a hallmark of his anti-Catholic satire. The recurring image of the priest-from the caricature of a religious congregation at the Coliseum evocative of imaginary Vatican Olympic Games in L'Age D'Or (1930, The Golden Age) to the hypocritical priests in his Mexican productions Nazarin (1959) and Simon of the Desert (1965, Simón del desierto)-has marked his subversive approach and attempt to expose the insincerity of the Church as an institution that reproduces bourgeois power structures. In this context, Buñuel's cameo appearance in There Are No Thieves in This Village can be understood as a version of his anti-Catholic subversion and a form of embodied satire.
Leonora Carrington's upbringing in the mores of Catholicism and her regular expulsions from Catholic convents, in a similar way, triggered her distrust of the Catholic Church. Her "antipathy would last a lifetime, fuelled later by Surrealism's anti-clerical stance, and would manifest itself in bitingly satirical depictions of priests in both her writings and artwork," as Susan Aberth argues (Aberth 2004, p. 18). In this regard, Tara Plunkett has discussed the artist's humor at the expense of the Catholic Church in several theatre plays authored by Leonora Carrington. El Santo Cuerpo Grasoso (The Holy Oily Body), co-written with Carrington's close friend, artist and collaborator Remedios Varo, is a theatre script of an unstaged play that was created circa 1947. Carrington and Varo's parody revolves around the invention of a 'holy' substance that reveals one's true soul when applied to the subject's buttocks. Likewise, Carrington's one-act play, The Invention of Mole (c. 1960), satirically employs the figure of an archbishop who claims that ordinary people have no right to perform holy deeds, such as the invention of the Mexican sauce, mole (Plunkett 2017, pp. 78-79).
In Subversive Intent: Gender, Politics and the Avant-Garde (1990), Susan Rubin Suleiman specifically analyzes the anti-patriarchal and anti-Christian subversion of Leonora Carrington's intertextual novel, The Hearing Trumpet, published in 1976. The protagonist-ninety-two-year-old Marian Leatherby, born in England but living in a country whose description resembles Mexico-is sent to a home for elderly ladies by her son Galahad and his family. In her new home, Marian herself is destined to succeed in a fantastic quest for the Holy Grail. In her discussion of The Hearing Trumpet, Suleiman suggests the figure of the 'laughing mother' as an extension of Hélène Cixous's Laugh of the Medusa (1975), and as a symbol of female avant-garde playfulness and feminist social critique, which resists patriarchal ideology (Suleiman 1990, pp. 141-80). Moreover, Carrington's feminist parody fuses the story of the nonagenarian Marian Leatherby (Carrington's alter ego, although she wrote the novel in her late thirties) with the framed narrative of Dona Rosalinda-the winking nun from a painting on the wall, who is a devotee of "the Goddess" and aims to combat the male-centric model of Christianity together with her homosexual friend, the bishop of Trêves les Frêles. As a caveat, it should be noted that Carrington's emancipatory depiction of elderly women, or crones, can also be discovered in her paintings, The Magdalens (1986) and Kron Flower (1987). Thus, Carrington's cameo performance as a widow intertextually relates to her literary and visual works.
As Suleiman notes further, Monty Python's version of the Grail legend-Monty Python and the Holy Grail (1975), which appeared at around the same time as The Hearing Trumpet's publication in England-ends with the demystification of the actors' staged fictional world and a return to reality. Contrarily, Carrington's own parody of Grail literature ends in "a quite unique (but still humorous) surreality" (Suleiman 1990, p. 177). Thus, within the same conceptual dimensions, Carrington's carnivalesque cameo performance as a widow attending Buñuel's satirical sermon in There Are No Thieves in This Village can be understood as an artistic gesture that extends her subversive intertextual practice towards the film medium.
Carrington also makes a carnivalesque cameo appearance in A Pure Soul (1965), directed by Juan Ibáñez and based on Carlos Fuentes' homonymous short story. The film traces the dark and complicated relationship of Claudia and her brother and lover, Juan Luis. Claudia acts as the narrator, whose intimate letters reveal that Juan Luis leaves Mexico City for a new job at the United Nations headquarters in New York, where he meets and falls in love with a girl, Claire, who resembles Claudia. While the couple expects a child, Claudia's jealousy escalates and prevents Juan Luis from committing. The siblings' well-off parents-Leonora Carrington appears in the role of a refined and conservative mother and wife-further insist on the lovers' separation. Consequently, Claire takes her own life, followed by Juan Luis, whose suicide leaves Claudia with regret for provoking both deaths.
A Pure Soul is the second of two medium-length episodes compiled under the title Los bienamados (1965, The Beloved). The first episode, Tajimara, directed by Juan Jose Gurrola, opens the topic of forbidden love and disenchantment in a modern world that Ibáñez's film takes up further. Ibáñez's contribution to Los bienamados, similarly to En este pueblo no hay ladrones, participated in the 1965 First Contest of Experimental Cinema and shared the third prize (Flaherty 2016, p. 188). A Pure Soul also marked the beginning of Ibáñez and Fuentes' collaboration, whose later film, Los caifanes (1967, The Outsiders), won international recognition for introducing innovative visual textures and soundscapes. The creative partnership between Juan Ibáñez and Carlos Fuentes reflects one of Nuevo Cine's innovative trends, "the encouragement of cinematic rendering of important Mexican and Latin American literary works," which is also tangible in Alberto Isaac's En este pueblo no hay ladrones as an adaptation of Gabriel Garcia Marquez's short story (Treviño 1979, p. 36). Thus, Carrington's cinematic mediations in both films entail shared creative approaches with the representatives and associates of the Nuevo Cine group.
The indirect creative collaboration between Carlos Fuentes and Leonora Carrington can also be traced in his contribution to the catalogue of Carrington's 1965 solo exhibition at the Anglo-Mexican Institute of Culture, where he describes her work as an "ironical sorcery" (Fuentes 1965, p. 5). As Fuentes states further, "all Leonora Carrington's art is a gay, diabolical and persistent struggle against orthodoxy, which Leonora conquers and disperses with imagination, always multiple and singular, an imagination which she communicates with a loving pride" (Fuentes 1965, p. 6). Similarly, Carrington's cameo performance as Claudia and Juan Luis' conservative mother in A Pure Soul signals her self-conscious and ironic imagination that aims to destabilize bourgeois and patriarchal mores by employing personal motifs and representing the "Self as Other"-categorized by Whitney Chadwick as an artistic strategy of female Surrealist self-representation. Chadwick has recognized that women Surrealist artists have reproduced themselves in a multiplicity of roles "within the signs of elaborately coded femininity" in order to conquer the social institutions-family, state, and church-that regulate the place of women within patriarchy (Chadwick 1998, p. 11). As Chadwick argues, "masking, masquerade, and performance have all proved crucial for the production of feminine subjectivity, through active agency" (Chadwick 1998, p. 22).
In this regard, in July 1962, the Iconographia Snobarium issue of the avant-garde Mexican journal S.NOB featured an anonymous photomontage portrait of Leonora Carrington, who was a regular contributor to the publication. "An oval vignette dominates the double-page spread with an ironic mood of staged formality and snobbish disdain-and yet, instead of Carrington's own visage, an unfamiliar masculine face scowls at the viewer," as Abigail Susik describes the composition (Susik 2017, p. 105). According to Susik, this satirical montage debunks iconicity as patriarchal and deliberately relates to Marcel Duchamp's photographic representation as his female alter ego, Rrose Sélavy (Susik 2017, p. 109). The oval photograph of Duchamp posing in women's clothes was taken by Man Ray circa 1920-1921 (Rubin 1968, pp. 17-19). Thus, the same Dadaist practice of fabricating personalities could be rediscovered in the prankish Iconographia Snobarium photo collage that appropriates Carrington's biography. The portrait ridicules her personal history as a daughter of a wealthy English textile merchant and a reluctant debutante who revolted against the patriarchal and bourgeois norms of her upper-class background-a motif that recurs in Carrington's painting, Self-Portrait/Inn of the Dawn Horse (1937-1938), and her short story, The Debutante (1937-1938), featured by André Breton in his Anthology of Black Humour (1940). While in The Debutante the artist represents herself as a young lady refusing to comply with the restrictions of imposed marriage, in A Pure Soul Carrington reverses the code and parodies her biographical stance by entering the role of a compliant mother and wife. The anti-patriarchal and counter-Catholic nuances of her role involve the episodes of strolling around accompanied by her serious husband and kissing the hand of a priest whom they meet. In this sense, Carrington's performance in A Pure Soul once again functions as an expression of the artist's subversive irony and embodies Suleiman's conceptualization of 'the laugh of the mother.' Carrington's cameo appearances in Alberto Isaac and Juan Ibáñez's films signal that her artistic identity is defined by multiple creative collaborations and shared creative approaches with the representatives of the 1960s Mexican intelligentsia. As Jonathan P. Eburne and Catriona McAra have argued, "Leonora Carrington reveals much about the very nature of the international avant-garde activity itself: experimental art and thought demanded persistent reflection on the vicissitudes of modern life" (Eburne and McAra 2017, p. 10). In this respect, Carrington's on-screen appearances can be understood as an extension of her creative experiments across disciplines and mediums of artistic expression.
Art Directing-Intertextual Translations between Carrington's Paintings and Cinematic Set Design
Leonora Carrington's practice of art directing, expressed in The Mansion of Madness (1973, La mansión de la locura), was introduced to the general public within the 2015 TATE Liverpool exhibition, Leonora Carrington: Transgressing Discipline. The retrospective aimed at uncovering the artist's multifaceted oeuvre and initiated curatorial and scholarly interest in the relation between Carrington's work and the medium of film. The Mansion of Madness is Juan López Moctezuma's directorial debut-an art horror loosely based on Edgar Allan Poe's short story, The System of Dr. Tarr and Professor Fether (1845). Created under the artistic supervision of Leonora Carrington and her son Gabriel Weisz (responsible for the realization of the designed sets), the film indirectly resonates with the title of Carrington's own story, The House of Fear (1938), and her memoir, Down Below (1943), which retells her traumatic experience of madness and hospitalization in a mental asylum in Santander, Spain, in 1940. Carrington's autobiographical narrative has often been read as a conceptual response to André Breton's novel, Nadja (1928), which, as Katharine Conley has argued, represents the Surrealists' romanticized perspective on female mental instability (Conley 1996, p. 22). Thus, Down Below can be recognized as an authentic and visceral version of the re-enactment and mimicry of insanity that the Surrealists practiced as a source of creative inspiration.
Juan López Moctezuma's perspective on the subject of madness follows the journey of the newspaper journalist Gaston LeBlanc to a peculiar mental sanatorium set in 19th-century France. Gaston returns from a long exile in America on a commission to report on the unconventional methods of the infamous Dr. Maillard, whose treatment encourages patients to express their innermost idiosyncrasies. Soon after arrival, Gaston reveals that the eccentric doctor's identity is the performed alter ego of Raul Fragonard-one of the asylum's inmates who, with a group of accomplices, has locked up the psychiatrists and overtaken the madhouse. Gaston and Eugenie, the daughter of the incarcerated actual Dr. Maillard, attempt a futile escape, which results in their being captured and sentenced to death. Meanwhile, the imprisoned former guards succeed in breaking free and stage a counter-revolt that leads to Gaston and Eugenie's release and Fragonard's dramatic end.
The Mansion of Madness' depictions of excessive human irrationality, mixed with elements of violence, religious iconoclasm and eroticism, build on the aesthetic language of Panic Theatre - a movement and artistic collective established by the Spanish playwright Fernando Arrabal, the Chilean-born director Alejandro Jodorowsky and the French artist Roland Topor in Paris in 1962. Inspired by Buñuel's Surrealist films, Antonin Artaud's Theatre of Cruelty and its predecessor the Grand-Guignol (the Parisian theatre of horror that staged Poe's The System of Dr. Tarr and Professor Fether in 1903), the movement was named after the Greek mythological god Pan and aimed at creating deliberately shocking grotesque comedies (Marwick 2002, pp. 78-80; Hand and Wilson 2016, pp. 1-9). In this context, The Mansion of Madness itself could be understood as a political grotesque that renders any system of control as suppressive and brutal. The film's representation of carnivalesque (in the Bakhtinian sense) anarchy and surrealist phantasmagoria attempts an artistic subversion of any form of oppression. Therefore, it can be noted that The Mansion of Madness conceptually corresponds to Leonora Carrington's subversive surrealism.
Juan López Moctezuma's involvement with the Panic collective can be traced back to Jodorowsky's full-length cinematic debut, Fando y Lis (1967) - based on Fernando Arrabal's play and produced by Moctezuma himself. This creative exchange of ideas and similar aesthetic endeavors continue in The Mansion of Madness, which employs the very same cinematographer (Rafael Corkidi) and composer (Nacho Méndez) who worked on Jodorowsky's cult film, El Topo (1969). In addition, both films share the same producer (Roberto Viskin), as well as some of the performers and actors appearing as extras in each of the productions. To a great extent, the aesthetic parallels between Moctezuma's and Jodorowsky's cinematic expressions could be ascribed to comparable visual compositions of the mise-en-scène influenced by Leonora Carrington's imagery. Moreover, the similarities between Jodorowsky's and Moctezuma's films involve each director's specific approach to re-using Buñuelian motifs and tropes.
The Mansion of Madness stars Claudio Brook as Dr. Maillard/Fragonard - an actor who regularly performed in Luis Buñuel's films. Brook plays the protagonist in the anti-Catholic satire Simon of the Desert (1965, Simón del desierto) and previously appeared in Viridiana (1961) and The Exterminating Angel (1962, El ángel exterminador). Simon of the Desert tells the story of a religious recluse and healer - a character based on the fifth-century Syrian saint Simeon Stylites (Saxton 2013, p. 420). In the opening scene, the ascetic Simon performs a miracle and restores the amputated hands of a robber - a motif that Moctezuma takes up, reverses and ridicules in the figure of the hedonistic Dr. Maillard and his unorthodox curing methods. Thus, Moctezuma employs the 'residues of meanings' and 'symbolic resonances' of Claudio Brook's Buñuelian role - a creative strategy that Jeffrey Bussolini has defined as the intertextuality of casting, in relation to Roland Barthes and Julia Kristeva's theorizations on the translation of significations across texts (Bussolini 2013). The Mansion of Madness' intertextual network of references expands into Moctezuma's female vampire film Alucarda (1977, Sisters of Satan), which features Claudio Brook in the role of the rational physician, Dr. Oszek, who propagates science instead of religion in curing the diabolic Alucarda and her resurrected female friend of supernatural powers and demonic possessions. Brook also appears in the horror drama Cronos (1993) by Guillermo del Toro, who has commented on the formative effect of Moctezuma's films (and specifically Alucarda) on his own creative process (Del Toro 2004). As Rob Stone has commented, "horror had always had its claws into Surrealism, with the more preposterous representations of monsters and ghouls providing a superficial frisson of contact with the unknown" (Stone 2007, p. 32). A corresponding relationship between the horror genre and Surrealism can be traced within Carrington's work, though rather as an expression of her creative affinity with esotericism.
The Mansion of Madness owes its Surrealist nuances and mystic ambience to Leonora Carrington's artistic supervision, which reflects her own interests in alchemy, magic, astrology and occult practices such as Tarot and the I Ching (the Chinese Book of Changes). As Jonathan P. Eburne has noted, Carrington's esoteric surrealism is "multifariously allusive" - it simultaneously draws from "continental philosophy and contemporary cosmology, from the writings of Mabille, Carl Jung, George Gurdjieff, P.D. Ouspensky and Robert Graves, as well as from fairy tales, alchemical treatises, Buddhist and pre-Columbian religion and Celtic myth" (Eburne 2017, p. 142). Moreover, Ernest Schonfield has argued that via the mixture of various mythological traditions, Carrington establishes her own form of mythopoesis across paintings and literary texts (Schonfield 2010, pp. 250-55). The famously recognized influences on her visual expression include Hieronymus Bosch and Pieter Bruegel the Elder, while her texts borrow from Jonathan Swift and Lewis Carroll, as Ara H. Merjian has discussed further (Merjian 2017, pp. 39-56). Seán Kissane has suggested certain visual parallels between Carrington's work and William Blake's illustrations of Dante Alighieri's The Divine Comedy (1320), dating from the 1820s (Kissane 2014, p. 55). Carrington's affinity with Blake had previously been noted by her close friend and art collector, Edward James (Ades 2014, p. 104). It could also be acknowledged that Leonora Carrington's hybrid creatures, often featuring anthropomorphic bodies with floral or zoomorphic heads, such as in the triptych Took My Way Down, Like a Messenger, to the Deep (1977), to a certain extent resemble H.P. Lovecraft's drawings of fantastic monsters. As a devoted Lovecraftian, Guillermo del Toro has acknowledged the influence of Lovecraft's visuals on his own cinematic imagination (Del Toro 2013; Del Toro 2016) - a connection that signals and explains possible allusions between Del Toro's and Carrington's visual systems.
The hypothetical interrelation between Carrington's imagery and Del Toro's artistic imagination has been suggested by Francisco Peredo Castro, although he recognizes no concrete published evidence of Carrington's influence on Del Toro's films. Nevertheless, Castro considers that "various associations are plausible between Leonora Carrington's mythical/fantastic universe and her artwork in general, in The Mansion of Madness, as well as in some of the elements cited in Guillermo del Toro's film [Pan's Labyrinth (2006)]" (Castro 2018, pp. 339-40). According to Castro, the parallels between Pan's Labyrinth and Carrington's work involve the labyrinth itself, the stone arch/doorway, the tree as a portal between fantasy and reality, as well as the character of the faun (a mythological creature) and the magical rituals performed by the protagonists (Castro 2018, pp. 339-40). Castro's analysis identifies Carrington's impact in The Mansion of Madness within the "very peculiar initial credit sequence, with deformations and coloring of the imagery (in red), […] the forest, the fog, and the white horses that advance among beams of light" (Castro 2018, p. 331). From another perspective, it can be observed that the mise-en-scène of The Mansion of Madness features ruined gallery spaces and hollowed-out industrial settings that evoke the architectural elements and color tonality of Leonora Carrington's paintings Samain (1951), The Flying Ur Jar (1953), El Rarvarok (1963) and Forbidden Fruit (1969). Carrington's earlier attempts at connecting her creative output to film art direction date to the 1946-47 Bel Ami International Art Competition. The contest and the exhibition organized around it presented the works of eleven distinguished American and European artists, among them Leonora Carrington, Salvador Dalí, Paul Delvaux, Max Ernst and Dorothea Tanning. The exhibited paintings on the given topic of The Temptation of St. Anthony competed for the final selection of a single tableau to "serve a dramatic purpose" in Albert Lewin and David L. Loew's motion picture The Private Affairs of Bel Ami (1947), based on Guy de Maupassant's 1885 novel (Lewin 1947, p. 5). Albert Lewin's previous movies, The Moon and Sixpence (1942) and The Picture of Dorian Gray (1945), followed a similar approach of using paintings as central components of the diegesis and the mise-en-scène. The renowned jury, including Marcel Duchamp, Alfred H. Barr and Sidney Janis, awarded Max Ernst's artistic interpretation of the subject (Lewin 1947, pp. 5-8). As Susan Aberth has observed, Leonora Carrington's entry to the competition - The Temptation of St. Anthony (1947) - borrowed many of its compositional elements from Hieronymus Bosch's homonymous painting (c. 1500s) (Aberth 2004, p. 70). Specifically, in The Mansion of Madness, the influence of Carrington's spatial imagination, which inherits Bosch's and Bruegel's complex multi-character compositions, can be grasped within the framing of the group scenes. Thus, it can be concluded that Carrington's creative practice of art directing establishes intertextual and intermedial references between her paintings and cinematic scenography.
Costume Design: Intertextual and Intermedial Correspondences between Carrington's Theatrical Experiments and Film
Carrington's intertextual symbolic system, created across her visual and literary expressions, becomes evident in the costume design of the characters inhabiting The Mansion of Madness. The most bizarre representatives of the sanatorium include a malicious spiritual leader in a red cape with spiral rope embroidery on the chest, a bird-like man named Mr. Chicken who lives in a henhouse and behaves like a fowl, and a plethora of hysterical patients and heroines who perform suggestive ritual dances and puppetry shows for the entertainment of Dr. Maillard. The costume of the mystic villain bears formal similarities to Alfred Jarry's personage Ubu Roi (King Ubu), depicted in a woodcut by Jarry as an obese man wearing a cloak with a large spiral ornament on the front. Alfred Jarry's satirical farce Ubu Roi (first staged in 1896), a stark critique of bourgeois morality, was formative for Dada and Surrealism, Antonin Artaud's Theatre of Cruelty and the Theatre of the Absurd, and Fernando Arrabal's Panic movement (Knopf 2001, pp. 77-80). Max Ernst (among other Surrealist painters, such as Joan Miró, Dora Maar and Sebastian Matta) creatively re-appropriated Ubu Roi in the painting Ubu Imperator (1923) (Hubert 1978, pp. 259-78). Moreover, in 1937, Ernst designed the sets for the play's continuation, Ubu in Chains/Ubu Enchaîné (Schumacher 1984, p. 112). Possibly, Ernst created the sets with Leonora Carrington's assistance, as the recent exhibition catalogue Surrealist Women and Their Connection with Catalonia points out (Galeria Mayoral 2017, pp. 110-11). This potential creative collaboration could be understood as the beginning of Carrington's experiments with scenography and costume design. Often Leonora Carrington's personages, who traverse the boundaries between her paintings and written narratives, emerge in distinctive and extraordinary outfits that render the characters' temperaments and spiritual states. In a similar manner, the costume design for The Mansion of Madness is interwoven with the dramatic development of the protagonists - an interconnection that is most tangible in the identity of Mr. Chicken. Within Carrington's visual grammar, birds are often depicted as black crows (reminiscent of Edgar Allan Poe's 1845 poem The Raven), such as in the paintings Self-Portrait in Orthopedic Black Tie (1973), Tribeckoning (1983) and Crow Catcher (1990). At the same time, Mr. Chicken's attire indirectly resonates with Carrington's Portrait of Max Ernst (1939), which shows Ernst in a mantle of crimson feathers as a reference to his chosen animal alter ego, Loplop - The Bird Superior. Comparable bird-like costumes also appear in Carrington's short stories. In The House of Fear (1938), the mistress of the castle wears a fluttering gown "made of live bats sewn together by their wings" (Carrington [1938] 1988, p. 31). In The Sisters (1939), the main heroine Juniper is a female vampire (like Moctezuma's Alucarda) and an avian creature - "feathers grew from her shoulder and around her breasts. Her white arms were neither wings, nor arms" (Carrington [1939] 2017). Thus, it can be suggested that, in an intermedial order, Carrington employed certain elements from her paintings and fiction in the visual construction of The Mansion of Madness' characters.
Moctezuma's The Mansion of Madness also features another recurring motif from Carrington's work - the white horse, the artist's totemic animal and spiritual alter ego. As Susan Aberth has argued, horses occupy a central place in Carrington's overall oeuvre, while the importance of the white horse as a feminine archetype in her work can be ascribed to Carrington's identification with the Celtic deity Epona (supposedly linked to the folktale of Lady Godiva) after reading Robert Graves' 1948 Celtic study The White Goddess (Aberth 2004, pp. 32-33). In Self-Portrait/Inn of the Dawn Horse (1937-1938) and the short story The Oval Lady (1937-1938), the white horse represents Carrington's rebellious nature and desire to break free from social, sexual and gendered restrictions. In The Oval Lady, the protagonist Lucretia is able to sporadically metamorphose into a white horse, while she also enjoys playing with a rocking horse named Tartar - a game that is strictly forbidden by her suppressive father. He punishes his daughter's frivolous behavior and unwillingness to obey the rules by burning down her favorite toy, which suffers the flames like a real animal. The very same plot is re-contextualized in Carrington's theatre play, Penelopé, which she wrote in 1946. Carrington also created sets and costumes for this script, which was later staged by Alejandro Jodorowsky in 1957. Marina Warner comments that the theatre adaptation of the story "ends on a new note of savage optimism," since the heroine escapes on Tartar's back and the repressive father commits suicide (Warner [1987] 1989). Furthermore, it should be noted that the play's intertextual title - referencing the Greek myth of Ulysses's wife as a symbol of marital fidelity - already suggests a subversion of patriarchal norms.
The visual aspects of Carrington's stage design for the performance directed by Jodorowsky feature elements of Tarot cards from the major arcana, such as The Hanged Man, The Magic Wand and The World. Circa 1950, Carrington created her own Tarot deck that represents a synthesized version of her visual style and recurring themes. In his autobiography, The Spiritual Journey of Alejandro Jodorowsky (2008), the former mime and future cult director recounts several ritualistic meetings with Leonora Carrington, who at that time served as his spiritual guide (Jodorowsky 2008, pp. 24-42). Jodorowsky acknowledges Carrington's esoteric influence in sharing with him the Tarot symbolism, which has become the basis for his popular philosophy and practice of psychomagic (Barton-Fumo 2012; Jodorowsky [2009] 2018). In parallel, Juan López Moctezuma's employment of Tarot-like archetypes could be mapped within The Mansion of Madness' characters and plot. Jodorowsky and Carrington's collaboration also includes the unstaged short operetta, La princesa Araña: asquerosa operetta Surrealista para niños mutantes (1957-1958; The Spider Princess: A Disgusting Surrealist Operetta for Mutant Children), as Abigail Susik points out. Moreover, Susik has suggested that Carrington's works have influenced Jodorowsky's films (Susik 2017, pp. 120-25). Drawing upon this observation, it can be argued further that The Holy Mountain (1973) in particular, described by Jodorowsky as "an anthology of symbols" (Cobb 2007, p. 275), establishes aesthetic parallels with Carrington's oeuvre by employing certain elements from her mythopoesis and visual expression.
Within her collaborations with the theatre group Poesía en Voz Alta, Leonora Carrington also created decors and costumes for Octavio Paz's play La fille de Rappaccini (1956), based on Nathaniel Hawthorne's short story, Rappaccini's Daughter (Orenstein 1975, p. 73). In addition, her theatre-related artistic output includes designs for two Shakespearean productions in Mexico - papier-mâché masks for The Tempest (1959) and costume drawings for Much Ado About Nothing (1962), presented within the TATE Liverpool retrospective, Leonora Carrington: Transgressing Discipline (2015) - and bricolage masks for her eco-feminist play, Opus Siniestrus (1969), staged for the first time within the recent exhibition, Leonora Carrington: Magical Tales (2018), at the Museum of Modern Art (MAM) in Mexico City. Both exhibitions clearly uncover Carrington's multifaceted practice in relation to the mediums of theatre and cinema, evoking a network of intermedial and intertextual translations between her creative expressions across artistic disciplines. Moreover, the employed exhibition narratives and specific curatorial methods shape the discursive construction of Leonora Carrington's artistic identity - a research perspective that deserves further investigation in its own right.
Conclusions
The specific objective of this article has been to explore the under-researched intertextual and intermedial translations between Leonora Carrington's creative practice and the medium of film. It has been argued that Carrington's cinematic mediations, similar to her visual artworks and literary texts, fuse autobiographical elements with fictional motifs towards the creation of a complex intertextual symbolic system. In this respect, Carrington's cameo roles in There Are No Thieves in This Village (Alberto Isaac 1964) and A Pure Soul (Juan Ibáñez 1965) express self-conscious irony and function as forms of artistic subversion. In There Are No Thieves in This Village, Leonora Carrington shares an episode with Luis Buñuel, who sarcastically performs as a sermonizing priest, while she plays an attentive widow - a creative expression that recycles the recurring Surrealist tropes of anti-Catholic and anti-bourgeois satire and employs the subversive intertextuality (in Suleiman's terms) of Carrington's novel The Hearing Trumpet (1960s). Next, in A Pure Soul, Leonora Carrington emerges as a conservative mother and wife - a cinematic appearance that potentially parodies the autobiographical motifs of her short story The Debutante (1937-1938) and her self-portrait Inn of the Dawn Horse (1937-1938). Thus, Carrington masquerades the "Self as Other" - an artistic strategy which, according to Whitney Chadwick, destabilizes the social institutions (family, state and church) that regulate the place of women within patriarchy (Chadwick 1998). Therefore, Carrington's cameo performances could be understood as artistic gestures that extend her subversive intertextuality and experimental practice towards the film medium.
The analysis also addressed Leonora Carrington's creative approach towards art directing, which has been contextualized within a network of intertextual and intermedial links to her paintings, literary works and theatrical experiments with scenography and costume design. The Surrealist art-horror film The Mansion of Madness (Juan López Moctezuma 1973), created under Carrington's artistic supervision, translates visual motifs from her esoteric surrealism and pictorial compositions. The mise-en-scène of The Mansion of Madness reflects Carrington's spatial imagination, which inherits Hieronymus Bosch's complex multi-character compositions. Furthermore, it was suggested that the early tangential relations between Carrington's visual expression and film art direction could be dated to the 1946-47 Bel Ami International Art Competition, when she submitted the Bosch-inspired painting The Temptation of St. Anthony (1947) for Albert Lewin and David L. Loew's motion picture The Private Affairs of Bel Ami (1947).
It has also been argued that Carrington's intertextual symbolic system across visual and literary works defines the costume design of the characters in The Mansion of Madness. The discussion drew examples from her personages, who often emerge in extraordinary and distinctive outfits and traverse the boundaries between her paintings and written fiction. Similarly, the attire of the fowl-like protagonist in The Mansion of Madness resonates with Carrington's Portrait of Max Ernst (1939) as his animal alter ego, Loplop - The Bird Superior, and with the comparable bird-like costumes and characters that appear in Carrington's short stories The House of Fear (1938) and The Sisters (1939). The analysis also showed that Moctezuma's film features the recurring motif of the white horse - Carrington's animal alter ego - which intertextually connects her self-portrait Inn of the Dawn Horse (1937-1938) with her short story The Oval Lady (1937-1938). Moreover, the intertextual and intermedial translations of this motif expand into the plot and the stage design of Carrington's theatre play, Penelopé (1946), staged by Alejandro Jodorowsky in 1957 - a creative collaboration that allows for further research on the aesthetic parallels between Carrington's oeuvre and Jodorowsky's films.
While Carrington's creative practice does not directly involve filmmaking as a form of artistic expression, her collaborations on and off screen signal the necessity of uncovering and understanding the artist's cinematic involvements. Thus, this research aimed at revealing undiscovered aspects of Leonora Carrington's transdisciplinary work and the production of her artistic identity in relation to the medium of film.
"year": 2019,
"sha1": "c0c6061f27f5ce0defc6729521e27421c4618816",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-0752/8/1/11/pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "bce71da48e9325de25ff22f05812ebc89ec00a5f",
"s2fieldsofstudy": [
"Art"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Unique protein interaction networks define the chromatin remodelling module of the NuRD complex
The combination of four proteins and their paralogues, including MBD2/3, GATAD2A/B, CDK2AP1 and CHD3/4/5, which we refer to as the MGCC module, forms the chromatin remodelling module of the nucleosome remodelling and deacetylase (NuRD) complex. To date, the mechanisms by which the MGCC module acquires paralogue-specific function and specificity have not been addressed. Understanding the protein-protein interaction (PPI) network of the MGCC subunits is essential for defining the underlying mechanisms of gene regulation. Therefore, using pulldown followed by mass spectrometry analysis (PD-MS), we report a proteome-wide interaction network of the MGCC module in a paralogue-specific manner. Our data also demonstrate that the disordered C-terminal region of CHD3/4/5 is a gateway for incorporating remodelling activity into both the ChAHP (CHD4, ADNP, HP1γ) and NuRD complexes in a mutually exclusive manner. We define a short aggregation-prone region (APR) within the C-terminal segment of GATAD2B that is essential for the interaction of CHD4 and CDK2AP1 with the NuRD complex. Finally, we also report an association of CDK2AP1 with the nuclear receptor co-repressor (NCOR) complex. Overall, this study provides insight into the possible mechanisms through which the MGCC module can achieve specificity and diverse biological functions.
The domains responsible for mediating intra-NuRD architecture are being uncovered through careful biochemical analysis. Previous structural studies have documented that two HDAC molecules are activated by homo- or heterodimerisation of two molecules of MTA through ELM2-SANT domains [19]. MTA1/2 and MTA3 provide two and one binding regions, respectively, within their C terminus for RBBP proteins to facilitate their access to histone tails to possibly remove acetyl marks from lysine residues [9,10]. The bromo-adjacent homology (BAH) domain of MTA proteins is well conserved but is structurally and functionally poorly characterised [20]. MBD connects to the N-terminal region of the GATAD2 protein through a coiled-coil interface [21]. GATAD2 uses the GATA zinc finger domain in the C-terminal region to facilitate CHD binding [7,16].
Focusing on the MGCC module, MBD2 and MBD3, which are highly homologous and mutually exclusive within the NuRD complex [22], structurally bridge the GATAD2/CDK2AP1/CHD subunits with the MHR module to form the intact NuRD complex [11,12]. Although MBD2 and MBD3 are highly similar, their affinity and selectivity towards DNA are markedly different [23,24]. MBD3 has been extensively studied, but the functions of the MBD2 isoforms MBD2a and MBD2b are poorly understood. MBD2a contains an extra 148-amino-acid Gly- and Arg-rich domain at the N-terminal end as compared to MBD2b [25,26]. Within GATAD2, the C-terminal portion is responsible for the interaction with CHD proteins [7,16]. However, the minimal region mediating this interaction has remained elusive. To date, CDK2AP1 is the least studied subunit of NuRD biochemically and is possibly overlooked in some studies because of its small size. CHD4 contains several functional domains essential for ATP-dependent nucleosome remodelling. The N-terminal region of CHD4 binds DNA via an HMG-box-like domain [27]. CHD4 can bind to histone tails (such as H3K4 and trimethylated H3K9) via two plant homeodomain (PHD) zinc fingers and remodels nucleosomes via its ATPase motor domain assisted by tandem chromodomains (CHD) [28]. The C-terminal region of CHD4, however, is functionally and structurally the least characterised part of CHD4.
Recently, the NuRD subunit CHD4 was identified as a component of the ChAHP complex, which also contains the ADNP and HP1γ (otherwise known as CBX3) subunits [29]. ChAHP modulates chromatin organisation by neutralising loops formed by CTCF at specific genomic regions and plays an important role in embryonic neural development and heterochromatin organisation [30]. ADNP is the requisite subunit, without which the ChAHP complex cannot form [29]. How CHD4 can be independently involved in two different chromatin-binding complexes and whether CHD3 and CHD5 can also form stable complexes with ADNP have not been resolved. The minimal domains for these interactions are also unknown.
Based on the above understanding, we set out to address the following: (a) what proteins interact with MBD2 isoforms? (b) What is the minimal region in GATAD2 proteins that mediates their interaction with CHD family members? (c) Is CDK2AP1 an exclusive subunit of NuRD or is it found in other complexes? And (d) what regions in CHD4 and ADNP mediate their interaction? We have sought to address these questions by focusing on the interaction network of the MGCC module of NuRD and core component of the ChAHP complex, ADNP. We define novel factors and complexes that can interact with paralogues of the NuRD subunits. We also refine minimal interacting domains between certain subunits and isoforms of the NuRD MGCC module. The importance of these results lies in providing clarification of how the MGCC module functions to co-ordinate diverse cellular functions.
Results
The domain architecture of the NuRD and ChAHP complexes is depicted in Fig. 1A, which shows the known domains and sites of interaction. Domains responsible for mediating intra-NuRD subunit interactions that have not previously been described or characterised are highlighted with a question mark (Fig. 1A).
MBD2 isoforms and MBD3 have both mutual and unique binding partners
Initially, we focused on the MBD subunits, which bridge the MGCC to the MHR module. Relative label-free quantification (LFQ) analysis of MBD2a, MBD2b and MBD3 pulldown followed by mass spectrometry analysis (PD-MS) data demonstrates that all three MBDs significantly copurify the canonical subunits of the NuRD complex (Fig. 1B-D). Both MBD2a and MBD3 mutually co-immunoprecipitated a substantial number of possible new interactors involved in chromatin biology, transcriptional gene regulation and RNA processing (Fig. 1B,C). MBD2b had comparatively few interactors, but significantly enriched the canonical NuRD subunit CDK2AP1 (Fig. 1D). Interestingly, CDK2AP1 was significantly enriched only with the MBD2b isoform, not with MBD2a or MBD3. This shows the preference of CDK2AP1 for binding to the MBD2/NuRD complex but not MBD3/NuRD. We also identified other proteins specifically enriched with each MBD family member. For example, the MBD2a isoform, which contains an N-terminal Gly- and Arg-rich domain, captures numerous proteins, including the chromatin remodelling proteins SMARCA4 and SMARCA5 as well as the arginine N-methyltransferase enzyme PRMT1 (Fig. 1E). Evidence for arginine methylation of MBD2a [31] suggests a possible mechanism by which the function of the MBD2/NuRD complex is regulated. Interestingly, ZNF219, a transcriptional repressor that is a known interactor of the NuRD complex, was only pulled down with MBD2b (Fig. 1D,E) [6]. Given that MBD2b is not the major isoform of MBD2, this supports the substoichiometric nature of ZNF219 noted previously [6] and possibly that of other noncanonical partners of the NuRD complex that have been observed. Our data reveal that the two MBD2 isoforms show distinct molecular interactions and could form NuRD complexes associating with distinct sets of partner proteins. Hence, these diverse NuRD subspecies might perform different gene regulatory activities at different loci.
A 40-residue region in the GATAD2B C terminus is important for connecting CHD4 to the NuRD complex

Previous biochemical studies have demonstrated that the C-terminal region (residues 276-593) of GATAD2 proteins is necessary for binding to CHD4 [7,16]. Using GATAD2B as an exemplar, we examined whether other proteins can compete with CHD4 for binding to GATAD2B and then defined the minimal region that mediates this interaction. LFQ analysis of proteins copurified with full-length GATAD2B showed marked enrichment of the canonical NuRD subunits (Fig. 2A). We also observed significant enrichment of auxiliary proteins important for chromatin biology and gene repression, namely BEND3 and KCRM. PD-MS of the N-terminal half of GATAD2B (GATAD2B-N, residues 1-276) revealed that all canonical NuRD subunits except for CDK2AP1 and CHD4 were enriched (Fig. 2B).
On the other hand, using the GATAD2B C-terminal region (GATAD2B-C) as bait, CHD4 was the only NuRD subunit that preferred the C-terminal portion of GATAD2B for binding (Fig. 2C); it was also the most highly enriched protein in this experiment across the whole proteome. Examination of intensity-based absolute quantification (iBAQ) values across all GATAD2B bait replicates revealed that all three replicates of full-length GATAD2B and two replicates of GATAD2B-C, but not GATAD2B-N, had high intensity values for CDK2AP1 (Fig. S1), indicating the C-terminal GATAD2B binding preference of CDK2AP1.
To narrow down the CHD4-binding region in the C terminus of GATAD2B, residues 387-427 were deleted from GATAD2B-C (herein named GATAD2B-CDel) and PD-MS was performed. This pulldown showed marked depletion of CHD4 (but not of other NuRD subunits) compared to the wild-type GATAD2B-C (Fig. 2D). Notably, this deletion did not disrupt the interaction of other NuRD subunits. To corroborate this finding, we performed co-immunoprecipitation using FLAG (DYKDDDDK)-tagged GATAD2B-C or GATAD2B-CDel versus a human influenza haemagglutinin (HA, YPYDVPDYA)-tagged C-terminal CHD4 construct (HA-CHD4-C, residues 1230-1912) in an in vitro translation (IVT) system. The IVT experiments confirmed that the GATAD2B-CHD4 interaction is direct and that residues 387-427 are necessary for the integrity of this interaction (Fig. 2E).
The GATAD2 interaction interface with CHD4 includes an aggregation-prone region
Protein-protein interaction (PPI) interfaces typically contain high proportions of hydrophobic residues. These interfaces can display characteristics of aggregation-prone regions (APRs) with fibril-forming capacities. Analysis of the GATAD2A and GATAD2B protein sequences using the TANGO aggregation prediction algorithm [32] revealed a number of mutually exclusive as well as overlapping APRs (Fig. 3A). The most prominent of these was a small and highly similar stretch of seven hydrophobic and aliphatic amino acids in GATAD2A (residues 384-390, FIYLVGL) and GATAD2B (residues 388-394; Fig. 3A,B). We designed peptidomimetics containing the 7-residue APRs from GATAD2A and GATAD2B, conjugated to an 11-residue portion of the HIV-1 Tat protein for cell permeability, referred to herein as APR-A and APR-B, respectively. We also designed APRs flanked by gatekeeper glutamic acid residues, named APR-AGK and APR-BGK. We first tested the impact of these peptides on K562 cell viability compared to a Tat-conjugated control peptide comprising seven alanine residues (CTRL). After dose-dependent addition of the APR-A, APR-B, APR-AGK, APR-BGK and CTRL peptides, K562 cell proliferation was analysed after 48 h by MTT (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide) assay. APR-A and APR-B induced cell death with IC50 values of ~30 µM, whereas the CTRL peptide had no effect (Fig. 3C). Interestingly, the addition of the flanking gatekeeper glutamic acid residues (Fig. 3B) lessened the impact of both peptides on K562 cell viability, most likely as a result of disrupting the β-aggregation tendency found in the GATAD2 proteins (Fig. 3C).
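To make the dose-response readout concrete, the sketch below shows how an IC50 of this kind is typically estimated with a four-parameter logistic fit. It is a minimal illustration: the dose series, viability fractions (normalised to the untreated control) and starting parameters are invented for demonstration and are not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, top, bottom, ic50, hill):
    """Four-parameter logistic: viability as a function of peptide dose."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

# Hypothetical dose series (uM) and viability normalised to untreated CTRL
doses = np.array([1.0, 3.0, 10.0, 30.0, 100.0])
viability = np.array([0.98, 0.90, 0.70, 0.45, 0.12])

params, _ = curve_fit(four_pl, doses, viability, p0=[1.0, 0.0, 30.0, 1.0])
top, bottom, ic50, hill = params
print(f"Estimated IC50: {ic50:.1f} uM (Hill slope {hill:.2f})")
```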
To confirm whether these peptidomimetics were inducing protein aggregation, we titrated the APR peptidomimetics into K562 total cell lysates (after sonication and before centrifugation). Addition of APR-A and APR-B resulted in the aggregation and depletion of soluble endogenous GATAD2A and GATAD2B proteins, whereas the control peptidomimetic had no impact on GATAD2 protein solubility (Fig. 3D). Due to the similarity between the GATAD2A/B APR regions (Fig. 3B), APR-A and APR-B depleted GATAD2B and GATAD2A equally, indicating no specificity for a particular GATAD2 protein (Fig. 3D). In addition, we did not see significant differences between the abilities of the APR and APR-GK peptides to deplete specific GATAD2 proteins, as measured by western blots. We next performed FLAG-GATAD2B PD-MS in the presence of 50 µM APR-B or CTRL to determine whether the APR mimetic peptides could directly interfere with any PPIs that require the APR (Fig. 3E). We observed a significant loss of the NuRD subunits CDK2AP1 and CHD4 in the presence of the APR-B peptide compared to the CTRL, whilst no significant change was observed for GATAD2B and other NuRD subunits (Fig. 3E). We also examined the total number of peptides detected for all NuRD subunits to ensure that the loss of CHD4 and CDK2AP1 was not due to a reduction in GATAD2B or other subunits of the NuRD complex. Interestingly, we saw no significant difference in the total number of unique peptides (99 vs 97) for GATAD2B and the NuRD subunits other than CHD4 and CDK2AP1 (Fig. 3F). In contrast, the numbers of unique peptides for CHD4 and CDK2AP1 decreased from 113 to 12 and from 7 to 0, respectively (Fig. 3F). Based on these results, we conclude that APR-B competes away CHD4 and CDK2AP1 from binding to the NuRD complex.
CDK2AP1 interacts with both NuRD and NCOR complexes
CDK2AP1 is a 12-kDa protein involved in cell cycle regulation and is the least defined subunit of the NuRD complex. Pulldown of a CDK2AP1-GFP fusion previously demonstrated that CDK2AP1 is a canonical component of the NuRD complex [33]. To corroborate these findings, we used the smaller (~1 kDa) FLAG tag to immunoprecipitate CDK2AP1, and our PD-MS confirmed the enrichment of canonical NuRD subunits using medium-stringency washes (Fig. 4A). Notably, we also observed enrichment of nuclear receptor co-repressor (NCOR)1, NCOR2, TBL1X, TBL1R, GPS2 and HDAC3, which are all canonical subunits of the NCOR complex (Fig. 4A).
An LFQ intensity-based heatmap of the NCOR subunits detected in FLAG-CDK2AP1 versus CTRL samples confirmed that the interaction was not mediated by the beads or the FLAG tag (Fig. 4B). This result is supported by the number of unique NCOR complex peptides obtained, as well as the low representation of these proteins in databases of common mass spectrometry (MS) contaminants (Table S1). With high-stringency washes (500 mM NaCl), the interaction of CDK2AP1 with CHD4 and the GATAD2 proteins in particular was not abrogated, suggesting that this is a high-affinity, direct interaction (Fig. 4C,D). However, at 500 mM NaCl, all interactions of CDK2AP1 with the NCOR complex subunits were lost, suggesting that this interaction might be of low-to-medium affinity (Fig. 4C). Interestingly, the RNA-binding protein QKI, which is involved in mRNA stability, translation and splicing, was also significantly enriched in the CDK2AP1 PD-MS, even with stringent washing conditions (Fig. 4A,C). The CDK2AP1-QKI interaction appears strong and possibly direct, but it occurs independently of the MGCC module, as it was not detected in the other PD-MS experiments in this study.
The C-terminal end of CHD proteins facilitates engagement with the NuRD and ChAHP complexes
Our previous in vitro co-IP studies of NuRD subunits revealed that the CHD4 C terminus (aa 1230-1912) interacts with GATAD2 proteins [7]. In addition, a recent report showed that CHD4 also interacts with the ChAHP complex [16]. Given that the NuRD complex can also incorporate CHD3 or CHD5, we investigated the interactome networks mediated through the C termini of all NuRD-associated CHD proteins. Accordingly, FLAG-tagged CHD3-C, CHD4-C and CHD5-C complexes were purified and analysed by MS. We observed strong enrichment of most canonical NuRD subunits (Fig. 5A-C). Interestingly, CDK2AP1 enrichment was seen only with CHD4-C, not with CHD4-N or CHD4-M (Figs 5D and S2), indicating that CDK2AP1 only recognises the C-terminal half of CHD4 (Fig. 5A).
PD-MS performed with the CHD4 N-terminal region (CHD4-N) also showed enrichment of NuRD subunits, suggesting that CHD4-C is not the sole region responsible for engagement with NuRD (Fig. 5D). However, the enrichment of NuRD subunits was notably greater with CHD4-C than with CHD4-N (Fig. 5E). We also observed enrichment of CSK2B (CSNK2B), CSK21 (CSNK2A1) and CSK22 (CSNK2A2), well-known subunits of the casein kinase II (CSK2) serine/threonine protein kinase complex, in the CHD4-N pulldown (Fig. 5D). The CSK2 complex regulates the function of many regulators of chromatin organisation and function and has been shown to phosphorylate serine residues in the CHD4 N terminus [34].
We also observed specific enrichment of activity-dependent neuroprotective protein (ADNP) with all CHD family members (Fig. 5A-C,F). Since the discovery of the ChAHP complex containing ADNP, CBX3 and CHD4 in 2017 [29], attention has focused on its molecular and cellular function in cancer and neurodegenerative diseases. It has been shown by PD-MS of full-length FLAG-tagged ADNP in HEK293 cells that CHD4 and CBX3 are the top-enriched proteins [29]. In addition, we saw marked enrichment of three members of the MYM-type zinc finger family, namely ZMYM2, ZMYM3 and ZMYM4 (Fig. 5G). Significant enrichment of these proteins with ChAHP may suggest a role for MYM-type proteins in genome organisation. To map the region of CHD4 that mediates its interaction with ADNP, we performed co-immunoprecipitation using the FLAG-tagged ADNP N terminus (ADNP-N, aa 1-228) and full-length HA-CHD4 or HA-CHD4-C (aa 1230-1912) proteins in HEK293 cells. FLAG-ADNP-N immobilised on FLAG beads could pull down both full-length HA-CHD4 and HA-CHD4-C (Fig. 5H, left panel). Similarly, HA-CHD4 proteins immobilised on HA beads could also precipitate FLAG-ADNP-N (Fig. 5H, right panel). Notably, the C-terminal half of ADNP failed to express (data not shown).
Cancer missense mutations may change the balance of CHD/NuRD and CHD/ChAHP complexes
The impact of somatic or germline mutations on large, multisubunit complexes such as NuRD is increasingly being recognised. We focused on CHD4, as a mutually exclusive partner of both the NuRD and ChAHP complexes, to determine whether any cancer-specific missense mutations in the C-terminal region of CHD4 disrupted its binding to either GATAD2 or ADNP proteins. Previous CRISPR/Cas9 screens in erythroid cells revealed that disruption of aa 1872-1883 of CHD4 abrogated the CHD4-GATAD2B interaction [16]. We therefore examined six missense mutations in CHD4 within or adjacent to this region, which could potentially affect the interaction of CHD4 with either ADNP or GATAD2B (Fig. S3A). To understand the effect of the CHD4 C-terminal missense mutations in the context of NuRD and ChAHP assembly, we performed pairwise interaction experiments to evaluate the binding of CHD4-C to full-length ADNP and GATAD2B proteins. Wild-type (WT) or mutant HA-tagged CHD4-C, as well as FLAG-tagged ADNP and GATAD2B proteins, were co-expressed, and their interactions were examined in pulldowns followed by western blot experiments. Four of these mutations (D1867N, P1879S, R1890C and N1891D) had no impact on the CHD4 interaction with either GATAD2B or ADNP (D1867N is exemplified in Fig. S3B, Lane 2). However, a reduction in the interaction of CHD4-C A1866D with full-length ADNP was observed when compared to WT CHD4-C (Fig. S3B). With regard to GATAD2B, only CHD4-C E1889K showed a clear effect on the interaction when compared to WT (Fig. S3C). These mutations were not completely disruptive but reduced the affinity of CHD4 for either NuRD or ChAHP subunits. These results may suggest that the composition and proportion of CHD4-NuRD and CHD4-ChAHP complexes might be perturbed in cancer cells carrying these particular CHD4-C mutations.
Fixed and altered stoichiometries were observed for MHR and MGCC modules, respectively
Understanding the stoichiometry of the NuRD complex subunits will help delineate its structure and function. To this end, we used iBAQ-adjusted intensity values as described previously [6,35]. Because of the high sequence similarity between paralogues (i.e. RBBP4 is 92% identical to RBBP7), we first considered each set of paralogues as a single group and averaged their iBAQ values to assess stoichiometry. The averaged values were divided by the averaged MTA value and multiplied by 2 because, based on the published X-ray crystal structure, two molecules of MTA are found in an intact NuRD complex. Of note, bait proteins were excluded because they are in excess and introduce a bias into stoichiometry calculations. MBD and GATAD2 are considered exclusive subunits of the NuRD complex; thus, we used their iBAQ data for the stoichiometry calculations.
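A minimal sketch of this calculation follows, assuming a dictionary of hypothetical iBAQ intensities from a single pulldown (the bait is taken to be already excluded); the group names and intensity values are placeholders for illustration.

```python
import numpy as np

paralogue_groups = {
    "MTA": ["MTA1", "MTA2", "MTA3"],
    "HDAC": ["HDAC1", "HDAC2"],
    "RBBP": ["RBBP4", "RBBP7"],
    "MBD": ["MBD2", "MBD3"],
    "GATAD2": ["GATAD2A", "GATAD2B"],
    "CHD": ["CHD3", "CHD4", "CHD5"],
    "CDK2AP": ["CDK2AP1"],
}

# Hypothetical iBAQ intensities from one pulldown (bait already excluded)
ibaq = {"MTA1": 4.0e8, "MTA2": 3.0e8, "MTA3": 2.0e8,
        "HDAC1": 4.5e8, "HDAC2": 4.5e8,
        "RBBP4": 9.0e8, "RBBP7": 9.0e8,
        "MBD2": 1.0e8, "MBD3": 1.2e8,
        "GATAD2A": 1.4e8, "GATAD2B": 1.6e8,
        "CHD3": 2.0e7, "CHD4": 8.0e7, "CHD5": 1.0e7,
        "CDK2AP1": 5.0e7}

# Pool paralogues, normalise to the MTA average and scale by 2, since
# two MTA copies sit in an intact NuRD complex.
group_mean = {g: np.mean([ibaq[p] for p in members if p in ibaq])
              for g, members in paralogue_groups.items()}
stoichiometry = {g: 2 * m / group_mean["MTA"] for g, m in group_mean.items()}
for g, s in stoichiometry.items():
    print(f"{g}: {s:.2f} copies per NuRD complex")
```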
Approximately the same number of unique peptides (~20) was detected for HDAC1/2 in the NuRD subunit PD-MS experiments, but no iBAQ intensity values were calculated for HDAC1 in the CDK2AP1 PD-MS, suggesting that only HDAC2, and not HDAC1, was copurified with the CDK2AP1/NuRD complex (Fig. 6C). The numbers of unique peptides detected for HDAC2 versus HDAC1 in the CDK2AP1 PD-MS were 6 and 2, respectively (Fig. S4A). To ensure that this observation was not due to a lack of HDAC1 expression, we further analysed the transcript and protein expression levels of HDAC1/2 (RNA-Seq dataset from the Human Protein Atlas [37] and shotgun proteomics dataset from ProteomicsDB [38]) in HEK293 cells. These data show that HDAC1 and HDAC2 are expressed at similar levels, and thus the absence of HDAC1 in the CDK2AP1 pulldowns was not related to expression. We conclude that this observation reflects the assembly and architecture of the CDK2AP1/NuRD complex and, consequently, the interaction of HDAC1 with it (Fig. S4B,C).
Discussion
The MGCC module of the NuRD complex plays diverse roles in almost all stages of development and in many disease states in metazoans [39,40]. A central question concerning multisubunit assemblies such as MGCC is how specificity is achieved when the complex is recruited to specific genomic loci. It is generally acknowledged that subunit-specific protein interactions provide some of this regulatory specificity to multiprotein complexes. For example, two groups recently demonstrated that PWWP2A, an H2A.Z-binding protein, binds to MTA1 and separates the MHR from the MGCC module, thus forming PWWP2A-MTA1-HDAC1/2-RBBP4/7 complexes [41,42]. It is therefore plausible that the NuRD subunit-specific binding partners that we report in this study could give rise to a combination of additional NuRD subcomplexes that would add to this functional diversity by creating other NuRD species with distinct compositions and stoichiometries. Our interactome data lay the foundation for future studies to investigate the functional readout of such complexes.
Recently, Sher et al. demonstrated that disruption of the MBD2/NuRD axis, but not MBD3/NuRD, leads to the derepression of fetal haemoglobin genes [16,17]. MBD2 is a methyl-CpG-binding protein; however, the proximal promoter of γ-globin and the entire β-globin locus are depleted of CpG islands, suggesting that MBD2 may not act by directly binding methylated DNA at the β-globin locus. Identification of a PPI network for the MBD2a and MBD2b isoforms could potentially help to define a mechanism for their function and pave the way for more targeted molecular therapies. Here, we demonstrate that the MBD2 isoforms copurify with dozens of proteins involved in gene regulation and genome structure organisation; these proteins could potentially contribute to γ-globin repression. It is highly likely that other proteins - such as ZNF219 - might facilitate the binding of MBD2/NuRD to the β-globin gene locus.
Notably, the enrichment of ZNF219 and CDK2AP1 with MBD2 but not MBD3 does not necessarily indicate that they do not interact with MBD3/NuRD; indeed, we have previously reported the enrichment of ZNF219 with MBD3/NuRD [6]. Our MBD PD-MS iBAQ data show substoichiometric ratios for CDK2AP1 and CHD4, which is consistent with previous studies. This may also indicate that the majority of MBD/NuRD complexes lack the CHD and CDK2AP1 proteins. If true, the MBD/NuRD complex that carries both deacetylation and remodelling activities might be less abundant than MBD/NuRD with only deacetylase activity. The increase in the ratios of MBD, CHD4 and CDK2AP1 when GATAD2B is used as bait might also imply that the MBD-GATAD2-CDK2AP1-CHD assembly is present as an independent complex with remodelling activity.
In conclusion, we report several specific protein interactions for the subunits of the MGCC module and further report the presence of new subcomplexes independent of the NuRD complex, such as the GATAD2-CDK2AP1-CHD and CDK2AP1/NCOR complexes (Fig. 6D). Future studies may shed more light on the biological function of these interactors and subcomplexes in gene regulation in normal and disease states. The generation of recombinant protein complexes, or tandem purification of the complexes followed by biophysical analysis, might reveal the precise stoichiometry, structure and molecular function of these derivative complexes.
Plasmid constructs
All genes used in this study were cloned into the pcDNA3.1(+) vector using Gibson Assembly and were either N-terminally FLAG- or HA-tagged. Except for the ADNP (obtained as a cDNA from Horizon Discovery; GenBank #BC075794) and GATAD2B-CDel constructs, the rest were a kind gift from Professor Joel Mackay, The University of Sydney. A list of the primers used for Gibson Assembly of all cDNAs into pcDNA3.1(+) is available on request.
Design of APR peptides
The GATAD2A/B protein sequences were analysed with TANGO to detect APRs [32]. Default physicochemical parameters were set as follows: temperature, 298 K; pH, 7.5; ionic strength, 0.02 M; and concentration, 1 M. An aggregation score of 5 per residue was set as the cut-off. The residues spanning each APR were combined with an 11-aa portion of the HIV-1 Tat protein (YGRKKRRQRRR) to enhance cell permeability. Peptides were synthesised to at least 80% purity by HPLC at Mimotopes, Australia.
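The selection and conjugation logic can be sketched as follows. TANGO itself is an external tool, so the per-residue scores below are dummy values; the window width, score cut-off and Tat sequence follow the description above, while the helper names and the example fragment are our own assumptions.

```python
TAT = "YGRKKRRQRRR"  # 11-residue HIV-1 Tat segment used for cell permeability

def apr_windows(seq, scores, width=7, cutoff=5.0):
    """Yield (start, peptide) for every window whose residues all exceed the cutoff."""
    for i in range(len(seq) - width + 1):
        if min(scores[i:i + width]) > cutoff:
            yield i, seq[i:i + width]

def tat_conjugate(apr, gatekeeper=False):
    """Fuse an APR to the Tat tag; optionally flank it with Glu gatekeepers."""
    core = f"E{apr}E" if gatekeeper else apr
    return TAT + core

# Dummy fragment and per-residue scores standing in for TANGO output around
# the GATAD2A APR reported above (residues 384-390, FIYLVGL).
fragment = "AAFIYLVGLAA"
scores = [0, 0, 9, 9, 9, 9, 9, 9, 9, 0, 0]
for start, apr in apr_windows(fragment, scores):
    print(start, tat_conjugate(apr), tat_conjugate(apr, gatekeeper=True))
```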
Cell culture and transfection
K562 cells were grown in RPMI 1640 supplemented with 10% (v/v) fetal bovine serum, penicillin (100 U/mL) and streptomycin (100 µg/mL). Expi293F cells were grown to a density of 1.5 × 10^6 cells/mL in Expi293 Expression Medium (Thermo Fisher Scientific, Waltham, MA, USA). Combinations of equimolar quantities of constructs were cotransfected into cells using linear polyethylenimine (PEI) (Polysciences, Warrington, PA, USA). DNA (4 µg) was first diluted in 200 µL of PBS and vortexed briefly. PEI (8 µL, 1 mg/mL) was then added, and the mixture was vortexed again, incubated for 20 min at room temperature and then added to 1.9 mL of cells in a 12-well plate. The cells were incubated for 65-72 h at 37 °C and 5% CO2 in a humidified incubator on a horizontal orbital shaker (130 rpm). Aliquots of cells (1 mL) were then harvested, washed twice with PBS, centrifuged (300 g, 5 min), snap-frozen in liquid nitrogen and stored at -80 °C.
Cell lysate and APR treatment for aggregation analysis

K562 cells were lysed using 500 µL lysis buffer containing 50 mM Tris-HCl, 150 mM NaCl, 0.5% IGEPAL, pH 7.5, 1× protease inhibitor cocktail (Sigma-Aldrich, St. Louis, MO, USA), 1 mM DTT and 1 µL Pierce Universal Nuclease (Thermo Fisher Scientific). Cells were then sonicated for 5 cycles of 1 min ON/30 s OFF. After sonication, a sample of the total lysate was collected, and the rest of the lysate was aliquoted into new 1.5 mL Eppendorf tubes; known concentrations of APR peptides were then added to each tube and incubated for 30 min at 4 °C. Next, tubes were spun at 20 000 g for 30 min to separate the soluble and insoluble fractions. For the in vitro translation co-immunoprecipitation experiments, template constructs were added to each reaction, and the reactions were incubated at 30 °C for 3 h. Prior to immunoprecipitation, 500 µL lysis buffer (50 mM Tris-HCl, 150 mM NaCl, 0.1% (v/v) Triton X-100, 1 mM DTT, pH 7.5) was added to the reactions. Input (5% of total) was collected, and the remainder was mixed with 20 µL of anti-FLAG Sepharose 4B beads (Sigma-Aldrich) at 4 °C for 2 h on a rotator. The beads were then washed five times with 500 µL of wash buffer (50 mM Tris-HCl, 200 or 500 mM NaCl, 0.5% (v/v) IGEPAL CA-630, 0.2 mM DTT, pH 7.5). Finally, elution was performed three times, each with 20 µL of a 150 µg/µL stock of 3xFLAG peptides (Sigma-Aldrich). Western blot analysis was done as previously described [7].

Antibodies
Sample preparation and tandem MS
Label-free FLAG pulldowns were performed in at least triplicate. Nuclei of transiently transfected Expi293F cells were lysed in the same lysis buffer as above and, after sonication and centrifugation, incubated with 20 µL anti-FLAG beads (Sigma-Aldrich). After incubation for 2 h, five washes were performed: three washes with a buffer containing 200 mM NaCl, 50 mM Tris-HCl and 0.5% (v/v) IGEPAL, pH 7.5, and two washes with the same buffer lacking IGEPAL. Affinity-purified proteins were subjected to on-bead trypsin digestion, in which 20 µL of digestion buffer (2 M urea freshly dissolved in 50 mM Tris-HCl, 1 mM DTT, 100 ng trypsin, 20 ng LysC (Promega)) was added, followed by vortexing at 30 °C for 2 h. Next, the beads were collected, and the supernatant was transferred into LoBind tubes. Beads were resuspended in 20 µL of 2 M urea containing 10 mM IAA, in the dark for 20 min. The supernatant was transferred to the previous tube and incubated at 30 °C for 16 h. The following day, tryptic peptides were acidified to a final concentration of 2% (v/v) formic acid (Sigma-Aldrich) and desalted using StageTips (Thermo Fisher Scientific). Peptides were dried in a SpeedVac and dissolved in 10 µL of 0.1% (v/v) formic acid. LC-MS/MS analysis was performed on an UltiMate 3000 RSLCnano system (Thermo Fisher Scientific) connected to a Thermo Scientific Q Exactive HF-X hybrid quadrupole-Orbitrap mass spectrometer equipped with a standard nanoelectrospray source (Thermo Fisher Scientific). Peptides (3 µL) were injected onto a C18 column (35 cm × 75 µm inner diameter, packed in-house with 1.9 µm C18AQ particles). Peptides were separated at a flow rate of 200 nL/min using a linear gradient of 5-30% buffer B over 30 min. Solvent A consisted of 0.1% (v/v) formic acid, and solvent B consisted of 80% (v/v) acetonitrile and 0.1% (v/v) formic acid. The end-to-end run time was 45 min, including sample loading and column equilibration. The mass spectrometer was operated in data-dependent acquisition (DDA) mode. Each full MS1 scan covered 300-1600 m/z at a resolution of 60 000. The top 15 most intense precursor ions were selected for fragmentation in the Orbitrap via higher-energy collisional dissociation. MS2 scans covered 200-2000 m/z at a resolution of 15 000, with a 1 × 10^5 AGC target and a 1.4 m/z isolation window.
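For reference, the acquisition settings described above can be collected into a single configuration mapping. This is purely a hypothetical summary structure transcribing the stated parameters, not an instrument-control script.

```python
# Summary of the DDA settings described in the text (values transcribed
# from the prose; keys are our own naming choices).
DDA_CONFIG = {
    "ms1": {"scan_range_mz": (300, 1600), "resolution": 60_000},
    "top_n_precursors": 15,
    "fragmentation": "HCD",
    "ms2": {
        "scan_range_mz": (200, 2000),
        "resolution": 15_000,
        "agc_target": 1e5,
        "isolation_window_mz": 1.4,
    },
    "lc_gradient": {"buffer_b_percent": (5, 30), "duration_min": 30,
                    "flow_nl_per_min": 200},
}
print(DDA_CONFIG["ms2"])
```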
Raw data analysis
Raw data were analysed with MaxQuant (version 1.6.6.0) using standard settings. Carbamidomethylation of cysteine (C) and methionine oxidation (M) were selected as fixed and variable modifications, respectively, and LysC and trypsin were selected as the proteolytic enzymes. The human proteome (Proteome ID UP000005640) was used as the reference proteome. The generated proteinGroups.txt table, in conjunction with an experimental design text file, was used to perform all statistical analyses on LFQ values in RStudio, as described elsewhere [43,44]. The Perseus algorithm was used to impute missing values; proteins with two missing values were discarded from the analysis. For better data visualisation, the output files were further processed. First, all statistically significant proteins with a fold change > 2 were kept. Then, heat shock, ribosomal and keratin proteins, as well as proteins with mitochondrial or cytoplasmic localisation, were excluded from the list. A full list of unfiltered interactors is provided in the data file. To determine the stoichiometry of the subunits, the iBAQ-adjusted intensity values (iBAQ approximates protein copy numbers by dividing the sum of the intensities of all experimentally detected peptides by the number of theoretically observable peptides for each protein) for the MTAs were averaged; the intensity values for known NuRD components were then divided by the MTA value and multiplied by 2 because, based on the published X-ray crystal structure, two molecules of HDAC1 and two molecules of MTA1 make up the core of the NuRD complex [19]. Due to the high sequence similarity between paralogues (i.e. RBBP4 is 92% identical to RBBP7), we first considered each set of paralogues as a single group. The NetworkAnalyst tool was used for network visualisation of the unique and shared interactors. LFQ and iBAQ are two different quantification algorithms: iBAQ is an approximate calculation of protein copy numbers and is best at determining ratio changes within samples rather than across samples, whereas LFQ best represents ratio changes between samples. Of note, the presence of false positives is inevitable in immunoprecipitation studies.
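The post-MaxQuant filtering step can be illustrated with a small pandas sketch. The original analysis was performed in R with Perseus-style imputation; the column names, the gene-name prefixes used to flag noise proteins and the example table below are assumptions for demonstration only.

```python
import pandas as pd

def filter_interactors(df, fc_col="log2_fold_change", p_col="adj_p_value"):
    """Keep significantly enriched proteins (fold change > 2) and drop common noise."""
    enriched = df[(df[fc_col] > 1.0) & (df[p_col] < 0.05)]  # log2(2) = 1
    noise = enriched["gene_name"].str.contains(
        r"^(?:HSP|RPL|RPS|KRT)", regex=True, na=False)
    return enriched[~noise]

# Hypothetical MaxQuant-derived table
df = pd.DataFrame({
    "gene_name": ["CHD4", "GATAD2B", "KRT10", "MTA2", "HSPA8"],
    "log2_fold_change": [5.1, 6.3, 3.0, 4.2, 0.8],
    "adj_p_value": [1e-6, 1e-7, 0.01, 1e-4, 0.2],
})
print(filter_interactors(df))  # KRT10 and HSPA8 are removed
```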
Supporting information
Additional supporting information may be found online in the Supporting Information section at the end of the article. Fig. S1. CDK2AP1 interacts with the C-terminal region of GATAD2B. Fig. S2. Enriched proteins with CHD4-M. Fig. S3. Cancer-associated missense mutations in the C-terminal region of CHD4 may change the balance of CHD4/NuRD vs CHD4/ChAHP. Fig. S4. HDAC2-NuRD is more abundant as compared to HDAC1-NuRD.
"year": 2021,
"sha1": "03314396799d135c6670dec50291a9e3212fe390",
"oa_license": "CCBYNCND",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/febs.16112",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "8ac2faa081efd0023edfc28527f7fb6d795ae236",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine",
"Biology"
]
} |
THE UNIQUENESS OF BIOLOGICAL CHARACTERISTICS OF TECHNICAL HEMP AND PROSPECTS FOR ITS PRACTICAL USE
The results of studying the biological characteristics and performance features of monoecious hemp varieties are presented. New hemp varieties bred in Ukraine are unique because technical hemp plants produce fiber, seeds and oil and do not accumulate narcotic compounds. Along with universal varieties, varieties with increased yields of seeds, oil, fiber and stem biomass were developed. The first medicinal hemp varieties with increased contents of cannabidiol (CBD) and cannabigerol (CBG) were obtained, and they can also be used as a technical crop. Breeding was complicated by a number of biological features of hemp: its specific odor, cross-pollination, heterozygosity, sexual polymorphism, dominance of cannabinoid presence, the direct correlation between the contents of cannabidiol and tetrahydrocannabinol, and the formation of glandular trichomes and essential oils on leaves and perianths. In the varieties developed, the content of tetrahydrocannabinol does not exceed 0.08%. The breeding methods developed to increase the cannabidiol content made it possible to create new starting material stabilized in terms of the cannabidiol and tetrahydrocannabinol contents at the level of 1.5–3.0 and 0.04–0.07%, respectively. For the first time, we established that there was no relationship between CBG and THC. Targeted selection for an increase in the CBG content led to the creation of the hemp variety Vik 2020, with a cannabigerol content of up to 1.0% and without THC. Conclusions. The results obtained prove the uniqueness of technical hemp as a biological object of research and the possibilities of developing new scientific theories, breeding methods and insights into the genetic mechanisms of cannabinoid synthesis, as well as the practical use of technical and medicinal hemp in different production sectors and advanced processing.
The content of the main narcotic compound, tetrahydrocannabinol (THC), in hemp of the 1960s-70s was 0.3-0.5%. More than 40 years of work resulted in hemp varieties without any drug activity. In the process of seed production to certified seeds, registered hemp varieties do not accumulate more than 0.05% of narcotically active tetrahydrocannabinol. Therefore, the problem of the social safety of technical hemp has been solved via breeding.
Purpose and objectives. The study purpose was the development of new hemp varieties for different uses, identification of biological characteristics of the monoeciousness traits and diversity of cannabinoid compounds, and evaluation of prospects of developing varieties for seeds, fiber, oil, and medicinal drugs.
Material and methods. Hemp varieties grown in breeding variety trials and breeding nurseries were taken as the test material. The oil content was determined by C.V. Rushkovsky's method; the fatty acid composition of oil, by gas chromatography on a Selmichrom-1 chromatograph; and the contents of cannabinoids and terpenes, by an internal standard method on a Hewlett Packard HP 6890 Series gas-liquid chromatograph. Relationships between the traits were assessed with correlation coefficients.
Results and discussion. Breeding approaches to reduce the content of narcotically active tetrahydrocannabinol (THC) were quite stringent. At the early stages, family-group selection of plants with decreased contents of cannabinoids negatively affected the performance, especially the weight of seeds from elite plants. Subsequently, when the population expanded, selection for increased fiber content as well as seed and stem yields became possible. The breeders' objective was to preserve the main benefit of hemp: formation of the bast-fiber layer on stems.
The first high-yielding monoecious hemp variety YUSO 31, which could be used both for fiber and for seeds and had a decreased THC content, was entered in the State Register of Varieties in 1987. Currently, this variety is the standard for several traits (early maturity, low THC content, high fiber and oil contents, yield and inflorescence habit) in world breeding practice.
Breeding was hampered by a number of biological features of hemp: specific odor, cross-pollination, heterozygosity, sexual polymorphism, dominance of the cannabinoid presence, a direct correlation between the contents of cannabidiol and tetrahydrocannabinol, and the formation of glandular trichomes and essential oils on leaves and perianths.
Essential oil from narcotic hemp C. indica is known to contain 0.015% of THC and more than 137 terpenes, with β-Caryophyllene, Terpinolene, β-Myrcene, α-Pinene and Trans-β-Ocimene predominating (68.42% in total). Studies have shown that in plants with a high THC content, both vegetative and generative organs (stems, leaves, and perianths) are covered with cystolith hairs and glandular hairs of all three types (bulbous, sessile, and stalked), with the largest number of them on the perianths. The trichome heads are filled with a brown liquid.
C. sativa plants also have a specific hemp smell, essential oils and glandular hairs. However, these are less pronounced than in narcotic hemp, and even within this species their manifestation weakens when the cannabinoid content is reduced to a minimum. Changes in the hair structure are also noted. In technical hemp varieties with a tetrahydrocannabinol content of 0.01%, hairs appear more often as single inclusions on veins of the leaf blade. Trichomes are characterized by smaller hair heads than in plants with higher contents of cannabinoids (0.08% THC). Under a microscope, the heads are seen to be filled with a light secretion. In varieties with an increased THC content, there is a direct correlation between the number of glandular hairs and the contents of cannabinoids, and if THC is absent, or if its content is very low, this relationship is broken. Almost all industrial hemp varieties with a minimum THC content or without THC at all were revealed to contain essential oils (0.06-0.24%) [8,9].
In the course of breeding, breeding material without the specific odor, trichomes or THC (variety YUSO 45) was distinguished. Continuous breeding towards reducing narcotic contents in hemp led to the development of a new unique variety lacking not only THC but also the other cannabinoids (variety Viktoriia) (Fig. 1).
Hemp breeding is also complicated by sex polymorphism, the ontogenetic and phylogenic characteristics of which are determined by the habit factors and the male/female flowers ratio. At the same time, permanent mutations (sex mosaics) largely influence the genotypic expression of sexual characteristics. According to different theories of hemp sex genetics, there are from 5 sexual types of monoecious plants (Neuer, Sengbusch, 1943) to 22 sexual types of masculinized and feminized plants (N.N. Grishko, 1940). According to an improved classification, 12 sexual types of dioecious and monoecious hemp are considered (N.D. Migal, 1992) [10]. Male hemp plants are the main destabilizer of monoecious hemp populations (Fig. 2). Therefore, during the propagation of a variety to the 2nd reproduction (certified seeds), the number of such plants is limited to 1% by the EU international standard. Limiting the proportion of such plants in populations decreases segregation into undesirable sex types, including non-productive masculinized plants and late-ripening feminized male plants (Fig. 3).
Fig. 3. A Monoecious Feminized Female plant and an Inflorescence Fragment
Moreover, such a population with a high percentage of monoecious feminized female plants ensures a consistently high performance of the variety.
Modern hemp varieties are notable for their multipurposeness: for fiber, seeds and oil. At the same time, there are check varieties (variety Hliana) with a certain yield (fiber and oil contents of 30%, a stem yield of 7.5-8.0 t/ha, and a seed yield of 1.0-1.2 t/ha) and specialized varieties for fiber, seeds and oil. The peculiarity of these varieties with high values of one parameter is that they can also be used for processing all parts of plants.
For example, monoecious hemp variety Hlukhivskyi 51 has up to 38-40% of fiber in stems, a stem biomass of 8.5-10.0 t/ha, and a yield of seeds (with an oil content of 28%) of 0.8-0.9 t/ha. This variety is highly profitable in fiber production (3.2-3.8 t/ha [the check variety gives 2.0-2.2 t/ha]) and in the production of hurds (55-60%), which can be used to manufacture pellets, briquettes and biocomposite materials. Seeds obtained upon harvesting for double use are processed to produce oil.
Seed variety Hlesiia is characterized by a maximum seed yield (up to 1.8 t/ha), high oil content (34%) and a medium fiber content (up to 28%).
Hemp variety Mykolaichyk has an increased oil content in seeds of 38-40%, a seed yield of up to 1.5 t/ha and a fiber content of 28% (Table 1). The uniqueness of this variety also lies in the fatty acid composition of its seed oil. A ratio of omega-6 to omega-3 (ω-6:ω-3) polyunsaturated fatty acids of 4:1 is considered to be the best one; in this variety, the ratio ranges from 3:1 to 4:1 from year to year. The content of gamma-linolenic acid is 2.53%. Today, hemp hulled seeds are introduced in nutrition practice, and their value significantly increases with the optimal ratio of omega-6 to omega-3 fatty acids and an increased amount of γ-linolenic acid (Table 2). If previously it was believed that synthesis of cannabinoids is a negative feature, now, after extraction of pure substances and research into their biological and medicinal properties, possibilities of their use for medical purposes have become known. At present, cannabidiol (CBD), cannabichromene (CBC), cannabigerol (CBG), cannabinol (CBN) and cannabidivarin (CBDV) are considered the most valuable ones. Studies of several hemp varieties (Santica, Fedora, Felina, Ferimon, YUSO 31) also found acid analogues of some cannabinoids: CBGA, CBDA, and THCA. Of these varieties, YUSO 31 has the lowest THC content. It should also be noted that plants of this variety differ in many of the above-listed cannabinoids and their acids. Such cannabinoid compounds as CBC, CBG, CBN and CBDV appeared in small leaves during seed maturation. The greatest amount of CBG was detected in plants of varieties Santica and YUSO 31, while CBD and CBDA were found in all the varieties under investigation, but with strong variability of the trait from 0.000% to 5.12% (Table 3). Moreover, the maximum levels of CBD and CBDA are associated with the maximum amounts of THC and THCA. In this regard, investigations are underway to search for hemp forms that have a high CBD content and a low THC content not exceeding the legally limited standard. In the EU, it is 0.2%; in the United States, 0.3%; in Russia, 0.1%; and in Ukraine, 0.08% of THC. As one can see, the most severe restrictions are set in Ukraine, even for research.
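As a simple illustration of how an ω-6:ω-3 ratio such as the 3:1 to 4:1 reported above can be derived from a gas-chromatography fatty-acid profile, here is a short Python sketch; the percentages are hypothetical placeholders, and only the grouping of acids into ω-6 and ω-3 families reflects standard practice, not this variety's actual data.

```python
# Hypothetical fatty-acid profile (% of total fatty acids) for illustration.
profile = {
    "linoleic (18:2 n-6)": 54.0,
    "gamma-linolenic (18:3 n-6)": 2.53,
    "alpha-linolenic (18:3 n-3)": 16.0,
    "stearidonic (18:4 n-3)": 0.8,
}

# Sum each polyunsaturated family and take the ratio.
omega6 = sum(v for k, v in profile.items() if "n-6" in k)
omega3 = sum(v for k, v in profile.items() if "n-3" in k)
print(f"omega-6 : omega-3 = {omega6 / omega3:.1f} : 1")  # ~3.4 : 1 here
```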
At the first stage of our studies, it was necessary to find out how close the correlation between the CBD and THC contents was, and thus the limits of increasing the CBD level without exceeding the THC content limit [11,12]. It was found that the correlation between the CBD and THC levels is strong (r = 0.7-0.9).
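The paper states only that relationships between traits were assessed with correlation coefficients; below is a minimal Python sketch of a Pearson correlation on invented paired CBD/THC measurements. All numeric values are placeholders, not the study's data.

```python
import numpy as np

# Hypothetical paired CBD and THC contents (%) for individual plants;
# placeholder values only.
cbd = np.array([0.9, 1.4, 1.8, 2.2, 2.6, 3.0, 1.1, 2.9])
thc = np.array([0.04, 0.03, 0.06, 0.05, 0.08, 0.07, 0.04, 0.06])

# Pearson correlation coefficient between the two cannabinoid contents.
r = np.corrcoef(cbd, thc)[0, 1]
print(f"r = {r:.2f}")  # ~0.81 here, within the reported 0.7-0.9 range
```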
The direct dependence of synthesis of one cannabinoid on another restricts breeding for an increased cannabinoid content to the range of 1.5-3.0%. In this case, the content of tetrahydrocannabinol does not exceed 0.08% [13,14].
The breeding methods developed to increase the cannabidiol content made it possible to create new starting material (variety Mriia) stabilized in terms of the cannabidiol and tetrahydrocannabinol contents at the level of 1.5-3.0 and 0.04-0.07%, respectively.
For the first time, we established that there was no relationship between CBG and THC. Targeted selection for increase in the CBG content led to the creation of hemp variety Vik 2020 with a cannabigerol content of up to 1.0% and without THC.
In the future, the breeding methods developed will allow creating hemp varieties both with combinations of therapeutic cannabinoids, thereby increasing their medical value, and with individual cannabinoids: cannabinol, cannabichromene, and cannabidivarin, without any narcotic effects.
Conclusions.
The results obtained prove the uniqueness of technical hemp as a biological object of research, possibilities of developing new scientific theories, breeding methods, genetic mechanisms of cannabinoid synthesis and practical use of such hemp products as oil, hulled seeds, fiber and medicinal agents in different production areas, where, as more advanced processing techniques are developed, the effectiveness of hemp rises.
The technical hemp varieties, universal, seed and fiber ones, are notable for the absence of THC and CBD and can be used for fiber (the fiber content is 30, 28 and 38% in universal, seed and fiber varieties, respectively), seeds (the seed yield is 1.0-1.2, 1.5-1.8 and 0.8-0.9 t/ha in universal, seed and fiber varieties, respectively) and oil (the oil content in seeds is 30, 34-38 and 28% in universal, seed and fiber varieties, respectively).
For the first time in the history of hemp breeding, studies have been conducted to develop medicinal hemp varieties.
THE UNIQUENESS OF BIOLOGICAL CHARACTERISTICS OF TECHNICAL HEMP AND PROSPECTS FOR ITS PRACTICAL USE
Laiko I.M.
Institute of Bast Crops of NAAS, Ukraine
Purpose and objectives. The study purpose was the development of new hemp varieties for different uses, identification of biological characteristics of the monoeciousness traits and diversity of cannabinoid compounds, and evaluation of prospects of developing varieties for seeds, fiber, oil, and medicinal drugs. Material and methods. Hemp varieties grown in breeding variety trials and breeding nurseries were taken as the test material. The oil content was determined by C.V. Rushkovsky's method; the fatty acid composition of oil, by gas chromatography on a Selmichrom-1 chromatograph; and the contents of cannabinoids and terpenes, by an internal standard method on a Hewlett Packard HP 6890 Series gas-liquid chromatograph. Relationships between the traits were assessed with correlation coefficients. Results and discussion. Breeding was hampered by a number of biological features of hemp: specific odor, cross-pollination, heterozygosity, sex polymorphism, dominance of the cannabinoid presence, a direct correlation between the contents of cannabidiol and tetrahydrocannabinol, and the formation of glandular hairs (trichomes) and essential oils on leaves and perianths. It was demonstrated that in varieties with an increased THC content there was a direct correlation between the number of glandular hairs and the contents of cannabinoids, and if THC is absent, or if its content is very low, this relationship is broken. Almost all industrial hemp varieties with a minimum THC content or without THC at all were revealed to contain essential oils (0.06-0.24%). Hemp breeding is also complicated by sex polymorphism, the ontogenetic and phylogenic characteristics of which are determined by the habit factors and the male/female flowers ratio.
Universal check variety Hliana is the best variety in terms of sex composition and monoeciousness stability, with minimal segregation of male plants in reproductions. There are universal varieties (check variety Hliana) with a certain yield (fiber and oil contents of 30%, a stem yield of 7.5-8.0 t/ha, and a seed yield of 1.0-1.2 t/ha) and specialized varieties for fiber (variety Hlukhivskyi 51, fiber content of 38-40%), seeds (variety Hlesiia, seed yield of 1.5-1.8 t/ha) and oil (variety Mykolaichyk, oil content of 38-40%). The prospects of breeding to create medicinal varieties have been proven. The direct dependence of synthesis of one cannabinoid on another (r = 0.7-0.9) restricts breeding for an increased cannabinoid content to the range of 1.5-3.0%. In this case, the content of tetrahydrocannabinol does not exceed 0.08%. The breeding methods developed to increase the cannabidiol content made it possible to create new starting material stabilized in terms of the cannabidiol and tetrahydrocannabinol contents at the level of 1.5-3.0 and 0.04-0.07%, respectively. For the first time, we established that there was no relationship between CBG and THC. Targeted selection for increase in the CBG content led to the creation of hemp variety Vik 2020 with a cannabigerol content of up to 1.0% and without THC. Conclusions. The results obtained prove the uniqueness of technical hemp as a biological object of research, possibilities of developing new scientific theories, breeding methods, genetic mechanisms of cannabinoid synthesis and practical use of such hemp products as oil, hulled seeds, fiber and medicinal agents in different production areas, where, as more advanced processing techniques are developed, the effectiveness of hemp rises. The technical hemp varieties, universal, seed and fiber ones, are notable for the absence of THC and CBD and can be used for fiber (the fiber content is 30, 28 and 38% in universal, seed and fiber varieties, respectively), seeds (the seed yield is 1.0-1.2, 1.5-1.8 and 0.8-0.9 t/ha in universal, seed and fiber varieties, respectively) and oil (the oil content in seeds is 30, 34-38 and 28% in universal, seed and fiber varieties, respectively). For the first time in the history of hemp breeding, studies have been conducted to develop medicinal hemp varieties. A medicinal hemp variety has been created (variety Mriia). In addition to leaf biomass with an increased CBD content, it gives a seed yield of 0.8-1.0 t/ha with an oil content of up to 28% and a stem yield of 7.0 t/ha with a fiber content of 28-30%. Variety Vik 2020 with an increased content of CBG (1.0%) and without THC has been created. | 2021-01-07T09:08:46.681Z | 2020-07-03T00:00:00.000 | {
"year": 2020,
"sha1": "bf0b06130d492b28b6787194f9db28c02f30deb7",
"oa_license": "CCBY",
"oa_url": "http://journals.uran.ua/pbsd/article/download/206985/207640",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "92dfaa583de0e8f2e3fd57f8b3397670ba31d56e",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Biology"
],
"extfieldsofstudy": [
"Engineering"
]
} |
252881950 | pes2o/s2orc | v3-fos-license | Investigation of the effect of ultrasonic shock on corrosion behavior of stainless steel 316L
Refineries and petrochemical plants, with their all-metal equipment and acidic environments, suffer from severe corrosion. Given the increasing use of stainless steels in these industries, this study investigates the effect of ultrasonic shock on the corrosion behavior of 316L stainless steel. The aim was to investigate the effect of the ultrasonic stress-relief method on increasing the strength of parts in corrosive environments. It should be noted that the residual stress in the samples was created by welding. In this study, the sample under consideration was first stress-relieved using ultrasonic vibration at a frequency of 20 kHz; the results obtained were then compared with those for samples without stress relief and for samples stress-relieved by heat treatment. The XRD method was used to measure residual stress. The results show that the ultrasonic and heat-treatment methods reduce the residual stress by 58.7% and 54.3%, respectively. It was also observed that the use of ultrasonic waves increased the life of the sample in a corrosive environment.
Introduction
Stress corrosion was first identified in 1965 in the United States during the failure analysis of a gas transmission pipeline. SCC denotes cracking caused by the simultaneous effect of corrosion and stress. The stresses in parts are either applied or residual. Cold forming and deformation, welding, heat treatment and machining are among the factors that cause residual stress. In most cases, the importance and magnitude of these stresses are ignored. This type of stress causes small cracks inside the parts. Microstructurally, these cracks can have intergranular or transgranular morphology. At the macro level, SCC has a brittle appearance, and factors such as temperature, pressure, concentration, pH, viscosity, part material and fluid flow are among the environmental parameters that affect the crack growth rate [1]. This type of corrosion is one of the most common types of corrosion in industry, in which, in addition to the corrosiveness of the environment, mechanical stress is also a necessary condition for its occurrence. For many years, it has been believed that the occurrence of stress cracking requires the simultaneous presence of three factors: a corrosive environment, a metal with an alloy sensitive to this type of cracking, and tensile stress on the metal [2]. For example, a hot aqueous chloride solution can create cracks in stainless steel at a considerable speed, while it has no such effect on carbon steel, aluminum and other non-ferrous alloys. In other words, any given corrosive environment is capable of creating cracks in only a limited number of metals and alloys [3].
There are different methods for stress relief of parts, the most important of which are: natural stress relief, thermal stress relief, vibration stress relief, overloading, shot peening, and stress relief with ultrasonic impacts. In this research, two of these methods, ultrasonic and thermal stress relief, are investigated.
Materials and methods
The validity of research results is influenced by the validity of the chosen method; research methods are, in effect, the tools for reaching reliable findings. Since the purpose of this research is to investigate the effect of ultrasonic waves on the corrosion behavior of 316L stainless steel, the welding method was initially used to create residual stress. Six pipe samples welded under the same conditions were selected. The main purpose of this research is to assess the effectiveness of the ultrasonic stress-relief method in preventing stress corrosion in a corrosive environment. For this purpose, two samples were subjected to ultrasonic stress relief, and the thermal stress-relief method was applied to two others for comparison; a condition without any stress-relief operation was also considered. The residual stress of all six parts was measured using the XRD method. For sample selection, 300-series stainless steels were used, especially grades 316 and 316L, which are widely applied in industry and in environments with acidic properties, particularly sulfuric acid solutions. Stainless steels are iron-based and contain at least 12 percent chromium, which can be as high as 30 percent.
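The paper does not describe how the residual stress was evaluated from the XRD measurements; for background, the standard sin²ψ method commonly used for such measurements relates the lattice strain at tilt angle ψ to the stress as follows (supplied here as an assumption about the likely evaluation approach, not the authors' stated procedure):

```latex
% Lattice strain measured at tilt angle psi is linear in sin^2(psi):
\varepsilon_{\phi\psi} = \frac{d_{\phi\psi} - d_{0}}{d_{0}}
  = \frac{1+\nu}{E}\,\sigma_{\phi}\sin^{2}\psi - \frac{\nu}{E}\,(\sigma_{1}+\sigma_{2}),
% so the residual stress follows from the slope m of d_{phi,psi} versus sin^2(psi):
\qquad \sigma_{\phi} = \frac{E}{(1+\nu)\,d_{0}}\, m .
```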
The samples used in this research are 2-inch tubes with a thickness of 3.4 mm made of 316L stainless steel, prepared according to the ISO 7S39-8 standard. The welding method was used to create residual stress, and six pipe samples welded under the same conditions were selected. It should be noted that the tungsten-electrode inert gas (TIG) method was used for welding. In this process, different electrodes are used, such as pure tungsten, tungsten with thorium, and tungsten with zirconium. Also, the choice of power source and type of shielding gas strongly affects the depth of penetration and the shape of the weld cross-section. Figure 1 shows the preparation of samples for the welding operations. The type of joint used to prepare the samples is a chamfer type with an angle of 60 degrees, which is shown in Figure 2.
Numerical results
In this section, the results obtained from the samples are analyzed and compared. In order to determine the effectiveness of the ultrasonic and thermal stress-relief operations, a reference was needed for comparison, and due to time and facility limitations it was not possible to conduct XRD tests in large numbers. Therefore, by controlling the welding conditions, two samples without stress relief were taken as reference samples. The results of the XRD test of the samples without stress relief are shown in Figure 3.
According to Figure 3, the residual stress in both pieces was almost the same, which indicates that the preparation conditions for the samples were identical.
In order to investigate thermal stress relief, two samples were selected under the same conditions and their residual stress was determined using the XRD test. The results for the samples stress-relieved by the heat-treatment method are given in Figure 4. According to the figure, the amount of residual stress in the first and second samples was almost the same. Comparing the residual stress results of the heat-treated samples with those of the untreated samples shows that the average residual stress decreased by 54%. The results of the XRD test of the samples stress-relieved using the ultrasonic method are shown in Figure 5. According to the figure, the amount of residual stress decreased by 58% compared to the sample without stress-relief treatment. Also, comparing the results of the samples stress-relieved by heat treatment and by ultrasonic waves shows that the ultrasonic waves performed better, with the residual stress about 4% lower than in the heat-treated sample.
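The percentage reductions quoted above follow from a simple comparison of mean residual stresses before and after treatment. The short Python sketch below shows the arithmetic; the absolute as-welded stress value is an invented placeholder, and only the 58.7% and 54.3% reductions come from the paper.

```python
# Hypothetical as-welded mean residual stress (MPa); placeholder value.
as_welded = 300.0

def reduction(reference: float, treated: float) -> float:
    """Percent reduction of residual stress relative to the reference."""
    return (reference - treated) / reference * 100.0

# Treated stresses constructed so the reductions match the reported figures.
ultrasonic = as_welded * (1 - 0.587)
thermal = as_welded * (1 - 0.543)

print(f"ultrasonic: {reduction(as_welded, ultrasonic):.1f}%")  # 58.7%
print(f"thermal:    {reduction(as_welded, thermal):.1f}%")     # 54.3%
```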
Conclusion
The aim of the study was to investigate the effect of ultrasonic waves on the corrosion behavior of 316L stainless steel. First, the welding method was used to create residual stress, and six samples were prepared under the same conditions. Two samples were stress-relieved by the ultrasonic method and two others by heat treatment. The residual stress of all six pieces was measured by the XRD test. Finally, the resistance in a corrosive environment of the ultrasonically stress-relieved samples and of the samples without stress relief was compared. | 2022-10-14T15:38:39.776Z | 2022-07-23T00:00:00.000 | {
"year": 2022,
"sha1": "3f32142be1b3f45f13bfb16039e434ed676c887b",
"oa_license": "CCBYNC",
"oa_url": "http://masm.araku.ac.ir/article_254695_8605cb0da73eabc882f1f204c2d2270a.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "0841fbd09c4773673754fa0b8bb4282225443bb8",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": []
} |
248143061 | pes2o/s2orc | v3-fos-license | Indonesia's Omnibus Law on Job Creation: Legal Hierarchy and Responses to Judicial Review in the Labour Cluster of Amendments
Abstract Indonesia enacted a controversial ‘Omnibus Law’ on Job Creation in late 2020, and its implementing regulations followed in February 2021. This Law, and particularly the labour cluster of amendments within it, has been linked to Indonesia's recent ‘democratic decline’ or ‘illiberal turn’. Many of the amendments reduce worker protections with the aim of producing a more flexible labour market. While it is these obvious amendments in favour of employers’ interests that have attracted the most attention, a deeper analysis of the changes introduced by this Law reveals additional important factors at play. There has been a significant repositioning of labour regulations within Indonesia's hierarchy of legal instruments, as well as important responses to Constitutional Court judicial review cases. Overall, this deeper legal analysis produces mixed evidence for democratic decline in Indonesia.
Introduction
On 5 October 2020, during the COVID-19 pandemic, the Indonesian National Legislature (Dewan Perwakilan Rakyat or DPR) passed a very controversial 'Omnibus Law'. This piece of legislation, with a massive 1187 pages, was then signed by President Joko Widodo and came into effect on 2 November 2020. It is now officially known as Law no. 11/2020 on Job Creation (Undang-Undang no. 11/2020 tentang Cipta Kerja), hereinafter the 'Job Creation Law'. 1 Representing the culmination of a signature policy of the President, this Law was aimed at simplifying overlapping regulations and boosting foreign direct investment by improving the ease of doing business. Indeed, climbing the World Bank's Doing Business ranking was at least partly a driver behind this policy. 2 The Job Creation Law introduced a new framework for business licensing and then simultaneously amended 77 existing national laws covering a very wide sweep of issues including, but not limited to: environmental protection, spatial planning, special economic zones, small and medium enterprises, land rights, transport, energy, agriculture, fisheries, and taxation.
Changes to the labour and social security laws are also a key aspect of the Job Creation Law, and this Article will focus on this cluster of labour-related amendments. The Law has introduced a range of changes to the existing Law no. 13/2003 on Labour, 3 including on the regulation of fixed-term contracts and outsourcing, minimum wage determination, dismissals, severance pay, leave, working time, and employment of foreign workers. The Law also introduced a framework for unemployment insurance as an additional form of workers' social security scheme and amended some aspects of the 2017 Overseas Migrant Worker Law. Subsequently, by 2 February 2021, the Law's forty-nine required implementing regulations (mostly in the form of Government Regulations) were authorised by President Joko Widodo, four among them relating to the labour cluster of amendments. These regulations were released publicly on 21 February 2021.
Several labour-related amendments introduced by the Job Creation Law, and its implementing regulations, are 'de-regulatory' or 'flexibilising' in nature. They have reduced restrictions in relation to time limits on fixed-term contracts 4 and the types of work that can be outsourced, 5 reduced severance pay calculations, 6 and have increased the maximum number of overtime hours that can be worked per day. 7 Some exemptions have been introduced for micro and small enterprises, including from needing to pay the minimum wage. 8 These are all important changes that are likely to have profound impacts on worker protection in Indonesia, and have garnered a great deal of resistance and critical commentary from the union movement and their supporters within Indonesia and abroad. Some other changes lean in the opposite direction, though, perhaps to lessen union opposition to the flexibilising amendments. For example, compensatory payments to workers when their fixed-term contracts end have been introduced, 9 as have new penalties for employers who pay wages late 10 and new criminal sanctions for non-payment of severance pay. 11 The new unemployment social security scheme also aims to partially shift the burden of dismissal costs from employers to the state. 12 Several commentators have linked the Job Creation Law to larger political shifts in Indonesia which have been labelled as a 'democratic decline' or 'illiberal turn'. 13 one of the most democratic nations in Southeast Asia, various democracy indexes 14 and academic analyses indicate a weakening in its democratic institutions in recent years. Rollbacks of democratic institutions have been driven by certain politico-business elites, while civil society organisations' ability to resist such changes has weakened. This decline began before the COVID-19 pandemic but appears to have been exacerbated by it due to social distancing requirements and the associated limiting of the ability to pressure government through public protests. 15 These politically driven labour law changes that have attracted the most media coverage and commentary, however, are only the more obvious part of the suite of amendments included in the Job Creation Law. There are two other significant technical law-making issues inherent within the Job Creation Law's labour cluster and its implementing regulations. The first issue is that there has been a substantial rearrangement of labour law norms within the hierarchy of legal instruments in Indonesia. Many substantive workers' rights have been 'downgraded' from 'Law' (Undang-Undang) to 'Government Regulation' (Peraturan Pemerintah), while some other rules have been 'upgraded' from lower-level implementing regulations to Government Regulation or to Law. 16 Inherent within these shifts is the balance between legislative and executive law-making powers, given that implementing regulations are the domain of the executive government and are much easier to amend than Law.
The second dynamic that is playing out within these labour amendments is a dialectic between the Constitutional Court, and to a much lesser extent the Supreme Court, with their judicial review functions, and the legislative and executive branches of government. A significant number of labour law amendments in the Job Creation Law are either intended to affirm previous Constitutional Court rulings or to override them. While this may be at least partly read as a positive development indicating some maturity in Indonesia's legal system due to the responses to judicial review rulings, one Constitutional Court case was ignored entirely. And, in one important instance the legal hierarchy has arguably been used strategically to override the Constitutional Court by 'hiding' the change within the lower-level implementing regulations.
My analysis of the labour cluster in the Job Creation Law presented in this Article indicates the need to examine more deeply the technical amendments alongside the more obvious politically driven legal changes. The technical aspects of the amendments in the labour cluster provide some mixed evidence for the 'democratic decline' or 'illiberal turn' in Indonesia. The rearrangement of labour law norms within the legal hierarchy does mean a significant shift from legislative to executive power over labour regulation which may portend further, easier, reductions to worker protections in Indonesia in the future. It also appears that the Job Creation Law and its implementing regulations were drafted as an integrated package, thereby merging legislative and executive roles. The responses to previous judicial review cases are more positive as they mostly acknowledge and reinforce the role of the Constitutional Court, but as noted we also see some evidence of attempts to remove labour law rules from the jurisdiction of the Constitutional Court. This Article is organised as follows: In the first two Parts, I explain the relevant background to the legal hierarchy and the judicial review processes in Indonesia. Next, I briefly trace the history of labour law in Indonesia, noting the roles of both legal hierarchy and judicial review, and then introduce the Job Creation Law in more detail. Following this, I provide key examples of where labour law rules have been reordered within the legal hierarchy by the Job Creation Law and its implementing regulations. I also provide examples of where the Job Creation Law has responded to judicial review cases. My conclusions are offered in final part of the Article.
Law-Making and Legal Hierarchy in Indonesia
In order to understand the significance of the shifting of labour rules between different levels of legal instruments that has occurred through the Job Creation Law, a brief explanation of Indonesia's distinctive, complex hierarchy of legal instruments is first necessary here. Within this hierarchy, some of the legal instruments are legislative while other types are executive or occasionally even emanate from the higher courts. Damian and Hornick, writing in 1972, noted that Indonesia's 'basic' Laws, ie, those enacted by the National Legislature (DPR), tended to function as broad policy statements rather than as providing detailed operative rules. 17 The detailed rules were generally found instead in implementing regulations issued by the executive branch of government. This situation was emblematic of the executive-dominated regime during Suharto's authoritarian New Order regime when the DPR was merely a rubber-stamp body. But this pattern has also largely continued in the post-New Order democratic reform era, though significant steps have certainly been taken to strengthen national and regional legislatures, and to clarify and standardise the various types of legal instruments.
The current version of the hierarchy of legal instruments reads, from highest to lowest: (i) The 1945 Constitution as amended; (ii) Decrees of the People's Consultative Assembly (Ketetapan MPR); 18 (iii) Laws (Undang-Undang) (or Government Regulations in Lieu of Laws); (iv) Government Regulations (Peraturan Pemerintah); (v) Presidential Regulations (Peraturan Presiden); (vi) Provincial Regulations; and (vii) District/Municipal regulations. 19 In terms of legal strength, it makes little difference whether a particular rule is placed in a particular type of instrument so long as lower-level regulations do not contradict those higher in the list. This list, however, is incomplete because various other executive government and judicial institutions also have the power to issue regulations, 20 but the exact position of such regulations in the hierarchy is left unclear. This includes the often-used Ministerial Regulations (Peraturan Menteri), Ministerial Decisions (Keputusan Menteri) and Circular Letters (Surat Edaran), 21 which have all, for example, played very important roles in labour regulation in Indonesia.
The making of Laws in Indonesia is often a slow process, especially when compared to the earlier years of the reform era. Planned Laws should be listed on the National Program for Legislation (Prolegnas) at the start of the five-year parliamentary term and then this list is also updated annually. 22 Inclusion in this list is not, however, a guarantee that a Law will make it through to enactment, with some draft bills sitting on the Prolegnas list for years. Unlisted bills may also occasionally be considered. 23 The DPR often fails to meet its own legislative targets. Commentators have attributed this low productivity to a variety of interacting factors including: the multi-party and multi-Commission structures in the DPR, complicated deliberative procedures, the increasing volume of non-legislative activities of DPR representatives, the Regional Representative Council (DPD)'s attempts to increase its influence, and 'money politics'. 24 The public also has the formal right to provide input on draft bills and, to facilitate this, all draft Laws are supposed to be made easily accessible to all. 25 Bureaucrats play key roles in legislative and regulatory drafting in Indonesia. It is useful to think of Laws as being 'framework' or 'umbrella' instruments which explicitly or implicitly require the executive government to fill gaps with implementing regulations in various forms. While bureaucrats may play a role in the drafting of Laws, especially those initiated by the President, they are usually the primary drafters of implementing regulations. 26 Government Regulations, for example, are drafted by the relevant ministry and become effective when signed by the President. Each national ministry has a legal division where civil servants perform the role of legislative drafters and government lawyers. Often, years may pass between the enactment of a Law and the formulation of its required implementing regulations, producing legal uncertainty in the interim. In addition, given the general nature of Laws, there may be significant leeway given to the implementing regulations. The drafting of implementing regulations may become the site of bureaucratic power struggles and infighting, 27 where the original social groups interested in the Law are often excluded, 28 and the resulting regulation may differ from the original intention of the DPR. As examples, this occurred with the implementing regulations to the mandatory corporate social responsibility requirements for resources companies in the 2007 Company Law, 29 and with the implementing regulations to the 2014 Village Law. 30 Prior to the Job Creation Law, Indonesia had not used an omnibus law format as a law-making strategy on this scale, though there have been a few Laws that amended multiple existing Laws at once. 31
Although not prohibited, there was previously no tradition of enacting omnibus laws within Indonesia's civil law system (omnibus laws are more often found in common law systems). The idea to use this format appears to have been first mooted in 2017 by the Minister for Agriculture and Spatial Planning, Sofjan Djalil, who took inspiration from his studies in the United States. 32 The justification given for the use of an omnibus law format was to streamline and clarify many overlapping Laws and regulations, as well as to push through a very large amount of legal change simultaneously without getting slowed down by DPR legislative processes. Indeed, these are the key motivations for use of the omnibus law format around the world. 33 The Job Creation Law also clearly specified that its implementing regulations would be mostly in the form of Government Regulations, as well as a couple of Presidential Regulations. This particularly sidelines the previous role of Ministerial Regulations which, as noted above, do not have an explicitly defined place in the legal hierarchy.
The hierarchy of legal instruments in Indonesia is, therefore, not just an organising principle; it also denotes the balance of powers between the different branches of government. As will be discussed below, there have been significant shifts in powers over labour regulation in Indonesia as a result of the labour cluster of amendments in the Job Creation Law.
Judicial Review in Indonesia
The other piece of the puzzle that provides important background to the labour cluster of amendments in the Job Creation Law is judicial review. Indonesia has a dual judicial review system 34 where jurisdiction is divided between the two peak courts, the Constitutional Court (Mahkamah Konstitusi) and the Supreme Court (Mahkamah Agung). The Constitutional Court has the power to review national Laws (ie, Undang-Undang) made by the DPR in order to determine whether they accord with the Constitution of the Republic of Indonesia 1945, 35 including its Bill of Rights chapter. 36 Meanwhile, the Supreme Court has the power to review lower-level regulations. 37 There are some differences, however, in how these courts have approached their judicial review functions and some significant gaps remain in coverage of various regulations within the hierarchy of legal instruments. The judicial review decisions of both courts are final, and theoretically binding, but the question of executive and legislative reaction to judicial review decisions also arises.
Indonesia's Constitutional Court was established in 2003. One of the central pillars of Indonesia's democracy, in general it is felt that the Constitutional Court 'has performed its functions with professionalism and integrity'. 38 It has tended, especially in its early years under the first Chief Justice Jimly Asshiddiqie, to produce decisions that are more rigorous and legally supported compared to other courts in Indonesia, including the Supreme Court. That said, two Constitutional Court judges have been found guilty of accepting bribes: Chief Justice Akil Mochtar in 2014 and Justice Patrialis Akbar in 2017. As noted, the Constitutional Court can review national Laws for their constitutionality. Individual Indonesian citizens, community groups, public and private legal entities and state institutions all have legal standing to bring constitutional review challenges if they have suffered damage to their constitutional rights by the application of the impugned Law. 39 There are no court administration fees charged for reviews in the Constitutional Court, although parties must cover their own legal representation costs if used.
The Constitutional Court has the powers to review national Laws as a whole as to whether they comply with proper law-making processes (ie, uji formil) and also to review the substantive content of Laws as to whether they adhere to the Constitution (ie, uji materiil). The Constitutional Court can strike down particular provisions or even entire Laws. Over time it has also developed a softer 'conditionally constitutional' type of ruling. This means that particular legal provisions are to be read according to the Court's interpretation. The Court's decisions generally have effect on the validity of the Law in question from the date of the decision, although it has in certain cases specified retroactive or future effect. 40 The Constitutional Court, as it is not an appeals court, can only rule on cases in the abstract and cannot make determinations regarding concrete cases of loss. 41 Therefore, those who bring such challenges will not be awarded damages or other restitution, and due to the prospective nature of the Court's decisions, favourable decisions often do not remedy the situation that led to the challenge. 42 A key limitation of the Constitutional Court is that its jurisdiction is limited to the review of the Laws made by the DPR and it therefore does not have the power to review lower-level national regulations or regional regulations. That review power is instead given to the Supreme Court. Although the Supreme Court had these judicial review powers in previous eras, 43 it has only really begun to use them in the post-Suharto period. It undertakes approximately seventy to ninety judicial reviews each year; these are a comparatively very small part of the Supreme Court's overall caseload, which in 2019 amounted to a total of over 19,000 new case lodgements. 44 While it is clear that the Supreme Court can review regulations as to whether they accord with laws, including against the procedural requirements found in the 2011 Law on Law-Making, 45 there are still many uncertainties about its review powers. The Supreme Court has often refused to review certain legal instruments such as Circular Letters (Surat Edaran) or administrative decisions (keputusan), on the grounds that they are not regulations (peraturan). 46 It is also not entirely clear whether the Supreme Court has the power to review regulations against any higher-level regulation including the Constitution, or only against Laws, or whether regulations of the same level can be reviewed against each other. 47 Access to the Supreme Court is also more limited; while taking a review case to the Constitutional Court is free, at the Supreme Court applicants must pay IDR 1 million (approximately US$70) if they lose the case. 48 While the Constitutional Court holds in-person hearings and gives applicants an opportunity to later improve their written submissions, the proceedings at the Supreme Court are conducted entirely on the submitted paperwork.
Some leading scholars of Indonesian law, such as Simon Butt, Nicholas Parsons and Tim Lindsey, have documented that the Supreme Court has generally been reluctant to exercise its judicial review function. 49 It has shown this reluctance through contradictory interpretation of the rules on standing, regularly using technicalities to summarily dismiss cases, and its avoidance of highly political cases. 50 While the rules on standing are similar to those of the Constitutional Court, 51 the Supreme Court has interpreted these comparatively more narrowly at times. 52 Also, prior to 2011, applications for review in the Supreme Court needed to be lodged within 180 days of the enactment of the regulation in question, 53 and being out of time was a key reason for rejection of judicial review challenges in that period. Further, the Supreme Court's decisions often suffer from a 'lack of consistent, principled, and transparent judicial reasoning'. 54 Rifqi Assegaf, however, argues that the Supreme Court has lately increased its autonomy and become more willing to exercise its judicial review power, and in many cases has invalidated regulations that contradict those higher in the legal hierarchy. 55 The judicial review decisions of both the Constitutional Court and the Supreme Court are final and binding; 56 however, enforcement has at times been problematic, particularly in the earlier years of the post-Suharto era. Government reaction to the Courts' rulings depends on the political will to follow constitutional principles. At times, government reactions have arguably taken advantage of the Supreme Court's more restricted approach to judicial review compared to that of the Constitutional Court. 57 A notable example of this occurred in relation to the Electricity Law Case Regulations; this was justified as being a review of the Law that looked to the implementing regulations for evidence.
These are, however, relatively isolated examples. Simon Butt notes that beyond the early period of the Constitutional Court's operations, fears that it would become irrelevant due to government bypassing legislative processes and thereby bypassing the Court did not manifest, and the government has tended to comply with the Court's decisions. 65 As will be detailed below, the Job Creation Law, and its implementing Government Regulations, have mostly recognised relevant judicial review decisions in relation to the 2003 Labour Law, but on some issues have attempted to circumvent these rulings.
Labour Law in Indonesia and its Challengers
Having set out the general background context to legal hierarchy and judicial review in Indonesia, in this Part, I now turn to discuss how labour law has evolved within these wider legal processes. Indonesia largely rebuilt its labour law system in the early 2000s during the democratic reform processes that followed the end of the New Order regime. Previously, Indonesian labour law had consisted of a complex patchwork of some Indonesian national laws and various lower-level executive regulations, including some which contradicted the higher-level Laws. 66 There was little vestigial Dutch influence in Indonesia's labour regulation; its nature was largely determined by the postcolonial authoritarian context. 67 The laws effectively had few protections for collective labour rights and permitted only one state-sanctioned labour union, but had a relatively well-developed series of individual worker protections. During the New Order period, there was a very brief umbrella labour law enacted in 1969, 68 but for the most part the operative rules were found in Ministerial Regulations. In 1997, a replacement Labour Law had been enacted 69 but its implementation was postponed due to widespread labour movement protests, and it never came into effect. Then, in the Reform era beginning in 1998, the systematic rebuilding of the labour law system began when one of the first acts of President Suharto's successor, B.J. Habibie, was to ratify a number of International Labour Organisation (ILO) Conventions. This included the ILO Convention on Freedom of Association. This set the scene for a major recreation of Indonesia's labour laws. Firstly, in 2000, a trade union law was enacted allowing free unionisation and providing protections for unions and their members. 70 Then, a new general Labour Law was passed in 2003, 71 covering a wide swathe of issues including decentralised minimum wage setting, working time, leave rights, and dismissal reasons, procedures and associated payments among others. This Law included some new labour rights, but it also amalgamated and upgraded a number of the New Order and early Reform era ministerial regulations. 72 Despite being far more comprehensive than any previous Indonesian labour law, the 2003 Labour Law still required a substantial number of implementing regulations to provide additional detailed rules, 73 and these were produced in subsequent years. 74
Then, in the Reform era beginning in 1998, the systematic rebuilding of the labour law system began when one of the first acts of President Suharto's successor, B.J. Habibie, was to ratify a number of International Labour Organisation (ILO) Conventions. This included the ILO Convention on Freedom of Association. This set the scene for a major recreation of Indonesia's labour laws. Firstly, in 2000, a trade union law was enacted allowing free unionisation and providing protections for unions and their members. 70 Then, a new general Labour Law was passed in 2003, 71 covering a wide swathe of issues including decentralised minimum wage setting, working time, leave rights, and dismissal reasons, procedures and associated payments among others. This Law included some new labour rights, but it also amalgamated and upgraded a number of the New Order and early Reform era ministerial regulations. 72 Despite being far more comprehensive than any previous Indonesian labour law, the 2003 Labour Law still required a substantial number of implementing 65 Butt (n 46) 6, 72. 66 There were some attempts to challenge lower-level labour regulations through judicial review in the Supreme Court in the 1990s on the basis that they contradicted higher-level Laws (see Appendix 2 for summaries of two of these cases). It appears that these cases were all dismissed by the Court, but they did nonetheless contribute to pressuring the government to change some of its regulations. regulations to provide additional detailed rules, 73 and these were produced in subsequent years. 74 In 2004, the trio of major labour laws was completed by an Industrial Disputes Resolution Law 75 which included the establishment of a new Industrial Relations Court. In related areas, a Social Security Law, which provided a framework for expanded workers' social security schemes, was also passed in 2004, 76 although its implementation did not actually begin until a decade later. For Indonesian overseas migrant workers, protections and rules on recruitment and processing were introduced through an initial law in 2004, 77 and then via an amended and more protective law in 2017. 78 The 2003 Labour Law in particular was met with opposing reactions from key actors. On the one hand, labour activists and researchers have blamed its enablement of fixed-term contracts and outsourcing for the growth in these forms of non-standard employment. Employers have used these extensively to try to avoid the higher standards that attached to permanent work contracts. 79 On the other hand, business groups have criticised the Law for its perceived rigidities, particularly in relation to high rates of severance pay for redundancy of permanent workers and the limitations placed on the use of fixed-term contracts and outsourcing. In late 2005 and early 2006, the government under then President Susilo Bambang Yudhoyono tried and failed to introduce flexibilising reforms to severance pay, fixed-term contracts, important holidays and to promote more opportunities for foreign workers. That attempt was stymied by mass worker demonstrations across major cities in Indonesia that reached a peak on May Day 2006, causing the government to backtrack on its plans. 80 Later, in 2015, under President Joko Widodo, minimum wage setting was effectively recentralised as a national government power under a Government Regulation on Wages. 81 The other main way that the 2003 Labour Law has been challenged is through Constitutional Court reviews. 
It has been among the most judicially reviewed of all Indonesian Laws. 82 At the time of writing there have been 31 constitutional reviews of this Law (see Appendix 1 for summaries of these cases). Most of these challenges were submitted by trade unions or current or former workers, but there were also two cases submitted by the Indonesian Employers Association (APINDO), one by a lawyer, and two cases submitted by the same company director. Of the 31 cases, 12 were upheld or partially upheld, of which most were among the earlier cases, probably due to the 'low hanging fruit' in the 2003 Labour Law having been targeted more quickly. Repeat challenges to the same articles tended to fail on the grounds that the constitutional argument presented was not substantially different to that of an earlier case. Three of the failed cases were due to lack of legal standing, and the most recent case in 2020 was dismissed because the Job Creation Law had already amended the impugned article. There have also been a number of Supreme Court judicial reviews of lower-level labour regulations and administrative decisions (see Appendix 2 for case summaries). These cases are more difficult to collate systematically as they relate to various forms of legal instruments and because the Supreme Court's case database is difficult to search on the basis of legal subject areas. 83 Among the cases collected here, all but one of the challenges failed. This includes a number of challenges by business groups to regional minimum wage regulations (especially in East Java). There were also four separate challenges by unions to the 2015 Government Regulation on Wages which all failed due to the 2003 Labour Law having been concurrently under review by the Constitutional Court. 84 These labour-related cases, therefore, accord with scholars' general accounts of the Supreme Court's record of often using technicalities to dismiss judicial review cases as discussed above.
The Enactment of the Omnibus Law on Job Creation and its Implementing Regulations
Many of the same 'rigidity' complaints about the 2003 Labour Law that triggered the law reform attempt in 2006 re-emerged in the labour cluster in the Job Creation Law in 2020. However, trade unions and their allies were not this time successful in blocking amendments despite again holding demonstrations both before and after the Law was passed. 85 As noted above, many commentators have seen this denial of public sentiment as a symptom of Indonesia's 'democratic decline' or 'illiberal turn'. They have argued that President Joko Widodo's primary concern has been with building alliances with political and business elites which has led him, and the majority of political parties holding seats in the DPR, to ignore popular protests against this Law. 86 For example, Marcus Mietzner argued that the Job Creation Law was undoubtedly influenced by some prominent tycoons with political roles, but that the President also actively championed the Law himself as an expedient economic policy. 87 The legislative process leading to the enactment of the Job Creation Law was unusually quick in the Indonesian law-making context. The policy to produce an omnibus law was officially launched by President Joko Widodo in his second-term inauguration speech in October 2019. 88 The Omnibus Bill was then listed on the annual Prolegnas in January 2020 and submitted to the DPR in February 2020. Due to the COVID-19 pandemic, deliberations in the DPR did not begin until April 2020. The original draft labour cluster included amendments relating to foreign workers, wages, work hours, leave, outsourcing, redundancy and social security. 89 Amid civil society dissent, 90 trade union demonstrations were organised in reaction to the draft. 91 In response, on 24 April 2020, President Joko Widodo announced that the deliberations on the labour cluster would be postponed. 92 However, the labour cluster was suddenly returned to the overall draft law on 25 September 2020, and debate resumed. 93 As noted, the full Law was passed just two weeks later in early October 2020. The haste, according to the Minister for Labour Ida Fauziyah, was due to the need to reduce the spread of COVID-19 among legislators. 94 A more cynical reading of the situation, however, suggests that the government was motivated to prevent mass labour mobilisation in opposition to the Law and thereby to prevent a repeat of the thwarted reform attempt of 2006.
Only two parties in the DPR opposed the Omnibus Bill: the Democratic Party (Partai Demokrat) and the Prosperous Justice Party (Partai Keadilan Sejahtera). Their opposition was based on various stated reasons: the lack of public participation, that the deliberations were too rushed, the lack of evidence that the law would achieve its objectives, that the law was too neo-liberal, and that the COVID-19 pandemic was where attention should truly have been focused. 95 These objections have echoed more widely as well, and the Job Creation Law has been strongly criticised for having had no or limited public consultations, as is formally required. 96 During the deliberations, it also emerged that the draft included a rule that the law could be amended merely by a Government Regulation; this caused great controversy for being unconstitutional, and was eventually removed as a 'mistake'. 97 Numerous drafts circulated and, even once the Bill was agreed to by the DPR, for some time it was not clear exactly which draft had actually been enacted. There were also accusations of post-enactment changes to the Law, 98 and it emerged that editorial changes were indeed made to the text. 99 The final Law also contains a couple of errors, with cross-references to articles that do not exist, 100 and much misinformation has circulated about its content, at least partly caused by all these drafts, rapid changes and errors.
Following the passing of the Job Creation Law, further demonstrations erupted across the country. 101 The government responded by attempting to counter what it called hoaxes or false information about the Law and by exhorting the public to await the implementing regulations to gain a true picture of its implications. 102 Opponents of the Law, including trade unions and NGOs, also quickly moved to lodge various applications for judicial review of the Job Creation Law in the Constitutional Court (see further discussion in the Postscript below).
The controversy surrounding the Job Creation Law has continued with the enactment of its implementing regulations. The Law required that these regulations be put in place within three months, ie, by 2 February 2021, again an unusually quick enactment time. All of the required 49 implementing regulations (45 Government Regulations and 4 Presidential Regulations) were promulgated on 21 February 2021, but most were apparently signed by the President on 2 February to match the Law's own deadline. 103 This arguably leaves some question as to the state of the law during this two-and-a-half-week gap, exacerbated by the fact that drafts of the regulations uploaded earlier to the official internet portal did not all match the final enacted versions. 104 Although the four labour-related Government Regulations did go through some consultation processes with selected social groups, 105 the unusually short enactment period and the high degree of coordination between the labour cluster in the Job Creation Law and its implementing Government Regulations lead to the conclusion that they were carefully drafted as an integrated package, with a clear plan about where to place certain substantive rules. This is evidenced by the fact that some rules were made more general or disappeared from the main Law but re-emerged in the Regulations. This issue is discussed in more detail in the following Part.
Reordering the Hierarchy of Labour Law Norms Via the Job Creation Law
The Job Creation Law, and its implementing Government Regulations, have together not only changed many substantive labour rules, but have also shifted their positions within the legal hierarchy. Several different factors seem to have simultaneously influenced this process. Firstly, there has been an effort to standardise the forms of legal instruments used and to dispense with those that have ambiguous positions within the legal hierarchy, such as ministerial regulations. Secondly, some of the key 'flexibilising' amendments have been 'downgraded' from the level of Law and placed within a Government Regulation. This gives important future rule-making power over these issues to the executive branch of government. It also removes these rules from the review jurisdiction of the Constitutional Court and places them within that of the Supreme Court, which (as discussed above) has traditionally been far less likely to approve review applications. Finally, some other general aspects of the central government's policies, particularly on minimum wage-setting, have been upgraded to Law, while the specific details have been placed in the new Government Regulations.
The following discussion traces these legal hierarchy changes in relation to three key issues: fixed-term contracts, the calculation of dismissal pay, and minimum wage-setting. My intention here is not to cover all such shifts in the Job Creation Law's labour cluster; rather, it is to use these selected examples to illustrate how significant movements within the legal hierarchy have occurred. In the first two examples (fixed-term contracts and dismissal payments), particular rules have been 'downgraded', whereby the key details are no longer found in the Law, but rather in Government Regulations. With regard to minimum wage-setting, there have been some 'upgrades', while the remaining details have been placed in the new Government Regulation.
Note that in some other areas, changes to the position of rules within the legal hierarchy have also occurred as a reaction to judicial review. I will separately examine responses to judicial review below.
Fixed-Term Contracts
Indonesian labour law draws a distinction between fixed-term employment contracts (perjanjian kerja waktu tertentu) and permanent employment contracts (perjanjian kerja waktu tidak tertentu) where, in addition to security of employment term, a key difference between the two types of contract relates to dismissal payments. In general, where a contract is ended early by the employer, fixed-term workers have had the right to be paid out their wages through to the end of their contract, 106 while permanent workers have rights to severance and reward payments which accrue over time. The firing of permanent workers has, therefore, usually been more expensive than for fixed-term workers.
The original provisions on fixed-term contracts, dating from the New Order era, were contained in ministerial regulations. 107 These restricted such contracts to work that could be completed in a short period of time, or that was seasonal, non-routine or related to a new product. The maximum period of employment was two years with a possible one-year extension. Then, after a 30-day hiatus, a further renewal of a maximum of two years was possible. 108 This same pattern of contract periods and renewals was then upgraded in the legal hierarchy and included in the 2003 Labour Law. 109 In 2004, a ministerial decision reinforced these regulations and also stipulated more clearly when an ostensibly fixed-term contract would be deemed to be permanent, including in circumstances where permitted renewal patterns were breached. 110 A number of cases in the Constitutional Court challenged the articles that specified when fixed-term contracts would be deemed to be permanent, 111 but only one of these cases (relating to the administrative procedures involved in this process) was upheld. 112 Under the new Job Creation Law, fixed-term contracts may still only be used for work that is of short duration or is seasonal, and contracts which do not meet these criteria will be deemed to be permanent. 113 However, mention of the maximum length of fixed-term contracts has now been entirely removed from the Law, along with all restrictions on renewal. Instead, the details on maximum length and renewal have been downgraded to the implementing Government Regulation. 114 There is a new maximum contract term of five years, which may be renewed for a maximum of a further five years. 115 The need for a hiatus between these two contracts has disappeared. The amendments do not specify precisely what should happen after two contract terms have been completed, but it appears that there is no longer any restriction on the number of renewals and therefore no automatic conversion to a permanent contract at any point in time. 116 The new Job Creation Law has not changed the principle that where an employer brings a fixed-term contract to an end early, the worker is entitled to have their remaining wages through to the end of the contract paid as compensation. 117 In addition, under the Law a new form of compensation is now to be paid when the specified term of employment, or the specified completion of a certain task, has been completed. 118 There are no details on this compensation in the Law but, as per the implementing Government Regulation, this new additional compensation is calculated as being equal to one month's wages per 12 months of service. Compensation for more than 12 months of service will be calculated proportionally (ie, 24 months of service would be equal to 2 months' wages), and service of between 1 and 12 months will also be calculated proportionally. 119 This compensation is also still to be paid where a fixed-term contract is ended early by either party. 120 Administrative sanctions can be imposed on employers for contravention of the requirement to pay such compensation. 121 The time of service for the purpose of this compensation is to be calculated from 2 November 2020, when the Job Creation Law was enacted. 122 There is an exception for micro and small enterprises, which are permitted to pay this compensation based on agreement rather than on the stated calculation. 123
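To make the proportional calculation concrete, the following is a minimal Python sketch of the two payout rules just described. The function names, the assumption of a flat monthly wage, and the example figures are mine, not drawn from the Law or Regulation.

```python
# Illustrative sketch only: function names, the flat monthly wage, and the
# example figures are assumptions, not taken from the Law or Regulation.

def early_termination_payout(monthly_wage: float, months_remaining: int) -> float:
    """Wages through to the end of the contract where the employer ends it early."""
    return monthly_wage * months_remaining

def end_of_term_compensation(monthly_wage: float, months_of_service: int) -> float:
    """One month's wages per 12 months of service, pro-rated for other periods."""
    return monthly_wage * (months_of_service / 12)

# Example: 18 months of service at IDR 4,000,000/month yields 1.5 months'
# wages, ie IDR 6,000,000, as end-of-term compensation.
print(end_of_term_compensation(4_000_000, 18))  # 6000000.0
```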
The amendments in relation to fixed-term contracts lean in contradictory directions: the lengthening of the maximum contract time to five years and the removal of the risk of workers being deemed to be permanent no doubt work in employers' favour, but the introduction of the new form of compensation is in workers' interests, as is the removal of the 30-day hiatus between renewals of contracts. Importantly, though, the key detailed provisions on length of contract term and calculation of compensation are now found in the Government Regulation, which makes them more susceptible to change by the executive branch of government in the future.
Calculation of Dismissal Payments
As noted above, rates of dismissal payments have long been controversial in Indonesia, with employer groups viewing them as producing overly high costs. During the New Order period, employers needed to seek the permission of the Regional and Central Labour Disputes Settlement Committees (P4Ds and P4P) before dismissing workers. Once permission was granted, the Committees could then direct employers to provide dismissal payments. 124 Over time, a series of Ministerial Regulations gradually introduced a scale for calculating dismissal payments based on time of service. 125 Then, in the 2003 Labour Law, those existing provisions on dismissal calculations were upgraded from regulation to Law, with the Law mostly replicating but slightly extending the previous scheme. The base scale of severance payments (uang pesangon) was determined by term of service, starting with less than one year of service requiring a severance payment of one month's wages, and rising to service of eight years or more attracting a payment equal to nine months of wages. An additional reward payment (uang penghargaan) was calculated on a separate scale, starting with three to six years of service being rewarded with two months' wages. 126 Workers were also entitled to have unused rights paid out (uang penggantian hak).
Then, the 2003 Labour Law provided that the reason for ending the employment relationship determined how the base calculations of severance and reward pay were treated. For example, redundancy for the sake of 'efficiency', where the company had not experienced two years of losses, required twice the base severance payment to be made and one amount of reward pay. 127 Similarly, where the dismissal occurred due to merger or acquisition and the employer did not want to keep the worker in the new company, twice the base severance payment was required. 128 Severance paid due to the death of a worker was also required to be twice the base amount, as was retirement where the worker had not been enrolled in a pension scheme. 129 In 2020, the Job Creation Law created considerable worried speculation in the media regarding changes made to dismissal payment calculations. 130 As it turned out, it was only once the Government Regulation was released three months later that the real reduction in dismissal payments became clear. The link between the reason for dismissal and the calculation of severance pay re-emerged in the implementing Government Regulation. 131 As noted above, this strongly suggests that the Law and Government Regulation were drafted as an integrated package. The Government Regulation sets out new severance payment calculations for various forms of redundancy and other types of dismissal, and in many cases the calculations are significantly lower than before. 132 For example, redundancy in the context of a company merger or split where the employer does not want to keep the worker in the new company now attracts only half of the severance pay that it did previously.

Per GR 35/2021, while all forms of employer-initiated redundancy now attract the base amount of reward payment and payment to replace any unused rights, the amount of severance pay still varies. Severance payments are to be calculated at the base rate (1.0 times) for: redundancy caused by merger or company split where employment cannot continue, corporate takeover where the worker is willing to continue employment, redundancy for efficiency to prevent loss, where the company is closing not due to losses, where the company is postponing debts not caused by losses, and for constructive dismissal. Severance is to be calculated at half (0.5 times) the base amount in cases of: takeover where the worker is unwilling to continue, redundancy for efficiency reasons where the company has experienced losses, force majeure where the company does close, where the company closes following two years of continuous losses, where the company is postponing debts caused by losses, and corporate bankruptcy. Severance is calculated at three-quarters of the base amount (0.75 times) for dismissal for reason of force majeure where the company does not close. The Government Regulation also provides details on dismissal payment calculations for other forms of dismissal, beyond redundancy. For example, for dismissal due to a worker's violation of ordinary provisions in their contract, collective agreement or company rules, following warning letters, they are entitled to 0.5 times the base severance pay, one lot of reward pay and payment to replace any unused rights. For dismissal for reasons of breach of 'urgent' (bersifat mendesak) contractual provisions (see discussion below), the worker will only be entitled to replacement of any unused rights and any separation pay as contractually agreed.
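The interaction of the base scale with these multipliers can be sketched as follows. This is purely illustrative: the statutory scale between the two stated endpoints (one month's wages for under a year of service, nine months for eight or more years) is assumed here to rise by one month per year, and the reason-to-multiplier mapping is abbreviated from the schedule above.

```python
# Purely illustrative: the scale between the stated endpoints is assumed
# to rise by one month per year of service (capped at nine months), and
# the reason-to-multiplier mapping is abbreviated from GR 35/2021.

def base_severance_months(years_of_service: float) -> int:
    """2003 Labour Law pattern: one month's wages for under a year of
    service, rising to nine months for eight or more years."""
    return min(int(years_of_service) + 1, 9)

# Selected severance multipliers per GR 35/2021, from the schedule above.
MULTIPLIERS = {
    "merger_employment_cannot_continue": 1.0,
    "efficiency_to_prevent_loss": 1.0,
    "efficiency_after_losses": 0.5,
    "bankruptcy": 0.5,
    "force_majeure_without_closure": 0.75,
}

def severance_pay(monthly_wage: float, years_of_service: float, reason: str) -> float:
    return MULTIPLIERS[reason] * base_severance_months(years_of_service) * monthly_wage

# Example: ten years' service, dismissed for efficiency after losses:
# 0.5 x 9 months = 4.5 months' wages.
print(severance_pay(4_000_000, 10, "efficiency_after_losses"))  # 18000000.0
```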
On the whole, these changed calculations swing in employers' favour. However, it should be noted that employers are now liable for criminal penalties for non-payment of severance and reward payments, with sanctions set as imprisonment for a period of between one and four years and/or a fine of between IDR 100 million and 400 million (between US$7,100 and US$28,500). 133 Further, the introduction of a new unemployment social security scheme may eventually help to offset these changes by shifting the burden of dismissal from employers to the state. However, once again, the 'downgrading' of the key calculations for severance pay to the level of Government Regulation, almost returning to the legal framework of New Order times, means that these calculations would be much more easily amended by the executive in the future.
Minimum Wage-Setting
Minimum wage-setting in Indonesia has undergone substantial political and regulatory changes in the past decades. Authority for minimum wage-setting was devolved from the central government to the provincial-level governors in the year 2000, and tripartite Wage Councils were formed as advisory bodies to the governors from 2004. The ability of trade unions to influence wage-setting through these wage councils was initially hampered by union fragmentation and difficulties in organising. The 2015 Government Regulation on Wages then effectively recentralised minimum wage-setting by mandating a formula that governors must use. 136 This also severely reduced the role of unions in wage-setting. As noted above, unions lodged four separate judicial review cases in the Supreme Court in relation to this 2015 regulation, but these challenges were all unsuccessful. 137 The central government Ministry of Labour has since issued further ad hoc advice, including a request to provincial governors not to raise wages for 2021 due to the COVID-19 pandemic (though a handful of governors ignored this). 138 Under the new Job Creation Law, changes to minimum wage regulation have involved upgrading to the status of Law some key provisions in the 2015 Government Regulation on central government control over wage-setting, while placing additional details within the relevant new implementing Government Regulation. 139 Provincial and regional governments that make decisions on minimum wages that contradict the central government's requirements will now incur administrative sanctions. 140 As per the Job Creation Law, each of the 34 provincial governors is still required to set annual provincial minimum wages by following central government policy.
A new, rather more complicated, formula for doing so has been stipulated in the Government Regulation. While Governors may additionally set regional (sub-provincial) minimum wages, the Government Regulation now also sets preconditions for doing so, based on economic growth figures. 141 By omission, the Law has removed the option to also set sectoral regional minimum wages, and the Government Regulation merely provides for the interim continuation of existing sectoral wage standards. 142 Wage Councils will still formally exist, but the Law and the Government Regulation are now reworded in such a way as to remove governors' specific duty to pay attention to the Councils' recommendations. 143 Another key article on minimum wages in the 2015 Government Regulation on Wages has also been upgraded to Law. This article states that minimum wages are to be applied only to workers with less than one year of service. 144 This statement in the Law was confusing, since the same article prohibited employers from paying below the minimum wage in general (with criminal sanctions attached). 145 Clarification has been provided in the implementing Government Regulation, which stipulates that the minimum wage is effective for workers with less than one year of service, and beyond this time wages are to refer to the wage structure and scale that each business must have. 146 Under the Job Creation Law, employers who pay wages late, either on purpose or negligently, will now be liable for fines to be paid to the worker. 147 The formula for calculating these fines is found in the Government Regulation; 148 late payment of a bonus will add 5 per cent of the total bonus. 149 In addition, individual contracts, collective bargaining agreements and/or company rules may also now specify fines that an employer must pay if they contravene any specified requirements in those contracts or rules. 150 Another important and entirely new change in the Job Creation Law is an exemption from the requirement to pay minimum wages for micro and small enterprises. 151 The relevant Government Regulation now provides that wages in micro and small enterprises are to be based on agreement between the employer and the worker, and are to be at least equal to 50 per cent of average community consumption in the relevant Province and at least 25 per cent above the provincial poverty line. 152 Overall, in relation to minimum wage-setting, previously existing re-centralising policies have been upgraded from regulation to Law, thereby entrenching the requirement that provincial governors follow central government wage calculations. Meanwhile, all the specific calculations for minimum wages, the lowest wage to be paid in micro and small enterprises, and the formula for calculation of fines are located in the Government Regulation. While this certainly follows the logic of providing specific details in lower-level instruments in the legal hierarchy, it also has the effect of making these more susceptible to future change and placing them outside the judicial review jurisdiction of the Constitutional Court.
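The micro and small enterprise wage floor just described reduces to a simple two-condition calculation, sketched below. The variable names and example figures are mine; since both statutory conditions must be satisfied at once, the binding floor is the higher of the two.

```python
# Variable names and example figures are illustrative assumptions; the two
# statutory conditions must both be met, so the binding floor is the max.

def mse_wage_floor(avg_community_consumption: float, provincial_poverty_line: float) -> float:
    """Lowest lawful agreed wage in a micro or small enterprise: at least
    50% of average community consumption in the Province AND at least 25%
    above the provincial poverty line."""
    return max(0.5 * avg_community_consumption, 1.25 * provincial_poverty_line)

# Example with hypothetical figures (IDR per month):
print(mse_wage_floor(3_000_000, 1_400_000))  # 1750000.0
```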
Responding to Judicial Review

As discussed above, there have been 31 judicial reviews of the 2003 Labour Law in the Constitutional Court, of which 12 were either fully or partially upheld. The Job Creation Law and its implementing regulations have together responded in various ways to these Court rulings. Although the causal link between the judgments and the legislative responses largely needs to be inferred here, the 'Academic Discussion Paper' 153 that preceded the draft Omnibus Bill did note the influence of some of these cases on the legislative drafting process in its article-by-article 'Analysis Matrix'.
These legislative responses to the judicial review cases include straightforward affirmation, as occurred in relation to the Late Payment of Wages Case, 154 such that the Job Creation Law now acknowledges that late payment of wages for three months gives rise to the right of a worker to end the employment relationship, even if the employer pays on time thereafter. 155 Similarly, the amendments have affirmed the Constitutional Court's decision in the Dismissal for Marriage Case 156 by removing the previous ability of employers to use contractual clauses to avoid the prohibition on dismissal of a worker for having a blood or marriage relationship within the workplace. 157 The Job Creation Law has also deleted article 96 of the 2003 Labour Law. 158 This article, which had limited workers' rights to claim unpaid entitlements to a maximum period of two years, was declared by the Constitutional Court to be null and void in the Time Limit for Claims Case. 159 Article 95 of the 2003 Labour Law, relating to priority rules in case of corporate bankruptcy, has likewise been amended. 160

In contrast, the Job Creation Law has also directly overridden the Redundancy for Efficiency Reasons Case, 162 where the Constitutional Court held that the article on redundancy for efficiency reasons was constitutional provided that it was interpreted as only occurring in the context of permanent closure of the business. The Law now specifically provides that efficiency is a permitted reason for dismissal whether or not the business closes permanently. 163 The Law has also entirely removed the previous ability of employers to apply to the relevant provincial governor for up to a one-year exemption from paying the minimum wage. 164 In doing so, it removes the effect of the Minimum Wage Exemption Case, 165 where it was held that if an exemption was granted, the difference in wages still needed to be paid later and became a debt owed to the worker.
However, there was no response in the Job Creation Law to the Union Bargaining Case. 166 This case concerned article 120 of the 2003 Labour Law, which gave the one union with more than 50 per cent representation in an enterprise the right to bargain collectively. In 2009, the Constitutional Court handed down a conditionally constitutional ruling, allowing up to three unions, each with at least 10 per cent membership in an enterprise, to be proportionally represented in collective bargaining. The Court also recommended legislative change to this article to clarify its meaning. However, the Job Creation Law has not done this, thereby leaving the Court's interpretation in place. No explanation has been offered as to why this has occurred.
Some other responses to the Constitutional Court's decisions require more detailed explanation, and three examples are discussed below.
Regulation of Outsourcing
In this first example, regarding the regulation of outsourcing (ie, use of subcontracted labour supplied by a third-party agency), politically driven amendments have been combined with a technical response to a Constitutional Court case. The amendments in the Job Creation Law are clearly intended to increase flexibility for employers to use outsourced labour and are explicitly based on the assumption that increasing outsourcing will necessarily increase job opportunities. 167 Yet, at the same time, somewhat contradictorily, the amendments also affirm a Court ruling aimed at protecting outsourced workers.
Since 2003, outsourcing has been specifically permitted under Indonesian labour law, but only within limits; under the Job Creation Law those limits have largely been removed, and the previous concept of 'insourced' work has also been deleted. This therefore gives far greater freedom to employers and is likely to significantly increase the amount of outsourced work. Previously, the 2003 Labour Law provided that the protection of workers and working conditions was the responsibility of the supplier firm and, somewhat ambiguously, that working conditions had to be at least of the same standard as in the main firm or in accordance with legislative minimums. 172 In the subsequent Ministerial Regulations, however, mention of working conditions needing to be at least the same as in the main firm was absent, and the only requirement was that minimum legislative labour standards be met. 173 In the new Job Creation Law, this previous ambiguity has been removed, and now outsourced workers' conditions need only match the general legal minimums. 174 Here, then, we have a subtle amendment that again falls in employers' favour.
The outsourcing permit process has also undergone a change. Previously, an intent to subcontract work had to be reported to the local Department of Labour and outsourced work could not begin until proof of registration was received. 175 As per the new implementing Government Regulation, labour supply agencies will still require a permit, but this will need to come from the relevant central Government Ministry instead of the local Department. 176 How this requirement is eventually administered will most likely determine the effect of this on employers and their use of outsourcing.
Finally, in relation to outsourcing, there has been a response to the Constitutional Court. A 2012 Constitutional Court decision (henceforth, 'Transfer of Outsourcing Case') 177 held that the rights of outsourced workers on fixed-term contracts had to be protected in the event that a labour supply service provider was changed. This is known more generally in labour law as a 'Transfer of Undertaking Protection of Employment', or TUPE. This 2012 Court decision was quickly acknowledged by the Ministry of Labour via a Circular Letter which required this protection for fixed-term contract holders. But the Circular Letter also specified that an outsourcing contract need not protect workers in the event of a change of provider if the workers involved were on permanent employment contracts. 178 Later, the Court's decision was also more broadly reinforced in the 2012 Ministerial Regulation mentioned above, which did not make a clear distinction between fixed-term and permanent employment in this respect. 179 In the new Job Creation Law, the Constitutional Court's principle has been upgraded to Law, 180 where it is specified that it is fixed-term contracts that must contain protections in the event of a change of provider. In the implementing Government Regulation, this principle is reiterated, but with the additional provision that if a worker does not obtain a guarantee of continued work then it is the outsourcing company that is responsible for the worker's rights. 181

Dismissal for Criminal Misconduct

In this second example, via the Job Creation Law and its implementing Government Regulations, there has been an attempt to reintroduce provisions that had been declared null and void by the Constitutional Court in the very first judicial review of the 2003 Labour Law (Multiple Challenges I Case). 182 This set of amendments clearly makes use of the legal hierarchy to override the Court's decision.
The new Job Creation Law has amalgamated, and in the process also amended, a number of existing articles in the 2003 Labour Law on ending the employment relationship into a new article 154A. This new article 154A now covers in just one article all possible ways that the employment relationship may end including closure of the business, redundancy, constructive dismissal, dismissal for breach of contract, resignation and retirement.
Among other issues, the changes here have tried to deal with the long-standing confusion in the law regarding an employer's right to dismiss a worker for gross misconduct. This stems from the Multiple Challenges I Case, where the articles permitting dismissal for criminal aspects of worker misconduct were declared null and void. Dismissal for criminal misconduct was held to contravene the principle of innocence until proven guilty and, further, the articles had potentially required the Industrial Relations Court (a civil court) to consider criminal matters. This decision created a great deal of confusion as to whether employers could dismiss workers for misconduct. In 2005, the Minister for Labour issued a Circular Letter to the effect that employers would need to wait for a court finding of criminal guilt before dismissing a worker. 183 Later, in 2015, the Supreme Court issued a Circular Letter declaring the opposite view: that dismissal before criminal charges had been concluded was permitted. 184 Decisions of the Court of Industrial Relations dealing with this issue were also mixed. 185 In the new Job Creation Law, all mention of the ability of an employer to dismiss a worker for serious misconduct has now been removed. Instead, the Law grants merely general permission to dismiss a worker for contravention of their individual contract, collective bargaining agreement and/or company rules, where the necessary warnings have first been issued. 186 Then, in the implementing Government Regulation, a distinction has been drawn between an ordinary contravention of such rules and a contravention which is 'urgent' (bersifat mendesak). 187 An ordinary contravention requires three warning notices and has an attached right to some severance payment. An 'urgent' contravention does not require warning notices and carries no right to any severance payment. 188 The definition of 'bersifat mendesak' is then provided in the Elucidation (an explanatory memorandum that is used to interpret a law or regulation but is not formally part of the law itself 189 ) appended to the Government Regulation, which lists a number of criminal behaviours such as theft, embezzlement, drunkenness and use of narcotics, among others. These are the very criminal behaviours that had originally been listed in the 2003 Labour Law.
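The two-track structure of dismissal entitlements just described can be sketched as follows. This is an illustration of the distinction only, with the entitlement details simplified from the Regulation's schedule quoted earlier.

```python
# Simplified illustration of the two tracks; entitlement details are
# abbreviated from the schedule in GR 35/2021 quoted earlier.

def dismissal_entitlements(urgent_breach: bool, warnings_issued: int) -> dict:
    if urgent_breach:
        # 'Urgent' (bersifat mendesak) breach: no warning letters needed;
        # only replacement of unused rights, plus any contractually agreed
        # separation pay.
        return {"severance_multiplier": 0.0, "reward_pay": False, "unused_rights": True}
    if warnings_issued < 3:
        raise ValueError("an ordinary contravention requires three warning notices first")
    # Ordinary contravention after warnings: half the base severance, one
    # lot of reward pay, and replacement of unused rights.
    return {"severance_multiplier": 0.5, "reward_pay": True, "unused_rights": True}

print(dismissal_entitlements(urgent_breach=False, warnings_issued=3))
```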
Therefore, this set of amendments seeks to shift conduct that is potentially criminal into the realm of civil law by framing it as breach of contract. Presumably, this will have implications for the drafting of individual employment contracts and collective agreements into the future. The amendments have also 'downgraded' this subject matter out of the Law and into Government Regulation, and hence out of the judicial review jurisdiction of the Constitutional Court and into that of the Supreme Court. Placing the list of 'urgent' breaches of contract in the Elucidation is a curious attempt to 'hide' the reintroduction of dismissal for 'criminal' behaviours as far down the hierarchy of legal instruments as possible. However, it should be noted that elucidations are still potentially reviewable by the relevant Court (in this instance the Supreme Court). 190
Dismissal Procedures
In this third example relating to dismissal procedures, the Job Creation Law has cleared up an inconsistency in the law, and in doing so has at least indirectly responded to additional comments (obiter dicta) of the Constitutional Court, but not to a direct constitutionality ruling. It is therefore difficult to demonstrate a causal link to the Court decision here, but it is still likely that the Court had some influence on the legislative changes.
In the 2003 Labour Law, Indonesia retained a procedural protection against dismissal from the earlier law of 1964. 191 This protection required employers to negotiate dismissal with the relevant worker and/or union and, if an agreement was not reached, to then make an application (permohonan) in writing to the relevant industrial dispute resolution body (i.e. the Court of Industrial Relations once it was established in 2006) for a 'determination' (penetapan) permitting them to fire a worker. 192 Obtaining a determination was not necessary if the worker was still in a trial period or if the worker had voluntarily resigned or retired. Workers were to remain employed with their full entitlements until a legally binding decision was reached. 193 Without such a court determination, within one year, workers could bring an action to the Court to dispute any purported dismissal. 194 A procedural anomaly was created between this requirement to obtain a determination found in the 2003 Labour Law and the 2004 Industrial Disputes Resolution Law. 195 Labour disputes are generally required to pass through bipartite negotiations between the employer and the worker/union. Then, there is a choice between mediation or conciliation (or arbitration for interests and interunion disputes) 196 before, if still unresolved, the dispute may progress to the Court of Industrial Relations. The 2004 Industrial Disputes Resolution Law briefly acknowledged the requirement that employers seek a determination for dismissal, 197 but it did not resolve precisely how this was supposed to interact with the general dispute resolution procedures.
In 2015, the Constitutional Court considered this inconsistency between the two laws in the Permission to Dismiss Case. 198 Although the Court entirely rejected the constitutional claim being made (and indeed the legal arguments in the claim were poorly constructed), in its reasoning it stated that a case for dismissal could not be conducted as a unilateral application (permohonan) for a determination but that it must be a dispute (sengketa/gugatan) where the views of the opposing party (ie, the worker) will be heard. The Constitutional Court did not, however, go so far as to directly interpret the provisions in the 2003 Labour Law requiring employers to obtain a determination, arguably leaving the law still unclear.
In the new Job Creation Law, the procedural requirement for an employer to obtain a determination from the Court of Industrial Relations in order to legally dismiss a worker has now been entirely removed. 199 As has the associated right of workers to take a case directly to the Court if they were dismissed without their employer obtaining a determination. 200 As per the relevant implementing Government Regulation, where a worker does not refuse dismissal, the employer must then notify the local department of Labour of this fact. 201 A worker who wants to refuse dismissal needs to provide their reasons in writing within 7 days. 202 Then, the worker will have the option of pursuing the general labour dispute resolution procedures. Workers are still to be kept employed, or suspended with full pay, until the dismissal dispute resolution procedures have concluded. 203 This could be for a considerable period of time, particularly if the case is appealed all the way through to the Supreme Court, although a Supreme Court Circular Letter has previously (and controversially as it contradicts a higher-level law) limited that period of paid suspension to six months. 204 This change has certainly cleared up the previous uncertainty in relation to dismissal procedures and, in doing so, has essentially affirmed the reasoning in the 2015 Constitutional Court decision. However, note that the new Job Creation Law has neglected to amend the two articles (ie, articles 82 and 96) in the 2004 Industrial Disputes Resolution Law that still mention the need for an employer to obtain a determination. This 2004 Law has been scheduled on the Prolegnas for revision since at least 2010.
Conclusions
The labour cluster in the Job Creation Law of 2020 and its implementing Government Regulations have enacted far more complicated legal changes than have generally been acknowledged in the resulting media coverage. Indeed, the sheer number and complexity of the changes means that arguably the 2003 Labour Law should have been fully re-issued rather than just amended, as the resulting jumble of amendments will most likely impede public access to legal knowledge. While many of the changes have certainly been flexibilising in nature and therefore swing in employers' favour, others have also been to workers' advantage. A deeper examination of the amendments has also revealed significant shifts in the placement of labour regulations within the legal hierarchy. In many instances the detailed rules and calculations previously found in the 2003 Labour Law have been downgraded to the Government Regulation level. When taking a longer-term historical view, we see that to some extent this has undone the 'upgrading' of ministerial regulations to Law that occurred during the democratisation period of the early 2000s and the enactment of the 2003 Labour Law. The Job Creation Law now places many detailed rules within the domain of the executive government, and they are hence more susceptible to future change. This is particularly significant given the historical difficulties in passing legislative amendments in Indonesia: the government relied on the extraordinary times of the COVID-19 pandemic to pass the Job Creation Law.
The downward movement in the hierarchy also removes these rules from the Constitutional Court's judicial review jurisdiction into that of the Supreme Court. Those few substantive rules that have now been 'upgraded' to Law, most particularly in relation to minimum wage-setting, can be linked to the increasing re-centralisation of government powers in Indonesia and also signal the continued marginalisation of trade unions from wage-setting processes. This certainly gives some credence to the link that scholars have drawn between the Job Creation Law and the 'democratic decline' or 'illiberal turn' in Indonesia. The strong impression that this Law and its implementing Government Regulations were largely drafted as an integrated package, blending legislative and executive government functions, is also troubling for democratic principles of separation of powers.
The conclusions that we can draw from the responses to Constitutional Court reviews, though, are more mixed. For the most part, these amendments have taken Court decisions seriously, either by affirming them or directly overriding them. There was only one case (Union Bargaining Case) that has been ignored entirely in the amendments, leaving the Court's interpretation in place. This thereby generally affirms the role of the Constitutional Court as a crucial pillar of Indonesia's democratic political and legal system, and this does at least mean that some principles of human rights derived from the Constitution and acknowledged by the Court have been integrated into the legislation. However, the reintroduction of dismissal for 'criminal' misconduct indicates some creativity in the legislative response to the Constitutional Court, and the placement of these rules within the Government Regulation and its Elucidation is reminiscent of some of the earlier evasive government responses to the Court's decisions.
Postscript
Since this Article was written, Indonesia's Constitutional Court has handed down decisions in thirteen judicial review challenges to the Job Creation Law. Many of the applicants in these challenges were trade unions and other civil society organisations and individual workers. All but one of these challenges were dismissed, but the one case that (partially) succeeded is particularly significant. This decision, 205 announced on 25 November 2021, held that the Job Creation Law's law-making procedures and use of an omnibus law format were conditionally unconstitutional. However, rather than declaring the Job Creation Law to be immediately null and void, the Court controversially gave the DPR a maximum time limit of two years in which to fix the Law's defects. At the end of that time limit, if the Law is not ameliorated, then it will become permanently unconstitutional. Planning the government's response to the Court's decision is still ongoing, but it appears that the government will first attempt to amend the 2011 Law on Law-Making in order to specifically enable the use of omnibus law formats.
"year": 2022,
"sha1": "1264d5e7b2071575eabeb14a745a5ec17df09105",
"oa_license": "CCBY",
"oa_url": "https://www.cambridge.org/core/services/aop-cambridge-core/content/view/3F547D86D2D559A10AD212AD3C122E18/S2194607822000072a.pdf/div-class-title-indonesia-s-omnibus-law-on-job-creation-legal-hierarchy-and-responses-to-judicial-review-in-the-labour-cluster-of-amendments-div.pdf",
"oa_status": "HYBRID",
"pdf_src": "Cambridge",
"pdf_hash": "533382434e2dc0fde1a3ab128fd037c0cf8588c7",
"s2fieldsofstudy": [
"Law",
"Political Science"
],
"extfieldsofstudy": []
} |
Violent Injuries Among College Students in China: An Exploration of Gender Mental Stress Model
The purpose of this study was to explore the gender-specific mental stress model of violent injuries among Chinese college students. A cross-sectional, multistage sampling process was employed to recruit a total of 5025 college students from 22 universities in China. Survey respondents reported their exposure to violent injuries and noted individual and environmental factors that could relate to violent injuries. Both unadjusted and adjusted statistical methods were used to examine the relationships between selected individual and environmental variables with violent injuries among male and female college students. The prevalence of violent injuries among male and female college students in this study was 5.20% (95% CI [0.05%, 10.35%]) and 4.40% (95% CI [0.10%, 7.80%]), respectively. The study found that higher mental stress (OR: 3.32), lower level universities (OR: 5.99), and family location in rural areas (OR: 4.00) were associated with a higher likelihood of violent injuries, and having a mother employed as a professional (OR: 0.07) was associated with a lower prevalence of violent injuries among male students. Unlike male students, mental stress and mothers' occupation were not associated with violent injuries among female students. University type was also associated with violent injuries, but this association was inverted (OR: 0.06) among female students. This study found gender-specific relationships affecting violent injuries among college students in China. Prevention strategies need to be developed in consideration of gender influences and should be enacted to reduce the negative impact of violent injuries on society and personal health in China.
China is a developing country and has been experiencing a social transition from a center-directed to a market-based economy since 1978. Chinese people have had to face many social problems accompanying this long transition, such as uncertain social environments, social inequity, unequal development, and fierce competition (Yang et al., 2009). Chinese college students typically come from one-child families, are economically disadvantaged, and have poorly developed psychological coping skills (Jiang et al., 2018). This makes them vulnerable to mental and behavioral problems in today's society. A study demonstrated that 27.7% of Chinese college students have a tendency toward violent behavior (Guo et al., 2010). Violence-related injuries have become a severe social and public health problem among Chinese college students.
There are many causes of injuries, and physical violence is one such cause (World Health Organization, 2018; Yang et al., 2015). Violent injuries in this study were defined as any physical pain or damage that was intentionally inflicted by another person. This definition has been used in previous studies (Yang et al., 2015). Ecological models emphasize that violent injuries are influenced by both individual and environmental variables. Previous studies have reported factors associated with violent injuries, but most focused only on the individual level (Guo et al., 2010; World Health Organization, 2018). Few studies have examined violent injuries at the environmental level (World Health Organization, 2018; Yang et al., 2015). Many studies have explored gender differences in violent injuries (Faergemann et al., 2009; Subba et al., 2010; Tingne et al., 2014), but no comparative studies of men and women were found that examined patterns influencing violent injury using a multilevel framework. This study explores gender and the mental stress model in relation to violent injuries at both the individual and environmental levels.
According to the familiar Stimulus, Cognition, and Response (SCR) stress model, various stimuli (S) affect the internal states of people through cognition (C), which in turn leads to a mental response (R) (Mehrabian & Russell, 1974). Violent injuries, in turn, affect people's physical and mental health (Ponsford, 2016; Schwartz et al., 2015). This model is important for understanding how a stimulus, such as mental stress, may contribute to a person's risk of violent injury. Courtenay et al. (2002) argued that socially constructed gender roles have a far-reaching influence on the ascriptive guidelines of what is considered appropriate for each gender. Chinese society is dominated by patriarchy. Masculinity is deeply rooted in the cultural mandates of any patriarchy, which expects men to be the leaders of society and the heads of their families. In a patriarchal society, men are expected to take primary responsibility for maintaining the economic well-being of their society and family, while women are relegated to subordinate roles primarily in the domestic sphere. Thus, men may face more stressful situations and could therefore be more vulnerable to stress, which could lead to more mental and health problems that, in turn, could result in violent injuries.
Social resources are necessary to help prevent mental stress and associated violent injuries. The violent injuries that occur may be different in males and females due to different socioeconomic resources and their respective gender roles (Yang, 2018). Individual, family, and organizational situations are very different and thus may be associated with different mental stressors and different violent injuries based on gender (World Health Organization, 2008;Yang, 2018).
Family and organizational socioeconomic positions among college students play a critical role in behavioral and health problems. Since 1978, China has been transitioning from a centralized to a market-based economy (Yang et al., 2009). The transition has promised improved living standards and markedly increased choice in consumer consumption, education, health, and employment for the Chinese population (Hao, 2006; Yang et al., 2009). However, the Chinese countryside still typically lags far behind urban areas (Lin, 1992; Yang et al., 2009). It was therefore predicted that rural college students would have more life challenges that would lead to more behavioral and health problems. In China, there are great inequalities in social resources between different universities. High level universities are heavily invested in by the government and have more financial resources, excellent physical facilities and equipment, and more opportunities for educational advancement. The difference between high level universities and lower level universities may have an influence on violent injuries. This influence may also be experienced differently between male and female students, because they have different cognitions, in part due to differences in gender norms and roles (Mao & Bottorff, 2016; Mehrabian & Russell, 1974). This study examines how mental stress and other factors affect violent injuries among male and female Chinese college students.
Study Area and Participants
A nationwide, cross-sectional, multistage sampling method was used in this study. To obtain a representative sample, geographic location, cultural diversity, and economic development were considered in the sampling process. This was important since China is such a large country with diverse cultures. In stage one, 22 universities were selected across mainland China, differentiated by regional location. In stage two, levels within each university were selected; all levels selected had to have medical/health courses to be included. In stage three, one-third of medical/health courses were randomly selected from each level. On average, three classes were selected to participate in the study at each university. In stage four, all students in the selected classes were surveyed. The sample size was determined based upon the need to obtain accurate prevalence estimates for violent injuries. It was calculated from Var(p) = D × p(1 − p)/N, where D is the "design effect" resulting from the sampling technique (Yang, 2018).
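As a rough illustration of how such a design-effect calculation translates into a required sample size, consider the following Python sketch. The margin of error, confidence level, and design effect values here are assumptions for illustration; they are not reported by the study.

```python
# The target margin of error, confidence level, and design effect here are
# illustrative assumptions; the study does not report them.
import math

def required_sample_size(p: float, e: float, deff: float, z: float = 1.96) -> int:
    """n = deff * p(1-p) * (z/e)^2: the simple-random-sample size inflated
    by the design effect of the multistage sampling."""
    return math.ceil(deff * p * (1 - p) * (z / e) ** 2)

# Example: expected prevalence 5%, margin of error 1 percentage point,
# design effect 2.
print(required_sample_size(0.05, 0.01, 2.0))  # 3650
```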
Data Collection
All responses to the survey questionnaire were anonymous. The same data collection protocol was used across all 22 universities to ensure homogeneity of questionnaire administration and data collection techniques. Participants were asked to complete a standardized questionnaire after receiving instructions from survey administrators in the classroom. It took about 15 min to complete the paper-based questionnaire, and every student was given enough time to ask clarifying questions if needed. This study was approved by the Ethics Committee at the Medical Center, Zhejiang University (ZM, 14201), and verbal consent was obtained from all respondents following verbal instruction from an investigator. Each participant's consent status was uniformly recorded in the record books. Students had an opportunity to request information or clarification about the survey items and were given adequate time for questionnaire completion.
Measures
Dependent Variable. There are various definitions of violent injuries. As in previous research, violent injuries in this study were defined as any physical pain or damage that had been intentionally inflicted by another person (Grisso et al., 1999). Specifically, the participants were asked if they had received any violent injuries during the past 12 months. A reportable injury was defined as any injury satisfying at least one of the following criteria: (a) required nonemergency medical treatment, (b) required emergency room or other kinds of emergency treatment, or (c) required rest for one-half day or longer (Yang et al., 2015).
Independent Variables
Individual-Level Independent Variables. Given that some studies have stressed that culture and family environments are associated with violent injuries (Herbert et al., 2011; Yang et al., 2015), ethnicity (Han Chinese/minority status) and both the father's and mother's occupations were included as variables in this study. Age was included as a routine control variable. Other research has indicated that low socioeconomic status is associated with high mental stress and mental problems (Yang et al., 2009, 2015), so family income was included as a variable in this study. This variable was measured through the question: "how much was the income of each person in your family last year (in RMB Yuan)?" Categories included below ¥10,000, ¥10,000 to less than ¥20,000, and ¥20,000 and over (see Table 1).
Mental stress was an individual variable included in this study, and it was measured by the Perceived Stress Scale, Chinese version (CPSS) (Yang & Huang, 2003). This questionnaire has acceptable levels of reliability and validity and has been widely used to assess respondents' mental stress (Ge et al., 2020; Lin et al., 2019; Peng et al., 2019; Yang et al., 2009). This scale comprises 14 items that assess a participant's perception of stress during the month prior to taking the survey. Items were rated on a five-point Likert-type scale. The higher the total score, the greater the perceived level of stress. Following prior practice, high stress was operationalized as a total score ≥25 (Yang et al., 2009).
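A minimal sketch of the CPSS scoring rule described above follows. Note that the published scale's exact item scoring range and reverse-scored items are not detailed in this text, so the sketch assumes item scores have already been oriented so that higher values indicate more perceived stress.

```python
# Assumes each of the 14 item scores has already been oriented (including
# any reverse-scored items) so that higher values mean more perceived
# stress; the published CPSS scoring details govern in practice.

HIGH_STRESS_CUTOFF = 25  # total score >= 25 operationalized as high stress

def cpss_total(item_scores: list[int]) -> int:
    assert len(item_scores) == 14, "the CPSS has 14 items"
    return sum(item_scores)

def is_high_stress(item_scores: list[int]) -> bool:
    return cpss_total(item_scores) >= HIGH_STRESS_CUTOFF

# Example: a respondent scoring 2 on every item totals 28 -> high stress.
print(is_high_stress([2] * 14))  # True
```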
Environmental-Level Variables
Two environmental-level variables, prior environment and current environment, were included in this study. The former refers to the home environment in which one grew up before age 13 years; the latter refers to the environment one is currently living in at his or her university.
Prior environment is related to family location. It is known that the social and economic development of a young person can vary greatly between city, urban, and rural environments. To determine prior environment, participants were asked, "where did you grow up before 13 years of age?" The possible responses were a city, a county or town, or a rural area. A host of studies have confirmed that birthplace and place of residence before 13 years of age play an important role in later behavioral patterns (Yang, 2018). Current environment was measured by university type. In China, universities are ranked from low to high, and ranking is directly related to the social and economic resources available to a given college. Thus, college ranking or type may impact mental health and ultimately violent injuries. University type was determined using the China university ranking system (low, middle, and high level) as established by the National Ministry of Education (National Ministry of Education, 2015).
Data Analysis
All data were entered into a Microsoft Excel database. The dataset was then imported into SAS (version 9.3) for statistical analyses. Descriptive statistics were used to calculate the prevalence of violent injuries among male and female students, respectively. Both unadjusted and adjusted methods were used in the analyses to assess associations between the dependent variable and selected factors that could be related to violent injuries. The unadjusted method used only the selected factors of interest as independent variables. SAS survey procedures were applied in all analyses, using university as the clustering unit to account for within-cluster correlation attributable to the complex sample. Associations were confirmed through application of a multilevel logistic regression model using the SAS GLIMMIX procedure (Wang et al., 2008). In this analysis, separate multilevel models were built for female and male students. Each started with the Null Model, a two-level (individual and university city) model with random intercepts that included no predictors except a constant, to assess variation in an individual's likelihood of experiencing a violent injury. From this base, full models were constructed. The significance of the random parameter variance estimates was assessed using the Wald joint t-test statistic. All analyses were weighted. Weights included: (a) sampling weights, as the inverse of the probability of selection, calculated by university, and (b) post-stratification weights, calculated in relation to sex, based on estimated distributions of this characteristic from a national survey (National Ministry of Education, 2015). The final overall weights were computed as the product of the above two weights. A nonresponse weight was not considered because nonresponse rates were low in this study.
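The weighting scheme can be sketched as follows. The selection probability and national population share used in the example are hypothetical illustrations, not the study's actual figures.

```python
# The selection probability and population sex share below are hypothetical
# illustrations, not the study's actual figures.

def overall_weight(p_selection: float, sample_share: float, population_share: float) -> float:
    sampling_w = 1.0 / p_selection                 # inverse probability of selection
    poststrat_w = population_share / sample_share  # align sample sex mix to national estimate
    return sampling_w * poststrat_w

# Example: a male respondent selected with probability 0.02, where males
# are 31.7% of this sample but, hypothetically, 48% of the national
# college population.
print(round(overall_weight(0.02, 0.317, 0.48), 1))  # 75.7
```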
Results
A total of 5025 individuals were identified as potential subjects for this study. After excluding incomplete responses, a final sample of 4903 (97.6%) valid questionnaires was included in this study. Of the respondents, 3350 (68.3%) were female and 1553 (31.7%) were male. The study found that violent injury prevalence among male college students was 5.2% (95% CI [0.05%, 10.35%]) and among female college students was 4.4% (95% CI [0.1%, 7.8%]); this difference in prevalence was not significant. The unadjusted model indicated that age, mother's occupation, family location, and university type were associated with violent injuries among female students. Age, father's occupation, mother's occupation, family income, family location, stress, and university type were associated with violent injuries among male students.
The multilevel logistic regression model showed that higher mental stress (OR = 3.32, 95% CI [1.98, 5.57]), attending a lower level university (OR = 5.99, 95% CI [1.22, 9.51]), and being raised in a rural setting (OR = 4.00, 95% CI [1.33, 14.28]) were associated with a higher likelihood of violent injuries, while having a mother with a professional occupation (OR = 0.07, 95% CI [0.01, 0.35]) was associated with a lower incidence of violent injuries among male students. Unlike male students, mental stress and mothers' occupation were not associated with violent injuries among female students. University type was associated with violent injuries, but this association was inverted (OR = 0.06) among female students. In addition, being raised in a rural area (OR = 7.14, 95% CI [1.47, 33.33]) was also associated with a higher likelihood of violent injuries among female students (see Table 1).
Discussion
This study examines violent injury prevalence and identifies related factors among male and female college students in mainland China. The study found that violent injury prevalence was 5.2% (95% CI [0.05%, 10.35%]) among male college students and 4.4% (95% CI [0.1%, 7.8%]) among female college students. Male prevalence of violent injuries was slightly higher than female prevalence, but this difference was not significant. The prevalence of violent injury was lower in college females than that reported in female adults (10.7%, 95% CI [7.8%, 15.5%]), while the prevalence of violent injury in college males was similar to that reported in male adults (4.5%, 95% CI [1.3%, 6.2%]) (Yang et al., 2015). In Chinese society, men hold the stronger position and women the weaker position, and this difference is especially prominent within Chinese families. After females are married, the male assumes the dominant role and the female accepts the more passive role. Males in the dominant role may find it acceptable to violently abuse women to keep them in a passive and dependent role (Yang et al., 2015). This may explain why violent injuries are higher in adult females than in college-student females, who are typically unmarried (Zhao et al., 2006).
Addressing a gap in the literature, this study found mental stress was positively associated with violent injuries among male students but did not find this association among female students. Culturally related gender norms and gender roles may contribute to this difference. Gender norms and roles influence attitudes and behaviors in many areas, including relationships, parenting, schooling, work, and health practices (Johnson et al., 2009; Mao & Bottorff, 2016). Gender roles can also create economic and cultural pressures that affect the lives of females and males differently (Mao & Bottorff, 2016). Chinese society is dominated by patriarchy, which refers to a social system where males are the central authority figures. Demonstrating masculinity is deeply rooted in the cultural mandates of any patriarchy. In a patriarchal society, men are expected to take primary responsibility for maintaining the economic well-being of their society and family, while women are relegated to subordinate roles in the domestic sphere. Thus, men may feel high levels of pressure and stress to take care of their wives and families, which may lead to mental health problems that can contribute to violent injuries. This study provided support for this relationship in that males reported significantly higher stress scores than females. The prevalence of high stress levels among males was 70.9% (95% CI [64.2%, 77.6%]) and among females was 56.2% (95% CI [49.0%, 63.4%]).
This study found that male students attending low level universities have a higher prevalence of violent injury than male students attending higher level universities, but this association was inverted among female students. Both university environment stress and gender norms and roles may contribute to this difference within the context of Chinese culture. Lower level universities have fewer financial and social resources, poorer living environments and equipment, and fewer educational opportunities. This situation conflicts with the male social role, which requires males to obtain a good education, secure a good job, and provide for their family. Females, by contrast, are expected to lead a comfortable life. Lower level universities may lack resources, but they also have lower academic expectations, which may make college life easier and less stressful. For female college students, this may lead to fewer mental and behavioral problems, including violent injuries.
This study found that mothers working as professionals was a protective factor against violent injuries among male students, but it had no influence on violent injuries among female students. This may be due to the difference in demand for social resources between men and women arising from gender roles. Parents' social position is an important social resource, and male students may need this resource more than females because they have more social and family responsibilities. Parents who were employed as professionals typically had relatively good social reputations, high income, stable work, and more free time, which tended to reduce the risk of violent injuries for them and their children (Yang et al., 2015).
This study found a strong association between living in urban or rural environments and violent injuries among both female and male college students. This is most likely because of the distinct social and economic differences between urban and rural areas. Rural areas throughout most of China still lag behind urban areas in terms of income, education level, public facilities, medical care, and old-age care, and rural residents face great life challenges (Yang et al., 2009). This situation may impact both their male and female children.
There are several limitations of this study. Firstly, this study used a cross-sectional design, and thus causal relationships could not be determined. Secondly, the results were drawn from college students and thus cannot be generalized to noncollege populations of the same age or to other populations in China. Finally, our range of environmental factors was relatively limited. In future work, more environmental variables, such as regional location, level of economic development, city population size, and unemployment rate, should be considered.
Conclusion
This study found that gender-specific mental stress and contextual variables were related to violent injuries among college students in China, and culturally related gender norms and gender roles may contribute to these gendered patterns. The findings from this study can be used to inform future violent injury prevention programs and policies in China. Greater attention needs to be given to males due to the mental stress they experience and the relationship between mental stress and violent injuries.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article. | 2020-06-27T13:06:12.180Z | 2020-05-01T00:00:00.000 | {
"year": 2020,
"sha1": "6363c1ea893a22cf6cdfd7bcee904449105f9ad4",
"oa_license": "CCBYNC",
"oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/1557988320936503",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "766c2dfe92f44185c2e6b9abff851d00d38b5add",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Medicine",
"Psychology"
]
} |
247066497 | pes2o/s2orc | v3-fos-license | How Various Drug Delivery Methods Could Aid in the Translation of Genome Prime Editing Technologies
Drug delivery systems can be engineered to enhance the localization of therapeutics in specific tissues in response to externally applied stimuli and/or local environmental changes. In recent decades, efforts to improve drug delivery techniques at both nano- and macroscale have led to a new era of therapeutic efficacy. Such technological advancements resulted in improved drug delivery systems regularly entering the clinical setting. However, these delivery innovations are unfortunately not always readily applied to newly developed technologies. One of these new and exciting technologies that has been overlooked by drug delivery scientists is prime editing. Prime editing is a novel genome editing technology that exhibits the plug-and-play capability of CRISPR/Cas9 editors while avoiding double-strand DNA breaks throughout the entire process. This article focuses on describing the potential advantages and disadvantages of selecting nanomedicine technologies along with prime editing capabilities for the delivery of cargo.
Introduction
The field of drug delivery has overcome many pharmaceutical hurdles. However, the majority of drug delivery research continues to focus on only a subset of pharmaceutical agents and diseases. A chief example of this "pharmaceutical neglect" is the lack of proposed delivery strategies for the new, high-interest field of genome prime editing. Since clustered regularly interspaced short palindromic repeats (CRISPR)-inspired prime editing technology was only recently described [1], its publication has resulted in hyped reports from several media outlets, the founding of a start-up company, and the receipt of venture capital investments. In essence, this technology allows for flexible genome manipulation without the double-strand DNA breaks observed in standard CRISPR/Cas9 systems. Despite the broad potential to create novel treatments for a plethora of genetic diseases, no proposals have yet been shared in the public domain for an effective nonviral delivery system for prime editing technologies. This article focuses on identifying the requirements for an effective prime editing delivery system, evaluating the advantages and disadvantages of current nanomedicine vehicles, and proposing which research areas should be pursued.
Prime Editing
Prime editing technologies have the potential to essentially revolutionize current genome-editing practices. Since the CRISPR/Cas9 system was first utilized, adoption of the technology has soared, with more than $1 billion currently being spent annually by federal governments on CRISPR-based research [2]. However, while the CRISPR/Cas9 system has experienced rapid adoption, drawbacks of the system have become evident. CRISPR/Cas9 systems utilize either nonhomologous end-joining or homology-directed repair to restore DNA viability, both of which involve the repair of double-stranded DNA breaks [3]. In selecting a system that requires double-strand breaks, the prevalence of undesired insertions and/or deletions increases. Prime editing serves to overcome many of the limitations of the current CRISPR systems by allowing for genome manipulation using only single-strand DNA breaks (Figure 1) and may potentially give rise to a new frontier in genome editing research.
What Is Prime Editing?
Prime editing is a versatile genome editing method that "writes" new genetic information into precise DNA locations.
This method differs from CRISPR/Cas9 systems in that it employs a catalytically impaired Cas9 endonuclease fused to a reverse transcriptase that is programmed via prime editing guide RNA (pegRNA) [1]. pegRNA encodes both the desired edit and the target DNA site. The prime editing method enables DNA insertions, deletions, and substitutions without requiring double-strand DNA breaks or exogenous donor DNA templates. By avoiding the sporadic DNA repair associated with double-strand breaks, prime editing can improve the accuracy of gene editing in vitro and theoretically also in vivo.
Unfortunately, the in vivo efficacy of prime editing systems is yet to be adequately demonstrated. To achieve widespread in vivo use, effective delivery of the prime editing machinery is required. Lentiviral systems have been proposed as transporters for the base editor 3 (BE3) system, as its large size would not fit within generic adenoviral systems [4]; however, viral delivery systems often lead to mutagenic and carcinogenic side effects [5]. Therefore, a nonviral delivery system is preferred for patient safety reasons. Unfortunately, transfection efficiency is often lower for such systems. Ultimately, the development of effective nonviral delivery systems for prime editors will likely require innovation and a thorough evaluation of delivery system requirements.
Requirements for Delivery of Prime Editing Technologies
Before directly identifying the requirements for a delivery vehicle for prime editing technologies, it should be acknowledged that the optimal delivery vehicle will likely change depending on the specific disease target. Unfortunately, the very advantage of broad applicability of prime editing for a plethora of diseases is a major disadvantage in selecting the proper delivery method. A single delivery approach will not enable the full-breadth adoption of prime editing for all therapies designed to treat known human pathogenic genetic variants. However, certain techniques that provide a reasonable mode of delivery to a broad subset of disease targets, and their broad requirements, will be discussed here. First, one of the most important considerations of prime editing delivery systems is that they must be able to deliver the entire prime editing complex. The entire complex is composed of a prime editing protein containing an RNA-guided DNA-nicking domain (usually Cas9 nickase) fused to a reverse transcriptase (RT) domain and complexed with prime editing guide RNA (pegRNA). Essentially, both the protein and a large RNA strand must be delivered. In theory, these components can be either codelivered in the same carrier or transported in separate carriers. However, in practice, codelivery of active agents in the same nanocarrier has generally resulted in greater therapeutic efficacy [6], likely by limiting opportunities for errors during administration and transport. Therefore, it is highly recommended that the nanocarrier be capable of carrying both prime editing protein and RNA as a PE-pegRNA complex while preventing the two materials from detrimentally interacting.
Second, the delivery of the PE-pegRNA complex tends to be more difficult than that of small-molecule drugs. In general, proteins for drug delivery exhibit notoriously short circulatory half-lives, poor absorption and permeability profiles, and high rates of denaturation during transport [7]. RNA molecules are similar in that they are readily metabolized when exposed in the bloodstream, induce immune responses in many extracellular environments, and demonstrate low tissue penetrance [8]. In essence, an optimal delivery system must fully cloak both protein and RNA components from the bloodstream and tissue interactions until its arrival at the target cell.
Third, genome editing technologies require delivery vehicles that enable intracellular and intranuclear (or intramitochondrial) uptake in the target cell. While many gene delivery systems utilize viral vectors, nonviral delivery systems are often preferred because of engineered control over toxicity profiles. Nonviral vectors are more advantageous than viral vectors due to their biosafety associated with lower immunotoxicity. Plasmid DNA, liposome-DNA complexes (lipoplexes), and polymer-DNA complexes are examples of commonly used nonviral vectors. However, the transfection efficiency of nonviral delivery systems regularly plummets. Alterations to the formulation of nonviral delivery systems to improve solubility (e.g., PEGylation) often work counterproductively when intracellular entry is necessary [9]. Furthermore, while hydrophilic surfaces restrict interactions with bloodstream components, they also frequently inhibit interactions with target cells. A designed prime editing delivery system should be engineered with a specific mechanism for target cell penetration and likely a method for intracellular motility and organelle uptake.
Fourth, the delivery system should enable a path for regulatory approval and therefore cannot be designed with extreme complexities. Drug delivery scientists discreetly shy away from admitting that very few nanomedicine systems are currently available in the commercial market, despite the large growth of interest in the scientific field. Many strong nanomedicine candidates fail to achieve set regulation standards as a result of the inability to account for all the degradation products, lack of demonstrated enhanced efficacy, or the triggering of system-mediated toxicities [10]. In the long run, it is strongly recommended to opt for systems that exhibit robust semblance to current commercially available nanomedicine products.
Advantages and Disadvantages of Specific Nanomedicine Systems
In general, a goal of nearly all drug delivery systems is to reduce off-target effects from the wide biodistribution of active pharmaceutical ingredients (APIs). Many factors need to be considered when designing an effective drug delivery system. Although it is beyond the scope of this article to fully list them here, some obvious considerations when engineering a system are to be cognizant of the nature of the drug (small molecule, large molecule, biologic, gene therapy, etc.), the particular tissues or cells being targeted, the drug modes of action, and the route of drug administration, as well as several other pharmacological factors. Despite the need for such an effort, the payoff can be monumental as the advantages of tailored drug delivery systems far outweigh nontargeted therapeutics. Even though local drug administration may aid prime editing delivery, local administration routes are not discussed here as systemic transporters likely have the broadest applicability for diseases susceptible to prime editing treatment. Accordingly, several nanotechnology-based delivery systems are addressed (Figure 2), and their potential for prime editing delivery (Table 1) is discussed.
Liposomes.
Liposomes are vesicles composed of at least one lipid bilayer [11]. They are generally spherical, yet can assume other shapes with proper engineering. The bilayer structure of liposomes effectively serves as a barrier between the internal components and the external surrounding fluid, allowing therapeutic agents to be protected during transport. The phospholipid assembly also enables hydrophilicity on both sides of the membrane, allowing the loading of water-soluble drugs within the liposome interior and the loading of lipophilic compounds by housing them within the bilayer. Drugs exhibiting an intermediate partition coefficient (logP) can segregate between the two phases. The liposomal compartmental space can house smaller liposomes, enabling unique architectures and the development of multilamellar liposome types. Specific lipids and other components can be tailored to increase the rigidity and stability of the liposomes or to ensure a slow-release vessel. Furthermore, mechanisms for targeting and tracking the vesicles can be incorporated within the liposomes throughout the majority of the synthesis process. Accordingly, liposomes should undoubtedly be considered when contemplating a straightforward nano-enabled delivery approach for new commercially relevant therapeutics.
With regard to the delivery of prime editing machinery, liposomes have several distinct advantages. First, the size of the liposomes can be optimized for cargo delivery, and the interior compartment space can house both prime editing protein and pegRNA. In fact, delivery of large protein-RNA complexes has been previously demonstrated using liposomes [12]. Second, several liposomal products have passed the US Food and Drug Administration (FDA) regulations and are commercially available. Adapting these currently approved liposomes for prime editing delivery would allow for a more direct path through the regulatory process. Finally, more advanced liposomal gene delivery systems have been engineered to enhance cellular [13], nuclear [14], and mitochondrial uptake [15] and could be examined for design innovation purposes.
However, liposomal systems often have several drawbacks. While liposomes mimic natural membranes, they are still foreign materials in the body and are known to be cleared by the mononuclear phagocytic system [16]. Efforts to use synthetic phospholipids and incorporate polyethylene glycol (PEG) coatings have somewhat lengthened the time to full blood clearance [17]; however, there are concerns that these techniques inhibit bloodstream extravasation. In addition, liposomal stability is a concern. Phospholipids sporadically jump from one membrane to another, leading to the occasional merging and coalescence of vesicles. Liposomes are no different and may fuse with unintended cell membranes or burst when in close proximity to other membranes. One estimate is that up to 30% of liposomal contents can be leaked in this manner, leading to large amounts of nontargeted API exposure [18]. The in vivo fate of liposomes can be significantly affected by the interaction between liposomes and cells, where they can be absorbed or undergo endocytosis. Stability during storage also remains an issue, with most liposomal formulations requiring frozen storage conditions. If a specific prime editing therapy could initiate severe side effects without directed localization, liposomes may not provide the stability needed to adequately reduce off-target effects. Nonetheless, liposomes are likely to be strong candidate carriers for low-toxicity prime editing therapies.
Micelles.
Micelles are the fundamental building blocks of emulsion-based formulations. A micelle is a three-dimensional assembly of multiple amphiphilic surfactants. The hydrophobic ends (tails) of the surfactant molecules arrange themselves near one another in order to minimize contact with water molecules, leading to a structure in which the hydrophilic ends encounter the water molecules at the periphery. Like liposomes, micelles are generally spherical, yet rod and planar structures can be obtained using surfactants with proper head-to-tail volume ratios and distinct solution conditions. By exchanging the solvent, inverse micelles can also be formed, where the hydrophilic ends cluster and the hydrophobic ends interact with the solvent. Pharmaceutical formulations take advantage of both micelle types in the form of water-in-oil (W/O) emulsions and oil-in-water (O/W) emulsions. Advanced water-in-oil-in-water (W/O/W) and oil-in-water-in-oil (O/W/O) emulsions have also been developed to inhibit globular coalescence. Micelles carry molecular cargo by thermodynamically stabilizing the molecules in the core. Generally, fat-soluble agents are poorly soluble in surrounding solvents, making micelles a major vehicle for the transport of fat-soluble nutrients and drugs. Because of the simplicity of the system and the long history of using detergents, micelle systems are not always recognized as nanoparticle delivery systems, yet they have been shown to be effective in the delivery of genes and specific biologics [19]. For example, a recent study has demonstrated the potential of polyplex micelles in delivering Cas9 mRNA and guide RNA for in vivo genome editing in the mouse brain [20].
However, with regard to their potential for prime editing delivery, micelles face many challenges. First, micelles generally exhibit a size limitation, usually ranging from 2 to 20 nm, which can hinder the delivery of larger macromolecules. In fact, amphiphilic block copolymers often self-assemble into micellar structures themselves [21], but rarely surpass the size limit as a stable structure. This inherent size limitation strongly negates the potential use of micelles to deliver protein-RNA complexes. Second, the stability of a three-dimensional supramolecular structure is difficult to maintain without covalent crosslinking. As such, internal contents are often "spilled" from micelles during delivery [22].
Third, micelles can activate the immune system and trigger rapid clearance. While PEGylation can decrease clearance, the presence of a hydrophilic chain can disrupt micellar stability, even in the case of amphiphilic polymeric micelles. Finally, as certain adverse reactions have been associated with strong surfactants, toxicity must always be considered when designing micelle-based systems. For these reasons, micelles are not recommended for the formulation of genome prime editing technologies.
Exosomes.
Exosomes are similar in structure to liposomes, being membrane-bound vesicles; however, one intriguing difference sets them apart. Exosomes are derived from the endosomal compartment of cells and carry unique cell biomarkers that are characteristic of the cell of origin. Therefore, exosomes often display inherent targeting molecules on the outer surface of the membrane, enabling improvement in target cell uptake [23]. Exosomes are known to undergo endocytosis or fusion with the plasma membrane of target cells [24,25], and internalized exosomes are degraded after delivering the cargo into the cytosol [25].
Exosomes can be engineered in a variety of sizes, and various exosome-loading procedures have been developed [26]. Recently, exosomes have shown potential for delivering prime editing proteins and gRNA. A recent study demonstrated that CRISPR/Cas9 protein and sgRNA can be packaged into exosomes that in turn successfully transduced cells in vitro [27]. NanoMEDIC, an exosome-based CRISPR/Cas9 delivery system, showed promising gene-editing efficiency in various hard-to-transfect cell types, including human iPS cells, neurons, and myoblasts [28].
While it may initially seem that exosomes are an obvious choice for prime editing delivery over liposomes, certain drawbacks limit their utility, although not entirely. First, exosome engineering remains in its infancy, and the ability to design an exosome with firm-targeting capabilities is still somewhat only theoretical. Furthermore, alterations to exosomes after synthesis, for purposes such as bloodstream cloaking, tend to result in loss of desired exosomal properties [29]. Finally, the key biomarkers present in exosomes often lead to robust cytotoxicity, immune responses, and direct uptake by the reticuloendothelial system.
Polymer-Based Systems.
Polymers are long-chain molecules composed of many repeating subunits (monomers).
Theoretically, polymers assume a three-dimensional architecture in solution owing to interactions with the solvent environment. Hydrophilic polymers are commonly employed to improve the solubility of drugs with low solubility, while hydrophobic polymer components are often used for drug stabilization. Many different polymer types exist and can enable a wide range of properties relevant to drug delivery; namely, polymers can be covalently attached to drugs or utilize noncovalent interactions for drug transport. Furthermore, polymer nanosystems can be designed with various architectures, sizes, and compositions. As such, many different systems can be engineered to fully encapsulate both prime editing protein and pegRNA. Additionally, polymers enable facile attachment of in vivo tracking moieties, targeting molecules, and environmentally responsive entities and are thus often considered the most versatile platform for drug delivery. Upon arrival at the target cells, most polymer-based delivery vehicles undergo biodegradation or breakdown to release the cargo into the cells. While a polymer-based system may be a strong candidate for prime editing delivery, several factors should be considered. First, covalent attachment of a polymer to a protein may lead to incapacitation of the protein [30], and nearly all covalent attachments of polymers to RNA lead to loss of function. This being noted, polymer encapsulation is likely to be the best approach for prime editing delivery. It is not directly relevant to discuss all polymer types in this article; however, it should be mentioned that certain polymer types may reduce bloodstream interactions and/or immune responses to the nanosystem. For regulatory purposes, the determination of in vivo polymer degradation and excretion profiles is essential. Most regulatory agencies tend to look more favorably on liposomal systems than polymer-based systems, yet a few polymer-based systems have achieved US FDA approval [31]. However, any additional modifications to the polymer delivery system (e.g., to enhance tracking, transport, and targeting) will steepen the climb for regulatory approval. For research and grant-obtaining purposes, polymer-based systems can likely produce the best results, but from a commercialization standpoint, polymer nanosystems might introduce too many hurdles to merit their investigation.
Dendrimers.
One of the most unique nanoparticle systems is the use of dendrimers, which are macromolecules composed of repeatedly branched chains. Their divergent synthesis begins with a single molecule at the core that then branches in multiple directions. Four branches become eight in the next stage of synthesis (called a generation), then sixteen, thirty-two, and so on. The dendrimer structure is unique in that the entire structure exhibits polymeric flexibility, yet the dendrimer surface exhibits an extremely high charge density. Dendrimers are known to chelate small molecules and ions within their branched structures, and their high charge density permits penetration across difficult in vivo barriers, such as the blood-brain barrier [32].
Dendrimers face similar issues as those of micelles when being evaluated for prime editing delivery. In theory, no dendrimer size limit should exist; however, in practice, it is very difficult to produce dendrimers beyond six generations. Quite simply, the charge density becomes too large, and steric repulsion forces restrict further branching. Therefore, the probability of developing a dendrimer system with sufficient cargo space to deliver both the prime editing protein and the pegRNA molecule is quite small.
Rigid Nanoparticle Systems.
In this article, rigid nanoparticles are defined as nanoparticles composed of any material (organic, inorganic, metallic, etc.) that elicits a rigid morphology, including silica, metal oxide, and palladium nanoparticles. Essentially, rigid nanoparticle systems utilize conventional nanoparticles as delivery vessels. These systems can be designed using either top-down or bottom-up approaches. Top-down techniques tend to result in samples with high polydispersity in size and shape, whereas bottom-up methods can produce monodisperse samples with unique architectures. While surfaces can be chemically modified for the attachment of both proteins and nucleic acids, the attached molecules are on the surface of the particles and are therefore presented to bloodstream components and rapidly disabled or cleared. Many of these systems rely solely on the adsorption of the APIs without chemical modification. However, these systems are likely to fail as prime editing delivery systems for the same reason.
On the contrary, certain rigid nanoparticle systems, such as silica, can be produced with a hollow core using bottom-up approaches, allowing for the encapsulation of various molecules [33]. As a result of the required etching processes, these hollow nanoparticles have a porous shell. Loading of proteins and nucleic acids into these particles could prove difficult depending on the pore size. Furthermore, if the pore size is too large, the molecular cargo will not be retained within the hollow core during transport. Methods have been developed to place an external shell around hollow rigid nanoparticles to block pores and fully encapsulate drugs [34]; however, a nanoparticle system of such complexity has never attained federal regulatory approval. In addition, rigid nanoparticle systems often undergo surface dissolution to some extent in aqueous environments, leading to the release of various ions. In many cases, particularly with metallic nanoparticles, released ions pose considerable cytotoxic threats [35]. Overall, rigid nanoparticle systems should be avoided for prime editing delivery unless a strong rationale justifies their use.
Conclusion
Prime editing technologies have the potential to alter the genome editing space and achieve biomedical treatment breakthroughs. To realize this potential, proper delivery systems must be engineered for prime editing transport and localization. Prime editing delivery systems must be able to costabilize both prime editing protein and pegRNA during transport, possess a method for cellular internalization, and maintain a straightforward path to regulatory approval. Based on these criteria, liposomes are likely to be the most promising nanomedicine candidates for prime editing delivery if their potential toxicity and instability in the circulatory system are well addressed. Polymer-based carriers with lower toxicity and higher stability may represent the second-best option. However, the efficiency and safety of each delivery system must be carefully considered, given the inevitable variance between systems and cell types.
Data Availability
No data were used to support this study.
Conflicts of Interest
The author declares that there are no conflicts of interest. | 2022-02-24T16:09:43.949Z | 2022-02-21T00:00:00.000 | {
"year": 2022,
"sha1": "94a17030af78fe36197e9bdcf9e5686409c49130",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/gr/2022/7301825.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6021a8510905a167bb376fa46628134a74b64b42",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
211231426 | pes2o/s2orc | v3-fos-license | Autonomous Dam Surveillance Robot System Based on Multi-Sensor Fusion
Dams are important engineering facilities in the water conservancy industry. They have many functions, such as flood control, electric power generation, irrigation, water supply, shipping, etc. Therefore, their long-term safety is crucial to operational stability. Because of the complexity of the dam environment, robots with various kinds of sensors are a good choice to replace humans in performing surveillance jobs. In this paper, an autonomous system design is proposed for dam ground surveillance robots, which includes the general solution, electromechanical layout, sensor scheme, and navigation method. A strong and agile skid-steered mobile robot body platform is designed and built, which can be controlled accurately based on an MCU and an onboard IMU. A novel low-cost LiDAR is adopted for odometry estimation. To realize more robust localization results, two Kalman filter loops are used with the robot kinematic model to fuse wheel encoder, IMU, LiDAR odometry, and low-cost GNSS receiver data. Besides, a recognition network based on YOLO v3 is deployed to realize real-time recognition of cracks and people during surveillance. As a system, by connecting the robot, the cloud server, and the users with IoT technology, the proposed solution could be more robust and practical.
Introduction
In the water conservancy industry, there are many fundamental engineering facilities, such as dams, water and soil conservation works, water transfer projects, shipping projects, water supply projects, hydraulic power plants, irrigation facilities, etc. Big water dams are the most comprehensive facilities; they often have many functions, such as flood control, electric power generation, irrigation, water supply, and shipping. Therefore, the safety of big dams is crucial to the cities and people around them.
In the past, staff had to check the environment, structure, and electromechanical facilities of dams regularly every day. This is a necessary way of keeping a dam running safely, but it cannot realize all-weather, all-day monitoring, and staff members are sometimes at risk since most dams are constructed in remote rural areas. Nowadays, with the great improvement of robot technology, robots have been used in public safety, security checks, disaster rescue, and high-voltage line inspection [2][3][4][5]. Many researchers and engineers are also paying attention to utilizing underwater robots, unmanned aerial vehicles, and unmanned surface vehicles for dam surveillance and inspection. A state machine is designed here for robot control, which includes self-check, autonomous navigation, remote control, idle, and emergency states. The robot gets into the self-check state once the power is turned on; all sensors, motors, localization data, and the high-level computer are then checked. If the self-check is passed, the robot gets into an idle state and waits for a task command from the cloud server. Autonomous navigation is triggered once the task and waypoints are received. The emergency state is triggered when the robot is stuck or some sensors are running wrong. The logic of the state machine is shown in Figure 2.
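The control flow described above is straightforward to encode; the sketch below is a simplified Python rendering of this state machine, where the trigger conditions (self-check passed, task received, fault detected, and so on) are hypothetical placeholders for the real firmware signals.

```python
# Simplified sketch of the surveillance robot's control state machine.
# Trigger flags are hypothetical placeholders for the real firmware signals.
from enum import Enum, auto

class State(Enum):
    SELF_CHECK = auto()
    IDLE = auto()
    AUTONOMOUS_NAVIGATION = auto()
    REMOTE_CONTROL = auto()
    EMERGENCY = auto()

def next_state(state, self_check_passed=False, task_received=False,
               remote_requested=False, fault_detected=False, task_done=False):
    """One transition step of the robot's control state machine."""
    if fault_detected:                      # robot stuck or a sensor running wrong
        return State.EMERGENCY
    if state is State.SELF_CHECK and self_check_passed:
        return State.IDLE                   # wait for a task from the cloud server
    if state is State.IDLE and task_received:
        return State.AUTONOMOUS_NAVIGATION  # task and waypoints received
    if state is State.IDLE and remote_requested:
        return State.REMOTE_CONTROL
    if state in (State.AUTONOMOUS_NAVIGATION, State.REMOTE_CONTROL) and task_done:
        return State.IDLE
    return state

# Power-on: the robot always starts in the self-check state.
state = next_state(State.SELF_CHECK, self_check_passed=True)
assert state is State.IDLE
```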
Robot Platform
To realize high robustness and agile motion performance, the robot body is built with an all-aluminum chassis weighing about 10 kg in total. Two 200 W DC brushless high-torque motors are deployed to realize four-wheel drive. A 24 V lithium battery with 40 Ah capacity is used as the power supply.
All motors are controlled by a customized MCU board, which connects the motors and drivers by CAN bus. The maximal speed is 2 m/s, the maximum payload is 40 kg, and the maximum runtime is more than 4 hours. The robot is suitable for rugged all-terrain operation with four off-road tires. Figure 3 shows the body design.
As shown in Figure 4, for the robot body, we use the velocity and the angular velocity in the robot frame as state variables, i.e., $[v_x \;\; \omega]^T$. After the kinematic analysis of the robot, we can find the relationship between the robot velocities and the wheel speeds:

$$\begin{bmatrix} v_x \\ \omega \end{bmatrix} = \frac{2\pi r}{60n} \begin{bmatrix} \frac{1}{2} & \frac{1}{2} \\ -\frac{1}{2c} & \frac{1}{2c} \end{bmatrix} \begin{bmatrix} W_l \\ W_r \end{bmatrix} \qquad (1)$$

where $r$ is the so-called effective radius of the wheels, $n$ is the reduction ratio, $2c$ is the wheel track, and $W_l$, $W_r$ are the left and right motor speeds in rpm.

The robot body receives a velocity and angular velocity command $[v_x^d \;\; \omega^d]^T$ from the upper PC, and we should control the velocities to track this command. From (1), we have

$$\begin{bmatrix} W_l^d \\ W_r^d \end{bmatrix} = \frac{60n}{2\pi r} \begin{bmatrix} 1 & -c \\ 1 & c \end{bmatrix} \begin{bmatrix} v_x^d \\ \omega^d \end{bmatrix} \qquad (2)$$

From (2), we could calculate the desired wheel velocities directly, but the direct calculation does not work well, since the kinematic model is idealized and the actual angular velocity always has a large lag. Therefore, we add an onboard IMU and use the following equation to set the wheel velocities:

$$\begin{bmatrix} W_l^d \\ W_r^d \end{bmatrix} = \frac{60n}{2\pi r} \begin{bmatrix} 1 & -c \\ 1 & c \end{bmatrix} \begin{bmatrix} v_x^d \\ \omega^d + k_p e_\omega + k_i \int e_\omega \, dt + k_d \dot{e}_\omega \end{bmatrix} \qquad (3)$$
where $k_p$, $k_i$, $k_d$ are the constants of the simple PID controller, $e_\omega = \omega^d - \omega_i$, and $\omega_i$ is the yaw-rate feedback from the onboard IMU. The practical response can be seen in Figure 5.
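To make Equations (2) and (3) concrete, the following Python sketch maps a commanded $(v_x^d, \omega^d)$ to left/right motor speeds with a PID correction on the IMU yaw-rate error; the geometry values and gains are illustrative assumptions, not the paper's calibrated parameters.

```python
# Sketch of the reconstructed inverse kinematics (Eq. 2) with the IMU-based
# PID correction on angular velocity (Eq. 3). Geometry and gains are
# illustrative values, not the paper's calibrated parameters.
import math

r, n, c = 0.08, 30.0, 0.25   # wheel radius [m], reduction ratio, half wheel track [m]
kp, ki, kd = 1.5, 0.2, 0.05  # PID gains on the yaw-rate error

integral, prev_err = 0.0, 0.0

def wheel_commands(vx_d, w_d, w_imu, dt):
    """Return (left, right) motor speeds in rpm for the skid-steered robot."""
    global integral, prev_err
    err = w_d - w_imu                       # yaw-rate tracking error from the IMU
    integral += err * dt
    deriv = (err - prev_err) / dt
    prev_err = err
    w_cmd = w_d + kp * err + ki * integral + kd * deriv  # corrected yaw rate
    k = 60.0 * n / (2.0 * math.pi * r)      # m/s -> motor rpm conversion
    wl = k * (vx_d - c * w_cmd)
    wr = k * (vx_d + c * w_cmd)
    return wl, wr

print(wheel_commands(vx_d=0.5, w_d=0.3, w_imu=0.1, dt=0.005))
```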
It is simple to calculate the wheel odometry in the inertial frame. We drove the robot along a square path and back to the origin; the dead-reckoning result is shown in Figure 6, and the final position value is $[0.1229 \;\; -0.2654 \;\; 0.03]^T$ (it should be $[0 \;\; 0 \;\; 0]^T$). The whole distance is about 70 m, and over 10 repeated experiments, the RMSE (root mean square error) of the final position relative to the origin is 0.3135.
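The dead reckoning used in this experiment amounts to integrating the body-frame velocities into the inertial frame; a minimal sketch, assuming planar motion and small uniform time steps, is given below.

```python
# Minimal planar dead-reckoning sketch: integrate body-frame (vx, w) samples
# into an inertial-frame pose (x, y, theta). Assumes small, uniform time steps.
import math

def dead_reckon(samples, dt):
    x = y = theta = 0.0
    for vx, w in samples:           # vx [m/s], w [rad/s] from the wheel odometry
        x += vx * math.cos(theta) * dt
        y += vx * math.sin(theta) * dt
        theta += w * dt
    return x, y, theta

# Drive 1 m forward, turn 90 degrees in place, and repeat -> a square path.
leg = [(1.0, 0.0)] * 100            # 1 s straight at 1 m/s, dt = 0.01 s
turn = [(0.0, math.pi / 2)] * 100   # 1 s in-place turn of 90 degrees
path = (leg + turn) * 4
print(dead_reckon(path, dt=0.01))   # ideally returns to (0, 0) after the loop
```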
Sensors
The experimental robot with sensors is shown in Figure 7. From the front to the back, there are a GNSS receiver, a stereo camera, a Livox LiDAR, a monocular camera, a 4G LTE wireless router, an industrial computer, and a GNSS antenna. An onboard MCU controller with an IMU is located inside the body box. Motion control is handled by the onboard MCU, which also sends odometry data to the high-level industrial computer. Navigation is handled by the computer, which is connected to the cloud server.
A navigation map is built mainly from the LiDAR and odometry data using the LOAM SLAM method. The map is regarded as prior knowledge. The monocular camera in the middle is used to transfer video streaming and to perform the people and crack detection jobs. The GNSS receiver with its antenna can acquire accurate position data in wide-open areas. In addition, the LiDAR point cloud and the cameras' depth data are designed to be used for obstacle perception in the future.
Localization and Navigation
The navigation stack includes perception, localization, path planning, and a remote controller; the whole diagram is depicted in Figure 8. A mature cost grid map is used for general environment representation, and the filtered point cloud and depth data appear as obstacles on the map.
The path planning method is divided into two parts here. The global waypoints are sent to the robot by the cloud server and are pre-defined for different jobs, while the local motion planning is achieved by the DWA algorithm on the grid map, as sketched below.
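A minimal DWA-style local planner can be sketched in a few lines: sample velocity pairs within the robot's limits, roll out short trajectories, and score them by goal progress, obstacle clearance, and speed. The parameters and scoring weights below are illustrative assumptions, not the values used on the robot.

```python
# Minimal DWA-style local planner sketch. Velocity limits, rollout horizon,
# and scoring weights are illustrative, not the robot's tuned parameters.
import math

def rollout(x, y, th, v, w, dt=0.1, steps=10):
    """Simulate a short constant-(v, w) trajectory from the given pose."""
    traj = []
    for _ in range(steps):
        x += v * math.cos(th) * dt
        y += v * math.sin(th) * dt
        th += w * dt
        traj.append((x, y))
    return traj

def dwa_step(pose, goal, obstacles, v_max=2.0, w_max=1.0):
    """Pick the sampled (v, w) pair whose rollout scores best."""
    best, best_cmd = -1e9, (0.0, 0.0)
    for i in range(11):                      # sample linear velocities
        for j in range(21):                  # sample angular velocities
            v = v_max * i / 10
            w = -w_max + 2 * w_max * j / 20
            traj = rollout(*pose, v, w)
            gx, gy = traj[-1]
            goal_cost = -math.hypot(goal[0] - gx, goal[1] - gy)
            clear = min((math.hypot(ox - px, oy - py)
                         for px, py in traj for ox, oy in obstacles),
                        default=10.0)
            if clear < 0.3:                  # trajectory collides, skip it
                continue
            score = goal_cost + 0.5 * min(clear, 2.0) + 0.1 * v
            if score > best:
                best, best_cmd = score, (v, w)
    return best_cmd

print(dwa_step((0.0, 0.0, 0.0), goal=(5.0, 0.0), obstacles=[(2.0, 0.2)]))
```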
Localization is one of the most difficult problems for dam robots, since the environment is changing and the robot has to move both indoors and outdoors. Acquiring a position with one single sensor is an impossible task, while using many sensors introduces a fusion problem. The extended Kalman filter (EKF) is a good way to solve this problem and has been deployed in many robot localization applications.
A two-EKF-node structure is proposed here to build a robust localization system. The GNSS data are transformed based on the local origin and then input to the EKF global node's update process, together with the yaw angle from the LiDAR odometry. The result of the EKF local node is input to the EKF global node's predict process. The LiDAR odometry data are input to the EKF local node's update process, while the wheel odometry data and the IMU measurements are input to the EKF local node's predict process. It is noted here that the local EKF node runs on the embedded MCU at 200 Hz and estimates in 3D space with 16 states.
It is important to obtain 3D pose information on the rugged ground surface. The global EKF node, however, runs at 30 Hz on the computer and only produces results in 2D space; the robot position and orientation in the map are acquired with the EKF global update. The fusion localization structure diagram is shown in Figure 9, where $W_l^d$, $W_r^d$ are the wheel velocity commands, $W_l$, $W_r$ are the feedback wheel velocities from the encoders, $v_x^d$, $\omega^d$ are the velocity commands generated from motion planning, $v_x$, $\omega_z$ are the wheel odometry outputs, $x$, $y$, $yaw$ are the LiDAR odometry, and $X$, $Y$, $Yaw$ are the final global pose estimation results.
It is important to get the 3D pose information for the rubber ground surface. However, the global EKF node is running at 30Hz on the computer and it only produces in 2D space. The robot position and orientation in the map are acquired with the EKF global updating. The fusion localization structure diagram is shown in Figure 9. Where W , W are the wheel velocity command, W , W are the feedback wheel velocity from encoders, V , ω are the velocity command generated from motion planning, V , ωz are the wheel odometry outputs, , , are the LiDAR odometry, , , are the final global pose estimation result. With the kinematic models and zero-velocity constraints, we are now ready to design a kinematic-model-based robot positioning scheme for the local EKF node. The general state update equation is With the kinematic models and zero-velocity constraints, we are now ready to design a kinematic-model-based robot positioning scheme for the local EKF node. The general state update equation is where x t = [q 0 q 1 q 2 q 3 x y z V x V y V z δϕ bias δθ bias δψ bias δV xbias δV ybias δV zbias ] T is the state variables, q 0 q 1 q 2 q 3 are the quaternion, x y z are the position in the inertial frame, V x V y V z are the corresponding velocities, δϕ bias δθ bias δψ bias are the rotation zero-offset, δV xbias δV ybias δV zbias are the acceleration zero-offset. We have the following detail state update equations.
where g x (k)g y (k)g z (k) are the components of gravity.
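The predict/update loop shared by both EKF nodes can be sketched generically as follows; the real 16-state model, Jacobians, and noise covariances are not reproduced here, so `f`, `F`, `h`, and `H` below are placeholders for those models.

```python
# Generic EKF predict/update sketch mirroring the two-node structure in the
# text. The real 16-state model, Jacobians, and noise covariances are not
# reproduced; f, F, h, H are placeholders for those models.
import numpy as np

class EKF:
    def __init__(self, x0, P0, Q, R):
        self.x, self.P, self.Q, self.R = x0, P0, Q, R

    def predict(self, f, F):
        """f: state transition (e.g., IMU/wheel-odometry propagation); F: its Jacobian."""
        self.x = f(self.x)
        self.P = F @ self.P @ F.T + self.Q

    def update(self, z, h, H):
        """z: measurement (e.g., LiDAR odometry or GNSS); h: model; H: its Jacobian."""
        y = z - h(self.x)                       # innovation
        S = H @ self.P @ H.T + self.R           # innovation covariance
        K = self.P @ H.T @ np.linalg.inv(S)     # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(len(self.x)) - K @ H) @ self.P

# Toy 1-D example: constant-position state with a noisy direct measurement.
ekf = EKF(x0=np.zeros(1), P0=np.eye(1), Q=0.01 * np.eye(1), R=0.1 * np.eye(1))
ekf.predict(f=lambda x: x, F=np.eye(1))
ekf.update(z=np.array([0.5]), h=lambda x: x, H=np.eye(1))
print(ekf.x)
```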
The global EKF node is designed in the same way; the details are omitted here. The localization result in the practical environment can be found in Figure 10, where the body coordinate frame represents the current filtered pose of the robot. The robot ran a circle in the practical environment. We can see that the localization performs well even when the GNSS receiver occasionally loses lock, and the result is ready for navigation. Localization and path planning can still fail sometimes, even with so many sensors deployed, so for this application we designed a remote controller module: users can control the robot remotely with real-time video streaming. Figure 11 shows a picture of an experimental environment for a surveillance job; this picture was captured from the camera on the robot. In this environment, the road is narrow and the GNSS signal is weak because of many trees. However, the proposed robot system works well. The practical results can be seen in Figure 12, where the goal and the blue line are the planned global path and the green line is the local path planning result. We can see that the robot localizes itself well and finds where to go correctly.
Environment Inspection
Dam crack identification and pedestrian detection are key technologies related to the safety of the dam. We use a trained YOLO v3 model [31] to recognize them: an online dataset is used to train the YOLO v3 network, which is then applied to recognize dam cracks. The YOLO v3 network includes a darknet-53 module, eight DBL components, three convolutional layers, two upsampling layers, and two tensor concat layers. The darknet-53 includes a DBL component and five residual learning units: res1, res2, res8, res8, and res4.
The DBL contains a convolutional layer, a BN layer, and a leaky ReLU. The convolutional parameters of its convolutional layer are kernel size 3 × 3, stride 1, same padding, and 32 output channels. The res1, res2, res8, res8, and res4 units contain 1, 2, 8, 8, and 4 basic units of residual learning, respectively, and each basic unit contains two DBLs and an identity map. The convolutional parameters of the first DBL are kernel size 1 × 1 and stride 1, with the number of output channels equal to the number of basic units of the residual learning. The convolutional parameters of the second DBL are kernel size 3 × 3, stride 2, and same padding, with the number of output channels equal to 2 times the number of basic units of the residual learning.
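As a concrete illustration, the DBL component (convolution + batch normalization + leaky ReLU) and a residual unit built from two DBLs can be written in a few lines of PyTorch. This sketch follows the standard YOLO v3 residual pattern (channel halving in the 1 × 1 DBL and stride 1 inside the unit) and is not the authors' released implementation.

```python
# Sketch of the DBL component (Conv + BatchNorm + LeakyReLU) and a residual
# unit built from two DBLs, following the standard YOLO v3 pattern. This is
# an illustration, not the authors' released implementation.
import torch
import torch.nn as nn

class DBL(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size, stride=1):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size, stride,
                      padding=kernel_size // 2, bias=False),  # "same" padding
            nn.BatchNorm2d(out_ch),
            nn.LeakyReLU(0.1),
        )

    def forward(self, x):
        return self.block(x)

class ResUnit(nn.Module):
    """One residual learning unit: 1x1 DBL -> 3x3 DBL plus an identity map."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = DBL(channels, channels // 2, kernel_size=1)
        self.conv2 = DBL(channels // 2, channels, kernel_size=3)

    def forward(self, x):
        return x + self.conv2(self.conv1(x))

x = torch.randn(1, 64, 52, 52)
print(ResUnit(64)(x).shape)  # torch.Size([1, 64, 52, 52])
```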
The three convolutional layers are CConv1, CConv2, and CConv3, with filter sizes of 512, 256, and 128, respectively. Each of the three layers contains six convolutional operations that alternate between kernel size 1 × 1 with the number of output channels equal to the filter size (the first, third, and fifth operations) and kernel size 3 × 3 with the number of output channels equal to 2 times the filter size (the second, fourth, and sixth operations). The upsampling layers Upsample1 and Upsample2 resample the input feature maps and the input feature maps of the concat layer to the same size.
The network structure is shown in Figure 13. This method can run detection at 10 Hz with less than 20% CPU occupancy. Some results are shown in Figures 14 and 15.
Conclusions
In this paper, a general robot system is proposed for dam surveillance. The robot is connected to cloud servers and terminal users through the mobile internet and an IoT network. For the robot itself, this paper introduces the mechanical layout, sensor selection, and navigation method. A simple controller and a wheel-odometry calculation are proposed and achieve good performance. A two-node EKF localization framework is proposed to solve the localization problem; it fuses LiDAR SLAM, wheel odometry, IMU, and GNSS signals. For unexpected circumstances, a remote controller based on real-time video streaming is deployed as an emergency supplement. To keep the whole system operational around the clock, a control state machine is also introduced. A YOLO V3 network is trained and deployed to detect dam cracks and nearby people. The practical experiments show that the system works well and is capable of the surveillance job for dams. In future work, we will pay more attention to specific dam-surveillance jobs, such as intrusion detection and dam deformation detection. We believe robots will greatly improve work efficiency for water conservancy in the future.
| 2020-02-22T14:03:56.061Z | 2020-02-01T00:00:00.000 | {
"year": 2020,
"sha1": "e1156bb8f18999389d0db6c497e108fa8e5f77e8",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3390/s20041097",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a58afa3349379d3b1b6bbfc409aa098dd94d37c5",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
259294778 | pes2o/s2orc | v3-fos-license | Survey Results on Online Teaching and Learning Environments in the European EMMA Project
In the Erasmus+ project EMMA, a concept for a common digital teaching and learning platform is to be developed for an online joint master program. In the initial phase, a status quo survey was carried out among the consortium members, showing which digital infrastructures are already in use and which functions are considered particularly important by the teachers. This paper presents the first results of the short survey, conducted via online questionnaire, and discusses the challenges derived from it. Due to the non-homogeneous infrastructure and software use, there is neither a teaching-learning platform nor a set of digital communication tools that is used equally by all six European higher education institutions. However, the consortium pursues the idea of specifying a limited tool pool in order to strengthen the user experience and usability for teachers and students with different interdisciplinary backgrounds and levels of digitalization expertise.
Introduction
The abbreviation "EMMA" stands for European Master on Active Aging and Age Friendly Society. The project's timeline is from 9/2020 to 8/2024, and six European higher education institutions from Austria, Finland, Greece, Ireland, Portugal, and Slovenia are participating. The project aims to address the challenge of demographic change in Europe by developing an interdisciplinary, compatible, and future-oriented master program within the framework of the European Approach for Quality Assurance of Joint Programs (EQAR) [1]. Active aging aims to find new ways to engage older people in society before and even after retirement, and thus enable them to continue to lead a meaningful life and even support economic, social, and natural sustainability [2]. An age-friendly society means creating environments that are truly age-friendly and inclusive, and it recognizes the valuable contribution of older people to strengthening future human development. This requires action in many sectors: health, long-term care, transport, housing, labor, social protection, climate change, information and communication as well as digital transformation, and many more [3].
The project's major challenge, among others, is to create a transnational online learning environment that is user-friendly for teachers and learners alike and contains all necessary aspects for synchronous and asynchronous teaching, learning and team building.
Methods
The challenge in the development of a jointly usable online platform lies first in ascertaining the needs and restrictions of the individual partner universities. Therefore, a user-centered development approach was chosen, which allows the active integration of the needs and wishes of the future target group, at least at the teacher level. Thus, an initial short online questionnaire was developed to elicit these from the project partners. The participants had two weeks to complete the questionnaire in order to give the development team of the virtual learning platform initial indications for the development. Table 1 presents the entire online questionnaire. The six questions used, as well as the selected answer options, are based on the experience of the project team members.
Results
The survey was sent out to the six partner universities. For some universities, the responses were submitted at the personal level, while others discussed offline in advance and then submitted a collective "opinion". Responses: Karelia UAS, four returns; Carinthia UAS and the University of Ljubljana, three returns each; University College Cork, the University of Lisbon, and the University of Athens, one return each.
The results from questions Q01, Q03, and Q05 show clear duplications in the responses, as the same information was provided for the same institution (No.). For this reason, the results of these questions are also shown below in a normalized version (NNo.) in order to show the use of individual platforms in the correct proportion.
In Q02, most participants selected the learning management platform Moodle (No. 11, NNo. 3), both in the total number and in the normalized number (one selection per university). One person selected Canvas, and one participant additionally indicated under "Other" that Moodle is not used directly but via a collaborative LTI connection. Nevertheless, Moodle is not used across the board in the participating European universities; platform technologies such as Canvas or Open Class are also in use. The answers to Q03 are shown in Table 2. In the opinion of the questionnaire participants, the most important function of a learning platform is the possibility of easily sharing different learning materials, closely followed by an existing communication option. The main reason given is that students then face a lower inhibition threshold when communicating with their lecturers and fellow students, which supports online team building. The functions "multimedia capability" and "evaluation" were also frequently mentioned: on the one hand, multimedia content can more easily be used for teaching purposes or for alternative teaching methods such as the flipped classroom; on the other, evaluation offers added value for students, particularly through meaningful feedback. The answers to Q04, on virtual meeting tools, are summarized in Table 3. A platform rarely used, even with its Moodle integration as a feature, was BigBlueButton; options like GoToMeeting, Skype4Business, Zoho, or Jitsi were not selected. In Table 4 (Q05), the key features lecturers prefer in the virtual classroom are listed. The features selected most often were file and screen sharing, and breakout rooms to engage students easily and to ease online group discussions and work. For asynchronous or flipped-classroom lecture types, recording is also necessary.
The last question, Q06, concerned the plagiarism detection systems used at the universities. The software Turnitin is commonly used (No. 7, NNo. 4), but not every university uses it; the Urkund system is used by two individual universities.
Discussion
Based on the results of the online survey, it was found that even among this small cohort of universities, there is no uniform selection, especially at the level of the technologies used. The universities draw on the full range of digital possibilities and apply them. Even though the trend in the public sector is generally moving strongly towards open-source applications, also based on the Open Source Software Strategy 2020-2023 of the European Commission [4], none of this could be detected in the survey. For example, the use of the various virtual meeting tools shows that Zoom and Microsoft Teams are clearly preferred by the teachers, since their operation and range of functions suit the teachers' teaching concepts. Conversely, this means that a common teaching and learning platform, or a uniform concept in terms of the user-friendliness of the joint online master, will also raise the question of costs for licenses and usage rights. What does not emerge from the survey, however, is the prior experience with other technologies. Thus, there could be a bias in favor of familiar tools, which people feel confident enough to use for teaching. In addition, some universities have clear guidelines about which tools and platforms can and cannot be used in teaching, which makes it difficult to work together and find a common tool pool. The results also show that "newer" concepts such as gamification seem to play no role, or only a very minor one, in teaching so far. It could be further surveyed whether, for example, game-based online offers are not considered useful or applicable, or whether it is rather a matter of barriers and fears regarding the development and application of such methods. It should also be critically noted that the online survey reflects a very limited view of the instructors, especially in the questions about personal preferences, and does not consider the students' wishes at all. In this phase, important parameters for teaching and learning design, such as user experience, learning experience, or joy of use, could not be surveyed. However, since this is only a survey of initial tendencies in the consortium, the development team considered it sufficient to provide the initial basis for concept proposals and subsequent discussions.
Conclusions
The first information regarding the landscape of digital teaching and learning platforms of the partner universities clearly shows a non-homogeneous structure. This means that the development of a uniform tool usage strategy is made more difficult, or compromise solutions must be used. In EMMA, the partners decided to go with a single-host structure and, if needed, to connect the other systems via the LTI methodology [5]. In any case, this survey shows only initial trends. Subsequent surveys with teachers and students of the partner universities, especially given their interdisciplinary backgrounds, are needed in order to design and build an applicable and enjoyable teaching and learning platform. These conclusions will feed into the e-MeBe (e-Wellbeing and Mental Health in Older Adults) project, also funded by Erasmus+. In this project, which started at the end of 2022, the questionnaire will be extended and deepened, and likewise the opinion of the students will be added, in order not only to survey the technical infrastructure of the teaching and learning platform but to enable and create an innovative concept of learning experience.
Table 1. Survey overview (excerpt). Q04 (answer options): Zoom, Microsoft Teams, Big Blue Button, GoToMeeting, Skype for Business, Zoho, Jitsi, or Other. Q05: What are the key features you need in your virtual classroom? Breakout Rooms, Whiteboard, Polls, Recording, Easy access to recordings, File and screen sharing, Audio sharing, or Other. Q06: What plagiarism detection system is your university using? Turnitin, Urkund, I don't know, or Other.
Table 3. Zoom is commonly in use and preferred by the survey participants because of its easy use and functionality; MS Teams is often used because of the combination of communication and storage functionalities in one platform. | 2023-07-01T06:16:09.967Z | 2023-06-29T00:00:00.000 | {
"year": 2023,
"sha1": "3ac1a3f6c09b56b94f9c1d4fd2c842b8b0f5078f",
"oa_license": "CCBYNC",
"oa_url": "https://ebooks.iospress.nl/pdf/doi/10.3233/SHTI230523",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "9516cf216ae2115a93d6b651fc6e9f754ae9ac39",
"s2fieldsofstudy": [
"Computer Science",
"Education",
"Engineering"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
17934082 | pes2o/s2orc | v3-fos-license | Membrane lipids in invadopodia and podosomes: key structures for cancer invasion and metastasis.
Invadopodia are extracellular matrix (ECM)-degrading protrusions formed by invasive cancer cells. Podosomes are structures functionally similar to invadopodia that are found in oncogene-transformed fibroblasts and monocyte-derived cells, including macrophages and osteoclasts. These structures are thought to play important roles in the pericellular remodeling of ECM during cancer invasion and metastasis. Much effort has been directed toward identification of the molecular components and regulators of invadopodia/podosomes, which could be therapeutic targets in the treatment of malignant cancers. However, it remains largely unknown how these components are assembled into invadopodia/podosomes and how the assembly process is spatially and temporally regulated. This review will summarize recent progress on the molecular mechanisms of invadopodia/podosome formation, with strong emphasis on the roles of lipid rafts and phosphoinositides.
Introduction
Metastatic dissemination of cancer cells is the leading cause of mortality in patients with malignant cancers [1,2]. Cancer cells need to degrade the extracellular matrix (ECM), which exists in the basement membrane, tumor stroma, and blood vessel walls, to emigrate from original tumor sites and invade adjacent tissues, and to eventually form metastatic sites at distant organs [3]. These processes seem to be facilitated by the formation of invadopodia, which are ventral membrane protrusions with ECM degradation activity formed by invasive cancer cells [4][5][6][7][8] (Fig. 1A and B). The ability of cancer cells to form invadopodia is closely related to their invasive and metastatic properties [9][10][11]. Additionally, during intravasation, invadopodia-like protrusions in cancer cells were observed in vivo by intravital imaging [12]. Furthermore, a recent study showed that invadopodia perforate the native basement membrane, allowing the invasive cancer cells to invade into the stroma [13]. Oncogene-transformed fibroblasts and cells of monocyte lineage also form functionally similar structures called podosomes that have ECM degradation activity ( Fig. 1C and D). The podosomes of macrophages/osteoclasts are used not only to elicit their physiological functions, but also to help cancer cells achieve efficient metastasis. Therefore, invadopodia/podosomes and their molecular regulators are considered as potential targets in the development of therapeutic strategies for cancer invasion and metastasis.
To date, many components of invadopodia have been reported, including proteins involved in the regulation of the actin cytoskeleton, cell signaling, cell-ECM adhesion, ECM degradation, and membrane remodeling [8,14]. We and other researchers have previously proposed that invadopodia formation occurs in several steps [9,13,15,16]. Invadopodia precursors are assembled by actin polymerization machinery in response to extracellular stimuli. These structures are then stabilized by additional actin polymerization, and finally they gather matrix metalloproteinases to mature into functional invadopodia, which contain microtubules and intermediate filaments in addition to actin filaments. How these events occur at restricted sites on the plasma membrane of invasive cancer cells, however, is obscure. Recently, several studies regarding the role of membrane lipids in the regulation of invadopodia/podosome formation have been reported.
Lipid Rafts and Caveolin-1 in Invadopodia Formation
Lipid rafts are cholesterol-and sphingolipid-enriched membrane microdomains that are also referred to as lipid microdomains or detergent-resistant membranes (DRM). Lipid rafts have been implicated in a number of critical cellular processes, such as membrane transport and signal transduction [17,18], as well as several pathological conditions, including cancer progression [19][20][21]. Caveolin-1 is a ubiquitously expressed scaffolding protein that is enriched in caveolae, which are subtypes of lipid rafts [22,23]. Caveolin-1 is involved in several cellular functions such as endocytosis, vesicular transport, and signal transduction [23,24].
Both we and Caldieri et al. recently reported that invadopodia are lipid raft-enriched domains in human breast cancer and melanoma cells [10,25]. We also observed that lipid rafts were enriched at podosomes formed by Src-transformed fibroblasts (unpublished observations). The inhibition of lipid rafts by the depletion or sequestration of membrane cholesterol, or the blocking of glycosphingolipid synthesis, has been shown to impair invadopodia formation and function [10,25]. Timelapse observation revealed that lipid raft membranes are actively trafficked and internalized around invadopodia, which indicates the possible involvement of lipid rafts in the transport of invadopodia components [10]. Several invadopodia components involved in actin polymerization and membrane trafficking, including neural Wiskott-Aldrich syndrome protein (N-WASP), dynamin-2, and Arf6, are known to localize at lipid rafts [17,26,27]. Therefore, lipid rafts may act as platforms for localizing and activating these molecular machineries at the sites of invadopodia formation, which results in focalized ECM degradation.
The 2 studies also revealed that caveolin-1 is an essential regulator of the invadopodia-mediated degradation of ECM, which indicates that caveolin-1 plays an essential role in cancer cell invasion [10,25]. Indeed, at least in breast cancer cell lines, caveolin-1 expression is predominantly observed in invasive cell lines and well correlated with invadopodia activity [10]. In melanoma cells, caveolin-1 functions at invadopodia through cholesterol transport to maintain proper levels of plasma membrane cholesterol [25]. Meanwhile, caveolin-1 is primarily involved in the transport of lipid raft-associated membrane type I matrix metalloproteinase (MT1-MMP), an invadopodia-enriched matrix metalloproteinase that is responsible for the ECM degradation activity of invadopodia [10]. Although further studies are needed to elucidate the precise functions of caveolin-1 in invadopodia formation, these findings imply that caveolin-1 plays multiple roles in the trafficking of invadopodia components. Clinical studies showed that the increased expression of caveolin-1 is correlated with the presence of metastasis and poor prognosis in several human cancers [28,29]. Taken together, blocking the functions of lipid rafts and caveolin-1 should be an approach to targeting invadopodia-mediated cancer cell invasion.

Figure 1. Invadopodia formation by MDA-MB-231 human invasive breast cancer cells. The cells were cultured on rhodamine-gelatin-coated coverslips and stained with phalloidin to detect invadopodia that are enriched with actin filaments (F-actin). Upper and lower panels are confocal images showing XY and XZ sections, respectively. Invadopodia were observed as dot-like structures containing F-actin, which degrade the rhodamine-gelatin matrix, resulting in the loss of gelatin fluorescence in the region of the invadopodia (arrowheads). (C) Podosomes formed by NIH3T3 cells transformed by constitutively active Src (NIH3T3 src). Parental NIH3T3 and NIH3T3 src cells were cultured and stained as described in (A). NIH3T3 src cells, but not parental NIH3T3 cells, form podosomes, which are observed as donut-like actin structures and colocalized with the gelatin degradation sites (arrowheads). (D) Podosome formation of macrophages and osteoclasts. RAW264.7 cells were cultured in the presence of lipopolysaccharide (LPS) (100 ng/ml) or RANKL (10 ng/ml) for 72 h to induce differentiation into macrophages or osteoclasts, respectively. Cells were stained with phalloidin and 4',6-diamidino-2-phenylindole (DAPI). Macrophages form podosomes that often organize into large clusters associated with the gelatin degradation sites (arrowheads). Osteoclasts form a dense circumferential band of F-actin, called the sealing zone (yellow arrowheads), and clusters of podosomes that are observed inside the sealing zone (white arrowheads). A large gelatin degradation region was observed under these structures.
We recently reported that PI(4,5)P2 is enriched at invadopodia and that blockage of PI(4,5)P2 function suppresses invadopodia formation and ECM degradation by invasive human breast cancer cells [39]. We also found that a kinase generating PI(4,5)P2, phosphatidylinositol-4-phosphate 5-kinase type Iα (PIPK Iα), accumulates at the invadopodia and that a knockdown of PIPK Iα inhibits invadopodia formation. Importantly, the knockdown of PIPK Iα only affects a pool of PI(4,5)P2 which is locally and newly produced by PIPK Iα. The knockdown of PIPK Iα resulted in only a slight decrease in the total amount of PI(4,5)P2, and did not affect the PI3-kinase signaling pathway, in which PI(4,5)P2 acts as a major substrate. Therefore, PI(4,5)P2 seems to exert its function via direct regulation of its own targets. Our previous study showed that N-WASP and its activators, including Nck, are critical regulators of actin polymerization at the invadopodia core structures [9]. Because the activation of N-WASP is regulated by the amount of PI(4,5)P2 on the plasma membrane [40], N-WASP is the most probable candidate for the PI(4,5)P2 target. A recent study identified the existence of a reciprocal interdependence between Nck and PI(4,5)P2 for regulation of N-WASP activity [41]. Nck is required for N-WASP-dependent actin polymerization induced by PI(4,5)P2, and Nck also stimulates PI(4,5)P2 production via recruitment of PIPK Iα. Considering that Nck and PI(4,5)P2 are essential for invadopodia formation [9,39], these components may interdependently activate N-WASP to assemble invadopodia structures.

Figure 2. A model for the regulation of invadopodia/podosome formation by membrane lipids. Lipid rafts may act as platforms for the recruitment of components of invadopodia/podosomes and for localized signaling by phosphoinositides. Caveolin-1 enriched in lipid rafts plays a role in cholesterol transport to maintain the levels of plasma membrane cholesterol (Chol) and also in MT1-MMP transport for the maturation of invadopodia. PI(4,5)P2 generated by PIP kinase type Iα (PIPK) acts as a signaling molecule to locally activate several invadopodia components, including N-WASP, and also serves as a substrate of PI3-kinase (PI3K) for generation of PI(3,4,5)P3, which in turn regulates invadopodia formation, most likely through PDK1 and Akt. Nck is required for N-WASP-dependent actin polymerization induced by PI(4,5)P2 and also stimulates PIPK for local enrichment of PI(4,5)P2. PI(3,4)P2, produced from PI(3,4,5)P3 by the action of a specific phosphatase, possibly synaptojanin-2 (SJ-2), recruits Tks5/FISH to the plasma membrane, along with its binding partner N-WASP and other proteins involved in the formation of invadopodia/podosomes. It should be noted that the functions and requirements of these molecules may be slightly different between invadopodia and podosomes, as well as among cell types.
The PI3-kinases are a family of lipid kinases that phosphorylate phosphoinositides at the D-3 position of the inositol headgroup and thus produce D-3 phosphoinositides [48]. PI3-kinases mediate the signal transduction of extracellular stimuli and regulate diverse cellular events, such as mitogenesis, survival, membrane transport, and cell migration [36]. PI3-kinases are subdivided into 3 classes (I-III) in mammals on the basis of their enzyme domain structures and substrate specificity [33]. Uncontrolled activation of the PI3-kinase signaling pathway leads to several pathological phenomena, including tumorigenesis and tumor malignancies [36]. This is evidenced by the fact that the expression and activity of several members of the PI3-kinase signaling pathway are frequently altered in a variety of human cancers [49]. PI3-kinase activity is also required for invadopodia formation, as shown in invasive melanoma cells [50]. In line with this, we recently found that the class IA PI3K catalytic subunit p110α is selectively involved in invadopodia formation in breast cancer cells, and that PDK1 and Akt mediate the signaling (manuscript in submission). The PIK3CA gene, which encodes p110α, is one of the most frequently amplified and mutated genes identified in human cancers [49,51]. Several clinical studies revealed that mutations leading to the activation of the PIK3CA gene are associated with invasive and metastatic phenotypes, as well as poor prognosis [52][53][54]. Moreover, introduction of the mutant PIK3CA gene was reported to enhance the migration, invasion, and metastasis of breast cancer cells [55]. Therefore, p110α is considered as a promising molecular target for the intervention of malignant cancers, and it has led to the development of several specific inhibitors [56].

Cells of Monocyte Origin Generate Podosomes, Which Contribute to Cancer Cell Invasion and Osteolytic Bone Metastasis

Podosomes are F-actin-rich, dynamic adhesion structures found in Src-transformed cells (Fig. 1C) and, in a physiological context, in monocyte-derived cells such as macrophages and osteoclasts (Fig. 1D). In the past decade, the generation of podosomes has been proven to be associated with the gene responsible for an X chromosome-linked immunodeficiency disease, Wiskott-Aldrich syndrome (WAS). As macrophages from patients with WAS have defects in generating podosomes and polarization of the cell [57], these structures are thought to be important for chemotactic migration and/or the invasion of macrophages. Actually, the product of the gene (i.e., WASP) and its ectopic analogue, N-WASP, were shown to be indispensable for actin polymerization at podosomes via activating the Arp2/3 complex [57][58][59][60][61].
It is well established that the neoplastic properties of cancer cells are affected by interactions with the tumor microenvironment [62]. Tumor-associated macrophages (TAMs) have been implicated in tumor progression, metastasis, and poor prognosis in several human cancers [63,64]. A paracrine loop between macrophages and cancer cells has been proven to facilitate cancer cell migration and invasion both in vitro and in vivo, confirming the vicious role of TAMs [65,66]. Cancer cells stimulate the invasion of macrophages by secreting colony stimulating factor-1 (CSF-1), which in turn causes the macrophages to stimulate invasion of the cancer cells by secreting epidermal growth factor (EGF). EGF and CSF-1 are shown to stimulate invadopodia formation in cancer cells and podosome formation in macrophages, respectively [6,9]. Therefore, the paracrine loop between cancer cells and TAMs may promote cancer progression [63,64] partly via the formation of invadopodia/podosomes.
Osteoclasts are highly specialized multi-nucleated cells that are differentiated from monocyte/macrophage precursors on the bone surface in response to CSF-1 and receptor activator of nuclear-factor-κB ligand (RANKL). During the differentiation into mature cells, osteoclasts reorganize the actin cytoskeleton to form a dense circumferential band of F-actin (Fig. 1D). This ring forms a tight adhesive contact (the sealing zone) that defines a subcellular environment (known as a resorption pit or lacuna) into which H+ and lytic enzymes are secreted, thereby allowing effective erosion of the bone [58,67]. The fully mature osteoclast can detach from the bone and move away from the resorption lacuna to participate in several rounds of resorption, which require podosome-associated cell motility [67,68]. One of the upstream regulators of WASP, the cytoplasmic kinase Src, is essential for osteoclast activity in vivo, because Src knockout mice suffer from severe osteopetrosis caused by deficient osteoclast activity [69]. Osteoclasts derived from such mice cannot adhere and spread properly, and fail to give rise to mature sealing zones when attached to the bone. Bone metastases from breast cancer are typically osteolytic and cause destruction of the bone [70]. Breast cancer cells augment the activity of bone resorption via promoting the differentiation and podosome formation of osteoclasts by secreting transforming growth factor-beta (TGF-β), tumor necrosis factor-alpha (TNF-α), interleukins (ILs), and parathyroid hormone-related protein (PTHrP), which leads to osteolytic bone metastasis.
We investigated the localization of different species of phosphoinositides using various phosphoinositide-binding PH domains [77]. We demonstrated that PI(3,4)P2 is highly enriched in podosomes, compared to the relatively diffuse localization of PI(3,4,5)P3, which is also found in lamellipodia and intracellular vesicles. What is intriguing is that excessive expression of the PH domain of Tapp1, which binds to PI(3,4)P2, as well as the PH domain of Akt, which binds both to PI(3,4)P2 and PI(3,4,5)P3, significantly suppressed podosome formation. This effect is thought to occur through sequestering of those lipids by the domains, because the amount of protein expressed in a cell tends to correlate with the suppression effect. Furthermore, we found that PI(3,4)P2 is synthesized by PI3-kinase and synaptojanin-2 in the vicinity of the focal adhesions, and that this phosphoinositide triggers the recruitment of a protein complex that includes Tks5, Grb2, and N-WASP, which results in the conversion of the adhesion sites to podosomes [77,78]. Our results support the essential role of synaptojanin-2 in glioma cell migration and invasion [43], although the localization of PI(3,4)P2 in glioma cells has not been determined. Tks5 is an adaptor protein with an N-terminal phox homology (PX) domain, which was originally identified as an Src substrate [79]. Both Tks5 and its relative, Tks4, have been shown to play important roles in podosome formation, matrix degradation, and tumor growth in vivo [80][81][82][83]. Recently, they have been shown to mediate the generation of reactive oxygen species (ROS) at the invadopodia of cancer cells, which is required for invadopodia formation and cancer cell invasion [84,85]. Moreover, Tks5 binds to supervillin, a lipid raft-enriched protein which is involved in integrin recycling, cell motility, and invadopodia formation [86][87][88], which suggests that they play roles as versatile regulators of invadopodia/podosomes. As the PX domain of Tks5 binds to PI(3,4)P2, and this interaction is essential for podosome formation downstream of Src [77,82,83], targeting this interaction would be a promising therapeutic strategy for the selective intervention of cancer cell invasion and metastasis.
Concluding Remarks
As described above, accumulating evidence leaves us in no doubt that invadopodia/podosomes play a pivotal role in the invasion and metastasis of cancer cells. Moreover, podosomes formed by TAMs and osteoclasts in the tumor microenvironment seem to play supportive roles for cancer invasion and metastasis. The organization and components of the plasma membrane, such as lipid rafts and phosphoinositides, regulate the formation of invadopodia/podosomes. Therefore, targeting the molecular components of these structures, which include membrane lipids and their synthetic pathways, will contribute to the development of new strategies for the treatment of cancer invasion and metastasis.
One question that still remains unanswered is how lipid raft formation/degradation and phosphoinositide turnover are spatiotemporally regulated at invadopodia/podosomes. It is evident that invadopodia/podosomes are formed through several functional steps. Therefore, lipid rafts and phosphoinositide species may have distinct functions at different stages of invadopodia/podosome formation. Furthermore, although invadopodia and podosomes seem to share basic molecular components and functions, i.e., ECM degradation, their morphologies are quite different, even among cell types. If membrane lipids determine the site of invadopodia/podosome assembly, they may be critical determinants for the morphology, and most likely the function, of these structures. Further studies will be needed to address these questions. | 2014-10-01T00:00:00.000Z | 2010-09-01T00:00:00.000 | {
"year": 2010,
"sha1": "2d49e084f071ad7f7cc9837a027461c6117eac4c",
"oa_license": "CCBY",
"oa_url": "http://www.oncotarget.com/index.php?journal=oncotarget&op=download&page=article&path[]=164&path[]=212",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "2d49e084f071ad7f7cc9837a027461c6117eac4c",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
25622431 | pes2o/s2orc | v3-fos-license | The Helium-Rich Cataclysmic Variable ES Ceti
We report photometry of the helium-rich cataclysmic variable ES Ceti during 2001-2004. The star is roughly stable at V ~ 17.0 and has a light curve dominated by a single period of 620 s, which remains measurably constant over the 3 year baseline. The weight of evidence suggests that this is the true orbital period of the underlying binary, not a"superhump"as initially assumed. We report GALEX ultraviolet magnitudes, which establish a very blue flux distribution (F_nu ~ nu^1.3), and therefore a large bolometric correction. Other evidence (the very strong He II 4686 emission, and a ROSAT detection in soft X-rays) also indicates a strong EUV source, and comparison to helium-atmosphere models suggests a temperature of 130+-10 kK. For a distance of 350 pc, we estimate a luminosity of (0.8-1.7)x10^34 erg/s, yielding a mass accretion rate of (2-4)x10^-9 M_sol/yr onto an assumed 0.7 M_sol white dwarf. This appears to be about as expected for white dwarfs orbiting each other in a 10 minute binary, assuming that mass transfer is powered by gravitational radiation losses. We estimate mean accretion rates for other helium-rich cataclysmic variables, and find that they also follow the expected M-dot ~ P_o^-5 relation. There is some evidence (the lack of superhumps, and the small apparent size of the luminous region) that the mass transfer stream in ES Cet directly strikes the white dwarf, rather than circularizing to form an accretion disk.
INTRODUCTION
The star KUV 01584-0939 (hereafter ES Ceti) was discovered as an ultraviolet-bright object during the Kiso survey (Kondo, Noguchi, & Maehara 1984), with strong He II and weak Balmer emission (Wegner, McMahan, & Boley 1987). It soon became clear that the "Balmer" lines are really He II Pickering lines, which identifies ES Cet as essentially a pure helium star. Warner & Woudt (2002, hereafter WW) found a persistent photometric period of 620.2 s, certifying ES Cet as a member of the AM Canum Venaticorum class: cataclysmic variables containing a pair of helium white dwarfs.
Since they possess Roche-lobe-filling white dwarfs, all stars in this class have a very short orbital period; and their light curves are typically dominated by variations at the orbital period P_o and a closely spaced "superhump" period. Superhump variations, which are usually dominant, arise from the precession of the accretion disk (Whitehurst 1988, Hirose & Osaki 1990). These periods can be quite difficult to disentangle and interpret; confusion between them has been a significant barrier to progress. Therefore we have conducted a three-year study of ES Cet, in order to parse the periodic signal into its correct components. Here we report details of that study, along with a general study of the AM CVn class.
OBSERVATIONS AND LIGHT CURVES
The optical observations consist of nightly time-series photometry obtained with the 1.3m McGraw-Hill telescope at the MDM Observatory on Kitt Peak, and several telescopes at the South African Astronomical Observatory. The observations covered ~140 hr over 51 nights during 2001-4, and are summarized in Table 1. The data are differential CCD photometry (Skillman & Patterson 1993, O'Donoghue 1995), usually with a wide bandpass (~4000-8000 Å) to maximize throughput. ES Cet's mean brightness was roughly stable (near V=17.0) from night to night, with an average peak-to-peak range of ~0.12 mag, as seen in the raw light curve of Figure 1. This was also true in the WW data. The nightly power spectra were also essentially identical to those of WW: a fundamental frequency of 139.3 c/d and several harmonics.
PERIOD ANALYSIS
In order to study fine structure in the periodicity, we selected our densest data streams and calculated separate power spectra. The best interval was in 2002 October (JD 2452550-6), and the relevant portions of that power spectrum are shown in Figure 2. The star exhibits, to a first approximation, a fairly pure signal at 139.31(1) c/d, with harmonics but no obvious sideband structure. Analysis of the other dense data streams (2001 October and September) gave a similar result.
These three time series are long enough to yield precise mean waveforms for the 620 s signal, and these are shown in Figure 3. Maxima and minima are well defined and the waveform is essentially stable. Individual-night waveforms also agreed with the general pattern (slow rise, fast decline). In order to track the stability of the 139.31 c/d signal, we constructed an O-C diagram using the nightly timings of minimum light (Table 2). With respect to a suitably chosen test ephemeris with constant period, the O-C residuals essentially trace out a straight line of zero slope, indicating the constancy of the period. This is shown in Figure 4. More precisely, the star tracks the ephemeris

Minimum light = HJD 2452201.3941(2) + 0.007178376(2) E.
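As an illustration, the O-C residuals follow directly from this ephemeris. A minimal sketch, assuming numpy, with the observed minimum timings of Table 2 supplied as an array of HJDs:

```python
import numpy as np

T0, P = 2452201.3941, 0.007178376  # HJD epoch and period (days)

def o_minus_c(observed_minima):
    """O-C residuals (days) of observed minima against the linear ephemeris."""
    E = np.round((observed_minima - T0) / P)  # nearest integer cycle count
    computed = T0 + P * E                     # predicted times of minimum
    return observed_minima - computed
```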
Having ascertained that the signal is of good frequency and amplitude stability, we decided to refine the search for fine structure by "cleaning" the time series for strong signals. We subtracted the obvious signals, calculated the power spectrum of the residuals, and then reinserted the strong signals (which tower off-scale) in the power spectrum. The relevant results are seen in Figure 5, showing only a broad bump on the high-frequency side of the strong signals.
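The cleaning step amounts to prewhitening. A minimal sketch, assuming numpy and a least-squares sinusoid fit at each strong frequency (our formulation of the procedure, not necessarily the authors' exact code):

```python
import numpy as np

def prewhiten(t, y, freq):
    """Fit and subtract a sinusoid at freq (cycles/day) from light curve (t, y)."""
    phase = 2 * np.pi * freq * t
    A = np.column_stack([np.cos(phase), np.sin(phase), np.ones_like(t)])
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return y - A @ coeffs  # residuals, used for the next power spectrum
```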
The individual peaks comprising these broad bumps correspond to very low Fourier amplitudes, all less than 0.01 mag. For our time series, these are too weak to permit a secure frequency measurement. So the main result here is failure; we could not certify and measure any fine structure. But since the windowing of our time series is far from ideal, and since such asymmetry (blue shift) is unusual in our experience, the bumps deserve a mention and might be worth further study. They all sit ~12 c/d displaced from the main signal, and their asymmetry about the main signal assures that they are not an artifact of simple amplitude modulation (which produces equal power on both sides).
THE AM CVN CREDENTIALS
ES Cet's helium-rich spectrum, very short period, and constant magnitude certify it as a very likely member of the AM Canum Venaticorum class of CVs. The very blue colors suggest a good match to HP Librae and AM CVn itself. But for the latter two stars, and indeed for all previous members of the AM CVn class, the dominant periodic signal is the superhump (see e.g. O'Donoghue 1995). That is what we expected when we began this study; but there is precious little evidence to support this.
Precise parsing of the signals has been done for just two stars in this class: AM CVn (Skillman et al. 1999) and HP Lib (Patterson et al. 2002). These stars show orbital variations of 0.012 mag full amplitude, and superhump variations of 0.02-0.05 mag. The orbital periods are stable over the longest measurable baselines (years), while superhump periods wander slightly on timescales of months. The superhump harmonics are also characterized by a subtle but regular sideband structure (see Figures 3-4 and Table 2 of Skillman et al. 1999).
In the ES Cet time series we searched carefully for both of these superhump earmarks, and did not find them. Figures 3 and 4 demonstrate the stability of the periodic signal. Figure 5 suggests that there may be some fine-structure near the strong signals; but unlike the case of other AM CVn stars, we could not find any precise frequency shift associated with these weak bumps. Thus the fine structure test also fails.
Excuses might be found for such malfeasance, but the most likely hypothesis is simply that the photometric period is the orbital period. Recent spectroscopic studies show equivalent widths and profiles of the emission lines varying with a period of 620 s, and this certainly suggests an orbital interpretation (Steeghs 2003, Woudt & Warner 2003. Thus the weight of evidence certainly favors the orbital view.
WHY NO SUPERHUMPS?
This leaves unanswered the question of why ES Cet does not show superhumps. Stars can avoid superhumps by having too high a mass ratio (q = M_2/M_1 ≥ 0.4 seems to forbid them), or too low an accretion rate (Ṁ ≤ 10^-9 M_⊙/yr, or M_V > +8, seems to forbid them). At P_o = 620 s with a degenerate secondary, ES Cet should have M_2 ~ 0.1 M_⊙ and therefore q probably near 0.15 (see Figure 2 of Faulkner, Flannery & Warner 1972); so that solution probably does not work here. For hydrogen-rich CVs, the equivalent width of the emission lines correlates well with M_V (Patterson 1984); but this relation is not calibrated for helium CVs, so we now seek to remedy this.
There are just two distance-finding calibrators we could consider "primary": trigonometric parallax, and spectrophotometry of the mass-donor star. Among the AM CVns there are just three useful parallaxes (GP Com and V396 Hya, Thorstensen 2004; AM CVn, Dahn 2004) and no detected donors. A secondary method is a disk model fit to accurate spectrophotometry (continuum + lines) over a substantial wavelength interval. This method was applied to AM CVn and CR Boo (El-Khoury and Wickramasinghe 2000; Nasser, Solheim, & Semionoff 2001). Finally we could consider a standard-candle method: using the M_V's of AM CVn and CR Boo to estimate those of other class members. This last method we would consider "tertiary". But it is known to work quite well for hydrogen-rich dwarf novae in eruption, and is a plausible consequence of the well-known thermostat resident in the accretion disks of dwarf novae (Warner 1987, Cannizzo 1998). We collect the relevant information in Table 3. Most is self-explanatory, but a few points deserve mention. We cite references pertinent to the issues considered here, not necessarily the most complete or recent studies. Distance estimates from primary and secondary indicators are listed without comment, whereas those from a standard-candle method are less reliable and listed in parentheses. Of course, no distance is estimated from emission lines, since that is what we wish to establish. Mass ratios are estimated from the superhump period excess or the motion of spectral lines. We exclude from Table 3 one certain AM CVn member (KL Draconis) and two other possibles (V407 Vulpeculae and RXS J0806.3+1527), basically because not enough information is yet available on them. Figure 6 shows the correlation of the equivalent width of He I λ5876 with M_V. Very weak emission components are sometimes seen in the high states; the strengths are hard to measure, so we characterize them as "<1 Å". We also use information from individual stars in different states. Figure 6 shows the same general trend exhibited by hydrogen-rich CVs: strong emission when (intrinsically) faint, weak emission when bright. ES Cet has a line of 5 Å equivalent width, suggesting M_V ~ 9.4 (and d ~ 350 pc) according to Figure 6. Assuming a bolometric correction similar to that of other AM CVn stars (see footnote 4), this suggests Ṁ ≈ 10^-10 M_⊙/yr and therefore might explain why ES Cet does not superhump: because the accretion rate is too feeble.
However, this argument is only suggestive, since the data of Figure 6 are so sparse and since the commonality of ES Cet with the other stars is unproven. So we will adopt this as only a nominal distance. What other constraints are there on Ṁ?
ULTRAVIOLET AND X-RAY OBSERVATIONS: HOW MUCH TOTAL FLUX?
For a star as blue as ES Cet, the V magnitude greatly underestimates the bolometric flux. The star was detected in the first release of GALEX photometric magnitudes (Schiminovich 2004). The FUV (centered at 1520 Å) and NUV (2310 Å) magnitudes were found to be 15.25 and 15.7 (±0.1), respectively, on the AB magnitude scale. This corresponds to a dereddened continuum distribution of F_ν ∝ ν^1.3, which is essentially the bluest known CV.
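For reference, the AB scale gives F_ν = 3631 Jy × 10^(-0.4 m_AB), so the continuum slope follows directly from the two GALEX bands. A minimal sketch of the arithmetic (this yields the observed slope, before the dereddening that steepens it to the quoted ν^1.3):

```python
import math

def fnu_cgs(m_ab):
    """F_nu in erg / cm^2 / s / Hz from an AB magnitude."""
    return 3631e-23 * 10 ** (-0.4 * m_ab)

c = 2.998e18                             # speed of light in Angstrom/s
nu_fuv, nu_nuv = c / 1520.0, c / 2310.0  # GALEX band centers
alpha = (math.log(fnu_cgs(15.25) / fnu_cgs(15.7))
         / math.log(nu_fuv / nu_nuv))    # F_nu ~ nu^alpha
print(f"observed slope alpha = {alpha:.2f}")  # ~1.0 before dereddening
```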
The flux distribution is shown in Figure 7, where we also include the hard X-ray flux detected by CHANDRA (Strohmayer 2004) and the soft X-ray flux marginally detected by ROSAT (Voges et al. 1999). The former is clearly not significant, comprising only 0.1% of the total flux. There is little direct spectral information in the 7 detected photons of the latter observation; but the spectrum must be very soft, in order not to conflict with the very low flux seen above 0.5 keV by CHANDRA. We experimented with blackbody fits to the ROSAT data, for the interstellar absorption expected along that line of sight (N_H = 2×10^20 cm^-2, Schlegel et al. 1998). For the range of temperatures appropriate to "one-component" fits (including also the optical-UV flux), we find fluxes near the diamond in Figure 7.
We can learn more by studying the flux distributions of model stars. The vast majority of CVs do not have appreciable flux above the Lyman limit, due to H opacity in the disk's outer layers; the same applies, mutatis mutandis, to a hot helium disk, due primarily to the He II edges at 228 and 912 Å. But for ES Cet the blue optical-UV color implies very high temperature, enough to erode or maybe even destroy the edges. The helium-rich white dwarf models of Wesemael (1981) explore this in detail. In Figure 7 we superpose the model flux distributions at T = 100,000 and 150,000 K. These appear to bracket the observations fairly well, and we conclude that a fairly good fit to all the data can be found with one component at T = 130±15 kK. The total flux in such a component is 8(±4)×10^-10 erg/cm^2/s, which corresponds to a luminosity L = 8(±4)×10^33 d_300^2 erg/s, where d_300 is the distance in units of 300 pc. This is essentially the maximum-luminosity solution for ES Cet. A two-component fit is possible, but produces less total flux. The ROSAT data can be fit by lower temperatures, but only by overproducing optical/UV emission. The data can be fit by higher temperatures, but then the luminosity of the soft X-ray source is small, since most of it actually falls above 0.1 keV, i.e. in the ROSAT passband.

Footnote 4: This will turn out to be wrong. The distance estimate may or may not be reasonable, but we will see that the EUV emission (evinced by the He II lines) is much greater in ES Cet, implying a higher bolometric correction and therefore a higher Ṁ.
Another constraint can come from the He II 4686 emission, whose great strength (80 Å equivalent width) testifies to a large supply of photons with energy above 54 eV. The observed luminosity in the line is 10^30 d_300^2 erg/s. Repeating the calculation in Eq. (6) of Patterson & Raymond (1985) for a soft X-ray source, we estimate an ionizing luminosity L_SX ~ (1-10)×10^33 d_300^2 erg/s. This corresponds to an unabsorbed flux roughly shown by the cross in Figure 7, where its location suggests that this is probably the X-ray source (barely) seen by ROSAT.
An alternative version of this argument, from the full range of Wesemael's helium-atmosphere models, can be made as follows. We have mined his tables and figures for the dependences of the following parameters on T_eff: (a) the bolometric correction relative to the optical-UV flux; and (b) the bolometric correction relative to the V flux. We mean by "bolometric correction" the "correction in magnitudes for flux outside the relatively accessible 912-10000 Å and 5000-6000 Å bands". These dependences are shown in Figure 8.
For temperatures below ~10^5 K, Figure 8 shows quite modest corrections. Even at 10^5 K, the optical/UV sum only undercounts the flux by 2.5 magnitudes, or a factor of 10. Can the temperature be as high as 3×10^5 K, in which case the undercount is much more serious, a factor of 100? Well, no, this is not feasible. So high a temperature, if required also to reproduce the optical-UV data, would produce a very luminous soft X-ray source, in contrast to the very weak source seen by ROSAT. If not required to reproduce the optical-UV flux, such a temperature is feasible but cannot represent much luminosity (since it does not leak appreciably into the UV or hard X-ray, and produces a pretty wimpy ROSAT detection). There might also be a worry about variability, since the observations relevant to Figure 7 were obtained at widely different times. But ES Cet deeply impressed us with its constancy in visual light. Although we cannot rule out changes of up to 0.1 mag, we essentially saw the same mean brightness on every night of the campaign. Nor is there any historical evidence for brightness change. Indeed, apart from the strictly periodic variation, it is the most constant CV we have ever observed.
These numbers imply an accretion rate Ṁ = (2-4)×10^-9 M_⊙/yr onto an assumed 0.7 M_⊙ white dwarf, for the nominal 350 pc distance. Here we have assumed R_1 ∝ M_1^-0.7, appropriate for a white dwarf of moderate mass.
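As a rough check, the quoted rate follows from the disk-luminosity convention L = GM_1Ṁ/(2R_1); the 0.7 M_⊙ mass is the paper's assumption, while the radius of 7.8×10^8 cm and the factor-of-2 convention are ours.

```python
G, Msun, yr = 6.674e-8, 1.989e33, 3.156e7  # cgs units
M, R = 0.7 * Msun, 7.8e8                   # assumed white dwarf mass and radius
for L in (0.8e34, 1.7e34):                 # erg/s, range from the flux fit
    mdot = 2 * L * R / (G * M)             # g/s, from L = G*M*mdot/(2*R)
    print(f"L = {L:.1e} erg/s -> Mdot ~ {mdot * yr / Msun:.1e} Msun/yr")
# prints ~2.1e-9 and ~4.5e-9, consistent with the quoted (2-4)e-9
```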
ACCRETION RATES IN THE AM CVN CLASS
We repeated this analysis for other members of the AM CVn class. These are easier, because the continuum slopes are more modest, implying small bolometric corrections (even for the hottest, probably AM CVn, the He II edges largely suppress emission at EUV wavelengths). After integrating the observed 0.1-1.0 µ fluxes and applying the (small) bolometric corrections, or estimating for the stars without published ultraviolet data, we used our distance estimate to estimate bolometric luminosities. We then calculated mean accretion rates from the assumption of pure disk accretion onto a 0.7 M_⊙ white dwarf. These are plotted versus P_o in the lower frame of Figure 9. For the three dwarf novae we took care to average over a cycle of eruption and quiescence. The error shown includes uncertainty in bolometric flux, M_1 (0.5-1.0 M_⊙), and distance (except for ES Cet).

The predictions depend on the nature of the secondary: (a) a cold, fully degenerate white dwarf (ZS); or (b, c) an evolved helium star, which begins mass transfer before the end of nuclear burning. This possibility was anticipated in the earliest discussions of AM CVn (Faulkner, Flannery, & Warner 1972). The corresponding mass-period relations can be written in terms of P_1000 = P_o/1000 s. These differences in mass-radius and mass-period make a substantial difference in the predicted accretion rates.

How do the points compare with the solid-line predictions in Figure 9? In general, the points track along a P_o^-5 curve pretty well, as predicted. Most of them could be consistent with either a cold (ZS) white-dwarf secondary, or a larger SKH/TF secondary. ES Cet seems to be consistent only with a ZS secondary; but that error bar (only) does not include distance uncertainty, which is considerable since the star is arguably sui generis. At 1 kpc, which is possible, the star would jump up to the SKH prediction.
Three other constraints are significant. First, some AM CVn stars suffer disk instabilities, and others don't. The theory for this has been developed by Smak (1983) and Tsugawa & Osaki (1997); the dashed lines in the lower frame of Figure 9 show the results of Tsugawa & Osaki. Below Ṁ ~ 10^-11 M_⊙/yr (depending slightly on M_1), disks are too cool to ionize helium anywhere, and accrete more or less steadily. Above the upper dashed line, disks are too hot everywhere to suffer the ionization instability. For intermediate Ṁ, disks cycle between cold and hot states, and the stars consequently show dwarf-nova eruptions. All the AM CVn binaries obey these predictions (the three dwarf novae are CR Boo, CP Eri, and V803 Cen). So far, so good.
Second, we have independent evidence regarding the masses of the secondaries, via the superhumps which tend to dominate their light curves. At least for H-rich CVs, there is a good correlation between mass ratio and the observed fractional superhump period excess ε = (P_sh - P_o)/P_o, namely ε = 0.22(2)q (Patterson 2001). So we know four mass ratios, plus two others (GP Com and V396 Hya) based on the small orbital wiggles in the emission lines (Morales-Rueda et al. 2003; Ruiz et al. 2001). These estimates are given in Table 3, and shown in the upper frame of Figure 9. The curves show predictions for two versions of mass-radius. Again the points are roughly between the theory curves, though at short period there is a decided preference for SKH. And third, it appears from studies of H-rich CVs that there is a magic number (roughly 10^-9 M_sun/yr, but possibly defined by the upper dashed line in Figure 9) which determines the presence or absence of superhumps (footnote 7). How does this expectation fare in Figure 9? Leaving out ES Cet, the two stars of highest Mdot superhump; the two stars of lowest Mdot don't; and the three stars of intermediate Mdot basically do, but only when Mdot is higher (superoutburst). This is quite consistent with what we have learned about superhumps in CVs, except for the embarrassing noncompliance of ES Cet. In Sec. 5 we speculated that insufficient Mdot in ES Cet might be the culprit; but after accounting for the observed and inferred EUV radiation, we see that superhump absence is still unexplained.
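For readers who want the arithmetic of the ε = 0.22q calibration spelled out, a minimal sketch (the periods below are hypothetical placeholders, not measurements):

```python
# Sketch: inverting the empirical superhump relation eps = 0.22*q
# (Patterson 2001) to estimate a mass ratio; sample periods are placeholders.
def mass_ratio_from_superhump(P_sh, P_o, slope=0.22):
    """Mass ratio q = M2/M1 from superhump and orbital periods (same units)."""
    eps = (P_sh - P_o) / P_o   # fractional superhump period excess
    return eps / slope

print(f"q ~ {mass_ratio_from_superhump(1800.0, 1765.0):.3f}")  # -> q ~ 0.090
```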
A possible clue to that absence can be found by estimating the size of the emitting region. In order to produce the correct flux at the ~130 kK temperature suggested by Figure 7, the luminous area should have a radius of 4.5×10^8 (d/300 pc) cm, assuming A = πR^2 (appropriate for a disk or white dwarf). This could represent a fairly massive (1.1 M_sun) white dwarf, or a large spot on a white dwarf. It is not so easily reconciled with a disk, which should have R ~ 14×10^8 cm. This recalls the "direct impact" theory of Marsh & Steeghs (2002, hereafter MS), who calculated that the mass-transfer stream should strike the white dwarf rather than form a disk - for a white dwarf binary with P = 9.5 minutes (V407 Vul). That would explain it: no superhumps because there's no disk. We repeated the MS calculation for the slightly longer period (10.3 versus 9.5 minutes), and verified that there is no great sensitivity to period. There is still a wide swath of M_1-M_2 space (shown in Figure 2 of MS) where direct impact is expected, as long as M_1 < 0.8 M_sun (more massive primaries are smaller than the stream's impact parameter, and are therefore missed).
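A minimal sketch of this size estimate (the flux value is a placeholder; the geometry assumed is A = πR^2 seen at distance d, so F = σT^4 R^2 / (4d^2), which gives the linear R ∝ d scaling quoted above):

```python
# Sketch: radius of the emitting region from a blackbody flux,
# F = sigma*T^4 * (pi*R^2) / (4*pi*d^2), i.e. R = 2*d*sqrt(F/(sigma*T^4)).
import math

SIGMA = 5.670e-5   # Stefan-Boltzmann constant, cgs
PC = 3.086e18      # parsec, cm

def emitting_radius(F_bol, T=130e3, d_pc=300.0):
    """Radius (cm) of a region radiating bolometric flux F_bol (erg/cm^2/s)."""
    d = d_pc * PC
    return 2.0 * d * math.sqrt(F_bol / (SIGMA * T ** 4))

print(f"R ~ {emitting_radius(1.0e-10):.1e} cm")  # -> ~1.5e8 cm for the placeholder flux
```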
This seems pretty plausible to us, and we take the small emission area to be an empirical clue that it is the correct explanation. The data of Figure 9 seem compliant in all other respects with the simple theory invoking GR only as the driver of evolution: the accretion rates are about right and show the correct scaling with P o ; the eruptive stars are just those in the unstable-disk regime; and the superhump characteristics are normal except for ES Cet, which may well not have a disk. The House of Can Ven appears to be in satisfactory order.
SUMMARY AND OUTLOOK
1. ES Cet's magnitude remained roughly constant at V=17.0 throughout our three years of photometry, with little flickering. The powerful periodic signal discovered by WW was detected on every night at 139.3 c/d, along with its first three harmonics.
2. The signal's full amplitude was 0.12 mag, with little variability. The phase remained measurably constant over the 3-year baseline.
3. Most other stars in the AM CVn class are dominated by a superhump variation; we studied the ES Cet time series to find the telltale signatures of superhumps, and mostly failed to find them. The power spectra did show some "blue bumps" ~12 c/d displaced from the main signal, but the bumps were weak and not displaced by any precise amount. With a strictly constant phase and no repeatable fine structure (to our limits of measurement), the main signal most likely denotes the true orbital period of the binary.
4. We present an empirical correlation between M_V and the equivalent width of He I λ5876 emission, which resembles the well-known relation for hydrogen-rich CVs. We use this to extract a nominal distance estimate of 350 pc to ES Cet, although a better estimate of distance is sorely needed (a distance-modulus sketch follows this summary).
5. The ultraviolet magnitudes of ES Cet establish it to be a very blue star - essentially bluer than any known CV. Available constraints from X-ray observation (ROSAT detection and a strong CHANDRA limit at 0.5 keV), plus the great strength of He II 4686 emission, establish that most of the flux is emitted at EUV wavelengths. A one-component fit to the data gives a temperature of 130±10 kK, and a luminosity of [value lost in extraction, along with summary item 6].

7. We estimate a size for the emitting region in ES Cet: 4.5×10^8 (d/300 pc) cm. This seems too low to be the accretion disk, but could be the white dwarf, or a large spot on the white dwarf. This provides some evidence supporting the idea that in the ultracompact binaries, the mass-transfer stream directly impacts the white dwarf, rather than forming an accretion disk. It would also explain why ES Cet fails to show superhumps: because there is no disk.
8. The placement of stars in Figure 9 includes the uncertainty in bolometric flux, an assumed uncertainty in M_1 (0.5-1.0 M_sun), and the uncertainty in distance - except that no distance uncertainty is assigned to ES Cet since we regard it as sui generis. Since accretion rates scale as d^2 M_1^(-1.7), more accurate constraints on M_1 and distance in all of these stars would go a long way towards refining the test shown in Figure 9.
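Referring back to item 4: the arithmetic behind a nominal photometric distance is just the distance modulus. A minimal sketch (the M_V ≈ 9.3 used here is an assumption back-computed from V = 17.0 and 350 pc, purely for illustration; extinction is neglected):

```python
# Sketch of a distance-modulus estimate; M_V is an assumed, back-computed
# value, not a calibration given in this paper.
def distance_pc(m_V, M_V):
    """Distance in parsecs from apparent and absolute magnitudes."""
    return 10.0 ** ((m_V - M_V + 5.0) / 5.0)

print(f"d ~ {distance_pc(17.0, 9.3):.0f} pc")  # -> ~347 pc, near the nominal 350 pc
```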
We gratefully acknowledge discussions and communication of unpublished data from Conard Dahn, John Thorstensen, and Tom Marsh; and observational assistance from Jonathan Kemp and Eve Armstrong. We relied on financial support from the NSF (AST-00-98254) and STScI (NST 90-9459.04A).
NOTE. - F_X/F_ν denotes F(0.1-2.5 keV)/F(5000-6000 Å), with an estimated correction for absorption. Most CVs are hard X-ray sources, so that correction assumes a hard spectrum. Like most data discussed in this paper, this was obtained through a broad bandpass, centered near 5500 Å. We estimate a mean V of 17.0, although there could be a zero-point error of up to 0.15 mag.

[Figure caption fragment, beginning lost in extraction:] ... October. The signal is remarkably stable in period, amplitude, and waveform - from month to month, and even from night to night.

FIGURE 5. - "Cleaned" power spectra of 2002 September and October (upper and lower panels respectively). The fundamental and harmonics have been subtracted from the time series, and then re-inserted into the power spectra, to draw attention to the "blue bump" flanking the main signals. This is typically displaced by ~12 cycles/day, but we found no convincing pattern in the precise frequencies.
FIGURE 6. - Equivalent width of He I λ5876 emission versus M_V, for AM CVn stars. The correlation found (strong emission when faint, weak emission when bright) mimics that found in H-rich CVs.

FIGURE 7. - Observed fluxes in ES Cet, corrected for reddening. The star shows a very blue optical-UV continuum, weak emission in the ROSAT PSPC (0.1-2.0 keV) band, and very weak emission in the CHANDRA (0.5-10 keV) band. The very low CHANDRA counts in 0.5-2.0 keV establish that ROSAT observed a soft X-ray source, and fits of that data to a soft source gave fluxes near the diamond. The cross estimates the flux above 54 eV inferred from the strength of the He II 4686 emission. The CHANDRA observation is represented by a 0.8 keV blackbody (Strohmayer 2004), and the two theory curves show the fluxes expected from helium-rich model atmospheres at 100 and 150 kK. A single temperature of 130±10 kK fits all the data. | 2017-09-21T11:30:27.264Z | 2004-12-02T00:00:00.000 | {
"year": 2004,
"sha1": "f2c3bdf5844997064b82fca221c9b7260c373f20",
"oa_license": null,
"oa_url": null,
"oa_status": "CLOSED",
"pdf_src": "Arxiv",
"pdf_hash": "f2c3bdf5844997064b82fca221c9b7260c373f20",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
260889513 | pes2o/s2orc | v3-fos-license | Epidemiological analysis of death among patients on maintenance hemodialysis: results from the Beijing Blood Purification Quality Control and Improvement Center
Background China has the largest number of patients on maintenance hemodialysis (MHD) worldwide. Despite continuous improvements in hemodialysis techniques, patients on MHD have a higher mortality rate than the general population. Understanding the characteristics of death in this population can better promote clinical practice, thereby improving patients' survival. Methods We collected demographic and clinical data for patients on MHD registered in the Beijing Blood Purification Quality Control and Improvement Center database from 2014 to 2020. The annual mortality rate was calculated, and the primary cause of end-stage renal disease (ESRD), dialysis vintage, and cause of death among deceased patients were analyzed. Results (1) 24,363 patients on MHD were included, of whom 6,065 died from 2014 to 2020. The annual mortality rate fluctuated between 7.4% and 8.0%. The median age of death was 70.0 (60.8-79.0) years and the male-to-female ratio was 1.27:1. (2) The top three primary causes of ESRD in deceased patients were chronic glomerulonephritis (CGN), diabetic nephropathy (DN), and hypertensive nephropathy (HN). Comparison of the annual mortality rate showed DN > HN > CGN. (3) The median dialysis vintage of deceased patients was 3.7 (1.8-6.9) years, which slowly increased annually. Patients with diabetes had a shorter dialysis vintage than patients without diabetes (3.4 vs. 4.1 years, Z = 8.3, P < 0.001). (4) The major causes of death were cardiovascular disease (20.2%), sudden death (18.1%), infection (17.9%), and cerebrovascular disease (12.6%). Proportions of death from cardiovascular disease, infection, and sudden death were higher in patients with diabetes (22.2%, 20.2%, and 20.0%) than patients without diabetes (18.4%, 15.8%, and 16.3%). Sudden death was the leading cause of death in young (18-44 years; 27.0%) and middle-aged (45-64 years; 20.8%) patients, whereas infection was the leading cause of death in patients aged ≥ 75 years (24.5%). Conclusion The annual mortality rate of patients on MHD in Beijing was relatively stable from 2014 to 2020. Sudden death was more likely to occur in young and middle-aged patients, and more patients aged ≥ 75 years died from infections.
Jing Liu 1, Huixian Zhang 1, Zongli Diao 1, Wang Guo 1, Hongdong Huang 1, Li Zuo 2* and Wenhu Liu 1*

Background Kidney disease is an important disease affecting human health globally. Currently, the prevention and control of kidney disease in China still faces serious challenges. The incidence of kidney disease in China is high, with approximately 120 million patients with chronic kidney disease (CKD) and 1 to 2 million new cases of uremia each year [1]. The numbers of Chinese patients with CKD and ESRD are increasing annually, bringing a heavy burden on China's medical and health resources. Therefore, kidney disease has become a major disease and an important public health issue that affects the national health of China. Today, hemodialysis remains the main blood purification treatment for ESRD nationwide. The China Kidney Disease Network 2016 Annual Data Report showed that patients on hemodialysis accounted for 91.94% of all patients on dialysis [2]. The number of patients on maintenance hemodialysis (MHD) reached 735,000 in 2021 in China, ranking the highest worldwide [1]. Continuous technical improvement of blood purification has improved the quality of patients' lives and prolonged their survival time. However, compared with the general population, patients on hemodialysis have a 70% shorter life expectancy and a high mortality rate [3]. Therefore, investigating the epidemiological characteristics of deceased patients and the factors influencing death, and taking appropriate measures to improve daily clinical practice to minimize the risk of death, will be beneficial for survival and prognosis.
The present study retrospectively analyzed the death status of patients on MHD and aimed to clarify the characteristics of death among this population during the last 7 years in Beijing, to provide useful information for clinical practice.
Methods
Data for this study were obtained from the Beijing Blood Purification Quality Control and Improvement Center (BJBPQCIC), which manages all hemodialysis centers except the 10 military dialysis centers in Beijing. Since 2007, the BJBPQCIC has used an electronic data collection system to collect patient-level data. To improve data integrity, the platform generates an integrity degree (%) for each data variable, which is fed back to each center during the annual quality control inspection. The centers are urged to improve the integrity of data in the next year. Data integrity has therefore continuously improved, and the degree of data integrity for the variables involved in this paper was above 90% in 2020.
Study Population
All patients on MHD who were registered in the BJBPQCIC database from 1st January 2014 to 31st December 2020 were included in our analysis. The inclusion criteria were patients aged ≥ 18 years of either sex with a dialysis vintage > 90 days. Exclusion criteria were lack of important data, including date of birth, date of first dialysis, outcome (death, peritoneal dialysis, kidney transplantation, transfer, renal function recovery, withdrawal), or date of outcome.
Study methods
This was a cross-sectional study. Demographic and clinical data were collected from the BJBPQCIC database, including date of birth, sex, primary cause of ESRD, date of first dialysis, outcome and date of outcome, cause of death, and comorbidities. Data collection, collation, and analysis were performed by dedicated personnel.
The annual mortality rate was calculated as the annual number of deceased patients on MHD divided by the annual number of patients on MHD (person-years).
The dialysis vintage of deceased patients (years) refers to the interval between the date of first dialysis and the date of death.
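A minimal sketch of these two definitions (all input values below are placeholders, not data from the registry):

```python
# Sketch of the two definitions above; every input value is a placeholder.
from datetime import date

def annual_mortality_rate(deaths, person_years):
    """Annual deceased patients on MHD / patients on MHD (person-years)."""
    return deaths / person_years

def dialysis_vintage_years(first_dialysis, death):
    """Interval between date of first dialysis and date of death, in years."""
    return (death - first_dialysis).days / 365.25

print(f"{annual_mortality_rate(866, 11000):.1%}")                                 # -> 7.9%
print(f"{dialysis_vintage_years(date(2014, 3, 1), date(2018, 1, 15)):.1f} years") # -> ~3.9
```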
Normally distributed data were expressed as mean ± standard deviation, and non-normally distributed data were expressed as median (quartiles) (M [P25-P75]). Categorical variables were expressed as frequency (percentage). Dialysis vintages between the two groups (with or without diabetes) were compared by the rank sum test. All analyses were conducted using SPSS 26.0 software (IBM Corp., USA).
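For readers working outside SPSS, an equivalent rank sum comparison might look like the following sketch (a standard Mann-Whitney U test via scipy; the sample values are placeholders, not study data):

```python
# Sketch: rank sum (Mann-Whitney U) comparison of dialysis vintages
# between patients with and without diabetes; all values are hypothetical.
from scipy.stats import mannwhitneyu

vintage_diabetes = [3.4, 1.7, 5.9, 2.8, 4.0]       # years, placeholder
vintage_no_diabetes = [4.1, 1.8, 8.3, 6.5, 10.2]   # years, placeholder

stat, p = mannwhitneyu(vintage_diabetes, vintage_no_diabetes, alternative="two-sided")
print(f"U = {stat:.1f}, P = {p:.3f}")
```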
Results
In total, 29,183 patients were registered in the BJBPQCIC database between 2014 and 2020. Based on our inclusion and exclusion criteria, we eventually included 24,363 patients undergoing MHD (Fig. 1).
Demographic data and the Annual Mortality Rate
There were 24,363 patients on MHD, with a male-to-female ratio of 1.41:1. Of these, 6,065 patients died, and the annual mortality rate fluctuated between 7.4% and 8.0%. Among the deceased patients, the median age of death was 70.0 (60.8-79.0) years, patients aged ≥ 65 years accounted for 63.9%, and the male-to-female ratio was 1.27:1, which remained relatively stable each year (Table 1).
Comparison of the annual mortality rate according to the top three primary causes of ESRD showed that DN > HN > CGN (Fig. 3).
Dialysis Vintage
The median dialysis vintage of deceased patients was 3.7 (1.8-6.9) years; 38.2% had a dialysis vintage ≥ 5 years, and 12.3% had a dialysis vintage ≥ 10 years. The proportions of patients with dialysis vintages of < 1 year and 1-3 years decreased annually, whereas those with dialysis vintages of 5-10 years and more than 10 years increased annually. The annual median dialysis vintage showed a slow increase over time (Table 2).
The median dialysis vintage was 3.4 (1.7-5.9) years in patients with diabetes and 4.1 (1.8-8.3) years in patients without diabetes (Z = 8.3, P < 0.001).After further grouping by dialysis vintage, we found that the proportion of patients with a dialysis vintage ≥ 10 years was significantly lower among patients with diabetes (6.2%) than patients without diabetes (18.0%) (Fig. 4).
The proportions of each cause of death annually from 2014 to 2020 are shown in Fig. 6. The top cause of death each year fluctuated among cardiovascular disease, sudden death, and infection.
The proportions of death from cardiovascular disease, sudden death, and infection were higher among patients with diabetes than patients without diabetes (Fig. 7).
The leading cause of death in the three groups of patients with CGN, DN, or HN as the primary cause of ESRD was cardiovascular disease.The proportions of death from cardiovascular disease, sudden death, and infection were higher among patients with DN than patients with CGN and HN (Fig. 8).
The top three causes of death differed among different age groups, and the proportions of death from sudden death and cerebrovascular disease were higher in young (18-44 years) and middle-aged (45-64 years) patients than older patients (≥ 65 years).Sudden death was the leading cause of death in young and middle-aged patients.The proportions of death from cardiovascular disease and infection were higher in older patients (≥ 65 years).Cardiovascular disease was the leading cause of death in the group aged 65-74 years, and infection was the leading cause of death in patients aged ≥ 75 years (Fig. 9).
Discussion
This study retrospectively analyzed the death status of patients on MHD in Beijing from 2014 to 2020. We found that the annual mortality rate was relatively stable. The dialysis vintage of deceased patients increased annually, and the main causes of death were cardiovascular disease, sudden death, infection, and cerebrovascular disease. The proportions of death from sudden death and infection among all deceased patients increased significantly compared with those in the previous period (2007-2011). Sudden death was the leading cause of death in young and middle-aged patients, while more patients aged ≥ 75 years died from infections. Diabetes, whether as a comorbidity or as the primary cause of ESRD, was associated with high mortality and a short survival time.
This study showed that the annual mortality rate of patients on MHD in Beijing from 2014 to 2020 ranged from 7.4% to 8.0% and remained basically stable compared with the previously reported rate of 7.4-9.0% for 2007-2010 [4]. This was close to previously reported rates for Wuhan (10.3-7.1%) [5] and Shanghai (4.6-8.4%) [6] in China and in Japan (9.6-10.0%, 2014-2018) [7]. It was much lower than the rate reported for the United States (16.7-15.9%, 2014-2019) [8]. Even compared with the mortality rate of Asian Americans (13.2-13.1%) [8], the rate in our study still showed a significant survival advantage. This result was consistent with a previous study focused on the difference in mortality rate between patients on MHD in Beijing and the United States, which suggested that the survival advantage of Beijing patients may be related to race and the clinical practice pattern [4]. Kawaguchi et al. investigated the relationship between the frequency and duration of patient-doctor contact (PDC) and clinical outcomes during hemodialysis treatment [9]. The results showed that more frequent and longer PDC was inversely associated with all-cause mortality and the rate of first hospitalization, and that the role of physicians in the team of healthcare professionals was critical for improving the quality of hemodialysis [9]. In Beijing, doctors in the dialysis center are relatively permanent and have frequent contact with patients. According to our experience, the PDC frequency is at least 8-12 times/month, which is much higher than the 4 times/month reported by Kawaguchi et al. [9]. Whether formulating and adjusting dialysis prescriptions before and after dialysis, or making ward rounds to see patients during dialysis, doctors in Beijing remain actively involved. This clinical practice pattern is more refined and individualized, which is conducive to timely detection and treatment of situations that may endanger patients' lives. In addition, there is a broad consensus that differences in hemodialysis vascular access use are a significant part of mortality differences across countries [10]. Although our study did not analyze the current situation of vascular access, a previous report showed that the utilization rate of arteriovenous fistula in patients on MHD in Beijing was 87.2% [11], which was lower than the proportion in Japan (95%) [12] but higher than that in the United States (63.9-65.6%) [8]. Therefore, this could also be a reason for the above-mentioned difference in mortality rates.
It should be noted that one cause of death in the United States Renal Data System was withdrawal, which accounted for approximately 15% of total deaths [8]. There was no related death record in our database for deceased patients who had withdrawn from dialysis; therefore, our study might have underestimated the overall mortality rate. However, we believe that the number of patients who withdrew from hemodialysis was small because ESRD has been covered by medical insurance in Beijing since 2004, and the government provides economic security for patients. Therefore, few patients on MHD in Beijing quit treatment for economic reasons. In this study, 101 patients withdrew from hemodialysis for economic reasons, but we did not know whether they died. Even if they were all assumed to have died and were included in the deceased patient group, they would only account for 1.6% of total deaths. In addition, few patients quit dialysis because of serious illness. Therefore, this confounding factor had only a slight effect on the actual mortality rate in our study.
The differences in mortality rates among patients on MHD in different countries or regions are also influenced by factors such as age, primary cause of ESRD, and comorbidities. For the period from 2014 to 2018, patients on hemodialysis in Beijing and the United States were younger (33.6-40.4% and 45.5-47.0% aged ≥ 65 years, respectively [8]) than those in Japan (63.8-67.9% aged ≥ 65 years [7]); regarding the primary cause of ESRD, the proportion of patients on dialysis with DN was substantially lower in Beijing (14.8-17.6%) than in the United States (45.5-47.0%) [8] and Japan (38.1-39.0%) [7]. Although comorbidities are also an important factor affecting survival time, we were unable to compare the comorbidities of our patients on MHD with those in Japan and the United States because of incomplete information in our database. These differences in population characteristics will affect the comparison of mortality rates between regions.
Fig. 8 Causes of death among patients on maintenance hemodialysis in Beijing who died from 2014 to 2020 (%), grouped by primary cause of end-stage renal disease (CGN n = 964, DN n = 929, HN n = 766). CGN: chronic glomerulonephritis; DN: diabetic nephropathy; HN: hypertensive nephropathy

Cardiovascular disease has long been the leading cause of death in patients on dialysis, and this study showed the same result (20.2%). This proportion was decreased compared with previous data for Beijing (2007-2011), from 25.4% to 20.2% [13]. However, the proportions of death from infection and sudden death both increased, with infection increasing from 13.1% to 17.9% and sudden death significantly increasing from 7.4% to 18.1%. The proportion of sudden death in our study is similar to that reported in Sichuan Province [14] and Shanxi Province [15] in China, but is higher than that in Japan [7] and lower than that in the United States [8]. In our study, 1,095 cases of sudden death were reported, including 858 cases of sudden cardiac death (78.4%). Sudden death was the leading cause of death in young (18-44 years) and middle-aged (45-64 years) patients, and its proportion was significantly higher than that in older patients (≥ 65 years); it was most obvious in young patients (27%). A reason for this finding may be related to diabetes. Diabetes is an independent predictor of sudden cardiac death in patients on hemodialysis [16]. Among the deceased patients in our study, sudden death accounted for a higher proportion of patients with diabetes (20.0%) than patients without diabetes (16.3%). Moreover, the prevalence of diabetes was higher in young and middle-aged patients (53.2%) than in older patients (45.6%). Another reason may be that cerebrovascular disease also accounted for a high proportion of death in young and middle-aged patients in the present study, with the majority having cerebral hemorrhage (88.5%), which was also an important cause of sudden death. Therefore, it was possible that some unexpected deaths in young and middle-aged patients caused by cerebrovascular accidents were reported as "sudden death" or "sudden cardiac death" because of a lack of autopsy confirmation. In a previous autopsy study involving 35 Japanese patients on hemodialysis with sudden death [17], there were 9 cases of stroke, which accounted for 25.7% of patients and was basically equivalent to the proportion of cardiovascular disease (10 cases, 28.6%). In addition, 125 sudden deaths in our study were caused by hyperkalemia, of which more than half (52.8%) were young and middle-aged patients. It was also observed in our clinical practice that this population was more likely to have interdialytic volume overload and hyperkalemia. Hyperkalemia causes cardiac depression; volume overload leads to a high ultrafiltration rate during dialysis, which is prone to cause hypotension, ischemia of other end-organs, and syncope; meanwhile, abrupt fluctuations in electrolytes (e.g., potassium, calcium) occur during dialysis. All these factors increase the risk of sudden cardiac death [18]. It should also be mentioned that the proportion of sudden deaths in 2020 was higher than the proportions in 2019 and previous years. The reasons may be related to the COVID-19 epidemic: from the end of 2019 to the beginning of 2020, COVID-19 began to erupt in Wuhan, China. Because of the implementation of strict control measures and cooperation in Beijing, there was a very small number of individuals infected with COVID-19 in 2020 (982 infected individuals and 9 deaths). However,
Beijing experienced a SARS epidemic in 2003 that left a deep impression on the people of Beijing. Therefore, when the COVID-19 epidemic began, many people in Beijing rarely went out because of fear of contracting the virus, and many patients with CKD who were being followed up on a monthly basis did not come to the outpatient clinic for a long time (even if this resulted in medication discontinuation). Even patients on regular dialysis reduced their frequency of seeking medical treatment, which meant that many people tolerated symptoms without seeking medical help. This may have led to delays in the diagnosis and treatment of some life-threatening diseases, especially cardiovascular diseases, leading to an increased probability of sudden death among patients on dialysis.
Diabetes increases systemic vascular inflammation. Patients on MHD with diabetes have a higher risk for cardiovascular disease [19] and a higher relative risk for death [20] compared with patients on MHD without diabetes. Our study showed that patients on MHD with DN had the highest annual mortality rate, which was consistent with the results of other domestic studies [15]. In addition, the proportions of death from cardiovascular disease, sudden death, and infection were higher in patients with diabetes than in patients without diabetes. Since 2007, the number of patients on hemodialysis with diabetes in Beijing has increased annually, and diabetes has become the leading cause of incident hemodialysis [21]. This is also an important reason for the increase in the proportion of death from infection compared with that in the previous period. In our study, the proportion of deceased patients on MHD with diabetes who had a dialysis vintage ≥ 10 years was only 6.2%, which was significantly lower than that of patients without diabetes (18.0%). These findings all suggest high mortality and short survival in patients on MHD with diabetes.
China is currently experiencing an aging society, and more than half of the patients with ESRD starting dialysis are middle-aged and older [1]. In recent years in Beijing, the proportion of new patients on hemodialysis aged ≥ 60 years has increased annually [21]. In our study, the median age of death was 70.0 (60.7-78.9) years. With increasing age, the functions of various organs decline and the ability to resist disease is reduced; thus, the incidence of death from infection in older patients is often high. Our study showed that 24.5% of deceased patients aged over 75 years died of infection, which was the leading cause of death in this age group, and pneumonia accounted for 73.6%. However, even with the effect of aging, the dialysis vintage of deceased patients still slowly increased annually (from 3.4 to 4.0 years). The proportion of deceased patients with a dialysis vintage ≥ 5 years increased compared with that reported for 2007-2010 (38.2% vs. 30.2%) [13]. This may be related to the annual improvements in the quality of hemodialysis in Beijing.
The present study had some limitations. First, there were missing values. This study is representative if the deletions were random but may be biased if the deletions were nonrandom. Second, this was a retrospective study; the cause of death was self-reported by each hemodialysis center, and there may be errors in the judgment of the cause of death.
Conclusion
The results of our study have good guiding significance for clinical practice. Comprehensive management should be performed for patients on MHD with diabetes, including life guidance, specialist care, and prevention and control of cardiovascular disease and infections. In older patients, prevention and control of infection, especially respiratory infection, should receive more attention to further reduce mortality. Sudden death accounted for a high proportion of deaths in young and middle-aged patients, suggesting that management of this population needs to be more detailed and individualized, such as good control of blood pressure and blood sugar, and correction of abnormalities of calcium, phosphorus, and parathyroid hormone to reduce the occurrence and progression of vascular calcification. It is also necessary to strengthen publicity and education regarding weight and diet control for patients between dialysis sessions. Multiple factors should be considered when formulating and adapting patients' dialysis regimens, including their comorbidities, symptoms, underlying residual kidney function, and lifestyle patterns, to reduce the risk of sudden death in this population.
Fig. 3 Annual mortality rate of patients on maintenance hemodialysis in Beijing from 2014 to 2020 by the top three primary causes of end-stage renal disease (%). CGN: chronic glomerulonephritis; DN: diabetic nephropathy; HN: hypertensive nephropathy
Fig. 6 Proportion of causes of death in each year among patients on maintenance hemodialysis in Beijing who died from 2014 to 2020 (%)
Fig. 7 Causes of death among patients with or without diabetes on maintenance hemodialysis in Beijing who died from 2014 to 2020 (%) (N = 6,065: patients without diabetes n = 3,132, patients with diabetes n = 2,933)
Table 2
Number of patients grouped by dialysis vintage among patients on MHD in Beijing who died from 2014 to 2020 | 2023-08-15T13:36:35.051Z | 2023-08-15T00:00:00.000 | {
"year": 2023,
"sha1": "084ceeb340d4e080b95c000f1bb500320fbd263a",
"oa_license": "CCBY",
"oa_url": "https://bmcnephrol.biomedcentral.com/counter/pdf/10.1186/s12882-023-03271-6",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8e90d79adb5c2dffc57d13232cba55a8fedcfe03",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
14022613 | pes2o/s2orc | v3-fos-license | The action of ovarian hormones in cardiovascular disease
The incidence of cardiovascular disease (CAD) differs between men and women, in part because of differences in risk factors and hormones. This sexual dimorphism means a lower incidence of atherosclerotic disease in premenopausal women, which subsequently rises in postmenopausal women to eventually equal that of men. These observations point towards estrogen and progesterone playing a lifetime protective role against CAD in women. As exogenous estrogen and estrogen plus progesterone preparations produce significant reductions in low-density lipoprotein (LDL) cholesterol levels and significant increases in high-density lipoprotein (HDL) cholesterol, this should in theory lower the risk of CAD. However, results from oral contraceptive (OC) use and combined estrogen and progesterone hormone replacement therapy (HRT) have suggested that hormone replacement regimes do not provide cardiovascular protection. In fact, depending on the preparation and the presence or absence of genetic risk factors, an increased risk of cardiovascular diseases such as venous thrombosis, myocardial infarction (MI) and stroke has been observed. Interestingly, in the majority of studies the increase in risk was highest in the first year, after which an increase in risk was not observed, and in some studies a lower risk of CAD was evident after four or five years of exogenous hormone administration. While the debate continues about the merits of HRT, and several good reviews exist on the statistics of CAD in relation to exogenous hormones, we have decided to review the literature to piece together the physiological actions of estrogen and progesterone preparations on the individual mechanistic components leading to CAD; namely, the altered endothelium and the haemostatic balance between coagulation and fibrinolysis. We present possible mechanisms for how HRT and OCs protect against MI in the absence of cardiovascular risk factors but increase the incidence of MI in their presence. We also speculate on the roles played by hormones on the short- and long-term risks of cardiovascular disease. Key terms: hormone replacement therapy (HRT), oral contraceptives, atherosclerosis, cardiovascular disease, estrogen, progestins, venous thrombosis, myocardial infarction.
it also has direct effects on the myocardium, endothelium, and vascular smooth muscle (VSM). Estrogens enhance flow of cholesterol from the diet through chylomicrons and chylomicron remnants to the liver, through very low-density lipoprotein (VLDL) and low-density lipoprotein (LDL) to cells, and through reverse cholesterol transport from cells via high-density lipoprotein (HDL) to the liver to be finally eliminated in the bile and intestine (Knopp, 1997). Estrogen increases levels of VLDL and subsequently the levels of triglycerides, decreases LDL levels due to the up-regulation of LDL receptors, and increases HDL due to increased secretion of apoA-I and reduced removal of HDL due to a reduction in hepatic lipase activity (Knopp, 1997; Espeland et al., 1998; Zhu et al., 1999; Mendelsohn & Karas, 1999; Table I).
Estrogen is widely regarded as having beneficial effects on the three layers of the arterial wall: the intima (endothelium), the media, and the adventitia. These beneficial effects include reduction in plasma fibrinogen and plasminogen activator inhibitor (PAI-1) activity, reduced LDL oxidation in plasma, enhanced glucose metabolism, and enhanced insulin sensitivity (Knopp, 1997; Espeland et al., 1998; Mendelsohn & Karas, 1999; Cushman et al., 1999; Sack et al., 1994; Table I). In the arterial endothelium, estrogen increases nitric oxide (NO) synthase activity and NO production (Hishikawa et al., 1995; Caulin-Glaser et al., 1997; Table I). NO is beneficial to arterial vasomotion in women who have angina pectoris due to vasospasm (Guetta and Cannon, 1996). In the intima and media of the arterial wall, estrogen reduces calcification and secretion of inflammatory cytokines, such as fibroblast growth factor (FGF), intercellular adhesion molecule (ICAM-1), vascular cell adhesion molecule (VCAM-1), endothelial- and platelet-selectin (E- and P-selectin), and nuclear factor kappa B (NFκB), and consequently reduces the atherosclerosis which is associated with release of these cytokines in animal models (Knopp, 1997; Guetta and Cannon, 1996; Adams et al., 1990; Haarbo et al., 1991; Table I).
Conflicting results have been reported on the effects of progestins on atherosclerosis. Several studies demonstrate no effect in both primate and cholesterol-fed rabbit models (Adams et al., 1990; Haarbo et al., 1991). However, other studies have shown that natural progesterone and synthetic progestins oppose the beneficial effects of estrogen (Hanke et al., 1996; Adams et al., 1997). Progestins have been reported to oppose the estrogen-induced increase in plasma NO metabolites (Imthurn et al., 1997), which indicates that progestins inhibit NO production in endothelial cells and is further evidence of the proatherogenic effects of progestins in the presence of estrogen. Progestins appear to reduce the stimulatory effect of estrogens on lipoprotein transport in the bloodstream. For example, VLDL secretion is reduced; remnant removal is impaired; LDL receptors are down-regulated, increasing LDL-cholesterol levels; and HDL levels are reduced in response to increased hepatic lipase activity. Progestins also increase glucose, insulin and fibrinogen plasma levels (Knopp, 1997).
The estrogen receptor (ER) is classically a ligand-dependent nuclear transcription factor. However, recent evidence indicates that non-nuclear liganded ER can also regulate the activity of intracellular second messengers and membrane-associated receptors and signaling complexes (Ho & Liao, 2002). In the cardiovascular system, these non-nuclear signaling pathways mediate rapid vasodilatation (White et al., 1995), inhibition of the response to vessel injury (White et al., 1997; Sullivan et al., 1995), reduction in myocardial injury after infarction (Node et al., 1997), and attenuation of cardiac hypertrophy (Douglas et al., 1998). The importance of estrogen in the cardiovascular system has been elucidated from ER knockout and mutation studies. In a single case study, a young man presented with homozygous disruption of the ERα gene, which resulted in the expression of a truncated receptor lacking both DNA- and hormone-binding domains. This patient developed premature CAD and impaired brachial endothelium-dependent vasodilatation, providing further evidence for a developmental and protective role for estrogen in the heart and protection from CAD (Sudhir et al., 1997a; Sudhir et al., 1997b). Studies in ovariectomized mice show that 17ß-estradiol inhibits intimal and medial vascular smooth muscle proliferation (Sullivan et al., 1995), implying a protective role for estrogen on both endothelial cells (EC) and vascular smooth muscle cells (VSMC). In ER knockout mice, 17ß-estradiol also inhibits medial thickening and VSMC proliferation in carotid injury studies (Iafrati et al., 1997). This suggests that the vascular protection produced by estrogen may be mediated in an ERα-independent manner. Furthermore, when hearts from these mice are subjected to ischemia-reperfusion they present a greater degree of global ischemia and a higher incidence of arrhythmias (Zhai et al., 2000). Studies which show that ovariectomized ERαKO mice exposed to cerebral ischemia have strokes which affect a greater area of the brain also demonstrate that ERα mediates the neuroprotective effects of estrogen (Dubal et al., 2001). Evidence is also mounting that ERβ has a role to play in the cardiovascular system. ERß expression is induced in VSMC after vascular injury (Linder et al., 1998), and ERβKO mice are hypertensive and have VSMC ion channel dysfunction (Zhu et al., 2002). [...] (Bloemenkamp et al., 2003). The risk for venous thrombosis is highest during the first year of OC use. Furthermore, OC users with inherited clotting defects develop venous thrombosis, not only more often, but also sooner, than do those without inherited clotting defects (Bloemenkamp et al., 2000). The nature of the exogenous hormone regime is also a risk factor (WHO, 1995). While the concentrations of estrogen have been declining since the first OC usage in the late 1950s, from values as high as 100 ug mestranol to 15 ug ethinyl estradiol, the progestin dosage has remained relatively constant. The World Health Organization (WHO) concluded that although current users of estrogen and progesterone combined contraceptives have a low absolute risk of venous thromboembolism, their risk is still three to six times greater than that of nonusers, with the risk probably being highest during the first year of use (WHO, 1997). Although conflicting reports exist, the increase in venous thrombosis risk remains constant despite changes in the nature and dosage of estrogen. The progestin
component, essential to suppress ovulation, has changed only in composition from first generation to third generation. As with estrogenic compounds, each progestin has been associated with an increased risk of CAD. This risk has been demonstrated to be greater for the newer third-generation than for second-generation progestins (Farmer & Lawrenson, 1998).
Venous disease
The picture for CAD is no brighter in women taking HRT preparations. Unopposed estrogen treatment, while delivering beneficial relief from postmenopausal symptoms, also results in endometrial disorders such as endometriosis and cancer (Smith et al., 1975; Berger & Fowler, 1997). Estrogen replacement therapy is still given to women who do not have a uterus, and medroxyprogesterone acetate (MPA) is added to the majority of current HRT preparations to counteract the endometrial abnormalities which arise from unopposed estrogen administration. In a trial of oral conjugated equine estrogen plus MPA, no overall reduction in CAD events was observed in postmenopausal women with established coronary disease. However, the treatment did lead to an increase in the rate of thromboembolic events and gallbladder disease (Hulley et al., 1998). As with OC, the intriguing observation was made that the risk of thrombotic events was higher in the first year of use, and there was a suggestion that the risk decreased to below control (placebo) levels after four to five years of treatment (reviewed in Rosendaal et al., 2002). This emerging pattern for exogenous hormone preparations indicates that there is a decreased risk of arterial disease while at the same time an increased risk of venous thrombosis. Whether a common mechanism of coagulation and inflammation contributes to both responses is unclear. Furthermore, there are two separate phases in venous thrombosis: a significant increase in risk in the short term, with a plausible reduction in risk associated with long-term use.
Arterial disease
Atherosclerosis is a progressive disease which is characterized by the accumulation of lipids and fibrous elements in the walls of large arteries and constitutes the most important factor in the growing incidence of CAD. Several risk factors, such as cigarette smoking, diabetes, hypertension and elevated serum lipid concentration, have been shown to increase the incidence and accelerate the progression of the disease (Multiple risk factor intervention trial, 1982; Castelli, 1996). Atherosclerosis is responsible for MI and stroke, as well as for their respective precursor disorders, angina pectoris and transient ischemic cerebral attacks (Born et al., 1991). Atherosclerosis is a focal intimal disease of arteries from the aorta down to vessels of 3 mm external diameter. However, not all arteries are equally susceptible; the internal mammary artery is mostly unaffected while coronary arteries are at high risk (Davies and Woolf, 1993). The increased risk of MI and ischemic and hemorrhagic stroke associated with oral contraceptive usage appears elevated only in women with hypertension or who smoke, the risk being negligible in the absence of these risk factors.
Clinical symptoms of atherosclerosis depend on four mechanisms: 1) lipid accumulation and connective tissue matrix production can increase plaque volume so that it encroaches on the lumen and impedes blood flow. 2) A plaque can enter an unstable phase and fissure, which leads to thrombus formation. The thrombus can encroach on or occlude the lumen or, alternatively, embolize, impact and occlude a smaller distal vessel. 3) Although atherosclerosis is a focal disease, it is associated with a generalized abnormality in vascular tone in affected vessels which favors vasoconstriction, especially during stress and exercise. 4) Medial atrophy and destruction can lead to aneurysm formation (Davies and Woolf, 1993).
Many hypotheses of atherogenesis have been proposed. These tend not to be mutually exclusive and differ more in the emphasis given to particular events than in opposing points of view. It is well accepted that lipid accumulation in the arterial wall, caused by hyperlipidemia, is the initial step. Recently, advances in cellular and molecular biology have focused on the role of inflammation in atherogenesis (Figure 1). In many animal models, signs of inflammation are observed simultaneously with lipid accumulation. In both experimental animals and humans, blood leukocytes, mediators of the immune response and inflammation, attach to the endothelial cells that line the intima. The normal endothelium does not support the binding of white blood cells. However, soon after the start of an atherogenic diet, endothelial cells begin to express on their surface adhesion molecules capable of binding leukocytes (Huo & Ley, 2001). Among these adhesion molecules, VCAM-1 and ICAM-1 bind monocytes and T lymphocytes found in early atheroma (Steps 1 & 2, figure 1) (Libby et al., 2002a, b). Increased VCAM-1 expression is also localized to sites prone to atherogenesis, such as branch points in arteries where endothelial cells are subject to disturbed blood flow (Topper et al., 1996). Serum levels of soluble P-selectin and ICAM-1 are also elevated in patients with peripheral atherosclerotic disease (Huo and Ley, 2001). Abnormal laminar shear stress reduces NO production, which in turn increases VCAM-1 expression (De Caterina et al., 1995). Progestins inhibit NO production (Imthurn et al., 1997). It is widely reported that estrogen replacement therapy decreases VCAM-1 expression, thereby providing a potentially protective mechanism against early atherogenic processes (Nathan & Chaudhuri, 1997; Seljeflot et al.; Table I). An increase in reactive oxygen species (ROS) in the endothelium, intima and adventitia also plays a major role in the deposition of LDL. Risk factors, such as hyperhomocysteinemia and smoking, increase the oxidation of LDL (oxLDL) and thus deposition through expression of scavenger receptors in the arterial wall (Harrison et al., 2003). Increased shear stress may also induce formation by VSMC of proteoglycans that bind lipoprotein particles, facilitating their oxidative modification and further inducing an inflammatory response (Lee et al., 2001) (Step 3, figure 1). Once the leukocytes have adhered to the endothelium and an inflammatory response is initiated, monocytes penetrate the intima in response to monocyte chemoattractant protein-1 (MCP-1) (Gu et al., 1998) (Steps 4 & 5, figure 1). Once in the arterial wall, the monocytes differentiate into macrophages in response to macrophage colony-stimulating factor (M-CSF) (Steps 4 & 5, figure 1) (Qiao et al., 1997). Estrogen decreases the expression of MCP-1 and M-CSF, thereby decreasing monocyte adhesion to endothelial cells and monocyte migration into the subendothelial space (Nathan & Chaudhuri, 1997). Monocyte adhesion and migration produce a localized inflammatory response in which tumor necrosis factor beta (TNFß) and interferon gamma (INFγ) are released by macrophages (Step 6, figure 1). During this stage, expression of C-reactive protein (CRP), a marker of inflammation, is also increased. Combined estrogen and progesterone therapy increases CRP expression and therefore may further increase the damage induced by the inflammatory response (van Baal et al., 1999b, c; Skouby et al.,
2002). These macrophages also have increased expression of the scavenger receptor A and CD36, which internalize modified lipoproteins (minimally modified and oxLDL), accumulating cholesteryl esters in the form of cytoplasmic droplets, leading to the foam cell formation which characterizes the fatty streak, a hallmark of the early atherosclerotic lesion (Steps 7 & 8, figure 1) (Libby, 2002a, b). Estrogen also decreases LDL oxidation, thereby preventing foam cell formation and lesion progression (Crook, 2001; Tedeschi-Reiner & Reiner, 2001; Table I).
Growth factors, inflammatory mediators and proteolytic enzymes released by foam cells induce smooth muscle cell replication; these cells accumulate in the plaque and lay down extracellular matrix, transforming the fatty streaks into a complicated atheroma. As the lesion grows, it narrows the arterial lumen, interfering with blood flow and causing clinical symptoms, such as angina pectoris or acute MI (Steps 9-11, figure 1).
This apparently smooth progression of plaque growth is frequently marked by bursts in growth of the atheroma (Yokoya et al., 1999).There is evidence to suggest that physical disruption of plaques may trigger thrombosis and promote sudden expansion of the atheroma (Davies, 1996).The most common mechanism of plaque disruption is a fracture of the plaque's fibrous cap through the elaboration of proteases, such as the matrix-degrading neutral metalloproteinases.This cap serves to separate the thrombogenic lipid-rich core of the atheroma from the bloodstream, which contains coagulation factors.Fissure of the fibrous cap allows contact between coagulation factors and tissue factor (TF), the main pro-thrombotic element in the lipid core, causing blood coagulation.Subsequently, platelets become activated by thrombin, generated by the coagulation cascade and by contact with the intimal compartment, causing thrombus formation.If the thrombus occludes the artery, it can cause an acute MI (Libby, 2002a, b).
HORMONES AND THE COAGULATION PATHWAY
The intrinsic coagulation pathway starts with injury to the vessel wall, with exposure of sub-endothelial collagen. The first step in this pathway is the conversion of Factor XII to Factor XIIa. The presence of OCs increases the concentration of Factor XII (Gordon et al., 1983). Factor XIIa promotes the activation of Factor XI to Factor XIa, which in turn activates Factor IX. Factor IX has been shown to be increased in women taking HRT (Lowe et al., 2001). HRT preparations have also been associated with a decrease in plasma levels of Factor VIII (Acs et al., 2002). The combination of Factor IXa and Factor VIII promotes the activation of Factor X. On the other side, the extrinsic pathway is initiated by the exposure of Tissue Factor (TF) to circulating Factor VII. TF has been extensively reported to be regulated by exogenous ovarian hormone treatments. Holschermann et al. (1999) reported an increase in TF expression by blood monocytes in the presence of OCs that may favor intravascular clotting activation. Furthermore, several HRT preparations have been demonstrated to lower TF levels in the endothelium, but at the same time increase TF/Factor VIIa activity (Koh et al., 2001). The circulating levels of Factor VII are also altered in response to HRT. In short-term studies (up to one year), cyclic regimes of estrogen and progesterone demonstrate that unopposed estrogen increases Factor VII concentration, an effect which was reversed upon addition of progesterone (Bladbjerg et al., 2002; Lowe et al., 2001; Cushman et al., 1999). Other reports have observed a decrease in Factor VII levels after six weeks of combined HRT treatment (Peverill et al., 2001), while others have shown a differing increase in Factor VII depending on the generation of progestin used in OCs (Kluft, 2000). The active complex of TF/Factor VIIa is inhibited by tissue factor pathway inhibitor (TFPI) (Badimon et al., 1999). This protein is down-regulated by estrogen and by HRT and OC combinations (Bladbjerg et al., 2002; Peverill et al., 2001; Luyer et al., 2001; Harris et al., 1999), leading to a hypercoagulable state. Proteolytic cleavage resulting in the formation of Factor VIIa leads, in collaboration with TF, to the activation of Factor X (Factor Xa). This conversion point of the intrinsic and extrinsic pathways promotes the conversion of the soluble blood protein prothrombin to thrombin. Factor Va also plays a regulatory role in this process. The presence of a mutation in Factor V, known as Factor V Leiden, greatly increases the risk of venous thrombosis associated with OCs, by 30-50 times (Bauer, 2002; Vandenbroucke et al., 2001). Increases in circulating levels of prothrombin are observed with unopposed estrogen and HRT treatment (Vahkavaara et al., 2001; Peverill et al., 2001, respectively). The net results of these regulations have led to reports of increased levels of thrombin in the presence of estrogen and HRT (Norris et al., 2002; Vahkavaara et al., 2001; Peverill et al., 2001). The amino acid fragment (F1+2), which is generated during prothrombin activation and thus provides a good indication of thrombin levels, was shown to be increased in the presence of HRT, and these levels were higher in women who subsequently developed recurrent venous thrombosis (Hoibraaten et al., 2001; Cano & Van Baal, 2001). Thrombin has several roles: firstly, to promote the formation of fibrin from fibrinogen; and secondly, to form a complex with thrombomodulin. This complex can also down-regulate the circulating levels of thrombin (dashed line with negative
sign in Figure 2). In contrast to the above-mentioned hormonal regulation of the coagulation cascades, combined estrogen and progestin treatments display anticoagulant activity by lowering the circulating levels of the fibrin precursor fibrinogen (Acs et al., 2002; Cushman et al., 1999; van Baal et al., 1999b; Norris et al., 2002; Table I). HRT has also been reported to increase fibrin turnover (Sidelmann et al., 2003). Finally, in clot formation, Factor XIIIa, a transglutaminase, crosslinks the fibrin monomers. The degradation of the clot in the processes leading to fibrinolysis is regulated by sex steroid hormones (Table I). As mentioned above, thrombin forms a complex with thrombomodulin which in turn activates Protein C. Reports are contradictory on the effect of hormones on the levels of activated protein C (APC). Some reports demonstrate higher levels in the presence of HRT (Lowe et al., 2001), while others show a reduction (Hoibraaten et al., 2000; Lowe et al., 2001). APC inactivates Factor Va, Factor VIIIa (by proteolytic cleavage) and PAI-1. Interestingly, the risk factor Factor V Leiden has diminished sensitivity to APC, favoring a coagulation state. In accordance with this effect, elevated Factor VIII levels are associated with an increased risk of deep vein thrombosis (Koster et al., 1995; Kamphuisen et al., 2001). A further anticoagulant action of hormones is demonstrated by both unopposed estrogen (Vahkavaara et al., 2001) and HRT reducing circulating levels of PAI-1 (Lowe et al., 2001; Koh, 2002; Cushman et al., 1999; Table I). PAI-1 inactivates tissue plasminogen activator (t-PA) and urokinase-type plasminogen activator (u-PA). The conversion of plasminogen to plasmin, which participates in fibrinolysis, is regulated by both u-PA and t-PA. Interestingly, at this stage, hormones create a balance between promoting and reducing plasmin levels. Levels of plasminogen are increased by both estrogen (Luyer et al., 2001) and HRT (Acs et al., 2002), while t-PA levels have been reported to be reduced by estrogen (Vahkavaara et al., 2001) and HRT (Lowe et al., 2001; Koh, 2002). However, Hoetzer et al. (2003) reported that estrogen could increase t-PA levels by 30% compared to controls, while the addition of progesterone, in HRT, reversed this effect. As it is clear that hormones have a strong procoagulative role (in the generation of thrombin) and that there exists a fine hormonal balance in the promotion or inhibition of fibrinolysis, depending on the regime and dosage, hormones may increase or decrease the coagulative potential and thus impose either positive or negative effects on cardiovascular disease.
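To make the activation order described above easier to follow, here is a minimal sketch encoding the cascade as a directed graph (an illustration only, not a kinetic model; the node names and the traversal helper are our own simplification of the narrative):

```python
# Sketch: the coagulation cascade described above as a tiny directed graph;
# the intrinsic and extrinsic arms converge on Factor Xa and thrombin.
# This encodes activation order only, not kinetics or feedback loops.
cascade = {
    "XII":      ["XIIa"],        # intrinsic arm: contact with sub-endothelial collagen
    "XIIa":     ["XIa"],
    "XIa":      ["IXa"],
    "IXa":      ["Xa"],          # with Factor VIIIa as cofactor
    "VIIa":     ["Xa"],          # extrinsic arm: with tissue factor (TF)
    "Xa":       ["thrombin"],    # prothrombin -> thrombin; Factor Va as cofactor
    "thrombin": ["fibrin"],      # fibrinogen -> fibrin; Factor XIIIa cross-links
}

def downstream(node, graph=cascade, seen=None):
    """All species reachable from a given activation step."""
    seen = set() if seen is None else seen
    for nxt in graph.get(node, []):
        if nxt not in seen:
            seen.add(nxt)
            downstream(nxt, graph, seen)
    return seen

print(sorted(downstream("XII")))  # -> ['IXa', 'XIIa', 'XIa', 'Xa', 'fibrin', 'thrombin']
```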
Coagulation and venous thrombosis
Estrogen and combined estrogen and progestin exogenous preparations alter the coagulable state (reviewed in Figure 2). An increase in the concentration of blood procoagulants may initiate clotting and lead to thrombus formation, increasing the risk of venous thrombosis. This clot may disengage from the vessel wall and flow, via the right side of the heart, to the pulmonary arteries, causing pulmonary embolism. The increase in venous thrombosis risk is concentrated in the first year of treatment. As demonstrated in Figure 2 and the accompanying text, exogenous hormones maintain a delicate balance between pro- and anti-coagulative states. To date there are insufficient clinical data relating to the levels of coagulation cascade intermediates to draw firm conclusions, but initial reports suggest there may be an increase in fibrinolytic activity with duration of hormonal usage (Salobir et al., 2002).
Coagulation and atherosclerosis
Clotting at the site of the lesion may involve both the intrinsic and extrinsic coagulation pathways (Khrenov et al., 2002a). The extrinsic, TF-dependent, pathway plays a major role in determining the thrombogenicity of atherosclerotic lesions and the subsequent generation of acute coronary syndromes (Toschi et al., 1997). In the normal vessel, TF is not expressed on endothelial cells or monocytes which are exposed to the circulating blood. TF is restricted to the adventitia and VSMC of the media (Wilcox et al., 1989). However, in atherosclerotic lesions, high levels of TF mRNA and protein can be found in all three cell types which make up the vessel wall (Moreno et al., 1996). Furthermore, TF expression levels have a high degree of correlation with plaque severity and vulnerability (Ardissino et al., 1997).
Although the extrinsic pathway is responsible for the initiation of thrombus formation, there is mounting evidence that it is not solely responsible for occlusive thrombus formation (Sramek et al., 2001; Bilora et al., 2001; Rosendaal et al., 1990; Triemstra et al., 1995). This implies that the intrinsic pathway, which activates Factor X 50-fold more efficiently and amplifies the coagulation triggered by the TF-dependent pathway (Mann, 1999), also contributes to the thrombogenicity of the atherosclerotic lesion.
Recent evidence indicates that the LDL/HDL ratio may also affect coagulation. Increased LDL concentrations have a procoagulant effect (Moyer et al., 1998), whereas HDL acts as an anticoagulant (Griffin et al., 1999). The mechanism responsible for these effects seems to be lipoproteins providing a phospholipid surface where the assembly of enzymatic complexes of the coagulation cascade can take place. Specifically, VLDL and oxLDL can support prothrombinase activity, and LDL supports extrinsic and intrinsic Xase activity (Khrenov et al., 2002b). While the ability of lipoproteins to support coagulation complex assembly is far less than that observed in platelets, enriched LDL and oxLDL present in the lipid-rich core may play an important role in Xase and prothrombinase complex assembly, adding to the thrombogenicity of the lesion. As mentioned above, estrogen decreases LDL and increases HDL levels, while progestins have the opposite effect.
The accumulation of oxLDL within the plaque induces pathological changes in all vessel wall cells, increasing thrombogenesis (Khrenov et al., 2002a). Oxidized LDL induces TF expression in endothelial cells (Fei et al., 1993), macrophages (Brand et al., 1994) and VSMC (Penn et al., 2000). Accumulated oxLDL may also enhance the ability of atherosclerotic lesion cells to form the phospholipid surface required for the assembly and activity of enzymatic complexes of the intrinsic pathway. Exposure of human macrophages and VSMC to oxLDL increases their ability to support Xase and prothrombinase complex activity, greatly increasing thrombin formation (Ananyeva et al., 2002). This increase in intrinsic pathway procoagulant activity is related to increased expression of Factor VIII binding sites and more efficient assembly of the Xase complex due to increased exposure of phosphatidylserine (PS) on oxLDL-treated cells (Wintergerst et al., 2000). These data indicate that the intrinsic pathway may play an important role in upregulating the thrombogenicity of atherosclerotic lesions following endothelial layer removal and subsequent exposure of VSMC and macrophages to blood flow.
Although apoptotic cells are absent from normal arteries, they are common in advanced plaques and include VSMC, macrophages and T lymphocytes (Bjorkerud & Bjorkerud, 1996). Apoptosis can be induced by a variety of agents; however, apoptosis is probably mediated by oxLDL within the plaque (Okura et al., 2000). One of the characteristics of apoptotic cells is the translocation of PS from the inner to the outer surface of the plasma membrane by a loss of membrane phospholipid asymmetry. The increased expression and accessibility of PS on the outer surface of the plasma membrane increases the thrombogenic potential of these cells by providing a platform for the assembly of complexes of both intrinsic and extrinsic coagulation pathways (Gilbert & Arena, 1996). Another thrombogenic effect of apoptotic cells present in the plaque is the shedding of PS- and TF-rich microparticles (Mallat et al., 1999; Bombeli et al., 1997).
Thrombogenicity is related to upregulation of both intrinsic and extrinsic pathways. However, down-regulation of anticoagulant and fibrinolytic activity may also contribute to atherothrombosis (Khrenov et al., 2002a). Oral contraceptives have been demonstrated to lower the sensitivity of factors involved in thrombin generation to APC, thus promoting a more coagulative state (reviewed in Cano & Van Baal, 2001). There is further evidence which shows that thrombomodulin and endothelial cell protein C receptor expression are down-regulated in atherosclerotic plaques (Laszik et al., 1994). Since HDL increases protein C activity, this down-regulation may be associated with the decreased HDL levels observed in atherosclerosis (Griffin et al., 1999).
THE BENEFITS AND THE RISKS OF EXOGENOUS HORMONES IN ARTERIAL DISEASE
The paradox of why exogenous hormones increase the short-term risk of cardiovascular disease yet may lower long-term evident risk may be related to the nature of the genesis of venous thrombosis and MI. MI is a consequence of atherosclerosis, while venous thrombosis is a disease of the veins which arises from hypercoagulation and venous stasis. Arteries are larger than veins and have greater flexibility brought about by the presence of an elastic layer. Movement of the wall of the endothelium is maintained by a delicate balance of dilating factors, such as NO and bradykinin, in combination with constricting factors, such as angiotensin-II, thromboxane, and endothelin among others (reviewed in Cano & Van Baal, 2001). These may be contributing factors as to why arterial disease and myocardial infarction have a lower incidence than venous thrombosis. Another important factor is the process of atherogenesis, which occurs only in the arteries. As previously mentioned, exogenous hormones only increase the risk of MI in the presence of cardiovascular risk factors. As depicted in Figure 3 (Scenario A), using HRT as an example, in women with no risk factors the increase in blood procoagulants (hypercoagulative state) will not lead to thrombosis formation, due to the size and movement of the artery, and thus the beneficial effects of estrogen in preventing the formation of an atherosclerotic plaque may provide long-term protection against MI and stroke. These beneficial effects of exogenous hormones include: 1) lowering LDL and raising HDL (Table I); 2) reducing intimal damage on vessel walls (Table I); 3) lowering the expression of adhesion molecules such as E-selectin and sICAM-1 (Van Baal et al., 1999a; Table I); 4) lowering MCP-1 (Stork et al., 2002); and 5) increasing anticoagulant APC activity. However, when women combine exogenous hormones with previous cardiovascular risk factors, an increase in risk of MI is observed (Figure 3, Scenario B). In this case, a risk factor such as hypertension facilitates incorporation of LDL into the arterial wall (Medina et al., 1997), the first step in the formation of an atherosclerotic plaque. This step is further compounded by smoking, which leads to the oxidation of LDL and thus enhanced deposition. Once an atherosclerotic plaque is formed (or is already present), the lowering by estrogens and/or HRT of soluble ICAM-1 and E-selectin and the increasing levels of CRP and matrix metalloproteinase-9 (MMP-9) will destabilize the plaque (Cano & Van Baal, 2001; Stork et al., 2002; Piercy et al., 2002; Zanger et al., 2000). When rupture of the atherosclerotic plaque occurs, a process which is also promoted by hypertension, the coagulation cascade is initiated. In this instance, the increased presence of TF, and other factors, that exist in the presence of exogenous hormones will promote coagulation and increase the clotting potential, leading to more rapid and greater clot formation and thus thrombosis and MI (Figure 3). If thromboembolism occurs, a high chance of stroke will ensue. In support of a hypercoagulable state being a risk factor only in the presence of an atherosclerotic plaque, which is deposited more frequently in the presence of the above-mentioned risk factors, markers of coagulation and fibrinolysis (such as t-PA, PAI-1, fibrinogen, and D-dimer) are not associated with increased risk of myocardial infarction, but are associated with atherosclerosis (Haverkate, 2002).
THE FUTURE OF EXOGENOUS HORMONES
The small increase in CAD in women using OCs does not outweigh the benefits of avoiding the trauma and complications arising from unwanted pregnancies, especially in non-smokers with no prior history of cardiovascular risk factors. Along with beneficial effects on mood, osteoporosis and hot flushes, HRT preparations were to be a simple and safe prophylactic for heart disease. This idea is based on apparently foolproof logic: premenopausal women have lower CAD; take away the hormones at menopause and the protection is lost; add back the hormones and we get back the protection. What may have appeared simple on paper has proved to be a nightmare in the clinic. Women are now facing the scenario that not only are their HRT preparations not delivering a protective effect, but that these preparations may actually be putting them at higher risk for CAD (certainly in the short term).
Unfortunately, there are a number of variables queuing up for consideration. First, we have no evidence that if women never went through menopause they would maintain the cardiovascular protection as they grow older, since changes in the coagulation system are known to occur naturally with the aging process. Second, although women develop CAD about ten years later than men, they are likely to fare worse after a heart attack (Giardina, 2000). Third, the hormone regimens given as HRT can never exactly mimic the circulating balance and concentrations present in premenopausal women. Neither will exogenous hormones emulate the specificity of in vivo action derived from the local expression of hormones.
Although an in vivo premenopausal situation is never truly possible, the future of HRT is not dead.The beneficial effects of exogenous hormones are required by a subset of women, and as life expectancy increases with every generation, the postmenopausal phase will account for an ever-increasing portion of a woman's life.
The negative results from the Women's Health Initiative study (Wassertheil-Smoller et al., 2003) may reflect the combination of risk factors, such as smoking, with hormonal preparations.In the Women's Health Initiative trial, 50% of the women on HRT had smoked before or continued to smoke during the study (Mueck & Seeger, 2003).Estrogen turnover is increased in women who smoke, reducing the beneficial estrogenic actions (Mueck & Seeger, 2003).
Through the vast array of data regarding OCs, HRT and cardiovascular disease, a beneficial effect, or at the very least an assurance of no increased CAD risk, does appear to be present under tightly-defined parameters. It is evident from these clinical studies that the choice of exogenous hormone, combined with personal history and the presence of cardiovascular risk factors, needs to be taken into consideration before prescription. The feasibility of screening for CAD risk factors may have to be considered. Although the results are not fully clear on the effects of HRT on CAD in healthy, risk-factor-free women, the data do suggest that HRT should not be prescribed, at present, for the prevention of cardiovascular disease. In regard to preparation and dosage, Rosendaal et al. (2002) recommend that contraceptives with 30 µg ethinyl estradiol should be the first choice and that third-generation progestins should be avoided due to their association with increased venous thrombosis.
The information accumulated to date has mainly concentrated on estrogens as a major risk factor, but increasing evidence supports a role for progestins in pathogenesis. The objective now facing the scientist and the clinician is to better understand the workings, at the physiological and molecular level, of estrogen and progesterone, and to determine where and when hormones are required and apply them accordingly. Hopefully, the foolproof plan was merely naive in its execution, while the logic is still firmly in place.
Although sexual dimorphism translates into a lower incidence of arterial disease in premenopausal women, exogenous hormones in the form of OCs and postmenopausal HRT have been associated with increased risk of venous thrombosis, MI and stroke. The presence of hormone preparations appears to add to the increased risk of venous thrombosis caused by genetic factors such as personal or family history, Factor V Leiden, deficiencies of protein C, protein S, or antithrombin III, and hyperhomocysteinemia. Emerging studies show that in vitro fertilization treatment and ovulation induction are also risk factors for venous thrombosis.
Figure 1. Hormonal effects on the atherogenic process. A. Expression of inflammatory and adhesion molecules; initial lipid infiltration and accumulation. B. Plaque growth and increased LDL deposition. C. Plaque rupture and thrombus formation.
Figure 2. Hormonal effects on the coagulation cascade.
Figure 3. Role of hormone replacement therapy (HRT) in atherogenesis in women with and without cardiovascular risk factors.
| 2016-10-14T01:18:46.145Z | 2003-01-01T00:00:00.000 | {
"year": 2003,
"sha1": "0f3be172fd45da653777c99ce70cf33dca74558c",
"oa_license": "CCBY",
"oa_url": "http://www.scielo.cl/pdf/bres/v36n3-4/art05.pdf",
"oa_status": "GOLD",
"pdf_src": "Crawler",
"pdf_hash": "0f3be172fd45da653777c99ce70cf33dca74558c",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
254356888 | pes2o/s2orc | v3-fos-license | Psychological Empowerment of Women through Micro-Enterprises Established in Parbat District, Nepal
Empowerment is a progressive process. Women's empowerment is essential to manage and alter traditional practices in the modern setting. Governmental and non-governmental organizations are attempting to empower women holistically to eliminate gender-based prejudice. Psychological empowerment is one form of empowerment that can give people the confidence to play a significant role in society. The study aimed to determine how micro-enterprises psychologically empowered women. In the Parbat district, 384 working women participated in the survey. Because the p-value was 0.000, below the 0.05 significance level, the results showed a significant shift between before (mean 15.6105) and after (mean 25.8421) participation in micro-enterprise. This demonstrates the beneficial effects of micro-enterprise on the psychological empowerment of women. Therefore, the government should continue this policy to maintain this success and give women more political clout.
Introduction
Barriers arising from incorrect cultural stereotypes must be removed for people, especially women, to reach their full potential and make a difference (Rowlands, 1995). Agarwal and Rao (1996) defined women's empowerment as a woman's capacity to be economically independent and in charge of decisions that may have a positive or negative impact on her life. Micro-enterprise has mainly benefited women. Women must start a micro-business or small enterprise to support their family and make a living in this male-dominated environment where work possibilities are limited (Madichie & Nkamnehe, 2010). In general, women work harder than men, yet they receive less recognition. According to Fofana, Antonides, Niehof, and Ophem, in their article "How microfinance empowers women in Côte d'Ivoire" (2015), and Gomez (2013), women manage the home in ways that improve the standard of living for the family. In developing nations like Nepal, women are primarily involved in microfinance projects, organized in self-help groups. Microcredit makes it easier for women to find work, and women frequently discover microfinance organizations as a result. By providing women with the tools and resources they need to launch small companies, most initiatives aim to lessen women's poverty. Other programs enhance rural communities' living standards and address other social issues. The Millennium Development Initiative is also advanced (Khan, Bhat, & Sangmi, 2020).
Empowerment spans various areas, including social and cultural, health, economic, legal, political, psychological, natural resources, and spirituality (Malhotra & Schuler, 2005). Women's psychological empowerment is characterized by, among other things, increased self-assurance, self-esteem, self-reliance, the breaking of gender conventions, and decreased psychological discomfort. Due to micro-enterprise, women's collective efficacy, proactive attitude, and self-efficacy are likely to increase.
Despite the positive impact of micro businesses on women's psychological empowerment, stress and strain on women cannot be ignored (Moyle et al., 2006). According to Kim et al. (2007), microenterprise has a better chance of boosting women's self-confidence and financial confidence and shattering gender norms. Another study found that women who engage in microenterprise strengthen their social networks and values (Hansen, 2015). An investigation into the impact of microfinance on beneficiaries' socioeconomic status in Nepal gathered the opinions of microfinance managers. 25% of respondents agreed that microfinance helps beneficiaries improve their financial situation, with 75% strongly agreeing (Dhakal & Nepal, 2016).
The impact of Small Farmers' Cooperative Limited (SFCL) on Nepalese women's sociocultural and political empowerment was the subject of a separate, independent study (Poudel & Pokharel, 2017). The psychological empowerment of women through microfinance and SFCL was not discussed in either of these studies. Given this gap, and since no study had been conducted among micro-enterprise beneficiaries in the Parbat district of Nepal, the present study was undertaken to determine the psychological empowerment of women through micro-enterprises developed in that district.
Materials & Methods
The study was carried out in Nepal's Parbat district, a hilly region and one of Nepal's 77 districts, part of Gandaki Province. The district covers 494 km² (191 sq mi), and Kusma serves as its headquarters. The Parbat district also hosts the Micro Enterprise Development Program (MEDEP). The United Nations Development Program (UNDP) and the Nepal Ministry of Industry, Commerce, and Supplies jointly launched this program in 1998, with funding from AusAID. It aims to reduce poverty in Nepal. The program's goal is to assist participants in establishing micro-businesses that enable them to engage in sustainable income-generating and livelihood activities.
From 1998 to 2018, it was a successful effort carried out in 77 districts throughout Nepal. The UNDP had set a goal of 70% female involvement, and MEDEP has proven to be a helpful tool for empowering women. Consequently, the researcher chose this area for examination. Both quantitative and qualitative data were gathered from the field for this mixed-method study. 384 women were chosen randomly from the district, and several case studies were also carried out. Short case studies were produced, alongside statistical analysis of the quantitative data, to validate the quantitative conclusions.
Result & Discussion
Men predominate in our society, and women are excluded from essential services. Women are less wealthy than men and have less access to necessities, including healthcare, clean drinking water, sanitary conditions, and education. In rural Nepal, there is still a persistent belief that girls should not go to school but should help with family duties. They do not receive the same treatment as boys. Women still frequently do not receive salaries at work that are equivalent to those of men. In some places, women are not paid for their labor and experience various forms of violence (United Nations, 2015).
Therefore, there is a need for more female employees at work to lower this ratio. It aids in reducing female prejudice and poverty. Consequently, one of the crucial interventions in rural and isolated places is micro-enterprise (Chant, 2014). Microbusinesses run by women help them support their families financially. They depend less and less on the males in their families. Additionally, it helps them pay off loans and debt accumulated for various reasons. Compared to men, women are more accountable to their family members.
The perception is that women are more organized and punctual than men. Women can now work and make money because of this. They are more likely to take care of their family members' financial, educational, and medical requirements. Organizations work with women and women-related businesses in Nepal's rural and remote areas to help them become self-sufficient in their families and communities. Additionally, it has improved their social standing and self-esteem. Women are gaining power on all fronts: socially, economically, and psychologically.
This kind of initiative helps shift the way that society thinks. Even their society shows more regard for them. Politics, economics, health, knowledge, capacity, and sustainability play multiple roles in women's empowerment at the individual, group, and organizational levels.
There are connections between the political, psychological, social, and economic spheres.
Through micro-business, empowered women are altering their society. They are helping to create jobs for other people and are developing their entire community. They have a positive perspective, promoting society's expansion and improvement (Khan, Bhat, & Sangmi, 2020). In Nepalese culture, women put in a lot of effort to maintain their family's position. Along with living a good life, they also desire to be well-known in society. With this optimistic and forward-thinking mindset, they have created small businesses in Nepal's rural and remote regions.
Descriptive statistics of psychological empowerment of women
Human rights and development are fundamentally dependent on the empowerment of women. While women's empowerment might hasten development, it also helps to reduce gender inequities. Women's empowerment doesn't mean they are handed power; instead, it just gives them the ability to use it. Regarding motivation and education, women now have more influence but still do not belong to an empowered class (Valarmathi & Hepsipa, 2014). Women are learning more but still do not receive the same treatment as men in all areas. The concept of women's empowerment is often misinterpreted in rural and isolated areas. They are unlikely to believe that women's empowerment can help their families and societies advance, though.
Health is a form of wealth, and the mental well-being of individuals is crucial. Women's empowerment isn't complete without psychological empowerment because mental health is a crucial component of total well-being. Mental health is rarely discussed in our society, and when it is discussed, people's perceptions of it are not good (Moubarak, Afthanorhan, & Alrasheedi, 2022).
In Nepal's rural and isolated areas, women should be empowered and their psychological well-being should be addressed. This study reveals the psychological empowerment of women through micro-enterprise. Table 1 displays the descriptive statistics (means) for women's psychological empowerment. The overall mean for women's psychological empowerment changed significantly, from 15.61 before to 25.85 after engagement in micro-enterprise. The total average item mean before (1.73) and after (2.87) participation in micro-enterprise likewise shows a significant change. The mean value for women working hard to complete the tasks they had promised to do improved dramatically from 1.71 to 2.99 after they took part in micro-enterprise. Similarly, the mean value for respondents feeling confident to handle any situation increased from 1.59 before to 2.85 after involvement in micro-enterprise. The mean value for feeling proud to be a woman before and after participating in micro-enterprise was 1.78 and 2.97, respectively.
Additionally, women's awareness of their own qualities and abilities increased from 1.83 before to 2.90 after participating in micro-enterprise. For optimistic attitudes about their future, the mean value changed from 1.80 before to 2.92 after participation in micro-enterprise. The mean value for constantly feeling happy in their life grew from 1.75 to 2.95 after participating in micro-enterprise. Additionally, the mean value for women's involvement in family discussion and decision-making increased from 1.80 to 2.89 after participating in micro-enterprise.
It was found that, after participating in micro-enterprise, community members listened to the women's recommendations, as the mean value improved from 1.70 to 2.69. Finally, after participating in micro-enterprise, the respondents' mean value for believing they could head a community organization increased from 1.64 to 2.70.
One of the respondents to the field interview, Mrs. Maya Kisan, 30, states: "I didn't have much confidence to talk to anyone. We didn't have much money, and our family was poor. But I started tailoring with the help of my family. This effort now involves the entire family. As a result, our income has grown, and we've been able to keep giving our kids a top-notch education. My confidence has increased, which is vital for the future."
Psychological empowerment of women through micro-enterprise
In order to create a good country, women must be given more influence. If women are given more influence, society will be more stable. It helps develop a good family, society, and country (Gupta, 2018). Women have some social ties and are in charge of their families. Rural and remote areas of Nepal benefit from women's empowerment since it promotes balanced development.
Women must be psychologically empowered to handle problems, boost self-confidence, increase freedom of choice, and build coping mechanisms at home. As a result, they become more capable. Positive women have a significant impact on their families and society. Due to their abilities, assurance, and leadership traits, they can more effectively improve their societies (Afthanorhan, Alrasheedi, & Moubarak, 2022). Almost all of the women think they are in charge of their families. This thinking allows them to work confidently and continue raising their families. Additionally, it helps society progress.
In this part, a matched (paired) sample test was used to determine whether micro-enterprise has a positive psychological impact on women. Table 2 displays the differences in mean, the standard deviations, and the significance. It demonstrates that the tendency to consistently put forth their best effort to complete their assignment on time showed the highest change in mean value (-1.28), with a standard deviation of 1.71. According to Table 2, the change in mean value and the standard deviation for feeling confident in any situation before and after taking part in micro-enterprise were -1.26 and 1.05, respectively. The mean value for being proud to be a woman also changed (-1.19). The change in the mean value for constantly feeling happy in life was -1.20, with a standard deviation of 1.32. There was also an improvement in their perception of the future (-1.11), with a standard deviation of 0.87. After participating in micro-enterprise, women were more active in family discussion and decision-making (-1.09, standard deviation 0.92). Their understanding of their own abilities also improved (-1.06), with a standard deviation of 1.72. More women (-1.06, with a standard deviation of 0.76) thought they could lead a community organization through micro-enterprise. Finally, the change in the mean value for community members seeking their assistance with an issue was -0.99, with a standard deviation of 0.88, resulting from participation in micro-enterprise. The p-value was 0.000, less than the 0.05 significance level, showing significant differences in the means for all indicators.
In her interview, Mrs. Nirmaya Nepali, 40, claimed that her society did not value women. People avoided them in discussion groups because they thought women couldn't work as effectively as men. She now runs a tailoring business with three employees. She stated that more people are incorporating her in debate programs and asking for her advice.
Psychological changes before and after
Most organizations today concentrate on the psychological empowerment of women in addition to the economic, political, social, and educational empowerment. Women's psychological empowerment enables them to be independent, self-assured and equipped to handle any situation. Women must be educated about their strengths and abilities to succeed as members of their families and society.
As students work on a project, they start to study and develop their knowledge of the subject, which enables them to advance (Bhasin, 2021). It is crucial for women to feel psychologically empowered in developing countries like Nepal. Many problems and prejudices impact women; therefore, strengthening women psychologically helps them overcome such limitations.
The psychological alterations that take place in women before and after they take part in a micro-business are covered in this section. Table 3 displays the paired-samples statistics used to study the psychological changes in women through micro-enterprise. The mean value was 15.61, with a standard deviation of 5.98, before engaging in micro-enterprise; after engagement, it was 25.84, with a standard deviation of 2.73. Table 3 shows that the difference in means was -10.23, with a standard deviation of 7.10, indicating a substantial change in the psychology of women after micro-enterprise. Moreover, the difference between the two mean values was significant, as the p-value was 0.000, less than the 0.05 significance level.
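As a hedged illustration of the matched-sample arithmetic, the sketch below (Python, not the authors' analysis script) recomputes the paired t statistic from the summary values reported above; the assumption that all 384 surveyed women contributed usable before/after pairs is mine, not the paper's.

```python
import math
from scipy import stats

# Summary statistics reported for the paired (before/after) comparison.
mean_diff = -10.23  # mean of (before - after) psychological scores
sd_diff = 7.10      # standard deviation of the paired differences
n = 384             # assumed number of usable paired responses

# Paired t statistic: t = mean_diff / (sd_diff / sqrt(n))
t_stat = mean_diff / (sd_diff / math.sqrt(n))

# Two-sided p-value from the t distribution with n - 1 degrees of freedom.
p_value = 2 * stats.t.sf(abs(t_stat), df=n - 1)

print(f"t = {t_stat:.2f}, p = {p_value:.3g}")  # t is roughly -28; p far below 0.05
```

A t statistic of this magnitude is consistent with the reported p-value of 0.000 (i.e., p < 0.001).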
Conclusion & Recommendation
Finally, there were noticeable improvements in the psychological state of the women who worked at or managed micro-enterprises. Women who participated in micro-enterprises reported feeling more confident in their abilities, capable of taking on leadership roles, adept at handling problems, and generally happier. Therefore, the relevant authority should regularly offer technical assistance and updated training to enable women to transform society. | 2022-12-07T19:24:17.116Z | 2022-11-30T00:00:00.000 | {
"year": 2022,
"sha1": "69db0e777e9d580f826117351bdca8806f6a3033",
"oa_license": "CCBYNC",
"oa_url": "https://www.nepjol.info/index.php/njmr/article/download/48848/36466",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "9ebea09c91933378ed85fe5f825f7a346e7a19cb",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": []
} |
246258045 | pes2o/s2orc | v3-fos-license | Self-medication among Medical Students and Staffs of a Tertiary Care Centre during COVID-19 Pandemic: A Descriptive Cross-sectional Study
ABSTRACT Introduction: Self-medication is a common practice worldwide. Major problems related to self-medication are wastage of resources, increased resistance of pathogens, adverse reactions, and prolonged suffering. This study aimed to find the prevalence of self-medication among medical students and staff of a tertiary care centre during the COVID-19 pandemic. Methods: A descriptive cross-sectional study was conducted among medical students and staff of a tertiary care centre from 1st November to 30th November, 2021. Ethical clearance was taken from the Institutional Review Committee (Reference number: 2710202102). Convenience sampling was done to reach the sample size. Online questionnaires consisting of information on self-medication and socio-demographic characteristics were used. The data was transferred into an Excel spreadsheet and later exported to Statistical Package for the Social Sciences version 20 for analysis. Point estimate at 95% confidence interval was calculated along with frequency and proportion for binary data. Results: Among 383 participants, the prevalence of self-medication during the pandemic was 193 (50.4%) (95% CI: 45.39-55.40). About half of the respondents who self-medicated, 90 (50.3%), purchased the medicines directly from the pharmacy. The most consumed medicines were Paracetamol 128 (18.9%), Vitamin C 126 (18.6%), Zinc 86 (12.7%), Multivitamins 75 (11.1%), and Vitamin D 65 (9.6%), followed by Azithromycin 54 (8%), cough syrup 53 (7.8%) and Ibuprofen 46 (6.8%). Conclusions: The prevalence of self-medication during the COVID-19 pandemic is lower compared to that of other developing countries. Paracetamol and Vitamin C are the most consumed drugs for self-medication, and Azithromycin is the most used prescription-only drug for self-medication during the COVID-19 pandemic.
INTRODUCTION
The World Health Organization defines self-medication as the selection and utilization of medicines to treat self-recognized symptoms or ailments without consulting a physician. 1 Family, friends, neighbours, the pharmacist, a previously prescribed drug, or suggestions from an advertisement in newspapers or popular magazines are common sources of self-medication. 2 Self-medication is a common practice worldwide. 3 During the pandemic, the constant fear of going outside and using health services may have an impact on the use of self-medication. 4 At this stage of the pandemic, self-medication has been found to be a common practice in many countries. 5 Major problems related to self-medication are wastage of resources, increased resistance of pathogens, and serious health hazards such as adverse reactions and prolonged suffering. Antimicrobial resistance is a current problem worldwide, particularly in developing countries where antibiotics are available without any prescription. 6 This study aimed to find the prevalence of self-medication among medical students and staff of a tertiary care centre of Nepal during the COVID-19 pandemic.
METHODS
The sample size was calculated using the single-proportion formula n = (z² × p × q) / e², where z = 1.96 at the 95% confidence level, p = prevalence of self-medication among medical students and staff of a tertiary care centre, taken as 50% for maximum sample size, q = 1 - p, and e = margin of error, 6%. The minimum sample size calculated was 267. Allowing for a 10% non-response rate, the sample size becomes 294. However, the total sample size taken was 383.
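As a quick check of this arithmetic, a minimal Python sketch, assuming z = 1.96 for the 95% confidence level (the paper states p, q, and e but the formula line itself was lost in extraction):

```python
import math

z = 1.96          # z value for a 95% confidence level (assumed)
p = 0.50          # prevalence taken as 50% for maximum sample size
q = 1 - p
e = 0.06          # margin of error

n = (z ** 2) * p * q / (e ** 2)   # about 266.8
n_min = math.ceil(n)              # 267, as reported
n_total = math.ceil(n_min * 1.1)  # plus 10% non-response -> 294

print(n_min, n_total)
```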
The main instrument for data collection was an online questionnaire, prepared using Google Forms and sent to participants through email and social media (Facebook, Instagram, Viber, and WhatsApp). The survey was kept brief to achieve a greater response rate. A pretest was conducted among 20 participants for clarity and content prior to dissemination to others. Day-to-day supervision of sent questionnaires was done. Those who did not respond were sent up to three reminders.
The data was transferred into an Excel spreadsheet and later exported to Statistical Package for the Social Sciences version 20 for data analysis. Descriptive statistics were used, and variables were represented in terms of percentages. Point estimate at 95% confidence interval was calculated along with frequency and proportion for binary data.
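The interval method is not named in the paper; as a hedged illustration, a normal-approximation (Wald) interval, assumed here, reproduces the reported prevalence estimate and its 95% CI:

```python
import math

k, n = 193, 383                  # self-medicating respondents / total participants
p_hat = k / n                    # about 0.504
z = 1.96                         # 95% confidence level
se = math.sqrt(p_hat * (1 - p_hat) / n)
lower, upper = p_hat - z * se, p_hat + z * se

print(f"{p_hat:.1%} (95% CI: {lower:.2%}-{upper:.2%})")  # ~50.4% (45.39%-55.40%)
```

The same arithmetic applies to the other binary proportions reported in the Results.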
DISCUSSION
The magnitude of self-medication during the COVID-19 pandemic in our study was 193 (50.4%). Three studies in Pakistan, Bangladesh, and Togo have reported the prevalence of self-medication during the COVID-19 pandemic to be 53%, 71.40%, and 34.2%, respectively. [7][8][9] The prevalence of self-medication in Pakistan was similar to this study, whereas it was much higher in Bangladesh and lower in Togo.
In this study, 17.2% of respondents medicated without any symptoms, for prevention. This was consistent with the study in Bangladesh. 8 The majority in our study who self-medicated obtained the medicines directly from a pharmacy. The study in Pakistan mentioned that, for the majority, the source of the drug was either a prescription written for a family member or directly from the pharmacy. 7 Obtaining drugs directly from the pharmacy may be a common source of self-medication in countries like Nepal and Pakistan because dispensing of medicines from pharmacies is not strictly regulated in these regions. Other studies have likewise reported Paracetamol and Vitamin C as the most consumed medicines for self-medication. 5,9 The most used 'prescription-only' drug for self-medication was Azithromycin. Though the use of Azithromycin has increased during the COVID-19 pandemic, routine use of azithromycin for COVID-19 in the absence of an additional indication is not recommended. 10,11 Instead, inappropriate use of antibiotics can further increase antimicrobial resistance. Adverse effects of the medicines used were seen, as also reported in other studies. 7,12 A limitation of this study was that the limited sample size and the convenience sampling technique could not make the results representative of the entire population.
CONCLUSIONS
This study concludes that the prevalence of self-medication during the COVID-19 pandemic was lower compared to that of other developing countries.
Paracetamol and Vitamin C were the most consumed drugs for self-medication and Azithromycin was the most used prescription-only drug for self-medication during the COVID-19 pandemic. | 2022-01-25T16:12:40.738Z | 2022-01-01T00:00:00.000 | {
"year": 2022,
"sha1": "6090411714d94401b1355be9ef1a5f041691e6be",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": "CLOSED",
"pdf_src": "PubMedCentral",
"pdf_hash": "dd7064cb595b03cc42cc7012397a41007b7d323b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
210926327 | pes2o/s2orc | v3-fos-license | Usage of cloud storage facilities by medical students in a low-middle income country, Sri Lanka: a cross sectional study
Background Cloud storage facilities (CSF) have become popular among internet users. There is limited data on CSF usage among university students in low middle-income countries, including Sri Lanka. In this study we present the CSF usage among medical students at the Faculty of Medicine, University of Kelaniya. Methods We undertook a cross sectional study at the Faculty of Medicine, University of Kelaniya, Sri Lanka. Stratified random sampling was used to recruit students representing all the batches. A self-administered questionnaire was given. Results Of 261 (90.9%) respondents, 181 (69.3%) were females. CSF awareness was 56.5% (95% CI: 50.3-62.6%) and CSF usage was 50.8% (95% CI: 44.4-57.2%). Awareness was higher in males (P = 0.003) and was lower in senior students. Of CSF-aware students, 85% knew about Google Drive and 70.6% used it. 73.6% and 42.1% knew about Dropbox and OneDrive, and 50.0% and 22.0% used them, respectively. There was no association between CSF awareness and pre-university entrance or undergraduate examination performance. Inadequate knowledge, time, accessibility, and security and privacy concerns limited CSF usage. 69.8% indicated that they would like to undergo training on CSF as an effective tool for education. Conclusion CSF awareness and usage among the students were 56.5% and 50.8%. Google Drive is the most popular CSF. Lack of knowledge and accessibility, and concerns about security and privacy, limited CSF usage among students. The majority were interested in undergoing training on CSF, and undergraduate Information Communication Technology (ICT) curricula should introduce CSF as effective educational tools.
Azure. The IaaS provides the users with resources such as servers, storage and computation facilities [3].
These promised benefits of the cloud have been the reason for the increasing adoption of cloud computing in many business areas already, and in the healthcare domain in recent years [4]. For instance, the combination of CC and traditional mobile computing has resulted in the emergence of Mobile Cloud Computing in the business sector [5]. One promise of the cloud in the health sector is the possibility of handling huge medical databases in order to improve patient care through timely prediction with good accuracy [6,7]. Many more CC applications and capabilities are prominent in other fields as well.
In today's CC-dominated environment, cloud storage facilities (CSF) have gained popularity over traditional storage media due to the following advantages: free-of-charge availability from many providers, file synchronization facilities, file sharing facilities, and reliability of services without worrying about data loss [8][9][10][11][12]. CSF provide additional benefits for students beyond saving their digital materials [12][13][14]. For example, students can take digital notes online and access them anytime and anyplace in a convenient way. These notes can be easily shared among colleagues. This helps to avoid physical constraints faced by students in accessing and sharing study materials. CSF also facilitate collaborative work among students and increase productivity in group work [14].
Both students and university teachers use CSF to store teaching, learning and research materials. A previous study showed a high demand for CSF in the German higher education sector, where 34% of the sector used cloud computing [15]. This is mainly due to the ease of access through any internet-enabled device [16], the facilitation of collaborative work [14], and CSF serving as backup and recovery solutions in hardware failures [17]. However, users of these public CSF have raised concerns about privacy invasion risks and data security breaches [14,18]. A previous study in Germany reported that 85% of university students used at least one CSF and that Dropbox was the most popular CSF among them. Nearly 80% of students used CSF to store project work and teaching materials, and 55% for other personal data [15].
There is limited data on CSF usage among university students in low middle income countries including Sri Lanka. This study was done to assess the knowledge, practice and attitude towards CSF among medical students at the Faculty of Medicine, University of Kelaniya. The existing literature does not provide sufficient evidence on the usage of CSF by medical students in low middle income countries. Hence our finding will be a unique contribution to world literature. Further, the finding will be a useful for administrators, policymakers and teachers in many higher education institutions in medicine, especially in developing countries in adaptation of ICT education in medical curricula. In Sri Lankan context, this study contributes for medical administrators to identify the future doctors' attitude on trending technology like cloud base services.
Related works
This section describes previous literature from recent years related to the use of CSF by university students.
Several recent studies related to the use of CSF by university students have been carried out. One large-scale online survey [15] conducted by Meske et al. targeted more than 3000 participants, including students (72%) as well as employees (28%), at the University of Muenster in Germany. The analysis of the survey results indicated a high demand for cloud service solutions in the German higher education sector, where most students (85%) used at least one cloud service (employees: 73%). Students mainly used cloud services for educational purposes (project work: 83%; teaching material: 78%). Employees' main use was to save work-related materials (78%). The most important reason for rejecting cloud storage services was security concerns (students: 64%; employees: 62%). The primary aim of that paper was to describe and present the main results of a preliminary large-scale survey on cloud services at the University of Muenster with more than 3000 participants, in order to identify how cloud services should be designed to be attractive to the target audience.
Another study [19] by Ashtari & Eydgahi examined how engineering students at Eastern Michigan University accept and use cloud services long after their adoption in the education process. The researchers used the Technology Acceptance Model (TAM) and the Determinants of Perceived Ease of Use model to determine CC adoption by students.
97.5% of participants indicated that they were utilizing the cloud-based university class management application that enabled direct student access to Google Drive and the rest of the Google cloud suite. The majority of the students (97.5%) were utilizing at least two forms of cloud technology, and 87.5% were using three or more applications. The reasons given by students for using cloud applications were accessibility, the ability to share data, the low cost, and the ability to back up files. The most common concerns were data privacy, fear of losing data, and difficulty of use. The researchers suggest that a combined model drawing from more aspects of internet technology will be more useful in further examinations of cloud computing adoption.
Stantchev et al. [20] used the Technology Acceptance Model (TAM) to investigate the motivations that lead higher education students to replace several Learning Management System (LMS) services with cloud file hosting services for information sharing and collaboration. The findings extended previous research that investigated the use of Dropbox to cover certain weaknesses of LMS within the higher education setting. The results showed that Dropbox received a better valuation than the LMS for the three considered constructs: attitude toward using, perceived ease of use, and perceived usefulness.
Another study based on first-year medical students can be seen in the work of Peacock & Grande [21]. The main objective of their work was to present the results of effectively using a free Google cloud suite, including Google Drive, to manage and teach a first-year pathology course at Mayo Medical School in the USA. The results demonstrated that the Google cloud suite allowed faculty to build an efficient and effective classroom teaching and management system; 87% of participants responded positively in favor of Google Drive as a storage location for course materials. Ibrahim Arpaci et al. [12] investigated the adoption of cloud computing to achieve knowledge management using TAM. The researchers examined the involvement of cloud services in knowledge creation and discovery, storage, sharing, and application among students, and concluded that the integration of cloud computing services into educational settings may promote students' academic performance, effectiveness, and efficiency by facilitating knowledge management, mainly because cloud services enable students to access and synchronize their digital reference materials any time, from anywhere, and using any device. Table 1 compares the similarities and differences between the current study and the other works stated in this section.

Table 1. Comparison of the current study with related works.
Meske et al. [15]: An online survey that included the whole student population and employees at the University of Muenster in Germany, compared to the current study, which used a printed questionnaire to collect data from a sample of medical students. Both surveys focused on the use of CSF.
Ashtari & Eydgahi in 2017 [19]: The study sample was selected by inviting 40 engineering students in a specified study setup. The objective was to find the students' use and acceptance of CC by using TAM. The current study applied the stratified random sampling method to select the study sample from a medical faculty and attempted to find specifically the use of CSF.
Stantchev et al. in 2014 [20]: TAM was used to report weaknesses of several services of LMS against the Dropbox cloud hosting service. The sample size was 121 computer science students at final-year and master's level. Students' involvement in Dropbox use as a CSF was the similarity found between the two studies.
Peacock & Grande in 2016 [21]: This study involved first-year medical students in order to examine the possibility of effectively using a free Google cloud suite, including Google Drive, to manage and teach a first-year pathology course. Medical students were involved in both studies; the current study specifically examines CSF usage in the education process by medical students of all years.
Ibrahim et al. in 2017 [12]: 221 students in the Information Technology (IT) subject stream who followed a training course on knowledge management and CC were involved in the study. The adaptation of cloud services in knowledge management was examined using TAM. Both studies focused on cloud services but on different research aspects.
Hypotheses in the study
This study was designed to assess the following hypotheses.
The majority of medical students do not use CSF, or underutilize them, in a situation where IT infrastructure is provided to them free of charge. Students with prior experience of IT are the leading users of CSF. Students who perform well in previous and current examinations use CSF.
Methodology
We undertook a cross sectional study at the Faculty of Medicine, University of Kelaniya, Sri Lanka. The research methodology is depicted in Fig. 1. The study was conducted from August 2016 to December 2016. A self-administered questionnaire was used to collect data. There were five batches of medical students in the Faculty, and stratified random sampling with proportional allocation was used to recruit students from each batch. The sample size was calculated to estimate the proportion of students who are aware of cloud storage, using the following formula and assumptions.
The sample size (SS) was calculated as SS = Z² × p × (1 - p) / C², where Z = 1.96 (the Z value at the 95% confidence level), p = 0.5 (the expected proportion, taken as 50% for maximum sample size) and C = 0.05 (the margin of error).
Hence, the sample size was calculated as 384. An additional 10% was added to allow for non-response (384 + 38 = 422). There were 903 students in total in the Faculty, and the calculated sample size represented 47% of the population. Since the sample size was more than 5% of the total population, we adopted the finite population correction (FPC) to avoid over-sampling.
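The correction formula itself is not shown in the paper. A minimal Python sketch, under the assumption that the standard finite population correction n' = n / (1 + (n - 1)/N) was applied to the inflated sample of 422, reproduces the revised figure reported next:

```python
n = 422   # sample size after adding 10% for non-response
N = 903   # total number of students in the Faculty

n_fpc = n / (1 + (n - 1) / N)  # standard finite population correction, ~287.8
print(int(n_fpc))              # floored to 287, matching the revised sample size
```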
The revised sample size was calculated as 287 (please refer to Additional file 1: Table S1, which provides the allocated number of students from each batch). Students' name lists for each batch were obtained, and the required number of students from each batch was selected using a random number generator.
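Since Table S1 is not reproduced here, the Python sketch below uses hypothetical batch sizes (summing to the Faculty's 903 students) to illustrate proportional allocation and random selection; the per-batch counts and name lists are invented for demonstration only.

```python
import random

N, n = 903, 287
# Hypothetical batch (stratum) sizes; the real values are in Table S1.
batch_sizes = {"Year 1": 185, "Year 2": 190, "Year 3": 180,
               "Year 4": 178, "Year 5": 170}
assert sum(batch_sizes.values()) == N

# Proportional allocation: n_h = n * N_h / N for each batch.
allocation = {b: round(n * size / N) for b, size in batch_sizes.items()}
print(allocation)  # per-batch sample sizes, summing to 287 for these counts

# Random selection within one batch from its (hypothetical) name list.
names = [f"student_{i}" for i in range(batch_sizes["Year 1"])]
selected = random.sample(names, allocation["Year 1"])
```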
A self-administered questionnaire was used to obtain data. The questionnaire had two parts. The first part of the questionnaire included the student's academic year, gender, and results of the undergraduate medical examinations, including continuous assessments and unit exams. Ethical approval for the study was obtained from the Faculty of Medicine, University of Kelaniya. All the medical students were over 18 years of age. At least one of the authors participated in the data collection process. All the students were informed about the study and, in addition to printed information, were provided with a consent form to be completed by the participant. Informed written consent was obtained from those students who participated voluntarily in the study.
The following are detailed explanations of the steps shown in the flowchart (Fig. 1).
The computer labs of the Faculty are used for practical classes in medicine and for teaching ICT. The students are not allowed to use external devices in the teaching lab due to security reasons, which creates an unpleasant situation for some students. Students can easily overcome this problem using free cloud storage. The issue is whether the students are aware of cloud storage or not; no previous data was available in the Faculty in this regard. The literature survey confirmed the lack of research on the use of CSF by medical students, especially from developing countries. The Faculty encourages obtaining ethical clearance for research that involves students, so the project proposal was submitted to the Faculty ethical clearance committee for approval. The questionnaires, as revised by the ethics committee, were distributed among the selected students. A sample of 287 students was selected using stratified random sampling with proportional allocation, after applying the finite population correction (FPC) to recalculate the sample size. Data were entered and verified in the REDCap software system. The R statistical package was used to perform descriptive analysis on the data. Literature surveys were done to compare our findings with those of other studies; data analysis and literature surveys were repeated as required. Article writing proceeded in the following order: Methodology, Data analysis, Discussion, Conclusion, Abstract and Introduction.
Results
The statistical analysis was done using R version 3.5.3. The average CSF awareness and usage were calculated with confidence intervals. Pearson's chi-square test statistic was used to check for statistical differences between genders. Spearman's rank correlation coefficient was used to determine the connection, if any, between CSF awareness and exam performance. The trend in the awareness of CSF across academic years was checked using a generalized linear model.
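As an illustration of the gender comparison (shown in Python rather than R for brevity), the sketch below runs Pearson's chi-square test on a hypothetical awareness-by-gender table; the paper reports only the resulting P value (0.003 in the Abstract), so these counts are invented, chosen merely to be consistent with 248 respondents of whom 140 were aware of CSF.

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 counts: rows = male/female, columns = aware/not aware.
table = [[55, 22],    # males
         [85, 86]]    # females

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")  # p lands near the reported 0.003
```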
Description of the study sample
We distributed 287 questionnaires and 261 (90.9%) students responded. There were 181 (69.3%) female students in the sample. The numbers of students who responded to the questionnaire, from first year to final year, were as follows: 49 (18.8%), 55 (21.1%), 51 (19.5%), 59 (22.6%) and 47 (18.0%). 13 students did not respond to the question on awareness of CSF, and we removed them from the analysis; these students belonged to all academic years, the respective numbers being 2, 3, 2, 3 and 3 from first year to final year. We present the results of the 248 remaining respondents in the subsequent analysis. Among the 140 students who were aware of CSF, the highest awareness was observed for Google Drive (85.0%). The second and third highest awareness were for Dropbox (73.6%) and OneDrive (42.1%). 63 (45.0%) students were already aware of CSF before entering the University (Table 2).
Of the students who were aware of Google Drive (85.0%), 12.1% had accessed it daily and 20.0% had accessed it more than once a week. Of those aware of Dropbox (73.6%) and OneDrive (42.1%), more than 15% had never accessed them. Although students were aware of iCloud and Amazon, the majority had never accessed them (Table 3).
Among the students who were aware of CSF, 79 (56.4%) had used cloud facilities to transfer electronic files and 64 (45.7%) used cloud storage to save educational materials. 36 (25.7%) had synchronized their files with CSF. Further, 109 (77.9%) students mentioned that cloud storage is useful for educational purposes, and 19 (13.5%) students were of the view that CSF is of little or no use for educational purposes.
Limiting factors of using CSF
Of the 140 CSF-aware students, 55 (39.3%) and 28 (20.0%) mentioned that they did not have adequate time and knowledge, respectively, to use CSF. Other limiting factors included lack of accessibility (45; 32.1%) and concerns about security (37; 26.4%) and privacy (32; 22.9%).
Interest to learn more about CSF
Of the total students, 35 (14.1%) did not want to use CSF and 173 (69.8%) indicated that they would like to learn CSF as an effective tool for education; this included 72 (76.6%) of the students who were not previously aware of CSF and 101 (84.9%) of those who were already aware of CSF. Forty (16.1%) students did not wish to have training on CSF, including 22 (23.4%) who were not aware of CSF and 18 (15.1%) who were aware of CSF.
Discussion
Among the students at the Faculty of Medicine, University of Kelaniya, 56.5% (95% CI: 50.3-62.6%) were aware of CSF and 50.8% (95% CI: 44.4-57.2%) used CSF. Our results showed a lower CSF usage among local medical students compared with western countries [15,19]. It is important to note that the Faculty of Medicine has provided unlimited Wi-Fi internet access to the students. All students had the opportunity to learn ICT during the first year of the degree program, yet among the CSF-aware students only 17.6% had learnt about CSF during the ICT lessons. These ICT lessons were not compulsory for students, and low student attendance was observed. Male students showed higher awareness than females, and junior students showed higher awareness than senior students, reflecting greater technology penetration among males and more recent entrants. However, there was no difference in CSF usage between male and female students.
Of those who were aware of CSF, nearly 50% used CSF to store educational materials and 25% had used the synchronization facility to create backups [15,19]. Further, more than 50% of students had used CSF to transfer electronic files [19]. Google Drive was the most popular CSF among the students; 12% accessed it daily and 20% used it more than once a week [21,22]. Dropbox and OneDrive were the second and third most popular CSF among students; however, more than 17% of aware students had never accessed them. Although students were aware of iCloud and Amazon, the majority had never accessed them. This pattern differed from a previous study in which Dropbox was the most popular CSF among students [10,15,20].
We could not elicit a difference in Z-scores at the G.C.E. Advanced Level examination or in grade five scholarship results between students who were and were not aware of CSF. The G.C.E. Advanced Level is the entrance examination to state universities, and the grade five scholarship examination is the selection examination for the prominent national schools in the country. This suggests that CSF awareness does not depend on the nature of the school which students attended, whether or not those schools were equipped with ICT facilities, nor on educational performance before entering the University. We also could not elicit a correlation between CSF usage and examination performance in the University, which suggests that students who were familiar with ICT tools were not at an advantage over others.
The main limiting factors for using CSF were lack of accessibility and concerns about security and privacy [23,24]. Students further mentioned that lack of time and knowledge hindered CSF usage [19]. The majority of those who were aware of CSF were of the view that these facilities can be used for educational purposes, while 13.6% expressed that CSF is of no or little use for academic activities. Of all who participated, 82% were in favor of having training on CSF, and this was 76.6% among those who did not know of CSF. This reflects the requirement for ICT education throughout the medical curriculum rather than limiting it to the first year. Students often neglect ICT lessons during the first year due to the workload of other subjects and do not understand the importance of ICT in continuous professional development.
Limitations of the study
This study has the following limitations. A higher percentage of female students was recruited because we adopted stratified random sampling and females formed a higher percentage of the student body. Unit examination results were available only for the senior students, as junior students had not yet reached the unit examination level. Finally, this study included students from only one state medical faculty of the country.
Conclusion
Our survey results showed that CSF awareness and usage among the students were 56.5% and 50.8%, respectively. Google Drive was the most popular CSF, followed by Dropbox and OneDrive. Lack of knowledge and accessibility, together with concerns about security and privacy, limited CSF usage among students. The majority were interested in undergoing training on CSF, and undergraduate ICT curricula should introduce CSF as an effective educational tool. We emphasize the requirement for ICT exposure in medical education to overcome the technological challenges faced by future doctors. Future research not limited to one institution is encouraged to obtain more generalizable results, although it is challenging to convince every medical institution of the importance of this kind of research.
Additional file 1: Table S1. Number of students allocated to each batch in the study sample.
Abbreviations: CC: Cloud computing; CSF: Cloud storage facilities; ICT: Information communication technology; LMS: Learning management system; TAM: Technology acceptance model
Hematological parameters and malaria parasite infection among pregnant women in Northwest Nigeria
Objective
To evaluate some hematological and anthropometric parameters and malaria infection at different trimesters of pregnancy.
Introduction
Maternal mortality is the death of pregnant women due to complications of pregnancy or childbirth. Of global maternal deaths, 99% occur in developing countries, and Nigeria accounts for 10%, the second highest in the world [1]. About 50% of pregnancies are unplanned [2]; therefore, most women are unprepared for pregnancy, in that its physical, nutritional and physiological demands are not met. During pregnancy, extra calories are needed due to a woman's increased basal metabolic rate and higher energy demands [3]. Prenatal infection is a major cause of maternal, fetal and neonatal morbidity and mortality [4,5]. Nutritional deficits may increase the risk of perinatal infection by diminishing or abolishing protective mechanisms [6].
Infection has a major effect on adverse pregnancy outcomes, an effect which appears strongest among populations that suffer from malnutrition [7]. The most likely mediating factor linking this association is the effect of nutritional status on various host defense mechanisms, and the relationship existing between micronutrient deficiency and infection-mediated adverse pregnancy outcomes [7]. Malaria infection during pregnancy is a major public health problem in tropical and subtropical regions throughout the world [8]. Malaria is the most highly prevalent tropical disease, with high morbidity and mortality and high economic and social impact [9]. This study was centered on pregnant women attending an antenatal clinic, their stages of pregnancy, nutritional status and malaria infection.
Subjects
Fifty pregnant women (6 in the first trimester, 28 in the second trimester and 16 in the third trimester) between the ages of years were enrolled in the Antenatal Clinic of the Family Health Care Centre in Samaru-Zaria, Kaduna State, Nigeria for the study. Ten non-pregnant age-matched women were used as control subjects. Ethical approval was obtained from the Departmental Board of Research and verbal consent was obtained from the subjects.
Methodology
Semi-structured questionnaires were administered to obtain data on demographic and socio-economic variables and on reproductive and medical history. Anthropometric variables were measured while the women were wearing light clothing and were barefoot, using a UNICEF electronic scale by SECA for weight and a height meter for height. Trained medical officers in the health centre assisted with blood collection from the women in the morning, following a standard procedure [10]. Venous blood (5 mL) samples were drawn from the median cubital vein with minimum stasis while subjects were sitting. Part of the blood was slowly ejected into K2EDTA-containing tubes while the rest was left for about 30 min to coagulate. The anticoagulated blood was used to test for malaria parasites [11] and for white blood cell count, packed cell volume and hemoglobin [12]. Random blood sugar was measured by a spectrophotometric method [10]. Serum albumin was measured by the method described by Silverman et al [13].
Statistical analysis
All calculations were done using the SPSS 13 statistical software package. Data are presented as mean ± SD, and statistical analysis was carried out using Student's paired t-test and ANOVA. Differences were considered statistically significant at an error probability of less than 0.05 (P<0.05).

Results

Table 1 shows the characteristics of respondents, with the majority within the age range of 20-29 years; most had secondary education (38%) followed by Quranic/adult education (30%), compared with the non-pregnant respondents, the majority of whom had post-secondary education (70%). The pregnant women were mostly full-time housewives (54%). Table 2 presents the anthropometric characteristics of respondents (values are mean ± SD), which showed no significant difference in weight, height and BMI when compared across the trimesters and with the non-pregnant controls. Table 3 shows mean hematological values (WBC, RBS, PCV, Hb and albumin) for non-pregnant women, pregnant women and the three trimesters, which indicated higher values for all the parameters in non-pregnant women, although the differences were not statistically significant. The prevalence of malaria infection in pregnant women is shown in Table 4: 40% of the pregnant women examined were infected compared with 30% of the non-pregnant women, with those in their first pregnancy (primigravidae) recording the highest infection rate (47.62%). When the data were disaggregated according to the age of the pregnant women (Figure 1), those within ages 23-26 years were least infected (16.7%). Pregnant women in the third trimester had the highest malaria infection (50%), followed by those in the second trimester (35.7%) and first trimester (33.3%). Figure 2 shows the distribution of malaria infection according to the educational status of the pregnant women, which indicated an increase in prevalence with increasing educational status.

Figure 1. Prevalence of malaria parasite in pregnant women by age.
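Although the authors ran the analysis in SPSS 13, the same comparisons can be sketched in R for illustration; the data frame and variable names (`df`, `pcv`, `group`, `trimester`) are assumptions, not the study's data.

```r
# df: one row per woman, with assumed columns
#   pcv       - packed cell volume (%)
#   group     - "pregnant" or "non-pregnant"
#   trimester - 1, 2 or 3 (NA for the non-pregnant controls)

# Two-group comparison of a hematological parameter
t.test(pcv ~ group, data = df)

# One-way ANOVA of the same parameter across the three trimesters
summary(aov(pcv ~ factor(trimester), data = subset(df, group == "pregnant")))
```

In both cases a difference would be taken as significant when the reported p-value is below 0.05, matching the threshold used in the paper.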
Discussion
The aim of this study was to evaluate some anthropometric indices, the hematological profile and malaria infection of pregnant women at different trimesters and to compare them with non-pregnant women. There was no significant difference in the values of any of the anthropometric and hematological parameters analyzed, even when compared at different stages of pregnancy, although there was variation in the actual numeric values. This study agrees with the report of Osonoga et al. but disagrees with the report of James et al., which found significant differences across the trimesters in the values of WBC and PCV [14,15]. Osonoga et al. reported that the lack of significant difference may be due to the quality of healthcare available to the pregnant women and adequate management of their blood profiles with dietary supplementation [14]. However, the mean numeric values for most of the hematological profiles were below the normal range values reported for pregnant women [16]. In highly endemic malarious areas, the prevalence of clinical malaria is higher and its severity greater in pregnant women than in non-pregnant women [17]. This was also true in this study, in which a higher prevalence rate (40%) of malaria infection was recorded in pregnant women compared with non-pregnant women (30%). This was high when compared with another report, which recorded a low prevalence rate (6.8%) [17]. The prevalence rate was higher in primigravidae than in multigravidae, in accordance with the report of Marielle et al. on pregnant women in Gabon, and in women within the age group of 15-18 years [18]. This may be attributed to the low level of immunity in the early periods of conception among women with a first conception, as previously reported [19]. The high prevalence rate in the study area could result in maternal anaemia, as reported by Osonoga et al., which is consistent with the haemoglobin concentrations of the pregnant women obtained in this study [14]. Inadequacies during pregnancy can trigger a cascade of metabolic disorders and result in severe health problems which adversely affect the mother's and child's health by increasing the rate of pregnancy and delivery complications, and contribute to deteriorating fetal development and condition, leading to increased newborn morbidity [20]. The results of this study showed a higher prevalence rate of malaria infection in pregnant women, with those in their first conception (primigravidae) having the highest prevalence. Continuous monitoring of the hematological profile and malaria parasite infection is essential for a better pregnancy outcome.
Conflict of interest statement
We declare that we have no conflict of interest.
Background
The authors tried to provide the background to an important work addressing the question of the etiology of variable hematologic parameters and malaria infection during pregnancy. However, the authors provided only limited references to support their hypothesis from among those that widely exist today. The authors also failed to present a strong argument or rationale to justify the question they wanted to research.
Research frontiers
The question at hand is a crucial research question to be investigated, given the existing and varying literature reported on this topic. However, the design of the research intended for this question has not been well formulated, nor implemented so as to bring up vivid answers.
Related reports
The design of this work was based on a case-control study. This design is well suited to similar etiology studies. Contrary to what was expected, however, the authors reported the findings in a way that differed from the usual reporting frame for case-control studies. It would be more precise if the authors reported the odds of hematologic indices or malaria cases for cases and controls, and their ratios. The current results in this report are difficult to associate with the question at hand.
Innovations & breakthroughs
Difficulty to discern.
Applications
Possible after major revisions.
Peer review
The work presented here can be improved further to make up a case report. Otherwise, major revisions (design procedures, including the appropriate choice of sample size, and results analysis and presentation) will be required to qualify this article as a peer-reviewed article.
"year": 2013,
"sha1": "f6973f714863f601f6238f6ea068d295055a30b0",
"oa_license": null,
"oa_url": "https://doi.org/10.1016/s2222-1808(13)60010-9",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "32f8af47663de46483cd8ce0313819f8284946e5",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Inverse dynamics of underactuated flexible mechanical systems
An alternative approach to the inverse dynamics of flexible mechanical systems is presented. In contrast to a sequential discretization in space and time a simultaneous space‐time discretization is applied to the problem of inverse dynamics of underactuated systems. In particular, a space‐time finite element formulation will be compared with a formulation which is based on the method of characteristics. Numerical examples are presented which underline the importance of solving this class of problems in space and time simultaneously.
Introduction
Servo constraints have been successfully applied to solve the inverse dynamics of discrete mechanical systems. In this approach the equations governing the motion of the discrete mechanical system at hand are supplemented by algebraic servo constraints. The servo constraints serve the purpose of partially prescribing the motion of the mechanical system (cf. [1,5]). In principle the same approach can also be applied to the inverse dynamics of flexible mechanical systems such as elastic ropes and beams. To this end a discretization in space needs to be applied first to generate the discrete mechanical system. Then servo constraints can be appended, leading again to differential-algebraic equations (DAEs). However, the index of the resulting DAEs can be quite large, hindering their numerical solution. In this contribution, an alternative approach will be presented. The present work focuses on mechanical systems whose motion is governed by quasilinear hyperbolic partial differential equations (PDEs). In particular, a system is considered whose motion is governed by

∂_t² r(s, t) = B(s, t) ∂_s² r(s, t) + c(s, t).   (1)

Here, s ∈ S = [0, l] contains the arc-length of a reference curve in R^n_dim, n_dim ∈ {1, 2, 3}, t ∈ T = [0, t_e] is the time domain of interest, and r(s, t) ∈ R^n_dim. We further introduce the space-time domain Ω = S × T. The main task is to find f(t) ∈ R^n_dim for prescribed g(t) ∈ R^n_dim, such that (1) is satisfied. In sections 2 and 3, a formulation based on the method of characteristics and a space-time finite element formulation are introduced, respectively; there, n_dim = 1 and B(s, t) = b² are assumed.
Method of characteristics
The PDE in (1) can be recast as a system of first-order PDEs by introducing q(s, t) = ∂_t r(s, t) and p(s, t) = ∂_s r(s, t):

∂_t q(s, t) = B(s, t) ∂_s p(s, t) + c(s, t),   ∂_t p(s, t) = ∂_s q(s, t).   (2)

Using the column vectors x = (q, p)ᵀ and C = (c, 0)ᵀ, alongside the square matrix A = [0, B; 1, 0] ∈ R^{2×2}, the system of first-order PDEs (2) then reads

∂_t x = A ∂_s x + C.   (3)

In the method of characteristics (cf. [3,4]) a curve t = k(s) is called a characteristic curve if

det(I − k′(s) A) = 0.   (4)

Since the considered problem is hyperbolic, (4) leads to two real-valued solutions k′(s) = ±1/√B for the characteristic curves, along which q(s, t) and p(s, t) satisfy the following system of ODEs:

(dq/ds)_α − (1/k′_α(s)) (dp/ds)_α = k′_α(s) c,   α = 1, 2.   (5)

The expression (·)_α, α = 1, 2, refers to the two characteristic curves, and d/ds denotes the total derivative along the curve t = k_α(s). The resulting system of ODEs (5) can be solved numerically, e.g. by using finite differences. The boundary and initial conditions specified in (1) can be applied directly at the nodes of the characteristic net.

Space-time finite element formulation

Alternatively, (1) can be rewritten as a system of first-order-in-time PDEs:

∂_t r(s, t) = v(s, t),   ∂_t v(s, t) = B(s, t) ∂_s² r(s, t) + c(s, t).   (6)

Multiplying each equation in (6) with a sufficiently smooth test function w_1(s, t) and w_2(s, t), respectively, and integrating over the space-time domain Ω leads, together with the Neumann boundary conditions ∂_s r(0, t) = f(t) and ∂_s r(l, t) = 0 and after integration by parts, to the weak form (7) of the problem. The servo constraint r(l, t) = g(t), which has to be satisfied for all t ∈ T, can be enforced in a weak sense by using a test function w_3(t) to obtain

∫_T w_3(t) · (r(l, t) − g(t)) dt = 0.   (8)
Equations (7) and (8) constitute the newly proposed space-time finite element formulation. The test functions w_1(s, t), w_2(s, t), w_3(t), along with the trial functions r(s, t), v(s, t), f(t), can now be approximated by piecewise continuous polynomials (e.g. Lagrangian shape functions). Applying standard finite element procedures yields an algebraic system of equations for the determination of the nodal degrees of freedom associated with the discrete trial functions.
Numerical examples
Linear elastic bar: A bar with length l, cross-sectional area A, density ρ and Young's modulus E is investigated. The task is to find the force F(t) which acts on one end of the bar (s = 0) such that the other end (s = l) tracks a prescribed trajectory g(t). Assuming linear constitutive relations and linear kinematics, the servo-constrained longitudinal wave propagation in the bar is governed by problem (1), with n_dim = 1, B(s, t) = E/ρ, c = 0 and f(t) = F(t)/(EA). Here g(t) prescribes a sinusoidal rest-to-rest maneuver of the bar end at s = l. The methods presented in sections 2 and 3 both yield numerical results which coincide very well with the analytical reference solution (cf. Fig. 1). For comparison, a spatial discretization of the linear elastic bar using finite elements leads to a semi-discrete servo-constraint problem. The semi-discrete problem can be viewed as a spring-mass system in which the number of masses, say n, corresponds to the number of nodes in the finite element model. It can be shown that the index of the resulting DAEs is 2n + 1, see [2,6]. Accordingly, a reasonably accurate finite element discretization in space yields DAEs with an excessively high index.

Nonlinear elastic string: The second example deals with large planar (n_dim = 2) deformations of an elastic string. In the undeformed stress-free reference configuration the string has length l, cross-sectional area A, density ρ and Young's modulus E. The task is to find the external force F(t) which acts on one end of the string (s = 0) such that the other end (s = l) follows a prescribed trajectory g(t) ∈ R². The motion of the string is characterized by r(s, t) ∈ R², and the force in the extensible string is denoted by n(s, t) ∈ R². Furthermore, b(s, t) ∈ R² denotes a body force per unit reference length, and the stretch ν(s, t) = ‖∂_s r‖ is introduced as a measure of the longitudinal deformation of the string. The motion of the extensible string is governed by a quasilinear hyperbolic PDE (cf. [7]) which, assuming the constitutive relation N(ν) = (EA/2ν)(ν² − 1), again fits into the framework of problem (1) with the square matrix

B = (E/2ρ) [ (1 − 1/ν²) I + (2/ν⁴) (∂_s r ⊗ ∂_s r) ]

and f(t) = F(t) ν(0, t)/N(ν(0, t)).
Nonlinear elastic string: The second example deals with large planar (n dim = 2) deformations of an elastic string. In the undeformed stress-free reference configuration the string has length l, cross-sectional area A, density ρ and Young's modulus E. The task is to find the external force F (t) which is acting on one end of the string (s = 0) such that the other end (s = l) follows a prescribed trajectory g(t) ∈ R 2 . The motion of the string is characterized by r(s, t) ∈ R 2 , and the force in the extensible string is denoted by n(s, t) ∈ R 2 . Furthermore, b(s, t) ∈ R 2 denotes a body force per unit reference length and the stretch ν(s, t) = ∂ s r is introduced as a measurement of longitudinal deformation of the string. The motion of the extensible string is governed by a quasilinear hyperbolic PDE (cf. [7]) which again fits with the square matrix B = E 2ρ 1 − 1 ν 2 I + 2 ν 4 (∂ s r ⊗ ∂ s r) and f (t) = F (t)ν(0, t)/N (ν(0, t)) into the framework of problem (1) by assuming the constitutive relation N (ν) = (EA/2ν)(ν 2 − 1). The numerical results (cf. Fig.2) have been achieved by applying bi-linear shape functions in space-time. | 2019-09-17T02:45:43.902Z | 2019-09-04T00:00:00.000 | {
"year": 2019,
"sha1": "e9a243e91507def4a7146f6f7e6dbec01a12731c",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1002/pamm.201900458",
"oa_status": "HYBRID",
"pdf_src": "Wiley",
"pdf_hash": "3c47db4172752d63351e59f0f2b34c4013ed41d1",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
“The health equity curse”: ethical tensions in promoting health equity
Background Public health (PH) practitioners have a strong moral commitment to health equity and social justice. However, PH values often do not align with health systems values, making it challenging for PH practitioners to promote health equity. In spite of a growing range of PH ethics frameworks and theories, little is known about ethical concerns related to promotion of health equity in PH practice. The purpose of this paper is to examine the ethical concerns of PH practitioners in promoting health equity in the context of mental health promotion and prevention of harms of substance use. Methods As part of a broader program of public health systems and services research, we interviewed 32 PH practitioners. Results Using constant comparative analysis, we identified four systemic ethical tensions: (1) biomedical versus social determinants of health agenda; (2) systems driven agendas versus situational care; (3) stigma and discrimination versus respect for persons; and (4) trust and autonomy versus surveillance and social control. Conclusions Naming these tensions provides insights into the daily ethical challenges of PH practitioners and an opportunity to reflect on the relevance of PH frameworks. These findings highlight the value of relational ethics as a promising approach for developing ethical frameworks for PH practice. Supplementary Information The online version contains supplementary material available at 10.1186/s12889-021-11594-y.
Background
Health inequities result from an unequal distribution of the determinants of health, disadvantaging those who lack wealth, power or prestige [1,2]. Health inequities increase as social position decreases [3,4]. The lower the social position, the higher the concentration of harms from illicit substance use, poor mental health, unmet health care needs, and difficulties accessing health care [5,6]. These harms often intersect with, and are exacerbated by, the stigma and discrimination associated with drug use, mental illness, poverty, and marginalized social location, further affecting health and well-being [7-11]. In such situations, the promotion of health equity raises questions of justice related to the structural conditions that create inequities, such as who has access to resources for health and how structural disadvantages limit access [12].
The twin moral aims of public health are to promote population health and reduce health inequities [13,14]. Promoting health equity and social justice are part of the ontological foundation of public health (PH) [15][16][17]. Barrett et al. [18] specifically highlights health equity as an important area of concern for PH ethics. Although there are strong national and international commitments to health equity as a key goal of health systems and services [19,20], the degree to which organized health systems actually undertake these moral aims is contested [21,22]. Delivering health services is a complex process with significant political and economic influences and multiple competing demands. However, while PH has an obligation to promote health equity, doing so within a health system that does not prioritize health equity is challenging and a source of ethical tensions that are often not well articulated [23].
In British Columbia, Canada, calls for PH system renewal led to the development of a framework for core functions in public health [24] and later the guiding PH framework [25]. Both the core functions and guiding PH frameworks include a directive to apply a health equity lens in all PH programs. Of the 21 core programs, our PH knowledge user partners identified mental health promotion and prevention of harms of substance use as two important areas to understand and learn about the application of a health equity lens and the ethical issues encountered in promoting health equity. The purpose of this article is to describe and discuss the ethical tensions experienced by PH practitioners with obligations to promote health equity in the core public health areas of mental health promotion and prevention of harms of substance use. We begin with some background on PH ethics followed by a description of the study methodology and then present and discuss findings related to four key ethical issues experienced by PH practitioners.
There are a range of ethical issues identified in the PH literature related to infectious disease control, emergency preparedness, public health communication, cost-effective decision-making and more [18,26]. For countries without universal healthcare, the lack of universal insurance and the uninsured are key ethical and health equity concerns [27]. Although Canada has a system of universal health care for accessing doctors and hospitals, health inequities in Canada are growing, and PH providers often work in health systems that are neither aligned with PH values of social justice nor attentive to the broader determinants of health [28-31].
Historically, bioethics has focused on individual relationships and clinical biomedical issues concerned with "right" courses of action, primarily in acute care settings, with a lack of attention to ethical issues in public health [32]. Furthermore, dominant bioethical frameworks that focus on individuals and biomedical issues do not address PH ethical concerns adequately [33-35]. Thus, PH ethics is developing distinct from clinical bioethics, with a focus on beneficence, respect for persons, and justice [36-40]. Unlike clinical bioethics, PH ethics: (1) concerns populations, public policy and policy structures rather than individuals; (2) places equity at the forefront; (3) includes actors outside the healthcare sector; and (4) focuses on prevention of illness and disease, and health promotion [41].
Several authors have proposed health equity as foundational to PH ethics [13,15-17,42]. Dan Beauchamp wrote that "public health should be a way of doing justice" ([17] p. 8), identifying public health with social justice. Peter [43] applies Rawls' conception of justice as fairness to argue that social inequalities are wrong when they stem from unjust social, political and economic institutions, stating further that "it thus embeds the pursuit of health equity in the pursuit of social justice in general" (p.160). Faden and Powers offer "a non-ideal theory of justice, intended to offer practical guidance on questions of which inequalities matter most when just background conditions are not in place" ([16] p. 30). Their Twin Aim Theory of Social Justice [44,45] explicates six elements of well-being as criteria for procedural justice at the level of the individual, whilst simultaneously addressing the design and reform of social arrangements to guard against systematic patterns of injustice. Their work sketches out normative ethical guidance for policy makers [16].
As Peter [43] points out, Rawls did not include health as a primary good but did include self-respect as an important primary good; if certain social positions are devalued "such that people cannot gain a sense of self-respect, then these structures are unjust" (p.167). Jennings [46] draws attention to political theory and relational interpretations of agency, autonomy, and justice, as well as values of collectivity, equal respect, parity of voice, mutuality, and solidarity, as important to conceptions of PH ethics. Baylis, Kenny and Sherwin's [15] and Kenny, Sherwin and Baylis' [42] conception of PH ethics emphasizes solidarity and the public good. Using relational theory, these authors see individuals as interdependent and socially, politically, and economically situated. Rather than listing a hierarchy of principles in which independent autonomy is privileged [47], this approach holds competing ethical issues in tension towards the interdependent aim of the public's health, while recognizing that persons are not all equally situated in relation to opportunities for health [42]. It thus recognizes the many ways in which persons can be differentially constrained based on different social locations. The PH practitioner's task is to make visible the impacts that result from policy and healthcare decisions, with a view to equitably balancing competing demands amongst differently situated social groups.
However, to develop normative guidance for PH, it is critical to ground ethical theory and perspectives in the everyday ethical concerns that arise for practitioners [48]. Applied ethicists Leget, Borry and DeVries [49] argue that a critical ethical approach integrating empirical research and normative ethical theory can clarify issues and has the potential to set the conditions for supporting real world practice through an ongoing dialectical process. Thus, there is a role for research in shaping the development of ethical frameworks recognizing that these frameworks should inform practice with the reverse also being true. Past investigations of ethics in public health practice have identified issues related to collaboration, priority setting, resource allocation and decision making [50][51][52]. We found one study of public health decision makers and health equity [53] but none that focus explicitly on public health practitioners and promotion of health equity. A better understanding of the ethical concerns of PH practitioners specifically in relation to their efforts to reduce health inequities is essential to inform frameworks that are relevant and attuned to development of ethical and more equitable PH decision-making and practice.
Methods
This study is one of four interrelated studies in a broader program of public health systems and services research with an overarching aim to generate knowledge about the integration of an equity lens in PH during a time of PH system renewal in the province of British Columbia (BC), Canada [54]. The PH areas of mental health promotion and the prevention of harms of substance use were the focus of a constructivist grounded theory study to explore the process of how PH practitioners navigate health equity work [55,56]. As part of this grounded theory, we identified a range of ethical issues in the promotion of health equity. Understanding these issues is an important starting point for understanding the process of navigating health equity work. This study received ethical approval from the University of Victoria, University of Saskatchewan, and six participating Health Authorities (HA) (REB# H11-03359). We obtained written consent from participants.
Sampling and data gathering
We used purposive and snowball sampling in collaboration with HA partners to identify PH practitioners with PH responsibilities for mental health promotion and prevention of harms of substance use in their work. The sample consisted of 32 participants, with whom we conducted 29 semi-structured interviews (face-to-face and phone) and one focus group of 3 people. We developed the interview guide for this study, and it is available as a supplemental file. Participants represented five regional HAs and one Provincial Health Authority. Participants had, on average, 10.26 years of PH experience. Registered nurses (RNs) made up the majority of participants (n = 25), and all but one participant had post-secondary education. Interviews averaged 60 min and were recorded and transcribed verbatim by an experienced transcriptionist and verified by the study research assistant.
Data analysis
We employed constant comparative analysis, a method foundational to grounded theory and an accepted approach to qualitative enquiry [57-59]. This method involves detailed coding to develop concepts and relationships among the codes by comparing incident-to-incident, incident-to-concept, and concept-to-concept to take the analysis from the "ground" up through higher levels of abstraction [60]. Our team of five researchers (four faculty and one research assistant) generated a set of inductive codes for the coding framework from line-by-line coding of the initial three interviews. The research assistant continued coding the interviews line-by-line using the initially developed framework and adding to the coding structure using NVivo software. The team conducted in-depth analytic discussions to compare incidents within the same interview, across interviews, and over time to establish relationships and differences between incidents and concepts [55,60]. We employed memoing, diagramming, and reflexivity in the conceptual development of the key themes and to enhance rigor. The eventual grounded theory (GT) is reported elsewhere [56]. In this paper, we describe in depth one element of the grounded theory: the specific ethical tensions identified by PH practitioners that arose from their practice promoting mental health and preventing harms of substance use.
Results
Participants identified four systemic ethical tensions: (1) biomedical versus social determinants of health agenda; (2) systems driven agendas versus situational care; (3) stigma and discrimination versus respect for persons; and (4) trust and autonomy versus surveillance and social control. In describing these four tensions, we lay out the full range of ethical issues expressed by participants as a group. The extent to which individual participants were aware of or described these tensions varied across participants. While some practitioners expressed a high degree of awareness of a range of issues, other practitioners were less aware of the broad range of ethical issues related to health equity work. Thus, the lens for viewing what constituted an ethical issue in health equity work varied across participants. As described elsewhere [56], PH practitioners with a critical public health 'lens' were more likely to recognize and experience ethical tensions. For each of these ethical tensions, we describe the underlying values conflict and identify the ethical concerns from the perspectives of PH practitioners. Although we present each of the four areas of ethical tension separately, they are interrelated, and PH practitioners often must navigate these issues simultaneously to promote health equity.
Theme 1: the health equity curse: biomedical versus social determinants of health agenda
Participants identified a primary area of ethical tension as the dominance of a biomedical agenda that obscured the PH focus on health equity, with a subsequent lack of focus on the social determinants of health and systemic responses to reduce health inequities. They defined the biomedical agenda as the dominance of acute care priorities with a consequent emphasis on the treatment of disease, illness and injury for individuals rather than prevention and promotion. The following participant working on an integrated outreach team describes: We've got the swing how to get people tested to make sure that the medication is working for them. So the medical system is actually quite good, quite slick. But in terms of the support for the other parts of their sort of hierarchy of needs, the housing, the food, you know all the stability that goes along with, or instability that goes along with poverty, I think we are still a long way from sorting that out. (S4-29) Participants observed that within a biomedical system, there is a lack of understanding and valuing of PH work especially the work of prevention.
It's really hard to say we prevented this mom from harming her baby, or we prevented this mom from having a postpartum psychosis and going into a hospital … it's really hard to say we prevented something from happening. And so, you know, money and hospitals, people can see when they're voting or putting money into the healthcare system, they can say, okay this x-ray machine does x-rays that prevents pneumothorax, which prevents death. Right? So that's an easy thing for people to see, but in prevention and health promotion, it's really hard to say, you know, having these clinics will prevent something from happening because the outcome should be nothing. And it's really hard to prove nothing. (S4-04) Furthermore, participants highlighted that even within PH what seemed to matter was communicable disease prevention and a focus on secondary prevention with less attention to primary prevention or health promotion such as described by the participant above.
And so the (name of organization) says that they're into health equity, and they say they understand social determinants of health, but if you look at everything they do and all of their work plans and stuff, they're all about some bugs and viruses, and emergency, you know, Ebola responses … ..That seems to be the level of where we sit with health equity and they don't know how to talk about-or they don't publicly talk about what health has to say about the … the systemic stuff that we have, our policies that create health inequity. (S4-02) While participants recognized that the health system was not solely responsible for addressing the determinants of health, the lack of value for the PH role and focusing upstream on the root causes of health inequities were often described by participants as unimportant to health systems. This same participant describes the daily ethical challenges of working in health care organizations that do not embrace health equity: We're ethically challenged every day because we work in health care in a place where people don't have adequate services, don't get treated well in the system, don't have proper housing. So, sort of a different level for me. I was just going to say … . once you start seeing the world through a social determinants lens, it's like you'reyou can never go back and it's a bit of a curse. It's not easy. (S4-02) Other participants also shared that once you saw the world through a health equity lens it raised more ethical challenges than if you did not, because it means living with the moral discomfort of knowing what is needed but being unable to act. They acknowledged that sometimes it was easier for them and their colleagues not to hold a health equity lens because accepting the status quo reduces discomfort. This paradoxical "health equity curse" included knowing that clients, families, groups, communities and populations are experiencing deficits in resources for health but their needs do not fit into the dominant model of biomedical care.
This lack of access to resources for health was a fundamental ethical issue and experienced as morally distressing, as described by the following participant: Because you know, you're stuck … .you can't give people a better house … ..You can't get them a sink, you can't give them the basic needs, right? So you are, you're very torn and almost feel guilty at the end of the day when you go home and you think, like Working within a healthcare system that fails to act on the determinants of health weighed heavily on the PH practitioners in our study. They felt that they had few, if any, resources available to address determinants of health or the structural causes that produced health inequities or the subsequent distress associated with being aware and unable to act.
Theme 2: procedures, checklists and checkboxes: systems driven agendas versus situational care

Participants highlighted how the pressure of meeting systems requirements drove PH work rather than the situational needs of clients. Participants pointed to systems requirements such as procedures, guidelines, checklists and checkboxes as the drivers of their work and ultimately actions/inactions taken to promote health equity. One participant described: You know, public health is so indoctrinated with policies and procedures and guidelines and charting, and again, that often gets taken up with, you know, what's being delivered from above into how we do our work, So again, it's not really about the clients themselves and the work with them, but it's about the criteria put out by [Health Authority]. (S4-12)
So what I mean by that is probably in this office if we were to do ideal nursing work or ideal support, family support work, we would be able to call all the moms and ask them what they wanted from us and be able to implement that whether it's going out to see them in their homes, or taking them to, you know, the store to buy proper food, you know, helping them, whatever they wanted, whatever they felt that they needed at that time to meet where they were at. If we were able to do that without constraints of resources and um checklists and things like that, … ..So I think for us, all our ethical dilemmas come from the facts that we work on the ground very differently than what the people who create our resource pool and our jobs, and our job description work from. (S4-04).
The examples above also highlight a move away from universal to targeted programs with a focus on standardized criteria for assessing risk. One participant describes the evolution of this shift.
You know there's always a nurse available that if a parent had been discharged with a new baby, they would get a home visit to make sure that things are going well, you know, to do an assessment on their mood. So the universal program over the last number of years is getting more streamlined into more targeted populations, the higher risk group or the higher priorities is how they term it in public health. So that, the universal approach, is kind of shifting a bit, looking at budgets, you know, how to invest your money, right? But always I feel that with the thought of universal approach, a lot of people get kind of lost - because it's not always obvious that there's issues, right? (S4-12)

Systems requirements related to program eligibility, procedures, checklists and checkboxes, and the shift from universal to targeted programs, move the focus away from promoting equity, in that resources cannot be allocated on the basis of assessed need. Some practitioners pointed to the mantra of patient-centered care as a health system priority, with little attention to the social conditions that impact individual health, reflecting a value of individualism/neo-liberalism. Thus, there is a tension between systems-driven agendas, in which the focus is on meeting the demands/needs of the system, and situationally driven care, in which individuals and their needs are understood within a set of social circumstances.
Theme 3: systemic stigma and discrimination versus respect for persons
Participants described stigma and discrimination as pervasive within health care systems. They described witnessing various forms of stigma related to mental illness, substance use, addiction, HIV, blaming and criminalizing of people experiencing health inequities.
I find there's more judgement. You know … not having the same kind of emphasis or compassion, or understanding of the complexities of health inequities, you know, and the determinants of health, even though that is part of the lens in public health, there's still sort of … there's a certain attitude of like they choose just for themselves. (S4-12) The quote above highlights a dominant understanding that it is the individual who is to blame (e.g. they choose this for themselves) rather than a recognition of systemic inequities. The participant below describes how this plays out specifically related to mental health and substance use.
Oh we won't treat you if you're using and if you're mentally ill … maybe it's because of your use, so therefore we won't deal with you, I think that really reflects our society's attitudes, about, probably our state-about how we feel about mental health and how we feel about addiction, right? So if you're so … I don't know, you know, 'lazy', or 'unorganized' or 'undisciplined enough' to be using something, we're not-so this is an underlying theme with addiction: you know, you're not -you're just a drain on our system and so, you know, you're wasting bed space here because you're addicted to something … So we want you to get it together before you come back … So there's that serious underlying theme that threads through how, I think, our society sees people who use drugs. And then, you know, so if they come in using and with mental health, that kind of gets layered into how they're treated. (S4-02) While participants recognized stigma of mental health and substance use as in the example above, they were less likely to name intersections of stigma with various other forms of discrimination related to ethnicity, sex and gender.
You know, there's certain gaps for instance for the First Nations population who don't live on reserve, they can access our services, right? But, you know, there's just always, maybe not as comfortable to walk into our building that's very clinical and very institutional feelingit's a very old building. You know, big counter, so I mean I think that can be a barrier for people feeling comfortable to access the services. (S4-12) Although this PH practitioner did not directly name racism or link the 'institutional feeling' to a colonizing history, racial discrimination compounds other stigma related to mental health and substance use. Participants did at times identify sex and gender as areas of discrimination but did not necessarily recognize or identify the intersections of various forms of stigma and discrimination.
As a result of various forms of stigma and discrimination, participants described healthcare systems as producing mistrust and affecting health care experiences of populations they were working with.
And because they've probably been treated in the past, they're not wanting to access service and they mistrust now … .The majority of my clientele that I work with will not, and I've never seen this before, will not go to the hospital. And I kid you not, until it's almost too late or too late. I've never seen that, because of how they've been treated. (S4-25)

Participants described how their clients' concerns were often dismissed outright and how the challenges related to system processes, such as navigating through bureaucracy, filling out multiple forms and getting through gatekeepers, were daunting. This created ethical concerns related to the personal capacity and energy required of clients and practitioners to access a system that is highly stigmatizing and limited in what it can provide. For example, one participant described the work of carefully choosing terminology in documentation to favorably present a client so that they could get access to housing, describing this as 'fudging it' rather than seeing it as a way to reduce stigma, knowing that housing was a scarce commodity in the community.
Several participants discussed how the line between practitioner and client experiences is not so distinct. Some participants self-identified as having past problematic drug use, being gender non-conforming, having experience with mental health issues, or having family members or loved ones in need of mental health or substance use supports. One participant described how their identity as queer was not recognized as an asset in the workplace but rather something that they had to manage carefully in terms of who they shared this information with. Finally, because the work of PH practitioners brought them close to groups that are so often stigmatized, they found themselves personally impacted by stigma: "I think the work that we do is also stigmatized. Like, our clients are stigmatized for their health and social status and we are stigmatized for working with them" (S4-20). Thus, they had to navigate stigma and discrimination on multiple fronts, for themselves and their clients. However, there was seemingly little appetite to address systemic stigma within organizations.
My agency ... they say they care about these issues [of equity], but if we start talking about them too much, they tell us to not talk openly about it. Yeah. Like a few of us will get quite fired up every so often about how they profile groups, and then, you know, say "These people are more at risk" and they stigmatize them sort of there, or don't look at all the complexities that go into why that group, you know, is more vulnerable. (S4-02)

For these practitioners, discussion was stifled, leaving sources of inequity unaddressed and free to continue operating in the very systems meant to provide care.
Theme 4: trust and autonomy versus surveillance and social control
The context of relationships between practitioners and clients was one of mistrust due to systemic stigma and past negative experiences in healthcare. Consequently, participants indicated that building and preserving trust and autonomy were priorities that sometimes came into conflict with organizational or legislated demands that required measures of surveillance and at times social control.
Participants particularly noted concerns related to trust and autonomy around maintaining confidentiality and consent regarding communicable disease reporting to protect the public. Participants shared how navigating STI reporting requires a nuanced approach to keep clients engaged in care and meet population health mandates. It takes time to build trust, learn details and assess risks in a situation, as well as to make decisions about how to reduce both individual and population risks. One practitioner described working with a client who was HIV-positive and had not told her partner.
Only a few hours ago we were faced with this ethical issue where one of our clients who comes up from time to time, where we know that she has an ongoing relationship with someone who isn't aware of her HIV status. And so that's always a bit of an issue ... but they aren't sexually active, so it hasn't been a big concern to us that he doesn't know. But he said that yesterday he was picking her up and then he was poked with a needle. And so suddenly I'm thinking he needs to know so he can access care, he should be offered post exposure prophylaxis and the window is so short for that. But we can't inform him and break her confidentiality. I wonder if we can find her to talk to her and let her know, like "hey this is what he told us. Can we work with you at all to disclose?" . . . So I was sort of sitting here thinking "I can't not do anything" . . . And I think really feeling the pressure of it because of it being this short time frame where we if we can get anything happening, we need to get it happening now. (S4-23)

This exemplified how practitioners work to preserve trust while being finely attuned to their clients and their clients' particular situations as they navigate their obligations in the face of possible risk to the public. As our participants described, approaching disclosure in a client-led way was emotionally intense and required persistent engagement and ongoing assessment. Acting prematurely might cause the client to disengage and lose trust in the practitioner, in turn increasing risks to population health. Thus, knowing when it was appropriate to break confidentiality to disclose private information was a delicate relational dance in which the practitioner had to balance the relationship with the client and the health of others as circumstances unfolded. Turning to a different form of surveillance, the practitioner below describes being requested to check up on a client.
And the times where I feel like my ethics have been compromised is where I've been asked to have quite a specific follow up. Like, you know, one example would be to call the doctor to make sure that the client attended for a baby checkup. Or something like that. For me, .., if that was an agreement I had with the client already personally, I would feel okay about that. But for me as maybe we've never even met, that feels like policing and that feels unethical to me.
(S4-21)
This participant highlights that being asked to check on someone is a form of policing in healthcare that feels unethical. Other participants described ethically challenging situations such as knowing when to call the police or child protection, knowing that such calls would bring in systems of social control and work against the hard-fought earning of trust. Participants described being in the position of working to preserve trust and autonomy with their clients while attempting to manage surveillance and social control to prevent further inequities. We would note that a focus on managing surveillance and control takes the emphasis away from providing support and access to resources that can promote health equity.
Strengths and limitations
There are several limitations of this analysis. First, it specifically focused on issues experienced by PH practitioners in promoting mental health and preventing the harms of substance use at a specific point in time. Ethical issues may be different in other PH core program areas. Yet this may be an area of PH work where a lack of attention to the social determinants of health (SDOH) was more readily visible and apparent, while at the same time it may not reflect health system programs that connect people to the SDOH, or programs within health authorities that address SDOH-related issues such as housing. Second, this study took place in one provincial geographic context in Canada, representing rural to urban settings with different systems of health care delivery across six different publicly funded Health Authorities. Reporting to the provincial Ministry of Health, each Health Authority delivers the same public health services, tailored to its context. However, this may also be a strength and contribute to the opportunity to extend these findings to other contexts.
Discussion
In this paper, we have sketched out systemic ethical challenges in PH practice related to the promotion of health equity. We specifically outline four systemic ethical challenges that arise in PH practice related to the dominance of biomedicine, bureaucratic systems, systemic stigma and discrimination, and potential systems of surveillance and control. All of these shape PH providers' interactions with clients and affect their ability to promote health equity. The dominance of biomedicine in the health care system focuses action on the treatment of disease in individuals, leaving little space for public health and little attention to the broader social determinants of health, or more simply, the conditions in which people live and work. Systems requirements, often infused with the values of biomedicine and individualism (such as procedures and standardized checklists), do not account for the unique situatedness in which individuals, groups and communities are positioned. In a health care system dominated by biomedicine and bureaucratic approaches to care, and marked by the presence of stigma, it is often difficult for PH providers to meet obligations related to health equity, leaving them with the burden of unmet needs and feeling as though health equity is a curse. This study provides insights into health equity issues in public health that are only briefly mentioned in other studies of public health ethics issues noted at the beginning of this paper.
Notable in the ethical concerns of PH providers is the pull of biomedical, neo-liberal and individualist discourses that obscure the broader social and often structurally violent conditions that produce vulnerability to health inequities. Similarly, in a systematic review of literature on health equity, Farrar and colleagues [61] identified that capitalism, biomedicine and difficulties with collaboration impact the ability of public health to advocate for health. In fact, Smith [53] found that PH decision makers were uncomfortable with and shied away from issues related to justice and power. Participants described having to practice within health systems that drift toward targeting behaviors of individuals, groups, and populations rather than recognizing the social, economic, historical, and political risks and social conditions that impact health. Despite growing evidence that targeted behavioral approaches have limited utility for groups experiencing disadvantage [62,63], are ineffective [64], and may, in some cases, even widen inequities [65], lifestyle and behavioral approaches still dominate within Canadian PH policy [21,64]. In fact, much of what participants are calling for here in relation to assessing and providing resources based on need is aligned with proportionate universalism, an approach in which health actions are universal but provided in proportion to the level of disadvantage [66].
Stigma related to illicit substance use, homelessness, mental illness, HIV, and Hepatitis C often intersects with forms of discrimination including racism, classism, and gender bias [8]. These findings broaden the understanding of stigma as an ethical issue in healthcare beyond association with disease conditions to encompass poverty and substance use stigma [67]. Stigma and discrimination contribute to social exclusion, limit access to resources for health and exacerbate health inequities. It is clear from our findings that stigma is pervasive in healthcare, reflecting unjust structural arrangements that limit the achievement of health equity. Of note, participants often spoke to one but not multiple sources of stigma. It is not clear whether this is due to the complexity of multiple stigmas and discrimination and/or a lack of knowledge about how various forms of stigma and discrimination compound, creating even greater inequities for some. Furthermore, it is of serious concern that participants felt that their attempts to address systemic stigma within healthcare were stifled and that they experienced stigma by association, also known as courtesy stigma [68,69].
All PH practitioners in our study described the negative effects of seeing firsthand or hearing stories from their clients about the challenge of living with health inequities. They bore witness to the interface of clients with the healthcare system and the inability of such systems to address health inequities. Thus, what became clear is that PH practitioners' call to act is not an abstraction but comes from their professional obligations and from working with, alongside, and in communities experiencing health and social inequities. Much has been written about moral distress in acute care with less attention to moral distress among PH practitioners [70][71][72]. What is strikingly similar is the degree to which PH practitioners are bearing witness to systemic issues over which they have little control (e.g. structural conditions) and as a result feel powerless to assist their clients even when health equity is articulated as important and expected [25].
Reducing health inequities must include actions that: (1) improve the conditions of daily life; (2) tackle the inequitable distribution of power, money and resources; and (3) measure and understand the problem and assess the impact of action (Commission on the Social Determinants of Health, 2008). As Jennings [46] observes, both political and moral theory are important to the future of public health ethics. Given the ethical issues identified in this study, we could not agree more. Addressing biomedical, bureaucratic and individualist ideology as well as stigma and discrimination is inherently political. Jennings [73] expands on the need for public health ethics to be informed by concepts of relational solidarity and care. More specifically, Baylis, Kenny and Sherwin's [15] relational conception of PH ethics resonates with the issues related to health equity in PH practice. Relational perspectives are reflected in the narratives of participants, highlighting the importance of understanding how individuals are situated as part of health equity work. This is directly aligned with health equity perspectives that focus on the importance of social, historical, political and economic positioning for understanding individual and community access to the social determinants of health [1]. We would highlight that the ability to recognize ethical issues in practice is part of the competencies of public health providers and that these competencies include naming ethical issues as well as identifying appropriate courses of action. In view of these findings, achieving such competencies is more a curse than an achievement in health systems that do not operationalize values of health equity.
Conclusion
The ability to enact PH as social justice is largely constrained within a system that privileges biomedicine and bureaucracy even when there are commitments to health equity. As well, stigma and discrimination are embedded deeply within health care systems, and practitioners are constrained by the system itself in their ability to disrupt these patterns of devaluing and exclusion. Critically important to working with individuals and groups experiencing health inequities is the ability of practitioners to think situationally and to preserve trust in relationships, but this often comes into conflict with duties to protect the public as well as societal pressures to surveil and exert social control. As with any complex challenge, there need to be multiple avenues to action to transition to equity-focused health care systems. Decision makers and public health practitioners need to examine and track the process of operationalizing values of health equity, as values alone are not enough [74]. Setting benchmarks and reporting on progress can help to alter the culture of a biomedically focused system and provide support for practitioners to encourage shifts toward equity-oriented systems. Additionally, all health care practitioners need improved education and support in the application of structural competencies, with encouragement toward critical decision-making rather than bureaucratic checklists. We highlight that the frameworks and perspectives of relational ethics offer a more promising approach for recognition of and attention to health equity issues in public health.
Abbreviations
PH: Public health; RN: Registered nurse; SDOH: Social determinants of health
Additional file 1. Interview Questions. S4 - Supplemental File. Interview transcripts. Semi-structured interviews were guided by the use of the interview questions for both the individual and focus group interviews. Interviews were audio recorded and transcribed verbatim.
"year": 2021,
"sha1": "311e89890428bff429fb3b4ae397fe893c72223b",
"oa_license": "CCBY",
"oa_url": "https://bmcpublichealth.biomedcentral.com/track/pdf/10.1186/s12889-021-11594-y",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "52029c260619e43c3a3ba5a7aec1b75ed195d764",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
The COX-2-derived PGE2 autocrine contributes to bradykinin-induced matrix metalloproteinase-9 expression and astrocytic migration via STAT3 signaling
The matrix metalloproteinase-9 (MMP-9) is up-regulated by several proinflammatory mediators in central nervous system (CNS) diseases. Increasing reports show that MMP-9 expression is an inflammatory biomarker of several CNS disorders, including CNS inflammation and neurodegeneration. Bradykinin (BK) is a common proinflammatory mediator and is elevated in several brain injury and inflammatory disorders. Elevated BK may have detrimental effects on the CNS, aggravating brain inflammation through MMP-9 up-regulation or cyclooxygenase-2 (COX-2)-derived prostaglandin E2 (PGE2) production in brain astrocytes. However, the relationship between BK-induced MMP-9 expression and COX-2-derived PGE2 release in brain astrocytes remains unclear. Herein we used rat brain astrocytes (RBA) to investigate the role of the COX-2/PGE2 system in BK-induced MMP-9 expression. We used zymographic, RT-PCR, EIA, and Western blotting analyses to confirm that BK induces MMP-9 expression via a COX-2/PGE2-dependent pathway. Our results show that activation of native COX-2 by BK led to PGE2 production and release. Subsequently, PGE2 induced MMP-9 expression via PGE2 receptor (EP)-mediated c-Src, Jak2, and ERK1/2 signaling and then activation of the signal transducer and activator of transcription 3 (STAT3) pathway. Finally, up-regulation of MMP-9 by BK via this pathway may promote astrocytic migration. These results demonstrate that a novel autocrine pathway for BK-induced MMP-9 protein expression is mediated through activation of STAT3 by native COX-2/PGE2-mediated c-Src/Jak2/ERK cascades in brain astrocytes.
Background
The cyclooxygenase-2 (COX-2), known as prostaglandin (PG)-endoperoxide synthase, is inducibly expressed in several tissues by various stimuli to promote PG biosynthesis, especially PGE 2 , during inflammatory responses in several cell types [1][2][3][4]. Previous studies have shown that overexpression of COX-2 is detected in various inflammatory tissues including macrophages and vascular cells of patients with atherosclerosis. Several lines of evidence have further indicated COX-2 as a major therapeutic target for the treatment of inflammatory disorders [1]. Moreover, homozygous deletion of the COX-2 gene in mice leads to a striking reduction of endotoxin-induced inflammation [5]. Therefore, COX-2 may play a crucial role in the development of various inflammatory disorders. In the brain, up-regulation of COX-2 leads to increased production of PGs, which may be associated with central nervous system (CNS) inflammation and neurodegenerative disorders [6]. Moreover, we have demonstrated that several proinflammatory mediators such as bradykinin (BK) can induce COX-2 expression and PGE 2 production in brain astrocytes [7]. Thus, the COX-2/PGE 2 system may act as a critical pathological mediator in brain inflammatory diseases.
Matrix metalloproteinases (MMPs) are a large family of zinc-dependent endopeptidases crucial for the turnover of the extracellular matrix (ECM) and for pathophysiological processes [8]. In the CNS, MMPs, especially MMP-9, have been demonstrated to participate in morphogenesis, wound healing, and neurite outgrowth [9]. Several lines of evidence have shown that up-regulation of MMP-9 may contribute to the pathogenic processes of brain diseases following several types of brain injury [10]. Moreover, several proinflammatory mediators such as cytokines and endotoxin have been shown to induce MMP-9 expression and activity in rat brain astrocytes [11,12]. Our previous studies have shown that several proinflammatory mediators including BK can induce MMP-9 expression and MMP-9-related functions in brain astrocytes [13]. These studies indicate that MMP-9 may play a critical role in brain inflammation and disorders, which aroused our interest in investigating the correlation of the COX-2/PGE 2 system with MMP-9 regulation in brain astrocytes. Here, we used RBA cells as a model to investigate the role of the COX-2/PGE 2 system in BK-induced MMP-9 expression and related events such as cell migration.
Astrocytes are one type of glial cell in the CNS and have been proposed to exert a wide range of functions, including participating in the immune and repair responses to brain injury and disease [14,15]. Following injury to the human CNS, astrocytes become reactive and respond in a stereotypical manner termed astrogliosis [16], which is characterized by astrocyte proliferation and functional changes in inflammatory diseases [17]. In the brain, BK and related peptides are released during trauma, stroke, and neurogenic inflammation [18][19][20], and may play a critical role in the initiation of CNS inflammatory diseases. All of these pathophysiological processes may involve inflammatory reactions regulated by the COX-2/PGE 2 system. However, the effect of the COX-2/PGE 2 system on BK-induced MMP-9 expression is still unclear, although we have demonstrated that BK induces COX-2 and MMP-9 expression in brain astrocytes [7,21].
Astrocytes are known to express the B2 BK receptor [15,22], a heterotrimeric G protein-coupled receptor (GPCR) that has been thought to couple to PLCβ via interaction with Gq proteins [23]. Activation of BK receptors may induce cell responses or gene expression via several signaling molecules, including PKCs, Ca 2+ , and mitogen-activated protein kinases (MAPKs), in several cell types [24][25][26]. In addition, BK has been shown to regulate the activity and expression of COX-2 through different mechanisms in diverse cell types including astrocytes [7,27,28]. Likewise, BK induces the activity and expression of MMP-9 via several pathways in brain astrocytes [21,29]. However, the signaling mechanisms underlying BK-stimulated, COX-2-derived PGE 2 release associated with MMP-9 gene expression in brain astrocytes remain unclear. Thus, the involvement of the COX-2/PGE 2 system in the up-regulation of MMP-9 expression by BK was also investigated here.
In this study, we investigated the molecular mechanisms underlying BK-induced MMP-9 expression in rat brain astrocytes (RBA). The results suggest that BK-induced MMP-9 expression is mediated through activation of COX-2-derived PGE 2 release. The released PGE 2 acts as an autocrine signal to activate c-Src, Jak2, ERK1/2, and STAT3 in a PGE 2 receptor (EP)-dependent manner, leading to up-regulation of MMP-9 in RBA cells. These results provide new insights into the inflammatory mechanisms of BK and COX-2/PGE 2 action, which may be recognized as therapeutic targets in brain inflammatory diseases.
Cell cultures and treatments
The rat brain astrocytic cell line (RBA, CTX TNA2) was purchased from BCRC (Hsinchu, Taiwan) and used throughout this study. Cells were plated onto 12-well culture plates and made quiescent at confluence by incubation in serum-free DMEM/F-12 for 24 h, and then incubated with BK at 37 °C for the indicated time intervals. When the inhibitors were used, cells were pretreated with the inhibitor for 1 h before exposure to BK. Treatment of RBA with these inhibitors alone had no significant effect on cell viability determined by an XTT assay (data not shown).
MMP gelatin zymography
Growth-arrested cells were incubated with BK for the indicated time intervals. After treatment, the cultured media were collected and analyzed by gelatin zymography [22]. Gelatinolytic activity was manifested as horizontal white bands on a blue background. Because cleaved MMPs were not reliably detectable, only proform zymogens were quantified.
Total RNA extraction and real-time PCR analysis
Total RNA was extracted from RBA cells [22]. The cDNA obtained from 0.5 μg of total RNA was used as a template for PCR amplification. Oligonucleotide primers were designed on the basis of GenBank entries for rat MMP-9 and GAPDH. Amplification was performed for 30 cycles at 55 °C, 30 s; 72 °C, 1 min; and 94 °C, 30 s. PCR fragments were analyzed on a 2% agarose 1X TAE gel containing ethidium bromide, and their size was compared with molecular weight markers. Amplification of β-actin, a relatively invariant internal reference RNA, was performed in parallel, and cDNA amounts were standardized to equivalent β-actin mRNA levels.
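As a point of clarification for the normalization step above, the short Python sketch below shows how band densitometry values can be standardized to an internal reference; the function name and all intensity values are hypothetical illustrations, not data from this study.

```python
# Hypothetical illustration of standardizing target-gene band intensities
# to an internal reference RNA (here beta-actin), as described above.

def normalize_to_reference(target: float, reference: float) -> float:
    """Return the target band intensity divided by the reference band intensity."""
    if reference <= 0:
        raise ValueError("reference band intensity must be positive")
    return target / reference

# Example densitometry readings (arbitrary units) for control and BK-treated lanes
mmp9_intensity = {"control": 1200.0, "BK": 5400.0}
actin_intensity = {"control": 9800.0, "BK": 10100.0}

ratios = {cond: normalize_to_reference(mmp9_intensity[cond], actin_intensity[cond])
          for cond in ("control", "BK")}
fold_induction = ratios["BK"] / ratios["control"]
print(ratios)                               # normalized MMP-9/beta-actin ratios
print(f"fold induction: {fold_induction:.2f}")
```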
Preparation of cell extracts and Western blot analysis
Growth-arrested cells were incubated with BK at 37 °C for the indicated time intervals. The cells were washed with ice-cold phosphate-buffered saline (PBS), scraped, and collected by centrifugation at 45,000×g for 1 h at 4 °C to yield the whole cell extract, as previously described [21]. Samples were resolved by SDS-PAGE, transferred to nitrocellulose membranes, and then incubated overnight with an anti-phospho-c-Src, phospho-Jak2, phospho-ERK1/2, phospho-STAT3, or GAPDH antibody. Membranes were washed four times with TTBS for 5 min each and incubated with a 1:2000 dilution of anti-rabbit horseradish peroxidase antibody for 1 h. The immunoreactive bands were detected by ECL reagents and captured by a UVP BioSpectrum 500 Imaging System (Upland, CA). The image densitometry was quantified by UN-SCAN-IT gel software (Orem, UT).
Measurement of PGE 2 release
The cells were seeded in 12-well plates and grown to confluence. Cells were shifted to serum-free DMEM/F-12 medium for 24 h and then incubated with BK for various time intervals. The culture supernatants were collected to measure PGE 2 levels using an EIA kit as specified by the manufacturer (Cayman Chemical).
Transient transfection with siRNAs
Transient transfection of small interfering RNA (siRNA) duplexes corresponding to rat COX-2 and scrambled siRNAs (100 nM) was performed using Lipofectamine™ RNAiMAX reagent (Invitrogen) according to the manufacturer's instructions.
Cell migration assay
RBA cells were cultured to confluence in 6-well culture plates and starved in serum-free DMEM/F-12 medium for 24 h. The monolayer cells were manually scratched with a blue pipette tip to create extended and definite scratches in the center of the dishes with a bright and clear field (~2 mm). The detached cells were removed by washing once with PBS. Serum-free DMEM/F-12 medium with or without BK was added to each dish as indicated after pretreatment with the inhibitors for 1 h, with the DNA synthesis inhibitor hydroxyurea (10 μM) present during the experiment [29]. Numbers of migratory cells were counted from four phase images for each point and then averaged for each experimental condition. The data presented are summarized from three separate assays.
Statistical analysis of data
All data were analyzed using the GraphPad Prism program (GraphPad, San Diego, CA). Quantitative data were analyzed by one-way ANOVA followed by Tukey's honestly significant difference tests between individual groups. Data are expressed as mean ± SEM. A value of P < 0.05 was considered significant.
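For readers who want to reproduce this style of analysis outside of Prism, a minimal open-source sketch of the same pipeline (one-way ANOVA followed by Tukey's HSD) is shown below; the replicate values are hypothetical and serve only to illustrate the workflow.

```python
# Minimal sketch of the stated statistics pipeline using scipy and statsmodels.
# The fold-change replicates below are hypothetical, not data from this study.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

control = np.array([1.0, 1.1, 0.9])   # hypothetical normalized MMP-9 levels
bk = np.array([3.8, 4.2, 4.0])        # BK-treated
bk_clc = np.array([1.6, 1.9, 1.7])    # BK + COX-2 inhibitor (celecoxib)

f_stat, p_value = f_oneway(control, bk, bk_clc)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Tukey's HSD for pairwise comparisons between individual groups
values = np.concatenate([control, bk, bk_clc])
groups = ["control"] * 3 + ["BK"] * 3 + ["BK+CLC"] * 3
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```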
Effect of celecoxib on BK-induced MMP-9 expression in brain astrocytes
The COX-2/PGE 2 system is also critical to brain inflammatory diseases [31]. First, to investigate the effect of the COX-2/PGE 2 system on BK-induced MMP-9 expression, rat brain astrocytes (RBA) were pretreated with or without celecoxib (CLC), a selective inhibitor of COX-2 activity, for 1 h and then incubated with BK for the indicated time intervals. As shown in Fig. 1a, pretreatment with CLC (30 μM) significantly attenuated BK-induced MMP-9 expression determined by zymography. This result suggested that COX-2 might play a regulatory role in BK-induced MMP-9 expression. We further determined whether COX-2 contributes to BK-induced MMP-9 expression at the transcriptional level, as analyzed by RT-PCR. The data showed that pretreatment of RBA with different concentrations of CLC (1, 10, and 30 μM) markedly blocked BK-induced MMP-9 mRNA expression in a concentration-dependent manner (Fig. 1b). These results suggested that COX-2 may be a critical element in BK-induced MMP-9 expression in RBA cells. To further confirm this, we determined whether BK increases the downstream product of COX-2, prostaglandin E 2 (PGE 2 ), and whether CLC affects this event; the conditioned media were collected and PGE 2 levels were measured using an EIA kit. The data showed that BK-induced PGE 2 biosynthesis was inhibited by pretreatment of cells with CLC (Fig. 1c). Moreover, we found that BK-induced MMP-9 expression was attenuated by knockdown of COX-2 through transfection of RBA cells with COX-2 siRNA (Fig. 1d). These results demonstrated that COX-2-derived PGE 2 production may contribute to BK-induced MMP-9 expression in RBA cells.
PGE 2 induces de novo MMP-9 expression via EP receptors
Here, to further demonstrate whether BK-induced PGE 2 production is important for MMP-9 expression, RBA cells were directly incubated with PGE 2 for the indicated time intervals and concentrations. As shown in Fig. 3a, PGE 2 induced MMP-9 expression in a time- and concentration-dependent manner, with a significant increase within 4-24 h. Moreover, we also demonstrated that PGE 2 concentration-dependently induced MMP-9 mRNA expression by RT-PCR analysis (Fig. 3b). To determine whether PGE 2 -induced MMP-9 expression is mediated through EP receptors, cells were pretreated with an antagonist of EP1 (Sc), EP3 (L798), or EP4 (GW) and then incubated with PGE 2 (10 μM) for the indicated time intervals. The results showed that pretreatment with Sc (3 μM), L798 (3 μM), or GW (1 μM) suppressed PGE 2 -induced MMP-9 expression during the period of observation (Fig. 3c), indicating that PGE 2 could indeed induce de novo MMP-9 expression through EP receptors, including EP1, EP3, and EP4, in these cells.
Involvement of c-Src in BK- and PGE 2 -induced MMP-9 expression
To simultaneously investigate the signaling mechanisms of BK- and PGE 2 -induced MMP-9 expression, pharmacological inhibitors of signaling molecules were used. First, to determine the role of c-Src in BK- and PGE 2 -induced MMP-9 expression, cells were pretreated with the c-Src inhibitor PP1 for 1 h and then incubated with BK or PGE 2 for the indicated times. As shown in Fig. 4a, b, pretreatment with PP1 (1 μM) significantly attenuated BK- and PGE 2 -induced MMP-9 expression, suggesting that c-Src is involved in these responses. To further demonstrate the effect of PP1 on BK- and PGE 2 -stimulated c-Src phosphorylation, the phosphorylation of c-Src was analyzed by Western blot. The data showed that pretreatment with PP1 blocked BK-stimulated phosphorylation of c-Src (Fig. 4c, left panel). Additionally, PGE 2 also stimulated c-Src phosphorylation in a time-dependent manner, which was blocked by pretreatment of RBA with PP1 (Fig. 4c, right panel). These data suggested that BK induces MMP-9 expression via c-Src activation in RBA cells.
To further determine whether activation of the Jak2/STAT3 cascade in BK-induced responses is mediated through its phosphorylation, phosphorylation of Jak2 and STAT3 was examined by Western blot; as shown in Fig. 6c, BK stimulated phosphorylation of the Jak2/STAT3 cascade in a time-dependent manner, with a significant response obtained within 1-3 min. Moreover, pretreatment with the Jak2 inhibitor AG significantly inhibited BK-stimulated phosphorylation of the Jak2/STAT3 cascade. To further demonstrate the role of COX-2 in BK-stimulated phosphorylation of the Jak2/STAT3 cascade, cells were pretreated with CLC and then incubated with BK for 3 min. The data showed that pretreatment with CLC (30 μM) significantly blocked BK-stimulated phosphorylation of the Jak2/STAT3 cascade (Fig. 6c), suggesting that BK-stimulated phosphorylation of the Jak2/STAT3 cascade is mediated through the COX-2/PGE 2 system. Subsequently, to confirm the role of the COX-2/PGE 2 system in activation of the Jak2/STAT3 cascade, cells were directly incubated with PGE 2 . As shown in Fig. 6d, PGE 2 stimulated phosphorylation of the Jak2/STAT3 cascade at 3 min, as determined by Western blot. Pretreatment with AG also significantly blocked this PGE 2 response. To demonstrate the effects of the signaling molecules ERK1/2 and c-Src on PGE 2 -stimulated phosphorylation of the Jak2/STAT3 cascade, cells were pretreated with U0126 or PP1 and then incubated with PGE 2 for 3 min. The data showed that pretreatment with PP1 markedly blocked PGE 2 -stimulated phosphorylation of Jak2 and STAT3. Moreover, pretreatment with U0126 inhibited STAT3 phosphorylation, but not that of Jak2, indicating that PGE 2 -stimulated STAT3 phosphorylation is mediated through the c-Src/Jak2/ERK1/2 pathway. These results suggested that BK-stimulated activation of the Jak2/STAT3 cascade via the COX-2/PGE 2 system is required for MMP-9 up-regulation in RBA cells.
Discussion
Among MMPs, MMP-9 expression and activation play a critical role in tissue remodeling in the pathogenesis of brain diseases [10]. MMP-9 contributes to a wide range of biological activities in CNS diseases, including stroke, Alzheimer's disease, and malignant glioma [10]. Reduction of MMP activity by pharmacological inhibitors or gene knock-out strategies protects the brain from advanced neuroinflammation [36]. These studies suggest that up-regulation of MMP-9 by pro-inflammatory factors may have a great effect upon brain inflammation and neurodegeneration. Moreover, BK and related peptides are simultaneously produced and released following brain injury [37]. Our previous data have demonstrated that BK induces MMP-9 expression in astrocytes, which may change astrocytic functions such as cell motility and neuroinflammation [21,29]. Moreover, BK also induces COX-2 expression in astrocytes [7]. These findings imply that BK may play an important role in brain injury, astroglioma, or CNS diseases. Pharmacological and knockout-mouse approaches suggest that targeting COX-2 or MMP-9 and their upstream signaling pathways should yield useful therapeutic targets for brain injury and inflammation. Herein, we investigated the effect of the COX-2/PGE 2 system on BK-induced MMP-9 expression in brain astrocytes and its mechanism. In this study, we found that the COX-2/PGE 2 system may be a novel regulator participating in BK-induced MMP-9 expression in rat brain astrocytes. The results suggest that, in brain astrocytes, BK stimulates COX-2-derived PGE 2 autocrine signaling and further induces MMP-9-dependent astrocytic migration, mediated through PGE 2 receptors (EPs) linked to protein kinases (e.g., c-Src and Jak2) and the activated ERK1/2 signal, leading to induction of STAT3 pathways. First, we found that the selective COX-2 inhibitor celecoxib (CLC) and knockdown of COX-2 by transfection with COX-2 siRNA can inhibit BK-induced MMP-9 expression in RBA cells (Fig. 1). A close correlation was observed between the expression of COX-2 under BK-induced conditions and the expression of MMP-9. This is the first finding that COX-2 can contribute to MMP-9 up-regulation by BK in brain astrocytes. Next, several reports have indicated that COX-2-derived PGE 2 may up-regulate MMP-9 expression in pancreatic cancer or macrophages [38,39]. Moreover, a study showed that EP3 receptor signaling on endothelial cells is essential for the MMP-9 up-regulation that enhances tumor metastasis and angiogenesis [40]. Thus, we investigated whether BK-induced MMP-9 expression is mediated through PGE 2 receptors (EPs) in brain astrocytes. The results showed that pretreatment with an antagonist of EP1 (Sc-19220), EP3 (L798-106), or EP4 (GW627368) attenuated BK-induced MMP-9 expression during the period of observation (Fig. 2), suggesting that BK induces MMP-9 expression via PGE 2 -dependent EP receptors (e.g., EP1, EP3, and EP4) in RBA cells. These data suggested that BK-induced MMP-9 expression may be mediated through COX-2-derived PGE 2 autocrine signaling in RBA cells.
Accordingly, we presumed that COX-2-derived PGE 2 production may contribute to BK-induced MMP-9 expression in RBA cells. To confirm this hypothesis, the cells were directly stimulated with PGE 2 (a metabolic product of COX-2). As expected, the data showed that PGE 2 induced MMP-9 expression in a time- and concentration-dependent manner (Fig. 3a). Moreover, PGE 2 also induced MMP-9 mRNA expression in RBA cells (Fig. 3b), indicating that COX-2-derived PGE 2 is involved in BK-induced MMP-9 expression. We further demonstrated that PGE 2 induces MMP-9 expression via PGE 2 receptor (EP)-dependent pathways: PGE 2 -induced MMP-9 expression was markedly attenuated by pretreatment with various EP antagonists, including those of EP1, EP3, and EP4 (Fig. 3c), suggesting that PGE 2 -induced MMP-9 expression is mediated in an EP (i.e., EP1, EP3, and EP4)-dependent manner in RBA cells. These results demonstrate an autocrine mechanism of the brain inflammatory response through cooperation between BK and PGE 2 , forming a positive loop mediating native COX-2/PGE 2 production and de novo MMP-9 expression. This is consistent with PGE 2 -induced metalloproteinase 9 (MMP-9) expression and activity occurring through EP-1/EP-3/EP-4 in cultured monocytic cells [41], and with mice lacking COX-2 or EP4 in bone marrow-derived cells showing reduced expression of MMP-9, which results in decreased infiltration of monocytes and T cells into the CNS [42].
Many reports and our previous data have indicated that several protein kinases such as c-Src may contribute to MMP-9 expression induced by various stimuli in several cell types [43][44][45]. Moreover, several reports also demonstrate that c-Src is crucial for MMP-9 expression in brain astrocytes [46,47]. Here, the data showed that BK-induced expression of MMP-9 was attenuated by PP1 (Fig. 4a). Similarly, pretreatment with PP1 significantly inhibited PGE 2 -induced MMP-9 expression in RBA cells (Fig. 4b). Moreover, BK or PGE 2 can stimulate phosphorylation of c-Src, which was significantly blocked by PP1 (Fig. 4c), indicating that c-Src phosphorylation plays an important role in PGE 2 -induced MMP-9 expression, consistent with BK-induced MMP-9 expression through c-Src revealed by zymography in RBA cells. These results are consistent with up-regulation of MMP-9 by c-Src in IL-1β-stimulated brain astrocytes [43], in TNF-α-stimulated osteoblast-like MC3T3-E1 cells [44], and in thrombin-induced neuroblastoma SK-N-SH cell migration [45].
Janus kinases (Jaks) are a family of four tyrosine kinases (Jak1, Jak2, Jak3 and Tyk2) that selectively associate with cytokine receptor chains and mediate signaling by phosphorylating tyrosine residues on various proteins in the pathway, including STAT (signal transducer and activator of transcription) transcription factors [49][50][51]. The Jak/STAT signaling pathway is implicated in the pathogenesis of inflammatory, autoimmune, and degenerative diseases including rheumatoid arthritis [52]. In the CNS, the Jak/STAT cascade is a critical part of several intracellular signaling events that regulate many pathophysiological functions. A report has indicated that age- and disease-dependent deterioration of the Jak2/STAT3 axis plays a critical role in the pathogenesis of Alzheimer's disease [53]. These studies suggest that Jak/STAT may play a critical role in the regulation of inducible gene expression in inflammatory responses. Therefore, we further investigated the role of the Jak/STAT pathway in BK- or PGE 2 -induced MMP-9 expression in brain astrocytes. The results showed that pretreatment with AG490 (a Jak2 inhibitor) and CBE (a STAT3 inhibitor) both significantly blocked BK-induced MMP-9 expression (Fig. 6a), indicating that Jak2 and STAT3 are involved in BK-induced MMP-9 expression. This result is consistent with B7-H3-stimulated promotion of cell migration and invasion through the Jak2/STAT3/MMP-9 signaling pathway in colorectal cancer [54]. Moreover, the data showed that BK can stimulate phosphorylation of Jak2 and STAT3β in a time-dependent manner, which was attenuated by pretreatment with the selective COX-2 inhibitor celecoxib (CLC) or AG490 (Fig. 6c), suggesting that BK-stimulated phosphorylation of Jak2 and STAT3β is mediated through a COX-2-dependent pathway. The results also indicated that BK stimulates STAT3β phosphorylation in a Jak2-mediated manner. To further demonstrate whether BK induces MMP-9 expression via COX-2/PGE 2 -dependent activation of Jak2/STAT3 pathways, RBA cells were directly treated with PGE 2 . Predictably, PGE 2 -induced MMP-9 expression was markedly attenuated by pretreatment with AG490 and CBE (Fig. 6b), indicating that Jak2 and STAT3 are involved in this response. Similarly, the data showed that PGE 2 -stimulated phosphorylation of Jak2 and STAT3β was attenuated by pretreatment with AG490 (Fig. 6d), indicating that PGE 2 -stimulated phosphorylation of STAT3β is mediated through a Jak2-dependent pathway.
Moreover, previous reports have indicated that the best characterized interactions of the Jak/STAT pathway are with the MAPKs [49]. The MAPKs specifically phosphorylate a serine near the C terminus of most STATs, which enhances transcriptional activation by STAT [49]. Thus, the MAPKs (i.e., ERK1/2, JNK1/2, and p38 MAPK) are key signaling enzymes that couple receptor activation to gene transcription by phosphorylating STATs. Our data showed that PGE 2 -induced STAT3β phosphorylation, but not that of Jak2, was attenuated by U0126, suggesting that phosphorylation of STAT3β is mediated through the ERK1/2 pathway (Fig. 6d). Moreover, pretreatment with PP1 also attenuated PGE 2 -stimulated Jak2 and STAT3β phosphorylation (Fig. 6d), indicating that c-Src may be an upstream regulator of the Jak2/STAT3β cascade in RBA cells. These results suggested that COX-2/PGE 2 system-dependent activation of the Jak2/STAT3β cascade is a novel and critical pathway for BK-induced MMP-9 expression in brain astrocytes. Moreover, PGE 2 -stimulated STAT3β phosphorylation is mediated through c-Src/Jak2 linked to phosphorylation of ERK1/2 in these cells. Regarding the role of STAT3, we present the first evidence that STAT3β plays a critical role in the induction of MMP-9 by BK and PGE 2 in brain astrocytes (RBA). Taken together, these results suggest key roles of PGE 2 autocrine signaling and STAT3 in the severity of brain inflammation through up-regulation of MMP-9 in brain astrocytes.
Conclusions
In summary, we showed that BK induced expression of MMP-9 via COX-2-dependent PGE 2 production leading to PGE 2 receptor (EP)-mediated pathways. Subsequently, the autocrine PGE 2 -induced MMP-9 expression is mediated through EP(1/3/4)-dependent c-Src/Jak2/ERK1/2 linking to STAT3 activation in RBA cells. Finally, BKinduced MMP-9-dependent RBA cell migration is also mediated through these pathways. Based on the observations from literatures and our findings, Fig. 7b depicts a model for the molecular mechanisms underlying BKinduced COX-2/PGE 2 -dependent MMP-9 expression and cell migration (Fig. 7a) of RBA cells. These findings concerning BK-induced MMP-9 gene expression through a novel and PGE 2 autocrine regulation in brain astrocytes imply that BK, COX-2/PGE 2 system, and MMP-9 play an important role in amplifying brain inflammation and CNS diseases. Pharmacological approaches suggest that targeting COX-2/PGE 2 system and Jak/STAT cascade signaling components would yield useful therapeutic targets for brain inflammatory diseases. | 2020-12-10T04:16:55.652Z | 2020-11-23T00:00:00.000 | {
"year": 2020,
"sha1": "68adb70f218450176b8ae68db9c265b614bd8d41",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1186/s12964-020-00680-0",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "1e49d8d2a39e58607621628f8b3cc42fd4edc038",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": []
} |
High-performance thermoelectric silver selenide thin films cation exchanged from a copper selenide template
Over the past decade, Ag2Se has attracted increasing attention due to its potentially excellent thermoelectric (TE) performance as an n-type semiconductor. It has been considered a promising alternative to Bi–Te alloys and other commonly used yet toxic and/or expensive TE materials. To optimize the TE performance of Ag2Se, recent research has focused on fabricating nanosized Ag2Se. However, synthesizing Ag2Se nanoparticles involves energy-intensive and time-consuming techniques with poor yield of final product. In this work, we report a low-cost, solution-processed approach that enables the formation of Ag2Se thin films from Cu2−xSe template films via cation exchange at room temperature. Our simple two-step method involves fabricating Cu2−xSe thin films by the thiol-amine dissolution of bulk Cu2Se, followed by soaking Cu2−xSe films in AgNO3 solution and annealing to form Ag2Se. We report an average power factor (PF) of 617 ± 82 μW m−1 K−2 and a corresponding ZT value of 0.35 at room temperature. We obtained a maximum PF of 825 μW m−1 K−2 and a ZT value of 0.46 at room temperature for our best-performing Ag2Se thin-film after soaking for 5 minutes. These high PFs have been achieved via full solution processing without hot-pressing.
Introduction
Burning of fossil fuels for energy and heat production is a major contributor towards climate change and global warming.1 One of the largest energy consumers is the industrial sector, which accounts for about 32% of total U.S. energy consumption, with nearly 1933.2 PJ per year of energy dissipated as waste heat during manufacturing processes.2,3 Of the rest of the energy produced, almost 50% goes into commercial and residential heating, ventilation and air conditioning (HVAC) operating costs, where the majority of the energy is used for unnecessarily heating and cooling entire infrastructures such as ceilings, walls and unoccupied spaces.4 Thermoelectric (TE) devices provide a two-pronged approach to solving the energy and HVAC issues outlined above. A TE material has the unique potential to directly convert thermal energy into electricity and vice versa.5 On one hand, operating in a waste heat recovery mode, TEs can absorb waste heat and convert it to usable electricity, which may reduce the consumption of nonrenewable resources. On the other hand, flexible TE modules operating in heat pumping mode have the potential to be integrated with clothing for local temperature control,6-8 possibly replacing building-wide HVAC systems and leading to unprecedented energy savings. Therefore, TE devices exhibit significant potential to alleviate global warming and environmental pollution issues. To gauge the performance of a TE material, the figure of merit commonly used is denoted ZT, defined as S2σTκ−1, where S is the Seebeck coefficient, σ is the electrical conductivity, T is the absolute temperature and κ is the total thermal conductivity.5,9 S2σ is also known as the power factor (PF). An efficient TE material requires maximizing the PF while minimizing thermal conductivity. However, it is challenging to simultaneously achieve high electrical conductivity, a high Seebeck coefficient and low thermal conductivity in any traditional material due to parameter interdependency.10,12,13 For example, energy filtering of cold electrons can help to overcome the Seebeck coefficient and electrical conductivity trade-off,14 while the formation of nanocrystalline grains can decouple the relationship between electrical and thermal conductivity.15,17,18 If flexible TE devices can be fabricated on fabrics, it would have implications in enabling temperature-regulating clothing that can significantly reduce the energy and space consumption of household and building HVAC systems.
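To make the definitions above concrete, the following short Python sketch (ours, not from the paper) evaluates PF = S2σ and ZT = S2σTκ−1; the example values only roughly correspond to the room-temperature figures reported later in this work.

```python
# Helper functions for the TE quantities defined above (a sketch; the
# function names are ours). SI units throughout.

def power_factor(S: float, sigma: float) -> float:
    """Power factor S^2 * sigma, in W m^-1 K^-2 (S in V/K, sigma in S/m)."""
    return S ** 2 * sigma

def figure_of_merit(S: float, sigma: float, T: float, kappa: float) -> float:
    """Dimensionless ZT = S^2 * sigma * T / kappa."""
    return power_factor(S, sigma) * T / kappa

# Approximate room-temperature values for the Ag2Se films in this work
S, sigma, T, kappa = -90e-6, 7.3e4, 300.0, 0.53
print(f"PF = {power_factor(S, sigma) * 1e6:.0f} uW m^-1 K^-2")   # ~590
print(f"ZT = {figure_of_merit(S, sigma, T, kappa):.2f}")         # ~0.33
```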
Currently, bulk pellets of n- or p-type doped Bi2Te3 and Sb2Te3 materials dominate the commercial TE market due to their ideal balance between electrical and thermal properties.19 Although these devices have good performance with ZT values around 1, they suffer from high manufacturing costs, toxicity, and rigidity, thus limiting their use in applications that require flexible form factors. The TE community has already made substantial progress on nanostructured and solution-processable Bi2Te3 and Sb2Te3 materials to enhance performance and versatility while reducing manufacturing costs;18 however, toxicity remains a large problem. To accomplish the goal of widespread flexible, integrated TE devices in clothing and other applications, toxic materials must be eliminated. Cu2Se has made rapid progress as a potential heavy-metal-free p-type material, but a matching low-cost, high-performance, non-toxic n-type material is required for a working TE generator. In this regard, silver selenide (Ag2Se) has attracted significant interest, owing to its promising potential in TE applications.20,21,23-31 In addition, Ag and Se are less toxic than Bi, Pb, Sb and Te, and Se is approximately 10 times more abundant than Te.32 Despite these promising advances, to maximize the TE potential of Ag2Se, nanocrystalline Ag2Se is required. Finally, to keep the final costs of the TE modules low, an ideal process would involve an inexpensive synthetic approach towards Ag2Se nanostructures, combined with a high-throughput solution-processing fabrication approach for widespread deployment on various substrates with flexible form factors.
Various reaction routes for directly synthesizing Ag2Se nanocrystals (NCs) have been reported,22,33-36 but few exist on nanostructured Ag2Se for TE applications. Perez-Taborda et al. fabricated Ag2Se thin films on glass substrates via pulsed hybrid reactive magnetron sputtering (PHRMS), reporting a PF of 4655 ± 407 μW m−1 K−2 at 376 K.37 Despite the large PF, the power output required to run PHRMS would lead to high manufacturing costs.38 Other reports achieved PFs of around 544.5 μW m−1 K−2 at 405 K30,39 and 1840 μW m−1 K−2 at 400 K, respectively;40 however, the use of hot pressing limits true solution processability on flexible substrates and induces significant NC fusion. Another reported synthesis42,43 resulted in stable Ag2Se NCs, but it is restricted to batch processing and the yield is relatively low (~4%).43 An ideal synthetic route to overcome these challenges would use a high-yield technique with low-cost precursors to generate a nanostructured film. In this regard, a thiol-amine approach proposed by Webber et al. allows one to dissolve bulk chalcogenide semiconductors in solvents such as ethylenediamine (en) and ethanedithiol (edtH2) and deposit them as thin films with high throughput.44 Unfortunately, not every chalcogenide (e.g. Ag2Se) is amenable to this dissolution-deposition technique. Although a number of hypotheses have been proposed in the literature, none of them provide a detailed mechanism regarding the nature of the dissolution of metal chalcogenides.45,46 An alternative approach that has typically been used in the semiconductor NC community is to take advantage of established protocols to synthesize common chalcogenide NCs and use cation exchange (CE) as a facile method to convert them into a high-value product that cannot be synthesized directly.47 CE has been demonstrated to be a versatile, efficient and convenient tool to expand the library of attainable materials with new and unique material phases, shapes and compositions.24,26,48,49 A CE transformation normally involves a two-step process, the first step being the synthesis of the base material as a template, and the second step being the exchange between the host and guest cations within the crystal lattice.
In this report, we successfully demonstrate the fabrication of polycrystalline Ag2Se thin films from Cu2−xSe thin-film templates that exhibit an average PF of 617 ± 82 μW m−1 K−2 with a ZT of 0.35 at room temperature. To demonstrate the applicability of our hypothesis, we start with a template of Cu2−xSe thin films prepared via the method reported in the work of Lin et al.50 The resulting Cu2−xSe thin films are not perfectly stoichiometric when synthesized under these chemical conditions, which is normal for the entire copper chalcogenide family.24,51 Evidence shows that the vacancies in the lattice actually accelerate the exchange process as they provide alternative pathways for the diffusion of Ag+ ions, even at low temperatures.24,52 We then soaked our Cu2−xSe films in a Ag+-rich solution for various amounts of time and annealed our final samples afterwards, as outlined in Fig. 1. We confirmed the transformation of copper selenide to silver selenide using X-ray diffraction (XRD), X-ray photoelectron spectroscopy (XPS), energy dispersive X-ray spectroscopy (EDS) and inductively coupled plasma mass spectrometry (ICP-MS), in addition to investigating the TE properties of the resultant films.
The following steps were performed in a nitrogen-filled glovebox. 100 mg of Cu2Se was weighed and transferred into a 5 mL glass vial. 2 mL of ethylenediamine (en) was measured and transferred to the glass vial, followed by 0.2 mL of ethanedithiol (edtH2). The solution was stirred magnetically at 35 °C for about 20 minutes until it turned dark brown.
The glass substrates (9.5 mm × 9.5 mm) were sonicated three times for 5-7 minutes in acetone, isopropyl alcohol and methanol, respectively. To fabricate a Cu2−xSe thin film with a thickness in the range of 70-100 nm, 35 μL of the Cu2Se thiol-amine solution was spin-coated on a glass substrate at 1800 rpm for 60 seconds. The coated substrate was left on the hot plate at 35 °C for 2 minutes to allow the solvent to dry. The temperature of the hot plate was then ramped up to 350 °C by increasing the temperature by 50 °C every 5 minutes. The thin film was annealed on the hot plate for one hour after the temperature reached 350 °C. After an hour, the hot plate was switched off, allowing the thin film to cool down to room temperature. All Cu2−xSe thin films were prepared using the same procedure.
To prepare for the Ag+ ion soaking process, 19 mg of AgNO3 was weighed, transferred into the glovebox and dissolved in 10 mL of methanol in a glass vial to form a 0.01 M AgNO3 solution. 10 mL of methanol was prepared in a separate glass vial. Previously fabricated Cu2−xSe thin films were held with tweezers, soaked in the 0.01 M Ag+ ion solution for varying amounts of time and slowly transferred to the pure methanol for 45-60 seconds to wash excess Ag+ ions from the surface of the thin film. The Ag2Se thin film was placed on the hot plate at 50 °C to dry off the excess solvent and annealed at 350 °C for 30 minutes to repair any cracks and/or release any ions trapped at the grain boundaries during the CE process. The procedure was repeated for each thin film.
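As a quick sanity check on the stated concentration, the arithmetic below (a sketch; the variable names are ours) confirms that 19 mg of AgNO3 in 10 mL of methanol gives approximately the 0.01 M solution described.

```python
# Verify the molarity of the Ag+ soaking solution described above.
MOLAR_MASS_AGNO3 = 169.87   # g/mol
mass_g = 0.019              # 19 mg of AgNO3
volume_L = 0.010            # 10 mL of methanol

molarity = (mass_g / MOLAR_MASS_AGNO3) / volume_L
print(f"AgNO3 concentration = {molarity:.4f} M")  # ~0.0112 M, i.e. ~0.01 M
```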
Materials characterizations
To characterize the morphology, structure and composition of the Cu2−xSe template samples and the post-soaked Ag2Se samples, we performed scanning electron microscopy (SEM), X-ray diffraction (XRD), X-ray photoelectron spectroscopy (XPS), inductively coupled plasma mass spectrometry (ICP-MS) and TE measurements on our samples. We also performed electrical measurements via the standard four-probe van der Pauw method and Seebeck measurements using small Peltier units (CUI Inc.) to measure the induced TE voltage created by various temperature gradients across the sample. Electrical and Seebeck measurements were run via a LabVIEW program along with Keithley 2400 sourcemeters and Keithley 2000 multimeters. Hall carrier concentrations were also obtained using an Ecopia HMS-5000 variable-temperature Hall effect measurement system. Thermal conductivity data were extracted using the differential 3ω method with an SR865A lock-in amplifier from Stanford Research Systems (ESI, Fig. S1 and S2†). Film thickness measurements were performed with a Veeco Dektak 150 profilometer. Full details about the characterizations can be found in the ESI† along with the ESI figures.
In addition, error bars were added to each plot in Fig. 6 to show the precision and reliability of our data. Aside from instrumental errors, the major error in the electrical conductivity values came from the measurement of thickness (standard deviation = ±10 nm) using the profilometer. The detailed error analysis calculations are shown in the ESI† section.
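The thickness-dominated error can be made explicit with a first-order propagation sketch: in a van der Pauw measurement the conductivity scales as σ = 1/(Rsheet·t), so the relative uncertainty in σ is approximately the relative uncertainty in t when the sheet-resistance error is small. The sheet-resistance value below is hypothetical; only the thickness and its standard deviation follow the text.

```python
# First-order error propagation for sigma = 1 / (R_sheet * t).
# R_sheet is a hypothetical value; t and dt follow the text (80 +/- 10 nm).
R_sheet = 170.0          # ohm per square (hypothetical)
t, dt = 80e-9, 10e-9     # film thickness and its standard deviation, in meters

sigma = 1.0 / (R_sheet * t)        # S/m
dsigma = sigma * (dt / t)          # thickness-dominated relative error
print(f"sigma = {sigma:.3e} +/- {dsigma:.3e} S/m "
      f"({100 * dt / t:.1f}% relative uncertainty)")
```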
Results and discussion
In many CE reactions for semiconductors, Cd-based and Cu-based NCs have been used as ubiquitous nanocrystal templates to fabricate complex nanomaterials that are otherwise hard to obtain under standard conditions.48,49,53 To form a thin-film NC template, we adopted a method developed by Webber et al. which demonstrated that bulk copper chalcogenides such as Cu2Se, Cu2Te and Cu2S can be dissolved in a thiol-amine solution mixture and cast as thin films on substrates.45,51,54,55 However, bulk Ag2Se powder fails to do so at ambient conditions (see ESI, Fig. S3†).45,50 To overcome this issue, we use an innovative combination of thiol-amine dissolution and CE techniques to realize a simple pathway to obtaining high-performance Ag2Se thin films.
We rationalize the CE using a mix of thermodynamic and kinetic parameters. The thermodynamic driving force of the CE reaction is determined by a number of factors including crystal lattice energy, dissociation and solvation energies, dislocation energies and interfacial strain energy.49 To predict the likelihood of the transformation from Cu2Se to Ag2Se in bulk at room temperature, we conducted calculations on the dissociation and solvation energies during the CE reaction, as suggested in a report by Rivest et al.49 The overall CE equation is described as 2Ag+ (liquid) + Cu2Se (NCs) → Ag2Se (NCs) + 2Cu+ (liquid). The equation describes an isovalent system where Ag+ ions are the incoming cations, Cu is the parent cation and Se is the parent anion in the NCs. The thermodynamics of the system can be described in terms of several elementary steps of the CE reaction (e.g., the solvation energy for Cu → Cu+), using approximate free energy values obtained from past literature. The net energy of the overall transformation is calculated to be 255.2 + 350 − 210 − 400 = −4.8 kJ mol−1. The negative value suggests that the CE reaction is spontaneous at room temperature, and hence is thermodynamically favorable. However, kinetic factors such as activation energy barriers and ion diffusivity also play an important role in determining the outcome of the reactions and the nature of the final products.24,49 Cation exchange on the nanoscale reduces the limitations stemming from bulk solid-state exchange. The larger surface area of NCs reduces the sub-reaction activation energy barriers which exist mostly in bulk solid-state exchange, meaning that CE reactions in NCs can happen almost spontaneously. The high-curvature surfaces of NCs and low-coordination facets that serve as high-energy sites also reduce the nucleation reaction barriers in NCs and facilitate nucleation.49 Normally, a large excess of incoming cations creating a concentration gradient is sufficient to drive the reaction at room temperature. Additionally, both Ag+ and Cu+ have high mobility due to their small ionic radii, and both are soluble in common solvents such as acetonitrile.
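The net free-energy balance above can be written compactly as follows (a reconstruction of the quoted sum only; the extracted text does not preserve which value belongs to which elementary step):

```latex
% Net free-energy balance for the overall cation exchange (kJ/mol),
% summing the four elementary-step contributions quoted in the text:
\Delta G_{\mathrm{net}} \approx 255.2 + 350 - 210 - 400
                        = -4.8~\mathrm{kJ\,mol^{-1}} < 0
```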
Scanning electron microscopy (SEM) images presented in Fig. 2 show the quality of the thin-film sample before and after the soaking process. No significant change in morphology between the pre-soaked Cu2−xSe and post-soaked Ag2Se polycrystalline thin films is observed, with grain sizes on the order of tens of nanometers. While there are a few noticeable voids in the Ag2Se film, overall the films are continuous and give reliable TE measurements, as will be discussed later. The grain size and morphology of the initial Cu2−xSe film and resultant Ag2Se film were mainly controlled by the annealing temperature. We annealed our Cu2−xSe samples after dissolution for one hour at 350 °C (which gave us the best quality films) and our Ag2Se samples after soaking at 350 °C for 30 minutes. Once the initial Cu2Se films are fabricated, annealing time does not seem to have a significant impact on grain or domain size, as shown in the SEM images, where there is no drastic change in grain size before and after annealing our Ag2Se sample. The grain size we obtained for Cu2−xSe and Ag2Se is estimated to be between 30-50 nm based on the SEM data.
Energy dispersive X-ray spectroscopy (EDS) presented in Fig. 3 shows the thin-film sample before soaking, revealing the presence of Cu and Se, while the post-soaked sample indicates the presence of Ag and Se and the absence of Cu, clearly showing that most of the Cu+ ions in the thin-film lattice were substituted by Ag+ ions during the soaking process. The rest of the elements, such as O, Si, Na, K, Al and Zn, originate from the glass substrate and impurities therein. We also detect a minute amount of sulfur from residual thiols in the samples.
To further confirm that our Cu2−xSe thin-film sample was converted into Ag2Se via the ion exchange technique, we used X-ray photoelectron spectroscopy (XPS) to identify the presence of Cu and Se before soaking, and the presence of Ag and Se after soaking and annealing. We observe that the two Cu peaks disappear, as shown in Fig. 4a, and the Se peak shifts slightly to the right after CE, as shown in Fig. 4c. The Se peak shift may be attributed to the change of the Se valence state due to the reduction of Cu vacancies and the occupancy of Ag ions within the lattice.51 The Ag peaks appearing at ~369 eV and ~375 eV illustrate a strong presence of Ag within the thin-film sample after soaking, as shown in Fig. 4b.
To validate the structural transformation from Cu2−xSe to Ag2Se within the crystal lattice of our thin-film sample, we performed X-ray diffraction (XRD) analysis on our Cu2−xSe samples before and after soaking in an Ag+ salt solution, comparing the XRD data of our Cu2−xSe sample with Cu2Se reference patterns. In the post-soaked sample, the Cu2−xSe peaks at 26.9° and 44.8° disappear. The XRD peaks in our post-soaked Ag2Se samples match some of the orthorhombic reference peaks (ICSD-261822)60 and some of the 10 nm tetragonal Ag2Se XRD peaks based on earlier reports from our group.22 It is postulated that the phase of Ag2Se relies on the crystallite size.61,62 From our data, we observe that our post-soaked Ag2Se thin-film sample is a phase mixture of orthorhombic and tetragonal structures, with the dominant phase being tetragonal.22 This is expected due to the phase transition between tetragonal and orthorhombic phases occurring at a crystallite size of 40 nm, which is around the average of our grain sizes (~30-50 nm), as seen in Fig. 2.22,41,42 The post-soaked sample XRD also confirms that, due to the absence of any detectable Cu2Se-related XRD peaks, a complete exchange between Ag+ and Cu+ ions occurred during the soaking process.
The TE properties of the thin films were measured at room temperature. We tested several different soaking times: 1 min, 5 min, 10 min, 20 min, 40 min and 60 min. Results show that the TE properties remain mostly consistent across different soaking times. The Cu2−xSe thin films, with an average thickness of 80 ± 10 nm, exhibit an average electrical conductivity of nearly 2.39 ± 0.3 × 105 S m−1 at room temperature. The high electrical conductivity of the Cu2−xSe thin film may be attributed to the high carrier concentration of holes due to Cu vacancies. After soaking in the Ag+ solution, all the samples exhibit a significant drop in electrical conductivity due to the rapid diffusion of Ag+ guest ions into both the vacant sites and interstitial sites, marking the early phases of conversion from p-type Cu2Se to n-type Ag2Se. Therefore, short soak times will result in mixed p-type and n-type transport. Rather than exchanging the Cu+ at the surface of the nanocrystalline grains, the Ag+ ions can also diffuse into Cu2−xSe grains and initiate the CE at preferred regions called "reaction zones".24,52 After a longer soaking time, more "reaction zones" form, more CE reactions take place and
Cu + ions continue to be removed from the lattice.Overall, the electrical conductivities vary little over soaking time showing high consistency.
Immediately after soaking, we observe a sign reversal of the Seebeck coefficient (Fig. 6b) from positive (p-type) to negative (n-type), consistent with our thermodynamic analysis that the CE reaction occurs spontaneously to form n-type Ag2Se from p-type Cu2Se. Seebeck coefficient values stay mostly the same, with an average value of −90 ± 6.32 μV K⁻¹ across the various soak times. By combining electrical conductivity and Seebeck coefficient values, the PF of the films was obtained, as shown in Fig. 6c. The PF values fluctuate slightly within a small range, with an average value of 617 ± 82 μW m⁻¹ K⁻². While the maximum PF value, 825 μW m⁻¹ K⁻², occurs at the 5 min soak time, we believe that the location and concentration of the residual Cu atoms in the post-soaked sample play a synergistic role in contributing to the higher electrical conductivity compared to all other post-soaked samples (Table 1). Future experiments will be directed at understanding the exact role of the Cu ions. After 60 minutes, the amount of Cu ions in the sample saturates at nearly 4% (Table 1). We hypothesize that longer exposure times might lead to back diffusion of Cu+ ions into the Ag2Se lattice, thus limiting the CE.
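To make the reported magnitudes concrete, the following minimal Python sketch (illustrative, not code from the study) computes the power factor PF = S²σ from the average post-soak Seebeck coefficient and the average post-soak electrical conductivity quoted in this paper:

S = -90e-6      # average Seebeck coefficient after soaking, V/K (i.e., -90 uV/K)
sigma = 7.5e4   # average post-soak electrical conductivity, S/m

PF = S**2 * sigma                            # power factor, W m^-1 K^-2
print(f"PF = {PF * 1e6:.0f} uW m^-1 K^-2")   # ~607, close to the reported 617 +/- 82

The small difference from the reported average reflects rounding of the two input values.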
To observe how the concentration of Cu+ ions varies with soaking time, we performed ICP-MS to obtain a quantitative analysis of how the Cu+ ions are removed over time. It is difficult to confirm with certainty that there is a complete exchange between Ag+ and Cu+ ions based on our ICP results. As seen in Table 1, the ICP results show that a sizeable amount of Cu+ ions remains in the post-soaked Ag2Se sample but that, qualitatively, the amount of Cu+ ions in the sample decreases monotonically with increasing soaking time, based on the increasing Ag/Cu ratio. The TE performance is not severely affected, however, suggesting that the Cu+ ions may be loosely bound on the surface or trapped in grain boundaries rather than actively doping the material. This is also supported by Hall carrier concentration measurements, which show relatively constant carrier concentration and Hall mobility values across the post-soaked samples. While Hall measurements are not completely accurate for nano-grained polycrystalline samples, qualitatively we observe that all our samples show almost similar values. We suspect that the low carrier density could be attributed to the residual Cu atoms and the sulfur from the thiols, as shown in the EDS data in Fig. 3. Further experiments need to be conducted to explore the effect of these residual Cu atoms and sulfur on the TE properties.
Last but not least, in order to quantify the ZT value for our samples, we conducted room temperature thermal conductivity measurements using the differential 3ω method and obtained a value of 0.53 ± 0.18 W m⁻¹ K⁻¹ (details in ESI†). The total thermal conductivity (κ) typically comprises a lattice contribution (κl) and an electronic contribution (κe). κe can be approximated as LσT, where L is the Lorenz number (~1.8 × 10⁻⁸ V² K⁻²), T is the absolute temperature (300 K in our case) and σ is the electrical conductivity (72,886 S m⁻¹ for the sample measured), which gives us a value of 0.394 W m⁻¹ K⁻¹ for κe and thus a value of 0.136 W m⁻¹ K⁻¹ for κl. Compared to the bulk value of κl = 0.5 W m⁻¹ K⁻¹,21 our sample shows an almost 73% reduction in the lattice thermal conductivity, which is most likely due to the nano-grained structure (average grain sizes between 30-50 nm) of our thin film samples. Consequently, we obtain an average ZT value of around 0.35, with a peak ZT value of 0.46 at a 5 minute soak time. If the residual Cu atoms and sulfur could be removed from our Ag2Se thin films, or their concentration reduced, the ZT value could possibly be driven up to around 1. Compared to ZT values in bulk powder ranging from 0.32 to 0.99, our average ZT value of 0.35, obtained from a completely solution-processed technique, is competitive with bulk values without any need for hot pressing or spark plasma sintering.
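For transparency, the Wiedemann-Franz bookkeeping and the resulting figure of merit can be reproduced with a short Python sketch; the input values below are the ones quoted in the text, and the snippet illustrates the stated arithmetic rather than reproducing code used in the study:

L = 1.8e-8          # Lorenz number, V^2 K^-2
T = 300.0           # absolute temperature, K
sigma = 72886.0     # electrical conductivity of the measured sample, S/m
kappa_total = 0.53  # measured total thermal conductivity, W m^-1 K^-1
S = -90e-6          # average Seebeck coefficient, V/K

kappa_e = L * sigma * T              # electronic contribution: ~0.394 W m^-1 K^-1
kappa_l = kappa_total - kappa_e      # lattice contribution: ~0.136 W m^-1 K^-1
ZT = S**2 * sigma * T / kappa_total  # ~0.33 with the average S, near the reported 0.35
print(kappa_e, kappa_l, ZT)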
Conclusions
In conclusion, we report an efficient strategy for fabricating nanostructured Ag2Se thin films via an ion exchange technique between Cu+ ions and Ag+ ions at room temperature. Starting from a low-cost p-type Cu2−xSe template prepared by a thiol-amine dissolution process, n-type Ag2Se thin films were made by simply soaking the Cu2−xSe thin films in a Ag+ ion solution while mostly maintaining the original morphology of the Cu2−xSe thin films. The Seebeck coefficient of the thin films post-soaking switches from positive to negative values, indicating the change in the nature of carrier transport. The thermal conductivity of Ag2Se is measured to be 0.53 W m⁻¹ K⁻¹. The average electrical conductivity of the post-soaked samples is around 7.5 × 10⁴ S m⁻¹. Based on the values of these three parameters, we obtain an average PF of 617 ± 80 μW m⁻¹ K⁻² and an average ZT of 0.35 at room temperature, with an overall maximum PF of 825 μW m⁻¹ K⁻². Furthermore, the soaking approach is a safe, solution-processable, economical fabrication technique which could potentially be useful in industrial scale-up production. Our results demonstrate that Ag2Se thin film materials can become one of the most promising n-type materials for use in a variety of TE applications at a large scale.
Fig. 1
Fig. 1 Schematic describing the process in which a Cu-deficient Cu2−xSe thin film transforms into a Ag2Se thin film by fully immersing the Cu2−xSe thin film into a Ag+-rich solution for different amounts of time, followed by annealing. The Ag+ concentration gradient established between the soaking solution and the thin film drives the Ag+ ions to diffuse into the Cu2−xSe crystal lattice and replace the Cu+ ions. Electrically, this process results in a switching of the majority carrier type from p-type (Cu2−xSe) to n-type (Ag2Se) and an increase in the average PF from 135 μW m⁻¹ K⁻² to 617 μW m⁻¹ K⁻². The photographs show the Cu2−xSe film (before soak) and the Ag2Se film (after soak) on glass substrates.
Fig. 2
Fig. 2 Scanning electron microscope (SEM) images of (a) the Cu2−xSe thin film sample before soaking, (b) the Ag2Se thin film after soaking, and (c) the Ag2Se sample post annealing. The images demonstrate the nanocrystalline grain structure as well as the continuous, void-free nature of the Cu2−xSe and Ag2Se films. Annealing does not significantly affect the average grain size of the Ag2Se film.
Fig. 3
Fig. 3 Energy dispersive X-ray spectroscopy (EDS) of (a) a Cu2−xSe sample on glass before soaking and (b) a Ag2Se sample on glass after soaking and annealing. Before soaking, the sample shows peaks corresponding to the Cu and Se elements, while after soaking the sample shows only Ag and Se peaks, indicating that the Cu2−xSe sample has converted into Ag2Se. The rest of the elements detected by EDS, including Si and O, are from the glass substrate; trace amounts of Na, K, Al, and Zn are from glass impurities, and a minute amount of sulfur is from residual thiols.
Fig. 4
Fig. 4 X-ray photoelectron spectroscopy (XPS) spectra of (a) the Cu 2p peaks in the Cu2−xSe sample before (red) and after (black) soaking, (b) the Ag 3d peaks before (green) and after (black) soaking, and (c) the Se 3d peaks before (blue) and after (black) soaking.
Fig. 6
Fig. 6 (a) Electrical conductivity of the as-fabricated Cu2−xSe sample (blue star) and Ag2Se samples (black squares) as a function of soaking time in the Ag+ ion solution. The dotted line at 0 minutes indicates the electrical conductivity of the p-type Cu2−xSe thin film. Our results show that the amount of soaking time does not significantly affect the electrical conductivity. (b) Seebeck coefficient of the Cu2−xSe sample and Ag2Se samples as a function of soaking time in the Ag+ ion solution. The sign of the Seebeck coefficient changes from positive to negative, indicating a transformation from p-type Cu2−xSe to n-type Ag2Se. (c) PF data as a function of soaking time. The average PF value measured from 11 samples is 617 ± 82 μW m⁻¹ K⁻² and the maximum PF obtained is 825 μW m⁻¹ K⁻² after 5 minutes of soaking time.

Fig. 5 (a) X-ray diffraction (XRD) data for a spin-coated Cu2−xSe sample before soak (green), reference powder diffraction data for a Cu1.95Se sample (blue, ICSD-243957)58 and reference data for a perfectly stoichiometric Cu2Se (blue)59 with a low-temperature α-phase. As the XRD data of the Cu2−xSe sample closely resembles that of the Cu1.95Se sample, the Cu deficiency of the Cu2−xSe sample is estimated to be roughly x = 0.05. (b) XRD data for a Ag2Se film (red) after soaking in the Ag+ ion solution for 40 minutes and annealing at 350 °C for 30 minutes, reference data for a tetragonal Ag2Se film (blue)22 and reference data for an orthorhombic Ag2Se film (blue, ICSD-261822).60 The XRD patterns show that the post-soaked annealed Ag2Se film has a mixed phase of tetragonal and orthorhombic structures with a dominant tetragonal phase.
Employing more cost-effective solution-synthesis techniques, Ding et al., Xiao et al. and Pei et al. obtained Ag2Se NCs with PFs of 987.4 ± 104.1 μW m⁻¹ K⁻² at 300 K,
Table 1
Soak time, Hall mobility, carrier density, and Ag/Cu ratio (ICP) for selected samples that were soaked for different amounts of time in a Ag+ ion solution. Columns: soak time; mobility [cm² V⁻¹ s⁻¹]; carrier density [cm⁻³]; Ag/Cu ratio. | 2019-12-05T09:29:07.700Z | 2019-12-03T00:00:00.000 | {
"year": 2019,
"sha1": "8d165ea1af6e3b08e8863ad7add1d23354fa6bdb",
"oa_license": "CCBYNC",
"oa_url": "https://pubs.rsc.org/en/content/articlepdf/2020/na/c9na00605b",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "6d7d2b299bad1d7e9f86b1d06fcb744029822b76",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Medicine",
"Materials Science"
]
} |
247085866 | pes2o/s2orc | v3-fos-license | Neural Correlates of Listening to Varying Synchrony Between Beats in Samba Percussion and Relations to Feeling the Groove
Listening to samba percussion often elicits feelings of pleasure and the desire to move with the beat (an experience sometimes referred to as "feeling the groove"), as well as social connectedness. Here we investigated the effects of performance timing in a Brazilian samba percussion ensemble on listeners' experienced pleasantness and the desire to move/dance in a behavioral experiment, as well as on neural processing as assessed via functional magnetic resonance imaging (fMRI). Participants listened to different excerpts of samba percussion produced by multiple instruments that either were "in sync", with no additional asynchrony between instrumental parts other than what is usual in naturalistic recordings, or were presented "out of sync" by delaying the snare drums (by 28, 55, or 83 ms). Results of the behavioral experiment showed increasing pleasantness and desire to move/dance with increasing synchrony between instruments. Analysis of hemodynamic responses revealed stronger bilateral brain activity in the supplementary motor area, the left premotor area, and the left middle frontal gyrus with increasing synchrony between instruments. Listening to "in sync" percussion thus strengthens audio-motor interactions by recruiting motor-related brain areas involved in rhythm processing and beat perception to a higher degree. Such motor-related activity may form the basis for "feeling the groove" and the associated desire to move to music. Furthermore, in an exploratory analysis we found that participants who reported stronger emotional responses to samba percussion in everyday life showed higher activity in the subgenual cingulate cortex, an area involved in prosocial emotions, social group identification and social bonding.
INTRODUCTION
Imagine yourself among a hundred-head-strong percussion section of a samba school during a carnival parade in Brazil: you and the people around you are singing, dancing, and ecstatically happy, and you have goose bumps all over while feeling a close connection with the people around you. What causes such strong emotional engagement, feelings of pleasure, and an irresistible urge to move along when listening to beating drums? The desire to move and dance with music, especially with highly rhythmic music, together with the experience of feeling pleasure, has been described as "feeling the groove" (Madison, 2006; Madison et al., 2011; Janata et al., 2012). Furthermore, such strong emotional experiences in human groups have been observed in the context of religious rituals, where Emile Durkheim referred to it in 1912 as the phenomenon of "collective effervescence": "a kind of electricity that quickly transports them [individuals in a group] to an extraordinary degree of exaltation" (Durkheim, 2008, p. 162). Even in the laboratory, experiencing pleasurable music can lead to peak emotional responses marked by "chills" that are correlated with physiological arousal (Panksepp, 1995; Grewe et al., 2007, 2009; Salimpoor et al., 2009; Witek, 2009).
Factors That Influence "Feeling the Groove"
There is a growing body of studies examining the underpinnings of "feeling the groove," namely the physical properties of music that correlate with this unique experience and potentially cause or contribute to it. Predictive coding accounts have proposed that musical expectations, which are continuously formed and then fulfilled or violated to varying degrees during music listening, mediate the pleasurable experience of groove-based music (Vuust et al., 2018; Koelsch et al., 2019). Studies of groove induction have consistently highlighted the impact of rhythm-related factors that increase the amount of temporal information and influence musical expectations. These factors include beat salience (i.e., the degree to which the perception of a periodic beat is encouraged by the rhythmic patterning of sound events), the relative magnitude of periodic sound events at metrical levels faster than the beat, the density of sound events between beats, and higher-order characteristics of rhythms like syncopation (a shift of rhythmic emphasis from metrically strong to metrically weak beats; Madison et al., 2011; Madison and Sioros, 2014; Witek et al., 2014). Some of these rhythmic factors (e.g., beat salience) have a linear relationship to feeling groove (e.g., higher groove ratings in music are related to increasing beat salience and the density of sound events between beats; Madison et al., 2011). By contrast, higher-order factors such as syncopation have been found to have an inverted U-shaped relation, with the highest groove ratings for medium levels of rhythmic complexity (Matthews et al., 2019, 2020).
Furthermore, studies of the effects of microtiming on the experience of groove have revealed mixed results (Madison and Sioros, 2014;Senn et al., 2017). "Microtiming" refers to small temporal deviations from beats (defined relative to theoretical timepoints associated with metrical structure) that occur at a millisecond timescale. These deviations increase temporal uncertainty but also play an important role in communicating expressive aspects of music performance (Keil, 1995;Palmer, 1997;Keller et al., 2014). Keil proposed in his theory of "participatory discrepancies" (Keil, 1995) that the tension created by asynchronous timing between bass and drum is important for creating groove music. This view is widely shared among musicians (Berliner, 1994), but experimental studies aimed at testing it have yielded contradictory results (Senn et al., 2017;Datseris et al., 2019). Madison et al. (2011) differentiated between systematic (repetitive or intentional) microtiming, which is related to intended expressive shifts in music performance, and nonsystematic (non-repetitive or unintentional) microtiming, related to human limits in perception and motor control. In their study of different musical genres (Greek, Indian, Jazz, Brazilian samba, and West African music), no relation was found between unsystematic microtiming and groove, but correlations with systematic microtiming were positive in Brazilian samba (larger microtiming deviations were related to higher groove ratings) and negative in Greek music (greater isochrony was associated with increased groove ratings).
While this study and others focused on the magnitude of microtiming deviations, Senn et al. (2017) studied the patterning of microtiming deviations by comparing fixed time shifts in naturalistic recordings of a drum and bass duo, playing swing or funk. Their results indicated that phase shifts between instrumental parts (displacing e.g., the entire drum track relative to the bass track by certain amounts) did not influence listeners' groove experience but shifts within instrumental parts (only displacing the snare drum, while keeping the bass track and other drum parts in their original temporal position) had a negative effect on groove experience. Furthermore, a comparison of the effects of such fixed microtiming displacements with locally applied manipulations, using scaled versions of originally performed microtiming patterns in naturalistic recordings, revealed that fixed snare drum displacements irritated expert listeners more than the flexible deviations occurring in the original performances. Together these findings suggest that the effects of microtiming deviations on the experience of groove are manifold and depend on the type of timing deviation.
Finally, beat tempo has been found to be only weakly related to groove (Madison et al., 2011), though listeners' experiences of groove are strongest at a range of tempi corresponding to preferred movement rates (Etani et al., 2018). Madison et al. (2011) explored several factors in their study of different musical genres, including Brazilian samba, and found evidence for universal effects of groove across different styles. However, it is noteworthy that Brazilian samba was found to have a higher potential than other genres to induce groove. This result supports our choice of samba rhythms for inducing groove and associated emotional experiences.
Neural Basis of "Feeling the Groove" and Rhythm Processing
To identify the neural correlates of groove, a recent functional magnetic resonance imaging (fMRI) study investigated the feeling of groove while listening by manipulating rhythmic and harmonic complexity in musical stimuli (rhythmic piano chord patterns, Matthews et al., 2020). This study confirmed that the sensation of groove is related to processing in motor and reward networks in the brain. The authors proposed a theoretical model to account for how the interaction of brain areas within different cortico-striatal circuits supports internal representations of the beat (putamen, supplementary motor area and premotor area) and beat-based musical expectations (caudate, prefrontal and parietal regions). Both circuits pass information to a reward network (nucleus accumbens and medial orbitofrontal cortex) that generates the typical response to groove, i.e., the feeling of pleasure and the desire to move.
Relatedly, the formation of musical expectations is central to "predictive coding" theoretical frameworks of music listening (Vuust et al., 2018; Koelsch et al., 2019). More generally, predictive coding is considered to be fundamental for action and perception, describing the process of generating and updating internal representations of the environment by comparing predicted (top-down) with actual sensory input (bottom-up) and using prediction errors for updating. The generation of internal representations during perception has also been linked to action-simulation processes involving the motor system (Hommel et al., 2001; Rizzolatti and Craighero, 2004; Schubotz, 2007). In line with this, the sensory-motor theory of rhythm perception (Todd and Lee, 2015) claims that rhythm and beat experience involve a sensory representation of the input as well as a motor representation of the body. In support of this theory, rhythm processing and beat perception are associated with activation of motor areas in the brain, including the supplementary motor area, premotor cortex, basal ganglia, and cerebellum (Chen et al., 2008; Bengtsson et al., 2009; Grahn and Rowe, 2009; Kornysheva et al., 2010; Danielsen et al., 2014). Moreover, listening to musical rhythms that are judged to be beautiful (or liked) leads to higher activation in premotor and cerebellar areas than listening to non-preferred rhythms (Kornysheva et al., 2010), and musical groove modulates motor cortex excitability in musicians (Stupacher et al., 2013).
Sensorimotor Synchronization, Rhythmic Entrainment, and Emotional Processing
Another line of research that motivates the present study addresses sensorimotor synchronization and rhythmic entrainment. With respect to functional relevance, the experience of groove in music is associated with better synchronization and effortless coordination of movements due to optimal sensorimotor coupling (Merker et al., 2009; Janata et al., 2012; Leow et al., 2014, 2021; Fitch, 2016; Witek et al., 2017). Furthermore, interpersonal synchrony during joint musical activities has become a topic of intense interest because of its positive effects on cooperation, prosocial behavior, and interpersonal affiliation. For example, joint drumming or finger tapping in synchrony increases subsequent cooperation between the co-acting individuals (Kirschner and Tomasello, 2009; Kokal et al., 2011), feelings of affiliation (Hove and Risen, 2009), and activation of brain areas involved in reward processing (Kokal et al., 2011). Kokal et al. (2011) showed stronger activation in the caudate, an intersection between reward and motor networks in the brain, when participants experienced higher interpersonal synchrony during dyadic drumming. This activation was not only related to reward, but also predicted the amount of prosocial behavior engaged in afterward, and was furthermore mediated by the interindividual differences in the ease with which participants learned to produce the joint rhythm.
As mentioned earlier, it is assumed that action and perception share common neural substrates (Hommel et al., 2001; Rizzolatti and Craighero, 2004). Consistent with this general principle, and its specific instantiation in the sensory-motor theory of rhythm perception (Todd and Lee, 2015), it is assumed that beat perception entails a covert action-simulation process that involves triggering motor representations that would be necessary for action execution and for predicting action outcomes. Relatedly, dynamic attending theory (Large and Jones, 1999) proposes that temporal prediction emerges via a process of entrainment, a phenomenon whereby two or more "systems" become coupled. In the context of music, this coupling can take place at several levels (neural, perceptual, autonomic-physiological, motor, social, and subjective; see Trost et al., 2017). Notably, there is evidence that emotional experience is related to entrainment during listening to percussion (Trost et al., 2014, 2017; Cameron et al., 2019).
Thus, the perception of an action such as listening to samba percussion, which requires high interpersonal synchronization (i.e., good synchrony among instruments or playing "in sync"), can be expected to recruit similar brain areas as executing that action, including motor regions and reward regions, and brain areas related to prosocial behavior and emotional processing, in particular feeling connected with others. Furthermore, these processes can be expected to be facilitated by high synchrony between percussion instruments due to enhanced action simulation and entrainment at multiple levels leading to stronger "resonance" in the system.
With regard to emotional processing during music listening, Becker (2004) proposed that some individuals show profound emotional experiences when listening to music. Such "deep listeners" show enhanced physiological responses (e.g., goosebumps, a racing heartbeat) to music and describe themselves as having relatively pronounced emotional responses when listening to music (Becker, 2004). Becker noted the relationship between such pronounced emotional experiences during music listening and experiences of trance and ecstasy in religious ceremonies (cf. Durkheim, 2008). However, only a handful of studies have examined deep listeners. For example, Penman and Becker (2009) showed that deep listeners have higher galvanic skin responses, indicating stronger emotional responses, than individuals from various control groups, and Chapin et al. (2010) employed deep listeners to investigate the neural correlates of emotional responses during natural music listening. Such depth of emotional responses to music at a trait level might also contribute to the above-mentioned prosocial behavior and affiliative emotions when experiencing synchrony in musical ensembles. The subgenual cingulate cortex is an area that has been implicated in attachment, affiliative emotions, altruistic decisions, prosocial behavior, emotional processing, and ingroup-related effort (Bartels and Zeki, 2004; Aron et al., 2005; Moll et al., 2006, 2012; Krueger et al., 2007; Zahn et al., 2009; Rusch et al., 2014; Bortolini et al., 2017) and thus might be the intersection for emotional experiences that facilitate affiliative emotions and prosocial behavior. In a relevant study, Rusch et al. (2014) investigated the degree to which participants perceive their family as a distinct and cohesive group (high entitativity). They found, in concordance with the role of the subgenual cingulate cortex in affiliation (Moll et al., 2012), that increased activity in the subgenual cingulate cortex was related to high entitativity, reflecting group belongingness. The authors conclude that the subgenual cingulate cortex may represent the link between kin-related emotional attachment and group perception.
Outline of the Present Study
In the present study we aimed to bring together different lines of research on experiencing synchronous action and feeling the groove in rhythmic musical sounds in order to explore links between synchrony perception, in particular its temporal properties, and affective experiences induced by music. To this end, we investigate the effect of varying degrees of synchrony between multiple instrumental parts from a samba percussion ensemble on experienced pleasantness and neural processing during listening. More specifically, we used naturalistic sounds of high-quality professional recordings of a samba percussion ensemble, namely the rhythm or percussion section (Bateria) of a Brazilian samba school (Escola de samba 1 ). These recordings consisted of sounds played "in sync" (with natural microtiming deviations occurring in the naturalistic recordings but with no additional asynchrony between instrumental parts) or manipulated sounds played "out of sync" (with varying degrees of asynchrony added to the naturalistic recordings, creating fixed time shifts between instrumental parts). For simplicity, the sounds of the percussion section of a Brazilian samba school will be referred to as "samba percussion" throughout the manuscript.
First, in a behavioral study, we investigated how: (1) varying degrees of synchrony between multiple instrumental parts; and (2) loudness influence the pleasurable experience of a listener and the desire to move/dance with the samba percussion sounds. We hypothesized that listening to more synchronous stimuli would induce: (a) greater pleasantness; and (b) a stronger desire to move/dance. This is motivated by the assumptions that: (a) experiencing synchrony (compared to experiencing asynchrony) between instruments during joint drumming triggers activity in brain areas processing reward (and might be related to the rhythmic abilities of drummers, Kokal et al., 2011); and (b) rhythm processing and beat perception are related to activation of motor areas in the human brain (e.g., Chen et al., 2008; Grahn and Rowe, 2009) that facilitate the desire to move/dance. Furthermore, we assumed that very loud stimuli would induce greater pleasantness and a stronger desire to move/dance than less loud versions of the stimuli. An extension of the sensory-motor theory of rhythm perception (Todd and Lee, 2015) proposes that rhythm perception is a form of vestibular perception, pointing to overlapping brain areas in vestibular and rhythm processing. In addition, the authors assume that the neuroanatomical and functional connections of the vestibular system with cortico-subcortical regions involved in emotion and motivation (Rajagopalan et al., 2017) are the basis of a reward-based learning mechanism that underlies the compulsion to move with a beat. The involvement of the vestibular system in rhythm perception has further implications. It has been shown that samples of loud techno music (above 90 dB sound pressure level, SPL) elicited a greater vestibular response, suggesting that vibrotactile stimulation might be a source of pleasure, as such stimulation is sought in self-selected motion as in swings, rocking chairs or fun parks, and even in self-evoked motion like head banging to music. Thus, vestibular responses may account for pleasurable sensations during listening to loud rock and dance music. With regard to our stimuli, we expected an effect of loudness only when the instruments in the percussion section are "in sync". We assume that a higher beat salience is conveyed by "in sync" stimuli (due to unambiguous timing cues), especially when presented very loud, in which case there will be a stronger vestibular response.

1 Background to Brazilian samba schools: A Brazilian samba school is an association that typically comprises hundreds of people involved in dancing, drumming (as well as playing percussion and other instruments), and singing during well-rehearsed carnival parades featuring original costumes and vehicles, following a specific theme and rules governing competition with other schools. The percussion section ("Bateria de escola de samba") comprises about 250-300 percussionists in the top samba schools in Rio de Janeiro. See section "Materials and Methods", Figure 1 and Supplementary Material for a description of the instruments and the typical rhythm. See here for a live recording: https://www.youtube.com/watch?v=QgXsUZHaYc4 and https://www.youtube.com/watch?v=Adw8u6gL2HE for a demonstration of single instruments in a Brazilian samba school percussion section. Being a member or fan of a particular samba school in Brazil contributes to social identity.
In contrast, sounds played "out of sync", i.e., sounds with varying degrees of asynchrony between instrumental parts, have rhythmical properties that would be more difficult for action-simulation processes during perception, and this would also influence the vestibular response and its relations to reward. Finally, we examined (3) whether interindividual differences in rhythm and time perception abilities are related to the experience of pleasantness and the processing of asynchronies in samba percussion.
Second, in an fMRI study, we investigated the effects of varying degrees of synchrony between multiple instrumental parts on experienced pleasantness and its neural processing while amateur musicians listened to samba percussion. We also assessed the degree to which participants were "deep listeners" (Becker, 2004; Chapin et al., 2010), based on self-reports of emotional responsiveness to samba percussion in daily life, to explore the potential relevance of this construct to inter-individual differences in neural processing. Based on evidence that emotional experiences are related to neural entrainment during music listening (Trost et al., 2014, 2017; Cameron et al., 2019), we expected that such experiences would be enhanced by higher synchrony between samba percussion instruments. We selected the subgenual cingulate cortex as an a priori region of interest for additional analyses because of its involvement in emotional processing, including attachment and affiliative emotions related to prosocial behavior and social group identification (Bartels and Zeki, 2004; Aron et al., 2005; Moll et al., 2006, 2012; Krueger et al., 2007; Zahn et al., 2009; Rusch et al., 2014; Bortolini et al., 2017). We were specifically interested in whether differential responses in this region of interest are related to individual differences in the self-reported intensity of emotional responses when listening to samba percussion in daily life.
Behavioral Study
Twelve volunteers (mean ± standard deviation, SD: 30.6 ± 6.9 years, range: 20-40 years, eight female) participated in the study. Eight of the 12 participants played one or more instrument(s) and all participants reported that they were familiar with and liked listening to the sounds and rhythms of samba percussion (see Supplementary Material 1.1 Participants for information about their musical background).
Functional Magnetic Resonance Imaging Study
The final sample for the fMRI experiment comprised 21 new volunteers (mean age ± SD: 34.4 ± 5.5 years; range: 26-42, five female), none of whom had participated in the behavioral study. All of these participants had musical experience (16 amateur and five professional musicians), played an instrument, and were familiar with and liked listening to the sounds and rhythms of samba percussion (see Supplementary Material 1.1 Participants for the detailed information about musical background). 19 out of the 21 participants reported that they were able to play the typical samba percussion rhythm. None of the participants had a history of neurological disorders or were taking centrally active medications. 20 participants were right-handed and one participant was left-handed according to the Edinburgh handedness inventory (Oldfield, 1971).
All participants in both studies had normal or corrected-to-normal vision (contact lenses) and normal hearing abilities. They gave their written informed consent to participate and were naïve to the hypotheses and the manipulation of the stimuli. The experiments were performed in accordance with ethical standards compliant with the Declaration of Helsinki and had been approved by the local scientific ethics committee (Copa D'Or Hospital, Rio de Janeiro, Brazil, No. 57.482).
Stimuli
The typical sequence performed by a Brazilian samba school percussion section was arranged and recorded by a professional musician in Rio de Janeiro using an overdubbing procedure on multiple audio tracks (tempo: 135 bpm; 2/4 bar; 135 s; including two parts, each with a refrain = bossa; see Supplementary Material, 1.2 Recording of stimuli for a comprehensive description and analyses of stimulus features). The following instruments were included in this multitracked percussion section: repinique (high-pitched double-headed drum); surdos (low-pitched bass drums; three different versions, the first, second and third surdo); caixas (snare drums); chocalhos (shakers); cuícas (high-pitched Brazilian friction drums); tamborins (small frame drums); and agogôs (agogo bells). For the behavioral study, an excerpt from the beginning of the recording (24 s) was used, consisting of the call of the repinique (4 bars long), the entrance of the surdos, caixas and tamborins, and, finally, 14 bars after the start, the entrance of the chocalhos, cuícas, and agogôs. In order to increase the number of trials for the fMRI experiment, three different excerpts (each 19.555 s, containing either the call of the repinique or a break/refrain = bossa) of the recording were used. The synchrony manipulation was applied to these excerpts by using Logic Pro 9 (Apple Inc., Cupertino, CA, United States) either to align the tracks of the single instruments "in time/in sync" (0 ms delay between instrumental parts) or to delay the caixas (the snare drums that play the underlying beat of the rhythm) by 28 ms, 55 ms or 83 ms ("out of time/out of sync"). These time shifts are multiples of beat subdivisions at the 64th-note level (see Figure 1 for relations of the rhythms and accents between the caixas and the three different surdos; see also Supplementary Material 3 Listening Examples Audio 1-4). Note that the stimuli used to create the time shifts for the synchrony manipulation comprise naturalistic recordings that contain the microtiming deviations that normally occur in live performances. Furthermore, the sound intensity of the stimuli used in the behavioral study was manipulated by normalizing the mean root mean square (RMS) amplitude of the audio waveform, resulting in very loud (95 dB SPL) and loud (85 dB SPL) versions of the stimuli. There was no manipulation of sound intensity for the stimuli in the fMRI study (all stimuli were presented at the same loudness).
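As a quick arithmetic check (an illustrative Python sketch, not part of the study materials), the chosen delays correspond to multiples of the 64th-note duration at the recording tempo of 135 bpm:

bpm = 135
quarter_ms = 60_000 / bpm           # 444.4 ms per quarter note at 135 bpm
sixty_fourth_ms = quarter_ms / 16   # one 64th note: ~27.8 ms (16 subdivisions per beat)

multiples = [k * sixty_fourth_ms for k in (1, 2, 3)]
print([round(m, 1) for m in multiples])  # [27.8, 55.6, 83.3] -> the 28/55/83 ms shifts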
Experimental Procedure and Data Analyses for the Behavioral Study
In the behavioral study, participants were tested while seated in an experimental room. Stimulus delivery was controlled by Presentation 16.4 software (Neurobehavioral Systems Inc., Berkeley, CA, United States) running on a computer, and stimuli were heard via Sennheiser HD280PRO headphones (Sennheiser Electronic GmbH & Co. KG, Wedemark, Germany). The experimental procedure in the behavioral study comprised: (1) perception tasks that assessed, first, experienced pleasantness and the desire to move/dance and, second, the ability to perceive timing shifts in percussion sounds, both using the stimuli described above; (2) tests of participants' general rhythmic and auditory perceptual abilities; and (3) questionnaires.
(1) The first perception task followed a 2 (task: pleasantness/desire-to-move rating) × 2 (loudness: very loud/loud stimuli) × 4 (synchrony between instrumental parts: 0, 28, 55, or 83 ms delay of the snare drums) factorial design. In each trial, participants were required to listen to a stimulus sequence and to judge their experienced pleasantness and their desire to move/dance with it (both on a rating scale from 0 = "not at all" to 10 = "very much"). All stimuli were presented once in a randomized order. In the second perception task, participants' ability to detect asynchronies between percussion parts was examined for the four different levels of synchrony (0, 28, 55, or 83 ms delay of the snare drums), using only the very loud stimuli (95 dB SPL). In each trial, participants were required to judge the degree to which the instruments were being played in time (i.e., in synchrony) on a rating scale ranging from 0 = "not at all synchronously" to 10 = "perfectly synchronized". This test was repeated (i.e., done twice), and all stimuli were presented in a randomized order. To calculate an index of sensitivity to the timing shifts in the percussion sounds, each participant's (subjective) ratings of perceived synchrony between instruments were correlated with the (objective) values (0, 28, 55, 83) used to delay the snare drums, using the Pearson correlation coefficient r.
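For illustration, the sensitivity index just described can be computed as follows; the Python sketch below uses hypothetical ratings for a single participant rather than data from the study:

import numpy as np

delays = np.array([0, 28, 55, 83])        # objective snare-drum delays, ms
ratings = np.array([9.0, 8.0, 4.0, 2.0])  # hypothetical 0-10 synchrony ratings

r = np.corrcoef(delays, ratings)[0, 1]    # Pearson correlation coefficient
print(round(r, 2))                        # strongly negative -> high sensitivity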
(2) In order to evaluate participants' general rhythmic abilities, the rhythmic part of the Musical Ear Test (MET; Wallentin et al., 2010) was used. Participants were required to compare two rhythm patterns by judging whether the patterns were the same or different. Presentation of each pair of rhythm patterns started with 4 metronome clicks (100 bpm, 4/4 bar), followed by a sequence of 4-11 wood block sounds (one bar, first rhythmical phrase), further metronome clicks to complete the second bar, a sequence of 4-11 wood block sounds (one bar, second rhythmical phrase), and further metronome clicks to complete the fourth bar. Fifty-two pairs of rhythm patterns that varied in difficulty were presented, with half of the pairs containing a change in the second pattern.
In order to evaluate participants' perceptual abilities, an adaptation of an auditory flutter fusion task (Rammsayer and Altenmueller, 2006) was created to determine perceptual thresholds for perceiving two sounds as being simultaneous. Stimuli consisted of sound pairs created from two bongo sounds (one with a high pitch, the other with a low pitch) that were separated by a gap ranging from 0 to 200 ms. In each experimental trial, two synchronous sound pairs (0 ms gap) and one sound pair that contained a gap (the deviant stimulus) were presented. Participants indicated which of the three sound pairs had a gap. Stimulus presentation followed an adaptive staircase procedure (see Janata and Paroo, 2006): if the participant responded correctly, the gap in the deviant stimulus decreased in the following trial, whereas the gap increased in the trial following an incorrect response. The adaptive testing procedure required participants to produce ten turnaround points (incorrect answers), and the gap size values for the last correct response before the incorrect response were averaged over the last six turnaround points to determine the perceptual threshold.
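The staircase logic just described can be sketched in a few lines of Python; the 10 ms step size and the simulated observer below are assumptions made for illustration, since the exact step schedule is not reported here:

def staircase_threshold(respond, gap=200.0, step=10.0, n_turnarounds=10):
    # respond(gap) returns True for a correct answer at the given gap (ms)
    turnaround_gaps = []       # last correct gap before each incorrect answer
    last_correct_gap = gap
    while len(turnaround_gaps) < n_turnarounds:
        if respond(gap):       # correct: shrink the gap in the next trial
            last_correct_gap = gap
            gap = max(gap - step, 0.0)
        else:                  # incorrect answer counts as a turnaround point
            turnaround_gaps.append(last_correct_gap)
            gap += step
    # threshold: average over the last six turnaround points
    return sum(turnaround_gaps[-6:]) / 6.0

# e.g., an idealized observer that detects gaps larger than 8 ms:
print(staircase_threshold(lambda g: g > 8.0))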
(3) Finally, participants completed questionnaires that assessed: (a) their musical background by custom-created questions specific to samba percussion, practicing hours, and experience playing in musical ensembles (see Supplementary Material 1.1 Participants); (b) emotional experience (Geneva Emotional Music Scale, GEMS-9, Zentner et al., 2008) and (c) physiological sensations (based on questionnaires for identifying "deep listeners", cf. Becker, 2004;Chapin et al., 2010) when listening to the samba percussion in daily life.
Physiological sensations in response to samba percussion (in daily life) were assessed based on questionnaires designed to identify "deep listeners" (cf. Becker, 2004;Chapin et al., 2010). Participants were required to rate on a 5-point scale (1 = "never" to 5 = "always") the degree to which they experience physiological sensations when normally listening to samba percussion: goose bumps, sensations in the stomach, tingles down the spine, shivering, heart palpitations, lumps in the throat, a racing heart, synesthesia.
The data analyses were performed with SPSS 20.0 (IBM Corp., Armonk, NY, United States) using multivariate tests for repeated-measures Analysis of Variance (ANOVA), paired t-tests for comparisons of stimuli, and the Pearson correlation coefficient r. Consistent with our directional hypotheses, one-tailed significance testing was applied.
Experimental Procedure for the Functional Magnetic Resonance Imaging Study
During the fMRI experiment, participants lay in a supine position on the scanner bed, with the right hand resting on a scanner-compatible response box. Written instructions were projected by an LCD projector onto a screen behind the participant's head (viewed via a mirror on top of the head coil). All auditory stimuli were presented via scanner-compatible headphones (MRconfon GmbH, Magdeburg, Germany). Stimulus delivery was controlled by Presentation 16.4 software (Neurobehavioral Systems Inc., Berkeley, CA, United States) running on a computer.
The fMRI experiment comprised 48 experimental trials (12 for each of the four experimental conditions: varying degrees of synchrony with 0, 28, 55, and 83 ms delay) and six null events (20 s silence) that were presented in 2 runs (each 15 min, each comprising 24 experimental trials and three null events). Experimental trials followed the same structure: a fixation cross was presented for 5 s, and then, in addition to the cross, percussion sounds (19.555 s) were presented. Participants were instructed to listen to the percussion stimuli carefully and passively, without any movement (see also Supplementary Material 1.3 Control for movement during the fMRI experiment). After listening, they were required to rate how pleasant (enjoyable) listening to the excerpt was on a 10-point rating scale with the anchor points 1 = "very unpleasant" and 10 = "very pleasant", presented for 10 s (by moving a red circle to the corresponding position on the scale). Stimuli with varying degrees of synchrony between instruments were presented across experimental trials in a pseudo-randomized order (see Supplementary Material 1.4 Stimulus presentation order in the fMRI experiment). Participants were familiarized with the task prior to the scanning session using different excerpts than those used during fMRI scanning. Note that ratings of the desire to move/dance were not collected in the fMRI experiment.
After fMRI scanning, a debriefing session took place covering strategies and other aspects of task performance during the fMRI study (see Supplementary Material 1.5 Debriefing after fMRI scanning). As in the behavioral study, participants also completed questionnaires that assessed: (a) their musical background, by customized questions specific to samba percussion, practice, and ensemble experience (see Supplementary Material 1.1 Participants); (b) emotional experience (GEMS-9, Zentner et al., 2008); and (c) physiological sensations when listening to samba percussion in daily life.
Participants were asked to report their emotional experience (using the GEMS-9, Zentner et al., 2008, as in the behavioral study described above) and physiological sensations (according to questionnaires that identify "deep listeners", cf. Becker, 2004; Chapin et al., 2010) when listening to samba percussion in general. Here (in contrast to the behavioral study) participants rated on a 5-point scale (1 = "never" to 5 = "always") only the five most prevalent physiological sensations identified in the behavioral study: goose bumps, sensations in the stomach, tingles down the spine, shivering, and a racing heart. Furthermore, in order to identify "deep listeners", participants judged on a 5-point rating scale whether their emotional responses when listening to samba percussion in daily life are stronger or weaker than/equally strong as the emotional responses of most people they know (1 = "much less", 2 = "less", 3 = "equal", 4 = "stronger", or 5 = "much stronger"). Participants further indicated the role of samba percussion in their life (1 = "no role at all" to 5 = "great role in their life") and how strongly they experience emotions when listening to samba percussion (1 = "not at all" to 5 = "intensively").
Finally, participants' ability/sensitivity to detect asynchronies between percussion parts in the stimuli was examined (comparable to the second perception task of the behavioral study). In this behavioral test, participants listened to 24 stimuli (two presentations of each) again outside the fMRI scanner and judged the degree to which the instruments were played in time (i.e., in synchrony) on a rating scale ranging from 1 = "not at all synchronously" to 10 = "perfectly synchronized". Stimulus delivery was controlled by Presentation 16.4 software (Neurobehavioral Systems Inc., Berkeley, CA, United States; running on a computer) and stimuli were heard over Sennheiser HD280PRO headphones (Sennheiser Electronic GmbH & Co. KG, Wedemark, Germany). The presentation order followed the constraints described in Supplementary Material 1.4. In order to calculate an index of sensitivity to the timing shifts in the percussion sounds, each participant's (subjective) ratings of perceived synchrony between instruments were correlated with the (objective) values (0, 28, 55, 83) used to delay the snare drums, using the Pearson correlation coefficient r as implemented in Matlab R2012a (The Mathworks Inc., Natick, MA, United States).
Behavioral data analyses were performed with SPSS 20.0 (IBM Corp., Armonk, NY, United States) using multivariate tests for repeated-measures Analysis of Variance (ANOVA) and paired t-tests for comparisons of stimuli with different degrees of synchrony between instrumental parts.
MRI Acquisition and Data Analysis
MR scans were performed on a 3 Tesla Philips Achieva MR scanner (Koninklijke Philips N.V., Amsterdam, The Netherlands) with an eight-channel SENSE head coil. Two runs of 360 functional whole-brain images, sensitive to the blood oxygenation level dependent (BOLD) signal, were acquired using a single-shot T2*-weighted fast-field echo (FFE) echo-planar imaging (EPI) sequence. Each volume consisted of 39 AC-PC-aligned slices covering the whole brain with the following parameters: in-plane voxel size 3 mm × 3 mm with a slice thickness of 3.0 mm and a 0.75 mm interslice gap; repetition time (TR) 2.5 s; echo time (TE) 22 ms; flip angle 90°; acquisition matrix 80 × 80; field of view (FOV) 240 × 240 × 145.5 mm; ascending image acquisition. Before each functional run, five dummy volumes were collected for T1 equilibration purposes. A SENSE factor of 2 and "dynamic stabilization" were additionally employed. These parameters were based on careful sequence parameter optimization in order to maximize the temporal signal-to-noise ratio (Bellgowan et al., 2006; Bodurka et al., 2007) in brain regions that normally suffer from magnetic susceptibility effects, including the basal forebrain areas and ventromedial regions of the prefrontal cortex. Additionally, a set of anatomical images was acquired (see Supplementary Material 1.6 fMRI parameters for anatomical images). Head motion was restricted with foam padding and straps over the forehead and under the chin (estimated translation and rotation parameters were inspected and never exceeded 2 mm or 2 degrees). fMRI data were analyzed using Statistical Parametric Mapping SPM8 implemented in Matlab R2012a (The Mathworks Inc., Natick, MA, United States). All functional images were preprocessed by realigning all volumes of each subject to the first functional volume and, in a second step, to the mean image. Functional images were co-registered to the 3D anatomical image of the participant. The 3D anatomical image was segmented, the gray matter segment was normalized to a gray matter template corresponding to the Montreal Neurological Institute (MNI) brain template, and the obtained parameters were used for normalization of the functional data. The voxel dimensions of each reconstructed functional scan were 3 × 3 × 3 mm. Finally, functional images were spatially smoothed with a 6 mm full-width half-maximum Gaussian filter. In the first-level analysis, preprocessed images of each participant were analyzed with a General Linear Model (GLM) according to a factorial design using the four experimental conditions (varying degrees of synchrony between instruments). For each functional run, the first-level GLM included four predictors of interest covering the experimental manipulations, namely the delay between the snare drums (caixas) and the other instruments in the samba percussion stimuli: (1) 0 ms; (2) 28 ms; (3) 55 ms; (4) 83 ms. These were modeled as boxcar functions with a length of 19.555 s convolved with the canonical hemodynamic response function. Moreover, a predictor of no interest that covered the time for presenting the rating scale and the participants' responses was included by modeling a boxcar function with a length of 10 s convolved with the canonical hemodynamic response function. The six movement parameters were included as further predictors of no interest. Low-frequency drifts were removed from the functional runs using a high-pass filter with a 516 s cutoff, and a correction for autocorrelation [AR(1)] was applied.
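To illustrate how such a predictor is constructed in general (a generic sketch, not the authors' SPM pipeline), the following Python snippet builds a 19.555 s boxcar at a few hypothetical stimulus onsets, convolves it with a canonical-style double-gamma HRF, and samples the result at the TR of 2.5 s:

import numpy as np
from scipy.stats import gamma

TR, n_scans, dt = 2.5, 360, 0.1
t = np.arange(0, n_scans * TR, dt)          # high-resolution time grid, s
hrf_t = np.arange(0, 32, dt)
hrf = gamma.pdf(hrf_t, 6) - gamma.pdf(hrf_t, 16) / 6.0  # double-gamma HRF shape

boxcar = np.zeros_like(t)
for onset in (5.0, 60.0, 120.0):            # hypothetical stimulus onsets, s
    boxcar[(t >= onset) & (t < onset + 19.555)] = 1.0

# one column of the GLM design matrix, downsampled to the 360 scan times
regressor = np.convolve(boxcar, hrf)[:len(t)][::int(TR / dt)]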
In the first-level analyses, a contrast was generated that tested for increasing responses with increasing synchrony between instruments: 0 > 28 > 55 > 83 ms (i.e., the linear contrast [2 1 -1 -2]). These contrast images of the single participants were used in a second-level one-sample t-test for random-effects analysis. Additionally, a parametric analysis was conducted with the index describing the rhythmic abilities of the participants (correlation coefficients obtained from the synchrony judgment task). For identified areas, we first report activations that were significant at p < 0.05, corrected for multiple comparisons (using the Family Wise Error Rate, FWE, at the cluster level) with a minimum cluster size of 10 voxels. Additionally, we report the results in an exploratory manner at a significance level of p < 0.005, uncorrected for multiple comparisons, for activation clusters with a minimum size of 10 voxels. Displays of activations were created by means of the software package MRIcron 2 by superimposing SPM t-maps resulting from the second-level analysis on an MNI standard brain. Labeling of activation clusters was done with the anatomy toolbox 3, bspmview 4, and brain atlases. Region-of-interest analysis was done using MarsBaR (Brett et al., 2002) 5 by extracting parameter estimates for the contrast of increasing synchrony between instruments (0 > 28 > 55 > 83 ms) at a predefined coordinate for the subgenual cingulate cortex (Moll et al., 2006; x = -2, y = 15, z = -5, using a 10 mm sphere) and, as a control, within an anatomical mask of the nucleus accumbens (Pauli et al., 2018).
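For clarity, applying the [2 1 -1 -2] contrast amounts to a weighted sum of the per-condition parameter estimates at each voxel; the following toy Python example uses hypothetical beta values:

import numpy as np

weights = np.array([2, 1, -1, -2])      # conditions ordered 0, 28, 55, 83 ms delay
betas = np.array([1.2, 0.9, 0.4, 0.1])  # hypothetical voxel parameter estimates

effect = weights @ betas                # positive: activity rises with synchrony
print(effect)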
For the second perception task, where participants evaluated synchrony between instruments explicitly (using only the very loud stimuli; Figure 2C), an ANOVA analyzing the levels of synchrony confirmed a significant main effect [F(3, 9) = 13.4, p < 0.001, η² = 0.82; with significant differences between all levels of synchrony, ps < 0.01, Bonferroni corrected], consistent with sensitivity to the asynchrony manipulation. Stimuli with more asynchrony between instrumental parts received lower synchrony ratings, i.e., were perceived as being played out of time.
Participants' Rhythmic and Perceptual Abilities
In order to explore interindividual differences, two tests on measures of participants' rhythmic and auditory perceptual abilities were analyzed. Performance on the Musical Ear Test (MET) ranged from 61.5 to 88.5% accuracy rate (mean ± SD: 79.6 ± 7.8%), i.e., well above chance (50%). Perceptual thresholds for perceiving two sounds as simultaneous had a range from 2.7 to 21.8 ms (mean ± SD: 8.0 ± 5.7 ms), i.e., below the levels of asynchrony in our manipulated stimuli.
We assumed that participants with better rhythmic and perceptual abilities would be better at detecting asynchronies between instrumental parts, as manipulated in our stimuli, and that this sensitivity should be reflected in participants' ratings. Therefore, we calculated sensitivity indices by computing correlations between the (objective) values (0, 28, 55, 83) used to delay the snare drums in our stimuli and participants' (subjective) ratings (ranging from 0 to 10) of (A) felt pleasantness, (B) the desire to move/dance, and (C) the direct evaluation of synchrony. Individual participants' sensitivity index coefficients ranged from r = 0.23 to r = -0.96 (mean ± SD: r = -0.62 ± 0.39) for the pleasantness ratings (A), from r = 0.06 to r = -0.92 (mean ± SD: r = -0.70 ± 0.31) for the ratings of the desire to move/dance (B), and from r = -0.30 to r = -0.95 (mean ± SD: r = -0.83 ± 0.18) for the direct rating of synchrony (C). More negative values indicate greater sensitivity to the synchrony manipulation (i.e., stimuli with higher asynchrony between instrumental parts were given lower subjective ratings). The three sensitivity indices were in turn correlated with: (1) performance on the test of rhythmic abilities (MET range 61.5-88.5% correct answers); and (2) perceptual thresholds for perceiving two sounds as simultaneous (range 2.7-21.8 ms). Rhythmic abilities (performance in the MET) were found to be related to the sensitivity indices based on pleasantness ratings (r = -0.57, p < 0.05) and ratings of the desire to move/dance (r = -0.55, p < 0.05). Specifically, participants who achieved better performance in the rhythmic abilities test rated more synchronous stimuli as being more pleasurable and expressed a stronger desire to move/dance. No such correlation was observed for the sensitivity index based on direct ratings of synchrony (C), and there were likewise no significant correlations between perceptual thresholds for perceiving sounds as being simultaneous and any of the three ratings-based sensitivity indices.
Questionnaire Responses
Participants indicated that they normally feel emotions such as joyful activation (rating mean ± SD: 4.5 ± 0.5 on a 5-point scale) and power (4.2 ± 1.3; see Supplementary Figure 1A for all ratings) when listening to samba percussion in daily life. Furthermore, participants reported usually having physiological sensations such as goose bumps (3.8 ± 1.0) or a racing heart (3.1 ± 1.1) when listening to samba percussion (see Supplementary Figure 1B for all ratings).
Behavioral Measures
Behavioral responses during functional magnetic resonance imaging scanning: Confirming our objective categorization of stimuli, participants' ratings of how pleasant it was to listen to the music excerpts, obtained during fMRI scanning, revealed that listening to synchronous stimuli, i.e., when instruments were aligned "in time", was most pleasant (mean ± SD: 8.9 ± 0.7). Listening to the asynchronous ("out of time") stimuli with a 28 ms delay of the snare drums was still rated as pleasant (8.4 ± 0.9). However, ratings of pleasantness dropped considerably for the stimuli that delayed the snare drums by 55 ms (4.5 ± 1.7) or 83 ms (2.7 ± 1.7). An ANOVA analyzing the levels of synchrony confirmed a significant main effect [F(3,18) = 74, p = 0.001, η² = 0.93]. Differences between all synchrony conditions were significant (ps < 0.001 for comparisons between conditions, Bonferroni corrected).
Ability/sensitivity to detect asynchronies between instrumental parts: In a behavioral test after fMRI scanning, participants were required to evaluate the synchrony between instrumental parts in the samba percussion stimuli on a scale ranging from 1 = "not at all synchronously" to 10 = "perfectly synchronized". More synchronous stimuli were evaluated as being better synchronized (mean ± SD 0 ms: 9.1 ± 0.8; 28 ms: 8.2 ± 1.1) than stimuli that had larger timing shifts (55 ms: 3.6 ± 1.5; 83 ms: 2.3 ± 1.1). An ANOVA analyzing the judgments for different levels of synchrony confirmed a significant main effect [F (3,18) = 134, p < 0.001, η 2 = 0.96]. The differences between all levels of synchrony were significant (ps < 0.001 for comparisons between conditions, Bonferroni corrected).
In order to assess participants' abilities to perceive different degrees of synchrony in the samba percussion stimuli, we calculated an index of the sensitivity to timing shifts, as in the behavioral study, by correlating the (objective) asynchrony values (0, 28, 55, 83 ms) with the participants' (subjective) ratings. More negative values reflect higher concordance between perception of synchrony and objective synchrony, with r = -1.0 indicating a perfect match. Correlation coefficients ranged from r = -0.93, indicating high sensitivity to different levels of synchrony, to r = -0.55, indicating moderate sensitivity to asynchronies (mean ± SD: r = -0.84 ± 0.10).
Questionnaires: Responses in the questionnaire about participants' general emotional experience (GEMS-9, Zentner et al., 2008) when listening to samba percussion in daily life indicate that participants normally feel emotions such as joyful activation (rating mean ± SD: 4.4 ± 0.7 on a 5-point scale) and power (4.0 ± 1.0) as well as wonder (4.0 ± 1.1; see Supplementary Figure 2A for all ratings). Furthermore, participants reported usually having physiological sensations such as goose bumps (3.8 ± 1.1) and a racing heart (3.5 ± 1.3) when listening to samba percussion (see Supplementary Figure 2B for all ratings).
With regard to identifying "deep listeners", 14 participants reported that their emotional responses when listening to the samba percussion in daily life are stronger (n = 9) or much stronger (n = 5) than the corresponding emotional responses of most people they know, and seven participants indicated that their emotional responses are weaker than (n = 1) or equal to (n = 6) others' emotional responses. Evaluations of participants classified as "deep listeners" or "non-deep listeners" on emotional and physiological responses when listening to samba percussion (and several other ratings) can be found in Supplementary Material (2.3 Description of deep listeners and non-deep listeners by their self-reported evaluations, Supplementary Figure 3). Participants who considered themselves to have stronger emotional responses when listening to samba percussion ("deep listeners") reported feeling more joyful activation, wonder, transcendence and nostalgia as well as more physiological sensation of a "racing heart" (cf. Supplementary Figure 3) compared to participants (n = 7) who indicated that their emotional responses are weaker than or equal to others' emotional responses (non-deep listeners).
Effects of varying degrees of synchrony in samba percussion:
In order to identify which brain areas were more active for synchronous samba percussion, we calculated a contrast testing for brain responses showing increased activation when listening to stimuli with increasing synchrony between instruments (i.e., the contrast for 0 > 28 > 55 > 83 ms). Thus, significant activations in this analysis reflect stronger brain activity for stimuli that were more "in time" (i.e., with lower asynchrony between instruments). This analysis revealed stronger hemodynamic responses in the supplementary motor area (SMA proper) that were more pronounced in the left hemisphere, but also extended into the right hemisphere, the left middle frontal gyrus partly extending into the superior frontal gyrus (cluster covering Brodmann area, BA, 6 and 8), and the left premotor areas (BA6) extending into the primary motor (BA4) and somatosensory area (BA3). These areas showed significant results when correcting for multiple comparisons (p < 0.05, family-wise error, FWE, corrected on a cluster level, Table 1 and Figure 3).
In order to further explore the fMRI data, we report brain areas showing effects for the same contrast at an uncorrected significance level (p < 0.005, minimum cluster size of 10 voxels). The following activation clusters were found (see Table 1): right premotor area extending into the primary motor (BA4) and somatosensory area (BA3, BA1); right somatosensory area (BA3) extending into secondary somatosensory area (SII) and primary motor cortex; left rolandic operculum; right inferior frontal gyrus (pars triangularis); right middle frontal gyrus; left inferior parietal lobe (BA39); a cluster in the right cerebellum (lobule VIIa crus I hemisphere); bilateral putamen and hippocampal region (subicular complex, entorhinal cortex, cornu ammonis) extending into the amygdala; bilateral middle cingulate cortex; left middle occipital gyrus; left middle temporal gyrus and fusiform gyrus; left thalamus. Because the subgenual cingulate cortex is a small structure and a region of interest for further analysis, it can be mentioned that there is a small cluster (6 voxels, x = -3, y = 17, z = -10, Z = 2.93) at this exploratory significance level of p < 0.005. In order to explore brain responses in the nucleus accumbens (as a key structure of the reward network), we performed a region of interest analysis using an anatomical mask (Pauli et al., 2018) and found no significant results between experimental conditions (see Supplementary Figure 4).

FIGURE 3 | fMRI analysis of brain activation. Brain areas active for listening to samba percussion stimuli with varying degrees of synchrony between instruments. Contrast shows increasing brain activation with increasing synchrony between instruments in the ensemble at p < 0.05, FWE corrected at cluster level.
The reverse contrast exploring brain areas that show stronger activity for increasing asynchrony in samba percussion stimuli revealed no significant brain activations in motor-related areas when correcting for multiple comparisons. The brain areas revealed in an exploratory analysis at a lower, uncorrected significance level are listed in Supplementary Table 2.
Combined Behavioral and Functional Magnetic Resonance Imaging Results: Exploratory Analyses on Interindividual Differences
Relations with emotional responses to samba percussion in general: Parameters for brain activation from the contrast 0 > 28 > 55 > 83 ms were extracted for the subgenual cingulate cortex (region of interest defined as a 10 mm sphere at a predefined coordinate, Moll et al., 2006: x = -2, y = 15, z = -5) and compared between groups of participants who were classified as deep listeners versus non-deep listeners. Participants who reported stronger emotional responses when usually listening to samba percussion (deep listeners) showed stronger brain activation (parameter estimates, PE, mean = 0.58, SD 0.71, s.e.m. 0.19) than participants with weaker emotional responses [non-deep listeners: PE = -0.26, SD 0.67, s.e.m. 0.25; t(19) = 2.6, p < 0.05, two-tailed, see Figure 4]. In order to test whether this effect was specifically related to experiencing pleasantness, we analyzed the brain response in the reward area of the nucleus accumbens using an anatomical mask (Pauli et al., 2018) and found no differences ("deep listener" PE = 0.41, SD 0.65, s.e.m. 0.17; "non-deep listener" PE = 0.20, SD 0.80, s.e.m. 0.33; t < 1).

Sensitivity to differences in degree of synchrony: For this analysis, we used the behavioral index of sensitivity to timing shifts (i.e., the correlation between the objective values and subjective ratings for perceived synchrony acquired after fMRI scanning) in a parametric analysis of the fMRI data. Specifically, we tested whether fMRI effects for the contrast 0 > 28 > 55 > 83 ms, which showed increasing brain activation for increasing synchrony, are related to individual differences in sensitivity to asynchronies. No significant results were found when correcting for multiple comparisons (p < 0.05, FWE, corrected on a cluster level). However, given that the sample size of 21 participants is rather small and homogeneous in terms of musical experience (and our sample did not include non-musicians or participants with very poor rhythm perception abilities), we conducted an exploratory analysis of individual differences with uncorrected significance levels (p < 0.005, minimum cluster size of 10 voxels). This analysis revealed relations between a better ability to detect asynchronies in the behavioral test and hemodynamic responses for the contrast of increasing synchrony in instrumental parts (0 > 28 > 55 > 83 ms) in the left cerebellum and the temporal lobe/amygdala (see Supplementary Table 3).

FIGURE 4 | Brain activation of the subgenual cingulate cortex. Left side: region of interest (ROI) of the subgenual cingulate cortex defined as a 10 mm sphere at a predefined coordinate (Moll et al., 2006; x = -2, y = 15, z = -5). Right side: bar graphs show the mean and standard error of the mean for the parameter estimates (arbitrary units) in the subgenual cingulate cortex ROI for the contrast of increasing synchrony between instruments. The bars present 14 participants who believed that their emotional responses when listening to samba percussion in daily life are stronger than the emotional responses of most of the people they know and seven participants who indicated that their emotional responses are less than or equal to most of the people they know.
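A minimal sketch of this kind of spherical-ROI extraction and group comparison (volume shape, voxel coordinates, and the per-subject maps below are synthetic placeholders, not the study's data or pipeline):

```python
import numpy as np
from scipy import stats

def sphere_mask(shape, center_vox, radius_vox):
    """Boolean mask of a sphere (in voxel units) within a 3D volume."""
    grid = np.indices(shape)
    dist2 = sum((g - c) ** 2 for g, c in zip(grid, center_vox))
    return dist2 <= radius_vox ** 2

# Synthetic per-subject contrast maps (subjects x 3D volume)
rng = np.random.default_rng(0)
maps_deep = rng.normal(0.5, 0.7, size=(14, 40, 48, 40))     # 14 deep listeners
maps_nondeep = rng.normal(-0.3, 0.7, size=(7, 40, 48, 40))  # 7 non-deep listeners

roi = sphere_mask((40, 48, 40), center_vox=(19, 30, 15), radius_vox=5)
pe_deep = maps_deep[:, roi].mean(axis=1)      # mean parameter estimate per subject
pe_nondeep = maps_nondeep[:, roi].mean(axis=1)

t, p = stats.ttest_ind(pe_deep, pe_nondeep)
dof = len(pe_deep) + len(pe_nondeep) - 2
print(f"deep vs non-deep listeners: t({dof}) = {t:.2f}, p = {p:.4f}")
```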
DISCUSSION
In the current research, we investigated the effect of varying degrees of (a)synchrony between multiple instrumental parts in samba percussion music on listeners' groove-related subjective experiences and associated brain responses. First, experiences of pleasantness and the desire to move/dance were assessed for samba percussion stimuli with varying degrees of asynchrony presented at loud or very loud sound intensity levels in a behavioral experiment. Second, in an fMRI study, neural processing was examined in amateur musicians as they listened to the same stimuli with varying levels of synchrony (at only one loudness level) and rated how pleasant it was to listen to each excerpt. In both studies, interindividual differences related to auditory perceptual abilities and emotional responsiveness to samba percussion were examined in an exploratory manner. Listeners' subjective ratings indicated that increasing synchrony between instrumental parts was associated with greater experienced pleasantness in both studies, and with an increasing desire to move/dance in the behavioral study. Moreover, the fMRI results indicated that listening to samba percussion with increasing synchrony was accompanied by stronger brain activity in the supplementary motor area bilaterally, the left middle frontal gyrus, and the left premotor area. Furthermore, participants who reported having stronger emotional responses when listening to samba percussion in daily life displayed stronger brain activations in the subgenual cingulate cortex. Additionally, in the behavioral study, we found that participants with better rhythmic discrimination abilities were more sensitive to the synchrony manipulation in the stimuli. Below we discuss additional interindividual differences in neural processing related to perceptual sensitivity (observed at an uncorrected significance level) in the expectation that they will be useful for generating hypotheses in future studies.
Behavioral Effects of Varying Synchrony in Samba Percussion and Sound Intensity
We show in both the behavioral study and the behavioral measures of the fMRI experiment that individuals who are familiar with the rhythm of samba percussion music were sensitive to our objective manipulation of synchrony between instrumental parts, which was in a range (28-83 ms) that corresponds to asynchronies commonly observed in ensemble performance (Keller, 2014). The results of our behavioral study demonstrated that percussion stimuli with high synchrony between instrumental parts evoked more pleasantness and a greater desire to move/dance than stimuli with lower synchrony, suggesting a link with the concept of "feeling of groove" (cf. Witek et al., 2014; Matthews et al., 2019, 2020). Synchrony between instruments, or in other words "playing in time", may thus contribute to the experience of "groove" in ensemble music, possibly by modulating beat salience (Keil, 1995; Hove et al., 2007; Madison et al., 2011).
The behavioral results on experienced pleasantness obtained with samba-experienced listeners in both of our studies are consistent with other studies showing effects of familiarity on groove ratings (Senn et al., 2018), as well as effects of asynchronies in percussion sounds, especially when the snare drums were shifted in time (Frühauf et al., 2013; Senn et al., 2017). Corroborating this, Frühauf et al. (2013) found that judgments of drum pattern quality (assessed by different criteria including ratings about liking and feeling animated to move for drum patterns played on a snare and a bass drum) were higher for synchronized (i.e., quantized) stimuli than for stimuli with increasing deviations in microtiming (15 and 25 ms shifts of either instrument from the beat). In the stimuli for the current studies, the snare drum was shifted in time relative to other instruments in a fixed manner, i.e., resulting in a constant degree of overall asynchrony throughout an excerpt, while leaving the musical structure intact. Such fixed shifts differ from the variability in timing associated with increasing expressiveness, as such variability is applied locally, flexibly, and not constantly in terms of magnitude (Clarke, 1989; Keil, 1995). In the present studies, expressive timing variations were inherent in the natural recordings of the professional musician who produced the stimuli (see section "Materials and Methods" and Supplementary Material). Our experimental manipulation of asynchronies with a fixed shift of one part led to a less pleasant experience, which is consistent with Senn et al. (2017), who compared fixed and flexible time shifts. Although constant shifts may seem unnatural or "mechanical", in real life the spatial distance between co-performers, including instrument groups in street parades during carnival, leads to sound transmission delays that challenge musical ensemble coordination by introducing time shifts between instruments (Bartlette et al., 2006).
In our behavioral study, we additionally investigated effects of sound intensity. Confirming our hypothesis, we found that very loud (95 dB SPL) samba percussion sounds evoked more pleasantness and a greater desire to move/dance than loud (85 dB SPL) sounds, but only when the sounds were well synchronized across parts. A possible explanation for these effects is that very loud percussion sounds elicit a more pronounced vestibular response, in addition to auditory responses, that enhances emotional processing (Todd and Lee, 2015; Rajagopalan et al., 2017). Consistent with this interpretation, samples of loud techno music (above 90 dB SPL) have been found to elicit greater vestibular responses associated with pleasurable self-motion. Also in alignment with our results, Hove et al. (2020) found that listeners gave music clips higher groove and enjoyment ratings when the clips were relatively loud (see also Hove et al., 2019 for effects of loudness) and when auditory presentation was accompanied by additional tactile stimulation. However, in contrast to these findings, Stupacher et al. (2016) reported that sound intensity had no effect on groove ratings. In the relevant study, participants listened to music excerpts with high, medium, and low groove, presented with high (0 dB), mid (-6 dB), and low (-12 dB) sound intensities, and were asked to rate on a 7-point scale "to what extent did you feel that the musical excerpt grooved" (Stupacher et al., 2016, study 2). Their results showed an effect of groove levels on ratings, but there was neither a main effect of sound intensity nor an interaction of sound intensity with different levels of groove on ratings by the participants. Their findings are consistent with Janata et al. (2012), who report that listeners did not agree with the statement "The groove depends on the overall loudness of the music". In our studies, we had no direct rating for groove, but instead asked listeners about felt pleasantness (in both studies) and the desire to move/dance when listening to our music excerpts (in our behavioral study). The results of our behavioral study therefore support the assumption that loudness alone does not increase pleasurable experience for percussion stimuli, but rather that such experiences also depend on musical features such as synchrony between instrumental parts, which might specifically influence beat salience (i.e., not just overall sound salience).
Neural Correlates of Varying Degrees of Synchrony in Samba Percussion
Analyses of fMRI data focused on contrasting brain activations while listening to the varying degrees of asynchrony in the samba percussion stimuli. The results revealed that listening to stimuli with higher synchrony between instrumental parts was accompanied by stronger brain activation in motor-related areas including the supplementary motor area and premotor area, as well as the middle frontal gyrus (cluster covering areas of BA6 and 8), despite the fact that participants listened passively (without overt movement). This finding is broadly consistent with research (reviewed in the introduction) that points to the involvement of motor-related areas during perceptual processing (Hommel et al., 2001; Rizzolatti and Craighero, 2004; Schubotz, 2007) and more specifically in beat perception and rhythm processing (Grahn and Brett, 2007; Chen et al., 2008; Bengtsson et al., 2009; Grahn and Rowe, 2009, 2013; Todd and Lee, 2015; Matthews et al., 2020). Activity in these motor-related areas (especially the supplementary motor area, premotor cortex, basal ganglia, and cerebellum) is assumed to be functionally linked to internal representations of the beat that play a role in forming temporal predictions and musical expectations by recruiting prefrontal and parietal areas (Schubotz, 2007; Novembre and Keller, 2014; Vuust and Witek, 2014; Morillon and Baillet, 2017; Vuust et al., 2018; Koelsch et al., 2019). This view is consistent with the proposal that there is a dorsal stream of auditory processing that plays a role in sensorimotor coupling and prediction (Rauschecker, 2018). The dorsal stream comprises a route stemming from primary auditory areas over the inferior parietal lobe to premotor areas (BA 6 and 8), terminating in the inferior frontal gyrus (BA44). The present finding of stronger activity in the supplementary motor area, premotor area, and middle frontal gyrus (BA6/8) is in agreement with this proposed role of sensorimotor coupling in prediction (beat representation and forming rhythmic expectations). Stimuli where percussion instruments play "in time" presumably facilitate these prediction processes, as compared to stimuli where time shifts in instrument groups increase temporal complexity, due to greater clarity in signaling the underlying beat structure. This interpretation is consistent with evidence that pulse clarity is associated with a strengthened audio-motor coupling network (Burunat et al., 2017). Our exploratory results (at an uncorrected significance level) revealed an extended network of additional motor areas including the basal ganglia and the cerebellum, further corroborating the interpretation that audio-motor coupling was strengthened when listening to more synchronized stimuli.
In the domain of music perception generally, the coupling between auditory and motor areas is especially relevant to the extent that the strength of such coupling increases with musical expertise (Bangert et al., 2006; Lahav et al., 2007; Mutschler et al., 2007; Zatorre et al., 2007; Engel et al., 2012). Although we did not compare musicians and non-musicians, the fact that most of the participants in our fMRI study had expertise in playing samba percussion patterns fits with previous findings that action expertise contributes to the level of processing in motor-related areas during perception (Bangert et al., 2006; Lahav et al., 2007; Zatorre et al., 2007). The strength of audio-motor coupling in the brain also has behavioral consequences through effects on movement execution. Janata et al. (2012) studied groove as a sensorimotor phenomenon, and found that high groove music yielded more accurate movement synchrony in a tapping task, as well as inducing a greater amount of spontaneous movement, such as foot tapping (see also Witek et al., 2014). Furthermore, Leow et al. (2014, 2021) highlighted implications of audio-motor coupling for gait rehabilitation in clinical settings by showing that high-groove music elicited longer and faster steps than low-groove music in healthy adults (see also Hove and Keller, 2015). Trost et al. (2017) discussed how rhythmic entrainment contributes to inducing (positive) musical emotions by differentiating several levels of possible entrainment mechanisms (on neural, perceptual, autonomic physiological, motor, social, and subjective levels). Stronger audio-motor coupling while listening to samba percussion with high synchronization between instruments might trigger entrainment on several levels that in turn contribute to the desire to move and evoke pleasantness, with these multilevel processes collectively leading to the experience of "feeling the groove". The present results thus add to knowledge on the brain bases of music processing by demonstrating that synchronization between instruments might influence feeling the groove, most likely by modulating beat salience, predictive processing, and multilevel entrainment.
Experiencing pleasurable music activates brain areas implicated more broadly in reward processing (Blood and Zatorre, 2001; Koelsch et al., 2006; Koelsch, 2010; Trost et al., 2012). These areas include the striatal dopaminergic system (Salimpoor et al., 2011; Zatorre and Salimpoor, 2013), which is involved in peak emotional responses marked by "chills" (Panksepp, 1995; Grewe et al., 2007, 2009; Salimpoor et al., 2009; see Witek, 2009 for peak emotional responses/physiological arousal when experiencing groove). Also, Matthews et al. (2020) report activation of reward areas for music excerpts that are perceived to have higher groove. However, we failed to find reward-related brain activity when listening to samba percussion stimuli with greater synchrony between instruments. Specifically, we found no evidence for significantly stronger activation in the core structure of the reward network, i.e., the ventral striatum, although some of the brain areas at lower levels of significance might be considered as being part of a more extended reward network (e.g., putamen, amygdala, hippocampus, medial areas).
In an additional region of interest analysis (Supplementary Figure 4), we found a trend for higher activity in the nucleus accumbens for samba percussion with higher synchrony between instruments, together with high variability between participants (which presumably contributed to non-significant results). Several explanations may account for this outcome.
First, it is unlikely that listening to the synchronous samba percussion in our study reliably evoked an emotional peak response ("chill"), which could, under other circumstances, lead to an activation in the nucleus accumbens (Salimpoor et al., 2011). We used a block design with repetitions, and our stimuli did not contain much rhythmic variation within the 20-s excerpts. More extensive musical context and greater variation of musical (rhythmical) parameters might be necessary to reliably evoke peak emotional responses like "chills". Such peak responses, together with activity in the nucleus accumbens, might be better captured with event-related designs where activity is locked to changes in stimuli (e.g., events that evoke surprise due to violated expectations) and which also capture phasic responses. Our stimuli were not constructed with the aim of violating expectations or evoking peak emotional responses. In the context of groove, a so-called "groove listening state", rather than a peak-based emotional response, has been postulated, consistent with subjective descriptions by listeners (Witek, 2009).
A second factor that could have contributed to the lack of direct evidence for reward processing relates to movement constraints. Listening to high groove music elicits the desire to move/dance along, as our stimuli did (cf. the behavioral study). However, any movement, even finger tapping, was restricted during the fMRI experiment. Indeed, as debriefing of participants confirmed, not moving was difficult and frustrating for some of the participants. Thus, a main aspect of feeling the groove, namely moving with the beats, was eliminated, which might have reduced the pleasurable experience. Third, over the course of the experiment, participants listened to similar stimuli repeatedly, which might have caused habituation of brain responses, thus diminishing pleasurable experiences and related brain responses.
Exploratory Results on Interindividual Differences in Neural Processing
Previous studies have shown that being in synchrony with others while performing collective musical actions (e.g., singing, finger tapping, or drumming together) increases trust, affiliation, and prosocial behavior (Anshel and Kipper, 1988; Hove and Risen, 2009; Kirschner and Tomasello, 2009; Wiltermuth and Heath, 2009; Kokal et al., 2011). The present research addressed the perception of synchrony in such musical actions, under the assumption that listening to samba percussion, which requires high interpersonal synchronization (i.e., high synchrony among instrumental parts or playing "in sync"), recruits similar brain areas as executing such synchronous actions (Hommel et al., 2001; Rizzolatti and Craighero, 2004; Schubotz, 2007). These include motor areas, reward areas, and also brain areas related to prosocial behavior and further emotional processing linked to affiliation and feeling connected with others. Individual differences become relevant to the extent that the level of activation in these brain areas might be related to the concept of "deep listeners" (Becker, 2004), defined here as individuals who have pronounced emotional experiences and physiological sensations when listening to music or percussion in general (see also Chapin et al., 2010). Such emotional responses to music on a trait level might also contribute to feeling connected with others easily, social bonding, and prosocial behavior when experiencing synchrony in musical ensembles. In the sample in our fMRI study, most participants were classified as "deep listeners", as they described themselves as having stronger emotional responses when listening to samba percussion relative to people they know. Descriptive data included in Supplementary Figure 3 confirm differences in emotional and physiological experiences while listening to samba percussion, according to self-reported ratings.
Although we have unequal groups (14 deep listeners and 7 non-deep listeners) in our fMRI study and low power in terms of sample size, we nonetheless analyzed brain responses in the subgenual cingulate cortex, a region of interest at the intersection of emotional experience and social bonding, in order to explore interindividual differences. The subgenual cingulate cortex was found to respond more strongly in "deep listeners". Thus, merely perceiving synchrony in musical actions leads to differential activation of a brain area that has been implicated in social processing related to attachment, affiliation, prosocial behavior, altruistic decisions, ingroup-related effort, and belongingness to a group (Bartels and Zeki, 2004; Aron et al., 2005; Moll et al., 2006, 2012; Krueger et al., 2007; Zahn et al., 2009; Rusch et al., 2014; Bortolini et al., 2017). Furthermore, the subgenual cingulate cortex is a key hub in emotional processing, and a target for deep brain stimulation in treatment-resistant major depressive disorder (Kibleur et al., 2017). Although our findings on this issue are preliminary, they invite further investigation of the effects of emotional processing and entrainment during both active and passive participation in joint music-making on prosocial behavior toward co-participants (see also Kokal et al., 2011; Trost et al., 2017). Our results likewise encourage the future exploration of therapeutic uses of music with groove-inducing properties in behavioral and neuropsychiatric disorders characterized by the impaired functioning of neurophysiological mechanisms related to entrainment and social and emotional processing (Hove and Keller, 2015; Koelsch, 2015; Aalbers et al., 2017; Braun Janzen et al., 2019).
In such future work, the concept of "deep listeners" deserves greater scrutiny. To classify participants as "deep listeners" in the current work, we used participants' responses to a question probing whether their emotional responses, when listening to samba percussion in general, are stronger than in most other people they know (cf. Becker, 2004;Chapin et al., 2010). Future studies that focus on interindividual differences in emotional experience with music might benefit from using objective measures to classify participants. For example, physiological responses to stimuli could be recorded and used instead of self-evaluations of emotionality (possibly in more naturalistic or simulated virtual environments to capture social aspects). Alternatively, composite scores based on several different emotions felt in response to music could be used.
In addition to interindividual differences related to social and emotional processing, our results yielded evidence for individual variation related to more basic perceptual-motor processing capacities. Specifically, in our behavioral study, we found some indication that participants who had better rhythm perception abilities (measured by the MET, Wallentin et al., 2010) were more sensitive to asynchronies in the samba percussion stimuli, as reflected in ratings about felt pleasantness and the desire to move/dance. This finding motivated us to explore interindividual differences in brain responses in the fMRI experiment. Although our participants were musically trained, they varied in their ability to detect asynchronies in the samba percussion stimuli (reflected by the correlation of the objective asynchrony values in the stimuli and subjective ratings of synchrony obtained in the behavioral test after fMRI scanning). A parametric analysis that included this index of sensitivity revealed (albeit at a weak statistical level) that increasing synchrony was associated with stronger brain activity in motor-related areas including the cerebellum, as well as in the temporal lobe/amygdala, to the greatest degree for participants who were better at detecting asynchronies. The cerebellum is involved in the temporal processing of events not only in sensorimotor control, but also in perception, with a specific role in generating internal representations of temporal relations in the sub-second range (Spencer and Ivry, 2013; Sokolov et al., 2017). For example, patients with cerebellar lesions display decreased perceptual sensitivity to rhythmic perturbations in auditory sequences at beat rates relevant to musical tempo (Schwartze et al., 2016). Furthermore, in our fMRI study, we observed stronger responses in the amygdala for participants who were sensitive to different levels of synchrony. The amygdala has been found to be involved in a variety of auditory and musical tasks, e.g., during the perception of emotionally neutral stimuli that are ambiguous or difficult to predict in terms of their timing (Hsu et al., 2005; Herry et al., 2007). Amygdala activation during music listening has been observed in response to the violation of listeners' (harmonic) expectancies (Koelsch et al., 2008), dissonance (Trost et al., 2015), or when experts were listening to improvised melodies that contained relatively pronounced cues to behavioral uncertainty in the musical performance (Engel and Keller, 2011). Our findings thus indicate that participants who were better able to detect asynchronies in the samba percussion stimuli had stronger brain activation in brain areas involved in processing timing cues. These results are in line with previous studies investigating interindividual differences in beat perception and related brain activity, e.g., strong and weak beat perceivers (Grahn and Brett, 2007) and differences in musicians vs. non-musicians when listening to groove-based music (Matthews et al., 2020). However, our exploratory findings on this issue should be interpreted with caution, and need to be confirmed in future studies with larger sample sizes and greater variation in rhythmic abilities (e.g., by including non-musicians or individuals with weak beat perception capacities).
CONCLUSION
The results of the present behavioral and fMRI studies indicate that listening to samba percussion music in which instruments are more "in time" leads to a greater desire to dance/move (in the behavioral study) and greater experienced pleasantness (in both studies), and that these experiences are accompanied by an activation of motor-related brain areas involved in rhythm processing and beat perception (Grahn and Brett, 2007; Chen et al., 2008; Bengtsson et al., 2009; Grahn and Rowe, 2009, 2013). Such motor-related brain activity for better synchronized stimuli might reflect a strengthening of audio-motor coupling that: (a) could be a basis for facilitating the desire to move/dance with percussion sounds that have a predictable, strong beat, as confirmed by the ratings of the behavioral study (cf. Janata et al., 2012; Stupacher et al., 2013; Matthews et al., 2020); and (b) leads to a pleasurable experience via entrainment operating at multiple levels (cf. Trost et al., 2017). With regard to individual differences, exploratory analyses suggested that listeners who reported stronger emotional responses to samba percussion in general showed higher brain activation in the subgenual cingulate cortex, an area that has been implicated in prosocial behavior, emotional processing, and attachment (Moll et al., 2006; Zahn et al., 2009). The role of this brain region, together with the more extended audio-motor network, in the prosocial effects of entrainment during joint music-making deserves further investigation (Kirschner and Tomasello, 2009; Kokal et al., 2011), especially insofar as it has implications for clinical uses of music in behavioral and neuropsychiatric therapy.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, upon request and without undue reservation, to any qualified researcher.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by the Scientific Ethics Committee of Copa D'Or Hospital, Rio de Janeiro, Brazil. The participants provided their written informed consent to participate in this study.
AUTHOR CONTRIBUTIONS
AE, JM, PK, and SH designed the behavioral and fMRI study. AE, SH, and MM acquired the data of the behavioral and fMRI study. AE and SH analyzed the data with advice from and consultations with JM and PK. AE wrote the manuscript with contributions from all authors. | 2022-02-25T14:23:15.069Z | 2022-02-25T00:00:00.000 | {
"year": 2022,
"sha1": "f591ef3d5605df81205eb10fc8b5bf23553ef202",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "f591ef3d5605df81205eb10fc8b5bf23553ef202",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
234503698 | pes2o/s2orc | v3-fos-license | Fibroepithelial Polyp of Vagina – A Rare Case Report
Case Report Polypoid lesions or grape-like masses of the vagina are uncommon and always worrisome. In infants and young girls, such lesions are suspicious for sarcoma botryoides. In adults, benign vaginal polyps with bizarre stromal cells may occur, leading to a misdiagnosis of sarcoma. Fibroepithelial polyps of the vagina (FEPV) have attracted special interest during the past decades because of the presence of atypical cells and abnormal mitoses in some of them [2]. FEPV are mucosal polypoid lesions with a connective tissue core covered by a benign squamous epithelium. They are thought to be rare. Here we present an unusual case of a 42-year-old female patient, with a complaint of lower abdominal pain for 6 months, diagnosed with a fibroepithelial polyp of the vagina (FEPV).
INTRODUCTION
Polypoid lesions or grape-like masses of the vagina are uncommon and always worrisome. In infants and young girls, such lesions are suspicious for sarcoma botryoides. In adults, benign vaginal polyps with bizarre stromal cells may occur, leading to a misdiagnosis of sarcoma. There are benign polyps that may be similar to these tumors both grossly and microscopically, but that behave in a benign fashion; both entities are rare.
Fibroepithelial polyps are benign lesions that can affect the lower female genital tract. FEPV are mucosal polypoid lesions with a connective tissue core covered by a benign squamous epithelium. They are thought to be rare, as few cases have been reported in the literature. The lesions can vary in size and number, appear to be hormonally related, and have been reported more often in pregnant women. Although benign, they can be confused with sarcoma botryoides, rhabdomyosarcoma, and mixed mesodermal tumor because of their bizarre histology. FEPV should be considered in the differential diagnosis for vaginal neoplasms.
CASE REPORT
A 42-year-old female patient presented with a complaint of lower abdominal pain of 6 months' duration. There was no per vaginal discharge. Colposcopy showed no abnormality; however, there was a 1 × 1 cm hard, indurated mass on the posterior vaginal wall.
GROSS
Two irregular, grey-white, soft tissue bits were received. One measured 0.4 × 0.4 × 0.1 cm and the other 0.2 × 0.2 × 0.1 cm. Grossly, they resembled skin tags and were pedunculated or sessile, with smooth surfaces. The cut surface was uniform and soft, grey or pearly white.
MICROSCOPIC FINDINGS
Microscopic examination on low power revealed polypoid structures covered with benign squamous epithelium. A focal area (tip) of the polyp showed focal ulceration with numerous dense acute and chronic inflammatory cells. There were also irregular dilated blood vessels in the fibrous connective tissue. At the base and midportion of the polyp, the stroma was composed of loose fibrous connective tissue without any hypercellularity. On high power, there were, within the stroma, plump cells resembling fibroblasts, with delicate branching cytoplasmic processes and dark vesicular nuclei. No large atypical stromal cells were identified (Figs. 2 & 3).
DISCUSSION
Fibroepithelial polyps of the vagina are uncommon benign lesions that may have a bizarre histologic appearance. Consequently, they are of concern for their potential to be misdiagnosed as malignant tumors of the vaginal connective tissue. The patient's age is an important indicator of the benign nature of these lesions, because sarcoma botryoides occurs almost exclusively in girls under the age of 8 years. The etiology of these polyps is unclear. Elliott and Elliott [3] described a 0.5 to 5.0 mm subepithelial myxoid stromal zone extending from the vulva to the endocervix in mature women. Approximately 25% of healthy women demonstrated anisonucleosis in this zone, with bizarre nuclear features similar to those seen in fibroepithelial polyps. Elliott et al. [4] comment that the "bizarre overgrowth of the subepithelial mesenchyme may be an excessive response of some Mullerian tissue to growth-provoking hormones."
Burt et al., [5] reported that 3 of 5 patients had a history of hormone replacement, suggesting a possible contribution of hormonal stimulation to the development of stromal hypertrophy and polyp formation.
Thus, it may be that the hormone-rich environment existing in pregnant women may stimulate excessive growth of the stromal zone described by Elliott and Elliott. In summary, fibroepithelial polyps are uncommon benign vaginal lesions that raise the specter of sarcoma botryoides. Although they may occur in pregnancy, the pregnancy does not appear to have an influence on the degree of stromal atypia. Spontaneous vaginal delivery is not contraindicated. Local excision is curative, and the subsequent course is benign.
CONCLUSION
FEPV are uncommon benign lesions containing benign mono- and multinucleated fibroblastic stromal cells, in which myoid differentiation is often present. They may develop as a result of a granulation tissue reaction after some local injury of the vaginal mucosa. Hormonal factors may play a role in modulating the growth of the polyp. Delayed differentiation of myofibroblastic cells may explain why granulation tissue sometimes does not contract properly but instead turns into a polyp.
"year": 2020,
"sha1": "56cc3e5fc887516e380750f5ee4bf9bba890bacc",
"oa_license": null,
"oa_url": "https://doi.org/10.36347/sjmcr.2020.v08i11.012",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "56cc3e5fc887516e380750f5ee4bf9bba890bacc",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
256963949 | pes2o/s2orc | v3-fos-license | Reducing group delay spread using uniform long-period gratings
Despite the promise of an orders-of-magnitude increase in transmission capacity, practical implementation of mode-division multiplexing faces a number of challenges. The most important among them is the complexity of digital signal processing (DSP) for compensating mode crosstalk and modal dispersion. The most promising method proposed so far for reducing this DSP complexity is strong mode coupling. We propose and demonstrate, for the first time, a method of inducing strong mode coupling and reducing group delay spread using uniform long-period gratings (LPGs). Even though the LPGs have a fixed grating period, mode coupling is effective among all mode groups and over a broad wavelength range. Both insertion loss and mode-dependent loss can be significantly reduced by optimizing the index profile of and the number of modes supported by the fiber in which the LPG is applied.
channel and even 0.01 dB/km of loss reduction is being sought for SMFs, the LPG used to induce strong mode coupling must also introduce extremely low losses, preferably below 0.1 dB, to ensure that the transmission capacity of an MDM system is competitive with parallel SMF transmission systems.
In this paper, we present a practical approach to achieving low-loss strong mode coupling using LPGs by reducing both the number of LPGs and the intrinsic loss of each LPG due to coupling to cladding modes. This approach is based on applying the LPG on properly designed graded-index (GRIN) FMFs. First, we show that it is possible to use only one uniform LPG with a fixed grating period to introduce efficient coupling not only among all modes but also over a broad range of wavelengths. The use of only one grating minimizes the extrinsic loss to the largest extent possible. Furthermore, through simulations, we also demonstrate the reduction of intrinsic loss and mode-dependent loss (MDL) by optimizing the index profile of and the number of modes supported by the fiber in which the LPG is applied.
FMFs with equally-spaced effective indices
As stated previously, since mode coupling mediated by LPGs is a coherent, phase-matched process, a different grating is required for each pair of modes. The only way to couple all mode groups using only one uniform LPG with a fixed grating period is to ensure that the effective indices of the mode groups are designed to be nearly equally spaced. Similar to parabolic quantum wells that allow equally-spaced energy levels, GRIN fibers with a parabolic index distribution are expected to support mode groups with nearly equally-spaced effective indices. For a parabolic GRIN fiber with a core index distribution satisfying $n^2 = n_1^2 - 2\Delta n_1^2 (r/a)^2$ (where $\Delta = (n_1^2 - n_2^2)/2n_1^2$, $n_1$ and $n_2$ are the refractive indices of the core and cladding, $r$ is the radial position, and $a$ is the core radius), the amplitude of the electric field $\mathbf{E} = E(x, y)\exp(-j\beta z)$ of a guided mode in the core satisfies [13]:

$$\frac{\partial^2 E}{\partial x^2} + \frac{\partial^2 E}{\partial y^2} + \left[ k_1^2 - \beta^2 - \frac{2\Delta k_1^2 (x^2 + y^2)}{a^2} \right] E = 0, \qquad (1)$$

where $k_0$ is the free-space wavenumber, $\beta$ is the propagation constant of the guided mode, and $k_1 = k_0 n_1$. It is in fact a differential equation for isotropic 2D harmonic oscillators with known Hermite-Gaussian mode solutions [14]. This eigenvalue problem, assuming that the fields completely vanish in the cladding, admits standard solutions with propagation constants given by

$$\beta_{m_x m_y}^2 = k_1^2 - \frac{2\sqrt{2\Delta}\, k_1}{a}\,(m_x + m_y + 1), \qquad (2)$$

where $m_x$ and $m_y$ are non-negative integers representing the orders of the modes. So,

$$\beta_{m_x m_y} \approx k_1 - \frac{\sqrt{2\Delta}}{a}\,(m_x + m_y + 1), \qquad (3)$$

and the propagation constants of successive mode groups (constant $m_x + m_y$) are nearly equally spaced. The design of FMFs with equally-spaced effective indices is scalable to a larger number of mode groups. As the number of mode groups grows in a parabolic GRIN fiber, the condition for equally-spaced effective indices becomes better satisfied since most of the modes are well confined. In reality, evanescent fields do exist in the cladding. So, we adopt a nearly parabolic index profile for the core with a low-index trench in order to strongly confine the guided modes to the core. An optimized design is represented by the blue curve in Fig. 1(a). We fabricated a GRIN fiber according to this design. The actual index profile of the FMF is represented by the magenta curve in Fig. 1(a). The five mode groups (9 LP modes) at 1550 nm have almost the same effective-index differences, ranging from 2.54 × 10⁻³ to 2.55 × 10⁻³ between successive mode groups, for the reasons explained above [6]. To characterize the mode profiles of the different mode groups, shown in Fig. 1(b), we employed the spatially- and spectrally-resolved imaging (S²) method [15,16], as shown in Fig. 1(c); see details in the Methods section.
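A quick numerical check of this equal spacing (a sketch with illustrative parameter values, not the fabricated fiber's exact specification):

```python
import numpy as np

# Illustrative parabolic GRIN parameters (not the fabricated fiber's exact values)
n1, n2 = 1.465, 1.450        # core peak and cladding refractive indices
a = 14e-6                    # core radius (m)
wavelength = 1550e-9         # (m)
k0 = 2 * np.pi / wavelength
k1 = k0 * n1
delta = (n1**2 - n2**2) / (2 * n1**2)

# Propagation constants for mode groups g = mx + my + 1, per Eq. (2)
g = np.arange(1, 7)
beta = np.sqrt(k1**2 - 2 * np.sqrt(2 * delta) * k1 * g / a)
n_eff = beta / k0

print("effective indices:     ", np.round(n_eff, 6))
print("successive differences:", np.round(-np.diff(n_eff), 6))
# The successive differences come out nearly identical (about 2.5e-3 here),
# i.e., the mode groups have nearly equally-spaced effective indices.
```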
Mode coupling and GDS reduction
We first demonstrate enhanced mode coupling among all 9 LP modes using a uniform single-period LPG. Subsequently, we use a series of such LPGs to reduce GDS. Fig. 2(a) is the experimental setup for demonstrating strong mode coupling and the reduction of GDS. The mechanical LPG consists of a lower replaceable plate with gratings cut into it and an upper flat steel plate over which an adjustable screw is used to apply pressure to the fiber sandwiched between these two plates 17,18 . This mechanical grating has the same level of uniformity as LPGs fabricated using the arc method 18 . The relative angle between the fiber and the LPG can be adjusted to change the effective grating period. Details of the experimental procedures are contained in the Methods section.
To investigate mode coupling, a single uniform mechanical LPG was applied at the beginning of the FMF. Because of the relatively large effective index difference between mode groups, coupling between mode groups is negligible without the LPG. The PL was used for selective excitation of a dominant mode group when no pressure is applied to the LPG. After applying pressure on the LPG, new modal contents are generated from mode coupling mediated by the LPG. Modal dispersion in the GRIN FMF was exploited to separate different modal groups in the time domain. Therefore, the differences in the powers of each mode group in the impulse responses of the GRIN FMF with and without pressure on the LPG can be used to characterize mode coupling mediated by the LPG. It should be pointed out that, even though the FMF has equally-spaced effective indices, the group indices are, in general, not equally spaced. It turns out that the modal dispersion between the second and third group is rather small for this particular GRIN FMF, so these two mode groups are lumped together. Figure 2(b) demonstrates that a single uniform LPG can indeed induce mode coupling between all mode groups. Taking the top left plot as an example, when no pressure was applied on the mechanical LPG, there was one main peak in the impulse response, representing that the power was mainly in the LP 01 mode. When pressure was applied on the LPG, more modes/peaks appeared in the waveform, signifying that the power in the LP 01 mode had been coupled into other mode groups. The red and green lines show the effect of mode coupling as the pressure on the mechanical LPG was successively increased. The inset shows the percentage of power in the dominant input LP 01 mode for different pressures. It is observed that mode coupling increases with applied pressure. The rest of the plots in Fig. 2(b) show the impulse response waveforms when the initial power was mainly in other mode groups. They indicate that power in each mode/peak can always be coupled into not only its neighboring modes but also next-to-neighbor modes with one LPG. Similarly, the insets show the percentage of power in the dominant input MG for different pressures. When the dominant input is in MGs 2 and 3, the percentage of power in these two groups did not change monotonically with pressure, likely because the power in MG 1 is coupled back into MG 2. It can be concluded that a uniform single-period LPG can couple all the LP modes of the 5 mode groups.
The use of parabolic GRIN fiber also allows the uniform LPGs to achieve strong mode coupling over a broad range of wavelengths. The phase matching condition for coupling between two modes is [19]

$$\frac{2\pi \Delta n}{\lambda} = \frac{2\pi}{\Lambda}, \qquad (4)$$

where $\Delta n$ is the effective index difference between the two modes, $\lambda$ is the wavelength in free space, and $\Lambda$ is the grating period. To evaluate the bandwidth of coupling due to the LPG, the effective indices of LP01 and LP11 at different wavelengths, and subsequently the left-hand side of Eq. (4), $A_\lambda = 2\pi\Delta n/\lambda$, were calculated. Taking both material dispersion and waveguide dispersion into consideration, the difference in $A_\lambda$ between 1520 nm and 1580 nm is small enough to maintain the phase matching condition within this wavelength range, as shown in the top row of Fig. 2(c). The top three plots in Fig. 2(c) show that the impulse response waveforms are similar at three different wavelengths, signifying efficient, broadband mode coupling even with a fixed grating. The insets verify that mode coupling is efficient over a broad band. To verify that the broadband coupling was due to the grating rather than simply the applied pressure, the grating period was tuned to observe the change in the impulse-response waveform. As can be seen from the bottom three plots and the three insets in Fig. 2(c), mode coupling is efficient only for certain grating periods that satisfy the phase-matching condition. The grating period only affects the right-hand side $B_\Lambda = 2\pi/\Lambda$ of Eq. (4), and the relative difference between the minimum and maximum effective periods, $r = B_{611}/B_{637} - 1 = 4.3 \times 10^{-2}$, is large enough to break the phase matching condition, as shown in the bottom row of Fig. 2(c). The total insertion loss (IL) when the effective grating period does not satisfy the phase matching condition for the guided modes was measured, using the setup in Fig. 2, to be less than 0.06 dB. This loss can be considered as the extrinsic non-resonant microbending loss of the LPG.

One of the strengths of the method for enhanced mode coupling proposed here is its scalability to a larger number of modes, including modes with high azimuthal numbers. It should be recognized that what is required for a mode with a high azimuthal number $M$ (with azimuthal dependence $e^{\pm jM\phi}$) to couple to its neighboring mode group is a component in the grating with azimuthal dependence $e^{\pm j\phi}$; this grating component, required to couple modes with a high or low azimuthal number to their neighbors, is exactly the same, and independent of $M$.
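As a rough illustration of the phase-matching calculation in Eq. (4) (the effective-index difference below is an assumed round number, not a measured value):

```python
import numpy as np

delta_n = 2.54e-3  # assumed effective index difference between neighboring groups

# Required grating period from Eq. (4): Lambda = lambda / delta_n
for wl_nm in (1520, 1550, 1580):
    period_um = (wl_nm * 1e-9) / delta_n * 1e6
    print(f"lambda = {wl_nm} nm -> Lambda = {period_um:.0f} um")

# If delta_n grows roughly in proportion to lambda across the band (so that
# A = 2*pi*delta_n/lambda stays nearly constant), a single fixed-period
# grating remains phase matched over the whole wavelength range.
```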
One of the strengths of the method for enhanced mode coupling proposed here is its scalability to a larger number of modes, including modes with high azimuthal numbers. It should be recognized that what is required is for the mode with a high azimuthal number M (with azimuthal dependence in the grating having a component ϕ ± e j , required to couple modes with a high or low azimuthal number to its neighbor is exactly the same, and independent of M. We then demonstrate the reduction of GDS using multiple LPGs distributed along the 4.3 km FMF. The lateral offset between the PL and FMF was adjusted to excite all modes at different group delays (GDs) with almost equal power. For each waveform measured under different pressure/mode-coupling efficiency, the RMS pulse width representing the GDS of the FMF was calculated using the following formula 13 It dt ( ) , and I(t) is the measured intensity waveform. Meanwhile, for each applied force, the loss induced by the LPGs was also measured. The RMS widths as functions of the measured loss induced by two or four LPGs are shown in Fig. 3. The insets are the impulse response waveforms at different losses/pressures. At low pressure (low loss), the optical power remained evenly distributed among the 4 peaks. When a strong force was applied on the grating plate, 4 peaks in the impulse response merged into an almost symmetric single peak centered at the average group delay. The RMS width decreased as the average loss/applied pressure increased. As expected, when four LPGs were used along the FMF, the GDS was reduced further compared with using only two LPGs, because GDS was accumulated over a shorter distance before modes are scrambled. Theoretically, the RMS width is proportional to N (N is the number of LPGs) 5 when the length of fiber between two LPGs is the same. Thus when the total length is the same, the RMS width is proportional to N 1/ . The fluctuations in RMS width and loss are due to environmental changes such as temperature and fiber deformation.
Reducing intrinsic loss and MDL
We now turn our attention to reducing the intrinsic loss and MDL. We use the RMS of log-unit MDL here, which is the statistically important parameter for characterizing MDL in the strong coupling regime [20]. Figure 3 shows that significant mode mixing can be achieved using LPGs, accompanied by an average loss of 0.6 dB. It is desirable to further reduce this loss. The main source of the loss is the seemingly unavoidable power transfer from the highest-order mode group into cladding modes. This also means that the resulting MDL is large. In order to alleviate this problem, we propose the use of a specially designed fiber for the LPG, which supports at least one more mode group than that supported by the fiber used for transmission. The rationale is explained in Fig. 4 by comparing two different fibers used in the grating section for the same transmission fiber that supports five mode groups. The fiber in Fig. 4(a) is a trench-assisted GRIN fiber, and it supports five mode groups. The index profile is adjusted to make the effective indices of the five mode groups equally spaced to ensure efficient mode coupling, and the use of a trench was found to be necessary. The fiber in Fig. 4(b) is a GRIN fiber with a pedestal at the core-cladding boundary, and it supports six mode groups, one more group than that supported by the transmission fiber. The index profile is adjusted to make the effective indices of the first five mode groups equally spaced, and the effective index difference between the fifth and sixth mode groups much smaller than the average index difference between the first five mode groups. Eliminating the trench and adding the index pedestal was found to be necessary. The black lines in Fig. 4(a) and (b) represent the effective indices of the mode groups in these two fibers. It can be seen that the effective index difference between the highest-order mode group and the cladding index is always smaller than the average index difference between the neighboring core mode groups for both index profiles. Figure 4(c) plots the effective index differences between neighboring mode groups. The last blue point in Fig. 4(c) represents the index difference between the highest core mode group and the cladding index in the 5-mode-group fiber. So, when an LPG phase matched for core mode coupling is applied on the 5-mode-group FMF, the highest-order mode group would be easily coupled to some cladding modes, incurring a large intrinsic loss.
This seemingly unavoidable intrinsic loss can be eliminated using the fiber in Fig. 4(b), which supports one more mode group than the 5-mode-group transmission fiber. As can be seen in Fig. 4(b), the index difference between the last two mode groups (K1 = 2.51 × 10−3) is smaller than the average effective index difference between successive core mode groups (K = 2.65 × 10−3), as shown in Fig. 4(c), while the effective index difference between the second highest-order mode group and the cladding index (K2 = 2.73 × 10−3) is larger than K. In this case, it is inefficient for the first 5 mode groups to couple into either the highest-order mode group or the cladding modes. So, when the signals are contained in the first 5 mode groups of the 6-group FMF, the intrinsic loss of the LPG can be significantly reduced.
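For intuition on why equal index spacing lets one uniform grating serve every neighboring pair, recall the first-order LPG phase-matching condition Λ = λ/Δn_eff. A minimal sketch using the index spacings quoted above (the C-band wavelength choice is an assumption, not from the paper):

```python
# Phase-matching period of a long-period grating: Lambda = wavelength / delta_n_eff.
# With equally spaced effective indices, one period serves every neighboring pair.
wavelength = 1550e-9  # assumed C-band operating wavelength, in meters

K  = 2.65e-3   # average index spacing between successive core mode groups
K1 = 2.51e-3   # spacing between the 5th and 6th mode groups (6-group fiber)
K2 = 2.73e-3   # spacing between the 5th mode group and the cladding index

for name, dn in [("core-core (K)", K), ("5th-6th (K1)", K1), ("5th-cladding (K2)", K2)]:
    period_um = wavelength / dn * 1e6
    print(f"{name}: phase-matching grating period = {period_um:.0f} um")
# A grating with period wavelength/K (~585 um) is mismatched for both K1 and K2,
# so power stays within the first five, equally spaced mode groups.
```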
The reduced IL and MDL of the proposed method have been verified by numerical simulations. Our method relies on the properties of the FMF in which the grating is applied, more so than on the particular type of grating that is used. In the simulations here, we assume tilted index gratings rather than mechanical gratings. We assume that the MDM signal is carried on a transmission fiber that supports 5 mode groups and that LPGs are used periodically to enhance mode coupling. LPGs written in a 5-mode-group GRIN FMF and a 6-mode-group GRIN FMF, as described above, are compared. The GDSs for these two cases are computed from the eigenvalues of the group delay operators, which, in turn, are computed from the transfer matrix of the fiber link 21 (see details in the Methods section).
To provide a fair comparison of IL and MDL, we adjusted the parameters of the LPGs so that the reductions of GDS in the two cases are statistically identical. To do so, we plot the ensemble average of the standard deviations of the group delays, normalized by the group delay of one span, over 100 instances of the random intragroup coupling matrices and span lengths, as a function of the number of spans. The nearly identical GDSs for these two cases, as shown in Fig. 4(f), were achieved with a grating length of 3.5 cm, a tilt angle of 85°, and index contrasts of 5.5 × 10−5 and 6 × 10−5 for the index LPGs written in the 5-mode-group and 6-mode-group GRIN FMFs, respectively. Tilting is necessary because different spatial modes are orthogonal to each other, and there would be no coupling between them without tilting. The GDSs for these two cases increase approximately with the square root of the number of spans (or propagation length) due to strong mode coupling mediated by both types of LPGs. The GDSs as a function of the number of spans for the cases of intragroup coupling only and of completely random coupling among all modes are also shown for comparison. From the coupling matrices of the LPGs used for Fig. 4(f), we use singular-value decomposition to compute the IL and MDL as functions of wavelength for the two types of LPGs 20 . Using LPGs in a FMF with the same number (5) of mode groups as the transmission fiber, the IL and MDL are in the 0.6 dB and 1 dB range, as shown in Fig. 4(d). On the other hand, using LPGs in a FMF with one more mode group than the transmission fiber, the IL and MDL are reduced to below 0.05 dB and 0.06 dB, respectively, as shown in Fig. 4(e). As can be seen, using LPGs in a FMF supporting one more mode group than the transmission fiber significantly reduces the loss and MDL over the entire C band.
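As a concrete illustration of this singular-value bookkeeping, the sketch below (not the authors' code; the coupling matrix is a random stand-in) computes IL as the mean power transmission and MDL as the spread of the log-unit singular values of a coupling matrix restricted to the signal modes:

```python
import numpy as np

def il_and_mdl_db(T):
    """Insertion loss and mode-dependent loss of a linear coupling matrix T,
    from its singular values. T maps input signal modes to output signal modes;
    any power coupled to cladding modes shows up as singular values < 1."""
    s = np.linalg.svd(T, compute_uv=False)
    gains_db = 20 * np.log10(s)            # per-mode amplitude gain, in dB
    il = -10 * np.log10(np.mean(s**2))     # average power loss
    mdl = np.std(gains_db)                 # RMS of the log-unit MDL
    return il, mdl

# Stand-in for an LPG coupling matrix over 10 signal modes: a random unitary
# lightly "leaked" toward cladding modes to mimic intrinsic loss.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.normal(size=(10, 10)) + 1j * rng.normal(size=(10, 10)))
leak = np.diag(1 - 0.01 * rng.random(10))  # up to ~0.1 dB per-mode leakage
il, mdl = il_and_mdl_db(leak @ Q)
print(f"IL = {il:.3f} dB, RMS MDL = {mdl:.3f} dB")
```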
Conclusions
In conclusion, we propose and experimentally demonstrate the reduction of GDS using LPGs with very low insertion loss and mode-dependent loss. By designing a FMF with equally spaced effective indices on which the LPG is applied, all mode groups in the FMF can be efficiently coupled using just one uniform LPG with a fixed grating period, instead of a different LPG for each mode-group pair. In addition, by applying the LPG in a FMF that supports at least one more mode group than the transmission fiber, insertion loss and mode-dependent loss due to coupling from core modes to cladding modes can be largely suppressed. These strategies lead to the lowest intrinsic and extrinsic losses, as well as the lowest mode-dependent loss to date, for inducing strong mode coupling using LPGs. Furthermore, we have verified that low-loss strong mode coupling can be achieved over a broad range of wavelengths. By periodically applying these LPGs along the transmission fiber, GDS increases as the square root of the transmission distance, rather than linearly as it would without strong mode coupling. These results illustrate that simple LPGs can serve as a practical tool to reduce the GDS in FMFs, thus overcoming the MIMO DSP complexity issue, which is one of the most critical challenges for the practical implementation of mode-division multiplexed systems.
Methods
In the S² method of mode characterization, a camera was used to record output images of the FMF while light of different wavelengths was launched. By using principal component analysis (PCA) and independent component analysis (ICA) 15,16 , we acquired the profiles of all 9 LP modes in Fig. 1(b).
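A minimal sketch of the decomposition step, assuming a stack of output images recorded across a wavelength sweep (synthetic data here; the real analysis in refs. 15,16 also applies ICA and careful preprocessing):

```python
import numpy as np

# Hypothetical stack: n_wl camera frames of the fiber output, flattened.
n_wl, h, w = 200, 64, 64
rng = np.random.default_rng(1)
images = rng.random((n_wl, h * w))  # stand-in for the recorded frames

# PCA via SVD of the mean-removed data matrix: each principal component is a
# spatial pattern whose weight beats with wavelength at a mode-pair-specific rate.
X = images - images.mean(axis=0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)
n_modes = 9
components = Vt[:n_modes].reshape(n_modes, h, w)  # candidate mode-beat patterns
weights = U[:, :n_modes] * S[:n_modes]            # their wavelength dependence
print(components.shape, weights.shape)
```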
We used the setup in Fig. 2(a) to demonstrate enhanced coupling and the reduction of GDS. A data pattern from the pattern generator was used to modulate the light from a laser. The pattern generator produced a short pulse, which consisted of a single 1 bit followed by a long series of 0 bits. After modulation, an impulse of light was launched into one of the input fibers of the photonic lantern (PL), which was connected to the GRIN FMF by butt coupling. The 15-mode mode-selective PL, made in-house, although state-of-the-art, cannot guarantee excitation of exactly one mode group at a time. After propagation through the FMF, the signal was detected by a multimode InGaAs PIN + TIA receiver. The receiver was connected to the oscilloscope to record the impulse response waveforms. The pattern generator is a Hewlett-Packard 70841B with 3 Gb/s capability, the modulator is an OKI EAM OM5753C30B with 30 GHz bandwidth, the receiver is a Discovery R402 PIN-TIA with 10 GHz bandwidth, and the oscilloscope is an Agilent Infiniium DSO81204A with 12 GHz bandwidth.

Figure 4 caption (continued): (c) Index differences between mode groups in the 5-mode-group fiber and the 6-mode-group fiber, respectively. The last blue point represents the index difference between the highest-order core mode group and the cladding index. (d), (e) IL and MDL vs. wavelength for LPGs written in the 5-mode-group fiber and the 6-mode-group fiber, respectively. (f) Normalized GDSs as functions of the number of spans using LPGs written in the 5-mode-group fiber and the 6-mode-group fiber, compared with the cases of intragroup coupling and completely random coupling.
In the simulations demonstrating the reduction of loss using LPGs written on FMFs that support one more mode group, we obtain the transfer matrix of the link by multiplying the propagation matrix of each transmission fiber span, including the effect of random intragroup coupling, with the coupling matrix of each LPG. Completely random coupling between degenerate modes in a mode group, represented by a random unitary matrix, is assumed, as this indeed occurs in real fibers. An extra length of fiber, uniformly distributed between ±1 m, is added to each span to account for the imprecise positions of the LPGs. To obtain the coupling matrix of the LPGs, we first compute the mode profiles of the core modes and cladding modes of the GRIN FMFs, and then the coupling coefficients among all modes. Subsequently, we use the coupled-mode equations (CMEs) to calculate the coupling matrix of the LPGs 22-24 among core modes as well as cladding modes. To ensure the calculated loss is accurate, 4 cladding mode groups are included.
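A minimal sketch of this link model, assuming per-span propagation is a phase matrix with random unitary mixing inside each degenerate group, followed by a fixed LPG coupling matrix (all matrices here are random stand-ins for the ones computed from the CMEs):

```python
import numpy as np
from scipy.linalg import block_diag

rng = np.random.default_rng(2)

def random_unitary(n):
    q, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
    return q

group_sizes = [1, 2, 3]  # hypothetical degenerate-mode counts per mode group
n = sum(group_sizes)

def span_matrix():
    # Random unitary mixing within each degenerate group (block diagonal),
    # then per-mode propagation phases for one span.
    intragroup = block_diag(*[random_unitary(g) for g in group_sizes])
    phases = np.exp(1j * 2 * np.pi * rng.random(n))
    return np.diag(phases) @ intragroup

lpg = random_unitary(n)  # stand-in for the CME-derived LPG coupling matrix

T = np.eye(n, dtype=complex)
for _ in range(10):      # 10 spans, one LPG after each
    T = lpg @ span_matrix() @ T
print(np.linalg.svd(T, compute_uv=False))  # all ~1 for this lossless stand-in
```

In the real simulation the LPG matrix also couples to cladding modes, so the singular values of the signal-mode block fall below 1, which is exactly the IL/MDL computed above.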
Data availability. All data generated or analysed during this study are included in this published article. | 2023-02-18T14:40:14.178Z | 2018-03-01T00:00:00.000 | {
"year": 2018,
"sha1": "7f294d9439b9ac0413f392dd2f0f67c2c369d384",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-018-21609-1.pdf",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "7f294d9439b9ac0413f392dd2f0f67c2c369d384",
"s2fieldsofstudy": [
"Physics",
"Engineering"
],
"extfieldsofstudy": []
} |
258807427 | pes2o/s2orc | v3-fos-license | Challenging, giant occipital encephalocele in a pediatric Saipanese male
Key Clinical Message Giant occipital encephalocele is a rare form of congenital anomaly that involves protrusion of brain tissue (greater in size than the patient's cranial cavity) from a defect in the skull. This case report illustrates repair of a giant encephalocele and emphasizes important methods to reduce the risk of blood loss and other complications. Abstract A rare form of congenital anomaly, giant occipital encephalocele involves protrusion of brain tissue from a defect in the skull (in this case from the occiput). While encephalocele itself is a fairly rare entity, those qualifying as "giant" (defined by the size of the deformity exceeding that of the skull itself) require very technically challenging surgery.
| INTRODUCTION
An encephalocele is a type of cranial dysraphism characterized by herniation of intracranial contents, including meninges, brain, blood vessels, and ventricular components, through a midline calvarial defect. 1 A rare form of neural tube defect, encephaloceles arise from defective embryogenesis that can occur spontaneously or in association with early gestational exposures including hyperthermia, radiation, salicylates, and viral infection. 2 The giant variant, deemed as such when the encephalocele sac is larger than the newborn's head, represents a particularly rare and formidable treatment challenge in neurological surgery. 3,4 These challenges arise from the relatively small circulating blood volume of the newborn patient coupled with resection of the often highly vascular encephalocele, which potentiates the risk of blood loss, hypovolemia, hypothermia, coagulopathy, and electrolyte imbalance from large fluid shifts, in addition to unique anesthetic and airway concerns. 2,5 Large encephaloceles can be surgically treated with either truncation of the dysplastic neural tissue or reinternalization of the tissue into the cranial vault. 6 Reinternalization in the giant variant poses a significant challenge due to the exceptionally high volume of tissue involved. Groups seeking to reinternalize the dysplastic tissue in these cases have reported performing concomitant cranial vault expansion to accommodate this volume and to avoid mass effect on the adjacent brain and intracranial hypertension. 7 Because there is unclear benefit and possible detriment to reinternalization of the dysplastic neural tissue, and because it can necessitate a more complex operation with higher risk, it is our practice to truncate the encephalocele tissue in large or giant lesions.
| CASE ILLUSTRATION
We present a newborn Saipanese male with limited prenatal care who presented to our institution with a giant occipital encephalocele with intact overlying dermis and pedunculated base (Figure 1).
On examination, the patient elicited a strong cry with good facial symmetry and suckle, tolerated oral feeds, moved all extremities spontaneously with gross symmetry, and had a full but soft fontanel. An MRI was obtained, which showed a giant encephalocele sac containing a large amount of dysplastic brain tissue and ventricular components with a robust vascular supply, as demonstrated on MR venogram (Figures 2 and 3). As noted by Zhahid and Khizar, 8 the meningeal membrane that surrounds the giant encephalocele can be covered by a normal membrane, one that is unusually thin, or alternatively a dysplastic (abnormal) membrane, as was the case with our patient. The "large amount of dysplastic brain tissue" evident on MRI is therefore a reference to the large amount of abnormal tissue surrounding the brain that would eventually require dissection and removal.
As this patient was certainly a candidate for surgical intervention, a discussion was had with the parents regarding the profound neurocognitive disability often associated with these severe malformations, and the decision was made to proceed with maximal surgical care after counseling. After fiberoptic intubation, the patient was placed prone in a horseshoe headrest, taking care to support the encephalocele manually during positioning, prepping, and draping to avoid tearing or rupture. A circumferential incision was made just below the equator of the lesion, and dissection was carried anteriorly and circumferentially toward the calvarial defect, taking care to remain in the dysplastic subcutaneous plane outside the neural tissue to minimize blood loss. Once the root of the encephalocele with its vascular supply was isolated around a perimeter, it was ligated with heavy silk suture at the base (Figure 4). Given the size of the encephalocele, it was determined that attempting to internalize the encephalocele material into the calvarium could place the native and ostensibly more normal neural structures at risk. Therefore, the encephalocele was truncated at its pedunculated base, which appeared highly vascular and disorganized. The dysplastic tissue was then resected, leaving a ligated end which was internalized and covered with native dura that was freed surrounding the defect (Figure 5). The patient remained hemodynamically stable during the perioperative and postoperative course and was extubated. Following the encephalocele repair, head circumference and transfontanelle ultrasound results were tracked on a daily basis. This demonstrated increasing head circumference through growth curves and increasing ventricular size consistent with hydrocephalus. As a result, the decision was made to place a cerebrospinal fluid shunt in delayed fashion for unresolved hydrocephalus. At the time the shunt was placed (within 3 weeks postoperatively), there were no signs of CSF leak, visual acuity deficits, or wound infection. Of note, hydrocephalus is commonly reported in conjunction with giant encephaloceles. As such, it was not thought to be a complication of the surgical intervention or the result of bleeding or other iatrogenic cause, but rather a feature related to the natural history and constellation of pathology associated with occipital encephaloceles.

Figure 1. Newborn Saipanese male with limited prenatal care born at full term with an occipital mass.

Figure 2. Magnetic resonance imaging revealed a 1.4 cm calvarial defect with a giant occipital encephalocele containing a large amount of dysplastic brain tissue.

Figure 3. Magnetic resonance venogram shows a robust vascular bundle supplying the dysplastic brain tissue within the encephalocele.
| DISCUSSION
Numerous factors affect the outcome of surgical intervention in patients with encephaloceles, with the occiput being the most common location for this class of cranial dysraphisms. These factors include the location and size of the sac, the amount of brain matter herniated into the sac, the presence versus absence of the brainstem, occipital lobe, and dural sinuses within the sac, and whether or not hydrocephalus is present. 8 Even in cases where the surgeon is well aware of these factors and fully equipped to address them, complications such as intraoperative blood loss and perioperative hypothermia are in some cases inevitable, and the potential for their occurrence lends an added layer of complexity to these cases.
Even more complex and challenging are giant occipital encephaloceles, an extremely rare, formidable form of encephalocele that can be very challenging for neurosurgeons to treat successfully. Also referred to as large or massive encephaloceles, giant encephaloceles are reported in only a few cases in the literature, and their exact incidence is unknown. Typically, patients with giant occipital encephaloceles will present as neonates and infants due to difficulties nursing and feeding, even though their defect should be recognized and addressed at birth. 4 When the patients eventually present weeks or even months down the line, microcephaly, cleft lip, and CSF leak may be observed. 4 Preoperatively, MRI should be evaluated to assess the condition of the transverse sinus and torcula in giant occipital encephaloceles, as these sinuses can herniate with other contents into the sac. MRI is also the first choice of imaging, because CT should only be performed as a last resort due to the risks of infant exposure to radiation (sometimes CT may be necessary to assess the structural integrity of the underlying bone). In a recent case series of 14 children with giant encephaloceles (13 of which were located occipitally), hydrocephalus was present in 10 patients upon presentation. 4 Seven of these patients ultimately required a ventriculoperitoneal (VP) shunt, of which five were placed during the encephalocele repair surgery and the remaining two were inserted postoperatively in the setting of worsening hydrocephalus, the same scenario that required postoperative shunt insertion in our patient. 4 Intraoperatively, perhaps the most challenging decision the surgeon will face is whether to perform partial excision of the brain or return its contents in full back into the intracranial cavity. In an ideal situation, all brain tissue would of course be kept, but the challenge of doing so in these anomalous cases is that the volume of herniated brain exceeds the size of the cranial vault and cavity. In the series reported by Mahapatra and colleagues, 7/14 patients underwent partial excision of brain prior to closure. 4 Other measures can be taken to avoid concerning elevations in ICP even when the full brain contents are returned; these include expansile cranioplasty and craniectomy. However, these measures involve a delicate balance of many considerations, as even small fluctuations in ICP in infants can lead to complications such as sudden cardiorespiratory arrest. 4 In sum, this surgery should be completed in as efficient a manner as possible due to the risks of infant hypothermia, complications resulting from prolonged administration of anesthesia, blood loss, infection, respiratory distress, and aspiration pneumonia, among others.

Figure 4. The pedicle of the dysplastic brain tissue is ligated with suture after circumferential dissection approaching the calvarial defect.

Figure 5. The remnant ligated pedicle and surrounding dysplastic dura and dermal tissue at the cranial defect after resection of the encephalocele contents.
| CONCLUSIONS
Herein, we present the rare case of a newborn Saipanese male with a giant occipital encephalocele that was surgically treated by truncation of the encephalocele at its base during an exceedingly complex and risky procedure. Fortunately, the surgery was successful and the patient made a good recovery in the immediate postoperative course. When encountered, all encephaloceles, including those meeting the criteria for "giant" and located occipitally, should be evaluated for the following, as these are the most important factors informing the operative plan: encephalocele location, size, contents, and presence versus absence of hydrocephalus.
FUNDING INFORMATION
Authors report no funding sources related to this work. | 2023-05-21T05:06:50.549Z | 2023-05-01T00:00:00.000 | {
"year": 2023,
"sha1": "44f734c9dfdb8d5c3b22dca446da6e73300ea67f",
"oa_license": "CCBYNCND",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "44f734c9dfdb8d5c3b22dca446da6e73300ea67f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
251325819 | pes2o/s2orc | v3-fos-license | Editorial: Peripheral immune system and neurodegenerative disease
COPYRIGHT © 2022 Zhang, Wang, Yuan and Deng. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
Editorial on the Research Topic: Peripheral immune system and neurodegenerative disease. Neurodegenerative diseases are a class of chronic and irreversible disorders characterized by progressive degeneration and loss of function of the central and/or peripheral nervous systems. The main pathological feature of neurodegenerative disease in the central nervous system (CNS) is selective neuronal loss in the brain and spinal cord, leading to cognitive and/or motor dysfunction. The immune system plays a variety of roles in the pathophysiology of neurodegenerative diseases. Basic and clinical findings establish microglia as the main innate immune cells in the brain, which can be activated and involved in neuroinflammation in nearly all neurodegenerative disorders. In recent years, many scientists have moved away from conceptualizing neurodegenerative disease as a purely neuron-centric disease; rather, a close functional connection between the peripheral immune system and the central nervous system has been increasingly acknowledged. An increasing number of circulating immune cells have been detected in neurodegenerative brains. In this regard, it is important to understand how the peripheral immune system interacts with the central nervous system in regulating the onset and development of neurodegenerative diseases. Studies aimed at exploring the role of the peripheral immune system in neurodegenerative diseases will help to identify new targets and improve the feasibility of therapeutic interventions. The manuscripts in this Research Topic focus on the relationship between peripheral immunity and neurodegenerative disease or neurodegenerative pathological changes. We highlight three specific themes in this topic: (1) peripheral innate/adaptive immunity and neurodegenerative disease;
(2) immunology-related serum, plasma, or blood platelet studies on neurodegenerative disease; and (3) the crosstalk between peripheral immunity and the central nervous system. The manuscripts in this Research Topic cover three main types of neurodegenerative disease: Alzheimer's disease (AD), Parkinson's disease (PD), and amyotrophic lateral sclerosis (ALS). AD, the most common type of dementia worldwide, has already affected over 50 million people. The lack of definite diagnostic biomarkers and effective treatments is the main reason AD remains uncontrolled. The progression of AD is a dynamic process, from pre-symptomatic AD, through mild cognitive impairment (MCI), to the AD stage. Therefore, a cost-effective, easy-to-measure biomarker to identify subjects who will develop AD, especially at the pre-symptomatic stage, is urgently needed. Qin et al. investigated serum biomarkers during different AD stages and potential novel protein biomarkers of pre-symptomatic AD. Thirteen serum proteins were significantly different in patients in the AD or MCI groups. Some proteins, including cathepsin D, immunoglobulin E (IgE), epidermal growth factor receptor (EGFR), matrix metalloproteinase-9 (MMP-9), von Willebrand factor (vWF), haptoglobin, and phosphorylated Tau-181 (p-Tau181), correlated with all cognitive measures. They conclude that the serum level of p-Tau181 might be broadly available to identify individuals with pre-clinical AD and to assess the severity of AD. Huang et al. used a meta-analysis to systematically evaluate the association of peripheral blood cell counts and indices with AD and MCI. Changes in leukocyte, lymphocyte, neutrophil, and CD8+ T cell counts, as well as the neutrophil-lymphocyte ratio and the CD4+/CD8+ ratio, are closely associated with AD, which provides clinical data of potential diagnostic value. Besides peripheral functional immune blood cells, the complement system, an important arm of the innate immune system, is inextricably intertwined with the development of cognitive impairment. Li Z. et al. investigated and discussed the differences in complement activation pathways in cognitive impairment and in type 2 diabetes mellitus (T2DM) with cognitive impairment, which provides scientific data on innate immune links between cognitive dysfunction and other diseases. Regarding treatment for AD, Yang et al. carried out a randomized controlled trial to investigate the effects of sport stacking on overall cognitive and brain function recovery in patients with MCI and AD. It suggested that sport stacking may increase the level of neuroprotective growth factors and enhance neural plasticity. In addition, Peng and Wu reviewed Irisin, an exercise-stimulated cleavage product of transmembrane fibronectin type III domain-containing protein 5, in elderly dementia and cognitive impairment. One of the important roles of Irisin is that it can be regarded as a mediator of muscle-brain crosstalk, providing theoretical support for exercise therapy in patients with dementia. These findings suggest that both exercise and sport are beneficial to elderly patients with cognitive impairment, MCI, and AD.
Parkinson's disease is the second most common neurodegenerative disease in the elderly, with the fastest-growing morbidity. PD is mainly characterized by motor features, such as postural instability, bradykinesia, tremor, and rigidity, which are caused by selective loss of dopaminergic neurons in the substantia nigra pars compacta. The interaction between CNS-resident cells and peripheral immune cells in PD pathogenesis has attracted the attention of researchers. In this Research Topic, Zhang et al. systematically retrieved and evaluated the functions of natural killer (NK) cells in PD. NK cells may play a neuroprotective role in PD pathogenesis, and regulating their function reveals novel targets for the management and treatment of PD. Li D. et al. comprehensively reviewed platelet-derived growth factor (PDGF). The manuscript covers the classification, structure, biological functions, and pathogenic roles of PDGF in PD. In the course of PD, PDGF participates in pathogenesis through a variety of mechanisms, such as regulating mitochondrial function, Ca2+ homeostasis, protein misfolding and aggregation, and neuroinflammation. They also discuss potential treatment strategies targeting PDGF through multiple methods, especially genetic treatment. In ALS, Yu et al. provide a detailed review of crosstalk between the peripheral and central immune systems from a neuroimmunological perspective, offering new insight into pathogenic mechanisms and innovative therapeutic approaches for ALS. Most noteworthy, Zang et al. comprehensively reviewed the crosstalk of the central and peripheral immune systems in the three neurodegenerative diseases mentioned above. They summarize the roles and molecular mechanisms of the main central immune cells (microglia and astrocytes) and peripheral immune cells (monocytes, NK cells, T cells, dendritic cells, and B cells) in these neurodegenerative disorders.
In summary, this Research Topic highlights the emerging role of the peripheral and central immune systems in common neurodegenerative diseases. These manuscripts identify potential targets among peripheral immune cells, from diagnosis to therapy. In the future, we certainly expect more studies to be added and the discussion to continue.
Author contributions
KZ and CW decided the layout, wrote the manuscript, and acted as Editors for this Research Topic. All authors contributed to the article and approved the submitted version. | 2022-08-05T13:29:15.842Z | 2022-08-05T00:00:00.000 | {
"year": 2022,
"sha1": "aba518156ce496781ffd329d25344d8854f1a2cb",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "aba518156ce496781ffd329d25344d8854f1a2cb",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
5765264 | pes2o/s2orc | v3-fos-license | Ferulic Acid Alleviates Changes in a Rat Model of Metabolic Syndrome Induced by High-Carbohydrate, High-Fat Diet
Metabolic syndrome is a cluster of metabolic abnormalities characterized by obesity, insulin resistance, hypertension and dyslipidemia. Ferulic acid (FA) is the major phenolic compound found in rice oil and various fruits and vegetables. In this study, we examined the beneficial effects of FA in minimizing insulin resistance, vascular dysfunction and remodeling in a rat model of high-carbohydrate, high-fat diet-induced metabolic changes, which is regarded as an analogue of metabolic syndrome (MS) in man. Male Sprague-Dawley rats were fed a high-carbohydrate, high-fat (HCHF) diet and 15% fructose in drinking water for 16 weeks, while control rats were fed a standard chow diet and tap water. FA (30 or 60 mg/kg) was orally administered to the HCHF and control rats during the last six weeks of the study. We observed that FA significantly improved insulin sensitivity and lipid profiles, and reduced elevated blood pressure, compared with untreated MS rats (p < 0.05). Moreover, FA also improved vascular function and prevented vascular remodeling of mesenteric arteries. The effects of FA in HCHF-induced MS may be realized through suppression of oxidative stress by down-regulation of p47phox, increased nitric oxide (NO) bioavailability with up-regulation of endothelial nitric oxide synthase (eNOS), and suppression of tumor necrosis factor-α (TNF-α). Our results suggest that supplementation of FA may have health benefits by minimizing the cardiovascular complications of MS and alleviating its symptoms.
Introduction
Plant polyphenols are phytochemical compounds found in various plants and fruits. This group of compounds has been intensively investigated as a potential source of treatments for various diseases including metabolic syndrome, diabetes and cancer [1]. Polyphenolic compounds are classified into simple phenols, flavonoids, hydroxycinnamic acids, coumarins, xanthones, acetophenones, phenylacetic acids and the less common stilbenes and lignans [2,3]. These natural polyphenols have been shown to have varying bioavailability and marked health benefits in various diseases [1]. Ferulic acid (FA; 4-hydroxy-3-methoxycinnamic acid), a hydroxycinnamic acid derivative, is abundant in fruits and vegetables, such as tomato, orange, other citrus fruits, carrot, sweet corn, cabbage, broccoli, banana and rice bran [4,5]. FA is esterified in various forms in these sources. It is relatively well absorbed when compared with flavonoid compounds [6]. Numerous studies have shown that FA possesses potent antioxidant activity, scavenging free radicals and enhancing the cell stress response through up-regulation of the cytoprotective system [7]. Moreover, FA has been shown to reduce systolic blood pressure in spontaneously hypertensive rats (SHR) [8], and to elicit improved endothelial function in 2 kidney-1 clip (2K-1C) hypertensive rats and in rabbits fed a high-fat diet [9,10]. Treatment with FA decreased blood glucose in mice [11], and reduced plasma triglycerides, free fatty acids and total cholesterol in diabetic rats and mice [12,13]. FA also decreases some inflammatory mediators, such as prostaglandin E2 and tumor necrosis factor-alpha (TNF-α) [14], improves nitric oxide (NO) bioavailability and increases NO synthesis [9]. Based on this evidence, FA may offer beneficial effects against many disorders associated with oxidative stress and inflammation, including metabolic syndrome, diabetes, cardiovascular disease, Alzheimer's disease and cancer [5,7].
Metabolic syndrome is a major health problem which predisposes those affected to the development of type 2 diabetes, cardiovascular and kidney diseases [15]. It is characterized by the presence of three or more of the following risk factors: hypertension, hyperglycemia, dyslipidemia, obesity and insulin resistance [15]. The prevalence of metabolic syndrome is rapidly increasing worldwide including the developing countries. This is due primarily to prevailing sedentary lifestyles and unhealthy dietary habits [16], specifically a diet rich in saturated fat and carbohydrates such as fructose and sucrose. Intake of this diet is associated with many complications including cardiovascular disease, nonalcoholic fatty liver disease (NAFLD) and metabolic syndrome [17].
Although the pathogenesis of metabolic syndrome is complex and the underlying mechanisms are not clearly understood, many experimental animal models of metabolic syndrome have enriched our understanding of its etiology and pathophysiological basis and supported the development of therapies, as described in the literature [18][19][20]. Data obtained from various studies have shown that animal models of metabolic syndrome mimic the major signs of metabolic syndrome in humans, especially hypertension, dyslipidemia, diabetes, impaired glucose tolerance, obesity and insulin resistance. Among all these animal models, the evidence suggests that chronic consumption of a high-carbohydrate (in the form of fructose) and high-fat diet by normal rodents comes the closest to fulfilling the criteria by which metabolic syndrome in man is defined [20]. Accordingly, in what follows, the abbreviation MS denotes the high-carbohydrate, high-fat (HCHF) rat model of metabolic syndrome. Previous studies have demonstrated that rats fed a high-carbohydrate, high-fat diet for a few months develop signs of MS, as revealed by insulin resistance, dyslipidemia, vascular dysfunction, inflammation, fibrosis and enlargement of the heart with structural remodeling [17,21]. Compounds that could prevent MS and its long-term vascular complications, such as vascular remodeling, could be very beneficial for health promotion.
Impaired vascular function is probably associated with a diminution of the vasoprotective effect of endothelial NO and increased oxidative stress through enhanced formation of reactive oxygen species (ROS) and release of pro-inflammatory mediators (e.g., TNF-α) [14,22]. The aim of this study was to determine whether FA could prevent metabolic syndrome and vascular remodeling in rats in which MS was induced by an HCHF diet, and to elucidate the mechanisms underlying the alleviation of oxidative stress, inflammation and vascular dysfunction.
Animals and Diets
Male Sprague-Dawley rats weighing 220-250 g were supplied by the National Laboratory Animal Center, Mahidol University, Salaya (Nakornpathom, Thailand). After 7 days of acclimatization, the rats were randomly assigned to 2 groups: a control group (C, n = 32), which received standard rat chow diet (Chareon Pokapan Co. Ltd., Bangkok, Thailand) with tap water, and a high-carbohydrate, high-fat diet group (HCHF; n = 48), which was fed the HCHF diet together with 15% fructose in drinking water for 16 weeks. At week 10, the metabolic syndrome state was confirmed by measurement of fasting blood glucose (FBG) (≥100 mg/dL), systolic blood pressure (SBP) (≥140 mmHg) and lipid profiles (hypertriglyceridemia or low high-density lipoprotein-cholesterol (HDL-C) level). All HCHF rats that satisfied the presumptive MS criteria were randomly divided into 3 groups (n = 16/group) with matched body weight, SBP, FBG and lipid profiles. The normal control rats were likewise divided into 2 groups of 16. The five groups were treated as follows for the last 6 weeks of the experimental period: (1) normal control rats treated orally with vehicle alone, propylene glycol (PG), at 1.5 mL/kg/day (C + PG); (2) normal control rats treated orally with ferulic acid (FA) at 60 mg/kg/day (C + FA60); (3) MS rats treated orally with PG vehicle at 1.5 mL/kg/day (MS + PG); (4) MS rats treated orally with a low dose of FA (30 mg/kg/day) (MS + FA30); and (5) MS rats treated orally with a high dose of FA (60 mg/kg/day) (MS + FA60).
Ferulic acid (FA; trans-ferulic acid, 99%; Figure 1) was obtained from Sigma-Aldrich (St. Louis, MO, USA). All the rats were housed at the Northeast Laboratory Animal Center (Khon Kaen University, Khon Kaen, Thailand) in a temperature-controlled (25 ± 2 °C) room on a 12-h light/dark cycle with free access to the group-specific diets and water. All experimental protocols were approved by the Animal Ethics Committee of Khon Kaen University.
The composition and preparation of the HCHF diet have been described in a previous study [17], with some modifications. It consisted of 175 g fructose, 350 g condensed milk, 200 g pork tallow, 200 g powdered rat chow, 25 g of Hubble, Mendel and Wakeman salt mixture, and 50 g water per kilogram of diet. The energy densities of the food pellets of the control and HCHF diets are shown in Table 1. As the key carbohydrate for the HCHF group is fructose, the drinking water for this group was supplemented with 15% fructose, while the standard chow-fed rats received tap water. Therefore, animals in the HCHF group received more carbohydrate than those in the control group. Rats in all groups were given free access to food and water.
Physiological and Metabolic Variables
All rats were monitored for diet consumption and water intake. The weight gain of each rat was measured weekly. Plasma concentrations of total cholesterol, triglyceride (TG) and HDL-cholesterol were monitored before the feeding period (i.e., at the end of the 7-day acclimatization period), at week 10 and at the end of the experimental period using timed-endpoint methods [23].
Indirect Measurement of Blood Pressure in Conscious Rats
The SBP was monitored every 4 weeks in conscious rats, pre-warmed (32 °C) for 10 min, by non-invasive tail-cuff plethysmography (IITC/Life Science, Woodland Hills, CA, USA). Five repeated measurements were taken for each rat. The individual SBPs were obtained from an average of 3 consistent readings. After 8 weeks, SBP was measured every 2 weeks until the end of the experimental period.
Fasting Blood Glucose, Oral Glucose Tolerance Test
An oral glucose tolerance test (OGTT) was performed every 4 weeks and FBG was measured every 2 weeks, starting before the feeding period. Blood samples were taken from a lateral tail vein to measure FBG using a glucometer (Roche Diagnostics, Sydney, Australia). After 8 weeks, the OGTT was performed every 2 weeks throughout the last 8 weeks of the experimental period. Before the OGTT, rats were fasted for 8-12 h, and the 15% fructose-supplemented drinking water in the HCHF group was replaced by normal drinking water during this period. The rats were subjected to an oral glucose load of 2 g/kg body weight, and blood glucose concentrations were measured before glucose loading and at 30, 60, and 120 min after administration. Blood glucose concentrations over the 120 min period were used to calculate the area under the concentration-time curve (AUC).
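For reference, the AUC over the 0-120 min sampling points is conventionally computed with the trapezoidal rule; a minimal sketch follows (the glucose readings are hypothetical):

```python
import numpy as np

def ogtt_auc(times_min, glucose_mg_dl):
    """Area under the glucose concentration-time curve (trapezoidal rule).
    Units: (mg/dL) x min for times in minutes and glucose in mg/dL."""
    return np.trapz(glucose_mg_dl, times_min)

t = np.array([0, 30, 60, 120])           # sampling times, min
glucose = np.array([95, 160, 140, 110])  # hypothetical readings, mg/dL
print(ogtt_auc(t, glucose))              # AUC over 0-120 min
```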
Hemodynamic Measurements
At the end of the 16-week experimental period, all rats were anesthetized with an intraperitoneal injection of pentobarbital sodium (60 mg/kg). The femoral artery was cannulated with a polyethylene tube connected to a pressure transducer for monitoring blood pressure (BP) and heart rate (HR). Hindlimb blood flow (HBF) and hindlimb vascular resistance (HVR) were also measured, as previously described [25]. After blood flow measurements, vascular reactivity was evaluated by infusing vasoactive agents at various doses through an additional catheter in the femoral vein in a stepwise fashion at 5-min intervals. The vasoactive agents tested were an endothelium-dependent vasodilator, acetylcholine (ACh; 3, 10, 30 nmol/kg) [26], an endothelium-independent vasodilator, sodium nitroprusside (SNP; 1, 3, 10 nmol/kg) [26], and an alpha sympathomimetic agent, phenylephrine (Phe; 0.01, 0.03, 0.1 µmol/kg) [27]. Changes in blood pressure were expressed as percentages of the control values obtained immediately before the administration of the test substance. After hemodynamic measurements, rats were sacrificed with an overdose of the anesthetic drug. Blood samples were collected from the abdominal aorta and centrifuged at 3500 × g for 15 min at 4 °C to obtain plasma for assaying plasma TNF-α and oxidative stress markers. The heart, left ventricle (LV) and liver were separated and weighed. Organ weights were normalized with respect to body weight (mg/g body weight (BW)). The aorta and carotid arteries were rapidly isolated and used for Western blot analysis of endothelial nitric oxide synthase (eNOS) and p47phox nicotinamide adenine dinucleotide phosphate (NADPH) oxidase expression, and for measurement of superoxide (O2•−) production [25,26]. Plasma malondialdehyde (MDA) concentrations were determined by measuring thiobarbituric acid reactive substances following a previously described method [28].
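As a side note, hindlimb vascular resistance and the normalized pressure responses can be computed directly from the recorded signals; a minimal sketch under the usual definitions (HVR = MAP/HBF; the numbers are hypothetical):

```python
def hindlimb_vascular_resistance(map_mmhg, hbf_ml_min):
    """HVR as mean arterial pressure divided by hindlimb blood flow,
    in mmHg/(mL/min)."""
    return map_mmhg / hbf_ml_min

def percent_change(map_baseline, map_after_drug):
    """Pressure response to a vasoactive agent, as % of the pre-dose control."""
    return 100.0 * (map_after_drug - map_baseline) / map_baseline

print(hindlimb_vascular_resistance(120.0, 4.0))  # hypothetical MS rat
print(percent_change(120.0, 102.0))              # e.g. an ACh-induced fall, -15%
```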
Assay of Nitric Oxide Metabolites
The levels of nitrate/nitrite in the plasma, the end products of NO metabolism, were quantified by an enzymatic conversion method with the Griess reaction as previously described [26].
Plasma TNF-α
The plasma concentration of TNF-α was determined by an ELISA kit (eBioscience, San Diego, CA, USA).
Histology
Arteries from six rats of each group were used for histology. At the end of the experiment, rats were sacrificed with an overdose of pentobarbital sodium, and mesenteric resistance arteries were collected and fixed with 4% phosphate-buffered formaldehyde. To determine the medial cross-sectional area (CSA), arterial wall thickness and media-to-lumen ratio (M/L), the mesenteric arteries were embedded in paraffin blocks, and 5 µm thick sections were cut and stained with hematoxylin and eosin (H&E). The stained sections were examined with light microscopy (Nikon ECLIPSE Ni-U, Nikon Instruments Inc., Melville, NY, USA) and the images were captured with a digital microscope camera (Nikon DS-Ri1). CSA, measured in tissue sections under a ×40 objective, was calculated by subtracting the lumen area (Ai) from the total vessel area including the lumen (Ae). The external radius (Re) and the internal radius (Ri) were calculated as the square roots of Ae/π and Ai/π, respectively. Arterial wall thickness was calculated as Re minus Ri. Finally, the M/L ratio was calculated as the wall thickness divided by the radius of the lumen [25].
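A minimal sketch of this morphometric calculation (the traced areas are hypothetical):

```python
import math

def vessel_morphometry(lumen_area_um2, total_area_um2):
    """Medial CSA, wall thickness and media-to-lumen (M/L) ratio from the
    lumen area (Ai) and total vessel area (Ae), per the definitions above."""
    csa = total_area_um2 - lumen_area_um2      # medial cross-sectional area
    r_e = math.sqrt(total_area_um2 / math.pi)  # external radius
    r_i = math.sqrt(lumen_area_um2 / math.pi)  # internal (lumen) radius
    wall = r_e - r_i                           # wall thickness
    return csa, wall, wall / r_i               # M/L = thickness / lumen radius

# Hypothetical traced areas from one H&E section, in um^2:
print(vessel_morphometry(lumen_area_um2=31400.0, total_area_um2=62800.0))
```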
Western Blot Analysis
Western blotting was performed in aortas from each experimental group to detect eNOS and p47 phox as previously described [25,28]. Briefly, proteins of aortic homogenates were separated by electrophoresis on 10% sodium dodecyl sulfate polyacrylamide gel.
The proteins were electrophoretically transferred to a polyvinylidene difluoride membrane, blocked with 5% skimmed milk in Tris-buffered saline containing 0.1% Tween-20, and then incubated overnight with primary antibodies: mouse monoclonal anti-eNOS (BD Bioscience, San Jose, CA, USA) and mouse monoclonal anti-p47phox (Santa Cruz Biotechnology, Indian Gulch, CA, USA). The membranes were then repeatedly washed and incubated with the secondary antibody, horseradish peroxidase-conjugated goat anti-mouse immunoglobulin G (IgG) (Santa Cruz Biotechnology), for 2 h at room temperature. The blots were incubated in enhanced chemiluminescence (ECL) substrate solution (Thermo Fisher Scientific, Rockford, IL, USA). The intensities of the specific eNOS, p47phox and β-actin bands were visualized and captured by ImageQuant 400 (GE Healthcare Life Science, Pittsburgh, PA, USA). The expression of eNOS and p47phox protein was normalized to β-actin expression from the same sample, and values are presented as percentages of those from the aorta of normal controls.
Statistical Analysis
All data are presented as mean ± standard error of the mean (SEM). Statistically significant differences among groups were calculated using one-way analysis of variance (ANOVA) followed by the Student-Newman-Keuls post hoc test. Statistical significance was defined as p < 0.05.
Effect of FA on Body Weight and Organ Weight of MS Rats
Rats fed the HCHF diet gained weight at a similar rate to control animals on the normal diet, and supplementation with FA in the normal diet or HCHF diet groups did not affect body weight (Table 2). However, MS rats showed a marked increase in liver weight and a small increase in heart and left ventricular weight when compared with control rats. Treatment with FA apparently normalized the increased liver weight, while FA did not alter organ weight in normal rats (Table 2).
Effect of FA on Fasting Blood Glucose and Oral Glucose Tolerance Test
The HCHF diet was associated with significant increases in FBG levels, AUC for the oral glucose tolerance test, fasting serum insulin and HOMA-IR scores when compared to control rats at the end of 16 weeks. These changes indicate impaired glucose tolerance in the HCHF rats. FA treatment (30 and 60 mg/kg) significantly prevented these changes and alleviated the insulin-resistant state in a dose-dependent manner. FA had no hypoglycemic effect and did not alter the glucose tolerance test results in control rats (Table 3).
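For readers unfamiliar with the index, HOMA-IR is conventionally computed from fasting glucose and insulin; assuming the usual formulation in conventional units (the paper does not spell it out):

$$\mathrm{HOMA\text{-}IR} = \frac{\text{fasting insulin}\ (\mu\mathrm{U/mL}) \times \text{fasting glucose}\ (\mathrm{mg/dL})}{405}$$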
Effect of FA on Lipid Profile and Plasma TNF-α
The HCHF diet induced significant increases in plasma total cholesterol and triglycerides and a significant decrease in HDL-cholesterol when compared with the control group. FA (30 and 60 mg/kg) significantly prevented the increases in plasma triglycerides and total cholesterol and the decrease in plasma HDL-C. The values of these variables in FA-treated normal controls did not differ from control values (C + PG, Table 3). A chronic inflammatory state is one of the important characteristics of metabolic syndrome. Plasma TNF-α was markedly elevated in MS rats, and treatment with FA in HCHF animals largely suppressed release of this inflammatory cytokine (Table 3).
Effect of FA on Blood Pressure
Metabolic syndrome is characterized by an elevation of blood pressure. The HCHF diet induced an increase in SBP (Figure 2), which was significant compared to controls within 4 weeks of feeding and progressively increased throughout the 16 weeks of the study period. FA, administered from week 10 through week 16, significantly prevented the increase in SBP when compared to the MS (MS + PG) group. The antihypertensive effect was evident within 2 and 4 weeks following the administration of the high and low doses of FA, respectively, and was dose-dependent. In contrast, FA administered to control rats fed a normal diet did not show any hypotensive or blood pressure-lowering effect.
Effect of FA on Hemodynamic Parameters and Vascular Reactivity
HCHF-fed rats showed abnormalities of cardiovascular dynamics, as demonstrated by increases in SBP, mean arterial pressure (MAP), diastolic blood pressure (DBP) and HR when compared with controls (Table 4). These changes were associated with decreased HBF and increased HVR (Table 4). Increased heart rate in HCHF-fed rats could contribute to the blood pressure elevation. However, we found that heart rate contributed less to the development of hypertension than peripheral vascular resistance, since the heart rate of the HCHF group increased by only 23%, whereas the hindlimb vascular resistance more than doubled with respect to normal control values (Table 4). Interestingly, treatment with FA (30 and 60 mg/kg) significantly alleviated these changes in a dose-dependent manner when compared with MS rats.
Furthermore, MS rats showed diminished vascular responses to the vasoactive agents Phe (Figure 3A) and ACh (Figure 3C) when compared with the normal control group. MS rats treated with FA showed a dose-dependent restoration of vascular responsiveness, with prevention of the attenuation of the vasoconstrictive and vasodilatory effects of Phe and ACh (Figure 3A,C). In MS rats, the vascular response to SNP, an endothelium-independent vasodilator, was unchanged (Figure 3B) and was not altered following treatment with FA.

Figure 3. Effect of FA on vascular responses in MS rats. The blood pressure response in rats to (A) phenylephrine-induced increases in blood pressure; (B) sodium nitroprusside-induced decreases in blood pressure; and (C) acetylcholine-induced decreases in blood pressure was assessed. Mean arterial pressures are presented as mean ± standard error of the mean (SEM) (n = 10/group); * p < 0.05 vs. C + PG; # p < 0.05 vs. MS group; † p < 0.05 vs. MS with FA 30 mg/kg. C + PG: normal control rats receiving propylene glycol as a vehicle; C + FA60: normal control rats receiving ferulic acid 60 mg/kg; MS + PG: metabolic syndrome rats receiving propylene glycol; MS + FA30: metabolic syndrome rats receiving ferulic acid 30 mg/kg; MS + FA60: metabolic syndrome rats receiving ferulic acid 60 mg/kg. FA, ferulic acid; MS, metabolic syndrome.
Effect of FA on Oxidative Stress
Excess production of ROS is one of the main factors contributing to lipid peroxidation and oxidative damage in metabolic syndrome. In this study, we assessed vascular oxidative status by measurement of superoxide production in carotid arteries and of oxidative products of lipid peroxidation in plasma. Figure 4A shows that vascular superoxide production from carotid strips was significantly higher in MS rats than in the control groups. Plasma MDA levels were significantly greater in MS rats than in controls (Figure 4B). Administration of FA (30 and 60 mg/kg) significantly prevented the HCHF-induced increases in vascular superoxide production and plasma MDA in a dose-dependent manner (Figure 4A,B). FA treatment did not alter basal superoxide formation or any of the oxidative stress markers in normal control rats.
Effect of FA on Nitric Oxide Formation
NO released from vascular tissues plays critical roles in the vascular response, in changes in pressure and flow, and in cardiovascular protection [29]. The plasma nitrate/nitrite level, representing NO metabolites, was significantly lower in MS rats than in the control groups. FA prevented the attenuation of plasma nitrate/nitrite levels when compared with the HCHF group not treated with FA. FA treatment did not affect plasma NO in control rats (Figure 5).
Figure 6 illustrates the histological changes in mesenteric arteries from the various experimental groups. Vascular wall thickness, M/L ratio and CSA were significantly increased in the HCHF group; however, the luminal cross-sectional area remained unchanged (Figure 6D). The histological changes in the vessel medial layer were indicative of hypertrophic vascular remodeling. These results show that chronic consumption of the HCHF diet induced vascular remodeling, and that treatment with FA (30 and 60 mg/kg) attenuated this remodeling in a dose-dependent manner, as indicated by significantly reduced vascular wall thickness, M/L ratio and CSA (Figure 6A-C).
Effect of FA on Arterial Protein Expression of eNOS and p47 phox
There were changes in NO and O2•− production in MS rats which were largely prevented by FA. However, it was not clear whether these changes were associated with up-regulation and/or down-regulation of eNOS and NADPH oxidase enzymes. Therefore, Western blot analysis was performed to examine the expression levels of eNOS and the NADPH oxidase subunit p47phox in the aorta. When comparing MS rats with controls, we observed a significant decrease in eNOS expression (Figure 7A), whereas the protein expression of p47phox was significantly increased (Figure 7B). FA administration (60 mg/kg) prevented the reduction of eNOS and the increase in p47phox expression. However, FA treatment in normal control rats did not cause any significant changes in the protein expression of eNOS or the p47phox subunit (Figure 7A,B).
Discussion
In the present study, we have demonstrated that rats fed the HCHF diet for 16 weeks developed the signs of metabolic syndrome, including hyperglycemia, insulin resistance, dyslipidemia, high blood pressure, vascular remodeling, oxidative stress and inflammation. The beneficial effect of oral supplementation of FA in MS rats for six weeks was evaluated, and it was found that FA could reverse almost all the deleterious changes in these animals. The protective effects resulted from the restoration of insulin sensitivity and the normalization of blood pressure and vascular responsiveness, whereas the underlying mechanism may be the suppression of oxidative stress through downregulation of NADPH oxidases, inhibition of inflammatory cytokines and maintenance of nitric oxide availability.
Animals fed the HCHF diet did not gain weight in comparison with control rats. However, organ weight, particularly that of the liver, was increased. This is consistent with other reports using a similar HCHF rat model, which found liver inflammation and steatosis together with cardiac hypertrophy [30]. In this study, FA supplementation was started at week 10, after hypertension had already developed. FA was associated with decreased blood pressure, but not with decreased heart and left ventricular weights. This suggests that cardiac hypertrophy resulting from remodeling would take more time to regress to normal. Moreover, FA also decreased the liver weight. The exact mechanism of this effect is not known and needs further study. However, since liver cells can change rather rapidly due to their high rate of division when stimulated, the reduced liver weight in FA-supplemented rats is probably due to suppression of liver cell proliferation. FA reduces oxidative stress and insulin resistance, thereby normalizing lipid metabolism and suppressing inflammation, leading to an alleviation of the fatty liver and also preventing the changes in arterial morphology and function.
Insulin resistance is a hallmark of metabolic syndrome and type 2 diabetes. The elevation of FBG and plasma insulin, resulting in an increase of HOMA-IR, was largely alleviated by FA. This suggests that FA may have an insulin-sensitizing effect, which is consistent with previous reports of the anti-diabetic action of FA in type 2 diabetic mice [11]. The restoration of insulin sensitivity, in turn, may account for the improvement in the lipid profile, i.e., the reduction in hypercholesterolemia and hypertriglyceridemia and the increase in HDL-C. It should be noted that HCHF diet-induced MS is in part associated with the release of inflammatory cytokines, i.e., TNF-α in the present study. The release of inflammatory cytokines is known to be a very strong inducer of insulin resistance [31]. In a recent study we reported inflammatory cytokine-induced insulin resistance in a human hepatocellular liver carcinoma cell line (HepG2), whereas suppression of its signaling pathway restored insulin sensitivity [32]. The suppression of plasma TNF-α by FA found in the present study may contribute significantly to the insulin-sensitizing effect.
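As a side note on how this index is computed, HOMA-IR follows a fixed arithmetic formula; the sketch below assumes the conventional units (fasting glucose in mmol/L, fasting insulin in µU/mL), which the study does not state explicitly, and the example values are illustrative rather than measured.

```python
def homa_ir(fasting_glucose_mmol_l: float, fasting_insulin_uu_ml: float) -> float:
    """Homeostatic Model Assessment of Insulin Resistance.

    Standard formula: (fasting glucose [mmol/L] * fasting insulin [uU/mL]) / 22.5.
    Higher values indicate greater insulin resistance.
    """
    return fasting_glucose_mmol_l * fasting_insulin_uu_ml / 22.5

# Illustrative (not measured) values: FBG 8.0 mmol/L, insulin 12.0 uU/mL
print(homa_ir(8.0, 12.0))  # -> 4.266...
```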
The elevation of blood pressure is another feature of the metabolic syndrome state. Patients with MS and type 2 diabetes are frequently afflicted with cardiovascular complications, for instance hypertension, coronary heart disease and ischemic stroke [33]. Good control of blood pressure in hypertensive and diabetic patients is known to benefit cardiovascular outcomes [34]. The etiology of hypertension remains poorly understood. However, oxidative stress in association with a chronic inflammatory state plays a major role in the modulation of vasomotor tone and vascular remodeling [29]. We observed an impaired vascular response and high blood pressure in MS rats, which was associated with vascular dysfunction, suppression of cytoprotective plasma nitric oxide and vascular eNOS expression, and an increase of p47phox expression. Ferulic acid has been shown to be a potent antioxidant in vitro and in vivo by up-regulating the strongly cytoprotective enzyme heme oxygenase-1 (HO-1), heat shock protein 70 (HSP70) and protein kinase B (Akt), as well as by suppressing oxidant generation and inflammation through down-regulation of cyclooxygenase-2 (COX-2) [4,7]. Moreover, FA was shown to inhibit angiotensin converting enzyme (ACE) activity [8]. In the HCHF diet-induced high blood pressure and vascular dysfunction reported here, the antioxidant, anti-inflammatory and ACE-inhibitory actions of FA could account for the antihypertensive and vascular protective effects. It should be noted that FA did not lower blood pressure in normal rats, suggesting that its blood pressure lowering effect is observed only under pathological conditions. Long-standing insulin resistance in metabolic syndrome and diabetes, as well as hypertension, causes macrovascular changes such as atherosclerosis and eventually leads to cardiovascular complications and ischemic stroke [33]. In this study, there was a thickening of the media of the mesenteric arterial wall, which may be due to the proliferation and migration of smooth muscle cells and/or accumulation of extracellular matrix [35]. The thickened media may lead to increased arterial stiffness, a feature associated with the development of atherosclerosis. Treatment with FA attenuated the vascular changes in HCHF diet rats so that the vessels appeared normal. Previous reports have shown that FA can reduce inflammatory cell infiltration and collagen deposition in kidney and heart tissue [36]. Moreover, the effects of angiotensin II and oxidants, which play a critical role in vascular damage, could be abolished by FA [37]. Altogether, the prevention of vascular remodeling could be a very important aspect of the way in which FA may prevent cardiovascular complications in the chronic metabolic syndrome state.
The antioxidant effects of FA may be due not only to free radical scavenging activity, observed here as inhibition of superoxide and MDA formation, but may also include suppression of p47phox expression, a subunit of the NADPH oxidase enzyme that generates reactive oxygen species. The restoration of eNOS expression may be a consequence of the suppression of oxidant formation, as FA had no effect on NO and eNOS levels in control animals.
Conclusions
The present study demonstrates that the HCHF diet induces metabolic syndrome-like signs in rats and that these are associated with oxidative stress, inflammation and vascular remodeling. Oral supplementation of FA ameliorates HCHF-induced MS, improves insulin sensitivity, lipid profiles and vascular endothelial function, decreases blood pressure, and reduces oxidative stress and inflammation. These therapeutic effects of FA may be due to its antioxidant and anti-inflammatory properties. This study provides evidence of the health benefits of FA consumption. | 2015-09-18T23:22:04.000Z | 2015-08-01T00:00:00.000 | {
"year": 2015,
"sha1": "5468903034017ca1c50a7abd4d3fdbd40b455e15",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-6643/7/8/5283/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5468903034017ca1c50a7abd4d3fdbd40b455e15",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
262193688 | pes2o/s2orc | v3-fos-license | Detection of a Nerve Agent Simulant by a Fluorescent Sensor Array
Detection of nerve agent (NA) gases in the environment through portable devices to protect people in case of emergencies still remains a challenge for scientists in this research field. Current detection strategies require the use of cumbersome, expensive equipment that is only accessible to specialized personnel. By contrast, emerging optical detection is one of the most promising strategies for the development of reliable, easy-readout devices. However, the selectivity of existing optical sensors needs to be improved. To overcome the lack of selectivity, the innovative strategy of optical arrays is under evaluation due to the specific response, the ease of preparation, the portability of the equipment, and the possibility of using affordable detectors, such as smartphones, that are easily accessible to non-specialized operators. In this work, the first optical-based sensor array for the selective detection of gaseous dimethylmethylphosphonate (DMMP), a NA simulant, is reported, employing a simple smartphone as a detector and obtaining remarkably efficient and selective detection.
Introduction
The real-time detection of hazardous gases in the environment using easy-handling, reliable systems still represents a challenging target that has recently attracted much scientific interest. Special attention is focused on the detection of toxic gases. Among these, nerve agents (NAs) are still used to harm people during conflicts and terrorist attacks, as demonstrated by recent international events [1,2]. The most common compounds used for this purpose are organic esters of phosphoric acid, also known as organophosphates (OP), namely the G-type (Sarin, Soman), V-type (VX), and the latest developed A-type (also known as Novichok) (see Figure 1) [3]. Despite the fact that the production, stockpiling, and use of these agents are strictly forbidden, they are still used to harm civilians during conflicts, representing a threat to human safety. Due to the high toxicity of NAs, reliable model compounds, also called simulants, are widely employed for research purposes. This class of less toxic organophosphates mimics the structure and properties of the real NAs. In particular, dimethylmethylphosphonate (DMMP) has been demonstrated to be one of the best simulants of G-type nerve agents (Figure 1) [4].
The detection of nerve agents is efficiently performed by means of instrumental techniques, including GC-MS and HPLC, reaching high sensitivity and selectivity. However, the time-consuming sample preparation precludes real-time detection in the field. In addition, only specialized operators have access to such expensive and technologically advanced equipment. To overcome these limitations, many efforts have been oriented toward the development of portable, easy-readout and affordable devices for the real-time sensing of OP gases, accessible to everyone [5-7]. Detection of NAs using molecular probes has been demonstrated to be a powerful approach [8-12]. In particular, the use of optical (colorimetric and fluorescent) receptors seems to be the most convenient detection technique due to the high sensitivity achievable, the low cost of the equipment, the easy readout, and the fast response [13-27]. To meet the need for easy accessibility, an intriguing recent strategy involves the use of widely diffused tools, such as smartphones and digital cameras, as detectors. In this context, only a few examples of portable optical devices based on a smartphone as a detector can be found in the literature that are able to efficiently detect NA gases [28-31].
One of the main problems related to sensing, and in particular the sensing of hazardous compounds, is the need for selectivity in order to avoid false-positive responses. Recently, this problem has been bypassed by array technology, a device containing different "receptors" able to interact with different affinity with the target analyte. Considering the whole response of these receptors and exploiting multivariate statistical analysis, a characteristic fingerprint of the desired analyte can be measured, obtaining excellent levels of selectivity [32-36].
In addition, optical-based array sensors can reveal and discriminate many chemical compounds. These systems work in a similar way to the mammalian olfactory system, leading to a complex response across all probes that is unique for a single analyte. An optical array comprises many optical/fluorescent organic receptors, which show different affinity for the target analyte. The array technology is based on the non-specific interaction of multiple organic receptors with the selected analyte [37]. To the best of our knowledge, no example of NA detection by an array device has been reported in the literature.
In this work, we present the first fluorescent-based array for the selective detection of DMMP gas. The synthesis of different chemiresponsive fluorescent receptors was performed, each able to give a typical change in emission intensity after non-covalent interaction with DMMP in solution and in the gas phase. To this purpose, specifically functionalized Bodipy (MBP, OBP, OBEP, BDPy-Di-AE, BDPy-AE, PBP, MBEP, PBEP), Rhodamine (RhBP, RhBM), and Naphthylamide (Napht-1) fluorescent scaffolds, and properly modified Carbon Dots (CDs-C2-OH, CDs-C3-OH and CDs-C4-OH) (see Figure 2) were synthesized and dropped onto a solid support, obtaining an array device able to selectively detect DMMP vapors at ppm/sub-ppm levels. Selectivity was confirmed by PCA analysis, obtaining detection limits lower than the toxicity levels of NAs.
Previously, we used the array technology to detect trinitrotoluene (TNT) [34] and plant pathogenic fungi [37]. The array used for TNT detection was limited to seven different fluorescent probes, which do not guarantee the sufficient selectivity required for NA detection, while in the case of fungi detection we realized an array with 17 fluorescent molecular probes. In the present work, we realize an array device containing 15 different fluorescent probes, also including properly functionalized carbon nanoparticles, considering the high detection efficiency of these nanosystems towards DMMP [38].
Materials and Methods
General Experimental Methods: The NMR experiments were carried out at 27 °C on a Varian UNITY Inova 500 MHz spectrometer (International Equipment Trading Ltd., Mundelein, IL, USA) (1H at 499.88 MHz, 13C NMR at 125.7 MHz) equipped with a pulse field gradient module (Z axis) and a tunable 5 mm Varian inverse detection probe (ID-PFG). ESI mass spectra were acquired on an API 2000-ABSciex using CH3CN or CH3OH (positive or negative ion mode).
Procedure for sensing by array: experimental setup. The power of the UV-Vis lamp selected was 6 W, and the excitation wavelength used was 365 nm. The array device was placed in the dark chamber, and its position can be adjusted thanks to the presence of the control probe; indeed, any difference in light irradiation is normalized to the control. The UV source and the smartphone were placed 20 cm away from the array plate. The array device was obtained by dropping 2 µL of each fluorescent probe (1 × 10−3 M in CHCl3) and 2 µL of phenanthrene (control, 1 M in CHCl3) at different positions onto rectangular 5 × 3 cm RP18 silica gel foils. The array plate was then irradiated with a UV lamp (λ 365 nm) in the dark chamber, and the image of the emission was acquired using a smartphone (iPhone 13, 24 Mpixel). Then, the array was introduced into a closed container (580 cc) containing a precise amount of DMMP. These vials were heated (in an oven) at 50 °C for 1 h to totally evaporate the DMMP. The amounts in ppm of DMMP were calculated considering the partial pressure of DMMP at 50 °C [39]. After this time, the image was acquired and elaborated with Fiji. In detail, this software transforms the image from RGB values to grayscale by the formula G = (Rvalue + Gvalue + Bvalue)/3. The grayscale values are normalized to the control (phenanthrene). The statistical treatment was performed with Excel (Microsoft 365). Multivariate quantification was performed by means of the PLS tool of SIMCA-P 11 (Umetrics). The dataset was centered and unity scaled.
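To make the image-analysis step concrete, the following sketch reproduces the grayscale conversion and control normalization in NumPy instead of Fiji; the file name and spot coordinates are hypothetical placeholders, not values from the study.

```python
import numpy as np
from PIL import Image

def spot_intensity(img: np.ndarray, row: int, col: int, half: int = 10) -> float:
    """Mean grayscale intensity of a square patch centered on a probe spot.

    Grayscale follows the formula used in the text: G = (R + G + B) / 3.
    """
    patch = img[row - half:row + half, col - half:col + half, :3].astype(float)
    return (patch.sum(axis=2) / 3.0).mean()

img = np.asarray(Image.open("array_after_exposure.jpg"))  # hypothetical file name
ctrl = spot_intensity(img, 50, 50)    # phenanthrene control spot (coordinates assumed)
probe = spot_intensity(img, 50, 120)  # one of the fluorescent probe spots
normalized = probe / ctrl             # value tabulated for the statistical treatment
```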
Results and Discussion
We based our array device on four different classes of organic fluorophores, in particular rhodamines, Bodipys, naphthylamides, and carbon dots (CDs), properly functionalized with groups able to interact through non-covalent interactions with DMMP. These probes were selected to cover a wide range of emission: 400-700 nm with rhodamines, 500-700 nm with Bodipys, 400-600 nm with naphthylamides, and 350-500 nm with carbon dots. Furthermore, the probes in the array interact with DMMP through non-covalent interactions, such as hydrogen bonds, ion-dipole interactions, and π-π interactions.
The commercially available Rhodamine-B (RhB) was treated with 2-hydroxypiperazine or morpholine in the presence of HBTU (O-(benzotriazol-1-yl)-N,N,N′,N′-tetramethyluronium hexafluorophosphate) as coupling reagent, thus obtaining RhBP or RhBM, respectively (Scheme 1, see Supplementary Materials for the details). The Bodipy probes were synthesized following the pathway reported in Scheme 2. In particular, PBP, MBP, and OBP were synthesized by the reaction of 3,5-dimethyl-4-ethylpyrrole with the appropriate pyridine-carboxaldehyde in the presence of triethylamine and boron trifluoride. PBEP, MBEP, and OBEP were obtained after the reaction of PBP, MBP, and OBP with an excess of ethyl iodide. The reaction of 3,5-dimethyl-4-ethylpyrrole with chloroacetyl chloride leads to BDPy-Cl, which, in the presence of an excess of ethanolamine or diethanolamine, can be converted into BDPy-AE or BDPy-Di-AE, respectively (see Supplementary Materials for the details).
Napht-1 was synthesized following a modified procedure [32]. In particular, Br-NO2-naphthoic anhydride was converted into isobutyl-Br-NO2-naphthalimide by reaction with a slight excess of isobutylamine. Then, the reaction of this imide with a large excess of ethanolamine leads to Napht-1 in good yield (see Scheme 3 and Supplementary Materials for the details).
Scheme 2. Synthetic pathways for the synthesis of the Bodipy probes.
Carbon dots (CDs) functionalized with alcoholic groups were obtained following the pathway reported in Scheme 3. In particular, the reaction of native CDs with an excess of pentafluorophenol in solvolysis leads to CDs-Pf covered by pentafluorophenol, which, in the presence of the appropriate amino alcohol, is converted into CDs-C2-OH, CDs-C3-OH, and CDs-C4-OH, respectively (see Scheme 3 and Supplementary Materials for the details).
All compounds and CDs have been fully characterized (see Supplementary Materials).
An array device was prepared following the scheme represented in Figure 3. The solid support selected for this purpose is reverse-phase silica gel (RP-18), to avoid interaction of the solid phase with the probes as well as with DMMP. Then, 2 µL of the 1 mM chloroform solution of each probe was dropped onto the solid support, and the solvent was removed by evaporation at room temperature. An image was acquired with the smartphone before and after exposure to DMMP vapors. In particular, a precise amount of DMMP was inserted into a closed vial together with the array device. The vial was kept at 50 °C for 1 h, allowing the total evaporation of DMMP. After this time, a new image was acquired and elaborated with Fiji [44]. A typical example of the analysis setup is reported in Figure S14 of the Supplementary Materials. This software converts the images into RGB channel values, which are then converted to the gray channel (G) using the formula G = (Rvalue + Gvalue + Bvalue)/3, thus obtaining a single value for each pixel. The emission intensities on this G scale for each probe were compared to phenanthrene (Ctrl in Figure 1), and these normalized values (ratio between the intensity of each probe and the intensity of the control) were tabulated for statistical treatment using Excel (Microsoft 365 ProPlus).
Figure 4 shows the normalized response of each probe to 100 ppm of DMMP. In particular, MBEP and PBEP show an increasing emission in the presence of the analyte, while the other probes decrease their emission.
Selectivity is a crucial parameter for a real sensing device. To validate the efficacy of the array technology, we tested the response of the array to other organic molecules commonly present in the air (i.e., acetone, ethanol, acetic acid, ethyl acetate, ammonia, and triethylamine), chlorinated organic solvents (i.e., chloroform and tetrachloroethane, TCE), and other phosphorus-based compounds (i.e., triethylphosphine and triphenylphosphine). In particular, Figure 5a shows the response of each probe to 100 ppm of these analytes. As can be seen, each probe shows a different response to the different compounds, supporting the good selectivity for DMMP, which is also confirmed by the PCA analysis reported in Figure 5b. Good clustering and discrimination can be observed with all the selected analytes. The plots also reveal that the observations lie inside the Hotelling T2 ellipses at 95% confidence.
Then, we tested the array's response to different concentrations of DMMP vapors. In particular, Figure 6 shows the change in emission of the probes at different ppm of DMMP. We note that an approximately linear response can be detected for the Bodipy receptors PBP, OBP, OBEP, BDPy-Di-AE, and PBEP. Furthermore, we observed that the cationic Bodipys (MBEP and PBEP) show an increase in emission in the presence of DMMP, while the non-charged probes (PBP, OBP, and BDPy-Di-AE) undergo a quenching of emission. Figure 6b shows the responses of these probes to progressive amounts of DMMP. In particular, each probe changes its emission with different behavior, suggesting that a quantitative analysis is difficult to perform. Notably, the array can detect 0.1 ppm of DMMP, which is lower than the LD50 values of G- and V-type NAs [3].
To verify the capabilities of the method in quantitative respects, we applied multivariate Partial Least Squares (PLS) regression. PLS is a multivariate statistical technique used for regression and dimensionality reduction, particularly in cases where a high-dimensional dataset is collected and it is required to establish a relationship between predictors (independent variables) and a response (dependent variable). Figure 7 reports the results. The model used 5 principal components, accounting for a cumulative Q2 value of around 0.8 as evaluated by the cross-validation procedure. Results revealed good linear multivariate regression quantification from 5 to 100 ppm. At lower concentrations, the analyte is detectable but barely quantifiable. The red line represents the linear fit, and the green line represents the ideal linear relationship. The extreme closeness and similarity of the two curves confirm a very good quantification capability above 5 ppm, close to the LD50 values of G- and V-type NAs [3].
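As an illustration of this quantification workflow (5 components, centered and unit-scaled data, cross-validated Q2), the sketch below uses scikit-learn in place of SIMCA-P; the response matrix and ppm labels are placeholders rather than the measured dataset.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict
from sklearn.preprocessing import StandardScaler

# X: normalized responses of the 15 probes per exposure; y: DMMP level in ppm.
X = np.random.rand(24, 15)                        # placeholder data
y = np.repeat([0.1, 1.0, 5.0, 10.0, 50.0, 100.0], 4)

X_scaled = StandardScaler().fit_transform(X)      # centered and unit-variance scaled
pls = PLSRegression(n_components=5)               # 5 components, as in the text

# Leave-one-out cross-validation yields the predictions used to compute Q2.
y_pred = cross_val_predict(pls, X_scaled, y, cv=len(y)).ravel()
q2 = 1 - np.sum((y - y_pred) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"cross-validated Q2 = {q2:.2f}")
```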
A few examples of NA sensors exploiting a smartphone as a detector have been reported. Sulfur mustard and phosgene have been detected using a fluorescent sensor, obtaining a turn-on response after the covalent reaction between sensor and analytes and limits of detection in solution of 14 and 70 ppb, respectively [45]. Similarly, diisopropyl fluorophosphate (DFP) has been detected by a colorimetric change upon a covalent reaction with the sensor, obtaining a limit of detection of 0.17 ppm in the solid state [31]. Our research group realized a fluorescent reusable sensor for DMMP with a limit of detection of 535 ppm in the solid state [28]. In the present work, we obtained a lower detection limit in the solid state (0.1 ppm), supporting the selectivity with a wider range of different analytes.
A real-life detection of NAs using this sensor can be performed by simple exposure of the array to air contaminated by a nerve agent. In particular, due to the higher volatility of NAs with respect to DMMP (e.g., the volatility of Soman and DMMP is 22,000 mg/m3 and 5562 mg/m3, respectively) [39], the array will detect NA gases instantaneously, and by exploiting a camera or a smartphone, a real-time analysis can be obtained. This system can be used by the military or adopted at sensitive targets, such as stations or airports.
Conclusions
In summary, the first example of an optical array device able to efficiently (down to 0.1 ppm) and selectively detect DMMP gas (a simulant of G-type NAs) by exploiting a simple smartphone as a detector has been reported. This array was prepared by dropping organic fluorescent probes based on Bodipys, Rhodamine, Naphthylamide, and carbon dots onto reverse-phase silica gel (RP18). The device shows good selectivity, also confirmed by multivariate analysis, for DMMP with respect to other common solvents and phosphorus-based organic molecules. Furthermore, the possibility of detecting sub-ppm levels of DMMP was demonstrated, establishing an emission trend for some of the Bodipy probes. The use of a smartphone, easily connected to the internet, also makes it possible to send data to a remote control station, thus elaborating the results in real time. Further studies are being conducted to perform quantitative analyses, improve the elaboration of images through a smartphone application, and recover the device. At this stage of the work, the quantitative analysis needs to be improved. However, we believe that, when a nerve agent is present, quantitative analysis is secondary to its qualitative detection. In fact, the crucial problem is whether the NA is present in the environment or not, in order to sound the alarm. Furthermore, we tried to recover the array by exposing the device to thermal cycles or solvent washing. These preliminary tests show a progressive degradation of the solid support. The possibility of using other polymeric materials is under evaluation.
Supplementary Materials:
The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/chemosensors11090503/s1, synthesis and characterization of probes, detailed procedure for the array preparation and sensing.
Figure 1. Chemical structures of organophosphorous NAs and simulants used in this work.
Figure 2. Representation of the array and chemical structures of the organic probes used in the device.
Figure 3. Sensing by array by smartphone.
Figure 7. Expected vs. predicted dilution factor calculated by the PLS model with 5 components. The red line represents the linear fit, and the green line represents the ideal linear relationship. | 2023-09-24T16:28:45.741Z | 2023-09-15T00:00:00.000 | {
"year": 2023,
"sha1": "c04fd327232ac724aa963b6f3b6c74273ae593aa",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2227-9040/11/9/503/pdf?version=1694782378",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "1b95489b5ac954a1cab4fd40dc3386d22489e2a7",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": []
} |
231709556 | pes2o/s2orc | v3-fos-license | A Primal-Dual Approach to Constrained Markov Decision Processes
In many operations management problems, we need to make decisions sequentially to minimize the cost while satisfying certain constraints. One modeling approach to study such problems is the constrained Markov decision process (CMDP). When solving the CMDP to derive good operational policies, there are two key challenges: one is the prohibitively large state space and action space; the other is the hard-to-compute transition kernel. In this work, we develop a sampling-based primal-dual algorithm to solve CMDPs. Our approach alternately applies regularized policy iteration to improve the policy and subgradient ascent to maintain the constraints. Under mild regularity conditions, we show that the algorithm converges at rate $O(\log(T)/\sqrt{T})$, where $T$ is the number of iterations. When the CMDP has a weakly coupled structure, our approach can substantially reduce the dimension of the problem through an embedded decomposition. We apply the algorithm to two important applications with weakly coupled structures: multi-product inventory management and multi-class queue scheduling, and show that it generates controls that outperform state-of-the-art heuristics.
Introduction
In many sequential decision-making problems, a single utility might not suffice to describe the real objectives faced by the decision-makers. A natural approach to study such problems is to optimize one objective while putting constraints on the others. In this context, the constrained Markov decision process (CMDP) has become an important modeling tool for sequential multi-objective decision-making problems under uncertainty. A CMDP aims to minimize one type of cost while keeping the other costs below certain thresholds. It has been successfully applied to analyze various important applications, including admission control and routing in telecommunication networks, scheduling for hospital admissions, and maintenance scheduling for infrastructures (Altman 1999).
Due to the complicated system dynamics and the scale of the problem, exact optimal solutions to CMDPs can rarely be derived. Instead, numerical approximations become the main workhorse to study CMDPs. In this paper, we propose a sampling-based primal-dual algorithm that can efficiently solve a wide range of CMDPs.
One basic approach to solve the CMDP is to use a linear programming (LP) formulation based on the occupancy measure. This approach faces two key challenges in implementation: it requires explicit knowledge of the transition kernel of the underlying dynamical system, and it does not scale well as the state space and action space get large. An alternative approach is to apply Lagrangian duality. In particular, by dualizing the constraints and utilizing strong duality, we can translate the CMDP into a max-min problem, where for a given Lagrangian multiplier, the inner minimization problem is just a standard Markov decision process (MDP). This approach allows us to solve the inner problem using standard dynamic programming based methods. It does not require direct knowledge of the transition kernel as long as we can estimate the value functions from simulated or empirical data. In implementations, one would iteratively update the MDP policy and the Lagrangian multiplier. The current development of this approach requires solving the MDP to optimality for each updated Lagrangian multiplier (see, for example, Le et al. (2019), Miryoosefi et al. (2019)), which can be computationally costly. A more natural idea is to solve the MDP only approximately at each iteration. In this paper, we investigate this idea and show that at each iteration, we only need to do one iteration of policy update to achieve the optimal convergence rate (in terms of the number of primal-dual iterations). Compared to existing algorithms utilizing Lagrangian duality, our primal-dual algorithm can be run at a much lower cost per iteration. We also demonstrate that our algorithm can be easily combined with many other approximate dynamic programming techniques, such as Monte Carlo policy evaluation, TD-learning, and value function approximations (Sutton and Barto 2018).
A key ingredient of our algorithm is regularized policy iteration. The standard policy iteration includes two steps: policy evaluation and policy improvement. The policy evaluation step calculates the action-value function under a given policy. Then, the policy improvement step defines a new policy by taking the action that minimizes the action-value function. Through a Kullback-Leibler (KL) regularization term, the regularized policy iteration modifies the policy improvement step by reweighting the probability of taking each action via a softmax transformation of the action-value function. This modification allows us to view the policy update step as running mirror descent for the objective function in the policy space (Nemirovski 2012). In addition, we update the Lagrangian multiplier using subgradient ascent, which also belongs to the family of mirror descent methods.
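For concreteness, a minimal sketch of this KL-regularized policy improvement step in the tabular case is given below; the step size eta and all variable names are illustrative, and the closed-form softmax update follows from the mirror descent interpretation.

```python
import numpy as np

def kl_regularized_policy_update(pi: np.ndarray, Q: np.ndarray, eta: float) -> np.ndarray:
    """One KL-regularized policy improvement step (mirror descent in policy space).

    pi[s, a] is the current policy and Q[s, a] an estimate of its action-value
    function. For each state s, the update solves
        min_p  <p, Q[s, :]> + (1 / eta) * KL(p || pi[s, :]),
    whose closed form reweights each action by exp(-eta * Q) and renormalizes.
    """
    logits = np.log(pi + 1e-12) - eta * Q
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    new_pi = np.exp(logits)
    return new_pi / new_pi.sum(axis=1, keepdims=True)
```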
This unified viewpoint makes the improved primal-dual algorithm possible. Noticeably, many recent developments in reinforcement learning also benefit from regularization, which has been shown to improve exploration and robustness. For example, Trust Region Policy Optimization and Proximal Policy Optimization use the KL divergence between two consecutive policies as a penalty in policy improvement (Schulman et al. 2015, 2017). Soft Q-learning uses Shannon entropy as a penalty in value iteration (Haarnoja et al. 2017). Geist et al. (2019) propose a unified framework to analyze the above algorithms via the regularized Bellman operator (see also Liu et al. (2019a), Shani et al. (2019), Wang et al. (2019) for convergence analysis of regularized policy iteration).
In terms of applications of the algorithm, we study an important class of CMDPs which we refer to as weakly coupled CMDPs (Singh and Cohn 1998). A weakly coupled CMDP comprises multiple sub-problems that are independent except for a collection of coupling constraints. Due to the linking constraints, the scale of the problem grows exponentially in the number of sub-problems. Hence, even in the case where each sub-problem is computationally tractable, it can be computationally prohibitive to solve the joint problem. Our primal-dual algorithm naturally helps break the curse of dimensionality in this case. In particular, the weakly coupled CMDP can be decomposed into independent sub-problems in the policy iteration step. In this case, the complexity only grows linearly with the number of sub-problems. We also comment that the weakly coupled CMDP can be viewed as a Lagrangian relaxation of the weakly coupled MDP (Adelman and Mersereau 2008).
Even though there is a relaxation gap between the two, as we will demonstrate in our numerical experiments, the (modified) policy obtained via CMDP can perform very well for the original MDP problem in the applications we considered.
We apply the primal-dual algorithm to solve two classical operations management problems: inventory planning and queue scheduling. For the inventory planning problem, we consider a multi-product newsvendor problem with budget constraints (Turken et al. 2012). We formulate this problem as a weakly coupled CMDP and study a small-scale instance where we can numerically solve for the optimal policy. We show that our policy can indeed achieve $O(\log(T)/\sqrt{T})$ convergence in this case, where $T$ is the number of iterations. For the queue scheduling problem, we consider a multi-class multi-pool parallel-server system where the decision-maker needs to route different classes of customers to different pools of servers in order to minimize the performance cost (holding cost plus overflow cost). We allow the service rates to be both customer-class and server-pool dependent.
Since each pool only has a finite number of servers, the routing policy needs to satisfy the capacity constraints. This optimal scheduling problem can be formulated as a weakly coupled MDP. We consider instances where it is prohibitive to solve for the optimal policy. Applying the Lagrangian relaxation, we solve the resulting weakly coupled CMDP by combining our primal-dual algorithm with value function approximation techniques. We show that our method generates comparable or even better policies than the state-of-the-art policies.
Literature review
In this section, we review some of the existing methods/results for solving CMDPs. The goal is to clearly state the contribution of our work. Most existing algorithms for CMDPs are adapted from methods for MDPs, and can be roughly divided into three categories: LP based approaches, dynamic programming based approaches (including policy iteration and value iteration), and policy gradient methods.
One LP based approach utilizes the occupation measure, which is the weighted proportion of time the system spends at each state-action pair. The objective and constraints can be written as the inner products of the instantaneous cost functions and the occupation measure. The other LP based approach utilizes the dynamic programming principle and treats the value function (defined on the state space) as the decision variables. In particular, the optimal value function of an MDP is the largest super-harmonic function that satisfies certain linear constraints determined by the transition dynamics. For the CMDP, we obtain an LP by combining the dynamic programming principle with Lagrangian duality. These two LP formulations are dual of each other (Altman 1999). Among policy gradient based methods, Achiam et al. (2017) propose a trust region method that focuses on safe exploration. Liu et al. (2019b) develop an interior point method with logarithmic barrier functions. Chow et al. (2018) propose to use Lyapunov functions to handle constraints. However, the key challenge of policy gradient based methods for solving CMDPs is that the corresponding optimization problems are non-convex. In most cases, only convergence to a local minimum can be guaranteed, and the convergence rates are often hard to establish.
Organization of the paper and notations
The paper is organized as follows. We first introduce the CMDP and review some classical results that are relevant to our subsequent development in Section 2. We then introduce our algorithm in Section 3, and show that the algorithm achieves the optimal convergence rate in Section 4. In Section 5, we discuss how our algorithm can be applied to (approximately) solve weakly coupled CMDPs and weakly coupled MDPs. We then implement our algorithm to solve an inventory planning problem and a queue scheduling problem in Sections 6 and 7, respectively. Lastly, we conclude the paper and discuss some interesting future directions in Section 8. Finally, given two sequences of real numbers $\{a_n\}_{n \geq 1}$ and $\{b_n\}_{n \geq 1}$, we say $b_n = O(a_n)$, $b_n = \Omega(a_n)$, and $b_n = \Theta(a_n)$ if there exist some constants $C, C' > 0$ such that $b_n \leq C a_n$, $b_n \geq C' a_n$, and $C' a_n \leq b_n \leq C a_n$, respectively. We also introduce the $\tilde{O}(\cdot)$ notation when we ignore logarithmic factors. For example, if $b_n \leq C a_n \cdot \log(n)$, we denote it by $b_n = \tilde{O}(a_n)$.
We start by considering a discrete-time MDP characterized by the tuple (S, A, P, γ, µ 0 ). Here, S and A denote the state and action spaces; P = {P (·|s, a)} (s,a)∈S×A is the collection of probability measures indexed by the state-action pair (s, a). For each (s, a), P (·|s, a) characterizes the one-step transition probability of the Markov chain conditioning on being in state s and taking action a.
Function c = {c(s, a)} (s,a)∈S×A is the expected instantaneous cost where c(s, a) is the cost incurred by taking action a at state s. Lastly, γ ∈ (0, 1) and µ 0 = {µ 0 (s)} s∈S are the discount rate and the distribution of the initial state, respectively. Given an MDP (S, A, P, γ, µ 0 ), a policy π determines what action to take at each state. We define the expected cumulative discounted cost with initial state s 0 under policy π as where s t , a t are the state and action at time t and E π denotes the expectation with respect to the transition dynamics determined by policy π. We further weight the expected costs according to the initial state distribution and define Our goal is to minimize the cost C(π) over a suitably defined class of policies.
As an extension to MDP, the CMDP model optimizes one objective while keeping others satisfying certain constraints. Specifically, in addition to the original cost c, we introduce K auxiliary instantaneous costs d k = {d k (s, a)} (s,a)∈S×A , ∀ k ∈ [K]. The goal of a CMDP is to find the policy that minimizes the cost defined in (2) while keeping the following constraints satisfied In order to make the expression more concise, we define D(π) := (D 1 (π), . . . , D K (π)) ⊤ , q := (q 1 , . . . , q K ) ⊤ , and write the constraints in (3) as D(π) ≤ q.
We remark that the CMDP is only one modeling choice to model problems with multiple objectives/constraints. This particular modeling choice turns out to enjoy a lot of analytical and computational tractability as we will discuss next. CMDP is also closely connected to an important class of MDPs -weakly coupled MDP. In particular, CMDPs can be viewed as a relaxation of weakly coupled MDPs (Adelman and Mersereau 2008). We will provide more discussions about this in Section 5.
Policy Spaces
Solving a CMDP requires finding the optimal policy over a properly defined policy space, which is a function space. Imposing suitable regularity conditions on the policy space will facilitate the development of algorithms. We next introduce a few classes of commonly used policies. It is natural to require that all policies are non-anticipative, which means that the decision-maker does not have access to future information. Define the history at time t to be the sequence of previous states and actions as well as the current state, i.e., h t := (s 0 , a 0 , . . . , a t−1 , s t ). Then a non-anticipative policy can be viewed as a mapping from h t and t to the action space. We refer to such a policy as a "behavior policy". If a policy only depends on the current state s t and time t instead of the whole history h t , it is called a "Markov policy". For a Markov policy, if it is independent of the time index t, it is referred to as a "stationary policy". When a stationary policy is a deterministic mapping from the state space to the action space, it becomes a "stationary deterministic policy". We use Π, Π M , Π S , Π D to denote the space of behavior, Markov, stationary, and stationary deterministic policies, respectively.
Given an arbitrary policy space U, we can further generate a new type of policy, called a "mixing policy", via an initial randomization. Specifically, let ρ be a probability measure on U. Under a mixing policy on U with mixing probability ρ, we first draw a policy, say π_g, from U following the distribution ρ. Then π_g is executed for t = 0, 1, 2, . . . . We denote by M(U) the space of mixing policies constructed from U. An important special case is M(Π_S), i.e., the space of mixing stationary policies. When allowing the mixing operation, we incorporate the randomness of the initial mixing into the calculation of the accumulated cost. In particular, for π ∈ M(U) with initial randomization ρ,

C(π) = ∫_U C(π_g) dρ(π_g),

and similarly for each D_k(π). By definition, we note that U ⊆ M(U), since any policy in U is a mixing policy with a degenerate mixing distribution. A class of policies U is called a "dominating class" for a CMDP if, for any policy π ∈ Π, there exists a policy π̃ ∈ U such that

C(π̃) ≤ C(π) and D_k(π̃) ≤ D_k(π) for all k ∈ [K].

For CMDPs, when the instantaneous costs c(·, ·) and d_k(·, ·) are uniformly bounded from below, Π_S is dominating (Altman 1999). The class of mixing stationary policies M(Π_S) is also dominating in this case (Theorem 8.4 in Altman (1999)).
Classical Approaches to Solve CMDPs
There are two classical approaches to CMDPs. We use CMDPs with finite state and action spaces as examples. The first method utilizes the occupation measure. Given a policy π, the occupation measure is defined as

ν_π(s, a) := (1 − γ) Σ_{s_0∈S} µ_0(s_0) Σ_{t=0}^∞ γ^t P_π(s_t = s, a_t = a | s_0),

where P_π(·, ·|s_0) denotes the probability measure induced by policy π with initial state s_0. Note that the occupation measure is the weighted long-run proportion of time that the system spends at each state-action pair. We can then express the accumulated costs in (2) and (3) as

C(π) = Σ_{(s,a)∈S×A} c(s, a) ν_π(s, a) and D_k(π) = Σ_{(s,a)∈S×A} d_k(s, a) ν_π(s, a).

Let Q denote the set of feasible occupation measures, i.e., for any occupation measure ν ∈ Q there exists a policy π that leads to ν. By Theorem 3.2 in Altman (1999), Q can be represented by the collection of vectors {ν(s, a)}_{(s,a)∈S×A} that satisfy the following system of linear equations:

Σ_{(s,a)∈S×A} ν(s, a) [ 1(s = s′) − γ P(s′|s, a) ] = (1 − γ) µ_0(s′) for all s′ ∈ S, with ν(s, a) ≥ 0,

where 1(·) is the indicator function. Then we obtain the following LP formulation of the CMDP:

min_{ν∈Q} Σ_{(s,a)∈S×A} c(s, a) · ν(s, a) s.t. Σ_{(s,a)∈S×A} d_k(s, a) · ν(s, a) ≤ q_k, k ∈ [K].   (5)

The second method utilizes Lagrangian duality. Let λ ∈ R^K denote the Lagrangian multiplier. Define

L(π, λ) := C(π) + λ^⊤ ( D(π) − q ).

Then the CMDP can be equivalently formulated as inf_{π∈Π_S} sup_{λ≥0} L(π, λ). By Theorem 3.6 in Altman (1999), we can exchange the order of inf and sup and obtain

inf_{π∈Π_S} sup_{λ≥0} L(π, λ) = sup_{λ≥0} inf_{π∈Π_S} L(π, λ) = sup_{λ≥0} inf_{π∈Π_D} L(π, λ),

where the last equation holds because, for each fixed λ, the inner problem is an unconstrained MDP and the optimal policy is a stationary deterministic policy. We emphasize that given the optimal solution λ* to the dual problem, not every policy π(λ*) that minimizes L(π, λ*) is optimal for the original CMDP. A necessary condition for π(λ*) to be optimal for the original CMDP is the complementary slackness condition:

λ*_k ( D_k(π(λ*)) − q_k ) = 0 for all k ∈ [K].

The dual problem sup_{λ≥0} inf_{π∈Π_D} L(π, λ) leads to the following LP formulation:

max_{φ, λ≥0} (1 − γ) Σ_{s∈S} µ_0(s) φ(s) − λ^⊤ q s.t. φ(s) ≤ c(s, a) + Σ_{k∈[K]} λ_k d_k(s, a) + γ Σ_{s′∈S} P(s′|s, a) φ(s′) for all (s, a),   (7)

where φ(s) denotes the value function with initial state s. Note that (5) and (7) are duals of each other.
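For small CMDPs, the occupation-measure LP (5) can be solved directly with an off-the-shelf solver. The sketch below does this with scipy.optimize.linprog; the array layout and the (1 − γ) normalization of the flow constraints follow the reconstruction above and are our own conventions.

```python
import numpy as np
from scipy.optimize import linprog

def solve_cmdp_lp(P, c, d, q, gamma, mu0):
    """Solve the occupation-measure LP (5) for a finite CMDP.

    P[s, a, s'] transition kernel, c[s, a] objective cost,
    d[k, s, a] auxiliary costs, q[k] budgets.
    Returns the optimal occupation measure nu[s, a] and its cost.
    """
    nS, nA = c.shape
    n = nS * nA
    # Flow constraints: sum_{s,a} nu(s,a) [1(s=s') - gamma P(s'|s,a)] = (1-gamma) mu0(s')
    A_eq = np.zeros((nS, n))
    for sp in range(nS):
        for s in range(nS):
            for a in range(nA):
                A_eq[sp, s * nA + a] = float(sp == s) - gamma * P[s, a, sp]
    b_eq = (1.0 - gamma) * np.asarray(mu0)
    # Budget constraints: <d_k, nu> <= q_k
    A_ub = d.reshape(len(q), n)
    res = linprog(c.reshape(n), A_ub=A_ub, b_ub=np.asarray(q),
                  A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * n)
    return res.x.reshape(nS, nA), res.fun
```

A (stationary) optimal policy can then be recovered by normalizing the rows of the returned ν, i.e., π(a|s) = ν(s, a)/Σ_{a′} ν(s, a′).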
Various methods have been developed in the literature to solve the LPs (5) and (7). There are two main obstacles to solving these LPs in practice. First, doing so can be computationally prohibitive when dealing with a large state space or a large action space. Second, it requires an explicit characterization of the transition kernel P. To overcome these difficulties, we next develop a sampling-based primal-dual algorithm to solve CMDPs.
The Primal-Dual Algorithm
Consider the Lagrangian dual problem

sup_{λ≥0} inf_{π} L(π, λ).

For each fixed λ, the inner problem is an unconstrained MDP. A natural idea is to solve the unconstrained MDP via a sampling-based method and then update the Lagrangian multipliers via subgradient ascent. Such an idea is exploited in Le et al. (2019). However, this method is computationally expensive, since we need to solve a new MDP every time the Lagrangian multipliers are updated. In contrast, our method only requires a single policy update at each iteration, i.e., we do not need to solve for the corresponding optimal policy at each iteration.
We develop the algorithm and analyze its convergence in M(Π_S), the space of mixing stationary policies, rather than Π_S. The benefits of allowing the mixing are twofold. First, it provides an intuitive way to understand strong duality: with the mixing operation, we can treat C(π) and D(π) as infinite-dimensional linear functions with respect to the distributions of initial randomization of policies in Π_S. Hence, the Lagrangian L(π, λ) is a bilinear function and strong duality follows from the minimax theorem (Sion 1958). Second, in primal-dual algorithms, we generally need to average the iterates to obtain convergence (Nedić and Ozdaglar 2009). In our case, caution needs to be taken when defining the average. In particular, note that the objective and constraints are inner products of the cost functions and the occupation measures. Thus, what we need to average are the occupation measures. However, since the mapping from a policy to the corresponding occupation measure is nonlinear, we cannot directly average the policies π(·|s), i.e., the probabilities of taking each action at each state. The mixing operation provides a simple way to average the occupation measures. In addition, given a mixing policy, under mild regularity conditions, there exists a non-mixing stationary policy that has the same occupation measure (Theorem 3.1 of Altman (1999)).
Our algorithmic development is based on the strong duality

inf_{π∈M(Π_S)} sup_{λ≥0} L(π, λ) = sup_{λ≥0} inf_{π∈M(Π_S)} L(π, λ),   (9)

which holds under certain regularity conditions (see Section 4 for details). In particular, for π ∈ M(Π_S), let ν_π(·, ·) be the corresponding occupation measure. Then we can construct a stationary policy π̃ with the same occupation measure via

π̃(a|s) = ν_π(s, a) / Σ_{a′∈A} ν_π(s, a′).   (10)

By the minimax theorem, there exists a saddle point (π*, λ*) such that

L(π*, λ) ≤ L(π*, λ*) ≤ L(π, λ*) for all π ∈ M(Π_S) and λ ≥ 0.   (11)

Moreover, π* is an optimal solution to the primal problem and λ* is an optimal solution to the dual problem. In addition, L(π*, λ*) equals the optimal cost of the CMDP. The saddle point property (11) suggests that we can use iterative primal-dual updates to find the saddle point.
We next introduce our actual algorithm. Note that for a fixed value of λ, the inner inf-problem is

inf_{π∈M(Π_S)} L(π, λ) = inf_{π∈M(Π_S)} C(π) + λ^⊤ ( D(π) − q ),

which, up to the constant −λ^⊤q, is an unconstrained MDP with instantaneous cost c(s, a) + λ^⊤ d(s, a). In what follows, we refer to the inner problem inf_{π∈M(Π_S)} L(π, λ) as the modified unconstrained MDP.
For a given policy π and Lagrangian multiplier λ, define

Q^{π,λ}(s, a) := E_π [ Σ_{t=0}^∞ γ^t ( c(s_t, a_t) + λ^⊤ d(s_t, a_t) ) | s_0 = s, a_0 = a ],   (12)

which is known as the action-value function or Q-function. Let π_m and λ_m denote the policy and the Lagrangian multiplier obtained at iteration m. For the policy update, we use the KL divergence as the regularization (Geist et al. 2019). In particular, the regularized policy iteration is defined as

π_m(·|s) = arg min_{π(·|s)∈∆_A} ⟨ Q^{π_{m−1},λ_{m−1}}(s, ·), π(·|s) ⟩ + (1/η_{m−1}) KL( π(·|s) ‖ π_{m−1}(·|s) ),   (13)

where η_{m−1} > 0 is the stepsize that determines the power of regularization. Note that the regularized policy iteration (13) is defined state-wise, i.e., for each s ∈ S. The minimization is taken over the probability simplex ∆_A := {π(·|s) : 0 ≤ π(a|s) ≤ 1, Σ_{a∈A} π(a|s) = 1}.
Let Λ_M denote a suitably bounded domain that includes the dual optimal solution λ*. We will provide an explicit construction of Λ_M in Section 4. To update the Lagrangian multiplier, we use projected subgradient ascent:

λ_m = Proj_{Λ_M} { λ_{m−1} + η_{m−1} ( D(π_m) − q ) },   (14)

where Proj_{Λ_M}{·} denotes the projection (in L_2-norm) onto Λ_M. We need such a projection to ensure the boundedness of the "subgradient" in order to establish convergence.
By the definition of the KL-divergence, the regularized policy iteration can be rewritten as

π_m(a|s) = π_{m−1}(a|s) · exp( −η_{m−1} Q^{π_{m−1},λ_{m−1}}(s, a) ) / Z_{m−1}(s),   (15)

where Z_{m−1}(s) is a normalization constant. For the subgradient ascent update, we have

∂_λ L(π_m, λ) = D(π_m) − q.   (16)

Both (15) and (16) can be evaluated/approximated using simulation. More advanced approximation techniques for policy evaluation, such as TD-learning, can also be applied here.
The algorithm then outputs a mixing policy and a Lagrangian multiplier by taking weighted averages of the iterates:

π̄_T := Σ_{m=0}^{T−1} η̄_m π_m and λ̄_T := Σ_{m=0}^{T−1} η̄_m λ_m, where η̄_m := η_m / Σ_{m′=0}^{T−1} η_{m′}.

The averaging operation is required for convergence, since the objective L(π, λ) is bilinear and does not possess sufficient convexity. In particular, there exist counter-examples that fail to converge without averaging. The summation in the definition of π̄_T is interpreted as the mixing operation, i.e., it mixes the policies (π_0, . . . , π_{T−1}) with initial randomization distribution (η̄_0, . . . , η̄_{T−1}). Note that this essentially takes the average of the occupation measures of the π_m's. From π̄_T, we can apply (10) to define a non-mixing stationary policy that has the same occupation measure.
Our primal-dual algorithm is summarized in Algorithm 1.
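To make Algorithm 1 concrete, the sketch below gives one possible tabular instantiation for a finite CMDP. It evaluates Q^{π,λ} and the occupation measure exactly by solving linear systems, whereas the paper's algorithm estimates these quantities by simulation; the box-shaped projection domain and the update order are also our assumptions.

```python
import numpy as np

def q_eval(P, ell, gamma, pi):
    """Exact Q^{pi,lambda}(s,a) for modified cost ell[s,a] = c + lambda^T d, as in (12)."""
    nS, _ = ell.shape
    # V^pi solves (I - gamma * P_pi) V = ell_pi, where P_pi is the kernel under pi.
    P_pi = np.einsum('sa,sap->sp', pi, P)
    V = np.linalg.solve(np.eye(nS) - gamma * P_pi, np.einsum('sa,sa->s', pi, ell))
    return ell + gamma * np.einsum('sap,p->sa', P, V)

def occupation_measure(P, gamma, mu0, pi):
    """nu_pi(s,a) = (1-gamma) * sum_t gamma^t P_pi(s_t=s, a_t=a)."""
    P_pi = np.einsum('sa,sap->sp', pi, P)
    mu = np.linalg.solve(np.eye(len(mu0)) - gamma * P_pi.T, (1 - gamma) * np.asarray(mu0))
    return mu[:, None] * pi

def primal_dual(P, c, d, q, gamma, mu0, T=500, eta=0.05, lam_max=50.0):
    """Tabular sketch of Algorithm 1 with exact evaluation (an assumption)."""
    nS, nA = c.shape
    pi = np.full((nS, nA), 1.0 / nA)                 # uniform initial policy
    lam = np.zeros(len(q))
    policies, lams = [], []
    for _ in range(T):
        Q = q_eval(P, c + np.tensordot(lam, d, axes=1), gamma, pi)
        pi = pi * np.exp(-eta * Q)                   # exponentiated update (15)
        pi /= pi.sum(axis=1, keepdims=True)
        D = np.einsum('ksa,sa->k', d, occupation_measure(P, gamma, mu0, pi))
        lam = np.clip(lam + eta * (D - q), 0.0, lam_max)  # projected ascent (14)
        policies.append(pi.copy()); lams.append(lam.copy())
    # With a constant step size, the output mixture over iterates is uniform.
    return policies, np.mean(lams, axis=0)
```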
Convergence Analysis
In this section, we conduct a detailed performance analysis of Algorithm 1. In particular, we study the performance of the policy π̄_T by analyzing the values of the objective C(π̄_T) and the constraints D(π̄_T). We show that the objective value C(π̄_T) converges to the optimum C* := C(π*) = L(π*, λ*) at a rate of O(log(T)/√T). In addition, even though π̄_T may be infeasible, we show that the violation of the constraints, measured by

‖ ( D(π̄_T) − q )_+ ‖,

converges to zero at a rate of O(log(T)/√T), where (·)_+ denotes the componentwise positive part. The analysis builds on a combination of the subgradient method for saddle point problems and mirror descent for regularized policy iteration.
Recall that our algorithmic development builds on the strong duality of CMDP. For CMDPs with finite state and action spaces, the strong duality always holds (Theorem 3.6 in (Altman 1999)). However, when the state space is countably infinite, we need more regularity conditions to ensure the strong duality. One sufficient condition is that the instantaneous costs of the CMDP are uniformly bounded from below (see Definition 7.1, Theorem 9.9, and Chapter 10.3 in (Altman 1999)). Specifically, we impose the following assumption.
Assumption 1. [Lower Bound of Instantaneous Costs] There exists a constant W such that for all s ∈ S, a ∈ A, and k = 1, 2, . . . , K,

c(s, a) ≥ W and d_k(s, a) ≥ W.

To establish the convergence result, we also require Slater's condition:
Assumption 2. [Slater's Condition] There exists some policy π̄ ∈ M(Π_S) such that D_k(π̄) < q_k for all k ∈ [K].
Slater's condition ensures the existence of finite and bounded optimal Lagrangian multipliers. This condition is commonly assumed in the constrained optimization literature. For many practical problems, Slater's condition holds trivially.
Our last assumption concerns the boundedness of the "subgradient", which regularizes the movement of the policies and Lagrangian multipliers at each iteration. Recall that in Algorithm 1, after applying subgradient ascent for the Lagrangian multipliers, we project λ onto a bounded domain Λ_M, which takes the form

Λ_M := { λ ∈ R^K : λ ≥ 0, ‖λ‖ ≤ M + r },   (19)

where M is an upper bound of ‖λ*‖ and r > 0 is a slackness constant.
Assumption 3. [Bounded Subgradient] There exists some constant G > 0 such that for any λ ∈ Λ_M and policy π ∈ M(Π_S),

sup_{s∈S} sup_{a∈A} |Q^{π,λ}(s, a)| ≤ G and ‖D(π) − q‖ ≤ G.

Since Q^{π,λ}(s, a) is linear in λ, it is necessary to restrict λ to a bounded domain Λ_M for Assumption 3 to hold. That is why we need the projection step when updating λ. Note that when the instantaneous cost functions c(·, ·) and d_k(·, ·) are uniformly bounded, or when the state and action spaces are finite, Assumption 3 holds trivially.
Lastly, we comment that Slater's condition (Assumption 2) not only guarantees the existence and boundedness of λ*, but also provides an explicit upper bound on λ*. In particular, let π̄ be a Slater point (a policy that satisfies Slater's condition); then we have

‖λ*‖ ≤ ( C(π̄) − c̄ ) / min_{k∈[K]} ( q_k − D_k(π̄) ),   (20)

where c̄ ≤ C(π*) is an arbitrary lower bound for the dual problem. In many applications, it is possible to obtain a better upper bound on λ* than (20) by exploiting the structure of the specific problem.
Next, to establish convergence, we need to construct an appropriate potential function, also known as a Bregman divergence in the optimization literature. The potential function ensures that the regularized policy iteration is equivalent to minimizing the sum of a linear approximation of the objective function and the potential function. We next introduce this potential function, which is essentially a weighted KL-divergence.
For stationary policies π_1, π_2 ∈ Π_S and a reference policy π, the weighted KL-divergence is defined as

Φ_π(π_1 ‖ π_2) := Σ_{s∈S} ν_π(s) · KL( π_1(·|s) ‖ π_2(·|s) ),

where ν_π(s) := Σ_{a∈A} ν_π(s, a) is the state occupation measure under π. When π_1 and π_2 are mixing policies, we first transform them to the equivalent stationary policies via (10) and then define Φ_π(π_1 ‖ π_2) as the weighted KL-divergence between the equivalent stationary policies.
By definition, Φ_π(π_1 ‖ π_2) measures the discrepancy between two policies, weighted by a given state occupation measure. It connects the regularized policy iteration in (13), which is defined state-wise, with a single objective, and serves as the Bregman divergence in the mirror descent analysis.
Unlike the traditional analysis of mirror descent where the potential function is fixed (Nemirovski 2012), in the analysis of regularized policy iteration, we need to construct a policy-dependent potential function and cannot fix the weight of KL-divergence. However, since policy updates are defined state-wise, for an arbitrary weight, the regularized policy iteration always takes the form of minimizing a linear approximation of the objective function regularized by a certain potential function. Thus, the analysis of mirror descent can be applied here with some modifications.
We are now ready to introduce the convergence results of our primal-dual algorithm.
Theorem 1. Suppose Assumptions 1-3 hold. If the step size is decreasing with η_m = Θ(1/√m), then both the optimality gap C(π̄_T) − C* and the constraint violation ‖(D(π̄_T) − q)_+‖ are of order O(log(T)/√T). If the step size is constant, η_m = η, then both quantities converge at rate O(1/T) to a neighborhood of zero whose size scales with η. In both cases the constants depend on G, M and the slackness constant r in (19).
Theorem 1 indicates that with decreasing step size η_m = Θ(1/√m), our primal-dual algorithm achieves O(log(T)/√T) convergence. For constant step size η_m = η, the algorithm converges to a neighborhood of the optimum at rate O(1/T). These convergence rates match those in Le et al. (2019), which requires solving the modified unconstrained MDP to optimality at each iteration. We also note that it is unlikely that the convergence rate can be improved beyond Θ(1/√T). This is because the dual problem is a finite-dimensional concave optimization problem without strong concavity, and the convergence rate of the subgradient method in this case is lower bounded by Ω(1/√T) (Bubeck 2014). The proof of Theorem 1 is deferred to the appendix.
We comment that although the slackness constant r appears only in the denominators of the bounds in Theorem 1, the constant G, which is an upper bound on the subgradients, grows linearly in r. In particular, by Assumption 3, G is determined by the shape of Λ_M. Hence, r cannot be set too large.
Weakly Coupled MDP and Weakly Coupled CMDP
One fundamental challenge in solving MDPs and CMDPs is the curse of dimensionality. However, there is an important class of problems that has certain decomposable structures. These problems, which are often referred to as weakly coupled MDPs/CMDPs, contain multiple subproblems that are almost independent of each other except for some linking constraints on the action space (Singh and Cohn 1998). More precisely, a weakly coupled MDP consisting of I sub-problems satisfies the following structural properties:

P1. The state and action spaces take the product forms S = S^1 × · · · × S^I and A = A^1 × · · · × A^I, so that s_t = (s^1_t, . . . , s^I_t) and a_t = (a^1_t, . . . , a^I_t).

P2. For each state s_t and action a_t, the instantaneous cost admits an additively separable form

c(s_t, a_t) = Σ_{i=1}^I c^i(s^i_t, a^i_t).

P3. The joint initial distribution satisfies µ_0(s) = µ^1_0(s^1) · µ^2_0(s^2) · . . . · µ^I_0(s^I), and the one-step transition dynamics of the sub-MDPs are independent of each other, i.e.,

P(s_{t+1}|s_t, a_t) = Π_{i=1}^I P^i(s^i_{t+1}|s^i_t, a^i_t).

For the linking constraints, let b^i(·, ·) : S^i × A^i → R^K be a K-dimensional real function, which can be interpreted as the resources consumed by the i-th sub-problem, i ∈ [I]. Then, at each state s, the feasible actions need to satisfy

b(s, a) := Σ_{i=1}^I b^i(s^i, a^i) ≤ q,   (22)

where q ∈ R^K. Note that the linking constraint (22) is a hard constraint and needs to be satisfied path-by-path, almost surely. A weakly coupled CMDP satisfies the same structural properties P1-P3 as the weakly coupled MDP. The only difference is that the linking constraint now takes the expected discounted form

E_π [ Σ_{t=0}^∞ γ^t b(s_t, a_t) ] ≤ q.

The weakly coupled MDP and the weakly coupled CMDP are closely related to each other. Let

A(s) := { a = (a^1, . . . , a^I) ∈ A : b(s, a) ≤ q }

be the (joint) feasible action space of a weakly coupled MDP. Then the Bellman equation is

V(s) = min_{a∈A(s)} { c(s, a) + γ Σ_{s′∈S} P(s′|s, a) V(s′) }.

When the number of sub-MDPs I is large, even if the scale of each subproblem is small, the size of the joint state space S can be prohibitively large. Hence, solving the MDP directly can be intractable.
Two decomposition schemes have been proposed to alleviate the curse of dimensionality: the LP-based ADP relaxation and the Lagrangian relaxation (Adelman and Mersereau 2008). Both of them lead to I independent sub-LPs, which reduces the complexity significantly. The LP-based ADP relaxation approximates the value function with additively separable functions, i.e.,

V(s) ≈ Σ_{i=1}^I V^i(s^i).

The Lagrangian relaxation dualizes the constraints (22) based on the LP representation of the Bellman equation. The latter relaxation translates the weakly coupled MDP into a weakly coupled CMDP. It has been established that the optimal cost of the relaxed CMDP provides a lower bound for the optimal cost of the original MDP (Adelman and Mersereau 2008).
Many Operations Management problems can be formulated as weakly coupled MDPs/CMDPs.
Examples include inventory planning problems with multiple types of inventories and budget constraints, and scheduling of parallel-server queues with multiple classes of customers. We provide more details about these problems in Sections 6 and 7, where we apply our primal-dual algorithm to solve them.
When applying the primal-dual algorithm to solve weakly coupled CMDPs, it can easily be adapted to exploit the decomposability. We call a policy π decomposable if it takes the product form

π(a|s) = Π_{i=1}^I π^i(a^i|s^i).

Since our algorithm converges from any initial policy, we shall start with a decomposable policy.
Let {s_t}_{t≥0} = {(s^1_t, . . . , s^I_t)}_{t≥0} and {a_t}_{t≥0} = {(a^1_t, . . . , a^I_t)}_{t≥0} be the trajectory of the CMDP under policy π = (π^1, . . . , π^I). To simplify the notation, for each i ∈ [I], we define

C^i(π^i) := E_{π^i} [ Σ_{t=0}^∞ γ^t c^i(s^i_t, a^i_t) ] and B^i(π^i) := E_{π^i} [ Σ_{t=0}^∞ γ^t b^i(s^i_t, a^i_t) ].

Then the CMDP can be written as

min_{(π^1,...,π^I)} Σ_{i=1}^I C^i(π^i) s.t. Σ_{i=1}^I B^i(π^i) ≤ q.

When applying the primal-dual algorithm, if we start with a decomposable policy, then the policies obtained in all subsequent iterations are decomposable. To see this, we note that the Lagrangian function

L(π, λ) = Σ_{i=1}^I ( C^i(π^i) + λ^⊤ B^i(π^i) ) − λ^⊤ q

can be decomposed into I independent subproblems. If π_m is decomposable, then

Q^{π_m,λ}(s, a) = Σ_{i=1}^I Q^{π^i_m,λ}(s^i, a^i),

where Q^{π^i_m,λ}(·, ·) is the Q-function of the i-th modified sub-MDP with instantaneous cost c^i(·, ·) + λ^⊤ b^i(·, ·). Here we ignore the constant λ^⊤ q, since subtracting a common constant from the Q-function does not change the updates of the regularized policy iteration. This indicates that the regularized policy iteration, including policy evaluation and improvement, can be implemented separately in parallel via

π^i_m(a^i|s^i) ∝ π^i_{m−1}(a^i|s^i) · exp( −η_{m−1} Q^{π^i_{m−1},λ_{m−1}}(s^i, a^i) ).

Moreover, as the subgradient with respect to the Lagrangian multiplier takes the form ∂_λ L(π_m, λ) = Σ_{i=1}^I B^i(π^i_m) − q, it can be evaluated for the sub-MDPs in parallel as well. Altogether, in this case, the primal-dual algorithm improves the computational complexity from exponential in I to linear in I.
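The decomposed update can be sketched as follows: each sub-MDP updates its own policy with its modified cost c^i + λ^⊤ b^i, and only the Lagrangian multiplier is shared. The `sub` objects and their helper routines (mirroring the single-MDP sketch above) are hypothetical.

```python
import numpy as np

def decomposed_step(subs, lam, eta, q, lam_max=50.0):
    """One primal-dual iteration for a weakly coupled CMDP.

    Each element of `subs` is a hypothetical sub-MDP object carrying its own
    costs `c`, resource functions `b` (shape (K, nS_i, nA_i)), current policy
    `pi`, and the helpers `q_eval` and `occupation_measure` from the
    single-MDP sketch above.
    """
    grad = -np.asarray(q, dtype=float)
    for sub in subs:                                    # embarrassingly parallel loop
        ell = sub.c + np.tensordot(lam, sub.b, axes=1)  # modified cost c^i + lam^T b^i
        Q = sub.q_eval(ell, sub.pi)                     # Q-function of the i-th sub-MDP
        sub.pi = sub.pi * np.exp(-eta * Q)              # per-sub exponentiated update
        sub.pi /= sub.pi.sum(axis=1, keepdims=True)
        nu = sub.occupation_measure(sub.pi)
        grad += np.einsum('ksa,sa->k', sub.b, nu)       # accumulate B^i(pi^i)
    return np.clip(lam + eta * grad, 0.0, lam_max)      # shared dual update
```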
Application to an Inventory Planning Problem
In this section, we apply the primal-dual algorithm to solve a multi-product multi-period newsvendor problem with budget constraints.
Consider the inventory planning problem with I distinct products. At the beginning of each period, we need to decide the quantities to order based on the current inventory levels. The orders are assumed to be fulfilled without delay. After the inventory is replenished, a random demand is realized. We assume the demands for the products are independent. Let F^i denote the cumulative distribution function of the demand for product i in each period; in particular, for each period, the demand for product i is an independent draw from the distribution F^i. For each product i ∈ [I], we denote its inventory level at the beginning of period t by s^i_t, the quantity we order by a^i_t, and the demand in period t by w^i_t. For product i in period t, if the demand does not exceed the current inventory level, i.e., w^i_t ≤ s^i_t + a^i_t, all the demand is fulfilled and the remaining inventory is carried to the next period. Otherwise, only s^i_t + a^i_t units are fulfilled in the current period, and the remaining (w^i_t − s^i_t − a^i_t) units are carried to the next period as backlog. We allow the s^i_t's to be negative to represent backlogs. For product i, inventory incurs a holding cost of h^i per unit per period and backlog incurs a backlog cost of b^i per unit per period. In addition, product i in inventory consumes v^i units of resource per unit per period. For a fixed q > 0, we impose a budget constraint requiring that the expected discounted resource consumption not exceed q. The resource can be interpreted as, for example, the volume of each product; in this case, the constraint puts a restriction on the warehouse space.
The inventory planning problem can be formulated as a weakly coupled CMDP with state s = (s^1, . . . , s^I), action a = (a^1, . . . , a^I), and transition dynamics

s^i_{t+1} = s^i_t + a^i_t − w^i_t.

As the demands are independent, P(s_{t+1}|s_t, a_t) = Π_{i=1}^I P(s^i_{t+1}|s^i_t, a^i_t). The instantaneous cost function and auxiliary cost function are

c(s, a) = Σ_{i=1}^I E [ h^i (s^i + a^i − w^i)_+ + b^i (w^i − s^i − a^i)_+ ] and d(s, a) = Σ_{i=1}^I v^i (s^i + a^i)_+,

respectively. To verify the correctness of our convergence analysis, we consider a small-scale instance of the problem with appropriate truncations. Such a truncation makes the state and action spaces finite.
In this case, the optimal cost can be computed numerically (using the LP formulation). In particular, we consider I = 2, and the demands for the two products are both uniformly distributed on the set {1, 2, . . . , 10}. We impose an upper bound of 10 and a lower bound of −10 on the state space; in particular, when backlogs drop below −10, the excess demand is lost without incurring any cost.
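A minimal simulation of one period of this truncated two-product instance is sketched below. The demand distribution and truncation follow the stated setup; the cost and resource parameters h, b, v are placeholder values, and the timing convention (holding/backlog cost charged on the post-demand inventory, resource charged on the post-ordering inventory) is an assumption.

```python
import numpy as np

rng = np.random.default_rng(1)
h = np.array([1.0, 1.0])   # holding cost per unit per period (placeholders)
b = np.array([2.0, 2.0])   # backlog cost per unit per period (placeholders)
v = np.array([1.0, 1.0])   # resource use per unit in inventory (placeholder)

def inventory_step(s, a):
    """One period of the truncated two-product newsvendor CMDP."""
    w = rng.integers(1, 11, size=2)              # demands uniform on {1, ..., 10}
    s_next = np.clip(s + a - w, -10, 10)         # backlogs below -10 are lost
    cost = np.sum(h * np.maximum(s_next, 0) + b * np.maximum(-s_next, 0))
    resource = np.sum(v * np.maximum(s + a, 0))  # auxiliary (budget) cost
    return s_next, cost, resource

s = np.array([0, 0])
s, cost, resource = inventory_step(s, np.array([5, 8]))
```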
When implementing the primal-dual algorithm, we use the standard Monte Carlo method to estimate the Q-function for a given policy. Figure 1 shows the trajectories of the objective values and the constraint violations across iterations with constant step size. We observe that after 500 iterations, the averaged CMDP cost (without multiplying by the (1 − γ) factor) converges to 49.26, which is close to the optimal value 46.47. In terms of feasibility, we calculate the violation of the constraint, i.e., the expected value of the auxiliary cost minus the budget threshold. We observe that the averaged violation converges to 0.1, and many policies in the last iterations do not violate the constraint at all. Figure 2 shows the relationship between Σ_{t=0}^{T−1} η̄_t C(π_t) and the reciprocal of the number of iterations (for constant step size) or the reciprocal of the square root of the number of iterations (for decreasing step size). In both cases, we observe a straight line, which confirms the rates of convergence developed in Theorem 1.
Application to Queueing Scheduling
In this section, we apply our primal-dual algorithm to a queue scheduling problem, which is motivated by applications in service operations management. Service systems often feature multiple classes of customers with different service needs and multiple pools of servers with different skillsets.
Efficiently matching customers with compatible servers is critical to the management of these systems. In this context, we consider a parallel-server system (PSS) with multiple classes of customers and multiple pools (types) of servers. Customers waiting in queue incur holding costs, and routing customers to different pools leads to different routing costs. The goal is to find a scheduling policy that minimizes the performance cost (holding cost plus routing cost). This class of problems is known as the skill-based routing problem and has been widely studied in the literature. We refer to Chen et al. (2020) for a comprehensive survey of related works.

Figure 1: Trajectories of costs and constraints with constant step sizes.

In what follows, we first introduce the queueing model and some heuristic policies adapted from policies developed in the literature. We then present the implementation details of our primal-dual algorithm in this setting. Due to the large state and action spaces, we combine our primal-dual algorithm with several approximation techniques. Lastly, we compare the performance of our policy with the benchmark policies numerically.
Model and Benchmarks
The multi-class multi-pool queueing network has I classes of customers and J pools of servers. We consider a discrete-time model. In each period, the number of arrivals of class i customers follows a Poisson distribution with rate θ_i. There are N_j homogeneous servers in pool j, j ∈ [J]. We assume that each customer can only be served by one server and each server can only serve one customer at a time. If a class i customer is served by a server from pool j, its service time follows a geometric distribution with success probability µ_ij. When there is no compatibility between customer class i and server type j, µ_ij = 0. Figure 3 provides a pictorial illustration of such a system.
Figure 3: Multi-class multi-pool queueing system.

We consider non-preemptive scheduling policies. Let A_i(t) denote the number of new class i arrivals in time period t; i.e., A_i(t) follows a Poisson distribution with rate θ_i. Let Z_ij(t) denote the number of class i customers in service in pool j at the beginning of time period t. We also denote by U_ij(t) the number of class i customers assigned to pool j for time period t. Note that the U_ij(t)'s are determined by our scheduling policy. Then the number of class i departures from pool j at the end of time period t, R_ij(t), follows a Binomial distribution with parameters Z_ij(t) + U_ij(t) and µ_ij.
Let X i (t) denote the number of class i customers waiting in queue at the beginning of period t.
Then we have the following system dynamics:

X_i(t + 1) = X_i(t) + A_i(t) − Σ_{j=1}^J U_ij(t), Z_ij(t + 1) = Z_ij(t) + U_ij(t) − R_ij(t).

The state of the system is s(t) = (X_i(t), Z_ij(t))_{i∈[I], j∈[J]} and the action is a(t) = (U_ij(t))_{i∈[I], j∈[J]}. The routing policy needs to satisfy the following constraints:

Σ_{j=1}^J U_ij(t) ≤ X_i(t) for all i ∈ [I],   (27)

i.e., we cannot schedule more customers than there are waiting, and

Σ_{i=1}^I ( Z_ij(t) + U_ij(t) ) ≤ N_j for all j ∈ [J],   (28)

i.e., the number of customers in service cannot exceed the capacity. Note that constraints (27)-(28) are hard constraints, i.e., they need to be satisfied path-by-path.
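One period of these dynamics can be simulated as below. The routing decision U is assumed to already satisfy (27)-(28), and the timing of arrivals relative to routing follows the reconstructed dynamics above.

```python
import numpy as np

rng = np.random.default_rng(0)

def queue_step(X, Z, U, theta, mu):
    """One period of the multi-class multi-pool dynamics.

    X[i]: queue lengths; Z[i, j]: customers in service; U[i, j]: routing
    decision, assumed to already satisfy constraints (27)-(28).
    """
    A = rng.poisson(theta)            # class-i arrivals, Poisson(theta_i)
    R = rng.binomial(Z + U, mu)       # departures, Binomial(Z_ij + U_ij, mu_ij)
    X_next = X + A - U.sum(axis=1)    # update the queues
    Z_next = Z + U - R                # update the servers in service
    return X_next, Z_next
```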
Each class i customer waiting in queue incurs a holding cost of h_i per period. There is also a one-shot routing cost of r_ij for scheduling a class i customer to a pool j server. The overall cost for period t is given by

c(s(t), a(t)) = Σ_{i=1}^I h_i X_i(t) + Σ_{i=1}^I Σ_{j=1}^J r_ij U_ij(t).

Our goal is to minimize the cumulative discounted cost

E [ Σ_{t=0}^∞ γ^t · c(s(t), a(t)) ].
The problem we consider here is a weakly coupled MDP with I sub-problems, where each sub-problem is an inverted-V model (i.e., a single customer class and multiple server pools). In particular, for the i-th sub-problem, define the state and action as s^i(t) = (X_i(t), Z_i1(t), . . . , Z_iJ(t)) and a^i(t) = (U_i1(t), . . . , U_iJ(t)). The transition dynamics of the i-th sub-system follow the class-i dynamics above. Given s^i(t), the corresponding action space is defined as

A^i(s^i(t)) = { (U_i1, . . . , U_iJ) ∈ Z_+^J : Σ_{j=1}^J U_ij ≤ X_i(t) }.

We also define the auxiliary cost function

b^i(s^i(t), a^i(t)) = ( Z_i1(t) + U_i1(t), . . . , Z_iJ(t) + U_iJ(t) )^⊤.

Then the capacity constraints (28) can be expressed as

Σ_{i=1}^I b^i(s^i(t), a^i(t)) ≤ N := (N_1, . . . , N_J)^⊤,

which takes the same form as the linking constraint in (22).
There are three important features of the problem that we attempt to address in this section: 1) non-preemptive routing; 2) class-and-pool-dependent service rates; 3) routing costs (overflow costs). The first two features require us to keep track of a very high-dimensional state space, of dimension I(J + 1). The third feature has not been extensively studied in the literature.
We next introduce two heuristic policies adapted from policies developed in the literature. For PSSs with multiple classes of customers and multiple pools of servers, a myopic policy called the cµ-rule (or generalizations of it) has been shown to be asymptotically optimal in some systems where the goal is to minimize the holding cost (Mandelbaum and Stolyar 2004). The idea is to maximize the instantaneous cost-reduction rate at each decision epoch. Another policy is the max-pressure policy, which is known to be throughput optimal and asymptotically cost optimal for some forms of convex holding cost (Dai et al. 2008). We next consider modified versions of the above routing policies, which take the routing costs into account (Chen et al. 2020).
At each decision epoch t, we choose the U_ij(t)'s that solve the following optimization problem:

max Σ_{i=1}^I Σ_{j=1}^J ω_ij(t) U_ij(t) s.t. constraints (27)-(28),

where the ω_ij(t)'s are modified instantaneous weights we introduce next. We consider two different forms of ω_ij(t). The first one sets ω_ij(t) = h_i − r_ij, which is adapted from the cµ-rule; we refer to this policy as the modified cµ-rule. The second one sets ω_ij(t) = h_i X_i(t) − r_ij, which is adapted from the max-pressure policy; we refer to this policy as the modified max-pressure policy.
Solution Method
We consider the CMDP relaxation of the weakly coupled MDP,

min_π E [ Σ_{t=0}^∞ γ^t · c(s(t), a(t)) ] s.t. E [ Σ_{t=0}^∞ γ^t · Σ_{i=1}^I b^i(s^i(t), a^i(t)) ] ≤ q,

where the path-wise capacity constraint (28) is replaced by its expected discounted counterpart with budget vector q, and apply the primal-dual algorithm to solve it. The decoupling allows us to translate the original problem into I sub-problems. In particular, in each iteration, we use regularized policy iteration to update the scheduling policy for a single-class multi-pool system with modified instantaneous cost

c^i(s^i, a^i) + λ^⊤ b^i(s^i, a^i)

for the i-th sub-problem.
Even with the decomposition, the state and policy spaces are still too large in this case. We next introduce some further approximations to reduce the dimension of the problem. We shall omit the index i in subsequent discussions as the development focuses on each sub-problem.
Policy space reduction: For each sub-problem, the policy space is still prohibitively large. To see this, consider a system with 3 pools and 30 servers in each pool. When the queue length is 90 and all pools are empty, there are roughly 30^3 feasible actions. To overcome this challenge, we reduce the action space to include only priority rules. State-dependent extreme policies have been shown to be asymptotically optimal in the scheduling of PSSs due to the linear system dynamics and linear holding costs (Harrison and Zeevi 2004). Denote by −1 the waiting option. A priority rule is denoted by a priority list that ends with −1. For example, priority (1, 2, −1) means pool 1 is preferred to pool 2, which is preferred to waiting. When following priority (1, 2, −1), we first assign as many customers to pool 1 as possible. If there are still customers waiting after the pool 1 assignment, we start assigning them to pool 2. After that, if there are still customers waiting, we keep them in the queue. We denote this reduced policy space by Ã.
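The following sketch shows how a priority list is translated into routing quantities for a single class; the function name and the 1-indexed pool labels are our own conventions.

```python
import numpy as np

def apply_priority_rule(x, free, priority):
    """Translate a priority list into routing quantities for one class.

    x: number of waiting customers; free[j]: idle servers in pool j;
    priority: e.g. (1, 2, -1) with pools 1-indexed and -1 = keep waiting.
    Returns u[j], the number of customers assigned to each pool.
    """
    u = np.zeros(len(free), dtype=int)
    for p in priority:
        if p == -1 or x == 0:
            break                         # remaining customers wait in queue
        j = p - 1
        u[j] = min(x, free[j])            # fill the preferred pool first
        x -= u[j]
    return u

# Priority (1, 2, -1): prefer pool 1, then pool 2, then wait.
print(apply_priority_rule(90, np.array([30, 10, 30]), (1, 2, -1)))
# -> [30 10 0]; 50 customers keep waiting since pool 3 is not in the list.
```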
Value function approximation: In our policy iteration step, given a policy π, we need to estimate the function Q^{π,λ}(s, a) for all s ∈ S, a ∈ Ã, where the state is s = (x, z_1, . . . , z_J). Due to the large state space, we cannot enumerate all the states to evaluate the value function. Instead, we use value function approximation with a quadratic basis. The idea is to find θ_{π,a} ∈ R^{(J+1)^2+1} such that Q^{π,λ}(s, a) ≈ ⟨φ(s), θ_{π,a}⟩,
where φ(s) is the quadratic basis. To obtain θ_{π,a} at each iteration, we first randomly sample M states {s_i}_{i∈[M]} and use Monte Carlo simulation to estimate Q^{π,λ}(s_i, a). Then we set θ_{π,a} to the least-squares solution

θ_{π,a} ∈ arg min_θ Σ_{i=1}^M ( ⟨φ(s_i), θ⟩ − Q̂^{π,λ}(s_i, a) )^2,

where Q̂^{π,λ}(s_i, a) denotes the Monte Carlo estimate.
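A minimal version of this fitting step is sketched below. The exact quadratic basis used in the paper is not restated here; we use the constant term plus all pairwise products of the state coordinates, which matches the stated dimension (J + 1)² + 1.

```python
import numpy as np

def quadratic_basis(s):
    """phi(s): a constant plus all pairwise products of the state coordinates,
    giving dimension (J+1)^2 + 1 for s = (x, z_1, ..., z_J)."""
    s = np.asarray(s, dtype=float)
    return np.concatenate(([1.0], np.outer(s, s).ravel()))

def fit_theta(states, q_hats):
    """Least-squares fit of <phi(s), theta> to the Monte Carlo estimates."""
    Phi = np.stack([quadratic_basis(s) for s in states])
    theta, *_ = np.linalg.lstsq(Phi, np.asarray(q_hats, dtype=float), rcond=None)
    return theta
```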
Experiment Results
For the numerical experiments, we consider a setting similar to that in Dai and Shi (2019), which is motivated by hospital inpatient-flow management. In particular, we consider a network with 3 classes of customers and 3 pools of servers. Pool i is considered the primary pool for class i customers, with r_ii = 0 for all i ∈ [I]. The major difference between our model and the model considered in Dai and Shi (2019) is that we allow the service rates to vary across server types, i.e., µ_ij depends on both i and j. This captures the potential slowdown effect due to off-service placement (Song et al. 2020).
Note that for class i customers, the primary server pool i has the largest service rate and zero routing cost. For customer class i, we define its nominal traffic intensity as ρ_i = θ_i/(N_i µ_ii). The nominal traffic intensities of the three classes are then ρ_1 = 1, ρ_2 = 16/15, and ρ_3 = 5/6. This indicates that the first two classes are unstable if we do not do any "overflow".
We initialize the system with X_i(0) = 50, Z_11(0) = 20, Z_22(0) = 30, Z_33(0) = 40, and Z_ij(0) = 0 for i ≠ j, i, j ∈ [3]. We compare the performance of our policy with the two benchmark policies for problems with different routing costs and discount rates.
When constructing the policy space for our primal-dual algorithm, because each customer class has a primary server pool with the fastest service rate and zero routing cost, we always give the primary pool the highest priority; accordingly, the reduced action space for each class consists of the priority lists that begin with its primary pool. In our primal-dual update, we use the constant stepsize 0.1. When using simulation to estimate the value function, we truncate at T = 100, 150, 800 for γ = 0.9, 0.95, 0.99 respectively. This ensures that γ^T ≈ 10^{−4}, i.e., the truncation errors are almost negligible. When fitting the parameters for the quadratic value function approximation, we sample 1000 states and use simulation to estimate the Q-function at these states. For each value of γ, we start with the Lagrangian multipliers λ_0 = (10, 10, 10), run the primal-dual algorithm for 30 iterations, and take the policy obtained in the last iteration. Note that this policy may not be feasible for the original weakly coupled MDP.
In order to obtain a feasible policy, we adopt the following modification. In each period, for each pool, when the number of scheduled customers exceeds the capacity, the primary customers are prioritized for admission. We then admit the "overflowed" customers uniformly at random until the capacity is reached. The customers who are not admitted to service are sent back to their corresponding queues and wait for the next decision epoch. For example, suppose that there are 20 servers available in pool 1 but the policy schedules (15, 5, 5) customers from the three classes to this pool. The modified policy first admits the 15 customers from class 1 and then randomly picks 5 of the 10 customers from classes 2 and 3 to admit.
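This feasibility modification can be sketched as a small rationing routine applied pool by pool; treating index 0 as the pool's primary class is a labeling convention of ours.

```python
import numpy as np

def repair_schedule(U, capacity, rng):
    """Make a routing decision feasible for one pool.

    U[i]: customers of class i scheduled to this pool; `capacity`: free
    servers; by convention, index 0 is the pool's primary class.
    """
    admit = np.zeros_like(U)
    admit[0] = min(U[0], capacity)                 # primary customers first
    remaining = capacity - admit[0]
    overflow = np.repeat(np.arange(1, len(U)), U[1:])
    rng.shuffle(overflow)                          # admit overflow uniformly at random
    for i in overflow[:remaining]:
        admit[i] += 1
    return admit                                   # the rest return to their queues

rng = np.random.default_rng(0)
print(repair_schedule(np.array([15, 5, 5]), 20, rng))  # e.g. [15 3 2]
```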
Given a policy, to evaluate its performance, we estimate the cumulative discounted costs from 500 independent replications of the system over T periods. The results are summarized in Tables 1 and 2. We observe that the policies obtained via the primal-dual algorithm perform well: they outperform the two benchmark policies in most cases. When the routing cost is large (Table 1), the cost under the modified cµ-rule increases substantially as the discount rate γ increases. Taking a closer look at the ω_ij(t)'s, we note that in this case ω_21(t) = −2.7 and ω_23(t) = −2.6. This implies that the modified cµ-rule never overflows class 2 customers. As a result, the system is unstable, i.e., the class 2 queue blows up as t increases. (The cumulative discounted cost is still well-defined, as the discount rate decays exponentially in t while the queue length grows linearly in t.) The modified max-pressure policy is able to achieve reasonably good performance in this case. When γ is small, our algorithm achieves comparable (slightly better) performance to the max-pressure policy. When γ is large, i.e., γ = 0.99, our policy achieves a substantially lower cost than the max-pressure policy, i.e., a 21% cost reduction. This is because the max-pressure policy only starts overflowing when the queues are large enough. In this example, where overflow is necessary to achieve system stability, more aggressive overflow is needed. Our policy is able to "learn" this through the primal-dual training.
When the overflow cost is small (Table 2), the modified cµ-rule is able to achieve better performance than the modified max-pressure policy. Note that in this case all the ω_ij(t)'s are nonnegative for both the modified cµ-rule and the modified max-pressure policy (when X_i(t) > 0). When γ is small, our policy achieves performance comparable to the modified cµ-rule; when γ is large, i.e., γ = 0.99, our policy achieves a 21% cost reduction over the modified cµ-rule. This suggests that overflow needs to be exercised carefully.
We next discuss the structure of the policies obtained via the primal-dual algorithm. We observe that our policies in general follow a threshold structure: overflow customers only when the queue length exceeds some threshold. However, the thresholds are highly dependent on the state of the system. Take the scheduling policies for class 1 and class 2 customers with discount rate γ = 0.9 as an example. In Figure 4, we plot the threshold at which overflow starts for different values of Z_11 and Z_22. We observe that, holding Z_12 and Z_13 fixed, the overflow threshold decreases as Z_11 increases. Similarly, holding Z_21 and Z_23 fixed, the overflow threshold decreases as Z_22 increases.
Figure 4: The thresholds at which the class 1 and class 2 queues start overflowing.
Conclusion and Future Directions
In this work, we propose a sampling-based primal-dual algorithm to solve CMDPs. Our approach alternately applies regularized policy iteration to improve the policy and subgradient ascent to enforce the constraints. The algorithm achieves an O(log(T)/√T) convergence rate and only requires one policy update at each primal-dual iteration. Our algorithm also enjoys the decomposability property for weakly coupled CMDPs. We demonstrate the application of our algorithm to two important operations management problems with weakly coupled structures: multi-product inventory management and multi-class queue scheduling.
In Section 7, we also show the good empirical performance of our algorithm in solving an important class of weakly coupled MDPs. This opens two directions for future research. First, it would be important to theoretically quantify the optimality gap between the weakly coupled MDP and its CMDP relaxation. The gap can be large in some problems, as demonstrated in Adelman and Mersereau (2008). It would be interesting to establish easy-to-verify conditions for when the gap is small.
Second, the policy obtained via the Lagrangian relaxation may not satisfy the hard constraints in the original MDP. One approach to overcome the issue is to use more stringent thresholds when defining constraints in the CMDP relaxation (Balseiro et al. 2019). The other approach is to modify the CMDP based policies to construct good MDP policies. For example, Brown and Smith (2020) study a dynamic assortment problem and propose an index heuristic from the relaxed problem, and show that the policy achieves asymptotic optimality. In Section 7, we apply a rather straightforward modification to the CMDP based policy in order to satisfy the hard constraints in the original MDP. In general, how to "translate" the policy derived based on the relaxed problem to the original MDP would be an interesting research direction.
Appendix. Proof of Main Results
The proof of Theorem 1 relies on the following lemma, which upper and lower bounds the change in the Lagrangian after a single iteration/update of the policy and the Lagrangian multipliers.
Lemma 1. Let {(π_m, λ_m)}_{m≥0} be the sequences of stationary policies and Lagrangian multipliers generated by Algorithm 1. Then for arbitrary λ ∈ R^K_+ and π ∈ Π_S, the one-step change in the Lagrangian admits an upper bound and a lower bound, both involving the step size, the Bregman divergences between consecutive iterates, and the term sup_{s∈S} sup_{a∈A} |Q^{π_m,λ_m}(s, a)|^2.
Before we prove Lemma 1, we first present two auxiliary lemmas. The first lemma (Lemma 2) is rather standard; a similar version of the result can be found in Proposition 3.2.2 of Bertsekas (2015). For completeness, we provide the proof here.
Lemma 2. Let f be a proper convex function on a space Ω (not necessarily a Euclidean space). Let C be an open set in Ω, and let Ψ_ξ(· ‖ ·) be the Bregman divergence induced by a strictly convex function ξ on Ω. For an arbitrary constant η > 0 and a point x_0 ∈ Ω, define

x* = arg min_{x∈C} f(x) + (1/η) Ψ_ξ(x ‖ x_0).

Then, for all x ∈ C,

f(x) + (1/η) Ψ_ξ(x ‖ x_0) ≥ f(x*) + (1/η) Ψ_ξ(x* ‖ x_0) + (1/η) Ψ_ξ(x ‖ x*).

By symmetry, for a concave function g on Ω and x* = arg max_{x∈C} g(x) − (1/η) Ψ_ξ(x ‖ x_0), we have, for all x ∈ C,

g(x) − (1/η) Ψ_ξ(x ‖ x_0) ≤ g(x*) − (1/η) Ψ_ξ(x* ‖ x_0) − (1/η) Ψ_ξ(x ‖ x*).

Proof of Lemma 2. We first consider the minimization problem. Since x* minimizes the objective f(x) + η^{−1} Ψ_ξ(x ‖ x_0) on the set C, there exists a subgradient of the form

q* + (1/η) ( ∇ξ(x*) − ∇ξ(x_0) )

whose inner product with x − x* is nonnegative for all x ∈ C. Here q* ∈ ∂_x f(x*) is some subgradient of f at x*. As a result, by the property of subgradients, for all x ∈ C we have

f(x) ≥ f(x*) + ⟨q*, x − x*⟩ ≥ f(x*) − (1/η) ⟨∇ξ(x*) − ∇ξ(x_0), x − x*⟩ = f(x*) + (1/η) ( Ψ_ξ(x ‖ x*) + Ψ_ξ(x* ‖ x_0) − Ψ_ξ(x ‖ x_0) ),

where the last equality follows from the definition of the Bregman divergence, i.e., Ψ_ξ(x ‖ y) = ξ(x) − ξ(y) − ⟨∇ξ(y), x − y⟩.
For the maximization problem, we only need to consider −g and apply the above result.
The next lemma is Lemma 6.1 in Kakade and Langford (2002). Given two policies, it characterizes the difference in expected accumulated costs as the inner product of the advantage function of one policy and the occupation measure of the other. Note that the value function V^π and the action-value function Q^π of an MDP under policy π are defined in (1) and (12). In our notation, the lemma states that for any two policies π and π′,

C(π′) − C(π) = Σ_{(s,a)∈S×A} ν_{π′}(s, a) ( Q^π(s, a) − V^π(s) ).
Then we obtain the upper bound.
Note that the space of stationary policies, Π_S, can be represented as the product space of simplices ∆_A, with one simplex per state.
We are now ready to prove Theorem 1.
"year": 2021,
"sha1": "3886a019d85902dc703ffc778ce7a24d09440248",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "3886a019d85902dc703ffc778ce7a24d09440248",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
Evolution of multivariate drought hazard, vulnerability and risk in India under climate change
Changes in climate and socio-economic conditions pose a major threat to water security, particularly in the densely populated, agriculture-dependent and rapidly developing country of India. Therefore, for cogent mitigation and adaptation planning, it is important to assess the future evolution of drought hazard, vulnerability and risk. Earlier studies have demonstrated projected drought risk over India on the basis of frequency analysis and/or hazard assessment alone. This study investigates and evaluates the change in projected drought risk under future climatic and socio-economic conditions by combining drought hazard and vulnerability projections at a country-wide scale. A multivariate standardized drought index (MSDI) accounting for concurrent deficits in precipitation and soil moisture is chosen to quantify droughts. Drought vulnerability assessment is carried out combining exposure, adaptive capacity and sensitivity indicators, using a robust multi-criteria decision-making method called the Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS). In the worst-case scenario for drought hazard (RCP2.6-Far future), there is a projected decrease in the area under high or very high drought hazard classes in the country by approximately 7 %. Further, the worst-case scenario for drought vulnerability (RCP6.0-SSP2-Near future) shows a 33 % rise in the areal extent of high or very high drought vulnerability classes. The western Uttar Pradesh, Haryana and western Rajasthan regions are found to be at high risk under all scenarios. Bivariate choropleth analysis shows that the projected drought risk is majorly driven by changes in drought vulnerability attributable to societal developments rather than changes in drought hazard resulting from climatic conditions. The present study can aid policy makers, administrators and drought managers in developing decision support systems for efficient drought management.
Introduction
Droughts play a major role in water resource planning and management, agronomy, and freshwater availability (Mishra and Singh, 2010, 2011). Droughts may be exacerbated by climate change or societal developments or through a combination of the two. For building drought resilience, it is important to assess the role of these changes in the evolution of drought at regional scales, particularly for rapidly growing, heavily agriculture-dependent countries such as India. Though socio-economic development is reported to have a greater impact on water availability compared to climate-induced impacts in some regions across the globe, the role of climate change cannot be entirely eliminated (Koutroulis et al., 2019). Representative Concentration Pathways (RCPs; van Vuuren et al., 2011), which are radiative forcing scenarios for different greenhouse gas emission levels, are commonly used for climate change impact studies. Shared Socioeconomic Pathways (SSPs; O'Neill et al., 2017), on the other hand, provide different narratives of future societal development. Plausible combinations of different RCPs and SSPs are useful to study future projections of drought risk (Kim et al., 2020).
According to the Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report (AR5) (IPCC, 2014), the risk of an extreme event can be quantified as a product of hazard, vulnerability and exposure. Drought hazard is a function of the magnitude and occurrence probability of drought events. On the other hand, drought vulnerability is the degree to which a region is susceptible to drought and is a function of sensitivity, adaptive capacity and exposure components. These components in turn describe the socio-economic, physical and infrastructural factors and are illustrated through drought vulnerability indicators. A comprehensive drought risk assessment involves the proper selection of drought indicators for hazard analysis, the proper selection of drought vulnerability indicators, and a reliable aggregation technique for vulnerability analysis (Carrão et al., 2016; Naumann et al., 2014; Sahana et al., 2021). By virtue of taking into consideration both drought hazard and vulnerability, a combination of RCP and SSP scenarios offers a comprehensive approach for drought risk projection.
Several studies have carried out risk assessments of drought and water availability across different regions of the world under changing climate and socio-economic conditions. Singh and Kumar (2019) quantified the water availability in the Indian region due to climate and demographic changes. Ahmadalipour et al. (2019) carried out drought risk assessment in the African region for different population growth and climate change scenarios. Chen et al. (2021) evaluated the effect of changing climate, population and GDP on the drought risk for China. Park et al. (2021) presented drought risk projections under changing meteorological conditions and socio-economic scenarios for South Korea. A comprehensive drought risk assessment for Europe was carried out by Koutroulis et al. (2018) under changing climate and socio-economic scenarios by evaluating exposure, sensitivity and adaptive capacity components for the projected period. Along similar lines, Koutroulis et al. (2019) quantified the global water availability under high-end climate change. Water use vulnerability was assessed by Kim et al. (2020) for a river basin in Korea for different climate and socio-economic scenarios.
For the Indian region, projections of drought hazard and/or risk and water availability have been developed in earlier studies using climate scenarios alone (Aadhar and Mishra, 2020, 2021; Gupta et al., 2020; Gupta and Jain, 2018), with the exception of Singh and Kumar (2019), who consider the role of both climate and socio-economic scenarios for obtaining future projections of water availability. However, Singh and Kumar (2019) represent future socio-economic changes using a simplistic approach that considers changes in population alone. A combination of RCP and SSP scenarios, integrating hazard and vulnerability information, is required to assess drought risk in India in the near and far future. Further, most studies that assess drought hazard under climate change scenarios consider either univariate or multivariate approaches based on precipitation deficits and temperature effects (Aadhar and Mishra, 2020, 2021; Gupta et al., 2020; Gupta and Jain, 2018). However, droughts often manifest as a complex interplay of multiple influencing variables, necessitating a multivariate approach for the characterization of drought hazard (Sahana et al., 2020). For the agrarian country of India, agro-meteorological drought hazard projections should consider deficits in precipitation or soil moisture or both.
The present study aims at comprehensive drought risk projections for India by accomplishing the following objectives: (a) multivariate drought hazard projection using a multivariate standardized drought index (MSDI) that considers concurrent deficits in precipitation and soil moisture for future warming scenarios, (b) drought vulnerability projection considering combinations of RCP and SSP scenarios, using a list of drought vulnerability indicators that represent exposure, sensitivity and adaptive capacity components, (c) drought risk projection integrating hazard and drought vulnerability information, (d) development of bivariate choropleth plots under future scenarios to quantify the individual roles of climate and societal changes in driving drought risk, and (e) identification of regions and zones that are expected to be under the worst drought risk conditions in the near and far future.
Hydro-climatic variables
The multivariate drought risk assessment focusing on agricultural drought requires a combined analysis of precipitation and soil moisture data. The drought hazard assessment for the baseline period requires observed hydro-climatic variables. Gridded daily precipitation data (mm; https://www.imdpune.gov.in/cmpg/Griddata/Rainfall_25_NetCDF.html, last access: 20 November 2020) at 0.25° lat × 0.25° long resolution are obtained from the India Meteorological Department (IMD) (Pai et al., 2014). This dataset has been employed in various studies over the Indian region (Sahana et al., 2021). Gridded monthly root-zone soil moisture data (m³ m⁻³) over the Indian region at 1/2° lat × 2/3° long resolution are obtained from the Modern-Era Retrospective Analysis for Research and Application (MERRA-Land) (https://disc.gsfc.nasa.gov/datasets/MST1NXMLD_5.2.0/summary?keywords=MERRA-land, last access: 14 April 2018). This dataset has been employed for drought studies across the world (Farahmand and AghaKouchak, 2015; AghaKouchak, 2015) and also for Indian regions (Sahana et al., 2020, 2021). The above two datasets are regridded to a common spatial resolution of 0.5° lat × 0.5° long and rescaled to monthly resolution for the historical drought hazard assessment. Re-gridding of the observed datasets to 0.5° lat × 0.5° long resolution is carried out using the triangulation-based linear interpolation method (Watson and Philip, 1984). Further, the monthly time series of spatial variation, in terms of the standard deviation of precipitation and soil moisture from their observed and rescaled datasets, is shown in Fig. S1 in the Supplement. It is observed that the rescaling of the datasets from their parent resolution to 0.5° lat × 0.5° long results in no additional variability.
In order to evaluate the projected drought hazard over India, the projected precipitation and soil moisture data at a spatial resolution of 0.5° lat × 0.5° long are obtained from the Inter-Sectoral Impact Model Intercomparison Project (ISIMIP) (Warszawski et al., 2014). The historical and projected data from the available general circulation models (GCMs), namely GFDL-ESM2M, HADGEM2-ES, IPSL-CM5A-LR and MIROC5, and for two RCPs, RCP2.6 and RCP6.0, are downloaded from the ISIMIP data portal (https://esg.pik-potsdam.de/search/isimip/, last access: 9 September 2020). The daily precipitation data (kg m⁻² s⁻¹) are already downscaled and bias-corrected with respect to global-level observed precipitation from EWEMBI (EartH2Observe observations, WFDEI and ERA-Interim data Merged and Bias-corrected for ISIMIP). These data have been previously used to study soil moisture droughts for Europe (Grillakis, 2019) and terrestrial water storage in mainland China (Jia et al., 2020). The country-wide average annual precipitation for the projected period is higher compared to the baseline periods, as shown in Fig. S2. As a part of the ISIMIP2b experiments, the LPJmL impact model (Sitch et al., 2003), a global vegetation model capable of representing fine-resolution physical processes using carbon, water and energy balance equations (Schaphoff et al., 2018) under a changed climate, is driven by the bias-corrected GCM precipitation to simulate the root-zone soil moisture (kg m⁻²). For our study, the soil moisture data up to three layers, accounting for 1 m depth, are used. The country-wide average annual soil moisture for the projected period is slightly lower compared to the baseline periods, as shown in Fig. S3. The observed and simulated country-wide averages of monthly precipitation and soil moisture for the period 1980-2005 are presented in Fig. S4. The performance of all the ISIMIP models is comparable with that of the observed data, except for the soil moisture during monsoon months. The lower soil moisture estimates from the LPJmL model (ISIMIP experiments) simulations compared to the MERRA-Land soil moisture observations for the monsoon months could be due to overestimation of LPJmL's simulated run-off (Zaherpour et al., 2018). Although the simulated soil moisture data underestimate the monsoon months' soil moisture (June, July, August, September) during the historic period (Fig. S4), we did not perform bias correction, since we intend to capture the variability in the soil moisture rather than the magnitudes for drought index calculation. The projected daily precipitation is accumulated over each month to obtain the monthly precipitation values, and its units are converted from kilograms per square metre per second (kg m⁻² s⁻¹) to millimetres (mm). The projected monthly soil moisture (average monthly soil moisture) from the model is converted from kilograms per square metre (kg m⁻²) to cubic metres per cubic metre (m³ m⁻³). The ensemble means of monthly precipitation and soil moisture from the different GCMs are computed. Further, these ensemble mean monthly precipitation and soil moisture time series are used for drought hazard assessment. Although climate variables from CMIP6 are available, drought responses of CMIP5 models are similar to those of CMIP6 (Cook et al., 2020). Hence we proceeded with the CMIP5 data for drought hazard assessment.
Drought vulnerability indicators
The country-wide drought vulnerability indicators adopted for drought vulnerability assessment are listed in Table 1, along with their sources, spatial and temporal distribution, units, method of data generation, relevance, and correlation to drought vulnerability for both the observed (around the year 2010) and projected datasets. The presented drought vulnerability indicators comprise sensitivity, exposure and adaptive capacity indicators (Table 1). Drought vulnerability indicators such as groundwater availability, irrigation index and waterbody fraction are not directly available for the projected period. Hence, these indicators are proxied by representative indicators (Table 1) through multiple linear regression (MLR). An extensive vulnerability assessment encompasses other social and economic vulnerability indicators, such as those used by Meza et al. (2020). However, for a densely populated and rapidly developing nation such as India, the acquisition of reliable datasets on these indicators is often challenging. Most importantly, the unavailability of projections of these indicators over the Indian region limits their use in this study, since our primary goal is to compare baseline drought risk with that under future projected climate change. Further, the weightages for the categorical vulnerability indicators for drought vulnerability assessment are adopted from Ekrami et al. (2016), Sahana et al. (2021), and Thomas et al. (2016) and are given in Table S1 in the Supplement. Finally, drought vulnerability indicators are extracted for the RCP2.6-SSP2 and RCP6.0-SSP2 scenarios for the periods around 2060 and 2100 so as to represent different climate and socio-economic scenarios for the near-future and far-future periods respectively. In general, socio-economic development is a slow process that takes time to be reflected in terms of significant changes in socio-economic indicators (Dellink et al., 2017). Further, the majority of drought vulnerability and/or risk studies across the globe have adopted a static vulnerability assessment that represents drought vulnerability as a snapshot in time (Hagenlocher et al., 2019). Therefore, we used static vulnerability indicators for the years 2010, 2060 and 2099 to quantify drought vulnerability for the baseline, near-future and far-future periods respectively.
Drought vulnerability indicators such as population density and GDP for the year 2010 from the SSP2 pathway are comparable with their respective observed datasets, with small/negligible differences between the observed and SSP-based values (Chou et al., 2019). Further, LULC projections can also be derived based on land use models, using past LULC data and socio-economic factors to drive the land use change.
However, the development of such models at country scale is beyond the scope of the present study.
Methods
The methodology adopted to study the evolution of drought risk is given in Fig. 1.
Drought hazard assessment
Drought hazard forms an important component of drought risk assessment. Here, we assess the country-wide drought hazard based on deficits in precipitation and soil moisture. Therefore, the multivariate standardized drought index (MSDI) of the non-parametric form is computed using the bivariate case of the Gringorten plotting position formula (Gringorten, 1963). MSDI is equally capable of capturing deficits in precipitation or soil moisture individually, or their joint deficit, considering the dependence between these two variables. This is a unique advantage of MSDI (Hao and AghaKouchak, 2014) over other univariate indices. Further, MSDI is capable of representing the onset, propagation and termination of drought. In Fig. S6, considering −0.8 as the threshold for triggering a drought, it can be seen that whenever either the standardized precipitation index (SPI) or the standardized streamflow index (SSI) falls below this threshold, MSDI covers the critical trajectory and offers a conservative characterization of drought, thereby capturing attenuation and lag effects. The steps involved in the calculation of MSDI are presented below.
1. The joint probability distribution of the 1-month timescale precipitation (R) and soil moisture (S) is given by

$P(R \le r, S \le s) = p$,

where r and s represent the values of the random variables R and S respectively, and p represents the joint probability of the precipitation and soil moisture.

2. For the sample size n, the count of occurrences of the pair $(r_i, s_i)$ for $r_i \le r_k$ and $s_i \le s_k$ is denoted as $m_k$; $r_k$ and $s_k$ here denote the kth observation for precipitation and soil moisture respectively. The number of joint occurrences ($m_k$) of precipitation and soil moisture pairs below $r_k$ and $s_k$ within the whole set of observations is used to calculate the empirical joint probability for the kth observation based on the bivariate Gringorten plotting position (Gringorten, 1963) as

$P(r_k, s_k) = \frac{m_k - 0.44}{n + 0.12}$.

3. The above empirical joint probability is then standardized to obtain the multivariate index MSDI:

$\mathrm{MSDI} = \phi^{-1}(P)$,

where $\phi$ is the standard normal distribution function and $\phi^{-1}$ its inverse.
Since the empirical distributions use ranks of data instead of actual values, the sample size should be sufficiently large.
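A minimal Python sketch of this non-parametric MSDI computation, assuming two equally long 1-month series, is given below; scipy's inverse normal CDF performs the standardization step.

```python
import numpy as np
from scipy.stats import norm

def msdi(precip, soil_moisture):
    """Non-parametric MSDI from monthly precipitation and soil moisture.

    For each month k, m_k counts months where both variables fall at or
    below their k-th values; the bivariate Gringorten plotting position
    turns m_k into an empirical joint probability, which is then mapped
    through the inverse standard normal CDF.
    """
    r = np.asarray(precip, dtype=float)
    s = np.asarray(soil_moisture, dtype=float)
    n = len(r)
    m = np.array([np.sum((r <= r[k]) & (s <= s[k])) for k in range(n)])
    p = (m - 0.44) / (n + 0.12)   # empirical joint probability
    return norm.ppf(p)            # MSDI = phi^{-1}(P)
```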
The method of drought hazard assessment followed in the present study is based on Kim et al. (2015). Hazard is measured as the product of the magnitude and the associated frequency of occurrence of an event. The MSDI time series at each region is categorized into four groups, similar to McKee et al. (1993). These categories are assigned weights according to the magnitude of the MSDI value: higher weights are assigned to the worst (highly negative) MSDI values, and vice versa. Further, each weight category is divided into different clusters based on the frequency of occurrence of MSDI values. The total number of clusters for ratings in each MSDI category is determined using the well-known k-means data clustering algorithm. Higher ratings are assigned to the cluster with high-frequency values, and vice versa. The weightage and rating scheme is depicted graphically in Fig. 1. In the k-means clustering technique, the distance between the data points is computed using the squared Euclidean distance metric. To avoid convergence to local minima, the k-means algorithm is run with 100 random initial seeds and 10 000 iterations. The Calinski-Harabasz index (CHI) (Caliński and Harabasz, 1974) is used to determine the optimum number of clusters and is given by

$\mathrm{CHI} = \frac{\mathrm{BGSS}/(K-1)}{\mathrm{WGSS}/(n-K)}$,

where n is the number of data points, K is the number of clusters, $\mathrm{BGSS} = \sum_{k=1}^{K} n_k \left\| G^{\{k\}} - G \right\|^2$ is the amount of scatter between groups (with $n_k$ the size of the kth cluster, $G^{\{k\}}$ the centroid of the kth cluster and $G$ the centroid of all the observations), and $\mathrm{WGSS} = \sum_{k=1}^{K} \mathrm{WGSS}^{\{k\}}$ is the within-group scatter, with $\mathrm{WGSS}^{\{k\}} = \sum_{x_i \in C_k} \left\| x_i - G^{\{k\}} \right\|^2$, where $x_i$ are the observations in cluster $C_k$. The k-means clustering algorithm is run for 1 to n clusters. The number of clusters that gives the highest value of CHI is the optimum number of clusters. This optimum number of clusters is used for assigning ratings. The categorized weightages and computed ratings are used to calculate the drought hazard (DH) for every region as below.
$\mathrm{DH} = \frac{1}{t}\sum_{i=1}^{t} w_i\, r_i$, (5)

where t is the length of the MSDI time series, and $w_i$ and $r_i$ are the weight and rating assigned to the ith MSDI value. Although the weightages and ratings are intrinsically linked, the above scheme ensures drought hazard quantification based on both magnitudes and frequencies. The DH values from Eq. (5) are standardized as shown below to obtain the drought hazard index (DHI), which varies between 0 and 1:

$\mathrm{DHI} = \frac{\mathrm{DH} - \mathrm{DH}_{\min}}{\mathrm{DH}_{\max} - \mathrm{DH}_{\min}}$.
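The cluster-count selection can be sketched with scikit-learn, whose calinski_harabasz_score implements the CHI defined above. The function name and the cap on the candidate K are illustrative choices rather than details from the paper.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import calinski_harabasz_score

def optimal_cluster_count(frequencies, max_k=8, seed=0):
    """Pick the number of k-means clusters that maximizes the CHI.

    `frequencies` holds the occurrence frequencies of MSDI values within
    one weight category; clusters of these frequencies later receive the
    ratings (high-frequency clusters rated higher).
    """
    X = np.asarray(frequencies, dtype=float).reshape(-1, 1)
    best_k, best_chi = None, -np.inf
    for k in range(2, min(max_k, len(X) - 1) + 1):  # CHI needs 2 <= K < n
        labels = KMeans(n_clusters=k, n_init=100, max_iter=10_000,
                        random_state=seed).fit_predict(X)
        chi = calinski_harabasz_score(X, labels)
        if chi > best_chi:
            best_k, best_chi = k, chi
    return best_k
```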
The weighting and rating scheme used to calculate the DHI for a randomly chosen grid is given in Table S2.

Drought vulnerability assessment

Drought vulnerability is quantified from the indicators in Table 1 using the multi-criteria decision-making technique TOPSIS, in the following steps.

1. The drought vulnerability indicators are assembled for every region, with the weightages for the categorical indicators taken from Table S1. This gives the decision matrix $n_{ij}$, where i = 1, 2, ..., n represents the number of regions and j = 1, 2, ..., m represents the number of drought vulnerability indicators.
2. The above decision matrix $n_{ij}$ is associated with the indicator weights $w_j$ obtained from the analytic hierarchy process (AHP) method (Sahana et al., 2021). This gives the weighted decision matrix

$v_{ij} = w_j\, n_{ij}$.

3. Positive ($A^{+}$) and negative ($A^{-}$) ideal solutions are calculated for each of the indicators:

$A^{+} = \left\{ \left( \max_i v_{ij} \mid j \in I \right), \left( \min_i v_{ij} \mid j \in J \right) \right\}$,
$A^{-} = \left\{ \left( \min_i v_{ij} \mid j \in I \right), \left( \max_i v_{ij} \mid j \in J \right) \right\}$,

where I and J are associated with the benefit and cost criteria respectively. Here population density, LULC, slope and soil texture, which bear a positive correlation with drought vulnerability, are considered benefit criteria. On the other hand, irrigation index, groundwater availability, waterbody fraction and GDP, which bear a negative correlation with drought vulnerability, are considered cost criteria.
4. Positive ($d_i^{+}$) and negative ($d_i^{-}$) separation measures for each region i are computed based on $A^{+}$ and $A^{-}$ (also shown in Fig. 1):

$d_i^{+} = \sqrt{\sum_{j=1}^{m} \left( v_{ij} - A_j^{+} \right)^2}$, $d_i^{-} = \sqrt{\sum_{j=1}^{m} \left( v_{ij} - A_j^{-} \right)^2}$.

5. The relative closeness ($R_i$) of each region to the positive ideal solution is calculated as

$R_i = \frac{d_i^{-}}{d_i^{+} + d_i^{-}}$.

$R_i$ signifies the vulnerability of region i to drought. R is further standardized to vary between 0 and 1 to obtain the drought vulnerability index (DVI).
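Steps 2 to 5 reduce to a few lines of array code. The sketch below assumes the decision matrix has already been normalized and that benefit_mask marks the benefit criteria; all names are illustrative.

```python
import numpy as np

def topsis_dvi(decision_matrix, weights, benefit_mask):
    """TOPSIS relative-closeness scores, min-max scaled to a DVI in [0, 1].

    Rows of `decision_matrix` are regions, columns are the (normalized)
    vulnerability indicators; `weights` come from AHP.
    """
    v = decision_matrix * weights                      # weighted decision matrix
    a_pos = np.where(benefit_mask, v.max(axis=0), v.min(axis=0))  # A+
    a_neg = np.where(benefit_mask, v.min(axis=0), v.max(axis=0))  # A-
    d_pos = np.sqrt(((v - a_pos) ** 2).sum(axis=1))    # separation from A+
    d_neg = np.sqrt(((v - a_neg) ** 2).sum(axis=1))    # separation from A-
    r = d_neg / (d_pos + d_neg)                        # relative closeness
    return (r - r.min()) / (r.max() - r.min())         # standardized DVI
```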
Drought risk assessment
The hazard and vulnerability information computed in the form of the DHI and DVI, respectively, are combined to evaluate the drought risk; the risk is the product of the two indices, $\mathrm{DR} = \mathrm{DHI} \times \mathrm{DVI}$ (Eq. 15). Accordingly, the drought hazard capturing the droughts in the baseline (1980-2015), near-future (2021-2060) and far-future (2061-2099) periods is combined with the drought vulnerability at 2010, 2060 and 2099 respectively. The definition of risk provided by the IPCC (AR5) (IPCC, 2014) is adopted. Though AR5 delineates exposure as a separate component of risk, we have included exposure as an integral part of vulnerability following Vittal et al. (2020), since such a definition is unlikely to affect the overall conclusions of the risk assessment.
Drought risk values computed using Eq. (15) are further standardized spatially to obtain the drought risk index (DRI). Standardization of the drought risk at each grid is carried out using the equation

$\mathrm{DRI} = \frac{\mathrm{DR} - \mathrm{DR}_{\min}}{\mathrm{DR}_{\max} - \mathrm{DR}_{\min}}$.

Standardization is performed such that the values are distributed between 0 and 1 so as to classify different risk categories. Further, circumstances such as a highly vulnerable population being exposed to mild droughts, or to no droughts at all, may arise and are handled well due to the integrated assessment of drought risk. For example, if the hazard is low in a region, it is likely to be classified as "low to moderate" in terms of drought risk despite having high vulnerability.
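Combining the two indices into a risk map is then a one-line product followed by the same min-max scaling; a sketch, assuming dhi and dvi are arrays over the same grid:

```python
import numpy as np

def drought_risk_index(dhi, dvi):
    """Risk as the product of hazard and vulnerability (Eq. 15),
    min-max standardized over the grid so the DRI spans [0, 1]."""
    risk = np.asarray(dhi) * np.asarray(dvi)
    return (risk - risk.min()) / (risk.max() - risk.min())
```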
Apart from representing the risk as a product of hazard and vulnerability, it can also be represented using a bivariate choropleth (Mohanty et al., 2020). The colour scale of these bivariate choropleths is characterized by all possible combinations of DHI and DVI classes. Such maps clearly demarcate hazard-driven and vulnerability-driven risk.
Results and discussion
3.1 Drought hazard
Projection of hydro-climatic variables
The multi-model ensemble precipitation and soil moisture data from the four GCMs are used for drought hazard assessment. The country-wide accumulated data (summed over all grids) of these hydro-climatic variables are shown in Fig. 2. The projected precipitation, as well as soil moisture, for the RCP6.0 scenario is higher than for the RCP2.6 scenario. Further, it is noted that the variability in both variables increases with time. However, the variability in the hydro-climatic variables in the baseline period is high compared to the projected period.
Projection of drought hazard
The multi-model ensemble drought hazard for the different RCP scenarios and time slices, along with the baseline period, is shown in Fig. 3. The indices representing drought hazard are classified into five categories based on an equal-interval classification scheme: 0-0.2 (very low), 0.2-0.4 (low), 0.4-0.6 (medium), 0.6-0.8 (high) and 0.8-1 (very high). The MSDI-based drought hazard maps developed for the baseline period match well with hazard maps developed from other multivariate indices such as SPEI (Gupta et al., 2020), compared to those developed from the univariate SPI (Vittal et al., 2020). It is observed that the projected hazard over many regions is less severe compared to the baseline period. However, certain parts of north-western India and the eastern coastal regions are in the high drought hazard class. The hazard transition from the baseline to the different scenarios is presented in Fig. 4. The baseline and projected scenarios of drought hazard are represented using five different classes: very low, low, medium, high and very high. Every region (grid) of the country may transit from one class in the baseline scenario to another class in the projected scenario or remain in the same class for both. In the transition matrix we compute the percent area of the country that transitioned from one hazard class to another to quantify the effect of climate change. The upper triangle in the figure represents the percent area transitioning from lower to higher hazard classes, the lower triangle represents the percent area transitioning from higher to lower hazard classes, and the diagonal elements represent the percent area with no transition.
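The class-transition bookkeeping behind Fig. 4 can be sketched as a cross-tabulation of baseline versus projected class labels converted to percent area. The sketch below assumes each grid contributes equal area, which is a simplification at 0.5° resolution.

```python
import numpy as np

CLASSES = ["very low", "low", "medium", "high", "very high"]

def transition_matrix(baseline_idx, projected_idx, n_classes=5):
    """Percent of grids moving from each baseline class (rows) to each
    projected class (columns); the diagonal holds the no-transition share."""
    counts = np.zeros((n_classes, n_classes))
    for b, p in zip(baseline_idx, projected_idx):
        counts[b, p] += 1
    return 100.0 * counts / counts.sum()
```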
In general, a transition from higher hazard classes to lower hazard classes is observed under the projected scenarios, implying that more regions of the country are expected to come under the low hazard category in the future. From Figs. 2a, S2 and S3, we see that precipitation and soil moisture for the projected period show an increasing trend. Further, it is to be noted that the hazard assessment using MSDI is based on the long-term mean and variability of these drought indicators under a probabilistic analysis framework, and not necessarily on the magnitudes of precipitation and soil moisture. Here we see that the projections of these indicators exhibit lower variability compared to the baseline period (Fig. 2a). Therefore, many regions undergo the transition from high hazard to low hazard. The future drought hazard assessment using the projected hydro-climatic variables revealed that more than 35 % of the country's area is expected to be in the low hazard class, compared to 8 % in the baseline period (refer to Figs. 4 and 10). It is also interesting that the area in the high hazard class is greater in the far future than in the near future, irrespective of the RCP scenario. This is ascribed to the higher variability of the hydro-climatic variables in the far-future compared to the near-future period, which resulted in drought events of higher magnitude. Of all the future drought hazard scenarios considered, the RCP2.6-Far scenario revealed the largest area (2.8 %) in the high and very high hazard classes. This amounts to a 7 % reduction in the high and very high hazard classes compared to the baseline scenario. It is observed that north-western India and parts of Jammu, Kashmir, Andhra Pradesh and Marathwada fall in the high hazard classes.
It is interesting to note that the probabilistic Budyko-framework-based projected annual per capita water availability (PCWA) analysis for the Indian region by Singh and Kumar (2019) shows a decrease in PCWA in a 2.0 °C warmer world compared to a 1.5 °C warmer world under CMIP5-based mitigation, medium stabilization and high-end (RCP8.5) climate change scenarios, indicating high hazard in the far future. Similarly, a higher drought hazard in the far future compared to the near future is observed by Gupta and Jain (2018) and Gupta et al. (2020), who performed SPEI-based drought hazard analysis using CMIP5 GCMs under high-end climate change. Further, frequency-based soil moisture drought analysis by Aadhar and Mishra (2020, 2021) and SPEI-based drought frequency analysis by Zhai et al. (2020) show an increased drought frequency over South Asia in the future period compared to the baseline period. This shows that the far-future period is more prone to drought hazard than the near-future period. On the other hand, a few studies, such as Koutroulis et al. (2019) and Cook et al. (2020), who used CMIP5 and CMIP6 simulations respectively, show that drought exposure and frequency over the Indian region decrease with time. Such contradicting observations are possibly due to the selection of low-skill GCMs (Aadhar and Mishra, 2020) in Koutroulis et al. (2019) and Cook et al. (2020). It is to be noted that the four GCMs considered in the present study for precipitation and soil moisture simulations are bias-corrected for precipitation and cover a wider range of uncertainty in temperature and precipitation changes compared to other GCM subsets (McSweeney et al., 2015). However, the inclusion of other skilled GCMs could account for a wider range of uncertainty in the drought hazard assessment.
Projection of drought vulnerability indicators
The time-varying drought vulnerability indicators for the drought vulnerability assessment are shown in Fig. 2. It is observed that GDP increases continuously with time, whereas population reaches its peak at the end of the near future (2060) and decreases gradually by the end of the century. The representative indicators obtained through human influences, varying land use and water abstractions according to the RCP2.6-SSP2 and RCP6.0-SSP2 conditions are used to derive drought vulnerability indicators such as the irrigation index, waterbody fraction and groundwater availability for the projected period. It is observed that the irrigation index decreases with time for the RCP2.6-SSP2 and RCP6.0-SSP2 projections. The waterbody fraction remains constant for the RCP2.6-SSP2 projection and increases with time for the RCP6.0-SSP2 projection. Further, groundwater availability remains constant for the RCP2.6-SSP2 and RCP6.0-SSP2 projections. The biggest difference in land use land cover changes is observed under the RCP6.0 condition compared to RCP2.6. It is also seen that the percent area under habitation increases continuously with time in the case of RCP6.0. Slope and soil texture data are assumed to be constant (Fig. S7).
Projection of drought vulnerability
The multi-model ensemble drought vulnerability projections for the different scenarios are presented in Fig. 5. It is observed that many regions of the country are expected to be more vulnerable to drought compared to the baseline period. In general, parts of north-western and eastern India and the southern coast are observed to be in the high vulnerability class in the future scenarios. The transition of drought vulnerability from one class in the baseline to another class in the future is given in Fig. 6. It can be observed that drought vulnerability under the RCP6.0-SSP2 scenario is worse than under the RCP2.6-SSP2 scenario, since a higher transition from lower vulnerability classes to higher vulnerability classes is observed in the former case.
As much as 42.9 % of the area transitions from lower vulnerability classes to higher vulnerability classes under RCP6.0-SSP2-Near future. Also, a 33 % increase in the area in the high and very high vulnerability classes is observed in this worst-case scenario, with north-western India, the western coast, and parts of Chhattisgarh, Odisha and Jharkhand in the very high vulnerability class.
In the global freshwater vulnerability analysis conducted by Koutroulis et al. (2019), although the sensitivity component of the overall freshwater vulnerability is shown to increase with time, increasing adaptive capacity and decreasing exposure reduce India's vulnerability to drought. However, our study shows an increasing vulnerability to drought when considering sensitivity, adaptive capacity and exposure factors. Such contradicting observations in drought vulnerability are possibly due to the choice of low-skill GCMs in Koutroulis et al. (2019). Further, the socio-economic challenges for adaptation and mitigation in the different SSP narratives are led by different development pathways (O'Neill et al., 2017). Therefore, the adoption of other SSPs in drought vulnerability assessments may unveil other plausible drought vulnerability projections.
Next, we aggregate the hazard and vulnerability information at the meteorological sub-division scale (meteorological sub-divisions are the meteorologically homogeneous regions identified by the India Meteorological Department; Kelkar and Sreejith, 2020) to identify the sub-divisions under critical drought conditions due to the interplay of hazard and vulnerability. A scatter plot of drought hazard and drought vulnerability for the 30 sub-divisions is shown in Fig. S8. It is seen that the western Rajasthan, Haryana and western Uttar Pradesh sub-divisions are expected to have high drought risk compared to the other sub-divisions in all scenarios. Further, the number of sub-divisions falling under critical drought risk (DHI > 0.25, DVI > 0.75) is high in the case of the RCP6.0-SSP2 scenario, with 22 meteorological sub-divisions having high vulnerability (DVI > 0.75), particularly in the RCP6.0-SSP2-Near future scenario.
Projection of drought risk
The multi-model ensemble drought hazard and vulnerability projections under the different scenarios are combined according to Eq. (15) to obtain drought risk projections (Fig. 7). It is to be noted that the validation of the drought risk map for the baseline period has been carried out by Sahana et al. (2021), based on disaster data in terms of the number of people affected. It is noted that parts of Rajasthan, Madhya Pradesh, Maharashtra, Orissa and Tamil Nadu, Kerala, Chhattisgarh, Haryana, Himachal Pradesh, Chandigarh, Assam, and Nagaland that are in the moderate to severe drought risk category have experienced moderate to worse drought disasters. Further, the drought risk estimates for the baseline period from the present study compare well with regional-scale drought risk studies in India, such as those for Andhra Pradesh (Murthy et al., 2015), the Bearma basin (Thomas et al., 2016) and Maharashtra (Swami and Parthasarathy, 2021). From the drought risk projections, it is noted that parts of north-western India are expected to be more prone to drought risk compared to the baseline period. On the other hand, central Indian regions are expected to switch to lower risk classes. The transition of drought risk from one class in the baseline to another class in the future is given in Fig. 8. The highest transition (30 % of the area) from lower to higher risk classes is observed in the RCP6.0-SSP2-Far future scenario. Also, the overall drought risk reduces by 0.8 % in this scenario compared to the baseline. It is interesting to note that the RCP6.0-SSP2-Far future scenario is not the worst-case scenario in the drought vulnerability projection, yet it turned out to be the worst-case scenario in the drought risk projection due to the high drought hazard projection, revealing the importance of comprehensive drought risk assessment. Risk is an outcome of the interaction between hazard and vulnerability and is also a function of time. The fact that the worst-case scenarios are different for drought hazard and drought vulnerability indicates dissimilar behaviour of the drought hazard and vulnerability indicators in inducing drought risk. For example, population density is high in the near-future period (2060) compared to the far-future period (2100), while precipitation is continuously increasing in the projected period. A combination of such different hazard and vulnerability behaviour in a given time period is effectively captured through comprehensive risk analysis. Therefore, though the RCP6.0-SSP2-Far future scenario is not the worst-case scenario for drought vulnerability compared to RCP6.0-SSP2-Near future, the interaction of high hazard with moderate to high vulnerability resulted in the worst drought risk scenario in the case of RCP6.0-SSP2-Far future. However, in general, when the changes in drought risk for all the future scenarios are compared with the baseline, it is observed that the area falling under drought risk due to drought vulnerability has increased (Fig. 9). It is to be noted that the water availability projections for India by Koutroulis et al. (2019) show decreasing drought risk with time, as opposed to the increasing drought risk found in the present study. The choice of climate change scenarios and climate models by Koutroulis et al. (2019) could be a possible reason for such a difference. Further, projected bivariate choropleth maps for the unique combinations of DHI and DVI are presented in Fig. 9.
It is seen that most of the regions are characterized by low hazard and high vulnerability, indicating the high impact of societal developments rather than climate-invoked changes. Hence it is important to base drought mitigation plans on the socio-economic conditions instead of just considering the hydro-climatic conditions of the region of interest. Consolidated results showing the percent area of the different classes of drought hazard, vulnerability and risk under the various climate and socio-economic scenarios are given in Fig. 10. Of all the future drought hazard scenarios considered, the RCP2.6-Far scenario revealed the largest area (2.8 %) in the high and very high hazard classes. In the case of drought vulnerability, as much as 42.9 % of the area transitions from lower-vulnerability classes to higher-vulnerability classes under RCP6.0-SSP2-Near future, with 93 % of the country's area in the high and very high drought vulnerability classes. Further, in the worst-case drought risk scenario (RCP6.0-SSP2-Far future), 2.7 % of the country's area is observed to be in the high and very high drought risk classes.
Potential applications
The drought hazard, vulnerability and risk projection maps from the present study, developed at 0.5° lat × 0.5° long spatial resolution, are comparable in size to administrative blocks/districts. Therefore, these maps can help block-level administrators to know the region-specific causative factors inducing severe drought risk in both the baseline and projected periods, besides the natural components governing the drought risk. Also, these maps can inform the state or federal disaster management authorities concerning climate action plans. The change in drought risk over the different projected periods can modulate adaptation and mitigation strategies and can be included in decision support systems for drought management. Since drought risk is found to be mainly driven by societal factors, action plans should be directed at improving socio-economic conditions. Groundwater conservation, conjunctive use of surface water and groundwater, farmer participation in crop insurance, and water-saving farm practices and technologies are some important measures that can be adopted for raising socio-economic standards. Further, the framework of our study is applicable to state-wise drought risk assessment with reliable hydro-climatic and socio-economic indicators. Such an assessment can recommend measures for watershed management, irrigation and agricultural practices, and the reorganization of water demand and supply management at a local scale.

Conclusions

This study presents future projections of drought risk over India under changing climate and socio-economic conditions. This is achieved by combining the drought hazard and drought vulnerability projections. Drought hazard assessment is carried out using a multivariate drought index known as MSDI, an indicator of agro-meteorological drought. Drought vulnerability is assessed using a robust multi-criteria decision-making technique called TOPSIS, considering changes in the relevant socio-economic indicators. Previous drought risk projection studies undertaken over the Indian region are based on drought hazard alone, with no consideration given to the drought vulnerability component. The present study quantifies the relative contributions of drought hazard and drought vulnerability to the overall drought risk projections in a comprehensive risk framework. Thus, our analysis can aid the different stakeholders involved in drought management in their adaptation and mitigation plans under changing climate and socio-economic conditions. This marks the significant improvement of our study over existing studies on drought risk assessment in India under climate change. Further, we present for the first time future projected bivariate choropleth plots to identify the drivers of overall drought risk across the country. The multi-model ensemble drought hazard and drought vulnerability are computed for the two RCP-SSP scenarios, RCP2.6-SSP2 and RCP6.0-SSP2, for the near- and far-future timelines. The current study is limited by simulations from a single global vegetation model rather than multiple impact models including hydrologic or land surface simulations. Important conclusions of the study are outlined below.
The MSDI-based drought hazard assessment reveals that more than 35 % of the area of India is projected to be in the low hazard class, as opposed to 8 % in the baseline period, possibly due to the rising precipitation in the region projected by the climate models. The RCP2.6-Far scenario shows 2.8 % of the country's area in the high and very high hazard classes, accounting for a 7 % reduction in those two drought hazard categories. In general, the spatial extent of the high and very high hazard classes is greater in the far future compared to the near future.
Drought vulnerability is projected to increase for all scenarios, with 77 % of the area in the high or very high vulnerability class compared to 66 % in the baseline period. A rise of 33 % in the area in the high or very high vulnerability class is observed in the RCP6.0-SSP2-Near future scenario. Of the two RCP-SSP scenarios considered, the RCP6.0-SSP2 scenario exhibits the worst case of drought vulnerability due to the high transition from lower to higher vulnerability classes compared to the RCP2.6-SSP2 scenario.
The integration of the drought hazard and vulnerability projections shows an overall decrease in the drought risk projections, resulting primarily from a reduction in drought hazard. However, a transition from lower to higher risk classes ranging up to 30 % is observed in the RCP6.0-SSP2-Far future scenario. Meteorological sub-divisions such as western Rajasthan, Haryana and western Uttar Pradesh are expected to be at high risk in the projected period under all scenarios.
Bivariate choropleth analysis shows that future drought risk is significantly driven by increased vulnerability resulting from societal developments rather than by climate-induced changes in hazard. Therefore, future efforts on building drought resilience in the country must include strengthening socio-economic conditions.

Acknowledgements. The authors thank the providers of the drought vulnerability indicator datasets and the Potsdam Institute for Climate Impact Research for providing the ISIMIP data (https://esg.pik-potsdam.de/search/isimip/, last access: 9 September 2020) for the drought risk projection study. The authors appreciate the financial support received from the Government of India. The authors also thank Roshan Jha for his comments on the first draft.

Review statement. This paper was edited by Paolo Tarolli and reviewed by Marthe Wens and two anonymous referees.
Figure 1.
Figure 1. Framework to assess drought risk evolution. Monthly rainfall and monthly soil moisture are used to compute the multivariate standardized drought index (MSDI). The weights and ratings system of MSDI is adopted to further compute the drought hazard index (DHI). The multi-criteria decision-making technique TOPSIS is used to calculate the drought vulnerability index (DVI) considering eight drought vulnerability indicators. The product of DHI and DVI is the drought risk index (DRI). Drought risk assessment is carried out for the baseline (1980-2015), near-future (2021-2050) and far-future (2061-2100) periods for various climate and socio-economic scenarios.
Figure 2.
Figure 2. Datasets used for drought risk assessment. (a) Projected hydro-climatic variables such as monthly precipitation and monthly soil moisture are used for drought hazard assessment. (b) Projected drought vulnerability indicators such as irrigation index, waterbody fraction, groundwater availability, population, GDP and land use land cover, along with static drought vulnerability indicators such as slope and soil texture, are used for drought vulnerability assessment. Datasets for the projected period are divided into near future (2021-2060) and far future (2061-2100) to examine the evolution of drought risk.
Figure 4.
Figure 4. Transition of drought hazard from the baseline period to the projected period. The value in each cell represents the change in percent area of the country from one hazard class to another. Red shows transition, and blue represents no transition.
Figure 6.
Figure 6. Transition of drought vulnerability from the baseline period to the projected period. The value in each cell represents the change in percent area of the country from one vulnerability class to another. Red shows transition, and blue represents no transition.
Figure 8.
Figure 8. Transition of drought risk from the baseline period to the projected period. The value in each cell represents the change in percent area of the country from one risk class to another. Red shows transition, and blue represents no transition.
Figure 10.
Figure 10. Summary of drought risk evolution: percent area of the different classes of drought hazard, vulnerability and risk under various climate and socio-economic scenarios.
Financial support.
This research has been supported by the Department of Science and Technology, Ministry of Science and Technology, India (grant no. ECR/2017/000566) and the Department of Science and Technology, Ministry of Science and Technology, India (grant no. DST/CCP/CoE/140/2018).
Table 1.
Drought vulnerability indicators used for the drought vulnerability assessment. The sources for the indicators in the baseline and projected periods, along with their relevance and correlation with drought vulnerability, are presented. Representative indicators used to arrive at the drought vulnerability indicators for the projected period are also listed.
| 2022-05-10T00:42:18.325Z | 2023-02-09T00:00:00.000 | {
"year": 2023,
"sha1": "71a7745e01d524cbf5b1785a0988905c35c27dec",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.5194/nhess-23-623-2023",
"oa_status": "CLOSED",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "eac0b904f8261f34dbb60b5281d7a90a930bf3d6",
"s2fieldsofstudy": [
"Environmental Science",
"Geography"
],
"extfieldsofstudy": []
} |
259629459 | pes2o/s2orc | v3-fos-license | Implementation of the MDLC Method in the Pronounce Arabic (Makhorijul Huruf) Application Using Macromedia in PAUD Awwalussalaam
Education is one of the foundations in life that starts early. Early childhood education is a program that requires parents to introduce their children to various things, especially introducing Al-Qur'an recitation. This is shown as a coaching effort for children from birth to the age of six. In the learning process it often happens that a teacher has difficulty conveying material to students, especially introducing the Al-Qur'an. This is due to the relatively short time in the learning process. Therefore, the purpose of this study is to create a learning application using macromedia flash to introduce recitation of the Qur'an which can help teachers and parents of students in the learning process of children outside of learning hours. The method used in developing this application uses the multimedia development life cycle method with the Luther model which has six stages, namely, (1) Concept, (2) Design, (3) Material Collection, (4) Manufacture, (5) Testing, (6) Distribution. Starting from this problem, the method used can be one of the interactive learning materials designed to be as attractive as possible to arouse children's enthusiasm in learning the Pronounce Arabic (Makhorijul Huruf) of the Qur'an so as to improve the quality of learning and minimize the estimated time used in the learning process. The result of this research is an application to recognize the Makhorijul Huruf of the Qur'an which can assist teachers in conveying learning with new and interesting methods.
Background
The development of information technology today is rapid. Information technology has a major influence on various aspects of life, especially in the field of education. In recent years, various mobile devices have appeared with embedded features such as image processing, video, document processing and so on (Harahap, 2021), among them letter identification, Pronounce Arabic (Makhorijul Huruf), and Tajwid. Before reading the Qur'an, students should be able to distinguish the sounds of the hijaiyah letters, also known as Makhorijul Huruf. However, during this pandemic, teachers of Al-Quran reading have had to think carefully about the Al-Quran learning media itself (Saputra, 2018). According to Ahmad Suryadi, technology is a means, tool or method used to convey messages and solve problems through knowledge in order to achieve a certain goal, and it has become a separate scientific discipline. Technology in education can change conventional learning methods into non-conventional ones. Technology is also often assumed to refer to electronics or technical equipment, but basically educational technology has a very broad meaning, especially in changing learning methods to be more efficient (Nurdyansyah and Qorirotul, 2017). Education is the most important thing for every country to be able to develop rapidly. A great country will put education as its first priority, because with education, poverty among the people of the country will be replaced by prosperity. However, in its development, education in Indonesia has always had to face several problems at every stage (Budiman, 2019). Reading can be defined as perceiving a written text in order to understand its content. The text is a piece of naturally occurring spoken or written record identified for the purposes of a definable communicative function (Khairuddin, 2014). A learning method that is able to change the situation to be more interesting and efficient is the biggest hope that teachers have, especially at PAUD Awwalussalaam. The limited time that teachers have to teach the Makhorijul Huruf of the Qur'an to children is the biggest challenge they face.
According to Yetti Rahally, in the journal article she wrote, the learning process in an institution will be well directed if it has clear guidelines for the implementation of appropriate learning. The first problem, namely that students find it difficult to absorb the material delivered by educators, is due to learning media still being limited to books that present dense material with an uninteresting appearance, as well as the many questions and assignments given by educators, which make students bored of learning (Hotimah, 2021). Interesting learning can increase the comprehension of early childhood in the learning process. The problem that occurs at PAUD Awwalussalaam is not only that the time for delivery in learning is limited, but also that the method is not interesting enough. Implementing interesting learning requires a medium in the form of an application that attracts children's attention. This application that introduces the Makhorijul Huruf of the Qur'an aims to minimize the estimated learning time given by the teacher so that the material can also be studied at home. Children's interest in the learning process is one of the reasons for making this application, so that children can have an interest in learning. The Al-Qur'an is the main guideline in Islam. The Al-Quran consists of 30 Juz, 114 Surahs, and 6236 Verses. In reading the Al-Quran there are factors that affect the level of reading ability (Junaedi, 2019; Lidianti, 2022). The use of the Multimedia Development Life Cycle method is one of the ideas that will be applied to overcome the problems that occur, to make the application attractive and efficient. Arif Rinaldi (Dikananda et al., 2021) wrote in his journal article that the Multimedia Development Life Cycle method consists of six stages, namely concept, design, material collecting, assembly, testing, and distribution. According to Luther Sutopo's theory, written in the book "Authoring Interactive Multimedia", the stages of this method do not have to be done sequentially but can be done in parallel, with the planning stage having to be started first. The development of learning media is a field that should be mastered by any professional teacher. Teachers' awareness of the importance of learning media development must be improved in the learning process (Handayani, 2018). Starting from this problem, the method used can produce interactive learning materials designed to be as attractive as possible to arouse children's enthusiasm for learning the Makhorijul Huruf of the Qur'an, so as to improve the quality of learning and minimize the estimated time used in the learning process. The result of this study is an application for the introduction of the Makhorijul Huruf of the Al-Qur'an that can assist teachers in conveying learning with new and interesting methods.
Problem Statement
The new educational paradigm indicates that the purpose of learning is not only to change student behavior, but also to shape the character and mental attitude of professionals who are oriented towards a global mindset. Ways of learning can be planned by the teacher through innovative learning (Safitri, 2013). The Muslim community considers the Qur'an a great holy book, containing very important values that can be used as examples or guidelines for human life and the environment. All Muslims believe that if you want to obtain a peaceful, noble, happy, and prosperous life, then it is mandatory to practice the values contained in the Qur'an. However, one must not only practice the values; the reading must also be fluent and correct, in accordance with the rules. The rules that must be considered are the science of recitation, makhārij al-ḥurūf (the points of articulation of the letters), and gharib (foreign readings in the Qur'an). The most important rule is to read the Qur'an with tartil (Amrullah, 2022). The lack of time for teaching the introduction of the Al-Qur'an Makhorijul Huruf to children at PAUD Awwalussalaam has made some parents protest, and quarrels have even occurred, because the children being taught by their teachers could not yet pronounce the Makhorijul Huruf properly and correctly. Various studies on the use of learning media in the teaching and learning process have concluded that students' learning processes and outcomes show a meaningful difference between learning without media and learning with media (Nurhasanah, 2021). However, this happened not because of the teacher's carelessness; the short meeting time of once every two weeks made this happen. It would be unfortunate if this problem is not addressed immediately, as it would make the children who study focus too much on general knowledge compared to learning the Makhorijul Huruf of the Qur'an. Based on the background described above, the following problem is obtained: c. How to implement the Al-Qur'an Makhorijul Huruf recognition application at PAUD Awwalussalaam?
Research and Study Purposes
Research conducted at PAUD Awwalussalaam is intended to provide solutions to the problems that occur, in order to have a positive impact on the children as well as on their parents and teachers as facilitators. The expected goals of the research at PAUD Awwalussalaam are as follows: a. To explain the design of an application for recognizing the Makhorijul Huruf of the Qur'an using the Multimedia Development Life Cycle method, which can increase the comprehension of children at PAUD Awwalussalaam in the learning process. b. To apply the Multimedia Development Life Cycle method in the process of testing the application for recognizing the Makhorijul Huruf of the Qur'an at PAUD Awwalussalaam. c. To make an application for recognizing the Makhorijul Huruf of the Qur'an that utilizes interactive media to make it easier for young children to learn.
Scope of Problem
Based on the research background and problem formulation, this research has the following limitations: a. The design of the application covers the introduction of Makhorijul Huruf, including Fathah and tanwin, as well as learning-while-singing and learning-while-playing approaches. b. All development of the application for the recognition of the Al-Qur'an Makhorijul Huruf at PAUD Awwalussalaam uses the Multimedia Development Life Cycle method. c. This application only covers the introduction of the Makhorijul Huruf of the Al-Qur'an at PAUD Awwalussalaam.
METHODS
The method used to develop the application for recognizing the Makhorijul Huruf of the Qur'an is the Multimedia Development Life Cycle (MDLC), a multimedia product development cycle that begins with product analysis, continues with product development, and ends with the launch stage. Despite having the same development roots as the Software Development Life Cycle (SDLC), the MDLC has unique characteristics related to the development and use of multimedia elements.
Concept
At this stage, the purpose of the application and its intended users are determined. The application's system requirements, such as the concept to be made, are also determined at this stage. The purpose of this application is to introduce the Makhorijul Huruf of the Qur'an to early childhood; it will be made as attractive as possible so that children are interested in it, by adding a singing menu and also a playing menu. The description of the concept of the developed Al-Qur'an Makhorijul Huruf recognition application can be seen in the following table.
Design
According to Susilo et al. (2021), design in multimedia is a stage where the specifications made contain several aspects, including application architecture, style, appearance, and the material requirements for the application being made. In this research, a design was carried out to make the application easier for users to use. The design of the Makhorijul Huruf recognition application is made in the form of a navigation structure that describes the relationship between menus in a hierarchical form. The navigation structure of this application can be seen in the following Figure 2: Figure 2. Application navigation structure.
Material Collecting
At this development stage, materials are collected according to needs. The planned materials to be made and collected are 2D objects, along with audio and other supporting materials such as Muslim-themed animations, so that the resulting application will attract children's attention and not be boring.
Assembly
The assembly stage covers the manufacture of the multimedia objects or materials in the application being developed. At this stage, the objects and multimedia materials are assembled into an application.
Testing
This stage aims to ensure that the application that is created and developed is free from errors. The test takes the form of questions asked of parents and teachers to calculate the percentage feasibility of this application when used by children.
Distribution
Distribution is carried out to spread and deliver the finished product, which has gone through the testing phase, to users. Distribution takes place through the WhatsApp group and also other social media.
RESULTS AND DISCUSSION
The designed results include a storyboard and user interface as references for making the application. The following is the storyboard flow of the Al-Qur'an Makhorijul Huruf recognition application, shown in Table 2. Table 2. Application storyboards.
Scene
User Interface Storyboard 1.
This scene displays the loading view when entering the first page of the application. In addition, on this page there is a voice prompt reminding the user that they have entered the application's home page, with the words "Back Again at MAJI, let's recite the Koran".
2.
This scene shows the login view. Apart from the user interface and storyboard, the following displays explain the application's menus, such as the login menu, main menu, hijaiyah menu, hijaiyah content, singing menu, playing menu, and quiz menu. The following is an explanation of the menu that functions as the main menu in the Al-Qur'an Makhorijul Huruf recognition application.
Display Menu Login
This is the menu for logging in: before the user can go to the main page, the user must log in first. This login menu contains the logo of HIMPAUDI and also the title of the application itself, namely MaJi, which stands for Mari Koran. On the login page, the user must fill in a username and password; if the user does not have a username and password, the user must first create an account by clicking "Don't have an account". If the user forgets the password, the user must first delete the old password via email in order to create a new password. The display of the login menu is shown in Figure 3.
Display Main Menu
This main menu serves to display the options that users want to use when learning. The main menu contains 4 options, namely the Koran button, sing, play, and quiz, as illustrated in the main view of the application.
Hijaiyah Menu Display
As explained above, the user is free to choose whichever menu they want. The following is an example of choosing the hijaiyah menu, which contains the hijaiyah letters, as shown in Figure 5.
View of Hijaiyah Content
If the user clicks on one of the hijaiyah letters, the clicked letter will appear. For example, if the user presses the letter Alif, the letter Alif is shown with an explanation of what the sound of the letter Alif is like, and it will appear as shown in Figure 6.
Display of Sing Menu
The sing menu displays two options when the user opens it: singing with the hijaiyah letters only, or singing what can be called the nadhom of Makhorijul Huruf. The display of the sing menu can be seen in Figure 7.
Play Menu Display
The play menu is a menu that offers simple games to entertain the user after understanding the existing material. This menu contains three options corresponding to the recitation menu above. Each option has levels that users can choose, from level 1, which is still light, to level 3, which is quite heavy. In this play menu, the user can also collect points as practice for answering the questions in the quiz menu. The display of the play menu, including an example shown when a level in the play menu is clicked, can be seen in Figure 8.
Display Menu Quiz
The image below shows an example of a quiz menu display that contains questions that will be worked on by students. | 2023-07-11T15:59:11.556Z | 2023-06-21T00:00:00.000 | {
"year": 2023,
"sha1": "a0c3785979113e53b3a1702365eef26948ff2a0a",
"oa_license": "CCBYSA",
"oa_url": "https://ejournal.upi.edu/index.php/Edsence/article/download/56955/pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "fdb554f93605e44e62b66fde8eeb7c2a86a8cdee",
"s2fieldsofstudy": [
"Education",
"Linguistics",
"Computer Science"
],
"extfieldsofstudy": []
} |
247428929 | pes2o/s2orc | v3-fos-license | Prediction of Communicative Disorders Linked to Autistic Spectrum Disorder Based on Early Psychomotor Analysis
This systematic review evaluated psychomotor differences between children with and without siblings who have autism spectrum disorder (ASD), as well as the most reliable psychomotor skills that can help predict ASD and its associated language disorders. Literature from 2005 to 2020 was searched using the following databases: PubMed, Trip Medical Database, Cochrane, Web of Science, Science Direct, and Brain. A total of 11 papers were included. Fine motor skills and joint attention displayed reliable results for predicting ASD and its associated language disorders. The period between the first and the second year of life was considered the most appropriate for the assessment of psychomotor skills. The best period to predict language disorders and an ASD diagnosis is around 36 months of age.
Introduction
Autism spectrum disorder (ASD) is a neurodevelopment disorder characterized by qualitative difficulties in speech and social interaction areas, as well as restricted and repetitive interest ranges [1].
Quite possibly, the first reference to this disorder goes back to the 16th century, when Johannes Mathesius (1504-1565) wrote the story of a twelve-year-old child with severe symptoms that resembled autistic features [2]. In 1934, Leo Kanner would, for the first time, define children with this disorder as "children [who] have come into the world with an innate inability to form the usual, biologically provided contact with people" [3,4].
Nowadays, the International Classification of Functioning, Disability, and Health (ICF), developed by the World Health Organization (WHO), defines the ASD disorder in their latest version of their International Classification of Diseases (ICD-11) as a disorder "characterized by persistent deficits in the ability to initiate and to sustain reciprocal social interaction and social communication, and by a range of restricted, repetitive, and inflexible patterns of behavior, interests or activities that are clearly atypical or excessive for the individual's age and sociocultural context" [5].
Meanwhile, the American Psychiatric Association (APA) classifies ASD and its diagnostic criteria in the fifth edition of its Diagnostic and Statistical Manual (DSM-V). In this new update, the previous diagnoses of childhood disintegrative disorder, pervasive developmental disorder-not otherwise specified (PDD-NOS), Asperger syndrome, and autistic disorder fall under the term ASD [6]. Regarding its diagnosis, the DSM-V establishes the following criteria: persistent deficits in social communication and social interaction; restricted, repetitive patterns of behavior, interests, or activities; symptoms must be present in the early developmental period; symptoms cause clinically significant impairment in social, occupational, or other important areas of current functioning; and these disturbances are not better explained by intellectual disability or global developmental delay. The Autism Diagnostic Observation Schedule (ADOS-2) and the Autism Diagnostic Interview-Revised (ADI-R) [23] are among the most widespread tools for diagnosis. The ADOS-2 is a standardized observational tool divided into five modules, which are adapted to the age and/or language development level of the subject at the time of the test. The ADI-R tool is designed to detect ASD through personal interviews with family members or carers of a subject from 2 years of age [24].
Based on the risk that genetic inheritance represents for children who have close relatives with ASD, this systematic review suggests a possible relationship between the development of early psychomotor skills during the first three years of life and the likelihood that children with brothers and/or sisters with ASD will have language disorders and/or ASD. This proposal is based on the high risk of ASD that these children present due to etiological genetic factors.
This initial goal leads to the formulation of a PICO question to guide the intervention, which should in turn reveal whether an early analysis of the psychomotor skills of children at high risk of ASD can predict language disorders related to the disorder. To do so, motor, linguistic, and ASD diagnostic assessment scales will be used.
The goal of this systematic review is to determine whether there are differences during the first three years of psychomotor development in children at a high risk of having ASD, which could guide the prediction of potential speech deficits and the later diagnosis of ASD.
To achieve this goal, a systematic review will be carried out, taking into account not only the results extracted from the selected bibliography, but their methodological quality and their risk of bias.
Search Strategy
A systematic literature search was carried out in February 2020, focusing on articles published between January 2005 and February 2020 that might answer the aforementioned PICO question. The search strategy included both Spanish and English literature from PubMed, Trip Medical Database, Cochrane, Web of Science, Science Direct, and Brain. Specific terms were used. Those terms were determined before starting the literature search, based on the definitions the DSM-V gives for ASD, the goal of this systematic review, and the MeSH criteria. The selected terms were: (1) "autism spectrum disorder" AND "motor skills" AND "language disorder"; (2) "autistic disorder" AND "motor skills" AND "language disorder"; (3) "Asperger syndrome" AND "motor skills" AND "language disorder"; (4) "pervasive development disorder" AND "motor skills" AND "language disorder"; (5) "autism spectrum disorder" AND "psychomotor disorder" AND "language disorder"; (6) "autistic disorder" AND "psychomotor disorder" AND "language disorder"; (7) "Asperger syndrome" AND "psychomotor disorder" AND "language disorder"; (8) "pervasive development disorder" AND "psychomotor disorder" AND "language disorder".
Besides reviewing the articles in the selected literature, a secondary review of these articles' reference lists was carried out in order to assess their potential inclusion in the systematic review.
Eligibility
Studies were eligible if they were longitudinal observational analytical studies. Those studies assessed psychomotor performance in children for the development of a subsequent analysis of communicative skills, establishing a relationship between both data. Selected studies should evaluate any psychomotor skill and its relation with the child's communicative skills up to 36 months of age. Specific language disorders diagnosis tests and ASD diagnosis tests were accepted for the evaluation of communicative skills. Every chosen article should select subjects at high risk of ASD due to having siblings with a positive diagnosis.
Reviewing Method and Eligibility
The authors performed a peer review of the resulting articles based on their titles and/or abstracts. Every article likely to be included in the systematic review was analyzed in full text. The following inclusion and exclusion criteria were applied.
Inclusion Criteria
Case-control and cohort studies analyzing the connection between motor skills and language acquisition in high-risk (HR) children.
Subjects are no more than 36 months old by the last data collection record. HR subjects must have direct relatives with ASD; low-risk (LR) and control subjects must not have direct relatives with ASD. Articles evaluate psychomotor skills, as well as communicative skills, using validated assessment tools.
Exclusion Criteria
Subjects with any motor or communicative disorder other than ASD. Studies with no LR control group. Non-scientific or opinion articles. Articles not available in full text. The assessed outcome measures were: the age of the subjects at the time the study was concluded (≤36 months), the risk of having ASD (HR or LR) based on the existence of direct siblings with ASD, the evaluated psychomotor skills, the evaluated communicative skills, follow-up time, the assessment tools applied, methodological quality, the risk of bias, and the results and conclusions.
Assessment of Methodological Quality and Risk of Bias
This systematic review was developed according to the PRISMA statement for reporting systematic reviews [25].
The methodological quality of each analyzed piece of research was assessed using the Newcastle-Ottawa Scale (NOS) [26], which determines the methodological quality based on the content, design, and usability of the analyzed literature. The NOS comprises eight items, split into three different dimensions (selection, comparability, and exposure). Each item grants a maximum of one star, except for 'comparability', which grants up to two stars, making a total of nine stars [27]. The risk of bias was assessed using the Cochrane risk-of-bias tool for randomized trials (RoB 2), which checks random sequence generation, allocation concealment, blinding of participants and personnel, blinding of outcome assessors, incomplete outcome data, selective reporting, and other bias. Each item can be assessed as "high risk of bias", "low risk of bias" or "some concerns" [28].
Regarding language, four articles made a global analysis of communicative skills [29,30,33,36,37], and six carried out a specific analysis of expressive language [31–33,35,36,39]. The characteristics of these studies are summarized in Table 1. [Table 1 excerpts: fine and gross motor skills at 10 months old had a direct impact on expressive language skills (HR and LR) at 36 months old; poor motor skills had a trigger effect on both joint attention and language development in HR subjects; reliable predictions of language disorders could be made based on the early motor skills of HR subjects (NOS: 8/9). Choi et al. (NOS: 8/9): HR subjects showed lower marks in gross and fine motor skills, with fine motor skills data being more reliable; significant differences were found between the HRD and HRND/ND groups; only fine motor skills data at 6 months old was able to predict ASD severity at 36 months old based on the ADOS. Iverson et al. (NOS: 8/9): the HR group presented significant delays in the achievement of developmental milestones (independent stable sitting, posture, language development, rhythmic movements, and babbling); language reception and expression delays were seen in 64.2% of subjects at 18 months old.]
Qualitative Analysis
In order to present the results and to develop a proper qualitative analysis, each outcome measure included in the selected literature has been connected to language development.
Psychomotor Development and Language Development
Five articles made a general analysis of psychomotor skills and their relation to communicative skills and the language ability of the subjects [33–38]. All of them focused on the subjects' psychomotor development and on the capacity to make strong predictions linking language skills disorders and ASD diagnosis. Iverson et al. (2007) [34] presented conclusive results, associating fine motor skills at 6 months old with the prediction of ASD symptomatology at 36 months old. Iverson et al. (2007) [34] also proved the presence of receptive and/or expressive vocabulary deficits in 9 out of 14 HR 18-month-old subjects. Besides this, two subjects who presented gait milestone deficits, together with joint attention and first-word disorders, during the intervention subsequently received a positive ASD diagnosis. Landa et al. (2006) [35] found the worst motor and communicative skills results in the HR group with a later positive diagnosis (HRD), as well as slower psychomotor development between 12 and 24 months old. LeBarton et al. (2019) [36] described motor skills at 6 months old as a reliable predictive outcome measure for ASD at 24-36 months old; based on these data, expressive communication skills could also be predicted at 30-36 months. Leonard et al. (2015) [38] analyzed psychomotor development, showing that expressive deficits in ASD subjects could be predicted from fine and gross motor skills alone.
Gross Motor Skills, Fine Motor Skills, and Language Development
Bhat et al. (2012) [29] compared early gross motor skills between HR and LR groups, as well as their relation to language disorders. Results demonstrated that 78% of HR subjects scored lower in gross motor skills tests at 3 months old, in contrast with 33% of LR subjects. At 6 months old, low scores in gross motor skills tests were seen in 50% of HR subjects, in contrast with 8.3% of LR subjects. Half of the HR subjects presented both motor and communicative deficits. LeBarton et al. (2013) [36] confirmed a delay in fine motor skills development between 12 and 24 months old in 86% of HR subjects later diagnosed with ASD. A reliable prediction of expressive language at 36 months old based on fine motor skills at 12-18 months old was also confirmed in both LR and HR groups. Choi et al. (2018) [31] retrospectively classified HR subjects depending on whether they had a positive or negative diagnosis for ASD after the intervention (hereon HRD and HRND, respectively). Results showed worse fine motor skills scores in HRD subjects, in comparison with HRND and LR subjects, at 12 months old. Thus, fine motor skills proved to be a reliable predictor for detecting expressive language deficits. No correlative differences related to visual perception were found in any of the groups.
Motor Imitation, Joint Attention and Language Development
Bruyneel et al. (2019) [30] simultaneously analyzed motor skills, communicative skills, and joint attention. Joint attention was shown to play a relevant role in both the LR and HR groups, with HR subjects being more vulnerable to language disorders if they showed both motor and joint attention deficits at the same time. Edmunds et al. (2017) [32] evaluated motor imitation, expressive communication skills, and joint attention. Results showed that motor imitation skills at 12 months old predict expressive vocabulary at 18 months old.
Gait and Language Development
West et al. (2019) [39] carried out a specific assessment of motor skills in two different periods: the transition towards gait achievement, and gait achievement itself. The results of both periods were later related to their language prediction capacity. Post-intervention classifications were made, dividing HR subjects into HRD, HRND, and HR subjects with language disorders (HRLD). Significantly lower scores were achieved by the HRD and HRLD groups in every communicative outcome measure when compared to the LR and HRND groups.
A revised tool to assess risk of bias in randomized trials (RoB 2) was applied to each article. Six articles were found to have a "low risk of bias" [29,30,32–34,36], four articles were found to have "some concerns" [31,35,38,39], and one article was found to have a "high risk of bias" [37].
Discussion
The findings of this systematic review demonstrate that a reliable prediction of language disorders and/or ASD can be made based on the early psychomotor development of HR children. Diverse psychomotor skills have been assessed. Gross and fine motor skills have been the most specifically measured parameters [29,31,36]. Even in those studies whose purpose was not their precise assessment, gross and fine motor skills have shown significant outcomes for ASD prediction [33,35,38]. In this manner, Bhat et al. (2012) [29] proved that 78% of HR subjects achieved significantly worse scores in gross motor skills than their LR peers. In addition, a direct relationship between the development of motor skills and the prediction of the subjects' communicative skills was determined. These data are consistent with Leonard et al. (2015) [38], who suggest that expressive language at 36 months old is predicted by gross motor skills data at 7 months old.
As for the study of fine motor skills, more reliable data are produced in various studies owing to greater statistical significance. Some studies, like LeBarton et al. (2013) [36], reported a developmental delay in the acquisition of fine motor skills in 86% of HR subjects at 12-24 months old. Furthermore, a significant prediction of language development at 36 months old was made, based on fine motor skills acquisition data. In relation to these fine motor skills conclusions, some similarities were found in Choi et al. (2018) [31]. Their research proves significantly worse development of fine motor skills at 12 months old in the HRD group. Finally, Leonard et al. (2015) [38] demonstrated that language disorders at 36 months old can be predicted based on fine motor skills data. The data in these studies establish a direct link between deficits in early fine motor skills and the prediction of language disorders, especially in subjects at high risk of ASD.
Regarding joint attention, both Bruyneel et al. (2019) [30] and Edmunds et al. (2017) [32] affirmed that when deficits in joint attention and gross motor skills (Bruyneel et al., 2019) [30] or motor imitation (Edmunds et al., 2017) [32] are seen simultaneously in HR children, subjects are more likely to develop language disorders. These conclusions strengthen the presumption of joint attention as a pre-linguistic process, which is consistent with the fact that communicative development in HR children is altered.
Even though the article by Iverson et al. (2007) [34] was designed not for the specific analysis of gait development but for the study of the development of psychomotor skills, significant outcomes were found. According to Iverson et al. (2007) [34], 100% of HRD subjects exhibited delays in the achievement of gait milestones. By contrast, those gait milestones were assessed in a specific way by West et al. (2019) [39]. When assessments concurred with the main gait milestones, more difficulties in producing and comprehending words were found in HRD subjects, compared with their HR and LR peers. The data presented in both articles suggest that gait milestones are sensitive moments for the prediction of communicative disorders associated with ASD. In this manner, the development of motor skills in this specific period could also be relevant for the prediction and early diagnosis of ASD. This is so, in the first place, because HRD subjects show significantly worse gait skills and delays in their attainment (Iverson et al., 2007) [34] and, in the second place, because of the connection between this period and language development (West et al., 2019) [39].
In relation to the prediction of language in HR subjects, similar outcomes were obtained in each study. More specifically, prediction of expressive language tends to be more reliable than prediction of receptive language. This is because expressive language can be significantly predicted based on fine motor skills [31,36], motor imitation and joint attention (Edmunds et al., 2017) [32], first-word production (Iverson et al., 2007) [34], and gross motor and general motor skills [29,36,38]. These predictions are mostly made for subjects at 36 months old.
As for the age ranges analyzed in the literature, the period between 12 and 36 months old is considered to be the most decisive for the prediction of language disorders related to ASD, based on early psychomotor skills. Even though psychomotor skills assessments were made before age 12 months [29,30,33,36], most of the literature evaluated those skills after age 12 months [31,32,34–36,38,39].
It is important to emphasize that, in every article analyzed, outcome measures and ASD assessment tools can be combined to diagnose both ASD and related language disorders.
With reference to the methodological quality, and after the literature analysis, "representativeness of cases" was the most common item for which some articles did not obtain positive scores on the NOS scale. This is because the selection of the HR population for the studies is done in close collaboration with ASD associations and ASD support groups; thus, HR and LR subjects cannot be selected from the same source. Acceptable results were revealed for the rest of the items after the assessment and analysis with the NOS scale. Concerning the analysis of the risk of bias, a "high risk of bias" was found for one article [37]. This is caused by the second intervention of the study, in which only the HR group was admitted, excluding the LR group previously included in the first intervention.
During the literature search, only articles in Spanish and English were selected. This language restriction is therefore identified as a limitation of the systematic review.
Conclusions
After evaluating every psychomotor outcome measure included in the literature, fine motor skills have been identified as the most analyzed and reliable outcome measure for predicting expressive language disorders linked to ASD and its diagnosis in HR subjects. Likewise, the authors find it essential to emphasize the relevance of joint attention in the prediction of language disorders linked to ASD, especially because of the connection between this ability, language, and socio-emotional development. After deep analysis, 12-24 months old has proved to be the most reliable age range for properly evaluating psychomotor skills in order to predict ASD and related language disorders. Notably, it is from age 24 months onward that the best and most reliable outcomes are found. In this process, the importance of language development and gait milestones is also highlighted. In relation to the prediction of language disorders and ASD, data at 36 months old are the most reliable, in comparison with outcomes before that age.
"year": 2022,
"sha1": "a24dda55df629fd0d888c43193590361a5630b52",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2227-9067/9/3/397/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e081af0c7a8f145e87013b0a466fe05b7606c6c6",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Automatic Modeling for Concrete Compressive Strength Prediction Using Auto-Sklearn
Machine learning is widely used for predicting the compressive strength of concrete. However, the machine learning modeling process relies on expert experience. Automated machine learning (AutoML) aims to automatically select optimal data preprocessing methods, feature preprocessing methods, machine learning algorithms, and hyperparameters according to the datasets used, to obtain high-precision prediction models. However, the effectiveness of modeling concrete compressive strength using AutoML has not been verified. This study attempts to fill the above research gap. We construct a database comprising four different types of concrete datasets and compare one AutoML algorithm (Auto-Sklearn) against five ML algorithms. The results show that Auto-Sklearn can automatically build an accurate concrete compressive strength prediction model without relying on expert experience. In addition, Auto-Sklearn achieves the highest accuracy for all four datasets, with an average R² of 0.953; the average R² values of the ML models with tuned hyperparameters range from 0.909 to 0.943. This study verifies for the first time the feasibility of AutoML for concrete compressive strength prediction, to allow concrete engineers to easily build accurate concrete compressive strength prediction models without relying on a large amount of ML modeling experience.
Introduction
Concrete is a heterogeneous composite material comprising several materials with different properties (e.g., cement, water, and coarse and fine aggregates), which are mixed together [1,2]. Compared with other civil construction materials, concrete has the advantages of higher economy, plasticity, safety, durability, and so on. Therefore, it is widely used in projects such as housing construction, bridges, and roads. Compressive strength is an important indicator of concrete quality [3]. To ensure the safety of engineering construction, it is necessary to understand the development trends of concrete compressive strength during the planning, design, and construction stages [4]. Therefore, predicting the compressive strength of concrete is of great significance.
The compressive strength of concrete is affected by several factors. Studies have shown that it has a complex nonlinear relationship with the cement-mixing water ratio, the cement-aggregate ratio, and the gradation of aggregate particles [5,6]. In addition, in practical engineering, to achieve the two objectives of improving concrete strength and performance, certain admixtures are often added during the concrete preparation process, which also increases the complexity of concrete strength prediction [7–10]. The above complex conditions limit the accuracy of traditional empirical models and linear regression methods (LR) in the prediction of concrete compressive strength.
Machine learning (ML) algorithms have been widely applied for the compressive strength prediction of concrete, owing to their excellent nonlinear modeling abilities in complex problems [11–16]. The machine-learning-based prediction process for concrete compressive strength generally includes data preprocessing, feature preprocessing, ML algorithm selection, and hyperparameter optimization stages. Table 1 reviews the latest research on concrete compressive strength prediction from the perspective of the methods used in the various stages of the modeling process. As displayed in Table 1, in terms of data preprocessing methods, directly applying the raw data or using normalization [17] for prediction represent the primary methods. Owing to the powerful capabilities of ML algorithms, concrete researchers need not perform complex data preprocessing on concrete data [18,19]. In terms of feature preprocessing, most existing studies require human experts to analyze the factors affecting the compressive strength of concrete [20]. Algorithm selection and hyperparameter optimization constitute the focus of ML-based concrete compressive strength prediction research. In terms of model selection, the artificial neural network (ANN) [21–24], support vector regression (SVR) [11,25], random forest (RF) [13,26], adaptive boosting (AdaBoost) [27], Laplacian kernel ridge regression (LKRR) [28], light gradient boosting method (LGBM) [29], and extreme gradient boosting (XGBoost) [30,31] are widely used for concrete compressive strength prediction; however, different ML algorithms are suitable for different concrete datasets. For example, XGBoost performs best on steel-fiber-reinforced concrete datasets [32], and gradient boosting (GB) outperforms XGBoost [11] on recycled aggregate concrete datasets. This means that when dealing with new concrete datasets, concrete engineers must perform significant amounts of testing to select the optimal modeling method. In addition, several studies have integrated multiple ML models to develop models with higher accuracy. For example, on a concrete dataset containing recycled concrete aggregate (RCA) and ground granular blast furnace slag (GGBFS), the accuracy of an integrated model comprising LR and RF exceeds that of a single model [33]. In terms of hyperparameter optimization, the choice of hyperparameters significantly affects the performance of ML models. Therefore, to improve the modeling ability of ML algorithms, researchers have used grid search (GS) or metaheuristics to optimize hyperparameters [34,35]. For example, a hybrid of the SVR and GS models outperformed SVR [12] on a common concrete dataset.
To summarize, when required to use ML to build a compressive strength prediction model for a new type of concrete, concrete engineers must optimize the parameters of numerous algorithms in the ML algorithm library and test the performance of each on the new concrete dataset. In addition, to obtain a higher prediction performance, concrete engineers must consider the possibility of an ensemble of multiple ML models; however, complex combination testing is time-consuming and highly dependent on human expertise. Therefore, it is difficult for concrete engineers who lack experience in ML modeling to build an accurate concrete compressive strength prediction model. Concrete engineers with ML modeling experience spend significant amounts of time conducting comparative experiments to select the optimal model. Time consumption and reliance upon ML modeling experience slow down the development of new concrete materials and the application of predictive models. Hence, automated ML methods are urgently required to free concrete engineers from the complex and time-consuming process of ML modeling, so that they can focus on concrete material research. Automated ML (AutoML) is a research frontier at the intersection of automation technology and ML [41]. The goal of AutoML is to use computer programs to take over the complex algorithm selection and parameter optimization tasks of the ML modeling process, so that ML users can obtain accurate prediction models from their datasets end-to-end [42]. When using AutoML to build a concrete compressive strength prediction model, the data preprocessing, feature preprocessing, model selection, parameter optimization, and evaluation stages are encapsulated, and concrete engineers can automatically obtain a concrete compressive strength prediction model without focusing on the intermediate process. This greatly simplifies the concrete compressive strength modeling process and reduces the requirement of ML modeling experience. However, AutoML, as a new technology, has not been verified as a feasible approach for predicting concrete compressive strength.
To address the gaps in the existing research, this study makes the following three contributions.
(1) We conduct, for the first time in the literature, a feasibility study of AutoML for the prediction of concrete compressive strength. (2) We obtain a database (containing four types of concrete datasets) from the literature, and we conduct a comprehensive comparison of one AutoML algorithm (i.e., Auto-Sklearn) against five ML algorithms (ANN, SVR, RF, AdaBoost, and XGBoost), to verify the superiority of AutoML over ML. (3) We verify that Auto-Sklearn can automatically build an accurate concrete compressive strength prediction model without relying on expert experience, and that the resulting method is more robust than traditional ML methods.
The remainder of this paper is organized as follows: First, the principles of the proposed method are given in Section 2. Then, an experimental case study is presented in Section 3, to validate the effectiveness of the proposed method. Finally, conclusions are drawn in Section 4.
Materials and Methods
To improve the reproducibility and practical applicability of this work, this section presents detailed information on the materials and methods, including the constructed concrete compressive strength database, the AutoML algorithm, the five comparison ML algorithms, and the performance evaluation indices for concrete compressive strength prediction models.
Concrete Database
Most concrete databases studied thus far have contained only one type of concrete [33,43,44], which is not conducive to testing the performance of ML algorithms on multiple types of concrete. Hence, to test the robustness of the AutoML algorithm for predicting the compressive strength of various types of concrete, we collected four concrete compressive strength datasets via a literature survey. The database comprised four types of concrete: ordinary concrete, rice husk ash concrete, high-strength concrete, and machine-made sand concrete. The sample size, variable types, number of variables, and data distribution of the four datasets differed. All datasets were randomly divided into training and test sets at a ratio of 80%:20%.
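As a brief sketch, this 80%:20% split can be reproduced with scikit-learn; the file name, column name, and random seed below are illustrative placeholders rather than details reported in this study:

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Hypothetical file/column names; each dataset holds the mix ingredients
# as independent variables and the compressive strength as the target.
data = pd.read_csv("concrete_dataset.csv")
X = data.drop(columns=["compressive_strength"])
y = data["compressive_strength"]

# Random 80%:20% train/test split, as used for all four datasets.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.20, random_state=42
)
```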
Conventional Concrete Dataset
Conventional concrete (CC) is the most widely used building material for its purpose. The CC dataset adopted in this study was obtained experimentally by a research group at Chung Hwa University, Taiwan [45]. The dataset consisted of 1030 pieces of data, and each piece of data included eight independent variables and one dependent variable. The content range of each ingredient in concrete is listed in Table 2. Figure 1 shows the correlation matrix for the dataset.
Rice Husk Ash Concrete Dataset
A large amount of agricultural waste has been used as a substitute for cement to produce sustainable concrete, which helps to reduce greenhouse gas emissions. The agricultural-waste-based concrete dataset used in this study was the rice husk ash concrete (RHA) compressive strength prediction dataset [46]. The dataset comprises 192 pieces of data, and each piece of data includes six independent variables and one dependent variable. The content range of each ingredient in the RHA is shown in Table 3. Figure 2 shows the correlation matrix for the dataset.
High-Strength Concrete Dataset
High-strength concretes (HSCs) are widely used in the modern construction industry because of their superior strength and durability. The HSC compressive strength prediction dataset [47] used in this study consists of 357 pieces of data, each of which includes five independent variables and one dependent variable. The content range of each ingredient in the HSC dataset is listed in Table 4. Figure 3 shows the correlation matrix for the dataset.
Concrete with Manufactured Sand Dataset
Artificial sand made from crushed stone or gravel, also known as machine-made sand, artificial sand, or gravelly sand, has been used as a substitute for natural sand in concrete, to preserve limited natural sand resources. Concrete with manufactured sand (MSC) has gradually become an indispensable green building material. The MSC dataset [48] used in this study comprises 280 pieces of data, each of which includes 11 independent variables and one dependent variable. The content range of each ingredient in MSC is listed in Table 5. Figure 4 shows the correlation matrix for the dataset.
AutoML Algorithm
AutoML is a current research frontier in the computing community. The goal of AutoML is to automatically select the optimal data modeling pipeline in the data preprocessing, feature preprocessing, model selection, and hyperparameter optimization stages of the ML modeling process, without human intervention or time delays. Figure 5 shows the difference between AutoML and ML for the prediction of concrete compressive strength.
Mathematical Model
In the mathematical description of AutoML, the dataset is denoted by $D$ and is divided into the disjoint training set $D_{train}$ and validation set $D_{valid}$. The configuration space of data preprocessing methods is $DP$, and each data preprocessing method can be defined as $dp \in DP$; the configuration space of feature preprocessing methods is $FP$, and each feature preprocessing method can be defined as $fp \in FP$; the configuration space of ML algorithms is $M$; each ML algorithm $m \in M$ has $N$ hyperparameters, and its hyperparameter space is $H = h_1 \times h_2 \times \ldots \times h_N$ (where each $h_i$ can be an integer, real, floating-point, or label value); the hyperparameters of each $m$ can be defined as $h \in H$. The evaluation function for calculating the loss is defined as $Score$, and the optimal data pipeline is $P$, where

$$P = \operatorname*{argmin}_{dp \in DP,\; fp \in FP,\; m \in M,\; h \in H} Score(D_{train}, D_{valid}, dp, fp, m, h) \quad (1)$$

According to Equation (1), when the configuration spaces of $DP$, $FP$, and $M$ are known, it is only necessary to input the training set $D_{train}$ and validation set $D_{valid}$, because the optimal pipeline can be obtained by minimizing the model's error on the validation set. In addition to model and parameter selection, Equation (1) also considers the data preprocessing and feature preprocessing links in the ML pipeline; thus, the above problem can be defined as a generalized joint optimization problem of combined algorithm selection and hyperparameter tuning (CASH) [49]. The construction of the configuration space and the optimization of the generalized CASH problem are therefore the key steps in realizing AutoML.

Auto-Sklearn
Auto-Sklearn [50] is the current "state-of-the-art" algorithm in AutoML research [51]. Auto-Sklearn first incorporates the entire ML pipeline design problem (including its structural design and hyperparameter configuration) into a custom hyperparameter space; then, it uses a Bayesian optimizer to solve the generalized CASH problem in this new hyperparameter space, to obtain the optimal predictive model. In addition, Auto-Sklearn integrates two techniques to further improve algorithm performance. First, a meta-learner is used to obtain the initial configurations according to prior information, to improve the efficiency of the algorithm; second, a model integrator is used to combine multiple ML pipelines to improve the algorithm's accuracy. The Auto-Sklearn algorithm framework is shown in Figure 6; it consists of a configuration space, a Bayesian optimizer, a meta-learner, and a model integrator.
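To make the generalized CASH formulation of Equation (1) concrete, the following is a minimal illustrative sketch in Python; it is not Auto-Sklearn's actual implementation (which uses Bayesian optimization, meta-learning, and ensembling), but a naive random search over a tiny joint space of data preprocessors, ML algorithms, and hyperparameters, scoring each candidate pipeline on a held-out validation split:

```python
import numpy as np
from sklearn.base import clone
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

# Toy configuration spaces mirroring DP, M, and H in Equation (1).
DP = {"standard": StandardScaler(), "minmax": MinMaxScaler()}
M = {
    "rf": lambda h: RandomForestRegressor(n_estimators=int(h), random_state=0),
    "svr": lambda h: SVR(C=float(h)),
}
H = {"rf": [50, 100, 200], "svr": [0.1, 1.0, 10.0]}

def solve_cash(X, y, n_trials=20):
    """Naive random-search 'solver' for the generalized CASH problem."""
    X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.2, random_state=0)
    best_loss, best_pipe = np.inf, None
    for _ in range(n_trials):
        dp = rng.choice(list(DP))   # sample a data preprocessing method dp
        m = rng.choice(list(M))     # sample an ML algorithm m
        h = rng.choice(H[m])        # sample a hyperparameter value h
        pipe = Pipeline([("dp", clone(DP[dp])), ("model", M[m](h))])
        pipe.fit(X_tr, y_tr)
        # Score(D_train, D_valid, dp, fp, m, h): validation loss to minimize.
        loss = mean_squared_error(y_va, pipe.predict(X_va))
        if loss < best_loss:
            best_loss, best_pipe = loss, pipe
    return best_pipe
```

A feature preprocessing stage ($fp$) could be added as a third pipeline step in the same way; Auto-Sklearn's contribution is to search such a space efficiently with a Bayesian optimizer rather than exhaustively or at random.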
Machine Learning Algorithms
We selected five ML algorithms (ANN, SVR, RF, AdaBoost, and XGBoost), widely used for concrete compressive strength prediction, as comparison algorithms for AutoML. This section briefly reviews the principles of the five ML algorithms.
Artificial Neural Network
By simulating the structure and function of a biological neural network (the brain), an ANN connects a large number of artificial neurons to model complex relationships between data [52]. The focus of ANNs is to build artificial neuron models and network structures. For each artificial neuron, if we take the input values {X₁, X₂, …, Xₙ} and their weight coefficients {W₁, W₂, …, Wₙ}, and we further assume that the bias of the neuron is b, then the activity value of the neuron is a = (X₁ × W₁) + (X₂ × W₂) + … + (Xᵢ × Wᵢ) + … + (Xₙ × Wₙ) + b. To obtain the output value of the neuron, its activity value is passed through the activation function. ANNs are composed of many neurons designed according to the above rules and combined according to certain other rules.
Support Vector Regression
SVR was obtained by generalizing the support vector machine (SVM) from classification problems to regression ones [53]. The principle of SVR in data modeling is to identify a hyperplane that minimizes the distance to the sample point farthest from the hyperplane (whereas an SVM maximizes the distance to the sample point closest to the hyperplane). SVR transforms the search for this hyperplane into a convex quadratic programming problem and obtains the hyperplane by solving it, thereby realizing nonlinear data modeling.
Random Forest
The core idea of RF is to combine single-classifier decision trees (DTs), which suffer from overfitting and local convergence problems, into a multiple-classifier forest [54]. The bootstrap resampling method is used to extract multiple samples from the original samples, train a DT for each bootstrap sample, and combine these DTs, obtaining the final evaluation result by arithmetically averaging the predicted values of the single DTs. Assuming that x represents the inputs, y the prediction result of the RF model, n the number of DTs, and $y_i$ the prediction value of the i-th DT, the calculation formula for y is

$$y = \frac{1}{n}\sum_{i=1}^{n} y_i(x)$$
Adaptive Boosting
AdaBoost is one of the best-known boosting algorithms. Its core idea is to upgrade a weak classifier (one with a classification accuracy only slightly better than random guessing) into a strong classifier with high classification accuracy [55]. The AdaBoost algorithm uses multiple iterations. After each training round, it updates the weights of the samples in the dataset according to whether they were classified correctly and according to the accuracy of the previous classification, and it then sends the reweighted dataset to the next classifier for training. The classifiers obtained from the successive training rounds are fused, resulting in a classifier more accurate than any single weak classifier; this is used as the final decision classifier.
Extreme Gradient Boosting
XGBoost has been widely praised in academia and industry for its fast computational speed, good model performance, and excellent efficacy and efficiency in application practice [56]. XGBoost selects a DT as its weak learner. When training a single weak learner, it marginally increases the weight of the previous misclassified data, learns the current single weak learner, then adds a new weak learner to try to correct the residuals of all the previous weak learners; finally, the weighted summation of multiple learners is used to obtain the final prediction.
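As a sketch of how these five comparison algorithms can be instantiated in Python (scikit-learn plus the xgboost package), note that the hyperparameter values below are illustrative placeholders, not the tuned values reported later in this study:

```python
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor, AdaBoostRegressor
from xgboost import XGBRegressor

# Placeholder hyperparameters; the study tunes these via grid search.
models = {
    "ANN": MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0),
    "SVR": SVR(kernel="rbf", C=1.0, epsilon=0.1),
    "RF": RandomForestRegressor(n_estimators=200, random_state=0),
    "AdaBoost": AdaBoostRegressor(n_estimators=200, random_state=0),
    "XGBoost": XGBRegressor(n_estimators=200, learning_rate=0.1, random_state=0),
}

# X_train, y_train, X_test, y_test are assumed to come from the
# 80%:20% split sketched in Section 2.1.
for name, model in models.items():
    model.fit(X_train, y_train)
    print(name, model.score(X_test, y_test))  # .score() returns R²
```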
Root-Mean-Squared Error
The RMSE is generally used as a loss function in regression, and it can be defined as

$$\mathrm{RMSE} = \sqrt{\frac{1}{Z}\sum_{i=1}^{Z}(a_i - b_i)^2}$$

where $a_i$ is the predicted output value, $b_i$ is the actual value, and Z is the number of data samples. The higher the RMSE value, the larger the error is. Therefore, the RMSE value should be minimized to improve the performance of the model.
Mean Absolute Error
The MAE is the arithmetic mean of the absolute deviation, which can be expressed as

$$\mathrm{MAE} = \frac{1}{Z}\sum_{i=1}^{Z}|a_i - b_i|$$

The optimal value for the MAE is 0.0.
Coefficient of Determination (R²)
R² represents the level of accuracy. The higher the value of R², the higher the similarity between the predicted and actual values. R² ranges from 0 to 1 and is expressed as

$$R^2 = 1 - \frac{\sum_{i=1}^{Z}(Actual_i - Predicted_i)^2}{\sum_{i=1}^{Z}(Actual_i - \overline{Actual})^2}$$

where $Predicted_i$ is the predicted intensity of the i-th sample, $Actual_i$ is the actual intensity of the i-th sample, and $\overline{Actual}$ is the average of the actual intensities of all samples.
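The three indices can be computed with standard scikit-learn utilities; the following is a small helper assuming y_true and y_pred are arrays of actual and predicted strengths:

```python
import numpy as np
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score

def evaluate(y_true, y_pred):
    """Compute the RMSE, MAE, and R² evaluation indices defined above."""
    return {
        "RMSE": float(np.sqrt(mean_squared_error(y_true, y_pred))),
        "MAE": float(mean_absolute_error(y_true, y_pred)),
        "R2": float(r2_score(y_true, y_pred)),
    }
```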
Results and Discussion
To verify the efficacy of AutoML, this study first tested the AutoML algorithm on the constructed database and then tested the five ML algorithms. Finally, the results of AutoML and ML are discussed. All experiments were performed on a computer with an NVIDIA GTX 1080 graphics card (8 GB of video memory), 32 GB of RAM, and an Intel Core i7-6770 CPU. The algorithms used in the experiments were implemented in the Python programming language on an Ubuntu 16.04 operating system. The ML algorithms were implemented using the scikit-learn library (https://scikit-learn.org/) (accessed on 10 July 2022).
Concrete Compressive Strength Prediction Using AutoML
To verify the effectiveness of using AutoML for concrete compressive strength prediction, this study applied Auto-Sklearn, a representative AutoML algorithm, to conduct experiments on the four concrete datasets. The maximum runtime of each Auto-Sklearn run was set to 2.0 h. To prevent overfitting, ten-fold cross-validation [61] was used to calculate the optimizer score.
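A minimal sketch of the corresponding Auto-Sklearn setup is shown below, based on the auto-sklearn 0.x API (argument names may differ in other versions):

```python
import autosklearn.regression

automl = autosklearn.regression.AutoSklearnRegressor(
    time_left_for_this_task=7200,                 # 2.0 h maximum runtime
    resampling_strategy="cv",                     # cross-validation
    resampling_strategy_arguments={"folds": 10},  # ten folds, as above
    seed=0,
)
automl.fit(X_train, y_train)
automl.refit(X_train, y_train)  # refit on all training data after the CV search
y_pred = automl.predict(X_test)
```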
R² is an important index for evaluating concrete compressive strength predictions. To monitor the optimization process of Auto-Sklearn, we plotted the change curves of the training R², optimized R², and test R² for the optimal single model and the ensemble model during the training process, as shown in Figure 7. In the initial stage of optimization, each indicator shows a significant upward trend as the optimization progresses, which indicates that the performance of the model rapidly improves at this stage. Among the datasets, Auto-Sklearn reaches a high level after approximately 15 min of optimization on the CC dataset and then gradually converges. On the RHA, HSC, and MSC datasets, Auto-Sklearn converges within approximately 5 min, 30 min, and 3 min, respectively. The differences between the indices suggest that, for each dataset, the accuracy of the ensemble model (i.e., a weighted combination of multiple models) identified by Auto-Sklearn exceeded that of a single model, indicating that the ensemble model is more suitable for accurate concrete compressive strength prediction; this corroborates the findings of previous research [33]. In addition, the performance of the model obtained via Auto-Sklearn optimization on the training set exceeds that on the test set, which accords with the general laws of ML modeling.
After training, Auto-Sklearn obtained four ensemble models, each of which combined multiple ML pipelines with certain weights. Table 6 shows the detailed parameters of the four ensemble models; it can be seen that the ensemble models built for the CC, RHA, HSC, and MSC datasets consisted of 10, 9, 7, and 4 ML pipelines, respectively. Such complex combinations are difficult to construct manually, even for experienced concrete engineers. Table 7 shows the performance evaluation results of the four ensemble models on the test set: all the test R² values exceeded 0.9. Among the models, the R² value of the ensemble MSC model was the highest, reaching 0.991. The predicted and real results of the four ensemble models for the four concrete datasets are shown in Figure 8, further validating the high performance of the Auto-Sklearn algorithm.
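The composition of these weighted ensembles can be inspected directly from a fitted Auto-Sklearn object; a brief sketch follows (the exact output format depends on the auto-sklearn version):

```python
# List the ML pipelines in the final ensemble together with their weights,
# and print summary statistics of the search (runs performed, best score).
print(automl.show_models())
print(automl.sprint_statistics())
```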
To summarize, Auto-Sklearn can automatically build accurate compressive strength prediction models for various types of concretes.
Prediction of Concrete Compressive Strength Using Machine Learning
Five ML algorithms were used to conduct experiments on the four datasets. In the data preprocessing stage, no preprocessing was performed for most algorithms. In terms of feature preprocessing, the features used were those manually selected through expert experience. For ML algorithm selection, the ANN, SVR, RF, AdaBoost, and XGBoost algorithms were used. For hyperparameter selection, the GS method was used [28,32], to obtain the optimal performance of each algorithm.
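As an illustrative sketch of GS-based hyperparameter selection with scikit-learn (the grid below is hypothetical; the study does not report its exact search ranges):

```python
from sklearn.model_selection import GridSearchCV
from xgboost import XGBRegressor

# Hypothetical search grid for XGBoost.
param_grid = {
    "n_estimators": [100, 200, 500],
    "max_depth": [3, 5, 7],
    "learning_rate": [0.05, 0.1, 0.2],
}

gs = GridSearchCV(XGBRegressor(random_state=0), param_grid, scoring="r2", cv=10)
gs.fit(X_train, y_train)  # X_train, y_train from the 80% training split
print(gs.best_params_, gs.best_score_)
```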
Through the experiments, we obtained the hyperparameters adopted by each model and the results of the model performance evaluation. As can be seen from Table 8, for Datasets 1, 2, and 4, the multiple performance evaluation metrics of the XGBoost algorithm were optimal. For the HSC dataset, the ANN algorithm achieved the best concrete compressive strength prediction performance. Thus, on the one hand, XGBoost is the most robust ML algorithm for concrete compressive strength prediction among the five ML algorithms, though on the other hand, none of the ML algorithms tested in this study can be used to build an optimal compressive strength prediction model for all concrete datasets. To construct accurate concrete compressive strength prediction models, concrete engineers must extensively test multiple ML algorithms.
Comparison of Concrete Compressive Strength Prediction Using AutoML and ML
Box plots were used to summarize the test results of the representative algorithms under the AutoML and ML methods; the results are shown in Figure 9. Through comparison and analysis, we summarize the advantages of the AutoML representative algorithm, Auto-Sklearn, for building concrete compressive strength prediction models:
1. The accuracy of the Auto-Sklearn algorithm is higher. The multiple algorithm performance metrics presented in Figure 9 show that the Auto-Sklearn algorithm outperforms the five ML algorithms (ANN, SVR, RF, AdaBoost, and XGBoost) on all four datasets. This is because the Auto-Sklearn algorithm can both build complex ensemble models and optimize the entire ML pipeline (including data preprocessing methods, feature preprocessing methods, ML algorithms, and hyperparameters).
2. The Auto-Sklearn algorithm is more robust. By comparing the ranges of the box plots in Figure 9, it can be seen that the fluctuation range of each performance evaluation index of the Auto-Sklearn algorithm (applied to multiple datasets) is significantly smaller than that of the other five ML algorithms. Existing studies have shown that each machine-learning algorithm has a certain scope of application, and there is currently no ML algorithm that performs best on any given dataset [50]. The Auto-Sklearn algorithm can automatically identify the optimal machine-learning pipelines for the dataset in the configuration space and combine them. Therefore, the Auto-Sklearn algorithm is more robust.
3. The Auto-Sklearn algorithm can reduce the modeling time and the dependence on concrete engineer expertise. When building a compressive strength prediction model based on a new concrete dataset, concrete engineers must comprehensively compare multiple ML algorithms and exhaustively optimize the hyperparameters, which is very time-consuming. This study shows that the Auto-Sklearn algorithm can train an accurate concrete compressive strength prediction model within a short time. In addition, once the Auto-Sklearn algorithm is run, there is no need for manual intervention from the concrete engineer, which means that the concrete engineer spends very little time performing modeling. Meanwhile, the automated modeling process means that concrete engineers do not need machine-learning modeling experience and can therefore devote more time to concrete research.
4. The Auto-Sklearn algorithm has better scalability. More advanced ML algorithms can be integrated into the configuration space of the Auto-Sklearn algorithm (in particular, numerous ML algorithms that perform well in concrete compressive strength prediction), to satisfy more complex modeling requirements. Traditional ML algorithms can only improve the model performance in a limited manner, by tuning the hyperparameters.
Comparison with Related Work
Table 9 presents a comparison of the present method and several previous methods reported in studies regarding the use of ML for predicting the compressive strength of concrete. Through comparison, it can be concluded that the advanced nature of the proposed method lies in the following:
1. High degree of automation; no reliance upon human experience. To a certain extent, existing research relies upon expert experience to select the hyperparameters. The selection of the hyperparameters is important, but difficult. The present method facilitates automated modeling without relying upon expert experience.
2. Stronger robustness. The proposed method achieves accuracies greater than 0.9 (R²) on all datasets, and most of the accuracies approach or exceed those of well-tuned methods in existing studies.
The comparison also shows that the adopted method does not achieve the optimal performance on certain datasets; that is, Auto-Sklearn did not find the optimal model within the allotted time. One reason is that expertise produces higher accuracies than automated ML on certain datasets; for example, on the HSC dataset, existing studies have used RF to achieve higher accuracies, and the methods employed do not ensure that the parameters are optimal. On the other hand, the limitations of the adopted method's model library are also responsible; for example, the best model on the CC dataset is LKRR, and the best model on the RHA dataset is GEP, neither of which has yet been included in Auto-Sklearn's configuration space.
To summarize, the greatest significance of the method adopted in this paper is to simplify the modeling process for concrete compressive strength and reduce the dependence upon engineer expertise. To further improve the performance of the Auto-Sklearn algorithm, it is necessary to further expand the configuration space and improve the performance of the optimizer in the future.
Conclusions and Future Work
This study aimed to verify, for the first time in the literature, the feasibility of using AutoML for concrete compressive strength prediction. We first collected four different types of concrete datasets, introduced the principles of AutoML and a representative algorithm (Auto-Sklearn), and compared this representative algorithm against five ML algorithms. The following conclusions were drawn:
"year": 2022,
"sha1": "2f0892df7f2b38f765343489f5dd90941d12df8a",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2075-5309/12/9/1406/pdf?version=1662551039",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "5a2a83b69927e536b030a117a94f9280ef81af61",
"s2fieldsofstudy": [
"Engineering",
"Materials Science",
"Computer Science"
],
"extfieldsofstudy": []
} |
Estrogen deficiency – a central paradigm in age-related impaired healing?
Wound healing is a dynamic biological process achieved through four sequential, overlapping phases; hemostasis, inflammation, tissue proliferation and remodeling. For effective wound healing, all four phases must occur in the appropriate order and time frame. It is well accepted that the wound healing process becomes disrupted in the elderly, increasing the propensity for non-healing wound states that can lead to substantial patient morbidity and an enormous financial burden on healthcare systems. Estrogen deprivation in the elderly has been identified as the key driver of age-related delayed wound healing in both genders, with topical and systemic estrogen replacement reversing the detrimental effects of aging on wound repair. Evidence suggests estrogen deprivation may contribute to the development of chronic wound healing states in the elderly, but research in this area is somewhat limited, warranting further investigations. Moreover, although the beneficial effects of estrogen on cutaneous healing have been widely explored, the development of estrogen-based treatments to enhance wound repair in the elderly has yet to be widely exploited. This review explores the critical role of estrogen in reversing age-related impaired healing and evaluates the prospect of developing more focused novel therapeutic strategies that enhance wound repair in the elderly via activation of specific estrogen signaling pathways in regenerating tissues, whilst leaving non-target tissues largely unaffected.
BACKGROUND
Declining levels of estrogen in both genders with increasing age suggest that age-related impaired wound healing may result in part from the loss of protection that was once afforded by estrogen during youth. Indeed, estrogen treatments appear to reverse the detrimental effects of age-related impaired healing, resulting in accelerated wound repair in both genders. Despite these findings, the use of estrogen-based treatments to reverse delayed healing in the elderly has not been widely adopted outside research settings. Moreover, the potential role of the sex steroid hormones in chronic wounds remains unclear, but evidence suggests that being male is a risk factor for venous ulceration, whilst the use of hormone replacement therapy (HRT) by postmenopausal women appears to reduce the risk of venous ulceration (Bérard et al., 2001;Margolis et al., 2002). Furthermore, polymorphisms in the estrogen receptor-beta (ER-β) gene are associated with venous ulceration (Ashworth et al., 2008). Thus, it is feasible that estrogen deprivation may contribute to the development of chronic wound healing states in the elderly. The lack of extensive research in this area highlights the need for further investigations to explore the precise mechanisms by which estrogen deficiency may contribute to the development or progression of chronic wounds in the elderly. This review explores current knowledge in this field, highlighting the critical role of estrogen in reversing age-related impaired healing and prospects for developing more focused therapies in the form of local dressings that promote healing in the elderly via activation of specific estrogen signaling pathways in regenerating tissues, whilst leaving other non-target tissues in the body largely unaffected.
ACUTE WOUND HEALING
Acute wound healing is a complex and dynamic biological process divided into four sequential, overlapping phases; hemostasis, inflammation, tissue proliferation and remodeling of the tissue scar ( Figure 1). Immediately after trauma, degranulating platelets adhere to damaged blood vessels and start a hemostatic reaction, increasing the coagulation cascade and producing a fibrin clot to prevent extreme blood loss and provide a temporary protection for the wound against foreign bodies (Vaughan et al., 2000;Weyrich and Zimmerman, 2004;Gilliver et al., 2007). Platelets in the clot release a variety of pro-inflammatory cytokines and growth factors including platelet-derived growth factor (PDGF), transforming growth factor-beta (TGF-β), fibroblast growth factor-2 (FGF-2), vascular endothelial growth factor (VEGF) and epidermal growth factor (EGF) (Bauer et al., 1985;Guo and DiPietro, 2010). These cytokines, chemokines and growth factors attract inflammatory cells from circulation to the wound site, initiating the inflammatory phase. Neutrophils are the first inflammatory cells recruited from circulation (Ley et al., 2007). They peak in numbers at 24 to 36 hours post-injury (Dovi et al., 2004). Neutrophils remove foreign materials and invading microorganisms, such as bacteria, via the release of reactive oxygen species (ROS) and lysosomal enzymes, and degrade damaged matrix tissues by collagenases and proteinases (Mosser and Edwards, 2010). The majority of neutrophils are enclosed in the wound clot and are either eliminated with the eschar or by macrophages via phagocytosis (Newman et al., 1982;Kondo and Ishida, 2010).
In response to chemoattractants such as TGF-β, macrophage chemoattractant protein 1 (MCP-1), and macrophage inflammatory protein (MIP), monocytes from the bloodstream subsequently arrive at the wound area and differentiate into tissue macrophages, peaking in numbers around day 5 to day 7 post-injury (Lorenz and Longaker, 2008;Sen and Roy, 2008). Macrophages replace neutrophils as the predominant inflammatory cells at the wound site and carry out the process of phagocytosis of invading microorganisms, removal of damaged tissues and dead neutrophils, and the release of growth factors such as PDGF and TGF-β (Beanes et al., 2003;El Mohtadi et al., 2020). Damaged extracellular matrix is degraded by the action of macrophage-derived proteolytic enzymes such as metalloproteases. Macrophages also release growth factors that induce the proliferative phase including insulin-like growth factor-1 (IGF-1), keratinocyte growth factor (KGF), epidermal growth factor (EGF) and vascular endothelial growth factor (VEGF) (Shaw et al., 1990). Three to ten days after injury, tissue proliferation starts. It is characterized by the creation of new extracellular matrix (ECM) by fibroblasts, re-epithelialization (the restoration of an intact epidermis) by keratinocytes and angiogenesis (revascularization) by endothelial cells. The final phase is remodeling of the tissue scar, which can take several months or, in some cases, up to a year post-injury. It is characterized by the remodeling of collagen and the vascular maturation of newly formed capillaries, allowing vascular density to return to normal within the wound (Guo and DiPietro, 2010). For successful healing, wound repair requires progression through all four phases in the correct order and time frame (Singer and Clark, 1999;Guo and DiPietro, 2010).

Figure 1 (legend): Immediately after injury, healing initiates with hemostasis. This results in the formation of a fibrin clot within minutes following injury. The inflammatory phase overlaps with hemostasis and occurs within minutes after injury, with neutrophils being recruited from circulation, followed by monocytes. Monocytes undergo a series of changes to differentiate into tissue macrophages, which carry out phagocytosis and release cytokines that encourage the recruitment and activation of further leukocytes to the injury site and initiation of the proliferation phase. Three to ten days after injury, the proliferation phase starts enabling granulation tissue formation, re-epithelialization and angiogenesis. The final phase is the remodeling of a mature tissue scar, which can take several months or, in some cases, up to a year post-injury. 0 = day of wounding/injury (El Mohtadi, 2019).
AGING AND WOUND HEALING
With increasing age, acute wound healing proceeds but becomes delayed. This detrimental change in acute wound healing in the elderly is called age-related impaired healing and is linked with intrinsic cellular aging processes, including an elevated but delayed inflammatory response, reduced cell proliferation and migration, decreased extracellular matrix (ECM) production and increased enzymatic degradation of tissues leading to skin fragility (Thomas, 2001). Delayed wound healing in the elderly is associated with delayed hemostasis (Ashcroft et al., 1999), prolonged and excessive inflammation, delayed re-epithelialization, impaired angiogenesis and reduced matrix deposition (Figure 2) (Ashcroft et al., 1997b, 2002). Although the inflammatory response becomes more pronounced with increasing age, the propensity for wounds to become infected increases in the elderly (Ashcroft et al., 2002;Cooper et al., 2015), due to the delay in wound repair and the impaired ability of inflammatory cells to eliminate bacteria from the wound site (Emori et al., 1991;Thomas, 2001).
In contrast, chronic wounds are characterized by failure of tissue repair processes to proceed through an orderly set of wound healing phases within an expected time frame. Wounds are deemed chronic if they do not heal within three months and in many cases they can take several months or even years to heal (if they heal at all) (Mustoe, 2005;Adeyi et al., 2009). Chronic wounds typically affect the elderly (over 65 years of age) and arise from one or more underlying pathologies, with more than 90 % of chronic wounds being venous, diabetic or pressure ulcers (Boulton et al., 2005). Chronic wounds have major clinical implications and cause an enormous burden on healthcare services, in terms of medical effort and cost (Harding et al., 2002;Boulton et al., 2005). Chronic wound treatment costs the UK National Health Service (NHS) about £5 billion per annum (Guest et al., 2015).
At present, effective therapies/treatments for chronic wounds are somewhat limited, making this an area of research that needs urgent attention. Chronic wounds become trapped within the inflammatory phase of wound repair and are characterized by an excessive, unabated inflammatory response that leads to tissue breakdown (Snyder, 2005;Taylor et al., 2005;Fazli et al., 2009). A shift in the balance between the formation and degradation of ECM occurs, leading to ECM breakdown by destructive inflammatory mediators such as proteases (Edwards et al., 2004;Schönfelder et al., 2005). Chronic wounds also have defective macrophage function that leads to increased propensity of bacterial infection, decreased growth factor secretion, impaired angiogenesis and delayed re-epithelialization (Hohn et al., 1976;Harding et al., 2002;Frykberg and Banks, 2015).
ESTROGEN AND AGING
Endogenous estrogens are produced from cholesterol, initially by several enzymes to create androgens, such as testosterone and androstenedione, which are then converted to estrogens through the action of the P450 enzyme aromatase, in the endoplasmic reticulum of estrogen-producing cells (Payne and Hales, 2004). In adipose tissues, androstenedione is converted to estrone whilst in ovarian granulosa cells testosterone is converted into estradiol. Aromatase is found in many peripheral tissues such as skin, bone, adipose tissue, brain and vascular smooth muscle (Nawata et al., 1995;Simpson, 2000;Azcoitia et al., 2001;Ling et al., 2004). In females of reproductive age, systemic estrogen is produced mainly by the ovary. It is predominantly biosynthesized in granulosa cells of the ovarian follicles and the corpora lutea. In males, the gonad is the principal producer of systemic estrogen. However, a substantial amount of estrogen is also produced locally in peripheral tissues in both genders, acting in an autocrine and paracrine manner (Labrie et al., 1998). A significant amount of inactive steroid precursors including dehydroepiandrosterone (DHEA), its sulphate (DHEA-S), and androstenedione (4-dione) is produced by the adrenals and converted into active steroid hormones in peripheral tissues (Labrie et al., 1998). Several peripheral human tissues, such as adipose tissue, bone and skin, can produce active estrogens and androgens locally from conversion of adrenal-derived inactive precursors (Nelson and Bulun, 2001). Plasma DHEA-S is the major adrenal-derived steroid precursor, and levels in adult men and women are around 100 to 500 times higher than those of testosterone and as much as 1000 to 10,000 times higher than those of estradiol. Thus, inactive adrenal-derived steroid precursors provide a large circulating reservoir for conversion into potent sex steroid hormones in peripheral tissues. However, the sharp decline in DHEA and DHEA-S production by the adrenals during aging in both genders results in a dramatic fall in the synthesis of active androgens and estrogens in peripheral tissues, a phenomenon which could be associated with several age-related diseases (Labrie et al., 1998). Estrogen synthesized locally in peripheral tissues becomes progressively more important after the menopause in women, when systemic levels are lost (Picard et al., 2000). However, the rapid decline in local production of active estrogens with increasing age means peripheral estrogen production is insufficient to compensate for the loss in systemic estrogen levels in elderly women.
ESTROGEN RECEPTORS
Over the past decades, two nuclear estrogen receptor (ER) proteins have been identified, ER-alpha (ER-α) and ER-beta (ER-β), which are part of the nuclear receptor (NR) family. ER-α was first discovered in 1958 (Jensen and Jacobson, 1960) and is known to be predominant in reproductive tissues (Kuiper et al., 1997;Ali and Coombes, 2000;Campbell et al., 2010), whereas ER-β was first identified in rat prostate and ovary in 1996 (Mosselman et al., 1996) and predominates in peripheral, non-reproductive tissues (Kuiper et al., 1997;Ali and Coombes, 2000;Campbell et al., 2010). The biological effects of estrogens are largely mediated by the binding of estrogen to nuclear ER homodimers or heterodimers (Matthews and Gustafsson, 2003), and subsequent activation or repression of gene transcription (Paige et al., 1999). However, rapid, non-genomic estrogen signaling involving membrane-bound ER proteins has also been described (Gruber et al., 2002;Ascenzi et al., 2006). Recent research suggests estrogen can have direct effects on inflammatory cells, such as monocytes and macrophages, and skin-associated cells such as keratinocytes, due to the presence of nuclear and membrane-bound ER proteins (Weusten et al., 1986;Stimson, 1988;Cocchiara et al., 1990). The response of particular inflammatory cells depends on the local levels of estrogen and the maturity (stage of differentiation) of the cells (Ashcroft and Ashworth, 2003).
Estrogen signals predominantly by binding to inactive ER proteins in the nucleus of the cell (Klinge, 2000). ER proteins share a structure (Figure 3) that is typical of the NR family, consisting of six domains (A-F) (Kuiper et al., 1998;Klinge, 2000;Begam et al., 2017). ER proteins are expressed in skin, suggesting estrogen regulates skin function, maintenance and/or turnover (Ashworth, 2005). While ER-α and ER-β have 97 % homology in the C domain that acts as a DNA-binding domain (DBD), they only have 55 % homology in the E domain which forms the ligand-binding domain (LBD) (Barkhem et al., 1998;Webb et al., 1999;Klinge, 2000), enabling targeted ER activation using artificial ligands with ER-specific binding affinity.
When estrogen binds to ER proteins, they become activated and dimerize (Klinge, 2000). The DBD of each activated ER then binds to an estrogen response element (ERE) in the DNA of target genes and induces gene transcription (Kuiper et al., 1998;Klinge, 2000). In cells expressing a single ER subtype, homodimers of ER-α or ER-β are formed (Kuiper et al., 1998). In cells that express both ER subtypes, a heterodimer containing one ER-α and one ER-β may form (Kuiper et al., 1998). ER heterodimers and ER-α homodimers bind to DNA with a similar affinity. However, ER-β homodimers bind to DNA with a lower affinity (Kuiper et al., 1998).
Both ER-α and ER-β enhance aspects of acute wound repair but their roles are somewhat different; although ER-α regulates inflammatory cell activity, ER-β appears to modulate the overall wound healing response (Emmerson and Hardman, 2012). The delayed wound repair observed in ovariectomized mice can be reversed by stimulation of ER-β alone, whilst ER-α activation alone fails to enhance murine wound repair. Moreover, estrogen replacement therapy in ovariectomized mice lacking functional ER-β retards wound healing, suggesting ER-β may be critical to establishing prompt tissue formation during wound repair. In addition, a human study conducted by Ashworth (2005) indicates that polymorphisms in the 0N promoter region of the human ER-β gene are significantly associated with chronic venous ulceration in the British Caucasian population.
EFFECT OF ESTROGEN ON SKIN MAINTENANCE
It is commonly accepted that the age-related reduction in estrogen levels is linked with skin degeneration. However, most evidence in humans comes from studies performed in pre- and/or post-menopausal women. During pregnancy, skin syndromes such as psoriasis have been shown to improve, an effect that is directly linked to increased estrogen levels in the circulation (Boyd et al., 1996). Moreover, oral contraceptive pills are frequently used to treat severe acne. During the menopause, estrogen deficiency results in detrimental changes in skin appearance including sagging, wrinkling, dryness and fragility (Ashcroft et al., 1999;Shah and Maibach, 2001). These changes can often be reversed during the first 6 months of topical or systemic estrogen replacement therapy (Brincat et al., 1987).
There is a reduction in mainly collagen type III, but also type I to some degree, in the skin of post-menopausal women compared to pre-menopausal women, resulting in a decrease in the ratio of type III/type I collagen within the dermis that is associated with estrogen deficiency (Affinito et al., 1999;Horng et al., 2017). When applied locally to the skin of post-menopausal women, estradiol significantly increases the production of hydroxyproline, reflecting elevated collagen synthesis in the dermis (Albright et al., 1941;Affinito et al., 1999;Sator et al., 2001;Horng et al., 2017). Indeed, topical estrogen improves the external facial appearance of post-menopausal women by reducing skin sagging and wrinkling (Schmidt et al., 1994). Not only topical but also systemic estrogen supplementation conserves skin thickness by promoting dermal collagen deposition in post-menopausal women (Savvas et al., 1993;Sauerbronn et al., 2000).
It has also been reported that estrogen replacement therapy can improve skin elasticity by 5 % per year (Brincat et al., 1987). In line with this finding, topical estrogen supplementation improves the elasticity of ECM fibres in the dermis (Albright et al., 1941;Sator et al., 2001). Topical estrogen ointments notably increase the number and thickness of elastin fibres in the ECM, with histological examination demonstrating improved orientation and reduced fibre fragmentation in the dermis (Punnonen et al., 1987). Estrogen also promotes the synthesis of glycosaminoglycans in the ECM, restoring skin turgor and moisture levels (Brincat, 2000).
Topical estrogen application enhances the stratum corneum barrier function of skin in post-menopausal women and increases the rate of mitosis and turnover of epidermal cells (Stumpf et al., 1974). Estrogen also enhances the vascularization of the dermis. In terms of skin appendages, estrogen extends the life cycle of human hair follicles but retards hair growth and sebum secretion by sebaceous glands (Stumpf et al., 1974).
In summary, the age-related fall in the levels of estrogen detrimentally affects the maintenance and turnover of intact skin, whilst estrogen supplementation reverses these effects in the elderly by stimulating keratinocyte proliferation, increasing ECM deposition and quality, and enhancing skin turgor and moisture retention.
Research has demonstrated the key role of sex-steroid hormones in inflammation and the wound healing process (Guo and DiPietro, 2010;Gilliver et al., 2007). Estrogen has protective, anti-inflammatory properties in several tissues (Straub, 2007). Estrogen has also been reported to stimulate wound repair processes such as re-epithelialization and ECM production independently of its anti-inflammatory effects in elderly subjects of both genders (Ashcroft et al., 1997b). HRT-treated post-menopausal women heal acute wounds faster than their age-matched control counterparts who have taken no estrogen supplementation (Ashcroft et al., 1997b). Other reports indicate that topical estrogen supplementation enhances wound healing in elderly male and female patients, connected with a reduced inflammatory response (Ashcroft et al., 1997b, 1999). Differences in the human immune system between male and female subjects have been identified in several epidemiological and medical studies (McGowan et al., 1975;Bone, 1992), with evidence indicating that women have a superior immune system compared to men (Gulshan et al., 1990;Wichmann et al., 1996). Other experiments have indicated that estrogen has immune-enhancing properties during stress, including increased resistance to several pathogenic infections (Yamamoto, 1999).
Since systemic and peripheral estrogens decline with age, it is suggested that estrogen deprivation in the elderly could increase the propensity for chronic wounds. Margolis et al. (2002) performed a case-cohort study to investigate the protective effects of estrogen against chronic wounds. Patients aged over 65 years receiving HRT treatment were shown to be 30-40 % less likely to develop a venous leg ulcer than age-matched patients lacking HRT supplementation (Margolis et al., 2002).
Chronic wounds are characterized by an excessive and chronically prolonged inflammation. High levels of inflammatory mediators, including tumor necrosis factor alpha (TNF-α), interleukin-1 beta (IL-1β), IL-6, IGF-1 and matrix metalloproteinases (MMPs), that are present in chronic wound exudate (Ashcroft et al., 1997b, 1999) are downregulated via the action of estrogen (Vural et al., 2006;Straub, 2007;Wira et al., 2015). In particular, TNF-α is elevated in humans who are predisposed to chronic wounds and has been identified as a therapeutic target for impaired wound healing in the elderly (Ashcroft et al., 2012). Both systemic and topical estrogen treatments enhance wound healing in elderly men and women by stimulating re-epithelialization, angiogenesis, matrix deposition and wound contraction whilst dampening the inflammatory response and expression of pro-inflammatory cytokines and proteolytic mediators (Ashcroft et al., 1997b;Ashcroft and Ashworth, 2003;Thornton, 2013;Archer, 2012;Stevenson and Thornton, 2007).
Effect of estrogen on the inflammatory phase of wound healing
It is commonly known that age-related impaired healing is associated with an excessive and prolonged inflammatory response, linked with increased but delayed inflammatory cell recruitment, and increased secretion of pro-inflammatory cytokines such as TNF-α (Ralston et al., 1990;Pottratz et al., 1994). Moreover, TNF-α is elevated in elderly patients with venous ulcers compared to age-matched healthy controls, with the highest levels of TNF-α typically found in patients carrying polymorphisms of the promoter region of the ER-β gene that predispose to venous ulceration (Ashworth et al., 2008).
Recent research has indicated that chronic wounds are associated with elevated levels of elastase and MMPs, which are released by macrophages, keratinocytes and fibroblasts, and linked with excessive tissue destruction (Wysocki et al., 1993). Estrogen has been described to control and dampen the early inflammatory response during acute wound healing by inhibiting neutrophil infiltration to the wound via a reduction in the expression of cell adhesion molecules (Ashcroft et al., 1999;Sproston et al., 2018). Furthermore, estrogen increases the oxidative metabolism of neutrophils, suggesting estrogen deprivation could lead to diminished phagocytic capability of neutrophils, an increased risk of infection and a delay in healing (Magnusson and Einarsson, 1990). Estrogen has been shown to have a direct influence on monocytes and macrophages, due to their possession of both nuclear and membrane-bound estrogen receptor (ER) proteins (Weusten et al., 1986;Suenaga et al., 1996, 1998). In addition, 17β-estradiol has been reported to reverse the substantial delay in cutaneous murine wound healing induced by bacterial lipopolysaccharide (Crompton et al., 2016).
Increased levels of epidermal pro-matrix metalloproteinase-2 (pro-MMP-2) have been observed in intact aging skin, and the enzyme is immediately activated following cutaneous injury, explaining the reported rise in MMP-2 and ECM degeneration observed in the wounds of the elderly (Ashcroft et al., 1997a). In addition, research suggests that estrogen deficiency inhibits the differentiation of monocytes into tissue macrophages during the inflammatory phase of wound healing, leading to an increase in protease expression (Calvin et al., 1998a). Estrogen decreases tissue-damaging protease levels, including elastase and MMP secretion, leading to an overall increase in the content of collagen and fibronectin in the dermis (Ashcroft et al., 1999).
In skin, the anti-inflammatory effect of estrogen is predominantly mediated through inhibition of the pro-inflammatory cytokine macrophage migration inhibitory factor (MIF). MIF has been identified as a global regulator of estrogen-mediated wound healing and is released by monocytes, macrophages, neutrophils, endothelial cells and keratinocytes (Emmerson et al., 2009). Ashcroft et al. (2003) reported that mice with estrogen deficiency have higher MIF levels, resulting in an elevated inflammatory response and delayed wound healing, whereas MIF-null mice displayed enhanced wound healing, with lower inflammation and greater matrix formation. Estrogen downregulates MIF expression, leading to a decline in inflammation, enhanced matrix deposition, increased re-epithelialization and an overall accelerated wound repair.
Effect of estrogen on the proliferative phase of wound healing
Age-related impaired healing is linked with reduced growth factor expression, reduced keratinocyte proliferation and an increased response to inhibitory cytokines, causing delayed re-epithelialization in vivo (Butcher and Klingsberg, 1963;Rattan and Derventzi, 1991;Holt et al., 1992). Estrogen enhances the mitogenesis of keratinocytes and increases re-epithelialization in post-menopausal women (Ashcroft et al., 1997b). It has been reported that the rate of wound re-epithelialization in post-menopausal women treated with HRT for more than 3 months was similar to that in pre-menopausal females, whereas a non-HRT post-menopausal group showed diminished re-epithelialization. This improved re-epithelialization following estrogen supplementation is due to increased proliferation of epidermal keratinocytes (Raja et al., 2007).
In addition to its effect on epithelial migration and proliferation, estrogen indirectly affects matrix deposition by mesenchymal cells. Various in vivo animal studies report that estrogen increases fibroblast infiltration and collagen deposition. In contrast, a small number of studies report a decrease in fibroblast infiltration and collagen deposition following treatment with estrogen in mice (Lundgren, 1973;Pallin et al., 1975). Possible explanations for these contradictions include differences in the wound models, hormone concentrations and intervals of administration used. Furthermore, the duration of estrogen insufficiency results in distinct effects on several healing parameters; for instance, wound contraction becomes reduced after 4 months of estrogen deprivation whereas matrix deposition becomes reduced after only 1 month (Calvin et al., 1998b). In humans, topical estrogen supplementation in elderly men and women results in reduced wound size via stimulation of wound contraction (Ashcroft et al., 1999). Estrogen promotes PDGF expression by monocytes and macrophages (Mendelsohn and Karas, 1999), leading to mitogenesis and chemotaxis of fibroblasts and a subsequent increase in wound contraction and ECM deposition (Seppä et al., 1982). Estrogen also enhances the secretion of TGF-β1 by dermal fibroblasts in vivo (Ashcroft et al., 1997b, 1999), resulting in enhanced formation of ECM, particularly collagen deposition (Ashcroft and Ashworth, 2003).
Estrogen promotes angiogenesis, leading to increased granulation tissue (Iyer et al., 2012), through direct stimulation of endothelial cells (Rubanyi et al., 2002). Estrogen modulates the synthesis of IL-1 by tissue macrophages, a key protein implicated in the creation of new granulation tissue (Hu et al., 1988). Estrogen increases endothelial cell attachment to laminin, fibronectin and collagens I and IV in vitro. In addition, estrogen enhances the creation of capillary-like structures by endothelial cells when positioned on a reconstructed basement membrane (Morales et al., 1995). Paradoxically, other in vitro studies report a reduction in vascularity following stimulation with estrogen (Nyman, 1971;Lundgren, 1973). The precise effect of estrogen on angiogenesis remains unknown, and additional investigations are needed to define the impact of estrogen on vascularization in acute and impaired wound healing.
In summary, despite some contradictions in the literature, estrogen appears on balance to enhance most of the tissue formation occurring in the proliferative phase of wound healing, particularly re-epithelialization and ECM formation.
Effect of estrogen on the remodeling phase of wound healing
The age-related decline in estrogen levels causes a decrease in wound collagen and fibronectin in vivo. This has been associated with elevated levels of inflammatory cell-derived elastase, MMP-2 and MMP-9 (Herrick et al., 1997;Ashcroft et al., 1997a). Estrogen supplementation reverses the degradation of ECM by inhibiting the synthesis of wound proteases such as MMPs during wound remodeling (Ashcroft and Ashworth, 2003;Brincat, 2000).
Topical estrogen supplementation increases the deposition of collagen during the remodeling phase of wound repair in elderly patients (Ashcroft et al., 1997b, 1999). Previous animal studies report that 17β-estradiol increases the production of tissue inhibitors of metalloproteinases (TIMPs) by rabbit uterine fibroblasts, but reduces the production of procollagenase and pro-stromelysin (Sato et al., 1991). Another in vivo study reports that topical estrogen treatment increases collagen deposition in elderly males and females at 7 and 80 days post-injury (Ashcroft et al., 1999a). It was also noticed in other in vivo studies that matrix collagen deposition at 7 and 84 days post-wounding decreased in post-menopausal women lacking HRT treatment. In contrast, post-menopausal females who took HRT for more than 3 months had similar levels of matrix collagen deposition and wound remodeling to younger pre-menopausal females (Ashcroft et al., 1997b, 1999). Estrogen stimulates the expression of TGF-β1 in vivo, resulting in improved collagen deposition in the dermis (Ashcroft et al., 1997b). Reports suggest decreased wound collagen deposition associated with MMP-mediated collagenolysis in ovariectomized rats (Pirila et al., 2001). These effects were reversed by estrogen replacement, implicating estrogen as a pivotal mediator involved in shifting the balance from matrix degradation to matrix synthesis (Pirila et al., 2001). Interestingly, an in vivo study indicated that the quality of mature tissue scars was greater in post-menopausal women in comparison with pre-menopausal women, suggesting that estrogen enhances wound repair at the expense of scar quality (Ashcroft et al., 1997b).
FUTURE PERSPECTIVES FOR ESTROGEN THERAPIES
Although many effects of estrogen on wound healing were established in the past two decades, fewer recent developments have been made and there remain substantial areas for further investigation. It has been established that estrogen plays a fundamental beneficial role in skin maintenance and acute wound healing processes. Moreover, the systemic and peripheral decline in estrogen with increasing age suggests estrogen deprivation could be linked with chronic wounds in the elderly. However, systemic estrogen replacement therapy is an unfocused, biological sledgehammer rather than a targeted treatment strategy. Although estrogen is protective against photoaging, an extrinsic aging process that correlates with higher mortality rates from skin cancers in men than women (Weinstock, 1994;Miller and Neil, 1997), unopposed systemic estrogen replacement therapy is a risk factor for breast and endometrial cancer development, thereby restricting its exploitation in clinical practice. The widespread distribution of estrogen-responsive tissues exposes non-target cells to the potential hyper-proliferative and neoplastic effects of systemic estrogen therapies, suggesting either local estrogen or targeted therapies are needed. Interestingly, studies performed in vitro have shown that ER-β is the dominant partner in heterodimers, resulting in an ER-β-predominant effect with repressed ER-α transcriptional activity (Pettersson and Gustafsson, 2001). Thus, by modulating ER-α-mediated gene transcription, ER-β may decrease the overall cellular sensitivity to estrogen and provide protection against the hyper-proliferative and neoplastic effects of ER-α (Rollerova and Urbancikova, 2000). A clear understanding of tissue-specific regulation of ER expression and downstream cellular and molecular mechanisms of estrogen action might therefore enable controlled manipulation of estrogen signaling pathways during wound repair, potentially leading to the development of more targeted therapies with fewer side effects on non-target tissues.
Selective estrogen receptor modulators (SERMs) are ER-interacting molecules that have the ability to bind ER proteins and act as agonists in some tissues whilst acting as antagonists in others (Brzozowski et al., 1997;Cho and Nuttall, 2001). SERMs have been used clinically to promote the beneficial effects of estrogen in target tissues whilst reducing the detrimental effects of estrogen (e.g. increased risk of breast cancer) in non-target tissues (Mirkin and Pickar, 2015). Tamoxifen, raloxifene and the dietary phytoestrogen genistein are the most frequently documented SERMs in the literature. They are known to have estrogenic effects in numerous peripheral tissues, but are anti-estrogenic in breast tissue and are therefore used extensively in breast cancer research (Furr and Jordan, 1984;Morris and Wakeling, 2002;Park and Jordan, 2002;Mirkin and Pickar, 2015). Tamoxifen was approved by the Food and Drug Administration (FDA) in 1977 (Park and Jordan, 2002;Jordan, 2006;Mirkin and Pickar, 2015;Quirke, 2017). Tamoxifen binds to both ER proteins and its effect depends on the cell and tissue type, being anti-estrogenic in breast tissue and therefore commonly used to prevent and/or treat breast cancer in both post- and pre-menopausal females (Zidan et al., 2004;Quirke, 2017). Tamoxifen has also been reported to maintain bone density in rats and humans (Jordan et al., 1987;Zidan et al., 2004). However, it has multiple side effects and is frequently linked with endometrial cancer due to its estrogenic effects in the uterus (Kedar et al., 1994).
There have been some investigations of the effect of SERMs on skin and wound healing processes. Tamoxifen and raloxifene have been shown to stimulate fibroblast proliferation in vitro (Stevenson et al., 2009). While raloxifene improves skin elasticity and collagen deposition in post-menopausal females (Sumino et al., 2009), genistein has been reported to improve the vascularization of the dermis and counteract the loss of epidermal thickness typically observed in post-menopausal females (Moraes et al., 2009). Another study in mice indicated genistein stimulates wound healing via synthesis of TGF-β1 (Marini et al., 2010). Moreover, tamoxifen, raloxifene and genistein all significantly enhance wound healing in ovariectomized mice by stimulating re-epithelialization and dampening inflammation via activation of ER-β (Emmerson et al., 2010). However, existing SERMs have not yet been exploited in the treatment of chronic wound states.
The repurposing of existing pharmaceutical drugs, or the development of novel therapies that act as ER-specific ligands or exhibit tissue-specific estrogenic effects, delivered locally within specialized wound dressings, may have potential clinical applications in the treatment of chronic wound states in the elderly. Understanding the differential effects on downstream gene transcription or repression in various tissue/cell types may help develop more focused treatments for impaired wounds that can mediate specific estrogen-responsive signaling pathways in injured tissues whilst reducing unwanted side effects in non-target tissues.
CONCLUSION
The literature indicates estrogen deficiency is a central paradigm of age-related impaired wound healing in both genders, with topical and systemic estrogen replacement reversing the detrimental effects of aging on both wound repair and skin maintenance. There is growing evidence indicating estrogen deprivation may also contribute to the development of chronic wounds in the elderly but further research is needed in this area. Interestingly, although the beneficial effects of estrogen on wound repair have been widely explored, the development of estrogen-based treatments to promote healing has failed to gain traction to date, most likely due to undesired cellular activity (including hyper-proliferative/neoplastic effects) in non-target tissues. However, a rekindled interest may be stimulated by prospects of developing targeted therapeutic strategies that might promote healing through selective activation of estrogen-responsive signaling pathways in regenerating peripheral tissues, whilst leaving non-target tissues largely unaffected.
Conflict of interest
The authors declare no conflict of interest. | 2021-01-29T05:25:55.922Z | 2021-01-11T00:00:00.000 | {
"year": 2021,
"sha1": "5036e2df528469647c8ee05d7132326b5eec2662",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "5036e2df528469647c8ee05d7132326b5eec2662",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
236143216 | pes2o/s2orc | v3-fos-license | Influence of Thickness on the Structural, Morphological and Optical Properties of Co-doped TiO2 Thin Films Prepared by Sol-Gel Method
TiO2-based materials have high strength and suitable electronic properties that make TiO2 widely used. In this research, Co-doped TiO2 thin films were created through the sol-gel spin-coating method. The deposition process was conducted 3 times to prepare 1 to 3 layers. The structural, morphological, and optical properties of Co-TiO2 thin films were explored by XRD, SEM, and UV-VIS analyses. The prepared films were amorphous without a crystalline structure. SEM images demonstrate highly uniform particles on the surfaces. With the rise of thickness, nanoparticles get closer, and the particle size decreases. EDS spectra verify the existence of Ti, O, and Co in all samples. The transparency of thin films was reduced by increasing the thickness. Bandgap energy decreased with increasing the deposition layers, while Urbach energy increased.
Introduction
Suitable electronic properties, low toxicity, and high strength have made titania (TiO2) and titania-based materials popular for various applications in photocatalysis [1], solar cells [2], and optical devices [1,3]. Titanium nanoparticles have some other compelling properties, such as chemical durability [3], a high dielectric constant and refractive index, and transparency in the visible and near-infrared region [3,4].
However, titanium nanoparticles have some drawbacks. TiO2 has a wide bandgap that gives rise to low absorption activity [5-8]. Besides, one major limitation of using TiO2 in photocatalytic applications is its low photo quantum efficiency and the swift recombination of photogenerated electron-hole pairs [9]. It has been reported that titania alone demonstrates little or no photocatalytic activity [10,11]. In this case, TiO2 doped with metals possesses optimized properties for different applications [12].
Among various metals, the cobalt ion has intriguing properties, including a wide bandgap, interesting ferromagnetic characteristics, and a high Curie temperature [13-15]. Herein, we focus on Co2+ ions as the doping transition metal for TiO2 because Co2+ possesses an ionic radius analogous to that of Ti4+ and can easily enter interstitial lattice sites to modify the nanostructure, create electron traps in the crystal lattice, decrease the bandgap [16], and broaden functions such as photocatalytic activity [9,17]. Liu et al. (2018) [16] reported that the cobalt ion incorporates into TiO2 crystal structures; thus, the bandgap is decreased because of the formation of electron-trap centers in the crystal lattice. According to published articles [18-21], metal or non-metal dopants increase the capacitive properties of TiO2 and shift its absorption to larger wavelengths. In terms of pollutant removal from water, previous studies have shown that titania modified by metals and dopant ions, including Ag, Pd, Ru, Pt, Cu and Fe3+, Cr3+, Mg2+, and Co3+, showed improved selectivity toward nitrogen in water treatment [22-27].
Various deposition techniques have been applied to obtain Co-doped TiO2 thin films, such as sol-gel [12], chemical vapor deposition [28], and pulsed laser deposition [29]. Among these, sol-gel is the simplest and cheapest deposition technique [30].
This study aims to prepare titanium dioxide thin films doped with Co through the sol-gel method. The deposition process was repeated up to 3 times to prepare samples with 1 to 3 layers. The structural, morphological, and optical properties of the Co-doped TiO2 thin films are evaluated.
Materials and Methods
To prepare Co-doped TiO2 thin films, two solutions were prepared. For the first, 2 g of cobalt(II) acetate tetrahydrate was completely dissolved in deionized water (5 cc); afterward, a solution of ethanol and acetic acid was added to the beaker and stirred for 30 min. For the second, tetra-n-butyl orthotitanate and ethanol were mixed with constant stirring for 15 min. After both solutions were prepared, the second solution was added dropwise to the first and stirring was continued for an extra 30 min to obtain a clear, homogeneous Co-doped titanium dioxide solution [12,30].
The glass substrates (2 cm × 2 cm) were cleaned sequentially with ethanol, acetone, and deionized water in an ultrasonic cleaner for 15 min each, then dried in air. Co-doped titanium dioxide films were coated on soda-lime glass using a homemade spin-coater at a speed of 4821 rpm for 30 s. After each coating, the layers were dried in an incubator (100 °C) for 15 min. The deposition process was performed up to 3 times to prepare samples with 1 to 3 layers.
To investigate the structural and morphological characteristics of the Co-doped titanium dioxide thin films, an X-ray diffractometer (XRD) and a scanning electron microscope (Model: DSM-960A) were utilized. The XRD patterns of the different layers of Co-doped titanium dioxide films were recorded with an X-ray diffractometer (Model: STOE STADI MP) with a CuKα source (1.54 Å) over a 2θ angular range of 10°-90°. Furthermore, the energy-dispersive spectroscopy (EDS) analysis attached to the SEM device provided the elemental and chemical content of the samples. The optical properties of the films were studied with a UV-VIS spectrophotometer (Model: Varian Cary 500 Scan).

Figure 1 presents the XRD patterns of 1-3 layers of Co-doped TiO2 films. All the films displayed an amorphous nature, which could be due to the low growth temperature (100 °C in this study), as has also been reported by other research groups [31,32]. It is evident that the single-layered Co-doped TiO2 film depicted short-range crystallinity, as reported previously [33,34]; however, with increasing thickness, the films lost this slight crystallinity and displayed a more amorphous nature, suggesting an effect of film thickness on crystallinity [33]. The same behavior was observed by Renugadevi et al. [34], who found that the XRD peaks of Co-doped TiO2 synthesized using the sol-gel method are not sharp, reflecting the small crystallite size. It can also be observed that the XRD patterns did not demonstrate any discrete cobalt phase, indicating the uniform dispersion of cobalt ions [35].
Morphological characteristics.
Figure 2 presents FESEM images of the 1-3 layered Co-doped TiO2 as-deposited films. The three micrographs depict highly uniform TiO2 particle alignment on the coated surface, as indicated in [36]. All the layers showcased granular TiO2 morphology; however, the porosity was observed to decrease with increasing film thickness upon successive layering [32]. The increase in thickness brings the nanoparticles closer together, indicating a decrease in the average nanoparticle size; in fact, the doping of cobalt impedes the growth of the particles [12,37,38]. The surface morphology of the films is identical although the thickness differs, signifying their non-crystalline structures [39]. In addition, the lack of clusters confirms the amorphous structure of all samples, consistent with the XRD results (Figure 1). A significant feature of the 3-layered film compared to the 1- and 2-layered Co-doped TiO2 films is that its surface showed better adhesion and a crack-free morphology. Figure 3 shows the EDS spectra of the 1- to 3-layered Co-doped TiO2 thin films, used to identify the elemental composition. The spectra verify the existence of titanium, oxygen, and cobalt; the peaks at 0.7 keV and 6.9 keV belong to cobalt. The good interaction of Co with TiO2 through the sol-gel preparation method is indicated by the less intense cobalt peak [12,38-42].
Optical properties.
We studied the Co-doped titanium dioxide films with a UV-Vis spectrophotometer. As shown in Figure 4, the Co-doped TiO2 thin films are transparent in the visible region; this transmission spectrum confirms the preparation of Co-doped TiO2 thin films by the sol-gel method [23,43]. The as-deposited Co-doped TiO2 thin films show ~80% transmittance, which reduces to ~10% owing to the thickness difference between the layers. As the number of coating layers increased, the absorbance of photons increased [44]. This can be attributed to the change in thickness and the decrease in the average size of nanoparticles shown in the FESEM images (Figure 2).
The reflectance spectra (Figure 5) are almost identical in the peak position of the reflectivity minimum at 347 nm. The reflectance spectrum of the 3-layered Co-doped TiO2 film was observed to depict a contrasting pattern to the similar 1- and 2-layered films. This deflection of the pattern could be due to the variation in film thickness of the 3-layered coating compared to its 1- and 2-layered counterparts. The 3-layered film has a low percentage of reflectance caused by the amorphous phase, confirmed by the XRD analysis, which results in decreased film density. In this case, the surface morphology of these films is highly dense with porous and oxygen-vacancy defects. The oxygen vacancy (OV) formation energy (EF*) is 5.48 eV for the Co dopant, which leads to strong bonds with oxygen atoms [45]. On the other hand, the oxygen defects also reduce the reflectance, which was confirmed by the EDS spectra with a more intense oxygen peak and by published literature [46], indicating the dominant composition of the TiOx films. The Co-doping content reduced the light scattering, so more light can pass through these films. Figure 6 shows the absorption spectra of Co-doped TiO2 thin films with different numbers of growth layers at room temperature. The samples' absorption spectra demonstrate a dominant absorption peak in the UV region and less absorption in the visible region. The transfer of charge from the valence band (O2p state) to the conduction band (Ti3d state) is indicated by the absorption bands at 200-400 nm [38]. Based on the literature [47], the similarity of the Ti4+ and Co2+ ionic radii (68 and 72 pm) enables the interstitial incorporation of cobalt ions into the titanium dioxide anatase structure; the Co cation inserted into the titanium dioxide structure results in a reduction of the bandgap energy [12]. The relationship between the excitation photon energy and the bandgap energy (Eg) is given by the Tauc formula [48]:

(αhν)^n = A(hν − Eg)    (1)

where A is a constant and the exponent n is 1/2 for an indirect energy gap and 2 for a direct gap. The optical bandgap energy Eg of the Co-doped TiO2 thin films is obtained by extrapolating the linear part of the (αhν)^1/2 versus hν curve. Following the literature [49], the films were assumed to have an indirect bandgap; thus, the plots of (αhν)^1/2 vs. photon energy (E) are illustrated in Figure 7.
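As a worked illustration of Eq. (1), the short sketch below estimates an indirect bandgap from a transmittance spectrum. The conversion α = −ln(T)/d, the synthetic spectrum, and the fitting window (chosen around the linear region of the Tauc plot) are generic assumptions for demonstration, not the exact procedure of this paper.

```python
import numpy as np

# Hypothetical inputs: wavelengths (nm), transmittance T (0-1), thickness d (nm).
wl = np.linspace(300, 800, 501)
T = np.clip(0.8 - 0.7 * np.exp(-(wl - 330) / 40), 0.05, 1.0)  # synthetic data
d_nm = 542.70

E = 1239.84 / wl                       # photon energy in eV
alpha = -np.log(T) / (d_nm * 1e-7)     # absorption coefficient in cm^-1
tauc = (alpha * E) ** 0.5              # (alpha*h*nu)^(1/2) for an indirect gap

# Fit a line over an assumed linear window and extrapolate to (alpha*h*nu)^(1/2) = 0.
win = (E > 3.6) & (E < 4.0)
slope, intercept = np.polyfit(E[win], tauc[win], 1)
Eg = -intercept / slope
print(f"Estimated indirect bandgap: {Eg:.2f} eV")
```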
The obtained Eg of the Co-doped TiO2 thin films is 3.89, 3.80, and 3.75 eV for thicknesses of 395.44, 542.70, and 793.23 nm, respectively, which are larger than the bandgap of bulk TiO2 (~3.20 eV) [50], as well as the values published in the literature (3.44-3.64 eV) [51,52]. As expected, the value of the indirect bandgap decreased as the thickness of the layers increased. This can be due to the increase of defects in the forbidden band. The bandgap energy decreased from 3.89 eV for the 1-layered Co-doped TiO2 thin film to 3.80 eV for the 2-layered film, with a further decrement to 3.75 eV for the 3-layered film; the variation of the indirect bandgap is thus a minute decrease of about 0.14 eV. Further, there are no reports of higher bandgap energies for Co-doped TiO2 as-deposited films grown by the sol-gel method without annealing.
Furthermore, the width of the tail of extended states below the conduction band is represented by the Urbach energy, given by the standard exponential relation [53]:

α = α0 exp(E/EU)    (2)

where α is the absorption coefficient, α0 is a constant, E is the photon energy, and EU is the Urbach energy.
The experimental value of the Urbach energy is obtained from this empirical relationship as the inverse of the slope of the straight-line region of the ln α vs. photon energy plot (Figure 8):

EU = [d(ln α)/dE]^(-1)    (3)

The refractive index is an optical property that is essential for materials to be applied in optical devices. We determined the refractive index using the Hervé and Vandamme relation [54]:

n^2 = 1 + [A/(Eg + B)]^2    (4)

where A = 13.6 eV, B = 3.4 eV, Eg is the bandgap energy, and n is the refractive index of the film.
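The two relations above are easy to verify numerically. The sketch below uses a synthetic Urbach tail (EU = 0.25 eV assumed for the demonstration) and then applies Eq. (4) to the bandgaps reported in Table 1; the resulting n values of 2.12-2.15 match the table.

```python
import numpy as np

# Hypothetical absorption data near the band edge: photon energy E (eV), alpha (cm^-1).
E = np.linspace(3.2, 3.7, 50)
alpha = 2e4 * np.exp(E / 0.25)          # synthetic Urbach tail with EU = 0.25 eV

# Urbach energy: inverse slope of ln(alpha) vs E over the exponential region.
slope, _ = np.polyfit(E, np.log(alpha), 1)
EU = 1.0 / slope
print(f"Urbach energy: {EU:.3f} eV")

# Herve-Vandamme refractive index for the bandgaps reported in Table 1.
A, B = 13.6, 3.4                        # eV
for Eg in (3.89, 3.80, 3.75):
    n = np.sqrt(1.0 + (A / (Eg + B)) ** 2)
    print(f"Eg = {Eg:.2f} eV -> n = {n:.2f}")
```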
The values of bandgap energy (Eg), Urbach energy, and refractive index for the Co-doped TiO2 thin films are tabulated in Table 1. The Eg values decreased with increasing thickness, indicating a positive effect of Co2+ on activation under UV-visible light. Douven et al. [55] found a similar positive effect when using iron- and nitrogen-doped TiO2. Consequently, the Urbach energy increased from 0.215 to 0.275 eV, consistent with Ghasemi et al. [56], who reported that the Urbach energy increased from 237.64 to 257.2 meV. This shift indicates band-to-tail and tail-to-tail transitions [57], caused by an increase in oxygen defects in the host TiO2 structure [58]. This may be attributed to the high density of defects throughout the intergranular regions, leading to a reduction of the optical energy gap. The Urbach energy increases with increasing refractive index and film thickness. The refractive index varies from 2.12 to 2.15 with increasing thickness; the value for the 3-layered film is the same as reported in Ref. [59] for a Co-doped TiO2 film annealed at 400 °C. Other researchers have reported refractive index values between 2.35 and 2.40 with increasing cobalt concentration, indicating that the film crystallinity becomes dense [60]. Table 1 indicates that the thicknesses of the 1-layered (Eg = 3.89 eV), 2-layered (Eg = 3.80 eV), and 3-layered (Eg = 3.75 eV) films are 395.44, 542.70, and 793.23 nm, respectively. The 3-layered film thickness shows a significant effect on the absorption band: the thicker film has a relative reduction in absorption and bandgap energy, consistent with the XRD spectra shown in Figure 1. The 3-layered film is amorphous, as shown by the lower percentage of reflectance. This can occur as the Co dopant is embedded in the TiO2 matrix; an increase in Co dopant can cause greater crystallization into larger particles owing to the longer deposition time of films deposited by the spin-coater. It can be interpreted that the bandgap energy decreases with increasing thickness because the crystallite size of the film also increases. Interestingly, our results for the 3-layered film are similar to the calculated bandgap energy of 3.75 eV (330 nm) for Co-doped TiO2 nanoparticles annealed at 500 °C with 8 wt% of Co2+ via the sol-gel method [38].
Conclusions
Titanium dioxide thin films doped with Co were fabricated through the sol-gel method. EDS spectra verified the presence of the expected elements in the samples. XRD patterns displayed the amorphous nature of the films, due to the low growth temperature. FESEM images showed highly uniform particle alignment on the surface. The transparency of the thin films was reduced with increasing thickness, which is attributed to the decrease in the average size of the nanoparticles. The absorption spectra demonstrate a dominant absorption peak in the UV region; the absorption bands at 200-400 nm indicate a transfer of charge from the valence band (O2p state) to the conduction band. It could also be observed that with increasing thickness, the bandgap energy decreases and, consequently, the Urbach energy increases.
Funding
This research received no external funding. | 2021-07-20T04:16:51.966Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "7ec43a16e0689386c2f6ed38d9fadca49b0ee5ce",
"oa_license": null,
"oa_url": "https://doi.org/10.33263/briac121.718731",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "7ec43a16e0689386c2f6ed38d9fadca49b0ee5ce",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": []
} |
267847633 | pes2o/s2orc | v3-fos-license | Resolving the In Situ Three-Dimensional Structure of Fly Mechanosensory Organelles Using Serial Section Electron Tomography
Mechanosensory organelles (MOs) are specialized subcellular entities where force-sensitive channels and supporting structures (e.g., microtubule cytoskeleton) are organized in an orderly manner. The delicate structure of MOs needs to be resolved to understand the mechanisms by which they detect forces and how they are formed. Here, we describe a protocol that allows obtaining detailed information about the nanoscopic ultrastructure of fly MOs by using serial section electron tomography (SS-ET). To preserve fine structural details, the tissues are cryo-immobilized using a high-pressure freezer followed by freeze-substitution at low temperature and embedding in resin at room temperature. Then, sample sections are prepared and used to acquire the dual-axis tilt series images, which are further processed for tomographic reconstruction. Finally, tomograms of consecutive sections are combined into a single larger volume using microtubules as fiducial markers. Using this protocol, we managed to reconstruct the sensory organelles, which provide novel molecular insights as to how fly mechanosensory organelles work and are formed. Based on our experience, we think that, with minimal modifications, this protocol can be adapted to a wide range of applications using different cell and tissue samples.

Key features
• Resolving the high-resolution 3D ultrastructure of subcellular organelles using serial section electron tomography (SS-ET).
• Compared with single-axis tilt series, dual-axis tilt series provides a much wider coverage of Fourier space, improving resolution and features in the reconstructed tomograms.
• The use of high-pressure freezing and freeze-substitution maximally preserves the fine structural details.
This protocol is used in: J Cell Biol (2023), DOI: 10.1083/jcb.202209116; J Cell Biol (2021), DOI: 10.1083/jcb.202004184; Proc Natl Acad Sci USA (2019), DOI: 10.1073/pnas.1819371116.
Caution: Freeze-substitution solution contains toxic and radioactive compounds. Handle them inside a fume hood and wear personal protective equipment.
30%, 50%, 70% aqueous methanol
Add 30, 50, and 70 mL of anhydrous methanol into 70, 50, and 30 mL of ultrapure water, respectively. Then, mix each aqueous methanol thoroughly by vortexing. Leave them at room temperature overnight or shake them in an ultrasonic cleaner for ~10 min to remove all air bubbles inside the aqueous methanol.
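Since each entry in this series is the same v/v calculation, the component volumes can be tabulated programmatically. The following Python sketch is illustrative only and is not part of the published recipe:

```python
def dilution_volumes(percent: float, total_ml: float = 100.0) -> tuple[float, float]:
    """Return (methanol_ml, water_ml) for a v/v aqueous methanol dilution."""
    methanol = total_ml * percent / 100.0
    return methanol, total_ml - methanol

for pct in (30, 50, 70):
    m, w = dilution_volumes(pct)
    print(f"{pct}% aqueous methanol: {m:.0f} mL methanol + {w:.0f} mL water")
```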
2% uranyl acetate
Note: Filter the solution using a 0.22 μm syringe filter unit or centrifuge the solution at 10,625× g for 10 min and then collect the filtrate or supernatant to use.
Procedure
A. Sample preparation using high-pressure freezing and freeze-substitution
b. Cut along the cutting line to produce a small sample capsule (<1.4 mm long) to fit the membrane carrier.
c. Place a 100 μm deep membrane carrier onto a filter paper with the flat side down and add 20% BSA as cryoprotectant into the cavity of the carrier. Immediately transfer the sample capsule into the cavity and immerse it into the cryoprotectant (Figure 1B).
d. Place another carrier with the flat side down onto the first carrier and carefully press the top carrier down to close the carrier assembly (Figure 1C). Carefully soak away any overflow of cryoprotectant with a wedge of absorbent paper.
Critical: Air bubbles will affect pressure transmission and compromise the freezing quality. Ensure that the membrane carrier is slightly overloaded with cryoprotectant to prevent any air inclusion inside the carrier.
e. Once the top carrier is in place, immediately transfer the carrier assembly to a high-pressure freezer such as the Leica EM HPM100 to freeze the sample (Figure 1D and E).
Note: The operating manual of this machine provides a complete, detailed freezing process, so we do not describe it here. Be sure to read the operating manual carefully and be trained before use.
Pause point: The frozen samples can be stored in liquid nitrogen for several months.
f. Under liquid nitrogen, transfer the carriers into 2 mL screwcap microtubes (<10 carriers per microtube) containing 1 mL of liquid nitrogen-precooled freeze-substitution solution and immerse the microtubes into the liquid nitrogen.
g. Transfer these microtubes to the precooled (-90 °C) automatic freeze-substitution device (Figure 1F and G).
Critical: The freeze-substitution solution will expand during the process of freeze-substitution as the temperature rises. In order to relieve pressure, be sure to loosen the microtube cap or poke a hole in the cap.
h. Perform freeze-substitution using the following program: incubate at -90 °C for 40 h, warm gradually from -90 °C to -30 °C at a rate of 5 °C per hour, incubate at -30 °C for 8 h, warm up to 0 °C at the rate of 5 °C per hour, and keep at 0 °C for 1-6 h.
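For scheduling instrument time, the run length of this program is easy to derive: the two ramps take 12 h and 6 h at 5 °C per hour, so the whole substitution occupies roughly three days. A small Python sketch of the arithmetic (illustrative, not part of the protocol):

```python
# Freeze-substitution program from step h; each entry is (label, duration_h).
segments = [
    ("hold at -90 C",          40.0),
    ("ramp -90 C to -30 C",    (90 - 30) / 5.0),  # 60 C at 5 C/h = 12 h
    ("hold at -30 C",           8.0),
    ("ramp -30 C to 0 C",      30 / 5.0),          # 6 h
    ("hold at 0 C (minimum)",   1.0),              # protocol allows 1-6 h
]
total = sum(d for _, d in segments)
for label, hours in segments:
    print(f"{label}: {hours:g} h")
print(f"total: {total:g}-{total + 5:g} h (~{total / 24:.1f}-{(total + 5) / 24:.1f} days)")
```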
Infiltration and flat embedding
a. Freshly prepare mixtures of embedding media and anhydrous acetone at ratios of 1:3, 1:1, and 3:1 (vol/vol) at room temperature and mix until homogeneous.
b. When the substitution program reaches 0 °C, transfer the microtubes onto ice.
c. Wash the samples in ice-cold anhydrous acetone three times for 3 min and in anhydrous acetone at room temperature three times for 3 min.
d. After washing, infiltrate the samples with the 1:3, 1:1, and 3:1 embedding media/acetone mixtures, infiltrating for 1 h at each step at room temperature.
e. Replace the mixture with pure embedding media and infiltrate overnight.
f. Change the pure embedding media and infiltrate for another 5-8 h on a tube revolver at room temperature.
g. During infiltration, prepare Teflon-coated slides: dip glass slides into Teflon solution for a few seconds, allow them to dry in the air, and clean them with Kimwipe tissues before use.
Critical: Be sure to use the Teflon-coated slides, as they are easier to separate from the polymerized embedding media.
h. Wrap both sides of the Teflon-coated slide with a stack of two thin strips of parafilm and flatten the parafilm strips with the fingers (Figure 1H).
i. Add a big drop of pure embedding media onto the center of the Teflon-coated slide and transfer all the carriers and samples into the resin drop under a stereoscope.
j. With a firm clamp on the carriers by forceps, slowly scrape the rim of the carrier cavity and separate the samples from the cavity using a fine tip needle (Figure 1I).
k. Remove all the empty carriers and place another Teflon-coated slide onto the first slide, with the parafilm strips as spacer.
Critical: The Teflon-coated slides can be replaced by ACLAR® 33C film. Note that the ACLAR film bends easily, causing non-uniform thickness of the embedding media layer. Therefore, be sure to place the ACLAR film onto a glass slide to avoid bending.
l. Transfer the sandwiched slides to an oven to polymerize the embedding media at 60 °C for at least 48 h.
m. After polymerization, separate the sandwiched slides using a single-edge razor blade and store the slides in a slide storage box (Figure 1J).
Pause point: After polymerization, the samples are stable indefinitely under moisture-free conditions at room temperature.
B. Serial sectioning
1. Casting formvar-coated grids
a. Prepare 1%-3% formvar casting solution before use.
b. Wash copper slot grids with anhydrous acetone and transfer these grids to a new 100 mm Petri dish lined with filter paper. Let these grids air dry and then cover the dish with its lid.
c. Use a cylindrical funnel with a velocity controller or a casting film device to cast formvar.
d. Close the velocity controller and fill casting solution nearly to the top (Figure 2A).
e. Clean a glass slide with Kimwipe tissue and place it inside the cylindrical funnel.
f. Open the velocity controller and allow the formvar casting solution to drain with a steady flow.
g. Leave the glass slide in the cylindrical funnel for a few seconds and then air dry. Finally, the surface of the glass slide is coated with a thin layer of formvar film (Figure 2B).
Critical: For a selected cylindrical funnel and prepared formvar casting solution, the faster the drainage rate, the thicker the formvar film, and vice versa. Therefore, we recommend adjusting the drainage rate to control the thickness of the formvar film.
h. Fill up a big container with ultrapure water and clean the water surface by spreading out a Kimwipe tissue on the water surface and then dragging it across the water surface.
i. Use a single-edge razor blade to cut four lines parallel to the four edges of the slide on the formvar film coating (Figure 2B).
j. Blow a breath on the rectangle and slowly dip the slide into the ultrapure water. The formvar film coating on the rectangle comes off the glass slide and floats on the water surface (Figure 2C).
Critical: The formvar film shows a difference in color under an incandescent light when floating on the water surface. A purple, blue, or green color signifies that the formvar film is too thick and will affect the contrast of tilt-series images. A silver or grey color suggests that the formvar film is too thin and breaks easily while acquiring tilt series under a TEM. In order to meet the imaging requirements, we recommend using formvar film coating with a light-gold or dark-gold color.
k. Use forceps to place the washed grids onto the formvar film with the light-colored side (rough side) facing up (Figure 2D).
l. Place a clean glass coverslip or a sheet of parafilm against the edge of the formvar film and plunge the coverslip or parafilm vertically into the water (Figure 2E). The formvar film along with the grids sticks to the surface of the coverslip or parafilm.
m. Slowly lift the coverslip or parafilm out of the water and transfer it into a 100 mm Petri dish with the grid-covered side facing up. Cover the dish with its lid and let the grids dry before use.
n. Poke a few holes around the outer rim of the grids and then scratch carefully along the outer rim using the tip of the forceps. Finally, pick up the formvar-coated grids with forceps and transfer the grids into a grid storage box.
Serial sectioning
a. Cut the sample out from the polymerized embedding media and mount the sample block onto a blank resin block using cyanoacrylic adhesive.
Note: The serial sectioning technique involves a great deal of skill. Users unfamiliar with this technique are advised to work with a skilled operator.
b. Under an ultramicrotome (Figure 3A and B), roughly trim the sample block to a pyramid frustum using a double-edge razor blade (Figure 3C).
o. Set the feed at 300 nm and start cutting at a high cutting speed, e.g., 20 mm/s.
p. As soon as the first section is cut, reduce the speed to ~1 mm/s and remain at that speed for the entire cutting process. When the length of the section ribbon is suitable for the grid slot, stop cutting and use an eyelash tool to gently touch the knife edge to separate the section ribbon from the knife edge (Figure 5A).
Critical: As soon as the section ribbon is separated, immediately begin to cut the next ribbon.
Otherwise, the first one or two sections of the next ribbon will be too thin or too thick.
q. Arrange all the ribbons in the knife reservoir using an eyelash tool (Figure 5B).
r. Pick up the section ribbons onto formvar-coated grids (Figure 5C and D) and store the grids in a grid storage box. For pickup, clamp the grid bar with forceps, vertically dip the grid into the water with the dark-colored side (smooth side) facing the section ribbon, and move the grid toward the ribbon or move the ribbon to the grid with the help of an eyelash tool. Once the section ribbon touches the formvar film at the center of the grid slot, slowly lift the grid and let the ribbon lay flat on the grid.
Pause point: Sections can be stored in the grid storage box for several months in dry conditions at room temperature.
C. Post-staining
2. Load the grids into the wells of the matrix body (one grid per well, up to 25 grids) of the grid-staining matrix system.
3. Place the matrix body into one staining vessel, add 70% aqueous methanol to completely cover all the grids for a few seconds, and replace it with 2% uranyl acetate to stain for ~10 min (Figure 6A).
Critical: At the beginning of each staining, be sure to move the matrix body back and forth several times inside the staining vessel to remove air bubbles trapped within the wells. Otherwise, they will cause contamination of the sections.
4. Sequentially replace the staining solution with 70%, 50%, and 30% aqueous methanol for washing. Wash three times at each step, followed by washing with ultrapure water three times in the staining vessel and another three times in plastic beakers.
5. Transfer the matrix body into the other staining vessel and immediately add 0.4% lead citrate to stain for ~5 min (Figure 6B). Then, wash with ultrapure water three times in the staining vessel and another three times in plastic beakers.
6. Take the matrix body out, dry it with Kimwipe tissues, and carefully transfer the grids back into the grid storage box.
Critical: The application of post-staining with soluble heavy metal-containing negative staining salts before imaging is indispensable for significantly improving the image contrast.
D. Acquisition of dual-axis tilt series
1. Check the general conditions of the TEM (Figure 7A). Here, we only describe the acquisition process using the FEI Tecnai F20 TEM equipped with Xplore3D.
Note: The tilt series in the protocol are acquired using a FEI Tecnai F20 or FEI Tecnai F30 TEM.
Laboratories without this equipment can use existing TEMs equipped with automated acquisition software to achieve this purpose.
Critical: Microscope training is required before users are allowed to operate the TEM. Please contact microscopists for training or assisted use.
2. Make sure that the vacuum level of the gun, column, and camera is reasonable and that the column valves are closed (yellow is closed, gray is open).
3. Fill up the Dewar flask with liquid nitrogen to cool down the Cold Trap.
4. Start the TEM User Interface, Digital Micrograph, and TEM Imaging and Analysis software if they are closed.
5. Make sure high tension is on and that the value of the high tension is 200 kV.
6. Make sure the FEG Control parameters are set correctly (e.g., extraction voltage is 3950 V and gun lens is 3).
7. Load the grid on the tomography single-tilt holder and insert the holder into the microscope.
Critical: Dual-axis tilt series are collected by tilting the object around two approximately orthogonal tilt axes (x- and y-axis, respectively) [22,23]. Be sure to load the grid on the holder with a constant orientation (Figures 7B, C, and D). Otherwise, data processing for stitching consecutive sections is going to be a problem.
8. Select a lower magnification (e.g., ~2,000×) and set spot size 1.
9. Click the Col. Valves Closed button to open the column. The beam should now be visible on the main screen. Adjust the intensity to spread the beam over the screen.
10. Scan across the grid to locate an area of interest and then center the beam.
11. Go to a higher magnification (e.g., 100,000×) and bring the area of interest to eucentric height.
The first and second tilt series of a section are distinguished by adding the letter a or b to the same common name (e.g., "*a" for the first tilt series and "*b" for the second tilt series). Uncheck Start At Eucentric Height and Start with AutoFocus, since eucentric height and focus were done before. A 1° Tilt Step allows more image information to be obtained for tomographic reconstruction. Let the software do an autofocus every 4-6° at Low Tilt angles and every one degree at High Tilt (above 50-55°) to take well-focused images. Set the Applied defocus value (e.g., -1 μm) to enhance the contrast of images. Check Track Before Acquisition to manually keep the feature of interest well centered throughout the whole tilt series.
30. Press Apply to activate the changes and press Proceed to automatically acquire the tilt series.
Critical: The first few images at high tilts are normally of poor quality; be sure to manually center or focus the region of interest if necessary.
31. Wait until the acquisition of the first tilt series is finished.
32. Go to the save directory and check the tilt series stored in a file with the suffix .mrc. Also, keep the log files (only a few kB in size), as they might help you in the future if data acquisition was not satisfactory.
33. Collect the first tilt series for the other sections at the same working magnification.
34. After data acquisition, set the magnification to the lowest "M" setting, spread the beam to cover the screen, close the column valves, and pull the holder out of the microscope.
35. Rotate the grid by 90° clockwise and collect the second tilt series for each section at the same working magnification.
36. Build a dual-axis tomogram reconstruction with the Etomo program in the IMOD software package [22].
The IMOD software package and a detailed Etomo tutorial can be downloaded from the IMOD homepage. Therefore, we do not provide a comprehensive tutorial for reconstruction.
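Before launching Etomo, it can be worth sanity-checking each saved stack. The sketch below uses the third-party mrcfile package (not part of IMOD); the file name and the assumed -60° to +60° tilt range are hypothetical, and with a 1° step such a range yields 121 images per axis:

```python
import mrcfile  # pip install mrcfile

# Hypothetical file name following the a/b naming convention above.
path = "section01a.mrc"

with mrcfile.open(path, permissive=True) as mrc:
    n_imgs, ny, nx = mrc.data.shape  # tilt images are stacked along axis 0
    print(f"{path}: {n_imgs} images of {nx} x {ny} pixels")

# For an assumed -60 to +60 degree range at a 1-degree step:
expected = 60 - (-60) + 1
print(f"expected images per axis: {expected}")
```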
E. Tracing microtubules in tomographic reconstructions
2. Click the Project button in the workroom toolbars to arrive at the Project workroom.
3. Load a reconstruction result with the suffix .rec into the system. Once the data object has been loaded, it is represented by a green icon in the Project View.
4. Click on the green data icon with the right mouse button and a popup menu appears containing all modules that can be used to process this type of data. Choose the Slice module to display the data object in the 3D view window.
5. Soften the image object with a smoothing module or/and crop it using the Crop Editor module if needed.
6. Compute normalized cross-correlation for the image data. For computing, attach the Cylinder Correlation module to the image data, define the microtubules by setting the parameters in the Properties area, and finally press the Apply button.
7. After several hours of computation, two objects, i.e., "*CorrelationField.am" and "*OrientationField.am," are produced.
8. Trace centerlines of microtubules. To trace, attach a Trace Correlation Lines module to the "*CorrelationField.am" object and, in the Properties area of the Trace Correlation Lines module, select the "*CorrelationField.am" object at the Data port and the "*OrientationField.am" object at the Orientation Field port to connect the two objects to the Trace Correlation Lines module.
9. Set all other parameters in the Properties area of the Trace Correlation Lines module and finally press the Apply button to proceed.
10. Just a few minutes later, a new object, "*CorrelationLines.am," holding the centerlines of the microtubules, is produced and can be displayed with the LineRaycast module.
11. Switch into the Filament workroom and edit the traced centerlines until they match exactly the real microtubules.
12. Trace microtubules for the other tomographic reconstructions.
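The Cylinder Correlation module is specific to Amira, but the principle it relies on, normalized cross-correlation of a template against the image, can be illustrated with plain numpy. The following toy 1D sketch is for intuition only and is not a substitute for the module:

```python
import numpy as np

def ncc(signal: np.ndarray, template: np.ndarray) -> np.ndarray:
    """Normalized cross-correlation of a 1D template against a 1D signal."""
    t = (template - template.mean()) / (template.std() * template.size)
    out = np.empty(signal.size - t.size + 1)
    for i in range(out.size):
        win = signal[i:i + t.size]
        s = win.std()
        out[i] = 0.0 if s == 0 else np.dot(win - win.mean(), t) / s
    return out  # values in [-1, 1]; peaks mark template-like positions

rng = np.random.default_rng(0)
template = np.exp(-np.linspace(-2, 2, 21) ** 2)  # bright rod-like profile
signal = rng.normal(0, 0.2, 300)
signal[100:121] += template                      # embed one "microtubule"
print("best match at index", int(np.argmax(ncc(signal, template))))  # ~100
```

In the real module the template is a 3D cylinder swept over many orientations, which is why the computation takes hours and produces both a correlation field and an orientation field.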
F. Stitching consecutive tomographic reconstructions to form a single larger volume
1. In the Amira ZIB edition 2016.16, switch to the Project workroom.
Validation of protocol
This protocol has been used and validated in the following published articles:
General notes
1. We suggest that scientists implementing the protocol described here should have extensive knowledge of TEM techniques. Also, scientists should already be familiar with the manipulations involved in preparing and handling specimens and with 3D tomographic image-processing software.
2. This protocol, with minimal adaptive modifications, could also be applied to studies of other sensory processes, like olfaction, gustation, vision, etc. Moreover, 3D structural analysis is helpful for understanding nonsensory processes, such as neuronal cytoskeleton regulation, synaptic remodeling, membrane-bound vesicle dynamics, intracellular trafficking, etc.
3. This protocol is still a time-consuming approach in terms of ultramicrotomy, microscopy, and data processing; therefore, the choice of the technology must be made based on the specific biological questions.
Troubleshooting (Table 1)

Problem: Poor freezing quality. Cause: Air bubbles trapped around the sample affect pressure transmission during high-pressure freezing. Solution: Press the sample to the bottom of the droplet with forceps and make sure there are no air bubbles around the sample.

Problem: Samples become black. Cause: Osmium tetroxide in the freeze-substitution solution forms osmium black at room temperature. Solution: Wash the samples several times with ice-cold anhydrous acetone to thoroughly remove the fixatives.

Problem: Section ribbon is not formed, or section ribbons easily break into short ribbons. Cause: The trimmed sample block does not meet the requirements for serial sectioning. Solution: Retrim the sample block until the top and bottom edges of the block face surface are parallel. Cause: Improper alignment between the block face surface and the diamond knife. Solution: Repeat the alignment process until the alignment is perfect. Cause: The diamond knife is contaminated with hydrophobic substances such as oil from hands. Solution: Thoroughly clean the knife with 70% alcohol.

Problem: Poor contrast after post-staining. Cause: Different sources of heavy metals. Solution: Select the optimal staining time for 2% uranyl acetate and 0.4% lead citrate.

Problem: Contamination of sections. Cause: Precipitates occur in the heavy metal solution. Solution: Filter or centrifuge the solution, then collect the filtrate or supernatant to use.
1. Sample dissection
a. Maintain all flies on standard medium at 23-25 °C.
b. Anesthetize flies with CO2 on a fly anesthesia pad and cut the fly's head, wings, and legs off with superfine vannas scissors under a stereoscope.
c. Immediately immerse the remainder of the fly carcass into a big drop of phosphate buffer on a glass slide and remove the air bubbles around the haltere with the tip of sharp forceps.
d. Dissect the haltere in phosphate buffer and immediately transfer the haltere into a new drop of phosphate buffer on another slide.
2. High-pressure freezing and freeze-substitution
a. Aspirate the sample into the cellulose capillary tube and use the back of the stainless-steel surgical blades (installed on the surgical blade handles when using) to press the cellulose capillary tube at both sides of the sample without damaging the sample. By doing so, a small compartment is created to trap the sample (Figure 1A).
Critical: Tissues with a hydrophobic surface do not have a strong affinity for the BSA aqueous solution. Therefore, the transparent, porous cellulose capillary tubes with an inner diameter of 200 μm can be used as special containers to help the samples (diameter lower than 200 μm) immerse into the cryoprotectant.
Figure 1. Sample preparation using high-pressure freezing and freeze-substitution. A. A haltere is aspirated into the cellulose capillary tube. B. A sample capsule is transferred into the cavity of the membrane carrier filled up with cryoprotectant. C. A carrier assembly is formed by two membrane carriers with the sample in between. D. Side view of a Leica EM HPM100 high-pressure freezer. E. A carrier assembly is placed in a high-pressure freezer for cryo-fixation. F. Side view of a Leica EM AFS2 freeze-substitution device. G. A microtube containing freeze-substitution solution and samples is placed into an automatic freeze-substitution device for substitution. H. A Teflon-coated slide is wrapped with parafilm strips at both its sides. I. The samples are separated from the membrane carriers using forceps and a fine tip needle. J. After polymerization, the samples are flat embedded in the resin layer.
Figure 2. Casting formvar-coated grids. A. A glass slide is placed into a cylindrical funnel almost filled with formvar casting solution. B. Four cutting lines on the formvar film coating are made using a single-edge razor blade. C. The formvar film coating detaches from the glass slide and then floats on the water surface. D. The grids are placed onto the formvar film with the light-colored side facing up. E. The formvar film along with the grids adheres to the coverslip when the coverslip is plunged into the water.
Figure 3. Sample trimming. A and B. Side view of an ultramicrotome (A); the white box in (A) indicates the enlarged region shown in (B). C. The sample block is roughly trimmed to a pyramid frustum. D. The pyramid frustum is finely trimmed to a smaller one with parallel top and bottom edges on the block face surface.
Figure 4. Schematic drawing of the alignment process between the sample block and diamond knife. A. When the sample block is very close to the knife edge, a band of reflected light appears on the block face surface under the illumination of the backlight. Orientate the knife block (the knife is accordingly rotated in the direction indicated by the curved arrows) until the top and bottom edges of the band of reflected light are exactly parallel with each other. B. When the sample block is slightly higher than the knife edge, there is a gap between the bottom edge of the block face and the knife edge. Orientate the specimen holder (the sample block is accordingly rotated in the direction indicated by the curved arrows) until the bottom edge of the block face surface is exactly parallel to the knife edge. C and D. Move the sample block to the positions where a band of reflected light appears again and orientate the segment arc (the sample block is accordingly rotated in the direction indicated by the curved arrows) until the band of reflected light remains the same width as the sample moves up and down.
Figure 5. Pickup of section ribbons. A. A section ribbon is separated from the knife edge. B. The section ribbons float on the water surface in the knife reservoir. C. A section ribbon is picked up onto the formvar-coated grid with the help of an eyelash tool. D. A section ribbon lays flat on a formvar-coated grid.
7. Pipette 100 μL of gold colloid into a 0.2 mL tube, vortex for a few seconds, centrifuge at ~295× g for 30 s at room temperature, and pipette the supernatant into a new 30 mm Petri dish.
8. Clamp the grid bar with forceps, dip the grid into the drop of supernatant for 10-30 s (Figure 6C), slowly lift the grid, carefully absorb the residual solution with a wedge of absorbent paper, and place the grid back into the grid storage box.
Critical: The gold nanoparticles serve as fiducial markers for subsequent image alignment. Be sure to check the density and distribution of gold nanoparticles in the area of interest under the TEM before investing time in electron tomography. We recommend staining several times for 10 s each time until 40-100 gold particles are distributed on the surface of the area of interest.
Pause point: Sections can be stored in the grid storage box for several months in dry conditions at room temperature.
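If your benchtop centrifuge is set in rpm rather than relative centrifugal force, the standard conversion RCF = 1.118 × 10⁻⁵ × r(cm) × rpm² gives the required speed. The 6 cm rotor radius in this sketch is an assumption; check your instrument's specification:

```python
import math

def rpm_for_rcf(rcf: float, radius_cm: float) -> float:
    """Invert RCF = 1.118e-5 * r_cm * rpm**2 for rpm."""
    return math.sqrt(rcf / (1.118e-5 * radius_cm))

# Assumed rotor radius of 6 cm (hypothetical; read it off your rotor).
print(f"{rpm_for_rcf(295, 6.0):.0f} rpm")  # ~2100 rpm for ~295 x g
```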
Figure 6. Post-staining. A and B. Post-staining is performed using a grid-staining matrix system. The red and blue staining vessels are used for uranyl acetate and lead citrate staining, respectively. C. Gold nanoparticles are added on both sides of the sections by immersing the grid into the drop of gold nanoparticle colloid.
Figure 7. The grid loaded on the tomographic holder with a constant orientation. A. Side view of a transmission electron microscope (TEM). B. The tip of a tomographic holder. C and D. The grid loaded on the holder with a constant orientation. There are grid sides, grid bars, and sections that can be used as orientation indicators. To acquire the first tilt series, for example, the grid is loaded on the holder with the dark-colored side (smooth side) facing up, the straight bar perpendicular to the long axis of the holder (holder tilt axis), and the top edges of the sections facing forward (C). Rotate the grid 90° clockwise for acquisition of the second tilt series (D).
2. Right-click in the blank of the Project View area and select Create Object. A dialog box then appears; enter and select the SerialSectionStack module in the drop-down menu. A green icon of SerialSectionStack appears in the Project View area.
3. Click Add Files in the Properties area of the SerialSectionStack module, choose the tomographic reconstructions named "*.rec" and their corresponding microtubule objects named "*CorrelationLines.am," and press the Open button to load these files.
Critical: After loading, ensure that the tomographic reconstructions are in the same order as their sectioning sequence and that no changes are made to these reconstruction results and their corresponding microtubule objects. Otherwise, the process of stitching adjacent tomographic reconstructions cannot be performed.
4. Add the SerialSectionAligner module to SerialSectionStack; the 3D view window is separated into four panels. In the lower-right panel, each bar represents a tomographic reconstruction, and the blue and yellow bars represent the two working reconstructions, whose microtubules are displayed as blue or yellow dots in the upper- and lower-left panels and whose slices, together with these dots, are displayed in the upper-right panel.
5. Click the slide bars in the lower-right panel to choose the two working reconstructions.
6. Left-click twice in the upper-left panel and red solid circles appear on the upper- and lower-left panels.
7. Drag these red solid circles until the blue dots are well matched with the yellow dots.
8. Check Matching in the Properties area of the SerialSectionAligner module.
9. Hold down the Ctrl key on the keyboard and left-click the matched blue and yellow dots in the upper-right panel to connect the two microtubule segments.
10. After connection, check Alignment in the Properties area of the SerialSectionAligner module and choose another two working reconstructions to complete the connection of microtubule segments in the same way mentioned above.
11. After all the microtubule segments are connected, press the Create button. A few minutes later, two objects are produced and their icons appear in the Project View area: one is an image object of the stitched consecutive reconstructions and the other is a line object of the connected microtubule segments.
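Conceptually, matching the blue and yellow dots amounts to estimating a rigid in-plane transform from corresponding microtubule endpoints, which the aligner solves interactively. A minimal least-squares (Kabsch/Procrustes) version in numpy, assuming the correspondences are already known, is sketched below:

```python
import numpy as np

def rigid_align_2d(src: np.ndarray, dst: np.ndarray):
    """Least-squares rotation R and translation t with R @ src_i + t ~ dst_i.
    src, dst: (N, 2) arrays of corresponding points."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)      # 2x2 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# Toy check: rotate points by 30 degrees, shift them, recover the transform.
rng = np.random.default_rng(1)
pts = rng.uniform(0, 100, (10, 2))
ang = np.deg2rad(30)
R_true = np.array([[np.cos(ang), -np.sin(ang)], [np.sin(ang), np.cos(ang)]])
R, t = rigid_align_2d(pts, pts @ R_true.T + np.array([5.0, -3.0]))
print(np.allclose(R, R_true), np.allclose(t, [5.0, -3.0]))  # True True
```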
axis tilt series. Scale bar: 0.5 μm. C. Microtubules were traced from the tomographic reconstruction. Scale bar: 100 nm. Each microtubule is shown as a yellow rod. D. Microtubules were used as fiducial markers (upper) to stitch the sections (lower) along the z-axis. Scale bar: 100 nm. E. Ten consecutive sections are combined into a single larger volume. Scale bar: 0.5 μm. F. Representative lateral view of a modified cilium generated from the single larger volume. Scale bar: 0.5 μm. G. Representative cross-sectional view of a mechanosensory organelle (MO) and its internal architectures. EDM, electron-dense materials. MMC, membrane-microtubule connector. Scale bar: 100 nm.
1. Phosphate buffer (0.1 mol/L, pH 7.2)
Osmium tetroxide is highly toxic and can be fatal. Handle it in a fume hood and wear personal protective equipment. Uranyl acetate is toxic and slightly radioactive. Handle it in a fume hood and wear personal protective equipment.
Unpolymerized embedding media are toxic. Handle them in a fume hood and wear personal protective equipment.
8. 10 mol/L NaOH
Note: Boil the ultrapure water for 15-30 min and then cool it to room temperature before use. Invert the tubes upside down several times and add 500 μL of 10 mol/L NaOH to them. Shake the tubes on an orbital shaker for at least 30 min and place them in the fume hood for at least two days before use.
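For reference, a 10 mol/L NaOH stock corresponds to about 400 g of NaOH per litre (molar mass ≈ 40.0 g/mol). A one-line sketch of the arithmetic, illustrative only:

```python
MW_NAOH = 39.997  # g/mol

def grams_naoh(molarity: float, volume_ml: float) -> float:
    """Mass of NaOH needed for a given molarity and final volume."""
    return molarity * (volume_ml / 1000.0) * MW_NAOH

print(f"{grams_naoh(10, 100):.1f} g NaOH per 100 mL of stock")  # -> 40.0 g
```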
1. Load the image object from the directory into the Amira ZIB edition 2016.16.
2. Go to the Segmentation workroom; a new green icon named "*labels.am" is created in the Project View area.
3. Set an appropriate magnification by pressing the zoom in and out buttons in the Zoom and Data Window region.
4. Draw the outlines of the same structure through all the slices using the brush. If the structure changes little between slices, draw the structure every few slices and click Interpolate in the Selection menu bar to select the structure in the slices in between.
Note: The brush is one of the basic segmentation tools. Other segmentation tools can also be used for this job.
5. Select a Material in the material list and click the plus sign in the Selection region. The selected pixels in all slices are now assigned to the selected Material. Of course, more meaningful names or colors can be given to the material. A new Material can be added by pressing the New button or from the right-button menu.
6. Go through the slices and assign the other structures to different Materials.
7. After drawing, add the Generate Surface module to the "*labels.am" object in the Project workroom and press the Apply button. A dialog comes out; press Continue.
8. A few minutes later, a new icon named "*surf.am" is created and the segmentation result can be displayed with the Surface View module.
9. Add the Animate Ports to the Slice or other display modules and further attach the Movie Maker module to the Animate Ports.
10. Right-click in the blank of the Project View area to create the Camera-Orbit.
"year": 2024,
"sha1": "677d74bf12ed1bf995e1b94e8f5abe35f59b764c",
"oa_license": "CCBYNC",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "677d74bf12ed1bf995e1b94e8f5abe35f59b764c",
"s2fieldsofstudy": [
"Biology",
"Materials Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Revision of the aperturally dentate Charopidae (Gastropoda: Stylommatophora) of southern Africa – genus Afrodonta s. lat., with description of five new genera, twelve new species and one new subspecies
The genus Afrodonta s. lat. is shown to comprise several lineages with distinctive shell characters primarily associated with the microsculpture of the protoconch and teleoconch, and the manner in which the apertural barriers are deposited. These lineages comprise Afrodonta s. str. and five new genera: Amatholedonta gen. nov., Biomphalodonta gen. nov., Costulodonta gen. nov., Iterodonta gen. nov. and Phialodonta gen. nov. Twelve new species are described, doubling the diversity of aperturally dentate charopid snails known from southern Africa. All new species are narrow-range endemics. A new subspecies of one of the more widely distributed species of Afrodonta s. str. is also described. Keys to genera and species are provided. New species and subspecies: Afrodonta geminodonta sp. nov., Af. inhluzaniensis leptolamellaris subsp. nov., Af. mystica sp. nov., Af. pentodon sp. nov., Amatholedonta fordycei gen. et sp. nov., Biomphalodonta forticostata gen. et sp. nov., Costulodonta bidens gen. et sp. nov., C. pluridens gen. et sp. nov., Iterodonta ammonita gen. et sp. nov., Phialodonta agulhasae gen. et sp. nov., P. atromontana gen. et sp. nov., P. aviana gen. et sp. nov. and P. rivalalea gen. et sp. nov. New synonyms: Afrodonta bilamellaris londonensis Solem, 1970 = Afrodonta bilamellaris Melvill & Ponsonby, 1908. New combinations: Afrodonta acinaces Connolly, 1933, Afrodonta burnupi Connolly, 1933 and Afrodonta trilamellaris Melvill & Ponsonby, 1908 are transferred to Costulodonta gen. nov.; Afrodonta bimunita Connolly, 1939 is transferred to Amatholedonta gen. nov.; Afrodonta introtuberculata Connolly, 1933 and Afrodonta perfida Burnup, 1912 are transferred to Phialodonta gen. nov.
Introduction
The phylogenetic relationships of Afrodonta Melvill & Ponsonby, 1908, treated for the moment sensu lato, are not well resolved. Solem (1970), following Connolly (1939), referred the genus to the Endodontidae, but at that time his concept of this family included the 'charopids' as a subfamily. Later (Solem 1976, 1983), he divided the 'endodontoid' snails of the Pacific Islands into three separate families, Charopidae, Endodontidae and Punctidae. The Endodontidae, he believed, were restricted to islands in the central and south-western Pacific and he regarded the southern African 'endodontoids' (Trachycystis s. lat. and Afrodonta s. lat.) as belonging to the Charopidae: Charopinae (Solem 1983). Subsequently, Schileyko (2001) grouped both Afrodonta s. lat. and Trachycystis s. lat. in the Endodontidae rather than the Charopidae, but referred them to different subfamilies (respectively the Endodontinae and Trachycystinae), on account of the lack of apertural barriers in Trachycystis s. lat. The most recent classification (Bouchet et al. 2017), which incorporated data from molecular studies, referred the Trachycystinae once again to the Charopidae, but the position of Afrodonta remains uncertain. It is currently maintained within the Endodontidae (MolluscaBase 2018), but it is very much a geographical outlier within this family, the other constituent genera all occurring on islands in the central and southwestern Pacific, largely following Solem (1983). In reality, it seems unlikely that Afrodonta s. lat. is genuinely related to these endodontid genera and Solem (1970), based on the anatomy of the pallial cavity described by Connolly (1925), was firmly of the opinion that whatever the true relationships of Afrodonta may be, "it is not a member of the Endodontinae" [Endodontidae] (see Remarks under genus Afrodonta s. str. below). In light of the above, and pending molecular evidence to suggest otherwise, I have opted to follow Solem (1976, 1983) and Bruggen (1980, 1988, 2007) in considering both Trachycystis s. lat. and Afrodonta s. lat. to be referable to the Charopidae (see also Muratov, Abdou & Bouchet 2005).
To date, Afrodonta has been used as a taxon of convenience to house all minute, aperturally dentate charopid snails occurring in southern and East Africa. Revisions of the genus were undertaken by Burnup (1912), Connolly (1939) and Solem (1970), each author augmenting the data available and describing additional species. In the most recent of these revisions, Solem (1970) discussed a total of 12 species and one subspecies from southern Africa, and also referred the East African Endodonta kempi Connolly, 1925 to Afrodonta. His use of Afrodonta, however, was very much sensu lato, and he noted that it was likely to prove to be a polyphyletic assemblage, rather than a monophyletic entity.
The material studied by Solem included historical material obtained in the early 1900s by collectors such as H.C. Burnup, J. Farquhar, W.E. Jones, A.J. Taynton and H.P. Thomasset, as well as material collected in the early 1960s by A.C. van Bruggen. In the more than 50 years since then, an active field work programme, including the collection and sorting of leaf-litter samples, has resulted in a five-fold increase in the amount of Afrodonta s. lat. material available. Study of this has resulted in the discovery of undescribed species and the identification of diagnostic characters that serve to delimit what appear to be natural groups within Afrodonta s. lat. The purpose of this contribution is to document these newly discovered species- and genus-level taxa, thus drawing attention to previously unrecognised diversity and narrow-range endemism in this group of minute, epiedaphic, forest-dwelling snails. The result is a doubling of the species-level diversity of aperturally dentate charopid snails known from southern Africa.
Material and methods
The material studied was derived primarily from the collection of the KwaZulu-Natal Museum. This was accumulated over many years, but has been significantly augmented in the last two decades through a programme of field work targeting poorly-surveyed regions of South Africa. Most specimens were collected by the sieving and sorting of dried leaf-litter samples. A key to genera is provided as well as keys to species within each genus. Diagnoses are given for all species, including previously described ones. Locality data for all material examined are given for all new species as well as for previously described, narrow-range endemic species. In the case of widely distributed species for which abundant material is available, I give only details of type material and a summary of the distribution. Terms used to describe vegetation types are taken from Mucina & Rutherford (2006). Eastern Cape (hereafter E. Cape), Northern Cape (hereafter N. Cape), Western Cape (hereafter W. Cape), Free State, KwaZulu-Natal, Limpopo and Mpumalanga refer to provinces in South Africa.
Distribution and conservation
Endemic to south-eastern South Africa (Fig. 2), ranging widely from central KwaZulu-Natal (Zinkwazi), through E. Cape to eastern W. Cape (Knysna area); in E. Cape and W. Cape it is confined to forests in the coastal hinterland, but in KwaZulu-Natal it ranges inland to the southern mistbelt forests of the Midlands. Inhabits a range of forest and dense thicket-like habitats, from the coast to 1500 m a.s.l., living in leaf-litter. Not of conservation concern. Solem (1970) described smaller, strongly dentate specimens from East London as a separate subspecies, Afrodonta bilamellaris londonensis, and similar specimens have been found subsequently at other localities (Port St Johns area, Tsitsikamma and Knysna). However, the reported differences between the

reticulation; parietal region with two low, in-running ridges, upper one weaker and more deeply recessed; palatal region with 1-3 axially aligned pairs of rounded denticles recessed inside outer lip, outermost pair visible through aperture, others apparent only through translucent shell; lower denticle well below mid-whorl, upper one more or less at mid-whorl just below periphery; umbilicus relatively narrow. Shell corneous-brown to yellowish-brown when fresh; diameter up to 1.4 mm.
Description
Shell very small, diameter up to 1.4 mm, H/D ratio ±0.54; spire distinctly raised; suture indented and apical portion of whorls strongly convex, whorls thus weakly shouldered; periphery somewhat above mid-whorl; whorls slightly flattened below periphery. Protoconch comprising apical cap plus approx. 0.75 whorl; diameter ±360 μm; microscopically shagreened. Teleoconch of up to 3.0 whorls; surface texture silky; sculpture of simple, close-set, microscopic axial riblets; riblets alternating in strength and with even finer spiral threads in their intervals, producing a quadrate micro-reticulation. Umbilicus deep and relatively narrow. Aperture obliquely lunate, somewhat broader basally; parietal region with two low, in-running ridges, lower one ending level with edge of aperture, upper one weaker and more deeply recessed; baso-columellar dentition lacking; palatal region with 1-3 (usually 2) axially aligned pairs of rounded denticles recessed ⅛ -⅓ whorl behind outer lip, the outermost pair visible through aperture, the others apparent only through translucent shell; lower denticle well below mid-whorl, upper one more or less at mid-whorl just below periphery; number and position of denticle pairs somewhat variable and related to degree of development. Shell corneous-brown to yellowish-brown when fresh.
Distribution and conservation
A narrow-range endemic (Fig. 6), known only from the interior of the Albany Thicket biome, north of Port Elizabeth, at altitudes of 450-1000 m; in leaf-litter of isolated patches of southern mistbelt forest. Forest patches in the Addo Elephant National Park and in nature reserves around Somerset East and Grahamstown need to be surveyed in the hope of finding extant colonies of this species in formally protected areas.
Remarks
Afrodonta geminodonta sp. nov. is characterised by its weak parietal lamellae and strong, paired palatal dentition.

Afrodonta inhluzaniensis (Burnup, 1912)

As noted by Solem (1970), this species exhibits considerable variation in the shape and size of the apertural denticles. He was, however, unable to detect any pattern in this variation or any consistent differences between populations. This notwithstanding, the larger amount of material now available has revealed some variation that is broadly congruent with geographical location. The typical form with a single stout, ridge-like baso-columellar denticle and a well-developed palatal lamella with a thickened apical crest is found in the forests of the interior of KwaZulu-Natal, from the Midlands (southern mistbelt forest) to the Drakensberg foothills (northern afrotemperate forest), at altitudes of over 1000 m. By contrast, throughout the coastal strip from Zululand to East London, in coastal and scarp forests up to 500 m in altitude, specimens exhibit consistent differences in the form and strength of the apertural dentition, by which they can be readily distinguished from the typical form. Since this coastal form is geographically disjunct from the typical one, with a hiatus in distribution records at altitudes between approx. 500 m and 1000 m, I propose to recognise it as a separate subspecies.

Afrodonta inhluzaniensis inhluzaniensis (Burnup, 1912)
Figs 4A-C, 5, 17I-J
Endodonta [Endodonta (Afrodonta)] inhluzaniensis Burnup, 1912: 342, pl. 24, figs 14-17.
Diagnosis
Shell very small, spire raised; protoconch smooth, at most microscopically shagreened (diameter 320-350 μm); teleoconch texture silky; sculpture comprising simple, very fine and close-set axial riblets of alternating strength; spiral sculpture of indistinct threads in riblet intervals; parietal region lacking dentition; baso-columellar region typically with a strong, transversely-elongate denticle, its apex rounded or flat; palatal region with a single robust trigonal lamella just below periphery, angled upward and with a thickened crest; shape and strength of denticles somewhat variable; umbilicus relatively narrow. Shell corneous to golden-brown when fresh; diameter up to 1.5 mm.
Distribution and conservation
Endemic to eastern South Africa (Fig. 5), ranging widely from the Soutpansberg and Wolkberg in Limpopo, along the Mpumalanga escarpment, and throughout much of the KwaZulu-Natal interior, at altitudes in excess of 1000 m; in leaf-litter of mistbelt and northern afrotemperate forest. Not of conservation concern.
Remarks
Specimens from the northern mistbelt forest in Mpumalanga and Limpopo closely resemble typical specimens from the KwaZulu-Natal interior. They retain a single baso-columellar denticle, but it is less robust than in the typical form and the palatal lamella is slender, strongly trigonal and its crest only slightly thickened. In addition, they rarely attain as large a size as those from KwaZulu-Natal and usually have a proportionately more elevated shell (H/D ratio closer to 0.65 compared with 0.55). Since these differences are relative and exhibit some overlap, I do not consider these northern populations worthy of recognition as a separate entity.
Diagnosis
Shell as in Af. inhluzaniensis inhluzaniensis, but baso-columellar denticle present as two smaller, unequally sized denticles, the larger of which is somewhat pointed with the smaller one lying to its right; the latter, though sometimes very small and hard to see through the aperture, is usually evident externally by transparency; palatal lamella prominent, but slender, strongly angled upwards, its crest only slightly thickened.
Etymology
From the Greek leptos (λεπτός): thin, slender, and the Latin lamella, diminutive of lamina: a small plate or blade; with reference to the narrow palatal lamella.
Distribution and conservation
Endemic to eastern South Africa (Fig. 5), occurring in the coastal hinterland of KwaZulu-Natal and E. Cape, from Zululand (Hluhluwe) to East London, from sea level to 460 m a.s.l.; in leaf-litter of coastal and scarp forests. Not of conservation concern.
Remarks
Afrodonta inhluzaniensis leptolamellaris subsp. nov. is clearly close to the nominotypical subspecies. However, its geographical separation from the latter, together with slight but consistent differences in apertural characters, indicates that it merits recognition as an entity distinct therefrom.
Diagnosis
Shell very small, spire slightly raised; protoconch microscopically shagreened or malleate; teleoconch surface silky; sculpture of simple, close-set, microscopic axial riblets; aperture with a well-developed, crescentic, in-running parietal lamella and a strong baso-columellar denticle; palatal region with a strong, deeply recessed, ridge-like denticle just above mid-whorl, plus a broad, thickened pad below mid-whorl, opposite and below parietal lamella; umbilicus relatively narrow. Shell corneous-brown to yellowish-brown when fresh; diameter up to 1.2 mm.
Etymology
From the Greek mystikos (μυστικός): a mystery, mysterious; with reference to the environs of Nkandla, long considered a region of mystery in Zulu folklore and to this day a place of secrecy, subterfuge and skulduggery.
Description
Shell very small, diameter up to 1.2 mm, H/D ratio ±0.53; spire slightly raised; suture indented and apical portion of whorls strongly convex, whorls thus weakly shouldered; periphery more or less at mid-whorl, evenly rounded; last adult whorl slightly descendant. Protoconch comprising apical cap plus approx. 0.75 whorl; diameter ±310 μm; microscopically shagreened or malleate. Teleoconch of up to 2.75 whorls; surface texture silky; sculpture of simple, close-set, microscopic axial riblets; riblets alternating in strength and with even finer spiral threads in their intervals. Umbilicus relatively narrow. Aperture obliquely lunate; parietal region with a well-developed, crescentic, in-running lamella situated well below mid-whorl, extending just beyond edge of aperture, its interior end with a distinct downward deflection; baso-columellar region with a strong denticle (shape somewhat variable); palatal region with a strong, deeply recessed, ridge-like denticle just above mid-whorl, its crest thickened, plus a broad, thickened pad below mid-whorl, opposite and below parietal lamella. Shell corneous-brown to yellowish-brown when fresh.
Distribution and conservation
A narrow-range endemic (Fig. 6), known only from Nkandla Forest, KwaZulu-Natal, at approx. 900 m a.s.l.; living in leaf-litter of transitional mistbelt/scarp forest. This forest is formally protected and under the care of the provincial conservation authority, Ezemvelo KwaZulu-Natal Wildlife.
Diagnosis
Shell very small, spire distinctly raised, last adult whorl relatively deep, lenticular to almost subglobose; protoconch smooth, at most microscopically shagreened or malleate (diameter ±320 μm); teleoconch texture silky; sculpture comprising simple, very fine and close-set axial riblets of alternating strength; spiral sculpture of fine threads in riblet intervals; parietal region with two well-developed, in-running lamellae; baso-columellar region with a strong, ridge-like denticle; palatal region with 5-8 in-running lamellae, those above periphery weaker; umbilicus relatively narrow. Shell whitish to dark purplish-brown when fresh; diameter up to 1.45 mm.
Distribution and conservation
The widest ranging of all aperturally dentate charopid snails in southern Africa (Fig. 3). Recorded from Grootvadersbosch in W. Cape through much of E. Cape and KwaZulu-Natal, the escarpment of Mpumalanga, and the Soutpansberg and Wolkberg massifs in Limpopo. Ranges beyond South Africa to Mount Vengo (Monte Panga) in the highlands between Zimbabwe and Mozambique (Connolly 1925), and northward to Malaŵi (Bruggen & Meredith 1984; Bruggen 2007). An isolated record (Connolly 1939) from Kimberley (N. Cape) requires confirmation. Lives in leaf-litter of a wide variety of forest and woodland habitats, from the coast to 1800 m a.s.l. Not of conservation concern.
Remarks
Closest to Af. farquhari, but that species has fewer palatal lamellae and a weaker baso-columellar denticle, and the last adult whorl is not as deep.
Diagnosis
Shell very small, spire distinctly raised; protoconch microscopically shagreened; teleoconch texture silky; sculpture of simple, close-set, microscopic axial riblets; aperture with two well-developed, in-running parietal lamellae, lower one extending just beyond edge of aperture, upper one recessed a short distance, and a rounded baso-columellar denticle; palatal region with a low, broad, in-running, thickened pad situated well below mid-whorl, plus a small, narrow, deep-set, subsutural denticle; umbilicus narrow to moderate. Shell corneous-brown when fresh; diameter up to 1.5 mm.
Etymology
From the Greek pente (πέντε): five, and odontos (οδοντος): a tooth; with reference to the five apertural teeth.
in strength and with even finer spiral threads in their intervals (sometimes scarcely evident). Umbilicus narrow to moderate. Aperture obliquely lunate, broader basally; parietal region with two well-developed, in-running lamellae, lower one extending just beyond edge of aperture, upper one recessed a short distance and with a flat-topped crest internally; baso-columellar region with a rounded denticle; palatal region with a low, broad, in-running, thickened pad situated well below mid-whorl, plus a small, narrow, deep-set, subsutural denticle. Shell corneous-brown when fresh.
Distribution and conservation
A narrow-range endemic (Fig. 6), known only from Ngele Forest, near Kokstad, KwaZulu-Natal, at 1290-1350 m a.s.l.; living in leaf-litter of southern mistbelt forest. Although this forest is theoretically protected, access is completely uncontrolled and it is surrounded by exotic timber plantations heavily invaded by alien plants.
Diagnosis
Shell small, spire weakly raised, whorls somewhat flat-sided; protoconch smooth, at most microscopically shagreened (diameter ±375 μm); teleoconch texture silky; sculpture comprising simple, very fine and close-set axial riblets of alternating strength; spiral sculpture of fine indistinct threads in riblet intervals; parietal region with a single rounded in-running ridge-like lamella (occasionally the crest divided by a groove); baso-columellar and palatal regions lacking dentition; umbilicus relatively narrow. Shell buff to pale ochre when fresh; diameter up to 1.6 mm.
Distribution and conservation
A narrow-range endemic (Fig. 6), known only from the Mfongosi area (± 500 m a.s.l.) in the Thukela River valley below Kranskop, KwaZulu-Natal; no accurate habitat data available. This area is not afforded any formal protection and is threatened by impacts related to subsistence agriculture.
Remarks
Unlikely to be confused with any other species. The original samples collected by W.E. ['Mamba'] Jones remain the only ones known. The Mfongosi area contains a number of small, isolated limestone bodies belonging to the Ntingwe Group (Martini 1987) from which other narrow-range endemic land snails are known, e.g., Anisoloma falconiana (Pilsbry, 1929) and Gulella leucocion Connolly, 1929. Herbert & Kilburn (2004) discussed the interesting history of this locality and the original collector.
Diagnosis
Shell relatively large (max. diameter ±2.0 mm), biconcave with deeply sunken spire; umbilicus moderate to wide; whorls deep and tightly coiled, strongly rounded apically and basally, less so at periphery; last adult whorl not descendant. Protoconch smooth, apical cap noticeably swollen, almost circular; teleoconch with fine, close-set, axial riblets; riblets compound, composed of several periostracal lamellae; intervals between riblets with fine intermediary axial threads; spiral sculpture faint, mostly scarcely evident. Aperture narrowly lunate; parietal dentition absent; palatal region with 1-3 axially aligned rows of denticles set back up to ⅓ whorl behind outer lip.
Etymology
From the Amathole Mountains, and donta: a contraction of Afrodonta. Gender feminine.
Remarks
Amatholedonta gen. nov. is proposed for two deeply biconcave species from neighbouring regions in the Amathole Mountains. The genus is characterised by the deep shell, fine axial teleoconch sculpture, smooth globose protoconch and episodic deposition of palatal denticles. Species of Costulodonta gen. nov. may be similar, but they have less deep whorls, a flat or slightly raised spire, and a costate protoconch.
Diagnosis
Shell relatively large, deeply biconcave; protoconch with globose apical cap, smooth (diameter ±305 μm); teleoconch sculptured by distinct, close-set, compound axial riblets; intervals between riblets with numerous fine intermediary axial threads; spiral sculpture of indistinct microscopic spiral threads, usually strongest in umbilicus. Aperture narrowly lunate; parietal dentition lacking; palatal region with 1-3 axially aligned rows of small denticles set back ⅛ to ⅓ whorl behind outer lip, outermost row sometimes visible through aperture, other rows apparent only through translucent shell; each row with up to 5 denticles, with an additional small baso-columellar denticle (easily overlooked); number of denticles and position of rows somewhat variable and related to degree of development; those below periphery more elongate, those above more rounded. Shell translucent, honey-brown when fresh; diameter up to 2.0 mm. (Connolly, 1939)
Distribution and conservation
A narrow-range endemic (Fig. 9), known only from the Hogsback to Stutterheim area in the Amathole Mountains, E. Cape, at 1075-1300 m a.s.l.; in leaf-litter of southern mistbelt forest. Both the Hogsback and Kologha forests are formally protected areas.
Remarks
Resembles its congener Amatholedonta fordycei gen. et sp. nov.; however, in the latter the axial sculpture is finer and there are only three denticles in each of the rows of palatal teeth. Biomphalodonta forticostata gen. et sp. nov. has a similarly shaped shell, but the axial sculpture is much coarser, the rows of palatal denticles are more distinctly C-shaped and the minute denticle at the junction of the basal and columellar lips is lacking. In addition, the protoconch of B. forticostata gen. et sp. nov. has distinct axial sculpture.
Whereas Connolly (1939) cited six palatal denticles per row in Am. bimunita gen. et comb. nov., Solem (1970) cited only five. This discrepancy indicates either that Solem overlooked the minute baso-columellar denticle or that the NHMUK specimen he examined had fewer denticles than the holotype described by Connolly (this is sometimes the case). Since Solem described the lowest denticle as being the most posterior and the middle two the most anterior, this would suggest that he overlooked the baso-columellar denticle, which is slightly anterior to the lowest palatal denticle.
Diagnosis
Shell relatively large, biconcave with deep, tightly coiled whorls; spire distinctly sunken; protoconch smooth, globose; teleoconch sculptured by fine, close-set axial riblets, spiral sculpture virtually obsolete; aperture narrow, crescent-shaped, lacking parietal and columellar dentition; palatal region with 1-2 axially aligned rows of denticles set back ⅙ and ⅓ whorl behind outer lip, visible only by transparency; each row with three denticles, situated at, below, and above periphery; umbilicus of moderate width and deep, V-shaped. Shell translucent, straw-brown; diameter up to 2.0 mm.
Etymology
Named after the type locality, Fort Fordyce, E. Cape.
Description
Shell relatively large, diameter up to 2.0 mm, H/D ratio ±0.55, biconcave with deep, tightly coiled whorls; spire distinctly sunken; last adult whorl not descendant; suture strongly indented, and apical and basal portions of whorls strongly convex, peripheral portion evenly rounded. Protoconch comprising globose apical cap plus approx. 1.0 whorl; diameter ±325 μm; smooth. Teleoconch of up to 3.75 whorls; sculptured by fine, close-set axial riblets, with intervals 1.0-1.5 times their width at whorl periphery; spiral sculpture virtually obsolete. Umbilicus of moderate width and deep, V-shaped. Aperture narrow, crescent-shaped, but apical and basal limits rounded; parietal and columellar dentition lacking; palatal region with 1-2 axially aligned rows of denticles set back ⅙ and ⅓ whorl behind outer lip, not visible through aperture and apparent only through translucent shell; each row with three denticles, situated at, below and above periphery, the upper two rounded (uppermost sometimes axially elongate), the lower one an elongate, in-running ridge. Shell translucent, straw-brown.
Distribution and conservation
A narrow-range endemic (Fig. 9), known only from the Fort Fordyce Nature Reserve, near Fort Beaufort, E. Cape, at approx. 1100 m a.s.l.; in leaf-litter of southern mistbelt forest. Fort Fordyce Nature Reserve is a formally protected area managed by Eastern Cape Parks and Tourism Agency.
Remarks
Amatholedonta fordycei gen. et sp. nov. resembles Am. bimunita gen. et comb. nov. from the neighbouring Hogsback region. It differs from that species in having much finer axial sculpture and a somewhat less deeply sunken spire. In addition, the palatal dentition contains only three denticles per row (instead of 5-6), of which the most basal one is markedly more elongate.
Diagnosis
Shell relatively large (max. diameter 2.7 mm), biconcave with deeply sunken spire; umbilicus wide; whorls deep and tightly coiled, strongly rounded apically and basally, less so at periphery; last adult whorl not descendant. Protoconch with fine, close-set, regularly spaced axial riblets; riblets crisp and simple. Teleoconch with relatively coarse, widely spaced, lamellate axial riblets; intervals between riblets with numerous, fine, intermediary axial threads and even finer spiral threads. Aperture narrowly lunate; parietal dentition absent; palatal region with 1-2 axially aligned rows of denticles set back up to ½ whorl behind outer lip.
Etymology
From the Latin bi-: two, twice and the Greek omphalos (ομφαλός): navel, and donta: a contraction of Afrodonta; with reference to the biconcave shell morphology. Gender feminine.
Remarks
Biomphalodonta gen. nov. is proposed for a single highly characteristic, range-restricted species. The axially costate protoconch is similar to that found in Costulodonta gen. nov., but the deep, relatively large, biconcave shell and coarse teleoconch sculpture set it apart from members of that genus. The shape of the shell and the iterodont palatal dentition resemble those of Amatholedonta gen. nov., but in that genus the protoconch is smooth and globose, and the teleoconch sculpture much finer.
A number of small, edentate charopid species currently referred to Trachycystis Pilsbry, 1893 are conchologically similar to B. forticostata gen. et sp. nov., except for their lack of apertural dentition. These include, inter alia, Trachycystis bathycoele (Melvill & Ponsonby, 1892) and T. bifoveata Connolly, 1932. Further study is needed in order to establish whether these also belong to Biomphalodonta gen. nov. It may be that the presence/absence of apertural dentition is of limited phylogenetic significance and that the genus contains both dentate and edentate species.
Diagnosis
Shell relatively large, biconcave with deep, tightly coiled whorls; spire deeply sunken; protoconch sculptured by fine axial riblets; teleoconch with relatively strong, widely spaced axial ribs, their intervals with microscopic axial and even finer spiral threads; aperture narrow, crescent-shaped, lacking parietal and columellar dentition; palatal region with 1-2 axially aligned rows of small denticles set back ¼ and ½ whorl behind outer lip, visible only by transparency; rows broadly C-shaped, each with up to 6 denticles; umbilicus wide and deep, V-shaped. Shell somewhat translucent, typically pale straw-brown, but occasional specimens milky-white; diameter up to 2.7 mm.
Etymology
From the Latin fortis: strong and costa, costata: a rib, ribbed; with reference to the coarse axial sculpture.
Description
Shell relatively large, diameter up to 2.7 mm, H/D ratio ±0.56; biconcave with deep, tightly coiled whorls; spire deeply sunken; last adult whorl not descendant; suture strongly indented, and apical and basal portions of whorls strongly convex, peripheral portion less so; largest individuals with a shallow supra-peripheral indentation in second half of last adult whorl and a weakly angled periphery (Fig. 8M-N). Protoconch comprising apical cap plus 0.75 whorl; diameter ±340 μm; sculptured by fine axial riblets, relatively widely spaced on apical cap, becoming progressively more close-set toward junction with teleoconch. Teleoconch of up to 4.0 whorls, its sculpture coarse, comprising relatively strong, widely spaced axial ribs, their intervals with close-set microscopic axial threads; even finer, close-set microscopic spiral threads also evident in fresh specimens; axial ribs for the most part regularly spaced, but becoming less regular, more close-set and noticeably sinuous in last quarter whorl of largest specimens. Umbilicus wide and deep, V-shaped, but suture indented and whorl margins strongly convex. Aperture narrow, crescent-shaped, but apical and basal limits rounded; parietal and columellar dentition lacking; palatal region with 1-2 axially aligned rows of small denticles set back ¼ and ½ whorl behind outer lip, not visible through aperture and apparent only through translucent shell; each row with up to 6 denticles; number and alignment of denticles somewhat variable and related to degree of development; those nearest periphery usually slightly more anterior and rows thus broadly C-shaped at full development. Shell somewhat translucent, typically pale straw-brown, but occasional specimens milky-white.
Distribution and conservation
A narrow-range endemic (Fig. 9), known only from south-western KwaZulu-Natal, between Kokstad and Donnybrook, at 1300-1400 m a.s.l.; living in leaf-litter of southern mistbelt forest. Although forests in this area are theoretically protected, access is completely uncontrolled and they are often in close proximity to exotic timber plantations and thus exposed to threats associated with disturbance and alien plant invasion.
Remarks
The relatively large and deeply biconcave shell of this species resembles those of Amatholedonta bimunita gen. et comb. nov. and Am. fordycei gen. et sp. nov. The coarse axial sculpture of the present species, however, renders it distinctive. Additionally, in Am. bimunita gen. et comb. nov. the palatal dentition, as seen by transparency, comprises 1-3 more or less vertical rows of five denticles (occasionally four), the lower three in each row more elongate. A minute sixth denticle is also present at the junction of the basal and columellar lips, but is easily overlooked. In Am. fordycei gen. et sp. nov. there are similarly 1-3 axially aligned rows of denticles, but each has only three denticles, the lowest of which is markedly more elongate. In neither Am. bimunita gen. et comb. nov. nor Am. fordycei gen. et sp. nov. is the protoconch axially ribbed. With a maximum diameter of 2.7 mm, B. forticostata gen. et sp. nov. is the largest species of dentate charopid snail known to date from southern Africa.
Diagnosis
Shell small (max. diameter ±1.8 mm), spire flat or weakly raised; umbilicus moderate to wide; whorls tightly coiled, strongly rounded apically and basally, less so at periphery; last adult whorl weakly descendant. Protoconch with fine, close-set, regularly spaced axial riblets; riblets crisp and simple, often with indistinct traces of irregular spiral threads in their intervals. Teleoconch also with fine, close-set axial riblets, but these coarser than those on protoconch; riblets compound, composed of several periostracal lamellae; intervals between riblets with finer intermediary axial threads; spiral sculpture absent or restricted to microscopic spiral threads. Aperture obliquely lunate, slightly wider basally; aperture variously furnished with parietal and palatal dentition in the form of denticles and/or in-running lamellae/ridges.
Other material
10 specimens; same collection data as for preceding; NMSA A9185 • 3 specimens; same collection data as for preceding; NMSA A9208 • 15 specimens; same collection data as for preceding; NMSA A9210.
Distribution and conservation
A narrow-range endemic (Fig. 11), known only from the vicinity of Van Reenen, KwaZulu-Natal; no accurate altitude or habitat data available, presumably in leaf-litter of northern afrotemperate forest at 1500-1800 m a.s.l. Forested habitats in nature reserves along the Free State/KwaZulu-Natal border (e.g. Ingula Nature Reserve) need to be surveyed in the hope of finding extant colonies of this species in formally protected areas.
(Connolly, 1933) gen. et comb. nov., Mkolombe Mtn, Weenen, KwaZulu-Natal, diameter 1.32 mm (NMSA A9187).
Remarks
Connolly (1939) mentioned material from Mount Vengo (on the Zimbabwe/Mozambique border) collected by Bernard Cressy. No such material is in NMSA, but a single specimen is present in NHMUK (1937.12.30.2727). This is damaged and the apertural dentition is missing (J. Ablett pers. comm., Dec. 2019). As a result it is unidentifiable, but the external sculpture is somewhat coarser than in topotypic C. acinaces gen. et comb. nov. and it seems unlikely that it is conspecific therewith. Thus, the original samples collected by Henry Burnup remain the only ones known. Judging by the number of specimens in these samples, the species must be locally common.
Diagnosis
Shell small, spire flat or at most slightly raised; protoconch for the most part sculptured by close-set axial riblets; teleoconch sculpture of close-set, compound axial riblets and microscopic spiral threads; aperture lacking parietal and columellar dentition; palatal region with two relatively small denticles, one at mid-whorl, the other basal. Shell translucent, corneous-brown to straw-brown when fresh; diameter up to 1.65 mm.
Description
Shell small, diameter up to 1.65 mm, H/D ratio ±0.50; spire flat or at most slightly raised; whorls tightly coiled; last adult whorl slightly descendant; suture indented, periphery evenly convex. Protoconch comprising apical cap plus approx. 0.75 whorl; diameter ±330 μm; initially smooth, but for the most part sculptured by close-set axial riblets, with indistinct traces of irregular spiral threads. Teleoconch of up to 3.5 whorls; sculptured by distinct, close-set, compound axial riblets with 3-4 finer intermediary axial threads; intervals between riblets 1-2 times riblet width at whorl periphery; spiral sculpture of microscopic threads, strongest below suture. Umbilicus of moderate width. Aperture lunate, somewhat broader basally; parietal and columellar dentition lacking; palatal region with two relatively small denticles, one at mid-whorl, the other basal, set back approx. ⅛ whorl behind outer lip (sometimes weak). Shell translucent, corneous-brown to straw-brown when fresh.
Distribution and conservation
A narrow-range endemic (Fig. 11), known only from the Drakensberg foothills ('Little Berg') in the Giant's Castle area, KwaZulu-Natal, at approx. 1700-1800 m a.s.l.; in leaf-litter of northern afrotemperate forest. The area falls within the Giant's Castle Game Reserve, which is part of the Maloti-Drakensberg World Heritage Site. It is thus afforded a high degree of protection.
Remarks
Amongst its congeners, Costulodonta bidens gen. et sp. nov. is rendered distinctive on account of its relatively simple apertural dentition. Superficially the shell shows considerable resemblance to that of 'Trachycystis' contabulata Connolly, 1932, but that species is larger (diameter up to 2.4 mm) and lacks apertural dentition. However, it does have a similarly sculptured protoconch and the two species may in fact be related. Though known to occur in the forests of the broader Giant's Castle area (Herbert & Kilburn 2004), 'T.' contabulata has, to date, not been found to co-occur with C. bidens gen. et sp. nov.
(Connolly, 1933) gen. et comb. nov.
Distribution and conservation
A narrow-range endemic (Fig. 11), known only from the environs of Weenen, KwaZulu-Natal; no accurate altitude or habitat data available. Wooded habitats in Weenen Nature Reserve need to be surveyed in the hope of finding extant colonies of this species in a formally protected area.
Remarks
May be confused with Afrodonta connollyi, which sometimes has a baso-columellar ridge, but in that species the parietal lamellae are stronger, the protoconch is smooth, the teleoconch silky, the umbilicus narrower, and both parietal lamellae extend to or slightly beyond the aperture edge.
The original samples from uMkholombe Mtn collected by H.P. Thomasset remain the only ones known. The specimens from Tugela Estates (28.74° S, 30.17° E) mentioned by Connolly (1933) were subsequently described as Afrodonta connollyi by Solem (1970).
Diagnosis
Shell small, spire flat or at most slightly raised; protoconch with close-set axial riblets; teleoconch sculpture of close-set, compound axial riblets and microscopic spiral threads; aperture with two rounded, in-running parietal lamellae, the lower one stronger, and a well-developed, in-running, ridge-like basocolumellar denticle; palatal region with four denticles recessed approx. ⅛ whorl behind outer lip. Shell translucent, corneous-brown to pale honey-brown when fresh; diameter up to 1.8 mm.
Etymology
From the Latin plus: more, and dens: tooth; with reference to the many apertural teeth.
Description
Shell small, diameter up to 1.8 mm, H/D ratio ±0.50; spire flat or at most slightly raised; whorls tightly coiled; last adult whorl very slightly descendant; suture indented, periphery evenly convex. Protoconch comprising apical cap plus approx. 0.75-1.0 whorl; diameter ±365 μm; initially smooth, but for the most part sculptured by close-set axial riblets, with indistinct traces of irregular spiral threads. Teleoconch of up to 3.25 whorls; sculptured by distinct, close-set, compound axial riblets with 3-4 finer, uneven, intermediary axial threads; intervals between riblets 1-2 times riblet width at whorl periphery; spiral sculpture of microscopic threads more or less throughout, 1-2 stronger threads below suture (visible only under SEM). Umbilicus relatively wide. Aperture lunate, broader basally; parietal region with two rounded, in-running lamellae, the lower one stronger and projecting slightly beyond aperture; baso-columellar region with a well-developed, in-running, ridge-like denticle; palatal region with four denticles recessed approx. ⅛ whorl behind outer lip, shape of denticles somewhat variable, the two below mid-whorl usually elongate, the one just above mid-whorl often more rounded, the fourth subsutural and very small. Shell translucent, corneous-brown to pale honey-brown when fresh.
Distribution and conservation
A narrow-range endemic (Fig. 11), known only from the escarpment north of Utrecht, KwaZulu-Natal, at 1700-1850 m; northern afrotemperate forest, in leaf-litter and amongst epiphytic moss on tree trunks. The neighbouring Paardeplats, Pongola Bush and Tafelkop nature reserves need to be surveyed in the hope of finding extant colonies of this species in formally protected areas.
Diagnosis
Shell very small, spire slightly raised; protoconch with close-set axial riblets (diameter ±415 μm); teleoconch sculpture of compound axial riblets with 2-4 intermediary axial threads; spiral sculpture of close-set, spiral threads, strongest below suture, indistinct elsewhere; parietal region with a low, narrow, in-running ridge; baso-columellar region with a broad, low, in-running ridge; palatal region with a strong, broad, ridge-like denticle just below mid-whorl; umbilicus relatively wide. Shell buff to pale ochre; diameter up to 1.5 mm.
Other material
SOUTH AFRICA • 4 specimens; same collection data as for lectotype; ex Albany Museum; NMSA V3552 • 4 specimens; same collection data as for lectotype; ex Transvaal Museum; NMSA W462.
Distribution and conservation
A narrow-range endemic (Fig. 11), known only from the Dargle area, KwaZulu-Natal, at ±1150 m a.s.l.; presumably in leaf-litter of southern mistbelt forest. Forested habitats in nature reserves in the Bulwer-Dargle-Karkloof area need to be surveyed in the hope of finding extant colonies of this species in formally protected areas.
Remarks
Despite the KwaZulu-Natal Midlands being a relatively well-sampled area, the original samples collected by Henry Burnup remain the only ones known of this species.
Diagnosis
Shell small (max. diameter ± 1.7 mm), shallowly and symmetrically biconcave; umbilicus very wide; whorls tightly coiled; last adult whorl not descendant. Protoconch initially with traces of close-set spiral threads, later portion with spiral threads and close-set axial riblets producing a fine, reticulate sculpture; teleoconch with fine, close-set, axial riblets; riblets compound, composed of several periostracal lamellae; intervals between riblets with fine, uneven, intermediary axial threads and fine spiral threads. Aperture narrowly and symmetrically lunate; parietal dentition absent; palatal region with 1-3 broad, prosocline, axial ridge-like calluses.
Etymology
From the Latin itero, iteratus: repeat, repeated; and the Greek odontos (οδοντος): tooth; with reference to the episodic deposition and resorption of the palatal dentition. Gender feminine.
Remarks
Iterodonta gen. nov. is distinctive on account of its reticulate protoconch, relatively strong spiral sculpture on the teleoconch and prosocline palatal calluses.
Diagnosis
Shell small, symmetrically biconcave; protoconch initially with traces of close-set spiral threads, later portion with spiral threads and close-set axial riblets producing a fine, reticulate sculpture; teleoconch sculpture of close-set, compound axial riblets with finer, uneven intermediary axial threads; aperture lacking parietal and columellar dentition; palatal region with 1-3 broad, prosocline, axial ridge-like calluses set back from lip, visible through translucent shell; umbilicus very wide and shallow. Shell translucent, straw-brown to pale honey-brown; diameter up to 1.68 mm.
Etymology
From ammonite (Ammonoidea); with reference to the symmetrically biconcave shell.
Description
Shell small, diameter up to 1.68 mm, H/D ratio ±0.48; symmetrically biconcave, whorls tightly coiled, but not conspicuously deep; spire sunken, but not deeply so; last adult whorl not descendant; suture strongly indented and apical and basal portions of whorls strongly convex, less so at mid-whorl; periphery evenly convex. Protoconch comprising apical cap plus approx. 1.25 whorls; diameter ±330 μm; initially with traces of close-set spiral threads, later portion with spiral threads and close-set axial riblets producing a fine, reticulate sculpture. Teleoconch of up to 3.5 whorls; sculptured by distinct, close-set compound, orthocline axial riblets with finer, uneven intermediary axial threads; intervals between riblets approx. twice riblet width at whorl periphery; spiral sculpture relatively distinct, comprising microscopic spiral threads, most obvious in intervals between riblets; threads coarsest below suture and in umbilicus, but present throughout. Umbilicus very wide and relatively shallow (Fig. 13B). Aperture narrow, more or less symmetrically lunate, with apical limit rounded; parietal and columellar dentition lacking; palatal region with 1-3 broad, prosocline, axial, ridge-like calluses set back ⅛-½ whorl from lip (position variable), visible through translucent shell (Fig. 13D); calluses slightly curved, one usually well developed, the others in the process of resorption or deposition. Shell translucent, straw-brown to pale honey-brown.
Distribution and conservation
A narrow-range endemic (Fig. 14), known only from the south-facing slopes of the Langeberge and Riviersonderendberge, W. Cape, at 300-450 m a.s.l.; in leaf-litter of southern afrotemperate forest. Grootvadersbosch Nature Reserve is a formally protected area and the indigenous forests of the Langeberge and Riviersonderendberge are generally well managed, with additional formally protected areas that should be surveyed in the hope of finding additional extant colonies of this species.
Remarks
Iterodonta ammonita gen. et sp. nov. is highly distinctive amongst the southern African charopid fauna. The only other African species with similar palatal dentition is Endodonta kempi Connolly, 1925, recorded from Kenya, Malaŵi and Zambia (Bruggen 1988, 2007), but in that species the spire is not sunken, the umbilicus is narrower and the protoconch lacks distinctive sculpture. Solem (1970) referred the latter to Afrodonta, but this seems improbable and its true relationships require further investigation.
Diagnosis
Shell very small to relatively large (adult diameter ± 1.3-2.0 mm), spire slightly sunken to slightly elevated; umbilicus moderate to very wide; whorls strongly rounded, periphery at mid-whorl. Protoconch smooth or with low, weakly undulant axial sculpture. Teleoconch with close-set axial riblets; riblets compound, composed of several periostracal lamellae; intervals between riblets with several finer intermediary axial threads; spiral sculpture weak, at most comprising extremely fine, close-set spiral threads. Aperture lunate to broadly lunate, variously furnished with parietal and palatal dentition in the form of denticles and/or in-running lamellae/ridges.
Remarks
Phialodonta gen. nov. is proposed for a group of species which share a smooth or weakly sculptured protoconch and a relatively coarse teleoconch sculpture of compound axial riblets with multiple fine intermediary axial threads. The genus ranges widely, from the Agulhas region, W. Cape to the KwaZulu-Natal Midlands. In Costulodonta gen. nov. the protoconch has distinct radial riblets and the teleoconch sculpture is generally somewhat finer. Members of Afrodonta have a somewhat similar protoconch, but they possess a silky teleoconch sculpture comprised of simple axial riblets of alternating strength.
Key to species of Phialodonta gen. nov.
Diagnosis
Shell very small, planorboid, spire slightly sunken; protoconch lacking axial sculpture; teleoconch sculpture of close-set axial riblets with extremely fine, close-set spiral threads in their intervals; aperture lacking parietal and baso-columellar dentition, palatal region with three recessed, in-running, ridge-like denticles; umbilicus wide. Shell corneous-brown; diameter up to 1.3 mm.
Etymology
Named after the Agulhas region.
Description
Shell very small, diameter up to 1.3 mm, H/D ratio 0.47; planorboid, spire slightly sunken; whorls tightly coiled, not conspicuously deep; last adult whorl not descendant; suture shallowly indented, periphery evenly convex. Protoconch comprising apical cap plus approx. 1.0 whorl; diameter ±320 μm; microscopically shagreened, lacking axial sculpture. Teleoconch of ±3.0 whorls; sculptured by distinct, close-set, axial riblets, intervals between riblets more or less equal to width of riblets at whorl periphery; spiral sculpture of extremely fine, close-set threads in riblet intervals. Umbilicus wide. Aperture lunate, relatively narrow, apical and basal limits rounded; parietal and baso-columellar regions lacking dentition; palatal region with three recessed, in-running, ridge-like denticles, one just above mid-whorl, one just below mid-whorl and one basal. Shell corneous-brown.
Distribution and conservation
A narrow-range endemic (Fig. 14), known only from the type locality; in leaf-litter of southern afrotemperate forest. The type locality lies in a well-managed private nature reserve, but additional Agulhas Plain localities that retain patches of southern afrotemperate forest need to be surveyed in the hope of finding additional extant colonies of this species.
Remarks
The holotype is the only specimen available and though it is not a fresh shell, it is sufficiently distinct to permit its description as a new species. I have refrained from examining it under SEM due to its fragility and thus finer details of its sculpture are not available. On account of its smooth protoconch, relatively distinct axial riblets and wide umbilicus, I refer it to Phialodonta gen. nov. In its planorboid shape it resembles P. introtuberculata gen. et comb. nov., but that species has a proportionately narrower shell (mean H/D ratio 0.424; Solem 1970) and very different apertural dentition.
Diagnosis
Shell very small, spire slightly raised; protoconch lacking axial sculpture; teleoconch sculpture of curved, close-set, compound, axial riblets, with 4-5 finer, intermediary axial threads; spiral sculpture of extremely fine, close-set threads; aperture with a low, rounded, in-running parietal ridge, lacking basocolumellar dentition; palatal region with two pairs of close-set, spirally aligned denticles, one pair close to mid-whorl, the other basal; umbilicus wide. Shell pale corneous-brown to honey-brown; diameter up to 1.4 mm.
Etymology
From the Latin ater: black, and mons: a mountain; with reference to the Groot Swartberg range.
Description
Shell very small, diameter up to 1.4 mm, H/D ratio ±0.43; spire slightly raised; whorls tightly coiled, relatively narrow; last adult whorl descending slightly below penultimate whorl; suture narrowly indented, periphery evenly convex. Protoconch comprising apical cap plus approx. 0.67 whorl; diameter ±410 μm; smooth to microscopically shagreened, lacking axial sculpture. Teleoconch of up to 2.25 whorls; sculptured by distinct, curved, close-set, compound axial riblets, with 4-5 finer, uneven, intermediary axial threads; intervals between riblets approx. twice riblet width at whorl periphery; spiral sculpture of extremely fine, close-set threads more or less throughout. Umbilicus wide. Aperture broadly lunate to almost D-shaped; parietal region with a low, rounded, in-running ridge just below mid-whorl; baso-columellar region lacking dentition; palatal region with two pairs of close-set, spirally aligned denticles, one pair close to mid-whorl, the other basal. Shell pale corneous-brown to honey-brown, usually encrusted with soil particles.
Distribution and conservation
A narrow-range endemic (Fig. 14), known only from the southern edge of the Groot Swartberge in the region of Calitzdorp, at 360 m a.s.l.; in accumulations of leaf-litter in sheltered microhabitats within Gamka Thicket. The type locality lies close to the protected Klein Swartberg Mountain Catchment Area and there are also several private and provincial nature reserves in the vicinity. Further sites in these areas need to be surveyed in the hope of finding additional extant colonies of the species.
Remarks
Conchologically closest to Phialodonta aviana gen. et sp. nov. and P. rivalalea gen. et sp. nov., but differs in the form of the palatal dentition. In specimens at intermediate growth stages the palatal denticles may be single rather than paired, or even totally absent.
Diagnosis
Shell small, spire raised; protoconch lacking axial sculpture; teleoconch sculpture of distinct, close-set, compound axial riblets, intervals with finer intermediaries and extremely fine, close-set spiral threads; aperture lacking visible dentition; all dentition deeply recessed, comprising two low, rounded, in-running parietal ridges and three in-running, ridge-like palatal denticles, visible by transparency; baso-columellar dentition lacking; umbilicus wide. Shell pale corneous-brown to honey-brown when fresh; diameter up to 1.55 mm.
Etymology
From the Latin avium: a desert, wilderness; with reference to the Wilderness region, W. Cape.
Description
Shell small, diameter up to 1.55 mm, H/D ratio ±0.5; spire raised, whorls tightly coiled; last adult whorl slightly descendant; suture narrowly indented, somewhat sunken; periphery evenly convex. Protoconch comprising apical cap plus approx. 1.0 whorl; diameter ±360 μm; smooth to microscopically shagreened, lacking axial sculpture. Teleoconch of up to 3.25 whorls; sculptured by distinct, close-set, compound axial riblets, with ±5 finer, intermediary axial threads; intervals between riblets approx. twice riblet width at whorl periphery; spiral sculpture of extremely fine, close-set threads more or less throughout. Umbilicus wide. Aperture broadly lunate, lacking any visible dentition; all dentition deeply recessed; parietal region with two low, rounded, in-running ridges, lower one stronger; baso-columellar region lacking dentition; palatal region with three in-running, ridge-like denticles visible by transparency, one just above mid-whorl, one basal and the third between these. Shell pale corneous-brown to honey-brown when fresh.
Distribution and conservation
A narrow-range endemic (Fig. 14), known only from the coastal hinterland in the Outeniqua-Tsitsikamma region, in the environs of Wilderness, Knysna and Nature's Valley, from the coast to 380 m a.s.l.; in leaf-litter of southern afrotemperate forest. The forests in this region fall within the Garden Route National Park and are thus afforded a high degree of protection.
Remarks
As in Phialodonta perfida gen. et comb. nov., the internal dentition of P. aviana gen. et sp. nov. is recessed to such an extent that it is not visible in undamaged apertural view. The palatal denticles, however, are visible externally by transparency, but the parietal lamellae can only be seen if the palatal region is broken back. Phialodonta perfida gen. et comb. nov. differs from the present species in having a single, inwardly broadening, parietal lamella and only two palatal ridges. It also attains a larger size. P. perfida gen. et comb. nov. is only recorded from the Grahamstown area, and the known ranges of the two species are separated by a distance of over 300 km.
The easternmost population of P. aviana gen. et sp. nov. in the Nature's Valley area is unusual in that some individuals have four palatal denticles instead of three, the upper two of which are distinctly longer than the lower two. In other respects, however, they are identical to typical specimens from the Knysna-Wilderness area. This population is also noteworthy in that it shows that P. aviana gen. et sp. nov. and P. rivalalea gen. et sp. nov. are parapatric, perhaps even sympatric, in the vicinity of Nature's Valley. Additional survey work is needed to further explore this issue. The differences between the two are discussed in the remarks pertaining to P. rivalalea gen. et sp. nov.
Phialodonta introtuberculata (Connolly, 1933) gen. et comb. nov.
Figs 14, 16A-D, 19G-H
Afrodonta introtuberculata Connolly, 1933: fig. 1(9), pl. 7, figs 5-8.
Distribution and conservation
A narrow-range endemic (Fig. 14), known only from the Karkloof-Nottingham Road area in the KwaZulu-Natal Midlands, at 1300-1500 m a.s.l.; in leaf-litter of southern mistbelt forest. Judging by the numbers of specimens in the original samples collected by A.J. Taynton in the Nottingham Road area (pre-1928), the species may be locally common or abundant. There are a number of provincial and private nature reserves in this area in which the species has been collected in recent years.
Remarks
Phialodonta introtuberculata gen. et comb. nov. is distinguished from other species in the genus by its much flatter shell and paired palatal dentition. Afrodonta geminodonta sp. nov. has similar sets of paired palatal denticles, but its spire is raised, the axial sculpture much finer and the umbilicus much narrower.
Specimens somewhat resembling P. introtuberculata gen. et comb. nov. have been found in the Creighton area (Hlabeni Forest, 29.975° S, 29.742° E, NMSA V5219). However, although they have similar palatal dentition, their axial sculpture is noticeably finer, the umbilicus not as wide and they are milky-white in colour.
Phialodonta perfida (Burnup, 1912) gen. et comb. nov.
Diagnosis
Shell relatively large, spire distinctly raised, last adult whorl descendant, whorls slightly flat-sided; protoconch evidently smooth (somewhat worn in the material available), diameter ±425 μm; teleoconch sculptured by relatively strong, close-set, compound axial riblets, intervals between riblets with 3-4 microscopic axial threads; spiral sculpture of faint microscopic threads, but for the most part scarcely evident, even in umbilicus; aperture broadly lunate, with no visible dentition; parietal region with a single deeply recessed, low lamella, broadening inwardly; baso-columellar dentition lacking; palatal region with two recessed, relatively long, in-running ridge-like denticles, one at periphery, the other in middle of base, visible by transparency; umbilicus wide. Shell translucent, corneous-brown when fresh; diameter up to 1.95 mm.
Distribution and conservation
A narrow-range endemic (Fig. 14), known only from the vicinity of Grahamstown, E. Cape, at ± 700 m a.s.l.; in leaf-litter of southern mistbelt forest. The only material available originates from the Albany Museum, Grahamstown and was collected by J. Farquhar in the early 1900s. Surveying forested habitats in the Grahamstown area should thus be identified as a priority in the hope of finding extant colonies of this species.
Phialodonta rivalalea gen. et sp. nov.
Description
Shell small, diameter up to 1.7 mm, H/D ratio ±0.47; spire slightly to distinctly raised; whorls tightly coiled; last adult whorl descendant; suture narrowly indented, somewhat sunken; periphery evenly convex. Protoconch comprising apical cap plus approx. 1.0 whorl; diameter ±410 μm; microscopically shagreened, lacking axial sculpture. Teleoconch of up to 3.5 whorls; sculptured by distinct, close-set, compound axial riblets, with 4-5 finer, intermediary axial threads; intervals between riblets 1-2 times width of riblets at whorl periphery; spiral sculpture of extremely fine, close-set threads more or less throughout. Umbilicus very wide, its margin relatively sharply rounded. Aperture broadly lunate; parietal region with two narrow, in-running lamellae, the upper one weaker and slightly more recessed; basocolumellar region with a similar recessed, narrow, in-running, lamella-like ridge (sometimes scarcely evident in apertural view); palatal region with three denticles recessed 1 ⁄5-1 ⁄4 whorl behind outer lip, denticles evident by transparency, but hardly visible in apertural view, one just above mid-whorl, one just below mid-whorl, the third more basal, middle denticle situated slightly closer to aperture and often more elongate (particularly in juveniles); a fourth palatal structure in the form of a narrow, thread-like ridge lying immediately below the suture is evident by transparency in fresh juveniles and subadults. Shell pale corneous-brown to pale honey-brown when fresh.
Distribution and conservation
A narrow-range endemic (Fig. 14), known only from the coastal hinterland in the Tsitsikamma region, southern Cape, from the coast to 260 m a.s.l.; in leaf-litter of southern afrotemperate forest. The forests in this region fall within the Garden Route National Park and are thus afforded a high degree of protection.
Remarks
In terms of the overall facies of the shell, Phialodonta rivalalea gen. et sp. nov. is closest to P. aviana gen. et sp. nov. It differs from the latter in having parietal lamellae that are visible through the aperture, and in possessing a narrow, in-running baso-columellar ridge and a wider umbilicus.
Discussion
Apertural barrier deposition
Solem (1970) hypothesised that there are two contrasting modes of apertural tooth development in Afrodonta s. lat. In one group, including the type species, he suggested that tooth growth proceeds by continuous resorption and deposition at the posterior and anterior ends of the teeth, respectively, such that they develop at an early age and their appearance and position relative to the peristome remains more or less constant throughout growth. In a second group, he considered tooth deposition to be episodic. Here, after one set of denticles is laid down, tooth deposition ceases while the shell continues to grow for ¼ to ⅓ of a whorl, after which another set of denticles is deposited and the process of resorbing the earlier set begins. The position of the denticles relative to the peristome is thus continually changing and the number of denticles present at any given time is dependent on the interplay between denticle resorption and deposition. In some instances, evidence of three sets of denticles may be apparent. To this latter group, Solem (1970) referred 'Afrodonta' bimunita, 'Af.' introtuberculata and 'Af.' kempi. Solem (1970) believed that these contrasting modes of apertural dentition deposition were incompatible with monophyly, and that at least two lineages were present within Afrodonta s. lat., but he stopped short of proposing a new genus for those exhibiting episodic tooth deposition, preferring instead to wait for confirmatory anatomical data.
The present study has found a good deal of evidence to support Solem's continuous vs. episodic interpretation of apertural tooth deposition. Three of the new genera described herein on the basis of protoconch morphology and teleoconch form and sculpture, namely Amatholedonta gen. nov., Biomphalodonta gen. nov. and Iterodonta gen. nov., consistently exhibit episodic tooth deposition. In contrast, Afrodonta, Costulodonta gen. nov. and Phialodonta gen. nov., for the most part, show continuous tooth deposition. However, it is noteworthy that in some species within these latter genera, tooth deposition may be considered both continuous and episodic. For example, in Afrodonta geminodonta sp. nov. and Phialodonta atromontana gen. et sp. nov. the parietal teeth are deposited continually, but the palatal teeth are not spirally continuous and must be deposited in a more discontinuous or episodic manner. Similarly, in Phialodonta introtuberculata gen. et comb. nov., which lacks parietal dentition, deposition of the palatal teeth is spirally discontinuous and therefore episodic.
There is thus not a clear-cut dichotomy with species having either continuous or episodic tooth deposition and neither is this interpretation of tooth deposition consistent within the genera as shown above. An alternative way to interpret apertural dentition might be to consider it either a primarily spiral feature or a primarily axial one. Where the dentition takes the form of in-running ridges or lamellae, these are primarily spiral and in the few exceptions mentioned above that have discontinuous palatal dentition, the teeth are essentially interrupted structures of spiral origin. In Amatholedonta gen. nov., Biomphalodonta gen. nov. and Iterodonta gen. nov., which never exhibit any kind of in-running ridges or lamellae, the apertural teeth might more appropriately be considered primarily axial features which are either continuous axial ridges (Iterodonta gen. nov.) or interrupted (Amatholedonta gen. nov. and Biomphalodonta gen. nov.), taking the form of axially aligned rows of teeth. Viewing the development of apertural dentition in this manner is then fully consistent with the interpretation of genera as detailed above.
Phylogenetic considerations
The more detailed study of the aperturally dentate southern African charopids undertaken during the course of this revision has confirmed Solem's belief that Afrodonta s. lat. is not a single morphologically coherent entity. Rather, it is an assemblage of genus-level taxa that until now have been grouped together on account of a single shared, but not necessarily homologous character, namely the possession of apertural barriers. Closer scrutiny of protoconch and teleoconch microsculpture using scanning electron microscopy has revealed patterns of variation consistent with the subdivision of Afrodonta s. lat. into separate morphologically congruent groups that I have proposed as new genera. These in turn exhibit patterns of apertural tooth deposition congruent with the continuous vs. episodic (spiral vs. axial) interpretation discussed above. Considerable variation in protoconch and teleoconch microsculpture has also been observed in charopids from Australia (Stanisic 1990; Hyman & Stanisic 2005; Stanisic et al. 2010), New Zealand (Marshall & Barker 2008) and the Pacific Islands (Solem 1983) and, as in Afrodonta s. lat., congruent patterns in this variation have provided useful taxonomic characters for the delimitation and diagnosis of genera.
The considerable diversity in shell morphology and sculpture exhibited by the aperturally dentate charopids of southern Africa, together with the differing modes of apertural barrier development, strongly indicates, as suggested by Solem (1970), that they represent a polyphyletic assemblage, rather than a monophyletic entity. The single shared character, the possession of apertural barriers, is almost certainly a convergent character with several independent origins. Climo (1978) and Solem (1980: 15) both considered apertural barriers to have evolved multiple times in New Zealand and Pacific Island charopids, respectively. At present there is insufficient evidence to illuminate the issue of the phylogenetic affinities of the genera discussed herein, but it is likely that some will prove to be more closely related to edentate species currently referred to Trachycystis s. lat. than to other dentate charopids. Examples indicative of such relationships include the similarity between Costulodonta bidens gen. et sp. nov. and 'Trachycystis' contabulata, and that between Biomphalodonta forticostata gen. et sp. nov. and 'T.' bathycoele and 'T.' bifoveata. A more in-depth investigation of these phylogenetic issues will require anatomical and molecular data.
Conservation
With the exception of Afrodonta novemlamellaris, all species discussed herein are endemic to South Africa, though the morphologically similar Af. farquhari is very likely to range northward into coastal Mozambique. Three further species of Afrodonta range widely in eastern South Africa, but based on the available distribution data, the remaining Afrodonta species, plus all species of Amatholedonta gen. nov., Biomphalodonta gen. nov., Costulodonta gen. nov., Iterodonta gen. nov. and Phialodonta gen. nov., are narrow-range endemics. Within this rather imprecise categorisation, eight are local endemics with < 100 km between the most widely separated localities and eleven are site endemics known only from a single locality or with < 10 km between the most widely separated localities (Table 1).
The widely distributed species are all catholic in their habitat requirements and are found in a wide variety of forest types, some extending into woodland and thicket. In contrast, the narrow-range species are all confined to a single forest type. In KwaZulu-Natal and the interior of E. Cape this is either southern mistbelt forest or, at higher altitudes, northern afrotemperate forest. In the southern Cape, narrow-range species are found primarily in southern afrotemperate forests, the sole exception being Phialodonta atromontana gen. et sp. nov., which has only been found in low succulent thicket.
As with all narrowly endemic species, the conservation of these local and site endemics is a matter of concern. In the species treatments above I have indicated whether the species are known to occur in formally protected areas or whether there are formally protected areas in the neighbourhood of occurrence that could be surveyed with the aim of locating additional colonies in conserved areas. Whereas many of the local and site endemics are recently discovered species described herein, five of the site endemics date from material collected in the early 1900s and in every case the original samples remain the only ones known. This likely indicates that they are genuinely of very limited distribution and highlights the need for targeted field work aimed at locating extant colonies. Ultimately, the conservation of these largely forest-dependent snails depends upon conserving their habitat. I have earlier highlighted the importance of indigenous forests from the perspective of terrestrial mollusc conservation in South Africa and outlined the threats facing these habitats (Herbert 1998). Preserving the integrity of isolated habitat fragments, however, may not be sufficient and may not be possible under conditions of rapid climate change. | 2020-04-23T09:12:24.059Z | 2020-04-17T00:00:00.000 | {
"year": 2020,
"sha1": "0b988402d4a21d6855a52f2c38e836f05e46124b",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.5852/ejt.2020.629",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "b4796cdf4e7350269932b8e39a7c38bb738c5045",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
12861694 | pes2o/s2orc | v3-fos-license | A 25-Year Longitudinal Evaluation of Gastric Emptying in Diabetes
OBJECTIVE To evaluate the natural history of gastric emptying in diabetes. RESEARCH DESIGN AND METHODS Thirteen patients with diabetes (12, type 1; 1, type 2) had measurements of gastric emptying, blood glucose levels, glycated hemoglobin, upper gastrointestinal symptoms, and autonomic nerve function at baseline and after 24.7 ± 1.5 years. RESULTS There was no change in gastric emptying of either solids (% retention at 100 min) (baseline 58.5 ± 5% vs. follow-up 51.9 ± 8%; P = 0.35) or liquids (50% emptying time) (baseline 29.8 ± 3 min vs. follow-up 34.3 ± 6 min; P = 0.37). Gastric emptying of solid at follow-up was related to emptying at baseline (r = 0.56, P < 0.05). At follow-up, blood glucose concentrations were lower (P = 0.006), autonomic function deteriorated (P = 0.03), and gastrointestinal symptoms remained unchanged (P = 0.17). CONCLUSIONS In unselected patients with diabetes, gastric emptying appears remarkably stable over 25 years.
There is limited information about the natural history of gastric emptying in diabetes (1-3). We have reported that gastric emptying and symptoms changed little after 12 years of follow-up, possibly because a deterioration in autonomic function was counteracted by better glycemic control (4). We reexamined patients from the same cohort after 25 years.
RESEARCH DESIGN AND METHODS
We studied 13 patients (9 female) with diabetes (12, type 1; 1, type 2) in whom gastric emptying was measured in 1984-1989 (duration of follow-up 24.7 ± 1.5 years). Age was 61 ± 8 years at follow-up, and duration of known diabetes was 38 ± 8 years. Baseline (5,6) and longitudinal (4) measurements in this cohort have been reported. Fifty-three of the original 86 patients were known to be alive; 30 were contactable, and 13 of these patients agreed to participate. When compared with the other 73 patients, those who participated were younger at entry (36.4 ± 2.2 vs. 49.3 ± 1.7 years, P = 0.003), but did not differ in regard to BMI, duration of known diabetes, symptom or autonomic function scores, proportion who smoked, or rates of solid or liquid emptying at baseline. No subject was taking medication known to affect gastrointestinal motility, and smoking was forbidden on the day of the test. Written, informed consent was obtained, and the protocol was approved by the Royal Adelaide Hospital Research Ethics Committee (protocol 091221).
At baseline and follow-up, glycated hemoglobin (HbA1c), plasma creatinine, and blood glucose were measured from an initial venous sample; blood glucose was also measured 30, 60, 90, and 120 min after meal ingestion (4).
Gastric and esophageal symptoms were assessed by a validated questionnaire (6), with a maximum score of 27.
Autonomic nerve function was evaluated by cardiovascular reflex tests (variation in heart rate during deep breathing; heart rate response and fall in systolic blood pressure after standing) (6,7). Each test was scored as 0 (normal), 1 (borderline), or 2 (abnormal). A total score ≥3 was taken to indicate definite autonomic neuropathy.
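In code, this composite score reduces to a simple sum-and-threshold rule. The minimal Python sketch below is illustrative only (the function and argument names are invented); it encodes just what the text states: three tests each scored 0/1/2 and a cut-off of ≥3 for definite autonomic neuropathy.

def autonomic_neuropathy(deep_breathing: int, hr_standing: int, bp_standing: int):
    """Each argument is one reflex-test score: 0 = normal, 1 = borderline, 2 = abnormal."""
    for score in (deep_breathing, hr_standing, bp_standing):
        if score not in (0, 1, 2):
            raise ValueError("each test must be scored 0, 1 or 2")
    total = deep_breathing + hr_standing + bp_standing  # possible range 0-6
    return total, total >= 3  # (total score, definite autonomic neuropathy?)

# Example: one abnormal and one borderline test already meet the cut-off.
print(autonomic_neuropathy(2, 1, 0))  # -> (3, True)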
Statistical analysis
Data were normally distributed and comparisons between baseline and follow-up were evaluated using paired t tests and linear regression analysis, with the exception of autonomic function and symptom scores that were evaluated by Wilcoxon signed rank tests. The sample size of 13 had 80% power to detect a difference in solid emptying of one-third from the baseline value, at P < 0.05 significance. Data are mean ± SEM (or median [interquartile range] for nonparametric data).
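The named tests map directly onto standard library routines. The Python sketch below runs them on made-up numbers; the data arrays and the standardized effect size of 0.85 are assumptions for illustration, while the tests themselves, n = 13, 80% power and P < 0.05 come from the text.

import numpy as np
from scipy import stats
from statsmodels.stats.power import TTestPower

# Hypothetical paired measurements (% solid retention at 100 min).
baseline = np.array([58.5, 62.0, 40.1, 71.3, 55.0, 63.2, 48.9])
follow_up = np.array([51.9, 60.2, 45.5, 66.0, 50.3, 58.8, 52.1])

t_stat, p_paired = stats.ttest_rel(baseline, follow_up)   # paired t test
w_stat, p_wilcox = stats.wilcoxon(baseline, follow_up)    # signed rank test (for scores)
reg = stats.linregress(baseline, follow_up)               # linear regression, gives r

# Power: with 13 pairs and alpha = 0.05, a standardized effect size of
# roughly 0.85 (assumed here) yields about 80% power, matching the text.
power = TTestPower().power(effect_size=0.85, nobs=13, alpha=0.05)
print(f"paired t P={p_paired:.3f}, r={reg.rvalue:.2f}, power={power:.2f}")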
RESULTS
Gastric emptying of solids was abnormally slow in 8 of 13 patients at baseline, and 5 of 13 at follow-up. In one patient, gastric emptying of liquid was not evaluated at follow-up because 67Ga-ethylenediaminetetraacetic acid was unavailable. Liquid emptying was abnormally slow in 6 of 12 patients at baseline and 8 of 12 at follow-up. There was no change in gastric emptying of either solid (P = 0.35) or liquid (P = 0.37), and the rate of solid emptying at baseline was related to emptying at follow-up (r = 0.56, P < 0.05), with a trend for the liquid component (r = 0.49, P = 0.11), which became significant (r = 0.82, P = 0.002) on excluding one subject with abnormally rapid emptying at follow-up (Fig. 1).
At follow-up, autonomic nerve function was not assessed in one patient with atrial fibrillation. Three of the 12 patients had definite autonomic neuropathy at baseline and 8 at follow-up; the total score was lower at baseline (1 [3] vs. 4 [2], P = 0.03).
CONCLUSIONS
This study represents the most prolonged longitudinal evaluation of gastric emptying in diabetes. After approximately 25 years, there was no change in gastric emptying of solids and liquids, or symptom scores, whereas autonomic function deteriorated, but glycemic control improved. The latter is likely to reflect the increased attention given to optimizing glycemic control in diabetes. These observations are consistent with our previous longitudinal study (4) and suggest that both gastric emptying and gastrointestinal symptoms are usually stable over time in patients with diabetes. We acknowledge that the number of subjects studied was relatively small, but there was no trend for a change in either gastric emptying or symptoms. Selection bias cannot be excluded, but other than being younger, the baseline characteristics of those studied did not appear exceptional; many patients who declined or could not be contacted had simply moved, or were very elderly.
Delayed gastric emptying in diabetes is not invariably related to irreversible autonomic neuropathy, but rather has a complex and heterogeneous etiology, including loss/dysfunction of interstitial cells of Cajal, and deficient neurotransmission (8). Gastric emptying is also slowed during acute hyperglycemia (9,10). We observed lower blood glucose at follow-up, which might have ameliorated any progression in irreversible pathology.
Gastric emptying at baseline and follow-up were related, certainly for solids and probably also for liquids. Hitherto, there has been limited information about the "reproducibility" of gastric emptying in diabetes (11). Our observations suggest that, as in health (7), the interindividual variation in gastric emptying in diabetes is much greater than intraindividual variation.
In summary, this prospective study indicates that gastric emptying in patients with long-term diabetes is relatively stable over time.
Acknowledgments
This study was funded by a grant awarded by the National Health and Medical Research Council of Australia.
No potential conflicts of interest relevant to this article were reported. J.C. conducted the study, including preparation of the protocol, subject recruitment, and data collection and analysis, and prepared the manuscript. A.R. assisted in performance of the study, including preparation of the protocol and analysis of gastric emptying data. M.B. assisted in performance of the study. C.K.R. supervised preparation of the protocol and critically reviewed the manuscript. K.L.J. supervised preparation of the protocol and analysis of gastric emptying data, and critically reviewed the manuscript. M.H. conceived the study, supervised preparation of the protocol, and was responsible for final content of the manuscript.
M.H. is the guarantor of this work and, as such, had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.
The authors thank Ms. Kylie Lange, Centre of Clinical Research Excellence in Nutritional Physiology, Interventions, and Outcomes, Adelaide, South Australia, Australia, who provided biostatistical advice. | 2016-05-12T22:15:10.714Z | 2012-11-14T00:00:00.000 | {
"year": 2012,
"sha1": "5cf82ca918709439c9240c786e7fcca376a03343",
"oa_license": "CCBYNCND",
"oa_url": "https://care.diabetesjournals.org/content/diacare/35/12/2594.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "596f7b0b469a22f12780b88bf114e18f430fea1b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
270512912 | pes2o/s2orc | v3-fos-license | Incidence of side effects of antituberculosis drugs and their related factors in northern Iran: a retrospective cohort study
Background: Antituberculosis drugs may cause mild, moderate or severe adverse drug reactions (ADR) leading to poor compliance. Description of the pattern of ADR and their related factors can help the tuberculosis (TB) control program as part of the WHO programs. This study aims to investigate the incidence of ADR and associated factors among TB patients in northern Iran. Methods: This is a retrospective cohort study. The required information, including year of diagnosis, age, gender, residence area, nationality, HIV co-morbidity, history of anti-TB treatment and ADR, was obtained from the Deputy of Health, Mazandaran University of Medical Sciences, Iran. All data were analyzed using SPSS version 21 software. Results: Out of 3903 TB patients, 136 (3.5%) experienced major ADR. The incidence of ADR for men and women as well as for those with and without previous treatment history were 3.9% vs. 3.3% and 5.3% vs. 3.4%, respectively (p>0.05). Multiple logistic regression models showed a higher chance of ADR among those aged over 59 compared with those aged under 29 (OR=2.63, 95% confidence interval: 1.54–4.49). Conclusions: Age over 59 can be considered a risk factor for ADR with anti-TB drug administration.
Background
Despite the reduction in tuberculosis (TB) incidence, its morbidity and mortality remain major global public-health concerns. The annual incidence and mortality of TB have been estimated to be 9.6 million cases and 1.5 million deaths, respectively [1], [2]. The first-line anti-TB drugs (Rifampin, Isoniazid, Pyrazinamide) have an efficacy exceeding 95% [3]. Anti-TB drugs may cause some adverse drug reactions (ADR), varying from mild to severe forms. Just 2%-8% of TB patients may experience severe ADR, such as exanthema, vertigo, psychosis and hepatotoxicity, leading to a termination of or change in treatment regimen. Conversely, mild to moderate ADR, including gastroenteric problems, nausea/vomiting, arthritis, peripheral neuropathy, drug allergy, rash/itching, headache and behavioral problems (insomnia, anxiety, hypolibido), do not require an emergent change in treatment regimen. Such complications may challenge the TB control program [4], [5], [6], [7], [8]. Hepatotoxicity, one of the most severe drug reactions, occurs in the first month of treatment and can be fatal if diagnosed late [2]. The ADR incidence is also affected by dosage and time of drug prescription. Age, nutritional status, co-morbidities such as liver or renal dysfunction as well as HIV infection and alcoholism are other determinants for TB ADR [6]. In addition to the high burdens associated with ADR for patients and communities, diagnosis and treatment of such complications cause high economic costs, including hospitalization, provision of drugs and food supplements, and a negative impact on the work force [4]. Describing the pattern of ADR onset along with investigating the factors associated with such adverse reactions can help policymakers to control and manage the relevant costs [9]. In this study, we aimed to determine the incidence of ADR and relevant risk factors among TB patients in Mazandaran University of Medical Sciences.
Methods
This retrospective cohort study was conducted among 3,903 patients with TB treated with anti-TB drugs from 2005 to 2017. All of them were recruited by the census method. Inclusion criteria were: all TB patients who registered and were followed up from treatment initiation until the end of the second month of treatment. Exclusion criteria included: 1. subjects who were diagnosed and registered as TB patients but in whom the TB diagnosis was later ruled out; 2. patients who were diagnosed in another center and referred to the TB registry system of Mazandaran University of Medical Sciences, but whose history of adverse-reaction experience was not available.
Results
During the study period, 4,033 TB patients were registered, 130 of whom were excluded (86 patients due to wrong diagnosis and 44 cases transferred from other regions). Of the 3,903 remaining patients, 136 (3.5%) had experienced adverse drug reactions during treatment. The type of drug reaction was identified and reported in 107 patients. Of these, 92 had one ADR, 13 patients reported two types, and 2 patients had three types of ADR. Renal adverse reactions, vertigo, vomiting, icterus, feverless skin rashes and also skin rashes with fever were observed in 9, 4, 23, 77, 6, and 5 patients, respectively.
Univariate analyses showed that the ADR frequency was higher among women than men (3.9% vs. 3.3%), urban than rural residents (3.7% vs. 3.2%), patients with vs. without a previous treatment history (5.3% vs. 3.4%), and HIV-positive vs. HIV-negative TB patients (5.9% vs. 4.3%). None of these associations was statistically significant (p>0.05). The frequency of ADR among patients aged >59 was significantly higher than that among patients aged <30 (5% vs. 2%, p=0.001). Using a multivariate logistic regression model and controlling for possible confounders, only age >59 significantly increased the odds of developing ADR (OR=2.63, 95% CI: 1.54–4.49) (Table 1). It should be noted that the effect sizes of the multivariate logistic regression and the univariate model did not differ; moreover, no potential confounder was identified.
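The multivariate model reported above can be reproduced with standard regression tooling. The following Python sketch (statsmodels instead of the SPSS 21 actually used; the data file and column names `adr`, `age_group`, `sex`, `prior_tx`, `hiv` are hypothetical stand-ins for the registry variables) shows how such odds ratios and confidence intervals are obtained.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# One row per TB patient; all column names are illustrative stand-ins:
# adr       : 1 if a major adverse drug reaction occurred, else 0
# age_group : "<30", "30-59", ">59"  (reference level "<30")
# sex, prior_tx, hiv : binary covariates from the registry
df = pd.read_csv("tb_registry.csv")

model = smf.logit(
    "adr ~ C(age_group, Treatment(reference='<30')) + sex + prior_tx + hiv",
    data=df,
).fit()

# Odds ratios with 95% confidence intervals (exponentiated coefficients)
or_table = np.exp(pd.concat([model.params, model.conf_int()], axis=1))
or_table.columns = ["OR", "2.5%", "97.5%"]
print(or_table)
```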
Discussion
The results showed that 3.5% of TB patients in northern Iran experienced adverse drug reactions during anti-TB treatment. Although these reactions were more frequent among women and re-treatment cases, the associations were not statistically significant. Multivariate logistic regression models showed that, out of the investigated factors, only age over 59 (compared with age under 30) was significantly associated with ADR.
The rate of ADR in the present study was lower than those reported in studies performed in India [10], China [1], Brazil [11] and the Markazi province in Iran [12]. It should be noted that demographic characteristics and methods of data registry were different in these other study regions.
In addition, different surveillance systems, ethnicities, study designs and various definitions of ADR were other factors responsible for these heterogeneities [13]. Some of the previous studies reported higher ADR incidences among men [10], while others reported that women were more affected than men [1], [11], [12]. Although the present study found that ADR was more common among women, the difference was non-significant. Higher adverse reaction rates among women might be due to hormonal fluctuations during different periods of their lives. Moreover, interactions between oral contraceptives and anti-TB drugs might be another reason for such an association [14].
In the current study, icterus was the most common adverse reaction, while digestive complications and hyperuricemia were reported in some other studies [1], [10], [11]. Hepatic and digestive problems were reported in the study conducted among Chinese patients [15]. Our study revealed that age is related to adverse drug reactions during anti-TB treatment, which is in keeping with the results of several other studies [1], [10], [12]. However, the results of a meta-analysis did not identify age as a risk factor for ADR [16].
Limitations
One of the limitations of the current study is the use of registry data from a medical university, which were not collected for research purposes and are prone to some defects and biases. Therefore, it was not possible to assess the role of some factors, such as weight and diabetes co-morbidity, in developing ADR. Further studies are recommended to prospectively investigate the association of all relevant factors with ADR onset. The high possibility of underreporting of adverse reactions was another limitation of the present study.
Conclusions
This study shows that age >59 can be considered a risk factor for ADR to anti-TB drugs.
Notes
Competing interests
The authors declare that they have no competing interests.
The required information was provided from the TB registry system by the Health Deputy of Mazandaran University of Medical Sciences, Sari, Iran, in Excel format. This information included year of diagnosis, age, gender, area of residence, nationality, HIV co-morbidity, history of anti-TB treatment and ADR. The following adverse reactions were assessed: peripheral neuropathy (burning of the extremities), nausea, vomiting, abdominal pain, edema, mucosal ulcers, shock, hearing loss or deafness, vertigo, nystagmus, icterus, visual impairment, acute liver failure, thrombocytopenia, acute renal failure, feverless skin rashes and skin rashes with fever. Adverse reactions were evaluated and approved by a general practitioner who was in charge of the treatment of TB patients. According to the WHO Tuberculosis Control Program, we defined the presence of ADR as at least one of the above-mentioned side effects in patients receiving tuberculosis treatment. In the case of any missing
Table 1: Factors related to ADR based on univariate and multivariate analysis | 2024-06-16T05:08:30.628Z | 2024-05-17T00:00:00.000 | {
"year": 2024,
"sha1": "14f9bf91fcf6757e5d74a72bb704990183b617eb",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "14f9bf91fcf6757e5d74a72bb704990183b617eb",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
245529075 | pes2o/s2orc | v3-fos-license | Outcomes of re-irradiation for oral cavity squamous cell carcinoma
Background: To predict the outcome of reirradiation (re-RT) for oral cavity squamous cell carcinoma (OSCC). Methods: Eighty-three patients met the criterion of having previously irradiated OSCC treated via curative-intent re-RT for recurrent or new primary OSCC. The exclusion criteria were a suboptimal dose (<45 Gy) for the first RT and palliative intent for the second irradiation. Re-RT was defined as at least 75% volume at second RT after receiving at least 45 Gy at the first RT. Results: The 2-year locoregional progression-free survival (LRPFS) and overall survival (OS) rates were 20% and 28%. For LRPFS, four predictors were noted through univariate analyses: performance status (PS) (p = 0.001), a dose of at least 60 Gy (p = 0.001), stage IVB (p = 0.020), and surgery before re-RT (p = 0.041). In multivariate analyses, only PS (p = 0.005) and a dose of at least 60 Gy (p = 0.001) remained significant. For OS, PS (p = 0.001) and a dose of at least 60 Gy (p = 0.042) were still independently associated predictors, but surgery before re-RT became marginally beneficial (p = 0.053). For patients with a poor PS (ECOG = 2–3), the 2-year OS was only 4.5%. Twenty-nine percent of the patients experienced severe late complications (≥Grade 3), and 18% had new episodes of osteoradionecrosis during their follow-up. Conclusion: We identified PS and a re-RT dose ≥60 Gy as predictors for LRPFS and OS. Surgery before re-RT might improve OS. However, the treatment results of re-RT for OSCC were suboptimal. Prospective trials using modern RT techniques, in combination with new therapeutic drugs or radioenhancers, are warranted for improving these dismal outcomes.
Introduction
Taiwan is in a region with a high prevalence of betel quid chewing, and oral cavity squamous cell carcinoma (OSCC) accounts for nearly 50% of head and neck (HN) cancer [1]. Fifty percent of postoperative patients belong to the high-risk group, which requires adjuvant radiotherapy with or without chemotherapy to improve oncologic outcomes [2–6]. Despite aggressive locoregional treatment, approximately 30% of patients with OSCC experience postradiation locoregional recurrence (LRR). In addition, approximately 80% of patients with OSCC were habituated to betel quid chewing [2]. As a result of field cancerization, 15% of patients with OSCC experienced second primary tumors (SPTs) during their follow-up, and 70% of these were also in the oral cavity (OC) [6]. When feasible, salvage surgery is the treatment of choice for managing both postradiation recurrence and SPTs of HN cancer. For patients with resectable disease, more than 50% experienced LRR in the absence of further adjuvant treatment [7,8]. For patients with unresectable disease or those who refuse surgery, systemic therapy is traditionally considered the standard of care. However, even the best available regimen provides a median overall survival (OS) shorter than 1 year and offers little chance of cure [9–11]. Therefore, reirradiation (re-RT), with or without concurrent chemotherapy, is the only potentially curative treatment. The Radiation Therapy Oncology Group (RTOG) conducted two similar phase II trials of repeated concurrent chemoradiotherapy and found that a small but substantial percentage (16%–25.9%) of patients could survive at 2 years [12,13] following re-RT.
However, most studies in Western populations have reported outcomes of re-RT for HN cancer with tumors originating from all anatomic subsites and even the nonsquamous subtype, and only 7%–32% of the patients had OC cancer [14–18]. Considering that different anatomic subsites or pathologic subtypes may result in different outcomes [19], we focused on OSCC and reviewed our previous re-RT experience as a benchmark for future clinical studies.
Methods and materials
This is a retrospective chart-review study. All patients were treated at a tertiary medical center, Chang Gung Memorial Hospital at Linkou, in Taiwan. The inclusion criterion was previously irradiated OSCC treated with curative-intent re-RT for a diagnosis of recurrent or new primary OSCC, including isolated neck recurrences (r-T0N+M0). The exclusion criteria were (1) a suboptimal dose (<45 Gy) of the first RT, (2) patients irradiated only on one side who received re-RT for the other side, (3) palliative intent for re-RT, and (4) re-RT for non-OSCC head and neck cancers, such as nasopharyngeal cancer (NPC) and pharyngolaryngeal cancer. These criteria ensured that characteristics were homogeneous within the study group and that the two RT fields had significant overlap. Re-RT was defined as at least 75% volume at second RT after receiving at least 45 Gy at the first RT. Patients were coded as having recurrent disease if the retreated primary tumor was located at the same subsite of the OC, or if they had lymph node recurrence without an SPT. If a 2-cm separation of normal mucosa existed, or the diagnostic interval was >3 years, then the tumor was coded as an SPT. All patients received a restaging work-up according to the standard guidelines at that time, and their stages and recurrence/SPT classifications were reconfirmed by our HN tumor board.
Toxicity was graded according to the Common Toxicity Criteria, version 3.0. The endpoints for this study were locoregional progression-free survival (LRPFS) and OS, calculated from the first day of re-RT by using the Kaplan–Meier method. Univariate analysis (UVA) and multivariate analysis (MVA) were performed using the log-rank and Cox regression methods with stepwise selection, respectively. A significance level of 0.1 was chosen for inclusion of variables into MVA. Hazard ratios (HR), 95% confidence intervals (CI) and p values were calculated, and values of p < 0.05 were considered statistically significant.
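The survival workflow described here (Kaplan–Meier estimation from the first day of re-RT, log-rank tests for univariate screening, Cox regression for the multivariate model) can be sketched with the lifelines package in Python. This is an illustrative reconstruction, not the authors' code; the data file and column names are assumed.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Hypothetical columns: months from the first day of re-RT, event indicator,
# and the candidate predictors discussed in the text.
df = pd.read_csv("rert_cohort.csv")  # os_months, death, ps_poor, dose_ge60, stage_ivb, surgery

kmf = KaplanMeierFitter()
kmf.fit(df["os_months"], event_observed=df["death"])
print(kmf.survival_function_at_times([24]))  # 2-year OS estimate

# Univariate screening (log-rank), e.g. dose >= 60 Gy vs < 60 Gy
hi, lo = df[df["dose_ge60"] == 1], df[df["dose_ge60"] == 0]
print(logrank_test(hi["os_months"], lo["os_months"],
                   event_observed_A=hi["death"],
                   event_observed_B=lo["death"]).p_value)

# Multivariate Cox model (lifelines has no built-in stepwise selection,
# so variables passing univariate screening are entered directly here)
cph = CoxPHFitter()
cph.fit(df[["os_months", "death", "ps_poor", "dose_ge60", "stage_ivb", "surgery"]],
        duration_col="os_months", event_col="death")
cph.print_summary()  # hazard ratios with 95% CIs and p values
```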
At a glance commentary
Scientific background on the subject
Most reirradiation (re-RT) studies for head and neck cancer in Western series included patients with all anatomical subsites and pathologic subtypes, which might not reveal the real prognosis of re-RT for oral cavity squamous cell carcinoma.
What this study adds to the field
We found that the 2-year locoregional progression-free survival and overall survival rates were only 20% and 28%, at the price of 29% of patients sustaining new grade 3 or higher late complications. For patients with a poor performance status (ECOG = 2–3), treatment with a palliative intent should be considered.
Ethics statement
This study was approved by the Institutional Review Board of our hospital (approval number: 201801575B0). Informed consent was not required due to the retrospective nature of the study.
Patients
From December 1999 to March 2010, 83 patients met the criteria of this retrospective chart-review study. Of the patients included in this study, 95% were male, and the median age of the patients was 48 years (range, 32–79). All patients had histologically confirmed SCC, and most cases were well-differentiated or moderately differentiated cancers. The most common subsite of OSCC was the buccal mucosa (40%), followed by tumors in the tongue (30%). Seventy percent of the patients were classified as having a recurrence and 30% were classified as having SPTs. The majority of the patients had a betel quid chewing habit (83%), advanced disease (stage IV = 77%), and a good performance status (PS; Eastern Cooperative Oncology Group [ECOG] = 0–1, 73%). The patient characteristics are detailed in Table 1.
Treatments
The median time to re-RT was 15 months, and the median doses of the first RT and re-RT were 64 Gy (range, 59.4–72 Gy) and 60 Gy (range, 28–80 Gy), respectively. Seventy-six patients (92%) received concurrent chemotherapy at re-RT; most of the regimens were cisplatin or methotrexate based, but two patients received a Taxol-based therapy. Regarding the re-RT technique, 59 patients (71%) received intensity-modulated radiotherapy (IMRT) or intensity-modulated arc therapy, and 24 patients (29%) received conventional or 3D conformal radiotherapy. Because this was not a prospective study, the treatment volume varied significantly according to the judgment of the treating physicians, but usually neither elective nodal irradiation nor prophylactic treatment of the low-risk clinical target volume was given, in consideration of severe late complications. Thirty-eight patients (46%) underwent immediate surgery prior to re-RT. The treatment fields included the OC with or without the neck for 58 patients (70%); the rest of the patients (30%) received re-RT only to the neck. In addition, 16 patients (19%) experienced additional cancer events and received irradiation a third time [detailed in Table 2].
Response rate, failure pattern and cause of death
Including patients who had received surgery, the treatment response rate was 70%. However, 11 patients (13%) had stable disease and 14 patients (17%) had progressive disease within 2 months after re-RT. Overall, 30% of the patients showed a poor response to the reirradiation. Moreover, the posttreatment tumor response was a very strong predictor of the patients' LRPFS (p < 0.001) and OS (p < 0.001). Two-thirds of the patients experienced locoregional failure without distant metastasis. Sixty-three patients died of the head and neck cancer for which they received re-RT. Among them, 51 patients (81%) died of locoregional progression, and 12 patients (19%) died of distant progression with or without locoregional disease. Five patients died of another new primary malignancy and 2 patients died of non-cancerous causes.
Acute and late complications
Because the re-RT fields were limited to gross tumors and high-risk areas with adequate margins, most treatments were tolerated. During the follow-up period, 49 patients (59%) experienced new grade 2 or higher late complications, and 24 patients (29%) experienced new grade 3 or higher late complications. Fifteen patients (18%) had new episodes of grade 2 or higher osteoradionecrosis after re-RT [Table 3]. Besides, we did not find that a larger treatment volume at re-RT (200 cm³ or not, p = 0.18; 300 cm³ or not, p = 0.21) or a higher accumulated dose (130 Gy or not, p = 0.25) correlated with the incidence of new grade 3 or higher late complications.
Survival endpoints and potential prognostic factors
The 2-year LRPFS and OS rates were 20% and 28%, respectively [Fig. 1]. For LRPFS, four pretreatment predictors were noted in univariate analyses: PS (p = 0.001), a dose of at least 60 Gy (p = 0.001), stage IVB (p = 0.020), and surgery before re-RT (p = 0.041) [Fig. 2]. In the multivariate analysis, only PS (p = 0.005) and a dose of at least 60 Gy (p = 0.001) remained significant. The results of the univariate analyses for OS were the same as those for LRPFS. In the multivariate analysis of OS, PS (p = 0.001) and a dose of at least 60 Gy (p = 0.042) were significant predictors, but surgery before re-RT became marginally beneficial (p = 0.053) [Tables 4 and 5]. There was no significant survival difference in LRPFS or OS after categorizing the subsites of the primary tumor at re-RT into tongue, buccal mucosa and others.
Group stratification for prognosis
According to the results of univariate and multivariate analyses for LRPFS and OS, we stratified the patients into four prognostic groups: Group 1: ECOG = 0–1 and stage I–III, n = 18 [Fig. 3].
Discussion
Tumor recurrence and SPTs arising within a previously irradiated HN region represent a difficult clinical scenario for the following reasons: (1) survival of tumor cells after definitive RT or CCRT usually implies that they were radioresistant with high malignant potential [11,20]; (2) postradiation changes in normal and tumor tissues, such as decreased microvascular density and hypoxia [21,22], could put patients at risk of sustaining more unhealed wounds after salvage surgery, developing more tissue necrosis after re-RT, or experiencing poor drug penetration with systemic therapy; and (3) when planning re-RT as part of salvage treatment, radiation volumes and doses are usually reduced to avoid unnecessary complications, but the accumulated doses remain sufficiently high to cause non-negligible late effects. All of the aforementioned factors contribute to suboptimal outcomes of re-RT with high rates of treatment-related toxicity. Our low 2-year LRPFS and OS rates (20% and 28%, respectively) and new episodes of grade 3 or higher late complications occurring in 29% of the patients may be attributable to these factors.
In the present study, we did not include NPC, which usually has higher salvage 5-year LRPFS and OS rates of approximately 80% and 60%, respectively [23–25]. Lee et al. reported a 2-year OS rate of 37% for HN cancer salvage treatment through re-RT. A subset analysis of their patients with non-NPC SCC showed that the 2-year OS rate for patients who underwent surgical resection before re-RT was significantly higher than that for patients who did not (36% vs. 12%, p = 0.008) [16]; the corresponding figures in our cohort were 41% and 16% (p = 0.007). Thus, excluding a subsite with a favorable prognosis, such as the nasopharynx, can worsen the oncologic outcomes relative to those obtained when such subsites are included. Takiar et al. reported re-IMRT results for patients with HN cancer and found that the SCC histologic subtype had significantly lower OS and locoregional control (LRC) than the non-SCC subtype. Moreover, they reported that the OC subsite had unfavorable LRC among patients who underwent surgery and shorter PFS among the patients with definitive re-RT [19]. Although unsatisfactory, the results of our OSCC re-RT were in an acceptable range compared with those of Western studies after we considered the anatomic subsites and pathologic subtypes. Theoretically, surgery before re-RT removes a majority of the radioresistant clones, thus improving the statistical chance of tumor eradication with re-RT [11]. It is reasonable to assume a more favorable outcome for patients who receive surgery before re-RT. However, in the present study, this factor had statistically significant effects on LRPFS and OS in UVA, but lost its significance in the multivariate analysis, probably because of the influence of PS, disease severity and limited patient numbers.
Some reports state that IMRT can achieve improved LRC or OS [15,16]. However, in the present study, we could not determine the superiority of IMRT in LRPFS or OS, compared with non-IMRT techniques; this could be because not all of our linear accelerators during the treatment time in this series were IMRT capable, and IMRT was administered only to the patients who required it. Most non-IMRT treatments were administered for postoperative neck irradiation or less advanced conditions, in which the dose-delivery technique plays a minor role in optimizing dose distribution. The existence of patient selection bias may obscure the benefits of better dose homogeneity and target coverage offered by the intensity modulation technique.
Compared with SPTs, recurrent disease is usually believed to be more radioresistant because the tumor cells were selected by the first RT [11,20]. We observed that recurrent disease had a 3.5-month shorter median LRPFS (4.7 vs. 8.2 months, p = 0.147) and OS (9.0 vs. 12.5 months, p = 0.081), without reaching statistical significance. We had a relatively small number of SPT patients (n = 25) in our study, and the SPTs arose in a postirradiated hypoxic microenvironment, which may reduce the therapeutic effect of re-RT and chemotherapy [21,22]. Both factors may contribute to the non-significance observed.
Another crucial re-RT prognostic indicator frequently mentioned by other investigators is the time interval between the two radiation treatments. The shorter the interval, the higher the probability that the tumors are radioresistant. The RTOG 9610 trial noted statistically improved survival for patients who received re-RT >1 year after the prior RT compared with patients who received re-RT <1 year after the prior RT [12]. However, in the present study, the time interval was not found to be a prognostic factor, regardless of whether the cutoff point was 6 months, 10 months, or 1 year. The relatively short median interval (15 months) of our OSCC cohort, together with the fact that 70% of the patients belonged to the recurrent group, may have diminished the discriminative ability of radioresistance over time.
The prognostic factors for LRPFS and OS in the current study, as determined through univariate analyses, were PS, a dose of at least 60 Gy, stage IVB, and surgery before re-RT. Half of the stage IVB patients had a poor PS (Pearson chi-square test, p = 0.002). Therefore, the impact of stage IVB might be diluted by the poor PS in the multivariate analysis. Tanvetyanon et al. found that comorbidity and pre-existing organ dysfunction are among several crucial prognostic factors for patients undergoing re-RT, and should be considered in treatment decisions [26]. In the present study, we found that PS was a significant predictor for both LRPFS and OS. Therefore, we suggest that PS can be a simple alternative for evaluating comorbidity and pre-existing organ dysfunction when making a re-RT decision. Most radiation oncologists are reluctant to perform full-dose re-RT for recurrent or second primary HN cancers for fear of severe late complications and poor expected outcomes. Therefore, we stratified the patients into four groups and tried to identify which group benefitted the least from aggressive salvage treatment. Group 4 (ECOG = 2–3, n = 22) had the shortest median LRPFS and OS (3.0 and 5.0 months, respectively), and the 2-year OS was only 4.5%. Most of their treatments were palliative in nature.
Given the retrospective nature of this study, the patient selection bias in determining who received surgery, the RT dose-delivery technique, and the limited patient numbers, we can only provide an outline of outcomes for OSCC patients who received salvage re-RT. However, because we confined the study population to oral cavity primary and squamous cell carcinoma only, the relatively focused data may serve as a benchmark for future prospective clinical studies of such patients.
Conclusion
At the price of 29% of patients sustaining new grade 3 or higher late complications, the 2-year LRPFS and OS rates were 20% and 28%, respectively, for patients with OSCC who received salvage re-RT. Among all prognostic factors, PS was the most crucial, and for patients with a poor PS, treatment with a palliative intent should be considered. Future prospective studies using physical or chemical radioenhancers are warranted to improve the dismal outcomes in such radioresistant OSCC patients.
Funding sources
This work was supported by grants 107-2314-B-182A-062-MY3 from the Ministry of Science and Technology in Taiwan and CMRPG3G1411~3 from the Chang Gung Memorial Hospital in Taiwan.
Conflicts of interest
None.
References | 2021-12-29T16:09:08.280Z | 2021-12-01T00:00:00.000 | {
"year": 2021,
"sha1": "4aeb89a37772257d87d8af7a5db81149e49a0bc8",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.bj.2021.12.005",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d09987544fe0b2ffbc05a61b708547c98c6eeb5d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
57013201 | pes2o/s2orc | v3-fos-license | The effect of orlistat and weight loss diet on plasma ghrelin and obestatin
Background: The objective of this study was to evaluate the effect of weight loss with a hypocaloric diet, and of orlistat treatment in addition to a hypocaloric diet, on the gut-derived hormones ghrelin and obestatin. Materials and Methods: A total of 52 euglycemic and euthyroid obese female patients were involved in the study. The patients were assigned to two groups: Group 1 (n = 26) received a hypocaloric diet alone and Group 2 (n = 26) received orlistat in addition to a hypocaloric diet for 12 weeks. Anthropometric measurements, serum lipid and insulin levels, and obestatin and ghrelin values were assessed at the beginning of the study and after 12 weeks of therapy. Results: Baseline clinical characteristics and laboratory parameters, including serum ghrelin and obestatin concentrations and the ghrelin/obestatin ratio, were similar between the two groups. After 12 weeks, the mean changes in BMI, fat mass, and fat-free mass (FFM) were −1.97 ± 1.56 kg/m2 (P = 0.003), −2.63% ± 2.11% (P = 0.003), and −1.06 ± 0.82 kg (P = 0.003), respectively, in Group 1. In Group 2, the mean change in BMI was −2.11 ± 1.24 kg/m2 (P = 0.001), in fat mass −3.09% ± 2.28% (P = 0.002), and in FFM −1.26 ± 0.54 kg (P = 0.001). However, fasting glucose, lipid, and insulin levels did not change in Group 1. Furthermore, except for serum high-density lipoprotein cholesterol and triglyceride levels, no significant change was observed in Group 2. Although serum ghrelin and obestatin concentrations increased significantly in both groups (Group 1: P_ghrelin = 0.047, P_obestatin = 0.001; Group 2: P_ghrelin = 0.028, P_obestatin = 0.006), the ghrelin/obestatin ratio did not change significantly. When the changes in anthropometric assessments and laboratory parameters were compared, no significant difference was observed between the two groups. Furthermore, no correlation was observed between ghrelin or obestatin and any other hormonal or metabolic parameter. Conclusion: Weight loss with diet and with diet plus orlistat is associated with increased ghrelin and obestatin concentrations.
INTRODUCTION
Obesity is a major health-care problem of the new century and is associated with coronary heart disease and cerebrovascular disease. The World Health Organization reported that more than 1 billion adults were overweight, and at least 300 million of them were obese [1,4,5]. Calorie restriction and regular physical activity are still the mainstay of treatment in obesity; nevertheless, their long-term success record is poor [8,9]. Previous reports have confirmed the potency of orlistat in weight reduction, along with amelioration of cardiovascular risk factors in obese patients [10]. The gut-derived hormones ghrelin and obestatin have recently been described as important physiologic regulators of appetite and energy homeostasis. Ghrelin is an endogenous ligand for the growth hormone (GH) secretagogue receptor; it regulates energy homeostasis and appetite and consequently increases body weight [11]. Whereas exogenous ghrelin administration stimulated appetite and food intake, recent studies reported decreased ghrelin levels in obese patients, and serum ghrelin levels are negatively correlated with body mass index (BMI) both in obese and lean participants [11,12]. Obestatin is encoded by the same gene as ghrelin and down-regulates the effects of ghrelin on food intake [15,16]. However, recent studies were unable to confirm the role of obestatin in food intake, body weight, or GH secretion in humans [17,18]. The data about the effect of orlistat treatment on ghrelin levels are limited [19,20]. However, to the best of our knowledge, there is no study concerning the effect of orlistat (the only approved pharmacologic treatment of obesity in our country) on both ghrelin and obestatin levels and the ghrelin/obestatin ratio in obese patients. Therefore, the aim of our study was to assess the effect of weight loss with orlistat and a hypocaloric diet on the gut-derived hormones ghrelin and obestatin and the ghrelin/obestatin ratio.
MATERIALS AND METHODS
The study was a single-center, prospective, case-control study. A total of 88 sedentary obese women who were admitted to the Outpatient Endocrinology Clinic for the treatment of obesity were assessed for eligibility, and the flow of the study is described in Figure 1. The eligible individuals were between 18 and 60 years of age with a BMI >35 kg/m2. All eligible individuals were evaluated by serum thyroid-stimulating hormone, fT3, fT4, morning serum cortisol after 1 mg overnight dexamethasone administration, and a 2-h oral glucose tolerance test with 75 g of glucose in the fasting state. Patients with thyroid dysfunction, abnormal glucose metabolism, or serum cortisol values >1.8 mg/dl after overnight dexamethasone suppression were excluded from the study. Furthermore, patients who had a history of cardiovascular or renal diseases or gastrointestinal disorders, or who had been taking any medication known to affect body weight, were excluded from the study. Finally, a total of 52 patients were enrolled in the study. The Local Ethics Committee of Tepecik Research and Training Hospital approved the study protocol, and the procedures followed were in accordance with the Declaration of Helsinki 1975, as revised in 2000. Written informed consent was obtained from all participants at the beginning of the study. The patients were assigned to two groups: Group 1 (n = 26) received a hypocaloric diet alone and Group 2 (n = 26) received orlistat (Thincal, Kocak) 120 mg, 3 times per day, in addition to a hypocaloric diet for 12 weeks. The percentages of carbohydrate, fat, and protein were 50%, 25%, and 25%, respectively. The daily calorie intake was adjusted to 24 calories/kg of ideal body weight. Ideal body weight was calculated by the method of Devine [21]. All the patients were followed up for 12 weeks.
All the participants underwent a thorough physical examination and laboratory assessment. Anthropometric measurements were performed at the beginning and at the end of the study. Anthropometric measurements included measurement of body weight and height and evaluation of body composition. Body weight and height were recorded to the nearest 0.1 kg and 0.5 cm, respectively. Body composition was evaluated by leg-to-leg bioelectrical impedance (Tanita Body Fat Analyzer, TBF 300 M, Tanita, Tokyo, Japan). Assessments of body composition were standardized and performed on the morning of the study visits. The participants were questioned about their menstrual status and their fluid and food consumption that morning. Throughout the study period, all assessments were completed with the same equipment by the same investigator. The accuracy of bioelectrical impedance in the assessment of body composition in obese cases has already been reported [22]. The blood samples were drawn at the beginning of the study and after 12 weeks of therapy. Venous fasting blood samples were collected from an antecubital vein in 8 ml evacuated tubes without anticoagulant (Vacuette, Greiner Bio-One, Austria). Blood in the plain tubes was allowed to clot for 30 min and was centrifuged at 3000 rpm for 10 min at room temperature. After centrifugation, the serum samples were aliquoted, frozen, and stored at −80°C for ghrelin and obestatin analysis. A repeated freezing and thawing process was avoided.
RESULTS
Baseline clinical characteristics, anthropometric assessments, and laboratory parameters of the two groups are presented in Table 1. There were no significant differences between the two groups with respect to age, body weight, BMI, fat mass, and fat-free mass (FFM). Furthermore, basal glucose, insulin and lipid levels, and HOMA values were similar between the two groups. In addition, serum ghrelin and obestatin concentrations and the ghrelin/obestatin ratio were similar between the two groups.
We found no correlation between the change in BMI and ghrelin (r = −0.136, P = 0.567) or between the change in BMI and obestatin (r = −0.228, P = 0.335). Furthermore, there was no correlation between ghrelin or obestatin and any other anthropometric, hormonal, or metabolic parameter, either at baseline or at the end of the study (P > 0.05). No complications were observed following orlistat intake.
DISCUSSION
Our current research revealed that both hypocaloric diet and medical treatment with orlistat plus low-calorie diet increased serum ghrelin and obestatin concentrations in a reasonably short period of 12 weeks.
Ghrelin and obestatin (a ghrelin-associated peptide) are both derived from the same peptide precursor (preproghrelin). Ghrelin, a 28-amino acid peptide produced mainly by the stomach, was initially identified as the natural ligand of the GH secretagogue receptor type 1a (GHS-R1a). In addition to stimulating GH secretion, ghrelin stimulates prolactin and ACTH release, increases gastric motility and gastric acid secretion, and promotes pancreatic peptide synthesis [23]. After the elucidation of the orexigenic behavior of ghrelin in rodents, it was speculated that raised levels of ghrelin could contribute to obesity in humans [26,27]. This may be explained by the downregulation of ghrelin as a result of energy excess in obese patients.
Obestatin is purified from the rat stomach and binds to the orphan G protein-coupled receptor 39, which is expressed in the brain. Following peripheral administration of obestatin, significant decreases in food ingestion and body weight were observed in rodents [17]. This raises the possibility that obestatin might be involved in the regulation of energy balance and body weight. However, the data about obestatin levels in obesity are conflicting. While some studies report increased levels [17], some reported decreased levels [18]. Guo et al. demonstrated an increased ghrelin/obestatin ratio in obesity, but Vicennati and Zamrazilova found a decreased ratio [14,16,17]. The difference in results might be explained by differences in the study populations. Although obesity is associated with decreased serum ghrelin levels, plasma ghrelin levels were found to be increased after diet-induced weight loss [24]. This increase in ghrelin might be an adaptive response to prevent further weight loss by upregulating hunger and energy intake [24,25]. In accordance with these findings, in our study we found increased ghrelin levels both after diet-induced weight loss and after diet plus orlistat treatment-induced weight loss. However, weight loss after RYGB was reported to be associated with decreased serum ghrelin levels.
Patients who underwent gastric bypass felt less hungry and consumed fewer meals. Hence, the suppression of ghrelin was proposed as a potential mechanism by which this procedure causes weight reduction [28]. In contrast to these findings, Martins et al. reported increased plasma ghrelin and obestatin levels, but no difference in the ghrelin/obestatin ratio, after RYGB [18]. Orlistat is a reversible inhibitor of gastrointestinal lipases that is extensively used in the pharmacotherapy of morbid obesity; it blocks fat absorption in the intestine, helps control hypertension and dyslipidemia apart from weight loss, and decreases the risk of development of diabetes mellitus. It has been reported that orlistat had a favorable effect on insulin resistance, blood pressure, and TG levels in addition to weight loss. Previous trials demonstrated that orlistat reduced total cholesterol, LDL-cholesterol, and TG levels [29]. Similar to previous studies, our study revealed that orlistat treatment with a low-calorie diet regimen caused a significant decrease in TG and a statistically significant increase in HDL-cholesterol levels in obese women after 12 weeks of treatment. There are only two studies which investigated the effect of orlistat therapy on serum ghrelin levels [19,20]. Ozkan et al. compared serum leptin and ghrelin levels in obese participants who took orlistat with those receiving only dietary treatment. They found lower ghrelin levels in obese participants with respect to controls and reported that ghrelin levels increased in both obese groups after 12 weeks of therapy, but the increase was similar [19]. In the other study, orlistat was found to have no effect on ghrelin levels [20]. In our study, we observed increased ghrelin levels after 12 weeks of orlistat treatment and, similar to the study of Ozkan et al., we found that the increase was not different between the diet and orlistat groups.
There are no studies about the effect of orlistat on obestatin levels or the ghrelin/obestatin ratio. Our study is the first to evaluate the effect of orlistat on obestatin and the ghrelin/obestatin ratio. We found increased obestatin levels both after diet-induced and after diet plus orlistat therapy-induced weight loss. However, we observed no significant change in the ghrelin/obestatin ratio in either group after 12 weeks of treatment.
The ghrelin/obestatin ratio was negatively correlated with BMI and indices of abdominal fat distribution, fasting insulin, and HOMA-IR, but positively correlated with the ISI composite in previous reports [16]. However, Guo et al. found a positive correlation between the ghrelin/obestatin ratio and BMI [14]. In our study, we did not observe any correlation between BMI or anthropometric data and ghrelin or obestatin, either at baseline or at the end of the study. This may be related to the fact that previous studies included distinct ethnic groups with younger age ranges. It has been shown that decreased obestatin concentrations were independently and significantly associated with impaired glucose regulation and type 2 diabetes [30]. However, in our study, we did not find any correlation between insulin or HOMA-IR and ghrelin or obestatin, either at baseline or at the end of the study. This may be explained by the fact that we excluded patients with type 2 diabetes from our study.
Our study has a couple of limitations. First, it has a very small sample size. Second, we measured total ghrelin, not acylated ghrelin, which is thought to be the principal form for ghrelin's biologic actions. Nevertheless, in a previous report, total ghrelin levels correlated well with acylated ghrelin [18,23]. Third, ghrelin and obestatin concentrations were measured only in the fasting state in our study. However, ghrelin concentrations increase rapidly with fasting in normal-weight individuals, and this rise is delayed in obese animals, suggesting that excess energy storage modulates short-term ghrelin secretion. On the other hand, measuring morning fasting ghrelin concentrations has been reported to be a reliable approach to characterizing ghrelin status even though ghrelin is released episodically [31]. Furthermore, our study sample included only female patients, and we did not measure fasting plasma ghrelin and obestatin levels in lean control volunteers for comparison.
In summary, this is the first study to evaluate the effect of a low-calorie diet, and of diet plus orlistat, on the orexigenic hormone ghrelin, the anorexigenic hormone obestatin, and their ratio in obese premenopausal women. Weight loss with diet and with diet plus orlistat is associated with increased ghrelin and obestatin concentrations.
Financial support and sponsorship
Nil.
Statistical analysis
SPSS 11.0 (SPSS Inc., Chicago, Illinois, USA) software was used for statistical comparisons of data. Data are presented as mean ± standard deviation. The hypocaloric diet group and the hypocaloric diet plus orlistat group were compared using Student's t-test. The basal values and the follow-up values after 12 weeks of therapy were compared using paired-sample t-tests and the Wilcoxon signed-rank test. Pearson correlation analysis was used to evaluate the relationship between ghrelin, obestatin, the ghrelin/obestatin ratio, and various anthropometric and metabolic variables. P < 0.05 was considered statistically significant.
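For readers who prefer open tooling over SPSS, the comparisons above translate directly into scipy.stats calls. The sketch below is illustrative; the input files and arrays are hypothetical stand-ins for the measured values.

```python
import numpy as np
from scipy import stats

# Hypothetical paired measurements for one treatment group (n = 26):
# e.g. serum ghrelin at baseline and after 12 weeks of therapy.
baseline = np.loadtxt("ghrelin_baseline.txt")
week12 = np.loadtxt("ghrelin_week12.txt")

# Within-group change: paired t-test and its nonparametric counterpart
t_stat, p_paired = stats.ttest_rel(baseline, week12)
w_stat, p_wilcoxon = stats.wilcoxon(baseline, week12)

# Between-group comparison of the changes (diet vs diet + orlistat)
delta_diet = np.loadtxt("delta_group1.txt")
delta_orlistat = np.loadtxt("delta_group2.txt")
t2, p_between = stats.ttest_ind(delta_diet, delta_orlistat)

# Correlation between change in BMI and change in ghrelin
dbmi = np.loadtxt("delta_bmi.txt")
dghrelin = week12 - baseline
r, p_corr = stats.pearsonr(dbmi, dghrelin)
print(p_paired, p_wilcoxon, p_between, r, p_corr)
```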
Table 1: Baseline clinical characteristics, anthropometric assessments, and laboratory parameters of the study population
Student's t-test; P<0.05 is considered statistically significant. BMI=Body mass index; FFM=Fat-free mass; FBG=Fasting blood glucose; TG=Triglyceride; LDL-C=Low-density lipoprotein cholesterol; HDL-C=High-density lipoprotein cholesterol; HOMA-IR=Homeostasis model assessment of insulin resistance
Table 3: Changes in body composition and laboratory parameters in both groups
Table 2: Baseline and 12th-week anthropometric assessments and laboratory parameters of both groups
Paired-sample t-test for each group. *P<0.05. BMI=Body mass index; FFM=Fat-free mass; FBG=Fasting blood glucose; TG=Triglyceride; LDL-C=Low-density lipoprotein cholesterol; HDL-C=High-density lipoprotein cholesterol; HOMA-IR=Homeostasis model assessment of insulin resistance
The study of Guo et al. included both male and female participants; however, the studies of Vicennati et al. and Zamrazilova et al. included only female participants. Furthermore, the mean baseline BMI was 30.1 ± 1.9 kg/m2 in Guo's study and 35.3 ± 4.19 kg/m2 in Vicennati's study. Since the obese group still had a higher ghrelin-to-obestatin ratio even after adjustment for gender and age in the study of Guo et al., it is unlikely that the discrepancy in data resulted from the gender difference. Hence, larger studies are required to elucidate these controversial results. | 2019-01-22T22:23:51.237Z | 2018-11-28T00:00:00.000 | {
"year": 2018,
"sha1": "b1f1e519cf59bd6c83186676cb6069a64e6ccb5a",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.4103/jrms.jrms_928_17",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "01be66536c81c9eee1da6116733b553c2a500c39",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
119120214 | pes2o/s2orc | v3-fos-license | Globally hyperbolic moment model of arbitrary order for one-dimensional special relativistic Boltzmann equation
This paper extends the model reduction method by operator projection to the one-dimensional special relativistic Boltzmann equation. The derivation of the arbitrary-order globally hyperbolic moment system is built on our careful study of two families of complicated Grad-type orthogonal polynomials depending on a parameter. We derive their recurrence relations, calculate their derivatives with respect to the independent variable and the parameter, respectively, and study their zeros and the coefficient matrices in the recurrence formulas. Some properties of the moment system are also proved. They include the eigenvalues and their bound as well as the eigenvectors, hyperbolicity, characteristic fields, linear stability, and Lorentz covariance. A semi-implicit numerical scheme is presented to solve a Cauchy problem for our hyperbolic moment system in order to verify the convergence behavior of the moment method. The results show that the solutions of our hyperbolic moment system can converge to the solution of the special relativistic Boltzmann equation as the order of the hyperbolic moment system increases.
hyperbolic and linearly stable. The readers are referred to [27,28,29]. Following the approach used in [27,28], it is easy to show that the above conclusion does not hold if viscosity is present; that is, the Israel and Stewart moment system in the Landau frame is not globally hyperbolic either if viscosity is present. There is no existing result on the hyperbolicity, or loss of hyperbolicity, of general higher-order moment systems for the relativistic kinetic equation. Such a proof is very difficult and challenging. The loss of hyperbolicity will cause the solution to blow up when the distribution is far away from the equilibrium state. Even in the non-relativistic case, increasing the number of moments cannot avoid such blow-up [10].
Up to now, there has been some recent progress on the Grad moment method in the non-relativistic case. A regularization was presented in [5] for the 1D Grad moment system to achieve global hyperbolicity. It was based on the observation that the characteristic polynomial of the Jacobian of the flux in Grad's moment system is independent of the intermediate moments, and it was further extended to the multi-dimensional case [6,8]. Quadrature-based projection methods were used to derive hyperbolic PDE systems for the solution of the Boltzmann equation [36,37] by using a quadrature rule instead of exact integration. In the 1D case, this is similar to the regularization in [5]. Those contributions led to a good understanding of the hyperbolicity of the Grad moment systems. Based on operator projection, a general framework of model reduction was recently presented in [19]. It projects the time and space derivatives in the kinetic equation into a finite-dimensional weighted polynomial space synchronously, and can recover most of the existing moment systems mentioned above. The aim of this paper is to extend the model reduction method by operator projection [19] to the one-dimensional special relativistic Boltzmann equation and to derive the corresponding globally hyperbolic moment system of arbitrary order. The key is to choose the weight function and to define the polynomial spaces and their bases as well as the projection operator. The theoretical foundations of our moment method are the properties of two families of complicated Grad-type orthogonal polynomials depending on a parameter.
The paper is organized as follows. Section 2 introduces the special relativistic Boltzmann equation and some macroscopic quantities defined via the kinetic theory. Section 3 gives two families of orthogonal polynomials dependent on a parameter and studies their properties: recurrence relations, derivative relations with respect to the variable and the parameter, zeros, and the eigenvalues and eigenvectors of the recurrence matrices. Section 4 derives the moment system of the special relativistic Boltzmann equation and Section 5 studies its properties: the eigenvalues and their bound as well as the eigenvectors, hyperbolicity, characteristic fields, linear stability, and Lorentz covariance. Section 6 presents a semi-implicit numerical scheme and conducts a numerical experiment to check the convergence of the proposed hyperbolic moment system. Section 7 concludes the paper. To keep the main message of the paper clear, all proofs of theorems, lemmas and corollaries in Sections 2–6 are given in Appendices A–E, respectively.
Preliminaries and notations
In the special relativistic kinetic theory of gases [12], a microscopic gas particle of rest mass $m$ is characterized by the $(D+1)$ space-time coordinates $(x^\alpha)=(x^0,\boldsymbol{x})$ and the momentum $(D+1)$-vector $(p^\alpha)=(p^0,\boldsymbol{p})$, where $x^0=ct$, $c$ denotes the speed of light in vacuum, and $t$ and $\boldsymbol{x}$ are the time and the $D$-dimensional spatial coordinates, respectively. Besides the contravariant notation (e.g. $p^\alpha$), the covariant notation such as $p_\alpha$ will also be used in the following; the covariant $p_\alpha$ is related to the contravariant $p^\alpha$ by $p_\alpha=g_{\alpha\beta}p^\beta$ and $p^\alpha=g^{\alpha\beta}p_\beta$, where $(g_{\alpha\beta})$ denotes the Minkowski space-time metric tensor, chosen as $(g_{\alpha\beta})=\operatorname{diag}\{1,-\boldsymbol{I}_D\}$, $\boldsymbol{I}_D$ is the $D\times D$ identity matrix, $(g^{\alpha\beta})$ denotes the inverse of $(g_{\alpha\beta})$, and the Einstein summation convention over repeated indices is used. For a free relativistic particle, one has the relativistic energy-momentum relation (aka the "on-shell" or "mass-shell" condition) $E^2-|\boldsymbol{p}|^2c^2=m^2c^4$. If putting $p^0=c^{-1}E=\sqrt{|\boldsymbol{p}|^2+m^2c^2}$, then the "mass-shell" condition can be rewritten as $p^\alpha p_\alpha=m^2c^2$.
As in the non-relativistic case, the relativistic Boltzmann equation describes the evolution of the one-particle distribution function of an ideal gas in the phase space spanned by the space-time coordinates $(x^\alpha)$ and the momentum $(D+1)$-vectors of particles $(p^\alpha)$. The one-particle distribution function depends only on $(\boldsymbol{x},\boldsymbol{p},t)$ and is defined in such a way that $f(\boldsymbol{x},\boldsymbol{p},t)\,d^Dx\,d^Dp$ gives the number of particles at time $t$ in the volume element $d^Dx\,d^Dp$. For a single gas the Boltzmann equation reads [12]
$$p^\alpha\frac{\partial f}{\partial x^\alpha}=Q(f,f),\qquad(2.1)$$
where the collision term $Q(f,f)$ depends on the product of the distribution functions of two particles at collision, e.g.
$$Q(f,f)=\int_{\mathbb{R}^D}\int\left(f'f'_*-ff_*\right)B\,d\Omega\,\frac{d^Dp_*}{p^0_*},$$
where $f$ and $f_*$ are the distributions depending on the momenta before a collision, while $f'$ and $f'_*$ depend on the momenta after the collision, $d\Omega$ denotes the element of the solid angle, the collision kernel is $B=\sigma\sqrt{(p^\alpha_*p_\alpha)^2-m^4c^4}$ for a single non-degenerate gas (e.g. an electron gas), and $\sigma$ denotes the differential cross section of the collision. The collision term satisfies
$$\int_{\mathbb{R}^D}\psi(p)\,Q(f,f)\,\frac{d^Dp}{p^0}=0,\qquad\psi(p)\in\{1,p^\alpha\},\qquad(2.2)$$
so that $1$ and $p^\alpha$ are called collision invariants. Moreover, the Boltzmann equation (2.1) should satisfy the entropy dissipation relation (in the sense of classical statistics)
$$\int_{\mathbb{R}^D}Q(f,f)\ln f\,\frac{d^Dp}{p^0}\le 0,$$
where the equal sign corresponds to the local thermodynamic equilibrium. In kinetic theory the macroscopic description of the gas can be represented by the first and second moments of the distribution function $f$, namely, the partial particle $(D+1)$-flow $N^\alpha$ and the partial energy-momentum tensor $T^{\alpha\beta}$, which are defined by
$$N^\alpha=c\int_{\mathbb{R}^D}p^\alpha f\,\frac{d^Dp}{p^0},\qquad T^{\alpha\beta}=c\int_{\mathbb{R}^D}p^\alpha p^\beta f\,\frac{d^Dp}{p^0}.\qquad(2.3)$$
They can be decomposed into the following forms (i.e. the Landau-Lifshitz decomposition)
$$N^\alpha=nU^\alpha+n^\alpha,\qquad T^{\alpha\beta}=c^{-2}\varepsilon\,U^\alpha U^\beta-(P_0+\Pi)\Delta^{\alpha\beta}+\pi^{\alpha\beta},\qquad \Delta^{\alpha\beta}=g^{\alpha\beta}-c^{-2}U^\alpha U^\beta,$$
where $n$ is the particle number density, $n^\alpha$ the particle-diffusion flux, $\varepsilon$ the energy density, $P_0$ the equilibrium pressure, $\Pi$ the dynamic pressure, $\pi^{\alpha\beta}$ the shear-stress tensor, and $U^\alpha$ the macroscopic fluid $(D+1)$-velocity normalized so that $U^\alpha U_\alpha=c^2$. The velocity $U^\alpha$ can be defined by the energy transport (the Landau-Lifshitz frame), i.e. as the timelike eigenvector of the energy-momentum tensor,
$$U_\alpha T^{\alpha\beta}=\varepsilon U^\beta,\qquad(2.11)$$
or by the particle transport (the Eckart frame [17]), i.e.
$$N^\alpha=nU^\alpha,$$
in which the velocity is specified by the flow of particles. The former can be applied to a multicomponent gas while the latter is only used for a single-component gas. This work will be done in the Landau-Lifshitz frame (2.11).
Remark 2 At the local thermodynamic equilibrium, $n^\alpha$, $\Pi$, and $\pi^{\alpha\beta}$ will be zero.
Remark 3
In order to simplify the collision term, several simple collision models have been proposed; see [12]. Similar to the BGK (Bhatnagar-Gross-Krook) model in the non-relativistic theory, two simple relativistic collision models are the Marle model [41]
$$Q(f,f)=\frac{m}{\tau}\left(f^{(0)}-f\right),\qquad(2.13)$$
and the Anderson-Witting model [3]
$$Q(f,f)=\frac{U^\alpha p_\alpha}{c^2\tau}\left(f^{(0)}-f\right),\qquad(2.14)$$
where $f^{(0)}=f^{(0)}(\boldsymbol{x},\boldsymbol{p},t)$ denotes the distribution function at the local thermodynamic equilibrium, and $\tau$ is the relaxation time, which may rely on $\rho$, $\theta$. In the non-relativistic limit, both models (2.13) and (2.14) tend to the BGK model. However, the Marle model (2.13) does not satisfy the constraints of the collision terms in (2.2). The relaxation time $\tau$ can be defined by $\tau=\frac{1}{n\pi d^2\bar g}$, where $n$ denotes the particle number density, $d$ denotes the diameter of gas particles, and $\bar g$ is proportional to the mean relative speed $\bar\xi$ between two particles, e.g. $\bar g=\sqrt{2}\,\bar\xi$ or $\bar\xi$ [12]. In the non-relativistic case, $\bar\xi=4\sqrt{\frac{kT}{\pi m}}$, but the expression of $\bar\xi$ in the relativistic case is very complicated; see Section 8.2 of the book [12]. Usually, $\bar\xi$ or $\bar g$ is suitably approximated, for example, $\bar g\approx c$ (that is, $\bar g$ is approximated by using the ultra-relativistic limit). Under such a simple approximation, one has $\tau=\frac{1}{n\pi d^2c}$. This paper will only consider the one-dimensional form of the relativistic Boltzmann equation (2.1). In this case, the vector notations $\boldsymbol{x}$ and $\boldsymbol{p}$ will be replaced with $x$ or $x^1$ and $p$ or $p^1$, respectively, the Greek indices $\alpha$ and $\beta$ run from 0 to 1, and (2.1) reduces to the following form
$$p^0\frac{\partial f}{\partial x^0}+p^1\frac{\partial f}{\partial x^1}=Q(f,f).\qquad(2.15)$$
In the 1D case, the shear-stress tensor $\pi^{\alpha\beta}$ disappears even when the local equilibrium is departed from, and the local-equilibrium distribution $f^{(0)}$ can be explicitly given by
$$f^{(0)}=\frac{n}{2mc\,K_1(\zeta)}\exp\!\left(-\frac{U_\alpha p^\alpha}{k_BT}\right),\qquad(2.16)$$
which is like the Maxwell-Jüttner distribution [12] for the case of $D=3$ and a Maxwell gas and obeys the common prescription that the mass density $\rho$ and the energy density $\varepsilon$ are completely determined by the local-equilibrium distribution $f^{(0)}$ alone, that is, $U_\alpha N^\alpha$ and $U_\alpha U_\beta T^{\alpha\beta}$ take the same values whether evaluated with $f$ or with $f^{(0)}$. Here $\zeta=\frac{mc^2}{k_BT}$ is the ratio between the particle rest energy $mc^2$ and the thermal energy of the gas $k_BT$, $k_B$ denotes the Boltzmann constant, $T$ is the thermodynamic temperature, and $K_n(\zeta)$ denotes the modified Bessel function of the second kind, defined by
$$K_n(\zeta)=\int_0^{+\infty}e^{-\zeta\cosh\vartheta}\cosh(n\vartheta)\,d\vartheta,$$
satisfying the recurrence relation
$$K_{n+1}(\zeta)=K_{n-1}(\zeta)+\frac{2n}{\zeta}K_n(\zeta).$$
For $\zeta\gg1$ the particles behave as non-relativistic, and for $\zeta\ll1$ they behave as ultra-relativistic.
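The Bessel-function recurrence just quoted, and the ratio $G(\zeta)=K_2(\zeta)/K_1(\zeta)$ used below, are easy to check numerically. The following Python fragment is a simple verification using scipy.special; it is not part of the paper.

```python
import numpy as np
from scipy.special import kv  # modified Bessel function of the second kind, K_nu

zeta = np.linspace(0.1, 50.0, 500)

# Recurrence K_{n+1}(z) = K_{n-1}(z) + (2n/z) K_n(z), checked for n = 1
lhs = kv(2, zeta)
rhs = kv(0, zeta) + (2.0 / zeta) * kv(1, zeta)
assert np.allclose(lhs, rhs, rtol=1e-12)

# G(zeta) = K_2/K_1 decreases monotonically: ~ 2/zeta for small zeta,
# and -> 1 (like 1 + 3/(2 zeta)) in the non-relativistic limit zeta -> infinity
G = kv(2, zeta) / kv(1, zeta)
print(G[0], G[-1])
```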
Similar to (2.7), from the knowledge of the equilibrium distribution function $f^{(0)}$ it is also possible to determine the values of some macroscopic variables, e.g.
$$P_0=\rho c^2\zeta^{-1},\qquad \varepsilon=\rho c^2\left(G(\zeta)-\zeta^{-1}\right),$$
where $G(\zeta):=K_1^{-1}(\zeta)K_2(\zeta)$. Now, the conservation laws (2.10) become
$$\frac{\partial}{\partial t}\begin{pmatrix}\rho\gamma\\ c^{-2}\rho h\gamma^2u\\ \rho h\gamma^2-P_0\end{pmatrix}+\frac{\partial}{\partial x}\begin{pmatrix}\rho\gamma u\\ c^{-2}\rho h\gamma^2u^2+P_0\\ \rho h\gamma^2u\end{pmatrix}=0,\qquad(2.21)$$
where $\gamma=(1-u^2/c^2)^{-1/2}$ and $h:=\rho^{-1}(\varepsilon+P_0)=c^2G(\zeta)$ denotes the specific enthalpy. They are just the macroscopic equations of special relativistic hydrodynamics (RHD). In other words, when $f=f^{(0)}$, the special relativistic Boltzmann equation (2.15) leads to the RHD equations (2.21). We aim at finding reduced model equations to describe states with $f\neq f^{(0)}$. This paper will extend the moment method by operator projection [19] to (2.15) and derive its arbitrary-order moment model in Section 4.
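In practice, recovering $\zeta$ (equivalently $\theta=\zeta^{-1}$) from the specific enthalpy $h=c^2G(\zeta)$ requires inverting a smooth monotone function, for which a bracketing root finder suffices. The helper below is an illustrative sketch (it uses the exponentially scaled Bessel functions to stay stable for large $\zeta$), not code from the paper.

```python
from scipy.special import kve  # exponentially scaled K_nu: kve = kv * exp(z)
from scipy.optimize import brentq

def G(zeta):
    """G(zeta) = K_2(zeta)/K_1(zeta); the exp(zeta) scaling cancels in the ratio."""
    return kve(2, zeta) / kve(1, zeta)

def zeta_from_enthalpy(h_over_c2, lo=1e-8, hi=1e8):
    """Solve G(zeta) = h/c^2 for zeta; needs h/c^2 > 1 since G decreases to 1."""
    if h_over_c2 <= 1.0:
        raise ValueError("specific enthalpy must exceed c^2")
    return brentq(lambda z: G(z) - h_over_c2, lo, hi)

zeta = zeta_from_enthalpy(1.5)
print(zeta, G(zeta))  # round-trip check: G(zeta) should be ~1.5
```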
Before ending this section, we discuss the macroscopic variables calculated from a given distribution $f$; in other words, for a nonnegative distribution $f(x,p,t)$ that is not identically zero, can the physically admissible macroscopic states $\{\rho,u,\theta=\zeta^{-1}\}$ satisfying $\rho>0$, $|u|<c$ and $\theta>0$ be obtained?
Theorem 2.1 For the nonnegative distribution $f(x,p,t)$, which is not identically zero, the density current $N^\alpha$ and the energy-momentum tensor $T^{\alpha\beta}$ calculated by (2.3) satisfy the admissibility conditions (2.22), and the macroscopic velocity $u$ is the unique solution satisfying $|u|<c$ of the quadratic equation
$$T^{01}u^2-\left(T^{00}+T^{11}\right)cu+T^{01}c^2=0,$$
namely (when $T^{01}\neq0$; if $T^{01}=0$ then $u=0$)
$$u=\frac{c}{2T^{01}}\left(T^{00}+T^{11}-\sqrt{\left(T^{00}+T^{11}\right)^2-4\left(T^{01}\right)^2}\right).$$
The positive mass density $\rho$ is calculated by
$$\rho=\frac{m}{c^2}U_\alpha N^\alpha=\frac{m\gamma}{c^2}\left(cN^0-uN^1\right).$$
Furthermore, the equation
$$G(\zeta)-\zeta^{-1}=\frac{\varepsilon}{\rho c^2},\qquad \varepsilon=c^{-2}U_\alpha U_\beta T^{\alpha\beta},$$
has a unique positive solution $\theta=\zeta^{-1}$ in the interval $(0,+\infty)$.
Furthermore, the following conclusion holds.
Remark 4
The proofs of those theorems are given in Appendix A. Theorem 2.1 provides a recovery procedure for the admissible primitive variables $\rho$, $u$, and $\theta$ from the nonnegative distribution $f(x,p,t)$, or from the given density current $N^\alpha$ and energy-momentum tensor $T^{\alpha\beta}$ satisfying (2.22). It is useful in the derivation of the moment system as well as in the numerical scheme.
Before discussing the moment method, we first non-dimensionalize the relativistic Boltzmann equation (2.15). Here we only consider the Anderson-Witting model (2.14). If setting
$$\hat x=\frac{x}{L},\qquad \hat t=\frac{ct}{L},\qquad \hat p^\alpha=\frac{p^\alpha}{mc},\qquad \hat f=\frac{mc\,f}{n_0},\qquad \hat\rho=\frac{\rho}{mn_0},$$
where $L$ denotes the macroscopic characteristic length, and $n_0$ and $\theta_0=mc^2/k_B$ are the reference particle number and temperature, respectively, then the 1D relativistic Boltzmann equation (2.15) with (2.14) is non-dimensionalized. Thanks to $\mathrm{Kn}=\frac{\lambda}{L}=\frac{\tau_0c}{L}=\frac{1}{n_0L\pi d^2}$, the non-dimensionalized equation can be rewritten in the same form as (2.15). Thus, if $\hat\tau:=\mathrm{Kn}/\hat\rho$ is considered as a new "relaxation time", then the collision term of the relativistic Boltzmann equation (2.27) has the same form as the non-relativistic BGK model. For the sake of convenience, in the following, we still use $\tau$, $x$, $t$, $f$, $p$, $p^0$, $\rho$ to denote $\hat\tau$, $\hat x$, $\hat t$, $\hat f$, $\hat p$, $\hat p^0$, $\hat\rho$, respectively.
Two families of orthogonal polynomials
This section introduces two families of orthogonal polynomials dependent on a parameter ζ, similar to those given in [2], and studies their properties, which will be used in the derivation and discussion of our moment system. All proofs are given in the Appendix B.
If considering $\omega^{(\ell)}(x;\zeta)$, $\ell=0,1$, as the weight functions on the interval $[1,+\infty)$, where $\zeta\in\mathbb{R}^+$ denotes a parameter, then the inner products with respect to $\omega^{(\ell)}(x;\zeta)$ can be introduced as
$$\langle f,g\rangle_{\omega^{(\ell)}}:=\int_1^{+\infty}f(x)g(x)\,\omega^{(\ell)}(x;\zeta)\,dx,\qquad \ell=0,1,$$
and the corresponding orthogonal polynomials $\{P^{(\ell)}_n(x;\zeta)\}$ satisfy
$$\big\langle P^{(\ell)}_m,P^{(\ell)}_n\big\rangle_{\omega^{(\ell)}}\propto\delta_{m,n},$$
where $\delta_{m,n}$ denotes the Kronecker delta function, which is equal to 1 if $m=n$, and 0 otherwise. Obviously, the $\{P^{(\ell)}_n(x;\zeta)\}$ are linearly independent, and the orthogonality relations imply
$$\big\langle P^{(\ell)}_{n+1},Q\big\rangle_{\omega^{(\ell)}}=0$$
for any polynomial $Q(x;\zeta)$ of degree $\le n$ in $L^2_{\omega^{(\ell)}}[1,+\infty)$. The orthogonal polynomials $\{P^{(\ell)}_n(x;\zeta)\}$ can be obtained by using the Gram-Schmidt process; several orthogonal polynomials of lower degree can be computed this way. Their coefficients turn out to be so irregular that it would be very complicated to study the properties of $\{P^{(\ell)}_n(x;\zeta)\}$ directly from the explicit expressions; fortunately, $P^{(\ell)}_n(x;\zeta)$ can be rewritten in a more structured form. In the following, we want to derive the recurrence relations of $\{P^{(\ell)}_n(x;\zeta)\}$, calculate their derivatives with respect to $x$ and $\zeta$, respectively, and study the properties of the zeros and the coefficient matrices in the recurrence relations.
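The Gram-Schmidt construction is mechanical once the weighted inner product is fixed. The sketch below builds the first few orthonormal polynomials numerically; since the explicit weight formulas are not reproduced here, it assumes an illustrative weight of the form $w_\ell(x;\zeta)=(x^2-1)^{\ell-1/2}e^{-\zeta x}$ on $[1,+\infty)$, which is only a stand-in for the paper's $\omega^{(\ell)}(x;\zeta)$.

```python
import numpy as np
from scipy.integrate import quad

ZETA, ELL = 2.0, 1  # parameter zeta and family index ell (assumed weight form)

def weight(x):
    # Illustrative weight on [1, inf); the paper's omega^(ell)(x; zeta) is
    # assumed here to look like (x^2 - 1)^(ell - 1/2) * exp(-zeta * x).
    return (x * x - 1.0) ** (ELL - 0.5) * np.exp(-ZETA * x)

def inner(c1, c2):
    # <f, g> = int_1^inf f(x) g(x) w(x) dx, polynomials given by coefficients
    f = np.polynomial.Polynomial(c1)
    g = np.polynomial.Polynomial(c2)
    val, _ = quad(lambda x: f(x) * g(x) * weight(x), 1.0, np.inf)
    return val

def gram_schmidt(n):
    """Orthonormal polynomials P_0..P_n (coefficient arrays) w.r.t. inner()."""
    basis = []
    for k in range(n + 1):
        c = np.zeros(k + 1); c[k] = 1.0          # start from the monomial x^k
        for p in basis:                           # subtract projections
            c[: len(p)] -= inner(c, p) * p
        basis.append(c / np.sqrt(inner(c, c)))    # normalize
    return basis

for c in gram_schmidt(3):
    print(np.round(c, 6))
```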
Recurrence relations
This section presents the recurrence relations for the orthogonal polynomials $\{P^{(\ell)}_n(x;\zeta)\}$, and the specific forms of the coefficients in those recurrence relations.
Using the three-term recurrence relation and the existence theorem of zeros of general orthogonal polynomials in Theorems 3.1 and 3.2 of [45] gives the following conclusion.
Theorem 3.1 The orthogonal polynomials $\{P^{(\ell)}_n(x;\zeta)\}$ satisfy a three-term recurrence relation of the form
$$xP^{(\ell)}_n(x;\zeta)=b^{(\ell)}_{n+1}P^{(\ell)}_{n+1}(x;\zeta)+a^{(\ell)}_nP^{(\ell)}_n(x;\zeta)+b^{(\ell)}_nP^{(\ell)}_{n-1}(x;\zeta),\qquad(3.6)$$
or, in the matrix-vector form,
$$x\,\boldsymbol{P}^{(\ell)}_n(x;\zeta)=\boldsymbol{J}^{(\ell)}_n\boldsymbol{P}^{(\ell)}_n(x;\zeta)+b^{(\ell)}_{n+1}P^{(\ell)}_{n+1}(x;\zeta)\,\boldsymbol{e}_{n+1},$$
where both coefficients $a^{(\ell)}_n$ and $b^{(\ell)}_n$ are positive, $\boldsymbol{e}_{n+1}$ is the last column of the identity matrix of order $(n+1)$, and $\boldsymbol{J}^{(\ell)}_n$ is the tridiagonal matrix built from these coefficients, which is a symmetric positive definite tridiagonal matrix with spectral radius larger than 1.
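A useful computational consequence of such a three-term recurrence is the Golub-Welsch viewpoint: the zeros of $P^{(\ell)}_{n+1}$ are exactly the eigenvalues of the symmetric tridiagonal Jacobi matrix $\boldsymbol{J}^{(\ell)}_n$. The sketch below demonstrates this with placeholder coefficients, since the closed forms of $a^{(\ell)}_k$ and $b^{(\ell)}_k$ are not reproduced here.

```python
import numpy as np

def zeros_from_recurrence(a, b):
    """Zeros of P_{n+1} as eigenvalues of the (n+1)x(n+1) Jacobi matrix J_n.

    a: diagonal recurrence coefficients a_0..a_n
    b: off-diagonal coefficients b_1..b_n (both families assumed positive)
    """
    J = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)
    return np.sort(np.linalg.eigvalsh(J))  # real and simple eigenvalues

# Placeholder coefficients for illustration only; the true a_k(zeta), b_k(zeta)
# come from the three-term recurrence (3.6) of the paper.
n = 4
a = 1.0 + np.arange(n + 1, dtype=float)                # hypothetical diagonal, > 1
b = 0.5 * np.sqrt(np.arange(1, n + 1, dtype=float))    # hypothetical off-diagonal
print(zeros_from_recurrence(a, b))
```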
Besides, the recurrence relations between $\{P^{(0)}_n(x;\zeta)\}$ and $\{P^{(1)}_n(x;\zeta)\}$ can also be obtained: (i) the mixed recurrence relations between the two families can be given by (3.10), or in the matrix-vector form (3.13); and (ii) two two-term recurrence relations between $\{P^{(0)}_n(x;\zeta)\}$ and $\{P^{(1)}_n(x;\zeta)\}$ can be derived, as in (3.16).
Partial derivatives
This section calculates the derivatives of the polynomials P_n^{(ℓ)}(x; ζ) with respect to x and ζ, ℓ = 0, 1.
Theorem 3.4 The first-order derivatives of the polynomials {P_n^{(ℓ)}(x; ζ)} with respect to x can be expressed in terms of the polynomials of neighboring degrees; the proof is given in Appendix B.3.
Zeros
Using the separation theorem for zeros of general orthogonal polynomials [45] gives the following conclusion on our orthogonal polynomials {P_n^{(ℓ)}(x; ζ)}.
There is still another important separation property for the zeros of the orthogonal polynomials {P_n^{(0)}(x; ζ)} and {P_n^{(1)}(x; ζ)}, stated in Theorem 3.6. According to Theorems 3.5 and 3.6, we can further determine the signs of the coefficients of the recurrence relations in Theorem 3.2.
Corollary 1 All quantities p_n, q_n, r_n in (3.13) and p̃_n, q̃_n, r̃_n in (3.16) are positive.
The explicit expressions of the recurrence coefficients give a further corollary (Corollary 2). According to Theorems 3.3 and 3.5, we also have the following conclusion (Corollary 3).
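The separation property asserted by Theorem 3.5 is easy to check numerically for any classical orthogonal family; the sketch below uses Legendre polynomials purely as a stand-in for P_n^{(ℓ)}(x; ζ):

```python
import numpy as np
from numpy.polynomial import legendre as L

# Interlacing demo with a classical family as a stand-in for P_n^{(l)}(x; zeta):
# between consecutive zeros of P_{n+1} there lies exactly one zero of P_n.
def leg_roots(n):
    return np.sort(L.legroots([0] * n + [1]))  # roots of the degree-n Legendre poly

r5, r6 = leg_roots(5), leg_roots(6)
print(all(r6[i] < r5[i] < r6[i + 1] for i in range(5)))  # -> True
```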
Generalized eigenvalues and eigenvectors of coefficient matrices in the recurrence relations
This section discusses the generalized eigenvalues and eigenvectors of two (2n + 1) × (2n + 1) matrices A⁰_n and A¹_n, whose blocks, together with J_n, appear in the recurrence relations in Theorems 3.1 and 3.2. Consider the following generalized eigenvalue problem (in the second sense): find a vector y that obeys A¹_n y = λ̂ A⁰_n y. If u denotes the first n + 1 rows of y and v the last n rows of y, then λ̂, u, and v satisfy the coupled equations (3.21). Multiplying (3.7), (3.11), and (3.12) by P_n^{(ℓ)} and combining the results gives, for |x| > 1, the identities (3.28) and (3.29). It is not difficult to find that if the second terms on the right-hand sides of (3.28) and (3.29) disappear, then (3.28) and (3.29) reduce to the two equations in (3.21). Thus, in order to obtain the generalized eigenvalues and eigenvectors of A⁰_n and A¹_n, one has to study the zeros of Q_{2n}(x; ζ).
With the aid of Theorems 3.3 and 3.4, we can calculate the partial derivatives of Q_{2n}(x; ζ) with respect to x and ζ at its zeros z_{i,n}. Moreover, a further identity at these zeros holds (Lemma 3).
Thanks to Lemmas 1 and 3, the generalized eigenvalues and eigenvectors of the two (2n + 1) × (2n + 1) matrices A⁰_n and A¹_n can be obtained with the aid of the zeros of Q_{2n}(x; ζ).
Theorem 3.7 Besides a zero eigenvalue, denoted by λ̂_{0,n}, the matrix pair A⁰_n and A¹_n has 2n non-zero, real, and simple generalized eigenvalues, which satisfy (3.32) and the bound (3.33). The corresponding (2n + 1) generalized eigenvectors can be expressed as in (3.34), built from the vectors u_{i,n} and v_{i,n} of (3.35), for i = ±1, ···, ±n, together with the eigenvector y_{0,n} for the zero eigenvalue.
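Once A⁰_n and A¹_n are assembled, the generalized eigenvalue problem of Theorem 3.7 can be solved directly with a standard pencil solver. The 3 × 3 matrices below are hypothetical stand-ins chosen so that the spectrum shows the structure the theorem predicts: one zero eigenvalue plus non-zero real eigenvalues in symmetric pairs.

```python
import numpy as np
from scipy.linalg import eig

def generalized_eigs(A1, A0):
    """Solve A1 @ y = lam * A0 @ y; returns sorted eigenvalues and eigenvectors."""
    lam, Y = eig(A1, A0)
    order = np.argsort(lam.real)
    return lam[order], Y[:, order]

# Hypothetical 3x3 (n = 1) matrices standing in for A^1_n and A^0_n.
A0 = np.diag([2.0, 1.0, 2.0])
A1 = np.array([[0.0, 1.0, 0.0],
               [1.0, 0.0, 1.0],
               [0.0, 1.0, 0.0]])
lam, Y = generalized_eigs(A1, A0)
print(lam.real)  # -> [-1, 0, 1]: one zero and a symmetric pair, echoing Theorem 3.7
```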
Moment method by operator projection
This section extends the moment method by operator projection [19] to the one-dimensional relativistic Boltzmann equation (2.15) and derives its arbitrary-order hyperbolic moment model. For the sake of convenience and without loss of generality, units in which both the speed of light c and the rest mass m of a particle are equal to one will be used in the following. All proofs are given in Appendix C.
Weighted polynomial space
In order to use the moment method by operator projection to derive the hyperbolic moment model of the kinetic equation, we should define weighted polynomial spaces and norms as well as the projection operator. Thanks to the equilibrium distribution f^{(0)} in (2.16), the weight function is chosen as g^{(0)}, which will be denoted by g^{(0)}_{[u,θ]} to emphasize its dependence on the macroscopic fluid velocity u and the temperature θ. Associated with the weight function g^{(0)}_{[u,θ]}, our weighted polynomial space H_{g^{(0)}_{[u,θ]}} is an infinite-dimensional linear space equipped with a weighted inner product. Similarly, for a finite positive integer M ∈ ℕ, a finite-dimensional weighted polynomial space H^M_{g^{(0)}_{[u,θ]}} can be defined, which is obviously a closed subspace of H_{g^{(0)}_{[u,θ]}}. Thanks to Theorem 2.2, for all physically admissible u and θ satisfying |u| < 1 and θ > 0, two further notations are introduced.
for s = t and x. It indicates that the derivatives of the basis functions remain within the space spanned by the basis functions of order up to M + 1.
Lemma 6 (Recurrence relations) The basis functions {P^{(ℓ)}_n, n ≥ 1} satisfy recurrence relations in which e¹_{2M+1} and e²_{2M+1} are the penultimate and the last columns of the identity matrix of order (2M + 1), respectively, and P^p_M is a permutation matrix; in compact form, the relations can be written with the symbol [·, ·]_M denoting the common inner product of two (2M + 1)-dimensional vectors.
Lemma 7 The operator Π_M[u, θ] is a bounded linear projection operator, in the sense that Π_M[u, θ](Π_M[u, θ]f) = Π_M[u, θ]f.
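Lemma 7's projection property is easy to check numerically if a weighted projection is discretized as a weighted least-squares projector onto a polynomial space; the weight, sample points, and basis below are stand-ins, not the paper's g^{(0)}_{[u,θ]} construction.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(1.0, 5.0, size=400)          # sample points
w = np.exp(-x)                                # stand-in positive weight
V = np.vander(x, N=4, increasing=True)        # basis 1, x, x**2, x**3

# Weighted least-squares projector onto span(V): P = V (V^T W V)^{-1} V^T W.
WV = w[:, None] * V
P = V @ np.linalg.solve(V.T @ WV, WV.T)

f = np.sin(x)
print(np.allclose(P @ (P @ f), P @ f))        # idempotence: Pi(Pi f) = Pi f
print(np.allclose(P @ V[:, 2], V[:, 2]))      # polynomials are fixed points
```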
Remark 6 The so-called Grad-type expansion expands the distribution function f(x, p, t) in the weighted polynomial space H_{g^{(0)}_{[u,θ]}}, where the symbol [·, ·]_∞ denotes the common inner product of two infinite-dimensional vectors.
Derivation of the moment model
Step 1 (Projection 1) yields an expansion whose square coefficient matrix D^W_M depends on θ and has an explicit form. Step 2: Calculating the partial derivatives in time and space provides, for s = t and x, expressions in which C_{M+1} is a square matrix of order (2M + 3), derived directly with the aid of the derivative relations of the basis functions in Lemma 5.
Step 3 (Projection 2): Projecting the partial derivatives in (4.12) into the space H^M_{g^{(0)}_{[u,θ]}} gives expressions in which the (2M + 1) × (2M + 1) matrix D_M can be obtained from C_M and D^W_M; it takes the form (4.14), whose elements "∗" are given explicitly.
Step 4: Multiplying (4.13) by the particle velocity p^α and projecting once more (Step 5) yields the flux terms. Step 6: Substituting them into the 1D special relativistic Boltzmann equation (2.15) gives the abstract form of the moment system (4.20). For the Anderson–Witting model (2.14), the right-hand side of (4.19) takes a form which implies that the source term S(W_M) can be given explicitly. Multiplying the first three equations by (B⁰₁)⁻¹ shows that they become the macroscopic RHD equations (2.21); thus, the conservation laws are a subset of the moment equations.
Properties of the moment system
This section studies some mathematical and physical properties of the moment system (4.19) or (4.20). All proofs are given in Appendix D.
Hyperbolicity, eigenvalues, and eigenvectors
In order to prove the hyperbolicity of the moment system (4.20), one has to verify that B⁰_M is invertible and that B_M := (B⁰_M)⁻¹ B¹_M is real diagonalizable. In the following, we always assume that the first three components of W_M satisfy ρ > 0, |u| < 1, and θ > 0.
Characteristic fields
This section further discusses whether the characteristic fields of the quasilinear moment system are genuinely nonlinear or linearly degenerate.
Substituting a plane-wave ansatz proportional to exp(i(ωt − kx)) into the linearized moment system yields a characteristic equation, which implies the dispersion relation between ω and k.
The following linear stability result holds for the moment system (4.20)–(4.21). Theorem 5.4 The moment system (4.20) with the source term (4.21) is linearly stable at the local equilibrium, both in time and in space; that is, for the linearized moment system (5.3), Im(ω(k)) ≥ 0 for each k ∈ ℝ, and Re(k(ω)) Im(k(ω)) ≤ 0 for each ω ∈ ℝ⁺, respectively.
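The time-stability criterion can be checked numerically for any concrete linearization: substituting W ~ exp(i(ωt − kx)) into a linearized system B⁰W_t + B¹W_x = QW turns the dispersion relation into a matrix pencil (kB¹ − iQ)v = ωB⁰v at each k. The 2 × 2 matrices below are hypothetical stand-ins, not the actual matrices of (5.3).

```python
import numpy as np
from scipy.linalg import eig

def temporal_spectrum(B0, B1, Q, k):
    """Frequencies omega(k) of the ansatz W ~ exp(i(omega*t - k*x)) for
    B0 W_t + B1 W_x = Q W, solved as the pencil (k*B1 - i*Q) v = omega*B0 v."""
    lam, _ = eig(k * B1 - 1j * Q, B0)
    return lam

# Hypothetical stand-ins: symmetric transport part, dissipative relaxation.
B0 = np.eye(2)
B1 = np.array([[0.0, 1.0], [1.0, 0.0]])
Q = np.array([[0.0, 0.0], [0.0, -1.0]])   # relaxation toward equilibrium
for k in (0.5, 1.0, 2.0):
    omegas = temporal_spectrum(B0, B1, Q, k)
    print(k, np.min(omegas.imag) >= -1e-12)  # Im(omega) >= 0 => stable in time
```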
Lorentz covariance
In physics, Lorentz covariance is a key property of space-time following from the special theory of relativity; see e.g. [18]. This section studies the Lorentz covariance of the moment system (4.20).
Numerical experiment
This section conducts a numerical experiment to check the behavior of our hyperbolic moment equations (HME) (4.19) or (4.20) with (4.21) by solving the Cauchy problem with Riemann initial data: W_M(x, 0) = W^L_M for x < 0 and W_M(x, 0) = W^R_M for x ≥ 0, where W^L_M = (7, 0, 1, 0, ···, 0)^T and W^R_M = (1, 0, 1, 0, ···, 0)^T. It is similar to the problem for the moment system of the non-relativistic BGK equation used in [5].
Numerical scheme
The spatial grid {x_i, i ∈ ℤ} considered here is uniform, so that the stepsize Δx = x_{i+1} − x_i is constant. Thanks to Theorem 5.1, the grid in the t-direction {t_{n+1} = t_n + Δt, n ∈ ℕ} can be given with the stepsize Δt = C_CFL Δx, where C_CFL denotes the CFL (Courant–Friedrichs–Lewy) number. Use f^n_i and ρ^n_i to denote the approximations of f(x_i, p, t_n) and ρ(x_i, t_n), respectively. For the purpose of checking the behavior of our hyperbolic moment system, similar to [9], we only consider a first-order accurate semi-implicit operator-splitting type numerical scheme for the system (4.19) or (4.20), which is split into a convection step (6.2) and a collision step (6.3); the numerical fluxes in the convection step are derived from the nonconservative version of the HLL (Harten–Lax–van Leer) scheme [44].
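A minimal sketch of the splitting structure, written for a generic relaxation system W_t + F(W)_x = (W_eq(W) − W)/τ: an explicit first-order convection step followed by a semi-implicit collision update in the spirit of (6.3). The conservative Rusanov-type flux and the toy equilibrium map below are stand-ins for the paper's nonconservative HLL variant and projected equilibrium.

```python
import numpy as np

def hll_flux(WL, WR, flux, smax):
    """HLL flux with symmetric wave-speed bound [-smax, smax]; this symmetric
    choice coincides with the Rusanov (local Lax-Friedrichs) flux."""
    FL, FR = flux(WL), flux(WR)
    return 0.5 * (FL + FR) - 0.5 * smax * (WR - WL)

def step(W, dx, dt, tau, flux, equilibrium, smax=1.0):
    """One operator-splitting step: explicit convection + implicit relaxation."""
    # Convection step (explicit, first order, outflow boundaries).
    Wp = np.pad(W, ((1, 1), (0, 0)), mode="edge")
    F = np.array([hll_flux(Wp[i], Wp[i + 1], flux, smax)
                  for i in range(Wp.shape[0] - 1)])
    W_star = W - dt / dx * (F[1:] - F[:-1])
    # Collision step, semi-implicit: (W^{n+1}-W*)/dt = (W_eq(W*)-W^{n+1})/tau,
    # solvable in closed form because W_eq is frozen at the post-convection state.
    return (tau * W_star + dt * equilibrium(W_star)) / (tau + dt)

# Toy usage: two coupled components relaxing toward their common mean.
flux = lambda W: np.stack([W[..., 1], W[..., 0]], axis=-1)
eq = lambda W: np.repeat(W.mean(axis=-1, keepdims=True), 2, axis=-1)
W = np.where(np.linspace(-1, 1, 200)[:, None] < 0, 7.0, 1.0) * np.ones((200, 2))
for _ in range(50):
    W = step(W, dx=0.01, dt=0.005, tau=0.05, flux=flux, equilibrium=eq)
```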
It is worth noting that the primitive variables have to be recovered within Step (ii); see the following discussion (Lemma 12). The other steps are treated similarly.
Lemma 12 implies that, in order to carry out the collision step, only u*_i and θ*_i have to be obtained. This can be done by the following procedure: for the given "distribution function" Π_M[u^n_i, θ^n_i](Πf)*_i, calculate the corresponding partial particle flow N^α and partial energy-momentum tensor T^{αβ}, then solve (2.24) directly to obtain u*_i and solve (2.26) iteratively for θ*_i by the Newton–Raphson method. Remark 9 The function G(θ⁻¹) − θ in (2.26) is a strictly monotonic and convex function of θ in the interval (0, +∞); this follows from the properties of the leading coefficients of the polynomials P^{(ℓ)}_n. Before ending this subsection, we discuss the stability of the collision step (6.3) even when τ is very small. Theorem 6.1 The semi-implicit scheme (6.3) is unconditionally stable.
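The θ-update can be made concrete if, as is standard for the Jüttner equilibrium and consistent with the Bessel-function relations (2.18)–(2.19), one takes G(ζ) = K₃(ζ)/K₂(ζ) with K_ν the modified Bessel function of the second kind; this identification, and the target value below, are assumptions. Remark 9's monotonicity makes the Newton–Raphson iteration well posed:

```python
import numpy as np
from scipy.special import kv

def G(zeta):
    """Assumed form G(zeta) = K_3(zeta) / K_2(zeta) (Juettner-type closure)."""
    return kv(3, zeta) / kv(2, zeta)

def solve_theta(e_target, theta0=1.0, tol=1e-12, max_iter=100):
    """Newton-Raphson for G(1/theta) - theta = e_target on (0, inf).

    The derivative is taken by central differences to avoid committing to a
    specific Bessel-derivative identity; Remark 9 guarantees monotonicity,
    and the iterate is safeguarded to stay positive.
    """
    theta = theta0
    f = lambda t: G(1.0 / t) - t - e_target
    for _ in range(max_iter):
        h = 1e-7 * max(theta, 1.0)
        df = (f(theta + h) - f(theta - h)) / (2.0 * h)
        step = f(theta) / df
        theta = max(theta - step, 0.5 * theta)  # keep the iterate positive
        if abs(step) < tol * max(theta, 1.0):
            return theta
    return theta

print(solve_theta(e_target=1.5))  # hypothetical reduced energy E/(m c^2) > 1
```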
All proofs have been given in Appendix E.
Numerical results
In our numerical experiment, the Knudsen number Kn is chosen as 0.05 and 0.5, respectively; the spatial domain [−1.5, 1.5] is divided into a uniform grid of 1000 points, and C_CFL = 0.9. In order to verify our results, reference solutions are computed by the discrete velocity model (DVM) [42] on a fine spatial grid of 10000 points with 50 Gaussian points in the velocity space. Fig. 6.1 shows the profiles of the density ρ, velocity u, and thermodynamic pressure P₀ at t = 0.3 obtained by our scheme (6.2)–(6.3) with M = 1, 2, ···, 9 for Kn = 0.05; the thin lines are the numerical results of the HME (4.20), and the thick lines are the DVM reference solutions. The solid lines denote ρ, the dashed lines denote u, and the dash-dotted lines denote P₀.
It is clear that the numerical solutions of the HME (4.20) converge to the reference solution of the special relativistic Boltzmann equation (2.15) as M increases. When M = 1, the contact discontinuity and shock wave can be clearly observed; this is reasonable because for M = 1 the HME (4.20) coincide with the macroscopic RHD equations (2.21). When M = 2, the discontinuities can also be observed, but they have been damped. When M ≥ 3, the discontinuities are fully damped and the solutions are almost in agreement with the reference solutions. This is similar to the phenomena in the non-relativistic case [4, 5].
The results at t = 0.3 for the case of Kn = 0.5 are shown in Fig. 6.2. The discontinuities are clearer than in the case of Kn = 0.05 for M = 1, 2, ···, 9, and the convergence of the moment method can also be readily observed, although it is slower than for Kn = 0.05. The contact discontinuities and shock waves are obvious when M ≤ 2, but when M > 6 the discontinuities are fully damped and the solutions are almost the same as the reference solutions.
Conclusions
This paper derived, for the first time, the arbitrary-order globally hyperbolic moment system of the one-dimensional (1D) special relativistic Boltzmann equation, and studied the properties of the moment system: the eigenvalues and their bounds as well as the eigenvectors, hyperbolicity, characteristic fields, linear stability, and Lorentz covariance. The key contribution was the careful study of two families of complicated Grad-type orthogonal polynomials depending on a parameter. We derived their recurrence relations and the derivative relations with respect to the independent variable and the parameter, respectively, and studied their zeros and the coefficient matrices in the recurrence formulas. Built on this knowledge of the two families of Grad-type orthogonal polynomials with a parameter, the model reduction method by operator projection [19] could be extended to the 1D special relativistic Boltzmann equation.
A semi-implicit operator-splitting type numerical scheme was presented for our hyperbolic moment system, and a Cauchy problem was solved to verify the convergence behavior of the moment method in comparison with the discrete velocity method. The results showed that the solutions of our hyperbolic moment system converge to the solution of the special relativistic Boltzmann equation as the order of the moment system increases.
We are now deriving the globally hyperbolic moment model of arbitrary order for the 3D special relativistic Boltzmann equation. Moreover, it would be interesting to develop robust, high-order accurate numerical schemes for the moment system and to find other bases for deriving moment systems with desirable properties, e.g. non-negativity.
A.1 Proof of Theorem 2.1
Proof For the nonnegative distribution f(x, p, t), which is not identically zero, using (2.3) gives an estimate that implies the first inequality in (2.22). Using the definition of Δ^{αβ} in (2.6) and the tensor decomposition of T^{αβ} in (2.5) gives (2.23), which is a quadratic equation with respect to u. The first inequality in (2.22) tells us that (2.23) has two distinct solutions whose product equals c², and the one with the smaller absolute value, given by (2.24), satisfies |u| < c.
Using (2.3) further gives the second inequality in (2.22), and then the tensor decomposition of N^α in (2.4) yields the expression for ρ. Using the second identity in (2.17), the expression of ε₀ in (2.20), and (2.5) gives (2.26). The inequality E ≥ mc² holds by the third inequality in (2.22), and implies that G(θ⁻¹) − θ > 1 for θ ∈ (0, +∞). On the other hand, estimating the derivative of G(θ⁻¹) − θ shows that it is a strictly monotonic function of θ in the interval (0, +∞). Thus (2.26) has a unique solution in the interval (0, +∞). The proof is completed.
A.2 Proof of Theorem 2.2
Proof By Theorem 2.1, for the nonnegative distribution f(x, p, t), which is not identically zero, one obtains {ρ, u, θ} satisfying the admissibility conditions. Due to the last equations in (2.7) and (2.20), the desired identity follows, which completes the proof.
B.2 Proof of Theorem 3.3
Proof With the aid of the definition and recurrence relation of the modified Bessel function of the second kind in (2.18) and (2.19), taking the partial derivative of both sides of the relevant identities with respect to ζ and using (3.8) gives the key relations. Since ∂P^{(ℓ)}_n/∂ζ is a polynomial whose degree is not larger than n + 1, using (3.3) gives (3.17). The proof is completed.
B.3 Proof of Theorem 3.4
Proof Similar to the proof of Theorem 3.3, one differentiates the corresponding identities with respect to x. Because the degrees of the polynomials involved are at most n + 1, comparing coefficients against p_n P^{(1)}_n and P^{(1)}_{n+1} yields the stated formulas. The proof is completed.
B.4 Proof of Theorem 3.6
Proof Substituting the zeros {x_{i,n+1}} into the recurrence relation implies that r̃_n ≠ 0. In fact, if r̃_n = 0, then the resulting identity would force P^{(1)}_n(x_{i,n+1}; ζ) = 0 at all n + 1 zeros, which contradicts P^{(1)}_n being a polynomial of degree n.
Using Theorem 3.5, it follows that there exists at least one zero of the polynomial P^{(1)}_n in each interval (x_{i,n+1}, x_{i+1,n+1}). The proof is completed.
B.5 Proof of Corollary 1
Proof It is obvious that p_n > 0 and p̃_n > 0. Using Theorems 3.1 and 3.6 gives identities which imply q_n > 0 and q̃_n > 0.
Comparing the coefficients of the term of order n on the two sides of (3.14), and combining the result with Theorem 3.6, gives r_n > 0 and r̃_n > 0. The proof is completed.
B.6 Proof of Corollary 3
Proof Taking the partial derivatives of P^{(ℓ)}_n with respect to x and ζ and combining the resulting relations completes the proof.
B.7 Proof of Lemma 1
Proof According to the definition of Q_{2n}(x; ζ) in (3.30), it is not difficult to see that Q_{2n}(x; ζ) is an even function of x and a polynomial of degree 2n.
Using Lemma 2 completes the proof.
B.10 Proof of Theorem 3.7
Proof Obviously, the vectors u_{i,n} and v_{i,n} defined in (3.35) are not both zero, i = ±1, ···, ±n. The nonzero eigenvalues and eigenvectors of the matrix pair A⁰_n and A¹_n in (3.32) and (3.34) can be obtained with the aid of (3.28)–(3.29) and Lemma 1. Using Lemma 3 further gives (3.33).
In the following, let us discuss the eigenvector y_{0,n}. Multiplying (3.12) by P^{(ℓ)}_n and summing gives an identity which is a special case of (3.21) with λ̂ = 0. The proof is completed.
Appendix C Proofs in Section 4
C.1
where the decomposition of the particle velocity vector (2.9) has been used. Assume that the claim holds for products of the form p^{μ1} p^{μ2} ··· p^{μ_M} g_{[u,θ]}. One has to show that p^{μ1} p^{μ2} ··· p^{μ_{M+1}} g_{[u,θ]} can be written as a linear combination of the components of P_{M+1}[u, θ]. Because of (2.9), the extra factor of the particle velocity can be expanded in the basis of one degree higher. Only a simple case is discussed in the following: as shown in Remark 2, at the local thermodynamic equilibrium, Π = 0 and n^α = 0, and the stated reduction then follows.